Why doesn't GVP post alert levels? The assignment of official hazard status designations, often called volcanic alert levels, is the responsibility of national or regional volcano observatories, civil defense agencies, or other designated government officials. Each country creates its own protocols for inter-agency responsibilities, communication, and collaboration. The following discussion is adapted, with permission, from content provided by the World Organization of Volcano Observatories (WOVO).

In a volcanic crisis there is often worldwide interest in a volcano's hazard alert level. With the exception of the color codes for aviation (see below), however, there is no standardized international volcano alert-level system. This is due to: (a) wide variation in the behavior of individual volcanoes and in monitoring capabilities, and (b) the different needs of populations, including different languages and the different symbolism of colors. National volcano observatories have developed hazard status protocols that are regionally variable and differ significantly in detail. The WOVO site contains links to the regional volcano observatories and the alert systems they use.

Organizations with an interest in natural hazards are strongly cautioned against posting global volcano hazard alerts or eruption "forecasts" that do not originate from official agencies with both responsibility for, and familiarity with, those volcanoes. Posting alert levels can have major public safety and economic implications and should be done with care. The data needed to assign alert levels come from onsite and remote monitoring instrumentation and are best evaluated by the staff of regional volcano observatories, who are the most familiar with activity at their volcanoes. Any re-posting or publication of an official hazard status or alert level should include the date (and time, if available) that the status became effective, the broad parameters of the system being used, the issuing agency, and a link (if appropriate) so that users can easily find the most current information. The responsible observatories and organizations are listed on the WOVO website, and readers are directed to them for information on current volcano alert levels. There is no WOVO-endorsed source of worldwide volcanic alert levels, with the exception of aviation color codes. For a near real-time overview of current reported activity that incorporates direct observatory sources, WOVO recommends the Weekly Volcanic Activity Report compiled by the Smithsonian Institution's Global Volcanism Program and the U.S. Geological Survey's Volcanic Hazards Program.

Many instances of aircraft flying into volcanic ash clouds have demonstrated the life-threatening and costly damage that can result. Consequently, a global volcanic alert-level system for aviation has been developed as part of the International Airways Volcano Watch, a universal warning system coordinated by the International Civil Aviation Organization (ICAO), a UN specialized agency. This system uses four color codes, designed to help pilots, dispatchers, and air-traffic controllers quickly find the status of the many volcanoes that might endanger aircraft. The color codes reflect conditions at or near a volcano and are not intended to indicate hazards posed downwind by drifting ash; all discernible ash clouds are assumed to be highly hazardous and are to be avoided.
The aviation color code should not be extrapolated to represent hazards posed on the ground, which might be quite different. Local observatories may have a completely different system for describing ground hazards, independent of the aviation color code. The color codes are defined as follows:

GREEN: Volcano is in normal, non-eruptive state. Or, after a change from a higher level: volcanic activity is considered to have ceased, and the volcano has reverted to its normal, non-eruptive state.

YELLOW: Volcano is experiencing signs of elevated unrest above known background levels. Or, after a change from a higher level: volcanic activity has decreased significantly but continues to be closely monitored for possible renewed increase.

ORANGE: Volcano is exhibiting heightened unrest with increased likelihood of eruption. Or, a volcanic eruption is underway with no or minor ash emission. [Specify ash-plume height if possible.]

RED: Eruption is forecast to be imminent with significant emission of ash into the atmosphere likely. Or, an eruption is underway with significant emission of ash into the atmosphere. [Specify ash-plume height if possible.]

Global Volcanism Program, 2013. Volcanoes of the World, v. 4.7.1. Venzke, E. (ed.). Smithsonian Institution. Downloaded 18 Jul 2018. https://doi.org/10.5479/si.GVP.VOTW4-2013
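The re-posting guidance above amounts to a small data contract: a republished alert should carry the level itself together with its effective date, the alert system it belongs to, the issuing agency, and a link back to the source. Here is a minimal sketch of such a record; the field names are this sketch's own invention, not anything WOVO or ICAO prescribes.

```python
from dataclasses import dataclass

@dataclass
class AlertRepost:
    """Fields WOVO recommends including when re-posting an official alert level.

    Illustrative structure only; names are hypothetical.
    """
    volcano: str
    alert_level: str          # e.g. an aviation color code: GREEN/YELLOW/ORANGE/RED
    effective_date: str       # date (and time, if available) the status took effect
    system_description: str   # broad parameters of the alert system being used
    issuing_agency: str       # the responsible observatory or agency
    source_url: str           # link so users can find the most current information

example = AlertRepost(
    volcano="Example Volcano",
    alert_level="YELLOW",
    effective_date="2018-07-18T00:00Z",
    system_description="ICAO aviation color code",
    issuing_agency="(responsible regional observatory)",
    source_url="https://example.org/current-status",
)
print(example)
```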
Divisibility, the Fundamental Theorem of Number Theory

Counting and the numbers that thus came forth are among the earliest achievements of mankind's awakening intellect. As numbers came into being, their intriguing properties were revealed, and symbolic meanings were assigned to them. Besides portending fortune or doom, numbers afforded a mathematical expression to many other aspects of existence. For instance, the ancient Greeks considered the divisors of a number that are less than the number itself to be its parts; indeed, they so named them. And those numbers that rise up from their parts, like Phoenix, the bird that according to legend rises up from its own ashes, were viewed as the embodiment of perfection. Six is such a perfect number, since it is the sum of its parts 1, 2, and 3; 28 and 496 are also perfect. Euclid (third century B.C.) already knew that a number of the form 2^(n-1) * (2^n - 1) is perfect if the second factor is prime.[1] It was Leonhard Euler (1707–1783), more than two thousand years later, who first showed that any even perfect number must be of this form. To this day it is still unknown whether or not there exist odd perfect numbers.

Keywords: Fundamental Theorem, Greatest Common Divisor, Common Multiple, Prime Property, Canonical Decomposition

1. Euclid did not have this notation to use. He described it in the following way: if the sum of a geometric sequence starting with 1 and having a ratio of 2 is prime, then multiplying this sum by the last element of the sequence yields a perfect number. (Euclid: Elements. Sir Thomas L. Heath, New York, Dover Publications, 1956.)
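Euclid's construction is easy to verify by brute force. The short sketch below sums each candidate's "parts" (proper divisors) and confirms that 2^(n-1) * (2^n - 1) is perfect whenever 2^n - 1 is prime, recovering the first few even perfect numbers mentioned above.

```python
def is_prime(m):
    """Trial-division primality test; fine for small Mersenne candidates."""
    if m < 2:
        return False
    d = 2
    while d * d <= m:
        if m % d == 0:
            return False
        d += 1
    return True

def sum_of_parts(k):
    """Sum of the proper divisors of k (the Greeks' 'parts')."""
    return sum(d for d in range(1, k) if k % d == 0)

# Euclid: 2^(n-1) * (2^n - 1) is perfect whenever 2^n - 1 is prime.
for n in range(2, 8):
    if is_prime(2**n - 1):
        candidate = 2**(n - 1) * (2**n - 1)
        assert sum_of_parts(candidate) == candidate  # "rises up from its parts"
        print(n, candidate)  # n = 2, 3, 5, 7 give 6, 28, 496, 8128
```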
"What's understudied is the living component of the Arctic, and that includes humans," said Syndonia "Donie" Bret-Harte, associate professor of biology at the University of Alaska Fairbanks and co-author of a paper to be published September 11, 2009 in the journal Science. The paper reviews current knowledge on the ecological consequences of climate change in the circumpolar Arctic and issues a call for action in several areas of global climate change research.

"Humans live in the Arctic with plants and animals, and we care about the ecosystem services, such as filtering water, fiber and food production, and cultural values, that the Arctic provides," said Bret-Harte, who specializes in Arctic plant ecology in Alaska. The global average surface temperature has increased by 0.72 F (0.4 C) over the past 150 years, and the average Arctic temperature is expected to increase by 6 C. "That's a mind-bogglingly large change to contemplate, and keep in mind that no one lives at the average temperature," Bret-Harte said.

The international team of scientists who collaborated on the paper reviewed dozens of research documents on the effects of circumpolar Arctic warming. They note that numerous direct effects, including a longer growing season following rapid spring melt, earlier plant flowering and appearance of insects after a warmer spring, and deaths of newborn seal pups following the melting of their under-snow birthing chambers, produce other, often more subtle, indirect effects on plants, animals and humans that warrant increased attention. Understanding how changes in plant and animal populations affect each other, and how they affect the physical or nonliving components of the Arctic, is critical to understanding how climate warming will change the Arctic.

One effect studied intensively at the UAF Institute of Arctic Biology Toolik Field Station on Alaska's North Slope is shrub expansion on the tundra. "Shrubs are increasing on the tundra as the climate warms, and more shrubs will lead to more warming in the spring," said Bret-Harte. Snow reflects most incoming radiation, which is simply light that can transfer heat. Shrubs that stick out of the snow in spring absorb radiation and give off heat. In this positive feedback cycle, the heating of the air immediately above the snow warms the snow, causing it to melt sooner. Warmer soils lead to increased nutrient availability, which contributes to greater shrub growth, which then contributes to still more warming.

Another effect studied intensively in Alaska occurs under the snow. "We need to better understand how winter comes and goes and how that drives shifts in plant-animal interactions," said Jeff Welker, professor of biology at the University of Alaska Anchorage. When it didn't snow at Toolik Field Station until Thanksgiving a few years ago, the soil got cold and stayed cold, so cold that microbes in the soil were barely active. The spring green-up was slow in coming and likely affected caribou forage, said Welker. In 2008, the snow started falling in September and never quit. The warmer winter soils, with their active microbes, were insulated from the cold and were able to provide nutrients to plants that stimulated growth.

The authors call for immediate attention to the conservation of Arctic ecosystems; understanding the ecology of Arctic winters; understanding extreme events such as wildfires and extended droughts; and more baseline studies to improve predictions.
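As an aside on the shrub-snow feedback described a few paragraphs above: its self-reinforcing character can be illustrated with a toy iteration in which each year's shrub cover advances snowmelt, and earlier melt in turn boosts shrub growth. Every number below is invented purely for illustration; this is not the authors' model.

```python
# Toy positive-feedback loop: shrub cover -> earlier snowmelt -> warmer soil
# -> more nutrients -> more shrub cover. All parameters are made up.
shrub_cover = 0.10  # fraction of tundra covered by shrubs (hypothetical)

for year in range(10):
    melt_advance = 5.0 * shrub_cover        # days of earlier snowmelt (toy value)
    nutrient_boost = 0.02 * melt_advance    # growth increment from warmer soil
    shrub_cover = min(1.0, shrub_cover * (1.0 + nutrient_boost))
    print(year, round(shrub_cover, 3))      # cover creeps up a little faster each year
```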
"This paper identifies gaps in our knowledge, what we need to be doing and where the public needs to spend its money," said Welker.

The research team was led by Eric Post, Penn State University, and included biologists, ecologists, geographers, botanists, anthropologists, and fish and wildlife experts from the University of Alberta and the Canadian Wildlife Service in Canada; Aarhus University and the University of Copenhagen in Denmark; the University of Helsinki in Finland; the Arctic Ecology Research Group in France; the Greenland Institute of Natural Resources in Greenland; the University Centre on Svalbard, the University of Tromsø, and the Centre for Saami Studies in Norway; the University of Aberdeen and the University of Stirling in Scotland; Lund University and the Abisko Scientific Research Station in Sweden; the University of Sheffield in the UK; and, in the United States, the Institute of Arctic Biology and the U.S. Geological Survey at the University of Alaska Fairbanks, the Environment and Natural Resources Institute of the University of Alaska Anchorage, and the University of Washington. Support was provided by Aarhus University, the Danish Polar Center, and the U.S. National Science Foundation.

CONTACT: Syndonia "Donie" Bret-Harte, associate professor of biology, Institute of Arctic Biology, University of Alaska Fairbanks. 907-474-5434, email@example.com. Jeffrey Welker, professor of biology, director, Environment and Natural Resources Institute, University of Alaska Anchorage. firstname.lastname@example.org, 907-257-2701. Marie Gilbert, information officer, Institute of Arctic Biology, University of Alaska Fairbanks. 907-474-7412, email@example.com
Our immune system plays an essential role in protecting us from disease, but how exactly does it do this? Dutch biologist Suzanne van Helden discovered that before dendritic cells move to the lymph nodes they lose their sticky "feet," which helps them move much faster.

Immature dendritic cells patrol the tissues in search of antigens. After exposure to such antigens they undergo a rigorous maturation process, during which the dendritic cells migrate to the lymph nodes to activate T cells. Van Helden studied the adhesion and migration of both immature and mature dendritic cells.

Dendritic cell as a general

Van Helden not only demonstrated that dendritic cells lose their podosomes very quickly during maturation but also identified the substances responsible for their disappearance. The presence of prostaglandin E2 is indispensable for this disassembly. In addition, it appears that dendritic cells lose their podosomes after interaction with certain bacteria. Strikingly, only gram-negative bacteria lead to podosome loss; gram-positive bacteria do not have this effect. Van Helden concludes that dendritic cells can apparently distinguish between different pathogens.

Dendritic cells in action

Van Helden carried out her research within a group of scientists who study the function of dendritic cells in different ways. The research comprises not only fundamental research, as in Van Helden's case, but also preclinical and clinical trials. The research was made possible by a grant from NWO. Spinoza Prize winner Carl Figdor supervised Van Helden during her research.
Ploidy - Wikipedia: The nucleus of a eukaryotic cell is haploid if it has a single set of chromosomes, each one not being part of a pair. By extension, a cell may be called haploid if its nucleus is haploid, and an organism may be called haploid if its body cells (somatic cells) are haploid.

Haploid - Simple English Wikipedia, the free encyclopedia: Haploid is the term used when a cell has half the usual number of chromosomes. A normal eukaryotic organism is composed of diploid cells, with one set of chromosomes from each parent.

UCMP Glossary: H: habit -- the general growth pattern of a plant. A plant's habit may be described as creeping, tree, shrub, vine, etc.

Doubled haploidy - Wikipedia: A doubled haploid (DH) is a genotype formed when haploid cells undergo chromosome doubling. Artificial production of doubled haploids is important in plant breeding. Haploid cells are produced from pollen or egg cells or from other cells of the gametophyte; then, by induced or spontaneous chromosome doubling, a doubled haploid cell is produced.

Meiosis - bozemanscience: Paul Andersen explains how the process of meiosis produces variable gametes. He starts with a brief discussion of haploid and diploid cells, and compares and contrasts spermatogenesis and oogenesis.

Haploidie - Wikipedia (translated from German): Haploidy (from the ancient Greek haplóos, "single") refers to the case in which the genome of a (eukaryotic) cell or of a prokaryote (e.g., a bacterium) is present in only a single copy, so that each allele occurs in only one version.

Double-Haploid Induction And Plant-Breeding Process: Syngenta scientists have identified the genetic source of haploid induction, a process that greatly speeds up plant breeding.

Biology Dictionary - D (Macroevolution.net): Meanings of biology terminology and abbreviations starting with the letter D (D to DZO).

Genetic recombination :: DNA from the Beginning: Animation in Concept 11, "Genes get shuffled when chromosomes exchange pieces."

Glossary of Terms: H - Physical Geography: Habitat - location where a plant or animal lives. Hadean - geologic eon that occurred from 3800 to 4600 million years ago. The Earth's oldest rocks date to the end of this time period.
Washington: Contrary to popular belief, cold snaps like the ones that hit the eastern United States in recent winters are not a consequence of climate change, says a new study. The results, published in the Journal of Climate, show that global warming actually tends to reduce temperature variability.

Repeated cold snaps led to temperatures far below freezing across the eastern United States in the past two winters. Parts of Niagara Falls froze, and ice floes formed on Lake Michigan. But scientists at ETH Zurich in Switzerland and the California Institute of Technology in the US, led by Tapio Schneider, professor of climate dynamics at ETH Zurich, found that the extreme winters were not a result of climate change. They used climate simulations and theoretical arguments to show that in most places the range of temperature fluctuations will decrease as the climate warms. Cold snaps will therefore become rarer not only because the climate as a whole is warming, but also because fluctuations about the warming mean temperature become smaller. However, Schneider noted that "despite lower temperature variance, there will be more extreme warm periods in the future because the Earth is warming."

Using a highly simplified climate model, the researchers examined various climate scenarios to verify their theory. The model showed that temperature variability in mid-latitudes indeed decreases as the temperature difference between the poles and the equator diminishes. Climate model simulations assessed by the Intergovernmental Panel on Climate Change (IPCC) showed similar results: as the climate warms, temperature differences in mid-latitudes decrease, and so does temperature variability, especially in winter. Temperature extremes will therefore become rarer as this variability is reduced. But this does not mean there will be no temperature extremes in the future, the researchers added.
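The core mechanism, that a weaker pole-to-equator temperature contrast damps midlatitude temperature swings, can be illustrated with a toy calculation: if weather systems shuffle air parcels north and south by roughly fixed random distances, the temperature anomaly a parcel carries scales with the background gradient. The sketch below is our own illustration of that proportionality, not the authors' model.

```python
import random

def temp_variability(gradient, n=100_000, seed=0):
    """Std dev of local temperature anomalies from random meridional displacements.

    gradient: mean temperature change (K) per unit of north-south distance (toy).
    The spread of the random displacements is held fixed, so the anomaly
    standard deviation is directly proportional to the gradient.
    """
    rng = random.Random(seed)
    anomalies = [-gradient * rng.gauss(0, 1) for _ in range(n)]
    mean = sum(anomalies) / n
    return (sum((a - mean) ** 2 for a in anomalies) / n) ** 0.5

# Halving the pole-to-equator gradient halves the temperature variability:
print(temp_variability(10.0))  # ~10.0
print(temp_variability(5.0))   # ~5.0
```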
The recent Stern Review on the Economics of Climate Change highlights the importance of climate change science – not only physical and biological science but also engineering, economics and social science – in the assessment of the economic impacts. As well as mitigating the causes of change, research shows that people need to adapt and learn to live with environmental changes. But many people still refuse to accept that the science is sufficiently robust to allow decisions to be taken about the biggest challenge facing our planet.

Professor Thorpe said, "In light of the scientific evidence, summarised in the Stern Review, it is hard to understand how people can make statements like 'climate change is all down to variations in the sun's radiation' or 'we really don't know what is going to happen.' I am willing, on behalf of NERC, to accept the challenge of a public debate with sceptics to try to correct misinformation with actual scientific knowledge."

NERC has set up an online discussion forum where all comers can air their views. Log on to www.nerc.ac.uk and follow the links. The forum will be open until 31 January 2007. Professor Thorpe added, "If you don't believe the science then please tell us why, or if you are confused about it then ask a question. In either case we will respond. Come and join the debate!" Professor Thorpe and his team of experts will provide the scientists' response to all climate-related comments or questions posted on the website.
Introduction to Probability, by Leif Mejlbro. Publisher: BookBoon, 2009. Number of pages: 61. In this book you will find the basic mathematics of probability theory needed by engineers and university students. Topics such as elementary probability calculus, density functions, and stochastic processes are illustrated.

Related titles:

- by Oliver Knill (Overseas Press): This text covers the material of a basic probability course, discrete stochastic processes including martingale theory, continuous-time stochastic processes such as Brownian motion and stochastic differential equations, estimation theory, and more.
- by Douglas Kennedy (Trinity College): This material was made available for the course Probability of the Mathematical Tripos. Contents: Basic Concepts; Axiomatic Probability; Discrete Random Variables; Continuous Random Variables; Inequalities, Limit Theorems and Geometric Probability.
- by Robert M. Gray (Springer): A self-contained treatment of the theory of probability and random processes, intended to lay theoretical foundations for measure and integration theory and to develop the long-term time-average behavior of measurements made on random processes.
- by Peter G. Doyle and J. Laurie Snell (Dartmouth College): In this work we look at the interplay of physics and mathematics through an example where the mathematics involved is at the college level: the relation between elementary electric network theory and random walks.
For more than 140 years nearly all that scientists knew about this animal was derived from one lonely specimen preserved in a jar of alcohol in the Natural History Museum, London. Today, in a paper in the open access journal ZooKeys, a team of bat biologists led by Don Buden of the College of Micronesia published a wealth of new information on this “forgotten” species, including the first detailed observations of wild populations. And it is none too soon, says paper co-author Kristofer Helgen of the Smithsonian’s National Museum of Natural History, as the low-lying atolls this bat calls home are likely to be increasingly affected by rising ocean waters brought on by climate change. “Very little is known about many of the mammals that live on remote Pacific islands, including this beautiful flying fox,” Helgen said. “This study gives us our first close look at a remarkable bat.” The lone London specimen was collected in 1870 from the Mortlock Islands, a series of atolls that are part of the Federated States of Micronesia in the west-central Pacific Ocean. British biologist Oldfield Thomas used this specimen to name the species Pteropus phaeocephalus in 1882. But during a recent study of the bat, Buden discovered that a German naturalist voyaging on a Russian expedition had observed and named the animal some 50 years earlier. “We found a report written by F.H. Kittlitz in 1836 describing his expedition to the Pacific Islands in the late 1820s. In that report he describes the flying-foxes of the Mortlocks and names them Pteropus pelagicus,” Buden said. “This means the species was named long before Thomas’s description in 1882.” According to internationally established rules for naming animals, the earliest available scientific name of a species must be officially adopted. In the ZooKeys paper the scientists comply with this rule by renaming the Mortlock Islands flying fox Pteropus pelagicus. Not only does Kittlitz correctly deserve credit for the discovery of the species 50 years earlier than previously thought, but he can now also be credited for its “new” original name. During their research—which involved careful study of the skulls and skins of related flying fox species in 8 different museums on 3 different continents—the researchers straightened out a second point of confusion in the scientific literature regarding these animals. They demonstrated that flying foxes from the nearby islands of Chuuk Lagoon, long regarded as the separate species Pteropus insularis, are best regarded as a subspecies of Pteropus pelagicus. This finding shows that the Mortlock flying fox has a wider geographic distribution than previously realized. New fieldwork on the Mortlock Islands revealed more than name changes. The article describes the first study of the behavior, diet and conservation status of this flying fox, finding that the Mortlock Islands support a small population of 900 to 1,200 bats scattered across a land surface of only 4.6 square miles. Legal rules have brought better protection to the species, which was once heavily hunted and exported for food. But the future of the species remains uncertain, Helgen points out. In their study of Mortlock Islands geography, the researchers learned none of the islands are more than a few meters high. Rising sea levels, generated by climate change, pose a serious threat to the flying foxes’ habitat and its food resources through flooding, erosion and contamination of freshwater supplies. 
"When we think of climate change having an impact on a mammal species, what comes to mind most immediately is an Arctic animal like the polar bear, which depends on sea ice to survive," Helgen said. "But this flying fox may be the best example of a mammal species likely to be negatively impacted by warming global climates. Here is a tropical mammal that has survived and evolved for hundreds of millennia on little atolls near the equator. How much longer will it survive as sea levels continue to rise?"
THE SOLAR CYCLE AND ITS EFFECT ON THE EARTH

The motion of the sun can trigger earthquakes, and solar flares have the capability of altering the length of the day. There is a correlation between solar activity and weather, and correlations exist among various geophysical phenomena: volcanic eruptions, earthquakes, solar activity, and the length of the day.

DYNAMICS OF A PHOTON

"Photons are often described as 'wavelets' because a single photon covers only a very small amount of space" (www.play-hookey.com). Electrons are emitted when light from a source (the sun, etc.) of an appropriate frequency (at or above the threshold frequency) impinges on an atom or a metal surface, a phenomenon referred to as the photoelectric effect. The light impinging on the metal surface can be ultraviolet or infrared rays of appropriate frequency. The number of emitted electrons varies from one atom to another owing to differences in the characteristic intensity of light: the greater the intensity of the light, the greater the number of emitted electrons. Another reason for the difference in emission among atoms across the periodic chart is the difference in mass; the heavier the atom, the greater the number of emitted electrons. In the same vein, the energy of the emitted electrons depends on the frequency of the impinging light.

SIMILARITIES AND DIFFERENCES OF THE VARIOUS COMPONENTS OF THE ELECTROMAGNETIC SPECTRUM

Infrared rays and radio waves carry less energy per photon than visible light; they have low frequency and long wavelength. By contrast, UV rays, X-rays, and gamma rays have high frequency and short wavelength. All travel at the same speed, the speed of light; the difference lies in the wavelength. X-rays have found use in medicine because they penetrate the human body, whereas radio waves have weak penetrating power. Visible light can pass through transparent objects, e.g., glass.

USE OF THE ELECTROMAGNETIC SPECTRUM TO DETERMINE THE COMPOSITION AND MOTION OF STARS

Today it is possible to determine the chemical composition of the stars. The essential tool for doing this is spectroscopy, the study of a thing using its spectrum. Astrophysics and spectroscopy are closely related: "Astrophysics is the aspect of astronomy that deals with the physical properties of stars, galaxies and other astronomical objects" (Astrophysics on astronomical.org). Sunlight can be separated into its various colors with a prism. Dark lines in the spectrum indicate regions with little or no light. On Earth, similar lines can be seen in the spectra of hot gases, and these patterns correspond to specific elements. The chemical elements in the sun, which is mainly hydrogen, are also found on planet Earth. The strong similarity between the absorption lines of the sun and those of other stars leads to the conclusion that stars are composed mainly of hydrogen and helium, with traces of other elements. A great deal of information is revealed by the absorption-line pattern of a star; a large portion of a stellar spectrum contains absorption lines. The star must therefore consist of a cooler, less dense, atmospheric outer part, while the interior must be hotter and denser and produce a continuous spectrum. The temperature of a star is inversely related to the distance from its centre. Stars lack the molten interior observed in some planets; even the denser part is gaseous because of the high temperature. From the electromagnetic spectrum, it appears to us as if the stars (and the sun, planets, and moon) rotate around us.
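To put numbers on the per-photon energy comparison above, the relation E = hf = hc/λ can be evaluated at representative wavelengths for each band. A short illustrative script follows; the wavelengths are round-number examples, since band boundaries vary by convention.

```python
# Photon energy E = h*f = h*c/wavelength for representative bands of the spectrum.
h = 6.626e-34    # Planck constant, J*s
c = 2.998e8      # speed of light, m/s
eV = 1.602e-19   # joules per electronvolt

bands = {        # representative wavelengths in meters (illustrative values)
    "radio":    1.0,
    "infrared": 1e-5,
    "visible":  5e-7,
    "UV":       1e-8,
    "X-ray":    1e-10,
    "gamma":    1e-12,
}
for name, wavelength in bands.items():
    energy_ev = h * c / wavelength / eV
    print(f"{name:9s} {energy_ev:12.3e} eV")
# Radio photons come out around 1e-6 eV, visible light around 2.5 eV,
# and gamma rays around 1e6 eV: same speed, vastly different energy per photon.
```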
To an observer on Earth, the sun appears to rise in the east and set in the west each day; this apparent movement is the diurnal motion.

References:

Bigelow, Ken. "Characteristics of a Photon" (1996, 2000-2007). Retrieved from www.play-hookey.com on Sept. 27, 2008.

Scientia Astrophysical Organization. "Astrophysics and Astronomy." Retrieved from www.Astrophysical.org on Sept. 27, 2008.
Effects of Global Warming

Radiation from the sun warms the earth's surface; energy then radiates back from the surface and from the atmosphere, and greenhouse gases, warmed by the radiation from the earth, "trap heat." Without greenhouse gases the average temperature would be about -18 degrees C; with them it is about +15 degrees C. The earth is about 1 degree warmer today than 100 years ago.

Which season shows the biggest change in temperature? (Source: Temperature and Precipitation Trends in Canada During the 20th Century, Xuebin Zhang, Lucie A. Vincent, W.D. Hogg and Ain Niitsoo, Climate Research Branch, Meteorological Service of Canada.)

But it isn't about the temperature itself. What are the effects? Melting polar ice caps affect the habitat of polar bears, which need ice cover in order to hunt their prey (seals). Sea-level predictions vary (a lot!), but even a 1-2 meter rise would mean millions of people have to move. Precipitation has increased in some areas and decreased in others.
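The quoted -18 degrees without greenhouse gases is the planet's effective radiating temperature, which follows from balancing absorbed sunlight against Stefan-Boltzmann emission. A quick check, using standard textbook values for the solar constant and albedo:

```python
# Effective (no-greenhouse) temperature of Earth from the Stefan-Boltzmann law.
S = 1361.0        # solar constant, W/m^2 (standard value)
albedo = 0.3      # fraction of sunlight reflected (standard value)
sigma = 5.670e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

# Absorbed flux averaged over the sphere equals emitted flux: S*(1-a)/4 = sigma*T^4
T = ((S * (1 - albedo)) / (4 * sigma)) ** 0.25
print(round(T - 273.15, 1))  # about -18.6 C, matching the -18 degrees quoted above
```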
Reactivity in Chemistry

CC13. Multiple Bonds in Coordination Complexes

Metal-Ligand Multiple Bonds

For the most part, we have looked at donor atoms that provide one pair of electrons to a metal. In chelation, two donor atoms on the same ligand can provide a total of four electrons to the metal. In addition, some ligands can form double (or triple) bonds to a metal, providing four or even six electrons from one donor atom.

Oxides may be the most common multiply-bonded ligand. In biology, attention has turned to the role of iron and copper oxides as active intermediates in a variety of enzymes that use molecular oxygen to oxidize substrates. The most notable example is cytochrome P450, a ubiquitous class of oxidizing agents found in most organisms. In humans, cytochrome P450 is found in a variety of tissues, incorporating oxygen atoms into small molecules for a plethora of reasons. One interesting use of cytochrome P450 is as a detoxifying agent in the liver, where it converts C-H bonds in fat-soluble compounds into C-OH groups (alcohols), which can then be excreted via the kidneys and urinary system. The active intermediate involved in breaking the C-H bond appears to be a terminal iron oxide, Fe=O.

Figure CC13.1. The proposed activated iron center of cytochrome P450. The heme ring is simplified.

Oxides are also important industrially. Olefin metathesis catalysts, which are used in refining alkenes in petroleum into more useful isomers, are often metal oxides, which are converted under the reaction conditions into metal carbenes (see below).

Problems:
- Explain, in terms of intermolecular interactions, why oxidation by cytochrome P450 is a necessary step for removal of many compounds from human tissues.
- Show how a metal orbital and a ligand orbital can combine to form a pi bond. How many different transition metal orbitals could participate in this bond?
- Often, "terminal" metal-ligand multiple bonds can be in equilibrium with "bridging" ligands between two metal atoms. Show, with drawings, how the oxide ligands on two adjacent Fe=O groups could form a single Fe2O2 unit.

A second important class of metal-ligand multiple bonds is the carbenes. Carbenes contain metal-carbon double bonds. They are often divided into two classes: Fischer carbenes and Schrock carbenes, or alkylidenes. Fischer carbenes were developed by E.O. Fischer, who shared a Nobel Prize with Geoff Wilkinson in 1973 for other work. Fischer carbenes have a heteroatom, such as an oxygen or nitrogen, attached to the double-bonded carbon. They can be somewhat more stable than alkylidenes, which have only hydrogens or carbons attached to the double-bonded carbon.

Figure CC13.2. A Fischer carbene complex.

Problem: Show, with drawings of molecular orbitals, why Fischer carbenes are stabilized by the presence of adjacent heteroatoms.

Alkylidenes were discovered by Dick Schrock, now at MIT, when he was working at DuPont in the early 1970s. While trying to place some bulky alkyl groups on tantalum, he noticed spectroscopic evidence that suggested a double bond. DuPont didn't have him doing this experiment for a particular reason; he was employed as a basic scientist, whose job it was to produce new information for its own sake. However, Schrock quickly realized he had found something with important applications: these kinds of structures had been proposed by Chauvin as intermediates in olefin metathesis, a process used in petroleum refining, but they hadn't been observed before.
Years later, Schrock and other workers, including Bob Grubbs at Caltech, were able to develop new alkylidene-based catalysts useful in polymer chemistry and organic synthesis. For their contributions in this area, Schrock, along with Grubbs and Yves Chauvin at IFP, shared the Nobel Prize in chemistry in 2005.

Figure CC13.3. A Schrock carbene or alkylidene complex.

Problems:
- Suggest some examples of spectroscopic evidence that may have tipped Schrock off about the presence of a metal-carbon double bond in his product.
- Determine the electron count at the metal in the following complexes.

Metal-Metal Multiple Bonds

Just as additional orbital interactions can lead to metal-ligand multiple bonds, they can also make multiple bonds between metals. Metal-metal bonds in coordination complexes are interesting because they are a sort of halfway state between bulk elemental metals, in which arrays of atoms are bonded together more or less without limit, and molecules, which have discrete shapes and sizes. Non-molecular compounds can sometimes be difficult to study (although they also provide some advantages). Sometimes researchers are interested in the behaviour of compounds having metal-metal bonds because they can provide insight into metals.

It isn't easy to tell how many bonds there are between two metal atoms. Usually, researchers first suspect a multiple bond is present when an X-ray structure shows that the two metal atoms are very close together. Molecular orbital calculations are then performed to get an idea of the electronic structure of the compound. The number of bonds that should be drawn between the two atoms may still be open to debate; most workers draw a bond for every pair of electrons shared between the atoms. However, sometimes there are electrons in antibonding levels as well, so the true bond order between the metals is lower.

A sigma bond has maximum electron density along the bond axis. A pi bond has maximum electron density above and below the bond axis (electron density is divided into two lobes along the bond). A delta bond has maximum electron density above and below and in front of and behind the bond axis (electron density is divided into four lobes along the bond).

Problems:
- Show how two metal d orbitals can combine to form a delta bond.
- Determine the electron count at each metal in the following complexes.

One of the shortest metal-metal bonds on record was reported by Klaus Theopold at the University of Delaware and Clark Landis at the University of Wisconsin-Madison in 2007. The Cr-Cr distance is reported as 1.8028(9) Angstroms (the 9 in parentheses is the error in the last digit, i.e. +/- 0.0009). They believe each chromium atom in the complex has 18 electrons.

a) How many metal-metal bonds are there between the chromium atoms?

b) Show drawings of the overlap between pairs of orbitals that could be responsible for the metal-metal bond.
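The bond-order bookkeeping described above, a bond per shared electron pair discounted by electrons in antibonding levels, reduces to a one-line formula. A minimal sketch, with hypothetical electron counts rather than values for any particular complex:

```python
def formal_bond_order(bonding_electrons: int, antibonding_electrons: int) -> float:
    """Half the excess of bonding over antibonding electrons between two atoms."""
    return (bonding_electrons - antibonding_electrons) / 2

# Ten shared electrons in bonding levels (one sigma pair, two pi pairs, two
# delta pairs) and none in antibonding levels would give a quintuple bond:
print(formal_bond_order(10, 0))  # 5.0
# Promoting two of those electrons into antibonding levels lowers the order:
print(formal_bond_order(8, 2))   # 3.0
```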
This site is written and maintained by Chris P. Schaller, Ph.D., College of Saint Benedict / Saint John's University (with contributions from other authors as noted). It is freely available for educational use. Structure & Reactivity in Organic, Biological and Inorganic Chemistry by Chris Schaller is licensed under a Creative Commons Attribution-NonCommercial 3.0 Unported License. Send corrections to email@example.com

This material is based upon work supported by the National Science Foundation under Grant No. 1043566. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the National Science Foundation.
Outline of air pollution dispersion

The following outline is presented as an overview and topical guide to air pollution dispersion:

Air pollution dispersion – distribution of air pollution into the atmosphere. Air pollution is the introduction of particulates, biological molecules, or other harmful materials into Earth's atmosphere, causing disease, death to humans, damage to other living organisms such as food crops, or harm to the natural or built environment. Air pollution may come from anthropogenic or natural sources. Dispersion refers to what happens to the pollution during and after its introduction; understanding this may help in identifying and controlling it. Air pollution dispersion has become the focus of environmental conservationists and of governmental environmental protection agencies (local, state, province and national) of many countries, which have adopted and used much of the terminology of this field in their laws and regulations regarding air pollution control.

Air pollution emission plumes

Air pollution emission plume – flow of pollutant in the form of vapor or smoke released into the air. Plumes are of considerable importance in the atmospheric dispersion modelling of air pollution. There are three primary types of air pollution emission plumes:

- Buoyant plumes — Plumes which are lighter than air, either because they are at a higher temperature and lower density than the ambient air which surrounds them, or because they are at about the same temperature as the ambient air but have a lower molecular weight and hence lower density. For example, the emissions from the flue gas stacks of industrial furnaces are buoyant because they are considerably warmer and less dense than the ambient air. As another example, an emission plume of methane gas at ambient air temperature is buoyant because methane has a lower molecular weight than the ambient air.
- Dense gas plumes — Plumes which are heavier than air because they have a higher density than the surrounding ambient air. A plume may have a higher density than air because it has a higher molecular weight than air (for example, a plume of carbon dioxide), or because it is at a much lower temperature than the air. For example, a plume of evaporated gaseous methane from an accidental release of liquefied natural gas (LNG) may be as cold as -161 °C.
- Passive or neutral plumes — Plumes which are neither lighter nor heavier than air.

Air pollution dispersion models

There are five types of air pollution dispersion models, as well as some hybrids of the five types:

- Box model — The box model is the simplest of the model types. It assumes the airshed (i.e., a given volume of atmospheric air in a geographical region) is in the shape of a box. It also assumes that the air pollutants inside the box are homogeneously distributed and uses that assumption to estimate the average pollutant concentrations anywhere within the airshed. Although useful, this model is very limited in its ability to accurately predict dispersion of air pollutants over an airshed because the assumption of homogeneous pollutant distribution is much too simple.
- Gaussian model — The Gaussian model is perhaps the oldest (circa 1936) and perhaps the most commonly used model type. It assumes that the air pollutant dispersion has a Gaussian distribution, meaning that the pollutant distribution has a normal probability distribution. Gaussian models are most often used for predicting the dispersion of continuous, buoyant air pollution plumes originating from ground-level or elevated sources. Gaussian models may also be used for predicting the dispersion of non-continuous air pollution plumes (called puff models). The primary algorithm used in Gaussian modeling is the generalized dispersion equation for a continuous point-source plume (a minimal sketch of this equation in code follows this list).
- Lagrangian model — A Lagrangian dispersion model mathematically follows pollution plume parcels (also called particles) as the parcels move in the atmosphere, and it models the motion of the parcels as a random walk process. The Lagrangian model then calculates the air pollution dispersion by computing the statistics of the trajectories of a large number of the pollution plume parcels. A Lagrangian model uses a moving frame of reference as the parcels move from their initial location. It is said that an observer of a Lagrangian model follows along with the plume.
- Eulerian model — An Eulerian dispersion model is similar to a Lagrangian model in that it also tracks the movement of a large number of pollution plume parcels as they move from their initial location. The most important difference between the two models is that the Eulerian model uses a fixed three-dimensional Cartesian grid as a frame of reference rather than a moving frame of reference. It is said that an observer of an Eulerian model watches the plume go by.
- Dense gas model — Dense gas models simulate the dispersion of dense gas pollution plumes (i.e., pollution plumes that are heavier than air). Three commonly used dense gas models are:
  - The DEGADIS model, developed by Dr. Jerry Havens and Dr. Tom Spicer at the University of Arkansas under commission by the US Coast Guard and US EPA.
  - The SLAB model, developed by the Lawrence Livermore National Laboratory and funded by the US Department of Energy, the US Air Force and the American Petroleum Institute.
  - The HEGADAS model, developed by Shell Oil's research division.
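As promised in the Gaussian-model entry above, here is a minimal sketch of the generalized dispersion equation for a continuous point-source plume, including the usual ground-reflection image term. The linear growth laws for sigma_y and sigma_z are illustrative placeholders; operational models derive these coefficients from the Pasquill stability classes discussed below.

```python
import math

def gaussian_plume(q, u, x, y, z, h, a=0.08, b=0.06):
    """Steady-state concentration (g/m^3) downwind of a continuous point source.

    q: emission rate (g/s); u: wind speed at stack height (m/s);
    x, y, z: downwind, crosswind and vertical receptor coordinates (m);
    h: effective stack height (m). The spread parameters sigma_y and sigma_z
    are modeled here as simple linear functions of x (coefficients a and b
    are placeholders, not values from any cited model).
    """
    sigma_y = a * x  # crosswind spread (m)
    sigma_z = b * x  # vertical spread (m)
    crosswind = math.exp(-y**2 / (2 * sigma_y**2))
    # Image source at -h reflects the plume off the ground instead of losing mass.
    vertical = (math.exp(-(z - h)**2 / (2 * sigma_z**2))
                + math.exp(-(z + h)**2 / (2 * sigma_z**2)))
    return q / (2 * math.pi * u * sigma_y * sigma_z) * crosswind * vertical

# Ground-level centerline concentration 1 km downwind of a 50 m stack:
print(gaussian_plume(q=100.0, u=5.0, x=1000.0, y=0.0, z=0.0, h=50.0))
```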
Another example would be the emissions from an automobile paint shop with multiple roof vents or multiple open windows.
  - Sources, by motion
  - Sources, by urbanization level – whether the source is within a city or not is relevant because urban areas constitute a so-called heat island, and the heat rising from an urban area makes the atmosphere above it more turbulent than the atmosphere above a rural area:
    - Urban source – emission is in an urban area
    - Rural source – emission is in a rural area
  - Sources, by elevation:
    - Surface or ground-level source
    - Near-surface source
    - Elevated source
  - Sources, by duration:
    - Puff or intermittent source – short-term source (for example, many accidental emission releases are short-term puffs)
    - Continuous source – long-term source (for example, most flue gas stack emissions are continuous)

Characterization of atmospheric turbulence

Effect of turbulence on dispersion – turbulence increases the entrainment and mixing of unpolluted air into the plume and thereby acts to reduce the concentration of pollutants in the plume (i.e., it enhances the plume dispersion). It is therefore important to categorize the amount of atmospheric turbulence present at any given time. This type of dispersion is scale dependent: for flows where the cloud of pollutant is smaller than the largest eddies present, there will be mixing, and because there is no upper limit on the size of mixing motions in the atmosphere, bigger clouds experience larger and stronger mixing motions.

The Pasquill atmospheric stability classes

Pasquill atmospheric stability classes – the oldest and, for a great many years, the most commonly used method of categorizing the amount of atmospheric turbulence present, developed by Pasquill in 1961. He categorized the atmospheric turbulence into six stability classes named A, B, C, D, E and F, with class A being the most unstable or most turbulent class, and class F the most stable or least turbulent class.
- Table 1 lists the six classes
- Table 2 provides the meteorological conditions that define each class

The stability classes demonstrate a few key ideas. Solar radiation increases atmospheric instability by warming the Earth's surface, so that warm air lies below cooler (and therefore denser) air, promoting vertical mixing. Clear nights push conditions toward stability, as the ground cools quickly and establishes stable stratification and inversions. Wind increases vertical mixing, breaking down any type of stratification and pushing the stability class towards neutral (D).
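To make the Gaussian-model material concrete, here is a minimal sketch of the steady-state Gaussian point-source plume equation (the Generalized Dispersion Equation for a Continuous Point-Source Plume mentioned earlier), including the ground-reflection term. The function and variable names are mine; the dispersion coefficients sigma_y and sigma_z are taken as inputs because, in practice, they are read from empirical curves keyed to the stability classes defined in Tables 1 and 2 below.

```python
import math

def gaussian_plume_concentration(q, u, x, y, z, h, sigma_y, sigma_z):
    """Steady-state concentration downwind of a continuous point source.

    q        -- emission rate (g/s)
    u        -- mean wind speed along the x (downwind) axis (m/s)
    x, y, z  -- receptor position: downwind, crosswind, height (m)
    h        -- effective height of the plume centerline (m)
    sigma_y, sigma_z -- crosswind and vertical dispersion coefficients (m)
                        evaluated at downwind distance x, normally taken
                        from stability-class-dependent empirical curves
    Returns the concentration in g/m^3.
    """
    if x <= 0:
        return 0.0  # the formula applies only downwind of the source
    crosswind = math.exp(-y ** 2 / (2.0 * sigma_y ** 2))
    # Direct plume plus an "image source" term that reflects the plume
    # off the ground so that no pollutant is lost through the surface
    vertical = (math.exp(-(z - h) ** 2 / (2.0 * sigma_z ** 2)) +
                math.exp(-(z + h) ** 2 / (2.0 * sigma_z ** 2)))
    return q / (2.0 * math.pi * u * sigma_y * sigma_z) * crosswind * vertical
```

For illustration, with assumed values q = 100 g/s, u = 5 m/s, h = 50 m, and sigma_y = 60 m, sigma_z = 30 m at x = 1 km, the ground-level centerline concentration works out to roughly 9 × 10⁻⁴ g/m³.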
Table 1: The Pasquill stability classes

| Stability class | Definition | Stability class | Definition |
|---|---|---|---|
| A | very unstable | D | neutral |
| B | unstable | E | slightly stable |
| C | slightly unstable | F | stable |

Table 2: Meteorological conditions that define the Pasquill stability classes

| Surface windspeed (m/s) | Surface windspeed (mi/h) | Daytime insolation: strong | Daytime insolation: moderate | Daytime insolation: slight | Nighttime cloud cover > 50% | Nighttime cloud cover < 50% |
|---|---|---|---|---|---|---|
| < 2 | < 5 | A | A–B | B | E | F |
| 2–3 | 5–7 | A–B | B | C | E | F |
| 3–5 | 7–11 | B | B–C | C | D | E |
| 5–6 | 11–13 | C | C–D | D | D | D |
| > 6 | > 13 | C | D | D | D | D |

Note: Class D applies to heavily overcast skies, at any windspeed, day or night.

Incoming solar radiation is based on the following: strong (> 700 W m−2), moderate (350–700 W m−2), slight (< 350 W m−2).

Other parameters that can define the stability class

The stability class can also be defined by using the:
- Temperature gradient
- Fluctuations in wind direction
- Richardson number
- Bulk Richardson number
- Monin–Obukhov length

Historical stability class data – known as Stability Array (STAR) data; data for sites within the USA can be purchased from the National Climatic Data Center (NCDC).

Advanced methods of categorizing atmospheric turbulence

Advanced air pollution dispersion models do not categorize atmospheric turbulence by using the simple meteorological parameters commonly used in defining the six Pasquill classes, as shown in Table 2 above. Instead, they use some form of Monin–Obukhov similarity theory. Some examples include:
- AERMOD – the US EPA's most advanced model; it no longer uses the Pasquill stability classes to categorize atmospheric turbulence, relying instead on the surface roughness length and the Monin–Obukhov length.
- ADMS 4 – the United Kingdom's most advanced model; it uses the Monin–Obukhov length, the boundary layer height and the windspeed to categorize the atmospheric turbulence.

Miscellaneous other terminology

- Building effects or downwash: When an air pollution plume flows over nearby buildings or other structures, turbulent eddies are formed on the downwind side of the building. Those eddies cause a plume from a stack source located within about five times the height of a nearby building or structure to be forced down to the ground much sooner than it would be if the building or structure were not present. The effect can greatly increase the resulting nearby ground-level pollutant concentrations downstream of the building or structure. If the pollutants in the plume are subject to depletion by contact with the ground (particulates, for example), the concentration increase just downstream of the building or structure will decrease the concentrations further downstream.
- Deposition of the pollution plume components to the underlying surface can be defined as either dry or wet deposition:
  - Dry deposition is the removal of gaseous or particulate material from the pollution plume by contact with the ground surface or vegetation (or even water surfaces) through transfer processes such as absorption and gravitational sedimentation. It may be calculated by means of a deposition velocity, which is related to the resistance of the underlying surface to the transfer.
  - Wet deposition is the removal of pollution plume components by the action of rain. The wet deposition of radionuclides in a pollution plume by a burst of rain often forms so-called hot spots of radioactivity on the underlying surface.
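Both forms of deposition are commonly parameterised with simple first-order expressions; as a sketch (these are standard conventions, not spelled out in this outline), dry deposition is modelled as a downward flux proportional to the near-surface concentration, and wet deposition as first-order scavenging of the plume by precipitation:

```latex
% Dry deposition: downward flux F through the surface, with deposition
% velocity v_d and concentration C evaluated at a reference height z_ref
F = -\, v_d \, C(z_{\mathrm{ref}})

% Wet deposition: first-order washout with scavenging coefficient \Lambda
\frac{dC}{dt} = -\, \Lambda \, C
```

The deposition velocity v_d encapsulates the surface-resistance effects described above, while the scavenging coefficient Λ depends chiefly on the rainfall rate.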
- Inversion layers: Normally, the air near the Earth's surface is warmer than the air above it because the atmosphere is heated from below: solar radiation warms the Earth's surface, which in turn warms the layer of the atmosphere directly above it. Thus, the atmospheric temperature normally decreases with increasing altitude. However, under certain meteorological conditions, atmospheric layers may form in which the temperature increases with increasing altitude. Such layers are called inversion layers. When such a layer forms at the Earth's surface, it is called a surface inversion. When an inversion layer forms at some distance above the earth, it is called an inversion aloft (sometimes referred to as a capping inversion). The air within an inversion aloft is very stable, with very little vertical motion. Any rising parcel of air within the inversion soon expands, thereby adiabatically cooling to a lower temperature than the surrounding air, and the parcel stops rising. Any sinking parcel soon compresses adiabatically to a higher temperature than the surrounding air, and the parcel stops sinking. Thus, any air pollution plume that enters an inversion aloft will undergo very little vertical mixing unless it has sufficient momentum to pass completely through it. That is one reason why an inversion aloft is sometimes called a capping inversion.
- Mixing height: When an inversion aloft is formed, the atmospheric layer between the Earth's surface and the bottom of the inversion aloft is known as the mixing layer, and the distance between the Earth's surface and the bottom of the inversion aloft is known as the mixing height. Any air pollution plume dispersing beneath an inversion aloft will be limited in vertical mixing to that which occurs beneath the bottom of the inversion aloft (sometimes called the lid). Even if the pollution plume penetrates the inversion, it will not undergo any further significant vertical mixing. A pollution plume rarely passes completely through an inversion layer aloft unless the plume's source stack is very tall and the inversion lid is fairly low.

Air pollution dispersion models
- ADMS 3 (Atmospheric Dispersion Modelling System) – advanced atmospheric pollution dispersion model for calculating concentrations of atmospheric pollutants emitted both continuously from point, line, volume and area sources, and intermittently from point sources.
- CANARY (by Quest)
- NAME (dispersion model)

See also
- Bibliography of atmospheric dispersion modeling
- AP 42 Compilation of Air Pollutant Emission Factors
- Atmospheric dispersion modeling
- Roadway air dispersion modeling
- Useful conversions and formulas for air dispersion modeling
- List of atmospheric dispersion models
- Yamartino method

References and external links
- Air Pollution Dispersion: Ventilation Factor, by Dr. Nolan Atkins, Lyndon State College
- Bosanquet, C.H. and Pearson, J.L. (1936). The spread of smoke and gases from chimneys, Trans. Faraday Soc., 32:1249.
- Atmospheric Dispersion Modeling
- Beychok, Milton R. (2005). Fundamentals of Stack Gas Dispersion (4th ed.). Author-published. ISBN 0-9644588-0-2. (Chapter 8, page 124.)
- Features of Dispersion Models, a publication of the European Union Joint Research Centre (JRC)
- DEGADIS Technical Manual and User's Guide (US EPA download website)
- UCRL-MA-105607, User's Manual for SLAB: An Atmospheric Dispersion Model for Denser-Than-Air Releases, Donald Ermak, June 1990.
- HEGADIS Technical Reference Manual
- Walton, John (April 1973). "Scale-Dependent Diffusion". Journal of Applied Meteorology, 12: 547–548. doi:10.1175/1520-0450(1973)012<0547:sdd>2.0.co;2.
- Pasquill, F. (February 1961). "The estimation of the dispersion of windborne material". The Meteorological Magazine, vol. 90, No. 1063, pp. 33–49.
- Seinfeld, John (2006). Atmospheric Chemistry and Physics: From Air Pollution to Climate Change. Hoboken, New Jersey: John Wiley & Sons, Inc. p. 750. ISBN 978-0-471-72018-8.
- NCDC website for ordering Stability Array (STAR) data
- AERMOD: Description of Model Formulation
- ADMS 4: description of the model by the developers, Cambridge Environmental Research Consultants
- Turner, D.B. (1994). Workbook of Atmospheric Dispersion Estimates: An Introduction to Dispersion Modeling (2nd ed.). CRC Press. ISBN 1-56670-023-X.
A University of Utah study shows how various regions of North America are kept afloat by heat within Earth's rocky crust, and how much of the continent would sink beneath sea level if not for heat that makes rock buoyant.

Of coastal cities, New York City would sit 1,427 feet under the Atlantic, Boston would be 1,823 feet deep, Miami would reside 2,410 feet undersea, New Orleans would be 2,416 feet underwater and Los Angeles would rest 3,756 feet beneath the Pacific. Mile-high Denver's elevation would be 727 feet below sea level and Salt Lake City, now about 4,220 feet, would sit beneath 1,293 feet of water. But high-elevation areas of the Rocky Mountains between Salt Lake and Denver would remain dry land.

"If you subtracted the heat that keeps North American elevations high, most of the continent would be below sea level, except the high Rocky Mountains, the Sierra Nevada and the Pacific Northwest west of the Cascade Range," says study co-author Derrick Hasterok, a University of Utah doctoral student in geology and geophysics.

"We have shown for the first time that temperature differences within the Earth's crust and upper mantle explain about half of the elevation of any given place in North America," with most of the rest due to differences in what the rocks are made of, says the other co-author, David Chapman, a professor of geology and geophysics, and dean of the University of Utah Graduate School.

People usually think of elevations as being determined by movements of "tectonic plates" of Earth's crust, resulting in volcanism, mountain-building collisions of crustal plates, stretching apart and sinking of inland basins, and sinking or "subduction" of old seafloor. But Hasterok and Chapman say those tectonic forces act through the composition and temperature of the rock they move. So as crustal plates collide to form mountains like the Himalayas, the mountains rise because the collision makes less dense crustal rock get thicker and warmer, and thus more buoyant.

The study – published online in the June issue of the Journal of Geophysical Research-Solid Earth – is more than just an entertaining illustration of how continents and mountains like the Rockies are kept afloat partly by heat from Earth's deep interior and heat from radioactive decay of uranium, thorium and potassium in Earth's crust. Scientists usually attribute the buoyancy and elevation of various continental areas to variations in the thickness and mineral composition (and thus density) of crustal rocks. But Chapman says researchers have failed to appreciate how heat makes rock in the continental crust and upper mantle expand to become less dense and more buoyant.

"We found a good explanation for the elevation of continents," Hasterok says. "We now know why some areas are higher or lower than others. It's not just what the rocks are made of; it's also how hot they are."

Chapman says it will take billions of years for North American rock to cool to the point where it becomes denser, sinks and puts much of the continent underwater. Coastal cities face flooding much sooner as sea levels rise due to global warming, he adds.
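The buoyancy argument can be made quantitative with a back-of-envelope thermal isostasy estimate; the numbers below are illustrative assumptions, not values from the study. If a lithospheric column of thickness L is warmer on average by ΔT than a reference column, thermal expansion (volumetric coefficient α) makes it less dense, and isostatic balance lets it ride higher by roughly:

```latex
\Delta h \approx \alpha \,\Delta T\, L
\approx (3\times10^{-5}\ \mathrm{K^{-1}})\,(200\ \mathrm{K})\,(200\ \mathrm{km})
\approx 1.2\ \mathrm{km}
```

An uplift of the order of a kilometre is consistent in magnitude with the elevation changes quoted in this story, and with the finding that temperature accounts for about half of the elevation of any given place.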
Why it is Important to Know How Heat Affects Elevation

The new study's scientific significance is that by accounting for composition, thickness and, now, temperature of crustal rock in North America, scientists can more easily determine how much elevation is explained by forces such as upwelling plumes of molten rock like the "hotspot" beneath Yellowstone, and places where vast areas of mantle rock "dripped" down, letting mountains like the Sierras rise higher. The new method also will make it easier to identify areas where crustal rocks are unusually hot due to higher-than-average concentrations of radioactive isotopes. Chapman says temperatures in Earth's crust and upper mantle often are inferred from measurements in boreholes drilled near the surface, while elevation reflects average rock temperatures down to 125 miles beneath Earth's surface. Inconsistencies between the two measurements can be used to reveal the extent to which borehole temperatures are affected by global warming or changes in groundwater flow. Elevation increases in a given area could provide notice – tens of millions of years in advance – of volcanic processes beginning to awaken deep in the lithosphere, he adds.

Most Regions Would Sink, but Seattle Would Soar

Some locations – sitting atop rock that is colder than average – actually would rise without the temperature effect, which in their case means without refrigeration. Instead of its current perch along saltwater Puget Sound, Seattle would soar to an elevation of 5,949 feet. Seattle sits above a plate of Earth's crust that is diving or "subducting" eastward at an angle. That slab of cold, former seafloor rock insulates the area west of the Cascades from heat deeper beneath the slab. Removing that effect would warm the Earth's crust under Seattle, so it would expand and become more buoyant. To calculate how elevations of different regions would change if temperature effects were removed, the researchers did not remove all heat, but imagined that each region's rock was as cold as some of North America's coldest crustal rock, which still is 750 degrees Fahrenheit at the base of the crust in Canada. One other location illustrates how elevations would sink if the crust had that same cold temperature: Atlanta, now 1,000 feet above sea level, would rest 1,416 feet below sea level.

A Lesson from the Abyssal Ocean Depths

Chapman says it may seem paradoxical, but "the answer to questions about the elevation of Earth's continental areas starts in the oceans." The Earth's crust averages 4 miles thick beneath the oceans and 24 miles thick under continents. The crust and underlying layer, the upper mantle, together are known as the lithosphere, which has a maximum thickness of 155 miles. The lithosphere is broken up into "tectonic plates" that slowly drift, changing the shapes, locations and configurations of continents over the eons. Ice floats on water because when water freezes it expands and becomes less dense. Rock and most other materials expand and become less dense when heated. Hasterok says it has been well known for years that "elevations of different regions of the continents sit higher or lower relative to each other as a result of their density and thickness.
Most elevation that we can observe at the surface is a result of the buoyancy of the crust and upper mantle.” He adds that elevation changes also can stem from heating and expansion of rocks that makes them more buoyant – a phenomenon named “thermal isostasy” that explains “why the hot mid-ocean ridges are much higher relative to the cold abyssal plains.” New ocean floor crust is produced by volcanic eruptions at undersea mountain ranges known as mid-ocean ridges. Molten and hot rock emerges to form new seafloor, which spreads away from a ridge like two conveyor belts moving opposite directions. As new seafloor crust becomes older and cooler over millions of years, it becomes denser and loses elevation. Chapman and Hasterok say there is a 10,000-foot elevation difference between the peaks of the mid-ocean ridges and older seafloor. Given that, Chapman says he has been puzzled that differences in rock temperature never have been used to explain elevations on continents. “Our goal was to show that temperature variations add a significant contribution, not only to the ocean floor, but also to continental elevation,” Hasterok says. “For example, the Colorado Plateau sits 6,000 feet above sea level, while the Great Plains – made of the same rocks [at depth] – are much lower at 1,000 feet. We propose this is because, at the base of the crust, the Colorado Plateau is significantly warmer [1,200 degrees Fahrenheit] than the Great Plains [930 degrees Fahrenheit].” When You’re Hot You’re High in North America Chapman says that in the study, he and Hasterok “removed the effects of composition of crustal rocks and the thickness of the crust to isolate how much a given area’s elevation is related to the temperature of the underlying rock.” First, they analyzed results of previous experiments in which scientists measured seismic waves moving through Earth’s crust due to intentional explosions. The waves travel faster through colder, denser rock, and slower through hotter, less dense rock. Then they used published data in which various kinds of rocks were measured in the laboratory to determine both their density and how fast seismic waves travel through them. The data allowed researchers to calculate how rock density varies with depth in the crust, and thus how much of any area’s elevation is due to the thickness and composition of its rock, and how much is due to heating and expansion of the rock. Seafloor crust has the same composition and thickness most places away from the tall mid-ocean ridges, so it is easy for scientists to observe how elevations vary with ocean crust temperature. But to determine the temperature effect on continents, “we wave this wand and create a transformed continental crust that is everywhere the same thickness [24 miles] and composition [2.85 times the density of water],” Chapman says. “Once we’ve done that, we can see the thermal effect.” That, in turn, made it possible to calculate how much heat flow contributes to elevation in each of 36 tectonic provinces – sort of “mini-plates” – of North America. For example, the New England (Central) Appalachians Province has an average elevation of 897 feet, but if its rocky crust were cooled to that of old, colder continental crust like the Canadian Shield, the province would sit 563 feet below sea level, a drop of 1,460 feet. New York City, within that province, has an elevation listed as 33 feet. Subtract 1,460 feet and the Big Apple gets dunked 1,427 feet below sea level. Lee Siegel | EurekAlert! 
Evaluating Claims of Oldest Fossil Microbes December 21, 2017 Are these fossils 3.45 billion years old? Are they even fossils? Serious questions need to be asked when the news gets excited about world records for oldest life on Earth. Planetary Youth Continues After Cassini December 19, 2017 The data gathering phase is over, but the data mining phase will continue for years. This entry also shares some news about other solar system objects showing youthfulness. It’s Over: Dark Energy Was Fake Science December 16, 2017 It's being called the Worst Theoretical Prediction in the History of Physics. Dark energy, and its cousin dark matter, are not showing up in any empirical tests. Unusual Fossils Twisted to Support Darwinism December 15, 2017 A fossil is a standalone reality. Darwinism is a story that force-fits these standalone realities into a predetermined narrative. Watch how it is done. Habitable Planets Just Got Much More Rare December 13, 2017 If this scientist’s theory about the origin of magnetic fields is correct, habitable planets will be few and far between. Earth has a magnetic field sufficient to support life. Venus does not. Why does “Earth’s twin” lack this protective shield? According to secular geophysicists, a magnetic field is generated by a dynamo in a planet’s […] Is This Duck a Dinosaur? December 12, 2017 Before swallowing the hype about the latest dino-bird fossil, ask some hard questions. Alien Intrusion: Half of Humans Believe in Space Aliens December 10, 2017 A powerful new documentary coming on January 11 investigates the rise in belief in space aliens, UFOs and abductions. Well-Adapted Underwater Animals that Defy Evolution December 8, 2017 A wide variety of organisms spend all or part of their time under the surface of water, having just the equipment they need to thrive. Amazing Fossils Found in Flood Deposits December 4, 2017 Flood geology explains these unique fossils like slow-and-gradual geology cannot. Lightning Can Produce Carbon-14 November 29, 2017 In a surprise announcement, Japanese researchers found that lightning bolts can be powerful enough to cause nuclear fission, leading to new isotopes— including carbon-14. Is Dark Matter a Myth? November 28, 2017 More precise tests continue to fail to find dark matter or dark energy. How long do scientists get to look for occult phenomena that may not be real? Kingdom of David and Solomon Supported by Growing Evidence November 25, 2017 The evidence is coming together to support the Biblical record of David and Solomon. An Israeli publication updates the latest finds. Fossil Forest Found in Antarctica November 20, 2017 Claimed to be 280 million years old, stumps of fossil trees retain original material in the world's coldest climate. New Guidebook for Earthlings Published November 19, 2017 A new book by two CEH authors can help readers understand their spacecraft and its mission. Dr Henry Richter, distinguished NASA scientist, former Caltech professor and inventor, the last surviving manager of Explorer 1 (January 31, 1958), has a new book worth sharing. Its title appeals to all readers on the planet: Spacecraft Earth: A […] Anomalies in Planetary Magnetic Fields November 17, 2017 Something is terribly wrong in the geodynamo theory for magnetic fields.
In this chapter we have collected some typical examples of practical applications of Boolean methods. These applications include a sequencing problem due to R. Fortet, a problem of timetable scheduling (written by I. Rosenberg and including results of G. Mihoc and E. Balas), a problem in coding theory, a problem of plant location, and various other applications.

Keywords: Plant Location, Parametric Solution, Elementary Test, Average Crop, Supplementary Work
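As a sketch of the kind of 0-1 ("Boolean") formulation such applications lead to, the simple plant location problem mentioned in the abstract can be stated as follows; the notation is mine, not the chapter's. With y_i = 1 if plant i is opened (fixed cost f_i) and x_ij = 1 if customer j is served from plant i (cost c_ij):

```latex
\min \; \sum_i f_i\, y_i \;+\; \sum_i \sum_j c_{ij}\, x_{ij}
\quad \text{subject to} \quad
\sum_i x_{ij} = 1 \;\; (\forall j), \qquad
x_{ij} \le y_i \;\; (\forall i, j), \qquad
x_{ij},\, y_i \in \{0, 1\}.
```

The first constraint assigns every customer to exactly one plant, and the second forbids serving customers from a plant that has not been opened.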
Use Codelite to Write the First Program: "Hello World"

This mini tutorial describes how to create the classic 'Hello World' console application!
- From the main menu bar, go to the 'Workspace' menu and select the entry 'Create New Project'.
- In the 'New Project' dialog that comes up, select the 'Console' category.
- Choose the console project that suits your compiler. For this demo, I selected 'Simple executable (g++)'.
- Enter the project name: "HelloWorld".
- Select the path for the project. Note: the path must exist!
- If you check the option 'Create the project under a separate directory', Codelite will create a new folder under the selected path with the given project name.
- Click OK.

You should now have a workspace + project named 'HelloWorld' with a single main.cpp file.
- Press F7 to build it.
- Once the build has completed successfully, use Ctrl-F5 to execute it.
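Codelite generates a stub main.cpp for the project; its exact contents vary by template version, but a minimal C++ "Hello World" for this project looks like the following:

```cpp
#include <iostream>

int main()
{
    // Print the classic greeting followed by a newline
    std::cout << "Hello World" << std::endl;
    return 0;  // signal success to the shell
}
```

When run with Ctrl-F5, the program prints Hello World in a console window.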
The size and mass of Venus are similar to those of the Earth; however, its atmospheric dynamics are considerably different, and they are poorly understood due to limited observations and computational difficulties. Here, we developed a data assimilation system based on the local ensemble transform Kalman filter (LETKF) for a Venusian Atmospheric GCM for the Earth Simulator (VAFES), to make full use of the observational data. To examine the validity of the system, two datasets were assimilated separately into the VAFES forecasts forced with solar heating that excludes the diurnal component Qz; one was created from a VAFES run forced with solar heating that includes the diurnal component Qt, whereas the other was based on observations made by the Venus Monitoring Camera (VMC) onboard the Venus Express. The VAFES-LETKF system rapidly reduced the errors between the analysis and forecasts. In addition, the VAFES-LETKF system successfully reproduced the thermal tide excited by the diurnal component of solar heating, even though the second dataset only included horizontal winds at a single altitude on the dayside with a long interval of approximately one Earth day. This advanced system could be useful in the analysis of future datasets from the Venus Climate Orbiter 'Akatsuki'.

During the past two decades, data assimilation has become an effective tool in planetary atmospheric science. Observation data are sampled irregularly in space and time, even in studies of the Earth's atmosphere. Therefore, the global and continuous datasets produced by general circulation models (GCMs) that use data assimilation are considerably useful because they are dynamically and thermodynamically consistent. Amongst several data assimilation schemes, the local ensemble transform Kalman filter (LETKF)1 is one of the most powerful and efficient schemes. Hence, it has been successfully applied to the terrestrial2, 3 and Martian4, 5 atmospheres. Data assimilation has not yet been attempted for the Venusian atmosphere. This may be attributed not only to the limited amount of meteorological observations available but also to the computational difficulties arising from employing theoretical models and GCMs. Hence, a realistic structure of the Venusian atmosphere has not been produced using GCMs. Recently, we developed a Venusian atmospheric GCM, named AFES-Venus (referred to as VAFES in this paper), based on the Atmospheric GCM for the Earth Simulator (AFES)6. Because AFES is highly optimized for the Earth Simulator, one of the world's largest vector supercomputers, provided by the Japan Agency for Marine-Earth Science and Technology (JAMSTEC), one advantage of the VAFES is that we can perform high-resolution simulations to reproduce realistic structures of the Venusian atmosphere. Using the VAFES, we successfully investigated barotropic/baroclinic instability waves7, 8 and elucidated a puzzling temperature structure, called the 'cold collar', at high latitudes9. Though the VAFES is a simple dynamical model (the distribution of solar heating is prescribed, the infrared radiative transfer is simplified to a Newtonian cooling scheme, and cloud physics is not included), the atmospheric structures reproduced by the model were in good agreement with the observations. Therefore, it is expected that the VAFES can be used for data assimilation.
The AFES-LETKF data assimilation system has already been developed2 for the terrestrial atmosphere, and the experimental ensemble reanalysis10,11,12,13 has been successfully performed. The VMC14 provided Venusian cloud images at the cloud-top level (approximately 70 km) for over eight years since April 2006. From these images, we derived daily horizontal winds at approximately 70 km using cloud tracking for a period from May 2006 to January 201015. We used these wind data for data assimilation in a test case, although the data were spatiotemporally sparse as compared to the terrestrial observational data. In this study, we developed the VAFES-LETKF system wherein the LETKF was applied to the VAFES, and it was tested with two observational datasets. This was the first data assimilation experiment for the Venusian atmosphere. The two datasets included horizontal winds at only a single altitude above the cloud top. One dataset was synthesized observational data produced by a VAFES simulation run forced by Qt (solar heating that includes the diurnal component), whereas the other dataset was observational data based on VMC15. The VAFES forecasts, which were assimilated with the observational data, were produced from a VAFES simulation run forced by Qz (solar heating that excludes the diurnal component). This indicated that the thermal tide, which is strongly excited in the cloud layer by the diurnal component of solar heating16,17,18 and significantly affects the superrotation19, 20, was not included in the basic run. Therefore, if the VAFES-LETKF system functions appropriately, it is anticipated that the thermal tide would be reproduced in the runs using both observational datasets. Our primary goal was to demonstrate that the VAFES-LETKF data assimilation system functions appropriately, which can be useful for future observations. Effect of data assimilation In a data assimilation scheme, an improved estimate (referred to as an analysis) is derived by combining observations and short-time forecasts. The LETKF is a type of the ensemble square root Kalman filter that seeks the analysis solution with minimum error variance. Using a 31-member ensemble of VAFES simulation runs, the uncertainty of the model forecast was characterized in the current system. The minimal interval for the data assimilation cycle was 6 h. The four-dimensional LETKF uses 7-h time slots to produce each analysis. Therefore, observations can be assimilated every hour if they are available1, 21. Tables 1 and 2 provide a summary of our experiments. We prepared several idealised observations of 70-km horizontal winds for the cloud-top level at intervals of 1, 6 and 24 h (Cases H1, H6 and H24, respectively). These idealised observations were produced by the VAFES simulation run forced by Qt (Case Qt wherein solar heating excites the thermal tide). In accordance with real satellite observations, we used single-altitude horizontal winds. The other observational data were based on cloud tracking of the ultraviolet images captured by the VMC15 (Case Vmc), which included approximately 70-km horizontal winds located in a narrow dayside region from approximately 60°S to 30°N between approximately 07:00 local time (LT) and 17:00 LT, which correspond to 80°W and 80°E longitudes, provided the sub-solar point (12:00 LT) was positioned at 0°E longitude. Note that the sub-solar point was set to move westwards, which was consistent with the direction of the planet rotation assumed in the VAFES. 
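To make the ensemble square root filter idea concrete, the analysis step of a (global, unlocalised) ensemble transform Kalman filter can be sketched in a few lines of NumPy. This is an illustration of the generic algorithm with variable names of my choosing, not the authors' code; the actual VAFES-LETKF adds the localisation, covariance inflation and 4-D time slots described in the Methods.

```python
import numpy as np

def etkf_analysis(Xf, yo, H, R):
    """One (unlocalised) ETKF analysis step.

    Xf -- forecast ensemble, shape (n_state, n_ens)
    yo -- observation vector, shape (n_obs,)
    H  -- linear observation operator, shape (n_obs, n_state)
    R  -- observation-error covariance, shape (n_obs, n_obs)
    Returns the analysis ensemble, shape (n_state, n_ens).
    """
    n_ens = Xf.shape[1]
    xf_mean = Xf.mean(axis=1, keepdims=True)
    Xp = Xf - xf_mean                       # forecast perturbations
    Yp = H @ Xp                             # perturbations in observation space
    d = yo - (H @ xf_mean).ravel()          # innovation (obs minus forecast mean)

    Rinv = np.linalg.inv(R)
    # Analysis covariance in the subspace spanned by the ensemble
    Pa = np.linalg.inv((n_ens - 1) * np.eye(n_ens) + Yp.T @ Rinv @ Yp)
    w_mean = Pa @ Yp.T @ Rinv @ d           # weights for the mean update
    # Symmetric square root transform for the perturbation update
    evals, evecs = np.linalg.eigh((n_ens - 1) * Pa)
    W = evecs @ np.diag(np.sqrt(evals)) @ evecs.T

    xa_mean = xf_mean + Xp @ w_mean[:, None]
    return xa_mean + Xp @ W                 # analysis mean plus new perturbations
```

A localised scheme applies an update of this form independently around each grid point, using only the observations within the prescribed localisation distances, which makes the method naturally parallel.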
The time intervals of the VMC horizontal wind data were approximately one Earth day. In this study, we used 73 observations of horizontal winds during a period from 28 January 2008 to 26 April 2008. All observations captured the thermal tide component; however, the VAFES forecasts to be assimilated did not capture this component because the diurnal component of solar heating was excluded (Case Qz, wherein solar heating only included the zonal mean component). This indicates that atmospheric motions in all test cases would 'relax' to those in Case Qz when there was no observation. In addition, a free-run forecast (Case Frf) was performed to produce a background; it employed a 31-member ensemble of Case Qz runs without observations, i.e. without data assimilation. In all runs, the resolution was fixed to T42L60, with 128 × 64 grid points horizontally and 60 layers extending vertically from the flat ground to 120 km.

To set up the experiments, we performed numerical integrations from an idealised superrotating state for Cases Qz and Qt for four Earth years. The modelled atmospheres reached quasi-steady states within approximately one Earth year and were maintained for more than 10 Earth years8. The results shown for Cases H1, H6, H24, Vmc and Frf are the 31-member ensemble means.

Figure 1 shows that the VAFES-LETKF data assimilation system rapidly reduced the root-mean-square (RMS) error between the analysis and the subsequent forecast at each grid point at 70 km for both the zonal and meridional winds, except for Case Frf (background). For Case H24, the cycle of data assimilation (i.e., executed once daily) was clearly apparent; however, the RMS error was smaller than that for Case Frf, which did not include observations. While the mean meridional winds are an order of magnitude smaller than the mean zonal winds, the meridional winds associated with disturbances are of the same order as the zonal winds associated with disturbances. This is why the RMS errors for the zonal and meridional winds during assimilation were almost of the same order. Note that the RMS error in the temperature field did not converge, even for Case H1 (not shown), although the temperature was also modified to be in balance with the horizontal winds (Figs 2 and 3). Because a temperature field balanced with the horizontal winds can be produced such that its horizontal average remains unchanged, the difference in the reference temperatures of Cases Qz and Qt (approximately a few degrees K) did not converge with data assimilation conducted using only horizontal winds. It was confirmed that the RMS error of the temperature field converged when the temperature was included in the observational data (not shown).

Reproducibility of the thermal tide

The thermal tide is a global-scale atmospheric wave excited by solar heating, which moves along with the Sun. Because approximately 60% of the solar flux is absorbed at the cloud levels of 45–70 km on Venus, the thermal tide is strongly excited in this location, and it propagates vertically. It has been expected from theoretical research that components with the zonal wave numbers of 1 and 2, referred to as the diurnal and semidiurnal tides, respectively, would be predominant at the cloud levels16, 17. Below (above) the cloud-top level, the amplitude of the diurnal (semidiurnal) tide was larger than that of the semidiurnal (diurnal) tide. These structures of the thermal tide can be observed in Case Qt (Fig. 2c and d), wherein the thermal tide was directly excited by solar heating.
The westward phase tilt with height indicates the upward propagation of the thermal tide. Even though the observational data were corrected only for the 70-km winds, Fig. 2e and f show that the three-dimensional structure associated with the thermal tide appears clearly, even in Case H24, and it propagates upwards above 70 km. Note that the RMS errors for the zonal and meridional winds in Case H24 do not converge. These errors were considerably reduced only when the observations were conducted. Nevertheless, the thermal tide structure with a zonal wave number of 1, which is similar to that obtained for Case Qt (Fig. 2c and d), was found in the temperature field, even though the temperature was not included in the observational data. Compared with Case Qt, the amplitude of the thermal tide found in Case H24 was approximately half of that found in Case Qt. It is worth noting that the VAFES-LETKF data assimilation system successfully reproduced the thermal tide not only in the horizontal winds but also in the temperature field. This was done by assimilating the temporally sparse observational data that did not include the temperature (once a day for Case H24). The thermal tide and its vertical propagation were not present for Case Qz (Fig. 2a and b) wherein the thermal tide was eliminated by excluding the diurnal component from solar heating. These results clearly showed that the data assimilation with the inclusion of the thermal tide component in the horizontal winds produced temperature deviations associated with the thermal tide as a dynamically balanced state. Since the vertical shear of the zonal mean zonal wind was different between Cases H24 and Qt, the inclinations of the phase of the thermal tide differed from each other. Figure 3 shows the results for Case Vmc. The thermal tide was successfully reproduced, even though only 70-km-altitude horizontal winds were used. These winds were located on the dayside of the southern hemisphere with a time interval of approximately 24 h. The wind and temperature components antisymmetric about the equator were also induced by the meridional winds. These winds move across the equator and were obtained from the VMC data15. Impact of data assimilation on general circulation Since the thermal tide propagates vertically, as shown in Figs 2 and 3, it is expected to transport zonal momentum upwards. Therefore, the general circulation may be substantially influenced in the upper layer of 70 km by data assimilation. Figure 4 shows latitude–height cross sections of the zonal mean zonal wind obtained in the quasi-equilibrium states for Cases Qz, Qt, H24 and Vmc. In Case Qz, without the thermal tide (Fig. 4a), strong mid-latitude jets caused by the enhanced mean meridional circulation (not shown) were found to emerge. In contrast, in Case Qt (Fig. 4b), the faster zonal wind at the equatorial region with mid-latitude jets shifted to the lower latitudes of 30°–45° appearing at the cloud-top level. In addition, in Case H24 (Fig. 4c), the faster zonal wind appeared at 60–90-km levels in low latitudes compared to that found in Case Qz. The meridional distribution of the zonal wind at the cloud level was intermediate as compared to those observed in Cases Qz and Qt. Furthermore, the fast zonal wind in low latitudes and the remarkable mid-latitude jets were similar to those found in Cases of Qt and Qz, respectively. In Case Vmc (Fig. 
4d), while it seems that the zonal wind was somewhat accelerated at the cloud level, it was accompanied by remarkable mid-latitude jets, as found in Case Qz. It is worth noting that unlike Case Qt, the zonal wind in Cases H24 and Vmc were minimally decelerated above 75 km. For a comprehensive observation, contours of the latitude–height cross sections of the zonal mean zonal wind obtained for Cases Frf, H1 and Vmc are shown in Fig. 5a,c and e, respectively. For Case Frf, without the thermal tide, strong mid-latitude jets were observed (Fig. 5a), which is similar to that in Case Qz (Fig. 4a). These were found to be common in previous GCM studies22, 23 that were conducted by excluding the thermal tide. In contrast, in Case H1, the faster zonal wind located in the equatorial region with mid-latitude jets shifted to the lower latitudes of 30°–45° and appeared at the cloud-top level (Fig. 5c), which is similar to that in Case Qt (Fig. 4b), as observed in other GCM studies7, 8, 24, 25 that were conducted considering the thermal tide. This meridional distribution of the superrotation also agrees well with the observations26,27,28. Furthermore, a remarkable deceleration of the zonal wind above 75 km was caused by the thermal tide20, which was also in good agreement with the zonal wind estimated from the observed temperature29. In Case Vmc (Fig. 5e), while the zonal wind was somewhat accelerated at the cloud level, it was accompanied by remarkable mid-latitude jets, as in Case Frf. Unlike Case H1, although the thermal tide was excited by data assimilation in Cases H24 and Vmc, the zonal wind was minimally decelerated above 75 km. For Case Vmc, approximately three-fourth of the horizontal area at 70 km did not have observational data, while all of the area investigated in Case H24 had observational data. Since we forced the VAFES run by Qz that includes only the zonal mean component, the atmospheric motions in Cases Vmc and H24 will ‘relax’ to those in Case Qz. This is largely due to the relatively sparse observations that were available within the approximate 24-h time intervals. Because a strong latitudinal temperature gradient exists in the layer located at the 45–75-km level wherein the temperature difference between the equator and the pole is more than 25 K (Fig. 5b,d and f), baroclinic instability waves and Rossby-type waves8, 25 appeared in a weakly stratified layer located at 50–60 km. The ensemble spreads of the meridional wind and temperature for Cases Frf, H1 and Vmc are shown in Fig. 5 (colour). They were normalised by the horizontal average at each level (indicated by line plots in right small panels) in order to observe the latitudinal distributions. The spread indicates the extent to which the analysis can be trusted and the locations where disturbances actively appear. The horizontally averaged spreads increased with altitude due to the lower atmospheric density in the upper layer, except in Cases H1 and Vmc. In Cases H1 and Vmc, the spreads were significantly and slightly reduced, respectively, at approximately 70 km because the observations were limited to 70 km. Hence, this result suggests that the impacts of data assimilation extend over approximately 10 km in the vertical direction. In Cases Frf and Vmc, the meridional distributions of the averaged spread showed that active disturbances appeared in two regions. One was located at the mid-latitudes of approximately 60°N (60°S) and extended from 60 to 80 km. 
In this area, the vertical shear and the latitudinal temperature gradient were significant. It has been inferred from previous research8, 25 that the large spreads could be caused by baroclinic instability waves. The other was located at high latitudes near the poles from 40 to 70 km (Fig. 5a and e). Since the vertical shear is small and the meridional gradient of the absolute vorticity changes its sign in this region (not shown), the large spreads could be caused by barotropic instability waves. In Case H1, similar structures can be observed in the spreads of meridional wind and temperature. The large spreads in the meridional wind also appeared at low latitudes from 60 to 70 km, and significant spreads in the temperature extended from 70 km down to approximately 30 km. Since the thermal tide excited at 70 km propagates downwards as well as upwards, these differences amongst the cases could be attributed to the structure of the zonal mean zonal wind affected by the thermal tide.

Summary and Discussion

In this study, we developed a data assimilation system comprising the VAFES and the LETKF and applied it to the Venusian atmosphere for the first time. Since Venus is far from Earth and observational methods are quite restricted, detailed observational data, such as frequent multilevel winds and temperatures, cannot be obtained as easily as they can be for the terrestrial atmosphere. However, the results of this study confirmed that even the limited data acquired from satellite observations can significantly improve Venus GCM forecasts. Data assimilation using horizontal winds at a single altitude on the dayside corrected long-period disturbances, such as those caused by the thermal tide. It is strongly expected that the VAFES-LETKF analysis data produced from past and/or future observations will enable us to investigate and reconsider many important atmospheric features, such as the superrotation, the cold collar and the polar vortex. In addition, it was noted that the LETKF can be easily applied to any type of GCM. For example, the Laboratoire de Météorologie Dynamique (LMD) GCM24, 25 is one of the most advanced Venus GCMs and includes detailed physical processes. The LETKF could help improve the physical parameters used in such GCMs. The Venus Climate Orbiter 'Akatsuki' began observing the Venusian atmosphere on 9 December 201530, 31. This orbiter provides frequent data approximately every 2 h, comprising cloud distributions, horizontal winds derived at multiple altitudes and temperature distributions at the cloud top32. The actual dynamics of atmospheric circulation on Venus remain unclear. Currently, there are many uncertainties, including baroclinic and/or barotropic instability waves, planetary waves, gravity waves and turbulence. We do not know their level of importance in Venusian atmospheric dynamics, nor their importance with regard to GCMs. It is strongly expected that the Akatsuki data combined with the VAFES-LETKF data assimilation system will enable us to produce more reliable models of the Venus atmosphere. Such reanalysis data will greatly help us to elucidate the actual atmospheric circulation and understand the dynamics of the Venusian atmosphere.

Methods

We used a full nonlinear Venus GCM, named AFES-Venus (hereafter, VAFES)7, with simplified physical processes. This system is based on the AFES6. The resolution was set to T42L60, where T and L denote the triangular truncation number for spherical harmonics and the number of vertical levels, respectively.
Thus, there are 128 × 64 horizontal grid points with 60 vertical levels. The vertical domain extended from the flat ground to approximately 120 km, with an almost constant altitude grid spacing of 2 km. The model included vertical and horizontal eddy diffusion. The vertical eddy diffusion coefficient was 0.15 m2 s−1. The horizontal eddy viscosity was represented by a second-order hyperviscosity. The damping time for the maximum wave number component was set at approximately 0.1 Earth days. Rayleigh friction with a damping time of 0.5 days was employed only at the lowest level to mimic the surface friction. In the upper atmosphere above 80 km, a sponge layer was assumed only for the eddy components, with the damping becoming gradually stronger with height; typical damping times were 2500, 0.1 and 0.05 days at 90, 100 and 110 km, respectively. Convective adjustment was also applied to eliminate static instability. The vertical and horizontal distributions of solar heating were based on the research of Tomasko et al.33 Solar heating was decomposed into a zonal mean component and a deviation from the zonal mean (diurnal component), which excite the mean meridional (Hadley) circulation and the thermal tide, respectively. Two cases were simulated: Case Qt included both components, whereas Case Qz included only the zonal mean component. The infrared radiative process was simplified by a Newtonian cooling approximation wherein the coefficients of cooling were based on Crisp34. The relaxation time decreased almost exponentially from the surface to 120 km, from approximately 25000 days to 0.1 days (refer to Fig. 1a in Sugimoto et al.7). The temperature was relaxed to a prescribed horizontally uniform temperature distribution based on the Venus International Reference Atmosphere35. While the temperature was relaxed to the horizontally uniform field, the latitudinal gradient of the temperature was maintained by solar heating in this model. Further, the atmospheric motions, such as the Hadley cell and baroclinic instability waves, were driven by solar heating. Other details of the model settings are described in our previous research7,8,9. The initial state of the velocity field was assumed to be an idealised superrotating flow in solid-body rotation. The zonal wind increased linearly with height from the ground to 70 km. The velocity at the equator was set at 100 m s−1 at 70 km and was maintained constant above 70 km. Thus, the latitudinal profile of the initial zonal velocity above 70 km is given by 100 × cos θ m s−1, where θ represents the latitude. The temperature distribution was set to be in gradient wind balance with the zonal wind to suppress the initial instability. It was assumed that the direction of the planetary rotation and the basic zonal wind was eastwards (positive). Using this initial state, we performed nonlinear numerical simulations for more than four Earth years in Cases Qt and Qz. The leapfrog method was employed for time integrations with increments of 600 s. The quasi-equilibrium datasets sampled at 1-h intervals in Case Qt were used as idealised observations, whereas those sampled at 8-h intervals in Case Qz were used as the initial conditions for the 31-member ensemble employed in the data assimilation. The local ensemble transform Kalman filter (LETKF) was based on previous research2, 10,11,12,13.
It is an approximation of the Kalman filter that finds the best estimate (the analysis), with minimum error variance, from the model estimates and the observations. Moreover, the LETKF is a square root filter method36 of the ensemble Kalman filter37 and a deterministic filter in which no randomly perturbed observations are used. It is localised by considering only the observations within a prescribed horizontal and vertical distance38, 39. The ensemble transform Kalman filter40 approach is also used for acceleration. These techniques contribute to computational efficiency, and calculations are performed on massively parallel computers to handle a realistic high-dimensional atmospheric forecast model41. In the current VAFES-LETKF data assimilation system, a 31-member ensemble and 10% multiplicative spread inflation were employed. The localisation scales were chosen to be 400 km in the horizontal and 0.4 in log P (where P is pressure) in the vertical. We set the observation error for the horizontal wind field to a standard deviation of 4.0 m s−1, which is below the upper limit of 7.0 m s−1 suggested by Kouyama et al.15. We checked the dependency of the results on these localisation parameters and observation errors and found that when these parameters or errors were set at double or half their values, no significant changes were observed. Furthermore, a test case with a 63-member ensemble was also considered to check the saturation of the ensemble. The results indicated that the uncertainty of the model forecast was sufficiently characterized by the 31-member ensemble of the VAFES run. The time interval of the data assimilation cycle was set to 6 h. The four-dimensional LETKF comprised 7-h time slots at each analysis, and the observations were assimilated every hour depending on their availability1, 21.

Publisher's note: Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

This study was conducted under the joint research project of the Earth Simulator Center entitled 'Simulations of Atmospheric General Circulations of Earth-like Planets by AFES'. VAFES-LETKF data assimilation system integrations were performed on the Earth Simulator with the support of JAMSTEC. In addition, this study is supported by the following grants: JSPS KAKENHI 25800265, 25247075, 15K17767, 16H02225 and 16H02231. N.S. would like to thank T. Navarro, T. Horinouchi, Y. Matsuda and Y.-Y. Hayashi. The data from the simulations are available upon request from the corresponding author. The LETKF code developed in this study is based on the code publicly available at https://github.com/takemasa-miyoshi/letkf. The GFD-DENNOU Library was used for creating figures.
Edited By: Alan Cooper and James Power
343 pages, Figs, tabs, maps
Examines the dispersal of organisms within and between habitat patches, and how land uses interact with dispersal processes to determine species distributions. Mainly concerned with understanding the effects and consequences of habitat fragmentation on plant and animal species populations, the spread of pests in agricultural landscapes, and the ecology of species of conservation value.
The rate of an enzymatic reaction depends upon the temperature, pH, substrate concentration, and the presence of activators, co-enzymes, and enzyme inhibitors. The reactions of enzymes usually accelerate with an increase of temperature; however, since enzymes are proteins and are denatured at elevated temperatures, reaction rates increase only to the point where denaturation overcomes the accelerating effect of increasing temperature. Enzymes exert their influence by combining with the substrate to form an enzyme-substrate complex, which then decomposes to give the products and release the enzyme for further action. Because of this, the rate at which enzyme products are formed depends both on the concentration of the enzyme-substrate complex and the rate of its decomposition. The formation of the complex depends on mass action between the enzyme and the substrate; therefore, for a given quantity of enzyme, the concentration of the complex increases with substrate concentration until the substrate is sufficient to convert nearly all the enzyme into the complex. (72, 195, 306, 363)
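The saturation behaviour described here is what the standard Michaelis-Menten rate law formalizes. The passage does not name that law, so the sketch below is an illustration rather than a restatement of the source, and the parameter values are arbitrary.

```python
def michaelis_menten(s, vmax, km):
    """Initial reaction rate as a function of substrate concentration s.
    At low s the rate rises nearly in proportion to s (mass action);
    at high s nearly all enzyme is tied up in the enzyme-substrate
    complex, so the rate saturates at vmax."""
    return vmax * s / (km + s)

# The rate approaches vmax as substrate saturates a fixed quantity of enzyme.
for s in (0.1, 1.0, 10.0, 100.0):   # substrate concentration, arbitrary units
    print(f"[S] = {s:6.1f}  ->  v = {michaelis_menten(s, vmax=1.0, km=1.0):.3f}")
```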
<urn:uuid:25466840-1d04-4e00-9f6c-3651e58f2c4c>
3.375
235
Knowledge Article
Science & Tech.
12.432461
95,482,073
On Friday, January 22, as the first snowflakes of a historic blizzard began piling atop America’s east coast, a team of more than twenty engineers and scientists hauled food, clothes, cots, and mattresses into a building at NASA’s Goddard Spaceflight Center in Greenbelt, Maryland. The team would remain at the facility until well after the snow let up Sunday morning, keeping watch over a giant, humming “Space Environment Simulator.” The simulator is a cylindrical chamber and, as you might expect from its name, it simulates the conditions of outer space. Its metallic exterior is covered in networks of pipes. Its interior might be the coldest place on Earth. At -233 degrees Celsius (nearly 400 degrees below zero Fahrenheit) it is certainly colder than any of the planet’s natural environments, including the poles. Sitting inside the Space Environment Simulator are the guts of the most far-seeing camera ever built by humans. This camera will soon be launched into deep space, to image the first stars to flare into being after the Big Bang, and maybe, if we are very lucky, the exhaled gases of life forms that live in the atmospheres of distant planets. It has been more than 20 years since the first plans were drawn up for what is now called the James Webb Space Telescope. The Webb is the successor to history’s most productive scientific instrument, the Hubble Space Telescope, and it will pack more than 100 times that telescope’s seeing power. But its long path from back-of-the-napkin idea to tangible, spaceflight-ready hardware has been rocky. The telescope’s design and development phases were plagued by delays, and cost overruns have boosted its price tag to a cool $9 billion. In 2011, several members of the House committee that funds the Webb alleged gross mismanagement, and called for its termination. NASA was forced to submit a new budget and a new development plan for the Webb, one that culminates in a fast-approaching launch date in 2018. “It is very important for us to stay on schedule,” NASA engineer Begoña Vila recently told me. “We need to finish on time.” Even if it means working through a blizzard. NASA has already put the telescope’s sensors through acoustic testing, to make sure they can withstand the deep, bone-jarring roar of lift-off. The fully-assembled Webb will be 75 feet wide, but when the telescope leaves Earth it will be bundled up and stuffed into the tip of a rocket that will deposit it nearly one million miles away. Once it reaches its new home, the Webb will attempt an unprecedented feat of reverse origami. Over a period of months, it will unfurl its giant sunshade, its instruments, and its 18 gold-coated mirrors, becoming, in the process, the largest space observatory in the known universe. The stakes for this metamorphosis are high. Astronauts have flown out to service the Hubble Space Telescope several times since it reached orbit in 1990. But the Webb is being sent to deep space, some three times the distance of the Moon. Even with tomorrow's technology, repair at such a remove from Earth will be extremely difficult. If for any reason the Webb should fail after launch, it will likely be left to idle out of reach, a stillborn in the void. That’s why testing is so important. Engineers at NASA Goddard have been working around the clock to make sure that the Webb will function in the frigid, airless environment it will soon call home. Back in October, its instruments were lowered into the center’s Space Environment Simulator. 
At more than 40 feet tall, the cylindrical simulator is itself an impressive specimen of the technological sublime. After the instruments were sealed in, the simulator's interior was transformed into an artificial abyss. Vacuum pumps sucked out its air until the interior pressure was only a billionth as strong as Earth's atmosphere. When I visited NASA Goddard in early January, I could see ice forming on the pipes that feed into it. Liquid nitrogen and helium had been released into the chamber, to cool it to the -387 degrees Fahrenheit temperature the Webb will experience in deep space. The most powerful of the telescope's four instruments will need to be cooled even further, to just north of absolute zero, the point where all motion ceases. Only in the grips of that deep chill will it be able to detect faint, long-journeying starlight from galaxies more distant and ancient than any that have ever been glimpsed. The Space Environment Simulator requires constant monitoring—even through the weekend—to ensure the stability of the chamber, and the priceless instruments it holds. The team typically divides each Saturday and Sunday into three eight-hour shifts. But with a blizzard looming, they worried that the region's impassable roads would make shift changes impossible. The team decided to bunk down at Goddard. In the run-up to the weekend, they tested the facility's back-up generators, in case the storm's high winds knocked out power to the simulator, causing its tiny region of manufactured void to fill up with warm air, exposing the instruments to a violent and potentially ruinous temperature shift. They did their best to make themselves comfortable. "We thought we had big offices," Begoña Vila told me. "But not for full-sized mattresses." Signs were posted where team members were sleeping, so people would know to be quiet. A small gate was stretched across the building's single available shower. "We had a sign so people would know whether someone was inside," Vila said. I asked Vila about the food situation. "Some people brought chili beans," she said. "Not everyone thought that was the best choice for the weekend." Whatever sacrifices that particular dining choice may have entailed, they appear to have been worth it. The testing continued without a hitch, and after months of frigid temperatures, the simulator is now beginning its weeks-long warm-up period. Once the guts of the camera are lifted out with a custom crane, and joined to the golden mirrors, the telescope will be shipped to Los Angeles, where its tennis-court-sized sunshade is being built by Northrop Grumman. From there, the whole thing will be assembled and caravanned to NASA's Johnson Space Center in Houston, which is home to the world's largest Space Environment Simulator. Houston's simulator has a mythic past. Its interior was once graced by the spacecraft that carried human beings to the Moon. Next year, the entire James Webb Space Telescope will be lowered into it, for a final major test. Assuming it passes, the countdown to launch will begin. One can only imagine what NASA's engineers will be feeling when lift-off is imminent. For many, the Webb is their life's great work. Never again will they labor so long on a project, especially one that is so consequential. The Webb will let humans see clear back to the beginning of time, and for an encore it may give us reason to believe that we are not alone in this universe. Already, you can feel a buzz building at NASA. And you can feel something else, too.
In early January, I stood on a catwalk with one of the Webb's lead engineers, overlooking a hangar-sized testing room. On his face he wore a distinctive expression, equal parts fear and exhilaration. At one point he turned to me and made himself heard over the steady droning of the simulator. "We're all scared shitless," he said.
<urn:uuid:4afccbd8-d29c-4017-b313-8c20e04f81a3>
3.453125
1,655
Truncated
Science & Tech.
51.677911
95,482,097
At the edge of our atmosphere, peak energy for the solar spectrum occurs at the blue edge of the visible range, at approximately 380 nm. At the surface of the earth, because of the intervening atmosphere, the peak energy occurs between 500 and 600 nm, approximately centered in the visible range (Goldman and Horne 1983).
Physical Properties of Water
Water is one of the most important substances for life. Most of the mass of organisms is water, and water is necessary for many physiological and biochemical processes. Water is the most abundant liquid on earth, and it also simultaneously occurs on earth in solid and gaseous form. It is also almost the only common inorganic liquid; others, such as elemental mercury, are rare. The physical and chemical properties of water are responsible for the diverse forms and interactions that it is capable of assuming. Water is a simple molecule composed of one oxygen atom combined with two hydrogen atoms. In a sense it is oxidized hydrogen, the product of combustion of organic and other materials. (Water is a product of cellular respiration, for example.) This simple molecule normally would have characteristics similar to those of ammonia or hydrogen sulfide. That is, it would almost exclusively occur in gaseous form at normal temperatures of the earth's surface. However, its remarkable tendency to occur in liquid state is derived from the structure of the water molecule. Oxygen has strong electronegative properties. Its combination with hydrogen results from sharing electrons with the two hydrogen atoms. This is an example of covalent bonding with an inherent asymmetry to the bonds. The angle formed between the bonded hydrogen atoms is approximately 105 degrees. This angle is greater than theoretically predicted (90 degrees) because of the repulsive force between the two similarly charged hydrogen atoms. The result is a polar molecule with the negative charge on the oxygen end and the positive charge associated with the hydrogen. Water, then, is a polar solvent. Furthermore, the electronegative pole associated with the oxygen of one molecule may form a weak but significant bond with the electropositive or hydrogen portion of another water molecule, a hydrogen bond.
<urn:uuid:fa5040c0-2c38-4573-8408-c7cb6c8b498d>
3.765625
466
Knowledge Article
Science & Tech.
30.347754
95,482,099
The plasma ring was thought to be caused by the triboelectric effect, which causes a discharge of free electrons. A static field also formed around the jet. The plasma ring also emitted radio frequency (RF) radiation together with ion-acoustic waves. Read the full article and the research paper at the links below:
- Engineers Create Stable Plasma Ring in Open Air
- Toroidal plasmoid generation via extreme hydrodynamic shear
- Toroidal plasmoid generation via extreme hydrodynamic shear (PDF)
- About Static
- Plasma oscillation
- Ion-acoustic waves and Langmuir waves (PDF)
- Ion acoustic wave generation by a standing electromagnetic field (PDF)
- Alfvén wave
Also related:
- On the Cascade Spectrum of Langmuir Waves in HAARP Heating Experiments
- Emission of RF radiation from laser-produced plasmas
- Femtosecond Lasers Create 3-D Midair Plasma Displays You Can Touch
- Sticky tape surprise
<urn:uuid:7a756cb8-b234-48f0-baf9-29e0d489dcd3>
3.21875
216
Truncated
Science & Tech.
4.722381
95,482,107
The end of summer is near and the school year will soon start again. What better way to kick off the school year than with an interesting conservation lesson? Bring the life cycle of a Monarch right into your classroom! Raising a monarch in your classroom is a simple yet rewarding experience. If you haven't raised monarchs before, don't worry! A few tips and you'll be a pro! When starting, you have the option to order monarch eggs or find them yourself. It is actually suggested that you find your own eggs, because monarchs that are commercially raised do not always find their way to Mexico. Not sure how to find your own monarch eggs and/or caterpillars? Read below!
Step One: Finding the Eggs & Caterpillars
Monarch butterfly eggs can be tricky to find if you don't know exactly what you are looking for. However, once you do, it's a piece of cake! The best time to start looking for monarchs is after the Fourth of July. The monarchs will begin to arrive in June and will stick around until late August. Generally, monarchs lay their eggs on the underside of the milkweed leaf. Grab the tip of the leaf and gently pull the leaf back so you can see the underside. Be careful not to grab too much of the leaf, as you could grab an egg or caterpillar! The egg of a monarch is milky white and oval-shaped, with faint vertical white lines. If you do not see the lines, do not be discouraged. The lines may not be visible in a dim light. It is very common to see eggs of other insects on the plant, so look very closely to make sure you have the right egg! Helpful tip: Monarchs like to lay their eggs on young, fresh milkweed. While they will lay eggs on taller, older milkweed, the younger milkweed is more likely to host monarch eggs. As the end of summer approaches, it is more likely that you will find already hatched caterpillars rather than their eggs. A young caterpillar is very small, so be on the lookout for even the smallest of caterpillars! A great way to track down a caterpillar is to look for milkweed that has been eaten away. If you can see that some of the leaves have been eaten, chances are it was a caterpillar! Once you locate an egg or caterpillar, cut the milkweed plant stem with scissors and place the plant in water. A butter, whipped cream, or any other type of disposable plastic container with a lid works great for this. With a knife, make two small incisions that form an "X" in the lid. Fill the container to the brim with water and put the lid on. The stem of the milkweed plant should then fit snugly in the X-shaped incision. A plastic fast food cup and lid also work well. It is very important that whatever container you use has a lid. If it doesn't, the caterpillar could fall off the plant and drown in the water.
Step Two: Raising the Caterpillars
Once the caterpillars hatch and begin to grow, they will eat a lot of milkweed. It is very important that you keep up with them as they begin to eat away the plant. Once the caterpillars are close to finishing one plant, cut a new plant and place it with the old plants. There is no need to move the caterpillar from one plant to another, as the caterpillars will make their way to the new plant on their own. Before throwing out the old plants, make sure you examine them for any caterpillars! If you have ever raised caterpillars before, you know that they poop… a lot. If you don't keep up with them, you could end up with quite a mess! A great way to keep your monarch area clean is to lay layers of paper towels down.
That way, as the caterpillars do their business, you can simply roll up a layer of paper towels and throw it away.
Step Three: Waiting for the Chrysalises to Form
In order to see the full metamorphosis of the caterpillar into the butterfly, it is very important that you have your caterpillars in an enclosed container. If you don't, the caterpillars could make their chrysalis somewhere where you will never see it! Sometimes the caterpillars will form their chrysalis on the milkweed, but generally they like to find something more stable. A mesh lid works very well, but as long as you have a lid, the caterpillars will more than likely make their chrysalis there.
Step Four: Hatching the Butterflies
This is the most exciting part! The metamorphosis lasts for about 10-14 days. As the chrysalis gets closer to hatching, it will turn dark. Eventually you will be able to see a scrunched-up version of a monarch inside the chrysalis. When you can see this, the monarch will be hatching soon! After the monarch has emerged, it is very important to let the butterfly dry its wings. When butterflies emerge from a chrysalis, their wings are wet and they are not able to fly. If you do release them when their wings are wet, make sure you place them on a bush or shrub. Otherwise, if you wait a few hours, they should be able to fly when you release them. When you grab the monarch, you must be very careful, as their wings are very fragile. Pinch the wings together and grab both wings gently near the body.
What if my caterpillar doesn't make its chrysalis on the top of my container?
If the chrysalis is on a milkweed leaf…
- Leave it be: You will have to be very careful if you do this. Carefully remove any caterpillars on the same plant, as they will eat away the rest of the plant and the chrysalis will not have any support to hold it up. The biggest problem with leaving the chrysalis on the milkweed is that eventually the milkweed plant will die. If the plant is not fresh, this could happen before the butterfly hatches.
- Remove the chrysalis: If you remove the chrysalis, you must hang it up. You cannot let the chrysalis sit on the ground. The best way to remove a chrysalis is to carefully cut the leaf off the plant, then trim away the leaf to a small section about the size of a quarter. A great way to hang the chrysalis is with alligator clips, like the ones pictured below in this butterfly box.
If the chrysalis is on the side of the container…
The best solution is to leave the chrysalis there and be very careful. As long as you are very careful, the chrysalis should still hatch.
What if the chrysalis falls? What does milkweed look like? I ran out of milkweed. Now what?
<urn:uuid:5cb50f66-a9fd-4af4-8f9e-f485965ec431>
3.1875
1,629
Tutorial
Science & Tech.
66.671634
95,482,122
As much as two-thirds of Earth's carbon may be hidden in the inner core, making it the planet's largest carbon reservoir, according to a new model that even its backers acknowledge is "provocative and speculative." In a paper scheduled for online publication in the Proceedings of the National Academy of Sciences this week, University of Michigan researchers and their colleagues suggest that iron carbide, Fe7C3, provides a good match for the density and sound velocities of Earth's inner core under the relevant conditions. The model, if correct, could help resolve observations that have troubled researchers for decades, according to authors of the PNAS paper. The first author is Bin Chen, who did much of the work at the University of Michigan before taking a faculty position at the University of Hawaii at Manoa. The principal investigator of the project, Jie Li, is an associate professor in U-M's Department of Earth and Environmental Sciences. "The model of a carbide inner core is compatible with existing cosmochemical, geochemical and petrological constraints, but this provocative and speculative hypothesis still requires further testing," Li said. "Should it hold up to various tests, the model would imply that as much as two-thirds of the planet's carbon is hidden in its center sphere, making it the largest reservoir of carbon on Earth." It is now widely accepted that Earth's inner core consists of crystalline iron alloyed with a small amount of nickel and some lighter elements. However, seismic waves called S waves travel through the inner core at about half the speed expected for most iron-rich alloys under relevant pressures. Some researchers have attributed the S-wave velocities to the presence of liquid, calling into question the solidity of the inner core. In recent years, the presence of various light elements—including sulfur, carbon, silicon, oxygen and hydrogen—has been proposed to account for the density deficit of Earth's core. Iron carbide has recently emerged as a leading candidate component of the inner core. In the PNAS paper, the researchers conclude that the presence of iron carbide could explain the anomalously slow S waves, thus eliminating the need to invoke partial melting. "This model challenges the conventional view that the Earth is highly depleted in carbon, and therefore bears on our understanding of Earth's accretion and early differentiation," the PNAS authors wrote. In their study, the researchers used a variety of experimental techniques to obtain sound velocities for iron carbide up to core pressures. In addition, they detected the anomalous effect of spin transition of iron on sound velocities. They used diamond-anvil cell techniques in combination with a suite of advanced synchrotron methods including nuclear resonant inelastic X-ray scattering, synchrotron Mössbauer spectroscopy and X-ray emission spectroscopy. Other U-M authors of the PNAS paper are Zeyu Li and Jiachao Liu of the Department of Earth and Environmental Sciences. The study was supported by the National Science Foundation and the U.S. Department of Energy. It also benefited from a Crosby Award from the U-M ADVANCE program and U-M's Associate Professor Support Fund.
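As a rough illustration of why an Fe7C3 inner core would dominate Earth's carbon budget, the sketch below computes the carbon mass fraction of Fe7C3 from its stoichiometry and scales it by the inner-core mass. The inner-core mass used here (~9.7 × 10^22 kg, a standard seismology-based value) is an assumption for illustration and is not taken from the paper.

```python
M_FE, M_C = 55.845, 12.011                  # molar masses, g/mol
carbon_fraction = 3 * M_C / (7 * M_FE + 3 * M_C)   # stoichiometric Fe7C3

INNER_CORE_MASS = 9.7e22                    # kg; standard value (assumption)
carbon_mass = carbon_fraction * INNER_CORE_MASS

print(f"C mass fraction of Fe7C3: {carbon_fraction:.1%}")   # ~8.4%
print(f"Implied inner-core carbon: {carbon_mass:.1e} kg")   # ~8e21 kg
```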
<urn:uuid:5d3ba039-ebfb-4ca4-92e1-f5182c46550c>
3.625
1,314
Content Listing
Science & Tech.
38.231661
95,482,127
Sage-grouse & Sagebrush Ecosystem
USGS & USFS Release New Rangeland Fire Science Plan: the new plan identifies priority science needs in five areas: fire, invasive plants, restoration, sagebrush & greater sage-grouse, and climate & weather.
Find out how USGS has been a leader in sagebrush ecosystem research and continues to meet the priority science needs of management agencies. We bring a diversity of expertise and capabilities to address a wide variety of science needs at multiple spatial scales and are committed to providing high quality science to our management partners.
Greater Sage-grouse Annotated Bibliography: synthesizes the scientific literature published since records of decision were completed for 2015 BLM/USDA Forest Service land use plan amendments for greater sage-grouse, and provides potential management implications of the science.
Meet our lead scientists and team members: information about USGS principal investigators (PIs) working on sage-grouse or sagebrush ecosystem issues is available here.
Looking for more information? We have put together a list of related resources and links for the sage-grouse and sagebrush ecosystem.
In the western U.S., big sagebrush ecosystems provide habitat for about 350 species of conservation concern, but are sensitive to disturbance and are in decline. Big sagebrush ecosystems have low and variable precipitation, which makes restoration a challenge. Methodological guidelines are needed to rapidly determine vegetation responses to wildfire and post-disturbance treatments, such as seeding and herbicide applications. Reestablishing perennial native shrubs is essential to short-term rehabilitation and long-term restoration of plant communities in the sagebrush ecosystem following wildfire.
Long-term trends in restoration and associated land treatments in the southwestern United States: Restoration treatments, such as revegetation with seeding or invasive species removal, have been applied on U.S. public lands for decades. Temporal trends in these management actions have not been extensively summarized previously, particularly in the southwestern United States where invasive plant species, drought, and fire have altered dryland... (Copeland, Stella M.; Munson, Seth M.; Pilliod, David S.; Welty, Justin L.; Bradford, John B.; Butterfield, Bradley J.)
U.S. Geological Survey sage-grouse and sagebrush ecosystem research annual report for 2017: The sagebrush (Artemisia spp.) ecosystem extends across a large portion of the Western United States, and the greater sage-grouse (Centrocercus urophasianus) is one of the iconic species of this ecosystem. Greater sage-grouse populations occur in 11 States and are dependent on relatively large expanses of sagebrush-dominated habitat. Sage-grouse... (Hanser, Steven E.)
Greater sage-grouse (Centrocercus urophasianus) nesting and brood-rearing microhabitat in Nevada and California—Spatial variation in selection and survival patterns: Greater sage-grouse (Centrocercus urophasianus; hereinafter, "sage-grouse") are highly dependent on sagebrush (Artemisia spp.) dominated vegetation communities for food and cover from predators. Although this species requires the presence of sagebrush shrubs in the overstory, it also inhabits a broad geographic distribution with significant... (Coates, Peter S.; Brussee, Brianne E.; Ricca, Mark A.; Dudko, Jonathan E.; Prochazka, Brian G.; Espinosa, Shawn P.; Casazza, Michael L.; Delehanty, David J.)
<urn:uuid:e5a45689-e6d5-4db3-b5f4-f1357b306502>
2.96875
750
Content Listing
Science & Tech.
22.682165
95,482,129
Bacteria found in soil may harbor a potential game-changer for drug design. A new study in Nature Communications suggests scientists could build better drugs by learning from bacteria-derived molecules called thiocarboxylic acids. Thiocarboxylic acids caught the authors' attention because of their rarity in nature and their similarity to lab-made molecules called carboxylic acids. Carboxylic acids are good "warheads" because they can home in on biological targets, making them a key ingredient in many antibiotics, heart disease medications, and more. The researchers took a closer look at two natural products, platensimycin (PTM) and platencin (PTN), that have been extensively investigated as potential antibiotics. Much to their surprise, platensimycin and platencin, which have been known for over a decade as carboxylic acids, are actually made by bacteria as thiocarboxylic acids. The researchers revealed, for the first time, the exact genes, and the enzymes they encode, that bacteria use to create thiocarboxylic acids. They identified a thioacid cassette encoding two proteins, PtmA3 and PtmU4, responsible for carboxylate activation by coenzyme A and sulfur transfer, respectively. From there, the scientists set out to test whether nature-made thiocarboxylic acids could also act as biological warheads. The researchers discovered that, as antibiotics, platensimycin and platencin thiocarboxylic acids appeared to bind to their biological targets even better than their carboxylic acid counterparts. "That was exciting to see," the senior author says. "We've now identified thiocarboxylic acids as natural products that can be used as drugs, and thiocarboxylic acids as warheads should be applicable to man-made drugs as well." Interestingly, thiocarboxylic acids appear to have been hiding in plain sight. The molecules were thought to be rare and have not been appreciated to date as a family of natural products. Thanks to the current findings, the researchers now know how these products are made in nature. Upon searching databases of bacterial genomes, the researchers found that many species of bacteria around the world have the genes to produce thiocarboxylic acids. "There are many, many thiocarboxylic acid natural products waiting to be discovered, making them a treasure trove of potential new drug leads or drugs," says the senior author.
<urn:uuid:897d8b67-5d6c-4685-8d4e-63256eb35b13>
3.578125
545
News Article
Science & Tech.
18.659016
95,482,134
Scientists from Kiel find explanation for geochemically distinct parallel tracks of volcanoes formed by the same volcanic hotspot
Located in the South Atlantic, thousands of kilometers away from the nearest populated country, Tristan da Cunha is one of the remotest inhabited islands on earth. Together with the uninhabited neighboring island of Gough, about 400 kilometers away, it is part of the British Overseas Territories. Both islands are active volcanoes, derived from the same volcanic hotspot. A team of marine scientists and volcanologists from the GEOMAR Helmholtz Centre for Ocean Research Kiel, from the University of Kiel and the University of London discovered that about 70 million years ago, the composition of the material from the Tristan-Gough hotspot deposited on the seafloor changed. In the international scientific journal Nature Communications, the team provides an explanation for this compositional change that could help explain similar findings in other hotspots worldwide. Volcanic hotspots can be found in all oceans. "Pipe-like structures, so-called 'Mantle Plumes', transport hot material from the earth's interior to the base of the earth's lithospheric plates. As the mantle material rises beneath the plate, pressure-release melting takes place and these melts rise to the surface, forming volcanoes on the seafloor," explains Professor Kaj Hoernle from GEOMAR, lead author of the current study. As the earth's plates move over the hotspots, the volcanoes are moved away from their sources but new volcanoes form above the hotspots. "As a result, long chains of extinct volcanoes extend from the active volcano located above the hotspot for thousands of kilometers in the direction of plate motion", adds the volcanologist. Unlike most other hotspots, scientists can trace the history of the Tristan-Gough hotspot back to its initiation. Huge outpourings of flood basalts in Etendeka and Brazil at the initiation of the hotspot 132 million years ago most likely contributed to the breaking apart of the Gondwana supercontinent into new continents including Africa and South America. The rifting apart of Africa and South America has led to the formation of the South Atlantic Ocean basin. As the Atlantic widened, two underwater mountain ranges (the Walvis Ridge and Guyot Province on the African Plate and the Rio Grande Rise on the South American Plate) formed above the hotspot. The active volcanic islands of Tristan da Cunha and Gough lie at the end of the track on the African Plate. Several expeditions, including two with the German research vessel SONNE (I) led by Kiel researchers, recovered samples from these submarine mountains. Geochemical analyses show that the oldest parts of the Walvis Ridge, as well as the initial volcanic outpourings on the continents, have compositions similar to the presently active Gough volcano. The northwestern part of the Walvis Ridge and Guyot Province younger than 70 million years, however, is divided into two geographically distinct geochemical domains: "The southern part also shows the geochemically enriched Gough signature, while the northern part is geochemically less enriched, similar to the present Tristan da Cunha Volcano", says co-author Joana Rohde. A very likely explanation is hidden more than 2,500 kilometers deep in the Earth's lower mantle. At the base of the lower mantle beneath southern Africa, seismic surveys have shown a huge lens of material, which has different physical properties than the surrounding mantle material.
This lens is called a "Large Low Shear Velocity Province" (LLSVP). The Tristan-Gough hotspot is located above the margin of this LLSVP. "In its early stages, the plume only appears to have sucked in material from the LLSVP," explains Professor Hoernle, "but over the course of time the LLSVP material at the NW side of the margin was exhausted and material from outside the LLSVP was drawn into the base of the plume." Since then, the plume has contained two types of compositionally distinct mantle, leading to the formation of parallel but compositionally distinct plume subtracks. "At some point in the future, the plume might be completely cut off from the LLSVP lens, again erupting only one type of composition, but now Tristan rather than Gough type of material," says the volcanologist. This model is also applicable to other hotspot tracks such as Hawaii. There, too, there is evidence that parallel chains of volcanoes emit geochemically distinct material, with one or the other composition dominating at different times in the history of the hotspot. A second LLSVP exists beneath the Pacific. "Thanks to the investigations at the Tristan-Gough hotspot, we now understand better the mysterious processes taking place in the interior of our planet," says Professor Hoernle.
<urn:uuid:e08437e6-52b8-46e4-bf77-c93ff12c6afa>
4.15625
1,648
Content Listing
Science & Tech.
35.903296
95,482,161
The lotis blue butterfly is believed to be extinct and has not been seen in the wild since 1994. Conservationists believe that if any still exist, they may be found in only a few remote areas of Mendocino on the northern coast of California. This butterfly is small, with a wingspan of about one inch. Males are more brightly colored than females; their upper wings are deep violet-blue in color with a black border and a fringe of white scales along the outer wing margin. Females are brown or sometimes bluish-brown, with a wavy band of orange near the outer wing margins. Both males and females have grayish-white undersides scattered with black spots. The preferred habitat of this species is wet meadows and sphagnum willow bogs where, as caterpillars, they are believed to feed on the seaside bird's-foot trefoil (Lotus formosissimus), a common plant found on the Mendocino coast in damp coastal prairies. Like most butterflies, the caterpillars depend on their host plant to survive. Females lay their eggs on these plants so that their young can feed on them in order to grow. The exact cause of the decline of this species is not known. Some conservationists believe that this species has suffered from habitat disturbance by humans and natural drought. Its bog habitat is known to undergo a natural process of dry seasons, but human activities may have prevented the formation of new bogs in the past. Conservation plans for this species include a captive breeding program that would be initiated immediately following the rediscovery of live specimens.
<urn:uuid:ee5b35d9-348c-40b9-915c-0085ef49104b>
3.609375
471
Knowledge Article
Science & Tech.
55.915244
95,482,182
Researchers inadvertently boost surface area of nickel nanoparticles for catalysis
Researchers from North Carolina State University and the Air Force Research Laboratory have discovered that a technique designed to coat nickel nanoparticles with silica shells actually fragments the material – creating a small core of oxidized nickel surrounded by smaller satellites embedded in a silica shell. The surprising result may prove useful by increasing the surface area of nickel available for catalyzing chemical reactions. "Nickel is noteworthy for its widespread applications in catalysis," says Joe Tracy, an associate professor of materials science and engineering at NC State and corresponding author of a paper on the work. "One reason you'd want to coat nickel nanoparticles in porous silica is to embed them in a neutral substrate to maintain their efficiency as catalysts in chemical reactions. So the fact that this process could increase their surface area at the same time could prove to be beneficial." The researchers employed a widely used approach called reverse microemulsion, or reverse micelle, to apply a silica coating to nickel nanoparticles that were approximately 27 nanometers (nm) in diameter. But they found that the technique results in an oxidized nickel core that was 7 nm in diameter, surrounded by oxidized nickel satellites only 2 nm in diameter – all enclosed in a silica shell that was 30 nm in diameter. "At first we thought we'd made a mistake, but we were able to reproduce the result over and over again," says Brian Lynch, a Ph.D. student at NC State and lead author of a paper on the work. "When oxidized and reduced at high temperatures, we found that the core-and-satellite nickel nanoparticles did not significantly change size or shape, suggesting that they would function well in the environments needed to catalyze chemical reactions," Tracy says. "This was an unexpected discovery, but we're happy with how it turned out." The paper, "Synthesis and Chemical Transformation of Ni Nanoparticles Embedded in Silica," is published in the journal Nanoscale. The paper was co-authored by Bryan Anderson, a former Ph.D. student at NC State, and Joshua Kennedy of the Air Force Research Laboratory. The work was done with support from the National Science Foundation, under grants CBET-1605699 and DMR-1056653; the Air Force Research Laboratory Materials and Manufacturing Directorate, and the Air Force Office of Scientific Research, under grant 16RXCOR324.
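To see why fragmentation boosts catalytic surface area, here is a toy geometric calculation, not taken from the paper: it redistributes the volume of one 27 nm sphere into 2 nm spheres and compares total surface areas. Real core-satellite geometries, the oxide layers, and the porous silica shell are all ignored; the diameters are the ones reported above.

```python
import math

def sphere_area(d):
    return math.pi * d ** 2          # surface area of a sphere of diameter d

def sphere_volume(d):
    return math.pi * d ** 3 / 6      # volume of a sphere of diameter d

d_big, d_small = 27.0, 2.0           # nm: original particle vs. satellite size
n_small = sphere_volume(d_big) / sphere_volume(d_small)  # equal-volume split
area_gain = n_small * sphere_area(d_small) / sphere_area(d_big)

print(f"~{n_small:.0f} small spheres, ~{area_gain:.1f}x the surface area")
# For equal total volume, total area scales as d_big / d_small = 13.5x.
```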
<urn:uuid:3f1df640-bc19-4dfb-ae94-46c79fe9eec9>
3.046875
503
Truncated
Science & Tech.
26.865647
95,482,186
The main source of light on Earth is the Sun. Sunlight provides the energy that green plants use to create sugars, mostly in the form of starches, which release energy into the living things that digest them. This process of photosynthesis provides virtually all the energy used by living things. Historically, another important source of light for humans has been fire, from ancient campfires to modern kerosene lamps. With the development of electric lights and power systems, electric lighting has effectively replaced firelight. Some species of animals generate their own light, a process called bioluminescence. For example, fireflies use light to locate mates, and vampire squids use it to hide themselves from prey. The primary properties of visible light are intensity, propagation direction, frequency or wavelength spectrum, and polarization, while its speed in a vacuum, 299,792,458 metres per second, is one of the fundamental constants of nature. Visible light, as with all types of electromagnetic radiation (EMR), is experimentally found to always move at this speed in a vacuum. In physics, the term light sometimes refers to electromagnetic radiation of any wavelength, whether visible or not. In this sense, gamma rays, X-rays, microwaves and radio waves are also light. Like all types of light, visible light is emitted and absorbed in tiny "packets" called photons and exhibits properties of both waves and particles. This property is referred to as the wave-particle duality. The study of light, known as optics, is an important research area in modern physics. Generally, EM radiation, or EMR (the designation "radiation" excludes static electric and magnetic and near fields), is classified by wavelength into radio, microwave, infrared, the visible region that we perceive as light, ultraviolet, X-rays and gamma rays. The behavior of EMR depends on its wavelength. Higher frequencies have shorter wavelengths, and lower frequencies have longer wavelengths. When EMR interacts with single atoms and molecules, its behavior depends on the amount of energy per quantum it carries. EMR in the visible light region consists of quanta (called photons) that are at the lower end of the energies that are capable of causing electronic excitation within molecules, which leads to changes in the bonding or chemistry of the molecule. At the lower end of the visible light spectrum, EMR becomes invisible to humans (infrared) because its photons no longer have enough individual energy to cause a lasting molecular change (a change in conformation) in the visual molecule retinal in the human retina, a change that triggers the sensation of vision. There exist animals that are sensitive to various types of infrared, but not by means of quantum absorption. Infrared sensing in snakes depends on a kind of natural thermal imaging, in which tiny packets of cellular water are raised in temperature by the infrared radiation. EMR in this range causes molecular vibration and heating effects, which is how these animals detect it. Above the range of visible light, ultraviolet light becomes invisible to humans, mostly because it is absorbed by the cornea below 360 nanometers and the internal lens below 400. Furthermore, the rods and cones located in the retina of the human eye cannot detect the very short (below 360 nm) ultraviolet wavelengths and are in fact damaged by ultraviolet.
Many animals with eyes that do not require lenses (such as insects and shrimp) are able to detect ultraviolet, by quantum photon-absorption mechanisms, in much the same chemical way that humans detect visible light. Various sources define visible light as narrowly as 420-680 nm or as broadly as 380-800 nm. Under ideal laboratory conditions, people can see infrared up to at least 1,050 nm; children and young adults may perceive ultraviolet wavelengths down to about 310-313 nm. Plant growth is also affected by the color spectrum of light, a process known as photomorphogenesis.
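The perception limits quoted above map directly onto per-photon energies via E = hc/λ, which is the quantity the passage says governs whether a photon can trigger a lasting change in retinal. A small sketch using standard constants and the wavelengths mentioned in the text:

```python
H = 6.62607015e-34       # Planck constant, J s
C = 299_792_458          # speed of light in vacuum, m/s (as quoted above)
EV = 1.602176634e-19     # joules per electronvolt

# UV perception limit, visible edges, and the laboratory IR limit cited above.
for name, nm in [("ultraviolet", 310), ("violet", 400),
                 ("red", 700), ("infrared", 1050)]:
    energy_ev = H * C / (nm * 1e-9) / EV
    print(f"{name:>11}: {nm:5d} nm -> {energy_ev:.2f} eV per photon")
```

The output shows roughly a factor-of-three spread in photon energy (about 4.0 eV at 310 nm down to about 1.2 eV at 1,050 nm), which is why the long-wavelength photons fail to excite the visual pigment.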
<urn:uuid:77997c9b-d477-4b12-b940-2061483a40f1>
3.90625
786
Knowledge Article
Science & Tech.
31.615031
95,482,193
Scientific Name: Balaenoptera physalus (Linnaeus, 1758)
Taxonomic Notes: The subspecific phylogeny of Fin Whales has not yet been fully elucidated, but some authors recognize a northern hemisphere subspecies B. p. physalus and a southern hemisphere subspecies B. p. quoyi, which has a larger body size. Clarke (2004) proposed a pygmy subspecies B. p. patachonica Burmeister, 1865, but this is not widely accepted and no genetic analysis has been performed.
Red List Category & Criteria: Endangered A1d ver 3.1
Assessor(s): Reilly, S.B., Bannister, J.L., Best, P.B., Brown, M., Brownell Jr., R.L., Butterworth, D.S., Clapham, P.J., Cooke, J., Donovan, G.P., Urbán, J. & Zerbini, A.N.
Reviewer(s): Taylor, B.L. & Notarbartolo di Sciara, G.
The cause of the population reduction in this species (commercial whaling) is reversible, understood, and is not currently in operation. For this reason, the species is assessed under criterion A1, not under A2, A3 or A4. The analysis in this assessment estimates that the global population has declined by more than 70% over the last three generations (1929-2007), although in the absence of current substantial catches it is probably increasing. Most of the global decline over the last three generations is attributable to the major decline in the Southern Hemisphere. The North Atlantic subpopulation may have increased, while the trend in the North Pacific subpopulation is uncertain.
Range Description: Fin Whales occur worldwide, mainly, but not exclusively, in offshore waters. They are rare in the tropics, except in certain cool-water areas, such as off Peru.
In the North Atlantic, the Fin Whale's range extends as far as Svalbard (Norway) in the northeast (but rarely far into the Barents Sea), to the Davis Strait and Baffin Bay (Canada and Denmark (Greenland)) in the northwest (but rarely into the inner Canadian Arctic), to the Canary Islands (Spain) in the southeast, and to the Antilles in the southwest (Rice 1998, Perry et al. 1999), but it is rare in the Caribbean and Gulf of Mexico (Ward et al. 2001). Their main summer range in the Northwest Atlantic extends from Cape Hatteras (39°N) (US) northward (Anon. 2005a). In former times, Fin Whales were caught year-round near the Straits of Gibraltar. While there may be some north-south migration between summer and winter, it does not necessarily involve the entire population, and North Atlantic Fin Whales may occur to some extent throughout the year in all of their range, as suggested by acoustic data (Clark 1995). There is a resident subpopulation in the central and western Mediterranean which is genetically distinct from that of the North Atlantic (Bérubé et al. 1998). The species also occurs rarely in the eastern Mediterranean (Notarbartolo di Sciara et al. 2003).
In the eastern North Pacific, Fin Whales occur year-round off the central and southern California coast (Anon. 2003). They occur in summer off the entire coast of western North America from California into the Gulf of Alaska. Fin Whales marked off California in winter were recaptured in summer by whaling operations along the entire coast, suggesting migration. Offshore, Fin Whales occur across the North Pacific north of 40°N, at least from May to September in summer, with some tendency for a northward shift in distribution in high summer, when they also enter the Okhotsk Sea (Miyashita et al. 1995).
They occur in the Bering Sea and some have been seen in the Chukchi Sea, but rarely in the Beaufort Sea (Angliss and Outlaw 2004). Fin Whales occur, albeit in small numbers, in Hawaiian waters in both summer and winter (Anon. 2005b). They are rare or absent throughout the tropical North Pacific. While there appears to be some migration, acoustic data suggest that overall there is no marked seasonality in distribution in the North Pacific (Watkins et al. 2000), in contrast to the traditional view of the Fin Whale as a migratory species.
Gulf of California: The Fin Whales inhabiting the Gulf of California constitute a resident, genetically isolated subpopulation (Bérubé et al. 2002). Telemetry information has shown year-round residency in this area, with seasonal latitudinal movements (Urbán et al. 2005).
East China Sea: Fin Whales in the East China Sea are generally recognized as being a distinct subpopulation from those of the North Pacific (Fujino 1960). Fin Whales appear to be rare today off the Korean peninsula and southern and central Japan, but large numbers were caught there in the 20th century (IWC 2006); it is not clear whether these animals were part of the East China Sea population or a separate grouping.
While some Fin Whales do penetrate into the high Antarctic, along with Blue, Minke and Humpback Whales, the bulk of the Fin Whale summer distribution is in middle latitudes, mainly 40°S-60°S in the southern Indian and South Atlantic oceans, but 50°-65°S in the South Pacific, as evidenced by both sightings data and past catches (Miyashita et al. 1995, IWC 2006a). The winter distribution is poorly known, but based on catch results Fin Whales were formerly common off southern Africa in winter and became scarce there following depletion of the species in the Southern Ocean, consistent with this being a wintering area of a migratory population (Best 2003). Catches were mainly off South Africa, but in the early 20th century there were also catches off Angola, Congo and Mozambique (Best 1994). Recent sightings were made in the mid-latitude region (between 55°S and 61°S) by the IWC/SOWER (Southern Ocean Whale and Ecosystem Research Program). A high-density area of Fin Whales was observed between 0°E and 5°E, south of Bouvet Island (Ensor et al. 2007). Large numbers of Fin Whales were caught off South Georgia in the past, but the species is not common there now (Moore et al. 1999). It is assumed that animals caught at South Georgia were migratory (Mackintosh 1965), but the location of their wintering grounds is unknown. Fin Whales are now rare in Brazilian waters, but there is virtually no information from the period before the depletion of the whales around South Georgia (Zerbini et al. 1997). A few were taken in a brief period of whaling in southern Brazil in the early 1960s. Winter catches of Fin Whales off Chile, which also declined from the 1950s onwards in line with declining Southern Ocean stocks, are also suggestive of a wintering ground. Fin Whales were caught off Peru for only a few years from 1965 (prior to that the industry had focused on sperm whales), and catches petered out in the early 1970s. The map shows where the species may occur based on oceanography. The species has not been recorded for all the states within the hypothetical range as shown on the map. States for which confirmed records of the species exist are included in the list of native range states.
Native: Algeria; Angola; Antarctica; Argentina; Australia; Belgium; Bermuda; Bouvet Island; Brazil; Canada; Cape Verde; Chile; China; Congo; Congo, The Democratic Republic of the; Croatia; Cyprus; Denmark; Ecuador; Egypt; Falkland Islands (Malvinas); Faroe Islands; Fiji; France; French Southern Territories (Kerguelen); Gabon; Germany; Gibraltar; Greece; Greenland; Heard Island and McDonald Islands; Iceland; India (Andaman Is., Laccadive Is., Nicobar Is.); Indonesia; Iran, Islamic Republic of; Iraq; Ireland; Isle of Man; Israel; Italy; Japan; Korea, Democratic People's Republic of; Korea, Republic of; Lebanon; Libya; Madagascar; Malaysia; Malta; Mauritius (Rodrigues); Mexico; Monaco; Morocco; Mozambique; Namibia; Netherlands; New Caledonia; New Zealand; Norway; Oman; Pakistan; Peru; Philippines; Portugal; Réunion; Russian Federation; Saint Helena, Ascension and Tristan da Cunha (Tristan da Cunha); Saint Pierre and Miquelon; Saudi Arabia; Seychelles (Aldabra); Slovenia; South Africa; South Georgia and the South Sandwich Islands; Spain; Sri Lanka; Svalbard and Jan Mayen; Sweden; Syrian Arab Republic; Tunisia; Turkey; United Arab Emirates; United Kingdom; United States; Venezuela, Bolivarian Republic of
FAO Marine Fishing Areas: Arctic Sea; Atlantic – eastern central; Atlantic – Antarctic; Atlantic – southeast; Atlantic – northwest; Atlantic – northeast; Atlantic – southwest; Atlantic – western central; Indian Ocean – Antarctic; Indian Ocean – eastern; Indian Ocean – western; Mediterranean and Black Sea; Pacific – northeast; Pacific – northwest; Pacific – southeast; Pacific – eastern central; Pacific – western central; Pacific – Antarctic; Pacific – southwest
North Atlantic Fin Whales were comprehensively assessed by the International Whaling Commission (IWC) Scientific Committee (SC) in 1991 (IWC 1992), and an update for the northern part of the region was undertaken in 2006 in a joint workshop with the North Atlantic Marine Mammal Commission (NAMMCO) (IWC 2007a). North Atlantic Fin Whale stocks had previously been assessed by the IWC Scientific Committee in 1976 (IWC 1977). Based mainly on past whaling operations, the IWC recognizes seven management areas in the North Atlantic: Nova Scotia; Newfoundland-Labrador; West Greenland; East Greenland-Iceland; North Norway; West Norway-Faeroe Islands; British Isles-Spain-Portugal. Based on genetic evidence, it is now considered more likely that there are from two to four breeding stocks, which use these seven management areas in different proportions (IWC 2007a). The best available estimates of recent abundance accepted by the IWC Scientific Committee (IWC 2007c) are: 25,800 (CV 0.125) in 2001 for the central North Atlantic (East Greenland-Iceland, Jan Mayen (Norway) and the Faeroes (Denmark)); 4,100 (CV 0.21) in 1996-2001 for the northeastern North Atlantic (North and West Norway); 17,355 (CV 0.27) in 1989 for the Spain-Portugal-British Isles area (Buckland et al. 1992); and 1,722 (CV 0.37) for West Greenland in 2005 (IWC 2007b). There are no complete estimates for the western North Atlantic, but partial estimates are 1,013 (95% CI 459-2,654) for Newfoundland in 2002-3 (IWC 2007a), and 2,814 (CV 0.21) for the east coast of North America from the Gulf of St Lawrence southward (Anon. 2005a). Subject to a caveat concerning the different dates of the surveys, these figures can be summed to provide a rough total estimate of about 53,000 around the year 2000.
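The CVs attached to these abundance estimates translate into 95% confidence intervals under the log-normal error model conventionally used for line-transect survey estimates. The sketch below illustrates that convention in Python for the four estimates just quoted; it is an illustration of the standard formula, not a reproduction of any IWC calculation.

```python
import math

def lognormal_ci(n_hat, cv, z=1.96):
    """Approximate 95% CI for an abundance estimate with coefficient of
    variation cv, assuming log-normally distributed errors (the usual
    convention for line-transect abundance estimates)."""
    c = math.exp(z * math.sqrt(math.log(1.0 + cv ** 2)))
    return n_hat / c, n_hat * c

# Estimate/CV pairs quoted above; note the surveys span different years.
estimates = {
    "central North Atlantic (2001)": (25800, 0.125),
    "northeastern North Atlantic (1996-2001)": (4100, 0.21),
    "Spain-Portugal-British Isles (1989)": (17355, 0.27),
    "West Greenland (2005)": (1722, 0.37),
}
for region, (n, cv) in estimates.items():
    lo, hi = lognormal_ci(n, cv)
    print(f"{region}: {n:,} (95% CI {lo:,.0f}-{hi:,.0f})")
```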
No significant trends were found in the total abundance for any of the above areas, but when the area west and southwest of Iceland was singled out, a significant increasing trend was found (IWC 2007a). Fin Whales were heavily exploited in the late 19th and early 20th centuries, starting in 1876, particularly off Norway, Iceland, the Faeroes and the British Isles. Whaling then spread to Spain, Greenland and eastern Canada, and exploitation continued at a lower level until the 1980s. Catch statistics for the early years are probably incomplete, and a large number of whales were killed but lost, due to lines breaking and similar causes, perhaps up to one-half in the first 20-25 years and one-third in the next 15-20 years (Tønnessen 1967). The IWC Scientific Committee added 50% to recorded catches up to 1915 to allow for this (IWC 2007a): recorded catches up to 1915 total 15,315 Fin Whales plus 29,024 unspecified whales, of which about half may have been Fin Whales, so the total kill up to 1915 may have been about 45,000 (see the arithmetic check at the end of this subsection). The total recorded catch after 1915 has been about 55,000 Fin Whales. The approximate figures by area are: Canada 12,000; Norway 10,000; Iceland 10,000; Faeroes 5,000; Greenland 1,000; British Isles 3,000; Spain and Portugal 11,000; and pelagic operations 3,000. The responses of the various North Atlantic Fin Whale populations to these earlier reductions by whaling differ, ranging from clear evidence of recovery to no firm indication of any increase. An estimated 14,000 Fin Whales were killed off northern Norway during 1876-1904, and a further 1,500 during 1948-71, but Fin Whales are rare there now (although quite abundant off western Spitsbergen, where about 1,500 whales had been killed during 1904-11) (Øien 2003, 2004). An estimated 12,000 Fin Whales were killed off Iceland during 1890-1915, until whaling was suspended partly because of concerns about the reductions in the stocks; the modern abundance data suggest that there has been a recovery in the population that may still be continuing, particularly west of Iceland, despite catches averaging about 220 per year during 1948-89 (Branch and Butterworth 2006). An estimated 10,000 Fin Whales were taken from the Faeroes, but about 25% of these were actually caught off eastern Iceland (IWC 2007a). Whaling from the Faeroes and West Norway petered out during the 1960s as whales became scarce (IWC 1977), but catches had apparently been mainly of migrating whales rather than whales belonging to local populations. The impact of catches on the Fin Whale stocks in the Northwest Atlantic is unclear (Mitchell 1972). Catches of about 7,000 Fin Whales taken near the Strait of Gibraltar in the 1920s apparently reduced the local abundance, and Fin Whales are still rare there today, but this did not seem to affect the abundance of Fin Whales off northern Spain, where catches continued until 1985. Within the Mediterranean, the population was estimated in 1991 from surveys covering much of the western basin at 3,583 (CV 0.27) (Forcada et al. 1996). It is likely, but not certain, that the historical catches near the Strait of Gibraltar were from the North Atlantic rather than from this population (Sanpera and Aguilar 1992). Palsbøll et al. (2004) found that Mediterranean Fin Whales probably have a small but non-zero genetic exchange with Fin Whales elsewhere in the North Atlantic.
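As a quick check of the "about 45,000" figure quoted earlier in this subsection, here is the arithmetic made explicit (our own reconstruction, assuming exactly half of the unspecified whales were Fin Whales):

```python
recorded_fin = 15_315      # recorded Fin Whale catches up to 1915
unspecified = 29_024       # unspecified whales; about half assumed to be Fin Whales
loss_correction = 1.5      # IWC SC added 50% for whales killed but lost

estimated_fin_catch = recorded_fin + 0.5 * unspecified
total_kill = loss_correction * estimated_fin_catch
print(round(total_kill))   # ~44,740, i.e. "about 45,000"
```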
The Mediterranean subpopulation contains fewer than 10,000 mature individuals and is subject to ongoing threats that may be causing a decline, but data on trend in abundance are insufficient to determine this (Reeves and Notarbartolo di Sciara 2006).

North Pacific Fin Whale stocks have not been assessed in depth by the IWC Scientific Committee since 1973, when the assessment by Ohsumi and Wada (1974) was accepted; that assessment was updated by Allen (1977). The stock in the western North Pacific was estimated to have declined from an "initial level" of 44,000 to 17,000 in 1975. The figures refer to the "exploitable" population, above the minimum allowed size at capture. However, these assessments were based on indices of catch-per-unit-effort (CPUE) and sightings-per-unit-effort (SPUE) that did not meet modern requirements for the analysis of such data (e.g. IWC 1989), although there is no doubt that the populations had declined to some unknown extent. The current abundance of Fin Whales in the North Pacific is not well known, because survey coverage has been patchy and not all available data have been analysed. Current estimates indicate a population of 5,700 whales in the Bering Sea, coastal Aleutian Islands and Gulf of Alaska (Moore et al. 2002, Zerbini et al. 2006). Zerbini et al. (2006) estimated a trend in abundance of 4.8%/year with a nominal CV of 0.15 for Fin Whales in the northern Gulf of Alaska from 1987 to 2003, but recalculation of the variance from the data indicates low precision (95% confidence limits −1.6% to 11.1%). Based on surveys conducted in 1996 and 2001, an estimated 3,300 (CV 0.31) Fin Whales occur in summer/autumn off the west coast of the US (Barlow 2003a). Apart from a small population in Hawaiian waters (estimated 174 animals, CV 0.72; Barlow 2003b), there are no recent estimates of Fin Whale abundance in the remainder of the North Pacific. Some relevant data exist, including data collected under the Japanese Research Programme in the North Pacific (JARPN) (Tamura et al. 2005), but these do not appear to have been analysed with respect to Fin Whale abundance. Fin Whales seem to be abundant in the central offshore part of the Okhotsk Sea, based on Japanese surveys conducted in 1989, 1990, 1992, 1999, 2000 and 2003, but no abundance estimate has been calculated (Miyashita 2004). Given the lack of a comprehensive recent estimate, the figure of 17,000 in 1975 from the earlier assessment is used here; the global assessment is not particularly sensitive to the figure used for the North Pacific. Over 74,000 Fin Whales are recorded caught by modern whaling in the North Pacific during 1910-75, plus about 20,000 unspecified whales during 1900-30, of which a substantial proportion may have been Fin Whales. Fin Whales were protected by the IWC from whaling in the North Pacific from 1976 onwards, but small Korean catches continued until the early 1980s. As to whether Fin Whales have recovered from exploitation in the North Pacific, the evidence is, as for the North Atlantic, mixed. Over 24,000 Fin Whales are recorded caught off coastal Japan and the Korean peninsula from 1910 onwards; annual catches peaked at over 1,000 whales in 1915 and declined steadily thereafter. Fin Whales appear to be rare there now (Miyashita et al. 1995, Kim et al. 2004). Similar patterns of apparent exhaustion of stocks occurred elsewhere.
For example, about 4,000 Fin Whales were taken by stations in British Columbia, western Canada, until catches ceased in 1967, with signs of rapid decline in the last 10 years of operation (Gregr et al. 2000).

Gulf of California
This genetically isolated subpopulation was estimated in 2004, from a mark-recapture analysis of photo-identification data, at 613 (CI 426-970) (Díaz-Guzman 2006). There are no data on population trend for this subpopulation. Telemetry shows year-round residency in the Gulf of California, with seasonal latitudinal movements (Urbán et al. 2005).

East China Sea
There do not appear to be any current or historical estimates of abundance for Fin Whales in the East China Sea.

As with other baleen whales, the IWC has traditionally managed Southern Hemisphere Fin Whales on the basis of six management areas, Areas I through VI, which are longitudinal pie slices 50°-70° wide. The areas were originally chosen as putative management stocks for Humpback Whales, and later used for all baleen whales, with little or no biological support (Donovan 1991). Over 725,000 Fin Whales are recorded caught in the Southern Hemisphere during 1905-76 (IWC 2006b). There was a series of assessments in the 1970s, including a synthesis by Chapman (1976), which was reassessed by Breiwick (1977) and updated (for Areas II-VI) by Allen (1977). A reassessment of Area VI Fin Whales was inconclusive (IWC 1980). These assessments were based on a combination of evidence, including trends in CPUE by whaling fleets, sighting rates by Japanese scouting vessels, and inferences on recruitment and mortality rates from age and length data. Their reliability is questionable on various grounds; for example, the IWC Scientific Committee subsequently determined that CPUE data should only be used for stock assessments when the nature of the whaling operations is fully described (IWC 1989). A reanalysis of the historical data using modern methods and insights is warranted. Less indirect estimates are available from sightings data for more recent times. IWC (1995) gives estimates of 18,000 (CV 0.47) using data from 1966-79, and 15,000 (CV 0.61) using data from 1979-88, for the total population of Fin Whales south of 30°S in summer. These are obtained by extrapolating abundance estimates for the area south of 60°S, from the International Decade of Cetacean Research (IDCR) international surveys, to the area south of 30°S using Japanese scouting vessel data. A slightly finer stratification of the same data yielded estimates of 8,387 for 1966-79 and 15,178 for 1979-88 (IWC 1996; no variances given). Despite their low precision, these estimates suggest that the previous assessments of the populations were seriously over-optimistic. Best (2003) reached a similar conclusion, based on declines of Fin Whale catch and sighting rates of 89-97% on the former South African winter whaling grounds during 1954-75. The most recent estimate, 15,178 for 1979-88, is used for the purpose of this assessment and referred to the year 1983, the middle of the period to which it applies. Use of updated results from subsequent IDCR surveys (Branch and Butterworth 2001) would lead to an estimate of 38,185 referenced to the year 1997 (Mori and Butterworth 2006), but using this for the population trajectory computation would hardly affect the results, because the predicted trajectory passes close to this value anyway. (The trajectory shown in Fig. 1 is for the mature population, and hence not directly comparable.)
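The trajectory computations referred to above fit population-dynamics models to catch series and abundance estimates. As a purely illustrative sketch of the general approach (this is not the Scientific Committee's age-structured model described in the next section, and the parameter values and catch series below are invented), a minimal density-dependent production model looks like this:

```python
def project(k, r, catches, z=2.39):
    """Pella-Tomlinson-style production model.

    k: carrying capacity; r: intrinsic growth rate;
    catches: annual removals; z: shape parameter (2.39 places
    maximum productivity near 0.6*K, a common convention).
    Returns the population trajectory, starting from K.
    """
    n = float(k)
    trajectory = [n]
    for c in catches:
        n = max(n + r * n * (1.0 - (n / k) ** z) - c, 0.0)
        trajectory.append(n)
    return trajectory

# Illustrative only: decades of heavy catches followed by protection.
catches = [8_000] * 40 + [2_000] * 10 + [0] * 20
traj = project(k=400_000, r=0.04, catches=catches)
print(f"minimum: {min(traj):,.0f}, final: {traj[-1]:,.0f}")
```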
Biological parameters and assessment
The generation time for a non-depleted Fin Whale population is estimated to be 25.9 years (Taylor et al. 2007); the corresponding three-generation period is 1929-2007. Estimates of age at sexual maturity for female Fin Whales, based on observed proportions mature by age, are 6-7 years in the Southern Hemisphere from British catches in the 1960s (Lockyer 1972) and Japanese catches in the 1960s and early 1970s (Mizroch 1981), but these values are likely negatively biased due to selection against smaller animals. For the North Atlantic, Gunnlaugsson and Víkingsson (2006) estimated an average of 8.9 years from catches off Iceland during 1967-89, but with some indication of an increase over time, from 7.5 years during 1967-78 to 9.25 years during 1979-89. Aguilar et al. (1988) estimated 7.9 years using the same method from catches off Spain during 1979-84. Slightly different values are obtained using alternative methods. There do not seem to be any precise values for the North Pacific, but Kimura et al. (1958) estimated 8-12 years. For the purpose of this assessment, an age at maturity of eight years is assumed, corresponding to an age at first reproduction of nine years. The values of other biological parameters (age at first capture, net recruitment rate, and natural mortality rate) were taken from the previous Scientific Committee assessments (Allen 1977). Because the available published assessments for this species are not up to date, an updated population assessment is conducted here to enable assessment of the population reduction over the period 1929-2007 relative to the A criterion. While the available data do not permit a scientifically rigorous estimation of the extent of population reduction, it is reasonable to use conventional population assessment methods to provide a crude indication of the extent of possible reduction relative to the criteria. A conventional deterministic age-structured model, with an age at first capture ("recruitment") (ar), an age at first reproduction (am), and linear density-dependence, was applied to the North Pacific, North Atlantic and Southern Hemisphere regions separately. The parameter values are listed in Table 1 in the supplementary material, which forms an integral part of this assessment. The starting year was 1874 in the North Atlantic and 1900 in the North Pacific and Southern Hemisphere. The sex ratio of the population and of the catches is assumed to be 50:50. The results of this population assessment can be found in the supplementary material.

Current Population Trend: Unknown

Habitat and Ecology: The available quantitative evidence suggests that the Fin Whale is a catholic feeder, sometimes preying heavily on fish but mostly on crustaceans. Of stomachs from Icelandic catches, 96% contained krill only, 2.5% a mixture of krill and fish, and 1.6% fish only (Sigurjónsson and Víkingsson 1997); only one of 267 Fin Whales caught in the northeast Pacific off British Columbia, Canada, contained fish (Flinn et al. 2002); and over 99% of stomachs with food in the Antarctic contained krill (Kawamura 1994). On the other hand, Overholtz and Nicolas (1979) reported apparent feeding by Fin Whales on American sand lance (sand eel) Ammodytes americanus in the northwest Atlantic, and Mitchell (1975) found that capelin comprised 80-90% of prey in Fin Whales caught off Newfoundland.
Capelin abundance is extremely variable over time, and Fin Whales may feed opportunistically on capelin in high-capelin years.

Use and Trade: Large-scale commercial harvesting of this species has ceased, but harvesting continues at a smaller scale in the North Atlantic and Antarctic.

Prior to the advent of modern whaling in the late 19th century, Fin Whales were largely immune from human predation because they were too hard to catch. Fin Whales were depleted worldwide by commercial whaling in the 20th century. Fin Whales have been protected in the Southern Hemisphere and North Pacific since 1975, and catches ceased in the North Atlantic by 1990, except for small "aboriginal subsistence" catches off Greenland. Commercial catches resumed off Iceland in 2006, with nine Fin Whales taken that year. A Japanese fleet resumed experimental catches of Fin Whales in the Antarctic in 2005, taking 10 whales in each of 2005/06 and 2006/07, with plans to take 50 per year from the 2007/08 season (IWC 2006a). It seems unlikely that catching of Fin Whales will return to the high levels of previous years, not least because of the limited market demand for whale products. Fin Whales are among the species of large whale most commonly reported in vessel collisions (Laist et al. 2001). Five fatal collisions were recorded off the US east coast during 2000-04 (Cole et al. 2006). Collisions with vessels appear to be a significant, but not necessarily unsustainable, source of mortality for the Mediterranean population (Panigada et al. 2006, Reeves and Notarbartolo di Sciara 2006). Fin Whales are occasionally caught in fishing gear as a by-catch. Four deaths and serious injuries from this source were reported from the eastern US coast during 2000-04 (Cole et al. 2006); recent Japanese Progress Reports to the IWC (www.iwcoffice.org/sci_com/scprogress.htm) reported about one Fin Whale by-caught per year on average. The IWC set catch limits at zero for Fin Whales in the North Pacific and Southern Hemisphere from 1976. The IWC adopted a provision (popularly known as the commercial whaling moratorium) in 1982 to set all catch limits for commercial whaling to zero from 1986. This provision does not apply to Norway or the Russian Federation, which have objected to it. Iceland also considers itself not bound by the provision, based on a reservation attached to its adherence to the treaty governing the IWC. Limited "aboriginal subsistence" whaling is permitted by the IWC for Fin Whales in Greenland. Fin Whales are listed on Appendix I of the Convention on International Trade in Endangered Species (CITES), but this does not apply to Iceland, Norway and Japan, which hold reservations. Fin Whales are also listed on Appendices I and II of the Convention on Migratory Species (CMS). Under the Agreement on the Conservation of Cetaceans of the Black Sea, Mediterranean Sea and Contiguous Atlantic Area (ACCOBAMS), Fin Whales in the Mediterranean, along with other cetaceans, are protected from deliberate killing by signatories to the agreement.

Citation: Reilly, S.B., Bannister, J.L., Best, P.B., Brown, M., Brownell Jr., R.L., Butterworth, D.S., Clapham, P.J., Cooke, J., Donovan, G.P., Urbán, J. & Zerbini, A.N. 2013. Balaenoptera physalus. The IUCN Red List of Threatened Species 2013: e.T2478A44210520. Downloaded on 21 July 2018.
"Heat waves are known to kill hundreds of people in the United States every year and are the leading cause of weather-related fatalities; usually outstripping the combined effects of hurricanes, tornadoes, lightning and flash floods. " "One of the most likely disasters to strike the Central Indiana region is an extreme heat event of considerable duration and strength, the researcher says. Johnson, a geography professor in the School of Liberal Arts at IUPUI, and colleagues of the Indiana University Institute for Research and Social Issues, are currently conducting two studies on the impact of heat waves on vulnerable populations within urbanized areas. The goal is to develop vulnerability models designed to assist emergency personnel in their response and mitigation to heat wave incidents. It is hoped that the models of vulnerability and associated communication interactions developed by CDC will have a significant impact in lowering heat-related mortality and the associated economic cost of the health effects of at-risk populations. These studies are funded by NASA ($828,000.00) and an internal Indiana University awards totaling $75,000.00 with collaborators at CDC's National Center for Environmental Health in Atlanta, Ga. The studies funded by NASA initially involve examining heat-related vulnerability in Phoenix, AZ, Philadelphia, PA and Dayton, OH. Indianapolis will eventually be involved in this study with funding from an IU grant to develop a preliminary heat wave vulnerability system for the Indianapolis area. The models use complex statistical modeling tools and visualization, and space-borne satellite imagery to identify individual “hot spots” within the four cities and develop vulnerability maps based on the occurrence of past mortality during extreme heat events. For interviews with Daniel P. Johnson, please call 317-278-5536 Rich Schneider | Newswise Science News Innovative genetic tests for children with developmental disorders and epilepsy 11.07.2018 | Christian-Albrechts-Universität zu Kiel Oxygen loss in the coastal Baltic Sea is “unprecedentedly severe” 05.07.2018 | European Geosciences Union A new manufacturing technique uses a process similar to newspaper printing to form smoother and more flexible metals for making ultrafast electronic devices. The low-cost process, developed by Purdue University researchers, combines tools already used in industry for manufacturing metals on a large scale, but uses... For the first time ever, scientists have determined the cosmic origin of highest-energy neutrinos. A research group led by IceCube scientist Elisa Resconi, spokesperson of the Collaborative Research Center SFB1258 at the Technical University of Munich (TUM), provides an important piece of evidence that the particles detected by the IceCube neutrino telescope at the South Pole originate from a galaxy four billion light-years away from Earth. To rule out other origins with certainty, the team led by neutrino physicist Elisa Resconi from the Technical University of Munich and multi-wavelength... For the first time a team of researchers have discovered two different phases of magnetic skyrmions in a single material. Physicists of the Technical Universities of Munich and Dresden and the University of Cologne can now better study and understand the properties of these magnetic structures, which are important for both basic research and applications. Whirlpools are an everyday experience in a bath tub: When the water is drained a circular vortex is formed. 
Open Access
Beenomes to Bombyx: future directions in applied insect genomics
© BioMed Central Ltd 2003
Published: 26 February 2003

The recent sequencing of the Anopheles gambiae genome showcases the genetic breadth of insects and a trend towards sequencing organisms directly involved with human welfare. We describe traits in other insect species that make them important candidates for genomics projects, and review several recent workshops aimed at uniting researchers working with insect species to efficiently address problems in medicine, biotechnology, and agriculture.

The recent sequencing of the Anopheles gambiae genome is a watershed event in genomics for two reasons. First, this species is of sufficient phylogenetic distance from the previously sequenced Drosophila melanogaster to provide the best view to date of changes in genome organization and composition across the insects. The 250 million-year spread between these species, abetted by a high rate of sequence evolution, allows genomic comparisons over an evolutionary time-scale equal to that between humans and fish, larger by one-third than that between humans and chickens. Although this is a fraction of the distance covered by insects as a whole, it allows new tests of inferences drawn from Drosophila about gene function in insects in general. The second reason that the Anopheles gambiae genome is a landmark is that Anopheles is the first animal to be sequenced, other than ourselves, whose actions have a strong direct impact on human lives. In the near future such 'applied' genomic projects will probably become the norm, as agencies involved with human health and agriculture develop plans to sequence key pests and beneficial species. This trend is particularly evident in insect genomics. The next two species in the insect genome queue, the honey bee (Apis mellifera) and silkworm moth (Bombyx mori), were selected in part because of their longstanding use in agriculture. Other insect candidates, including another mosquito (Aedes aegypti), the medfly (Ceratitis capitata), and the flour beetle (Tribolium castaneum), also have longstanding histories of research driven by their impacts on humans. In this article, we discuss criteria that might be used to evaluate the candidacy of various insect taxa for whole-genome sequencing. Specifically, we compare and contrast genome size, current genetic knowledge, species diversity, and the human impact of insects from 11 different insect orders, and suggest how scientists and funders could use these criteria to help justify and prioritize future sequencing efforts. In addition, we briefly summarize recent scientific workshops aimed at integrating scientists and research programs focused on questions concerning basic and applied genomics in non-traditional insect species.

Genome sequencing criteria in insects
Here, we use four criteria, from among the many possible, to compare the merits of insects from the 11 different insect orders shown in Figure 1. First, we use genome size, based on estimates in the Animal Genome Size Database, as a predictor of direct sequencing costs. We estimate a mean genome size for each order, weighted at the level of family (for example, the numerous estimates of genome size in the fly family Drosophilidae were averaged and used as a single data point). This correlates well with estimates for the smallest genome in each order, and arguably is a more relevant estimate of the genome size of potential candidate species.
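The family-weighted averaging described above can be made concrete with a small sketch. The genome sizes below are invented placeholders, not the database's actual values; the point is only the weighting scheme, in which multiple estimates within a family are first collapsed to a single data point.

```python
from statistics import mean

# Hypothetical genome-size estimates in megabases, keyed by (family, species).
# Real values would come from the Animal Genome Size Database.
estimates = {
    ("Drosophilidae", "D. melanogaster"): 180,
    ("Drosophilidae", "D. virilis"): 390,
    ("Culicidae", "An. gambiae"): 280,
    ("Culicidae", "Ae. aegypti"): 1300,
}

# Step 1: average within each family, so a heavily studied family
# (e.g. the many Drosophilidae estimates) counts only once.
by_family = {}
for (family, _species), size in estimates.items():
    by_family.setdefault(family, []).append(size)
family_means = {f: mean(sizes) for f, sizes in by_family.items()}

# Step 2: the order-level mean is the mean of the family means.
order_mean = mean(family_means.values())
print(f"Family-weighted mean genome size (toy Diptera data): {order_mean:.0f} Mb")
```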
We assume that sequencing costs increase linearly with genome size, given that any economy of scale achieved in sequencing larger genomes is likely to be offset by a need for higher sequence redundancy prior to assembly of large genomes. Our results indicate strengths in all criteria for the holometabolous insect orders (those with complete metamorphosis) - Hymenoptera, Diptera, Lepidoptera and Coleoptera - as predicted previously (see Figure 1 legend for the common names of insects in these orders). Coleoptera are the most speciose worldwide, but have slightly lower economic impact than Lepidoptera, Hymenoptera, and Diptera. Beyond the holometabolous insects, the order Homoptera stands out for having species with generally small genomes and great economic and agricultural importance. The representatives from the primitive insect orders Thysanura and Odonata, while valuable from the standpoint of phylogenetic breadth, fare poorly compared with other orders under our criteria. The emphasis in these criteria on insects with recognized human impact and ecological importance is not meant to negate the value of model insect species as sequencing candidates. Model insect genomes can provide general insights into biological mechanisms, gene structure and function, and the conserved evolutionary processes that select for certain genetic traits. Thus model organisms, as illustrated by species of Drosophila, yield invaluable insights for all insect genomes. And as a final caveat, we should emphasize that although we present several ways to compare the merits of different insect groups, we do not mean to imply that these are the only criteria useful for such decisions. (Our views are our own and need not reflect the opinions of our agency or the US government.)

Recent insect genome collaborations, and progress
Several recent workshops have been held with a specific focus on insect genomics and its applications. The Comparative Insect Genomics Workshop (Washington DC, USA; October 2001; sponsored by the US Department of Agriculture) was the first international meeting of scientists from academia, private industry, and government with the purpose of addressing and promoting the broad field of insect genomics. Discussions at this meeting focused on current approaches for analyzing and comparing genomes, the evaluation of candidate insects for genome sequencing, and ways to coordinate genomic efforts and ensure public access to materials and datasets. Leaders from the fruitfly, nematode, plant, and microbial genomics communities discussed the evolution of their own genome initiatives, and offered critiques of impending projects in insects. Because Drosophila-associated projects have served as models for all insect genomicists, there was substantial discussion of how new insect projects might benefit from, and contribute to, studies involving Drosophila. FlyBase, a key database for Drosophila genetics, forms one venue for comparative analyses in insects that is already widely used by those working on other insect species. Similar resources available through the US National Institutes of Health and the Gene Ontology Consortium were also identified as being key to generating testable inferences for new genome sequences. Recognizing the success of the completed and ongoing dipteran genome projects, several working groups have formed to develop and promote genome projects in new insect groups.
Within the Hymenoptera, an international genomics effort has been emerging for several years around the honey bee, arguably the best studied and economically most important member of this group. Propelled in part by the Comparative Insect Genomics Workshop, a successful funding white paper was submitted to the US National Human Genome Research Institute for honey bee genome sequencing (now nearing completion) at the Baylor College of Medicine Genome Center. A more recent Honey Bee Biotechnology Workshop (Sapporo, Japan; July 2002) focused both on details of this genome project and on independent genomics efforts. New applications of functional genomic techniques described there foretell the many ways researchers will use genomic data to answer basic and applied questions in this species. As one example, two lab groups discussed the successful application of RNA interference methods in honey bee embryos and brains. Within the Lepidoptera, an international genomics effort has centered on the economically important silkworm moth, for which a completed genome sequence is expected in 2004. The recent International Workshop of Lepidopteran Genomics (Tsukuba, Japan; September 2002) focused on key aspects of this genome project, most notably the integration of large-insert libraries, expressed sequence tags (ESTs), and applications of transgenic technologies. The International Lepidopteran Genome Project has been charged with applying new technologies to compare the genomes of a growing list of agriculturally important moths and butterflies. Among these, the crop-feeding heliothine moths have long been appreciated as significant genome candidates. One privately funded genome project in this group, involving the tobacco budworm Heliothis virescens, is apparently complete but remains inaccessible to the public. By contrast, the Bombyx mori project and projects involving additional heliothine species are expected to be carried out with full public access. Although no formal gatherings have been held to date, working groups representing additional insect orders (for example within the Coleoptera and Homoptera) continue to develop within the insect genomics research community. Well-defined and concerted research efforts, combined with advancing technologies and access to post-genomic tools and data, will speed advances in these taxa. As one example, functional studies using RNA interference and related methods are now feasible for all insect species, using orthologs identified through matches with current genome projects. Additionally, newly available large-insert libraries can be used to begin testing for synteny and structure in diverse insect genomes. Finally, comparative genomics databases from flies, moths, and bees will undoubtedly be used to inform other genomics projects. In conclusion, the field of insect genomics is experiencing an exceptional year that should invigorate insect genetic studies. The flood of genome sequences is also likely to affect genetic studies more broadly. New estimates suggest that 61% and 66% of protein coding sequences from Drosophila and Anopheles, respectively, have known orthologs in non-insect genomes (human, mouse, Arabidopsis, worm, yeast, zebrafish, rat and rice [15, 16]). This upward trend (only 20-30% of Drosophila genes were identified as having non-insect matches two years ago) is certain to continue with incoming genome data for bees, moths, and their relatives.
Researchers studying insect genomes can look forward to using these shared traits to better address general problems in medicine, biotechnology, agriculture, and evolutionary biology.

References
- Holt RA, Subramanian GM, Halpern A, Sutton GG, Charlab R, Nusskern DR, Wincker P, Clark AG, Ribeiro JM, Wides R, et al: The genome sequence of the malaria mosquito Anopheles gambiae. Science 2002, 298: 129-149. doi:10.1126/science.1076181.
- Christophides GK, Zdobnov E, Barillas-Mury C, Birney E, Blandin S, Blass C, Brey PT, Collins FH, Danielli A, Dimopoulos G, et al: Immunity-related genes and gene families in Anopheles gambiae. Science 2002, 298: 159-165. doi:10.1126/science.1077136.
- Wheeler WC, Whiting M, Wheeler QD, Carpenter JM: The phylogeny of the extant hexapod orders. Cladistics 2001, 17: 113-169. doi:10.1006/clad.2000.0147.
- Gregory T: Animal Genome Size Database. [http://www.genomesize.com]
- GenBank. [http://ncbi.nih.gov/Genbank]
- Borror DJ, Triplehorn CA, Johnson NF: An Introduction to the Study of Insects. Sixth edition. Philadelphia: Saunders College; 1989.
- CAB Abstracts. [http://www.cabi-publishing.org/Products/Database/Abstracts/Index.asp]
- Kaufman TC, Severson DW, Robinson GE: The Anopheles genome and comparative insect genomics. Science 2002, 298: 97-98. doi:10.1126/science.1077901.
- FlyBase. [http://flybase.bio.indiana.edu]
- National Center for Biotechnology Information. [http://www.ncbi.nlm.nih.gov]
- Gene Ontology Consortium. [http://www.geneontology.org]
- International Lepidopteran Genome Project. [http://www.ab.a.u-tokyo.ac.jp/lep-genome]
- SilkBase. [http://www.ab.a.u-tokyo.ac.jp/silkbase]
- GENEFinder Resource. [http://hbz.tamu.edu]
- Gilbert DG: euGenes: a eukaryote genome information system. Nucleic Acids Res 2002, 30: 145-148. doi:10.1093/nar/30.1.145.
- euGenes. July 2002. [http://iubio.bio.indiana.edu:8089]
- Rubin GM, Yandell MD, Wortman JR, Gabor Miklos GL, Nelson CR, Hariharan IK, Fortini ME, Li PW, Apweiler R, Fleischmann W, et al: Comparative genomics of the eukaryotes. Science 2000, 287: 2204-2215. doi:10.1126/science.287.5461.2204.
High-resolution radar maps of the lunar surface at 3.8-cm wavelength

The entire earth-facing lunar surface has been mapped at a resolution of 2 km using the 3.8-cm radar of Haystack Observatory. The observations yield the distribution of relative radar backscattering efficiency with an accuracy of about 10% for both the polarized (primarily quasispecular or coherent) and depolarized (diffuse or incoherent) scattered components. The results show a variety of discrete radar features, many of which are correlated with craters or other features of optical photographs. Particular interest, however, attaches to those features with substantially different radio and optical contrasts. An anomaly near 63° is noted in the mean angular scattering law obtained from a summary of the radar data.

Keywords: Radar; Radar Data; Lunar Surface; Angular Scatter; Optical Photograph
Vascular plants: ferns and relatives - Plant Biology

Vascular plants—Ferns and relatives
These plants are seedless, but unlike the bryophytes they do have vascular tissue (xylem and phloem). Because of the presence of vascular tissue, the leaves of ferns and their relatives are better organized than those of the mosses and liverworts.

1. Division Psilotophyta
The sporophytes in this division have neither true leaves nor roots; their stems and rhizomes fork evenly. Whisk ferns—Whisk ferns are the simplest of the vascular plants. They consist of evenly forking stems with small protuberances called enations. They lack leaves and roots. These ferns have a central vascular cylinder composed of xylem and phloem. Reproduction—Whisk ferns reproduce via gametangia. Spores germinate into tiny saprophytic gametophytes, on the surfaces of which antheridia and archegonia are scattered. The resulting zygote develops a foot and a rhizome, and an upright stem is produced when the foot separates from the rhizome.

2. Division Lycophyta
These plants have stems that are covered with photosynthetic microphylls. Microphylls are leaves with a single vein and a trace that is not associated with a leaf gap. Club mosses—There are two types of club mosses that still have living representatives. Ground pines develop sporangia in the axils of sporophylls, and each gametophyte may be capable of producing several sporophytes. The second type of club moss comprises the spike mosses. These plants are heterosporous and have a ligule, or little tongue, on each microphyll. The microspores develop into male gametophytes with antheridia (sperm-producing), while the megaspores develop into female gametophytes with archegonia (egg-producing). The biggest differences between the ground pines and the spike mosses are the presence of the ligule in spike mosses and the fact that spike mosses produce two types of spores and gametophytes (heterospory). Quillworts are found partially submerged in water for at least part of the year. Their microphylls (leaves) look somewhat like porcupine quills, although they lack the rigidity of actual porcupine quills. The microphylls arise from a corm-like base, which has a cambium that remains active for many years. Club moss spores have many uses, including flash powder, medicine, talcum powder, ornamental uses and novelty items.

3. Division Sphenophyta
The plants in this division have ribbed stems that contain silica deposits in the epidermal cells. Their scale-like microphylls lack chlorophyll; the silica in the stems makes these plants useful for scouring. Horsetails and scouring rushes occur in both branched and unbranched forms. Either way, they are jointed stems with small whorls of scale-like leaves. Horsetails, or Equisetum, have stems that are hollow in the center and contain cylinders of carinal and vallecular canals. The hollow stems arise from extensively branching rhizomes just beneath the surface of the soil. Some types of horsetails have non-photosynthetic stems. In all species, they reproduce via strobili (cones) produced in the spring. Horsetail spores have ribbon-like elaters that are sensitive to humidity. An interesting note about horsetails: usually equal numbers of male and female gametophytes are produced, but the female gametophytes may become bisexual, and the development of more than one sporophyte from a gametophyte is common. As mentioned, horsetails have been used for scouring.
They can be eaten if the silica is removed; however, they really don't make for a feast. Plants from this division have also been used for medicinal purposes, including as a diuretic and in treatments for tuberculosis and venereal diseases. Horsetails have been used as an ingredient in shampoos and metal polish.

4. Division Pterophyta
Ferns are the most common and best recognized examples of the vascular seedless plants. Fern leaves are megaphylls—leaves with more than one vein and a leaf trace/leaf gap association—which are large and usually subdivided into many lobes; hence the 'finger' look of a fern plant. Fern fronds, as the leaves are called, are typically dissected and feathery in appearance, although they vary quite a bit in external structure and form. Fern fronds start as croziers, or fiddleheads, which unfurl into the main leaf form. On the underside of the frond, patches of sporangia can be found. These sporangia are usually grouped in clusters called sori and are sometimes covered by a clear flap called the indusium. This arrangement gives the sporangia maximum protection while still allowing spore dispersal. Individual sporangia have a 'spring-loaded' annulus that catapults mature spores out of the sporangium. The gametophytes of ferns are called prothalli and develop after the spores germinate. Prothalli contain both archegonia and antheridia, and only one zygote develops into a sporophyte. Ferns are used for ornamental purposes, as stuffing material for bedding in tropical regions, in weaving baskets and hats, in brewing some types of ale, and in numerous folk medicine practices.
Enjoy some of the extensive magazine, newspaper and web-based coverage of our work through the years. Enjoy a sampling of print media featuring Dr. Nichols' efforts, collected on ISSUU.

When S. Hoyt Peckham first arrived in Baja California, Mexico, to study the foraging ecology of loggerhead turtles, he had a demoralizing surprise: the beaches were littered with the carcasses of the endangered reptiles. After interviewing local fishers, Peckham discovered that they were accidentally catching many of the turtles and then tossing them overboard. So Peckham, a graduate student at the University of California, Santa Cruz, changed the focus of his dissertation to this problem, called by-catch. Now, six years later, he and his colleagues are publishing another surprise. In the 17 October issue of PLoS ONE, the team reports that just 80 small fishing skiffs in Baja have an enormous impact: they kill roughly the same number of loggerhead turtles as many hundreds of fishing vessels in the North Pacific combined, nearly 1,000 turtles a year. "That number is shocking," says marine scientist Carl Safina of Stony Brook University in New York state. But during Peckham's fieldwork, he also cultivated relationships with fishers that are now paying off for turtle conservation. Loggerhead turtles (Caretta caretta) live in the Atlantic, Pacific, and Indian oceans. Those in the Pacific nest on beaches in southern Japan, then as juveniles swim 12,000 kilometers to Baja, where they feed on abundant crabs. It is a dangerous journey: fishers kill an average of 1,300 of the turtles a year in the North Pacific by snagging them on longlines barbed with thousands of hooks. Because of all these threats, the World Conservation Union considers the species endangered throughout its range. But local fishers in Baja had a hard time believing the official status of the loggerheads. Peckham recalls that when he talked with them, many would doubt him because they had caught so many: "I've heard that so many times: 'What do you mean they're endangered?'" Peckham's first step was to figure out where the turtles were living. Starting in 1996, co-author Wallace Nichols of the California Academy of Sciences in San Francisco and others had begun outfitting loggerheads in Baja with telemetry devices. The team discovered that 26 of 30 loggerheads spent most of their time within the fishing grounds of a dozen small fleets, which typically use 6- to 9-meter-long boats called pangas.
posted by Noelle

In an experiment, 1.015 g of a metal carbonate containing an unknown metal M is heated to give the metal oxide and 0.361 g of CO2:

MCO3(s) + heat → MO(s) + CO2(g)

What is the identity of the metal M?

mass MO = 1.015 - 0.361 = 0.654 g
moles CO2 = 0.361/44 = ??
moles MO = same as moles CO2 (the stoichiometry is 1:1)
moles = grams/molar mass. You have moles and grams, so solve for the molar mass of MO, then subtract 16 (the atomic mass of oxygen) to arrive at the atomic mass of M.
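Following those steps, a short script makes the arithmetic explicit (the molar masses are standard values; identifying the metal from the result is our own reading, not part of the original answer):

```python
M_CO2 = 44.01   # molar mass of CO2, g/mol
M_O = 16.00     # atomic mass of oxygen, g/mol

mass_carbonate = 1.015                 # g of MCO3 heated
mass_co2 = 0.361                       # g of CO2 released
mass_oxide = mass_carbonate - mass_co2 # 0.654 g of MO left behind

moles_co2 = mass_co2 / M_CO2           # ~0.00820 mol
moles_oxide = moles_co2                # 1:1 stoichiometry, MCO3 -> MO + CO2

molar_mass_oxide = mass_oxide / moles_oxide  # ~79.7 g/mol
atomic_mass_m = molar_mass_oxide - M_O       # ~63.7 g/mol

print(f"Atomic mass of M: {atomic_mass_m:.1f} g/mol")
# ~63.7 g/mol is closest to copper (63.55), suggesting M = Cu.
```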
University of Manchester, UK
ScientificTracks Abstracts: J Aeronaut Aerospace Eng

Detecting changes in random processes as quickly and accurately as possible is important in many scenarios. Examples include detecting a plane using radar, identifying nuclear material at ports, reacting to breakages in atomic clocks on satellites, and determining the best time to buy or sell stocks and shares. Using advanced applied probability, it is possible to provide an optimal time to stop and declare that a change has occurred (optimal in the sense of minimizing the delay after the change) with a fixed probability of error. This collaborative work looks at problems of this type applied to detecting breakages in clocks on board satellites. The sophisticated solutions of these optimal stopping problems show that the first hitting time of a test statistic to a defined boundary is the quickest possible decision time for a given level of accuracy (see figure). This means that no other method can outperform the algorithms used, which is a valuable asset in high-performance systems. This research has two high-profile satellite applications: the New Horizons mission and the Galileo project. Most recently, solutions of this type have helped engineers from NASA detect an unusual change in the two on-board quartz clocks (which are relied upon to beam accurate data back to Earth) as the New Horizons spacecraft passed Pluto. These methods are also helping resolve similar problems in detecting breakages in the atomic clocks used in the Galileo project; Galileo, the first global navigation system intended primarily for civilian use, is being developed by the European Union. The accuracy of these clocks is critical to accurate positioning: a 100-nanosecond error means positioning could be out by up to 30 meters on the ground.

Peter Johnson is currently pursuing postdoctoral studies at the University of Manchester. He has mainly worked on optimal stopping and free-boundary problems in the area of sequential analysis, and also has a keen interest in "HMM tracking algorithms, filtering theory and non-linear optimal stopping problems". His research area is applied probability.

Email: [email protected]
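The "first hitting time of a test statistic to a defined boundary" that the abstract describes is the structure of classical quickest-detection procedures such as CUSUM. The sketch below is a generic illustration of that idea, not the authors' algorithm; the Gaussian distributions, threshold, and simulated clock-error data are all assumptions chosen for the example.

```python
import random

def cusum_detect(samples, mu0, mu1, sigma, threshold):
    """Declare a change the first time the CUSUM statistic hits the boundary.

    The statistic accumulates the log-likelihood ratio of 'changed'
    (mean mu1) versus 'unchanged' (mean mu0) Gaussian observations,
    floored at zero. Returns the 1-based index of the alarm, or None.
    """
    s = 0.0
    for t, x in enumerate(samples, start=1):
        # log N(x; mu1, sigma) - log N(x; mu0, sigma)
        llr = ((x - mu0) ** 2 - (x - mu1) ** 2) / (2 * sigma ** 2)
        s = max(0.0, s + llr)
        if s >= threshold:
            return t
    return None

random.seed(1)
# Simulated clock-error increments: mean 0.0 before the change at t = 200,
# mean 0.5 afterwards.
data = [random.gauss(0.0, 1.0) for _ in range(200)] + \
       [random.gauss(0.5, 1.0) for _ in range(200)]
print("alarm at t =", cusum_detect(data, mu0=0.0, mu1=0.5, sigma=1.0, threshold=8.0))
```

Raising the threshold lowers the false-alarm probability at the cost of a longer detection delay, which is exactly the trade-off the optimal stopping formulation makes precise.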
He's known for "Boyle's Law," which describes the relationship between the absolute pressure and volume of a gas. His greatest accomplishment in chemistry is generally held to be that he changed the science from a qualitative to a quantitative one.

Lavoisier is most noted for his discovery of the role oxygen plays in combustion. He recognized and named oxygen and hydrogen, and he was also the first to establish that sulfur is an element rather than a compound. He also discovered that although matter may change its form or shape, its mass always stays the same.

Joseph Priestley is usually credited with discovering oxygen, having isolated it in its gaseous state, although Carl Scheele and Antoine Lavoisier also claimed to have discovered oxygen. He is also known for inventing soda water, for his writings on electricity, and for his discovery of many "airs" (gases).

He discovered oxygen, and identified molybdenum, tungsten, barium, hydrogen, and chlorine. He also discovered the organic acids tartaric, oxalic, uric, lactic, and citric, as well as hydrofluoric, hydrocyanic, and arsenic acids.

His largest accomplishment was disproving Berthollet with the law of definite proportions, which is sometimes also known as Proust's Law.

John Dalton is best known for his pioneering work in the development of modern atomic theory, and for his research on colorblindness.

He made fundamental contributions to the field of analytical geometry and was a pioneer in the investigations of cathode rays that led eventually to the discovery of the electron. He estimated the mass of cathode rays by measuring the heat generated when the rays hit a thermal junction and comparing this with the magnetic deflection of the rays. His experiments suggested not only that cathode rays were over 1000 times lighter than the hydrogen atom, but also that their mass was the same whichever type of atom they came from. He concluded that the rays were composed of very light, negatively charged particles which were a universal building block of atoms.

Perrin showed that cathode rays were negatively charged in nature. He computed Avogadro's number through several methods. He explained solar energy by the thermonuclear reactions of hydrogen.

Crookes identified the first known sample of helium. He then turned his attention to the newly discovered phenomenon of radioactivity.

Won a Nobel Prize in Physics in 1903, shared with her husband Pierre; in 1911 she also won the Nobel Prize in Chemistry.

Pierre and one of his students made the first discovery of nuclear energy. He also investigated the radiation emissions of radioactive substances, and through the use of magnetic fields was able to show that some of the emissions were positively charged, some were negative and some were neutral.

Soddy showed that an atom moves lower in atomic number by two places on alpha emission, and higher by one place on beta emission.

Won a Nobel Prize for discovering the neutron. He is known for being the father of nuclear physics. James discovered the neutron and went on to measure its mass.
Root mean square

In statistics and its applications, the root mean square (abbreviated RMS or rms) is defined as the square root of the mean square (the arithmetic mean of the squares of a set of numbers). The RMS is also known as the quadratic mean and is a particular case of the generalized mean with exponent 2. RMS can also be defined for a continuously varying function in terms of an integral of the squares of the instantaneous values during a cycle. The RMS value of a set of values (or a continuous-time waveform) is the square root of the arithmetic mean of the squares of the values, or the square of the function that defines the continuous waveform. In physics, the RMS current is the "value of the direct current that dissipates the same power in a resistor."

In the case of a set of n values $\{x_1, x_2, \dots, x_n\}$, the RMS is

$$x_\mathrm{RMS} = \sqrt{\frac{1}{n}\left(x_1^2 + x_2^2 + \cdots + x_n^2\right)}.$$

The corresponding formula for a continuous function (or waveform) f(t) defined over the interval $T_1 \le t \le T_2$ is

$$f_\mathrm{RMS} = \sqrt{\frac{1}{T_2 - T_1} \int_{T_1}^{T_2} [f(t)]^2\, dt},$$

and the RMS for a function over all time is

$$f_\mathrm{RMS} = \lim_{T \to \infty} \sqrt{\frac{1}{2T} \int_{-T}^{T} [f(t)]^2\, dt}.$$

The RMS over all time of a periodic function is equal to the RMS of one period of the function. The RMS value of a continuous function or signal can be approximated by taking the RMS of a sequence of equally spaced samples. Additionally, the RMS value of various waveforms can also be determined without calculus, as shown by Cartwright.

In common waveforms
If the waveform is a pure sine wave, the relationships between amplitudes (peak-to-peak, peak) and RMS are fixed and known, as they are for any continuous periodic wave. However, this is not true for an arbitrary waveform, which may or may not be periodic or continuous. For a zero-mean sine wave, the relationship between RMS and peak-to-peak amplitude is

$$\text{peak-to-peak} = 2\sqrt{2} \times \mathrm{RMS} \approx 2.8 \times \mathrm{RMS}.$$

For other waveforms, the relationships are not the same as they are for sine waves. (A table listing common waveforms, including the DC-shifted square wave and the modified sine wave, together with their RMS values, is omitted here.)

In waveform combinations
Waveforms made by summing known simple waveforms have an RMS that is the root of the sum of squares of the component RMS values, if the component waveforms are orthogonal (that is, if the average of the product of one simple waveform with another is zero for all pairs other than a waveform times itself):

$$\mathrm{RMS}_\text{total} = \sqrt{\mathrm{RMS}_1^2 + \mathrm{RMS}_2^2 + \cdots + \mathrm{RMS}_n^2}.$$

(If the waveforms are in phase, then their RMS amplitudes sum directly.)

In electrical engineering
A special case of RMS of waveform combinations is

$$\mathrm{RMS}_\text{total} = \sqrt{\mathrm{RMS}_\mathrm{DC}^2 + \mathrm{RMS}_\mathrm{AC}^2},$$

where the signal is decomposed into its DC (mean) component and the remaining AC component.

Average electrical power
Electrical engineers often need to know the power, P, dissipated by an electrical resistance, R. It is easy to do the calculation when there is a constant current, I, through the resistance. For a load of R ohms, power is defined simply as

$$P = I^2 R.$$

However, if the current is a time-varying function, I(t), this formula must be extended to reflect the fact that the current (and thus the instantaneous power) is varying over time. If the function is periodic (such as household AC power), it is still meaningful to discuss the average power dissipated over time, which is calculated by taking the average power dissipation:

$$P_\text{avg} = \langle I(t)^2 R \rangle \quad \text{(where } \langle \cdot \rangle \text{ denotes the mean of a function)}$$
$$= \langle I(t)^2 \rangle R \quad \text{(as } R \text{ does not vary over time, it can be factored out)}$$
$$= I_\mathrm{RMS}^2 R \quad \text{(by definition of RMS).}$$

So, the RMS value, $I_\mathrm{RMS}$, of the function I(t) is the constant current that yields the same power dissipation as the time-averaged power dissipation of the current I(t).
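To make the equivalence concrete, here is a small numerical check (our own example, using NumPy): the time-averaged power of a sinusoidal current in a resistor matches $I_\mathrm{RMS}^2 R$, with $I_\mathrm{RMS} \approx I_p/\sqrt{2}$.

```python
import numpy as np

R = 8.0                 # resistance in ohms
I_p = 2.0               # peak current in amps
t = np.linspace(0.0, 1.0, 100_000, endpoint=False)  # one 1-second period
i = I_p * np.sin(2 * np.pi * t)                      # 1 Hz sinusoidal current

p_avg = np.mean(i**2 * R)        # time-averaged instantaneous power
i_rms = np.sqrt(np.mean(i**2))   # RMS current by definition

print(f"I_RMS = {i_rms:.4f} A   (I_p/sqrt(2) = {I_p/np.sqrt(2):.4f} A)")
print(f"mean power = {p_avg:.4f} W   (I_RMS^2 * R = {i_rms**2 * R:.4f} W)")
```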
Average power can also be found using the same method: for a time-varying voltage, V(t), with RMS value $V_\mathrm{RMS}$,

$$P_\text{avg} = \frac{V_\mathrm{RMS}^2}{R}.$$

By taking the square root of both these equations and multiplying them together, the power is found to be

$$P_\text{avg} = I_\mathrm{RMS} \times V_\mathrm{RMS}.$$

Both derivations depend on voltage and current being proportional (i.e., the load, R, is purely resistive). Reactive loads (i.e., loads capable of not just dissipating energy but also storing it) are discussed under the topic of AC power.

In the common case of alternating current when I(t) is a sinusoidal current, as is approximately true for mains power, the RMS value is easy to calculate from the continuous case equation above. If $I_p$ is defined to be the peak current, then

$$I_\mathrm{RMS} = \sqrt{\frac{1}{T_2 - T_1} \int_{T_1}^{T_2} [I_p \sin(\omega t)]^2\, dt},$$

where t is time and ω is the angular frequency (ω = 2π/T, where T is the period of the wave). Since $I_p$ is a positive constant,

$$I_\mathrm{RMS} = I_p \sqrt{\frac{1}{T_2 - T_1} \int_{T_1}^{T_2} \sin^2(\omega t)\, dt}.$$

Using a trigonometric identity to eliminate the squaring of the trig function,

$$\sin^2(\omega t) = \frac{1 - \cos(2\omega t)}{2},$$

but since the interval is a whole number of complete cycles (per the definition of RMS), the cosine terms integrate to zero, leaving

$$I_\mathrm{RMS} = \frac{I_p}{\sqrt{2}}.$$

A similar analysis leads to the analogous equation for sinusoidal voltage,

$$V_\mathrm{RMS} = \frac{V_p}{\sqrt{2}},$$

where $I_p$ represents the peak current and $V_p$ represents the peak voltage. Because of their usefulness in carrying out power calculations, listed voltages for power outlets (e.g., 120 V in the USA, or 230 V in Europe) are almost always quoted in RMS values, and not peak values. Peak values can be calculated from RMS values from the above formula, which implies $V_p = V_\mathrm{RMS} \times \sqrt{2}$, assuming the source is a pure sine wave. Thus the peak value of the mains voltage in the USA is about 120 × √2, or about 170 volts. The peak-to-peak voltage, being double this, is about 340 volts. A similar calculation indicates that the peak mains voltage in Europe is about 325 volts, and the peak-to-peak mains voltage about 650 volts. RMS quantities such as electric current are usually calculated over one cycle. However, for some purposes the RMS current over a longer period is required when calculating transmission power losses. The same principle applies, and (for example) a current of 10 amps used for 12 hours of each 24-hour day represents an average current of 5 amps, but an RMS current of about 7.07 amps, in the long term. The term "RMS power" is sometimes erroneously used in the audio industry as a synonym for "mean power" or "average power" (it is proportional to the square of the RMS voltage or RMS current in a resistive load). For a discussion of audio power measurements and their shortcomings, see Audio power.

In physics, the RMS speed of molecules in an ideal gas is

$$v_\mathrm{RMS} = \sqrt{\frac{3RT}{M}},$$

where R represents the ideal gas constant, 8.314 J/(mol·K), T is the temperature of the gas in kelvins, and M is the molar mass of the gas in kilograms per mole. The generally accepted terminology for speed as compared to velocity is that the former is the scalar magnitude of the latter. Therefore, although the average speed is between zero and the RMS speed, the average velocity for a stationary gas is zero.

When two data sets (one from theoretical prediction and the other from actual measurement of some physical variable, for instance) are compared, the RMS of the pairwise differences of the two data sets can serve as a measure of how far on average the error is from 0. The mean of the pairwise differences does not measure the variability of the difference, and the variability as indicated by the standard deviation is around the mean instead of 0. Therefore, the RMS of the differences is a meaningful measure of the error.

In frequency domain
The RMS can be computed in the frequency domain, using Parseval's theorem.
For a sampled signal $x[n] = x(n\,\Delta t)$, where $\Delta t$ is the sampling period, Parseval's theorem gives

$$\sum_{n=1}^{N} x^2[n] = \frac{1}{N}\sum_{m=1}^{N} |X[m]|^2,$$

where $X[m] = \mathrm{FFT}\{x[n]\}$ and $N$ is the number of samples and FFT coefficients. In this case, the RMS computed in the time domain is the same as in the frequency domain:

$$\mathrm{RMS}\{x[n]\} = \sqrt{\frac{1}{N}\sum_{n} x^2[n]} = \sqrt{\frac{1}{N^2}\sum_{m} |X[m]|^2}.$$

Relationship to other statistics

The RMS of a set of values is related to its arithmetic mean $\bar{x}$ and standard deviation $\sigma_x$ by

$$x_{\mathrm{RMS}}^2 = \bar{x}^2 + \sigma_x^2.$$

From this it is clear that the RMS value is always greater than or equal to the magnitude of the average, in that the RMS includes the squared deviation ("error") as well.

Physical scientists often use the term "root mean square" as a synonym for standard deviation when it can be assumed the input signal has zero mean, i.e., referring to the square root of the mean squared deviation of a signal from a given baseline or fit. This is useful for electrical engineers in calculating the "AC only" RMS of a signal. Standard deviation being the root mean square of a signal's variation about the mean, rather than about 0, the DC component is removed (i.e., RMS(signal) = stdev(signal) if the mean signal is 0).

See also
- Central moment
- Geometric mean
- L2 norm
- Least squares
- Mean squared displacement
- Table of mathematical symbols
- True RMS converter
- Average rectified value (ARV)

References
- A Dictionary of Physics (6th ed.). Oxford University Press. 2009. ISBN 9780199233991.
- Cartwright, Kenneth V. (Fall 2007). "Determining the Effective or RMS Voltage of Various Waveforms without Calculus" (PDF). Technology Interface. 8 (1): 20 pages.
- Nastase, Adrian S. "How to Derive the RMS Value of Pulse and Square Waveforms". MasteringElectronicsDesign.com. Retrieved 21 January 2015.
- Bissell, Chris C.; Chapman, David A. (1992). Digital Signal Transmission (2nd ed.). Cambridge University Press. p. 64. ISBN 978-0-521-42557-5.
- "ROOT, TH1::GetRMS".
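The time-domain/frequency-domain equivalence described above is easy to check numerically. Below is a minimal NumPy sketch (an illustration added here, not from the article); NumPy's fft is unnormalized, matching the convention used in the identity above:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=1024)                  # arbitrary test signal

rms_time = np.sqrt(np.mean(x**2))

X = np.fft.fft(x)                          # unnormalized DFT coefficients
rms_freq = np.sqrt(np.sum(np.abs(X)**2)) / len(x)   # sqrt(sum |X|^2) / N

print(rms_time, rms_freq)                  # equal up to floating-point error
```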
<urn:uuid:f949f3f3-19fb-432b-89d3-63eec3279d17>
4.0625
1,989
Knowledge Article
Science & Tech.
52.16106
95,482,367
The Geostrophic Component

In this chapter the geostrophic component (gt) is analysed and related to the synoptic pressure field. A simple model is proposed to predict the geostrophic wind from the configuration of low and high pressure systems, and to establish the extent of association between this wind, which describes a meteorological phenomenon, and the geostrophic component (gt) obtained in the last chapter by filtering the raw data series (wt). It is shown that the model is fairly robust towards its parameterization and that it achieves good agreement between (gt) and the predicted geostrophic wind despite its simplicity. It can therefore be assumed that (gt) captures the main characteristics of the geostrophic wind and that the initial decomposition is physically meaningful. We then go on to specify particular synoptic configurations in terms of wind speed and direction, which will be referred to as synoptic states. The objective is to find regularities in the passage of high and low pressure systems, and to establish basic weather patterns. It is demonstrated that a complete partition of (gt) into the synoptic states roughly reflects the seasonal weather patterns as outlined in section 1.2.

Keywords: Pressure System, Central Pressure, Geostrophic Wind, Synoptic State, Synoptic Chart
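For readers unfamiliar with the quantity being modelled: the geostrophic wind follows from the balance between the horizontal pressure-gradient force and the Coriolis force. The sketch below implements that standard textbook relation, not the chapter's actual model (whose details are behind the preview); the latitude, density, and gradient values are arbitrary illustrations:

```python
import numpy as np

def geostrophic_wind(dp_dx, dp_dy, lat_deg, rho=1.225):
    """Geostrophic wind components from horizontal pressure gradients (Pa/m).

    u_g = -(1 / (rho * f)) * dp/dy,   v_g = (1 / (rho * f)) * dp/dx,
    with f = 2 * Omega * sin(latitude) the Coriolis parameter.
    """
    omega = 7.2921e-5                          # Earth's rotation rate, rad/s
    f = 2 * omega * np.sin(np.radians(lat_deg))
    u_g = -dp_dy / (rho * f)
    v_g = dp_dx / (rho * f)
    return u_g, v_g

# A 1 hPa per 100 km northward gradient at 55 degrees N gives roughly 7 m/s.
print(geostrophic_wind(0.0, 100.0 / 100_000.0, 55.0))
```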
<urn:uuid:5ea8a110-28a6-4867-a259-abc4f6006d22>
3.078125
272
Truncated
Science & Tech.
30.184179
95,482,369
Throughout history, mankind has used the natural energy of the sun, known as solar energy, to meet its energy needs. Solar power involves using the sun's emissions of heat and light to produce electrical or thermal energy. The sunlight reaching the Earth's surface carries roughly 10,000 times more energy than we consume, making the sun the most inexhaustible renewable source of energy known. Numerous devices for collecting solar energy and converting it into electricity have been developed over the years, and solar energy is now being used in a variety of ways.

While solar energy has been available to mankind since prehistoric times, we have not always been able to use it as effectively as other sources because of a lack of technology. Solar energy may be used for simple purposes, like drying clothes, cooking, and heating swimming pools and buildings, or for more complex tasks like powering telecommunication towers, solar cars, and satellites. Solar power technologies can be divided into two main groups: solar thermal technologies and photovoltaics. The latter convert sunlight directly into electricity, while solar thermal technologies use solar energy to convert light to heat and then to electrical energy. Solar thermal technologies being developed include concentrating solar power systems, flat plate solar collectors, and passive solar heating.

Concentrating solar power technologies generate electricity from heat gathered by collectors. Concentrating solar collectors typically use reflective materials such as mirrors or lenses to concentrate the sun's energy into heat, which is then converted into electricity. This can be done in three ways. The first method is the parabolic trough system, which uses curved mirrors to concentrate the sun's heat onto a tube containing a fluid, usually oil. The hot oil then boils water to produce steam, which is used to generate electricity. Alternatively, mirrors in the shape of a dish can also be used to concentrate...
<urn:uuid:fbb028b7-c60c-4f7b-a9a2-dd1b54eddea1>
3.65625
446
Truncated
Science & Tech.
35.156561
95,482,376
Cluster Lensing And Supernova survey with Hubble (CLASH)

An Innovative Survey to Place New Constraints on the Fundamental Components of the Cosmos using the Hubble Space Telescope

Latest Updates

June 23, 2017: New redshift catalogs based on VLT and GLASS spectra have been added for six clusters. See the headers of the catalog files for references and acknowledgment information. Note that the MACS 0416 and MACS 1206 "_vlt_vimos_*zcat.txt" files are replaced by the "_vlt_muse_*zcat.txt" files.

May 8, 2017: New catalogs from Molino et al. (2017) have been added.

08 Dec. 2015: Corrected CRPIX1 and CRPIX2 values in Zitrin model FITS files for MACSJ1115+01.

By observing 25 massive galaxy clusters with HST's new panchromatic imaging capabilities (Wide Field Camera 3, WFC3, and the Advanced Camera for Surveys, ACS), CLASH will accomplish its four primary science goals:
- Map, with unprecedented accuracy, the distribution of dark matter in galaxy clusters using strong and weak gravitational lensing;
- Detect Type Ia supernovae out to redshift z ~ 2, allowing us to test the constancy of dark energy's repulsive force over time and look for any evolutionary effects in the supernovae themselves;
- Detect and characterize some of the most distant galaxies yet discovered at z > 7 (when the Universe was younger than 800 million years old, or less than 6% of its current age);
- Study the internal structure and evolution of the galaxies in and behind these clusters.

(Image: Abell 383. Credit: CLASH Science Team)

The CLASH Multi-Cycle Treasury program (12065, PI: Marc Postman) will observe these 25 clusters over a 3-year period. The team has been awarded a total of 524 orbits of time on HST to conduct this program. For more information see the CLASH website at http://www.stsci.edu/~postman/CLASH/Home.html.

This table lists the clusters that have been released by the CLASH team. The first version of the data will be released approximately 2 months after the last data are taken for a cluster. A second version will be released about 6 months after the last observation of the cluster. Note that at this time, all HST observational data have been released (30-mas and 65-mas scale data). In addition to the optical and NIR data here at MAST, IR data and catalogs from Spitzer, and submillimeter data from Bolocam, are both archived at IRSA (Bolocam, Spitzer). N.B.: MS 2137.3-2353 = MACS J2140.2-2339 at IRSA.

* These data are also available via anonymous ftp:

† The mass model for RXJ 1532 is based on only one candidate multiply-imaged system (not confirmed) and on constraints from weak lensing. Hence, it is a relatively crude model compared to those for the other CLASH clusters. See Zitrin et al. 2015 (ApJ, 801, 44) for details.

The Merten and Zitrin models for MACSJ0416-24, MACSJ0717+37, MACSJ1149+22, and RXJ2248-4431 (a.k.a. Abell S1063) were delivered as part of the Frontier Fields Lens Model program. Their file contents are identical to the Frontier Fields versions, but have had their file names and headers updated to change "frontier" to "clash".

The photometric redshift catalog for the cluster MACS1311-03 used B and V band images taken with WFI at ESO and z' band images taken with IMACS at Magellan, in addition to the Subaru data. Those FITS files can be found inside that cluster's Subaru data directory.
The photometric redshift catalog for the cluster MACS1423+24 used Ks band images taken with WIRCAM at CFHT, in addition to the Subaru data. Those FITS files can be found inside that cluster's Subaru data directory.

The photometric redshift catalog for the cluster RXJ1347-1145 used g' band images taken with Megacam at CFHT, in addition to the Subaru data. Those FITS files can be found inside that cluster's Subaru data directory.

A source catalog for the cluster RXJ2248-4431 (a.k.a. Abell S1063) using WFI at ESO was delivered as part of the Frontier Fields program. We provide a link to this file within the CLASH table on this page for convenience.

All the mosaics provided here have already had all distortion removed by drizzling, and correct astrometry is provided in their header WCS (represented by CRVAL1, CRVAL2, CRPIX1, CRPIX2, CD1_1, CD1_2, CD2_1, CD2_2). Their CTYPE keywords also correctly reflect this (being set to "RA---TAN" and "DEC--TAN"). However, the Astropy tools "all_pix2world" and "all_world2pix" (contained in "wcs") currently attempt to apply SIP coefficients that are found in the headers, and therefore may not give correct astrometry. A solution is instead to use the Astropy tools "wcs_pix2world" and "wcs_world2pix" (also contained in "wcs"), which do not make this error and which give correct astrometry for these mosaics. Another alternative is to not use Astropy but instead use WCSTools, available from CfA, which also correctly calculates astrometry for these mosaics.
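To make the recommendation above concrete, here is a minimal sketch showing the two Astropy calls side by side (the file name is a placeholder, not an actual CLASH product name, and the pixel coordinates are arbitrary):

```python
from astropy.io import fits
from astropy.wcs import WCS

# Hypothetical drizzled CLASH mosaic; substitute a real product file name.
with fits.open("clash_mosaic_drz.fits") as hdul:
    w = WCS(hdul[0].header)

x_pix, y_pix = 1024.0, 1024.0

# May be wrong for these mosaics: all_pix2world re-applies the leftover
# SIP coefficients even though the distortion is already removed.
ra_bad, dec_bad = w.all_pix2world(x_pix, y_pix, 1)

# Correct for these mosaics: wcs_pix2world uses only the core linear WCS
# (CRVAL/CRPIX/CD matrix) and ignores the SIP terms.
ra, dec = w.wcs_pix2world(x_pix, y_pix, 1)
print(ra, dec)
```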
<urn:uuid:b4bf2770-b474-4d0d-94d2-3d9cb9672caf>
2.6875
1,286
News (Org.)
Science & Tech.
55.998103
95,482,396
The Hot Important Science

The idea of 'The Hot Important Science (THIS)' comic strips was born in March 2018 in Helsinki, Finland, during an intensive brainstorming session between a researcher and a design student. THIS's mission is to bring you topical scientific stories, discoveries, and more every month in an easy-to-read format.

Scientific Content - Yongmei Gong; Artistic Design - Pei Yu Lin

Machine learning technology with less human supervision was pushed into the spotlight after Google's AlphaGo Zero outplayed both top human players and other AI programs in the board game Go. This year IBM surprised us with another smart machine. On Monday, June 18th, 2018, IBM unveiled its Project Debater - the first AI system that can debate humans on complex topics. The techniques used in developing Project Debater include numerous cutting-edge machine learning and data mining algorithms. Let's follow Thisis to have a look at some of the basics of machine learning and Project Debater.

Clustered Regularly Interspaced Short Palindromic Repeats (CRISPR) technology has been used in genome editing for more than a decade, and new applications are discovered all the time. In THIS NO.3 Thisis talks you through how CRISPR can make treatments possible for the widest range of human diseases through cheaper and more precise gene editing.

One of the most influential physicists of the twentieth century, Prof. Stephen Hawking, passed away peacefully at his home in Cambridge on March 14th, 2018. He was a theoretical physicist, cosmologist, author, and director of research at the Centre for Theoretical Cosmology at the University of Cambridge, as well as a great science communicator. He was the author of the international bestseller popular science book A Brief History of Time and appeared in many movies and TV series, including The Big Bang Theory. In THIS NO.2 we invite you to review three of Prof. Hawking's great contributions to science with Thisis: the properties of black holes, the quantum origin of the universe, and time travel. Enjoy!

We do not want to tell you that we spent a whole month creating the first post, because we want you to be objective about THIS. But we do hope you enjoy reading them! Thank you and see you next month!
<urn:uuid:5d5e620b-0bdf-4426-ab9b-d594d06f8959>
2.921875
545
Content Listing
Science & Tech.
48.280076
95,482,404
Molecular imprinting is an important and dynamic area of research: it is an approach to producing materials whose recognition abilities are comparable to those of natural systems. Molecular imprinting is the featured preparation method for molecularly imprinted polymers (MIPs) used as selective adsorbents, and artificial receptors are being developed by various molecular imprinting techniques.

Some countries have imposed bans on hazardous chemicals including DEHP, DBP, DIBP, and BBP, both generally and particularly in accessories used in paediatric, neonatal, and maternity wards in hospitals, and have started using molecular imprinting, a popular technology, for removing phthalates.

Molecular imprinting is a new and practical technique which leads to the preparation of selective recognition sites in a polymer matrix. The first molecular imprinting medium reported in the literature was that of dye molecules in a silica matrix (1). Molecular imprinting is a tool for synthesizing tailor-designed molecular recognition sites in polymers structured at micrometer and nanometer scales, and it has been reported as a promising technology for preparing artificial receptors based on molecularly imprinted polymers containing tailor-made recognition sites.

The group of Junqiu Liu combines a range of methods, including molecular imprinting, supramolecular self-assembly, and genetic engineering, and employs a range of macromolecular scaffolds, including dendrimers, polymeric micelles, polymer nanoparticles, hydrogels, giant nanotubes, and proteins, in their study of selenoenzymes. They hope this will lay the groundwork for future research not just into selenoenzymes but toward further elucidating the catalytic function of enzymes in a wider context.

Detection can be read in real time, instead of after days or weeks of laboratory analysis, meaning that the nanotube molecular imprinting technique could pave the way for biosensors capable of detecting human papillomavirus or other viruses weeks sooner than currently available diagnostic techniques allow. Molecular imprinting techniques have shown that polymer structures can be used in the development of sensors capable of recognizing certain organic compounds, but recognizing proteins has presented a difficult set of challenges.
<urn:uuid:adaac9e1-656d-472a-8be8-4d223ee26491>
3.09375
462
Knowledge Article
Science & Tech.
-11.528909
95,482,405
Free and Forced Vibrations of Simple Systems

Mechanical, acoustical, or electrical vibrations are the sources of sound in musical instruments. Some familiar examples are the vibrations of strings (violin, guitar, piano, etc.), bars or rods (xylophone, glockenspiel, chimes, clarinet reed), membranes (drums, banjo), plates or shells (cymbal, gong, bell), air in a tube (organ pipe, brass and woodwind instruments, marimba resonator), and air in an enclosed container (drum, violin, or guitar body).

Keywords: Normal Mode, Simple System, Forced Vibration, Mechanical Impedance, Equivalent Electrical Circuit
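As an illustration of the forced-vibration behaviour such chapters analyse, the steady-state response of the simplest driven system, a damped mass-spring oscillator, peaks near its natural frequency. The sketch below implements the standard textbook result rather than material from the chapter itself; the mass, damping, stiffness, and force values are arbitrary:

```python
import math

def steady_state_amplitude(F0, m, c, k, omega):
    """Steady-state amplitude of m x'' + c x' + k x = F0 sin(omega t)."""
    return F0 / math.sqrt((k - m * omega**2) ** 2 + (c * omega) ** 2)

m, c, k, F0 = 1.0, 0.2, 100.0, 1.0   # natural frequency omega_0 = 10 rad/s
for omega in (2.0, 5.0, 10.0, 20.0):
    print(omega, steady_state_amplitude(F0, m, c, k, omega))
# The response is largest near omega ~ omega_0: the resonance of the system.
```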
<urn:uuid:bfcc83e4-30b4-4b35-933a-aefdab916d93>
3.140625
515
Truncated
Science & Tech.
60.230421
95,482,444
The history of chemistry represents a time span from ancient history to the present. By 1000 BC, civilizations used technologies that would eventually form the basis of the various branches of chemistry. Examples include extracting metals from ores, making pottery and glazes, fermenting beer and wine, extracting chemicals from plants for medicine and perfume, rendering fat into soap, making glass, and making alloys like bronze.

The protoscience of chemistry, alchemy, was unsuccessful in explaining the nature of matter and its transformations. However, by performing experiments and recording the results, alchemists set the stage for modern chemistry. The distinction began to emerge when a clear differentiation was made between chemistry and alchemy by Robert Boyle in his work The Sceptical Chymist (1661). While both alchemy and chemistry are concerned with matter and its transformations, chemists are seen as applying scientific method to their work. Chemistry is considered to have become an established science with the work of Antoine Lavoisier, who developed a law of conservation of mass that demanded careful measurement and quantitative observations of chemical phenomena. The history of chemistry is intertwined with the history of thermodynamics, especially through the work of Willard Gibbs.

The earliest recorded metal employed by humans seems to be gold, which can be found free or "native". Small amounts of natural gold have been found in Spanish caves used during the late Paleolithic period, c. 40,000 BC. Silver, copper, tin, and meteoric iron can also be found native, allowing a limited amount of metalworking in ancient cultures. Egyptian weapons made from meteoric iron in about 3000 BC were highly prized as "Daggers from Heaven".

Arguably the first chemical reaction used in a controlled manner was fire. However, for millennia fire was seen simply as a mystical force that could transform one substance into another (burning wood, or boiling water) while producing heat and light. Fire affected many aspects of early societies. These ranged from the simplest facets of everyday life, such as cooking and habitat lighting, to more advanced technologies, such as pottery, bricks, and melting of metals to make tools. It was fire that led to the discovery of glass and the purification of metals, which in turn gave way to the rise of metallurgy.

During the early stages of metallurgy, methods of purification of metals were sought, and gold, known in ancient Egypt as early as 2900 BC, became a precious metal. Certain metals can be recovered from their ores by simply heating the rocks in a fire: notably tin, lead, and (at a higher temperature) copper, a process known as smelting. The first evidence of this extractive metallurgy dates from the 5th and 6th millennia BC, and was found in the archaeological sites of Majdanpek, Yarmovac, and Plocnik, all three in Serbia. To date, the earliest copper smelting is found at the Belovode site; the finds include a copper axe from 5500 BC belonging to the Vinča culture. Other signs of early metals are found from the third millennium BC in places like Palmela (Portugal), Los Millares (Spain), and Stonehenge (United Kingdom). However, as often happens with the study of prehistoric times, the ultimate beginnings cannot be clearly defined and new discoveries are ongoing. These first metals were single elements, used largely as they were found in nature.
By combining copper and tin, a superior metal could be made: an alloy called bronze. This major technological shift began the Bronze Age about 3500 BC. The Bronze Age was a period in human cultural development when the most advanced metalworking (at least in systematic and widespread use) included techniques for smelting copper and tin from naturally occurring outcroppings of ore, and then combining them to cast bronze. These naturally occurring ores typically included arsenic as a common impurity. Copper/tin ores are rare, as reflected in the fact that there were no tin bronzes in western Asia before 3000 BC.

After the Bronze Age, the history of metallurgy was marked by armies seeking better weaponry. Countries in Eurasia prospered when they made the superior alloys, which, in turn, made better armor and better weapons. This often determined the outcomes of battles. Significant progress in metallurgy and alchemy was made in ancient India.

The extraction of iron from its ore into a workable metal is much more difficult than for copper or tin. It appears to have been invented by the Hittites in about 1200 BC, beginning the Iron Age. The secret of extracting and working iron was a key factor in the success of the Philistines. In other words, the Iron Age refers to the advent of ferrous metallurgy. Historical developments in ferrous metallurgy can be found in a wide variety of past cultures and civilizations. These include the ancient and medieval kingdoms and empires of the Middle East and Near East, ancient Iran, ancient Egypt, ancient Nubia, and Anatolia (Turkey), ancient Nok, Carthage, the Greeks and Romans of ancient Europe, medieval Europe, ancient and medieval China, ancient and medieval India, and ancient and medieval Japan, amongst others. Many applications, practices, and devices associated with or involved in metallurgy were established in ancient China, such as the innovation of the blast furnace, cast iron, hydraulic-powered trip hammers, and double-acting piston bellows.

Philosophical attempts to rationalize why different substances have different properties (color, density, smell), exist in different states (gaseous, liquid, and solid), and react in a different manner when exposed to environments, for example to water or fire or temperature changes, led ancient philosophers to postulate the first theories on nature and chemistry. The history of such philosophical theories that relate to chemistry can probably be traced back to every single ancient civilization. The common aspect in all these theories was the attempt to identify a small number of primary classical elements that make up all the various substances in nature. Substances like air, water, and soil/earth, energy forms such as fire and light, and more abstract concepts such as ideas, aether, and heaven were common in ancient civilizations even in the absence of any cross-fertilization; for example, Greek, Indian, Mayan, and ancient Chinese philosophies all considered air, water, earth, and fire as primary elements.

Around 420 BC, Empedocles stated that all matter is made up of four elemental substances: earth, fire, air, and water. The early theory of atomism can be traced back to ancient Greece and ancient India. Greek atomism dates back to the Greek philosopher Democritus, who declared around 380 BC that matter is composed of indivisible and indestructible atoms. Leucippus also declared that atoms were the most indivisible part of matter.
This coincided with a similar declaration by the Indian philosopher Kanada in his Vaisheshika sutras around the same time period. In much the same fashion he discussed the existence of gases. What Kanada declared by sutra, Democritus declared by philosophical musing. Both suffered from a lack of empirical data. Without scientific proof, the existence of atoms was easy to deny. Aristotle opposed the existence of atoms in 330 BC. Earlier, in 380 BC, a Greek text attributed to Polybus argued that the human body is composed of four humours. Around 300 BC, Epicurus postulated a universe of indestructible atoms in which man himself is responsible for achieving a balanced life.

With the goal of explaining Epicurean philosophy to a Roman audience, the Roman poet and philosopher Lucretius wrote De Rerum Natura (The Nature of Things) in 50 BC. In the work, Lucretius presents the principles of atomism; the nature of the mind and soul; explanations of sensation and thought; the development of the world and its phenomena; and explains a variety of celestial and terrestrial phenomena.

Much of the early development of purification methods is described by Pliny the Elder in his Naturalis Historia. He made attempts to explain those methods, as well as making acute observations of the state of many minerals.

The elemental system used in medieval alchemy was developed primarily by the Persian-Arab alchemist Jābir ibn Hayyān and rooted in the classical elements of Greek tradition. His system consisted of the four Aristotelian elements of air, earth, fire, and water, in addition to two philosophical elements: sulphur, characterizing the principle of combustibility ("the stone which burns"), and mercury, characterizing the principle of metallic properties. They were seen by early alchemists as idealized expressions of irreducible components of the universe and are of larger consideration within philosophical alchemy.

The three metallic principles (sulphur for flammability or combustion, mercury for volatility and stability, and salt for solidity) became the tria prima of the Swiss alchemist Paracelsus. He reasoned that Aristotle's four-element theory appeared in bodies as three principles. Paracelsus saw these principles as fundamental and justified them by recourse to the description of how wood burns in fire. Mercury included the cohesive principle, so that when it left in smoke the wood fell apart. Smoke described the volatility (the mercurial principle), the heat-giving flames described flammability (sulphur), and the remnant ash described solidity (salt).

Alchemy is defined by the Hermetic quest for the philosopher's stone, the study of which is steeped in symbolic mysticism, and differs greatly from modern science. Alchemists toiled to make transformations on an esoteric (spiritual) and/or exoteric (practical) level. It was the protoscientific, exoteric aspects of alchemy that contributed heavily to the evolution of chemistry in Greco-Roman Egypt, the Islamic Golden Age, and then in Europe. Alchemy and chemistry share an interest in the composition and properties of matter, and prior to the eighteenth century were not separated into distinct disciplines. The term chymistry has been used to describe the blend of alchemy and chemistry that existed before this time.

The earliest Western alchemists, who lived in the first centuries of the common era, invented chemical apparatus. The bain-marie, or water bath, is named for Mary the Jewess.
Her work also gives the first descriptions of the tribikos and kerotakis. Cleopatra the Alchemist described furnaces and has been credited with the invention of the alembic.

Later, the experimental framework established by Jabir ibn Hayyan influenced alchemists as the discipline migrated through the Islamic world, then to Europe in the twelfth century. During the Renaissance, exoteric alchemy remained popular in the form of Paracelsian iatrochemistry, while spiritual alchemy flourished, realigned to its Platonic, Hermetic, and Gnostic roots. Consequently, the symbolic quest for the philosopher's stone was not superseded by scientific advances, and was still the domain of respected scientists and doctors until the early eighteenth century. Early modern alchemists who are renowned for their scientific contributions include Jan Baptist van Helmont, Robert Boyle, and Isaac Newton.

There were several problems with alchemy, as seen from today's standpoint. There was no systematic naming scheme for new compounds, and the language was esoteric and vague to the point that the terminologies meant different things to different people. In fact, according to The Fontana History of Chemistry (Brock, 1992):

The language of alchemy soon developed an arcane and secretive technical vocabulary designed to conceal information from the uninitiated. To a large degree, this language is incomprehensible to us today, though it is apparent that readers of Geoffrey Chaucer's Canon's Yeoman's Tale or audiences of Ben Jonson's The Alchemist were able to construe it sufficiently to laugh at it.

Chaucer's tale exposed the more fraudulent side of alchemy, especially the manufacture of counterfeit gold from cheap substances. Less than a century earlier, Dante Alighieri also demonstrated an awareness of this fraudulence, causing him to consign all alchemists to the Inferno in his writings. Soon after, in 1317, the Avignon Pope John XXII ordered all alchemists to leave France for making counterfeit money. A law was passed in England in 1403 which made the "multiplication of metals" punishable by death. Despite these and other apparently extreme measures, alchemy did not die. Royalty and privileged classes still sought to discover the philosopher's stone and the elixir of life for themselves.

There was also no agreed-upon scientific method for making experiments reproducible. Indeed, many alchemists included in their methods irrelevant information such as the timing of the tides or the phases of the moon. The esoteric nature and codified vocabulary of alchemy appeared to be more useful in concealing the fact that they could not be sure of very much at all. As early as the 14th century, cracks seemed to grow in the facade of alchemy, and people became sceptical. Clearly, there needed to be a scientific method in which experiments could be repeated by other people, and results needed to be reported in a clear language that laid out both what was known and unknown.

In the Islamic world, Muslims were translating the works of the ancient Greeks and Egyptians into Arabic and were experimenting with scientific ideas. The development of the modern scientific method was slow and arduous, but an early scientific method for chemistry began emerging among early Muslim chemists, beginning with the 9th-century chemist Jābir ibn Hayyān (known as "Geber" in Europe), who is sometimes regarded as "the father of chemistry".
He introduced a systematic and experimental approach to scientific research based in the laboratory, in contrast to the ancient Greek and Egyptian alchemists, whose works were largely allegorical and often unintelligible. He also invented and named the alembic (al-anbiq), chemically analyzed many chemical substances, composed lapidaries, distinguished between alkalis and acids, and manufactured hundreds of drugs. He also refined the theory of five classical elements into the theory of seven alchemical elements after identifying mercury and sulfur as chemical elements.

Among other influential Muslim chemists, Abū al-Rayhān al-Bīrūnī, Avicenna, and Al-Kindi refuted the theories of alchemy, particularly the theory of the transmutation of metals, and al-Tusi described a version of the conservation of mass, noting that a body of matter is able to change but is not able to disappear. Rhazes refuted Aristotle's theory of four classical elements for the first time and set up the firm foundations of modern chemistry, using the laboratory in the modern sense, designing and describing more than twenty instruments, many parts of which are still in use today, such as a crucible, a cucurbit or retort for distillation, and the head of a still with a delivery tube (ambiq, Latin alembic), and various types of furnace or stove.

For practitioners in Europe, alchemy became an intellectual pursuit after early Arabic alchemy became available through Latin translation, and over time they improved on it. Paracelsus (1493-1541), for example, rejected the four-element theory and, with only a vague understanding of his chemicals and medicines, formed a hybrid of alchemy and science in what was to be called iatrochemistry. Paracelsus was not perfect in making his experiments truly scientific. For example, as an extension of his theory that new compounds could be made by combining mercury with sulfur, he once made what he thought was "oil of sulfur". This was actually dimethyl ether, which had neither mercury nor sulfur.

Practical attempts to improve the refining of ores and their extraction to smelt metals were an important source of information for early chemists in the 16th century, among them Georg Agricola (1494-1555), who published his great work De re metallica in 1556. His work describes the highly developed and complex processes of mining metal ores, metal extraction, and metallurgy of the time. His approach removed the mysticism associated with the subject, creating the practical base upon which others could build. The work describes the many kinds of furnace used to smelt ore, and stimulated interest in minerals and their composition. It is no coincidence that he gives numerous references to the earlier author Pliny the Elder and his Naturalis Historia. Agricola has been described as the "father of metallurgy".

In 1605, Sir Francis Bacon published The Proficience and Advancement of Learning, which contains a description of what would later be known as the scientific method. In the same year, Michał Sędziwój published the alchemical treatise A New Light of Alchemy, which proposed the existence of the "food of life" within air, much later recognized as oxygen. In 1615, Jean Beguin published the Tyrocinium Chymicum, an early chemistry textbook, and in it drew the first-ever chemical equation. In 1637, René Descartes published Discours de la méthode, which contains an outline of the scientific method.
The Flemish chemist Jan Baptist van Helmont's work Ortus medicinae was published posthumously in 1648; the book is cited by some as a major transitional work between alchemy and chemistry, and as an important influence on Robert Boyle. The book contains the results of numerous experiments and establishes an early version of the law of conservation of mass. Working during the time just after Paracelsus and iatrochemistry, Jan Baptist van Helmont suggested that there are insubstantial substances other than air and coined a name for them: "gas", from the Greek word chaos. In addition to introducing the word "gas" into the vocabulary of scientists, van Helmont conducted several experiments involving gases. Jan Baptist van Helmont is also remembered today largely for his ideas on spontaneous generation and his 5-year tree experiment, as well as being considered the founder of pneumatic chemistry.

Anglo-Irish chemist Robert Boyle (1627-1691) is considered to have refined the modern scientific method for alchemy and to have separated chemistry further from alchemy. Although his research clearly has its roots in the alchemical tradition, Boyle is largely regarded today as the first modern chemist, and therefore one of the founders of modern chemistry, and one of the pioneers of modern experimental scientific method. Although Boyle was not the original discoverer, he is best known for Boyle's law, which he presented in 1662: the law describes the inversely proportional relationship between the absolute pressure and volume of a gas, if the temperature is kept constant within a closed system.

Boyle is also credited with his landmark publication The Sceptical Chymist in 1661, which is seen as a cornerstone book in the field of chemistry. In the work, Boyle presents his hypothesis that every phenomenon was the result of collisions of particles in motion. Boyle appealed to chemists to experiment and asserted that experiments denied the limiting of chemical elements to only the classic four: earth, fire, air, and water. He also pleaded that chemistry should cease to be subservient to medicine or to alchemy, and rise to the status of a science. Importantly, he advocated a rigorous approach to scientific experiment: he believed all theories must be proved experimentally before being regarded as true. The work contains some of the earliest modern ideas of atoms, molecules, and chemical reaction, and marks the beginning of the history of modern chemistry. Boyle also tried to purify chemicals to obtain reproducible reactions. He was a vocal proponent of the mechanical philosophy proposed by René Descartes to explain and quantify the physical properties and interactions of material substances. Boyle was an atomist, but favoured the word corpuscle over atoms. He commented that the finest division of matter where the properties are retained is at the level of corpuscles. He also performed numerous investigations with an air pump, and noted that the mercury fell as air was pumped out. He also observed that pumping the air out of a container would extinguish a flame and kill small animals placed inside. Boyle helped to lay the foundations for the Chemical Revolution with his mechanical corpuscular philosophy. Boyle repeated the tree experiment of van Helmont, and was the first to use indicators which changed colors with acidity.

In 1702, German chemist Georg Stahl coined the name "phlogiston" for the substance believed to be released in the process of burning.
Around 1735, Swedish chemist Georg Brandt analyzed a dark blue pigment found in copper ore. Brandt demonstrated that the pigment contained a new element, later named cobalt. In 1751, a Swedish chemist and pupil of Stahl's named Axel Fredrik Cronstedt identified an impurity in copper ore as a separate metallic element, which he named nickel. Cronstedt is one of the founders of modern mineralogy. Cronstedt also discovered the mineral scheelite in 1751, which he named tungsten, meaning "heavy stone" in Swedish.

In 1754, Scottish chemist Joseph Black isolated carbon dioxide, which he called "fixed air". In 1757, Louis Claude Cadet de Gassicourt, while investigating arsenic compounds, created Cadet's fuming liquid, later discovered to be cacodyl oxide, considered to be the first synthetic organometallic compound. In 1758, Joseph Black formulated the concept of latent heat to explain the thermochemistry of phase changes.

In 1766, English chemist Henry Cavendish isolated hydrogen, which he called "inflammable air". Cavendish discovered hydrogen as a colorless, odourless gas that burns and can form an explosive mixture with air, and published a paper on the production of water by burning inflammable air (that is, hydrogen) in dephlogisticated air (now known to be oxygen), the latter a constituent of atmospheric air (phlogiston theory).

In 1773, Swedish chemist Carl Wilhelm Scheele discovered oxygen, which he called "fire air", but did not immediately publish his achievement. In 1774, English chemist Joseph Priestley independently isolated oxygen in its gaseous state, calling it "dephlogisticated air", and published his work before Scheele. During his lifetime, Priestley's considerable scientific reputation rested on his invention of soda water, his writings on electricity, and his discovery of several "airs" (gases), the most famous being what Priestley dubbed "dephlogisticated air" (oxygen). However, Priestley's determination to defend phlogiston theory and to reject what would become the chemical revolution eventually left him isolated within the scientific community.

In 1781, Carl Wilhelm Scheele discovered that a new acid, tungstic acid, could be made from Cronstedt's scheelite (at the time named tungsten). Scheele and Torbern Bergman suggested that it might be possible to obtain a new metal by reducing this acid. In 1783, José and Fausto Elhuyar found an acid made from wolframite that was identical to tungstic acid. Later that year, in Spain, the brothers succeeded in isolating the metal now known as tungsten by reduction of this acid with charcoal, and they are credited with the discovery of the element.

Italian physicist Alessandro Volta constructed a device for accumulating a large charge by a series of inductions and groundings. He investigated the 1780s discovery of "animal electricity" by Luigi Galvani, and found that the electric current was generated from the contact of dissimilar metals, and that the frog leg was only acting as a detector. Volta demonstrated in 1794 that when two metals and brine-soaked cloth or cardboard are arranged in a circuit they produce an electric current. In 1800, Volta stacked several pairs of alternating copper (or silver) and zinc discs (electrodes) separated by cloth or cardboard soaked in brine (electrolyte) to increase the electrolyte conductivity. When the top and bottom contacts were connected by a wire, an electric current flowed through the voltaic pile and the connecting wire.
Thus, Volta is credited with constructing the first electrical battery to produce electricity. Volta's method of stacking round plates of copper and zinc separated by disks of cardboard moistened with salt solution was termed a voltaic pile. Thus, Volta is considered to be the founder of the discipline of electrochemistry. A galvanic cell (or voltaic cell) is an electrochemical cell that derives electrical energy from spontaneous redox reactions taking place within the cell. It generally consists of two different metals connected by a salt bridge, or individual half-cells separated by a porous membrane.

Although the archives of chemical research draw upon work from ancient Babylonia, Egypt, and especially the Arabs and Persians after Islam, modern chemistry flourished from the time of Antoine-Laurent de Lavoisier, a French chemist who is celebrated as the "father of modern chemistry". Lavoisier demonstrated with careful measurements that transmutation of water to earth was not possible, but that the sediment observed from boiling water came from the container. He burnt phosphorus and sulfur in air, and proved that the products weighed more than the original samples; nevertheless, the weight gained was lost from the air. Thus, in 1789, he established the Law of Conservation of Mass, which is also called "Lavoisier's Law."

Repeating the experiments of Priestley, he demonstrated that air is composed of two parts, one of which combines with metals to form calxes. In Considérations Générales sur la Nature des Acides (1778), he demonstrated that the "air" responsible for combustion was also the source of acidity. The next year, he named this portion oxygen (Greek for acid-former), and the other azote (Greek for no life). Lavoisier thus has a claim to the discovery of oxygen along with Priestley and Scheele. He also discovered that the "inflammable air" discovered by Cavendish, which he termed hydrogen (Greek for water-former), combined with oxygen to produce a dew, as Priestley had reported, which appeared to be water. In Reflexions sur le Phlogistique (1783), Lavoisier showed the phlogiston theory of combustion to be inconsistent.

Mikhail Lomonosov independently established a tradition of chemistry in Russia in the 18th century. Lomonosov also rejected the phlogiston theory, and anticipated the kinetic theory of gases. Lomonosov regarded heat as a form of motion, and stated the idea of conservation of matter.

Lavoisier worked with Claude Louis Berthollet and others to devise a system of chemical nomenclature which serves as the basis of the modern system of naming chemical compounds. In his Methods of Chemical Nomenclature (1787), Lavoisier invented the system of naming and classification still largely in use today, including names such as sulfuric acid, sulfates, and sulfites.

In 1785, Berthollet was the first to introduce the use of chlorine gas as a commercial bleach. In the same year he first determined the elemental composition of the gas ammonia. Berthollet first produced a modern bleaching liquid in 1789 by passing chlorine gas through a solution of sodium carbonate; the result was a weak solution of sodium hypochlorite. Another strong chlorine oxidant and bleach which he investigated, and was the first to produce, potassium chlorate (KClO3), is known as Berthollet's Salt. Berthollet is also known for his scientific contributions to the theory of chemical equilibria via the mechanism of reverse chemical reactions.
Lavoisier's Traité Élémentaire de Chimie (Elementary Treatise of Chemistry, 1789) was the first modern chemical textbook; it presented a unified view of new theories of chemistry, contained a clear statement of the Law of Conservation of Mass, and denied the existence of phlogiston. In addition, it contained a list of elements, or substances that could not be broken down further, which included oxygen, nitrogen, hydrogen, phosphorus, mercury, zinc, and sulfur. His list, however, also included light and caloric, which he believed to be material substances. In the work, Lavoisier underscored the observational basis of his chemistry, stating "I have tried...to arrive at the truth by linking up facts; to suppress as much as possible the use of reasoning, which is often an unreliable instrument which deceives us, in order to follow as much as possible the torch of observation and of experiment." Nevertheless, he believed that the real existence of atoms was philosophically impossible.

Lavoisier demonstrated that organisms disassemble and reconstitute atmospheric air in the same manner as a burning body. With Pierre-Simon Laplace, Lavoisier used a calorimeter to estimate the heat evolved per unit of carbon dioxide produced. They found the same ratio for a flame and animals, indicating that animals produced energy by a type of combustion. Lavoisier subscribed to the radical theory, which held that radicals, functioning as a single group in a chemical reaction, would combine with oxygen in reactions. He believed all acids contained oxygen. He also discovered that diamond is a crystalline form of carbon.

While many of Lavoisier's partners were influential for the advancement of chemistry as a scientific discipline, his wife Marie-Anne Lavoisier was arguably the most influential of them all. Upon their marriage, Mme. Lavoisier began to study chemistry, English, and drawing in order to help her husband in his work, either by translating papers into English, a language which Lavoisier did not know, or by keeping records and drawing the various apparatuses that Lavoisier used in his labs. Through her ability to read and translate articles from Britain for her husband, Lavoisier had access to knowledge of many of the chemical advances happening outside of his lab. Furthermore, Mme. Lavoisier kept records of Lavoisier's work and ensured that his works were published. The first sign of Marie-Anne's true potential as a chemist in Lavoisier's lab came when she was translating a book by the scientist Richard Kirwan. While translating, she stumbled upon and corrected multiple errors. When she presented her translation, along with her notes, to Lavoisier, her edits and contributions led to his refutation of the theory of phlogiston.

Lavoisier made many fundamental contributions to the science of chemistry. Following Lavoisier's work, chemistry acquired a strict quantitative nature, allowing reliable predictions to be made. The revolution in chemistry which he brought about was a result of a conscious effort to fit all experiments into the framework of a single theory. He established the consistent use of the chemical balance, used oxygen to overthrow the phlogiston theory, and developed a new system of chemical nomenclature. Lavoisier was beheaded during the French Revolution.

In 1802, French American chemist and industrialist Éleuthère Irénée du Pont, who learned the manufacture of gunpowder and explosives under Antoine Lavoisier, founded a gunpowder manufacturer in Delaware known as E. I.
du Pont de Nemours and Company. The French Revolution forced his family to move to the United States, where du Pont started a gunpowder mill on the Brandywine River in Delaware. Wanting to make the best powder possible, du Pont was vigilant about the quality of the materials he used. For 32 years, du Pont served as president of E. I. du Pont de Nemours and Company, which eventually grew into one of the largest and most successful companies in America.

Throughout the 19th century, chemistry was divided between those who followed the atomic theory of John Dalton and those who did not, such as Wilhelm Ostwald and Ernst Mach. Although such proponents of the atomic theory as Amedeo Avogadro and Ludwig Boltzmann made great advances in explaining the behavior of gases, this dispute was not finally settled until Jean Perrin's experimental investigation of Einstein's atomic explanation of Brownian motion in the first decade of the 20th century.

Well before the dispute had been settled, many had already applied the concept of atomism to chemistry. A major example was the ion theory of Svante Arrhenius, which anticipated ideas about atomic substructure that did not fully develop until the 20th century. Michael Faraday was another early worker, whose major contribution to chemistry was electrochemistry, in which (among other things) a certain quantity of electricity during electrolysis or electrodeposition of metals was shown to be associated with certain quantities of chemical elements, and fixed quantities of the elements therefore with each other, in specific ratios. These findings, like those of Dalton's combining ratios, were early clues to the atomic nature of matter.

In 1803, English meteorologist and chemist John Dalton proposed Dalton's law, which describes the relationship between the components in a mixture of gases and the relative pressure each contributes to that of the overall mixture. Discovered in 1801, this concept is also known as Dalton's law of partial pressures. Dalton also proposed a modern atomic theory in 1803 which stated that all matter is composed of small indivisible particles termed atoms; that atoms of a given element possess unique characteristics and weight; and that three types of atoms exist: simple (elements), compound (simple molecules), and complex (complex molecules).

In 1808, Dalton first published New System of Chemical Philosophy (1808-1827), in which he outlined the first modern scientific description of the atomic theory. This work identified chemical elements as a specific type of atom, therefore rejecting Newton's theory of chemical affinities. Instead, Dalton inferred proportions of elements in compounds by taking ratios of the weights of reactants, setting the atomic weight of hydrogen to be identically one. Following Jeremias Benjamin Richter (known for introducing the term stoichiometry), he proposed that chemical elements combine in integral ratios. This is known as the law of multiple proportions or Dalton's law, and Dalton included a clear description of the law in his New System of Chemical Philosophy. The law of multiple proportions is one of the basic laws of stoichiometry used to establish the atomic theory. Despite the importance of the work as the first view of atoms as physically real entities and its introduction of a system of chemical symbols, New System of Chemical Philosophy devoted almost as much space to the caloric theory as to atomism.
French chemist Joseph Proust proposed the law of definite proportions, which states that elements always combine in small, whole-number ratios to form compounds, based on several experiments conducted between 1797 and 1804. Along with the law of multiple proportions, the law of definite proportions forms the basis of stoichiometry. The law of definite proportions and constant composition do not prove that atoms exist, but they are difficult to explain without assuming that chemical compounds are formed when atoms combine in constant proportions.

A Swedish chemist and disciple of Dalton, Jöns Jacob Berzelius embarked on a systematic program to try to make accurate and precise quantitative measurements and ensure the purity of chemicals. Along with Lavoisier, Boyle, and Dalton, Berzelius is known as a father of modern chemistry. In 1828 he compiled a table of relative atomic weights, where oxygen was set to 100, and which included all of the elements known at the time. This work provided evidence in favor of Dalton's atomic theory: that inorganic chemical compounds are composed of atoms combined in whole-number amounts. He determined the exact elementary constituents of large numbers of compounds; the results strongly confirmed Proust's law of definite proportions. In his weights, he used oxygen as a standard, setting its weight equal to exactly 100. He also measured the weights of 43 elements. In discovering that atomic weights are not integer multiples of the weight of hydrogen, Berzelius also disproved Prout's hypothesis that elements are built up from atoms of hydrogen.

Motivated by his extensive atomic weight determinations and by a desire to aid his experiments, he introduced the classical system of chemical symbols and notation with his 1808 publication of Lärbok i Kemien, in which elements are abbreviated by one or two letters to make a distinct abbreviation from their Latin name. This system of chemical notation, in which the elements were given simple written labels, such as O for oxygen or Fe for iron, with proportions noted by numbers, is the same basic system used today. The only difference is that instead of the subscript number used today (e.g., H₂O), Berzelius used a superscript (H²O).

Berzelius is credited with identifying the chemical elements silicon, selenium, thorium, and cerium. Students working in Berzelius's laboratory also discovered lithium and vanadium. Berzelius developed the radical theory of chemical combination, which holds that reactions occur as stable groups of atoms called radicals are exchanged between molecules. He believed that salts are compounds of an acid and a base, and discovered that the anions in acids would be attracted to a positive electrode (the anode), whereas the cations in a base would be attracted to a negative electrode (the cathode). Berzelius did not believe in the vitalism theory, but instead in a regulative force which produced organization of tissues in an organism.

Berzelius is also credited with originating the chemical terms "catalysis", "polymer", "isomer", and "allotrope", although his original definitions differ dramatically from modern usage. For example, he coined the term "polymer" in 1833 to describe organic compounds which shared identical empirical formulas but which differed in overall molecular weight, the larger of the compounds being described as "polymers" of the smallest. By this long-superseded, pre-structural definition, glucose (C₆H₁₂O₆) was viewed as a polymer of formaldehyde (CH₂O).
English chemist Humphry Davy was a pioneer in the field of electrolysis, using Alessandro Volta's voltaic pile to split up common compounds and thus isolate a series of new elements. He went on to electrolyse molten salts and discovered several new metals, especially sodium and potassium, highly reactive elements known as the alkali metals. Potassium, the first metal that was isolated by electrolysis, was discovered in 1807 by Davy, who derived it from caustic potash (KOH). Before the 19th century, no distinction was made between potassium and sodium. Sodium was first isolated by Davy in the same year by passing an electric current through molten sodium hydroxide (NaOH). When Davy heard that Berzelius and Pontin prepared calcium amalgam by electrolyzing lime in mercury, he tried it himself. Davy was successful, and discovered calcium in 1808 by electrolyzing a mixture of lime and mercuric oxide. He worked with electrolysis throughout his life and, in 1808, he isolated magnesium, strontium, and barium.

Davy also experimented with gases by inhaling them. This experimental procedure nearly proved fatal on several occasions, but led to the discovery of the unusual effects of nitrous oxide, which came to be known as laughing gas.

Chlorine was discovered in 1774 by Swedish chemist Carl Wilhelm Scheele, who called it "dephlogisticated marine acid" (see phlogiston theory) and mistakenly thought it contained oxygen. Scheele observed several properties of chlorine gas, such as its bleaching effect on litmus, its deadly effect on insects, its yellow-green colour, and the similarity of its smell to that of aqua regia. However, Scheele was unable to publish his findings at the time. In 1810, chlorine was given its current name by Humphry Davy (derived from the Greek word for green), who insisted that chlorine was in fact an element. He also showed that oxygen could not be obtained from the substance known as oxymuriatic acid (HCl solution). This discovery overturned Lavoisier's definition of acids as compounds of oxygen. Davy was a popular lecturer and able experimenter.

French chemist Joseph Louis Gay-Lussac shared the interest of Lavoisier and others in the quantitative study of the properties of gases. From his first major program of research in 1801-1802, he concluded that equal volumes of all gases expand equally with the same increase in temperature: this conclusion is usually called "Charles's law", as Gay-Lussac gave credit to Jacques Charles, who had arrived at nearly the same conclusion in the 1780s but had not published it. The law was independently discovered by British natural philosopher John Dalton by 1801, although Dalton's description was less thorough than Gay-Lussac's. In 1804 Gay-Lussac made several daring ascents of over 7,000 meters above sea level in hydrogen-filled balloons (a feat not equaled for another 50 years) that allowed him to investigate other aspects of gases. Not only did he gather magnetic measurements at various altitudes, but he also took pressure, temperature, and humidity measurements and samples of air, which he later analyzed chemically.

In 1808 Gay-Lussac announced what was probably his single greatest achievement: from his own and others' experiments he deduced that gases at constant temperature and pressure combine in simple numerical proportions by volume, and the resulting product or products, if gases, also bear a simple proportion by volume to the volumes of the reactants.
In other words, gases under equal conditions of temperature and pressure react with one another in volume ratios of small whole numbers. This conclusion subsequently became known as "Gay-Lussac's law" or the "Law of Combining Volumes". With his fellow professor at the École Polytechnique, Louis Jacques Thénard, Gay-Lussac also participated in early electrochemical research, investigating the elements discovered by its means. Among other achievements, they decomposed boric acid by using fused potassium, thus discovering the element boron. The two also took part in contemporary debates that modified Lavoisier's definition of acids and furthered his program of analyzing organic compounds for their oxygen and hydrogen content. The element iodine was discovered by French chemist Bernard Courtois in 1811. Courtois gave samples to his friends, Charles Bernard Desormes (1777-1862) and Nicolas Clément (1779-1841), to continue research. He also gave some of the substance to Gay-Lussac and to physicist André-Marie Ampère. On December 6, 1813, Gay-Lussac announced that the new substance was either an element or a compound of oxygen. It was Gay-Lussac who suggested the name "iode", from the Greek word (iodes) for violet (because of the color of iodine vapor). Ampère had given some of his sample to Humphry Davy. Davy did some experiments on the substance and noted its similarity to chlorine. Davy sent a letter dated December 10 to the Royal Society of London stating that he had identified a new element. Arguments erupted between Davy and Gay-Lussac over who identified iodine first, but both scientists acknowledged Courtois as the first to isolate the element. In 1815, Humphry Davy invented the Davy lamp, which allowed miners within coal mines to work safely in the presence of flammable gases. There had been many mining explosions caused by firedamp (methane), often ignited by the open flames of the lamps then used by miners. Davy conceived of using an iron gauze to enclose a lamp's flame, and so prevent the methane burning inside the lamp from passing out to the general atmosphere. Although the idea of the safety lamp had already been demonstrated by William Reid Clanny and by the then unknown (but later very famous) engineer George Stephenson, Davy's use of wire gauze to prevent the spread of flame was used by many other inventors in their later designs. There was some discussion as to whether Davy had discovered the principles behind his lamp without the help of the work of Smithson Tennant, but it was generally agreed that the work of both men had been independent. Davy refused to patent the lamp, and its invention led to him being awarded the Rumford Medal in 1816. After Dalton published his atomic theory in 1808, certain of his central ideas were soon adopted by most chemists. However, uncertainty persisted for half a century about how atomic theory was to be configured and applied to concrete situations; chemists in different countries developed several different incompatible atomistic systems. A paper that suggested a way out of this difficult situation was published as early as 1811 by the Italian physicist Amedeo Avogadro (1776-1856), who hypothesized that equal volumes of gases at the same temperature and pressure contain equal numbers of molecules, from which it followed that the relative molecular weights of any two gases are the same as the ratio of the densities of the two gases under the same conditions of temperature and pressure.
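That consequence follows in one line. A minimal modern formalization (not Avogadro's own notation), assuming ideal-gas behavior:

```latex
% Equal volumes V at the same temperature and pressure contain the same
% number N of molecules, so for two gases with molecular masses m_1, m_2:
\[
\rho_1 = \frac{N m_1}{V}, \qquad \rho_2 = \frac{N m_2}{V}
\quad\Longrightarrow\quad
\frac{M_1}{M_2} = \frac{m_1}{m_2} = \frac{\rho_1}{\rho_2}
\]
% Example: oxygen is about 16 times denser than hydrogen at the same
% temperature and pressure, giving M(O2)/M(H2) = 16, i.e. 32 vs. 2.
```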
Avogadro also reasoned that simple gases were not formed of solitary atoms but were instead compound molecules of two or more atoms. Thus Avogadro was able to overcome the difficulty that Dalton and others had encountered when Gay-Lussac reported that above 100 °C the volume of water vapor was twice the volume of the oxygen used to form it. According to Avogadro, the molecule of oxygen had split into two atoms in the course of forming water vapor. Avogadro's hypothesis was neglected for half a century after it was first published. Many reasons for this neglect have been cited, including some theoretical problems, such as Jöns Jacob Berzelius's "dualism", which asserted that compounds are held together by the attraction of positive and negative electrical charges, making it inconceivable that a molecule composed of two electrically similar atoms--as in oxygen--could exist. An additional barrier to acceptance was the fact that many chemists were reluctant to adopt physical methods (such as vapour-density determinations) to solve their problems. By mid-century, however, some leading figures had begun to view the chaotic multiplicity of competing systems of atomic weights and molecular formulas as intolerable. Moreover, purely chemical evidence began to mount that suggested Avogadro's approach might be right after all. During the 1850s, younger chemists, such as Alexander Williamson in England, Charles Gerhardt and Charles-Adolphe Wurtz in France, and August Kekulé in Germany, began to advocate reforming theoretical chemistry to make it consistent with Avogadrian theory. In 1825, Friedrich Wöhler and Justus von Liebig performed the first confirmed discovery and explanation of isomers, earlier named by Berzelius. Working with cyanic acid and fulminic acid, they correctly deduced that isomerism was caused by differing arrangements of atoms within a molecular structure. In 1827, William Prout classified biomolecules into their modern groupings: carbohydrates, proteins and lipids. After the nature of combustion was settled, a dispute about vitalism and the essential distinction between organic and inorganic substances began. The vitalism question was revolutionized in 1828 when Friedrich Wöhler synthesized urea, thereby establishing that organic compounds could be produced from inorganic starting materials and disproving the theory of vitalism. This opened a new research field in chemistry, and by the end of the 19th century, scientists were able to synthesize hundreds of organic compounds. The most important among them are mauve, magenta, and other synthetic dyes, as well as the widely used drug aspirin. The discovery of the artificial synthesis of urea contributed greatly to the theory of isomerism, as the empirical chemical formulas for urea and ammonium cyanate are identical (see Wöhler synthesis). In 1832, Friedrich Wöhler and Justus von Liebig discovered and explained functional groups and radicals in relation to organic chemistry, as well as first synthesizing benzaldehyde. Liebig, a German chemist, made major contributions to agricultural and biological chemistry, and worked on the organization of organic chemistry. Liebig is considered the "father of the fertilizer industry" for his discovery of nitrogen as an essential plant nutrient, and his formulation of the Law of the Minimum which described the effect of individual nutrients on crops. 
In 1840, Germain Hess proposed Hess's law, an early statement of the law of conservation of energy, which establishes that energy changes in a chemical process depend only on the states of the starting and product materials and not on the specific pathway taken between the two states. In 1847, Hermann Kolbe obtained acetic acid from completely inorganic sources, further disproving vitalism. In 1848, William Thomson, 1st Baron Kelvin (commonly known as Lord Kelvin) established the concept of absolute zero, the temperature at which all molecular motion ceases. In 1849, Louis Pasteur discovered that the racemic form of tartaric acid is a mixture of the levorotatory and dextrorotatory forms, thus clarifying the nature of optical rotation and advancing the field of stereochemistry. In 1852, August Beer proposed Beer's law, which relates the amount of light a solution absorbs to the concentration of the absorbing substance. Based partly on earlier work by Pierre Bouguer and Johann Heinrich Lambert, it established the analytical technique known as spectrophotometry. In 1855, Benjamin Silliman, Jr. pioneered methods of petroleum cracking, which made the entire modern petrochemical industry possible. Avogadro's hypothesis began to gain broad appeal among chemists only after his compatriot and fellow scientist Stanislao Cannizzaro demonstrated its value in 1858, two years after Avogadro's death. Cannizzaro's chemical interests had originally centered on natural products and on reactions of aromatic compounds; in 1853 he discovered that when benzaldehyde is treated with concentrated base, both benzoic acid and benzyl alcohol are produced--a phenomenon known today as the Cannizzaro reaction. In his 1858 pamphlet, Cannizzaro showed that a complete return to the ideas of Avogadro could be used to construct a consistent and robust theoretical structure that fit nearly all of the available empirical evidence (the sketch following this paragraph illustrates the logic). For instance, he pointed to evidence that suggested that not all elementary gases consist of two atoms per molecule--some were monatomic, most were diatomic, and a few were even more complex. Another point of contention had been the formulas for compounds of the alkali metals (such as sodium) and the alkaline earth metals (such as calcium), which, in view of their striking chemical analogies, most chemists had wanted to assign to the same formula type. Cannizzaro argued that placing these metals in different categories had the beneficial result of eliminating certain anomalies when using their physical properties to deduce atomic weights. Unfortunately, Cannizzaro's pamphlet was published initially only in Italian and had little immediate impact. The real breakthrough came with an international chemical congress held in the German town of Karlsruhe in September 1860, at which most of the leading European chemists were present. The Karlsruhe Congress had been arranged by Kekulé, Wurtz, and a few others who shared Cannizzaro's sense of the direction chemistry should go. Speaking in French (as everyone there did), Cannizzaro made an indelible impression on the assembled body with his eloquence and logic. Moreover, his friend Angelo Pavesi distributed Cannizzaro's pamphlet to attendees at the end of the meeting; more than one chemist later wrote of the decisive impression the reading of this document provided. For instance, Lothar Meyer later wrote that on reading Cannizzaro's paper, "The scales seemed to fall from my eyes." Cannizzaro thus played a crucial role in winning the battle for reform.
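A minimal sketch of that logic, with rounded modern data rather than figures from the 1858 pamphlet: molecular weights follow from vapor densities (via Avogadro's hypothesis), and the atomic weight of an element is taken as the smallest mass of that element found in one molecular weight of any of its compounds, the other compounds containing whole-number multiples of it.

```python
# Cannizzaro-style deduction of an atomic weight (illustrative rounded
# data, not values from the 1858 pamphlet). For each hydrogen-containing
# compound: (molecular weight from vapor density, mass fraction of H).
compounds = {
    "hydrogen chloride": (36.5, 0.0276),
    "water":             (18.0, 0.1119),
    "ammonia":           (17.0, 0.1776),
    "methane":           (16.0, 0.2514),
}

# Mass of hydrogen contained in one molecular weight of each compound.
h_per_molecule = {name: M * frac for name, (M, frac) in compounds.items()}

# The atomic weight is the smallest such mass; the others come out as
# (near-)integer multiples of it, i.e. whole atoms per molecule.
atomic_weight = min(h_per_molecule.values())
for name, m in h_per_molecule.items():
    print(f"{name}: {m:.2f} parts H per molecule ({m / atomic_weight:.1f} atoms)")
print(f"inferred atomic weight of hydrogen: ~{atomic_weight:.2f}")
```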
The system advocated by him, and soon thereafter adopted by most leading chemists, is substantially identical to what is still used today. In 1856, the 18-year-old William Henry Perkin, challenged by his professor August Wilhelm von Hofmann, sought to synthesize quinine, the anti-malarial drug, from coal tar. In one attempt, Perkin oxidized aniline using potassium dichromate; toluidine impurities in the aniline reacted with it and yielded a black solid, suggesting a "failed" organic synthesis. Cleaning the flask with alcohol, Perkin noticed purple portions of the solution: a byproduct of the attempt was the first synthetic dye, known as mauveine or Perkin's mauve. Perkin's discovery is the foundation of the dye synthesis industry, one of the earliest successful chemical industries. German chemist August Kekulé von Stradonitz's most important single contribution was his structural theory of organic composition, outlined in two articles published in 1857 and 1858 and treated in great detail in the pages of his extraordinarily popular Lehrbuch der organischen Chemie ("Textbook of Organic Chemistry"), the first installment of which appeared in 1859 and gradually extended to four volumes. Kekulé argued that tetravalent carbon atoms - that is, carbon forming exactly four chemical bonds - could link together to form what he called a "carbon chain" or a "carbon skeleton," to which other atoms with other valences (such as hydrogen, oxygen, nitrogen, and chlorine) could join. He was convinced that it was possible for the chemist to specify this detailed molecular architecture for at least the simpler organic compounds known in his day. Kekulé was not the only chemist to make such claims in this era. The Scottish chemist Archibald Scott Couper published a substantially similar theory nearly simultaneously, and the Russian chemist Aleksandr Butlerov did much to clarify and expand structure theory. However, it was predominantly Kekulé's ideas that prevailed in the chemical community. British chemist and physicist William Crookes is noted for his cathode ray studies, fundamental in the development of atomic physics. His researches on electrical discharges through a rarefied gas led him to observe the dark space around the cathode, now called the Crookes dark space. He demonstrated that cathode rays travel in straight lines and produce phosphorescence and heat when they strike certain materials. A pioneer of vacuum tubes, Crookes invented the Crookes tube - an early experimental discharge tube with a partial vacuum, with which he studied the behavior of cathode rays. With the introduction of spectrum analysis by Robert Bunsen and Gustav Kirchhoff (1859-1860), Crookes applied the new technique to the study of selenium compounds. Bunsen and Kirchhoff had previously used spectroscopy as a means of chemical analysis to discover caesium and rubidium. In 1861, Crookes used this process to discover thallium in some seleniferous deposits. He continued work on that new element, isolated it, studied its properties, and in 1873 determined its atomic weight. During his studies of thallium, Crookes discovered the principle of the Crookes radiometer, a device that converts light radiation into rotary motion. The principle of this radiometer has found numerous applications in the development of sensitive measuring instruments. In 1862, Alexander Parkes exhibited Parkesine, one of the earliest synthetic polymers, at the International Exhibition in London.
This discovery formed the foundation of the modern plastics industry. In 1864, Cato Maximilian Guldberg and Peter Waage, building on Claude Louis Berthollet's ideas, proposed the law of mass action. In 1865, Johann Josef Loschmidt estimated the number of molecules in a given volume of gas, a determination from which the constant later named Avogadro's number could be derived. In 1865, August Kekulé, based partially on the work of Loschmidt and others, established the structure of benzene as a six carbon ring with alternating single and double bonds. Kekulé's novel proposal for benzene's cyclic structure was much contested but was never replaced by a superior theory. This theory provided the scientific basis for the dramatic expansion of the German chemical industry in the last third of the 19th century. A large fraction of the organic compounds known today are aromatic, containing at least one hexagonal benzene ring of the sort that Kekulé advocated. Kekulé is also famous for having clarified the nature of aromatic compounds, which are compounds based on the benzene molecule. In 1865, Adolf von Baeyer began work on indigo dye, a milestone in modern industrial organic chemistry which revolutionized the dye industry. Swedish chemist and inventor Alfred Nobel found that when nitroglycerin was incorporated in an absorbent inert substance like kieselguhr (diatomaceous earth) it became safer and more convenient to handle, and he patented this mixture in 1867 as dynamite. Nobel later combined nitroglycerin with various nitrocellulose compounds similar to collodion, eventually settling on a more efficient recipe that combined another nitrate explosive and gave a transparent, jelly-like substance more powerful than dynamite. Gelignite, or blasting gelatin, as it was named, was patented in 1876 and was followed by a host of similar combinations, modified by the addition of potassium nitrate and various other substances. An important breakthrough in making sense of the list of known chemical elements (as well as in understanding the internal structure of atoms) was Dmitri Mendeleev's development of the first modern periodic table, or the periodic classification of the elements. Mendeleev, a Russian chemist, felt that there was some type of order to the elements, and he spent more than thirteen years of his life collecting data and assembling the concept, initially with the idea of resolving some of the disorder in the field for his students. Mendeleev found that, when all the known chemical elements were arranged in order of increasing atomic weight, the resulting table displayed a recurring pattern, or periodicity, of properties within groups of elements. Mendeleev's law allowed him to build up a systematic periodic table of all the 66 elements then known based on atomic mass, which he published in Principles of Chemistry in 1869. His first periodic table was compiled on the basis of arranging the elements in ascending order of atomic weight and grouping them by similarity of properties. Mendeleev had such faith in the validity of the periodic law that he proposed changes to the generally accepted values for the atomic weight of a few elements and, in his version of the periodic table of 1871, predicted the locations within the table of unknown elements together with their properties.
He even predicted the likely properties of three yet-to-be-discovered elements, which he called ekaboron (Eb), ekaaluminium (Ea), and ekasilicon (Es); these predictions proved to be good descriptions of scandium, gallium, and germanium, respectively, each of which fills the spot in the periodic table that Mendeleev had assigned to it. At first the periodic system did not attract much interest among chemists. However, with the discovery of the predicted elements, notably gallium in 1875, scandium in 1879, and germanium in 1886, it began to win wide acceptance. The subsequent proof of many of his predictions within his lifetime brought fame to Mendeleev as the founder of the periodic law. This organization surpassed earlier attempts at classification by Alexandre-Émile Béguyer de Chancourtois, who published the telluric helix, an early, three-dimensional version of the periodic table of the elements, in 1862; John Newlands, who proposed the law of octaves (a precursor to the periodic law) in 1864; and Lothar Meyer, who developed an early version of the periodic table with 28 elements organized by valence in 1864. Mendeleev's table did not include any of the noble gases, however, which had not yet been discovered. Gradually the periodic law and table became the framework for a great part of chemical theory. By the time Mendeleev died in 1907, he enjoyed international recognition and had received distinctions and awards from many countries. In 1873, Jacobus Henricus van 't Hoff and Joseph Achille Le Bel, working independently, developed a model of chemical bonding that explained the chirality experiments of Pasteur and provided a physical cause for optical activity in chiral compounds. van 't Hoff's publication, called Voorstel tot Uitbreiding der Tegenwoordige in de Scheikunde gebruikte Structuurformules in de Ruimte, etc. (Proposal for the development of 3-dimensional chemical structural formulae), consisting of twelve pages of text and one page of diagrams, gave the impetus to the development of stereochemistry. The concept of the "asymmetrical carbon atom", dealt with in this publication, supplied an explanation of the occurrence of numerous isomers that were inexplicable by means of the then-current structural formulae. At the same time he pointed out the existence of a relationship between optical activity and the presence of an asymmetrical carbon atom. American mathematical physicist J. Willard Gibbs's work on the applications of thermodynamics was instrumental in transforming physical chemistry into a rigorous deductive science. During the years from 1876 to 1878, Gibbs worked on the principles of thermodynamics, applying them to the complex processes involved in chemical reactions. He discovered the concept of chemical potential, or the "fuel" that makes chemical reactions work. In 1876 he published his most famous contribution, "On the Equilibrium of Heterogeneous Substances", a compilation of his work on thermodynamics and physical chemistry which laid out the concept of free energy to explain the physical basis of chemical equilibria. In these essays were the beginnings of Gibbs's theories of phases of matter: he considered each state of matter a phase, and each substance a component. Gibbs took the variables involved in a chemical reaction - temperature, pressure, energy, volume, and entropy - and showed how the number of coexisting phases is constrained by one simple relation, known as Gibbs' phase rule.
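The rule itself is compact enough to state and apply directly (modern notation):

```latex
% Gibbs' phase rule: F degrees of freedom for C components distributed
% among P coexisting phases.
\[
F = C - P + 2
\]
% Worked case: pure water (C = 1) with ice, liquid, and vapor coexisting
% (P = 3) gives F = 1 - 3 + 2 = 0 degrees of freedom: the triple point,
% which is therefore fixed in both temperature and pressure.
```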
Within this paper was perhaps his most outstanding contribution, the introduction of the concept of free energy, now universally called Gibbs free energy in his honor. The Gibbs free energy relates the tendency of a physical or chemical system to simultaneously lower its energy and increase its disorder, or entropy, in a spontaneous natural process. Gibbs's approach allows a researcher to calculate the change in free energy for a process, such as a chemical reaction, and thus to determine whether that process will occur spontaneously. Since virtually all chemical processes and many physical ones involve such changes, his work has significantly impacted both the theoretical and experimental aspects of these sciences. In 1877, Ludwig Boltzmann established statistical derivations of many important physical and chemical concepts, including entropy and the distribution of molecular velocities in the gas phase. Together with Boltzmann and James Clerk Maxwell, Gibbs created a new branch of theoretical physics called statistical mechanics (a term that he coined), explaining the laws of thermodynamics as consequences of the statistical properties of large ensembles of particles. Gibbs also worked on the application of Maxwell's equations to problems in physical optics. Gibbs's derivation of the phenomenological laws of thermodynamics from the statistical properties of systems with many particles was presented in his highly influential textbook Elementary Principles in Statistical Mechanics, published in 1902, a year before his death. In that work, Gibbs reviewed the relationship between the laws of thermodynamics and the statistical theory of molecular motions. The overshooting of the original function by partial sums of a Fourier series at points of discontinuity is known as the Gibbs phenomenon. German engineer Carl von Linde's invention of a continuous process of liquefying gases in large quantities formed a basis for the modern technology of refrigeration and provided both impetus and means for conducting scientific research at low temperatures and very high vacuums. He developed a methyl ether refrigerator (1874) and an ammonia refrigerator (1876). Though other refrigeration units had been developed earlier, Linde's were the first to be designed with the aim of precise calculations of efficiency. In 1895 he set up a large-scale plant for the production of liquid air. Six years later he developed a method for separating pure liquid oxygen from liquid air that resulted in widespread industrial conversion to processes utilizing oxygen (e.g., in steel manufacture). In 1883, Svante Arrhenius developed an ion theory to explain conductivity in electrolytes. In 1884, Jacobus Henricus van 't Hoff published Études de Dynamique chimique (Studies in Dynamic Chemistry), a seminal study on chemical kinetics; with this work, van 't Hoff entered the field of physical chemistry for the first time. Of great importance was his development of the general thermodynamic relationship between the heat of conversion and the displacement of the equilibrium as a result of temperature variation. At constant volume, the equilibrium in a system will tend to shift in such a direction as to oppose a temperature change imposed upon the system. Thus, lowering the temperature results in heat development, while increasing the temperature results in heat absorption.
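In modern notation this relationship is usually written as the van 't Hoff equation, d ln K/dT = ΔH°/(RT²). The sketch below (illustrative numbers, assuming ΔH° is roughly constant over the temperature interval) integrates it to show the shift just described:

```python
import math

R = 8.314  # gas constant, J/(mol*K)

def k2_from_k1(k1, dH, T1, T2):
    """Integrated van 't Hoff equation, assuming a constant reaction
    enthalpy dH (J/mol): ln(K2/K1) = -(dH/R) * (1/T2 - 1/T1)."""
    return k1 * math.exp(-(dH / R) * (1.0 / T2 - 1.0 / T1))

# Illustrative exothermic reaction: dH = -50 kJ/mol, K = 100 at 298 K.
K_350 = k2_from_k1(100.0, dH=-50_000.0, T1=298.0, T2=350.0)
print(f"K at 350 K ~ {K_350:.1f}")
# ~5.0: raising the temperature shifts the equilibrium away from the
# heat-releasing direction, exactly the opposition described above.
```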
This principle of mobile equilibrium was subsequently (1885) put in a general form by Henry Louis Le Chatelier, who extended the principle to include compensation, by change of volume, for imposed pressure changes. The van 't Hoff-Le Chatelier principle, or simply Le Chatelier's principle, explains the response of dynamic chemical equilibria to external stresses. In 1884, Hermann Emil Fischer proposed the structure of purine, a key structure in many biomolecules, which he later synthesized in 1898. He also began work on the chemistry of glucose and related sugars. In 1885, Eugene Goldstein named the cathode ray, later discovered to be composed of electrons, and the canal ray, later discovered to be positive hydrogen ions that had been stripped of their electrons in a cathode ray tube; these would later be named protons. The year 1885 also saw the publishing of J. H. van 't Hoff's L'Équilibre chimique dans les Systèmes gazeux ou dissous à l'État dilué (Chemical equilibrium in gaseous systems or dilute solutions), which dealt with his theory of dilute solutions. Here he demonstrated that the "osmotic pressure" in solutions which are sufficiently dilute is proportional to the concentration and the absolute temperature, so that this pressure can be represented by a formula which deviates from the formula for gas pressure only by a coefficient i. He also determined the value of i by various methods, for example by means of the vapor pressure and François-Marie Raoult's results on the lowering of the freezing point. Thus van 't Hoff was able to prove that thermodynamic laws are valid not only for gases but also for dilute solutions. His pressure laws, given general validity by the electrolytic dissociation theory of Arrhenius (1884-1887), who in 1888 became the first foreign scientist to come and work with van 't Hoff in Amsterdam, are considered among the most comprehensive and important in the natural sciences. In 1893, Alfred Werner discovered the octahedral structure of cobalt complexes, thus establishing the field of coordination chemistry. The most celebrated discoveries of Scottish chemist William Ramsay were made in inorganic chemistry. Ramsay was intrigued by the British physicist John Strutt, 3rd Baron Rayleigh's 1892 discovery that the atomic weight of nitrogen found in chemical compounds was lower than that of nitrogen found in the atmosphere. Rayleigh ascribed this discrepancy to a light gas included in chemical compounds of nitrogen, while Ramsay suspected a hitherto undiscovered heavy gas in atmospheric nitrogen. Using two different methods to remove all known gases from air, Ramsay and Lord Rayleigh were able to announce in 1894 that they had found a monatomic, chemically inert gaseous element that constituted nearly 1 percent of the atmosphere; they named it argon. The following year, Ramsay liberated another inert gas from a mineral called cleveite; this proved to be helium, previously known only in the solar spectrum. In his book The Gases of the Atmosphere (1896), Ramsay showed that the positions of helium and argon in the periodic table of elements indicated that at least three more noble gases might exist. In 1898 Ramsay and the British chemist Morris W. Travers isolated these elements--called neon, krypton, and xenon--from air brought to a liquid state at low temperature and high pressure. Sir William Ramsay worked with Frederick Soddy to demonstrate, in 1903, that alpha particles (helium nuclei) were continually produced during the radioactive decay of a sample of radium.
Ramsay was awarded the 1904 Nobel Prize for Chemistry in recognition of "services in the discovery of the inert gaseous elements in air, and his determination of their place in the periodic system." In 1897, J. J. Thomson discovered the electron using the cathode ray tube. In 1898, Wilhelm Wien demonstrated that canal rays (streams of positive ions) can be deflected by magnetic fields, and that the amount of deflection depends on the mass-to-charge ratio; this discovery would lead to the analytical technique known as mass spectrometry in 1912. Marie Skłodowska-Curie was a Polish-born French physicist and chemist who is famous for her pioneering research on radioactivity. She and her husband are considered to have laid the cornerstone of the nuclear age with their research on radioactivity. Marie was fascinated with the work of Henri Becquerel, a French physicist who discovered in 1896 that uranium casts off rays similar to the X-rays discovered by Wilhelm Röntgen. Marie Curie began studying uranium in late 1897 and theorized, according to a 1904 article she wrote for Century magazine, "that the emission of rays by the compounds of uranium is a property of the metal itself--that it is an atomic property of the element uranium independent of its chemical or physical state." Curie took Becquerel's work a few steps further, conducting her own experiments on uranium rays. She discovered that the rays remained constant, no matter the condition or form of the uranium. The rays, she theorized, came from the element's atomic structure. This revolutionary idea helped create the field of atomic physics, and the Curies coined the word "radioactivity" to describe the phenomenon. Pierre and Marie further explored radioactivity by working to separate the substances in uranium ores and then using the electrometer to make radiation measurements to "trace" the minute amount of unknown radioactive element among the fractions that resulted. Working with the mineral pitchblende, the pair discovered a new radioactive element in 1898. They named the element polonium, after Marie's native country of Poland. On December 21, 1898, the Curies detected the presence of another radioactive material in the pitchblende. They presented this finding to the French Academy of Sciences on December 26, proposing that the new element be called radium. The Curies then went to work isolating polonium and radium from naturally occurring compounds to prove that they were new elements. In 1902, the Curies announced that they had produced a decigram of pure radium, demonstrating its existence as a unique chemical element. While it took three years for them to isolate radium, they were never able to isolate polonium. Along with the discovery of two new elements and the development of techniques for isolating radioactive isotopes, Curie oversaw the world's first studies of the treatment of neoplasms using radioactive isotopes. With Henri Becquerel and her husband, Pierre Curie, she was awarded the 1903 Nobel Prize for Physics. She was the sole winner of the 1911 Nobel Prize for Chemistry. She was the first woman to win a Nobel Prize, and she is the only woman to have won the award in two different fields. While working with Marie to extract pure substances from ores, an undertaking that really required industrial resources but that they achieved in relatively primitive conditions, Pierre himself concentrated on the physical study (including luminous and chemical effects) of the new radiations.
Through the action of magnetic fields on the rays given out by radium, he proved the existence of electrically positive, negative, and neutral emissions; these Ernest Rutherford was afterward to call alpha, beta, and gamma rays. Pierre then studied these radiations by calorimetry and also observed the physiological effects of radium, thus opening the way to radium therapy. Among Pierre Curie's discoveries was that ferromagnetic substances exhibit a critical temperature transition, above which the substances lose their ferromagnetic behavior - this is known as the "Curie point." He was elected to the Academy of Sciences (1905), having in 1903, jointly with Marie, received the Royal Society's prestigious Davy Medal and, jointly with her and Becquerel, the Nobel Prize for Physics. He was run over by a carriage in the rue Dauphine in Paris in 1906 and died instantly. His complete works were published in 1908. New Zealand-born chemist and physicist Ernest Rutherford is considered to be "the father of nuclear physics." Rutherford is best known for devising the names alpha, beta, and gamma to classify various forms of radioactive "rays" which were poorly understood at his time (alpha and beta rays are particle beams, while gamma rays are a form of high-energy electromagnetic radiation). Rutherford deflected alpha rays with both electric and magnetic fields in 1903. Working with Frederick Soddy, Rutherford explained that radioactivity is due to the transmutation of elements, now known to involve nuclear reactions. He also observed that the intensity of radioactivity of a radioactive element decreases over a unique and regular amount of time until a point of stability, and he named the halving time the "half-life." In 1901 and 1902 he worked with Frederick Soddy to prove that atoms of one radioactive element would spontaneously turn into another, by expelling a piece of the atom at high velocity. In 1909 at the University of Manchester, Rutherford oversaw an experiment conducted by his students Hans Geiger (known for the Geiger counter) and Ernest Marsden. In the Geiger-Marsden experiment, a beam of alpha particles, generated by the radioactive decay of radon, was directed normally onto a sheet of very thin gold foil in an evacuated chamber. Under the prevailing plum pudding model, the alpha particles should all have passed through the foil and hit the detector screen, or have been deflected by, at most, a few degrees. However, the actual results surprised Rutherford. Although many of the alpha particles did pass through as expected, a very small fraction were deflected through angles much larger than 90 degrees, some even bouncing back toward the alpha source. Rutherford realized that, because some of the alpha particles were deflected or reflected, the atom had a concentrated centre of positive charge and of relatively large mass - Rutherford later termed this positive centre the "atomic nucleus". The alpha particles had either hit the positive centre directly or passed by it close enough to be affected by its positive charge. Since many other particles passed through the gold foil, the positive centre would have to be relatively small compared to the rest of the atom - meaning that the atom is mostly open space.
From his results, Rutherford developed a model of the atom that was similar to the solar system, known as the Rutherford model: like planets, electrons orbited a central, sun-like nucleus. For his investigations into radioactivity and the disintegration of the elements, Rutherford received the 1908 Nobel Prize in Chemistry. In 1903, Mikhail Tsvet invented chromatography, an important analytic technique. In 1904, Hantaro Nagaoka proposed an early nuclear model of the atom, where electrons orbit a dense massive nucleus. In 1905, Fritz Haber and Carl Bosch developed the Haber process for making ammonia, a milestone in industrial chemistry with deep consequences in agriculture. The Haber process, or Haber-Bosch process, combined nitrogen and hydrogen to form ammonia in industrial quantities for production of fertilizer and munitions. Food production for half the world's current population depends on this method for producing fertilizer. Haber, along with Max Born, proposed the Born-Haber cycle as a method for evaluating the lattice energy of an ionic solid. Haber has also been described as the "father of chemical warfare" for his work developing and deploying chlorine and other poisonous gases during World War I. In 1905, Albert Einstein explained Brownian motion in a way that definitively proved atomic theory. In 1907, Leo Baekeland invented Bakelite, one of the first commercially successful plastics. In 1909, American physicist Robert Andrews Millikan - who had studied in Europe under Walther Nernst and Max Planck - measured the charge of individual electrons with unprecedented accuracy through the oil drop experiment, in which he measured the electric charges on tiny falling water (and later oil) droplets. His study established that any particular droplet's electrical charge is a multiple of a definite, fundamental value -- the electron's charge -- and thus a confirmation that all electrons have the same charge. Beginning in 1912, he spent several years investigating and finally proving Albert Einstein's proposed linear relationship between energy and frequency, and providing the first direct photoelectric support for Planck's constant. In 1923 Millikan was awarded the Nobel Prize for Physics. In 1909, S. P. L. Sørensen invented the pH concept and developed methods for measuring acidity. In 1911, Antonius Van den Broek proposed the idea that the elements on the periodic table are more properly organized by positive nuclear charge rather than by atomic weight. In 1911, the first Solvay Conference was held in Brussels, bringing together many of the most prominent scientists of the day. In 1912, William Henry Bragg and William Lawrence Bragg proposed Bragg's law and established the field of X-ray crystallography, an important tool for elucidating the crystal structure of substances. In 1912, Peter Debye developed the concept of the molecular dipole to describe asymmetric charge distribution in some molecules. In 1913, Niels Bohr, a Danish physicist, introduced the concepts of quantum mechanics to atomic structure by proposing what is now known as the Bohr model of the atom, where electrons exist only in strictly defined circular orbits around the nucleus, like rungs on a ladder.
The Bohr model is a planetary model in which the negatively charged electrons orbit a small, positively charged nucleus similar to the planets orbiting the Sun (except that the orbits are not planar) - the gravitational force of the solar system is mathematically akin to the attractive Coulomb (electrical) force between the positively charged nucleus and the negatively charged electrons. In the Bohr model, however, electrons orbit the nucleus in orbits that have a set size and energy - the energy levels are said to be quantized, which means that only certain orbits with certain radii are allowed; orbits in between simply don't exist. The energy of the orbit is related to its size - that is, the lowest energy is found in the smallest orbit. Bohr also postulated that electromagnetic radiation is absorbed or emitted when an electron moves from one orbit to another. Because only certain electron orbits are permitted, the emission of light accompanying a jump of an electron from an excited energy state to the ground state produces a unique emission spectrum for each element. Niels Bohr also worked on the principle of complementarity, which states that an electron can be interpreted in two mutually exclusive and equally valid ways, as a wave or as a particle. Later, in nuclear physics, Bohr hypothesized that an incoming particle would strike the nucleus and create an excited compound nucleus; this formed the basis of his liquid drop model and later provided a theoretical base for the explanation of nuclear fission. In 1913, Henry Moseley, working from Van den Broek's earlier idea, introduced the concept of atomic number to fix the inadequacies of Mendeleev's periodic table, which had been based on atomic weight. The peak of Frederick Soddy's career in radiochemistry was in 1913 with his formulation of the concept of isotopes, which stated that certain elements exist in two or more forms which have different atomic weights but which are indistinguishable chemically. He is remembered for proving the existence of isotopes of certain radioactive elements, and is also credited, along with others, with the discovery of the element protactinium in 1917. In 1913, J. J. Thomson expanded on the work of Wien by showing that charged subatomic particles can be separated by their mass-to-charge ratio, a technique known as mass spectrometry. American physical chemist Gilbert N. Lewis laid the foundation of valence bond theory; he was instrumental in developing a bonding theory based on the number of electrons in the outermost "valence" shell of the atom. In 1902, while Lewis was trying to explain valence to his students, he depicted atoms as constructed of a concentric series of cubes with electrons at each corner. This "cubic atom" explained the eight groups in the periodic table and represented his idea that chemical bonds are formed by electron transference to give each atom a complete set of eight outer electrons (an "octet"). Lewis's theory of chemical bonding continued to evolve and, in 1916, he published his seminal article "The Atom and the Molecule", which suggested that a chemical bond is a pair of electrons shared by two atoms. Lewis's model equated the classical chemical bond with the sharing of a pair of electrons between the two bonded atoms. Lewis introduced the "electron dot diagrams" in this paper to symbolize the electronic structures of atoms and molecules. Now known as Lewis structures, they are discussed in virtually every introductory chemistry book.
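The bookkeeping behind such diagrams is simple enough to automate. A minimal sketch (standard main-group valence-electron counts; the molecule encoding is this example's own convention):

```python
# Valence-electron bookkeeping behind Lewis dot structures (a sketch;
# molecules are encoded here as {element: atom count}).
VALENCE = {"H": 1, "C": 4, "N": 5, "O": 6, "F": 7, "Cl": 7}

def valence_electrons(molecule):
    """Total electrons available for bonding pairs and lone pairs."""
    return sum(VALENCE[el] * n for el, n in molecule.items())

for name, mol in [("H2O", {"H": 2, "O": 1}),
                  ("NH3", {"N": 1, "H": 3}),
                  ("CH4", {"C": 1, "H": 4})]:
    total = valence_electrons(mol)
    print(f"{name}: {total} valence electrons = {total // 2} electron pairs")

# H2O's 8 electrons form 2 bonding pairs and 2 lone pairs, completing
# oxygen's octet and each hydrogen's duet, exactly what a Lewis diagram
# of water depicts.
```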
Shortly after publication of his 1916 paper, Lewis became involved with military research. He did not return to the subject of chemical bonding until 1923, when he masterfully summarized his model in a short monograph entitled Valence and the Structure of Atoms and Molecules. His renewal of interest in this subject was largely stimulated by the activities of the American chemist and General Electric researcher Irving Langmuir, who between 1919 and 1921 popularized and elaborated Lewis's model. Langmuir subsequently introduced the term covalent bond. In 1921, Otto Stern and Walther Gerlach conceived the experiment, carried out the following year, that established the concept of quantum mechanical spin in subatomic particles. For cases where no sharing was involved, Lewis in 1923 developed the electron pair theory of acids and bases: Lewis redefined an acid as any atom or molecule with an incomplete octet that was thus capable of accepting electrons from another atom; bases were, of course, electron donors. His theory is known as the concept of Lewis acids and bases. In 1923, G. N. Lewis and Merle Randall published Thermodynamics and the Free Energy of Chemical Substances, the first modern treatise on chemical thermodynamics. The 1920s saw a rapid adoption and application of Lewis's model of the electron-pair bond in the fields of organic and coordination chemistry. In organic chemistry, this was primarily due to the efforts of the British chemists Arthur Lapworth, Robert Robinson, Thomas Lowry, and Christopher Ingold, while in coordination chemistry Lewis's bonding model was promoted through the efforts of the American chemist Maurice Huggins and the British chemist Nevil Sidgwick. [Figure: from left to right, top row, Louis de Broglie (1892-1987) and Wolfgang Pauli (1900-1958); second row, Erwin Schrödinger (1887-1961) and Werner Heisenberg (1901-1976).] In 1924, French quantum physicist Louis de Broglie published his thesis, in which he introduced a revolutionary theory of electron waves based on wave-particle duality. In his time, the wave and particle interpretations of light and matter were seen as being at odds with one another, but de Broglie suggested that these seemingly different characteristics were instead the same behavior observed from different perspectives -- that particles can behave like waves, and waves (radiation) can behave like particles. De Broglie's proposal offered an explanation of the restricted motion of electrons within the atom. The first publications of de Broglie's idea of "matter waves" had drawn little attention from other physicists, but a copy of his doctoral thesis chanced to reach Einstein, whose response was enthusiastic. Einstein stressed the importance of de Broglie's work both explicitly and by building further on it. In 1925, Austrian-born physicist Wolfgang Pauli developed the Pauli exclusion principle, which states that no two electrons around a single nucleus in an atom can occupy the same quantum state simultaneously, as described by four quantum numbers. Pauli made major contributions to quantum mechanics and quantum field theory - he was awarded the 1945 Nobel Prize for Physics for his discovery of the Pauli exclusion principle - as well as to solid-state physics, and he successfully hypothesized the existence of the neutrino. In addition to his original work, he wrote masterful syntheses of several areas of physical theory that are considered classics of scientific literature. In 1926, at the age of 39, Austrian theoretical physicist Erwin Schrödinger produced the papers that laid the foundations of quantum wave mechanics.
In those papers he described his partial differential equation that is the basic equation of quantum mechanics and bears the same relation to the mechanics of the atom as Newton's equations of motion bear to planetary astronomy. Adopting a proposal made by Louis de Broglie in 1924 that particles of matter have a dual nature and in some situations act like waves, Schrödinger introduced a theory describing the behaviour of such a system by a wave equation that is now known as the Schrödinger equation. The solutions to Schrödinger's equation, unlike the solutions to Newton's equations, are wave functions that can only be related to the probable occurrence of physical events. The readily visualized sequence of events of the planetary orbits of Newton is, in quantum mechanics, replaced by the more abstract notion of probability. (This aspect of the quantum theory made Schrödinger and several other physicists profoundly unhappy, and he devoted much of his later life to formulating philosophical objections to the generally accepted interpretation of the theory that he had done so much to create.) German theoretical physicist Werner Heisenberg was one of the key creators of quantum mechanics. In 1925, Heisenberg discovered a way to formulate quantum mechanics in terms of matrices; for that discovery, he was awarded the Nobel Prize for Physics for 1932. In 1927 he published his uncertainty principle, upon which he built his philosophy and for which he is best known. Heisenberg demonstrated that one can determine either the position of an electron in an atom or its momentum, but not both simultaneously with arbitrary precision. He also made important contributions to the theories of the hydrodynamics of turbulent flows, the atomic nucleus, ferromagnetism, cosmic rays, and subatomic particles, and he was instrumental in planning the first West German nuclear reactor at Karlsruhe, together with a research reactor in Munich, in 1957. Considerable controversy surrounds his work on atomic research during World War II. Some date the birth of quantum chemistry to the discovery of the Schrödinger equation and its application to the hydrogen atom in 1926. However, the 1927 article of Walter Heitler and Fritz London is often recognised as the first milestone in the history of quantum chemistry: it was the first application of quantum mechanics to the diatomic hydrogen molecule, and thus to the phenomenon of the chemical bond. In the following years much progress was accomplished by Edward Teller, Robert S. Mulliken, Max Born, J. Robert Oppenheimer, Linus Pauling, Erich Hückel, Douglas Hartree, and Vladimir Aleksandrovich Fock, to cite a few. As Paul Dirac famously put it in 1929: "The underlying physical laws necessary for the mathematical theory of a large part of physics and the whole of chemistry are thus completely known, and the difficulty is only that the exact application of these laws leads to equations much too complicated to be soluble. It therefore becomes desirable that approximate practical methods of applying quantum mechanics should be developed, which can lead to an explanation of the main features of complex atomic systems without too much computation." Hence the quantum mechanical methods developed in the 1930s and 1940s are often referred to as theoretical molecular or atomic physics, to underline the fact that they were more the application of quantum mechanics to chemistry and spectroscopy than answers to chemically relevant questions.
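The equation at the heart of this program is compact; in its modern time-independent form for a single particle (a standard textbook formulation, not Schrödinger's original notation):

```latex
% Time-independent Schroedinger equation for a particle of mass m moving
% in a potential V(r): the solutions psi are wave functions and E the
% allowed (quantized) energies.
\[
-\frac{\hbar^{2}}{2m}\,\nabla^{2}\psi(\mathbf{r}) + V(\mathbf{r})\,\psi(\mathbf{r}) = E\,\psi(\mathbf{r})
\]
% For the hydrogen atom, with the Coulomb potential V(r) = -e^2/(4 pi eps0 r),
% the bound-state energies come out as E_n = -13.6 eV / n^2, reproducing
% the levels of Bohr's model.
```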
A milestone in quantum chemistry came in 1951 with Clemens C. J. Roothaan's seminal paper on what are now called the Roothaan equations, which opened the way to the solution of the self-consistent field equations for small molecules like hydrogen or nitrogen. Those computations were performed with the help of tables of integrals computed on the most advanced computers of the time. In the 1940s many physicists had turned from molecular or atomic physics to nuclear physics (like J. Robert Oppenheimer and Edward Teller). Glenn T. Seaborg was an American nuclear chemist best known for his work on isolating and identifying transuranium elements (those heavier than uranium). He shared the 1951 Nobel Prize for Chemistry with Edwin Mattison McMillan for their independent discoveries of transuranium elements. Seaborgium was named in his honour, making him one of the few people, along with Albert Einstein and Yuri Oganessian, for whom a chemical element was named during his lifetime. By the mid-20th century, in principle, the integration of physics and chemistry was extensive, with chemical properties explained as the result of the electronic structure of the atom; Linus Pauling's book The Nature of the Chemical Bond used the principles of quantum mechanics to deduce bond angles in ever-more complicated molecules. However, though some principles deduced from quantum mechanics were able to predict qualitatively some chemical features for biologically relevant molecules, they were, until the end of the 20th century, more a collection of rules, observations, and recipes than rigorous ab initio quantitative methods. This heuristic approach triumphed in 1953 when James Watson and Francis Crick deduced the double helical structure of DNA by constructing models constrained and informed by the knowledge of the chemistry of the constituent parts and the X-ray diffraction patterns obtained by Rosalind Franklin. This discovery led to an explosion of research into the biochemistry of life. In the same year, the Miller-Urey experiment demonstrated that basic constituents of protein, simple amino acids, could themselves be built up from simpler molecules in a simulation of primordial processes on Earth. Though many questions remain about the true nature of the origin of life, this was the first attempt by chemists to study hypothetical processes in the laboratory under controlled conditions. In 1983 Kary Mullis devised a method for the in-vitro amplification of DNA, known as the polymerase chain reaction (PCR), which revolutionized the chemical processes used to manipulate DNA in the laboratory (the sketch below illustrates the underlying arithmetic). PCR could be used to synthesize specific pieces of DNA and made possible the sequencing of the DNA of organisms, which culminated in the huge Human Genome Project. An important piece of the double helix puzzle was solved by one of Pauling's students, Matthew Meselson, together with Frank Stahl; the result of their collaboration (the Meselson-Stahl experiment) has been called "the most beautiful experiment in biology". They used a centrifugation technique that sorted molecules according to differences in weight. Because nitrogen is a component of DNA, isotopically labelled nitrogen atoms could be tracked through DNA replication in bacteria. In 1970, John Pople developed the Gaussian program, greatly easing computational chemistry calculations. In 1971, Yves Chauvin offered an explanation of the reaction mechanism of olefin metathesis reactions.
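The power of PCR noted above rests on simple exponential arithmetic: each thermal cycle can at most double the number of copies of the target sequence. An idealized back-of-the-envelope sketch (assuming perfect doubling every cycle, which real reactions only approximate):

```python
# Idealized PCR amplification: perfect doubling of the target sequence
# in each thermal cycle (real efficiencies are somewhat lower).
def copies_after(cycles, initial_copies=1):
    return initial_copies * 2 ** cycles

# A single DNA template after a typical 30-cycle run:
print(f"{copies_after(30):,}")  # 1,073,741,824 -- about a billion copies,
                                # which is why trace samples become usable
```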
In 1975, Karl Barry Sharpless and his group discovered stereoselective oxidation reactions, including the Sharpless epoxidation, Sharpless asymmetric dihydroxylation, and Sharpless oxyamination. In 1985, Harold Kroto, Robert Curl and Richard Smalley discovered fullerenes, a class of large carbon molecules superficially resembling the geodesic dome designed by architect R. Buckminster Fuller. In 1991, Sumio Iijima used electron microscopy to discover a type of cylindrical fullerene known as a carbon nanotube, though earlier work had been done in the field as early as 1951. This material is an important component in the field of nanotechnology. In 1994, Robert A. Holton and his group achieved the first total synthesis of Taxol. In 1995, Eric Cornell and Carl Wieman produced the first Bose-Einstein condensate, a substance that displays quantum mechanical properties on the macroscopic scale. Classically, before the 20th century, chemistry was defined as the science of the nature of matter and its transformations. It was therefore clearly distinct from physics, which was not concerned with such dramatic transformations of matter. Moreover, in contrast to physics, chemistry made little use of mathematics; some chemists were even particularly reluctant to use mathematics within chemistry. For example, Auguste Comte wrote in 1830: "Every attempt to employ mathematical methods in the study of chemical questions must be considered profoundly irrational and contrary to the spirit of chemistry.... if mathematical analysis should ever hold a prominent place in chemistry -- an aberration which is happily almost impossible -- it would occasion a rapid and widespread degeneration of that science." However, in the second half of the 19th century the situation changed, and August Kekulé wrote in 1867: "I rather expect that we shall someday find a mathematico-mechanical explanation for what we now call atoms which will render an account of their properties." As understanding of the nature of matter has evolved, so too has the self-understanding of the science of chemistry by its practitioners. This continuing historical process of evaluation includes the categories, terms, aims and scope of chemistry. Additionally, the development of the social institutions and networks which support chemical enquiry are highly significant factors that enable the production, dissemination and application of chemical knowledge. (See Philosophy of chemistry.) The later part of the nineteenth century saw a huge increase in the exploitation of petroleum extracted from the earth for the production of a host of chemicals, largely replacing the use of whale oil, coal tar and naval stores. Large-scale production and refinement of petroleum provided feedstocks for liquid fuels such as gasoline and diesel, solvents, lubricants, asphalt, waxes, and for the production of many of the common materials of the modern world, such as synthetic fibers, plastics, paints, detergents, pharmaceuticals, adhesives and ammonia, used as fertilizer and for other purposes. Many of these required new catalysts and the utilization of chemical engineering for their cost-effective production. In the mid-twentieth century, control of the electronic structure of semiconductor materials was made precise by the creation of large ingots of extremely pure single crystals of silicon and germanium.
Accurate control of their chemical composition by doping with other elements made possible the production of the solid-state transistor in 1951 and, later, of tiny integrated circuits for use in electronic devices, especially computers. "Something has been said about the chemical excellence of cast iron in ancient India, and about the high industrial development of the Gupta times, when India was looked to, even by Imperial Rome, as the most skilled of the nations in such chemical industries as dyeing, tanning, soap-making, glass and cement... By the sixth century the Hindus were far ahead of Europe in industrial chemistry; they were masters of calcinations, distillation, sublimation, steaming, fixation, the production of light without heat, the mixing of anesthetic and soporific powders, and the preparation of metallic salts, compounds and alloys. The tempering of steel was brought in ancient India to a perfection unknown in Europe till our own times; King Porus is said to have selected, as a specially valuable gift from Alexander, not gold or silver, but thirty pounds of steel. The Moslems took much of this Hindu chemical science and industry to the Near East and Europe; the secret of manufacturing "Damascus" blades, for example, was taken by the Arabs from the Persians, and by the Persians from India." "Two systems of Hindu thought propound physical theories suggestively similar to those of Greece. Kanada, founder of the Vaisheshika philosophy, held that the world was composed of atoms as many in kind as the various elements. The Jains more nearly approximated to Democritus by teaching that all atoms were of the same kind, producing different effects by diverse modes of combinations. Kanada believed light and heat to be varieties of the same substance; Udayana taught that all heat comes from the sun; and Vachaspati, like Newton, interpreted light as composed of minute particles emitted by substances and striking the eye." "To form an idea of the historical place of Jabir's alchemy and to tackle the problem of its sources, it is advisable to compare it with what remains to us of the alchemical literature in the Greek language. One knows in which miserable state this literature reached us. Collected by Byzantine scientists from the tenth century, the corpus of the Greek alchemists is a cluster of incoherent fragments, going back to all the times since the third century until the end of the Middle Ages." "The efforts of Berthelot and Ruelle to put a little order in this mass of literature led only to poor results, and the later researchers, among them in particular Mrs. Hammer-Jensen, Tannery, Lagercrantz, von Lippmann, Reitzenstein, Ruska, Bidez, Festugiere and others, could make clear only few points of detail... The study of the Greek alchemists is not very encouraging. An even surface examination of the Greek texts shows that a very small part only was organized according to true experiments of laboratory: even the supposedly technical writings, in the state where we find them today, are unintelligible nonsense which refuses any interpretation. It is different with Jabir's alchemy. The relatively clear description of the processes and the alchemical apparatuses, the methodical classification of the substances, mark an experimental spirit which is extremely far away from the weird and odd esotericism of the Greek texts. The theory on which Jabir supports his operations is one of clearness and of an impressive unity.
More than with the other Arab authors, one notes with him a balance between theoretical teaching and practical teaching, between the `ilm and the `amal. In vain one would seek in the Greek texts a work as systematic as that which is presented for example in the Book of Seventy." (cf. Ahmad Y. Hassan, "A Critical Reassessment of the Geber Problem: Part Three", archived from the original on 2008-11-20; English translation, extract.)
<urn:uuid:ccaf0011-9ef3-40de-987a-e1fcc6d13ce0>
3.5625
20,452
Knowledge Article
Science & Tech.
27.346235
95,482,456
Study of Molecular Motion in Polymers by the NMR Method A considerable number of the problems in the physics and chemistry of polymers which may be solved by the NMR method are directly or indirectly related to the effect of molecular motion in the polymer on the NMR spectrum. Thus, the effect of the degree of crystallinity and the chain structure of the polymer on the form of the NMR line, examined in Chapter III, is explained ultimately by molecular motion. In this chapter we discuss only certain trends in the study of molecular motion in polymers. Keywords: Polymethyl Methacrylate, Natural Rubber, Molecular Motion, Glass Transition Point, Broad Component.
<urn:uuid:f1981bca-336f-46fc-b146-c88a61b92a6f>
2.515625
141
Truncated
Science & Tech.
29.738696
95,482,462
“The accelerator has been performing well given the challenges of running at the new collision energy of 13 teraelectronvolts (TeV),” says Mike Lamont of the LHC Operations team. “We’re currently running with 2244 bunches of protons in the machine, spaced at intervals of 25 nanoseconds, which is an achievement in itself.” The new energy regime highlighted several issues for the Operations team, including increased electron-cloud effects at high beam intensities, and falling particles of dust inside the beam-pipe causing premature beam dumps. To resolve these issues, the team has been patiently ramping up the beam intensity over a period of weeks using small numbers of bunches in various configurations. With the LHC now confidently colliding protons at 13 TeV, it’s time to switch to lead. “The main scope of our upcoming technical stop is to install zero-degree calorimeters for CMS and ATLAS to prepare for the upcoming lead run,” says Marzia Bernardini of the CERN Engineering department. These detectors, just next to the beamline at two interaction points of the LHC ring, measure the energy of neutral particles coming from the collisions. These measurements help physicists to understand the size of the collision area in lead-lead interactions. Of the seven experiments on the LHC ring, the 10,000-tonne ALICE detector is the most specialized for studying lead-lead collisions, in which the hundreds of protons and neutrons in two lead nuclei smash into one another at energies upwards of a few trillion electronvolts each. This forms a minuscule fireball in which everything “melts” into a quark-gluon plasma. “This new energy regime is of course very interesting for ALICE,” says ALICE spokesperson Paolo Giubellino. “In this run we will also have much larger statistics to work with and our detector has been significantly upgraded since the LHC’s first run. So we now have a better instrument to study the system to a much higher precision and at even higher temperatures than during run 1!” And ALICE is not the only collaboration interested in lead ions. “There’s a big team in CMS studying the data and ATLAS is also very interested,” says Lamont. Following their participation in the 2013 proton-lead run, LHCb will now be taking lead-lead data too -- evidence of a growing enthusiasm for the study of lead-lead collisions at the LHC’s new energy frontier.
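As a back-of-envelope aside (an illustration added here, not a claim from the article): the LHC's bending magnets fix the beam momentum per unit charge, so a fully stripped lead ion carries roughly Z/A of the proton beam energy per nucleon. A minimal sketch of that scaling, assuming 6.5 TeV proton beams:

```python
# Rough scaling of LHC collision energy from protons to lead ions,
# assuming beams of fixed magnetic rigidity (momentum per unit charge).
PROTON_BEAM_TEV = 6.5    # per-beam proton energy for 13 TeV collisions
Z_PB, A_PB = 82, 208     # charge and mass numbers of lead-208

# Each nucleon of a fully stripped Pb ion carries Z/A of the proton energy.
energy_per_nucleon = PROTON_BEAM_TEV * Z_PB / A_PB
sqrt_s_nn = 2 * energy_per_nucleon   # centre-of-mass energy per nucleon pair

print(f"energy per nucleon per beam: {energy_per_nucleon:.2f} TeV")
print(f"sqrt(s_NN) for Pb-Pb:        {sqrt_s_nn:.2f} TeV")
# Prints roughly 2.56 and 5.12 TeV; the 2015 lead-lead run was in fact
# taken at 5.02 TeV per nucleon pair, to match earlier proton-lead data.
```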
<urn:uuid:f4e2b219-ada3-4462-b5b7-e3fc065755e6>
2.546875
543
News Article
Science & Tech.
40.099377
95,482,473
(ăl'kān), any of a group of aliphatic hydrocarbons whose molecules contain only single bonds (see chemical bond). Alkanes have the general chemical formula CnH2n+2. An alkane is said to have a continuous chain if each carbon atom in its molecule is joined to at most two other carbon atoms; it is said to have a branched chain if any of its carbon atoms is joined to more than two other carbon atoms. The first four continuous-chain alkanes are methane, CH4; ethane, C2H6; propane, C3H8; and butane, C4H10. Names of continuous-chain alkanes whose molecules contain more than four carbon atoms are formed from a root that indicates the number of carbon atoms and the suffix -ane to indicate that the compound is an alkane; e.g., alkanes with 5, 6, 7, 8, 9, and 10 carbon atoms in their molecules are pentane, hexane, heptane, octane, nonane, and decane, respectively. The name of a branched-chain alkane is formed by adding prefixes to the name of the continuous-chain alkane from which it is considered to be derived; e.g., 2-methylpropane (called also isobutane) is thought of as being derived by replacing one of the hydrogen atoms bonded to the second (2-) carbon atom of a propane molecule with a methyl (CH3) group, forming CH3CH(CH3)2. Chemically, the alkanes are relatively unreactive. They are obtained by fractional distillation from petroleum and are used extensively as fuels. The alkanes are sometimes referred to as the methane series (after the simplest alkane) or as paraffins.
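The continuous-chain naming rule is mechanical enough to script. A minimal sketch (roots for n = 1 to 10 only; no branched-chain nomenclature):

```python
# Generate names and molecular formulas of the first ten
# continuous-chain alkanes from the general formula CnH2n+2.
ROOTS = {1: "meth", 2: "eth", 3: "prop", 4: "but", 5: "pent",
         6: "hex", 7: "hept", 8: "oct", 9: "non", 10: "dec"}

def alkane(n: int) -> tuple[str, str]:
    """Return (name, formula) of the continuous-chain alkane with n carbons."""
    if n not in ROOTS:
        raise ValueError("only n = 1..10 are covered here")
    carbon = "C" if n == 1 else f"C{n}"
    return ROOTS[n] + "ane", f"{carbon}H{2 * n + 2}"

for n in range(1, 11):
    name, formula = alkane(n)
    print(f"{name:>8}: {formula}")   # methane: CH4 ... decane: C10H22
```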
<urn:uuid:269e52f8-2f96-4f3a-950b-9c064d7e5c46>
3.65625
498
Knowledge Article
Science & Tech.
44.230902
95,482,476
"This gives us a new way of studying light-matter interactions," said Valeria Kleiman, a UF associate professor of chemistry. "It enables us to study not just how the molecule reacts, but actually to change how it reacts, so we can test different energy transfer pathways and find the most efficient one." Kleiman is the principal investigator in the research featured in a paper set to appear Friday in the journal Science. Her work focuses on molecules known as dendrimers whose many branching units make them good energy absorbers. The amount of energy the synthetic molecules can amass and transfer depends on which path the energy takes as it moves through the molecule. Kleiman and three co-authors are the first to gain control of this process in real time. The team demonstrated that it could use phased tailored laser pulses -- light whose constituent colors travel at different speeds -- to prompt the energy to travel down different paths. "What we see is that we control where the energy goes by encoding different information in the excitation pulses," Kleiman said. Researchers who now test every new molecular structure for its energy storage and transfer efficiency may be able to use what Kleiman called a new spectroscopic tool to quickly identify the most promising structures for photovoltaic devices. "Imagine you want to go from here to Miami, and the road is blocked somewhere," she said. "With this process, we're able to say, 'Don't take that road, follow another one instead.'" The other authors of the Science paper are Daniel Kuroda, C.P. Singh and Zhonghua Peng. The research was supported by UF and the National Science Foundation. Valeria Kleiman | EurekAlert! Scientists uncover the role of a protein in production & survival of myelin-forming cells 19.07.2018 | Advanced Science Research Center, GC/CUNY NYSCF researchers develop novel bioengineering technique for personalized bone grafts 18.07.2018 | New York Stem Cell Foundation A new manufacturing technique uses a process similar to newspaper printing to form smoother and more flexible metals for making ultrafast electronic devices. The low-cost process, developed by Purdue University researchers, combines tools already used in industry for manufacturing metals on a large scale, but uses... For the first time ever, scientists have determined the cosmic origin of highest-energy neutrinos. A research group led by IceCube scientist Elisa Resconi, spokesperson of the Collaborative Research Center SFB1258 at the Technical University of Munich (TUM), provides an important piece of evidence that the particles detected by the IceCube neutrino telescope at the South Pole originate from a galaxy four billion light-years away from Earth. To rule out other origins with certainty, the team led by neutrino physicist Elisa Resconi from the Technical University of Munich and multi-wavelength... For the first time a team of researchers have discovered two different phases of magnetic skyrmions in a single material. Physicists of the Technical Universities of Munich and Dresden and the University of Cologne can now better study and understand the properties of these magnetic structures, which are important for both basic research and applications. Whirlpools are an everyday experience in a bath tub: When the water is drained a circular vortex is formed. Typically, such whirls are rather stable. Similar... 
<urn:uuid:ee02b9ec-3155-46c1-83bf-4a861b8e756e>
3.0625
933
Content Listing
Science & Tech.
40.759135
95,482,478
Zee Media Bureau/Shruti Saxena New Delhi: In a new observation, astronomers have captured an image of a dying star expelling its outer layers and leaving behind a white dwarf. The object is an example of a celestial leftover known as 'Jupiter's Ghost', so named because it occupies about the same amount of space in the sky as the planet Jupiter itself. The striking image shows the gas surrounding the star arranged in the double-shell structure of the nebula and emitting X-rays, shown in blue; the cooler concentrations of gas are shown by a green glow. Jupiter's Ghost is located some 3,000 light-years away. Astronomers expect that our mighty Sun will undergo the same process after it consumes its hydrogen.
<urn:uuid:a8e9d045-a7e1-4bd2-b851-a681bd58149d>
2.671875
159
News Article
Science & Tech.
51.860945
95,482,501
|Time limit||Memory limit||Submissions||Accepted||Solvers||Acceptance ratio|
|1 s||32 MB||141||90||84||69.421%|
The city of Osijek has recently been plagued by a swarm of mosquitoes. The solution to this problem was proposed long ago by Mr. Perić, a brave inventor from Benkovci, in an episode of the TV show Gitak called "Globalno sjelo". Among other inspiring inventions, he presented a mosquito trap. It is basically a box with which you cover the mosquito after it falls for the piece of cheese or "kajmak" you placed there, depending on what your mosquitoes prefer. Simple, isn't it? If you're lucky, the box can cover more than one mosquito. You have spotted N mosquitoes on the table and know their positions precisely. What is the area of the smallest square-shaped box that can, placed parallel to the sides of the table, cover all the mosquitoes? The box, of course, can cover a mosquito with its edge.

Input: The first line of input contains the integer N (2 ≤ N ≤ 20), the number of spotted mosquitoes. Each of the following N lines contains the position of a mosquito as space-separated integer coordinates X and Y (1 ≤ X, Y ≤ 100) in an imaginary coordinate system whose axes are the sides of the table. At least two mosquitoes will be in different positions.

Output: The first and only line of output must contain the required area of the smallest square-shaped box (expressed, of course, in unit squares of the aforementioned coordinate system).

Sample input 1:
3
3 4
5 7
4 3

Sample output 1:
16

Sample input 2:
4
1 5
5 1
10 5
5 10

Sample output 2:
81

Clarification of the first sample test: a square with opposite corners at (3,3) and (7,7) covers all the mosquitoes.
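A straightforward solution sketch: the smallest axis-parallel covering square has side length equal to the larger dimension of the points' bounding box.

```python
# Smallest axis-parallel square covering all mosquitoes: the side must be
# at least the bounding box width and at least its height, so the answer
# is max(width, height) squared.
import sys

def smallest_square_area(points):
    xs = [x for x, _ in points]
    ys = [y for _, y in points]
    side = max(max(xs) - min(xs), max(ys) - min(ys))
    return side * side

def main():
    data = list(map(int, sys.stdin.read().split()))
    n = data[0]
    coords = data[1:1 + 2 * n]
    points = list(zip(coords[0::2], coords[1::2]))
    print(smallest_square_area(points))

if __name__ == "__main__":
    main()
```

On the first sample, the bounding box of (3,4), (5,7), (4,3) is 2 wide and 4 tall, so the side is 4 and the area 16, matching the (3,3)-(7,7) square from the clarification.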
<urn:uuid:182f62fa-7400-438c-81bb-20fdf6cd2733>
3.015625
433
Academic Writing
Science & Tech.
70.289849
95,482,505
Life originated on the Earth more than 3.5 billion years ago. However, scientists are still debating the possible sources of the origin of life. The puzzle is that life on our planet evolved from the molecular level to the level of bacterial organisms within 0.5-1 billion years, a very short period for such an important evolutionary step. Researchers are still racking their brains over this mystery. One popular hypothesis asserts that some seeds of life were brought to the Earth from space. But what exactly could have been brought from space, and how could such seeds have originated there? E.A. Kuzicheva and N.B. Gontareva, researchers from the Institute of Cytology, Russian Academy of Sciences, have confirmed the possibility of abiogenic synthesis of complex organic compounds (monomeric units of nucleic acids) on the surfaces of comets, asteroids, meteorites and space dust particles in outer space. It is therefore possible that such monomeric units of nucleic acids reached the Earth and significantly shortened the evolutionary timeline. On the surfaces of space bodies, scientists have found all kinds of organic molecules (amino acids, organic acids, sugars, etc.) and the components required for their synthesis. It appears that organic substances are being synthesised there, but researchers cannot be sure of this until experiments confirm their assumptions. The scientists from St. Petersburg reproduced the synthesis of one of the nucleic acid components, 5'-adenosine monophosphate (5'-AMP), under conditions specially designed to simulate the space environment. To synthesise 5'-AMP it is necessary to combine adenosine and inorganic phosphate. On Earth the reaction proceeds in solution, but there are no solvents whatsoever in space, so the researchers air-dried the reactants into a thin film (pellicle). Synthesis also requires energy. The major source of energy in outer space, both at present and in the prebiotic period of the Earth's history, has been solar ultraviolet radiation of various wavelengths, so the pellicles were irradiated with a powerful ultraviolet lamp. Naturally, the synthesis was carried out in vacuum, and the researchers used lunar soil, delivered to the Earth by the Luna-16 station from the Sea of Abundance (Mare Fecunditatis), as a model of comet, meteorite, interplanetary or cosmic dust. The soil was a dark-grey basaltic dust with particle diameters below 0.2 millimetres. After 7-9 hours of ultraviolet irradiation of the dry pellicles the scientists obtained several compounds, mainly 5'-AMP, one of the nucleic acid monomers. Radiation energy does not only promote synthesis; it also drives decomposition of the initial and newly synthesised compounds, and the more powerful the radiation, the more extensive the decomposition. However, the lunar soil provided some protection: it turned out that even a small pinch of lunar soil shields organic substances from the destructive ultraviolet impact, increasing the 5'-AMP yield by a factor of 2.7. Natalia Reznik | alphagalileo
<urn:uuid:6bf1a162-d523-44e7-bcf0-89db7ed71e80>
3.515625
1,298
Content Listing
Science & Tech.
39.153619
95,482,508
In the October 31 issue of Science magazine, Allison Stephenson, a Ph.D. candidate in geoscience, and Patricia Dove, professor of geoscience in the College of Science at Virginia Tech, and colleagues report that a hydrophilic peptide, similar in character to those found in calcifying organisms, significantly enhances the magnesium (Mg) content of calcite. "We knew from another study in our group (Elhadj et al., 2006, PNAS) that the chemistry of simple peptides as well as proteins could be tuned to control crystal growth rate and change crystal morphology," said Dove. "From that understanding, we realized that the water-structuring abilities of certain biomolecules could also influence the amount of impurities that can go into minerals." "All organisms use proteins to grow minerals into complex shapes with remarkable functions," said lead author Stephenson. "But this finding is especially meaningful for geologists because Mg content in carbonates is used as a 'paleothermometer'. That is, we know that Mg content increases with temperature, but now we see that certain biomolecules could also affect those 'signatures'. The findings raise questions about the interplay of different factors on metal contents in biominerals." The findings also offer new insights for materials synthesis, because a high degree of control over impurities is often necessary to give specific properties such as strength or electrical conductivity. By using biomolecules, it may be possible to tune impurities to desired levels, Dove said. "Also, this basic research suggests new ways of looking at biochemical origins of pathological skeletal mineralization, and whether local biochemistry could influence the uptake of toxic metals into human skeletons," Stephenson said. Susan Trulove | EurekAlert!
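For a concrete feel of the 'paleothermometer' logic (a hedged sketch: the exponential form is standard in foraminiferal Mg/Ca work, but the constants below follow one published calibration, Anand et al. 2003, and are an assumption of this edit rather than anything from the Science paper):

```python
# Sketch of Mg/Ca paleothermometry: calibrations commonly take the form
#     Mg/Ca = b * exp(a * T),
# so temperature is recovered as T = ln((Mg/Ca) / b) / a.
# Constants a = 0.09 per degC and b = 0.38 mmol/mol are assumed here.
import math

A, B = 0.09, 0.38

def temperature_c(mg_ca_mmol_per_mol: float) -> float:
    """Invert the exponential calibration to estimate temperature in deg C."""
    return math.log(mg_ca_mmol_per_mol / B) / A

for ratio in (1.0, 2.0, 4.0):
    print(f"Mg/Ca = {ratio:.1f} mmol/mol  ->  T ~ {temperature_c(ratio):4.1f} C")
```

The paper's point is precisely that peptides can shift Mg uptake independently of temperature, which would bias estimates made this way.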
<urn:uuid:7d2bb521-8fbc-47fe-9c29-c8999ee6bd2a>
3.0625
994
Content Listing
Science & Tech.
34.867561
95,482,522
April 29, 2010
The Global Volcanic Feedback Loop
If researchers are right about the impact of glacier melt and rising seas, volcanic and earthquake activity will accelerate. We are in for a hell of a ride.
Volcanoes can radically alter climates. We know that. But we are now learning that the reverse also holds: climate change is causing volcanic eruptions, earthquakes and tsunamis. Evidence that volcanoes can heat or cool the climate dates back millennia. In recent geological time, particles from the 1815 Mount Tambora eruption in Indonesia circled the globe, blocking out significant sunshine for months and contributing to "the year without a summer." In 1816 the northern hemisphere was hit by unprecedented frosts every month, which caused massive crop failure and hundreds of thousands of deaths. Volcanoes can also promote warming through CO2 emissions, although at far lower levels than human activity. Scientists are now warning that global warming will likely accelerate volcanic activity, which will in turn intensify climate change. Many of the world's volcanoes are topped by glaciers, which can be hundreds of meters thick, with each cubic meter of ice weighing nearly a ton. As this massive layer of ice melts, built-up pressure on the rock below is released, shifting the earth. Magma can leak through cracks, and when it contacts the remaining ice, the molten rock explodes into globe-circling ash clouds. Also vulnerable to climate change are the more than 60 percent of the world's active volcanoes that reside on continental margins and island chains. As the planet warms, the weight of rising sea levels will warp faults, stimulating eruptions and quakes. "Not every volcanic eruption and earthquake in the years to come will have a climate-change link," Bill McGuire, professor of Geophysical Hazards at University College London (UK), told New Scientist. "Yet as the century progresses, we should not be surprised by more geological disasters as a direct and indirect result of dramatic changes to our environment." University of Iceland volcanologist Freysteinn Sigmundsson doubted a climate link to the April eruption of Eyjafjallajökull, describing its capping glacier as too light and small. But what the eruption did show was how the interaction of glaciers and volcanic fires not only produces massive ash clouds, but melts great hunks of the glacier itself. After the eruption, flash floods ruined surrounding farmland. Even without volcanic activity, glacial melting is linked to flooding through the glacial lake outburst flood (GLOF). The unlovely acronym describes a phenomenon that is increasing around the world: runoff from melting glaciers gathers in newly formed or greatly expanded lakes retained by unstable banks. Some have already burst; one roiling flood was recorded moving at 15,000 cubic meters per second - more than five times the flow of Niagara Falls. In the 1950s the area below Nepal's Trakarding glacier, for example, was a small pond. Now, as the glacier shrinks, Tsho Rolpa Lake is more than 2 miles long and 500 feet deep. If the bank breaks, as many scientists warn it will, 2 billion cubic feet of water will sweep away villages, threatening some 10,000 lives. Norbu Sherpa survived a GLOF that raged for hours through the Nepali village of Ghat in 1985. "My family and I were all in our house when we heard a big explosion [and saw] a big black stream of mud, including rocks and trees, rushing down the mountain," he told the World Wildlife Fund.
The GLOF destroyed crops and an almost completed $1.5 million hydropower plant. The Himalayan glaciers are rapidly disappearing. Nepal alone now has almost 9,000 glacial lakes, and the International Center for Integrated Mountain Development warns that 204 are at risk of GLOFs. Less sudden but more catastrophic is what happens next, after the Himalayan glaciers and much of the high snows are gone. The meltwater now feeds 12 rivers that supply more than two billion people - nearly one-third of humanity - with water. The loss of this water source not only threatens economies, culture and community; it imperils national and global security. Water shortages, droughts, floods, crop failures and famine will follow, and the ripples of disaster will spread around the world, driving up food and commodity prices, fueling tribal and national conflicts and water wars, and creating waves of climate refugees. At the same time, if researchers are right about the impact of glacier melt and rising seas, volcanic and earthquake activity will accelerate. We are in for a hell of a ride. Terry J. Allen, an In These Times senior editor, has written the magazine's monthly investigative health and science column since 2006.
<urn:uuid:f73c0fd7-0292-453b-9c27-cffe14f77260>
3.78125
1,094
News Article
Science & Tech.
44.441039
95,482,530
McMurdo Dry Valleys LTER News
- Research on Blood Falls Highlighted: Dr. Jill Mikucki from Dartmouth College describes ancient microbial communities living in Blood Falls below the Taylor Glacier.
- Glaciers and the Simple Life in Antarctica's Dry Valleys: Hassan Basagic from Portland State University describes the essential role of polar glaciers in supporting the bare-bones ecosystems in the Dry Valleys.
- Meteorological Data now updated through 07-08 season: Meteorological data can now be downloaded through the 07-08 season. Fryxell, Hoare and Bonney stations contain data through the extended season.
- The Dry Valleys on NASA's Earth Observatory: Blood Falls and the Dry Valleys are featured on NASA's Earth Observatory website.
- Basecamp website for project management is now available.
- Scanned Stream Field Notes: Scanned images of hand-written stream field notes are now searchable online back to 1993. Many thanks to Jane Turner!
- Live Stream Flow Data from Antarctica: Live flow data are being telemetered from the Onyx River in Antarctica by the USGS. Download data here.
- Read our Blogs from Antarctica! MCM members are currently in the field. Read about life and research in Antarctica.
- "Cold, Hard Facts": Read Peter Doran's op-ed piece on global warming in the July 27 New York Times!
- REAL-TIME stream data from Antarctica! The USGS Wyoming Water Science Center now has real-time streamflow data available for the Onyx River in Wright Valley.
- New Online Dry Valleys Interactive Map: Check out the new interactive online GIS map and other mapping options for MCM-LTER researchers.
- The Lost Seal in the classroom: Carol Landis and April Jacobs were featured in the Worthington News in Columbus, OH.
- Diana Wall featured on The Online NewsHour: The online version of PBS's NewsHour features an interview with Diana Wall and a slideshow from the Dry Valleys LTER.
- Diana Wall named director of CSU's School of Global Environmental Sustainability: The School of Global Environmental Sustainability is an umbrella organization that encompasses all environmental education and research at Colorado State University. Diana Wall is its founding director.
- Wall featured in DISCOVER magazine: Diana Wall was featured as one of the 8 environmental scientists asked what was learned over the past 25 years and what is expected in the next 25 years by DISCOVER magazine (April 2005 issue).
- Wormherders' work featured in new book: The Wormherders' research was featured in a book published by Island Press entitled Under Ground: How Creatures of Mud and Dirt Shape Our World, by Yvonne Baskin.
- New MCM LTER website: Welcome to the new MCM LTER website! You will continue to see additions and improvements to this site in terms of both content and data accessibility.
- Wall edits SCOPE volume: Diana Wall edited the book Sustaining Biodiversity and Ecosystem Services in Soils and Sediments.
<urn:uuid:2866eb0a-6d73-4dd2-8207-81e5fde97cca>
2.71875
808
Content Listing
Science & Tech.
38.82201
95,482,557
In a paper published in Monthly Notices of the Royal Astronomical Society, astronomers Dr Justin Read, Professor George Lake and Oscar Agertz of the University of Zurich, and Dr Victor Debattista of the University of Central Lancashire use the results of a supercomputer simulation to deduce the presence of this disk. They explain how it could allow physicists to directly detect and identify the nature of dark matter for the first time. Unlike the familiar ‘normal’ matter that makes up stars, gas and dust, ‘dark’ matter is invisible but its presence can be inferred through its gravitational influence on its surroundings. Physicists believe that it makes up 22% of the mass of the Universe (compared with the 4% of normal matter and 74% comprising the mysterious ‘dark energy’). But, despite its pervasive influence, no one is sure what dark matter consists of. Prior to this work, it was thought that dark matter forms in roughly spherical lumps called ‘halos’, one of which envelops the Milky Way. But this ‘standard’ theory is based on supercomputer simulations that model the gravitational influence of the dark matter alone. The new work includes the gravitational influence of the stars and gas that also make up our Galaxy. Stars and gas are thought to have settled into disks very early on in the life of the Universe and this affected how smaller dark matter halos formed. The team’s results suggest that most lumps of dark matter in our locality merged to form a halo around the Milky Way. But the largest lumps were preferentially dragged towards the galactic disk and were then torn apart, creating a disk of dark matter within the Galaxy. “The dark disk only has about half of the density of the dark matter halo, which is why no one has spotted it before,” said lead author Justin Read. “However, despite its low density, if the disk exists it has dramatic implications for the detection of dark matter here on Earth.” The Earth and Sun move at some 220 kilometres per second along a nearly circular orbit about the centre of our Galaxy. Since the dark matter halo does not rotate, from an Earth-based perspective it feels as if we have a ‘wind’ of dark matter flowing towards us at great speed. By contrast, the ‘wind’ from the dark disk is much slower than from the halo because the disk co-rotates with the Earth. “It's like sitting in your car on the highway moving at a hundred kilometres an hour”, said team member Dr Victor Debattista. “It feels like all of the other cars are stationary because they are moving at the same speed.” This abundance of low-speed dark matter particles could be a real boon for researchers because they are more likely to excite a response in dark matter detectors than fast-moving particles. “Current detectors cannot distinguish these slow moving particles from other background ‘noise’,” said Prof. Laura Baudis, a collaborator at the University of Zurich and one of the lead investigators for the XENON direct detection experiment, which is located at the Gran Sasso Underground Laboratory in Italy. “But the XENON100 detector that we are turning on right now is much more sensitive. For many popular dark matter particle candidates, it will be able to see something if it’s there.” This new research raises the exciting prospect that the dark disk – and dark matter – could be directly detected in the very near future. FURTHER INFORMATION: Monthly Notices of the Royal Astronomical Society paper. Robert Massey | alfa
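To see why slow disk particles interact so differently with detectors (standard non-relativistic kinematics; the particle mass, target, and the ~50 km/s disk wind below are illustrative assumptions of this edit, not numbers from the paper): the largest recoil energy a particle of mass m_chi and speed v can give a nucleus of mass m_N is E_max = 2 mu^2 v^2 / m_N, where mu is the reduced mass.

```python
# Maximum nuclear-recoil energy for an assumed 100 GeV dark matter particle
# hitting a xenon nucleus, comparing the fast halo wind with a slow,
# co-rotating disk wind (illustrative speeds).
C_KM_S = 299_792.458            # speed of light in km/s

def max_recoil_keV(m_chi_gev: float, m_n_gev: float, v_km_s: float) -> float:
    mu = m_chi_gev * m_n_gev / (m_chi_gev + m_n_gev)   # reduced mass, GeV
    beta = v_km_s / C_KM_S
    return 2.0 * mu**2 * beta**2 / m_n_gev * 1e6       # GeV -> keV

M_XENON_GEV = 122.0             # mass of a xenon nucleus (A ~ 131)

for label, v in (("halo wind, ~220 km/s", 220.0), ("disk wind, ~50 km/s", 50.0)):
    print(f"{label}: E_max ~ {max_recoil_keV(100.0, M_XENON_GEV, v):5.1f} keV")
# ~27 keV versus ~1.4 keV: the slow wind deposits energies near or below
# typical detector thresholds, hence the need for more sensitive instruments.
```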
<urn:uuid:a3118eb8-884f-49b5-b869-fb6b5c1c0c3d>
4.03125
1,403
Content Listing
Science & Tech.
43.149381
95,482,563
The Tiny Clams That Ate the Bay Delta
An explanatory series focusing on one of the most complex issues facing California: water sharing. And at its core is the Sacramento-San Joaquin Bay Delta. Stay with kcet.org/baydelta for all the project's stories.
Spend enough time reading about the plight of fish and other aquatic animals in the Bay Delta and you'll come across a whole lot of mentions of an ongoing collapse of the estuary's ecosystem. That collapse has been in progress for a long time, but biologists started getting really worried in the 1980s. That collapse rivals diversions of water for our cities and farms as the major threat to the Delta's suffering wildlife. The problem is pretty basic. Aquatic ecosystems are like almost any other: they wouldn't exist without the plants in them turning sunlight into biomass. In the Bay Delta, most of those plants are what's called "phytoplankton": the tiny floating algae and other photosynthesizers that form the base of the food chain. Phytoplankton feed small animals such as zooplankton, which feed larger animals, which feed even larger animals. The amount of phytoplankton in an ecosystem is thus a rough measure of an ecosystem's biological productivity. And for centuries, the Bay Delta was one of the most productive ecosystems in the world. Until recently. In the last few decades, the amount of phytoplankton in the Bay Delta has dropped catastrophically. That collapse in productivity has echoed out along each link in the food web: animals from zooplankton to gigantic fish are starving as a result. And the likely culprits are tiny, have hard shells, and live in unimaginable numbers in the Bay Delta. The Amur River clam (Corbula amurensis) and Asian clam (Corbicula fluminea) have essentially split the Bay Delta between them, and together they have cut off the Bay Delta's aquatic ecosystem at its base. The Amur River clam originated in China and the Siberian region in Russia, though the clams in the Bay Delta may be descended from a lineage that first invaded estuaries in Japan. Corbula amurensis probably arrived in the Bay Delta when its free-floating larvae hitched a ride in ballast water in cargo ships, an important vector by which invasive marine organisms are moved around the world. It was first noticed in the Bay Delta in 1986, when a dredge in San Francisco Bay found three individuals. In the three decades since, the Amur River clam has become near-ubiquitous throughout the saltier portions of the Bay and Delta, forming solid-packed colonies on many continually submerged surfaces, including mudflats. Those colonies can easily hold more than 2,000 half-inch clams per square meter of surface. The Asian clam has been around probably four decades longer, and it's three or four times the size of the Amur River clam. It has spread similarly, forming massive colonies. Unlike the Amur River clam, the Asian clam prefers fresher water. And so the two species have neatly divided the Bay Delta estuary between them, with the Amur River clam downstream of the saltwater-freshwater mixing zone, and the Asian clam upstream. That rough boundary between the two species' invaded turf shifts as that mixing zone moves upstream and down depending on how much fresh water flows out of the Sacramento and San Joaquin rivers and out to sea.
The two species' ranges generally overlap a little right where the two rivers finally flow into Suisun Bay around the city of Antioch, but a few months of drought will encourage Corbula to move eastward along with the intruding salt water into the Delta proper. Conversely, higher-than-average freshwater flows through the Delta -- remember those? We used to have those sometimes -- will encourage Corbicula to colonize a little farther downstream and into the Bay. Regardless of whether the clams in the neighborhood are Corbula in salty or brackish water, or Corbicula in fresh, their main effects on the local environment are roughly the same. Both clam species take up contaminants in the water and incorporate them into their tissues, whether pesticides, oil refinery effluent, or improperly disposed motor oil. Any organism that eats the clams ingests every substance the clams have accumulated. That's especially significant in the case of selenium, a mineral that occurs naturally in soils of the San Joaquin Valley, and gets washed into the Bay Delta with irrigation runoff -- though it's also a component of pollution from the Bay Delta's oil refineries. Selenium made national headlines back in the 1980s when irrigation runoff that collected in the Kesterson National Wildlife Refuge ended up causing reproductive toxicity in waterfowl. Pictures of deformed baby ducks graced the evening news, and we suddenly learned a whole lot about the effects of too much selenium on living things. Though Kesterson National Wildlife Refuge was closed shortly after and work has been done to reduce selenium concentrations in agricultural drain water, all the selenium in the San Joaquin river basin does eventually end up in the Bay Delta, and Corbula and Corbicula take it in. When the handful of fish that have learned to eat the invasive clams -- including the green and white sturgeons and the Sacramento splittail -- eat the clams, they get a full dose of each clam's selenium uptake. Selenium has been shown to interfere with fish reproduction, meaning that fish like the sturgeons and splittail become less fertile. And they're already in enough trouble: the Bay Delta's green sturgeon is listed as a Threatened species under the federal Endangered Species Act, and the white sturgeon and Sacramento splittail ought to be. But if there's nothing to eat, it doesn't matter whether you can reproduce. Clams and many other mollusks eat by sucking in water, then expelling it through their gills, which filter out edible plankton suspended in the water. One clam by itself can make a surprisingly large impact on the amount of plankton in the water. Finding an Amur River clam or an Asian clam by itself in the Bay Delta, though, would be unusual. With thousands of clams in some square meters of the Bay and Delta, the Amur River clam and the Asian clam become very efficient black holes for both phytoplankton and the zooplankton that eats it. The arrival of these two species of clams in the Bay and Delta, and their astonishingly fast domination of the submerged environment, meant a drastic and persistent decline in the amount of plankton in the water. And that means less food for larger animals, including the Threatened Delta smelt, juvenile Chinook salmon, and steelhead that the state has spent countless millions of dollars and thousands of person-hours in unpleasant meetings attempting to restore.
The decades-long general decline in the Bay Delta's ecological productivity stems directly from a lack of phytoplankton, and that decline in phytoplankton can mainly be laid at the clams' feet. An aquatic ecosystem like the Delta without its phytoplankton is like a grassland with no grass, or a forest with no trees. Though there are other contributors to the overall system decline in the Delta, including herbicides in agricultural runoff that have their own impacts on phytoplankton, the Amur River and Asian clams -- just in trying to survive in a new place -- have essentially clearcut the Delta's aquatic ecosystem. The destruction ripples outward along the food web -- which has a whole lot less food than it used to. There's probably no way to change the fact that the Delta is now home to possibly billions of these invasive clams. Some researchers suggest that restoring some of the Delta's historic floodplains might help, by providing a "nursery" for phytoplankton during the spring floods that the clams wouldn't be able to colonize, seeing as such floodplains generally dry out completely for at least part of the year. And as those flooded lands can also provide a nursery for some of the same native fish now struggling to make their way in a Delta with less phytoplankton, restoring floodplains might prove an elegant solution to a few different problems in the Delta. Until that happens, attempts to restore fish in a Delta without phytoplankton might be about as promising as restoring bison herds on a grassless plain. Without a sustainable food source, the restored wildlife faces little hope of survival.
<urn:uuid:daf05445-79d7-44d6-88af-a1cc94a346df>
3.359375
2,001
Content Listing
Science & Tech.
38.938995
95,482,577
Faraday cage
A Faraday cage or Faraday shield is an enclosure used to block electromagnetic fields. A Faraday shield may be formed by a continuous covering of conductive material or, in the case of a Faraday cage, by a mesh of such materials. Faraday cages are named after the English scientist Michael Faraday, who invented them in 1836. A Faraday cage operates because an external electrical field causes the electric charges within the cage's conducting material to be distributed such that they cancel the field's effect in the cage's interior. This phenomenon is used to protect sensitive electronic equipment from external radio frequency interference (RFI). Faraday cages are also used to enclose devices that produce RFI, such as radio transmitters, to prevent their radio waves from interfering with other nearby equipment. They are also used to protect people and equipment against actual electric currents such as lightning strikes and electrostatic discharges, since the enclosing cage conducts current around the outside of the enclosed space and none passes through the interior. Faraday cages cannot block static or slowly varying magnetic fields, such as the Earth's magnetic field (a compass will still work inside). To a large degree, though, they shield the interior from external electromagnetic radiation if the conductor is thick enough and any holes are significantly smaller than the wavelength of the radiation. For example, certain computer forensic test procedures of electronic systems that require an environment free of electromagnetic interference can be carried out within a screened room. These rooms are spaces that are completely enclosed by one or more layers of a fine metal mesh or perforated sheet metal. The metal layers are grounded to dissipate any electric currents generated from external or internal electromagnetic fields, and thus they block a large amount of the electromagnetic interference. See also electromagnetic shielding. Cages provide less attenuation of outgoing transmissions than of incoming ones: they can shield EMP waves from natural phenomena very effectively, but a tracking device, especially at upper frequencies, may be able to transmit from within the cage (e.g., cell phones operate at various radio frequencies, so while one cell phone may not work, another one may). A common misconception is that a Faraday cage provides full blockage or attenuation; this is not true. The reception or transmission of radio waves, a form of electromagnetic radiation, to or from an antenna within a Faraday cage is heavily attenuated or blocked by the cage; however, a Faraday cage has varied attenuation depending on wave form, frequency, distance from the receiver/transmitter, and receiver/transmitter power. Near-field high-powered frequency transmissions like HF RFID are more likely to penetrate. Solid cages generally attenuate fields over a broader range of frequencies than mesh cages. In 1836, Michael Faraday observed that the excess charge on a charged conductor resided only on its exterior and had no influence on anything enclosed within it. To demonstrate this fact, he built a room coated with metal foil and allowed high-voltage discharges from an electrostatic generator to strike the outside of the room. He used an electroscope to show that there was no electric charge present on the inside of the room's walls.
Although this cage effect has been attributed to Michael Faraday's famous ice pail experiments performed in 1843, it was Benjamin Franklin in 1755 who observed the effect by lowering an uncharged cork ball suspended on a silk thread through an opening in an electrically charged metal can. In his words, "the cork was not attracted to the inside of the can as it would have been to the outside, and though it touched the bottom, yet when drawn out it was not found to be electrified (charged) by that touch, as it would have been by touching the outside. The fact is singular." Franklin had discovered the behavior of what we now refer to as a Faraday cage or shield (based on Faraday's later experiments, which duplicated Franklin's cork and can). Giovanni Battista Beccaria, too, had observed the effect long before Faraday. A continuous Faraday shield is a hollow conductor. Externally or internally applied electromagnetic fields produce forces on the charge carriers (usually electrons) within the conductor; the charges are redistributed accordingly due to electrostatic induction. The redistributed charges greatly reduce the voltage within the surface, to an extent depending on the capacitance; however, full cancellation does not occur. If a charge is placed inside an ungrounded Faraday cage, the internal face of the cage becomes charged (in the same manner described for an external charge) to prevent the existence of a field inside the body of the cage; however, this charging of the inner face re-distributes the charges in the body of the cage. This charges the outer face of the cage with a charge equal in sign and magnitude to the one placed inside the cage. Since the internal charge and the inner face cancel each other out, the spread of charges on the outer face is not affected by the position of the internal charge inside the cage. So for all intents and purposes, the cage generates the same DC electric field that it would generate if it were simply affected by the charge placed inside. The same is not true for electromagnetic waves. If the cage is grounded, the excess charges will go to the ground instead of the outer face, so the inner face and the inner charge will cancel each other out and the rest of the cage will retain a neutral charge. Effectiveness of shielding of a static electric field is largely independent of the geometry of the conductive material; however, static magnetic fields can penetrate the shield completely. In the case of varying electromagnetic fields, the faster the variations are (i.e., the higher the frequencies), the better the material resists magnetic field penetration. In this case the shielding also depends on the electrical conductivity and the magnetic properties of the conductive materials used in the cages, as well as their thicknesses. A good idea of the effectiveness of a Faraday shield can be obtained from considerations of skin depth: the induced current flows mostly near the surface and decays exponentially with depth through the material. Because a Faraday shield has finite thickness, this determines how well the shield works; a thicker shield can attenuate electromagnetic fields better, and to a lower frequency.
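For a numerical feel of the skin-depth argument (an added illustration using the standard good-conductor formula and values for copper, not figures from the article):

```python
# Skin depth of a good conductor: delta = sqrt(2 / (omega * mu * sigma)).
import math

MU_0 = 4.0 * math.pi * 1e-7    # vacuum permeability, H/m
SIGMA_COPPER = 5.8e7           # conductivity of copper, S/m

def skin_depth_m(freq_hz: float) -> float:
    omega = 2.0 * math.pi * freq_hz
    return math.sqrt(2.0 / (omega * MU_0 * SIGMA_COPPER))

for f in (50.0, 1e3, 1e6, 1e9):   # mains, audio, radio, microwave
    print(f"{f:>13,.0f} Hz: skin depth ~ {skin_depth_m(f) * 1e3:.4f} mm")
# ~9.3 mm at 50 Hz but ~0.002 mm at 1 GHz, which is why thin continuous
# metal shields high frequencies well yet does little against
# mains-frequency or static magnetic fields.
```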
Faraday cages are Faraday shields which have holes in them and are therefore more complex to analyze. Whereas continuous shields essentially attenuate all wavelengths shorter than the skin depth, the holes in a cage may permit shorter wavelengths to pass through or set up "evanescent fields" (oscillating fields that do not propagate as EM waves) just beneath the surface. The shorter the wavelength, the better it passes through a mesh of given size. Thus, to work well at short wavelengths (i.e., high frequencies), the holes in the cage must be smaller than the wavelength of the incident wave. Faraday cages may therefore be thought of as high-pass filters.
- Faraday cages are routinely used in analytical chemistry to reduce noise while making sensitive measurements.
- Faraday cages, more specifically dual paired seam Faraday bags, are often used in digital forensics to prevent remote wiping and alteration of criminal digital evidence.
- The US and NATO Tempest standards, and similar standards in other countries, include Faraday cages as part of a broader effort to provide emission security for computers.
- Automobile and airplane passenger compartments are essentially Faraday cages, protecting passengers from electric charges such as lightning.
- A booster bag (a shopping bag lined with aluminium foil) acts as a Faraday cage. It is often used by shoplifters to steal RFID-tagged items. Similar containers are used to resist RFID skimming.
- Elevators and other rooms with metallic conducting frames and walls simulate a Faraday cage effect, leading to a loss of signal and "dead zones" for users of cellular phones, radios, and other electronic devices that require external electromagnetic signals. During training, firemen and other first responders are cautioned that their two-way radios will probably not work inside elevator cars and to make allowances for that. Small, physical Faraday cages are used by electronics engineers during equipment testing to simulate such an environment and to make sure that a device gracefully handles these conditions.
- Properly designed conductive clothing can also form a protective Faraday cage. Some electrical linemen wear Faraday suits, which allow them to work on live, high-voltage power lines without risk of electrocution. The suit prevents electric current from flowing through the body and has no theoretical voltage limit. Linemen have safely worked even the highest-voltage lines, such as Kazakhstan's 1150 kV Ekibastuz–Kokshetau line.
- Austin Richards, a physicist in California, created a metal Faraday suit in 1997 that protects him from Tesla coil discharges. In 1998, he named the character in the suit Doctor MegaVolt and has performed all over the world and at Burning Man nine different years.
- The scan room of a magnetic resonance imaging (MRI) machine is designed as a Faraday cage. This prevents external RF (radio frequency) signals from being added to data collected from the patient, which would affect the resulting image. Radiographers are trained to identify the characteristic artifacts created on images should the Faraday cage be damaged during a thunderstorm.
- A microwave oven utilizes a Faraday cage, which can be partly seen covering the transparent window, to contain the electromagnetic energy within the oven and to shield the exterior from radiation.
- Plastic bags that are impregnated with metal are used to enclose electronic toll collection devices whenever tolls should not be charged to those devices, such as during transit or when the user is paying cash.
- The shield of a screened cable, such as a USB cable or the coaxial cable used for cable television, protects the internal conductors from external electrical noise and prevents the RF signals from leaking out.
University of Leicester researchers have taken a step forward in helping to create a defence for Earth's technologies against the constant threat of space weather. They have implemented a "double pulse" radar-operating mode on two radars, which form part of a global network of ground-based coherent scatter radars called SuperDARN (Super Dual Auroral Radar Network). These radars allow observations of space weather, which can have devastating impacts on technologies on Earth. James Borderick, of the Radio and Space Plasma Physics group within the Department of Physics and Astronomy, said: "Intense space weather events are triggered by the explosive release of energy stored in the Sun's magnetic fields. "A strong burst of electromagnetic energy reaches the Earth with the potential to disrupt many of our fundamental services, such as satellite and aviation operations, navigation, and electricity power grids. Telecommunications and information technology are likewise vulnerable to space weather. "All modern societies rely heavily on space systems, for communications and resource information (meteorological, navigation and remote sensing). There are high costs and high risks associated with the consequences of space weather events, as insurance companies recognise. "We have implemented a new "double pulse" radar-operating mode on the Radio and Space Plasma Physics Group's Co-operative UK Twin Located Auroral Sounding System (CUTLASS) radars. "The new sounding mode enhances our temporal resolution of plasma irregularities within the ionosphere. The resolution increase may help our understanding of coupling processes between the solar wind and the Earth's magnetosphere by allowing the observation of smaller-scale phenomena with an unprecedented resolution. "Utilising our new radar mode and the vast array of ground-based and space-based instruments at our disposal, we are ever increasing our understanding of the countless phenomena associated with the solar-terrestrial interaction, which may one day lead us to accurate predictions of intense space weather events - and an active defence." The research introduces the importance of utilising ground-based measurements of the near-space environment in conjunction with spacecraft observations and then proceeds to explain the direct influences of space weather on our own technological systems. Mr Borderick will be presenting his doctoral research at the Festival of Postgraduate Research, which is taking place on Thursday 25th June in the Belvoir Suite, Charles Wilson Building, between 11.30am and 1pm.
I need someone to help with a Mathematics question January 28, 2007 3:40am CST Here is the question: How many minutes before 12 noon is it if 72 minutes ago it was twice as many minutes past 9 am?
• Sri Lanka 28 Jan 07 To do this, first we must convert the times into minutes of the day. 9 am will be the (9x60) 540th minute of the day, and 12 noon will be the (12x60) 720th minute of the day. The minute of the day now is X; 72 minutes ago it was X - 72. Since it is X minutes now, the number of minutes past 12 is X - 720. Twice this amount is 2X - 1440. It is said that 72 minutes ago it was 9 am plus twice (X - 720). So X - 72 = 540 + (2X - 1440). We will simplify this using algebra: X - 72 = 540 + 2X - 1440, so X - 2X = 540 - 1440 + 72, giving -X = -828, i.e. X = 828. So now we are in the 828th minute of the day. Divide this by 60 and we get 13 hours and 48 minutes, so it is now 1.48 PM. Check whether this is correct: 72 minutes ago it was 12.36, which is 216 minutes after 9 am, and 1.48 pm is 108 minutes after 12 noon. So our calculation is correct.
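The solution above effectively answers how many minutes past noon it is. Under the literal reading of the question (X minutes before noon now, and twice that many minutes past 9 am 72 minutes ago), a brute-force check, a sketch added here rather than taken from the thread, gives 36 minutes instead:

```python
# Brute-force check of the literal reading: it is x minutes before 12 noon
# now, and 72 minutes ago it was 2*x minutes past 9 am.
NINE, NOON = 9 * 60, 12 * 60  # 540th and 720th minute of the day

for now in range(NINE + 72, NOON):        # candidate "now" from 10:12 to noon
    x = NOON - now                         # minutes before noon
    past_nine = (now - 72) - NINE          # minutes past 9 am, 72 minutes ago
    if past_nine == 2 * x:
        print(f"{x} minutes before noon, i.e. {now // 60}:{now % 60:02d}")
# prints: 36 minutes before noon, i.e. 11:24
```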
The development of polymer composite liquid oxygen (LO2) tanks is a critical step in creating the next generation of launch vehicles. Future launch vehicles need to minimize gross liftoff weight (GLOW), which is possible due to the 25%-40% reduction in weight that composite materials could provide over current aluminum technology. Although a composite LO2 tank makes these weight savings feasible, composite materials have not historically been viewed as "LO2 compatible." To be considered LO2 compatible, materials must be selected that will resist any type of detrimental, combustible reaction when exposed to usage environments. This is traditionally evaluated using a standard set of tests. However, materials that do not pass the standard tests can still be shown to be safe for a particular application. This paper documents the approach and results of a joint NASA/Lockheed Martin program to select and verify LO2-compatible composite materials for liquid oxygen fuel tanks. The test approach developed included tests such as mechanical impact, particle impact, puncture, electrostatic discharge, friction, and pyrotechnic shock. These tests showed that composite liquid oxygen tanks are indeed feasible for future launch vehicles.
Amplification processes for small primary differences in the properties of enantiomers

Differences in the free energy of enantiomers may arise for two principal reasons: (1) the result of a special arrangement of external fields, or (2) asymmetric interaction due to parity non-conservation. The effects, in general, will be very small, so that amplification is necessary in order to make them measurable. In doing so, the main problem is not to find a process of high amplification potential, but a reliable one, where the obtained enantiomeric resolution is really due to differences in the enantiomers' properties and not an artifact caused by uncontrolled experimental parameters. Well-known cascading processes have been examined under this aspect, and two new types of amplifying processes, involving solid phases and designed especially for this purpose, are described.

Key words: Chemical Evolution, Origin of Optical Activity, Chemical Amplification, Cascading Processes
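As a purely illustrative toy model (not from the paper), a cascading amplifier can be pictured as a fixed gain applied to the enantiomeric excess at each stage; this shows how even a parity-violation-sized bias could, in principle, reach measurable levels after a modest number of reliable stages:

```python
# Toy cascade (illustrative only): each stage multiplies the enantiomeric
# excess (ee) by a fixed gain, capped at complete resolution (ee = 1).
def cascade(ee0, gain, stages):
    history = [ee0]
    for _ in range(stages):
        history.append(min(1.0, history[-1] * gain))
    return history

# An initial excess of 1e-10 with an assumed 3x gain per stage becomes
# macroscopic within roughly 20 stages; both numbers are hypothetical.
for n, ee in enumerate(cascade(1e-10, 3.0, 24)):
    if n % 6 == 0:
        print(f"stage {n:2d}: ee = {ee:.3e}")
```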
Chemistry of radioactive dating

Most of the time, the γ-ray is emitted within about 10⁻¹² seconds of the α- or β-particle. Nuclides with atomic numbers of 90 or more undergo a form of radioactive decay known as spontaneous fission, in which the parent nucleus splits into a pair of smaller nuclei. Ancient alchemists attempted but failed to turn different substances into gold. Modern nuclear chemistry, sometimes referred to as radiochemistry, has become very interdisciplinary in its applications, ranging from the study of the formation of the elements in the universe to the design of radioactive drugs for diagnostic medicine. In fact, the chemical techniques pioneered by nuclear chemists have become so important that biologists, geologists, and physicists use nuclear chemistry as an ordinary tool of their disciplines. (Only a handful of nuclides with atomic numbers less than 83 emit an α-particle.) The product of α-decay is easy to predict if we assume that both mass and charge are conserved in nuclear reactions. The product of this reaction can be predicted, once again, by assuming that mass and charge are conserved. The emitted particles rapidly lose their kinetic energy as they pass through matter. Nuclear chemistry is the study of the chemical and physical properties of elements as influenced by changes in the structure of the atomic nucleus. Radiocarbon dating is one such type of radiometric dating: a process for determining the age of an object by measuring the amount of a given radioactive material it contains. All atoms have a certain value of mass number, which is the total count of protons and neutrons in the nucleus.
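Because radioactive decay is exponential, the age follows directly from the surviving fraction of the radioactive nuclide. A small sketch for the radiocarbon case mentioned above (5,730 years is the commonly quoted carbon-14 half-life; the fractions are illustrative):

```python
import math

C14_HALF_LIFE_YEARS = 5730.0  # commonly quoted half-life of carbon-14

def radiocarbon_age(fraction_remaining):
    """Age from the surviving fraction of C-14: t = t_half * log2(1 / f)."""
    return C14_HALF_LIFE_YEARS * math.log2(1.0 / fraction_remaining)

for f in (0.5, 0.25, 0.1):
    print(f"{f:4.2f} of the original C-14 left -> about {radiocarbon_age(f):,.0f} years")
# 0.50 -> 5,730 years; 0.25 -> 11,460 years; 0.10 -> ~19,035 years
```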
Make Nitric Acid – The Complete Guide Jun 3, 2009
a nitrate salt: sodium nitrate, potassium nitrate, ammonium nitrate, or a nitrate-based fertilizer, as long as it passes the nitrate test
Cu + 4HNO3 → Cu(NO3)2 + 2NO2 + 2H2O | Copper reacts with nitric acid (a mass-balance check of this equation follows after the listing) Nov 4, 2017
Nitric Acid From Thin Air Aug 9, 2015
I make nitric acid out of air, water, and electricity. [the same process that creates NO2 naturally: lightning]
How to get an egg inside a bottle | Live Experiments (Ep 28) | Head Squeeze BBC Earth Lab May 18, 2013
How to do egg in a bottle science experiment Apr 4, 2015
Chemistry-Magic Show 2013 Aug 26, 2013
Using Phenolphthalein As An Acid–Base Indicator Apr 17, 2012
Phenolphthalein indicator is pink or fuchsia in alkaline solutions starting at about pH 8.2. Phenolphthalein loses color again above pH 12. Feb 3, 2015
Phenolphthalein is synthesized by the condensation of phenol and phthalic anhydride.
Make Phenolphthalein – An Acid–Base Indicator Jul 31, 2016
Phenol is very poisonous; take extreme caution when working with it. Wear a gas mask at all times.
Make your OWN pH Indicator from Red Cabbage! Thoisoi2 – Chemical Experiments! Sep 29, 2014
Red cabbage: 28.7 mg/100g anthocyanin; elderberries: 1317 mg/100g; blueberries: 48 mg/100g. Dec 18, 2012
Rose petals, herbal (hibiscus) teas, grape juice, apple skins—students will love testing their favorite flowers or fruits.
Disodium hydrogen phosphate [IUPAC name]
10 Science Projects for Elementary School Students Jun 7, 2016
Cartesian diver (diving ketchup experiment)
Put out a candle without blowing it! CO2 is heavier than air
soda can jump
Aluminium (or Aluminum) – Periodic Table of Videos May 11, 2014
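The copper/nitric acid equation in the listing above can be sanity-checked for mass balance by counting atoms on each side. A minimal sketch (atom counts entered by hand, not a general formula parser):

```python
from collections import Counter

# Atom counts for each species in Cu + 4HNO3 -> Cu(NO3)2 + 2NO2 + 2H2O
CU      = Counter(Cu=1)
HNO3    = Counter(H=1, N=1, O=3)
CU_NO32 = Counter(Cu=1, N=2, O=6)   # Cu(NO3)2
NO2     = Counter(N=1, O=2)
H2O     = Counter(H=2, O=1)

def side(*terms):
    """Sum (coefficient, species) terms into one total atom count."""
    total = Counter()
    for coeff, species in terms:
        for atom, n in species.items():
            total[atom] += coeff * n
    return total

left  = side((1, CU), (4, HNO3))
right = side((1, CU_NO32), (2, NO2), (2, H2O))
print(left == right, dict(left))  # True {'Cu': 1, 'H': 4, 'N': 4, 'O': 12}
```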
Electroluminescence (EL) is an optical and electrical phenomenon in which a material emits light in response to the passage of an electric current or to a strong electric field. This is distinct from black-body light emission resulting from heat (incandescence), from a chemical reaction (chemiluminescence), from sound (sonoluminescence), or from other mechanical action (mechanoluminescence). Electroluminescence is the result of radiative recombination of electrons and holes in a material, usually a semiconductor. The excited electrons release their energy as photons (light). Prior to recombination, electrons and holes may be separated either by doping the material to form a p-n junction (in semiconductor electroluminescent devices such as light-emitting diodes) or through excitation by impact of high-energy electrons accelerated by a strong electric field (as with the phosphors in electroluminescent displays). It has recently been shown that as a solar cell improves its light-to-electricity efficiency (improved open-circuit voltage), it also improves its electricity-to-light (EL) efficiency.

Examples of electroluminescent materials

Electroluminescent devices are fabricated using either organic or inorganic electroluminescent materials. The active materials are generally semiconductors with a band gap wide enough to allow exit of the light. The most typical inorganic thin-film EL (TFEL) material is ZnS:Mn, with yellow-orange emission. Examples of the range of EL materials include:
- Powdered zinc sulfide doped with copper (producing greenish light) or silver (producing bright blue light)
- Thin-film zinc sulfide doped with manganese (producing orange-red color)
- Naturally blue diamond, which includes a trace of boron that acts as a dopant
- Semiconductors containing Group III and Group V elements, such as indium phosphide (InP), gallium arsenide (GaAs), and gallium nitride (GaN), used in light-emitting diodes
- Certain organic semiconductors, such as [Ru(bpy)3]2+(PF6−)2, where bpy is 2,2'-bipyridine

The most common electroluminescent (EL) devices are composed of either powder (primarily used in lighting applications) or thin films (for information displays). Light-emitting capacitor, or LEC, is a term used since at least 1961 to describe electroluminescent panels, which are still made as night lights and backlights for instrument-panel displays. Electroluminescent panels are a capacitor in which the dielectric between the outside plates is a phosphor that gives off photons when the capacitor is charged. By making one of the contacts transparent, the large exposed area emits light. Electroluminescent automotive instrument-panel backlighting, with each gauge pointer also an individual light source, entered production on 1960 Chrysler and Imperial passenger cars, and was continued successfully on several Chrysler vehicles through 1967. Sylvania Lighting Division in Salem and Danvers, MA, produced and marketed an EL night lamp, under the trade name Panelescent, at roughly the same time that the Chrysler instrument panels entered production. These lamps have proven extremely reliable, with some samples known to be still functional after nearly 50 years of continuous operation.
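For band-to-band recombination, the emitted photon energy is roughly the band gap, so the emission wavelength follows from λ(nm) ≈ 1240 / Eg(eV). A small sketch using textbook band-gap values for the III-V materials listed above (dopant centers such as the Mn in ZnS:Mn emit below the host band gap, so this relation only bounds the photon energy):

```python
HC_EV_NM = 1239.84  # Planck constant times speed of light, in eV*nm

def emission_wavelength_nm(bandgap_ev):
    """Approximate photon wavelength for band-to-band recombination."""
    return HC_EV_NM / bandgap_ev

# Approximate room-temperature band gaps (textbook values):
for name, eg in [("GaAs", 1.42), ("InP", 1.34), ("GaN", 3.4)]:
    print(f"{name}: Eg = {eg} eV -> ~{emission_wavelength_nm(eg):.0f} nm")
# GaAs ~873 nm (infrared), InP ~925 nm (infrared), GaN ~365 nm (ultraviolet)
```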
Later in the 1960s, Sylvania's Electronic Systems Division in Needham, MA, developed and manufactured several instruments for the Apollo Lunar Lander and Command Module using electroluminescent display panels manufactured by the Electronic Tube Division of Sylvania at Emporium, PA. Raytheon, Sudbury, MA, manufactured the Apollo Guidance Computer, which used a Sylvania electroluminescent display panel as part of its display-keyboard interface (DSKY).

Powder phosphor-based electroluminescent panels are frequently used as backlights for liquid crystal displays. They readily provide gentle, even illumination for the entire display while consuming relatively little electric power. This makes them convenient for battery-operated devices such as pagers, wristwatches, and computer-controlled thermostats, and their gentle green-cyan glow is common in the technological world. They require relatively high voltage (between 60 and 600 volts). For battery-operated devices, this voltage must be generated by a converter circuit within the device; this converter often makes an audible whine or siren sound while the backlight is activated. For line-voltage-operated devices, it may be supplied directly from the power line. Electroluminescent nightlights operate in this fashion. Brightness per unit area increases with increased voltage and frequency.

Thin-film phosphor electroluminescence was first commercialized during the 1980s by Sharp Corporation in Japan, Finlux (Oy Lohja Ab) in Finland, and Planar Systems in the US. Here, bright, long-life light emission is achieved in thin-film, yellow-emitting, manganese-doped zinc sulfide material. Displays using this technology were manufactured for medical and vehicle applications where ruggedness and wide viewing angles were crucial and liquid crystal displays were not well developed. In 1992, Timex introduced its Indiglo EL display on some watches. Recently, blue-, red-, and green-emitting thin-film electroluminescent materials that offer the potential for long-life and full-color electroluminescent displays have been developed.

In either case, the EL material must be enclosed between two electrodes, and at least one electrode must be transparent to allow escape of the produced light. Glass coated with indium tin oxide is commonly used as the front (transparent) electrode, while the back electrode is coated with reflective metal. Additionally, other transparent conducting materials, such as carbon nanotube coatings or PEDOT, can be used as the front electrode. The display applications are primarily passive (i.e., voltages are driven from the edge of the display rather than from a transistor at each pixel). Similar to LCD trends, there have also been active-matrix EL (AMEL) displays demonstrated, where circuitry is added to sustain voltages at each pixel. The solid-state nature of TFEL allows for a very rugged, high-resolution display fabricated even on silicon substrates. AMEL displays of 1280x1024 pixels at over 1000 lines per inch (lpi) have been demonstrated by a consortium including Planar Systems.

Electroluminescent technologies have low power consumption compared to competing lighting technologies, such as neon or fluorescent lamps. This, together with the thinness of the material, has made EL technology valuable to the advertising industry. Relevant advertising applications include electroluminescent billboards and signs. EL manufacturers are able to control precisely which areas of an electroluminescent sheet illuminate, and when.
This has given advertisers the ability to create more dynamic advertising that is still compatible with traditional advertising spaces. An EL film is a so-called Lambertian radiator: unlike with neon lamps, filament lamps, or LEDs, the brightness of the surface appears the same from all angles of view; electroluminescent light is not directional and is therefore hard to compare with (thermal) light sources measured in lumens or lux. The light emitted from the surface is perfectly homogeneous and is well-perceived by the eye. EL film produces single-frequency (monochromatic) light that has a very narrow bandwidth, is absolutely uniform, and is visible from a great distance.

In principle, EL lamps can be made in any color. However, the commonly used greenish color closely matches the peak sensitivity of human vision, producing the greatest apparent light output for the least electrical power input. Unlike neon and fluorescent lamps, EL lamps are not negative-resistance devices, so no extra circuitry is needed to regulate the amount of current flowing through them. A newer technology is based on multispectral phosphors that emit light from 400 to 600 nm depending on the drive frequency; this is similar to the colour-changing effect seen with aqua EL sheet, but on a larger scale.

Electroluminescent lighting is now used for public safety identification, involving alphanumeric characters on the roofs of vehicles for clear visibility from an aerial perspective. Electroluminescent lighting, especially electroluminescent wire (EL wire), has also made its way into clothing, as many designers have brought this technology to the entertainment and night-life industry.

Engineers have developed an electroluminescent "skin" that can stretch more than six times its original size while still emitting light. This hyper-elastic light-emitting capacitor (HLEC) can endure more than twice the strain of previously tested stretchable displays. It consists of layers of transparent hydrogel electrodes sandwiching an insulating elastomer sheet. The elastomer changes luminance and capacitance when stretched, rolled, and otherwise deformed. In addition to its ability to emit light under a strain of greater than 480% of its original size, the group's HLEC was shown to be capable of being integrated into a soft robotic system. Three six-layer HLEC panels were bound together to form a crawling soft robot, with the top four layers making up the light-up skin and the bottom two the pneumatic actuators. The discovery could lead to significant advances in health care, transportation, electronic communication, and other areas.
The Midwest Region includes Illinois, Indiana, Iowa, Michigan, Minnesota, Missouri, Ohio and Wisconsin. Find a location near you

Endangered Species Program: conserving and restoring threatened and endangered species and their ecosystems

Lake Erie Watersnake (Nerodia sipedon insularum)

The Lake Erie watersnake was removed from the list of federally endangered and threatened species on August 16, 2011. This species was originally listed as a federally threatened species on August 30, 1999. Threatened species are animals and plants that are likely to become endangered in the foreseeable future. Endangered species are animals and plants that are in danger of becoming extinct. Identifying, protecting, and restoring endangered and threatened species is the primary objective of the U.S. Fish and Wildlife Service's endangered species program.

What is the Lake Erie Watersnake?

Appearance - Adult Lake Erie watersnakes are uniform gray in color or have incomplete band patterns. They resemble the closely related northern watersnake (Nerodia sipedon sipedon) but often lack the body markings, or have only a pale version of those patterns. Lake Erie watersnakes grow to 1 1/2 to 3 1/2 feet in length. They are not venomous.

Habitat - In summer, the snakes live on the cliffs, ledges, and rocky shorelines of limestone islands and forage in the nearshore waters of Lake Erie. During winter, Lake Erie watersnakes hibernate underground.

Reproduction - Lake Erie watersnakes mate from late May through early June. During this time they may be found in large "mating balls," which typically consist of one female and several males. Young snakes are born mid-August through September, with an average litter size of 23.

Feeding Habits - Historically, the snakes fed on amphibians and native fish such as madtom, stonecat, logperch, and spottail shiners. However, during the 1990s the round goby (Neogobius melanostomus), an invasive fish, established itself in the Great Lakes and caused population declines of many native fish. Today, 90 percent of the watersnake's diet is round goby and 10 percent is mudpuppies and native fish.

Range - Lake Erie watersnakes can be found on a group of limestone islands in western Lake Erie, and on a portion of the Catawba/Marblehead peninsula in Ohio. Lake Erie watersnakes that lived on islands more than one mile from the Ohio mainland were protected under the Endangered Species Act. Watersnakes on the Ohio mainland, Mouse Island, and Johnson's Island were never protected under the Endangered Species Act. Currently, all Lake Erie watersnakes remain protected under Ohio state wildlife law.

Why is the Lake Erie Watersnake Threatened?

Eradication - The snakes are often killed by humans.

Habitat Loss or Degradation - Lake Erie watersnakes declined because of destruction of their shoreline habitat and excavation of winter hibernation habitat for developments.

What Is Being Done to Prevent Extinction of the Lake Erie Watersnake?

Listing - The Lake Erie watersnake was added to the U.S. List of Endangered and Threatened Wildlife and Plants and received protections provided by the Endangered Species Act, which included protection from intentional killing and destruction of habitat.

Recovery Plan - As a threatened species, the U.S. Fish and Wildlife Service developed a recovery plan that described and prioritized actions needed to help the snake survive.

Research - Researchers are studying the Lake Erie watersnake to find the best way to manage for the snake and its habitat.
Habitat Protection - Some shoreline areas have been permanently protected as natural areas. New developments are incorporating features that provide habitat for the snakes and measures to minimize coastal shoreline habitat loss.

Community Involvement - U.S. Fish and Wildlife Service personnel are working with local communities to develop programs that benefit both the community and the snake.

Public Education - Public outreach programs are raising awareness of the snake, its plight, and its role in the ecosystem.

What Can I Do to Help Prevent the Extinction of Species?

Learn - Learn more about the Lake Erie watersnake and other endangered and threatened species. Understand how the destruction of habitat leads to loss of endangered and threatened species and our nation's plant and animal diversity. Tell others about what you have learned.

Join or Volunteer - Join a conservation group; many have local chapters. Volunteer at a local National Wildlife Refuge, nature center, or zoo.

Support - Support efforts to protect, conserve, or restore natural areas.

Create - Create backyard habitat for wildlife, especially amphibians.

Protect Water Quality - Protect water quality by safely disposing of unused or expired medicines. Never place medicine down the drain, toilet, or garbage disposal where it could impact surface and ground water quality. Also, properly dispose of all hazardous chemicals such as paint, fertilizer, pesticides, and motor oil.
NASA's Curiosity rover has recently made a stunning discovery on Mars that could help scientists get one step closer to determining whether the Red Planet has ever supported life. The 1-ton Curiosity rover discovered a fleeting spike in the levels of methane at its landing site, Gale Crater. Over the course of four measurements in two months on Mars, average methane levels increased tenfold before rapidly dissipating, but the cause of the fluctuation is still unknown. Researchers are particularly interested in finding methane on alien worlds because living organisms produce a great amount of the gas on Earth. While finding significant amounts of methane on Mars is not a sure-fire sign of past or present life, since geological processes can also produce the gas, it is still a good starting point, according to many scientists.

"Right now, it is too much of a single-point measurement for us really to jump to any conclusions," Paul Mahaffy of NASA's Goddard Space Flight Center in Greenbelt, Maryland, one of the authors of the new methane study, told Space.com. "So all we can really do is lay out the possibilities. And we really should have an open mind. Maybe there are microbes on Mars cranking out methane, but we sure can't say that with any certainty. It is just speculation at this point."

A new baseline

The new study, which was published online today (Dec. 16) in the journal Science, also reveals that Curiosity found methane levels in the Martian atmosphere to be, on average, about 0.7 parts per billion. This level is lower than previous estimates and calculations, but still higher than earlier Curiosity readings of methane published last year. The rover's earlier measurements did not find any trace of methane in the Martian atmosphere; however, scientists found a way to concentrate the rover's samples of the atmosphere, allowing them to obtain the newest data about the gas. The lack of methane measured earlier by Curiosity was disappointing for many scientists because of its potentially damning implication for finding Martian life. But the new measurements may mean there is hope yet. "The original Science [journal] paper was very negative about there being any credence to large fluctuations in methane," Jan-Peter Muller, an ExoMars and Curiosity team member who is not directly involved with the new study, told Space.com. "The current paper shows that such conclusions must be taken with a lot of skepticism until sufficient data has been collected."

Water and spikes in methane

Another study published in Science today also details another exciting Curiosity find on Mars. Using a sample of clay, scientists have measured the hydrogen in the Martian atmosphere as it was about 3 billion to 3.7 billion years ago. The new finding could help pin down when the Red Planet lost its liquid surface water. NASA officials also announced during a news conference at the meeting of the American Geophysical Union today that Curiosity has measured organic compounds in a rock the rover drilled into on the Martian surface. The molecules could have been delivered to Mars via meteorites, or they could be native to the Red Planet, officials added.
"We'll keep working on the puzzles these findings present," John Grotzinger, Curiosity project scientist at the California Institute of Technology in Pasadena, said in a statement. "Can we learn more about the active chemistry causing such fluctuations in the amount of methane in the atmosphere? Can we choose rock targets where identifiable organics have been preserved?"

The temporary spike in the level of methane found in the Martian atmosphere is somewhat puzzling for the scientists who discovered it. Curiosity found that the background level of methane averages out to about 0.7 parts per billion, but the spike brought those levels up to an average of 7 parts per billion, in just 60 Mars days. That is particularly surprising because scientists expect methane on Mars to have a lifetime of about 300 years, far longer than it actually stuck around near Curiosity, according to Christopher Webster, the lead author of the new study.

Curiosity scientists made their first, surprising measurement of the methane in November 2013, when the gas clocked in at 5.5 parts per billion, Webster said. After about two more weeks, the researchers repeated the measurement with Curiosity's SAM (Sample Analysis at Mars) instrument and found the levels were at 7 parts per billion. They found this same level the next time they measured. The fourth measurement, taken a couple of weeks later, came in at 9 parts per billion, but six weeks later the methane levels were back to background levels, according to Webster. It is possible that a trapped bit of gas released somewhere near Curiosity caused the rise in methane, scientists speculate. This burp could have created a momentary rise in the methane level around the rover, an increase that dissipated relatively quickly. "Because of the way it [the methane] behaves, we believe it is a smaller, closer source [rather] than it is a bigger, farther away source," Webster told Space.com. "But as far as the source of that methane, we cannot rule out biological activity, whether it is today or in the past, and we cannot rule out geophysical activity."

Other methane measurements

Scientists have seen fluctuations in the level of methane in the Martian atmosphere before, using orbiters and Earth-based means of looking at the planet, said Malynda Chizek Frouard, a Mars methane researcher at New Mexico State University who is unaffiliated with the study. The new data from Curiosity could help create better models of the Martian atmosphere, Chizek Frouard added. Researchers can now try to "create scenarios where a burst of methane would produce the same kind of variation that they saw in Gale Crater," Chizek Frouard told Space.com. Webster and his team say it is possible to narrow down the source of the methane further, but Curiosity probably isn't up to the task. Scientists will need new instruments at Mars that can probe the planet's thin atmosphere to see what type of methane is present. Certain isotopes of the gas could indicate that life forms created the methane at some point in Mars' history, while other isotopes would likely mean that geological forces are responsible for producing the gas.
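That 300-year-lifetime point can be made concrete with a one-line decay estimate (a sketch; the sol length and the simple exponential-loss model are my assumptions, not the study's calculation):

```python
import math

LIFETIME_YEARS = 300.0   # expected photochemical lifetime of methane on Mars
SOL_IN_DAYS = 1.0275     # one Mars solar day in Earth days

def fraction_left(sols):
    """Surviving fraction under simple exponential loss, exp(-t / tau)."""
    years = sols * SOL_IN_DAYS / 365.25
    return math.exp(-years / LIFETIME_YEARS)

print(f"after 60 sols: {fraction_left(60):.5f} of the methane would remain")
# ~0.99944; photochemistry alone removes less than 0.1%, so the observed
# fall from ~7 ppb back to ~0.7 ppb needs another (likely local) explanation.
```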
"This is a big surprise to us," Webster said. "And here we are, writing the next chapter."

Editor's Note: This story was updated at 1:55 p.m. EST (1855 GMT) to include new details.
- Open Access

Phagosome proteomes open the way to a better understanding of phagosome function

© BioMed Central Ltd 2007. Published: 15 March 2007

Phagocytic cells take up microbes and other particles into membrane-bounded organelles called phagosomes. Studies on the protein and lipid composition of model phagosomes containing latex beads are the first step in a systems-biology approach to understanding how these organelles function.

In the vast majority of cases, the microbe inside the phagosome is killed and digested, but a number of important pathogens, including the bacterium Mycobacterium tuberculosis, which kills around two million people each year, have acquired the ability to survive, and even replicate, in this hostile environment. Each type of pathogen that exploits intracellular vesicles seems to have evolved a different survival strategy. Phagosome maturation follows a defined biochemical program, and different pathogens probably redirect this program in a unique fashion. Pathogen proteins and/or lipids released inside phagosomes alter signaling pathways in the phagosomal membrane or in the cytoplasm. A pathogen-containing phagosome in, for example, a macrophage has three distinct 'compartments'. These are the pathogen itself; the luminal contents, which are enriched in hydrolases, protons, and ions such as Ca2+, and have a still poorly defined redox state; and the phagosomal membrane, the boundary between the pathogen and the cytoplasm. This last controls most phagosome functions, including their fusion, recycling, and interactions with the cytoskeleton.

Determining the molecular composition of the phagosome membrane and phagosomal contents is essential if we are to understand in detail how these organelles function. Knowing how a 'normal' phagosome works would provide a strong foundation for understanding how pathogens alter phagosome maturation. This could lead to the development of drugs that block pathogen-induced alteration of phagosome signaling. That might appear a tall order, but a simple model system of phagocytosis involving the uptake of latex beads has recently opened this problem up to molecular dissection. In the most recent study of this sort, a proteomic analysis of latex bead phagosomes (LBPs) in cultured Drosophila melanogaster S2 cells, Stuart et al. have identified more than 600 phagosome-associated proteins. Of the 140 proteins identified in mouse LBPs in earlier studies, 70% have orthologs in the Drosophila phagosome, indicating a high degree of conservation. Recent analyses of LBPs in Dictyostelium discoideum by Gotthardt et al. [3, 4] have revealed around 1,380 proteins, of which 179 have been identified.

Latex bead phagosomes

As first shown by Wetzel and Korn in 1969, phagosomes enclosing latex beads (usually 0.5-3 μm in diameter) can be easily and cleanly isolated by flotation in a sucrose gradient. The enclosed beads float upwards against a strong centrifugal force, which enables LBPs to be purified to a level of contaminants of less than a few percent [3, 6]. LBPs are isolated in one step, whereas all other membrane-bounded organelles require multiple steps of purification. In the presence of ATP and other necessary components, isolated LBPs have been shown to be able to carry out most phagosome functions. They will fuse with endosomes and lysosomes, bind microtubules, move along microtubules, promote the assembly of actin filaments and bind to them, and become acidified [7, 8]. Phagosomes containing non-pathogenic M.
smegmatis, but not those containing the pathogens M. tuberculosis and M. avium, have also been shown to assemble actin, confirming that LBPs are a good model for providing insights into the behavior of phagosomes containing non-pathogenic bacteria.

Proteomic analyses of LBPs

One of the first proteomic studies using LBPs was that of Garin et al., who determined a partial proteome of LBPs in the mouse J774 macrophage cell line 2 hours after internalization and identified 171 phagosome proteins. A continuation of this analysis has since identified more than 800 of the estimated 1,000 proteins in mouse macrophage phagosomes (M Desjardins, personal communication). Burlak et al. identified about 200 proteins in a proteomic analysis of LBPs from human neutrophils. As well as mammalian studies, LBPs have been used to analyze phagocytosis in other organisms. Marion et al. carried out a proteomic analysis on phagosomes isolated from the human protozoan pathogen Entamoeba histolytica using magnetic beads coated with human serum. Around 150 proteins were identified, including myosins and other actin-binding proteins. LBPs have also been used in extensive proteomics analyses of phagosomes from Drosophila and Dictyostelium [3, 4, 12], which are described in more detail below.

Proteins of similar function are consistently detected in all the phagosomes studied. In mature phagosomes, major classes of luminal proteins include hydrolases and other bacteriocidal proteins. In the phagosome membrane are found the various subunits of the proton-transporting H+-ATPase, other transporters and ion channels, heterotrimeric G proteins, monomeric GTPases of the Rab and Rho families, the SNARE fusion machinery, actin-binding and microtubule-binding proteins, clathrin and COP proteins of vesicle coats, and a spectrum of signaling proteins such as protein kinase C and phospholipase D (PLD). PLD is only one of many lipid-converting enzymes that are active in the LBP membrane [8, 9]. Collectively, these analyses leave no doubt that the phagosome, even when it contains only an inert bead, is a complex signaling machine.

A systems approach to understanding phagosome function and phagocytosis

In Dictyostelium, phagocytic uptake of latex beads can be highly synchronized, enabling a detailed kinetic analysis. In contrast to phagocytosis in mammalian cells, in which the particles, or their remains, usually stay within the cells, Dictyostelium phagosomes synchronously exocytose their contents about one hour after uptake. This is a clear signal that maturation is complete. In their most recent proteomic analysis of Dictyostelium LBPs, Gotthardt et al. made a detailed analysis of six different phagosome maturation stages, differentiating a total of 1,388 phagosome protein spots by two-dimensional gel electrophoresis. The analysis revealed a fascinating, and hitherto unexpected, dynamic record of phagosome maturation. Sets of phagosome proteins were identified that were up- or downregulated on phagosomes at well-defined times in the maturation cycle. For example, a comparison of LBPs isolated after 5 minutes with those isolated after 15 minutes revealed that 469 protein spots present at the earlier time had disappeared from the 15-minute phagosome (presumably by recycling or degradation), whereas 130 proteins had appeared at 15 minutes that were absent earlier. Identification of the complete phagosome proteome is still in progress.

In their impressive study of LBPs in Drosophila cells, Stuart et al.
also took a systems-biology approach. Having first identified 617 LBP proteins, they extended the analysis using both RNA interference (RNAi), to knock down protein expression, and bioinformatics. Bioinformatic approaches were used to identify proteins that had been shown to interact with the 617 identified LBP proteins. The rationale was that this 'interactome' would identify phagosome proteins that interact only transiently or weakly with identified phagosome proteins. Such proteins do not co-purify with phagosomes but might be functionally very important. The interaction map shows an impressive set of linked proteins, with a number of functional classes that one would not have expected on phagosomes, although some were suggested in the earlier proteomic analyses, such as components of the spliceosome and of the protein translation machinery, whose roles in phagosomes remain to be demonstrated. Less surprising was the presence of proteasome and chaperone proteins, which fitted with earlier functional analyses [13, 14]. One protein complex found by Stuart et al. that had not been noticed on phagosomes previously was the exocyst complex, which controls some exocytic docking and fusion events.

Extensive RNAi screening was used to selectively knock down the 617 LBP proteins, plus 220 additional proteins predicted from the interactome, to test their potential roles in the phagocytosis of the Gram-negative bacterium Escherichia coli and the Gram-positive Staphylococcus aureus. The fact that 28% of the RNAs tested affected the process of uptake, either increasing or decreasing bacterial uptake, strongly validates the initial screening with LBPs and the interactome analysis. RNAi also confirmed a role in phagocytosis for several proteins of the exocyst complex. Interestingly, there was considerable divergence between the sets of interfering RNAs that affected phagocytosis of S. aureus and E. coli, respectively. Both positive and negative regulators of phagocytosis were identified, a number of which were specific to one of the two pathogens. Some of the genes identified and their effects were unexpected. For example, the knock-down of a ribosomal protein increased the phagocytosis of both bacteria. The power of this kind of analysis is that it gives rise to a rich spectrum of molecular hypotheses that can drive the entire field.

The LBP has emerged as an excellent model for studying the biogenesis of a membrane organelle. It is discrete and easily defined, unlike, for example, endosomes, and is straightforward to isolate. It has additional advantages, including the ease with which phylogenetic comparisons can be made, as exemplified by the ongoing proteomic analyses of Dictyostelium, Entamoeba, Drosophila, mouse, and human phagosomes. Phagosomes can also be compared from host cells of different genetic backgrounds. Because of the distinct sequence of phagosome maturation, it is much easier to analyze phagosomes in different functional states than it is for other organelles. Finally, given that the type of ligand that induces phagocytosis helps determine the final fate of the phagosome, LBPs can be used to study the effect of different ligands (such as IgG, complement, or mannose) and different receptors on phagosome behavior.

- Stuart LM, Boulais J, Charriere GM, Hennessy EJ, Brunet S, Jutras I, Goyette G, Rondeau C, Letarte S, Huang H, et al: A systems biology analysis of the Drosophila phagosome. Nature. 2007, 445: 95-101. doi:10.1038/nature05380.
- Garin J, Diez R, Kieffer S, Dermine JF, Duclos S, Gagnon E, Sadoul R, Rondeau C, Desjardins M: The phagosome proteome: insight into phagosome functions. J Cell Biol. 2001, 152: 165-180. doi:10.1083/jcb.152.1.165.
- Gotthardt D, Dieckmann R, Blancheteau V, Kistler C, Reichardt F, Soldati T: Preparation of intact, highly purified phagosomes from Dictyostelium. Methods Mol Biol. 2006, 346: 439-448.
- Gotthardt D, Blancheteau V, Bosserhoff A, Ruppert T, Delorenzi M, Soldati T: Proteomics fingerprinting of phagosome maturation and evidence for the role of a Galpha during uptake. Mol Cell Proteomics. 2006, 5: 2228-2243. doi:10.1074/mcp.M600113-MCP200.
- Wetzel MG, Korn ED: Phagocytosis of latex beads by Acanthamoeba castellanii (Neff). 3. Isolation of the phagocytic vesicles and their membranes. J Cell Biol. 1969, 43: 90-104. doi:10.1083/jcb.43.1.90.
- Desjardins M, Huber LA, Parton RG, Griffiths G: Biogenesis of phagosomes proceeds through a sequential series of interactions with the endocytic apparatus. J Cell Biol. 1994, 124: 677-688. doi:10.1083/jcb.124.5.677.
- Desjardins M, Griffiths G: Phagocytosis: latex leads the way. Curr Opin Cell Biol. 2003, 15: 498-503. doi:10.1016/S0955-0674(03)00083-8.
- Defacque H, Bos E, Garvalov B, Barret C, Roy C, Mangeat P, Shin HW, Rybin V, Griffiths G: Phosphoinositides regulate membrane-dependent actin assembly by latex bead phagosomes. Mol Biol Cell. 2002, 13: 1190-1202. doi:10.1091/mbc.01-06-0314.
- Anes E, Kühnel MP, Bos E, Moniz-Pereira J, Habermann A, Griffiths G: Selected lipids activate phagosome actin assembly and maturation leading to killing of pathogenic mycobacteria. Nature Cell Biol. 2003, 5: 793-802. doi:10.1038/ncb1036.
- Burlak C, Whitney AR, Mead DJ, Hackstadt T, Deleo FR: Maturation of human neutrophil phagosomes includes incorporation of molecular chaperones and endoplasmic reticulum quality control machinery. Mol Cell Proteomics. 2005, 5: 620-634. doi:10.1074/mcp.M500336-MCP200.
- Marion S, Laurent C, Guillen N: Signalization and cytoskeleton activity through myosin IB during the early steps of phagocytosis in Entamoeba histolytica: a proteomic approach. Cell Microbiol. 2005, 7: 1504-1518. doi:10.1111/j.1462-5822.2005.00573.x.
- Gotthardt D, Warnatz HJ, Henschel O, Bruckert F, Schleicher M, Soldati T: High-resolution dissection of phagosome maturation reveals distinct membrane trafficking phases. Mol Biol Cell. 2002, 13: 3508-3520. doi:10.1091/mbc.E02-04-0206.
- Houde M, Bertholet S, Gagnon E, Brunet S, Goyette G, Laplante A, Princiotta MF, Thibault P, Sacks D, Desjardins M: Phagosomes are competent organelles for antigen cross-presentation. Nature. 2003, 425: 402-406. doi:10.1038/nature01912.
- Lee WL, Kim MK, Schreiber AD, Grinstein S: Role of ubiquitin and proteasomes in phagosome maturation. Mol Biol Cell. 2005, 16: 2077-2090. doi:10.1091/mbc.E04-06-0464.
This book provides a comprehensive introduction to the field of geochemistry. The book first lays out the ‘geochemical toolbox’: the basic principles and techniques of modern geochemistry, beginning with a review of thermodynamics and kinetics as they apply to the Earth and its environs. These basic concepts are then applied to understanding processes in aqueous systems and the behavior of trace elements in magmatic systems. Subsequent chapters introduce radiogenic and stable isotope geochemistry and illustrate their application to such diverse topics as determining geologic time, ancient climates, and the diets of prehistoric peoples. The focus then broadens to the formation of the solar system, the Earth, and the elements themselves. Then the composition of the Earth itself becomes the topic, examining the composition of the core, the mantle, and the crust and exploring how this structure originated. The penultimate chapter covers organic chemistry, including the origin of fossil fuels and the carbon cycle’s role in controlling Earth’s climate, both in the geologic past and the rapidly changing present. A new, final chapter looks at applied geochemistry, covering environmental applications of geochemistry, and geochemical exploration. Geochemistry is essential reading for all earth science students, as well as for researchers and applied scientists who require an introduction to the essential theory of geochemistry, and a survey of its applications in the earth and environmental sciences.
United States spends record $306 billion on weather, climate disasters in 2017

09 January 2018, 01:53 | Nichole Osborne

[Image caption: Roofs ripped off houses in San Juan, Puerto Rico, as Hurricane Maria slammed into the city]

Hurricane Harvey racked up total damage costs of $125 billion, second only to Hurricane Katrina in the 38-year period of record keeping for billion-dollar disasters. The US National Oceanic and Atmospheric Administration (NOAA) said on Monday the destructive climate events came during the third-warmest year on record for the US. "Each of these destructive hurricanes now joins Katrina and Sandy in the new top 5 costliest USA hurricanes on record." Last year's disasters killed 362 people in the USA, including Puerto Rico, NOAA said. The interruption to commerce and standard living conditions will continue as much of Puerto Rico's infrastructure is rebuilt. "The cumulative damage of these 16 United States events during 2017 is $306.2bn, which shatters the previous USA annual record cost of $214.8bn established in 2005." Rainfall from Harvey caused massive flooding that displaced over 30,000 people and damaged or destroyed over 200,000 homes and businesses, NOAA said. The previous record year for $1 billion disasters was 2005, the year of hurricanes Katrina and Rita, with an inflation-adjusted $214.8 billion cost. Meanwhile, Hurricanes Maria and Irma had total damages of $90bn and $50bn, respectively. "During 2017, the USA experienced a historic year of weather and climate disasters," the weather agency said. Scientists have long concluded that carbon dioxide and other emissions from fossil fuels and industry are driving climate change, leading to floods, droughts, and more-frequent powerful storms. 2017 was the third-warmest year on record for the United States, which was tortured by drought, fire, hurricanes, and floods; the effects were felt disproportionately wherever the pace of development, as on Houston's flood plain, has outstripped environmental considerations. NOAA noted that the five warmest years for the United States have all occurred since 2006. "Each year provides another piece of evidence in what science has already confirmed - the consequences of rising temperatures are putting people and wildlife at risk," he said. The nation's $1 billion disasters last year were: three hurricanes, eight severe storms, two inland floods, a crop freeze, a drought, and a wildfire.
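Quick arithmetic on the figures quoted above shows how dominant the three hurricanes were in the annual total (the numbers are taken directly from the article; the percentage is my calculation):

```python
# Costs in billions of dollars, as quoted in the article above.
hurricane_costs = {"Harvey": 125.0, "Maria": 90.0, "Irma": 50.0}
total_2017 = 306.2

hurricanes = sum(hurricane_costs.values())
print(f"three hurricanes: ${hurricanes:.1f}bn, "
      f"{hurricanes / total_2017:.0%} of the $306.2bn annual total")
# three hurricanes: $265.0bn, 87% of the $306.2bn annual total
```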
[Image: Atteva aurea feeding on Vernonia gigantea]
The ailanthus webworm (Atteva aurea) is an ermine moth now found commonly in the United States. It was formerly known under the scientific name Atteva punctella (see the Taxonomy section). This small, very colorful moth resembles a true bug or beetle when not in flight, but in flight it resembles a wasp. The ailanthus webworm is thought to be native to South Florida and the American tropics (south to Costa Rica), which were the habitat of its original larval host plants: the paradise tree (Simarouba glauca) and Simarouba amara. Another tree called tree-of-heaven (Ailanthus altissima), originally from China, has been widely introduced and naturalized, and Atteva aurea has been able to adapt to this new host plant, giving rise to its common name, the "ailanthus webworm". Ailanthus, common name "tree of heaven", is considered an invasive species, although it is still sold by nurseries as a yard plant, mainly because it is one of the species that will grow in polluted or otherwise difficult places. Atteva aurea can be a minor pest in nurseries, although it rarely does serious damage. This tropical moth is commonly seen in summer throughout the continental US and occasionally in eastern Canada (its northern limit, eastern Ontario and south-western Quebec, lies beyond the range of its host plants). The species appears to be either adapting to colder areas or staying farther north as the climate changes. Larvae produce nests on the host plant by pulling two or more leaflets around a network of loose webbing; they then consume the leaflets and bark. The caterpillars have a wide, light greenish-brown stripe down their backs and several thin, alternating white and olive-green stripes along their sides; overall color ranges from light brown to dark black. The adult moth visits flowers, is diurnal, and is a pollinator. The life cycle from egg to egg can take as little as four weeks. Because the species comes from warmer areas, it lacks a diapause stage. Larvae can be found from mid-spring until a hard freeze. There may be many generations each summer, with eggs being laid on the webs of other larvae. This can result in a communal web containing multiple generations, from eggs to various larval instars to pupae. Mating happens in the mornings, with egg-laying apparently happening in the evening. Eggs are laid individually, not in clusters, even though each web may contain many separate eggs. Wilson et al. discovered that morphologically similar attevid moths had been assigned two different names, Atteva ergatica in Costa Rica and Atteva punctella in North America, yet had identical DNA barcodes. Combining DNA barcoding, morphology and food-plant records also revealed a complex of two sympatric species in Costa Rica that are diagnosable by their DNA barcodes and their facies. However, neither of the names could be correctly applied to either species, as A. ergatica is a junior synonym and A. punctella a junior homonym. By linking the specimens to type material through morphology and DNA barcoding, they determined that the species distributed from Costa Rica to southern Quebec and Ontario should be called A. aurea, whereas the similar and marginally sympatric species found in Central America should be called A. pustulella.
The name Phalaena (Tinea) punctella was recognized as a junior homonym almost immediately after its description but has been retained through several major works (Heppner and Duckworth 1983; Covell 1984; Heppner 1984). The two objective replacement names proposed were Tinea punctella (Fabricius, 1787) and Crameria subtilis (Hübner, 1822). The oldest valid name to replace Phalaena punctella is Tinea pustulella, but this remained overlooked until recently (Heppner 2003). Over time seven more nominal taxa were synonymized under Atteva pustulella, namely Deiopeia aurea (Fitch, 1857), Poeciloptera compta Clemens, 1861, Oeta compta floridana (Neumoegen, 1891), A. edithella (Busck, 1908), A. exquisita (Busck, 1912), A. ergatica (Walsingham, 1914) and A. microsticta (Walsingham, 1914). There were early suspicions that A. aurea and A. pustulella might represent different species, the former distributed in the United States, the latter in South America, but at the time there was insufficient material to support this view (Walsingham 1897). A recent taxonomic review of New World Atteva (Becker 2009) introduced several nomenclatural changes and recognized three separate species within the long-standing concept of A. pustulella: A. pustulella, A. aurea and A. floridana. The most recent treatment retains A. floridana as a synonym of Atteva aurea (Wilson et al. 2010).
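The "identical DNA barcodes" point above boils down to a simple sequence comparison. As a rough illustration only (the fragments below are invented placeholders, not real Atteva barcodes), here is how an uncorrected p-distance between two aligned COI fragments might be computed:

```python
# Illustrative sketch: the uncorrected p-distance is the simple metric behind
# statements like "identical DNA barcodes". The sequences are made up.

def p_distance(seq_a: str, seq_b: str) -> float:
    """Fraction of aligned sites that differ (assumes equal-length alignment)."""
    if len(seq_a) != len(seq_b):
        raise ValueError("sequences must be aligned to equal length")
    diffs = sum(1 for a, b in zip(seq_a, seq_b) if a != b)
    return diffs / len(seq_a)

# Hypothetical 39 bp fragments standing in for two Atteva barcodes.
fragment_1 = "AACATTATATTTTATTTTTGGAATTTGAGCAGGAATAGT"
fragment_2 = "AACATTATATTTTATCTTTGGAATTTGAGCTGGAATAGT"

d = p_distance(fragment_1, fragment_2)
print(f"p-distance: {d:.3f}")  # ~5% divergence; ~2% already separates many moth species
```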
<urn:uuid:d806f971-d94f-4ae4-a8a3-a18827a1ac7b>
3.125
1,131
Knowledge Article
Science & Tech.
39.388368
95,482,769
Physicists have predicted the existence of a short-lived tetraneutron with unprecedented properties. A member of Lomonosov Moscow State University, together with his colleagues, has used a new interaction between neutrons to theoretically justify the low-energy tetraneutron resonance recently obtained experimentally. This supports the existence, for a very short period of time, of a particle consisting of four neutrons. According to the supercomputer simulations, the tetraneutron lifetime is 5×10⁻²² s. The research results are published in the top-ranked journal Physical Review Letters. A team of Russian, German and American scientists, among them Andrey Shirokov, Senior Researcher at the Skobeltsyn Institute of Nuclear Physics, has calculated the energy of the resonant tetraneutron state. Their theoretical computations, based on a new approach and a new interaction between neutrons, agree with the results of the experiment in which the tetraneutron was produced.
Searching for neutron stability. A free neutron lives about 15 minutes before it decays into a proton, an electron and an antineutrino. There is also one known stable system consisting of a huge number of neutrons: a neutron star. Scientists have long sought to find out whether there are other systems, even short-lived ones, composed purely of neutrons. A system made up of two neutrons does not form even a short-lived state. After years of experimental and theoretical research, scientists have concluded that there are no such states in a system of three neutrons either. Searches for a tetraneutron, a cluster of four neutrons, have been conducted for more than 50 years. These searches were fruitless until 2002, when a group of French researchers, in an experiment at the Large Heavy Ion National Accelerator (Grand accélérateur national d'ions lourds, GANIL) in Caen, found six events that could be interpreted as tetraneutron production. However, attempts to reproduce this experiment failed, and some scientists suppose that at least part of the original data analysis was incorrect. A new phase of tetraneutron searches is taking place at the Radioactive Ion Beam Factory at the RIKEN Institute, Japan, where a high-quality beam of ⁸He nuclei is available. The ⁸He nucleus consists of an α-particle (the ⁴He nucleus) and four neutrons. Several research teams from different countries have proposed tetraneutron searches at RIKEN. In the first of these experimental searches, ⁸He nuclei bombarded a ⁴He target. In the collision, the α-particle was knocked out of ⁸He, leaving a system of four neutrons. Four events interpreted as the short-lived tetraneutron resonant state were detected. This experiment by the Japanese group was published at the beginning of this year, and it will be continued.
How long could a tetraneutron live? The scientist from Lomonosov Moscow State University and his collaborators have published theoretical evaluations of the energy and lifetime of the tetraneutron resonant state. They contributed to the preparation of one of the proposed experimental searches for the tetraneutron when a group of experimentalists from Germany asked for assistance. Andrey Shirokov, the first author of the article, says: "Such evaluations were made by us in different models, and the obtained results were used to support the experiment application. Afterwards, we thoroughly elaborated the theoretical approach and performed numerous simulations on supercomputers.
The results have been published in our paper in Physical Review Letters." The theoretical result for the energy of the tetraneutron resonance, 0.84 MeV, correlates well with the Japanese experimental finding of 0.83 MeV, which is, however, characterized by a large uncertainty (about ±2 MeV). The calculated width of the resonant tetraneutron state is 1.4 MeV, which corresponds to a lifetime of about 5×10⁻²² s. Andrey Shirokov continues: "It's worth noting that no theoretical paper up to now has predicted the existence of the resonant tetraneutron state at such low energies, of about 1 MeV." The new theoretical result probably stems from a new theoretical approach to the study of resonant states in nuclear systems developed by the scientists. This approach was carefully tested on model problems and in less complicated systems, and only afterwards applied to the tetraneutron, accounting for the specifics of the four-particle decay of this system. Andrey Shirokov, however, indicates an alternative possibility: "Another possible reason is the fact that we've used a new interaction between neutrons elaborated by our team. Our tetraneutron studies will be continued; we'll perform simulations with other, more traditional interactions. At the same time, our French colleagues are going to study the tetraneutron with our interaction within their approach. Of course, all of us are looking forward to the results of new experimental tetraneutron searches." The research has been conducted by a large international team of theorists, with Russia represented not only by scientists from Lomonosov Moscow State University but also from the Pacific National University (Khabarovsk). The team also includes collaborators from the USA and Germany, and researchers from South Korea are joining the group for future studies. The Russian side has been at the forefront of this research, leading the elaboration of the theoretical approach to resonant states and the design of the new interaction between particles in atomic nuclei. Vladimir Koryagin | EurekAlert!
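Readers who want to check the width-to-lifetime conversion quoted above can use the standard relation τ = ħ/Γ. A minimal sketch (not the authors' code):

```python
# Back-of-envelope check: a resonance of width Γ = 1.4 MeV has a lifetime
# τ = ħ/Γ, which reproduces the ~5×10⁻²² s figure quoted in the article.
from scipy.constants import hbar, eV  # hbar in J·s, eV in J

gamma_mev = 1.4
gamma_joules = gamma_mev * 1e6 * eV   # resonance width in joules
tau = hbar / gamma_joules             # lifetime in seconds
print(f"tau ≈ {tau:.1e} s")           # ≈ 4.7e-22 s
```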
<urn:uuid:707033b0-3c78-473f-8784-91cec4a7cbcd>
3.046875
1,802
Content Listing
Science & Tech.
33.57249
95,482,773
The sun is setting. Through gaps in frosted tree tops, I can see bands of pink colour lingering on the blue-gray clouds. According to SunriseSunset.com, we are within a day or two of our earliest sunset of the year, more than ten minutes before the hour of 5 p.m. Their calendar lists sunrise and sunset times to the nearest minute, so I can't tell exactly which day marks the turning point. By the time the winter solstice rolls around, our sunset will be moving later again. In those last few days before the solstice, the shortening of daylight will be only on the morning side, with sunrise still moving later, right through until early January. From a New Scientist article, Early Days, I have a shaky idea of how our days shift through this dark time of the year. The U.S. Naval Observatory offers what sounds like a more systematic explanation in The Dark Days of Winter, but my understanding still feels a bit unsteady. The impression that stays with me is this. Solar days pulse slightly over the course of a year, making the time from one noon to the next longer and shorter. Time as measured by the sun does not march exactly to the beat of atomic vibrations. We don't correct our clocks for this pulsation. Although those 24 hours on the dial suggest that we are keeping track of solar days, we are really approximating them, and clock time wobbles around solar time without quite matching it. Twice in the last month I've seen an "atomic clock" offered for sale. It's really just an electric clock that automatically resets itself when it detects a radio signal from an atomic clock in Boulder, Colorado. You can get one at Lee Valley; you can also get a sundial. With the atomic clock, you can make detailed observations of sunrise and sunset times and then try to get your head around their movements. With the sundial, you can observe the solar day directly, and not worry about how many wobbles of an atom fit into it.
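The "pulsing" of solar days described here is usually called the equation of time. A rough sketch using a common textbook approximation (accurate to about a minute) shows how sundial time drifts against clock time through the year:

```python
# Sketch of the equation of time: sundial time minus clock time, in minutes,
# using a standard fitted approximation (day_of_year 1 = Jan 1).
import math

def equation_of_time_minutes(day_of_year: int) -> float:
    b = 2 * math.pi * (day_of_year - 81) / 364
    return 9.87 * math.sin(2 * b) - 7.53 * math.cos(b) - 1.5 * math.sin(b)

# The offset changes quickly around the December solstice (day ~355), which is
# a big part of why the earliest sunset precedes the solstice and the latest
# sunrise follows it.
for day in (335, 345, 355, 365):
    print(day, round(equation_of_time_minutes(day), 1))
```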
<urn:uuid:b80b0770-ad07-4100-83bb-f2cc5b6dadf9>
3.046875
424
Personal Blog
Science & Tech.
63.181869
95,482,776
NASA's Characterization of Arctic Sea Ice Experiment, known as CASIE, began a series of unmanned aircraft system flights in coordination with satellites. Working with CU-Boulder and its research partners, NASA is using the remotely piloted aircraft to image thick, old slabs of ice as they drift from the Arctic Ocean south through the Fram Strait -- which lies between Greenland and Svalbard, Norway -- and into the North Atlantic Ocean. NASA's Science Instrumentation Evaluation Remote Research Aircraft, or SIERRA, will weave a pattern over open ocean and sea ice to map and measure ice conditions below cloud cover, flying as low as 300 feet. "Our project is attempting to answer some of the most basic questions regarding the most fundamental changes in sea-ice cover in recent years," said CU-Boulder Research Professor James Maslanik of the aerospace engineering sciences department and principal investigator for the NASA mission. "Our analysis of satellite data shows that in 2009 the amount of older ice is just 12 percent of what it was in 1988 -- a decline of 74 percent. The oldest ice types now cover only 2 percent of the Arctic Ocean as compared to 20 percent in the 1980s." SIERRA, laden with scientific instruments, travels long distances at low altitudes, flying below the clouds. The aircraft has high maneuverability and slow flight speed. SIERRA's relatively large payload of approximately 100 pounds, combined with a significant range of 500 miles and a small, 20-foot wingspan, makes it the ideal aircraft for the expedition. The mission is conducted from the Ny-Ålesund research base on the island of Svalbard, Norway, located near the northeastern tip of Greenland. Mission planners are using satellite data to direct flights of the aircraft. "We demonstrated the utility of small- to medium-class unmanned aircraft systems for gathering science data in remote, harsh environments during the CASIE mission," said Matt Fladeland, CASIE project and SIERRA manager at NASA's Ames Research Center in Moffett Field, Calif. The aircraft observations will be complemented by large-scale NASA satellite views of many different features of the Arctic ice. The Moderate Resolution Imaging Spectroradiometer aboard NASA's Aqua satellite will be used to identify the ice edge location, ice features of interest and cloud cover. Other sensors, such as the Advanced Microwave Scanning Radiometer-Earth Observing System on Aqua and the Quick Scatterometer satellite, can penetrate cloud cover and analyze the physical properties of ice. By using multiple types of satellite data in conjunction with high-resolution aircraft products, more can be learned about ice conditions than is possible by using one or two data analysis methods. NASA's CASIE mission supports a larger NASA-funded research effort titled "Sea Ice Roughness as an Indicator of Fundamental Changes in the Arctic Ice Cover: Observations, Monitoring, and Relationships to Environmental Factors." The project also supports the goals of the International Polar Year, a major international scientific research effort involving many NASA research efforts to study large-scale environmental changes in Earth's polar regions. Other CU-Boulder participants in CASIE include Research Associate Ute Herzfeld, aerospace engineering graduate student Ian Crocker and Professional Research Assistant Katja Wegrzyn. Mission updates and more information about CASIE are available at http://www.espo.nasa.gov/casie/. James Maslanik | EurekAlert!
<urn:uuid:75fe0fe5-d9d8-4f93-a306-e7f4a0359bf6>
3.796875
1,339
Content Listing
Science & Tech.
35.810026
95,482,788
A new study using a reconstruction of North American drought history over the last 1,000 years found that the drought of 1934 was the driest and most widespread of the last millennium. Using a tree-ring-based drought record from the years 1000 to 2005 and modern records, scientists from NASA and Lamont-Doherty Earth Observatory found the 1934 drought was 30 percent more severe than the runner-up drought (in 1580) and extended across 71.6 percent of western North America. For comparison, the average extent of the 2012 drought was 59.7 percent. This photo shows a farmer and his two sons during a dust storm in Cimarron County, Oklahoma, 1936. The 1930s Dust Bowl drought had four drought events with no time to recover in between: 1930-31, 1934, 1936 and 1939-40. Image Credit: Arthur Rothstein, Farm Security Administration. "It was the worst by a large margin, falling pretty far outside the normal range of variability that we see in the record," said climate scientist Ben Cook at NASA's Goddard Institute for Space Studies in New York. Cook is lead author of the study, which will appear in the Oct. 17 edition of Geophysical Research Letters. Two sets of conditions led to the severity and extent of the 1934 drought. First, a high-pressure system in winter sat over the west coast of the United States and turned away wet weather – a pattern similar to that which occurred in the winter of 2013-14. Second, the spring of 1934 saw dust storms, caused by poor land management practices, suppress rainfall. "In combination then, these two different phenomena managed to bring almost the entire nation into a drought at that time," said co-author Richard Seager, professor at the Lamont-Doherty Earth Observatory of Columbia University in New York. "The fact that it was the worst of the millennium was probably in part because of the human role." According to the recent Fifth Assessment Report of the Intergovernmental Panel on Climate Change, or IPCC, climate change is likely to make droughts in North America worse, and the southwest in particular is expected to become significantly drier, as are summers in the central plains. Looking back one thousand years in time is one way to get a handle on the natural variability of droughts so that scientists can tease out anthropogenic effects – such as the dust storms of 1934. "We want to understand droughts of the past to understand to what extent climate change might make it more or less likely that those events occur in the future," Cook said. The abnormal high-pressure system is one lesson from the past that informs scientists' understanding of the current severe drought in California and the western United States. "What you saw during this last winter and during 1934, because of this high pressure in the atmosphere, is that all the wintertime storms that would normally come into places like California instead got steered much, much farther north," Cook said. "It's these wintertime storms that provide most of the moisture in California. So without getting that rainfall it led to a pretty severe drought." This type of high-pressure system is part of normal variation in the atmosphere, and whether or not it will appear in a given year is difficult to predict in computer models of the climate. Models are more attuned to droughts caused by La Niña's colder sea surface temperatures in the Pacific Ocean, which likely triggered the multi-year Dust Bowl drought throughout the 1930s.
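Figures like "71.6 percent of western North America" come from counting grid cells of a drought index that fall below a threshold. A toy sketch with fabricated data (the real study used tree-ring-reconstructed PDSI, and proper analyses weight cells by area):

```python
# Minimal sketch of how "drought extent" style numbers can be computed from a
# gridded drought index such as PDSI. The grid values below are random stand-ins,
# and the threshold of -1 (mild drought) is a common but arbitrary choice.
import numpy as np

rng = np.random.default_rng(0)
pdsi = rng.normal(loc=-1.0, scale=2.0, size=(50, 80))  # fake PDSI grid, one year

drought = pdsi < -1.0              # cells in at least mild drought
extent = drought.mean() * 100      # percent of grid cells in drought
severity = pdsi[drought].mean()    # mean index over the drought-stricken cells
print(f"extent: {extent:.1f}% of grid, mean PDSI in drought: {severity:.2f}")
```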
In a normal La Niña year, the Pacific Northwest receives more rain than usual and the southwestern states typically dry out. But a comparison of weather data to models looking at La Niña effects showed that the rain-blocking high-pressure system in the winter of 1933-34 overrode the effects of La Niña for the western states. This dried out areas from northern California to the Rockies that otherwise might have been wetter. As winter ended, the high-pressure system shifted eastward, interfering with spring and summer rains that typically fall on the central plains. The dry conditions were exacerbated and spread even farther east by dust storms. "We found that a lot of the drying that occurred in the springtime occurred downwind from where the dust storms originated," Cook said, "suggesting that it's actually the dust in the atmosphere that's driving at least some of the drying in the spring and really allowing this drought event to spread upwards into the central plains." Dust clouds reflect sunlight and block solar energy from reaching the surface. That prevents evaporation that would otherwise help form rain clouds, meaning that the presence of the dust clouds themselves leads to less rain, Cook said. "Previous work and this work offers some evidence that you need this dust feedback to explain the real anomalous nature of the Dust Bowl drought in 1934," Cook said. Dust storms like the ones in the 1930s aren't a problem in North America today. The agricultural practices that gave rise to the Dust Bowl were replaced by those that minimize erosion. Still, agricultural producers need to pay attention to the changing climate and adapt accordingly, not forgetting the lessons of the past, said Seager. "The risk of severe mid-continental droughts is expected to go up over time, not down," he said. Ellen Gray | EurekAlert!
<urn:uuid:5dcdddc0-8bf4-4c05-92dc-57c3e510d4b7>
3.515625
1,659
Content Listing
Science & Tech.
49.03542
95,482,819
Squash Vine Borer
Head: Top of head dark bronze-black. Eyes orange. Nose cone (palpi) orange, white on underside.
Antenna: Black. Males have a row of hairs along one side (pectinate), making the antennae appear thicker.
Thorax: Dark bronze-black with white tufts at the side edges.
Wings: Forewing bronze-black; fringe thick, sometimes slightly lighter in colour. Hindwing transparent, with brown veins and fringe.
Legs: Black with orange and black scaling. Hind legs thickly covered with orange and black hair. Front feet black; mid and hind feet striped.
Abdomen: Colour varies from yellow to orange. Segments 1 and 2 black. Segment 3 orange. Other segments mostly orange with a large black dot at the center of each segment. The male abdomen usually has the last two segments dark, with a pointed hair tuft. The female tip is bluntly rounded, usually red. Underside orange-yellow.
Size: 14 to 16 mm long. Wingspan 25 to 32 mm.
Habitat: Gardens and meadows.
Food: Adults often found on milkweed blossoms and flowering garden herbs. Larvae feed on squashes, pumpkin, zucchini and gourds, sometimes on cucumber and watermelon.
Flight Time: Late June to August.
Life Cycle: Reddish-brown, somewhat flattened, very tiny eggs are laid from the root crown to one foot above it on stems, as well as on leaf stalks, leaves and fruit buds. The larva bores into plant stems and feeds on the juices, leaving a sawdust trail. Mature larvae burrow 25 to 50 mm into the soil to overwinter, pupating in the spring in the same tunnel. Larvae appear grub-like but have legs: three pairs on the thorax and five pairs on the abdomen. Mature larvae are 25 mm long, white with a dark head. Adults are day flyers. One generation per year in Ontario.
Comments: Essex County (Ojibway Prairies); Pelee Island. Common in Ontario.
<urn:uuid:4d8e741e-f08f-4533-9854-168c1ba2d198>
3.15625
438
Knowledge Article
Science & Tech.
63.095827
95,482,878
Abstract: In this new approach, the total number of photons from the Big Bang and the volume of the entire Universe at the present epoch have produced the energy density of the CMB photons. This energy density shows that the photons of the CMB have a wavelength of about 1 cm, i.e. an approximate frequency of 30 GHz, with a temperature of […]. This new approach indicates that the CMB photons are the force behind the expansion of the Universe, or the dark energy. The peak at 0.2 cm, about 160 GHz, with a temperature of 2.7 K could either be due to all the galaxies in the Universe giving a peak intensity in the microwave at 2.7 K, or, according to P.J.E. Peebles, Principles of Physical Cosmology (pages 144 and 145): "The CBR energy density can be compared to that of other local fields. The luminosity of the Milky Way is […], so at our position near the outskirts, at distance […] from the centre, the energy density is […]. This defines an effective temperature for the starlight; the result is […]. The interstellar magnetic field is […]. If we again define effective temperature by the energy density, […], we get […]."
Comments: 5 Pages. [v1] 2017-09-04 05:17:53
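For reference, the standard blackbody relations behind these numbers can be checked directly. The sketch below (not the paper's calculation) reproduces the roughly 160 GHz spectral peak of a 2.7 K blackbody and the 1 cm wavelength of a 30 GHz photon:

```python
# Sanity check: for a blackbody at the CMB temperature, the spectral peak in
# frequency units is nu_max ≈ 2.821 kT/h (Wien's law, frequency form), and a
# 30 GHz photon has wavelength c/nu = 1 cm.
from scipy.constants import k, h, c

T = 2.725                                # CMB temperature in kelvin
nu_max = 2.821 * k * T / h               # Wien peak, frequency form
print(f"peak frequency: {nu_max/1e9:.0f} GHz")    # ~160 GHz
print(f"peak wavelength: {c/nu_max*100:.2f} cm")  # ~0.19 cm, i.e. about 0.2 cm
print(f"30 GHz wavelength: {c/30e9*100:.0f} cm")  # 1 cm
```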
<urn:uuid:31bcf1e4-82b8-43c4-b775-d466269e0bfc>
2.578125
413
Academic Writing
Science & Tech.
55.139148
95,482,896
In the past 540 million years there have been five mass extinctions. A mass extinction is when more than three-quarters of the world's species become extinct in a short span of time. Total up the numbers in these five mass extinctions and we have lost 99 percent of the Earth's species – and that's without human intervention. Now, experts are concerned that a sixth mass extinction has begun, and could take a mere 1,000 years to complete. Whether we are in the midst of a mass extinction or not, it's clear that we are losing species at an alarming rate. Here are five recently extinct animals that left our world over the past decade:
5. Javan Tiger: Declared Extinct in 2003
The Javan tiger was a big cat, but not in comparison to other tiger species. The males weighed in at 220 to 310 pounds and the females averaged 170 to 250 pounds. Their small size was thought to be attributed to the size of the available prey in their native land of Java, Indonesia. The theory is that the smaller the prey, the smaller the predator. At one point, the Javan tigers inhabited all of Java. From the mid-1800s to the mid-1900s the native people viewed these tigers as pests and chased them off to the remote mountainous areas. By 1972, the remaining tigers were confined to the Meru Betiri National Park Reserve. Unfortunately, the protection of the reserve was not enough. Due to hunting, loss of forest habitat and lack of prey, the number of Javan tigers dwindled. The last members were spotted in the reserve in 1976, and the Javan tiger was officially declared extinct in 2003, joining the ever-growing list of recently extinct species.
4. Western Black Rhinoceros: Declared Extinct in 2018
Rhinos have been around since prehistoric times and are the second-largest land mammal, second in size only to elephants. In 2018, the Western Black Rhinoceros was officially declared extinct, leaving only four remaining rhino sub-species, all of which are endangered. Despite the Western Black Rhino's large size, they were actually extremely fast runners and could reach speeds of 40 miles per hour when they needed to intimidate other animals or humans. However, these big guys had extremely poor vision and had been known to accidentally run into trees and other objects. Besides their scary intimidation tactics, they were actually quite gentle and strict vegetarians. These giants had tough black skin that protected them from predators, but it was also quite soft to the touch and sensitive to sunlight, so the rhinos would wallow in mud to give themselves some UV protection. Loss of habitat and poaching were the downfall of the Western Black Rhino. They had two horns that were valued on the black market for their beauty and supposed medicinal value. The last known members of this rhino sub-species were known to live in Cameroon, West Africa.
3. Southern Gastric Brooding Frog: Declared Extinct in 2002
Scientists had hoped the Southern Gastric Brooding Frog could provide a solution for stomach ulcers in humans. Why? Because this frog delivered its young from its mouth, temporarily turning off its stomach acid to do so. Scientists thought the ability to turn off the production of stomach acid could be useful to humans. Before scientists were able to discover the Southern Gastric Brooding Frog's secret, the species became extinct. The Southern Gastric Brooding Frog was an aquatic species that lived in rainforests, wet forest communities and near freshwater streams in Australia.
The last living member of this species was spotted in the wild in 1981, and the Southern Gastric Brooding Frog was officially declared extinct in 2002. The cause of extinction is unknown. Their demise could have been caused by timber harvesting, habitat changes, non-native species of plants and animals, by disease or something else.
2. Pinta Island Tortoise: Last Survivor Died in 2012
The Pinta Island Tortoise has long been considered extinct in the wild. The last known member of the species was found in 1972 and transferred to the Charles Darwin Research Center. Being the last of his kind, he was named "Lonesome George." Researchers had hoped to find George a mate, but never succeeded. In 2012, Lonesome George died and the Pinta Island Tortoise was no more. George was 200 pounds and five feet long and was in the prime of his life at the time of his death; Pinta tortoises are thought to have a 200-year lifespan. The cause of his early death is not known, but a heart attack is suspected. Where had George's relatives gone? Why was he the last of his kind? Whalers and settlers of the Galapagos Islands, the native range of the Pinta Island Tortoise, ate them. Because of their size, these tortoises provided a lot of food to humans. In addition, because the tortoises could live a long time without food and water, the whalers liked to bring them along on long excursions as a source of fresh meat.
1. Baiji River Dolphin: Functionally Extinct as of 2006
The Baiji River Dolphin has been considered critically endangered since 1996. Conservationists made efforts to save these beautiful river dolphins, but were unsuccessful. The last confirmed sighting of this freshwater river dolphin was in 2001. Since then, researchers have scanned the Yangtze River in China -- the Baiji River Dolphin's only habitat -- in search of any survivors and have found none. In 2006, researchers declared that the species had likely joined the long list of extinct animals, and stated that any remaining individuals would be unlikely to survive. The Chinese regarded the Baiji River Dolphin as a national treasure. According to Chinese legend, the first Baiji River Dolphin was the reincarnation of a drowned princess. But legends and treasures could not save these graceful freshwater dolphins from industrialization. Heavy ship traffic, over-fishing, dam building, dredging and water pollution produced an environment in which the Baiji River Dolphin could not survive. The loss of these species should be a reminder of the importance of taking conservation seriously. We may not be able to save every species, but if we don't take quick and serious action, we may one day lose them all.
<urn:uuid:6bf549ee-3c7b-45f5-a9cc-35ebba0c209d>
3.625
1,320
Listicle
Science & Tech.
52.239579
95,482,909
Lichen communities as climate indicators in the U.S. Pacific States
Author(s): Robert J. Smith; Sarah Jovan; Bruce McCune
Source: Gen. Tech. Rep. PNW-GTR-952. Portland, OR: U.S. Department of Agriculture, Forest Service, Pacific Northwest Research Station. 44 p.
Publication Series: General Technical Report (GTR)
Station: Pacific Northwest Research Station
PDF: View PDF (6.0 MB)
Description: Epiphytic lichens are bioindicators of climate, air quality, and other forest conditions and may reveal how forests will respond to global changes in the U.S. Pacific States of Alaska, Washington, Oregon, and California. We explored climate indication with lichen communities surveyed using both the USDA Forest Service Forest Inventory and Analysis (FIA) and Alaska Region (R10) methods. Across the Pacific States, lichen indicator species and ordination "climate scores" reflected associations between lichen community composition and climate. Indicator species are appealing targets for monitoring, while climate scores at sites resurveyed in the future can indicate climate change effects. Comparing the FIA and R10 survey methods in coastal Alaska showed that plot size affected lichen-species capture but not climate scores, whereas mixing data from both methods did not improve climate scores. Remeasurements from 1989 to 2014 in south-central and southeast Alaska revealed the importance of systematically random plot designs for detecting climate responses in lichen communities. We provide an appendix of lichen species with climate indicator values. Lichen indicator species and community climate scores are promising tools for meeting regional forest management objectives.
- You may send email to firstname.lastname@example.org to request a hard copy of this publication. (Please specify exactly which publication you are requesting and your mailing address.)
- We recommend that you also print this page and attach it to the printout of the article, to retain the full citation information.
- This article was written and prepared by U.S. Government employees on official time, and is therefore in the public domain.
Citation: Smith, Robert J.; Jovan, Sarah; McCune, Bruce. 2017. Lichen communities as climate indicators in the U.S. Pacific States. Gen. Tech. Rep. PNW-GTR-952. Portland, OR: U.S. Department of Agriculture, Forest Service, Pacific Northwest Research Station. 44 p.
Keywords: Bioindication, climate change, coastal Pacific Northwest, forest health, gradient analysis, indicator species, niche tolerance, ordination, site scores.
Related publications:
- Lichen bioindication of biodiversity, air quality, and climate: baseline results from monitoring in Washington, Oregon, and California.
- Lichen communities and species indicate climate thresholds in southeast and south-central Alaska, USA
- Epiphytic Macrolichen Community Composition Database: epiphytic lichen synusiae in forested areas of the US
XML: View XML
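One simple way to see what a community "climate score" can mean is an abundance-weighted average of per-species indicator values. The report's actual scores come from ordination of FIA/R10 survey data, so the sketch below, with invented indicator values, is conceptual only:

```python
# Conceptual sketch of a community climate score as an abundance-weighted mean
# of species indicator values. The indicator values here are hypothetical
# positions on a warm-cool axis, not the report's published values.

indicator_value = {
    "Hypogymnia enteromorpha": -0.8,
    "Platismatia glauca": -0.3,
    "Ramalina menziesii": 0.9,
}

def climate_score(abundances: dict) -> float:
    """Abundance-weighted mean indicator value for one survey plot."""
    total = sum(abundances.values())
    return sum(a * indicator_value[sp] for sp, a in abundances.items()) / total

plot = {"Hypogymnia enteromorpha": 2, "Platismatia glauca": 3, "Ramalina menziesii": 1}
print(round(climate_score(plot), 3))  # negative -> cooler-climate community
```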
<urn:uuid:5a81bd67-0052-4544-ad6f-742c03bb8ae6>
2.75
632
Truncated
Science & Tech.
30.222537
95,482,911
The Deepwater Program: Northern Gulf of Mexico Continental Slope Habitat and Benthic Ecology - DgoMB: Polychaetes Ocean Biogeographic Information System. The Deepwater Program: Northern Gulf of Mexico Continental Slope Habitat and Benthic Ecology - DgoMB: Polychaetes. Occurrence dataset https://doi.org/10.15468/71oqaf accessed via GBIF.org on 2018-07-20. A research program has been initiated by the Minerals Management Service (Contract No. 1435-01-99-CT-30991) to gain better knowledge of the benthic communities of the deep Gulf of Mexico entitled “The Deepwater Program: Northern Gulf of Mexico Continental Slope Habitat and Benthic Ecology”. Increasing exploration and exploitation of fossil hydrocarbon resources in the deep-sea prompted the Minerals Management Service of the U.S. Department of the Interior to support an investigation of the structure and function of the assemblages of organisms that live in association with the sea floor in the deep-sea. The program, Deep Gulf of Mexico Benthos or DGoMB, is studying the northern Gulf of Mexico (GOM) continental slope from water depths of 300 meters on the upper continental slope out to greater than 3,000 meters water depth seaward of the base of the Sigsbee and Florida Escarpments. The study is focused on areas that are the most likely targets of future resource exploration and exploitation. However, to develop a Gulf-wide perspective of deep-sea communities, sampling in areas beyond those thought to be potential areas for exploration has been included in the study design. A major enhancement in the program is the extension of the transects onto the abyssal plain of the central Gulf of Mexico through collaborative studies with Mexican scientists. This additional work effort will allow assessment of benthic communities structure and function throughout the basin by sampling the deepest habitats in the region. The program is designed to gain a better ability to predict variations in the structure and function of animal assemblages in relation to water depth, geographic location, time and overlying water mass. Biological studies are integrated with measurements of physical and chemical hydrographic parameters, sediment geochemical properties and geological characteristics that are known to influence benthic community distributions and dynamics. Eight (8) hypotheses are being tested on the basis of measures of benthic community structure. It is hypothesized that community structure varies as a function of: 1) water depth, 2) geographic location (east vs. west), 3) association with canyons, 4) association with mid-slope basins, 5) sea surface primary productivity, 6) proximity to hydrocarbon seeps, 7) time (seasonal and inter-annual scales), and 8) association with the base of escarpments.
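As a conceptual illustration of how hypothesis 1 (community structure varies with water depth) might be screened, one can correlate pairwise community dissimilarities with pairwise depth differences. The station counts below are fabricated, and a plain Spearman correlation is only a stand-in for a formal Mantel or PERMANOVA test:

```python
# Toy sketch: Bray-Curtis dissimilarity between stations vs. depth difference.
import numpy as np
from scipy.stats import spearmanr

abund = np.array([[12, 0, 3, 5],   # fake polychaete counts, one row per station
                  [10, 1, 4, 6],
                  [0, 8, 1, 0],
                  [1, 9, 0, 1]])
depth = np.array([300, 500, 2800, 3000])  # station depths in meters

def bray_curtis(x, y):
    """Bray-Curtis dissimilarity between two abundance vectors."""
    return np.abs(x - y).sum() / (x + y).sum()

pairs = [(i, j) for i in range(len(depth)) for j in range(i + 1, len(depth))]
dissim = [bray_curtis(abund[i], abund[j]) for i, j in pairs]
ddepth = [abs(depth[i] - depth[j]) for i, j in pairs]
rho, p = spearmanr(dissim, ddepth)
print(f"Spearman rho = {rho:.2f}")  # strongly positive -> depth structures the fauna
```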
<urn:uuid:10dedaf3-ffc6-42ce-9e3f-4b817741c8c9>
3.359375
587
Academic Writing
Science & Tech.
28.56331
95,482,912
Within the €3.6 million EU research project PROMESS1 (PROfiles across MEditerranean Sedimentary Systems), with an EU contribution of €2.7 million, European scientists have collected 500,000-year-old sediment cores from the bottom of the Mediterranean Sea. These samples will allow researchers to reconstruct climate variations since pre-historic times, thus providing keys to understanding what is happening to Earth's climate now. Ocean drilling is crucial to understanding changes in climate, as the sediments hold archives of past developments. PROMESS1 involves partners from France, Germany, Italy, Spain, the Netherlands and the United Kingdom. "The findings of the PROMESS1 project place European research on a par with the world leaders in marine geosciences, the US and Japan," said European Research Commissioner Philippe Busquin. "This research helps us to understand the Earth's situation and envisage scenarios to be taken into account by policy-makers. Changes in sea-bottom sediments off the shore of densely populated coastlines may have a deep impact on those areas. Moreover, better understanding of how these sediments formed will help identify and monitor gas- and oilfields." Journalists are invited to visit the research vessel SRV Bavenit and meet the research team tomorrow, Friday 23 July, at 10.00, in the harbour of Barcelona. Cross-examination of data from different sources will help better understand climate variations. The data of PROMESS1 will be compared with data provided by ice core drilling.
<urn:uuid:7582989f-aba0-42b9-8201-aa07c7679fba>
3.4375
950
Content Listing
Science & Tech.
40.30033
95,482,913
Around the home, regularly used tools are generally kept close at hand: a can opener in a kitchen drawer, a broom in the hall closet. Less frequently used tools are more likely to be stored in less accessible locations, out of immediate reach, perhaps in the basement or garage. And hazardous tools might even be kept under lock and key. Similarly, the human genome has developed a set of sophisticated mechanisms for keeping selected genes readily available for use while other genes are kept securely stored away for long periods of time, sometimes forever. Candidate genes for such long-term storage include those required only for early development and proliferation, potentially dangerous genes that could well trigger cancers and other disorders should they be reactivated later in life. Cancer researchers and others have been eager to learn more about the molecules that direct this all-important system for managing the genome. Now, researchers at The Wistar Institute and Fox Chase Cancer Center have successfully determined the three-dimensional structure of a key two-molecule complex involved in long-term gene storage, primarily in cells that have ceased proliferating, or growing. The study also sheds light on a related two-molecule complex that incorporates one member of the molecular pair, but with a different partner. This second complex is involved in storing genes in a more accessible way in cells that continue to grow. A report on the team's findings, published online on September 17, will appear in the October issue of Nature Structural and Molecular Biology. "The two-molecule complex we studied is pivotal for protecting certain genes from expression, genes that could cause problems if they were activated," says Ronen Marmorstein, Ph.D., a professor in the Gene Expression and Regulation Program at Wistar and one of the two senior authors on the study. "This is the first time we've been able to see the structure of these molecules communicating and interacting with each other, and it provides important insights into their function." "By defining some of the rules that dictate how these complexes are formed and operate, we have revealed a part of the difference between growing and non-growing cells," says Peter D. Adams, Ph.D., an associate member in the Basic Science Division at Fox Chase and the other senior author on the study. "This difference is crucial to the distinction between normal and cancerous cells and may inform our ability to treat this disease." The molecular complex studied by the scientists governs the assembly of an especially condensed form of chromatin, the substructure of chromosomes. The complex is called a histone chaperone complex, responsible for inserting the appropriate histones into the correct locations within the chromatin. Histones are relatively small proteins around which DNA is coiled to create structures called nucleosomes. Compact strings of nucleosomes, then, form into chromatin. "There are more and less condensed forms of chromatin," explains Marmorstein. "The less condensed forms correlate with more gene expression, and the more condensed forms involve DNA that's buried away and is not transcribed." "Appropriate packaging of the DNA in the cell nucleus is crucial for proper functioning of the cell and suppression of disease states, such as cancer," says Adams. An unanticipated observation from the study centers on the region of association between the two molecules in the complex. 
The researchers knew that one of the two molecules in the complex, called ASF1, associated with a particular molecular partner, HIRA, when directing assembly of the more condensed form of chromatin. But it could also associate with a different partner, called CAF1, to shepherd assembly of the less condensed form of chromatin. On closer study, the scientists discovered that HIRA and CAF1 have nearly identical structural motifs in the regions of interaction with ASF1. This means that ASF1 can bind to one or the other molecular partner, but not to both. In other words, the interaction is mutually exclusive: a kind of decision is made by ASF1 as to whether to guide the assembly process toward the more or the less condensed form of chromatin. What determines the choice? The relevant factors are unknown for now. Franklin Hoke | EurekAlert!
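The "mutually exclusive" binding described above can be pictured with a simple competitive mass-action equilibrium. The sketch below is a toy model with invented concentrations and dissociation constants, not measured values from the study:

```python
# Toy equilibrium model of ASF1's "decision": if HIRA and CAF1 compete for the
# same ASF1 surface, simple competitive binding gives the fraction of ASF1 in
# each complex. All numbers are hypothetical (arbitrary concentration units).

def asf1_partition(hira, caf1, kd_hira, kd_caf1):
    """Fractions of ASF1 bound to HIRA, bound to CAF1, and free,
    assuming both partners are in excess over ASF1."""
    h, c = hira / kd_hira, caf1 / kd_caf1
    denom = 1 + h + c
    return h / denom, c / denom, 1 / denom

# Equal partner concentrations but tighter HIRA binding tips the balance:
f_hira, f_caf1, f_free = asf1_partition(hira=1.0, caf1=1.0, kd_hira=0.1, kd_caf1=1.0)
print(f"HIRA-bound {f_hira:.2f}, CAF1-bound {f_caf1:.2f}, free {f_free:.2f}")
```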
<urn:uuid:9ac18d31-9adc-4914-b32e-2d1097fb5d5c>
3.328125
1,495
Content Listing
Science & Tech.
37.017242
95,482,914
CS552 Course Wiki: Spring 2013 » Main » Using the Assembler

In order to test your project, you need to assemble programs to be loaded into memory. To do this, a simple assembler is provided. It takes source code like the text in the figure below and produces two files: an object file and a listing for your reference.

The assembler lives in a course directory that has already been added to your PATH. Say you have a source file named "myfile.asm"; assembling it produces many files, two of which are the object file and the listing file. The assembler always produces a warning that, if there are any errors, the output is not valid. This is just a reminder -- the message itself is not an error. See the notes on running programs in the WISC-SP13 simulator-debugger. The assembler also produces four additional output files.

Assembly programs are written using the semantics outlined in the WISC-SP13 ISA document. C-style comments can be used (//). Static data and labels can also be used.
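To make the object-file-and-listing flow concrete, here is a minimal two-pass assembler sketch in Python. It is not the provided WISC-SP13 tool: the toy opcodes, encodings, and directives are hypothetical stand-ins, chosen only to show how labels, C-style comments, and static data are typically handled.

def assemble(lines):
    """Two-pass toy assembler: returns (object_words, listing_text)."""
    labels, stmts = {}, []
    # Pass 1: drop //-style comments and record the address of each label.
    for raw in lines:
        stmt = raw.split("//")[0].strip()
        if not stmt:
            continue
        if stmt.endswith(":"):
            labels[stmt[:-1]] = len(stmts)   # a label marks the next word
        else:
            stmts.append(stmt)
    # Pass 2: encode each statement, resolving label operands to addresses.
    opcodes = {"halt": 0x0000, "nop": 0x0800, "j": 0x2000}  # hypothetical
    words, listing = [], []
    for addr, stmt in enumerate(stmts):
        parts = stmt.split()
        if parts[0] == ".word":              # static data directive
            word = int(parts[1], 0) & 0xFFFF
        else:
            word = opcodes[parts[0]]
            if len(parts) > 1:               # operand is a label reference
                word |= labels[parts[1]] & 0x0FFF
        words.append(word)
        listing.append(f"{addr:04x}: {word:04x}  {stmt}")
    return words, "\n".join(listing)

source = """
start:            // execution begins here
    nop
    j start       // jump back to the label
data:
    .word 0x1234  // one word of static data
""".splitlines()

obj, lst = assemble(source)
print(lst)  # the human-readable listing; 'obj' holds the machine words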
<urn:uuid:12ec0ef4-b14e-4519-8382-ad7a11912841>
3.171875
249
Tutorial
Software Dev.
48.832778
95,482,947
Common name: Stonecat. Taxonomy: available through www.itis.gov.

Identification: Becker (1983); Page and Burr (1991); Etnier and Starnes (1993); Jenkins and Burkhead (1994).

Size: 31 cm.

Native Range: St. Lawrence-Great Lakes, Hudson Bay (Red River), and Mississippi River basins from Quebec to Alberta, and south to northern Alabama, northern Mississippi, and northeastern Oklahoma; Hudson River drainage, New York (Page and Burr 1991). Native range data for this species provided in part by NatureServe.

Table 1. States with nonindigenous occurrences, the earliest and latest observations in each state, and the tally and names of HUCs with observations†. Names and dates are hyperlinked to their relevant specimen records. The list of references for all nonindigenous occurrences of Noturus flavus is found here. Table last updated 5/25/2018. † Populations may not be currently present.

Means of Introduction: Unknown; possibly a bait-bucket introduction or stock contamination.

Status: Reported from West Virginia.

Impact of Introduction: Unknown.

References: (click for full references)
Becker, G.C. 1983. Fishes of Wisconsin. University of Wisconsin Press, Madison, WI.
Etnier, D.A., and W.C. Starnes. 1993. The fishes of Tennessee. University of Tennessee Press, Knoxville, TN.
Hocutt, C.H., R.E. Jenkins, and J.R. Stauffer, Jr. 1986. Zoogeography of the Fishes of the Central Appalachians and Central Atlantic Coastal Plain. Pages 161-212 in C.H. Hocutt and E.O. Wiley, eds. The Zoogeography of North American Freshwater Fishes.
Jenkins, R.E., and N.M. Burkhead. 1994. Freshwater fishes of Virginia. American Fisheries Society, Bethesda, MD.
Page, L.M., and B.M. Burr. 1991. A field guide to freshwater fishes of North America north of Mexico. The Peterson Field Guide Series, volume 42. Houghton Mifflin Company, Boston, MA.

Revision Date: 5/5/2010. Peer Review Date: 4/1/2016.

Citation: Fuller, P., 2018, Noturus flavus Rafinesque, 1818: U.S. Geological Survey, Nonindigenous Aquatic Species Database, Gainesville, FL, https://nas.er.usgs.gov/queries/FactSheet.aspx?SpeciesID=745, Revision Date: 5/5/2010, Peer Review Date: 4/1/2016, Access Date: 7/16/2018.

This information is preliminary or provisional and is subject to revision. It is being provided to meet the need for timely best science. The information has not received final approval by the U.S. Geological Survey (USGS) and is provided on the condition that neither the USGS nor the U.S. Government shall be held liable for any damages resulting from the authorized or unauthorized use of the information.
<urn:uuid:e29eb503-ece0-4f78-b5ff-edc8348ab239>
2.71875
674
Structured Data
Science & Tech.
60.066961
95,482,971
Their results suggest that such magnetic fields play a key role in channeling matter to form denser clouds, and thus in setting the stage for the birth of new stars. The work will be published in the November 24 edition of the journal Nature (online version: November 16).

Image: The Triangulum Galaxy M33, which presents astronomers with a bird's-eye view of its disk. The pink blobs are regions containing newly formed stars. Credit & Copyright: Thomas V. Davis (http://tvdavisastropics.com)

Stars and their planets are born when giant clouds of interstellar gas and dust collapse. You've probably seen the resulting stellar nurseries in beautiful astronomical images: colorful nebulae, lit by the bright young stars they have brought forth. Astronomers know quite a bit about these so-called molecular clouds: they consist mainly of hydrogen molecules – unusual in a cosmos where conditions are rarely right for hydrogen atoms to bond together into molecules. And if one traces the distribution of clouds in a spiral galaxy like our own Milky Way, one finds that they are lined up along the spiral arms.

But how do those clouds come into being? What makes matter congregate in regions a hundred or even a thousand times denser than the surrounding interstellar gas? One candidate mechanism involves the galaxy's magnetic fields. Everyone who has seen a magnet act on iron filings in the classic classroom experiment knows that magnetic fields can impose order. Some researchers have argued that something similar goes on in the case of molecular clouds: that galaxies' magnetic fields guide and direct the condensation of interstellar matter to form denser clouds and facilitate their further collapse. Some astronomers see this as the key mechanism enabling star formation. Others contend that the cloud matter's gravitational attraction and the turbulent motion of gas within the cloud are so strong as to cancel any influence of an outside magnetic field.

If we were to restrict attention to our own galaxy, it would be difficult to find out who is right: we would need to see our galaxy's disk from above to make the appropriate measurements, but in reality our Solar System sits within the galactic disk. That is why Hua-bai Li and Thomas Henning from the Max Planck Institute for Astronomy chose a different target: the Triangulum Galaxy, 3 million light-years from Earth and also known as M 33, which is oriented in just the right way (cf. image).

Using the Submillimeter Array (SMA), a telescope located at the Mauna Kea Observatory on the island of Hawai'i, Li and Henning measured properties of the radiation received from different regions of the galaxy that are correlated with the orientation of those regions' magnetic fields. They found that the magnetic fields associated with the galaxy's six most massive giant molecular clouds were orderly and well aligned with the galaxy's spiral arms. If turbulence played a more important role in these clouds than the ordering influence of the galaxy's magnetic field, the fields associated with the clouds would instead be random and disordered. Li and Henning's observations are thus a strong indication that magnetic fields indeed play an important role when it comes to the formation of dense molecular clouds – and to setting the stage for the birth of stars and planetary systems like our own.
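The ordered-versus-random distinction in the preceding paragraph can be made quantitative with simple circular statistics. The Python sketch below is not the authors' analysis; it merely illustrates the logic of the test with made-up orientation angles. Field orientations inferred from polarized emission are axial (defined only modulo 180 degrees), which is why the angles are doubled before averaging:

import numpy as np

def alignment_strength(field_deg, arm_deg):
    """Mean resultant length of the field-minus-arm orientation offsets.

    Doubling the angles handles the 180-degree ambiguity of axial data.
    Returns ~1 for well-aligned fields, ~0 for random orientations.
    """
    offsets = np.deg2rad(np.asarray(field_deg) - arm_deg)
    return float(np.abs(np.mean(np.exp(2j * offsets))))

rng = np.random.default_rng(0)
arm = 40.0                                 # hypothetical arm orientation (deg)
ordered = arm + rng.normal(0.0, 10.0, 6)   # six clouds, small scatter
random_ = rng.uniform(0.0, 180.0, 6)       # turbulence-dominated alternative
print(f"ordered case: {alignment_strength(ordered, arm):.2f}")  # close to 1
print(f"random case:  {alignment_strength(random_, arm):.2f}")  # much lower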
Contact information: Hua-bai Li (first author).

The research is supported by the Max Planck Institute for Astronomy and the Harvard-Smithsonian Center for Astrophysics. The Submillimeter Array is a joint project between the Smithsonian Astrophysical Observatory and the Academia Sinica Institute of Astronomy and Astrophysics and is funded by the Smithsonian Institution and the Academia Sinica.

Dr. Markus Pössel | Max-Planck-Institut
<urn:uuid:d31db6fe-229b-4b40-9b91-4d50660abe1f>
4.125
1,406
Content Listing
Science & Tech.
41.257097
95,482,974
Say goodbye to La Nina, maybe hello to a more normal summer

By SETH BORENSTEIN May. 10, 2018

WASHINGTON (AP) — U.S. scientists say this winter's brief La Nina has evaporated, meaning an increased likelihood of a more normal summer. The National Oceanic and Atmospheric Administration said Thursday that the central Pacific has returned to normal after a weak-to-moderate La Nina, the natural cooling that happens there every few years; La Nina is the cooler flip side of El Nino and affects weather worldwide.

La Nina usually means more Atlantic hurricanes, but it won't be goosing this hurricane season. Other factors, such as wind and rain patterns off Africa and other natural climate events, might still add up to a stormy season.

Mike Halpert of NOAA says the absence of El Nino and La Nina means this summer's weather will be harder to predict. But he expects long-term increased warming.
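For context, NOAA tracks these phases with the Oceanic Nino Index (ONI), a three-month running mean of sea-surface-temperature anomalies in the central Pacific. The Python sketch below applies the widely cited plus-or-minus 0.5 degree Celsius threshold; the weak/moderate/strong bands and the sample values are illustrative assumptions, not an official NOAA product:

def classify_oni(oni_c):
    """Rough ENSO phase from an ONI value (three-month SST anomaly, deg C).

    Thresholds follow the commonly cited +/-0.5 C convention; the
    weak/moderate/strong bands are an illustrative assumption.
    """
    if abs(oni_c) < 0.5:
        return "neutral"
    phase = "El Nino" if oni_c > 0 else "La Nina"
    strength = ("weak" if abs(oni_c) < 1.0
                else "moderate" if abs(oni_c) < 1.5
                else "strong")
    return f"{strength} {phase}"

# A fading cool event, from peak to neutral (hypothetical sample values):
for oni in (-1.0, -0.6, -0.2):
    print(f"ONI {oni:+.1f} C -> {classify_oni(oni)}")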
<urn:uuid:fee11c1a-4a32-4a48-bd9c-c5603183223d>
2.78125
193
Truncated
Science & Tech.
49.627993
95,482,984
From launching the most powerful spherical tokamak on Earth to discovering a mechanism that halts solar eruptions, scientists at the U.S. Department of Energy's Princeton Plasma Physics Laboratory advanced the boundaries of clean energy and plasma science research in 2015. Here, in no particular order, are our picks for the Top-5 developments of the year:

(Image caption, from top left: 1. Magnetic island geometry revealing the mechanism for the density limit (reprinted with permission from Phys. Plasmas 22, 022514, 2015); 2. Carlos Paz-Soldan and Raffi Nazikian, who advanced understanding of the control of heat bursts; 3. interior of the NSTX-U showing the completed center stack; 4. the W7-X stellarator in Greifswald, Germany; 5. a solar flare at the peak of the cycle in October 2014, with no observed eruptions. Background: umbrella view of the interior of the NSTX-U. Credit: Elle Starkman/PPPL; Lisa Petrillo/GA for Carlos Paz-Soldan and Raffi Nazikian.)

1. Starting up the National Spherical Torus Experiment-Upgrade (NSTX-U)

PPPL completed construction of the NSTX-U, the Laboratory's flagship fusion facility, doubling its heating and magnetic power and making it the most powerful spherical tokamak in the world. The machine is shaped like a cored apple, unlike conventional donut-shaped fusion facilities, and creates high plasma pressure with relatively low magnetic fields -- a highly cost-effective feature, since magnetic fields are expensive to produce. The upgrade creates a flexible research platform that will enable physicists to directly address some of fusion's most outstanding puzzles.

2. Discovering a mechanism that halts solar eruptions

Solar eruptions are massive explosions of plasma and radiation from the sun that can be deadly for space travelers and can disrupt cell phone service and other crucial functions when they collide with the magnetic field that surrounds Earth. Researchers working on the Magnetic Reconnection Experiment (MRX), the world's premier device for studying the convergence and separation of magnetic fields in plasma, have discovered a previously unknown mechanism that causes eruptions to fail. The findings could prove highly valuable to NASA, which is eager to know when an eruption is coming and when the start of an outburst is just a false alarm.

3. First plasma on Germany's Wendelstein 7-X

On December 10, 2015, the Wendelstein 7-X (W7-X) stellarator produced its first plasma after 10 years of construction. PPPL, which leads the United States' collaboration in the German project and will conduct research on it, joined the worldwide celebration of the achievement. The Laboratory designed and delivered five barn-door-size magnetic coils, together with power supplies, that will help shape the plasma during W7-X experiments. The Lab also designed and installed an X-ray diagnostic system that will collect vital data from the plasma in the machine. Stellarators are fusion facilities that confine plasma in twisty -- or 3D -- magnetic fields, compared with the symmetrical -- or 2D -- fields that tokamaks produce.

4. Enhanced model of the source of the density limit

Physicists have long puzzled over a mystery called the density limit -- a process that causes fusion plasmas to spiral apart when they reach a certain density and keeps tokamaks from operating at peak efficiency. Building on their past research, PPPL scientists have developed a detailed model of the source of this limitation.
They've traced the cause to the runaway growth of bubble-like islands that form in the plasma and are cooled by impurities that stray plasma particles kick up from the walls of the surrounding tokamak. Researchers counter this heat loss by pumping fresh heat into the plasma, but even a tiny bit of net cooling in the islands can cause them to grow exponentially until the density limit is reached (a toy numerical sketch of this runaway appears below). These findings could lead to methods to overcome the barrier.

5. Breakthrough in understanding how to control intense heat bursts

Scientists from General Atomics and PPPL have taken a key step in predicting how to control potentially damaging heat bursts inside a fusion reactor. In experiments on the DIII-D National Fusion Facility that General Atomics operates for the DOE in San Diego, the physicists built upon previous DIII-D research showing that these intense heat bursts -- called edge localized modes (ELMs) -- could be suppressed with tiny magnetic fields. But how these fields worked had been unclear. The new findings reveal that the fields can create two kinds of response, one of which allows heat to leak from the edge of the plasma at just the right rate to avert the heat bursts. The team also identified the changes in the plasma that lead to suppression of the bursts.

NSTX-U and DIII-D are DOE Office of Science User Facilities. PPPL, on Princeton University's Forrestal Campus in Plainsboro, N.J., is devoted to creating new knowledge about the physics of plasmas -- ultra-hot, charged gases -- and to developing practical solutions for the creation of fusion energy. Results of PPPL research have ranged from a portable nuclear materials detector for anti-terrorist use to universally employed computer codes for analyzing and predicting the outcome of fusion experiments. The Laboratory is managed by the University for the U.S. Department of Energy's Office of Science, which is the largest single supporter of basic research in the physical sciences in the United States, and is working to address some of the most pressing challenges of our time. For more information, please visit science.energy.gov.

John Greenwald | EurekAlert!
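The exponential sensitivity described in item 4 can be illustrated with a one-line growth law. The Python sketch below is not PPPL's model; the equation form and all constants are hypothetical, chosen only to show how a slightly negative power balance produces runaway island growth while a positive balance shrinks the island:

def island_width(net_heating, k=2.0, w0=0.01, t=5.0, steps=500):
    """Euler-integrate dw/dt = -k * net_heating * w from width w0.

    Negative net heating (net cooling) makes the growth rate positive,
    so the island width grows exponentially. Units are arbitrary.
    """
    w, dt = w0, t / steps
    for _ in range(steps):
        w += dt * (-k * net_heating * w)
    return w

for p in (+0.2, 0.0, -0.2):   # net power balance inside the island
    print(f"net heating {p:+.1f} -> island width {island_width(p):.4f}")
# Slight net cooling (-0.2) grows the island roughly seven-fold, while
# the same-sized net heating (+0.2) shrinks it: the density-limit runaway.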
<urn:uuid:d579a429-e1a0-4cec-b1c1-38e0922cf42a>
2.734375
1,806
Content Listing
Science & Tech.
44.048617
95,482,998
The Skylab mutiny was a day-long strike held by the crew of Skylab 4 on December 28, 1973, the last of the U.S. National Aeronautics and Space Administration's Skylab missions. The three-man crew, Gerald P. Carr, Edward G. Gibson, and William R. Pogue, turned off radio communications with NASA ground control for a full day, refusing messages from mission control and spending the day relaxing and looking at the Earth before resuming contact. Once communications resumed, there were discussions between the crew and NASA, and the mission continued for several more weeks before the crew returned to Earth in 1974. The 84-day mission was Skylab's last, and American astronauts would not set foot in a space station again for two decades, until Shuttle–Mir in the 1990s.

The event, the only strike to have occurred in space, has been extensively studied as a case study in fields including space medicine, team management, and psychology. Man-hours in space were, and remain in the 21st century, profoundly expensive: a single day on Skylab was worth about $22.4 million in 2017 dollars. The mutiny also affected the planning of future space missions, especially long-term missions.

Background and causes

Behavioral problems during a spaceflight are of concern to mission planners because they can trigger a mission failure. NASA has studied factors that affect crew social dynamics, such as morale, stress management, and how crews solve problems as a group, with missions like HI-SEAS. Each Skylab mission pushed farther into the unknown of space medicine, and it was difficult to make predictions about the reaction of the human body to prolonged weightlessness. The first manned Skylab mission set a spaceflight record with its 28-day stay, and Skylab 3 roughly doubled that to 59 days; no one had spent this long in orbit.

Possible contributing factors to the mutiny include:
- Twelve-week length of stay (the longest yet attempted by astronauts up to that time)
- Isolated environment
- Design of the spacecraft
- Microgravity environment
- Workload expectations of the Skylab team
- Workload expectations of mission control
- Crew inexperience (all first-time astronauts)
- No transition period

Skylab was visited by three three-man crews, launched to orbit by the Saturn IB aboard Apollo CSM spacecraft, who spent progressively longer periods on the station (28, 59, and then 84 days); the mutiny occurred on the last mission, the longest and the only one flown by an all-rookie crew. Skylab 3 had finished all their work and asked for more—this may have led NASA to have higher expectations for the next crew. However, the next crew were all "rookies" (they had not been in space before) and may not have had the same concept of workload as the previous crews. Both previous crews had veteran members, and each included one member who had been to the moon and back. Another factor was that the rookie astronauts were in denial about their problems and hid the issues they were having from mission control, leading to even higher mental strain. The crew increasingly became bothered by having every hour of their trip scheduled.

"We need more time to rest. We need a schedule that is not so packed. We don't want to exercise after a meal.
We need to get things under control." — Gerald Carr

NASA had continued with a workload similar to that of the shorter Skylab 3 mission, and the crew gradually fell behind. After six weeks, the crew announced their mutiny and turned off all communication with ground control for December 28, 1973. The three men stopped work; Gibson spent the day at Skylab's solar console, and Carr and Pogue spent the time in the wardroom looking out of the window. Nobody had yet spent six weeks in space, and it was not known what was happening psychologically. NASA carefully worked through the crew's requests, reducing their workload for the next six weeks. The incident took NASA into an unknown realm of concern in the selection of astronauts, still a question as humanity considers human missions to Mars or a return to the Moon.

After the events of the mutiny, there were many attempts either to determine the cause or to downplay what happened. Nevertheless, the lessons learned focused on balancing workload with crew psychology and stress levels. One factor that affects disaster planning is the process of learning lessons from past incidents. Two contrasting pressures are the desire to hide a problem to avoid consequences such as reprimands and the honest evaluation of the issue to prevent future occurrences. Among the complicating factors is the interplay between management and subordinates (see also the Apollo 1 fire and the Challenger disaster). On Skylab 4, one problem was that the crew was pushed even harder as they fell behind on their workload, creating an increasing level of stress.

None of the astronauts returned to space, there was only one more NASA spaceflight in the decade, and Skylab was the first and last all-American space station. NASA had been planning larger space stations, but its budget shrank considerably after the moon landings, and the Skylab orbital workshop was the only major execution of the Apollo Applications Program. Though the final Skylab mission became known for the mutiny, it was also known for the large amount of work accomplished over its long duration. Skylab orbited for six more years before decaying in 1979 due to higher-than-anticipated solar activity. The next U.S. spaceflight was the Apollo–Soyuz Test Project in July 1975, followed, after a gap in human spaceflight, by the first Space Shuttle orbital flight, STS-1.

The mutiny is considered a significant example of "us versus them" syndrome in space medicine. Crew psychology has been a point of study for Mars analog missions such as Mars-500, with a particular focus on crew behavior triggering a mission failure or other issues. One of the impacts of the Skylab mutiny is that at least one member of each International Space Station crew must be a space veteran (not on a first flight). The 84-day stay of the Skylab 4 mission was a human spaceflight record that was not exceeded by a NASA astronaut for over two decades; the 96-day Soviet Salyut 6 EO-1 mission broke Skylab 4's overall record in 1978.

Sources including Homesteading Space dispute that the crew purposefully ended contact with mission control. The book was written by spaceflight historian David Hitt along with former astronauts Owen K. Garriott and Joseph P. Kerwin.
See also:
- Space psychology
- Psychological and sociological effects of spaceflight
- Team composition and cohesion in spaceflight missions
- Effects of sleep deprivation in space
- List of spaceflight records (duration of spaceflight)
- Space adaptation syndrome
- Timeline of longest spaceflights
<urn:uuid:e2e46495-fe34-49c2-8f43-3cc016d7549c>
3.9375
2,239
Knowledge Article
Science & Tech.
62.376671
95,483,040
This post first appeared on The AnthropoZine. You can view the original here.

17 November 2014 | Last week, the United States and China, the world's leading polluters, announced plans to limit their greenhouse gas emissions and strengthen cooperation on issues related to climate change and clean energy. While the announcement centered on the nations' pledges on carbon dioxide (CO2) emissions targets (a reduction of 26-28% below 2005 levels by 2025 for the United States, and a goal for China's emissions to reverse their upward course by 2030), a White House fact sheet offered a more detailed glimpse at additional actions.

The document announces a renewed commitment to the U.S.-China Clean Energy Research Center, established by a 2009 agreement between President Obama and China's then-president Hu Jintao. It also includes a cooperative effort to phase out hydrofluorocarbons, a "Climate-Smart/Low-Carbon" city-planning initiative, and an effort to encourage trade in "green goods."

Perhaps most interesting, deep in the fact sheet's second page, is the document's description of "a major carbon capture and storage project in China that supports a long term, detailed assessment of full-scale sequestration in a suitable, secure underground geologic reservoir." As it goes on, the plan announces a "new frontier in CO2 management": a carbon capture, use, and sequestration (CCUS) project that will capture and store CO2 while producing fresh water, thus demonstrating power generation as a net producer of water instead of a water consumer. According to the fact sheet: "This CCUS project with Enhanced Water Recovery will eventually inject about 1 million tons of CO2 and create approximately 1.4 million cubic meters of freshwater per year."

The description is loaded with promise yet bogged down with technical language. So how will it all work? Carbon capture and storage projects aim to collect CO2 from industrial emissions and store it someplace (generally underground or underwater) where it won't be released into the atmosphere. Some such projects aim not only to keep CO2 from entering the atmosphere, but also to produce something useful or marketable in the process. In the example discussed here last month, a Canadian energy company designed a facility to collect unwanted CO2, which was then sold to be injected beneath an oil field to free up oil that had been stuck in rock formations. The U.S.-China plan suggests a similar approach, but with the byproduct of extra oil swapped out in favor of fresh water, something China badly needs.

Those seeking a comprehensive explanation of the process should seek out scientific writings, but here we can provide some basics. Once CO2 is captured from emissions, it can be compressed and transported to a geologic formation where it will be securely, and permanently, stored. Depending on conditions at the storage site, the injection of CO2 into these spaces can effectively push water from the ground, allowing it to be collected for industrial use or distribution to areas in need. In some cases, water extracted in the process can be clean enough to drink.

The project certainly has its share of unanswered questions, technical challenges, and areas for concern. And there is also the question of whether this non-binding agreement will receive continued long-term commitment from each country.
But with proper safeguards and execution, the carbon-capture water project could represent a novel approach to an urgent problem, and potentially a meaningful blueprint for cooperative climate actions to come.
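As a quick sanity check on the fact sheet's two published figures, the implied water yield per tonne of CO2 injected is easy to work out; the short Python sketch below uses only those two numbers:

co2_tonnes_per_year = 1_000_000    # "about 1 million tons of CO2" injected
water_m3_per_year = 1_400_000      # "approximately 1.4 million cubic meters"

yield_m3_per_tonne = water_m3_per_year / co2_tonnes_per_year
print(f"implied yield: {yield_m3_per_tonne:.1f} m^3 of water per tonne of CO2")
# -> 1.4 m^3 (about 1,400 liters) of recovered fresh water for every
#    tonne of CO2 stored, if the project performs as described.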
<urn:uuid:66091123-93b8-45c6-bfe2-7e4fd01584cc>
3.1875
724
News Article
Science & Tech.
36.106627
95,483,044
A groundbreaking study by Harvard University's Harvard Forest and the Smithsonian Institution reveals that, if left unchecked, recent trends in the loss of forests to development will undermine significant land conservation gains in Massachusetts, jeopardize water quality, and limit the natural landscape's ability to protect against climate change.

The scientists researched and analyzed four plausible scenarios for what Massachusetts could look like in the future. The scenarios were developed by a group of forestry professionals, land-use planning and water policy experts, and conservation groups. The scenarios reflect contrasting patterns and intensities of land development, wood harvesting, conservation, and agriculture. The two-year study is unique in its forward-looking approach and its use of sophisticated computer models to conduct a detailed acre-by-acre analysis of the entire forested landscape of Massachusetts over 50 years.

"What we found is that land-use decisions have immediate and dramatic impacts on many of the forest benefits people depend on," said Jonathan Thompson, Senior Ecologist at Harvard Forest and lead author of the new study. This is the first time a study of this magnitude has been conducted for an entire state. Thompson goes on to say, "Massachusetts is an important place to study land-use because it is densely populated, heavily forested, and experiencing rapid change – much like the broader forested landscape of the eastern U.S. The results of the study show that sprawl, coupled with a permanent loss of forest cover in Massachusetts, create an urgent need to address land-use choices."

"We know from decades of research that forests are more than a collection of trees, they are 'living infrastructure' that works 24 hours a day to provide climate protection, clean water, local wood products, and natural areas for people and wildlife. The results of this new study show that seemingly imperceptible changes to the land add up in ways that can significantly enhance or erode these vital benefits, depending on the choices we all make," said David Foster, Director of the Harvard Forest and co-author of the study.

The stakes are high but there is good news in the study. "The Forests as Infrastructure scenario shows it's possible to protect forest benefits while also increasing local wood production and supporting economic development, by making important but achievable changes," said Thompson. Forests as Infrastructure clusters more of the development, implements "improvement forestry" on much of the harvested land, and increases the rate of forest conservation with a focus on priority habitat. By 2060, compared to Recent Trends, this scenario would:
- Limit flooding risks in virtually all of the state's major watersheds
- Grow 20% more high-value trees like large oak, sugar maple, and white pine
- Double the amount of local wood harvested
- Maintain a 35% increase in the storage of carbon that would otherwise warm the earth
- Reduce forest fragmentation by 25%

The team has received funding from the National Science Foundation to extend the study to include the five other New England states. By using science to understand and inform land-use decisions here in Massachusetts, the researchers are building on the Commonwealth's history as a leader in science and conservation to help shape the future of one of the most globally significant forested regions in the world.
Download the report and the executive summary with policy addendum; watch a short video on the report; and access maps, figures, and b-roll at: http://harvardforest.fas.harvard.edu/changes-to-the-land. For interviews with the authors or project collaborators, contact: Clarisse Hart, 978-756-6157; email@example.com, or Barbara MacLeod, 207-752-0484; firstname.lastname@example.org.

The Harvard Forest is a department of the Faculty of Arts and Sciences (FAS) of Harvard University. The research center is based in central Massachusetts and comprises 3,500 acres of land, research facilities, and the Fisher Museum. Since 1988, the Harvard Forest has been a Long-Term Ecological Research Site funded by the National Science Foundation to conduct integrated, long-term studies of forest dynamics.

Clarisse Hart | EurekAlert!
<urn:uuid:7d19140c-b883-4c05-b2b6-caba30453a10>
3.65625
1,445
Content Listing
Science & Tech.
36.362406
95,483,045
Transcript of space

The ring that surrounds Saturn could be the remnants of a moon that was shattered by Saturn's gravity.

Uranus: Uranus' axis is tilted at a 97-degree angle, meaning that it orbits lying on its side! Talk about a lazy planet.

Neptune: Neptune was discovered in 1846 (over 150 years ago). Because one Neptune year lasts 165 Earth years, it only completed its first full orbit since its discovery in 2011! Like Jupiter, Neptune has a dark spot caused by a storm. Neptune's spot is smaller than Jupiter's -- it is only about the size of the planet Earth.

Pluto: Note: Pluto is no longer considered a planet -- instead, astronomers call it a dwarf planet or planetoid.

The first man on the moon was Neil Armstrong.

U.S. Space Shuttle

By Bronte Heslehurst. Hope U liked it!

Black holes: A black hole is a region of space from which nothing, including light, can escape. If you hover near a black hole and come back, you will return younger than the people you left.

Wormholes: A wormhole is a "shortcut" through spacetime.

Dwarf stars: A white dwarf is very hot when it is formed, but since it has no source of energy, it will gradually radiate away its energy and cool down.
<urn:uuid:17b0cf4e-e882-4543-9ee4-8a484d0fccec>
3.1875
339
Truncated
Science & Tech.
50.013069
95,483,060
The design and development of an Integrated Refuse Management System for the proposed International Space Station were performed. The primary goal was to make use of any existing potential energy or material properties that refuse may possess. The secondary goal was the complete removal or disposal of those products that could not in any way benefit the astronauts' needs aboard the Space Station. The design of a continuous living and experimental habitat in space has spawned the need for a highly efficient and effective refuse management system capable of managing nearly forty thousand pounds of refuse annually. To satisfy this need, the following four integrable systems were researched and developed: collection and transfer; recycle and reuse; advanced disposal; and propulsion assist in disposal. The design of a Space Station subsystem capable of collecting and transporting refuse from its generation site to its disposal and/or recycling site was accomplished. Several methods of recycling or reusing refuse in the space environment were researched; the optimal solution was determined to be pyrolysis. The objective of removing refuse from the Space Station environment, subsequent to recycling, was fulfilled with the design of a jettison vehicle. A number of jettison vehicle launch scenarios were analyzed. Selection of a proper disposal site and the development of a system to propel the vehicle to that site were completed. Reentry into the Earth's atmosphere for the purpose of refuse incineration was determined to be the most attractive solution.
<urn:uuid:e4eef5e2-e8ba-4050-850c-7bf24838407a>
2.65625
283
Knowledge Article
Science & Tech.
15.876165
95,483,070
The term pelagic is derived from a Greek word meaning the sea or open ocean. When applied to fish, it generally means those species adapted to living not far from the ocean surface. The pelagic fish of commercial interest may be found from the top surface waters to depths as great as 656.2 ft (200 m) or more.

Keywords: Pelagic Fish; Pelagic Species; Purse Seine; Jack Mackerel; Packing Medium
<urn:uuid:a94ae3aa-ce83-491b-a16d-e914d2269be8>
2.8125
100
Truncated
Science & Tech.
62.1715
95,483,076
When NASA's Juno spacecraft flew past Earth on Oct. 9, 2013, it received a boost in speed of more than 8,800 mph (about 3.9 kilometers per second), which set it on course for a July 4, 2016, rendezvous with Jupiter. One of Juno's sensors, a special kind of camera optimized to track faint stars, also had a unique view of the Earth-moon system. The result was an intriguing, low-resolution glimpse of what our world would look like to a visitor from afar.

The cameras that took the images for the movie are located near the pointed tip of one of the spacecraft's three solar-array arms. They are part of Juno's Magnetic Field Investigation (MAG) and are normally used to determine the orientation of the magnetic sensors. These cameras look away from the sunlit side of the solar array, so as the spacecraft approached, the system's four cameras pointed toward Earth. Earth and the moon came into view when Juno was about 600,000 miles (966,000 kilometers) away — about three times the Earth-moon separation.

During the flyby, timing was everything. Juno was traveling about twice as fast as a typical satellite, and the spacecraft itself was spinning at 2 rpm. To assemble a movie that wouldn't make viewers dizzy, the star tracker had to capture a frame each time the camera was facing Earth at exactly the right instant. The frames were sent to Earth, where they were processed into video format. The musical accompaniment is an original score by Vangelis.
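The 2-rpm spin sets a hard cadence on the imaging: the star tracker gets exactly one Earth-facing instant per rotation. The Python sketch below works through that timing; the capture window and playback rate are hypothetical placeholders, used only to show why the assembled movie compresses hours of approach into seconds:

spin_rpm = 2.0                        # Juno's spin rate (from the article)
rotation_period_s = 60.0 / spin_rpm   # the camera faces Earth once per rotation

capture_span_h = 3.0                  # hypothetical capture window
playback_fps = 12                     # hypothetical playback frame rate

frames = int(capture_span_h * 3600 / rotation_period_s)
print(f"one usable frame every {rotation_period_s:.0f} s of flight time")
print(f"{capture_span_h:.0f} h of approach -> {frames} frames "
      f"-> {frames / playback_fps:.0f} s of video at {playback_fps} fps")
# 360 frames captured over three hours play back in half a minute, which
# is why the Earth-moon system appears to move so quickly in the movie.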
<urn:uuid:94816644-8c59-4d0c-a6f9-38f7d66a7f6f>
3.140625
745
Content Listing
Science & Tech.
65.098512
95,483,077
Pushing the envelope of Albert Einstein's 'spooky action at a distance', known as entanglement, researchers at the Joint Quantum Institute (JQI) of the Commerce Department's National Institute of Standards and Technology (NIST) and the University of Maryland have demonstrated a 'quantum buffer', a technique that could be used to control the data flow inside a quantum computer.

A team of scientists in Japan has demonstrated the possibility of switching the magnetization of a thin magnetic film with a non-conventional and innovative method, achieving a considerable step forward in magnetic data storage and the field known as spintronics.

Researchers at Arizona State University's Biodesign Institute and faculty in the Department of Chemistry and Biochemistry reveal for the first time the three-dimensional character of DNA nanotubules, rings, and spirals.

Researchers succeeded for the first time in directly measuring the spin of electrons in a material that exhibits the quantum spin Hall effect, which was theoretically predicted in 2004 and first observed in 2007.

How many different sudokus are there? How many different ways are there to color in the countries on a map? And how do atoms behave in a solid? Researchers have now developed a new method that quickly provides an answer to these questions.

The ink, composed of silver nanoparticles, can be used in electronic and optoelectronic applications to create flexible, stretchable, and spanning microelectrodes that carry signals from one circuit element to another.

Tiny light-emitting diodes with optical microsystems that can produce all the colors of the rainbow, and a new method for producing printed circuit boards: Fraunhofer researchers are showing innovative developments at the nano tech 2009 exhibition in Japan.
<urn:uuid:ef2e0909-42ce-44cc-8825-7bfbbd538b99>
2.625
354
Content Listing
Science & Tech.
17.765709
95,483,108
The adverse effects of harmful algal blooms (HABs) include toxin production, fish gill clogging, oxygen depletion, and degraded water quality. Molecular approaches to species identification and quantification may be broadly categorized as "whole cell" methods (e.g., FISH) or "lysed cell" methods (e.g., sandwich hybridization, microarray hybridization, real-time qPCR). DNA-based methods are more rapid, sensitive, and specific at the species and population level, are amenable to high throughput, and require only a minor level of expertise in routine laboratory procedures.
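As an illustration of the quantification step, real-time qPCR typically converts a measured threshold cycle (Ct) to starting gene-copy or cell numbers through a standard curve fitted to known dilutions. The Python sketch below shows that standard calculation; the slope, intercept, and Ct values are hypothetical placeholders, not data from any particular HAB assay:

def copies_from_ct(ct, slope=-3.32, intercept=38.0):
    """Convert a qPCR threshold cycle to starting copy number.

    Standard-curve form: Ct = slope * log10(copies) + intercept.
    A slope of -3.32 corresponds to 100% amplification efficiency;
    all numbers here are illustrative placeholders.
    """
    return 10 ** ((ct - intercept) / slope)

def efficiency(slope=-3.32):
    """Amplification efficiency implied by the standard-curve slope."""
    return 10 ** (-1.0 / slope) - 1.0

for ct in (20.0, 25.0, 30.0):
    print(f"Ct {ct:.0f} -> ~{copies_from_ct(ct):,.0f} copies")
print(f"assay efficiency: {efficiency():.0%}")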
<urn:uuid:cbf51652-c271-46e2-88fa-558afe5d8b27>
2.578125
117
Knowledge Article
Science & Tech.
14.720855
95,483,109
Scientists at the U.S. Department of Energy's Ames Laboratory are being credited with creating the first intermetallic double salt with platinum. Materials researchers Anja-Verena Mudring and Volodymyr Smetana were the first to create and accurately characterize the compound. Cesium platinide hydride, or 4Cs2Pt·CsH, forms a translucent ruby red crystal and can exist only in an inert environment similar to conditions that exist in outer space. It's a new member of a rare family of compounds in which a metal forms a truly negatively charged ion.

"It's a compound that as a researcher you have trouble envisioning can even exist, but once you do have it and can analyze it, it's nothing like what you expect," said Mudring. "Instead of creating a gray, shiny alloy, as typically observed for many hydrogen storage materials, by reacting the metals cesium and platinum with hydrogen, these red crystals form. They are really quite beautiful."

This intriguing new compound was initially extracted from a cesium melt. The compound is highly unstable, with the platinum returning to its elemental state if it is exposed to oxygen. Since it was first detected, it took a long time to understand its true nature and prove the composition. Single-crystal studies combined with powder X-ray diffraction, solid-state nuclear magnetic resonance, and deep theoretical investigations allowed researchers to prove its existence. Its unusual structure and properties, so different from typical intermetallic hydrides, are explained by the strong influence of relativistic effects on both cesium and platinum.

"It's unique. It's the first example we have of a salt with such strongly negatively charged metal ions. Moreover, you mix an alloy with a salt and get another non-conducting salt," said Mudring. "This allows for some deep insight into the nature of chemical bonding -- and, as Goethe wrote, ultimately what holds the world and its compounds together in its inmost form."

The research is further discussed in a paper published in, and featured on the cover of, the current issue of Angewandte Chemie: "Cesium Platinide Hydride 4Cs2Pt·CsH: An Intermetallic Double Salt Featuring Metal Anions," by Volodymyr Smetana and Anja-Verena Mudring.

Ames Laboratory is a U.S. Department of Energy Office of Science national laboratory operated by Iowa State University. Ames Laboratory creates innovative materials, technologies and energy solutions. We use our expertise, unique capabilities and interdisciplinary collaborations to solve global problems. DOE's Office of Science is the single largest supporter of basic research in the physical sciences in the United States, and is working to address some of the most pressing challenges of our time. For more information, please visit science.energy.gov.

Laura Millsaps | EurekAlert!
<urn:uuid:928d3691-17ba-42d6-958f-72955f8cd59b>
3.390625
1,217
Content Listing
Science & Tech.
36.521794
95,483,123