Dataset schema: text (string, lengths 174 to 655k); id (string, length 47); score (float64, 2.52 to 5.25); tokens (int64, 39 to 148k); format (string, 24 classes); topic (string, 2 classes); fr_ease (float64, -483.68 to 157); __index__ (int64, 0 to 1.48M).
From the groundbreaking partnership of W. H. Freeman and Scientific American comes this one-of-a-kind introduction to the science of biology and its impact on the way we live. In Biology for a Changing World, two experienced educators and a science journalist explore the core ideas of biology through a series of chapters written and illustrated in the style of a Scientific American article. Chapters don't just feature compelling stories of real people; each chapter is a newsworthy story that serves as a context for covering the standard curriculum for the non-majors biology course. Updated throughout, the new edition offers new stories, additional physiology chapters, a new Electronic Teachers' Edition, and new pedagogy. Using HPC for Computational Fluid Dynamics: A Guide to High Performance Computing for CFD Engineers offers one of the first self-contained guides on the use of high performance computing for computational work in fluid dynamics. Beginning with an introduction to HPC, including its history and basic terminology, the book moves on to consider how modern supercomputers can be used to solve common CFD challenges, including the resolution of high density grids and dealing with the large file sizes generated when using commercial codes. Written to help early career engineers and post-graduate students compete in the fast-paced computational field where knowledge of CFD alone is no longer sufficient, the text provides a one-stop resource for all the technical information readers will need for successful HPC computation.
<urn:uuid:2dab9da2-026c-456b-94cd-72374fc31682>
2.8125
308
Product Page
Science & Tech.
21.798694
95,564,398
Although current molecular clock methods offer greater flexibility in modelling evolutionary events, calibration of the clock with dates from the fossil record is still problematic for many groups. Here we implement several new approaches in molecular dating to estimate the evolutionary ages of Lacertidae, an Old World family of lizards with a poor fossil record and uncertain phylogeny. Kimura's neutral theory of molecular evolution provided an explanation of why macromolecules might be evolving in a clock-like fashion; that is, differences between sequences would accumulate in a linear fashion. In addition, they suggested that this uniform rate for a specific protein would be approximately constant, not just over evolutionary time, but also across different lineages or taxonomic groups. Our results emphasize the sensitivity of molecular divergence dates to fossil calibrations, and support the use of combined molecular data sets and multiple, well-spaced dates from the fossil record as minimum node constraints. The bioinformatics program used here, Tree Time, is publicly available, and we recommend its use for molecular dating of taxa faced with similar challenges.
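The clock-like logic above lends itself to a simple worked example. Under a strict molecular clock, pairwise differences grow linearly with time along both diverging lineages, so a divergence time can be read off as the genetic distance divided by twice the per-lineage rate. The sketch below is a minimal illustration of that relation only; the numbers are invented and it is not the calibrated, relaxed-clock analysis the study describes.

```python
# Minimal strict-clock illustration: T = d / (2 * r), where d is the pairwise
# genetic distance (substitutions per site) and r is the per-lineage
# substitution rate. Values below are invented for illustration.
def strict_clock_divergence_time(pairwise_distance, rate_per_lineage_per_myr):
    """Divergence time (in million years) implied by a strict molecular clock."""
    return pairwise_distance / (2.0 * rate_per_lineage_per_myr)

# Example: 0.12 substitutions/site between two species at a clock rate of
# 0.004 substitutions/site/Myr per lineage -> 15 million years
print(strict_clock_divergence_time(0.12, 0.004))
```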
<urn:uuid:2dd51664-5033-44fc-9182-2f00bb41589e>
2.546875
298
Academic Writing
Science & Tech.
13.909164
95,564,418
One-million-year-old skull found recently in Ethiopia. © Nature/ Bill Atlanta This "terrific find" seems to complicate the story of human evolution. © D.L. Bill/ Bill Atlanta Ethiopian fossil suggests early humans were one big family. A one-million-year-old skull unearthed in Ethiopia hints that our long-extinct cousins Homo erectus were a varied and widespread bunch, much like today's humans. The find may undermine previous claims that H. erectus was in fact made up of two different species. Homo erectus, which means 'upright man', appeared about 1.8 million years ago. Because of its posture and large brain, it is regarded as the first fully human group. H. erectus left Africa and spread throughout Eurasia from eastern China, possibly reaching as far as southern England. Bony-browed and thick-jawed, H. erectus wielded primitive stone tools and may have been the first creature to make and use fire. Since the 1980s, however, some scientists have suggested that 1.7-million-year-old H. erectus fossils from Africa and central Asia are so different to later 700,000-year-old examples that they belong to a different species, Homo ergaster. The latest find could turn that theory on its head. The fossil is in remarkably good shape considering it is a million years old, says Berhane Asfaw of the University of Addis Ababa in Ethiopia, one of the team that found the skull near the village of Bouri, 230 km northeast of Addis Ababa, in 1997. "It's a complete skull cap with all the important features present," Asfaw says. The shape of the skull aligns it firmly with the recent H. erectus, but it shares some characteristics with older ones, says Asfaw. Its age also places it right at the point where H. erectus and H. ergaster might have split. "Our fossil clearly links Asian and African forms of H. erectus," says Asfaw. Unless something else turns up, the find strongly suggests that H. ergaster is a misnomer, Asfaw believes. Alan Walker, who studies human evolution at Pennsylvania State University in University Park, agrees. "It is arbitrary to break up the lineage into an early ergaster and later erectus," he says. But Bernard Wood of George Washington University in Washington, DC, who first proposed H. ergaster as a distinct group, is holding on to his idea. "It's a terrific find," he says, and certainly relevant to H. erectus' history. But he suspects the new find bears too little resemblance to H. ergaster to rule them out as a separate group. Even if the skull does unify H. erectus as a group, it doesn't simplify the picture of their history. Finding a fossil in Ethiopia that looks like east Asian H. erectus suggests that anatomical features, such as skull shape, might have varied independently of location. Previously, the geographical separation of different forms of H. erectus fossils was thought to explain why they look the way they do. TOM CLARKE | © Nature News Service
<urn:uuid:ec6d6d5e-ee27-4907-b31b-aa9bfa440363>
3.59375
1,284
Content Listing
Science & Tech.
46.651845
95,564,444
This is a free and open-source script written in Python to estimate the 'chemical' age or date[1] in monazites analyzed in electron microprobes. For individual measurements it uses the Th-U-total Pb equation of Williams et al. (2007), which relates the age (t, in years) to the concentrations of Th, U, and the total radiogenic Pb in parts per million. λ232, λ238, and λ235 are the decay constants for Th232 (4.95E-11/year), U238 (1.55E-10/year), and U235 (9.85E-10/year), respectively. The script solves for the age iteratively, entering age guesses with the known concentrations of U and Th until the calculated Pb value matches the measured Pb with an error below 0.1. It uses a bisection search algorithm and returns the age in million years. Since version 1.1, it also adds an experimental implementation of the CHIME method (Suzuki and Adachi 1991)[2].

[1] Since individual ages may or may not have geological significance, Williams et al. (2006) refer to these as "dates" instead of "ages". They use the term "age" for a result (a date or mean of dates) that is interpreted to have geological significance.

[2] This method is only useful when monazites are cogenetic, Th-rich, and show a range of Th contents instead of similar values.

The script requires Python 2.7.x/3.x. Also, from version 1.1 onwards it requires the NumPy, SciPy, and Matplotlib scientific packages. See an example here for installing Python in different operating systems.

Once you open and run the script (Fig. 1), you can estimate individual ages by writing in the shell/console:

>>> find_chemage(64586, 2519, 1626)

where the three inputs separated by commas within the parentheses are the concentrations of Th, U and Pb, respectively. Press the Enter key and that's it. See an example below. At left, running the script in IDLE (Python's default integrated development environment). At right, the Python shell window showing the results (in blue) after calling the Python function (in black).

To estimate ages in a data set (i.e. arrays) use:

>>> find_chemage_array(Th, U, Pb)

where Th, U and Pb are the arrays of data (Python lists or NumPy arrays) that contain the values of Th, U and Pb in ppm.

Finally, to use the CHIME method use:

>>> CHIME(Th, U, Pb)

where Th, U and Pb are the values of Th, U and Pb in ppm in the form of Python lists or NumPy arrays. This method is currently in alpha version and returns the age, the slope of the isochron, and a plot with the isochron and the data, but not the error of the estimate. A detailed tutorial explaining how to use the script with tabular-like data will be released in the future.

Suzuki, K. and Adachi, M., 1991. Precambrian provenance and Silurian metamorphism of the Tsubonosawa paragneiss in the South Kitakami terrane, Northeast Japan, revealed by the chemical Th-U-total Pb isochron ages of monazite, zircon and xenotime. Geochem. J. 25, 357-376. doi:10.2343/geochemj.25.357

Williams, M.L., Jercinovic, M.J., Hetherington, C.J., 2007. Microprobe Monazite Geochronology: Understanding Geologic Processes by Integrating Composition and Chronology. Annu. Rev. Earth Planet. Sci. 35, 137-175. doi:10.1146/annurev.earth.35.031306.140228

Williams, M.L., Jercinovic, M.J., Goncalves, P., Mahan, K., 2006. Format and philosophy for collecting, compiling, and reporting microprobe monazite ages. Chem. Geol. 225, 1-15. doi:10.1016/j.chemgeo.2005.07.024

The chemical age script is free software available under the Apache License, Version 2.0 (the "License").
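For readers who want to see the iteration spelled out, below is a minimal sketch of the kind of calculation described above, assuming the standard Th-U-total Pb age relation as given in Williams et al. (2007). The function names, tolerance handling, and bracketing choices are illustrative only and are not the script's actual find_chemage implementation.

```python
# Minimal sketch of a bisection solution for the monazite "chemical age".
# Assumes the standard Th-U-total Pb relation (see Williams et al. 2007);
# this is NOT the actual code of the script documented above.
import numpy as np

L232 = 4.95e-11   # decay constant of 232Th (1/year)
L238 = 1.55e-10   # decay constant of 238U (1/year)
L235 = 9.85e-10   # decay constant of 235U (1/year)

def predicted_pb(th_ppm, u_ppm, t_years):
    """Radiogenic Pb (ppm) accumulated in t_years from the given Th and U (ppm)."""
    u238 = u_ppm * 0.9928    # approximate natural abundance of 238U
    u235 = u_ppm * 0.0072    # approximate natural abundance of 235U
    pb208 = (th_ppm / 232.0) * (np.exp(L232 * t_years) - 1.0) * 208.0
    pb206 = (u238 / 238.0) * (np.exp(L238 * t_years) - 1.0) * 206.0
    pb207 = (u235 / 235.0) * (np.exp(L235 * t_years) - 1.0) * 207.0
    return pb208 + pb206 + pb207

def chemical_age_sketch(th_ppm, u_ppm, pb_ppm, tol=0.1, max_iter=200):
    """Bisection search for the age (in Ma) at which predicted Pb matches measured Pb."""
    lo, hi = 0.0, 4.6e9      # bracket: zero to roughly the age of the Earth (years)
    for _ in range(max_iter):
        mid = 0.5 * (lo + hi)
        diff = predicted_pb(th_ppm, u_ppm, mid) - pb_ppm
        if abs(diff) < tol:
            break
        if diff < 0.0:
            lo = mid         # predicted Pb too low: the age must be older
        else:
            hi = mid         # predicted Pb too high: the age must be younger
    return 0.5 * (lo + hi) / 1e6   # report in million years

# Example with the concentrations quoted above (Th, U, Pb in ppm);
# this sketch returns an age of roughly 500 (million years) for these inputs.
print(chemical_age_sketch(64586, 2519, 1626))
```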
<urn:uuid:cd32f3fe-63f5-458a-83d1-dcb9635282a8>
3.078125
964
Documentation
Science & Tech.
69.171857
95,564,470
Switzerland coordinates an extensive network of 600 seismographs stretching from Perpignan to Prague. The data obtained will enable better estimates of earthquake risk in Alpine regions. Buried in a meadow, hidden in a barn and anchored at the bottom of the Mediterranean: 600 sensors placed on and around the Alps constitute the largest academic seismographic network in the world. The AlpArray project will enable better understanding of the birth of the Alps as well as homogeneous seismic hazard maps of the Alpine regions. Comprising 36 institutions from 11 countries, the project is coordinated by scientists at ETH Zurich and the University of Lausanne and is supported by the Swiss National Science Foundation (SNSF). "We use extremely sensitive stations", explains György Hetényi, SNSF Professor at the University of Lausanne and first author on the publication detailing the implementation of the network.(*) "The stations can detect a mild earthquake in Japan, as well as thousands of seismic events that occur each year in Switzerland, 99% of which the population is unaware of." The primary aim of the project is to better understand the structure and composition of the lithosphere (up to a hundred kilometres under the Alps) as well as the earth's upper mantle (up to 660 kilometres). It is at these depths that the traces of ancient ocean floors which are tens of millions of years old can be found. Tectonic movements continue at the surface and produce present-day earthquakes in Alpine regions, explains Hetényi. The collected data make it possible to compare and standardise the catalogues of events maintained by European countries, and thus to refine probability estimates for earthquakes. Two thousand metres under the sea Half of the network consists of existing stationary seismographs. The other half comprises mobile sensors, distributed during the two years of the project and placed both underground and in barns in high mountain pastures. "Convincing our partners to make so many stations available at the same time was not easy, but it's the only way to create this network and still keep costs under control. Only four countries had to buy new sensors." Launched by Switzerland, AlpArray is managed by Edi Kissling and Irene Molinari of ETH Zurich, John Clinton of the Swiss Seismological Service and György Hetényi of the University of Lausanne. The Swiss part of the project is supported by a Sinergia grant from the SNSF. The sensors were placed in a hexagonal network, analogous to the cellular structure of a beehive. "It was the most efficient way to achieve a dense geometry considering the fixed stations", explains Hetényi. "No part of the studied region is more than 30 kilometres away from a sensor." AlpArray extends more than 200 kilometres around the Alps, from the Pyrenees to Hungary and from Frankfurt to Corsica. Thirty sensors were installed at the bottom of the Mediterranean Sea. "It was only after fishing them back out last February that we got confirmation that they had worked properly, because the water column above them prevents wireless transmission", says Hetényi. The deepest station is 2771 metres under the sea; the highest is at an altitude of 3005 metres. An "ultrasound" of the Alps Mapping the Alpine structure is akin to doing an ultrasound: the sensors record the echo of seismic waves reflecting off the deep layers of the Earth. 
Comparing the arrival times of the waves at different sensors enables the researchers to triangulate the position of the layer as well as its composition, since the latter affects the propagation speed of the waves. The recorded shocks come from small seismic events in Europe and moderate earthquakes all over the Earth. The network can even use ambient noise, such as from the swell of the sea, to obtain information about geological structures near the surface, down to a depth of a few tens of kilometres. The AlpArray network has been fully operational since July 2017. Initial results are expected in 2019. The stations of the AlpArray Seismic Network field experiment are collaboratively operated by the following institutions (alphabetical order): Czech Academy of Sciences, Deutsches GeoForschungsZentrum, Freie Universität Berlin, Geozentrum Hannover, Goethe University Frankfurt, Helmholtz Centre for Ocean Research Kiel, Hungarian Academy of Sciences, Istituto Nazionale di Geofisica e Vulcanologia, Istituto Nazionale Di Oceanografia E Di Geofisica Sperimentale, Karlsruhe Institute of Technology, Kövesligethy Radó Seismological Observatory, Ludwig-Maximilians-Universität München, Observatoire de la Côte d'Azur, Republic Hydrometeorological Service of Republika Srpska, Ruhr-University Bochum, Slovenian Environment Agency, Swiss Seismological Service at ETH Zurich, Università degli Studi di Genova, Universität Kiel, Université de Strasbourg, Université Paris Diderot, University Grenoble Alpes, University of Leipzig, University of Potsdam, University of Vienna, University of Zagreb, Westfälische Wilhelms-Universität Münster, Zentralanstalt für Meteorologie und Geodynamik. In addition to the SNSF, the research was financed by the following institutions (alphabetical order by country): FWF (Austria); HRZZ (Croatia); Czech Academy of Sciences and CzechGeo/EPOS (Czech Republic); ADEME, ANR, Labex OSUG@2020 and RESIF (France); DFG (Germany); Development and Innovation Fund and Hungarian Academy of Sciences (Hungary); INGV (Italy). G. Hetényi, I. Molinari, J. Clinton et al.: The AlpArray Seismic Network: a large-scale European experiment to image the Alpine orogeny. Surveys in Geophysics (2018) doi: 10.1007/s10712-018-9472-4 (Open Access) Prof. György Hetényi, Faculty of Geosciences and Environment, University of Lausanne, Phone: + 41 21 692 43 21 https://link.springer.com/article/10.1007%2Fs10712-018-9472-4 'The AlpArray Seismic Network: a large-scale European experiment to image the Alpine orogeny.' https://flic.kr/s/aHsmaJ27B1 'Pictures for editorial use' http://p3.snf.ch/project-157627 'SNSF project OROG3NY' http://p3.snf.ch/Project-154434 'SNSF project SWISS-AlpArray' Medien - Abteilung Kommunikation | idw - Informationsdienst Wissenschaft
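As a back-of-the-envelope illustration of the "ultrasound" idea described in this article, the simplest case is a wave reflected from a horizontal interface directly beneath a station: the two-way travel time and the average wave speed give the interface depth. The velocity and travel time below are invented for illustration; the actual AlpArray analyses rely on far more elaborate tomography, receiver-function, and ambient-noise methods.

```python
# Toy version of the seismic "echo": depth of a horizontal reflector from the
# two-way travel time of a vertically reflected wave. Illustrative values only.
def reflector_depth_km(two_way_time_s, average_velocity_km_s):
    """Depth (km) of a reflector directly beneath a station."""
    return average_velocity_km_s * two_way_time_s / 2.0

# Example: a two-way time of 12 s at an average speed of 6.5 km/s
# corresponds to a reflector at about 39 km depth, broadly Moho-like under the Alps.
print(reflector_depth_km(12.0, 6.5))
```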
<urn:uuid:bbc7d76e-8ab8-4500-998a-94f071a762d4>
3.515625
2,123
Knowledge Article
Science & Tech.
35.350503
95,564,516
http://www.news-press.com/story/news/2014/05/06/climate-change-report-… A federal report released Tuesday shows much of South Florida could experience dramatic, damaging effects of climate change and rising sea levels within just a few decades. The Third National Climate Assessment, compiled by more than 300 national experts over the last three years, says climate change threatens Florida's tourism industry, water supply and public health. Sea levels will rise between 1 and 4 feet by 2100. Part of President Obama's Climate Action Plan, the report says Florida is extremely vulnerable. Floods may become more frequent, even in areas where overall precipitation is expected to decline. Few states as vulnerable to climate change as Florida "We're in climate change, it's already here," said Jim Beever with the Southwest Florida Regional Planning Council. "It's been going on my whole life, even the life of my parents. This didn't just start happening in the 1980s. It started when people started removing trees from Africa and turning the area into deserts. All of that causes climate change." Beever said Fort Myers residents can expect shifts in rainfall patterns, crop losses and flooding in the downtown district along the south bank of the Caloosahatchee River. Some cities and counties have been planning for climate change for years. Beever said people will be able to live in the ecologically altered Southwest Florida, but that Fort Myers may be a city of gondolas and front-porch fishing. Vulnerability to Sea Level Rise (Photo: globalchange.gov) "In 2100, we'll still be here," Beever said. "We'll be here in 2200 and 2300 and 2400, but it will look different. You're going to have higher water levels than we have today. Certainly, the barrier islands will change shape when they can." Report: Global warming disrupting Americans' lives Locally, sea level rise is expected to cause a myriad of problems: increased tropical storm strength and frequency, infrastructure failures and shifts in rainfall patterns and growing seasons, according to the federal report. Beever said climate change has been evident in Southwest Florida for more than half a century. "The saltmarshes have already moved. They've moved about the length of a football field since the 1950s," Beever said. "Habitats will migrate. Where habitat is blocked by sea walls, the habitat will be gone. There's going to be major road problems. The approaches to bridges will go underwater first. The Sanibel causeway is built to accommodate some sea level rise, but the approaches will be underwater." Beever said Punta Gorda is one of the most prepared cities in Florida. Community leaders there began to plan for climate change and sea level rise nearly a decade ago. Heeding advice from Beever and others, the city relocated the construction site for its new public works office, moving the location inland, which may save the city millions of dollars down the road. "They're a progressive community," Beever said. "They want to build better, not just the same." Climate change also has the attention of the National Park Service, which oversees Everglades National Park and Big Cypress National Preserve, about 2 million acres of South Florida wild lands. Linda Friar, with Everglades National Park, said managers there implemented a climate change planning practice that encourages scientists and others working at the park to plan for a different Florida in the future.
Any new investments in infrastructure must include a sea level risk assessment, she said. "Ecologically, concerns over saltwater intrusion may impact water supply in the park," Friar said. "The brackish water is home to a variety of creatures, and that may move further inland. If it happens slowly, a resilient, healthy system can manage that change over time as they have for centuries. It's the sudden change that could be more challenging and potentially devastating." The report wasn't news to everyone. "The climate report is a synopsis of reports that have been out a while, and other than the U.S., those reports seem to be taken seriously," said Wayne Daltry, former planner for the state and Lee County. "The biggest issue facing us for short term isn't sea level rise, it is weather change/rainfall patterns that cause drought and flooding in ways our tailored crops cannot adapt to quickly enough." U.S. Sen. Bill Nelson, D-Orlando, says climate-change deniers have successfully cast the phenomenon as a subject for debate as opposed to scientific fact. He says a political solution is elusive in a divided Congress and that drastic steps by the government aren't possible until the public is "agitated" enough to demand action. Sen. Marco Rubio, a Republican from West Miami, said he too is worried about the impact of severe weather on his home state. But he said hurricanes have been around for "hundreds of years" and isn't convinced severe weather is the result of man-made conditions, as the vast majority of scientists conclude. He also warns that the Obama administration may try to use the new report to boost a political agenda that would "devastate" the American economy. "Even if scientists concluded that, in fact, our modern way of living in the 21st century is the only cause of changes to our climate, I would ask what policy changes are they recommending that would actually reverse that, when the largest polluters in the world -- China, India and underdeveloped countries -- have no interest in making any changes whatsoever," he said Tuesday. "So why should we eviscerate our own economy if it would have no impact whatsoever on these things that they're raising a concern about?" Some findings from the Third U.S. National Climate Assessment: • Tourism: "Climate change impacts on tourism and recreation will vary significantly by region. For instance, some of Florida's top tourist attractions, including the Everglades and Florida Keys, are threatened by sea level rise, with estimated revenue losses of $9 billion by 2025 and $40 billion by the 2050s." (NCA, Ch. 14: Rural Communities) • Health: "Atlanta, Miami, New Orleans, and Tampa have already had increases in the number of days with temperatures exceeding 95ºF, during which the number of deaths is above average. Higher temperatures also contribute to the formation of harmful air pollutants and allergens. Ground-level ozone is projected to increase in the 19 largest urban areas of the Southeast, leading to an increase in deaths." (NCA, Ch. 17: Southeast) • Ecosystems: "Coral reefs in the Southeast and Caribbean, as well as worldwide, are susceptible to climate change, especially warming waters and ocean acidification, whose impacts are exacerbated when coupled with other stressors, including disease, runoff, over-exploitation, and invasive species." (NCA, Ch. 17: Southeast) Examples of Efforts Underway in Florida • Mechanisms being used by local governments to prepare for climate change include: land-use planning; provisions to protect infrastructure and ecosystems; regulations related to the design and construction of buildings, roads, and bridges; and preparation for emergency response and recovery. • Investing in Clean Energy: Since President Obama took office, the U.S. increased solar-electricity generation by more than tenfold and tripled electricity production from wind power. Since 2009, the Administration has supported tens of thousands of renewable energy projects throughout the country, including more than 1,378 in Florida, generating enough energy to power more than 17,000 homes. • President Barack Obama established the toughest fuel economy standards for passenger vehicles in U.S. history. These standards will double the fuel efficiency of our cars and trucks by 2025, saving the average driver more than $8,000 over the lifetime of a 2025 vehicle and cutting carbon pollution. The News-Press Washington Correspondent Ledyard King contributed to this report.
<urn:uuid:053d1be5-1169-4c98-8c4c-3476e9e66138>
2.9375
1,657
News Article
Science & Tech.
43.31708
95,564,539
By: George R. McGhee. 316 pages, 76 illus, figs. Seeks to sketch the range of forms that biological entities could take, with the ultimate goal of discovering why certain forms exist but others do not. Presents a complete overview of the field, its advancements in recent years, and the challenges ahead. "In his excellent book Theoretical Morphology: The Concept and Its Applications, George McGhee provides an admirable introduction to the complex theoretical landscape surrounding the exploration of possible biological form...an enthusiastic and scholarly summary of an exciting new scientific discipline." -- James MacLaurin, University of Otago, Biology and Philosophy. Contents: Preface; What is Theoretical Morphology?; The Concept of the Theoretical Morphospace; Twists and Twigs: Theoretical Morphospaces of Branching Growth Systems; Spirals and Shells I: Theoretical Morphospaces of Univalved Accretionary Growth Systems; Spirals and Shells II: Theoretical Morphospaces of Bivalved Accretionary Growth Systems; Step by Step: Theoretical Morphospaces of Discrete Growth Systems; The Time Dimension: Evolution and Theoretical Morphospaces; Theoretical Models of Morphogenesis: An Example; Theoretical Models of Accretionary Growth Systems; Theoretical Models of Other Aspects of Morphogenesis in Nature; The Future of Theoretical Morphology; Glossary. George R. McGhee, Jr. is professor of geological sciences at Rutgers University. He is the author of The Late Devonian Mass Extinction: The Frasnian/Famennian Crisis (Columbia). He has held research positions at the University of Tübingen, the American Museum of Natural History, and the Field Museum of Natural History.
<urn:uuid:ec3aeb25-014f-436a-bbea-61c0fc4905cc>
2.703125
433
Product Page
Science & Tech.
14.433929
95,564,554
Could we directly engineer the climate and refreeze the poles? The answer is probably yes, and it could be a cheap thing to achieve – maybe costing only a few billion dollars a year. But doing this – or even just talking about it – is controversial. Some have suggested there is a good business case to be made. We could carefully engineer the climate for a few decades while we work out how to reduce our dependency on carbon, and by taking our time we can protect the global economy and avoid financial crises. I don’t believe this argument for a minute, but you can see it’s a tempting prospect. Reflecting the sun One option might be to reflect some of the sun’s energy back into space. This is known as Solar Radiation Management (SRM), and it is the most viable climate engineering technology explored so far. For instance we could spray sea water up out of the oceans to seed clouds and create more “whiteness”, which we know is a good way to reflect the heat of the sun. Others have proposed schemes to put mirrors in space, carefully located at the point between the sun and the Earth where gravity forces balance. These mirrors could reflect, say, 2% of the sun’s rays harmlessly into space, but the price tag puts them out of reach. Perhaps a more immediate prospect for cooling the planet is to spray tiny particles high up into the stratosphere, at around 20km altitude – this is twice as high as normal commercial planes fly. To maximise reflectivity these particles would need to be around 0.5 micrometres across, like the finest of dust. We know from large volcanic eruptions that particles injected at high altitude cool the planet. The 1991 eruption of Mount Pinatubo in the Philippines is the best recent example. It is estimated that more than 10m tonnes of sulphur dioxide were propelled into the high atmosphere and it quickly formed tiny droplets of sulphuric acid (yes, the same stuff found in acid rain) which reflected sunlight and caused global cooling. For about a year after Pinatubo the Earth cooled by around 0.4℃ and then temperatures reverted to normal. I was involved recently in the SPICE project (Stratospheric Particle Injection for Climate Engineering) and we looked at the possibility of injecting all sorts of particles, including titanium dioxide, which is also used as the pigment in most paints and is the active ingredient in sun lotion. The technology to deliver these particles is crazy – we looked at pumping them in a slurry up to 20km into the air using a giant hose suspended by a huge helium balloon. A small-scale experiment was cancelled because even it proved too controversial, too hot. Imagine if we demonstrate that this technology can work. Politicians could then claim there was a technical “fix” for climate change so there would be no need to cut emissions after all. But this isn’t a ‘quick fix’ There are so many problems with climate engineering. The main one is that we have only one planet to work with (we have no Planet B) and if we screw this one up then what do we do? Say “sorry” I guess. But we’re already screwing it up by burning more than 10 billion tonnes of fossil fuels a year. We have to stop this carbon madness immediately. Engineering the climate by reflecting sunlight doesn’t prevent more CO2 being pumped into the atmosphere, some of which dissolves in the oceans causing acidification which is a problem for delicate marine ecosystems. 
There is therefore a strong imperative to remove the 600 billion tonnes of fossil carbon that we’ve already puffed into the air in just 250 years. This is known as Carbon Dioxide Removal (CDR). We must work fast to cut our carbon emissions and at the same time we should explore as many climate engineering options as possible, simultaneously. However while reflecting sunlight may be an idea that buys us some time it is absolutely not a solution for climate change and it is still vital that we cut our emissions – we can’t use climate engineering as a get-out clause. The opinions expressed in this article are those of the individual author(s) and do not represent the views of the University of Cambridge.
<urn:uuid:9c318703-782d-4a4a-91b7-f8d11a711a3e>
3.21875
897
Personal Blog
Science & Tech.
46.364726
95,564,564
The latitudinal location of the sunspot zones in each hemisphere is determined by calculating the centroid position of sunspot areas for each solar rotation from May 1874 to June 2012. When these centroid positions are plotted and analyzed as functions of time from each sunspot cycle maximum, there appear to be systematic differences in the positions and equatorward drift rates as a function of sunspot cycle amplitude. If, instead, these centroid positions are plotted and analyzed as functions of time from each sunspot cycle minimum, then most of the differences in the positions and equatorward drift rates disappear. The differences that remain disappear entirely if curve fitting is used to determine the starting times (which vary by as much as 8 months from the times of minima). The sunspot zone latitudes and equatorward drift measured relative to this starting time follow a standard path for all cycles, with no dependence upon cycle strength or hemispheric dominance. Although Cycle 23 was peculiar in its length and the strength of the polar fields it produced, it too shows no significant variation from this standard. This standard law, and the lack of variation with sunspot cycle characteristics, is consistent with Dynamo Wave mechanisms but not consistent with current Flux Transport Dynamo models for the equatorward drift of the sunspot zones.
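The centroid calculation that underpins this analysis is straightforward to sketch: for each solar rotation and hemisphere, the sunspot-zone latitude is the area-weighted mean latitude of the reported spot groups. The snippet below is a minimal illustration only; the input format and the example numbers are invented, whereas the study itself uses the full sunspot-area record for 1874-2012.

```python
# Minimal sketch of the area-weighted latitude centroid described above.
# Input format and example values are illustrative assumptions.
import numpy as np

def centroid_latitude(latitudes_deg, areas):
    """Area-weighted mean latitude of the sunspots observed in one rotation/hemisphere."""
    latitudes_deg = np.asarray(latitudes_deg, dtype=float)
    areas = np.asarray(areas, dtype=float)
    if areas.sum() == 0:
        return np.nan                      # no spots reported in this rotation/hemisphere
    return np.sum(latitudes_deg * areas) / np.sum(areas)

# Example: three spot groups in the northern hemisphere for one rotation -> ~18.9 degrees
print(centroid_latitude([22.0, 18.5, 15.0], [120.0, 300.0, 60.0]))
```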
<urn:uuid:1aba4045-bf9b-42f4-8456-21f1c1d29861>
3.671875
256
Academic Writing
Science & Tech.
22.544194
95,564,577
The use of ion irradiation for converting superconducting thin-film NbN into niobium oxide Nb2O5. It is shown experimentally that the use of ion irradiation allows one to convert superconducting thin-film niobium nitride into dielectric niobium oxide in a controllable manner. The conversion of NbN into Nb2O5 throughout the entire thickness of the film is demonstrated via transmission electron microscopy and layer-by-layer XPS analysis. This conversion is followed by a corresponding increase in the film thickness with no signs of sputtering and thus provides the possibility of forming dielectric regions of the required sizes and shapes in the process of fabrication of various functional cryoelectronic elements. Keywords: Niobium Tunnel Junction; Niobium Oxide; Superconductor Tunnel Junction; Niobium Nitride
<urn:uuid:d0141d5c-a983-454d-be3d-8214c0a7ce2d>
2.640625
225
Truncated
Science & Tech.
37.426154
95,564,579
After scoring a Supreme Court victory this spring, the Environmental Protection Agency can move forward with its strategy to cut air pollution from coal-fired power plants in several states — and new research suggests the impact could be lifesaving. Scientists assessed the effects of one state's prescient restrictions on plant emissions in a report in the ACS journal Environmental Science & Technology. They estimated that the state's legislation prevented about 1,700 premature deaths in 2012. Jacqueline MacDonald Gibson and Ya-Ru Li explain that the U.S. has been working for years to lower levels of particulate matter, a form of air pollution that can cause serious health problems when people breathe it in. Certain kinds of particulate matter form mainly from power plant emissions. More than 10 years ago, correctly anticipating the federal government would eventually set tighter restrictions on power plants, North Carolina had approved more stringent goals than neighboring states. It required 14 major coal-fired plants within its borders to reduce emissions of nitrogen oxides and sulfur dioxide by 60 percent and 72 percent, respectively, over a 10-year period. Gibson's team wanted to see what effect the measures were having. They found that the policy had successfully reduced emissions in North Carolina more than other southeastern states. Sulfur dioxide levels, for example, dropped an average of 20 percent a year from 2002 to 2012. Across all southeastern states, they dropped 13.6 percent per year. As a result of the improved air in North Carolina, the scientists used a health impact model to estimate that about 1,700 lives were saved in 2012 alone. The American Chemical Society is a nonprofit organization chartered by the U.S. Congress. With more than 161,000 members, ACS is the world's largest scientific society and a global leader in providing access to chemistry-related research through its multiple databases, peer-reviewed journals and scientific conferences. Its main offices are in Washington, D.C., and Columbus, Ohio. To automatically receive news releases from the American Chemical Society, contact firstname.lastname@example.org. Michael Bernstein | EurekAlert!
<urn:uuid:2bd73a16-6531-4b55-bef0-c2e0fd23bbfc>
3.40625
1,021
Content Listing
Science & Tech.
40.582009
95,564,583
Biology of Polar Bryophytes and Lichens - Studies in Polar Research (Paperback). R. E. Longton (author). Paperback, 404 pages. Published: 11/01/2009. This book reviews the biology of bryophytes and lichens in the polar tundra, where these plants may form a dominant component of the vegetation. It considers adaptation to severe environments in terms of growth form, physiology and reproduction. The role of bryophytes and lichens is discussed in vegetation processes such as colonisation and succession, and in energy flow, nutrient cycling and other functional aspects of polar ecosystems, both natural and as modified by man. The range of microclimates experienced by polar cryptogams is described using an energy budget approach, and the environmental relationships of CO2 exchange, stress resistance, growth and other physiological responses are discussed against this background. Reproductive biology is also reviewed as an introduction to a consideration of population ecology, distribution patterns, dispersal potential and the origin and adaptation of polar cryptogamic floras. This book integrates the results of work in the Arctic and the Antarctic, and includes a classification of vegetation zones applicable to both polar regions. The study of plant ecology in these areas has advanced dramatically and the results synthesised here contribute to a general understanding both of polar ecosystems and of the environmental relationships of bryophytes and lichens. Publisher: Cambridge University Press. Number of pages: 404. Weight: 640 g. Dimensions: 244 x 170 x 21 mm.
<urn:uuid:91977692-abb0-4f95-91a5-f7bef1ed52da>
2.703125
408
Product Page
Science & Tech.
26.108425
95,564,601
Focus: Modeling Imperfections Boosts Microscope Precision Just when you thought optical microscopes couldn't get any better, they just did. A research team has now shown that, by fitting microscope images to a mathematical model of the instrument, they can improve the precision of position measurements by between 10 and 100 times. Applied to electron microscopy, this would offer a precision of about 10⁻¹³ meters, a thousandth of the length of a chemical bond. Advances in optical microscopy have enabled "superresolution" images that show details down to the nanometer scale, better than the classical "diffraction limit," which is about a quarter of the imaging wavelength. But big improvements could be achieved even in conventional microscopy by using better methods of analyzing the images, say Brian Leahy and his colleagues from Cornell University in Ithaca, New York. The precision in any imaging method is ultimately limited by statistical noise due to random fluctuations in the sample and apparatus. This limit corresponds to the so-called Cramér-Rao bound (CRB). "Any calculation you do based on the data will always have an uncertainty of at least the CRB," says Leahy. By studying the mathematical formulation of the CRB, Leahy and his colleagues realized that current methods of image analysis do much worse than the CRB because they don't make use of all of the information in the image. For example, in fluorescence imaging using light-emitting dyes attached to a sample, the light from each dye molecule spreads out over many pixels, blurring the "ideal" image. This blurring reduces the precision of any measurement of an object's boundaries and location. To overcome this imprecision, the researchers developed a model of the image-formation process in which light interacts with the sample and with the microscope optics. This model incorporates representations of, for example, the uneven distribution of fluorescent dyes and of the uneven laser illumination. Then by fitting the model parameters to the experimental data, each source of noise in the final image can be accounted for as accurately as possible, so that no useful information goes to waste. The researchers call their method parameter extraction from reconstructed images (PERI). Applying PERI to imaging of 1.3-micrometer-diameter spheres in a water-glycerol mixture, the Cornell team measured the radius and position of the particles to within 3 nanometers, even though individual pixels were 125 nanometers across. With this information for a collection of about 1200 particles, they reconstructed the distribution of interparticle separations and, in turn, mapped out the (repulsive) interparticle force with nanometer-scale precision. What makes this technique possible now is computer power, not any great leap in understanding, says Leahy. Thanks to good computer hardware, "we're breaking with the qualitative tradition in microscopy analysis so as to describe images with a complete, bottom-up approach." He says that the technique will be useful for situations where the macroscopic properties depend sensitively on interparticle separations, such as in colloidal glasses or gels. "This is really good stuff," says condensed-matter physicist David Grier of New York University. "It solves a problem that other imaging techniques cannot solve and offers exquisite precision despite the samples' extreme complexity. 
I don't believe that any other technique offers so much information for this type of system.” However, Grier and soft matter physicist David Weitz of Harvard University think that PERI will be limited by the assumption that the sample is made exclusively of spherical objects—or at least that their shape needs to be known in advance. But Leahy doesn’t see any problem, in principle, with applying the approach to systems with more complex shapes or with other intricacies, such as dye concentrations that vary across the sample. “All of these can be modeled,” he says, although he admits that there may be limits to the complications that PERI can embrace. “Is it possible to model all the organelles and structure of a cell? I think we'll just have to wait and see.” This research is published in Physical Review X. Philip Ball is a freelance science writer in London; his latest book is Beyond Weird, a survey of interpretations of quantum mechanics.
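The core move in the article above, fitting a forward model of image formation to the raw pixel data and reading parameters such as positions off the fit, can be illustrated with a deliberately simple toy example. The sketch below is not the PERI code: it fits a single Gaussian spot on a flat background with SciPy's least-squares routine, and every parameter name and value in it is an illustrative assumption.

```python
# Toy illustration of model-based sub-pixel localization: fit a forward model
# (one blurred spot on a constant background) to noisy pixel data. This is NOT
# the PERI implementation described in the article.
import numpy as np
from scipy.optimize import least_squares

def spot_model(params, xx, yy):
    """Forward model: a Gaussian spot of amplitude amp and width sigma on background bg."""
    x0, y0, amp, sigma, bg = params
    return bg + amp * np.exp(-((xx - x0) ** 2 + (yy - y0) ** 2) / (2.0 * sigma ** 2))

# Synthetic "measurement": a spot at a known sub-pixel position plus noise
rng = np.random.default_rng(0)
yy, xx = np.mgrid[0:32, 0:32]
truth = (15.37, 16.82, 1.0, 2.5, 0.1)
image = spot_model(truth, xx, yy) + rng.normal(0.0, 0.02, xx.shape)

# Fit the model parameters to the data; the residual is (model - data), flattened
fit = least_squares(
    lambda p: (spot_model(p, xx, yy) - image).ravel(),
    x0=[16.0, 16.0, 0.8, 2.0, 0.0],
)
print(fit.x[:2])   # recovered centre, typically within a small fraction of a pixel of `truth`
```

In the full method the forward model is far richer (illumination field, point-spread function, and the position and radius of every particle, as the article describes), but the fitting principle is the same.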
<urn:uuid:46723c77-103b-47d1-b459-acce94c786c4>
3.0625
912
Truncated
Science & Tech.
29.311268
95,564,617
Melinda Daniels, associate professor of geography, and Keith Gido, associate professor of biology, are collaborating on a project that involves habitat and fish sampling on the Kansas River, which stretches across northeast Kansas. They are supported by a grant from the Kansas Department of Wildlife and Parks. The grant money comes from sales of state fishing and hunting licenses. "These dollars go back to the people of Kansas because the research provides knowledge to help manage water resources throughout the state," said Daniels, who has received other funding and recognition for her research on conservation and restoration in the Kansas River basin. Daniels is also collaborating with Craig Paukert, fisheries biologist and adjunct associate professor, on a project involving the Bowersock Dam, a hydropower dam on the Kansas River in Lawrence, Kan. In both projects Daniels explores the non-living parts of river systems, such as water and sediment movement, while Gido and Paukert focus on fish populations. They evaluate all species of fish, with the projects' focus on rapidly declining native Kansas River fish species, such as the plains minnow, the silver chub and the shoal chub. "These are fish that are found almost solely in Great Plains rivers," Daniels said. "If they drop out of Kansas, it is likely they would drop out pretty much everywhere." Daniels is especially interested in the effects of dredging, which is the process of taking sand and gravel from the river bottom and pumping it up on the riverbanks. Dredged material is used in construction industries. To help the Kansas Department of Wildlife and Parks understand how sediment removal influences the river, the researchers are looking at three active Kansas River dredge sites from Manhattan to Kansas City. "KDWP is particularly worried about taking a lot of sediment out of the river because doing so in excess starts to cause riverbeds to cut down or incise," Daniels said. "That is a problem in the lower Kansas River in the Kansas City reaches, where water intakes have been left perched so they are no longer in the water. One of the questions we are trying to answer is how much of this bed incision may be due to sand and gravel mining." To measure the effects of dredging, Daniels uses an acoustic Doppler instrument to detect the velocity of the water in the river and produce a map of the channel bottom topography. Because habitats and environments change seasonally, researchers map the river about once a month to understand how river flow and habitat change throughout the year. The team has discovered that, on average, the Kansas River is about a meter and a half deep. But during certain times of year, when dredges are active, Daniels' team has detected holes as deep as 15 meters at active dredge locations. "We're not prejudging whether dredging is a bad thing, but there is at least a temporary habitat alteration going on," she said. "We're trying to help generate information for the state agencies so that they can make decisions about how many dredges to permit, or if they should even continue permitting sand dredging." At the Bowersock Dam in Lawrence the researchers are categorizing the surrounding habitat and observing differences in fish communities both upstream and downstream from the dam. They want to identify whether the dam is a barrier to fish movement, and if a fish passage structure in the dam would help reduce endangered species. 
Their findings can influence how dams are handled throughout the state. Melinda Daniels, 785-532-0765, email@example.com. Melinda Daniels | Newswise Science News
<urn:uuid:3ae097e1-5178-460b-a091-56b9f1971616>
3.109375
1,340
Content Listing
Science & Tech.
40.569496
95,564,663
Data from Cassini's cosmic dust analyzer show the grains expelled from fissures, known as tiger stripes, are relatively small and predominantly low in salt far away from the moon. But closer to the moon's surface, Cassini found that relatively large grains rich with sodium and potassium dominate the plumes. The salt-rich particles have an "ocean-like" composition and indicate that most, if not all, of the expelled ice and water vapor comes from the evaporation of liquid salt water. The findings appear in this week's issue of the journal Nature. "There currently is no plausible way to produce a steady outflow of salt-rich grains from solid ice across all the tiger stripes other than salt water under Enceladus's icy surface," said Frank Postberg, a Cassini team scientist at the University of Heidelberg, Germany, and the lead author on the paper. When water freezes, the salt is squeezed out, leaving pure water ice behind. If the plumes emanated from ice, they should have very little salt in them. The Cassini mission discovered Enceladus' water-vapor and ice jets in 2005. In 2009, scientists working with the cosmic dust analyzer examined some sodium salts found in ice grains of Saturn's E ring, the outermost ring that gets its material primarily from Enceladean jets. But the link to subsurface salt water was not definitive. The new paper analyzes three Enceladus flybys in 2008 and 2009 with the same instrument, focusing on the composition of freshly ejected plume grains. The icy particles hit the detector target at speeds between 15,000 and 39,000 mph (23,000 and 63,000 kilometers per hour), vaporizing instantly. Electrical fields inside the cosmic dust analyzer separated the various constituents of the impact cloud. The data suggest a layer of water between the moon's rocky core and its icy mantle, possibly as deep as about 50 miles (80 kilometers) beneath the surface. As this water washes against the rocks, it dissolves salt compounds and rises through fractures in the overlying ice to form reserves nearer the surface. If the outermost layer cracks open, the decrease in pressure from these reserves to space causes a plume to shoot out. Roughly 400 pounds (200 kilograms) of water vapor is lost every second in the plumes, with smaller amounts being lost as ice grains. The team calculates the water reserves must have large evaporating surfaces, or they would freeze easily and stop the plumes. "This finding is a crucial new piece of evidence showing that environmental conditions favorable to the emergence of life can be sustained on icy bodies orbiting gas giant planets," said Nicolas Altobelli, the European Space Agency's project scientist for Cassini. Cassini's ultraviolet imaging spectrograph also recently obtained complementary results that support the presence of a subsurface ocean. A team of Cassini researchers led by Candice Hansen of the Planetary Science Institute in Tucson, Ariz., measured gas shooting out of distinct jets originating in the moon's south polar region at five to eight times the speed of sound, several times faster than previously measured. These observations of distinct jets, from a 2010 flyby, are consistent with results showing a difference in composition of ice grains close to the moon's surface and those that made it out to the E ring. That paper was published in the June 9 issue of Geophysical Research Letters. 
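Taking the quoted vapor-loss rate at face value, a rough back-of-envelope conversion (the yearly figure below is my own arithmetic, not a number from the mission team) gives

\[
200\ \mathrm{kg\,s^{-1}} \times 3.15\times10^{7}\ \mathrm{s\,yr^{-1}} \approx 6.3\times10^{9}\ \mathrm{kg\,yr^{-1}},
\]

or roughly six million tonnes of water vapor escaping from the plumes each year.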
"Without an orbiter like Cassini to fly close to Saturn and its moons -- to taste salt and feel the bombardment of ice grains -- scientists would never have known how interesting these outer solar system worlds are," said Linda Spilker, NASA's Cassini project scientist at the Jet Propulsion Laboratory in Pasadena, Calif. The Cassini-Huygens mission is a cooperative project of NASA, the European Space Agency and the Italian Space Agency. The mission is managed by JPL for NASA's Science Mission Directorate in Washington. JPL is a division of the California Institute of Technology, Pasadena. http://www.nasa.gov/cassini and http://saturn.jpl.nasa.gov Jia-Rui Cook 818-354-0850 Jet Propulsion Laboratory, Pasadena, Calif. Dwayne C. Brown 202-358-1726 NASA Headquarters, Washington Markus Bauer 011-31-71-565-6799 European Space Agency, Noordwijk, the Netherlands
<urn:uuid:2d5932d2-44c9-4fd0-89ff-c6707eae3ff4>
3.84375
921
News (Org.)
Science & Tech.
42.260713
95,564,677
Sudbury Neutrino Observatory
The Sudbury Neutrino Observatory (SNO) was a neutrino observatory located 2100 m underground in Vale's Creighton Mine in Sudbury, Ontario, Canada. The detector was designed to detect solar neutrinos through their interactions with a large tank of heavy water. The detector was turned on in May 1999, and was turned off on 28 November 2006. The SNO collaboration was active for several years after that analyzing the data taken. The underground laboratory has been enlarged into a permanent facility and now operates multiple experiments as SNOLAB. The SNO equipment itself is currently being refurbished for use in the SNO+ experiment. The first measurements of the number of solar neutrinos reaching the earth were taken in the 1960s, and all experiments prior to SNO observed a third to a half fewer neutrinos than were predicted by the Standard Solar Model. As several experiments confirmed this deficit the effect became known as the solar neutrino problem. Over several decades many ideas were put forward to try to explain the effect, one of which was the hypothesis of neutrino oscillations. All of the solar neutrino detectors prior to SNO had been sensitive primarily or exclusively to electron neutrinos and yielded little to no information on muon neutrinos and tau neutrinos. In 1984, Herb Chen of the University of California at Irvine first pointed out the advantages of using heavy water as a detector for solar neutrinos. Unlike previous detectors, using heavy water would make the detector sensitive to two reactions, one reaction sensitive to all neutrino flavours, the other reaction sensitive to only electron neutrinos. Thus, such a detector can measure neutrino oscillations directly. A location in Canada was attractive because Atomic Energy of Canada Limited, which maintains large stockpiles of heavy water to support its CANDU reactor power plants, was willing to lend the necessary amount (worth C$330,000,000 at market prices) at no cost. The Creighton Mine in Sudbury, among the deepest in the world and accordingly low in background radiation, was quickly identified as an ideal place for Chen's proposed experiment to be built, and the mine management was willing to make the location available for only incremental costs. The SNO collaboration held its first meeting in 1984. At the time it competed with TRIUMF's KAON Factory proposal for federal funding, and the wide variety of universities backing SNO quickly led to it being selected for development. The official go-ahead was given in 1990. The experiment observed the light produced by relativistic electrons in the water created by neutrino interactions. As relativistic electrons travel through a medium, they lose energy producing a cone of blue light through the Cherenkov effect, and it is this light that is directly detected. The SNO detector target consisted of 1,000 tonnes (1,102 short tons) of heavy water contained in a 6-metre-radius (20 ft) acrylic vessel. The detector cavity outside the vessel was filled with normal water to provide both buoyancy for the vessel and radiation shielding. The heavy water was viewed by approximately 9,600 photomultiplier tubes (PMTs) mounted on a geodesic sphere at a radius of about 850 centimetres (28 ft). The cavity housing the detector was the largest in the world at such a depth, requiring a variety of high-performance rock bolting techniques to prevent rock bursts.
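For a sense of why only fairly energetic electrons are visible through the Cherenkov detection principle described above, here is a rough threshold estimate; the refractive index n ≈ 1.34 is an assumed round value for water, not a figure taken from the SNO papers. Cherenkov light is emitted only when the electron's speed exceeds c/n, so the minimum total energy is

\[
E_{\min} = \gamma_{\min}\, m_e c^2 = \frac{m_e c^2}{\sqrt{1 - 1/n^2}} \approx \frac{0.511\ \mathrm{MeV}}{\sqrt{1 - 1/1.34^2}} \approx 0.77\ \mathrm{MeV},
\]

i.e. a kinetic energy of roughly 0.26 MeV, well below the energies of the solar-neutrino-induced electrons discussed below.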
The observatory is located at the end of a 1.5-kilometre-long (0.9 mi) drift, named the "SNO drift", isolating it from other mining operations. Along the drift are a number of operations and equipment rooms, all held in a clean room setting. Most of the facility is Class 3000 (fewer than 3,000 particles of 1 μm or larger per 1 ft³ of air) but the final cavity containing the detector is an even stricter Class 100.

Charged current interaction
In the charged current interaction, a neutrino converts the neutron in a deuteron to a proton. The neutrino is absorbed in the reaction and an electron is produced. Solar neutrinos have energies smaller than the mass of muons and tau leptons, so only electron neutrinos can participate in this reaction. The emitted electron carries off most of the neutrino's energy, on the order of 5–15 MeV, and is detectable. The proton which is produced does not have enough energy to be detected easily. The electrons produced in this reaction are emitted in all directions, but there is a slight tendency for them to point back in the direction from which the neutrino came.

Neutral current interaction
In the neutral current interaction, a neutrino dissociates the deuteron, breaking it into its constituent neutron and proton. The neutrino continues on with slightly less energy, and all three neutrino flavours are equally likely to participate in this interaction. Heavy water has a small cross section for neutrons, and when neutrons capture on a deuterium nucleus a gamma ray (photon) with roughly 6 MeV of energy is produced. The direction of the gamma ray is completely uncorrelated with the direction of the neutrino. Some of the neutrons wander past the acrylic vessel into the light water, and since light water has a very large cross section for neutron capture these neutrons are captured very quickly. A gamma ray with roughly 2 MeV of energy is produced in this reaction, but because this is below the detector's energy threshold (meaning it does not trigger the photomultipliers) it is not directly observable. The gamma ray collides with an electron through Compton scattering and the accelerated electron can be detected through Cherenkov radiation.

Electron elastic scattering
In the elastic scattering interaction, a neutrino collides with an atomic electron and imparts some of its energy to the electron. All three neutrino flavours can participate in this interaction through the exchange of the neutral Z boson, and electron neutrinos can also participate with the exchange of a charged W boson. For this reason this interaction is dominated by electron neutrinos, and this is the channel through which the Super-Kamiokande (Super-K) detector can observe solar neutrinos. This interaction is the relativistic equivalent of billiards, and for this reason the electrons produced usually point in the direction that the neutrino was travelling (away from the sun). Because this interaction takes place on atomic electrons it occurs with the same rate in both the heavy and light water.

Experimental results and impact
On 18 June 2001, the first scientific results of SNO were published, bringing the first clear evidence that neutrinos oscillate (i.e. that they can transmute into one another), as they travel in the sun. This oscillation in turn implies that neutrinos have non-zero masses. The total flux of all neutrino flavours measured by SNO agrees well with the theoretical prediction.
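To make the logic of that result concrete, here is a minimal sketch. The flux values below are illustrative stand-ins close to SNO's published numbers and should be treated as assumptions rather than exact figures; the point is only that the charged current rate counts electron neutrinos alone, while the neutral current rate counts all active flavours, so any excess of the latter implies that some solar electron neutrinos arrived as muon or tau neutrinos.

```python
# Illustrative 8B solar-neutrino fluxes, in units of 1e6 per cm^2 per s.
# These are approximate stand-ins, not the exact published values.
phi_cc = 1.76  # charged current channel: electron neutrinos only
phi_nc = 5.09  # neutral current channel: all active flavours

phi_mu_tau = phi_nc - phi_cc  # inferred non-electron flavour flux
print(f"nu_mu + nu_tau flux ~ {phi_mu_tau:.2f} x 1e6 /cm^2/s")
# A value significantly greater than zero is the signature of flavour
# change (oscillation) between the Sun and the detector.
```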
Further measurements carried out by SNO have since confirmed and improved the precision of the original result. Although Super-K had beaten SNO to the punch, having published evidence for neutrino oscillation as early as 1998, the Super-K results were not conclusive and did not specifically deal with solar neutrinos. SNO's results were the first to directly demonstrate oscillations in solar neutrinos. This was important to the standard solar model. The results of the experiment had a major impact on the field, as evidenced by the fact that two of the SNO papers have been cited over 1,500 times, and two others have been cited over 750 times. In 2007, the Franklin Institute awarded the director of SNO, Art McDonald, the Benjamin Franklin Medal in Physics. In 2015 the Nobel Prize for Physics was awarded to Arthur B. McDonald for the discovery of neutrino oscillations.

Other possible analyses
The SNO detector would have been capable of detecting a supernova within our galaxy if one had occurred while the detector was online. As neutrinos emitted by a supernova are released earlier than the photons, it is possible to alert the astronomical community before the supernova is visible. SNO was a founding member of the Supernova Early Warning System (SNEWS) with Super-Kamiokande and the Large Volume Detector. No such supernovae have yet been detected. The SNO experiment was also able to observe atmospheric neutrinos produced by cosmic ray interactions in the atmosphere. Due to the limited size of the SNO detector in comparison with Super-K, the low cosmic ray neutrino signal is not statistically significant at neutrino energies below 1 GeV.

Large particle physics experiments require large collaborations. With approximately 100 collaborators, SNO was a rather small group compared to collider experiments. The participating institutions have included:
- Carleton University
- Laurentian University
- Queen's University – designed and built many calibration sources and the device for deploying sources
- University of British Columbia
- University of Guelph
Although no longer a collaborating institution, Chalk River Laboratories led the construction of the acrylic vessel that holds the heavy water, and Atomic Energy of Canada Limited was the source of the heavy water.
- University of Oxford – developed much of the experiment's Monte Carlo analysis program (SNOMAN), and maintained the program
- LBNL – led the construction of the geodesic structure that holds the PMTs
- University of Pennsylvania – designed and built the front end electronics and trigger
- University of Washington – designed and built proportional counter tubes for detection of neutrons in the third phase of the experiment
- Brookhaven National Laboratory
- University of Texas at Austin
- Massachusetts Institute of Technology

Honours and awards
- Asteroid 14724 SNO is named in honour of SNO.
- In November 2006, the entire SNO team was awarded the inaugural John C. Polanyi Award for "a recent outstanding advance in any field of the natural sciences or engineering" conducted in Canada.
- SNO principal investigator Arthur B. McDonald won the 2015 Nobel Prize in Physics, jointly with Takaaki Kajita of Kamiokande, for the discovery of neutrino oscillation.
- SNO was awarded the 2016 Fundamental Physics Prize along with 4 other neutrino experiments.
<urn:uuid:b635b442-df2d-45fe-bbd5-9921a4dd8963>
3.515625
2,844
Knowledge Article
Science & Tech.
46.129312
95,564,678
Scientific Name: Amblema plicata (Say, 1817)
Synonym: Unio plicata Say, 1817
Red List Category & Criteria: Least Concern ver 3.1
Assessor(s): Cordeiro, J. & Bogan, A.
Reviewer(s): Bohm, M., Collen, B. & Seddon, M.
Contributor(s): Richman, N., Duncan, C., Offord, S., Dyer, E., Soulsby, A.-M., Whitton, F., Kasthala, G., McGuinness, S., Milligan, HT, De Silva, R., Herdson, R., Thorley, J., McMillan, K. & Collins, A.
Amblema plicata has been assessed as Least Concern, due to the fact that it has a broad distribution in North America and is widespread and abundant throughout its range. It is also considered to be stable and in some cases expanding throughout its range. This species is endemic to North America. It is distributed from the coastal plain portion of the Gulf of Mexico drainages from the Escambia River in Florida west to Texas and north into the Mississippi River drainage (Mulvey et al. 1997). It is also known from the St. Lawrence River drainage, but it is absent from Lake Superior and its drainages (Burch 1975). Butler (1989) lists the distribution as throughout the Interior Basin and from the San Antonio River, Texas, east to the Choctawhatchee River, but it is not known from the Yellow River. In Michigan the species is found mainly in rivers in the lower peninsula from the Saginaw and Grand River drainages to the south. However, there are some records from the Sturgeon River in the upper peninsula (Burch 1975). In Canada, the species' range is restricted to southern Ontario, southern Manitoba and southeastern Saskatchewan. It is widely distributed and often abundant in Canada. It is restricted to the Lake Erie drainage in Ontario (Metcalfe-Smith and Cadmore-Vokey 2004). Its northern range includes the Red River of the North, Winnipeg River and Nelson River (Burch 1975). It extends into the Niagara River drainage in western New York State (Strayer and Jirka 1997).
Native: Canada (Manitoba, Ontario, Saskatchewan); United States (Alabama, Arkansas, Florida, Illinois, Indiana, Iowa, Kansas, Kentucky, Louisiana, Michigan, Minnesota, Mississippi, Missouri, Montana, Nebraska, New York, North Dakota, Ohio, Oklahoma, Pennsylvania, South Dakota, Tennessee, Texas, Virginia, West Virginia, Wisconsin)
In Canada, the species is often abundant (Metcalfe-Smith and Cadmore-Vokey 2004). In the United States, the species is widespread and common throughout most of its range but is apparently declining in some smaller streams (Illinois Natural History Survey 2009). Overall, it is considered to be stable, and in some cases expanding, throughout its range (NatureServe 2009).
Current Population Trend: Stable
Habitat and Ecology: This species is a generalist and is known from a variety of habitats, ranging from small streams to big rivers, as well as lakes. It is known to occur in habitats with little or no current, and also from habitats where there is a very fast current. It is also known to occur on a variety of substrates, including clay, mud, sand and gravel. It is however most common on substrates composed of sand and gravel at depths of one to three feet, though it has been found at depths of up to 30 feet (Parmalee and Bogan 1998). The age of sexual maturity for this species is not known. It is, as a unionid, gonochoristic and viviparous. The glochidia (larval stage) are released as live offspring from the female after they are fully developed. The species is a short-term brooder and breeds once annually in the spring.
In the Huron River, the species is gravid from early June to mid-July and it probably spawns in May (Lefevre and Curtis 1912, van der Schalie 1938, Watters 1995).
Use and Trade: The species is harvested for use by the pearl industry due to its sturdy shell. The shell of the species is sliced and ground into beads ("slugs") which are then placed in pearl-producing oysters in order for them to create a large pearl over the basis of the freshwater mussel shell (Oesch 1984, Watters 1995). The species is harvested and utilised by the pearl industry (Oesch 1984, Watters 1995), but other threats to the species throughout its range are not known. It is considered stable throughout its range and is therefore unlikely to be significantly affected by any major threats. The species has been given a NatureServe Global Heritage ranking of G5 – Secure (NatureServe 2009). There are no species-specific conservation measures in place for this species.
Citation: Cordeiro, J. & Bogan, A. 2012. Amblema plicata. The IUCN Red List of Threatened Species 2012: e.T203724A2770567. Downloaded on 16 July 2018.
<urn:uuid:8cf9f6de-e917-4c93-acc8-13b428b73767>
2.59375
1,192
Knowledge Article
Science & Tech.
53.564457
95,564,697
Try adding together the dates of all the days in one week. Now multiply the first date by 7 and add 21. Can you explain what happens? Put the numbers 1, 2, 3, 4, 5, 6 into the squares so that the numbers on each circle add up to the same amount. Can you find the rule for giving another set of six numbers? Find the sum of all three-digit numbers each of whose digits is odd. This task follows on from Build it Up and takes the ideas into three dimensions! Find the sum and difference between a pair of two-digit numbers. Now find the sum and difference between the sum and difference! What happens? This challenge focuses on finding the sum and difference of pairs of two-digit numbers. Tom and Ben visited Numberland. Use the maps to work out the number of points each of their routes scores. In a Magic Square all the rows, columns and diagonals add to the 'Magic Constant'. How would you change the magic constant of this square? We can arrange dots in a similar way to the 5 on a dice and they usually sit quite well into a rectangular shape. How many altogether in this 3 by 5? What happens for other sizes? What can you say about these shapes? This problem challenges you to create shapes with different areas and perimeters. Sweets are given out to party-goers in a particular way. Investigate the total number of sweets received by people sitting in different positions. Only one side of a two-slice toaster is working. What is the quickest way to toast both sides of three slices of bread? Can you put the numbers 1-5 in the V shape so that both 'arms' have the same total? Strike it Out game for an adult and child. Can you stop your partner from being able to go? How many ways can you find to do up all four buttons on my coat? How about if I had five buttons? Six ...? What happens when you round these three-digit numbers to the nearest 100? Use two dice to generate two numbers with one decimal place. What happens when you round these numbers to the nearest whole number? This challenge, written for the Young Mathematicians' Award, invites you to explore 'centred squares'. What happens when you round these numbers to the nearest whole number? Can you make dice stairs using the rules stated? How do you know you have all the possible stairs? Can you find all the ways to get 15 at the top of this triangle of numbers? Many opportunities to work in different ways. Can you see why 2 by 2 could be 5? Can you predict what 2 by 10 will be? This challenge encourages you to explore dividing a three-digit number by a single-digit number. Got It game for an adult and child. How can you play so that you know you will always win? Can you continue this pattern of triangles and begin to predict how many sticks are used for each new "layer"? Use your addition and subtraction skills, combined with some strategic thinking, to beat your partner at this game. How many different journeys could you make if you were going to visit four stations in this network? How about if there were five stations? Can you predict the number of journeys for seven stations? Compare the numbers of particular tiles in one or all of these three designs, inspired by the floor tiles of a church in Cambridge. An investigation that gives you the opportunity to make and justify predictions. In how many different ways can you break up a stick of 7 interlocking cubes? Now try with a stick of 8 cubes and a stick of 6 cubes. Polygonal numbers are those that are arranged in shapes as they enlarge. Explore the polygonal numbers drawn here. 
Find out what a "fault-free" rectangle is and try to make some of your own. Ben’s class were cutting up number tracks. First they cut them into twos and added up the numbers on each piece. What patterns could they see? While we were sorting some papers we found 3 strange sheets which seemed to come from small books but there were page numbers at the foot of each page. Did the pages come from the same book? This challenge asks you to imagine a snake coiling on itself. Investigate the sum of the numbers on the top and bottom faces of a line of three dice. What do you notice? Place the numbers from 1 to 9 in the squares below so that the difference between joined squares is odd. How many different ways can you do this? How many centimetres of rope will I need to make another mat just like the one I have here? How can you arrange these 10 matches in four piles so that when you move one match from three of the piles into the fourth, you end up with the same arrangement? In this problem we are looking at sets of parallel sticks that cross each other. What is the least number of crossings you can make? And the greatest? What happens if you join every second point on this circle? How about every third point? Try with different steps and see if you can predict what will happen. In this game for two players, the idea is to take it in turns to choose 1, 3, 5 or 7. The winner is the first to make the total 37. One block is needed to make an up-and-down staircase, with one step up and one step down. How many blocks would be needed to build an up-and-down staircase with 5 steps up and 5 steps down? Do you notice anything about the solutions when you add and/or subtract consecutive negative numbers? These squares have been made from Cuisenaire rods. Can you describe the pattern? What would the next square look like? Use the interactivity to investigate what kinds of triangles can be drawn on peg boards with different numbers of pegs. This activity involves rounding four-digit numbers to the nearest thousand. Here are two kinds of spirals for you to explore. What do you notice? Find some examples of pairs of numbers such that their sum is a factor of their product. eg. 4 + 12 = 16 and 4 × 12 = 48 and 16 is a factor of 48. Watch this film carefully. Can you find a general rule for explaining when the dot will be this same distance from the horizontal axis?
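As a worked illustration of the first challenge above (adding together the dates of all the days in one week), write the first date of the week as d. The seven consecutive dates then sum to

\[
d + (d+1) + \cdots + (d+6) = 7d + 21 = 7(d+3),
\]

which is exactly the first date multiplied by 7 plus 21, and also 7 times the middle (fourth) date of the week.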
<urn:uuid:22a42d4f-09f0-4035-9036-8e1e5cadc12c>
3.40625
1,330
Content Listing
Science & Tech.
73.11522
95,564,701
Broadening Technology's Reach
Researchers should switch from the race for the best to bringing new technologies to the rest. We have become obsessed with the extreme. To be interesting, a technology must be the fastest, the smallest, the biggest, the thinnest, the highest precision, or the lowest tolerance. We often invest immense resources in achieving these extremes. And while such work is essential to the progress of science and technology, its high cost has the unfortunate result that only a tiny fraction of the world can participate in it or benefit from the results. When focusing purely on research goals, it is all too easy to overlook opportunities for reducing cost or eliminating complexity, because pursuing them might lower performance. But simple ideas that trade a bit of performance for a substantial saving in cost can have surprising and often powerful results both scientifically and socially. Finding ways to put new capabilities within the reach of thousands–or millions–more people than was previously possible creates change on an immeasurable scale. Even beyond the direct benefits of usage are the indirect consequences of giving people power they never thought they would have. More people means more ideas–always a good thing in science. People become inspired. They become excited about exploring the potential of their new abilities. They choose to participate, to contribute, to create, to share with those who are like themselves. I realize that I have not cited any specific examples. This is because I want to encourage you to think about how this idea could be applied in your own work, whether you are doing fundamental research or developing commercial products. Try asking yourself, "Would providing 80 percent of the capability at 1 percent of the cost be valuable to someone?" If the answer is yes, perhaps it is worth exploring whether that goal could be realized using alternative approaches. Of course, the impact of a technology depends greatly on the context of its application. But I can say that I have been fortunate enough to witness several occasions when my own work had broad effects. And the diverse types of research and development that have benefited from the principle of dramatic simplification continue to surprise me. While it may not always be possible to apply that principle in your field, you can take pride in any effort you make to share the vast technological capabilities you possess.
<urn:uuid:9d68ad02-e34f-4bb9-a046-9141b1b6841d>
2.734375
488
Truncated
Science & Tech.
37.641472
95,564,704
Most species of Rostanga are orange or red, and apparently take up colour pigments from their sponge food so that their colour exactly matches the sponge they are feeding on. Individual animals show a close resemblance to their food sponge. This means that colour can vary within a species quite considerably if the sponge they feed on is itself variable in colour, or if the species feeds on a number of species of sponge. External differences can be found in the shape of the rhinophores and in biological characteristics such as the nature of the egg mass, the size of the eggs, and the food sponge. Internally, the radular morphology is the most useful character. As long as I am not sure of its identity, I label it Rostanga sp. 06. Rostanga sp. 06 has distinctive whitish rhinophores. Dr. Alicia Hermosillo-González labels it 'Doris rojo13.jpg'. More information on Rostanga spp. is available on Bill Rudman's Sea-Slug Forum!
<urn:uuid:8cc5673a-486d-40a0-9e46-29544225b5c1>
2.59375
212
Knowledge Article
Science & Tech.
53.487595
95,564,734
Due to the increase in the human population, animals have lost their habitat to a great extent. Due to excessive hunting, many animals became extinct. But some animals made it back from extinct status. Let's have a look at five such animals which removed the status of "extinct" from their name. The takahe is a flightless bird belonging to the rail family. These birds are native to New Zealand. In 1898, the takahe was considered extinct, as no other bird of the same species was found after the last four known specimens were taken for a museum. But then, on November 20, 1948, scientists reversed that decision after they found one near Lake Te Anau. These birds are not extinct now, but their population is only in the hundreds, not enough to make them common again, which they once were. Lord Howe Island stick insects are also called tree lobsters. They used to be very common on Lord Howe Island, New South Wales, until 1880. Their population decreased drastically, and by 1920 not a single insect could be found, so the species was ultimately declared extinct. However, in 2001 scientists rediscovered the insect alive on Ball's Pyramid, a rocky islet near the island, and you are right! 😉 It is once again a living species. The Arakan forest turtle is an extremely rare species of turtle found only in the Arakan hills (Myanmar) and the Chittagong hill tracts (Bangladesh). These turtles can survive both in water and on land. Scientists declared this species extinct after no trace of a single turtle was found in 1908; eventually it was taken off the extinct list after a couple of specimens were found in an Asian food market. These turtles are still alive, but with a small population. Despite their small population, these turtles are still traded worldwide. One group of nocturnal ants was believed to have gone extinct 15-20 million years ago, but a species of these ants was found in Paraguay, Brazil and Argentina in 2006, which took them off the extinct species list. Scientists thought the coelacanth became extinct 66 million years ago, until one specimen was found in a fishing trawler's catch in 1938. Till now there are two known species: one is found near the Comoro Islands off the east coast of Africa, and one is found along the coast of Indonesia in the Indian Ocean. Studies have shown that the coelacanth is more closely related to lungfish, reptiles and mammals than to typical ray-finned fishes. The coelacanth is an extremely rare fish. They can live very deep in the ocean and can grow as big as a full-grown human and weigh more than 100 kilos. Do you know any other animal species which was thought to have gone extinct but was found alive later? Please let us know via comments.
<urn:uuid:d53a9d16-3c08-47cc-a125-0450bd3059d6>
3.5
559
Listicle
Science & Tech.
54.070541
95,564,740
Dry Yeast and Hydrogen Peroxide - Acid Base Catalysis
The purpose of this experiment was to figure out if either acids or bases accelerate or decelerate the chemical reaction consisting of dry yeast and hydrogen peroxide. I am trying to prove that the more acidic or the more basic the reaction is, the more accelerated the reaction will be. Enzymes are very important to the human body because they speed up chemical reactions without being a part of them. Enzymes are made up of proteins, which are important biological compounds in the formation of living organisms. The addition of an acid or base to yeast makes a certain amount of bubbles, showing how acidity or basicity affects the chemical reaction taking place with the yeast. Without the addition of an acid or a base, the reaction is harmless to our bodies. The enzyme catalase is used in everyday life as well. The protein found in the enzyme is easily changed by the addition of another substance. Among the materials that you need to conduct this experiment are five clear containers, a washable spoon, distilled water, a measuring cup, baking soda, lemon juice, and a set of measuring spoons. The planned concoctions are: control with no acids or bases, low-acid with one teaspoon of lemon juice, high-acid with two teaspoons of lemon juice, low-base with one teaspoon of baking soda, and high-base with two teaspoons of baking soda. You might even want to try a sixth: a combination of both the acid and the base. Next, you must add ¼ cup of hydrogen peroxide into the glass. Then add 1 teaspoon of dry yeast and the reaction will begin. Record your results carefully to track this marvelous experiment. In the end, the reactions that were further away from a neutral pH performed at a more decelerated rate. Therefore, the control, low-acid, and low-base reactions performed at a more accelerated rate than the high-acid and the high-base reactions. However, the combination reaction performed at an exponentially better rate than all other reactions. Although all the mixtures performed within the same range (besides the combination), this was simply due to the reactions being at a microcosmic scale. The experiment ended up proving my initial hypothesis completely incorrect. It would probably be a wise idea to use larger amounts in order to get more appreciable results. The bubbles formed because different atoms in the hydrogen peroxide and the dry yeast collided and then bounced away to be farther apart than they were in the beginning. This microscopic change appears to us humans in the form of bubbles. The enzyme catalase, found in dry yeast, is also found in our bodies' organs, primarily the liver. What catalase does in the liver is manage the graying of our hair: the more catalase there is, the slower our hair will gray, and the less catalase there is, the faster our hair will gray. Since catalase is found in our crucial organs, doctors and scientists have done experiments to try to manipulate the enzyme. Their experiments primarily consist of the yeast acting against acids and bases, as I did in my project.
If this experiment were to be done on a grander scale, it would surely affect and aid us in our everyday lives. The purpose of this project is to figure out if either acids or bases accelerate or decelerate the chemical reaction consisting of dry yeast and hydrogen peroxide. Enzymes are very important to the human body because they speed up chemical reactions without being a part of them. This catalysis isn't just found in the human body; it's also in most living things on Earth. Enzymes are made up of proteins, which are important biological compounds in the formation of living organisms. The addition of an acid or base to yeast makes a certain amount of bubbles, showing how acidity or basicity affects the chemical reaction taking place with the yeast. If you have ever mixed baking soda and lemon juice in an attempt to fight indigestion, you will have seen a basic chemical reaction between the two. Without the addition of acids or bases, the yeast reaction is quite harmless to our bodies. However, since we consume acids and bases almost every day, it's a great idea to enlighten yourself on just how our bodies are working. The main goal of this experiment is to find out how well the catalase in yeast breaks down hydrogen peroxide when acids and bases are added.
Hypothesis and Background Research
Acids and bases are two very common terms in many scientific fields, such as chemistry. Acids are chemical substances that dissolve some types of metal and turn litmus a red color, because they have a pH lower than seven. They are typically corrosive or sour-tasting liquids. Bases, on the other hand, usually have a pH higher than seven and are the opposite of acidic substances. They accept hydrogen ions instead of releasing them as acids do. Bases will also typically turn litmus paper a sort of blue color. There are several different types of chemical reactions and changes happening around us in our everyday lives. One familiar example occurs when a raw egg turns solid. This happens because the heat applied to the raw egg forms longer and stronger chains of protein molecules inside the egg. This reaction and several others that occur in our body rely on enzymes, which are basically special types of catalysts made up of protein. Catalysts are anything that speeds up an action without being used up themselves. Thus, acid-base catalysis is the acceleration of a chemical reaction by the addition of an acid or a base, with the acid or base itself not being consumed in the reaction. Enzymes are not only found in human bodies; they are found in all types of living things, including yeast. Yeast contains the enzyme known as catalase, which breaks down the chemical hydrogen peroxide (H2O2) into oxygen gas and water. This is the reaction that will inform us about the amount of bubbles formed in the presence of the acids and bases. This reaction will also show us how much the yeast has to work to break down the hydrogen peroxide when different substances are added to the concoction. Proteins can be changed when a specific amount of heat is brought upon them. Since enzymes are made up of proteins, they too can be changed by heat. However, what a majority of people do not know is that the addition of acids and bases can also affect the way that a protein is put together. Both acid-catalyzed and base-catalyzed reactions are used for their own unique purposes.
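For reference, the catalase-driven breakdown described above is usually written as the following overall reaction (a standard textbook equation, not something specific to this essay):

\[
2\,\mathrm{H_2O_2} \;\xrightarrow{\text{catalase}}\; 2\,\mathrm{H_2O} + \mathrm{O_2}\uparrow
\]

The oxygen gas released by this reaction is what gets trapped in the yeast mixture and appears as the foam measured in the experiment.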
A macrocosmic example of acid catalysis is the conversion of the hydrocarbons found in petroleum into gasoline, and the creation of silicone. An example of a large-scale base-catalyzed reaction is the creation and conversion of several compounds and molecules used in the production of foam sponges. The main reasoning behind this investigation is to discover how well the catalase enzyme in yeast can break down hydrogen peroxide after different amounts of acids and bases have been added to it. For this experiment, my hypothesis is that the more acidic or the more basic the concoction made in the different cups is, the more bubbles will be made and the higher they will get.
The materials you will need for this experiment include:
- 5 clear glass containers of equal size (beakers or test tubes are ideal)
- Permanent marker
- 5 clean spoons
- Distilled water
- Small clear cup/glass
- Baking soda
- Set of measuring teaspoons
- Measuring cup
- Hydrogen peroxide
- Dry yeast
- Lemon juice
Variables:
1. The independent or manipulated variable in this experiment is the amount of lemon juice or baking soda poured into the different containers and thus the acidity or basicity in each container.
2. The dependent or responding variable in this experiment is the height and amount of bubbles formed as a result of the chemical reaction.
3. The controlled variables, or the variables held constant in this experiment, are the amount of yeast and the amount of hydrogen peroxide put in each container and the containers themselves.
Procedure:
1. Label the containers: 1 - Control, 2 - Low Acid, 3 - High Acid, 4 - Low Base, and 5 - High Base.
2. Put a spoon in each of the containers, and make sure to never move a spoon from one container to the other.
3. Add two teaspoons of distilled water to container 1 - Control.
4. Stir in ¼ cup of hydrogen peroxide to container 1 - Control.
5. Stir in 1 teaspoon of yeast to container 1 - Control.
6. Place the ruler alongside the container, and record the highest height the bubbles reach.
7. For the other containers, record predictions first, and actual results after, on a chart.
8. To create the acidic containers, add one teaspoon of lemon juice to container 2 - Low Acid and two teaspoons of lemon juice to container 3 - High Acid.
9. Add one teaspoon of distilled water to container 2 - Low Acid so it is the same volume as container 3.
10. Stir in ¼ cup of hydrogen peroxide to containers 2 and 3.
11. Add 1 teaspoon of yeast to both containers 2 and 3. Stir and observe.
12. Record the maximum height of the yeast bubbles.
13. To create the basic containers, add one teaspoon of the baking soda solution to container 4 - Low Base and two teaspoons of the baking soda solution to container 5 - High Base.
14. Add one teaspoon of distilled water to container 4 - Low Base so it has the same volume as container 5.
15. Stir in ¼ cup of hydrogen peroxide to containers 4 and 5.
16. Add 1 teaspoon of yeast to both containers 4 and 5. Stir and observe.
17. Record the maximum height of the yeast bubbles. (Compare your predictions with your actual observations.)
There were a plethora of things to be discovered from this otherwise simple experiment. The very first thing that you have to be aware of in doing this experiment is that there will always be a change to an altered chemical reaction, no matter how small the alteration or the result. The result of each and every chemical reaction wasn't very different, but it was enough so that each showed a noticeable change.
The temperature for each experiment I conducted stayed at approximately the same level throughout. The original height of the mixture was approximately 1 inch before adding the yeast. I performed three separate trials for each chemical reaction. The results were approximately the same for every trial I conducted of the different concoctions. The initial height of all the concoctions prior to adding yeast was approximately 1 inch. The control reaction worked at the most accelerated rate, thus causing more bubbles to form on the mixture's surface. This occurred because the enzyme catalase works best at around pH 7, and this mixture was very near to the neutral pH. The foamy bubbles made the height of the concoction reach approximately 1.5 inches in an average whiskey glass. The bubbles reached their maximum height at a slow rate. This was true for a majority of the reactions. The acidic reactions behaved in a very similar way to each other. The low-acid reaction acted in a very similar way to the control reaction in every single trial I conducted. The bubbles in this reaction reached a slightly lower height than that of the control reaction: approximately 1.2 inches. The pH of this composition was slightly more acidic, about a 5 or 6 on the pH scale. The lower pH is what caused the bubbles to perform at a more decelerated rate. The high-acid reaction also performed at a lesser magnitude than the control reaction. The bubbles reached a height of slightly more than 1 inch. Because the high-acid reaction had a lower pH and strayed further from the desired neutral status, it performed the worst of all the reactions thus far. However, this reaction reached its maximum height in a shorter amount of time. (Photos: Low-Acid Reaction; High-Acid Reaction.) The low-base mixture reacted in approximately the same way as the low-acid mixture. This is because the two mixtures were the same amount of pH away from the desired neutral pH. This concoction was at a pH of roughly 9 or 10. The height of this mixture was approximately 1.2 inches. Even though the amount of acid or base added to the mixture was the same, the one teaspoon of baking soda raised the pH more than the one teaspoon of lemon juice lowered the pH because the baking soda is a powder. Being a powder allows the individual molecules of the substance to spread around the mixture more than the tangy lemon juice could. The high-base mixture reacted in a very similar way to the high-acid mixture. Again, this was because they were the same amount away from a neutral pH. The pH of the high-base concoction was approximately 11 or 12. The maximum height of this mixture reached slightly more than 1 inch. This blend also reached its maximum height in a shorter amount of time than the others. (Photos: Low-Base Reaction; High-Base Reaction.) Because the different reactions behaved in quite a similar way to one another, I decided to conduct an additional experiment. This one consisted of one teaspoon of lemon juice and one teaspoon of baking soda in the beginning. This was to discover if a mixture of the two would accelerate or decelerate the catalase reaction. I had previous knowledge that a mixture of baking soda and lemon juice resulted in a foamy liquid that helped with indigestion and to fight off minor cancer cells, so I put it to the test with the catalytic enzyme. This concoction reacted in a way like no other. The maximum height of the reaction was approximately 5 inches.
This reaction also reached its maximum height quicker than any other reaction. The initial foam of the mixture of the acid and the base caused the yeast bubbles to be larger and whiter in color in comparison to the other reactions.
(Chart: Estimated Height of Yeast Bubbles vs. Actual Height of Yeast Bubbles.)
The results proved my hypothesis completely incorrect. I believed that the further away from neutral the concoctions got, the more accelerated the reaction would be. However, the complete opposite of what I believed turned out to be true. I was very surprised to see that every planned reaction gave approximately the same results. That was why I decided to conduct an experiment with usually counteracting substances: the acid and the base. If I were to do this experiment again, I would use larger amounts in order to get larger and more visible results. The most plausible explanation of the yeast reaction is that the bubbles formed because the hydrogen and oxide atoms collided with the catalase in the yeast and then bounced away. Because the molecules bounced apart, a larger microscopic gap formed between the atoms. The way we humans see this minuscule separation is in the form of the catalase bubbles. The way that this reaction could help us in our everyday lives is actually quite simple. Catalase is found in a majority of human bodies, especially in the liver. What catalase does in the human body is that if there is more of it in the liver, your hair will gray at a slower rate or not at all, and if there is not a lot of catalase in your liver, then your hair will gray at an exponential rate. Because catalase is found in one of our crucial organs, doctors and scientists have conducted experiments to try to manipulate the enzyme in order to treat ailments in that region of the body. These experiments were simply on a microcosmic scale, which did not allow them to perform in such a notable and appreciable way. However, on a larger scale, this type of catalysis would be truly helpful in our everyday needs.
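For a quick numerical summary, here is a small sketch that tabulates the approximate maximum foam heights reported above. The exact values for the high-acid and high-base runs are rough readings of "slightly more than 1 inch" and should be treated as estimates.

```python
# Approximate maximum foam heights (inches) read from the observations above.
initial_height = 1.0  # height of each mixture before adding yeast
max_heights = {
    "control":     1.5,
    "low-acid":    1.2,
    "high-acid":   1.1,   # reported as "slightly more than 1 inch"
    "low-base":    1.2,
    "high-base":   1.1,   # reported as "slightly more than 1 inch"
    "combination": 5.0,
}

for name, height in max_heights.items():
    rise = height - initial_height
    print(f"{name:12s} foam rise: {rise:.1f} in "
          f"({rise / initial_height:.0%} of the starting height)")
```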
<urn:uuid:08c3948f-c0da-486b-a1c4-74fdf13e11b5>
3.5625
3,537
Academic Writing
Science & Tech.
49.827112
95,564,760
Defaunation, the severe decline of animal populations in natural ecosystems, is a process faced by tropical forests that can go unnoticed. Several large birds and mammals are threatened by hunting and human persecution. However, the loss of animals can generate large unforeseen impacts. The extinction of large mammals implies the loss of functions that maintain diversity and ecosystem services on which humans depend. A recent study published in the journal Science Advances and conducted by Brazilian researchers from the Universidade Estadual Paulista (São Paulo State University) in Rio Claro, in collaboration with researchers from Spain, England, and Finland, demonstrated that the loss of large frugivores negatively affects the capacity of tropical forests to store carbon and, therefore, their potential to counter climate change. "The big frugivores, such as large primates, the tapir, the toucans, among other large animals, are the only ones able to effectively disperse plants that have large seeds. Usually, the trees that have large seeds are big trees with dense wood that store more carbon," explains Professor Mauro Galetti from the Department of Ecology at São Paulo State University.
Figure 1. Large frugivores and large-seeded trees of the Atlantic Forest. a) Anta (Tapirus terrestris), b) Muriqui (Brachyteles arachnoides), c) Jacutinga (Aburria jacutinga), d) Jatoba (Hymenaea courbaril), e), f) Atlantic Forest. Photos a, d, e and f by Mauro Galetti; photos b and c by Pedro Jordano.
"When we lose large frugivores we are losing dispersal and recruitment functions of large-seeded trees and, therefore, the composition of tropical forests changes. The result is a new forest dominated by smaller trees with softer wood, which store less carbon," adds Carolina Bello, a PhD student at São Paulo State University.
Figure 2. Replacement process of tropical forests when they lose large dispersers. Forests with large trees and hard wood (initial community) are replaced by forests with smaller trees and softer wood (final community).
The recent study showed that when large-seeded trees are removed from the forest and are replaced by trees with smaller seeds, the carbon storage potential of the forest decreases. Pedro Jordano, Research Professor at the Biological Station of Doñana (CSIC, Spain), explains that this is the result of the loss of crucial interactions that support the web of life in tropical forests. "Not only are we facing the loss of charismatic animals, but we are facing the loss of interactions that maintain the proper functioning and key ecosystem services such as carbon storage." Carlos Peres, a Professor of Tropical Conservation Ecology at the University of East Anglia (UK), says: "To date, tropical forest degradation has been entirely defined by REDD programs in terms of structural forms of human disturbance such as timber extraction and wildfires. Yet, even an apparently intact but otherwise defaunated forest should be considered degraded, because the insidious carbon erosion processes we highlight in this paper are already well underway." The present study therefore alerts current REDD+ programs, which seek to counteract climate change by storing carbon in tropical forests, to the importance of considering the animals and their functionality as a fundamental part of the maintenance of carbon stocks.
"The effectiveness of these programs will be improved if the preservation of ecological processes that sustain the ecosystem service of carbon storage over time is guaranteed," concludes Carolina Bello. The study also included Marco A. Pizo (UNESP), Otso Ovaskainen (University of Helsinki), Renato Lima (USP), Luiz Fernando S. Magnago (Federal University of Lavras) and Mariana Rocha Ferreira (Federal University of Viçosa).
<urn:uuid:f468dbce-6025-48d4-991b-2910212d69af>
3.953125
807
Knowledge Article
Science & Tech.
19.636529
95,564,762
So reports a study just published in the Proceedings of the National Academy of Sciences. The paper, exploring nitrogen dynamics, found that untangling climate impacts from other factors can be difficult, even when scientists have access to decades of data on a forest's environmental conditions. Co-author Dr. Gene E. Likens of the Cary Institute comments, "Understanding how climate change is shaping forests is critical. Our paper underscores the complexity of forest ecosystems, the legacy left by disturbance, and the difficulty in isolating climate impacts from the legacies of past disturbances." The Hubbard Brook Experimental Forest, located in the White Mountains of New Hampshire, is home to the longest, most complete record of watershed-ecosystem dynamics in the world. Its study sites have been measuring the environmental pulse of the forest for nearly half a century. Because nitrogen is essential to plant growth and a potential pollutant in water, Hubbard Brook scientists have paid close attention to nitrate draining from the watershed. Their long-term records show that nitrate concentrations in streams are at a 46-year low, and ecosystem-wide loss of nitrate from the watershed has decreased by 90%. The paper's authors, including two scientists from the Cary Institute and several from Princeton University, sought to reveal what was driving this shift in nitrogen dynamics. Among the variables explored were reductions in airborne nitrogen pollution, climate change (species shifts, warming soils, a longer growing season, and snowmelt changes) and landscape-level disturbance (logging, hurricanes). A decline in airborne nitrogen pollution was not found, and the replacement of ~25% of the forest's sugar maples with American beech, a slow-decomposing species, accounted for only a modest reduction in nitrate export. Most surprisingly, despite five decades of warming, the authors did not find that a longer growing season resulted in increased vegetation growth and subsequent nitrogen demand. They did identify a relationship between warmer winters, a decline in large snowmelt events, and a decrease in nitrate export. When nitrate has time to linger in the soil, it can be taken up by plants and microbes. And increases in soil temperature, combined with a shift in soil water flow patterns, explained about 40% of the nitrate decline. But historical disturbance—not climate change—was the driving factor behind the shift in nitrogen dynamics seen at Hubbard Brook. Using hundreds of modeling scenarios, the authors found that 50-60% of the decrease in nitrogen export could be explained by extensive timbering that occurred in the White Mountains in the early twentieth century. Logging activity had a large influence on the amount of nitrogen in soils that persisted for decades. The counterintuitive finding that nitrate export dropped when forest growth was decelerating underscores the legacy that landscape-scale disturbances leave on the forest nitrogen cycle. First author Susana Bernal of Princeton University comments, "Recognizing how present-day concerns such as climate change interact with historical patterns in ecosystems marks a major challenge in gauging the health of the planet." With Likens concluding, "As far as the forest nitrogen cycle is concerned, we can't assess climate impacts, or determine accurate baselines for predictive models, without accounting for past disturbances." 
This study was supported by a Fulbright Postdoctoral Scholarship from the Spanish Ministry of Science and Innovation, the National Oceanic and Atmospheric Administration, the National Science Foundation, and the A. W. Mellon Foundation. Authors included: Susana Bernal (Princeton University, Center for Advanced Studies of Blanes CEAB-CSIC, Spain), Lars O. Hedin (Princeton University), Gene E. Likens (Cary Institute of Ecosystem Studies), Stefan Gerber (Princeton University, University of Florida), and Don C. Buso (Cary Institute of Ecosystem Studies). Photo Caption: Image of a weir used to monitor stream flow patterns in the Hubbard Brook Experimental Forest. Cary Institute Photo Archive. The Cary Institute of Ecosystem Studies is a private, not-for-profit environmental research and education organization in Millbrook, N.Y. For more than twenty-five years, Cary Institute scientists have been investigating the complex interactions that govern the natural world. Their objective findings lead to more effective policy decisions and increased environmental literacy. Focal areas include air and water pollution, climate change, invasive species, and the ecological dimensions of infectious disease. Learn more at www.caryinstitute.org Lori Quillen | EurekAlert! Upcycling of PET Bottles: New Ideas for Resource Cycles in Germany 25.06.2018 | Fraunhofer-Institut für Betriebsfestigkeit und Systemzuverlässigkeit LBF Dry landscapes can increase disease transmission 20.06.2018 | Forschungsverbund Berlin e.V. A new manufacturing technique uses a process similar to newspaper printing to form smoother and more flexible metals for making ultrafast electronic devices. The low-cost process, developed by Purdue University researchers, combines tools already used in industry for manufacturing metals on a large scale, but uses... For the first time ever, scientists have determined the cosmic origin of highest-energy neutrinos. A research group led by IceCube scientist Elisa Resconi, spokesperson of the Collaborative Research Center SFB1258 at the Technical University of Munich (TUM), provides an important piece of evidence that the particles detected by the IceCube neutrino telescope at the South Pole originate from a galaxy four billion light-years away from Earth. To rule out other origins with certainty, the team led by neutrino physicist Elisa Resconi from the Technical University of Munich and multi-wavelength... For the first time a team of researchers have discovered two different phases of magnetic skyrmions in a single material. Physicists of the Technical Universities of Munich and Dresden and the University of Cologne can now better study and understand the properties of these magnetic structures, which are important for both basic research and applications. Whirlpools are an everyday experience in a bath tub: When the water is drained a circular vortex is formed. Typically, such whirls are rather stable. Similar... Physicists working with Roland Wester at the University of Innsbruck have investigated if and how chemical reactions can be influenced by targeted vibrational excitation of the reactants. They were able to demonstrate that excitation with a laser beam does not affect the efficiency of a chemical exchange reaction and that the excited molecular group acts only as a spectator in the reaction. A frequently used reaction in organic chemistry is nucleophilic substitution. It plays, for example, an important role in the synthesis of new chemical...
Optical spectroscopy allows investigating the energy structure and dynamic properties of complex quantum systems. Researchers from the University of Würzburg present two new approaches of coherent two-dimensional spectroscopy. "Put an excitation into the system and observe how it evolves." According to physicist Professor Tobias Brixner, this is the credo of optical spectroscopy.... 13.07.2018 | Event News 12.07.2018 | Event News 03.07.2018 | Event News 20.07.2018 | Power and Electrical Engineering 20.07.2018 | Information Technology 20.07.2018 | Materials Sciences
<urn:uuid:8268af4f-2bd2-473e-979b-7cc350f6cddf>
3.5625
1,516
Content Listing
Science & Tech.
30.224048
95,564,771
Radiometric dating, or radioactive dating as it is sometimes called, is a method used to date rocks and other objects based on the known decay rate of radioactive isotopes. Different methods of radiometric dating can be used to estimate the age of a variety of natural and even man-made materials. However, rocks and other objects in nature do not give off such obvious clues about how long they have been around. So, we rely on radiometric dating to calculate their ages. Radiometric dating is now the principal source of information about the absolute age of rocks and other geological features, including the age of fossilized life forms or the age of the Earth itself, and can also be used to date a wide range of natural and man-made materials. Together with stratigraphic principles, radiometric dating methods are used in geochronology to establish the geological time scale. The methods work because radioactive elements are unstable, and they are always trying to move to a more stable state. This process by which an unstable atomic nucleus loses energy by releasing radiation is called radioactive decay. The thing that makes this decay process so valuable for determining the age of an object is that each radioactive isotope decays at its own fixed rate, which is expressed in terms of its half-life. And then either later in this video or in future videos we'll talk about how it's actually used to date things, how we use it to actually figure out that that bone is 12,000 years old, or that person died 18,000 years ago, whatever it might be. So let me just draw the surface of the Earth like that. So then you have the Earth's atmosphere right over here. And 78%, the most abundant element in our atmosphere is nitrogen. And we don't write anything, because it has no protons down here. And what's interesting here is once you die, you're not going to get any new carbon-14. You can't just say all the carbon-14's on the left are going to decay and all the carbon-14's on the right aren't going to decay in that 5,730 years. Radioactive atoms are inherently unstable; over time, radioactive "parent atoms" decay into stable "daughter atoms." When molten rock cools, forming what are called igneous rocks, radioactive atoms are trapped inside. By measuring the quantity of unstable atoms left in a rock and comparing it to the quantity of stable daughter atoms in the rock, scientists can estimate the amount of time that has passed since that rock formed. Fossils are generally found in sedimentary rock, not igneous rock. And we talk about the word isotope in the chemistry playlist. But this number up here can change depending on the number of neutrons you have. And every now and then-- and let's just be clear-- this isn't like a typical reaction. So instead of seven protons we now have six protons. And a proton that's just flying around, you could call that hydrogen 1. If it doesn't gain an electron, it's just a hydrogen ion, a positive ion, either way, or a hydrogen nucleus. And so this carbon-14, it's constantly being formed. I've just explained a mechanism where some of our body, even though carbon-12 is the most common isotope, some of our body, while we're living, gets made up of this carbon-14 thing. So carbon by definition has six protons, but the typical isotope, the most common isotope of carbon is carbon-12. And then that carbon dioxide gets absorbed into the rest of the atmosphere, into our oceans.
When people talk about carbon fixation, they're really talking about using mainly light energy from the sun to take gaseous carbon and turn it into actual kind of organic tissue.
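The age calculation implied above follows directly from the half-life: a sample that retains a known fraction of its original parent isotope is as many half-lives old as it takes to reach that fraction. Here is a minimal sketch of that arithmetic; the measured fraction is a made-up illustration, not data from the text:

```python
import math

def radiometric_age(parent_fraction_remaining, half_life_years):
    """Years elapsed since the 'clock' started, from N(t) = N0 * (1/2)**(t / half_life)."""
    return -half_life_years * math.log2(parent_fraction_remaining)

# A sample retaining one quarter of its original carbon-14 (half-life ~5,730 years)
# has gone through two half-lives, i.e. about 11,460 years.
print(radiometric_age(0.25, 5730))
```

The same formula applies to any parent/daughter pair once the surviving parent fraction is inferred from the measured ratio of parent to daughter atoms.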
<urn:uuid:5cdac4c9-f199-4e94-a982-b4af6e925d8a>
3.78125
777
Audio Transcript
Science & Tech.
50.068605
95,564,773
Towards the end of the fifth year of the catastrophe at Chernobyl, the ecological situation in a significant part of the Dniepr river basin remains alarming. The area of the basin spans 509 000 square kilometers, extending across the territories of Byelorussia, the Russian Republic, and the Ukraine, and it contains 44 km3 of water. It supplies water to 50 large towns and industrial centers, about 10 000 industrial enterprises and to 53 large irrigation systems which cover an area of 1.2 million hectares. Forty million people drink water from the Dniepr, and a total of 10.5 km3 of effluent water is poured back into the river each year. Ignoring radiation, the pollutants discharged into the Dniepr in 1988 alone included 53 000 tons of substances of organic origin, 64 800 tons of various substances in suspension, 334 000 tons of sulphates, 336 600 tons of chlorides, 3750 tons of phosphates, 15 100 tons of nitrates, and 67 tons of phenols. Also in 1988, 20 cubic kilometers of water were taken out of the river; of these, 10.3 km3 was for industrial use, 2 km3 was used for drinking water, and 4.6 km3 for irrigation. In addition to continuous chemical pollution, the river is now polluted with radioactive substances. Keywords: Nuclear Power Station, Radioactive Contamination, Radioactive Substance, Radioactive Nuclide, Accumulation Coefficient
<urn:uuid:07933e2c-0aad-4817-b3e8-3553269990ba>
3.484375
303
Truncated
Science & Tech.
53.743158
95,564,806
We present an extended series of observations and more comprehensive analysis of a tracer-based measure of new production in the Sargasso Sea near Bermuda using the 3He flux gauge technique. The estimated annually averaged nitrate flux of 0.84 ± 0.26 mol m−2 yr−1 constitutes only that nitrate physically transported to the euphotic zone, not nitrogen from biological sources (e.g., nitrogen fixation or zooplankton migration). We show that the flux estimate is quantitatively consistent with other observations, including decade timescale evolution of the 3H + 3He inventory in the main thermocline and export production estimates. However, we argue that the flux cannot be supplied in the long term by local diapycnal or isopycnal processes. These considerations lead us to propose a three-dimensional pathway whereby nutrients remineralized within the main thermocline are returned to the seasonally accessible layers within the subtropical gyre. We describe this mechanism, which we call “the nutrient spiral,” as a sequence of steps where (1) nutrient-rich thermocline waters are entrained into the Gulf Stream, (2) enhanced diapycnal mixing moves nutrients upward onto lighter densities, (3) detrainment and enhanced isopycnal mixing injects these waters into the seasonally accessible layer of the gyre recirculation region, and (4) the nutrients become available to biota via eddy heaving and wintertime convection. The spiral is closed when nutrients are utilized, exported, and then remineralized within the thermocline. We present evidence regarding the characteristics of the spiral and discuss some implications of its operation within the biogeochemical cycle of the subtropical ocean.
<urn:uuid:ce4df308-605e-4c30-9de5-a857cfa58eb9>
2.53125
379
Academic Writing
Science & Tech.
15.110833
95,564,808
Coming to a Lab Bench Near You: Femtosecond X-Ray Spectroscopy News Apr 13, 2017 | Original story from Lawrence Berkeley National Laboratory Upon light activation (in purple, bottom row’s ball-and-stick diagram), the cyclic structure of the 1,3-cyclohexadiene molecule rapidly unravels into a near-linear shape in just 200 femtoseconds. Using ultrafast X-ray spectroscopy, researchers have captured in real time the accompanying transformation of the molecule’s outer electron “clouds” (in yellow and teal, top row’s sphere diagram) as the structure unfurls. (Credit: Kristina Chang/Berkeley Lab) The ephemeral electron movements in a transient state of a reaction important in biochemical and optoelectronic processes have been captured and, for the first time, directly characterized using ultrafast X-ray spectroscopy at the Department of Energy’s Lawrence Berkeley National Laboratory (Berkeley Lab). Like many rearrangements of molecular structures, the ring-opening reactions in this study occur on timescales of hundreds of femtoseconds (1 femtosecond equals a millionth of a billionth of a second). The researchers were able to collect snapshots of the electronic structure during the reaction by using femtosecond pulses of X-ray light on a tabletop apparatus. The experiments are described in the April 7 issue of the journal Science. “Much of the work over the past decades characterizing molecules and materials has focused on X-ray spectroscopic investigations of static or non-changing systems,” said study principal investigator Stephen Leone, faculty scientist at Berkeley Lab’s Chemical Sciences Division and UC Berkeley professor of chemistry and physics. “Only recently have people started to push the time domain and look for transient states with X-ray spectroscopy on timescales of femtoseconds.” The researchers focused on the structural rearrangements that occur when a molecule called 1,3 cyclohexadiene (CHD) is triggered by light, leading to a higher-energy rearrangement of electrons, known as an excited state. In this excited state, the cyclic molecule of six carbon atoms in a ring opens up into a linear six-carbon chain molecule. The ring-opening is driven by an extremely fast exchange of energy between the motions of the atomic nuclei and the new, dynamic electronic configuration. This light-activated, ring-opening reaction of cyclic molecules is a ubiquitous chemical process that is a key step in the photobiological synthesis of vitamin D in the skin and in optoelectronic technologies underlying optical switching, optical data storage, and photochromic devices. In order to characterize the electronic structure during the ring-opening reaction of CHD, the researchers took advantage of the unique capabilities of X-ray light as a powerful tool for chemical analysis. In their experiments, the researchers used an ultraviolet pump pulse to trigger the reaction and subsequently probe the progress of the reaction at a controllable time delay using the X-ray flashes. At a given time delay following the UV light exposure, the researchers measure the wavelengths (or energies) of X-ray light that are absorbed by the molecule in a technique known as time-resolved X-ray spectroscopy. “The key to our experiment is to combine the powerful advantages of X-ray spectroscopy with femtosecond time resolution, which has only recently become possible at these photon energies,” said study lead author Andrew Attar, a UC Berkeley Ph.D. student in chemistry. 
“We used a novel instrument to make an X-ray spectroscopic ‘movie’ of the electrons within the CHD molecule as it opens from a ring to a linear configuration. The spectroscopic still frames of our ‘movie’ encode a fingerprint of the molecular and electronic structure at a given time.” In order to unambiguously decode the spectroscopic fingerprints that were observed experimentally, a series of theoretical simulations were performed by researchers at Berkeley Lab’s Molecular Foundry and the Theory Institute for Materials and Energy Spectroscopies (TIMES) at DOE’s SLAC National Accelerator Laboratory. The simulations modeled both the ring-opening process and the interaction of the X-rays with the molecule during its transformation. “The richness and complexity of dynamical X-ray spectroscopic signatures such as the ones captured in this study require a close synergy with theoretical simulations that can directly model and interpret the experimentally observed quantities,” said Das Pemmaraju, project scientist at Berkeley Lab’s Chemical Sciences Division and an associate staff scientist at TIMES. The use of femtosecond X-ray pulses on a laboratory benchtop scale is one of the key technological milestones to emerge from this study. “We have used a tabletop, laser-based light source with pulses of X-rays at energies that have so far been limited only to large-facility sources,” said Attar. The X-ray pulses are produced using a process known as high-harmonic generation, wherein the infrared frequencies of a commercial femtosecond laser are focused into a helium-filled gas cell and, through a nonlinear interaction with the helium atoms, are up-converted to X-ray frequencies. The infrared frequencies were multiplied by a factor of about 300. The researchers are now utilizing the instrument to study myriad light-activated chemical reactions with a particular focus on reactions that are relevant to combustion. “These studies promise to expand our understanding of the coupled evolution of molecular and electronic structure, which lies at the heart of chemistry,” said Attar.
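The factor-of-300 up-conversion quoted above can be put on an energy scale with a quick back-of-the-envelope calculation. The driver wavelength below (800 nm) is an assumption used only for illustration, a typical commercial femtosecond-laser wavelength rather than a figure given in the article:

```python
# Rough scale of the ~300x frequency up-conversion described above.
PLANCK_EV_S = 4.135667696e-15      # Planck constant in eV*s
LIGHT_SPEED_M_S = 2.99792458e8     # speed of light in m/s

driver_wavelength_m = 800e-9       # assumed near-infrared driver (hypothetical value)
harmonic_factor = 300              # up-conversion factor quoted in the article

driver_photon_ev = PLANCK_EV_S * LIGHT_SPEED_M_S / driver_wavelength_m
xray_photon_ev = driver_photon_ev * harmonic_factor
xray_wavelength_nm = driver_wavelength_m * 1e9 / harmonic_factor

print(f"~{driver_photon_ev:.2f} eV infrared -> ~{xray_photon_ev:.0f} eV "
      f"(~{xray_wavelength_nm:.1f} nm)")
```

Under that assumption the pulses land in the soft X-ray range of a few hundred eV, consistent with the article's description of tabletop pulses at photon energies previously limited to large facilities.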
<urn:uuid:b0f87081-209b-4ffb-9d62-201ced19e63c>
2.78125
1,251
News Article
Science & Tech.
17.285753
95,564,812
Rapid sea level rise and ice sheet response to 8,200-year climate event Cronin, Thomas M. Vogt, P. R. Willard, D. A. Thunell, Robert C. Pohlman, John W. The largest abrupt climatic reversal of the Holocene interglacial, the cooling event 8.6–8.2 thousand years ago (ka), was probably caused by catastrophic release of glacial Lake Agassiz-Ojibway, which slowed Atlantic meridional overturning circulation (AMOC) and cooled global climate. Geophysical surveys and sediment cores from Chesapeake Bay reveal the pattern of sea level rise during this event. Sea level rose ~14 m between 9.5 and 7.5 ka, a pattern consistent with coral records and the ICE-5G glacio-isostatic adjustment model. There were two distinct periods at ~8.9–8.8 and ~8.2–7.6 ka when Chesapeake marshes were drowned as sea level rose rapidly at rates of at least ~12 mm yr−1. The latter event occurred after the 8.6–8.2 ka cooling event, coincided with extreme warming and vigorous AMOC centered on 7.9 ka, and may have been due to Antarctic Ice Sheet decay. Author Posting. © American Geophysical Union, 2007. This article is posted here by permission of American Geophysical Union for personal use, not for redistribution. The definitive version was published in Geophysical Research Letters 34 (2007): L20603, doi:10.1029/2007GL031318. Suggested Citation (Article): Cronin, Thomas M., Vogt, P. R., Willard, D. A., Thunell, Robert C., Halka, J., Berke, M., Pohlman, John W., "Rapid sea level rise and ice sheet response to 8,200-year climate event", Geophysical Research Letters 34 (2007): L20603, DOI:10.1029/2007GL031318, https://hdl.handle.net/1912/3348 Showing items related by title, author, creator and subject. Species-specific control of external superoxide levels by the coral holobiont during a natural bleaching event Diaz, Julia M.; Hansel, Colleen M.; Apprill, Amy; Brighi, Caterina; Zhang, Tong; Weber, Laura; McNally, Sean; Xun, Liping (Nature Publishing Group, 2016-12-07) The reactive oxygen species superoxide (O2·−) is both beneficial and detrimental to life. Within corals, superoxide may contribute to pathogen resistance but also bleaching, the loss of essential algal symbionts. Yet, the ... Peltomaa, Elina; Johnson, Matthew D. (Inter-Research, 2017-02-09) The marine ciliate Mesodinium rubrum is known to form large non-toxic red water blooms in estuarine and coastal upwelling regions worldwide. This ciliate relies predominantly upon photosynthesis by using plastids and other ... Mass-induced sea level change in the northwestern North Pacific and its contribution to total sea level change Cheng, Xuhua; Li, Lijuan; Du, Yan; Wang, Jing; Huang, Rui Xin (John Wiley & Sons, 2013-08-02) Over the period 2003–2011, the Gravity Recovery and Climate Experiment (GRACE) satellite pair revealed a remarkable variability in mass-induced sea surface height (MSSH) in the northwestern North Pacific. A significant ...
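Returning to the abstract above: its figures are easy to sanity-check against each other, since ~14 m of rise spread over the 9.5–7.5 ka interval averages to roughly 7 mm per year, which makes the ≥12 mm per year drowning episodes clearly faster-than-average pulses. A trivial sketch of that arithmetic, using only numbers quoted in the abstract:

```python
# Average Chesapeake Bay sea level rise rate implied by the abstract's figures.
total_rise_m = 14.0           # ~14 m of rise
interval_ka = 9.5 - 7.5       # from 9.5 ka to 7.5 ka
interval_years = interval_ka * 1000

mean_rate_mm_per_year = total_rise_m * 1000 / interval_years
print(mean_rate_mm_per_year)  # ~7 mm/yr, versus >=12 mm/yr during the two pulses
```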
<urn:uuid:965d077f-9c65-40d8-88b7-cbb947ae0857>
3.109375
771
Content Listing
Science & Tech.
58.847602
95,564,838
Species Detail - Wrinkled Snail (Candidula intersecta) - Species information displayed is based on all datasets. Terrestrial Map - 10kmDistribution of the number of records recorded within each 10km grid square (ITM). Marine Map - 50kmDistribution of the number of records recorded within each 50km grid square (WGS84). Invasive Species: Invasive Species || Invasive Species: Invasive Species >> Medium Impact Invasive Species 5 January (recorded in 1924) 20 November (recorded in 1997) National Biodiversity Data Centre, Ireland, Wrinkled Snail (Candidula intersecta), accessed 18 July 2018, <https://maps.biodiversityireland.ie/Species/120171>
<urn:uuid:528ca58a-08d7-4782-a50e-92a59fa64a03>
2.6875
162
Structured Data
Science & Tech.
20.38
95,564,840
Time continuum and true long-term ecology: from theory to practice - Palynology and Paleoecology Lab, Botanic Institute of Barcelona (IBB-CSIC-ICUB), Barcelona, Spain The need for long-term studies to understand ecological dynamics is widely recognized but has not been satisfactorily addressed to date. The development of “long-term” (LT) observatories has aimed to improve the situation, but the main handicaps are that we should wait for generations to yield reliable results and that a number of ecological processes occurring at time scales larger than centuries will not be fully resolved. Palaeoecology can provide the needed time scale for true long-term ecology, but it is limited by the ability to merge ecological, and palaeoecological data into continuous time series. This paper suggests a practical way of attaining such goals based on the concept of time continuum. A short review is provided on the main handicaps for palaeoecological records to be incorporated into current ecological datasets and the recent improvements in the field. A global network of past-present-future ecological observatories (PPFEO) centered around lakes with annually-laminated sediments could act as a means of producing truly long-term and continuous ecological records by combining high-resolution palaeoecological techniques with ecological methods commonly used in LT observatories. Space and Time in Ecology Perhaps one of the main tenets of modern ecology is the notion that relevant ecological patterns and processes should be considered globally to find sound explanations and make accurate predictions about key functional aspects of the biosphere. The study of worldwide biogeochemical cycles and matter/energy balances underwent a spectacular boost during the last decades of the 20th century and has attained a significant amount of data to run models using a global scope (Regnier et al., 2013; Schlesinger and Bernhardt, 2013; Smith et al., 2014). Therefore, in terms of space, ecologists have already attained a solid and likely enduring awareness on the appropriate ecological framework. Time, however, has not been equally appreciated, although some progress has been made. In recent decades, ecologists have realized that many ecological processes should be studied on a long-term basis to infer functional ecosystem features, to calibrate and validate ecological models, and to forecast potential ecological responses to future environmental change (Clutton-Brock and Sheldon, 2010; Magurran et al., 2010). This has led to the creation of global databases (Edwards et al., 2010; Peters, 2010), and the establishment of worldwide networks of ecological observatories that aim to provide “long-term” records of pivotal ecological parameters and processes in the near future (http://www.lternet.edu/). In this context, it is not uncommon to consider decadal or secular ecological time series as “long-term” series. The main handicaps of this approach are that we should wait for generations to yield reliable results and that a number of ecological processes that occur at time scales larger than centuries—e.g., ecological succession, range shifts, migration, extinction, community assembly, biotic responses to global environmental changes, etc.—will not be fully resolved and understood. 
Present-day communities originated and assembled after the Last Glacial Maximum (LGM; between 26,500 and 19,000 years ago) (Lecavalier et al., 2014); therefore, the time frame needed to properly understand their temporal dynamics is larger than one or a few centuries. Palaeoecology can provide the needed time scale for true long-term ecology (Rull and Vegas-Vilarrúbia, 2011), but it is limited by the ability to merge ecological and palaeoecological data into consistent and continuous time series. This paper contends that past-present-future data series of this type are possible and suggests a way of attaining this goal based on the concept of the time continuum. The conceptual separation between past and present is a human construction. Time is a continuum through which species and communities flow, interact, and evolve. A biosphere of the past and a biosphere of the present do not exist separately; there is a single biosphere where ecological and evolutionary processes have occurred continuously since the origin of life on Earth. Therefore, there is no ecology of the past (palaeoecology) and no ecology of the present (modern ecology or neoecology) but rather a single ecology (general ecology) that includes both. Historically, ecology and palaeoecology have been separated for primarily psychological and methodological reasons, not because there are any differences between them per se (Rull, 2010). We hope that the psychological barrier will be overcome if we are able to find suitable methods to combine ecological and palaeoecological records to produce true long-term records (sensu Rull and Vegas-Vilarrúbia, 2011), rather than insisting on the need to merge ecology and palaeoecology from a theoretical framework. Ecology and Palaeoecology Some ecological palaeoecologists whose interest is to reconstruct past ecological dynamics rather than solely past environmental changes have addressed the potential usefulness of past records for ecological knowledge (e.g., Davis, 1981; Birks, 1993, 2013a; Jackson, 2001; Flessa and Jackson, 2005; Rull et al., 2013; Seddon et al., 2014). However, these claims have not been fruitful thus far. Some of these palaeoecologists have attempted to enter the ecological arena by publishing papers on palaeoecology in general ecological journals rather than palaeoecological journals (e.g., Birks, 1996; Restrepo et al., 2012; Rull, 2012; Jackson, 2013; Seddon et al., 2014), or by highlighting the usefulness of palaeo-data for nature conservation (e.g., Birks, 1993, 1996; Willis et al., 2010; Vegas-Vilarrúbia et al., 2011; Birks, 2013b; Gillson and Marchant, 2014; Seddon et al., 2014). In spite of this, the lack of synergy between ecological and palaeoecological communities persists and is delaying the advancement of ecological knowledge and the potential impact of its applications on important topics, such as nature conservation and the sustainable use of ecological services. A number of ecologists are only interested in present-day ecology or are ignorant of what palaeoecology can tell them. Others are aware of the relevance of palaeoecology but rarely consider past records a useful tool for ecological studies.
The most common reasons for many ecologists to ignore palaeoecological results as potential ecological inputs have been the following (Huntley, 1996, 2012; Rull, 2012): (i) the lack of enough time resolution in palaeoecological reconstructions, (ii) the often incomplete (fragmentary) nature of palaeoecological evidence, (iii) the lack of enough taxonomic resolution, that is, the difficulty of identifying most fossils at the species level, (iv) the difficulty of equating fossil-based measures with present measures, notably abundance and diversity, (v) the lack of taxonomic continuity between past and present communities due to evolution, (vi) the poor development of quantitative methods and the rather qualitative nature of palaeoecological studies in comparison to ecological studies, and (vii) the strong bias of palaeoecologists toward past environmental reconstructions rather than ecological reconstructions. It could be added that the inherently human psychological disconnection between past and present has also been a major handicap for the desired synergies (Rull, 2010). Ecologists should be able to understand and use palaeoecological data but palaeoecologists should also make an extra effort to adapt their data to the requirements of the research field of ecology to promote interaction. The first point (i) is responsible for the lack of continuity between ecological and palaeoecological time series. Indeed, palaeoecological surveys of a time resolution comparable to ecological records are very rare. This is a serious handicap that has led to a fundamental disconnection between ecological and palaeoecological databases of global scope (Peng et al., 2011). The time resolution needed for ecological studies varies according to the duration of the lifecycle of the involved organisms. For example, in the case of forests dominated by trees with life spans of centuries or millennia, decadal studies are clearly insufficient for a sound ecological appraisal. But even in the case of very short life spans, for example in planktonic organisms, there is a seasonal environmental control that makes year-round studies (ideally multi-year studies) necessary for a sound understanding of the ecological functioning of these communities (e.g., Köster and Pienitz, 2006; Bunbury and Gajewski, 2008). In palaeoecology, annual resolution is possible in a special type of laminated sediments, referred to as varved sediments, or in cases of very high accumulation rates (Ojala et al., 2012). Laminated sediments are more frequent in continental rather than marine environments, where sedimentation rates are also comparatively lower. Exceptions include certain coastal areas with strong seasonal dynamics that allow the development and preservation of seasonal layers of sediments (Riboulleau et al., 2014). Several detailed studies conducted so far in Europe and North America have demonstrated the high ecological potential of these special types of sediments but this potential has not been fully exploited yet (Hughes and Ammann, 2009). In temperate regions, dendrochronology can provide powerful high-resolution tools to study forest dynamics (Foster et al., 2014; Galván et al., 2014). Other palaeoecological archives that could be useful for ecological purposes are the growth rings of corals (Bramanti et al., 2014), speleothems (Feurdean et al., in press), or ice cores (Alley, 2011).
Studies of annual resolution are feasible and meaningful for both ecology and palaeoecology; therefore, annual resolution seems to be an excellent link to produce continuous, homogeneous, and coherent long-term ecological records (Figure 1). Figure 1. Timescales of ecological and palaeoecological studies showing the shared temporal window, ranging from seasons to decades. The higher resolution (HR) of combined ecological-palaeoecological time series is attained using seasonal records, whereas the lower resolution (LR) occurs in decadal records. In palaeoecology, seasonal resolution is more difficult to attain than annual resolution, which is more feasible and seems a suitable framework for continuous long-term ecological records. Record Incompleteness and Taxonomic Resolution Record incompleteness (point ii) may refer to the lack of continuity in the sedimentary records themselves or to the biased representation of organisms in the fossil record due to differential preservation, or both. Sedimentary continuity tends to be higher in permanent water bodies than in terrestrial and temporarily flooded environments, due to either erosion, the lack of deposition, or both. Additionally, continuity is usually higher in recent (Quaternary) sediments than in older sediments, which have been submitted to more tectonic and erosional processes. The preferred locations for palaeoecologists seeking continuous sedimentary records are lakes that originated after the LGM. Differential preservation favors the persistence of organisms with hard elements able to endure post-depositional diageneses, whereas others that might be equally significant from an ecological point of view are lost. Fortunately, palaeoecological methods are continuously advancing, and we now have a wide range of new possibilities beyond the classical paleontology. Indeed, organisms without hard structures may be recorded in the sediments by their characteristic chemical imprints (biomarkers), and also by their genetic material (DNA, RNA) that can be analyzed at a molecular level and identified to species resolution (e.g., Boere et al., 2009; Coolen et al., 2013) (point iii). Molecular palaeoecology is now in full expansion (Anderson-Carpenter et al., 2011; Hofreiter et al., 2012), and will likely lead to significant improvements toward more complete palaeocommunity reconstructions. Abundance and Diversity Biases in the fossil record due to differential preservation also determine inconsistencies between the abundance of a given fossil and the abundance of its parent organism in the original living community (point iv). In some cases, these inconsistencies are magnified by other processes such as differential production and transport, as occurs in the case of pollen. As a consequence, diversity measures, such as richness and equitability, are commonly distorted in the fossil assemblage compared to the living community. Several strategies have attempted to quantitatively calibrate fossil and living abundances using modern analogs (Jackson and Williams, 2004; Jackson, 2012). In these studies, the basic assumption (based on the principle of uniformitarianism) is that modern fossil assemblages, once sedimented, have been submitted to the same processes as fossil assemblages, except for diagenesis. 
Therefore, we can calibrate the abundance of the components of a modern assemblage with parameters from their sources, such as relative abundance of the living organism involved, its cover (in the case of vegetation), the distance to its source, and other quantitative relationships (Gaillard et al., 2008; Zheng et al., 2014), to generate models and transfer functions that can be applied to fossil assemblages to estimate the past values for these parameters. The diversity of fossil assemblages cannot give direct and accurate measures of the diversity of the original living community. The better cases are those of fossils representing complete organisms, such as diatoms or foraminifers; the worst case is again represented by vegetation reconstruction using pollen analysis. One possibility to address this problem could be the combined use of pollen and macrofossils, which are more local in origin and less dependent on production and transport factors (Birks and Birks, 2000). However, as macrofossils are random parts of the whole organism, diversity estimation is always problematic. Some studies have shown that pollen diversity and vegetation diversity do not coincide (Goring et al., 2013), while others show that their trends can be consistent, and it may be possible to derive community diversity changes in time from fossil diversity tendencies (van der Knaap, 2009). Diversity estimation using transfer functions has been rarely attempted and might be potentially useful. Research in this field is presently very active (Goring et al., 2013; Keen et al., 2014). Evolution and Community Turnover Points v and vi refer to taxonomic turnover of communities due to speciation and extinction, which makes past and present communities incomparable, or at least discontinuous in time. Extant communities were assembled and developed during the Quaternary, where many new species emerged and others became extinct (Davis, 1981; Huntley and Birks, 1983; Graham et al., 1996; Rull, 2011). These communities, however, have remained relatively constant during the Late Glacial and the Holocene, where no significant speciation and extinction events have been documented, and spatial reorganization in response to environmental changes has been the rule (Willis and Bhagwat, 2009). Exceptions are communities that have been significantly modified as a result of the megafaunal extinctions occurred during the Late Pleistocene, which affected not only animal but also plant communities (Gill et al., 2009). Contrastingly, one single plant extinction has been documented so far for the whole Quaternary, which occurred during the last deglacial phase (Jackson and Weng, 1999). Therefore, the Late Glacial and the Holocene can provide us a truly long record of extant community dynamics and internal and external drivers involved, which can be especially useful as past analogs to infer potential ecological responses to eventual future environmental changes (Williams and Jackson, 2007; Williams et al., 2007). In the past, palaeoecological and palaeoenvironmental reconstructions were mostly qualitative or semi-quantitative, but this has notably changed during recent decades. At present, palaeoecological studies are based on quantitative multivariate datasets that are analyzed with a variety of statistical techniques including the latest developments in numerical analysis (Birks et al., 2010, 2012; Birks, 2013c). 
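In computational terms, the calibration just described boils down to fitting a statistical relationship on modern paired observations and then applying it to fossil samples. Real transfer functions use more elaborate estimators (weighted averaging, modern-analog matching, and related methods), but a deliberately simple least-squares sketch conveys the idea; all numbers below are placeholders, not data from any of the cited studies:

```python
import numpy as np

# Modern calibration set (placeholder values): pollen percentage of a taxon in
# surface samples paired with the observed vegetation cover around each site.
modern_pollen_pct = np.array([5.0, 12.0, 20.0, 33.0, 41.0])
modern_cover_pct = np.array([2.0, 8.0, 15.0, 30.0, 38.0])

# Fit a simple linear transfer function: cover = a * pollen + b.
a, b = np.polyfit(modern_pollen_pct, modern_cover_pct, deg=1)

# Apply the calibrated relationship to fossil pollen percentages to estimate
# the past values of the same parameter (here, vegetation cover).
fossil_pollen_pct = np.array([10.0, 25.0, 37.0])
estimated_past_cover_pct = a * fossil_pollen_pct + b
print(estimated_past_cover_pct)
```

The uniformitarian assumption flagged above is what licenses the last step: the modern pollen-to-cover relationship is taken to have held for the fossil assemblages as well.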
Additionally, several regional and global databases exist that are available to researchers interested in developing meta-analysis on particular subjects and selected time intervals (Fyfe et al., 2009; Peng et al., 2011; Brewer et al., 2012; Grimm et al., 2013). It could be said that the idea of Palaeoecology as a qualitative discipline is a myth that is no longer tenable, but it is equally true that there is still room for improvement in the use of numerical methods and datasets for palaeoecological research. Point vii is still a challenge for ecology and palaeoecology integration, but there are signs of change that are encouraging. For example, the concept of ecological palaeoecology, although it may seem redundant, has emerged from the need of differentiating palaeoecologists interested in ecological dynamics from those whose aim is to reconstruct past environments, mostly addressing climate. Palaeoclimatology is now experiencing a significant development, as it has proven to be useful to calibrate and validate global climatic models based on general circulation and energy balance, as well as to reconstruct past atmospheric and oceanic circulation, thus suggesting previously unknown causal mechanisms for climate change. These developments are commonly used in forecasts of future global change (Stocker et al., 2013). Ecological palaeoecologists have also conducted some steps in the potential application of palaeoecological results to ecological knowledge in general, especially in reference to nature conservation in the face of future events (Willis et al., 2010; Vegas-Vilarrúbia et al., 2011; Gillson and Marchant, 2014), but there is still much room for improvement. Continuous Past-Present-Future Time Series in Practice To overcome the drawbacks discussed above, we need practical proposals. Presently, the theoretical basis for fruitful ecological-palaeoecological synergies seems robust, and a large number of priority hypotheses and questions that need to be addressed have already been identified (Seddon et al., 2014). Now, we need a feasible strategy to put them into practice. We should be able to combine present-day ecology with palaeoecology in a way that can be satisfactory for the practitioners of both disciplines who are willing to share the common objective of understanding the ecological functioning of the biosphere—the common goal of ecology and palaeoecology (Rull, 2010). Thus, far, palaeoecologists have attempted to attract the attention of ecologists by highlighting the suitability of palaeo-data for ecology, but this has been unsuccessful. Therefore, new ideas are necessary. A more pragmatic strategy is to launch collaborative projects of long-term ecological research that include past (palaeoecology), present (modern ecology), and future (ecological monitoring) evidence aimed at combining all these data in a single time series of the same temporal and taxonomic resolution. An initiative of this type is the PalEON Project, which aim is “to reconstruct forest composition, fire regime and climate in forests across the northwestern US and Alaska over the past 2000 years and then use this to derive and validate terrestrial ecosystem models.” (http://www3.nd.edu/~paleolab/paleonproject/). To develop a global network of continuous long-term ecological records of annual resolution, we do not need to create any special infrastructure, as the already existing network of “long-term” ecological observatories (LTER and similar) may represent a good starting point. 
In some cases, we could benefit from the already existing LTER sites to develop palaeoecological and palaeolimnological studies in lakes located near these sites. Examples are some alpine lakes of the Colorado Front Range (USA), where high-resolution palaeolimnological studies on nitrogen deposition during the last centuries have been used to forecast potential future biotic responses to anthropogenic pressure, as part of the study of the Niwot Ridge LTER site (Wolfe et al., 2000). In other cases, we could propose the emplacement of new ecological monitoring stations inside catchments with suitable palaeoecological archives, thus transforming these eventual observatories into truly long-term stations. According to the six points analyzed above, the optimal locations for these past-present-future ecological observatories (PPFEO) would be catchments with lakes containing continuous and annually laminated sedimentary records that include the Late Glacial and the Holocene. The advantage of lake sediments is that they contain fossils from the aquatic biota and also from terrestrial ecosystems present in the catchment, thus facilitating integral palaeolimnological and palaeoecological reconstructions. Ideally, this procedure should be developed synchronously at different representative locations worldwide to obtain a global scope. In this way, we might be capable of producing continuous records of annual resolution since the LGM. This time scale appears sufficient to find well-documented explanations of ecological change at the ecosystem level, as well as to optimize functional and predictive ecological models. An example of continuous and homogeneous past-present records of annual resolution is provided by Northern Hemisphere temperature reconstructions of the last millennium (Figure 2). Similar long-term ecological records would be possible for subjects as varied as population dynamics, community assembly, ecological succession, biodiversity shifts, migration patterns and range shifts, biotic responses to environmental changes, changes in forest cover, land use patterns, fire incidence, nitrogen deposition, carbon sequestration, or lake acidification, among many others. Such long-term annual data series would provide huge and coherent datasets with optimal statistical reliability to be analyzed with numerical methods and unravel trends, cycles, and other processes impossible to capture at decadal to centennial time scales. In addition, it would be possible to check whether the present-day ecological state is consistent with historical trends or, on the contrary, anomalous, as occurs, for example, with modern temperature trends as compared to the last millennium records (Figure 2). Figure 2. Northern Hemisphere temperature reconstructions of the last millennium. Palaeorecords are based on tree rings and ice cores; present-day records are from instrumental measures. Temperature anomalies are calculated using the average annual temperature of the period 1902–1980. Redrawn from Mann et al. (1999). According to the points discussed above, palaeoecology is able to achieve temporal and taxonomic resolutions compatible with modern ecology observations using continuous long-term (millennial) quantitative records of extant community change and its drivers. It appears that the continuity between palaeoecological and ecological time series is possible, provided we are capable of improving several aspects of palaeoecological data and their presentation.
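Computationally, a combined series of the kind shown in Figure 2 amounts to expressing the proxy-based and instrumental records as anomalies against a shared reference period and concatenating them on a common annual axis. The sketch below is schematic only: the values are synthetic stand-ins, and real reconstructions such as Mann et al. (1999) involve calibration and uncertainty estimation far beyond this:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins: an annually resolved proxy reconstruction up to 1901
# and an instrumental record from 1902 onward (degrees Celsius).
proxy_years = np.arange(1000, 1902)
proxy_temp = rng.normal(14.0, 0.15, proxy_years.size)
instr_years = np.arange(1902, 1999)
instr_temp = 14.0 + 0.005 * (instr_years - 1902)

# Anomalies relative to the shared 1902-1980 reference mean, as in Figure 2.
# (In practice the proxy record is first calibrated against this same period.)
reference_mean = instr_temp[(instr_years >= 1902) & (instr_years <= 1980)].mean()

years = np.concatenate([proxy_years, instr_years])
anomalies = np.concatenate([proxy_temp, instr_temp]) - reference_mean
print(years.size, anomalies.size)  # one continuous annual series, 1000-1998
```

The same pattern, one shared annual axis and one shared baseline, is what a PPFEO-style record of nitrate export, forest cover, or fire incidence would need in order to splice varve-based palaeo data onto ongoing monitoring.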
Improvements are still needed in the seven areas discussed above, particularly on points iii (taxonomic resolution) and iv (abundance and diversity measures), but recent advances in molecular genetics are very promising and have been successful in other ecological areas (Cavender-Bares et al., 2009, 2012; Marske et al., 2013). Despite the remaining limitations, the launch of collaborative long-term projects of the type suggested in this paper is already possible and may be the best strategy to identify the methodological and psychological aspects that require further attention for true ecological-palaeoecological synergies. Conflict of Interest Statement The author declares that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest. Anderson-Carpenter, L., McLachlan, J. S., Jackson, S. T., Kuch, M., Lumibao, C. Y., and Poinar, H. N. (2011). Ancient DNA from lake sediments: bridging the gap between paleoecology and genetics. BMC Evol. Biol. 11:30. doi: 10.1186/1471-2148-11-30 Birks, H. J. B. (2013a). “Paleoecology,” in Encyclopedia of Ecology: Reference Module in Earth Systems and Environmental Sciences, ed S. A. Elias (Amsterdam: Elsevier). doi: 10.1016/B978-0-12-409548-9.00884-8 Birks, H. J. B. (2013b). Ecological palaeoecology and conservation biology: controversies, challenges, and compromises. Int. J. Biodiv. Sci. Ecosyst. Serv. Manage. 8, 292–304. doi: 10.1080/21513732.2012.701667 Birks, H. J. B., Heiri, O., Seppä, H., and Bjune, A. (2010). Strengths and weaknesses of quantitative climate reconstructions based on Late-Quaternary biological proxies. Open Ecol. J. 3, 68–110. doi: 10.2174/1874213001003020068 Birks, H. J. B., Lotter, A. F., Juggins, S., and Smol, J. P. (eds.). (2012). Tracking Environmental Change Using Lake Sediments. Data Handling and Numerical Techniques. Dordrecht: Springer. doi: 10.1007/978-94-007-2745-8 Boere, A. C., Abbas, B., Rijpstra, W. I. C., Versteegh, G. J. M., Volkman, J. K., Damste, J. S. S., et al. (2009). Late-Holocene succession of dinoflagellates in an Antarctic fjord using a multi-proxy approach: paleoenvironmental genomics, lipid biomarkers and palynomorphs. Geobiology 7, 265–281. doi: 10.1111/j.1472-4669.2009.00202.x Bramanti, L., Vielmini, I., Rossi, S., Tsounis, G., Ianelli, M., Cattaneo-Vietti, R., et al. (2014). Demographic parameters of two populations of red coral (Corallium rubrum L. 1758) in the North Western Mediterranean. Mar. Biol. 161, 1015–1026. doi: 10.1007/s00227-013-2383-5 Bunbury, J., and Gajewski, K. (2008). Does a one point sample adequately characterize the lake environment for paleoenvironmental calibration studies? J. Paleolimnol. 39, 511–531. doi: 10.1007/s10933-007-9127-9 Clutton-Brock, T., and Sheldon, B. C. (2010). Individuals and populations: the role of long-term, individual-based studies of animals in ecology and evolutionary biology. Trends Ecol. Evol. 25, 562–573. doi: 10.1016/j.tree.2010.08.002 Coolen, M. J., Orsi, W. D., Balkema, C., Quince, C., Harris, K., Sylva, S. P., et al. (2013). Evolution of the plankton paleome in the Black Sea from the Deglacial to Anthropocene. Proc. Natl. Acad. Sci. U.S.A. 110, 8609–8614. doi: 10.1073/pnas.1219283110 Davis, M. B. (1981). “Quaternary history and the stability of forest communities,” in Forest Succession, Concepts and Applications, eds D. C. West, D. B. Shugart, and D. B. Botkin (New York, NY: Springer), 152–153. Edwards, M., Beaugrand, G., Hays, G. C., Koslow, J.
A., and Richardson, A. J. (2010). Multi-decadal oceanic ecological datasets and their application in marine policy and management. Trends Ecol. Evol. 25, 602–610. doi: 10.1016/j.tree.2010.07.007 Feurdean, A., Perşiou, A., Tanţău, I., Stevens, T., Magyari, E. K., Onac, B. P., et al. (in press). Climate variability and associated vegetation response throughout central and eastern europe (CEE) between 60 and 8 ka. Quat. Sci. Rev. doi: 10.1016/j.quascirev.2014.06.003 Foster, J. R., D'Amatto, A. W., and Bradford, J. B. (2014). Looking for age-related growth decline in natural forests: unexpected biomass patterns from tree rings and simulated mortality. Oecologia 175, 363–374. doi: 10.1007/s00442-014-2881-2 Fyfe, R. M., de Beaulieu, J.-L., Binney, H., Bradshaw, R. H. W., Brewer, S., Flao, A. L., et al. (2009). The European Pollen Database: past efforts and current activities. Veg. Hist. Archaeobot. 18, 417–424. doi: 10.1007/s00334-009-0215-9 Gaillard, M.-J., Sugita, S., Bunting, M. J., Middleton, R., Broström, A., Caseldine, C., et al. (2008). The use of modelling and simulation approach in reconstructing past landscapes from fossil pollen data: a review and results from the POLLANDCAL network. Veg. Hist. Archaeobot. 17, 419–443. doi: 10.1007/s00334-008-0169-3 Galván, J. D., Camarero, J. J., and Gutiérrez, E. (2014). Seeing the trees for the forest: drivers of individual growth responses to climate in Pinus uncinata mountains forests. J. Ecol. 102, 1244–1257. doi: 10.1111/1365-2745.12268 Gill, J. L., Williams, J. W., Jackson, S. T., Lininger, K. B., and Robinson, G. S. (2009). Pleistocene megafaunal collapse, novel plant communities, and enhanced fire regimes in North America. Science 326, 1100–1103. doi: 10.1126/science.1179504 Gillson, L., and Marchant, R. (2014). From myopia to clarity: sharpening the focus of ecosystem management through the lens of palaeoecology. Trends Ecol. Evol. 29, 317–325. doi: 10.1016/j.tree.2014.03.010 Goring, S., Lacourse, T., Pellatt, M. G., and Mathewes, R. W. (2013). Pollen assemblage richness does not reflect regional plant species richness: a cautionary tale. J. Ecol. 191, 1137–1145. doi: 10.1111/1365-2745.12135 Graham, R. W., Lundelius, E. L., Graham, M. A., Schroeder, E. K., Toomey, R. S., Anderson, E., et al. (1996). Spatial response of mammals to late Quaternary environmental fluctuations. Science 272, 1601–1606. doi: 10.1126/science.272.5268.1601 Grimm, E. C., Bradshaw, R. H. W., Brewer, S., Flantua, S., Giesecke, T., Lézine, A. M., et al. (2013). “Databases and their application,” in Encyclopaedia of Quaternary Science, ed S. A. Elias (Amsterdam: Elsevier), 831–838. doi: 10.1016/B978-0-444-53643-3.00174-6 Jackson, S. T., and Williams, J. W. (2004). Modern analogs in Quaternary paleoecology: here today, gone yesterday, gone tomorrow? Annu. Rev. Earth Planet. Sci. 32, 495–537. doi: 10.1146/annurev.earth.32.101802.120435 Keen, H. F., Gosling, W. D., Hanke, F., Miller, C. S., Montoya, E., Valencia, B. G., et al. (2014). A statistical sub-sampling tool for extracting vegetation community and diversity information from pollen assemblage data. Palaeogeogr. Palaeoclimatol. Palaeoecol. 408, 48–59. doi: 10.1016/j.palaeo.2014.05.001 Lecavalier, B. S., Milne, G. A., Simpson, M. J. R., Wake, L., Huybrechts, P., Tarasov, L., et al. (2014). A model for Greenland ice sheet deglaciation constrained by observations of relative sea level and ice extent. Quat. Sci. Rev. 102, 54–84. doi: 10.1016/j.quascirev.2014.07.018 Magurran, A. E., Baillie, S. R., Buckland, S. 
T., Dick, J. M., Elston, D. A., Scott, M., et al. (2010). Long-term datasets in biodiversity research and monitoring: assessing change in ecological communities through time. Trends Ecol. Evol. 25, 574–582. doi: 10.1016/j.tree.2010.06.016 Mann, M. E., Bradley, R. S., and Hughes, M. K. (1999). Northern Hemisphere temperatures during the last millennium: inferences, uncertainties and limitations. Geophys. Res. Lett. 26, 759–762. doi: 10.1029/1999GL900070 Ojala, A. E. K., Francus, P., Zolitschka, B., Besonen, M., and Lamoureux, S. F. (2012). Characteristics of sedimentary varve chronologies–a review. Quat. Sci. Rev. 43, 45–60. doi: 10.1016/j.quascirev.2012.04.006 Peng, C. H., Guiot, J., Wu, H., Jiang, H., and Luo, Y. (2011). Integrating model data in ecology and palaeoecology: advances towards a model-data fusion approach. Ecol. Lett. 14, 522–536. doi: 10.1111/j.1461-0248.2011.01603.x Regnier, P., Friedlingstein, P., Ciais, P., Mackenzie, F. T., Gruber, N., Janssens, I. A., et al. (2013). Anthropogenic perturbation of the carbon fluxes from land to ocean. Nat. Geosci. 6, 597–607. doi: 10.1038/ngeo1830 Restrepo, A., Colinvaux, P., Bush, M., Correa-Metrio, A., Conroy, J., Gardener, M. R., et al. (2012). Impacts of climate variability and human colonization on the vegetation of the Galápagos Islands. Ecology 93, 1853–1866. doi: 10.1890/11-1545.1 Riboulleau, A., Bout-Roumazeilles, V., and Tribovillard, N. (2014). Controls on detrital sedimentation in the Cariaco Basin during the last climatic cycle: insights from clay minerals. Quat. Sci. Rev. 94, 62–73. doi: 10.1016/j.quascirev.2014.04.023 Rull, V., Montoya, E., Nogué, S., Vegas-Vilarrúbia, T., and Safont, E. (2013). Ecological paleoecology in the neotropical Gran Sabana region: long-term records of vegetation dynamics as a basis for ecological hypothesis testing. Persp. Plant Ecol. Evol. Syst. 15, 338–359. doi: 10.1016/j.ppees.2013.07.004 Seddon, A. W. R., Mackay, A. W., Baker, A. G., Birks, H. J. B., Breman, E., Buck, C. E., et al. (2014). Looking forward through the past: identification of 50 priority questions in palaeoecology. J. Ecol. 102, 256–267. doi: 10.1111/1365-2745.12195 Smith, B., Warlind, D., Arneth, A., Hickler, T., Leadley, P., Stilberg, J., et al. (2014). Implications of incorporating N cycling and N limitations on primary production in an individual-based dynamic vegetation model. Biogeosci. 11, 2027–2054. doi: 10.5194/bg-11-2027-2014 Vegas-Vilarrúbia, T., Rull, V., Montoya, E., and Safont, E. (2011). Quaternary palaeoecology and nature conservation with an emphasis on global warming and fire, with examples from the Neotropics. Quat. Sci. Rev. 30, 2361–2388. doi: 10.1016/j.quascirev.2011.05.006 Williams, J. W., Jackson, S. T., and Kutzbach, J. E. (2007). Projected distributions of novel and disappearing climates by 2100 AD. Proc. Natl. Acad. Sci. U.S.A. 104, 5738–5742. doi: 10.1073/pnas.0606292104 Willis, K. J., Bailey, R. M., Bhagwat, S., and Birks, H. J. B. (2010). Biodiversity baselines, thresholds and resilience: testing predictions and assumptions using palaeoecological data. Trends Ecol. Evol. 25, 583–591. doi: 10.1016/j.tree.2010.07.006 Wolfe, A. P., Baron, J. S., and Cornett, R. J. (2000). Anthropogenic nitrogen deposition induces rapid ecological changes in alpine lakes of the Colorado Front Range (USA). J. Paleolimnol. 25, 1–7. 
doi: 10.1023/A:1008129509322 Keywords: time continuum, long-term ecology, ecology-palaeoecology synergy, time series, long-term observatories Citation: Rull V (2014) Time continuum and true long-term ecology: from theory to practice. Front. Ecol. Evol. 2:75. doi: 10.3389/fevo.2014.00075 Received: 08 September 2014; Accepted: 31 October 2014; Published online: 17 November 2014. Edited by:Jean Nicolas Haas, University of Innsbruck, Austria Reviewed by:John Birks, University of Bergen, Norway Simon J. Goring, University of Wisconsin - Madison, USA Copyright © 2014 Rull. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) or licensor are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms. *Correspondence: Valentí Rull, Palynology and Paleoecology Lab, Botanic Institute of Barcelona (IBB-CSIC-ICUB), Passeig del Migdia, s/n (Parc de Montjuïc), 08038 Barcelona, Spain e-mail: email@example.com
<urn:uuid:142b6b48-d68d-42b7-95a9-c9cf190b32d4>
2.828125
8,575
Academic Writing
Science & Tech.
47.612972
95,564,845
Groundwater Depletion Contributes to Sea Level Rise. By Heather Carr, May 10, 2012. A new study indicates that groundwater pulled from aquifers for drinking water, irrigation, and other uses has a measurable effect on sea level rise. The study, published in Geophysical Research Letters, shows that groundwater depletion has more than doubled in recent decades due to increased water demand. At the same time, fewer dams being built and the removal of older dams allow the groundwater to reach the ocean more quickly. The researchers estimate that by 2050, groundwater depletion will contribute 0.87 mm annually to sea level, for a total of 31 mm between 2015 and 2050.
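The two projections quoted above are roughly consistent with each other: treating the 2050 rate as if it applied across the whole 2015–2050 window gives a total close to the reported 31 mm. A trivial check:

```python
rate_mm_per_year = 0.87          # projected annual contribution by 2050
years = 2050 - 2015              # 35-year window
print(rate_mm_per_year * years)  # ~30.5 mm, close to the reported ~31 mm total
```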
<urn:uuid:36462078-99e3-4571-8307-9b0bfc47e206>
2.96875
346
Truncated
Science & Tech.
39.145
95,564,863
The C Preprocessor
You can prepend directories to the list of quote directories with the -iquote option.
The argument of ‘#include’, whether delimited with quote marks or angle brackets, behaves like a string constant in that comments are not recognized, and macro names are not expanded. Thus, #include <x/*y> specifies inclusion of a system header file named x/*y. However, if backslashes occur within file, they are considered ordinary text characters, not escape characters. None of the character escape sequences appropriate to string constants in C are processed. #include "x\n\\y" specifies a filename containing three backslashes. (Some systems interpret ‘\’ as a pathname separator. All of these also interpret ‘/’ the same way. It is most portable to use only ‘/’.)
It is an error if there is anything (other than comments) on the line after the file name.
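A small, self-contained sketch of the two delimiter forms and the -iquote option described above; the local header name and directory layout are assumptions for the example, not part of the manual text.

```c
/* main.c - illustrating the two #include forms.
 *
 * Assumed layout for this example:
 *   include/util.h   (a hypothetical project header)
 *   main.c
 *
 * Build with GCC, prepending "include" to the quote directories:
 *   gcc -iquote include -o demo main.c
 */
#include <stdio.h>   /* angle brackets: searched in the system directories */
#include "util.h"    /* quote marks: quote directories (here "include") are searched first */

int main(void) {
    printf("both headers were found and included\n");
    return 0;
}
```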
<urn:uuid:d54bbbc3-fee4-4a2d-ac8f-c78a5d504ca4>
2.75
217
Documentation
Software Dev.
38.807652
95,564,915
For 2,3-butanediol, there are 3 stereoisomers; 2 are enantiomers, one is not. Use Fischer projections and draw all three. Identify the stereocenters and classify the isomers as enantiomers or the meso compound.
The meso compound can be shown with either both -OH groups on the right of the Fischer projection, or both on the left. This solution provides a brief response, including diagrams to show the meso compound in a Fischer projection. Fischer projections are used to draw stereocenters.
<urn:uuid:88c706b3-3b75-445a-8b18-60e9fa8eab58>
2.890625
135
Tutorial
Science & Tech.
61.245
95,564,957
WASHINGTON, Sept 28 — Rocky outcrops in eastern Canada contain what may be some of the oldest evidence of life on Earth, dating back about 3.95 billion years. Scientists said yesterday they found indirect evidence of life in the form of bits of graphite contained in sedimentary rocks from northern Labrador that they believe are remnants of primordial marine microorganisms. The researchers carried out a geological analysis of the Labrador rocks and measured concentrations and isotope compositions of the graphite, and concluded that it was produced by a living organism. They did not find fossils of the microorganisms that may have left behind the graphite, a form of carbon, but said they may have been bacteria. “The organisms inhabited an open ocean,” said University of Tokyo geologist Tsuyoshi Komiya, who led the study published in the journal Science. Earth formed about 4.5 billion years ago and the oceans appeared roughly 4.4 billion years ago. The new study and some other recent research indicate that microbial life emerged earlier than previously known and relatively soon after the Earth’s formation. Canada has produced some of the most ancient signs of life. Another team of scientists in March reported that microfossils between 3.77 billion and 4.28 billion years old found in northern Quebec, relatively close to the Labrador site, are similar to the bacteria that thrive today around sea floor hydrothermal vents. Other scientists last year described 3.7 billion-year-old fossilised microbial mats, called stromatolites, from Greenland. — Reuters
<urn:uuid:17a8fb64-48d7-409b-b178-39768a2b5878>
4.0625
321
Truncated
Science & Tech.
43.470179
95,564,962
Species Detail - Angle Shades (Phlogophora meticulosa) - Species information displayed is based on all datasets.
Terrestrial Map - 10km: Distribution of the number of records recorded within each 10km grid square (ITM).
Marine Map - 50km: Distribution of the number of records recorded within each 50km grid square (WGS84).
insect - moth
1 January (recorded in 1896)
31 December (recorded in 2007)
National Biodiversity Data Centre, Ireland, Angle Shades (Phlogophora meticulosa), accessed 19 July 2018, <https://maps.biodiversityireland.ie/Species/79104>
<urn:uuid:bf83fe8c-0cf8-4596-958f-23cf44e79309>
2.65625
142
Structured Data
Science & Tech.
30.306
95,564,983
The goal of Project MySelf is to build a system to collect data about yourself in a safe and private way, so that you control your data and you can decide what happens with it. You track a lot of data about yourself. Some of that you do consciously. You might wear a wristband or a watch logging your daily activity or the miles you do when you do sports. You might use your phone to track your location, the food you eat, or the money you spend. You might record health data such as your weight, blood pressure, pulse. You might keep memories by writing a diary or taking photos. Other data is tracked by your environment or by others. Your bank tracks how much money you spend, your house tracks how much energy you consume, your navigation system tracks where you are going. There are many other examples. Technology has made it tremendously easy to acquire and collect data. With your phone you have a powerful computer in your pocket, packed with sensors. More and more devices are connected to the Internet. Scales are sending weight data to the cloud. Phones are recording the path of your latest run or bike ride. A sensor to record your movements fits into a nice piece of jewelry. The Internet of Things is promising that there will be much more of that in the future, and with devices like the Raspberry Pi you are even able to easily build parts of it yourself. Cloud services are queuing up to get your data and, while helping you to make use of it, build a business on top of it. Tracking data about yourself can be very useful. You can use it to improve your health, gain insight into your life, control your behavior, remember good moments. You can learn and decide based on data, which provides facts. This helps you make better decisions, gain a better understanding, and do the right things. So there is value in tracking your data, and services which help you to make use of the data provide some of this value. But handing your data to those services raises an even bigger problem. Combining data multiplies its value. You can answer the really interesting questions. Does my diet work? How much money do I spend, when I'm on vacation? What indicators predict my health? How does my environment influence my happiness? There are many more questions where data about yourself can give some answer, and the answers are even more sensitive and private than any of the individual data sets. If you want to make use of this, you want to make sure that you know exactly what happens there. How can you be sure? We need to solve this problem. I should be in control of data about myself. I should be able to decide what data I share, and if I do it at all. I should be the person who gains the insight into what can be learned from combining all the data I track about myself. I should own my data and be sure of it. Project MySelf is the attempt to solve this problem. Its goal is to create a system which makes it possible to collect data about yourself in a safe and private way, so that you control your data and you can decide what happens with it.
The Game Plan
Data is acquired on many different devices. A server in the cloud is the convenient way to collect it. To ensure privacy the data will be encrypted on the client, and the server won't have the keys required to decrypt it. This way you can use cloud servers without having to trust them not to look into the data.
Using asymmetric encryption, clients writing data only need to know the public key used for encryption, and only clients which read the data need the private key required for decryption. The type of data which is in scope of this project is self-tracking data. This is data about yourself, which is recorded over time. It typically is a small amount of data acquired in regular intervals or on demand triggered by some activity. The most interesting use of the data usually is tracking it over time and correlating it with data from different sources or covering different aspects of your activity.
The primary deliverable of Project MySelf is the definition of an API for clients to talk to the server, to send and synchronize data. The API includes the format of the containers of the data, the definition of how to encrypt the data, and some conventions for the actual data payload. The goal of the API is to allow alternative server implementations and a multitude of clients.
Other important deliverables of Project MySelf are reference implementations of the server and a client, and a test suite to validate conformance of server implementations to the API specification. The server should be easy to deploy. It is used by multiple clients, but only by a single user, so it does not have high scaling requirements.
There will be several different clients:
- Simple clients just sending data to the server. They will run on special devices depending on the kind of data they track.
- Clients to show tracked data. These will read from the server and display results to the user. This is an area for many interesting solutions, covering different types of data, analysis and correlation of data, and sophisticated ways of visualisation.
- Clients to administrate the server. This could be a web interface running on the server itself or a special client using the API remotely.
- Importers to get data from an existing service and import it into the safe and private storage of Project MySelf.
- Exporters to share selected data with others or to store it as backup.
- Hybrid clients combining several or all of the areas above.
The goal for the first step is to collect one specific type of data on multiple devices, store it safely on the server, and have a client to plot the data over time, per device or combined.
Project MySelf consists of a number of key elements. Each is a component living in its own git repository. The first element is the API specification. This will be hosted in the main repository along with any general documentation or other central material and code. The second element is the reference implementation of the server. Its name will be Mycroft. The code is in the mycroft repository. The third element is the reference client. It will be a command line client operating on the API and will include some simple data acquisition features. Its name will be Myer. The code is in the myer repository. The fourth element is a graphical client for display of data. This client will have access to all the data and will be the primary place for visualization and analysis of data. It will be called Myles. The code is in the myles repository. The fifth element is a client for mobile devices. It will acquire data from sensors on a phone and allow users to manually put in specific data. This client will be called Myla. Maybe we will need a separate client for administration of the server. This would have the name Mychael then.
- The user owns and controls the data.
- Data is encrypted before it is transmitted and stored on the server, so that there is no need to trust the server.
- The client is assumed to be a trusted environment, where it is safe to store secrets. The degree of trust necessary there is gradual and over time we might introduce ways to also operate in a less trusted client environment.
- Data can reliably and quickly be synchronized between clients.
- The API is the central specification. Alternative implementations of server and client and integration with other services are desired and welcome.
- The project is developed as free software in an iterative way.
Getting It Done
There is some work ahead to make this project reality. Tasks are tracked on Trello. Hack Week 12 is just around the corner, and I plan to get more work done on this project. If you would like to join me or contribute in any way, you are more than welcome. Project MySelf is started and maintained by Cornelius Schumacher. If you want to join in, have questions, or want to discuss the project, please don't hesitate to contact me.
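As a rough illustration of the encryption model described above (writing clients hold only the public key, and only reading clients holding the private key can decrypt; the server never sees plaintext), here is a minimal sketch using libsodium's sealed boxes. This is not taken from the Mycroft or Myer code; the payload string is a made-up example.

```c
#include <sodium.h>
#include <stdio.h>
#include <string.h>

int main(void) {
    if (sodium_init() < 0) return 1;           /* library must initialize */

    /* Key pair generated once by the reading client; only the public key
     * is handed to writing clients, and the server never sees sk.       */
    unsigned char pk[crypto_box_PUBLICKEYBYTES];
    unsigned char sk[crypto_box_SECRETKEYBYTES];
    crypto_box_keypair(pk, sk);

    const char *sample = "2015-06-01T08:00:00Z weight=81.4kg";  /* illustrative payload */
    unsigned char sealed[crypto_box_SEALBYTES + 64];
    crypto_box_seal(sealed, (const unsigned char *)sample, strlen(sample), pk);
    /* 'sealed' is what a writing client would upload to the server. */

    unsigned char opened[64 + 1] = {0};
    if (crypto_box_seal_open(opened, sealed,
                             crypto_box_SEALBYTES + strlen(sample), pk, sk) != 0)
        return 1;                               /* only holders of sk can read */
    printf("decrypted on reading client: %s\n", (char *)opened);
    return 0;
}
```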
<urn:uuid:411f750a-5831-40e1-835c-c3db220c0cfb>
2.84375
1,666
About (Org.)
Software Dev.
53.752302
95,565,016
By: Marten A Hemminga (Author), Carlos M Duarte (Author)
298 pages, b/w photos, b/w illustrations, 12 tables
Seagrasses occur in coastal zones throughout the world, in the part of the marine habitat that is most heavily influenced by humans. Decisions about coastal management therefore often involve seagrasses, but despite a growing awareness of the importance of these plants, a full appreciation of their role in coastal ecosystems has yet to be reached. Seagrass Ecology provides an entry point for those wishing to learn about their ecology, and gives a broad overview of the state of knowledge, including progress in research and research foci, complemented by extensive literature references to guide the reader to more detailed studies. It will be valuable to students of marine biology wishing to specialize in this area and also to established researchers wanting to enter the field. In addition, it will provide an excellent reference for those involved in the management and conservation of coastal areas that harbour seagrasses.
"[...] an excellent source of information for those involved in the management and conservation of coastal areas that harbour seagrasses."
"Whilst the individual chapters will appeal to specialists in these areas, overall there is something for everyone." - Alan Bedford, Biologist
1. Taxonomy and distribution
2. Seagrass architectural features
3. Population and community dynamics
4. Light, carbon and nutrients
5. Elemental dynamics in seagrass systems
6. Fauna associated with seagrass systems
7. Seagrasses in the human environment
<urn:uuid:0bd10fec-efda-4458-bb47-989dff067aa3>
2.75
409
Product Page
Science & Tech.
34.674283
95,565,021
Why the earth is zero potential? In what context do you ask this? Are you asking why AC power systems use the local ground potential as the zero-point? Why the earth is zero potential (0 V)? In electrical terms. It is potential differences that matter, not potential values. However, one might object to that and wonder: Why should the Earth's potential be constant? The short answer to that is that any addition of electric charge down there won't usually affect it (the ground is very large). However, in extremely rare cases, like massive dumps of charge, the Earth potential might change somewhat on a local scale. And that is a bother for electricians. Electrical potential is defined only in terms of differences, that is, the potential difference between two points can be calculated, but there is no absolute definition of the electrical potential at one point (this is because the physically observable field, the electric field, is given by a derivative of the potential, so only differences are physical). The Earth is often chosen to be the zero point for convenience, much like choosing the origin of a coordinate system. It is not the only possible choice, and in fact the potential is often taken to be zero at r = infinity, so that the potential at any distance closer to a source charge at r = 0 will be negative. EDIT: Sorry, I posted this response a few milliseconds after arildno posted his. (hers?) Earth-grounding is a means of establishing a reasonably-consistent zero point for the transmission of AC power. Please note that ground potential is not the same everywhere, and can swing widely with lightning strikes nearby, more slowly with magnetic storms, etc, and there may be other events that can induce currents in the ground so that ground potential is changed. Dairy farms are often afflicted with stray voltage problems that shock cows and make them nervous and unproductive. Imagine that you're a cow that is going to be milked, and as you are led into the milking stall, your wet nose touches the pipe-frame of the stall and you get a nice shock! Such problems can be caused by these differentials in ground potential. They are generally addressed by improving the ground-reference integrity of the neutral line to local ground, and may require that the primary and secondary neutral/ground reference at the transformer be separated. Ground zero-point is not the same everywhere. You can also have AC power that is not referenced to earth ground and has a zero-point that is not at ground potential. Edit: This is a REALLY responsive forum. I got claim-jumped by two other members before I could compose what I hope was a cogent explanation. It is truly unimaginable. I can't understand how something is zero potential or whatever. It must have some property which makes the ground zero potential. And how then does Fe have, let's say, 1 volt of electric potential, Cu 0.3 volts, or whatever? Sometimes voltage is defined as electron's pressure. Once again, potential is not absolute, it's relative. You can pick any point and call it "zero" and then measure all other voltages in reference to that point. The Earth is a convenient choice, since it does not change for most practical purposes. There is no reason to say that one material (e.g. Fe) has a different potential than another (Cu). You can charge up any conductive material to whatever voltage you like (within reason), if you have means to do so, e.g. a generator. If the ground doesn't have 0 potential, do you know what will happen if I connect R with 0?
Look on this http://img106.imageshack.us/img106/2797/68664788cd4.jpg" [Broken] The whole system will burn out. So it means that the ground is zero potential in most of the cases, but the question is why? In an AC power system, ground is CHOSEN as the zero-point because it is a relatively stable reference. Ground may not be at the same electrical potential at the generating station and at a step-down transformer outside your house, but it is close enough for purposes of commercial AC transmission. Ground is not some magical zero-potential electrical state - it is a relatively stable reference that is accessible everywhere, so the power companies exploit that. If the ground is not zero potential, why when I connect R and 0, the R will burn out? If ground is not zero potential, then the potential of R will be higher, or the potential difference between them will be smaller. It's a matter of choice. The power company has designed earth ground to be the zero, you are forced to follow the same convention. You might consider that the resistance of ground (dirt if you will) approaches zero when the current spreads out. For most intents and purposes it works just like any other conductor. Think of ground as a piece of wire. If I get a piece of ground and put it in a pan tile, and then make an electric circuit, will it work the same as if I make the electric circuit with the ground? Do you understand the concept of parallel resistance? The ground works like that. This is why ground planes tend to involve long pieces of metal conductor buried in the ground. I don't know what you mean. Please explain. Thanks. If you have two 100 ohm resistors in parallel then the effective resistance is 50 ohms. If you have four 100 ohm resistors in parallel then the effective resistance is 25 ohms. Do you understand that? I'm not sure. This is a basic principle in electricity. Perhaps one of your instructors can help you select a book, in your language, that will help you understand. Ok, I checked my dictionary and understand what you mean by the efficiency. So what next? Yes. Can you give me the end of your analogy?
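A quick numeric check of the parallel-resistance figures mentioned in the thread; this is a minimal sketch, not code from any of the posters.

```c
#include <stdio.h>

/* Effective resistance of n equal resistors of value r in parallel:
 * 1/R_eff = n * (1/r)  =>  R_eff = r / n                           */
static double parallel_equal(double r, int n) {
    return r / n;
}

int main(void) {
    printf("two 100 ohm resistors in parallel:  %.1f ohm\n", parallel_equal(100.0, 2)); /* 50.0 */
    printf("four 100 ohm resistors in parallel: %.1f ohm\n", parallel_equal(100.0, 4)); /* 25.0 */
    return 0;
}
```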
<urn:uuid:501951e2-406d-4d54-8fde-7038f3f66843>
2.921875
1,253
Comment Section
Science & Tech.
58.586206
95,565,022
How whale corpses feed ocean floors Whales that end up on the beach make headlines. But those that sink to the bottom of the ocean make new homes for sea life. The brawn, blubber, and bone of these unlucky cetaceans—some 70,000 of which perish during harrowing seasonal migrations each year—nourish a vibrant, constantly evolving community of creatures. Opportunistic eaters can flourish on a decaying corpse for anywhere from a decade to a century. Marine biologists have discovered dozens of new critters in these deep-sea ecosystems since they first encountered one in 1987. For animals at the bottom of the ocean, a whale fall is a meal ticket. But for the humans above, it’s a helpful reminder that, even in the modern era, most underwater mysteries remain unsolved. This timeline tracks what little we know about a gray whale’s decomposition, and the friends it makes along the way. Hagfish: Hagfish dine on dead and dying animals, in part to fuel their numerous slime-producing glands. Dead bodies tend to bloat—and float—in the early stages of decomposition. But as carbon dioxide, methane, and other gases dissipate, whales descend to depths of a few hundred or thousand feet. Once there, death quickly gives way to a vibrant flurry of life. Squat lobsters: Squat lobsters thrive on ocean trash, including decaying whales and even shipwrecked wood. Scavengers such as pincher crabs and sharp-toothed sharks are the first to feast. They start eating shortly after the big sink, and they can munch for months on the plentiful buffet of organs, skin, and muscle. But their meat-loving mouths leave plenty behind. With little meat left, worms, snails, and other hungry organisms can take advantage of the delicious biofilms—slimy bacteria and other microorganisms—that now coat the corpse. It doesn’t look like much to our eyes, but for the right animal, the carcass is ripe for the ripping. Osedax roseus, or “bone-devouring redhead,” embed themselves in bones and eat lipids within. As more of the skeleton becomes exposed, bone-eating bacteria flock to their supper. Scientists have seen some of these tiny beings only on a whale fall, spurring biologists to deflate dead beached cetaceans, add weights, and send them deep down for study. Clams: Certain enterprising clams store sulfide-oxidizing bacteria in their gill cells, allowing them to turn sulfur into sugar. Eventually, all that remains of the great gray are nutrient-rich particles, which have leached into the soil beneath the deep-sea cemetery. As bacteria chow down, they excrete sulfurous waste, creating the perfect environment for mollusk colonies fueled by the gases.
<urn:uuid:2fa6cb4e-a9ce-4d89-86ac-d17c623d5f5b>
3.515625
604
Truncated
Science & Tech.
48.910906
95,565,035
Understanding nuclear structure and dynamics from the underlying strong force described by QCD is one of the major challenges in nuclear physics today. Exotic nuclei, with extreme ratios of proton-to-neutron number, provide an ideal playground to address key questions in nuclear physics and astrophysics.
- How does the structure of nuclei change with temperature, isospin and angular momentum?
- How does the strong force bind the nucleons together in atomic nuclei?
- What is the origin of elements in Nature?
- What are the nuclear processes involved in the evolution of the stars?
Accessing the limits of the nuclear chart is very challenging and highly innovative equipment is required to reach further regions of the nuclear chart. The active target and time projection chamber (ACTAR TPC) is a novel gas-filled detection system that will permit new studies into the structure and decays of the most exotic nuclei. The use of a gas volume that acts as a sensitive detection medium and as the reaction target itself (an “active target”) offers considerable advantages over traditional nuclear physics detectors and techniques. In high-energy physics, TPC detectors have found profitable applications but their use in nuclear physics has been limited. With the ACTAR TPC design, individual detection pad sizes of 2×2 mm2 are the smallest ever attempted in either discipline but are a requirement for high-efficiency and high-resolution nuclear spectroscopy. The corresponding large number of electronic channels (16000 from a surface of only 25×25 cm) requires new developments in high-density electronics and data-acquisition systems that are not yet available in the nuclear physics domain.
Funded by the European Research Council (ERC) in 2014, ACTAR TPC is an ambitious project responsible for the development of a novel and versatile detector system for rare-isotope beam experiments at GANIL (France) and CERN-ISOLDE (Switzerland) and other facilities worldwide. The ACTAR TPC collaboration is composed of researchers and engineers from GANIL, CENBG, IPN Orsay in France, the K.U. Leuven in Belgium, and the Universidade de Santiago de Compostela in Spain.
<urn:uuid:fac3c46a-6d4c-4af4-b7be-067baffe2723>
2.90625
458
About (Org.)
Science & Tech.
30.392255
95,565,037
Using data sets collected north of San Francisco Bay (CA), an ArcGIS classification toolset was developed using supervised image classification tools to characterize potential shallow marine benthic habitats. First-derivative images and a topographic algorithm, called Bathymetric Position Index, were created from the bathymetry data set using ArcGIS Spatial Analyst tools. Backscatter intensity was also analyzed by creating training samples based on the collected sediment samples and then applying multivariate statistical tools to delineate textural classes. The data collected revealed a rugged and complex seafloor and imaged in detail basement and bedrock outcrops, sand and gravel bedforms, and flat sediment-covered seabed.
<urn:uuid:7e2251e0-5df1-447d-875f-08b2068a52f4>
3.078125
161
Academic Writing
Science & Tech.
-3.778686
95,565,040
The study, published today in the open access journal BMC Ecology, shows that when salmon is available, wolves will reduce deer hunting activity and instead focus on seafood. Chris Darimont from the University of Victoria and the Raincoast Conservation Foundation, Canada, led a team of researchers who studied the feeding habits of wolves in a remote 3,300km2 area of British Columbia. As Darimont describes, “Over the course of four years, we identified prey remains in wolf droppings and carried out chemical analysis of shed wolf hair in order to determine what the wolves like to eat at various times of year”. For most of the year, the wolves tend to eat deer, as one would expect. During the autumn, however, salmon becomes available and the wolves shift their culinary preferences. According to the authors, “One might expect that wolves would move onto salmon only if their mainstay deer were in short supply. Our data show that this is not the case, salmon availability clearly outperformed deer availability in predicting wolves’ use of salmon.” This work gives researchers as much insight into salmon ecology as wolf ecology. Darimont’s mentor and co-author Thomas Reimchen, also of the University of Victoria, admits, “Salmon continue to surprise us, showing us new ways in which their oceanic migrations eventually permeate entire terrestrial ecosystems. In terms of providing food and nutrients to a whole food web, we like to think of them as North America’s answer to the Serengeti’s wildebeest.” The authors explain that the wolves’ taste for fishy fare is likely based on safety, nutrition and energetics. Darimont said, “Selecting benign prey such as salmon makes sense from a safety point of view. While hunting deer, wolves commonly incur serious and often fatal injuries. In addition to safety benefits we determined that salmon also provides enhanced nutrition in terms of fat and energy”. The research also warns that this already vestigial predator-prey relationship – one that once spread from California to Alaska – might not be around forever. Darimont cautions, “There are multiple threats to salmon systems, including overexploitation by fisheries and the destruction of spawning habitats, as well as diseases from exotic salmon aquaculture that collectively have led to coast-wide declines of up to 90% over the last century”.
<urn:uuid:8348c577-b3b0-45ed-a4c0-d02570be5383>
3.5
1,152
Content Listing
Science & Tech.
38.478495
95,565,055
HI-MEMS: Control Circuits Embedded In Pupal Stage Successfully Cornell University researchers have succeeded in implanting electronic circuit probes into tobacco hornworms as early pupae. The hornworms pass through the chrysalis stage to mature into moths whose muscles can be controlled with the implanted electronics. (Tobacco hornworm with circuit and electrode implanted in pupal stage.) The pupal insertion state is shown in insert "i" in the picture seen above. The successful emergence of a microsystem-controlled insect is shown in insert "ii;" the microsystem platform is shown held with tweezers. The X-ray image (A) shows the probes inserted into the dorsoventral and dorsolongitudinal flight muscles. CT images (B) show components of high absorbance indicating tissue growth around the probe. (Results of insertions done at different stages of metamorphosis.) The research also indicated the most favorable and least favorable times for insertion of control devices. The overall size of the circuit board is 8x7mm, with a total weight of about 500 mg. The capacity of the battery is 16 mAh, and weighs 240 mg. A driving voltage of 5 volts causes the tobacco hornworm blade muscles (two pairs) to move for flight and maneuvering. DARPA HI-MEMS program director Amit Lal credits science fiction writer Thomas Easton with the idea. Lal read Easton's 1990 novel Sparrowhawk, in which animals enlarged by genetic engineering were outfitted with implanted control systems. Dr. Easton, a professor of science at Thomas College, sees a number of applications for HI-MEMS insects. Moths are extraordinarily sensitive to sex attractants, so instead of giving bank robbers money treated with dye, they could use sex attractants instead. Then, a moth-based HI-MEMS could find the robber by following the scent." "[Also,] with genetic engineering Darpa could replace the sex attractant receptor on the moth antennae with receptors for other things, like explosives, drugs or toxins," said Easton. DARPA had better be careful with its insect army; in Easton's novel, hackers are able to gain control of genetically engineered animals by hacking the controller chips used in their implanted control structures. Read more about Roachsters, full-size anthropod-based vehicles with embedded control structures from Easton's 1990 book Sparrowhawk. If you are interested in one dark-side view of how this kind of invention could be used by corporations, see the madcap blurbflies from Jeff Noon's excellent 2000 novel Nymphomation. Learn more about HI-MEMS (Hybrid Insect Micro-Electrico-Mechanical Systems) Sought By DARPA; additional research reviewed at HI-MEMS: Cyborg Beetle Microsystem. Via Robot Watch. See also this informative article Darpa hatches plan for insect cyborgs to fly reconnaissance. (Story submitted 1/27/2008)
<urn:uuid:850d98db-c31c-4475-98fd-ea5c3cef49d3>
2.84375
1,326
Content Listing
Science & Tech.
51.818181
95,565,060
A question of execution time
We all know that the complexity of algorithms we use influences a lot the performance of our programs. The difference between having linear time algorithms and having constant time algorithms is seen quite soon, after the first hundreds of items. So here's a simple question. What's the complexity of adding an element at the end of a linked list?
The naive answer usually is ‘constant time’. But let's check the usual implementations. Let's assume we have the usual node structure (a value plus a pointer to the next node; see the sketch at the end of this post). This implementation is clearly NOT in constant time because, well, we have to start from the head, get all the way to the end of the list and add the element there. Time: O(n). At least. More later.
The obvious fix would be to also have the tail of the list. To make the tail available as well. So let's expand our data structures with a ‘list’ structure that will contain the head and the tail (again, see the sketch below). Great! Problem solved! We have constant time for adding. Don't we? Beginner's luck, we're safe at bay!
What is the execution time of malloc?
Good, now that we found out that it's not so easy to do that, we can say goodbye to the idea that a dynamically allocated linked list can add an element in O(1), simply because we first have to allocate the element, and malloc comes with no constant-time guarantee. In academia it's a good and nice O(1) solution, in RealLife™ it's not. So what do we do? The solutions I found so far are not perfect by far. First solution would be to have something like a smart pool that will give us the elements in a more or less constant time (one idea would be to have a buffer of pre-allocated node structures, and then one could perform a circular search from the last ‘allocated’ node). Another solution would be to not do dynamic allocation at all, but that defeats the purpose of having a linked list in the first place.
One reminder. Never forget that just because you have a one liner doesn't mean that the execution time for that line is constant. And memory allocation can be one of the most critical operations for the performance of your program.
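The code listings this post refers to did not survive; the following is a minimal sketch of the structures and the tail-based append being discussed. The names are illustrative, not the author's originals.

```c
#include <stdlib.h>

/* A singly linked list node, as assumed in the post. */
struct node {
    int value;
    struct node *next;
};

/* Wrapper keeping both ends, so appending needs no traversal. */
struct list {
    struct node *head;
    struct node *tail;
};

/* Append at the end: O(1) pointer work, but the malloc call itself
 * carries no constant-time guarantee, which is the point made above. */
int list_append(struct list *l, int value)
{
    struct node *n = malloc(sizeof *n);   /* cost: implementation-defined */
    if (!n)
        return -1;
    n->value = value;
    n->next = NULL;
    if (l->tail)
        l->tail->next = n;
    else
        l->head = n;
    l->tail = n;
    return 0;
}
```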
<urn:uuid:4e467265-3750-4d9b-b67f-4bce9371ea27>
2.75
485
Personal Blog
Software Dev.
63.091318
95,565,061
The term greenhouse gas (sometimes abbreviated GHG) is used for a gas in an atmosphere that absorbs and emits radiation within the thermal infrared range. The theory of global warming and climate change relies on these gasses, particularly carbon dioxide, acting in the same role as the glass panes of a real greenhouse, absorbing the infrared energy from the heated ground, then re-emitting it back downwards, adding an extra component to the insulating, or blanketing, properties the air already has. This process is said to be the fundamental cause of the greenhouse effect. The primary infrared-absorbing and emitting gases in Earth's atmosphere are water vapor, carbon dioxide, methane, nitrous oxide, and ozone. In the Solar System, the atmospheres of Venus and Mars also contain carbon dioxide that is hypothesized to cause a greenhouse effect on those planets. The "greenhouse effect" of gasses in the atmosphere is used to make an analogy with ordinary greenhouses, which get warmer in sunlight. The term is misapplied, because a real greenhouse gets warmer by physically trapping the heated air inside, not by re-radiation of infrared from the glass walls and roof. A greenhouse works by letting sunlight in, which warms the solid and liquid surfaces inside the structure. This heat energy is then transferred to the air inside the greenhouse. The heated air, which rises (convection), is prevented from leaving the greenhouse. The warmed air is physically trapped inside by the walls and roof of the greenhouse. The explanation given for the warmer temperature in a real greenhouse, in many sources that promote the "radiative" greenhouse effect, is that incident solar radiation in the visible, long-wavelength ultraviolet, and short-wavelength infrared range of the spectrum passes through the glass roof and walls and is absorbed by the floor, earth, and contents, which become warmer and re-emit the energy as longer-wavelength infrared radiation. Glass and other materials used for greenhouse walls do not transmit infrared radiation, so the infrared cannot escape via radiative transfer, and so returns inside to heat the air and ground some more. How a greenhouse actually works The "greenhouse effect" of the atmosphere is named by analogy to greenhouses which get warmer in sunlight. The explanation given in most sources for the warmer temperature in an actual greenhouse is that incident solar radiation in the visible, long-wavelength ultraviolet, and short-wavelength infrared range of the spectrum passes through the glass roof and walls and is absorbed by the floor, earth, and contents, which become warmer and re-emit the energy as longer-wavelength infrared radiation. Glass and other materials used for greenhouse walls do not transmit infrared radiation, so the infrared cannot escape via radiative transfer. As the structure is not open to the atmosphere, heat also cannot escape via convection, so the temperature inside the greenhouse rises. The greenhouse effect, due to infrared-opaque "greenhouse gases" including carbon dioxide and methane instead of glass, also affects Earth as a whole; there is no convective cooling because no significant amount of air escapes from Earth. However, a significant experiment shows that the mechanism by which the atmosphere retains heat—the "greenhouse effect"—is different; a greenhouse is not primarily warmed by the "greenhouse effect". A greenhouse works by allowing sunlight to warm solid and liquid surfaces inside the structure. 
This heat energy is then transferred to the air inside the greenhouse. The heated air, which rises by convection, is prevented from leaving the greenhouse. It is trapped inside by the greenhouse glass. A greenhouse is built of any transparent material that lets sunlight through, usually glass or plastic. It mainly warms up because the sun warms the ground and contents inside, which then warms the air in the greenhouse. The air continues to heat because it is confined within the greenhouse, unlike the environment outside the greenhouse where warm air near the surface rises and mixes with cooler air aloft. This can be demonstrated by opening a small window near the roof of a greenhouse: the temperature will drop considerably. It was demonstrated experimentally (R. W. Wood, 1909) that a "greenhouse" with a cover of rock salt (which is transparent to infrared) heats up an enclosure similarly to one with a glass cover. Thus greenhouses work primarily by preventing convective cooling. In contrast, the greenhouse effect heats Earth because rather than retaining (sensible) heat by physically preventing movement of the air, greenhouse gases act to warm Earth by re-radiating some of the energy back towards the surface. This process may exist in real greenhouses, but is comparatively unimportant there.
Greenhouse gas concept used by global warming proponents
A greenhouse gas (sometimes abbreviated GHG) is a gas in an atmosphere that absorbs and emits radiation within the thermal infrared range. This process is the fundamental cause of the greenhouse effect. The primary greenhouse gases in Earth's atmosphere are water vapor, carbon dioxide, methane, nitrous oxide, and ozone.
The following statement needs fact checking. Two of the citations do not mention these numbers; the third is behind a paywall. Without greenhouse gases, the average temperature of Earth's surface would be about 15 °C (27 °F) colder than the present average of 14 °C. The statement above may be referring to the blanketing, or insulation, effects of the atmosphere, regardless of any greenhouse gas.
Human activities since the beginning of the Industrial Revolution (taken to be the year 1750) have produced an estimated 40% increase in the atmospheric concentration of carbon dioxide, from an estimated 280 ppm in 1750 to 402 ppm in 2016. This increase has occurred despite the uptake of a large portion of the emissions by various natural "sinks" involved in the carbon cycle. Anthropogenic carbon dioxide (CO2) emissions (i.e. emissions produced by human activities) come from combustion of carbon-based fuels, principally coal, oil, and natural gas, along with deforestation, soil erosion and animal agriculture. By one estimate, if greenhouse gas emissions continue at the present rate, and if the climate models are accurate, Earth's surface temperature could exceed historical values as early as 2047, with potentially harmful, as well as beneficial, effects on ecosystems, biodiversity and the livelihoods of people worldwide.
- 1 How a greenhouse actually works
- 2 Greenhouse gas concept used by global warming proponents
- 3 Gases in Earth's atmosphere
- 4 Impacts on the overall greenhouse effect
- 5 Natural and anthropogenic sources
- 6 Anthropogenic greenhouse gases
- 7 Role of water vapor
- 8 Direct greenhouse gas emissions
  - 8.1 Regional and national attribution of emissions
  - 8.2 Land-use change
  - 8.3 Greenhouse gas intensity
  - 8.4 Cumulative and historical emissions
  - 8.5 Changes since a particular base year
  - 8.6 Annual emissions
  - 8.7 Top emitter countries
  - 8.8 Embedded emissions
  - 8.9 Effect of policy
  - 8.10 Projections
  - 8.11 Relative CO2 emission from various fuels
- 9 Life-cycle greenhouse-gas emissions of energy sources
- 10 Removal from the atmosphere ("sinks")
- 11 History of scientific research
- 12 See also
- 13 References
- 14 Bibliography
- 15 External links
Gases in Earth's atmosphere
Greenhouse gases are those that absorb and emit infrared radiation in the wavelength range emitted by Earth. In order, the most abundant greenhouse gases in Earth's atmosphere are:
- Water vapor (H2O)
- Carbon dioxide (CO2)
- Methane (CH4)
- Nitrous oxide (N2O)
- Ozone (O3)
- Chlorofluorocarbons (CFCs)
Atmospheric concentrations of greenhouse gases are determined by the balance between sources (emissions of the gas from human activities and natural systems) and sinks (the removal of the gas from the atmosphere by conversion to a different chemical compound). The proportion of an emission remaining in the atmosphere after a specified time is the "airborne fraction" (AF). More precisely, the annual airborne fraction is the ratio of the atmospheric increase in a given year to that year's total emissions. Over the last 50 years (1956–2006) the airborne fraction for CO2 has been increasing at 0.25 ± 0.21%/year.
The major atmospheric constituents, nitrogen (N2), oxygen (O2), and argon (Ar), are not greenhouse gases. This is because molecules containing two atoms of the same element such as N2 and O2 and monatomic molecules such as argon (Ar) have no net change in the distribution of their electrical charges when they vibrate and hence are almost totally unaffected by infrared radiation. Although molecules containing two atoms of different elements such as carbon monoxide (CO) or hydrogen chloride (HCl) absorb infrared radiation, these molecules are short-lived in the atmosphere owing to their reactivity and solubility. Therefore, they do not contribute significantly to the greenhouse effect and usually are omitted when discussing greenhouse gases.
Indirect radiative effects
Some gases have indirect radiative effects (whether or not they are greenhouse gases themselves). This happens in two main ways. One way is that when they break down in the atmosphere they produce another greenhouse gas. For example, methane and carbon monoxide (CO) are oxidized to give carbon dioxide (and methane oxidation also produces water vapor; that will be considered below). Oxidation of CO to CO2 directly produces an unambiguous increase in radiative forcing although the reason is subtle. The peak of the thermal IR emission from Earth's surface is very close to a strong vibrational absorption band of CO2 (667 cm−1). On the other hand, the single CO vibrational band only absorbs IR at much higher frequencies (2145 cm−1), where the ~300 K thermal emission of the surface is at least a factor of ten lower.
On the other hand, oxidation of methane to CO2, which requires reactions with the OH radical, produces an instantaneous reduction, since CO2 is a weaker greenhouse gas than methane; but it has a longer lifetime. As described below this is not the whole story, since the oxidations of CO and CH4 are intertwined by both consuming OH radicals. In any case, the calculation of the total radiative effect needs to include both the direct and indirect forcing.
A second type of indirect effect happens when chemical reactions in the atmosphere involving these gases change the concentrations of greenhouse gases. For example, the destruction of non-methane volatile organic compounds (NMVOCs) in the atmosphere can produce ozone. The size of the indirect effect can depend strongly on where and when the gas is emitted.
Methane has a number of indirect effects in addition to forming CO2. Firstly, the main chemical that destroys methane in the atmosphere is the hydroxyl radical (OH). Methane reacts with OH and so more methane means that the concentration of OH goes down. Effectively, methane increases its own atmospheric lifetime and therefore its overall radiative effect. The second effect is that the oxidation of methane can produce ozone. Thirdly, as well as making CO2 the oxidation of methane produces water; this is a major source of water vapor in the stratosphere, which is otherwise very dry. CO and NMVOC also produce CO2 when they are oxidized. They remove OH from the atmosphere and this leads to higher concentrations of methane. The surprising effect of this is that the global warming potential of CO is three times that of CO2. The same process that converts NMVOC to carbon dioxide can also lead to the formation of tropospheric ozone. Halocarbons have an indirect effect because they destroy stratospheric ozone. Finally, hydrogen can lead to ozone production and CH4 increases, as well as producing water vapor in the stratosphere.
Contribution of clouds to Earth's greenhouse effect
The major non-gas contributor to Earth's greenhouse effect, clouds, also absorb and emit infrared radiation and thus have an effect on radiative properties of the greenhouse gases. Clouds are water droplets or ice crystals suspended in the atmosphere.
Impacts on the overall greenhouse effect
The contribution of each gas to the greenhouse effect is affected by the characteristics of that gas, its abundance, and any indirect effects it may cause. For example, the direct radiative effect of a mass of methane is about 72 times stronger than the same mass of carbon dioxide over a 20-year time frame but it is present in much smaller concentrations so that its total direct radiative effect is smaller, in part due to its shorter atmospheric lifetime. On the other hand, in addition to its direct radiative impact, methane has a large, indirect radiative effect because it contributes to ozone formation. Shindell et al. (2005) argue that the contribution to climate change from methane is at least double previous estimates as a result of this effect. When ranked by their direct contribution to the greenhouse effect, the most important contributors are water vapor and clouds (H2O), carbon dioxide (CO2), methane (CH4), and ozone (O3); water vapor concentrations strongly vary locally.
In addition to the main greenhouse gases listed above, other greenhouse gases include sulfur hexafluoride, hydrofluorocarbons and perfluorocarbons (see IPCC list of greenhouse gases). Some greenhouse gases are not often listed.
For example, nitrogen trifluoride has a high global warming potential (GWP) but is only present in very small quantities.
Proportion of direct effects at a given moment
It is not possible to state that a certain gas causes an exact percentage of the greenhouse effect. This is because some of the gases absorb and emit radiation at the same frequencies as others, so that the total greenhouse effect is not simply the sum of the influence of each gas. The higher ends of the ranges quoted are for each gas alone; the lower ends account for overlaps with the other gases. In addition, some gases such as methane are known to have large indirect effects that are still being quantified.
Aside from water vapor, which has a residence time of about nine days, major greenhouse gases are well mixed and take many years to leave the atmosphere. Although it is not easy to know with precision how long it takes greenhouse gases to leave the atmosphere, there are estimates for the principal greenhouse gases. Jacob (1999) defines the lifetime τ of an atmospheric species X in a one-box model as the average time that a molecule of X remains in the box. Mathematically, τ can be defined as the ratio of the mass m (in kg) of X in the box to its removal rate, which is the sum of the flow of X out of the box (Fout), chemical loss of X (L), and deposition of X (D) (all in kg/s): τ = m / (Fout + L + D). If one stopped pouring any of this gas into the box, then (assuming first-order removal) after a time τ its concentration would fall to about 1/e, roughly 37%, of its initial value. The atmospheric lifetime of a species therefore measures the time required to restore equilibrium following a sudden increase or decrease in its concentration in the atmosphere. Individual atoms or molecules may be lost or deposited to sinks such as the soil, the oceans and other waters, or vegetation and other biological systems, reducing the excess to background concentrations. The average time taken to achieve this is the mean lifetime.
Carbon dioxide has a variable atmospheric lifetime, and cannot be specified precisely. The atmospheric lifetime of CO2 is estimated to be of the order of 30–95 years. This figure accounts for CO2 molecules being removed from the atmosphere by mixing into the ocean, photosynthesis, and other processes. However, this excludes the balancing fluxes of CO2 into the atmosphere from the geological reservoirs, which have slower characteristic rates. Although more than half of the CO2 emitted is removed from the atmosphere within a century, some fraction (about 20%) of emitted CO2 remains in the atmosphere for many thousands of years. Similar issues apply to other greenhouse gases, many of which have longer mean lifetimes than CO2. E.g., N2O has a mean atmospheric lifetime of 114 years.
Earth absorbs some of the radiant energy received from the sun, reflects some of it as light and reflects or radiates the rest back to space as heat. Earth's surface temperature depends on this balance between incoming and outgoing energy. If this energy balance is shifted, Earth's surface could become warmer or cooler, leading to a variety of changes in global climate. A number of natural and man-made mechanisms can affect the global energy balance and force changes in Earth's climate. Greenhouse gases are one such mechanism. Greenhouse gases in the atmosphere absorb and re-emit some of the outgoing energy radiated from Earth's surface, causing that heat to be retained in the lower atmosphere.
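A short numerical sketch of the one-box lifetime defined above; the fluxes below are made-up placeholder values chosen only to show the arithmetic, not measured atmospheric data.

```c
#include <math.h>
#include <stdio.h>

int main(void) {
    /* One-box model: lifetime tau = mass / total removal rate.
     * Placeholder numbers, not real atmospheric fluxes.          */
    double mass    = 1.0e12;   /* kg of species X in the box      */
    double f_out   = 2.0e4;    /* kg/s flowing out of the box     */
    double loss    = 1.0e4;    /* kg/s chemical loss              */
    double deposit = 5.0e3;    /* kg/s deposition                 */

    double tau = mass / (f_out + loss + deposit);   /* seconds */

    /* With first-order removal the burden decays as exp(-t/tau):
     * after one lifetime about 37% remains (a ~63% reduction).   */
    printf("lifetime tau = %.3g s (%.2f years)\n", tau, tau / (365.25 * 86400));
    printf("fraction left after t = tau: %.2f\n", exp(-1.0));
    return 0;   /* compile with -lm for the math library */
}
```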
As explained above, some greenhouse gases remain in the atmosphere for decades or even centuries, and therefore can affect Earth's energy balance over a long time period. Factors that influence Earth's energy balance can be quantified in terms of "radiative climate forcing." Positive radiative forcing indicates warming (for example, by increasing incoming energy or decreasing the amount of energy that escapes to space), whereas negative forcing is associated with cooling.
Global warming potential
The global warming potential (GWP) depends on both the efficiency of the molecule as a greenhouse gas and its atmospheric lifetime. GWP is measured relative to the same mass of CO2 and evaluated for a specific timescale. Thus, if a gas has a high (positive) radiative forcing but also a short lifetime, it will have a large GWP on a 20-year scale but a small one on a 100-year scale. Conversely, if a molecule has a longer atmospheric lifetime than CO2 its GWP will increase with the timescale considered. Carbon dioxide is defined to have a GWP of 1 over all time periods.
Methane has an atmospheric lifetime of 12 ± 3 years. The 2007 IPCC report lists the GWP as 72 over a time scale of 20 years, 25 over 100 years and 7.6 over 500 years. A 2014 analysis, however, states that although methane's initial impact is about 100 times greater than that of CO2, because of the shorter atmospheric lifetime, after six or seven decades, the impact of the two gases is about equal, and from then on methane's relative role continues to decline. The decrease in GWP at longer times is because methane is degraded to water and CO2 through chemical reactions in the atmosphere.
Global warming potential (GWP) for given time horizon:
|Gas|Lifetime (years)|GWP, 20-yr|GWP, 100-yr|GWP, 500-yr|
|CFC-12|100|11 000|10 900|5 200|
|HCFC-22|12|5 160|1 810|549|
|Tetrafluoromethane (CF4)|50 000|5 210|7 390|11 200|
|Hexafluoroethane (C2F6)|10 000|8 630|12 200|18 200|
|Sulfur hexafluoride (SF6)|3 200|16 300|22 800|32 600|
|Nitrogen trifluoride (NF3)|740|12 300|17 200|20 700|
Natural and anthropogenic sources
Aside from purely human-produced synthetic halocarbons, most greenhouse gases have both natural and human-caused sources. During the pre-industrial Holocene, concentrations of existing gases were roughly constant. In the industrial era, human activities have added greenhouse gases to the atmosphere, mainly through the burning of fossil fuels and clearing of forests. The 2007 Fourth Assessment Report compiled by the IPCC (AR4) noted that "changes in atmospheric concentrations of greenhouse gases and aerosols, land cover and solar radiation alter the energy balance of the climate system", and concluded that "increases in anthropogenic greenhouse gas concentrations is very likely to have caused most of the increases in global average temperatures since the mid-20th century". In AR4, "most of" is defined as more than 50%.
|Gas|Pre-industrial level|Recent level|Absolute increase|Percentage increase|Radiative forcing (W/m²)|
|Carbon dioxide (CO2)|280 ppm|395.4 ppm|115.4 ppm|41.2%|1.88|
|Methane (CH4)|700 ppb|1893 ppb|1193 ppb| | |
|Nitrous oxide (N2O)|270 ppb|326 ppb|56 ppb| | |
|Tropospheric ozone (O3)|237 ppb|337 ppb|100 ppb|42%|0.4|
|Halon 1211 (CBrClF2)| |4.1 ppt| | | |
|Halon 1301 (CBrClF3)| |3.3 ppt| | | |
|Carbon tetrachloride (CCl4)| |85 ppt| | | |
|Sulfur hexafluoride (SF6)| |7.79 ppt| | | |
|Other halocarbons| |varies| | | |
|Halocarbons in total| | | | |0.3574|
Ice cores provide evidence for greenhouse gas concentration variations over the past 800,000 years (see the following section). Both CO2 and CH4 vary between glacial and interglacial phases, and concentrations of these gases correlate strongly with temperature.
Direct data does not exist for periods earlier than those represented in the ice core record, a record that indicates CO2 mole fractions stayed within a range of 180 ppm to 280 ppm throughout the last 800,000 years, until the increase of the last 250 years. However, various proxies and modeling suggest larger variations in past epochs; 500 million years ago CO2 levels were likely 10 times higher than now. Indeed, higher CO2 concentrations are thought to have prevailed throughout most of the Phanerozoic eon, with concentrations four to six times current concentrations during the Mesozoic era, and ten to fifteen times current concentrations during the early Palaeozoic era until the middle of the Devonian period, about 400 Ma. The spread of land plants is thought to have reduced CO2 concentrations during the late Devonian, and plant activities as both sources and sinks of CO2 have since been important in providing stabilising feedbacks.

Earlier still, a 200-million year period of intermittent, widespread glaciation extending close to the equator (Snowball Earth) appears to have been ended suddenly, about 550 Ma, by a colossal volcanic outgassing that raised the CO2 concentration of the atmosphere abruptly to 12%, about 350 times modern levels, causing extreme greenhouse conditions and carbonate deposition as limestone at the rate of about 1 mm per day. This episode marked the close of the Precambrian eon, and was succeeded by the generally warmer conditions of the Phanerozoic, during which multicellular animal and plant life evolved. No volcanic carbon dioxide emission of comparable scale has occurred since. In the modern era, emissions to the atmosphere from volcanoes are only about 1% of emissions from human sources.

Measurements from Antarctic ice cores show that before industrial emissions started, atmospheric CO2 mole fractions were about 280 parts per million (ppm), and stayed between 260 and 280 during the preceding ten thousand years. Carbon dioxide mole fractions in the atmosphere have gone up by approximately 35 percent since pre-industrial times, rising from 280 parts per million by volume to 387 parts per million in 2009. One study using evidence from stomata of fossilized leaves suggests greater variability, with carbon dioxide mole fractions above 300 ppm during the period seven to ten thousand years ago, though others have argued that these findings more likely reflect calibration or contamination problems rather than actual CO2 variability. Because of the way air is trapped in ice (pores in the ice close off slowly to form bubbles deep within the firn) and the time period represented in each ice sample analyzed, these figures represent averages of atmospheric concentrations of up to a few centuries rather than annual or decadal levels.

Changes since the Industrial Revolution

Since the beginning of the Industrial Revolution, the concentrations of most of the greenhouse gases have increased. For example, the mole fraction of carbon dioxide has increased from 280 ppm by about 36% to 380 ppm, or 100 ppm over modern pre-industrial levels. The first 50 ppm increase took place in about 200 years, from the start of the Industrial Revolution to around 1973; however, the next 50 ppm increase took place in about 33 years, from 1973 to 2006. Recent data also show that the concentration is increasing at a higher rate. In the 1960s, the average annual increase was only 37% of what it was in 2000 through 2007.
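The acceleration described above can be made explicit with a little arithmetic. The Python sketch below simply divides the two 50 ppm increments by the periods over which they occurred; the year ranges are taken from the text, and treating the start of the Industrial Revolution as roughly 1770 is an assumption made only for illustration.

```python
def avg_growth(ppm_increase: float, years: float) -> float:
    """Average CO2 mole-fraction growth rate in ppm per year."""
    return ppm_increase / years

# First 50 ppm: roughly 1770 (assumed) to 1973, about 200 years per the text.
first = avg_growth(50, 1973 - 1770)    # ~0.25 ppm/yr
# Next 50 ppm: 1973 to 2006.
second = avg_growth(50, 2006 - 1973)   # ~1.5 ppm/yr

print(f"first 50 ppm:  {first:.2f} ppm/yr")
print(f"second 50 ppm: {second:.2f} ppm/yr  ({second / first:.0f}x faster)")
```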
Today, the stock of carbon in the atmosphere increases by more than 3 million tonnes per annum (0.04%) compared with the existing stock. This increase is the result of human activities such as the burning of fossil fuels, deforestation and forest degradation in tropical and boreal regions. The other greenhouse gases produced from human activity show similar increases in both amount and rate of increase. Many observations are available online in a variety of Atmospheric Chemistry Observational Databases.

Anthropogenic greenhouse gases

Since about 1750 human activity has increased the concentration of carbon dioxide and other greenhouse gases. Measured atmospheric concentrations of carbon dioxide are currently 100 ppm higher than pre-industrial levels. Natural sources of carbon dioxide are more than 20 times greater than sources due to human activity, but over periods longer than a few years natural sources are closely balanced by natural sinks, mainly photosynthesis of carbon compounds by plants and marine plankton. As a result of this balance, the atmospheric mole fraction of carbon dioxide remained between 260 and 280 parts per million for the 10,000 years between the end of the last glacial maximum and the start of the industrial era.

It is likely that anthropogenic (i.e., human-induced) warming, such as that due to elevated greenhouse gas levels, has had a discernible influence on many physical and biological systems. Future warming is projected to have a range of impacts, including sea level rise, increased frequencies and severities of some extreme weather events, loss of biodiversity, and regional changes in agricultural productivity.

The main sources of greenhouse gases due to human activity are:
- burning of fossil fuels and deforestation leading to higher carbon dioxide concentrations in the air. Land use change (mainly deforestation in the tropics) accounts for up to one third of total anthropogenic CO2 emissions.
- livestock enteric fermentation and manure management, paddy rice farming, land use and wetland changes, pipeline losses, and covered vented landfill emissions leading to higher methane atmospheric concentrations. Many of the newer style fully vented septic systems that enhance and target the fermentation process also are sources of atmospheric methane.
- use of chlorofluorocarbons (CFCs) in refrigeration systems, and use of CFCs and halons in fire suppression systems and manufacturing processes.
- agricultural activities, including the use of fertilizers, that lead to higher nitrous oxide (N2O) concentrations.

The seven sources of CO2 from fossil fuel combustion are (with percentage contributions for 2000–2004):

|Seven main fossil fuel combustion sources|Contribution (%)|
|Liquid fuels (e.g., gasoline, fuel oil)|36%|
|Solid fuels (e.g., coal)|35%|
|Gaseous fuels (e.g., natural gas)|20%|
|Cement production|3%|
|Flaring gas industrially and at wells|< 1%|
|Non-fuel hydrocarbons|< 1%|
|"International bunker fuels" of transport not included in national inventories|–|

Carbon dioxide, methane, nitrous oxide (N2O) and three groups of fluorinated gases (sulfur hexafluoride (SF6), hydrofluorocarbons (HFCs), and perfluorocarbons (PFCs)) are the major anthropogenic greenhouse gases,:147 and are regulated under the Kyoto Protocol international treaty, which came into force in 2005. Emissions limitations specified in the Kyoto Protocol expired in 2012. The Cancún agreement, agreed in 2010, includes voluntary pledges made by 76 countries to control emissions.
At the time of the agreement, these 76 countries were collectively responsible for 85% of annual global emissions. Although CFCs are greenhouse gases, they are regulated by the Montreal Protocol, which was motivated by CFCs' contribution to ozone depletion rather than by their contribution to global warming. Note that ozone depletion has only a minor role in greenhouse warming, though the two processes often are confused in the media.

According to UNEP, global tourism is closely linked to climate change. Tourism is a significant contributor to the increasing concentrations of greenhouse gases in the atmosphere. Tourism accounts for about 50% of traffic movements. Rapidly expanding air traffic contributes about 2.5% of the production of CO2. The number of international travelers is expected to increase from 594 million in 1996 to 1.6 billion by 2020, adding greatly to the problem unless steps are taken to reduce emissions.

The road haulage industry plays a part in production of CO2, contributing around 20% of the UK’s total carbon emissions a year, with only the energy industry having a larger impact at around 39%. Average carbon emissions within the haulage industry are falling: in the thirty-year period from 1977 to 2007, the carbon emissions associated with a 200-mile journey fell by 21 percent; NOx emissions are also down 87 percent, whereas journey times have fallen by around a third. Due to their size, HGVs often receive criticism regarding their CO2 emissions; however, rapid development in engine technology and fuel management is having a largely positive effect.

Role of water vapor

Water vapor accounts for the largest percentage of the greenhouse effect, between 36% and 66% for clear sky conditions and between 66% and 85% when including clouds. Water vapor concentrations fluctuate regionally, but human activity does not significantly affect water vapor concentrations except at local scales, such as near irrigated fields. The atmospheric concentration of vapor is highly variable and depends largely on temperature, from less than 0.01% in extremely cold regions up to 3% by mass in saturated air at about 32 °C. (See Relative humidity.)

The average residence time of a water molecule in the atmosphere is only about nine days, compared to years or centuries for other greenhouse gases such as CH4 and CO2. Thus, water vapor responds to and amplifies effects of the other greenhouse gases. The Clausius–Clapeyron relation establishes that more water vapor will be present per unit volume at elevated temperatures. This and other basic principles indicate that warming associated with increased concentrations of the other greenhouse gases also will increase the concentration of water vapor (assuming that the relative humidity remains approximately constant; modeling and observational studies find that this is indeed so). Because water vapor is a greenhouse gas, this results in further warming and so is a "positive feedback" that amplifies the original warming. Eventually other Earth processes offset these positive feedbacks, stabilizing the global temperature at a new equilibrium and preventing the loss of Earth's water through a Venus-like runaway greenhouse effect.
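The Clausius–Clapeyron scaling mentioned above can be sketched numerically. The article itself does not spell out the formula, so the Python snippet below uses the standard approximate form e_s(T) = e_s0 · exp[(L/R_v)(1/T0 − 1/T)] with textbook constants (an assumption for illustration, not data from this article) to show the roughly 6–7% rise in saturation vapour pressure per degree of warming.

```python
import math

# Approximate textbook constants (not taken from this article).
L_V = 2.5e6      # latent heat of vaporisation, J/kg
R_V = 461.5      # specific gas constant for water vapour, J/(kg K)
E_S0 = 611.0     # saturation vapour pressure at T0, Pa
T0 = 273.15      # reference temperature, K

def saturation_vapour_pressure(temp_c: float) -> float:
    """Approximate Clausius-Clapeyron saturation vapour pressure in Pa."""
    temp_k = temp_c + 273.15
    return E_S0 * math.exp((L_V / R_V) * (1.0 / T0 - 1.0 / temp_k))

for t in (0, 10, 20, 30):
    e_now = saturation_vapour_pressure(t)
    e_next = saturation_vapour_pressure(t + 1)
    rise_pct = 100.0 * (e_next / e_now - 1.0)
    print(f"{t:2d} C: e_s = {e_now:7.0f} Pa, +1 C -> +{rise_pct:.1f}%")
```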
Direct greenhouse gas emissions

Between 1970 and 2004, GHG emissions (measured in CO2-equivalent) increased at an average rate of 1.6% per year, with CO2 emissions from the use of fossil fuels growing at a rate of 1.9% per year. Total anthropogenic emissions at the end of 2009 were estimated at 49.5 gigatonnes CO2-equivalent.:15 These emissions include CO2 from fossil fuel use and from land use, as well as emissions of methane, nitrous oxide and other GHGs covered by the Kyoto Protocol.

Regional and national attribution of emissions

- Definition of measurement boundaries: Emissions can be attributed geographically, to the area where they were emitted (the territory principle) or, by the activity principle, to the territory that produced the emissions. These two principles result in different totals when measuring, for example, electricity importation from one country to another, or emissions at an international airport.
- Time horizon of different GHGs: Contribution of a given GHG is reported as a CO2 equivalent. The calculation to determine this takes into account how long that gas remains in the atmosphere. This is not always known accurately and calculations must be regularly updated to reflect new information.
- What sectors are included in the calculation (e.g., energy industries, industrial processes, agriculture etc.): There is often a conflict between transparency and availability of data.
- The measurement protocol itself: This may be via direct measurement or estimation. The four main methods are the emission factor-based method, mass balance method, predictive emissions monitoring systems, and continuous emissions monitoring systems. These methods differ in accuracy, cost, and usability.

These different measures are sometimes used by different countries to assert various policy/ethical positions on climate change (Banuri et al., 1996, p. 94). This use of different measures leads to a lack of comparability, which is problematic when monitoring progress towards targets. There are arguments for the adoption of a common measurement tool, or at least the development of communication between different tools.

Emissions may be measured over long time periods. This measurement type is called historical or cumulative emissions. Cumulative emissions give some indication of who is responsible for the build-up in the atmospheric concentration of GHGs (IEA, 2007, p. 199).

The national accounts balance would be positively related to carbon emissions. The national accounts balance shows the difference between exports and imports. For many richer nations, such as the United States, the accounts balance is negative because more goods are imported than exported. This is mostly because it is cheaper to produce goods outside of developed countries, leading the economies of developed countries to become increasingly dependent on services and not goods. A positive accounts balance would mean that more production was occurring in a country, and more working factories would increase carbon emission levels (Holtz-Eakin, 1995, pp. 85–101).

Emissions may also be measured across shorter time periods. Emissions changes may, for example, be measured against a base year of 1990.
1990 was used in the United Nations Framework Convention on Climate Change (UNFCCC) as the base year for emissions, and is also used in the Kyoto Protocol (some gases are also measured from the year 1995).:146,149 A country's emissions may also be reported as a proportion of global emissions for a particular year.

Another measurement is of per capita emissions. This divides a country's total annual emissions by its mid-year population.:370 Per capita emissions may be based on historical or annual emissions (Banuri et al., 1996, pp. 106–107).

Land-use change, e.g., the clearing of forests for agricultural use, can affect the concentration of GHGs in the atmosphere by altering how much carbon flows out of the atmosphere into carbon sinks. Accounting for land-use change can be understood as an attempt to measure "net" emissions, i.e., gross emissions from all GHG sources minus the removal of emissions from the atmosphere by carbon sinks (Banuri et al., 1996, pp. 92–93). There are substantial uncertainties in the measurement of net carbon emissions. Additionally, there is controversy over how carbon sinks should be allocated between different regions and over time (Banuri et al., 1996, p. 93). For instance, concentrating on more recent changes in carbon sinks is likely to favour those regions that have deforested earlier, e.g., Europe.

Greenhouse gas intensity

Greenhouse gas intensity is a ratio between greenhouse gas emissions and another metric, e.g., gross domestic product (GDP) or energy use. The terms "carbon intensity" and "emissions intensity" are also sometimes used. GHG intensities may be calculated using market exchange rates (MER) or purchasing power parity (PPP) (Banuri et al., 1996, p. 96). Calculations based on MER show large differences in intensities between developed and developing countries, whereas calculations based on PPP show smaller differences.

Cumulative and historical emissions

Cumulative anthropogenic (i.e., human-emitted) emissions of CO2 from fossil fuel use are a major cause of global warming, and give some indication of which countries have contributed most to human-induced climate change.:15

|Region|Industrial CO2 emissions (% of world total)|Total CO2 emissions (% of world total)|
|OECD North America|33.2|29.7|

The table above is based on Banuri et al. (1996, p. 94). Overall, developed countries accounted for 83.8% of industrial CO2 emissions over this time period, and 67.8% of total CO2 emissions. Developing countries accounted for industrial CO2 emissions of 16.2% over this time period, and 32.2% of total CO2 emissions. The estimate of total CO2 emissions includes biotic carbon emissions, mainly from deforestation. Banuri et al. (1996, p. 94) calculated per capita cumulative emissions based on then-current population. The ratio in per capita emissions between industrialized countries and developing countries was estimated at more than 10 to 1.

Including biotic emissions brings about the same controversy mentioned earlier regarding carbon sinks and land-use change (Banuri et al., 1996, pp. 93–94). The actual calculation of net emissions is very complex, and is affected by how carbon sinks are allocated between regions and the dynamics of the climate system.
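The per capita and intensity measures described above are simple ratios. The Python sketch below computes both for an invented example country (the emission, population and GDP figures are illustrative, not real national statistics) and shows how an intensity figure shifts depending on whether GDP is expressed at market exchange rates (MER) or purchasing power parity (PPP).

```python
def per_capita(total_emissions_t: float, mid_year_population: float) -> float:
    """Annual emissions per person, tonnes CO2-eq per capita."""
    return total_emissions_t / mid_year_population

def intensity(total_emissions_t: float, gdp_usd: float) -> float:
    """Emissions intensity, kg CO2-eq per US dollar of GDP."""
    return total_emissions_t * 1000.0 / gdp_usd

# Invented example country: 500 Mt CO2-eq, 60 million people,
# GDP of 1.0 trillion USD at MER and 1.8 trillion USD at PPP.
emissions = 500e6
population = 60e6
gdp_mer, gdp_ppp = 1.0e12, 1.8e12

print(f"per capita:       {per_capita(emissions, population):.1f} t/person")
print(f"intensity (MER):  {intensity(emissions, gdp_mer):.2f} kg/$")
print(f"intensity (PPP):  {intensity(emissions, gdp_ppp):.2f} kg/$")
```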
Non-OECD countries accounted for 42% of cumulative energy-related CO2 emissions between 1890 and 2007.:179–180 Over this time period, the US accounted for 28% of emissions; the EU, 23%; Russia, 11%; China, 9%; other OECD countries, 5%; Japan, 4%; India, 3%; and the rest of the world, 18%.:179–180

Changes since a particular base year

Between 1970 and 2004, global growth in annual CO2 emissions was driven by North America, Asia, and the Middle East. The sharp acceleration in CO2 emissions since 2000 to more than a 3% increase per year (more than 2 ppm per year) from 1.1% per year during the 1990s is attributable to the lapse of formerly declining trends in carbon intensity of both developing and developed nations. China was responsible for most of global growth in emissions during this period. Localised plummeting emissions associated with the collapse of the Soviet Union have been followed by slow emissions growth in this region due to more efficient energy use, made necessary by the increasing proportion of it that is exported. In comparison, methane has not increased appreciably, and N2O has increased by only 0.25% per year.

Using different base years for measuring emissions has an effect on estimates of national contributions to global warming.:17–18 This can be calculated by dividing a country's highest contribution to global warming starting from a particular base year, by that country's minimum contribution to global warming starting from a particular base year. Choosing between different base years of 1750, 1900, 1950, and 1990 has a significant effect for most countries.:17–18 Within the G8 group of countries, it is most significant for the UK, France and Germany. These countries have a long history of CO2 emissions (see the section on Cumulative and historical emissions).

Annual per capita emissions in the industrialized countries are typically as much as ten times the average in developing countries.:144 Due to China's fast economic development, its annual per capita emissions are quickly approaching the levels of those in the Annex I group of the Kyoto Protocol (i.e., the developed countries excluding the USA). Other countries with fast growing emissions are South Korea, Iran, and Australia (which, apart from the oil-rich Persian Gulf states, now has the highest per capita emission rate in the world). On the other hand, annual per capita emissions of the EU-15 and the USA are gradually decreasing over time. Emissions in Russia and Ukraine have decreased fastest since 1990 due to economic restructuring in these countries. Energy statistics for fast growing economies are less accurate than those for the industrialized countries. For China's annual emissions in 2008, the Netherlands Environmental Assessment Agency estimated an uncertainty range of about 10%.

The GHG footprint, or greenhouse gas footprint, refers to the amount of GHGs that are emitted during the creation of products or services. It is more comprehensive than the commonly used carbon footprint, which measures only carbon dioxide, one of many greenhouse gases. 2015 was the first year to see both total global economic growth and a reduction of carbon emissions.

Top emitter countries

In 2009, the annual top ten emitting countries accounted for about two-thirds of the world's annual energy-related CO2 emissions.

|Country|% of global total annual emissions|Tonnes of GHG per capita|
|People's Rep. of China|23.6|5.13|
|Islamic Rep. of Iran|1.8|7.3|
One way of attributing greenhouse gas (GHG) emissions is to measure the embedded emissions (also referred to as "embodied emissions") of goods that are being consumed. Emissions are usually measured according to production, rather than consumption. For example, in the main international treaty on climate change (the UNFCCC), countries report on emissions produced within their borders, e.g., the emissions produced from burning fossil fuels.:179:1 Under a production-based accounting of emissions, embedded emissions on imported goods are attributed to the exporting, rather than the importing, country. Under a consumption-based accounting of emissions, embedded emissions on imported goods are attributed to the importing country rather than the exporting country.

Davis and Caldeira (2010):4 found that a substantial proportion of CO2 emissions are traded internationally. The net effect of trade was to export emissions from China and other emerging markets to consumers in the US, Japan, and Western Europe. Based on annual emissions data from the year 2004, and on a per-capita consumption basis, the top-5 emitting countries were found to be (in tCO2 per person, per year): Luxembourg (34.7), the US (22.0), Singapore (20.2), Australia (16.7), and Canada (16.6).:5 Carbon Trust research revealed that approximately 25% of all CO2 emissions from human activities 'flow' (i.e. are imported or exported) from one country to another. Major developed economies were found to be typically net importers of embodied carbon emissions, with UK consumption emissions 34% higher than production emissions, and Germany (29%), Japan (19%) and the USA (13%) also significant net importers of embodied emissions.

Effect of policy

Governments have taken action to reduce GHG emissions (climate change mitigation). Assessments of policy effectiveness have included work by the Intergovernmental Panel on Climate Change, International Energy Agency, and United Nations Environment Programme. Policies implemented by governments have included national and regional targets to reduce emissions, promotion of energy efficiency, and support for renewable energy such as solar power, which draws on energy from the sun and does not release pollutants into the air during generation.

Countries and regions listed in Annex I of the United Nations Framework Convention on Climate Change (UNFCCC) (i.e., the OECD and former planned economies of the Soviet Union) are required to submit periodic assessments to the UNFCCC of actions they are taking to address climate change.:3 Analysis by the UNFCCC (2011):8 suggested that policies and measures undertaken by Annex I Parties may have produced emission savings of 1.5 thousand Tg CO2-eq in the year 2010, with most savings made in the energy sector. The projected emissions saving of 1.5 thousand Tg CO2-eq is measured against a hypothetical "baseline" of Annex I emissions, i.e., projected Annex I emissions in the absence of policies and measures. The total projected Annex I saving of 1.5 thousand Tg CO2-eq does not include emissions savings in seven of the Annex I Parties.:8

A wide range of projections of future GHG emissions have been produced. Rogner et al. (2007) assessed the scientific literature on GHG projections. Rogner et al. (2007) concluded that unless energy policies changed substantially, the world would continue to depend on fossil fuels until 2025–2030.
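The production- versus consumption-based accounting contrast described above reduces to a simple adjustment for traded emissions. The Python sketch below illustrates the idea with made-up numbers (not Davis and Caldeira's data): consumption-based emissions equal territorial (production) emissions minus the emissions embodied in exports plus those embodied in imports.

```python
def consumption_based(production: float, embodied_in_exports: float,
                      embodied_in_imports: float) -> float:
    """Consumption-based emissions (Mt CO2): re-attribute traded emissions
    to the country where the goods are finally consumed."""
    return production - embodied_in_exports + embodied_in_imports

# Made-up two-country illustration (Mt CO2 per year).
exporter = {"production": 900.0, "exports": 250.0, "imports": 50.0}
importer = {"production": 400.0, "exports": 30.0, "imports": 230.0}

for name, c in (("exporter", exporter), ("importer", importer)):
    cons = consumption_based(c["production"], c["exports"], c["imports"])
    print(f"{name}: production {c['production']:.0f} Mt, consumption {cons:.0f} Mt")
# The exporter's consumption footprint (700 Mt) is smaller than its territorial
# emissions; the importer's (600 Mt) is larger.
```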
Projections suggest that more than 80% of the world's energy will come from fossil fuels. This conclusion was based on "much evidence" and "high agreement" in the literature. Projected annual energy-related CO2 emissions in 2030 were 40–110% higher than in 2000, with two-thirds of the increase originating in developing countries. Projected annual per capita emissions in developing country regions remained substantially lower (2.8–5.1 tonnes CO2) than those in developed country regions (9.6–15.1 tonnes CO2). Projections consistently showed an increase in annual world GHG emissions (the "Kyoto" gases, measured in CO2-equivalent) of 25–90% by 2030, compared to 2000.

Relative CO2 emission from various fuels

One liter of gasoline, when used as a fuel, produces 2.32 kg (about 1300 liters or 1.3 cubic meters) of carbon dioxide, a greenhouse gas. One US gallon produces 19.4 lb (1,291.5 gallons or 172.65 cubic feet) of carbon dioxide.

|Fuel name|CO2 emitted (lbs per million Btu)|CO2 emitted (g/MJ)|
|Liquefied petroleum gas|139|59.76|
|Tires/tire-derived fuel|189|81.26|
|Wood and wood waste|195|83.83|
|Tar-sand bitumen|–|–|

Life-cycle greenhouse-gas emissions of energy sources

A 2011 IPCC literature review of the CO2 emissions of numerous energy sources found that the CO2 emission values falling within the 50th percentile of all total life-cycle emissions studies were as follows.

|Electricity source|Technology|50th percentile (g CO2eq/kWh)|
|Ocean energy|Wave and tidal|8|
|Nuclear|Various generation II reactor types|16|
|Solar thermal|Parabolic trough|22|
|Geothermal|Hot dry rock|45|
|Solar PV|Polycrystalline silicon|46|
|Natural gas|Various combined cycle turbines without scrubbing|469|
|Coal|Various generator types without scrubbing|1001|

Removal from the atmosphere ("sinks")

Greenhouse gases can be removed from the atmosphere by various processes, as a consequence of:
- a physical change (condensation and precipitation remove water vapor from the atmosphere).
- a chemical reaction within the atmosphere. For example, methane is oxidized by reaction with naturally occurring hydroxyl radical, OH•, and degraded to CO2 and water vapor (CO2 from the oxidation of methane is not included in the methane global warming potential). Other chemical reactions include solution and solid phase chemistry occurring in atmospheric aerosols.
- a physical exchange between the atmosphere and the other compartments of the planet. An example is the mixing of atmospheric gases into the oceans.
- a chemical change at the interface between the atmosphere and the other compartments of the planet. This is the case for CO2, which is reduced by photosynthesis of plants, and which, after dissolving in the oceans, reacts to form carbonic acid and bicarbonate and carbonate ions (see ocean acidification).
- a photochemical change. Halocarbons are dissociated by UV light, releasing Cl• and F• as free radicals in the stratosphere with harmful effects on ozone (halocarbons are generally too stable to disappear by chemical reaction in the atmosphere).

A number of technologies remove greenhouse gas emissions from the atmosphere. Most widely analysed are those that remove carbon dioxide from the atmosphere, either to geologic formations such as bio-energy with carbon capture and storage and carbon dioxide air capture, or to the soil as in the case of biochar. The IPCC has pointed out that many long-term climate scenario models require large-scale man-made negative emissions to avoid serious climate change.
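The gasoline figure above is a straightforward emission-factor multiplication. The Python sketch below applies the 2.32 kg CO2 per liter factor quoted in the text to a hypothetical annual fuel consumption; the consumption figure is invented purely for illustration.

```python
# Emission factor quoted in the text: burning one liter of gasoline
# produces about 2.32 kg of CO2.
KG_CO2_PER_LITER_GASOLINE = 2.32

def co2_from_gasoline(liters: float) -> float:
    """CO2 emitted (kg) from burning a given volume of gasoline."""
    return liters * KG_CO2_PER_LITER_GASOLINE

# Hypothetical car burning 1 500 liters of gasoline per year.
annual_liters = 1500.0
annual_kg = co2_from_gasoline(annual_liters)
print(f"{annual_liters:.0f} L/yr -> {annual_kg:.0f} kg CO2 ({annual_kg / 1000:.1f} t)")
```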
History of scientific research In the late 19th century scientists experimentally discovered that N2 and O2 do not absorb infrared radiation (called, at that time, "dark radiation"), while water (both as true vapor and condensed in the form of microscopic droplets suspended in clouds) and CO2 and other poly-atomic gaseous molecules do absorb infrared radiation. In the early 20th century researchers realized that greenhouse gases in the atmosphere made Earth's overall temperature higher than it would be without them. During the late 20th century, an hypothesis was put forth that increasing concentrations of what are called greenhouse gases in the atmosphere cause a substantial rise in global temperatures and changes to other parts of the climate system, with consequences for the environment and for human health. - Attribution of recent climate change - Carbon accounting - Carbon credit - Carbon emissions reporting - Carbon neutrality - Carbon offset - Cap and Trade - Deforestation and climate change - Effects of global warming - Emission standard - Environmental impact of aviation - Greenhouse debt - Hydrogen economy - Integrated Carbon Observation System - List of countries by electricity production from renewable sources - List of international environmental agreements - Low-carbon economy - Mobile source air pollution - Physical properties of greenhouse gases - Sustainability measurement - World energy consumption - Zero-emissions vehicle - A Dictionary of Physics (6 ed.), Oxford University Press, 2009, ISBN 9780199233991: "greenhouse effect" - A Dictionary of Chemistry (6 ed.), edited by John Daintith, Publisher: Oxford University Press, 2008, ISBN 9780199204632: "greenhouse effect" - Brian Shmaefsky (2004). Favorite demonstrations for college science: an NSTA Press journals collection. NSTA Press. p. 57. ISBN 978-0-87355-242-4. - Wood, R.W. (1909). "Note on the Theory of the Greenhouse". Philosophical Magazine. 17: 319–320. doi:10.1080/14786440208636602. When exposed to sunlight the temperature rose gradually to 65 °C., the enclosure covered with the salt plate keeping a little ahead of the other because it transmitted the longer waves from the Sun, which were stopped by the glass. In order to eliminate this action the sunlight was first passed through a glass plate." "it is clear that the rock-salt plate is capable of transmitting practically all of it, while the glass plate stops it entirely. This shows us that the loss of temperature of the ground by radiation is very small in comparison to the loss by convection, in other words that we gain very little from the circumstance that the radiation is trapped. - Schroeder, Daniel V. (2000). An introduction to thermal physics. San Francisco, California: Addison-Wesley. pp. 305–7. ISBN 0-321-27779-1. ... this mechanism is called the greenhouse effect, even though most greenhouses depend primarily on a different mechanism (namely, limiting convective cooling). - Oort, Abraham H.; Peixoto, José Pinto (1992). Physics of climate. New York: American Institute of Physics. ISBN 0-88318-711-6. ...the name water vapor-greenhouse effect is actually a misnomer since heating in the usual greenhouse is due to the reduction of convection - "IPCC AR4 SYR Appendix Glossary" (PDF). Retrieved 14 December 2008. - Karl TR, Trenberth KE (2003). "Modern global climate change". Science. 302 (5651): 1719–23. Bibcode:2003Sci...302.1719K. PMID 14657489. doi:10.1126/science.1090228. 
- Le Treut H.; Somerville R.; Cubasch U.; Ding Y.; Mauritzen C.; Mokssit A.; Peterson T.; Prather M. (2007). Historical overview of climate change science. In: Climate change 2007: The physical science basis. Contribution of Working Group I to the Fourth Assessment Report of the Intergovernmental Panel on Climate Change (Solomon S., Qin D., Manning M., Chen Z., Marquis M., Averyt K. B., Tignor M. and Miller H. L., editors) (PDF). Cambridge University Press. Retrieved 14 December 2008. - "NASA Science Mission Directorate article on the water cycle". Nasascience.nasa.gov. Retrieved 2010-10-16. - From non-copyrighted source: Blasing, T. J. (February 2013), Current Greenhouse Gas Concentrations, doi:10.3334/CDIAC/atg.032, on CDIAC 2013. Details on copyright status: Frequently Asked Global Change Questions, Q34: I would like to use a diagram, image, graph, table, or other materials from the CDIAC Web site. How can I obtain permission? Are there copyright restrictions?, retrieved 2012-09-26, on CDIAC 2013. "All of the reports, graphics, data, and other information on the CDIAC Web site are freely and publicly available without copyright restrictions. However as a professional courtesy, we ask that the original data source be acknowledged." - The most recent preliminary estimate of global monthly mean CO2 concentration (as of May 2013) is 396.71 ppm: (Ed Dlugokencky and Pieter Tans, NOAA/ESRL () - "Frequently asked global change questions". Carbon Dioxide Information Analysis Center. - ESRL Web Team (14 January 2008). "Trends in carbon dioxide". Esrl.noaa.gov. Retrieved 2011-09-11. - "AR4 SYR Synthesis Report Summary for Policymakers – 2 Causes of change". ipcc.ch. - Mora, C (2013). "The projected timing of climate departure from recent variability". Nature. 502: 183–187. doi:10.1038/nature12540. - "Chapter 7: Couplings Between Changes in the Climate System and Biogeochemistry" (PDF). IPCC WG1 AR4 Report. IPCC. 2007. p. FAQ 7.1; report page 512; pdf page 14. Retrieved 11 July 2011. - Canadell, J. G. (2007). "Contributions to accelerating atmospheric CO2 growth from economic activity, carbon intensity, and efficiency of natural sinks". Proc. Natl. Acad. Sci. U.S.A. 104 (47): 18866–70. Bibcode:2007PNAS..10418866C. PMC . PMID 17962418. doi:10.1073/pnas.0702737104. Unknown parameter - http://earthobservatory.nasa.gov/Library/RemoteSensingAtmosphere/remote_sensing6.html Archived 20 September 2008 at the Wayback Machine - Forster, P.; et al. (2007). "2.10.3 Indirect GWPs". Changes in Atmospheric Constituents and in Radiative Forcing. Contribution of Working Group I to the Fourth Assessment Report of the Intergovernmental Panel on Climate Change. Cambridge University Press. Retrieved 2012-12-02. - MacCarty, N. "Laboratory Comparison of the Global-Warming Potential of Six Categories of Biomass Cooking Stoves" (PDF). Approvecho Research Center. - Kiehl, J.T.; Kevin E. Trenberth (1997). "Earth's annual global mean energy budget" (PDF). Bulletin of the American Meteorological Society. 78 (2): 197–208. Bibcode:1997BAMS...78..197K. doi:10.1175/1520-0477(1997)078<0197:EAGMEB>2.0.CO;2. Archived from the original (PDF) on 30 March 2006. Retrieved 1 May 2006. - "Water vapour: feedback or forcing?". RealClimate. 6 April 2005. Retrieved 1 May 2006. - Schmidt, G. A.; R. Ruedy; R. L. Miller; A. A. Lacis (2010), "The attribution of the present-day total greenhouse effect" (PDF), J. Geophys. Res., 115, Bibcode:2010JGRD..11520106S, doi:10.1029/2010JD014287, D20106. Web page for paper. - Lacis, A. 
(October 2010), NASA GISS: CO2: The Thermostat that Controls Earth's Temperature, New York: NASA GISS - IPCC Fourth Assessment Report, Table 2.14, Chap. 2, p. 212 - Shindell, Drew T. (2005). "An emissions-based view of climate forcing by methane and tropospheric ozone". Geophysical Research Letters. 32 (4): L04803. Bibcode:2005GeoRL..3204803S. doi:10.1029/2004GL021900. - "Methane's Impacts on Climate Change May Be Twice Previous Estimates". Nasa.gov. 30 November 2007. Retrieved 2010-10-16. - http://www3.epa.gov/climatechange/science/indicators/ghg/ghg-concentrations.html. Missing or empty - Wallace, John M. and Peter V. Hobbs. Atmospheric Science; An Introductory Survey.Elsevier. Second Edition, 2006. ISBN 978-0-12-732951-2. Chapter 1 - Prather, Michael J.; J Hsu (2008). "NF 3, the greenhouse gas missing from Kyoto". Geophysical Research Letters. 35 (12): L12810. Bibcode:2008GeoRL..3512810P. doi:10.1029/2008GL034542. - Isaksen, Ivar S. A.; Michael Gauss; Gunnar Myhre; Katey M. Walter Anthony; Carolyn Ruppel (20 April 2011). "Strong atmospheric chemistry feedback to climate warming from Arctic methane emissions" (PDF). Global Biogeochemical Cycles. 25 (2). Bibcode:2011GBioC..25B2002I. doi:10.1029/2010GB003845. Retrieved 29 July 2011. - "AGU Water Vapor in the Climate System". Eso.org. 27 April 1995. Retrieved 2011-09-11. - Betts (2001). "6.3 Well-mixed Greenhouse Gases". Chapter 6 Radiative Forcing of Climate Change. Working Group I: The Scientific Basis IPCC Third Assessment Report — Climate Change 2001. UNEP/GRID-Arendal — Publications. Retrieved 2010-10-16. - Jacob, Daniel (1999). Introduction to atmospheric chemistry. Princeton University Press. pp. 25–26. ISBN 0-691-00185-5. - "How long will global warming last?". RealClimate. Retrieved 2012-06-12. - Jacobson, MZ (2005). "Correction to "Control of fossil-fuel particulate black carbon and organic matter, possibly the most effective method of slowing global warming."". J. Geophys. Res. 110. pp. D14105. doi:10.1029/2005JD005888. - Archer, David (2009). "Atmospheric lifetime of fossil fuel carbon dioxide". Annual Review of Earth and Planetary Sciences. 37. pp. 117–134. doi:10.1146/annurev.earth.031208.100206. - Meehl, G. A. (2007). "Frequently Asked Question 10.3: If emissions of greenhouse gases are reduced, how quickly do their concentrations in the atmosphere decrease?". In S. Solomon; et al. Chapter 10: Global Climate Projections. Climate Change 2007: The Physical Science Basis. Contribution of Working Group I to the Fourth Assessment Report of the Intergovernmental Panel on Climate Change. Cambridge University Press (CUP), Cambridge, United Kingdom and New York, USA.: Print version:CUP. This version: IPCC website. Retrieved 2011-06-01. - See also: Archer, David (2005). "Fate of fossil fuel CO2 in geologic time" (PDF). Journal of Geophysical Research. 110 (C9): C09S05.1–6. Bibcode:2005JGRC..11009S05A. doi:10.1029/2004JC002625. Retrieved 27 July 2007. - See also: Caldeira, Ken; Wickett, Michael E. (2005). "Ocean model predictions of chemistry changes from carbon dioxide emissions to the atmosphere and ocean" (PDF). Journal of Geophysical Research. 110 (C9): C09S04.1–12. Bibcode:2005JGRC..11009S04C. doi:10.1029/2004JC002671. Archived from the original (PDF) on 10 August 2007. Retrieved 27 July 2007. - Edited quote from public-domain source: "Climate Change Indicators in the United States". U.S. Environmental Protection Agency (EPA). 2010. Greenhouse Gases: Figure 1. The Annual Greenhouse Gas Index, 1979–2008: Background.. 
This publication is also available as a PDF (page 18). - David L. Chandler, How to count methane emissions, MIT News, April 25, 2014 (Accessed Jan. 15, 2015). Referenced paper is Jessika Trancik and Morgan Edwards, Climate impacts of energy technologies depend on emissions timing, Nature Climate Change, Volume 4, April 25, 2014, p. 347 (Accessed Jan. 15, 2015). - Use of ozone depleting substances in laboratories. TemaNord 2003:516 - Montreal Protocol - St. Fleur, Nicholas (10 November 2015). "Atmospheric Greenhouse Gas Levels Hit Record, Report Says". New York Times. Retrieved 11 November 2015. - Ritter, Karl (9 November 2015). "UK: In 1st, global temps average could be 1 degree C higher". AP News. Retrieved 11 November 2015. - Cole, Steve; Gray, Ellen (14 December 2015). "New NASA Satellite Maps Show Human Fingerprint on Global Air Quality". NASA. Retrieved 14 December 2015. - Canadell, J. G.; et al. (20 November 2007), "Contributions to Accelerating Atmospheric CO2 Growth from Economic Activity, Carbon Intensity, and Efficiency of Natural Sinks (Results and Discussion: Growth in Atmospheric CO2)", Proceedings of the National Academy of Sciences of the United States of America, 104 (47): 18866–18870, Bibcode:2007PNAS..10418866C, PMC , PMID 17962418, doi:10.1073/pnas.0702737104 - "Chapter 1 Historical Overview of Climate Change Science — FAQ 1.3 Figure 1 description page 116" (PDF). Climate Change 2007: The Physical Science Basis. Contribution of Working Group I to the Fourth Assessment Report of the Intergovernmental Panel on Climate Change. Intergovernmental Panel on Climate Change. 5 February 2007. Retrieved 25 April 2008. - "Chapter 3, IPCC Special Report on Emissions Scenarios, 2000". Intergovernmental Panel on Climate Change. 2000. Retrieved 2010-10-16. - "AR4 SYR SPM page 5" (PDF). Retrieved 2010-10-16. - Ehhalt, D.; et al., "Ch 4. Atmospheric Chemistry and Greenhouse Gases", Table 4.1, in IPCC TAR WG1 2001, pp. 244–245. Referred to by: Blasing, T. J. (February 2013), Current Greenhouse Gas Concentrations, doi:10.3334/CDIAC/atg.032, on CDIAC 2013. Based on Blasing et al. (2013): Pre-1750 concentrations of CH4,N2O and current concentrations of O3, are taken from Table 4.1 (a) of the IPCC Intergovernmental Panel on Climate Change), 2001. Following the convention of IPCC (2001), inferred global-scale trace-gas concentrations from prior to 1750 are assumed to be practically uninfluenced by human activities such as increasingly specialized agriculture, land clearing, and combustion of fossil fuels. Preindustrial concentrations of industrially manufactured compounds are given as zero. The short atmospheric lifetime of ozone (hours-days) together with the spatial variability of its sources precludes a globally or vertically homogeneous distribution, so that a fractional unit such as parts per billion would not apply over a range of altitudes or geographical locations. Therefore a different unit is used to integrate the varying concentrations of ozone in the vertical dimension over a unit area, and the results can then be averaged globally. This unit is called a Dobson Unit (D.U.), after G. M. B. Dobson, one of the first investigators of atmospheric ozone. A Dobson unit is the amount of ozone in a column that, unmixed with the rest of the atmosphere, would be 10 micrometers thick at standard temperature and pressure. 
- Because atmospheric concentrations of most gases tend to vary systematically over the course of a year, figures given represent averages over a 12-month period for all gases except ozone (O3), for which a current global value has been estimated (IPCC, 2001, Table 4.1a). CO2 averages for year 2012 are taken from the National Oceanic and Atmospheric Administration, Earth System Research Laboratory, web site: www.esrl.noaa.gov/gmd/ccgg/trends maintained by Dr. Pieter Tans. For other chemical species, the values given are averages for 2011. These data are found on the CDIAC AGAGE web site: http://cdiac.ornl.gov/ndps/alegage.html or the AGAGE home page: http://agage.eas.gatech.edu. - Forster, P.; et al., "Ch 2: Changes in Atmospheric Constituents and in Radiative Forcing", Table 2.1, in IPCC AR4 WG1 2007, p. 141. Referred to by: Blasing, T. J. (February 2013), Current Greenhouse Gas Concentrations, doi:10.3334/CDIAC/atg.032, on CDIAC 2013. For the latest updates, see the NOAA Annual Greenhouse Gas Index at: . - Prentice, I. C.; et al., "Ch 3. The Carbon Cycle and Atmospheric Carbon Dioxide", Executive summary, in IPCC TAR WG1 2001, p. 185. Referred to by: Blasing, T. J. (February 2013), Current Greenhouse Gas Concentrations, doi:10.3334/CDIAC/atg.032, on CDIAC 2013 - Recent CO2 concentration (395.4 ppm) is the 2013 average taken from globally averaged marine surface data given by the National Oceanic and Atmospheric Administration Earth System Research Laboratory, website: http://www.esrl.noaa.gov/gmd/ccgg/trends/index.html#global. Please read the material on that web page and reference Dr. Pieter Tans when citing this average (Dr. Pieter Tans, NOAA/ESRL http://www.esrl.noaa.gov/gmd/ccgg/trends). The oft-cited Mauna Loa average for 2012 is 393.8 ppm, which is a good approximation although typically about 1 ppm higher than the spatial average given above. Refer to http://www.esrl.noaa.gov/gmd/ccgg/trends for records back to the late 1950s. - ppb = parts-per-billion - The first value in a cell represents Mace Head, Ireland, a mid-latitude Northern-Hemisphere site, and the second value represents Cape Grim, Tasmania, a mid-latitude Southern-Hemisphere site. "Current" values given for these gases are annual arithmetic averages based on monthly background concentrations for year 2011. The SF 6 values are from the AGAGE gas chromatography – mass spectrometer (gc-ms) Medusa measuring system. Source: Advanced Global Atmospheric Gases Experiment (AGAGE) data posted on CDIAC web site at: http://cdiac.ornl.gov/ftp/ale_gage_Agage/. These data are compiled from data on finer time scales in the ALE/GAGE/AGAGE database (Prinn et al., 2000). These data represent the work of several investigators at various institutions; guidelines on citing the various parts of the AGAGE database are found within the ALE/GAGE/AGAGE database, see: . - The pre-1750 value for N 2O is consistent with ice-core records from 10,000 B.C.E. through 1750 C.E.: "Summary for policymakers", Figure SPM.1, IPCC, in IPCC AR4 WG1 2007, p. 3. Referred to by: Blasing, T. J. (February 2013), Current Greenhouse Gas Concentrations, doi:10.3334/CDIAC/atg.032, on CDIAC 2013 - Changes in stratospheric ozone have resulted in a decrease in radiative forcing of 0.05 W/m2: Forster, P.; et al., "Ch 2: Changes in Atmospheric Constituents and in Radiative Forcing", Table 2.12, in IPCC AR4 WG1 2007, p. 204. Referred to by: Blasing, T. J. 
(February 2013), Current Greenhouse Gas Concentrations, doi:10.3334/CDIAC/atg.032, on CDIAC 2013 - For SF 6 data from January 2004 onward see: . For data from 1995 through 2004, see the National Oceanic and Atmospheric Administration (NOAA), Halogenated and other Atmospheric Trace Species (HATS) site at: . Concentrations of SF 6 from 1970 through 1999, obtained from Antarctic firn (consolidated deep snow) air samples, can be found in W. T. Sturges et al. - File:Phanerozoic Carbon Dioxide.png - Berner, Robert A. (January 1994). "GEOCARB II: a revised model of atmospheric CO2 over Phanerozoic time" (PDF). American Journal of Science. 294 (1): 56–91. doi:10.2475/ajs.294.1.56. - Royer, D. L.; R. A. Berner; D. J. Beerling (2001). "Phanerozoic atmospheric CO2 change: evaluating geochemical and paleobiological approaches". Earth-Science Reviews. 54 (4): 349–92. Bibcode:2001ESRv...54..349R. doi:10.1016/S0012-8252(00)00042-8. - Berner, Robert A.; Kothavala, Zavareth (2001). "GEOCARB III: a revised model of atmospheric CO2 over Phanerozoic time" (PDF). American Journal of Science. 301 (2): 182–204. doi:10.2475/ajs.301.2.182. - Beerling, D. J.; Berner, R. A. (2005). "Feedbacks and the co-evolution of plants and atmospheric CO2". Proc. Natl. Acad. Sci. U.S.A. 102 (5): 1302–5. Bibcode:2005PNAS..102.1302B. PMC . PMID 15668402. doi:10.1073/pnas.0408724102. - Hoffmann, PF; AJ Kaufman; GP Halverson; DP Schrag (1998). "A neoproterozoic snowball earth". Science. 281 (5381): 1342–6. Bibcode:1998Sci...281.1342H. PMID 9721097. doi:10.1126/science.281.5381.1342. - Gerlach, TM (1991). "Present-day CO2 emissions from volcanoes". Transactions of the American Geophysical Union. 72 (23): 249–55. Bibcode:1991EOSTr..72..249.. doi:10.1029/90EO10192. - See also: "U.S. Geological Survey". 14 June 2011. Retrieved 15 October 2012. - Flückiger, Jacqueline (2002). "High-resolution Holocene N 2O ice core record and its relationship with CH 4 and CO2". Global Biogeochemical Cycles. 16: 1010. Bibcode:2002GBioC..16a..10F. doi:10.1029/2001GB001417. - Friederike Wagner, Bent Aaby and Henk Visscher (2002). "Rapid atmospheric CO2 changes associated with the 8,200-years-B.P. cooling event". Proc. Natl. Acad. Sci. U.S.A. 99 (19): 12011–4. Bibcode:2002PNAS...9912011W. PMC . PMID 12202744. doi:10.1073/pnas.182420699. - Andreas Indermühle, Bernhard Stauffer, Thomas F. Stocker (1999). "Early Holocene Atmospheric CO2 Concentrations". Science. 286 (5446): 1815. doi:10.1126/science.286.5446.1815a. "Early Holocene atmospheric CO2 concentrations". Science. Retrieved 26 May 2005. - H. J. Smith, M. Wahlen and D. Mastroianni (1997). "The CO2 concentration of air trapped in GISP2 ice from the Last Glacial Maximum-Holocene transition". Geophysical Research Letters. 24 (1): 1–4. Bibcode:1997GeoRL..24....1S. doi:10.1029/96GL03700. - "Monthly Average Carbon Dioxide Concentration, Mauna Loa Observatory" (PDF). Carbon Dioxide Information Analysis Center. 2005. Retrieved 14 December 2008. - Dr. Pieter Tans (3 May 2008) "Annual CO2 mole fraction increase (ppm)" for 1959–2007 National Oceanic and Atmospheric Administration Earth System Research Laboratory, Global Monitoring Division (additional details; see also K. A. Masarie, P. P. Tans (1995). "Extension and integration of atmospheric carbon dioxide data into a globally consistent measurement record". J. Geophys. Research. 100: 11593–610. Bibcode:1995JGR...10011593M. doi:10.1029/95JD00859. - Dumitru-Romulus Târziu, Victor-Dan Păcurar (Jan 2011). "Pădurea, climatul și energia". Rev. pădur. (in română). 
126 (1): 34–39. ISSN 1583-7890. 16720. Retrieved 2012-06-11.(webpage has a translation button) - "Climate Change Indicators in the United States". NOAA. 2012. Figure 4. The Annual Greenhouse Gas Index, 1979–2011. - "Climate Change Indicators in the United States". U.S. Environmental Protection Agency (EPA). 2010. Figure 2. Global Greenhouse Gas Emissions by Sector, 1990–2005. - "Climate Change 2001: Working Group I: The Scientific Basis: figure 6-6". Retrieved 1 May 2006. - "The present carbon cycle — Climate Change". Grida.no. Retrieved 2010-10-16. - Solomon, S., D. Qin, M. Manning, Z. Chen, M. Marquis, K. B. Averyt, M. Tignor and H. L. Miller, ed. (2007). "Chapter 7. Couplings Between Changes in the Climate System and Biogeochemistry" (PDF). Climate Change 2007: The Physical Science Basis. Contribution of Working Group I to the Fourth Assessment Report of the Intergovernmental Panel on Climate Change. Cambridge, United Kingdom and New York, USA: Cambridge University Press. ISBN 978-0-521-88009-1. Retrieved 13 May 2008. - IPCC (2007d). "6.1 Observed changes in climate and their effects, and their causes". 6 Robust findings, key uncertainties. Climate Change 2007: Synthesis Report. A Contribution of Working Groups I, II, and III to the Fourth Assessment Report of the Intergovernmental Panel on Climate Change (IPCC). Geneva, Switzerland: IPCC. - "6.2 Drivers and projections of future climate changes and their impacts". 6 Robust findings, key uncertainties. Climate Change 2007: Synthesis Report. A Contribution of Working Groups I, II, and III to the Fourth Assessment Report of the Intergovernmental Panel on Climate Change (IPCC). Geneva, Switzerland: IPCC. 2007d. - "3.3.1 Impacts on systems and sectors". 3 Climate change and its impacts in the near and long term under different scenarios. Climate Change 2007: Synthesis Report. A Contribution of Working Groups I, II, and III to the Fourth Assessment Report of the Intergovernmental Panel on Climate Change (IPCC). Geneva, Switzerland: IPCC. 2007d. - H. Steinfeld, P. Gerber, T. Wassenaar, V. Castel, M. Rosales, C. de Haan (2006) Livestock’s long shadow. Environmental issues and options. FAO Livestock, Environment and Development (LEAD) Initiative. - Raupach, M. R.; et al. (2007). "Global and regional drivers of accelerating CO2 emissions" (PDF). Proc. Natl. Acad. Sci. U.S.A. 104 (24): 10288–93. Bibcode:2007PNAS..10410288R. PMC . PMID 17519334. doi:10.1073/pnas.0700609104. - Schrooten, L; De Vlieger, Ina; Int Panis, Luc; Styns, R. Torfs, K; Torfs, R (2008). "Inventory and forecasting of maritime emissions in the Belgian sea territory, an activity based emission model". Atmospheric Environment. 42 (4): 667–676. - Grubb, M. (July–September 2003). "The economics of the Kyoto protocol" (PDF). World Economics. 4 (3). - Lerner & K. Lee Lerner, Brenda Wilmoth (2006). "Environmental issues: essential primary sources". Thomson Gale. Retrieved 11 September 2006. - "Kyoto Protocol". United Nations Framework Convention on Climate Change. Home > Kyoto Protocol. - King, D.; et al. (July 2011), "Copenhagen and Cancún", International climate change negotiations: Key lessons and next steps, Oxford, UK: Smith School of Enterprise and the Environment, University of Oxford, p. 12, doi:10.4210/ssee.pbs.2011.0003 PDF version is also available - Environmental Impacts of Tourism – Global Level UNEP - "search engine optimisation manchester, web design agency london, e commerce manchester". freightbestpractice.org.uk. Retrieved 13 September 2015. 
<urn:uuid:eeb46ee6-ae14-4118-be72-7296007ff893>
4.125
20,509
Knowledge Article
Science & Tech.
58.333916
95,565,062
Biodiversity is the variety of living organisms. It exists at several biological levels: from the small genetic differences within a single species, up through distinct species, genera and families, to the highest taxonomic groups. Biodiversity is normally divided into three levels: genetic variation, species variation and variety of ecosystems. Of these, the most commonly used is species variety, as a synonym for species richness. Species variety is the number of species present in a particular site or habitat. Without any doubt the environment richest in species is the tropical rain forest: although it occupies only 7% of the Earth's surface, it provides a home to more than half of all living species.
The decline of biodiversity
The extinction of species is, to some degree, a natural phenomenon. Over the last 600 million years the overall trend of biodiversity has been towards growth, despite five mass extinctions, the most famous of which wiped out the dinosaurs at the end of the Cretaceous period 65 million years ago. It is increasingly evident that, in recent decades, we have been contributing to a sixth mass extinction, this time caused, directly or indirectly, by man. The difference between this one and the previous ones is that the rate of extinction is far faster than in the past. About 27,000 species of plants and animals are driven extinct every year by human activity, i.e. 74 species a day, or three an hour. Before man interfered with the environment, species survived for periods on the order of millions of years (as the fossil record shows), which means that the normal background rate of extinction is about 1 species per year for every million species in existence. Human activity has increased the rate of extinction by a factor of between 1,000 and 10,000. We therefore find ourselves in the middle of one of the strongest waves of extinction that has ever happened on Earth. The causes of man-made extinction can be divided into two categories. Excessive hunting is perhaps the most obvious cause of extinction but, in global terms, its contribution to the loss of biodiversity is undoubtedly less important than the indirect causes. Virtually every type of human activity modifies the natural environment. In particular, the reduction and destruction of entire ecosystems, the fragmentation of habitats into small, non-self-sufficient patches, and pollution and contaminants in natural environments all influence the number of species; in extreme cases they can cause extinction.
Why Conserve Biodiversity
Biodiversity is a biological resource and maintains favourable conditions in the biosphere for human life, in particular as a source of food and medicine. About 80% of the population of developing countries use medicines derived from natural substances, given the relative unavailability of western medical products. Currently about 120 substances extracted from 90 species of plants are used in medicine, and synthetic derivatives are known to be less effective therapeutically than the natural products.
The species used are only a very small fraction of those potentially usable, and one of the main worries of researchers is that the loss of biodiversity could put beyond reach substances that might, in the future, cure important illnesses. Other functions of biodiversity include the role of forests in regulating water basins and stabilizing soil, preventing erosion; the role of mangroves in stabilizing tropical coastal areas and supporting fish reproduction; the role of coral reefs in the survival of innumerable species; and the role that protected areas play in the economies of many developing countries through income from eco-tourism. In any case, the conservation of biodiversity need not be justified by economic advantages alone, but also by moral and aesthetic values. There are two types of reasons:
<urn:uuid:f3642850-f7dc-4d86-af2f-c0981fafee78>
3.328125
795
Knowledge Article
Science & Tech.
27.131549
95,565,079
Torque, moment, or moment of force is rotational force. Just as a linear force is a push or a pull, a torque can be thought of as a twist applied to an object. In three dimensions, the torque is a pseudovector; for point particles, it is given by the cross product of the position vector (distance vector) and the force vector. The magnitude of the torque on a rigid body depends on three quantities: the force applied, the lever arm vector connecting the origin to the point of force application, and the angle between the force and lever arm vectors. In symbols: τ = r × F, with magnitude τ = rF sin θ. Torque is referred to using different vocabulary depending on geographical location and field of study. This article refers to the definition used in US physics in its usage of the word torque. In the UK and in US mechanical engineering, torque is referred to as moment of force, usually shortened to moment. In US physics and UK physics terminology these terms are interchangeable, unlike in US mechanical engineering, where the term torque is used for the closely related "resultant moment of a couple". Torque is defined mathematically as the rate of change of angular momentum of an object; a net torque therefore implies that the angular velocity or the moment of inertia of the object, or both, are changing. Moment is the general term used for the tendency of one or more applied forces to rotate an object about an axis, but not necessarily to change the angular momentum of the object (the concept which is called torque in physics). For example, a rotational force applied to a shaft causing acceleration, such as a drill bit accelerating from rest, results in a moment called a torque. By contrast, a lateral force on a beam produces a moment (called a bending moment), but since the angular momentum of the beam is not changing, this bending moment is not called a torque. Similarly, any force couple on an object that produces no change in its angular momentum is also not called a torque. This article follows US physics terminology by calling all moments torques, whether or not they cause the angular momentum of an object to change. The concept of torque, also called moment or couple, originated with the studies of Archimedes on levers. The term torque was apparently introduced into English scientific literature by James Thomson, the brother of Lord Kelvin, in 1884. A force applied at a right angle to a lever, multiplied by its distance from the lever's fulcrum (the length of the lever arm), is its torque. A force of three newtons applied two metres from the fulcrum, for example, exerts the same torque as a force of one newton applied six metres from the fulcrum. The direction of the torque can be determined by using the right-hand grip rule: if the fingers of the right hand are curled from the direction of the lever arm to the direction of the force, then the thumb points in the direction of the torque. More generally, the torque on a particle (which has the position r in some reference frame) can be defined as the cross product
τ = r × F,
where r is the particle's position vector relative to the fulcrum, and F is the force acting on the particle. The magnitude τ of the torque is given by
τ = r F sin θ,
where r is the distance from the axis of rotation to the particle, F is the magnitude of the force applied, and θ is the angle between the position and force vectors. Alternatively, τ = r F⊥, where F⊥ is the component of the force perpendicular to the particle's position vector. It follows from the properties of the cross product that the torque vector is perpendicular to both the position and force vectors.
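To make the vector definition concrete, here is a minimal sketch that computes τ = r × F and its magnitude for the three-newton, two-metre lever example above. It is purely illustrative: the Vec3 type and its method names are assumptions introduced for this example, not part of the article.

```java
/** Minimal illustration of torque as a cross product: tau = r x F. */
public class TorqueExample {
    // A tiny 3D vector type; the name and fields are introduced just for this sketch.
    record Vec3(double x, double y, double z) {
        Vec3 cross(Vec3 o) { // cross product of this vector with o
            return new Vec3(y * o.z - z * o.y,
                            z * o.x - x * o.z,
                            x * o.y - y * o.x);
        }
        double magnitude() {
            return Math.sqrt(x * x + y * y + z * z);
        }
    }

    public static void main(String[] args) {
        // Lever arm: 2 m along +x; force: 3 N along +y (at a right angle to the arm).
        Vec3 r = new Vec3(2.0, 0.0, 0.0);
        Vec3 f = new Vec3(0.0, 3.0, 0.0);

        Vec3 tau = r.cross(f); // torque vector, along +z by the right-hand rule
        System.out.println("torque vector = " + tau);                    // (0, 0, 6)
        System.out.println("magnitude = " + tau.magnitude() + " N·m");   // 6 N·m

        // Same magnitude as 1 N applied 6 m from the fulcrum, as stated in the text:
        Vec3 tau2 = new Vec3(6.0, 0.0, 0.0).cross(new Vec3(0.0, 1.0, 0.0));
        System.out.println("comparison magnitude = " + tau2.magnitude() + " N·m"); // 6 N·m
    }
}
```

Both cases give 6 N·m, matching the lever-arm comparison in the text, and the resulting vector points along the rotation axis as described next.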
The torque vector points along the axis of the rotation that the force vector (starting from rest) would initiate. The resulting torque vector direction is determined by the right-hand rule. The unbalanced torque on a body along axis of rotation determines the rate of change of the body's angular momentum, where L is the angular momentum vector and t is time. If multiple torques are acting on the body, it is instead the net torque which determines the rate of change of the angular momentum: For the motion of a point particle, where α is the angular acceleration of the particle, and p|| is the radial component of its linear momentum. This equation is the rotational analogue of Newton's Second Law for point particles, and is valid for any type of trajectory. Note that although force and acceleration are always parallel and directly proportional, the torque τ need not be parallel or directly proportional to the angular acceleration α. This arises from the fact that although mass is always conserved, the moment of inertia in general is not. The definition of angular momentum for a single particle is: where "×" indicates the vector cross product, p is the particle's linear momentum, and r is the displacement vector from the origin (the origin is assumed to be a fixed location anywhere in space). The time-derivative of this is: This result can easily be proven by splitting the vectors into components and applying the product rule. Now using the definition of force (whether or not mass is constant) and the definition of velocity The cross product of momentum with its associated velocity is zero because velocity and momentum are parallel, so the second term vanishes. By definition, torque τ = r × F. Therefore, torque on a particle is equal to the first derivative of its angular momentum with respect to time. If multiple forces are applied, Newton's second law instead reads Fnet = ma, and it follows that This is a general proof. Torque has dimension force times distance, symbolically L2MT−2. Official SI literature suggests using the unit newton metre (N⋅m) or the unit joule per radian. The unit newton metre is properly denoted N⋅m or N m. This avoids ambiguity with mN, millinewtons. The SI unit for energy or work is the joule. It is dimensionally equivalent to a force of one newton acting over a distance of one metre, but it is not used for torque. Energy and torque are entirely different concepts, so the practice of using different unit names (i.e., reserving newton metres for torque and using only joules for energy) helps avoid mistakes and misunderstandings. The dimensional equivalence of these units is not simply a coincidence: a torque of 1 N⋅m applied through a full revolution will require an energy of exactly 2π joules. Mathematically, In Imperial units, "pound-force-feet" (lbf⋅ft), "foot-pounds-force", "inch-pounds-force", "ounce-force-inches" (ozf⋅in) are used, and other non-SI units of torque includes "metre-kilograms-force". For all these units, the word "force" is often left out. For example, abbreviating "pound-force-foot" to simply "pound-foot" (in this case, it would be implicit that the "pound" is pound-force and not pound-mass). This is an example of the confusion caused by the use of English units that may be avoided with SI units because of the careful distinction in SI between force (in newtons) and mass (in kilograms). Torque is sometimes listed with units that do not make dimensional sense, such as the gram-centimeter. 
In this case, "gram" should be understood as the force given by the weight of 1 gram on the surface of the Earth (i.e. 0.00980665 N). The surface of the Earth has a standard gravitational field strength of 9.80665 N/kg. A very useful special case, often given as the definition of torque in fields other than physics, is as follows: The construction of the "moment arm" is shown in the figure to the right, along with the vectors r and F mentioned above. The problem with this definition is that it does not give the direction of the torque but only the magnitude, and hence it is difficult to use in three-dimensional cases. If the force is perpendicular to the displacement vector r, the moment arm will be equal to the distance to the centre, and torque will be a maximum for the given force. The equation for the magnitude of a torque, arising from a perpendicular force: For example, if a person places a force of 10 N at the terminal end of a wrench that is 0.5 m long (or a force of 10 N exactly 0.5 m from the twist point of a wrench of any length), the torque will be 5 N.m – assuming that the person moves the wrench by applying force in the plane of movement and perpendicular to the wrench. For an object to be in static equilibrium, not only must the sum of the forces be zero, but also the sum of the torques (moments) about any point. For a two-dimensional situation with horizontal and vertical forces, the sum of the forces requirement is two equations: ΣH = 0 and ΣV = 0, and the torque a third equation: Στ = 0. That is, to solve statically determinate equilibrium problems in two-dimensions, three equations are used. When the net force on the system is zero, the torque measured from any point in space is the same. For example, the torque on a current-carrying loop in a uniform magnetic field is the same regardless of your point of reference. If the net force is not zero, and is the torque measured from , then the torque measured from is … Torque is part of the basic specification of an engine: the power output of an engine is expressed as its torque multiplied by its rotational speed of the axis. Internal-combustion engines produce useful torque only over a limited range of rotational speeds (typically from around 1,000–6,000 rpm for a small car). The varying torque output over that range can be measured with a dynamometer, and shown as a torque curve. Steam engines and electric motors tend to produce maximum torque close to zero rpm, with the torque diminishing as rotational speed rises (due to increasing friction and other constraints). Reciprocating steam engines can start heavy loads from zero RPM without a clutch. If a force is allowed to act through a distance, it is doing mechanical work. Similarly, if torque is allowed to act through a rotational distance, it is doing work. Mathematically, for rotation about a fixed axis through the center of mass, The work done by a variable force acting over a finite linear displacement is given by integrating the force with respect to an elemental linear displacement However, the infinitesimal linear displacement is related to a corresponding angular displacement and the radius vector as Substitution in the above expression for work gives The expression is a scalar triple product given by . 
An alternate expression for the same scalar triple product is But as per the definition of torque, Corresponding substitution in the expression of work gives, Since the parameter of integration has been changed from linear displacement to angular displacement, the limits of the integration also change correspondingly, giving If the torque and the angular displacement are in the same direction, then the scalar product reduces to a product of magnitudes; i.e., giving Algebraically, the equation may be rearranged to compute torque for a given angular speed and power output. Note that the power injected by the torque depends only on the instantaneous angular speed – not on whether the angular speed increases, decreases, or remains constant while the torque is being applied (this is equivalent to the linear case where the power injected by a force depends only on the instantaneous speed – not on the resulting acceleration, if any). In practice, this relationship can be observed in bicycles: Bicycles are typically composed of two road wheels, front and rear gears (referred to as sprockets) meshing with a circular chain, and a derailleur mechanism if the bicycle's transmission system allows multiple gear ratios to be used (i.e. multi-speed bicycle), all of which attached to the frame. A cyclist, the person who rides the bicycle, provides the input power by turning pedals, thereby cranking the front sprocket (commonly referred to as chainring). The input power provided by the cyclist is equal to the product of cadence (i.e. the number of pedal revolutions per minute) and the torque on spindle of the bicycle's crankset. The bicycle's drivetrain transmits the input power to the road wheel, which in turn conveys the received power to the road as the output power of the bicycle. Depending on the gear ratio of the bicycle, a (torque, rpm)input pair is converted to a (torque, rpm)output pair. By using a larger rear gear, or by switching to a lower gear in multi-speed bicycles, angular speed of the road wheels is decreased while the torque is increased, product of which (i.e. power) does not change. Also, the unit newton metre is dimensionally equivalent to the joule, which is the unit of energy. However, in the case of torque, the unit is assigned to a vector, whereas for energy, it is assigned to a scalar. A conversion factor may be necessary when using different units of power or torque. For example, if rotational speed (revolutions per time) is used in place of angular speed (radians per time), we multiply by a factor of 2π radians per revolution. In the following formulas, P is power, τ is torque, and ν (Greek letter nu) is rotational speed. Dividing by 60 seconds per minute gives us the following. where rotational speed is in revolutions per minute (rpm). Some people (e.g., American automotive engineers) use horsepower (imperial mechanical) for power, foot-pounds (lbf⋅ft) for torque and rpm for rotational speed. This results in the formula changing to: The constant below (in foot pounds per minute) changes with the definition of the horsepower; for example, using metric horsepower, it becomes approximately 32,550. Use of other units (e.g., BTU per hour for power) would require a different custom conversion factor. For a rotating object, the linear distance covered at the circumference of rotation is the product of the radius with the angle covered. That is: linear distance = radius × angular distance. And by definition, linear distance = linear speed × time = radius × angular speed × time. 
By the definition of torque: torque = radius × force. We can rearrange this to determine force = torque ÷ radius. These two values can be substituted into the definition of power:
power = force × linear speed = (torque ÷ radius) × (radius × angular speed) = torque × angular speed.
The radius r and time t have dropped out of the equation. However, angular speed must be in radians per unit of time, by the assumed direct relationship between linear speed and angular speed at the beginning of the derivation. If the rotational speed is measured in revolutions per unit of time, the linear speed and distance are increased proportionately by 2π in the above derivation to give:
power = torque × 2π × rotational speed.
If torque is in newton metres and rotational speed in revolutions per second, the above equation gives power in newton metres per second or watts. If Imperial units are used, and if torque is in pounds-force feet and rotational speed in revolutions per minute, the above equation gives power in foot pounds-force per minute. The horsepower form of the equation is then derived by applying the conversion factor 33,000 ft⋅lbf/min per horsepower:
power (hp) = torque (lbf⋅ft) × 2π × rotational speed (rpm) ÷ 33,000.
The Principle of Moments, also known as Varignon's theorem (not to be confused with the geometrical theorem of the same name), states that the sum of torques due to several forces applied to a single point is equal to the torque due to the sum (resultant) of the forces. Mathematically, this follows from the distributivity of the cross product:
r × F1 + r × F2 + ... = r × (F1 + F2 + ...).
Torque can be multiplied via three methods: by locating the fulcrum such that the length of a lever is increased; by using a longer lever; or by the use of a speed-reducing gearset or gear box. Such a mechanism multiplies torque, as rotation rate is reduced.
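A small sketch of the power–torque–speed relation derived above, in both the SI and the horsepower form. The example torque and engine speed are assumed values chosen only for illustration.

```java
/** Power from torque and rotational speed, in SI units and in horsepower form. */
public class TorquePower {
    static final double FT_LBF_PER_NM = 0.737562;       // 1 N·m is about 0.737562 lbf·ft
    static final double FT_LBF_PER_MIN_PER_HP = 33_000.0;

    /** P = torque × angular speed, with angular speed in rad/s. */
    static double powerWatts(double torqueNm, double rpm) {
        double omega = rpm * 2.0 * Math.PI / 60.0;      // revolutions/min -> rad/s
        return torqueNm * omega;
    }

    /** Horsepower form: hp = 2π × torque(lbf·ft) × rpm / 33,000. */
    static double powerHp(double torqueNm, double rpm) {
        double torqueLbfFt = torqueNm * FT_LBF_PER_NM;
        return 2.0 * Math.PI * torqueLbfFt * rpm / FT_LBF_PER_MIN_PER_HP;
    }

    public static void main(String[] args) {
        double torque = 200.0; // N·m, assumed example value
        double rpm = 3000.0;   // assumed example value
        System.out.printf("P = %.0f W%n", powerWatts(torque, rpm)); // about 62,832 W
        System.out.printf("P = %.1f hp%n", powerHp(torque, rpm));   // about 84.3 hp
    }
}
```

The two routes agree (62.8 kW is about 84 mechanical horsepower), which is just the unit conversion described in the text.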
<urn:uuid:66443546-010c-4dd8-a382-7b6939dc0ef3>
4.28125
3,548
Knowledge Article
Science & Tech.
42.642388
95,565,132
Outlook for Plant Invasions: Interactions with Other Agents of Global Change
It is the daunting task of this chapter to peer into the future and speculate about the prospects for plant invasions. We know that the globe is changing in its physical, biological, and cultural features. What impact might these changes have on the rate at which plant invaders arrive in our natural communities and on the effects they have there? One can certainly list some major changes underway in the environment and review the ways in which the arrival or establishment of invaders might be affected; that is how I will begin. However, it is clearly unwise to think of these factors in isolation from one another, and so I go on to discuss possible synergistic relationships among the factors. I will also try to put into a larger management context the entire issue of plant invasions as affected by global change.
Keywords: Global Change; Disturbance Regimen; Increase Carbon Dioxide; Stratospheric Ozone Depletion; Elevated Carbon Dioxide
<urn:uuid:f3db8de8-728b-4a3d-9010-8b0a4413d6e5>
2.609375
211
Truncated
Science & Tech.
29.867128
95,565,140
Did scientists find Zealandia beneath the waves? Their two-month expedition was a success. After a nine-week voyage to study the lost, submerged continent of Zealandia in the South Pacific, a team of 32 scientists from 12 countries has arrived in Hobart, Tasmania, aboard the research vessel JOIDES Resolution. Where were the scientists heading? A map shows the once-lost continent of Zealandia. he research vessel JOIDES Resolution about to leave Australia as it embarks on the expedition. Researchers affiliated with the International Ocean Discovery Program (IODP) mounted the expedition to explore Zealandia. IODP is a collaboration of scientists from 23 countries; the organization coordinates voyages to study the history of the Earth recorded in sediments and rocks beneath the seafloor. "Zealandia, a sunken continent long lost beneath the oceans, is giving up its 60 million-year-old secrets through scientific ocean drilling," said Jamie Allan, program director in the U.S. National Science Foundation's Division of Ocean Sciences, which supports IODP. "This expedition offered insights into Earth's history, ranging from mountain-building in New Zealand to the shifting movements of Earth's tectonic plates to changes in ocean circulation and global climate," Allan said. Earlier this year, Zealandia was confirmed as Earth's seventh continent, but little is known about it because it's submerged more than a kilometer (two-thirds of a mile) under the sea. Until now, the region has been sparsely surveyed and sampled. Expedition scientists drilled deep into the seabed at six sites in water depths of more than 1,250 meters (4,101 feet). They collected 2,500 meters (8,202 feet) of sediment cores from layers that record how the geography, volcanism and climate of Zealandia have changed over the last 70 million years. According to expedition co-chief scientist Gerald Dickens of Rice University in the U.S., significant new fossil discoveries were made. They prove that Zealandia was not always as deep beneath the waves as it is today. "More than 8,000 specimens were studied, and several hundred fossil species were identified," said Dickens. "The discovery of microscopic shells of organisms that lived in warm shallow seas, and of spores and pollen from land plants, reveal that the geography and climate of Zealandia were dramatically different in the past." The new discoveries show that the formation 40 to 50 million years ago of the "Pacific Ring of Fire," an active seafloor zone along the perimeter of the Pacific Ocean, caused dramatic changes in ocean depth and volcanic activity and buckled the seabed of Zealandia, according to Dickens. Expedition co-chief scientist Rupert Sutherland of Victoria University of Wellington in New Zealand said researchers had believed that Zealandia was submerged when it separated from Australia and Antarctica about 80 million years ago. "That is still probably accurate, but it is now clear that dramatic later events shaped the continent we explored on this voyage," Sutherland said. "Big geographic changes across northern Zealandia, which is about the same size as India, have implications for understanding questions such as how plants and animals dispersed and evolved in the South Pacific. "The discovery of past land and shallow seas now provides an explanation. There were pathways for animals and plants to move along." 
Studies of the sediment cores obtained during the expedition will focus on understanding how Earth's tectonic plates move and how the global climate system works. Records of Zealandia's history, expedition scientists said, will provide a sensitive test for computer models used to predict future changes in climate.
Cheryl Dybas | EurekAlert!
<urn:uuid:d2e4eda7-3cf4-4d2f-b8b0-ef4dfee48e26>
3.78125
1,326
Content Listing
Science & Tech.
40.594853
95,565,143
In the previous chapter, we examined an interesting aspect of threads. Before we used a thread pool, we were concerned with creating, controlling, and communicating between threads. With a thread pool, we were concerned with the task that we wanted to execute. Using an executor allowed us to focus on our program's logic instead of writing a lot of thread-related code. In this chapter, we examine this idea in another context. Task schedulers give us the opportunity to execute particular tasks at a fixed point in time in the future (or, more correctly, after a fixed point in time in the future). They also allow us to set up repeated execution of tasks. Once again, they free us from many of the low-level details of thread programming: we create a task, hand it off to a task scheduler, and don't worry about the rest. Java provides different kinds of task schedulers. Timer classes execute tasks (perhaps repeatedly) at a point in the future. These classes provide a basic task scheduling feature. J2SE 5.0 has a new, more flexible task scheduler that can be used to handle many tasks more effectively than the timer classes. In this chapter, we'll look into all of these classes. Interestingly, this is not the first time that we have been concerned with when a task is to be executed. Previously, we've just considered the timing as part of the task. We've seen tools that allow threads to wait for specific periods of time. Here is a quick ...
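A minimal sketch of the two kinds of scheduler described above, using java.util.Timer and the ScheduledExecutorService introduced in J2SE 5.0. This is an illustrative example written for this summary, not a listing from the book itself.

```java
// One-shot scheduling with Timer, and repeated scheduling with a ScheduledExecutorService.
import java.util.Timer;
import java.util.TimerTask;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class SchedulerSketch {
    public static void main(String[] args) throws InterruptedException {
        // Timer: run a task once, two seconds from now.
        Timer timer = new Timer();
        timer.schedule(new TimerTask() {
            public void run() {
                System.out.println("Timer task ran");
            }
        }, 2000);

        // ScheduledExecutorService (new in J2SE 5.0): repeat a task every second,
        // starting after an initial one-second delay.
        ScheduledExecutorService scheduler = Executors.newScheduledThreadPool(1);
        scheduler.scheduleAtFixedRate(new Runnable() {
            public void run() {
                System.out.println("Repeated task ran");
            }
        }, 1, 1, TimeUnit.SECONDS);

        // Let the examples run briefly, then shut both schedulers down.
        Thread.sleep(5000);
        timer.cancel();
        scheduler.shutdown();
    }
}
```

In both cases the calling code only describes the task and when it should run; the scheduler owns the threads, which is exactly the shift of responsibility the chapter describes.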
<urn:uuid:71983023-e2ca-4503-a54a-98b6619028fa>
3.875
312
Truncated
Software Dev.
56.978462
95,565,144
Stalagmites, which crystallize from water dropping onto the floors of caves, millimeter by millimeter, over thousands of years, leave behind a record of climate change encased in stone. Newly published research by Rhawn Denniston, professor of geology at Cornell College, and his research team, applied a novel technique to stalagmites from the Australian tropics to create a 2,200-year-long record of flood events that might also help predict future climate change. A paper by Denniston and 10 others, including a 2014 Cornell College graduate, is published this week in the journal Proceedings of the National Academy of Sciences. The article, “Extreme rainfall activity in the Australian tropics reflects changes in the El Niño/Southern Oscillation over the last two millennia,” presents a precisely dated stalagmite record of cave flooding events that are tied to tropical cyclones, which include storms such as hurricanes and typhoons. Denniston is one of few researchers worldwide using stalagmites to reconstruct past tropical cyclone activity, a field of research called paleotempestology. His work in Australia began in 2009 and was originally intended to focus on the chemical composition of the stalagmites as a means of reconstructing past changes in the intensity of Australian summer monsoon rains. But Denniston and his research team found more than just variations in the chemical composition of the stalagmites they examined; they discovered that the interiors of the stalagmites also contained prominent layers of mud. “Seeing mud accumulations like these was really unusual,” Denniston said. “There was no doubt that the mud layers came from the cave having flooded. The water stirred up the sediment and when the water receded, the mud coated everything in the cave—the floor, the walls, and the stalagmites. Then the stalagmites started forming again and the mud got trapped inside.” The stalagmites were precisely dated by Denniston, Cornell College geology majors, and Denniston’s colleagues at the University of New Mexico. Once the ages of the stalagmites were known, the mud layers were measured. Angelique Gonzales ’14, who worked with Denniston on the research and is third author on the paper, examined nearly 11 meters of stalagmites, measuring them in half millimeter increments and recording the location and thickness of mud layers. This work gave the team more than 2,000 years of data about the frequency of cave flooding. But the origins of the floods were still unclear. Given the area’s climatology, Denniston found that these rains could have come from the Australian monsoon or from tropical cyclones. “We were sort of stuck,” Denniston said, “but then I started working with Gabriele.” Gabriele Villarini, an assistant professor of engineering at the University of Iowa and the second author of the paper, studies extreme meteorological events, what drives the frequency and magnitude of those events, and their impact on policy and economics. With Denniston and Gonzales, Villarini examined historical rainfall records from a weather station near the cave. “The largest rainfall events, almost regardless of duration, are tied to tropical cyclones,” Villarini said. Next, they compared flood events recorded in a stalagmite that grew over the past several decades to historical records of tropical cyclones. This analysis revealed that timing of flood events in the cave was consistent with the passing of tropical cyclones through the area. 
Thus, the researchers interpreted the flood layers in their stalagmites largely as recording tropical cyclone activity. The resulting data tell scientists about more than just the frequencies of tropical cyclones in one part of Australia over the past 2,200 years. A major driver of year-to-year changes in tropical cyclones around the world is the El Niño/Southern Oscillation, which influences weather patterns across the globe. During El Niño events, for example, Australia and the Atlantic generally experience fewer tropical cyclones, while during La Niña events, the climatological opposite of El Niño, the regions see more tropical cyclones. “Our work, and that of several other researchers, reveals that the frequency of storms has changed over the past hundreds and thousands of years,” Denniston said. “But why? Could it have been due to El Niño? Direct observations only go back about a hundred years, and there hasn’t been much variation in the nature of El Niño/Southern Oscillation over that time. Further back there was more, and so our goal was to test the link between storms and El Niño in prehistory.” Denniston noted that the variations over time in the numbers of flood events recorded by his stalagmites matched reconstructed numbers of hurricanes in the Atlantic, Gulf of Mexico, and Caribbean. “Tropical cyclone activity in these regions responds similarly to El Niño, and previous studies had also suggested that some periods, such as those when we had lots of flood layers in our stalagmites, were likely characterized by more frequent La Niñas. Similarly, times with fewer storms were characterized by more frequent El Niños.” The results of this study mark an important step towards understanding how future climate change may be expressed. “It is difficult to use climate models to study hurricane activity, and so studies such as ours, which produced a record of storms under different climate conditions, are important for our understanding of future storm activity,” Denniston said. Gonzales, who is planning to pursue a Ph.D. in geology, said that her experience with Denniston and his research, including two senior seminars and an honors thesis, was valuable because she got both field and lab experience as she helped determine not just what had happened in the past, but what that meant. “There were a lot of different aspects to put this together—dating, measuring, literature review, and modeling”, she said. “It was really exciting.” Denniston is now gearing up to establish a detailed cave monitoring program in this and other regional caves. “We want to extend this study,” he said, “to examine what conditions trigger cave flooding.” In addition to Denniston, Villarini, and Gonzales, the other authors on the paper were Karl-Heinz Wyrwoll from the University of Western Australia, Victor J. Polyak from the University of New Mexico, Caroline C. Ummenhofer from the Woods Hole Oceanographic Institution, Matthew S. Lachniet from the University of Nevada Las Vegas, Alan D. Wanamaker Jr. from Iowa State University, William F. Humphreys from the Western Australian Museum, David Woods from the Australian Department of Parks and Wildlife, and John Cugley from the Australian Speleological Federation. 
Digital News Director James Kelly | newswise
<urn:uuid:562326af-cc51-462f-a0f7-fe18c991bd4c>
3.8125
2,093
Content Listing
Science & Tech.
35.885484
95,565,156
Astronomers have reported a whole new type of exploding star, or supernova, which seems to spew out calcium and titanium. While most press reports have focused on the calcium, it's the titanium that's really interesting: the finding could negate ongoing efforts to find signs of dark matter at the center of the Milky Way. The team of astronomers, led by Hagai Perets, now at the Harvard-Smithsonian Center for Astrophysics, and Avishay Gal-Yam of the Weizmann Institute of Science in Rehovot, Israel, presents evidence that supernova SN 2005E is distinct from the two main classes of supernovae: the Type Ia supernovae, thought to be old white dwarf stars that accrete matter from a companion until they undergo a thermonuclear explosion that blows them apart entirely; and Type Ib/c or Type II supernovae, thought to be hot, massive, short-lived stars that explode and leave behind black holes or neutron stars. Perets and colleagues describe a scenario involving a pair of orbiting white dwarf stars, in which a low-mass white dwarf steals helium from its companion until the mass thief becomes so hot and dense that the temperature and pressure ignite a thermonuclear explosion – a massive fusion bomb – that blows off at least the outer layers of the star, and perhaps blows the entire star to smithereens. The helium is transformed into elements such as calcium and titanium, eventually producing the building blocks of life for future generations of stars. The titanium is radioactive and emits positrons as it decays. Over the past couple of years, there have been reports from experiments such as ATIC and PAMELA of an excess of positrons coming from deep space. This excess, it has been argued, is a signature of dark matter particles colliding. But if the new supernova finding is anything to go by, these explosions could be quite commonplace, and they could be the source of the excess positrons. While this does not prove or disprove the existence of dark matter, it challenges the interpretation that the excess positrons come from the annihilation of dark matter particles. WMAP, which supports the "concordance (Λ-CDM) model" of the Universe (up to 73% dark energy, 23% dark matter, and only about 4% ordinary matter making up everything we can observe), has been under attack. Critics argue that invoking invisible, unknown components to support a Big Bang theory in which, by its own admission, over 90% of the universe it seeks to explain cannot even be detected is not what Karl Popper would have called "science."
Casey Kazan via http://www.newscientist.com/blogs/shortsharpscience/2010/05/new-supernova-class-may-underm.html
<urn:uuid:0d955646-c2cc-4f0b-b162-017b7c0a5e4d>
3.25
684
Personal Blog
Science & Tech.
36.10084
95,565,177
|Part of a series of articles about| In Lagrangian mechanics, the trajectory of a system of particles is derived by solving the Lagrange equations in one of two forms, either the Lagrange equations of the first kind, which treat constraints explicitly as extra equations, often using Lagrange multipliers; or the Lagrange equations of the second kind, which incorporate the constraints directly by judicious choice of generalized coordinates. In each case, a mathematical function called the Lagrangian is a function of the generalized coordinates, their time derivatives, and time, and contains the information about the dynamics of the system. No new views on physics are necessarily introduced in applying Lagrangian mechanics compared to Newtonian mechanics. Newton's laws can include non-conservative forces like friction; however, they must include constraint forces explicitly and are best suited to Cartesian coordinates. Lagrangian mechanics is ideal for systems with conservative forces and for bypassing constraint forces in any coordinate system. Dissipative and driven forces can be accounted for by splitting the external forces into a sum of potential and non-potential forces, leading to a set of modified Euler–Lagrange (EL) equations. Generalized coordinates can be chosen by convenience, to exploit symmetries in the system or the geometry of the constraints, which may simplify solving for the motion of the system. Lagrangian mechanics also reveals conserved quantities and their symmetries in a direct way, as a special case of Noether's theorem. Lagrangian mechanics is important not just for its broad applications, but also for its role in advancing deep understanding of physics, although Lagrange only sought to describe classical mechanics in his treatise Mécanique analytique, William Rowan Hamilton later developed Hamilton's principle that can be used to derive the Lagrange equation and was later recognized to be applicable to much of fundamental theoretical physics as well, particularly quantum mechanics and the theory of relativity. It can also be applied to other systems by analogy, for instance to coupled electric circuits with inductances and capacitances. Lagrangian mechanics is widely used to solve mechanical problems in physics and when Newton's formulation of classical mechanics is not convenient. Lagrangian mechanics applies to the dynamics of particles, while fields are described using a Lagrangian density. Lagrange's equations are also used in optimisation problems of dynamic systems; in mechanics, Lagrange's equations of the second kind are used much more than those of the first kind. - 1 Introduction - 2 From Newtonian to Lagrangian mechanics - 3 Properties of the Euler–Lagrange equation - 4 Examples - 5 Extensions to include non-conservative forces - 6 Other contexts and formulations - 7 See also - 8 Footnotes - 9 Notes - 10 References - 11 Further reading - 12 External links Suppose we have a bead sliding around on a wire, or a swinging simple pendulum, etc. If one tracks each of the massive objects (bead, pendulum bob, etc.) as a particle, calculation of the motion of the particle using Newtonian mechanics would require solving for the time-varying constraint force required to keep the particle in the constrained motion (reaction force exerted by the wire on the bead, or tension in the pendulum rod). 
For the same problem using Lagrangian mechanics, one looks at the path the particle can take and chooses a convenient set of independent generalized coordinates that completely characterize the possible motion of the particle, this choice eliminates the need for the constraint force to enter into the resultant system of equations. There are fewer equations since one is not directly calculating the influence of the constraint on the particle at a given moment. For a wide variety of physical systems, if the size and shape of a massive object are negligible, it is a useful simplification to treat it as a point particle, for a system of N point particles with masses m1, m2, ..., mN, each particle has a position vector, denoted r1, r2, ..., rN. Cartesian coordinates are often sufficient, so r1 = (x1, y1, z1), r2 = (x2, y2, z2) and so on. In three dimensional space, each position vector requires three coordinates to uniquely define the location of a point, so there are 3N coordinates to uniquely define the configuration of the system. These are all specific points in space to locate the particles, a general point in space is written r = (x, y, z). The velocity of each particle is how fast the particle moves along its path of motion, and is the time derivative of its position, thus Instead of forces, Lagrangian mechanics uses the energies in the system, the central quantity of Lagrangian mechanics is the Lagrangian, a function which summarizes the dynamics of the entire system. Overall, the Lagrangian has units of energy, but no single expression for all physical systems. Any function which generates the correct equations of motion, in agreement with physical laws, can be taken as a Lagrangian, it is nevertheless possible to construct general expressions for large classes of applications. The non-relativistic Lagrangian for a system of particles can be defined by Kinetic energy is the energy of the system's motion, and vk2 = vk · vk is the magnitude squared of velocity, equivalent to the dot product of the velocity with itself. The kinetic energy is a function only of the velocities vk, not the positions rk nor time t, so T = T(v1, v2, ...). The potential energy of the system reflects the energy of interaction between the particles, i.e. how much energy any one particle will have due to all the others and other external influences. For conservative forces (e.g. Newtonian gravity), it is a function of the position vectors of the particles only, so V = V(r1, r2, ...). For those non-conservative forces which can be derived from an appropriate potential (e.g. electromagnetic potential), the velocities will appear also, V = V(r1, r2, ..., v1, v2, ...). If there is some external field or external driving force changing with time, the potential will change with time, so most generally V = V(r1, r2, ..., v1, v2, ..., t). The above form of L does not hold in relativistic Lagrangian mechanics, and must be replaced by a function consistent with special or general relativity. Also, for dissipative forces another function must be introduced alongside L. One or more of the particles may each be subject to one or more holonomic constraints, such a constraint is described by an equation of the form f(r, t) = 0. If the number of constraints in the system is C, then each constraint has an equation, f1(r, t) = 0, f2(r, t) = 0, ... fC(r, t) = 0, each could apply to any of the particles. If particle k is subject to constraint i, then fi(rk, t) = 0. 
At any instant of time, the coordinates of a constrained particle are linked together and not independent, the constraint equations determine the allowed paths the particles can move along, but not where they are or how fast they go at every instant of time. Nonholonomic constraints depend on the particle velocities, accelerations, or higher derivatives of position. Lagrangian mechanics can only be applied to systems whose constraints, if any, are all holonomic. Three examples of nonholonomic constraints are: when the constraint equations are nonintegrable, when the constraints have inequalities, or with complicated non-conservative forces like friction. Nonholonomic constraints require special treatment, and one may have to revert to Newtonian mechanics, or use other methods. If T or V or both depend explicitly on time due to time-varying constraints or external influences, the Lagrangian L(r1, r2, ... v1, v2, ... t) is explicitly time-dependent. If neither the potential nor the kinetic energy depend on time, then the Lagrangian L(r1, r2, ... v1, v2, ...) is explicitly independent of time. In either case, the Lagrangian will always have implicit time-dependence through the generalized coordinates. With these definitions Lagrange's equations of the first kind are Lagrange's equations (First kind) where k = 1, 2, ..., N labels the particles, there is a Lagrange multiplier λi for each constraint equation fi, and are each shorthands for a vector of partial derivatives ∂/∂ with respect to the indicated variables (not a derivative with respect to the entire vector).[nb 1] Each overdot is a shorthand for a time derivative, this procedure does increase the number of equations to solve compared to Newton's laws, from 3N to 3N + C, because there are 3N coupled second order differential equations in the position coordinates and multipliers, plus C constraint equations. However, when solved alongside the position coordinates of the particles, the multipliers can yield information about the constraint forces, the coordinates do not need to be eliminated by solving the constraint equations. In the Lagrangian, the position coordinates and velocity components are all independent variables, and derivatives of the Lagrangian are taken with respect to these separately according to the usual differentiation rules (e.g. the derivative of L with respect to the z-velocity component of particle 2, vz2 = dz2/dt, is just that; no awkward chain rules or total derivatives need to be used to relate the velocity component to the corresponding coordinate z2). In each constraint equation, one coordinate is redundant because it is determined from the other two, the number of independent coordinates is therefore n = 3N − C. We can transform each position vector to a common set of n generalized coordinates, conveniently written as an n-tuple q = (q1, q2, ... qn), by expressing each position vector, and hence the position coordinates, as functions of the generalized coordinates and time, The vector q is a point in the configuration space of the system, the time derivatives of the generalized coordinates are called the generalized velocities, and for each particle the transformation of its velocity vector, the total derivative of its position with respect to time, is Given this vk, the kinetic energy in generalized coordinates depends on the generalized velocities, generalized coordinates, and time if the position vectors depend explicitly on time due to time-varying constraints, so T = T(q, dq/dt, t). 
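As a concrete illustration of how a holonomic constraint is absorbed into a single generalized coordinate, consider the plane pendulum. This worked example is added here for illustration; the symbols m, ℓ, g and θ are introduced only for it, and it uses the Euler–Lagrange equation of the second kind given in the next paragraph, bypassing the Lagrange multiplier of the first-kind equations entirely.

```latex
% Worked example (illustrative): plane pendulum of length l, mass m, gravity g.
% Holonomic constraint in Cartesian coordinates:
%   f(x, y) = x^2 + y^2 - l^2 = 0
% One generalized coordinate, the angle q = theta, removes the constraint:
x = \ell \sin\theta, \qquad y = -\ell \cos\theta
% Kinetic and potential energy expressed through theta:
T = \tfrac{1}{2} m \ell^2 \dot{\theta}^2, \qquad V = -m g \ell \cos\theta
% Lagrangian:
L = T - V = \tfrac{1}{2} m \ell^2 \dot{\theta}^2 + m g \ell \cos\theta
% Euler--Lagrange equation (Lagrange's equation of the second kind, given below):
\frac{d}{dt}\frac{\partial L}{\partial \dot{\theta}} - \frac{\partial L}{\partial \theta}
  = m \ell^2 \ddot{\theta} + m g \ell \sin\theta = 0
% i.e. the familiar pendulum equation, with the rod tension appearing nowhere:
\ddot{\theta} + \frac{g}{\ell} \sin\theta = 0
```

One coordinate and one equation replace two Cartesian coordinates plus a constraint force, which is the practical payoff of the second-kind formulation discussed next.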
Lagrange's equations (Second kind) are mathematical results from the calculus of variations, which can also be used in mechanics. Substituting in the Lagrangian L(q, dq/dt, t), gives the equations of motion of the system. The number of equations has decreased compared to Newtonian mechanics, from 3N to n = 3N − C coupled second order differential equations in the generalized coordinates. These equations do not include constraint forces at all, only non-constraint forces need to be accounted for. Although the equations of motion include partial derivatives, the results of the partial derivatives are still ordinary differential equations in the position coordinates of the particles. The total time derivative denoted d/dt often involves implicit differentiation. Both equations are linear in the Lagrangian, but will generally be nonlinear coupled equations in the coordinates. From Newtonian to Lagrangian mechanics For simplicity, Newton's laws can be illustrated for one particle without much loss of generality (for a system of N particles, all of these equations apply to each particle in the system), the equation of motion for particle of mass m is Newton's second law of 1687, in modern vector notation where a is its acceleration and F the resultant force acting on it. In three spatial dimensions, this is a system of three coupled second order ordinary differential equations to solve, since there are three components in this vector equation. The solutions are the position vectors r of the particles at time t, subject to the initial conditions of r and v when t = 0. Newton's laws are easy to use in Cartesian coordinates, but Cartesian coordinates are not always convenient, and for other coordinate systems the equations of motion can become complicated; in a set of curvilinear coordinates ξ = (ξ1, ξ2, ξ3), the law in tensor index notation is the "Lagrangian form" is the kinetic energy of the particle, and gbc the covariant components of the metric tensor of the curvilinear coordinate system. All the indices a, b, c, each take the values 1, 2, 3. Curvilinear coordinates are not the same as generalized coordinates. It may seem like an overcomplication to cast Newton's law in this form, but there are advantages, the acceleration components in terms of the Christoffel symbols can be avoided by evaluating derivatives of the kinetic energy instead. If there is no resultant force acting on the particle, F = 0, it does not accelerate, but moves with constant velocity in a straight line. Mathematically, the solutions of the differential equation are geodesics, the curves of extremal length between two points in space (These may end up being minimal so the shortest paths, but that is not necessary); in flat 3d real space the geodesics are simply straight lines. So for a free particle, Newton's second law coincides with the geodesic equation, and states free particles follow geodesics, the extremal trajectories it can move along. If the particle is subject to forces, F ≠ 0, the particle accelerates due to forces acting on it, and deviates away from the geodesics it would follow if free, with appropriate extensions of the quantities given here in flat 3d space to 4d curved spacetime, the above form of Newton's law also carries over to Einstein's general relativity, in which case free particles follow geodesics in curved spacetime that are no longer "straight lines" in the ordinary sense. 
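The second-kind equations are mechanical enough to be generated automatically. The helper below, written with sympy as an illustrative sketch, returns d/dt(∂L/∂q̇j) − ∂L/∂qj = 0 for each generalized coordinate; applied to a particle in an unspecified one-dimensional potential V(x), it recovers Newton's second law:

```python
import sympy as sp

t = sp.symbols('t')

def euler_lagrange(L, coords):
    """Lagrange's equations of the second kind for the given coordinates."""
    eqs = []
    for q in coords:
        qd = sp.diff(q, t)
        eqs.append(sp.Eq(sp.diff(sp.diff(L, qd), t) - sp.diff(L, q), 0))
    return eqs

# Example: one particle of mass m in a one-dimensional potential V(x).
m = sp.symbols('m', positive=True)
x = sp.Function('x')(t)
V = sp.Function('V')                      # potential left unspecified
L = sp.Rational(1, 2) * m * sp.diff(x, t)**2 - V(x)

for eq in euler_lagrange(L, [x]):
    sp.pprint(eq)                         # m*x'' + dV/dx = 0, i.e. Newton's law
```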
However, we still need to know the total resultant force F acting on the particle, which in turn requires the resultant non-constraint force N plus the resultant constraint force C, The constraint forces can be complicated, since they will generally depend on time. Also, if there are constraints, the curvilinear coordinates are not independent but related by one or more constraint equations. The constraint forces can either be eliminated from the equations of motion so only the non-constraint forces remain, or included by including the constraint equations in the equations of motion. A fundamental result in analytical mechanics is D'Alembert's principle, introduced in 1708 by Jacques Bernoulli to understand static equilibrium, and developed by D'Alembert in 1743 to solve dynamical problems. The principle asserts for N particles The δrk are virtual displacements, by definition they are infinitesimal changes in the configuration of the system consistent with the constraint forces acting on the system at an instant of time, i.e. in such a way that the constraint forces maintain the constrained motion. They are not the same as the actual displacements in the system, which are caused by the resultant constraint and non-constraint forces acting on the particle to accelerate and move it.[nb 2] Virtual work is the work done along a virtual displacement for any force (constraint or non-constraint). Since the constraint forces act perpendicular to the motion of each particle in the system to maintain the constraints, the total virtual work by the constraint forces acting on the system is zero;[nb 3] Thus D'Alembert's principle allows us to concentrate on only the applied non-constraint forces, and exclude the constraint forces in the equations of motion, the form shown is also independent of the choice of coordinates. However, it cannot be readily used to set up the equations of motion in an arbitrary coordinate system since the displacements δrk might be connected by a constraint equation, which prevents us from setting the N individual summands to 0. We will therefore seek a system of mutually independent coordinates for which the total sum will be 0 if and only if the individual summands are 0. Setting each of the summands to 0 will eventually give us our separated equations of motion. Equations of motion from D'Alembert's principle If there are constraints on particle k, then since the coordinates of the position rk = (xk, yk, zk) are linked together by a constraint equation, so are those of the virtual displacements δrk = (δxk, δyk, δzk). Since the generalized coordinates are independent, we can avoid the complications with the δrk by converting to virtual displacements in the generalized coordinates. These are related in the same form as a total differential, There is no partial time derivative with respect to time multiplied by a time increment, since this is a virtual displacement, one along the constraints in an instant of time. The first term in D'Alembert's principle above is the virtual work done by the non-constraint forces Nk along the virtual displacements δrk, and can without loss of generality be converted into the generalized analogues by the definition of generalized forces This is half of the conversion to generalized coordinates, it remains to convert the acceleration term into generalized coordinates, which is not immediately obvious. 
Recalling the Lagrange form of Newton's second law, the partial derivatives of the kinetic energy with respect to the generalized coordinates and velocities can be found to give the desired result; Now D'Alembert's principle is in the generalized coordinates as required, These equations are equivalent to Newton's laws for the non-constraint forces, the generalized forces in this equation are derived from the non-constraint forces only – the constraint forces have been excluded from D'Alembert's principle and do not need to be found. The generalized forces may be non-conservative, provided they satisfy D'Alembert's principle. Euler–Lagrange equations and Hamilton's principle For a non-conservative force which depends on velocity, it may be possible to find a potential energy function V that depends on positions and velocities. If the generalized forces Qi can be derived from a potential V such that equating to Lagrange's equations and defining the Lagrangian as L = T − V obtains Lagrange's equations of the second kind or the Euler–Lagrange equations of motion However, the Euler–Lagrange equations can only account for non-conservative forces if a potential can be found as shown, this may not always be possible for non-conservative forces, and Lagrange's equations do not involve any potential, only generalized forces; therefore they are more general than the Euler–Lagrange equations. The Euler–Lagrange equations also follow from the calculus of variations, the variation of the Lagrangian is which has a similar form to the total differential of L, but the virtual displacements and their time derivatives replace differentials, and there is no time increment in accordance with the definition of the virtual displacements. An integration by parts with respect to time can transfer the time derivative of δqj to the ∂L/∂(dqj/dt), in the process exchanging d(δqj)/dt for δqj, allowing the independent virtual displacements to be factorized from the derivatives of the Lagrangian, Now, if the condition δqj(t1) = δqj(t2) = 0 holds for all j, the terms not integrated are zero. If in addition the entire time integral of δL is zero, then because the δqj are independent, and the only way for a definite integral to be zero is if the integrand equals zero, each of the coefficients of δqj must also be zero. Then we obtain the equations of motion, this can be summarized by Hamilton's principle; which is a functional; it takes in the Lagrangian function for all times between t1 and t2 and returns a scalar value. Its dimensions are the same as [ angular momentum ], [energy]·[time], or [length]·[momentum], with this definition Hamilton's principle is Thus, instead of thinking about particles accelerating in response to applied forces, one might think of them picking out the path with a stationary action, with the end points of the path in configuration space held fixed at the initial and final times. Hamilton's principle is sometimes referred to as the principle of least action, however the action functional need only be stationary, not necessarily a maximum or a minimum value. Any variation of the functional gives an increase in the functional integral of the action. Historically, the idea of finding the shortest path a particle can follow subject to a force motivated the first applications of the calculus of variations to mechanical problems, such as the Brachistochrone problem solved by Jean Bernoulli in 1696, as well as Leibniz, Daniel Bernoulli, L'Hôpital around the same time, and Newton the following year. 
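Hamilton's principle can also be checked numerically. The sketch below discretizes one-dimensional free fall between fixed end points, evaluates the action on the true trajectory and on a family of varied paths that vanish at the end points, and finds the action stationary (here a minimum) on the true path; the grid spacing and the sinusoidal variation are arbitrary illustrative choices:

```python
import numpy as np

m, g = 1.0, 9.81
t = np.linspace(0.0, 1.0, 2001)
dt = t[1] - t[0]

def action(y):
    """Discretized action S = integral of (1/2 m v^2 - m g y) dt."""
    v = np.gradient(y, dt)
    lagrangian = 0.5 * m * v**2 - m * g * y
    return 0.5 * dt * (lagrangian[:-1] + lagrangian[1:]).sum()   # trapezoid rule

# True free-fall path with the end points held fixed at y(0) = y(1) = 0.
y_true = 0.5 * g * t * (1.0 - t)          # satisfies y'' = -g

# Varied paths: add a bump that vanishes at both end points.
bump = np.sin(np.pi * t)
for eps in (-0.10, -0.05, 0.0, 0.05, 0.10):
    print(f"eps = {eps:+.2f}   S = {action(y_true + eps * bump):.6f}")
# S is smallest at eps = 0: the true trajectory makes the action stationary.
```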
Newton himself was thinking along the lines of the variational calculus, but did not publish, these ideas in turn lead to the variational principles of mechanics, of Fermat, Maupertuis, Euler, Hamilton, and others. Hamilton's principle can be applied to nonholonomic constraints if the constraint equations can be put into a certain form, a linear combination of first order differentials in the coordinates, the resulting constraint equation can be rearranged into first order differential equation. This will not be given here. Lagrange multipliers and constraints The Lagrangian L can be varied in the Cartesian rk coordinates, for N particles, Hamilton's principle is still valid even if the coordinates L is expressed in are not independent, here rk, but the constraints are still assumed to be holonomic. As always the end points are fixed δrk(t1) = δrk(t2) = 0 for all k. What cannot be done is to simply equate the coefficients of δrk to zero because the δrk are not independent. Instead, the method of Lagrange multipliers can be used to include the constraints. Multiplying each constraint equation fi(rk, t) = 0 by a Lagrange multiplier λi for i = 1, 2, ..., C, and adding the results to the original Lagrangian, gives the new Lagrangian The Lagrange multipliers are arbitrary functions of time t, but not functions of the coordinates rk, so the multipliers are on equal footing with the position coordinates. Varying this new Lagrangian and integrating with respect to time gives The introduced multipliers can be found so that the coefficients of δrk are zero, even though the rk are not independent. The equations of motion follow, from the preceding analysis, obtaining the solution to this integral is equivalent to the statement which are Lagrange's equations of the first kind. Also, the λi Euler-Lagrange equations for the new Lagrangian return the constraint equations For the case of a conservative force given by the gradient of some potential energy V, a function of the rk coordinates only, substituting the Lagrangian L = T − V gives and identifying the derivatives of kinetic energy as the (negative of the) resultant force, and the derivatives of the potential equaling the non-constraint force, it follows the constraint forces are thus giving the constraint forces explicitly in terms of the constraint equations and the Lagrange multipliers. Properties of the Euler–Lagrange equation In some cases, the Lagrangian has properties which can provide information about the system without solving the equations of motion, these follow from Lagrange's equations of the second kind. The Lagrangian of a given system is not unique. A Lagrangian L can be multiplied by a nonzero constant a, an arbitrary constant b can be added, and the new Lagrangian aL + b will describe exactly the same motion as L. A less obvious result is that two Lagrangians describing the same system can differ by the total derivative (not partial) of some function f(q, t) with respect to time; Invariance under point transformations Given a set of generalized coordinates q, if we change these variables to a new set of generalized coordinates s according to a point transformation q = q(s, t), the new Lagrangian L′ is a function of the new coordinates This may simplify the equations of motion. 
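The statement that two Lagrangians differing by the total time derivative of some f(q, t) describe the same motion is easy to verify symbolically; in the sketch below the oscillator Lagrangian and the particular f are arbitrary choices:

```python
import sympy as sp

t = sp.symbols('t')
m, k = sp.symbols('m k', positive=True)
q = sp.Function('q')(t)
qd = sp.diff(q, t)

def el_expression(L):
    """Left-hand side of the Euler-Lagrange equation for the coordinate q."""
    return sp.simplify(sp.diff(sp.diff(L, qd), t) - sp.diff(L, q))

L1 = sp.Rational(1, 2) * m * qd**2 - sp.Rational(1, 2) * k * q**2   # oscillator
f = q**3 * sp.sin(t)                 # an arbitrary f(q, t)
L2 = L1 + sp.diff(f, t)              # differs from L1 by a total time derivative

print(sp.simplify(el_expression(L1) - el_expression(L2)))   # -> 0
```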
Cyclic coordinates and conserved momenta An important property of the Lagrangian is that conserved quantities can easily be read off from it, the generalized momentum "canonically conjugate to" the coordinate qi is defined by If the Lagrangian L does not depend on some coordinate qi, it follows immediately from the Euler–Lagrange equations that and integrating shows the corresponding generalized momentum equals a constant, a conserved quantity. This is a special case of Noether's theorem, such coordinates are called "cyclic" or "ignorable". For example, a system may have a Lagrangian where r and z are lengths along straight lines, s is an arc length along some curve, and θ and φ are angles. Notice z, s, and φ are all absent in the Lagrangian even though their velocities are not. Then the momenta are all conserved quantities. The units and nature of each generalized momentum will depend on the corresponding coordinate; in this case pz is a translational momentum in the z direction, ps is also a translational momentum along the curve s is measured, and pφ is an angular momentum in the plane the angle φ is measured in. However complicated the motion of the system is, all the coordinates and velocities will vary in such a way that these momenta are conserved. Taking the total derivative of the Lagrangian L = T − V with respect to time leads to the general result If the entire Lagrangian is explicitly independent of time, it follows the partial time derivative of the Lagrangian is zero, ∂L/∂t = 0, so the quantity under the total time derivative in brackets must be a constant for all times during the motion of the system, and it also follows the kinetic energy is a homogenous function of degree 2 in the generalized velocities. If in addition the potential V is only a function of coordinates and independent of velocities, it follows by direct calculation, or use of Euler's theorem for homogenous functions, that Under all these circumstances, the constant is the total conserved energy of the system. The kinetic and potential energies still change as the system evolves, but the motion of the system will be such that their sum, the total energy, is constant, this is a valuable simplification, since the energy E is a constant of integration that counts as an arbitrary constant for the problem, and it may be possible to integrate the velocities from this energy relation to solve for the coordinates. In the case the velocity or kinetic energy or both depends on time, then the energy is not conserved. and time is scaled by a factor β, t′ = βt, then the velocities vk are scaled by a factor of α/β and the kinetic energy T by (α/β)2. 
The entire Lagrangian has been scaled by the same factor if Since the lengths and times have been scaled, the trajectories of the particles in the system follow geometrically similar paths differing in size, the length l traversed in time t in the original trajectory corresponds to a new length l′ traversed in time t′ in the new trajectory, given by the ratios For a given system, if two subsystems A and B are non-interacting, the Lagrangian L of the overall system is the sum of the Lagrangians LA and LB for the subsystems: If they do interact this is not possible; in some situations, it may be possible to separate the Lagrangian of the system L into the sum of non-interacting Lagrangians, plus another Lagrangian LAB containing information about the interaction, This may be physically motivated by taking the non-interacting Lagrangians to be kinetic energies only, while the interaction Lagrangian is the system's total potential energy. Also, in the limiting case of negligible interaction, LAB tends to zero reducing to the non-interacting case above. The extension to more than two non-interacting subsystems is straightforward – the overall Lagrangian is the sum of the separate Lagrangians for each subsystem. If there are interactions, then interaction Lagrangians may be added. The following examples apply Lagrange's equations of the second kind to mechanical problems. If there are more particles, in accordance with the above results, the total kinetic energy is a sum over all the particle kinetic energies, and the potential is a function of all the coordinates. The Lagrangian of the particle can be written The equations of motion for the particle are found by applying the Euler–Lagrange equation, for the x coordinate and similarly for the y and z coordinates. Collecting the equations in vector form we find which is Newton's second law of motion for a particle subject to a conservative force. Polar coordinates in 2d and 3d The Lagrangian for the above problem in spherical coordinates, with a central potential, is so the Euler–Lagrange equations are The φ coordinate is cyclic since it does not appear in the Lagrangian, so the conserved momentum in the system is the angular momentum in which r, θ and dφ/dt can all vary with time, but only in such a way that pφ is constant. Pendulum on a movable support Consider a pendulum of mass m and length ℓ, which is attached to a support with mass M, which can move along a line in the x-direction. Let x be the coordinate along the line of the support, and let us denote the position of the pendulum by the angle θ from the vertical, the coordinates and velocity components of the pendulum bob are The generalized coordinates can be taken to be x and θ, the kinetic energy of the system is then and the potential energy is giving the Lagrangian Since x is absent from the Lagrangian, it is a cyclic coordinate, the conserved momentum is and the Lagrange equation for the support coordinate x is The Lagrange equation for the angle θ is These equations may look quite complicated, but finding them with Newton's laws would have required carefully identifying all forces, which would have been much more laborious and prone to errors. By considering limit cases, the correctness of this system can be verified: For example, should give the equations of motion for a simple pendulum that is at rest in some inertial frame, while should give the equations for a pendulum in a constantly accelerating system, etc. 
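For the pendulum on a movable support just described, the bookkeeping can be handed to sympy: the sketch below builds the Lagrangian from the coordinates x and θ, confirms that x is cyclic, and prints the conserved momentum together with the two Lagrange equations (symbols as in the text):

```python
import sympy as sp

t = sp.symbols('t')
M, m, l, g = sp.symbols('M m l g', positive=True)
x, th = sp.Function('x')(t), sp.Function('theta')(t)
xd, thd = sp.diff(x, t), sp.diff(th, t)

# Bob position in terms of the cart coordinate x and the angle theta.
xb, yb = x + l * sp.sin(th), -l * sp.cos(th)

T = (sp.Rational(1, 2) * M * xd**2
     + sp.Rational(1, 2) * m * (sp.diff(xb, t)**2 + sp.diff(yb, t)**2))
V = m * g * yb
L = sp.expand(T - V)

print(sp.diff(L, x))                  # -> 0, so x is a cyclic coordinate
p_x = sp.simplify(sp.diff(L, xd))     # conserved momentum conjugate to x
print(p_x)                            # -> (M + m)*x' + m*l*cos(theta)*theta'

for q, qd in ((x, xd), (th, thd)):
    eq = sp.Eq(sp.simplify(sp.diff(sp.diff(L, qd), t) - sp.diff(L, q)), 0)
    sp.pprint(eq)
```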
Furthermore, it is trivial to obtain the results numerically, given suitable starting conditions and a chosen time step, by stepping through the results iteratively. Two-body central force problem Two bodies of masses m1 and m2 with position vectors r1 and r2 are in orbit about each other due to an attractive central potential V. We may write down the Lagrangian in terms of the position coordinates as they are, but it is an established procedure to convert the two-body problem into a one-body problem as follows. Introduce the Jacobi coordinates; the separation of the bodies r = r2 − r1 and the location of the center of mass R = (m1r1 + m2r2)/(m1 + m2). The Lagrangian is then[nb 4] where M = m1 + m2 is the total mass, μ = m1m2/(m1 + m2) is the reduced mass, and V the potential of the radial force, which depends only on the magnitude of the separation |r| = |r2 − r1|. The Lagrangian splits into a center-of-mass term Lcm and a relative motion term Lrel. The Euler–Lagrange equation for R is simply which states the center of mass moves in a straight line at constant velocity. Since the relative motion only depends on the magnitude of the separation, it is ideal to use polar coordinates (r, θ) and take r = |r|, so θ is an ignorable coordinate with the corresponding conserved (angular) momentum The radial coordinate r and angular velocity dθ/dt can vary with time, but only in such a way that ℓ is constant. The Lagrange equation for r is This equation is identical to the radial equation obtained using Newton's laws in a co-rotating reference frame, that is, a frame rotating with the reduced mass so it appears stationary. Eliminating the angular velocity dθ/dt from this radial equation, which is the equation of motion for a one-dimensional problem in which a particle of mass μ is subjected to the inward central force − dV/dr and a second outward force, called in this context the centrifugal force Of course, if one remains entirely within the one-dimensional formulation, ℓ enters only as some imposed parameter of the external outward force, and its interpretation as angular momentum depends upon the more general two-dimensional problem from which the one-dimensional problem originated. If one arrives at this equation using Newtonian mechanics in a co-rotating frame, the interpretation is evident as the centrifugal force in that frame due to the rotation of the frame itself. If one arrives at this equation directly by using the generalized coordinates (r, θ) and simply following the Lagrangian formulation without thinking about frames at all, the interpretation is that the centrifugal force is an outgrowth of using polar coordinates. As Hildebrand says: "Since such quantities are not true physical forces, they are often called inertia forces. Their presence or absence depends, not upon the particular problem at hand, but upon the coordinate system chosen." In particular, if Cartesian coordinates are chosen, the centrifugal force disappears, and the formulation involves only the central force itself, which provides the centripetal force for a curved motion. This viewpoint, that fictitious forces originate in the choice of coordinates, often is expressed by users of the Lagrangian method, this view arises naturally in the Lagrangian approach, because the frame of reference is (possibly unconsciously) selected by the choice of coordinates. For example, see for a comparison of Lagrangians in an inertial and in a noninertial frame of reference. 
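The two-body reduction described at the start of this example can also be checked numerically: integrating two bodies that attract each other through a 1/r potential, the centre of mass moves uniformly while the angular momentum of the relative coordinate, carried by the reduced mass μ, stays constant. The masses, force constant and initial conditions below are arbitrary illustrative numbers:

```python
import numpy as np
from scipy.integrate import solve_ivp

m1, m2, k = 1.0, 3.0, 1.0              # masses and strength of V = -k/|r|
mu = m1 * m2 / (m1 + m2)               # reduced mass

def rhs(t, s):
    r1, r2, v1, v2 = s[:2], s[2:4], s[4:6], s[6:8]
    d = r2 - r1
    f = k * d / np.linalg.norm(d)**3   # force on body 1, directed toward body 2
    return np.concatenate([v1, v2, f / m1, -f / m2])

s0 = np.array([0.0, 0.0, 1.0, 0.0, 0.0, -0.3, 0.1, 0.6])   # a bound initial state
sol = solve_ivp(rhs, (0.0, 20.0), s0, rtol=1e-10, atol=1e-12)

v1, v2 = sol.y[4:6], sol.y[6:8]
V_cm = (m1 * v1 + m2 * v2) / (m1 + m2)                   # centre-of-mass velocity
rel, relv = sol.y[2:4] - sol.y[0:2], v2 - v1
l_rel = mu * (rel[0] * relv[1] - rel[1] * relv[0])       # angular momentum of mu

print(np.ptp(V_cm, axis=1))   # ~0: R moves in a straight line at constant velocity
print(np.ptp(l_rel))          # ~0: the relative motion conserves its angular momentum
```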
See also the discussion of "total" and "updated" Lagrangian formulations in. Unfortunately, this usage of "inertial force" conflicts with the Newtonian idea of an inertial force; in the Newtonian view, an inertial force originates in the acceleration of the frame of observation (the fact that it is not an inertial frame of reference), not in the choice of coordinate system. To keep matters clear, it is safest to refer to the Lagrangian inertial forces as generalized inertial forces, to distinguish them from the Newtonian vector inertial forces, that is, one should avoid following Hildebrand when he says (p. 155) "we deal always with generalized forces, velocities accelerations, and momenta. For brevity, the adjective "generalized" will be omitted frequently." It is known that the Lagrangian of a system is not unique. Within the Lagrangian formalism the Newtonian fictitious forces can be identified by the existence of alternative Lagrangians in which the fictitious forces disappear, sometimes found by exploiting the symmetry of the system. A test particle is a particle whose mass and charge are assumed to be so small that its effect on external system is insignificant, it is often a hypothetical simplified point particle with no properties other than mass and charge. Real particles like electrons and up quarks are more complex and have additional terms in their Lagrangians. The Lagrangian for a charged particle with electrical charge q, interacting with an electromagnetic field, is the prototypical example of a velocity-dependent potential. The electric scalar potential ϕ = ϕ(r, t) and magnetic vector potential A = A(r, t) are defined from the electric field E = E(r, t) and magnetic field B = B(r, t) as follows; The Lagrangian of a massive charged test particle in an electromagnetic field is which produces the Lorentz force law An interesting detail in this example is the generalized momentum conjugate to r is the ordinary momentum plus a contribution from the A field, If r is cyclic, which happens if the ϕ and A fields are uniform (independent of position), then this expression for p given here is the conserved momentum, while the usual quantity mv is not. This relation is also used in the minimal coupling prescription in quantum mechanics and quantum field theory. Extensions to include non-conservative forces In a more general formulation, the forces could be both conservative and viscous. If an appropriate transformation can be found from the Fi, Rayleigh suggests using a dissipation function, D, of the following form: where Cjk are constants that are related to the damping coefficients in the physical system, though not necessarily equal to them. If D is defined this way, then Other contexts and formulations The ideas in Lagrangian mechanics have numerous applications in other areas of physics, and can adopt generalized results from the calculus of variations. Alternative formulations of classical mechanics A closely related formulation of classical mechanics is Hamiltonian mechanics, the Hamiltonian is defined by and can be obtained by performing a Legendre transformation on the Lagrangian, which introduces new variables canonically conjugate to the original variables. For example, given a set of generalized coordinates, the variables canonically conjugate are the generalized momenta, this doubles the number of variables, but makes differential equations first order. The Hamiltonian is a particularly ubiquitous quantity in quantum mechanics (see Hamiltonian (quantum mechanics)). 
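The Legendre transformation that produces the Hamiltonian can be carried out explicitly for simple systems. The sketch below does it for a harmonic oscillator, forming p = ∂L/∂q̇ and eliminating the velocity in favour of the momentum; the system is an arbitrary illustrative choice:

```python
import sympy as sp

q, qdot, p = sp.symbols('q qdot p')
m, k = sp.symbols('m k', positive=True)

L = sp.Rational(1, 2) * m * qdot**2 - sp.Rational(1, 2) * k * q**2   # oscillator

p_def = sp.diff(L, qdot)                          # generalized momentum, here m*qdot
qdot_of_p = sp.solve(sp.Eq(p, p_def), qdot)[0]    # invert: qdot = p/m

# Hamiltonian H = p*qdot - L with the velocity eliminated in favour of p.
H = sp.simplify((p * qdot - L).subs(qdot, qdot_of_p))
print(H)                                          # -> p**2/(2*m) + k*q**2/2
```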
Routhian mechanics is a hybrid formulation of Lagrangian and Hamiltonian mechanics, which is not often used in practice but an efficient formulation for cyclic coordinates. Momentum space formulation The Euler–Lagrange equations can also be formulated in terms of the generalized momenta rather than generalized coordinates. Performing a Legendre transformation on the generalized coordinate Lagrangian L(q, dq/dt, t) obtains the generalized momenta Lagrangian L′(p, dp/dt, t) in terms of the original Lagrangian, as well the EL equations in terms of the generalized momenta. Both Lagrangians contain the same information, and either can be used to solve for the motion of the system; in practice generalized coordinates are more convenient to use and interpret than generalized momenta. Higher derivatives of generalized coordinates There is no reason to restrict the derivatives of generalized coordinates to first order only, it is possible to derive modified EL equations for a Lagrangian containing higher order derivatives, see Euler-Lagrange equation for details. Lagrangian mechanics can be applied to geometrical optics, by applying variational principles to rays of light in a medium, and solving the EL equations gives the equations of the paths the light rays follow. Lagrangian mechanics can be formulated in special relativity and general relativity, some features of Lagrangian mechanics are retained in the relativistic theories but difficulties quickly appear in other respects. In particular, the EL equations take the same form, and the connection between cyclic coordinates and conserved momenta still applies, however the Lagrangian must be modified and is not simply the kinetic minus the potential energy of a particle. Also, it is not straightforward to handle multiparticle systems in a manifestly covariant way, it may be possible if a particular frame of reference is singled out. In quantum mechanics, action and quantum-mechanical phase are related via Planck's constant, and the principle of stationary action can be understood in terms of constructive interference of wave functions. In 1948, Feynman discovered the path integral formulation extending the principle of least action to quantum mechanics for electrons and photons. In this formulation, particles travel every possible path between the initial and final states; the probability of a specific final state is obtained by summing over all possible trajectories leading to it. In the classical regime, the path integral formulation cleanly reproduces Hamilton's principle, and Fermat's principle in optics. Classical field theory In Lagrangian mechanics, the generalized coordinates form a discrete set of variables that define the configuration of a system; in classical field theory, the physical system is not a set of discrete particles, but rather a continuous field ϕ(r, t) defined over a region of 3d space. Associated with the field is a Lagrangian density defined in terms of the field and its space and time derivatives at a location r and time t. Analogous to the particle case, for non-relativistic applications the Lagrangian density is also the kinetic energy density of the field, minus its potential energy density (this is not true in general, and the Lagrangian density has to be "reverse engineered"), the Lagrangian is then the volume integral of the Lagrangian density over 3d space where d3r is a 3d differential volume element. 
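For densities depending on the field and its first derivatives, the Euler–Lagrange equation takes the field form ∂t(∂ℒ/∂(∂tφ)) + ∇·(∂ℒ/∂(∇φ)) − ∂ℒ/∂φ = 0. The sketch below applies it to a free scalar field in one space dimension, an illustrative choice of density, and recovers the wave equation:

```python
import sympy as sp

x, t = sp.symbols('x t')
c = sp.symbols('c', positive=True)
phi = sp.Function('phi')(x, t)
phi_t, phi_x = sp.diff(phi, t), sp.diff(phi, x)

# Lagrangian density of a free scalar field in one space dimension.
density = sp.Rational(1, 2) * phi_t**2 - sp.Rational(1, 2) * c**2 * phi_x**2

# Field form of the Euler-Lagrange equation.
eom = (sp.diff(sp.diff(density, phi_t), t)
       + sp.diff(sp.diff(density, phi_x), x)
       - sp.diff(density, phi))
sp.pprint(sp.Eq(sp.simplify(eom), 0))   # -> phi_tt - c**2 * phi_xx = 0, the wave equation
```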
The Lagrangian is a function of time since the Lagrangian density has implicit space dependence via the fields, and may have explicit spatial dependence, but these are removed in the integral, leaving only time in as the variable for the Lagrangian. If the Lagrangian is invariant under a symmetry, then the resulting equations of motion are also invariant under that symmetry, this characteristic is very helpful in showing that theories are consistent with either special relativity or general relativity. - Fundamental lemma of the calculus of variations - Canonical coordinates - Functional derivative - Generalized coordinates - Hamiltonian mechanics - Hamiltonian optics - Lagrangian and Eulerian specification of the flow field - Lagrangian point - Lagrangian system - Non-autonomous mechanics - Restricted three-body problem - Plateau's problem - Inverse problem for Lagrangian mechanics, the general topic of finding a Lagrangian for a system given the equations of motion. - Sometimes in this context the variational derivative denoted and defined as - Here the virtual displacements are assumed reversible, it is possible for some systems to have non-reversible virtual displacements that violate this principle, see Udwadia–Kalaba equation. - In other words - The Lagrangian also can be written explicitly for a rotating frame. See Padmanabhan, 2000. - Dvorak & Freistetter 2005, p. 24 - Haken 2006, p. 61 - Lanczos 1986, p. 43 - Menzel & Zatzkis 1960, p. 160 - Jose & Saletan, p. 129 - Lagrange 1811 - Lagrange 1815 - Goldtein 1980 - Torby1984, p.270 - Torby 1984, p. 269 - Hand & Finch 2008, p. 36–40 - Hand & Finch 2008, p. 60–61 - Hand & Finch 2008, p. 19 - Penrose 2007 - Schuam 1988, p. 156 - Synge & Schild 1949, p. 150–152 - Foster & Nightingale 1995, p. 89 - Hand & Finch 2008, p. 4 - Goldstein 1980, p. 16–18 - Hand 2008, p. 15 - Hand & Finch 2008, p. 15 - Fetter & Walecka 1980, p. 53 - Torby 1984, p. 264 - Torby 1984, p. 269 - Kibble & Berkshire 2004, p. 234 - Fetter & Walecka 1980, p. 56 - Hand & Finch 2008, p. 17 - Hand & Finch 2008, p. 15–17 - R. Penrose (2007). The Road to Reality. Vintage books. p. 474. ISBN 0-679-77631-1. - Goldstien 1980, p. 23 - Kibble & Berkshire 2004, p. 234–235 - Hand & Finch 2008, p. 51 - Hand & Finch 2008, p. 44–45 - Goldstein 1980 - Fetter & Walecka, pp. 68–70 - Landau & Lifshitz 1976, p. 4 - Goldstien, Poole & Safko 2002, p. 21 - Landau & Lifshitz 1976, p. 4 - Goldstein 1980, p. 21 - Landau & Lifshitz 1976, p. 14 - Landau & Lifshitz 1976, p. 22 - Taylor 2005, p. 297 - Padmanabhan 2000, p. 48 - Hand & Finch 1998, pp. 140–141 - Hildebrand 1992, p. 156 - Zak, Zbilut & Meyers 1997, pp. 202 - Shabana 2008, pp. 118–119 - Gannon 2006, p. 267 - Kosyakov 2007 - Galley 2013 - Hadar, Shahar & Kol 2014 - Birnholtz, Hadar & Kol 2013 - Torby 1984, p. 271 - Lagrange, J. L. (1811). Mécanique analytique. 1. - Lagrange, J. L. (1815). Mécanique analytique. 2. - Penrose, Roger (2007). The Road to Reality. Vintage books. ISBN 0-679-77631-1. - Landau, L. D.; Lifshitz, E. M. Mechanics (3rd ed.). Butterworth Heinemann. p. 134. ISBN 9780750628969. - Landau, Lev; Lifshitz, Evgeny (1975). The Classical Theory of Fields. Elsevier Ltd. ISBN 978-0-7506-2768-9. - Hand, L. N.; Finch, J. D. Analytical Mechanics (2nd ed.). Cambridge University Press. p. 23. ISBN 9780521575720. - Louis N. Hand; Janet D. Finch (1998). Analytical mechanics. Cambridge University Press. pp. 140–141. ISBN 0-521-57572-9. - Saletan, E. J.; José, J. V. (1998). Classical Dynamics: A Contemporary Approach. 
Cambridge University Press. - Kibble, T. W. B.; Berkshire, F. H. (2004). Classical Mechanics (5th ed.). Imperial College Press. p. 236. ISBN 9781860944352. - Goldstein, Herbert (1980). Classical Mechanics (2nd ed.). San Francisco, CA: Addison Wesley. pp. 352–353. ISBN 0201029189. - Goldstein, Herbert; Poole, Charles P., Jr.; Safko, John L. (2002). Classical Mechanics (3rd ed.). San Francisco, CA: Addison Wesley. pp. 347–349. ISBN 0-201-65702-3. - Lanczos, Cornelius (1986). "II §5 Auxiliary conditions: the Lagrangian λ-method". The variational principles of mechanics (Reprint of University of Toronto 1970 4th ed.). Courier Dover. p. 43. ISBN 0-486-65067-7. - Fetter, A. L.; Walecka, J. D. (1980). Theoretical Mechanics of Particles and Continua. Dover. pp. 53–57. ISBN 978-0-486-43261-8. - The Principle of Least Action, R. Feynman - Dvorak, R.; Freistetter, Florian (2005). "§ 3.2 Lagrange equations of the first kind". Chaos and stability in planetary systems. Birkhäuser. p. 24. ISBN 3-540-28208-4. - Haken, H (2006). Information and self-organization (3rd ed.). Springer. p. 61. ISBN 3-540-33021-6. - Henry Zatzkis (1960). "§1.4 Lagrange equations of the second kind". In DH Menzel. Fundamental formulas of physics. 1 (2nd ed.). Courier Dover. p. 160. ISBN 0-486-60595-7. - Francis Begnaud Hildebrand (1992). Methods of applied mathematics (Reprint of Prentice-Hall 1965 2nd ed.). Courier Dover. p. 156. ISBN 0-486-67002-3. - Michail Zak; Joseph P. Zbilut; Ronald E. Meyers (1997). From instability to intelligence. Springer. p. 202. ISBN 3-540-63055-4. - Ahmed A. Shabana (2008). Computational continuum mechanics. Cambridge University Press. pp. 118–119. ISBN 0-521-88569-8. - John Robert Taylor (2005). Classical mechanics. University Science Books. p. 297. ISBN 1-891389-22-X. - Padmanabhan, Thanu (2000). "§2.3.2 Motion in a rotating frame". Theoretical Astrophysics: Astrophysical processes (3rd ed.). Cambridge University Press. p. 48. ISBN 0-521-56632-0. - Doughty, Noel A. (1990). Lagrangian Interaction. Addison-Wesley Publishers Ltd. ISBN 0-201-41625-5. - Kosyakov, B. P. (2007). Introduction to the classical theory of particles and fields. Berlin, Germany: Springer. doi:10.1007/978-3-540-40934-2. - Galley, Chad R. (2013). "Classical Mechanics of Nonconservative Systems". Physical Review Letters. 110 (17): 174301. arXiv: . Bibcode:2013PhRvL.110q4301G. doi:10.1103/PhysRevLett.110.174301. PMID 23679733. - Birnholtz, Ofek; Hadar, Shahar; Kol, Barak (2014). "Radiation reaction at the level of the action". International Journal of Modern Physics A. 29 (24): 1450132. arXiv: . Bibcode:2014IJMPA..2950132B. doi:10.1142/S0217751X14501322. - Birnholtz, Ofek; Hadar, Shahar; Kol, Barak (2013). "Theory of post-Newtonian radiation and reaction". Physical Review D. 88 (10): 104037. arXiv: . Bibcode:2013PhRvD..88j4037B. doi:10.1103/PhysRevD.88.104037. - Roger F Gans (2013). Engineering Dynamics: From the Lagrangian to Simulation. New York: Springer. ISBN 978-1-4614-3929-5. - Terry Gannon (2006). Moonshine beyond the monster: the bridge connecting algebra, modular forms and physics. Cambridge University Press. p. 267. ISBN 0-521-83531-3. - Torby, Bruce (1984). "Energy Methods". Advanced Dynamics for Engineers. HRW Series in Mechanical Engineering. United States of America: CBS College Publishing. ISBN 0-03-063366-4. - Foster, J; Nightingale, J.D. (1995). A Short Course in General Relativity (2nd ed.). Springer. ISBN 0-03-063366-4. - M. P. Hobson; G. P. Efstathiou; A. N. Lasenby (2006). 
General Relativity: An Introduction for Physicists. Cambridge University Press. pp. 79–80. ISBN 9780521829519. - Gupta, Kiran Chandra, Classical mechanics of particles and rigid bodies (Wiley, 1988). - Cassel, Kevin (2013). Variational methods with applications in science and engineering. Cambridge: Cambridge University Press. ISBN 978-1-107-02258-4. - Goldstein, Herbert, et al. Classical Mechanics. 3rd ed., Pearson, 2002.
<urn:uuid:b3c1a756-98f6-4e3e-906c-069643afa6d3>
3.578125
11,163
Knowledge Article
Science & Tech.
47.904441
95,565,182
Ice Sheets and Sea Level: Thinking Outside the Box - 3.2k Downloads Until quite recently, the mass balance (MB) of the great ice sheets of Greenland and Antarctica was poorly known and often treated as a residual in the budget of oceanic mass and sea level change. Recent developments in regional climate modelling and remote sensing, especially altimetry, gravimetry and InSAR feature tracking, have enabled us to specifically resolve the ice sheet mass balance components at a near-annual timescale. The results reveal significant mass losses for both ice sheets, caused by the acceleration of marine-terminating glaciers in southeast, west and northwest Greenland and coastal West Antarctica, and increased run-off in Greenland. At the same time, the data show that interannual variability is very significant, masking the underlying trends. KeywordsGreenland Antarctica Mass balance Sea level The large ice sheets of Antarctica (AIS) and Greenland (GrIS) represent the largest freshwater sources on Earth, larger by two orders of magnitude than all other glaciers and ice caps combined, and sufficient to increase global sea level by about 70 m. Until quite recently, the mass balance (MB) of the AIS and GrIS and hence their contribution to sea level rise (SLR) was poorly known. Over the twentieth century, during which sea level rose by about 1.7 mm year−1, their contribution to SLR was assumed to be small (Church et al. 2001). For the period 1993–2003, based on satellite altimetry, Bindoff et al. (2007) estimated the rate of SLR to be 3.1 ± 0.4 mm year−1, i.e. almost twice the rate of the twentieth century. Thermal expansion of the upper ocean layers amounted to 1.6 ± 0.3 mm year−1. If for simplicity we neglect the contribution from other sources (e.g., deep ocean warming, storage of ground water and water in artificial basins), this leaves a land ice contribution of 1.5 ± 0.3 mm year−1. Of this, 0.8 ± 0.2 mm year−1 is attributed to the melting of small glaciers and ice caps (GIC, Lemke et al. 2007). The residual of 0.7 ± 0.5 mm year−1 is ascribed to mass loss from the AIS and GrIS. For the more recent period 2003–2008, for which many more data are available, considerable uncertainties remain with respect to the oceanic mass budget (Willis et al. 2008; Nicholls and Cazenave 2010). Using altimetry and satellite gravimetry data from GRACE over the period 2003–2008, Cazenave et al. (2009) estimate the rate of SLR to be 2.5 ± 0.4 mm year−1, i.e. a 20% decrease compared to 1993–2003. This decrease was explained by a reduction in the contribution of thermal expansion, to 0.4 ± 0.1 mm year−1 (Willis et al. 2008), indicating that the contribution of land ice to SLR had increased to 2.1 ± 0.3 mm year−1. Meier et al. (2007) estimate that GIC in 2006 contributed 1.1 ± 0.2 mm year−1 to SLR. If we assume this value to be the representative for 2003–2008, this leaves a contribution of 1.0 ± 0.4 mm year−1 for the AIS and GrIS. Both for GIC and the AIS/GrIS, this represents a ≈40% increase in mass loss compared to the period 1993–2003. In spite of the insight it yields, it is unsatisfactory to treat the contribution of the AIS and GrIS to SLR as a residual. Firstly, all components of the sea level budget have large uncertainties, enhancing the uncertainty in the residual. 
Secondly, it is imperative that we partition the ice sheet mass loss into an ice dynamic (glacier acceleration) and surface (snowfall/run-off) component; only then will it be possible to model the physical processes responsible for ongoing mass losses and make predictions. Thirdly, we must resolve the temporal (interannual) variability of the AIS and GrIS mass balance, to explain interannual variations in SLR. In this survey, we present some recent advances and results in this field. 2 Methods to Determine Ice Sheet Mass Balance Surface mass balance (SMB, Gt year−1) is the sum of accumulation by precipitation (snow and rain) and ablation by sublimation and run-off. Run-off, in turn, is determined by the liquid water balance (LWB), the sum of sources (water vapour condensation, rainfall and ice and snow melt) and sinks (refreezing and capillary retention) of liquid water. Recent developments in remote sensing and regional climate modelling offer three methods to estimate ice sheet MB, each with their advantages and disadvantages. These three methods are briefly discussed below. 2.1 Satellite Altimetry Remotely sensed elevation changes from radar/laser altimetry yield changes in ice sheet volume, i.e. this technique does not discriminate between the different processes responsible for mass loss. Moreover, converting volume to mass changes can be problematic, because they are caused by changes in ice dynamics and firn densification rate as well as decadal changes in accumulation, the latter representing a major source for short-term ice sheet elevation changes (McConnell et al. 2000; Helsen et al. 2008). These processes cannot be isolated based on elevation change measurements alone, and a separate estimate must be made for firn densification and accumulation variability. This requires a firn densification model forced by high-resolution (in time and space) atmospheric re-analyses, which can only be done for the period with reliable atmospheric forcing data, i.e. after 1978 in Greenland and 1980 in Antarctica (Van de Berg et al. 2005). This excludes an assessment of the role of longer time scale variations in snowfall, which are known to exist. Radar altimeters have a penetration depth in snow that depends on (time varying) snow structure and density (Thomas et al. 2008). Until the launch of Cryosat-2 in 2010, the narrow, fast flowing outlet glaciers, which are expected to react most rapidly to environmental changes, were not adequately resolved. The laser altimeter onboard ICESat captures changes in these glaciers in detail (Pritchard et al. 2009), but unfortunately has limited coverage and life span and is sensitive to clouds, prohibiting continuous time series in high-accumulation (i.e. frequently overcast) areas. 2.2 Satellite Gravimetry This method uses data of the Gravity Recovery and Climate Experiment (GRACE) satellites. GRACE has caused a small revolution in climate and sea level research by showing beyond a doubt that the large ice sheets are losing mass (Velicogna and Wahr 2006a, b; Wouters et al. 2008; Velicogna 2009; Chen et al. 2009). Moreover, the method is completely independent of the other two methods, making it a suitable verification and calibration tool. Drawbacks are that no distinction is made between the different processes responsible for mass loss, and the uncertainties arising from the short time period over which the trends have been calculated (2003–2009, i.e. 7 years). 
Moreover, multiple corrections must be applied to obtain mass changes, each introducing additional uncertainties. In Antarctica, the most important correction is that for upward motion of the Earth’s crust following deglaciation (Glacial Isostatic Adjustment or GIA). The GIA correction is relatively small over Greenland, for which good agreement with the mass budget method is found (Van den Broeke et al. 2009) but large and relatively poorly constrained over Antarctica (Riva et al. 2009). 2.3 The Mass Budget Method This method relies on an accurate determination of mass input (SMB) and mass output (D). The major advantage of the mass budget method is that individual MB components (SMB, LWB and D) are quantified, per drainage basin and year, providing insight into the physical processes that determine ice sheet mass changes. Because MB represents the difference between two large terms (SMB and D), the method is very sensitive to uncertainties in the individual components. This prohibited its use over the full ice sheet surface until recently. Using remotely sensed ice velocities/thicknesses and elevation data, and improved SMB fields, ice sheet-wide MB assessments using the mass budget method are now feasible (Rignot and Kanagaratnam 2006; Rignot et al. 2008a, b; Van den Broeke et al. 2009; Rignot et al. 2011). Figure 1 shows that both ice sheets have coastal areas that are significantly wetter than the ice sheet interior. Coastal West Antarctica experiences accumulation rates in excess of 1500 kg m−2 year−1, while peak values in excess of 3000 kg m−2 year−1 occur in the western Antarctic Peninsula and coastal southeast Greenland. In contrast, northeast Greenland receives less than 100 kg m−2 year−1 and interior East Antarctica even less than 50 kg m−2 year−1. Due to strong summertime melt and run-off, Greenland has a well-defined marginal ablation zone, which is more than 100 km wide in the southwest, where ablation up to 3000 kg m−2 year−1 occurs. Ablation in Antarctica is limited in area and magnitude. It is generally not caused by run-off, but rather by (snowdrift) sublimation and erosion, limiting ablation to regions where these processes are active (Van den Broeke et al. 2006b; Lenaerts et al. 2010). Because observations are very sparse in coastal, high-accumulation/high-ablation areas (Van den Broeke et al. 2006a), compilations that rely on interpolation of available observations tend to under/overestimate SMB in coastal Antarctica/Greenland and therewith ice sheet SMB. This favours the use of modelled (‘dynamically downscaled’) SMB fields in the mass budget method. The SMB numbers presented in this paper refer to the conterminous, grounded ice sheet. Ice discharge (D) is quantified using feature tracking from satellite imaging radar to obtain the flow speed of the narrow glaciers through the flux gates at the ice sheet grounding line (Rignot et al. 2008a, b). Satellite altimetry is used to accurately delineate the ice drainage basins as well as to obtain the elevation/thickness of the outlet glacier at the grounding line (Bamber et al. 2001, 2010). In Antarctica, where a floatation criterion is used to determine ice thickness at the grounding line for most glaciers (Rignot et al. 2008b), a correction must be applied for the density of the firn mantle that covers the ice. This can be achieved by using output of a regional atmospheric climate model in combination with a steady-state firn compaction model (Van den Broeke 2008; Van den Broeke et al. 2008). 
For the GrIS, where glacier tongues experience significant ablation in summer, this correction is less important. Finally, if the grounding line is migrating, as is currently the case in coastal West Antarctica, this represents a mass flux that must also be estimated (Rignot et al. 2011). 3 Application of the Mass Budget Method to the AIS and GrIS Estimates of total AIS discharge (D) are available for the years 1992, 1996, 2000, 2003, 2004 and annually since 2006 and include the estimated effect of inland migration of the grounding line in coastal West Antarctica (Rignot et al. 2011). For the years without discharge data, a linear interpolation between data points was applied, assuming slowly changing D. Here, we assume constant discharge between 1989 and 1992. The increase in discharge is mainly caused by the acceleration of glaciers in coastal West Antarctica, which still continues (Rignot 2008), and acceleration of glaciers in the Antarctic Peninsula mainly prior to 2005 (Pritchard and Vaughan 2007). Compared to 1992, discharge from the AIS has increased by 173 Gt year−1 or 8% in 2009; as a result, MB has been persistently negative since 1994, except for 3 years with high snowfall (1998, 2001 and 2005). For the GrIS, reliable SMB time series date back as far as 1958, owing to much better observational coverage in the northern hemisphere (Ettema et al. 2009). However, reliable estimates of D are only available for years 1992, 1996, 2000 and annually since 2004 (Rignot et al. 2011). Both SMB and D time series are displayed in Fig. 2b for the period 1989–2009, once more assuming constant discharge between 1989 and 1992 and using a linear interpolation to obtain D data between years with observations. Unlike in Antarctica, where run-off is negligible, interannual SMB variability for the GrIS is also for an important part caused by run-off variability, which is anti-correlated with accumulation (little winter snowfall causes larger summer ablation through the lower albedo of bare ice); year-to-year variations in SMB can therefore be as large as 400 Gt year−1. The average standard deviation is 100 Gt year−1, 24% of the average SMB (417 Gt year−1). Another contrast to the AIS is that the SMB shows a negative trend since about 2000, following atmospheric warming and increased run-off since the early 1990s (Hanna et al. 2008; Van den Broeke et al. 2009). In combination with an increase in D since about 1996, owing to glacier acceleration in southeast, west and northwest Greenland (Joughin et al. 2004; Howat et al. 2005, 2007; Rignot and Kanagaratnam 2006; Khan et al. 2010), this has resulted in a persistently negative MB since 1999. 4 Discussion and Conclusions We can compare our ice sheet mass balance time series with results based on SLR residuals mentioned in the Introduction. The average 2003–2008 MB of the AIS is −161 ± 150 Gt year−1 and that of the GrIS −241 ± 51 Gt year−1. This represents an average contribution of the ice sheets to SLR of 1.1 ± 0.4 mm year−1 over that six-year period, which agrees well with the estimate based on SLR residuals (1.0 ± 0.5 mm year−1). The mass loss over this period is partitioned 40%/60% between the AIS and the GrIS. Acceleration of outlet glaciers is the only source of mass loss in Antarctica, and increased run-off (decreased SMB) is equally important in Greenland. 
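As a quick numerical illustration of this bookkeeping, the sketch below combines the mean 2003–2008 mass balances just quoted with the common approximation that roughly 362 Gt of ice corresponds to 1 mm of global mean sea level; that conversion factor is an assumption of the sketch, not a value taken from this paper:

```python
# Convert the quoted 2003-2008 mean mass balances into a sea-level
# contribution, assuming ~362 Gt of ice per mm of global mean sea level
# (the conversion factor is an assumption of this sketch).
GT_PER_MM_SLR = 362.0

mb = {"AIS": -161.0, "GrIS": -241.0}          # Gt/yr, 2003-2008 averages
total_loss = -sum(mb.values())                # 402 Gt/yr delivered to the ocean
slr = total_loss / GT_PER_MM_SLR

print(f"combined ice-sheet mass loss: {total_loss:.0f} Gt/yr")
print(f"sea-level contribution:       {slr:.2f} mm/yr")   # ~1.1 mm/yr, as quoted
for name, value in mb.items():
    print(f"  {name}: {100 * value / sum(mb.values()):.0f}% of the loss")
```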
For the period 1993–2003, the MB summed for both ice sheets averages −133 ± 158 Gt year−1, equivalent to a SLR of 0.4 ± 0.4 mm year−1, which agrees within error bars with the SLR residual of 0.7 ± 0.5 mm year−1. For both ice sheets, the general picture that emerges from the boxes in Fig. 3 is that of near balance to modest mass losses in the 1990s, increasing to more substantial mass losses after 2000. Annual MB values from Fig. 2 are plotted as black dashed lines. In general, these curves nicely connect the various boxes, confirming many of the earlier studies and partly explaining the differences among studies in terms of interannual MB variability. The black lines show that MB can vary substantially within a single box, which means that the choice of the averaging period is critical for the obtained average MB value and is not necessarily representative for a longer period. The box plot has the undesirable characteristic to hide the interannual variability, providing a too static picture of ice sheet mass balance. To do justice to the important interannual variability in MB, we suggest either to calculate averages over a sufficiently long period (>10 years) or, better, to try to present MB at an annual resolution. This article is distributed under the terms of the Creative Commons Attribution Noncommercial License which permits any noncommercial use, distribution, and reproduction in any medium, provided the original author(s) and source are credited. - Bales RC, Guo Q, Shen D, McConnell JR, Du G, Burkhart JF, Spikes VB, Hanna E, Cappelen J (2009) Annual accumulation for Greenland updated using ice core data developed during 2000–2006 and analysis of daily coastal meteorological data. J Geophys Res 114:D06116. doi: 10.1029/2008JD011208 CrossRefGoogle Scholar - Bindoff NL, Willebrand J, Artale V, Cazenave A, Gregory JM, Gulev S, Hanawa K, Le Quéré KC, Levitus S, Nojiri Y, Shum CK, Talley LD, Unnikrishnan AS (2007) Observations: oceanic climate change and sea level. In: Solomon S, Qin D, Manning M, Chen Z, Marquis M, Averyt KB, Tignor M, Miller HL (eds) Climate change 2007: the physical science basis. contribution of working group i to the fourth assessment report of the intergovernmental panel on climate change, Cambridge University Press, United Kingdom and New York, USAGoogle Scholar - Church JA, Gregory JM, Huybrechts P, Kuhn M, Lambeck K, Nhuan MT, Qin D, Woodworth PL (2001) Changes in sea level. In: Houghton JT, Ding Y, Griggs DJ, Noguer M, Van der Linden PJ, Xiaou D (eds) Climate change 2001: the scientific basis. Cambridge University Press, Cambridge and New York, pp 639–694Google Scholar - Lemke P, Ren J, Alley RB, Allison I, Carrasco J, Flato G, Fujii Y, Kaser G, Mote P, Thomas RH, Zhang T (2007) Observations: changes in snow, ice and frozen ground. In: Solomon S, Qin D, Manning M, Chen Z, Marquis M, Averyt KB, Tignor M, Miller HL (eds) Climate change 2007: the physical science basis. contribution of working group i to the fourth assessment report of the intergovernmental panel on climate change, Cambridge University Press, United Kingdom and New York, USAGoogle Scholar - Monaghan AJ, Bromwich DH, Fogt RL, Wang S-H, Mayewski PA, Dixon DA, Ekaykin A, Frezzotti M, Goodwin I, Isaksson E, Kaspari SD, Morgan VI, Oerter H, Van Ommen TD, Van der Veen CJ, Wen J (2006) Insignificant change in Antarctic snowfall since the international geophysical year. 
Science 313:827–831 - Thomas R, Frederick E, Krabill W, Manizade S, Martin C (2006) Progressive increase in ice loss from Greenland. Geophys Res Lett L10503 - Thomas R, Davis C, Frederick E, Krabill W, Li Y, Manizade S, Martin C (2008) A comparison of Greenland ice-sheet volume changes derived from altimetry measurements. J Glaciol 54:203–212
<urn:uuid:1d817a10-f5de-4c22-9d66-ed2c36404b10>
3.328125
4,089
Academic Writing
Science & Tech.
56.103231
95,565,210
Atoms Laser-Cooled Below the Doppler-Cooling Limit We have previously reported laser cooling of atoms in optical molasses to temperatures as low as 40 μK; more recently, we have measured the temperature of this three-dimensional gas of Na atoms to be as low as -10 +20 μK. These low temperatures are in strong disagreement with the traditional theory of laser cooling, which predicted a lower limit of 240 μK. Temperatures below the “cooling limit” were surprising and unexpected. They pointed out the inadequacy of the usual theory for explaining a rather simple experiment. The results have great practical interest because it now appears to be relatively easy to obtain temperatures much lower than previously thought possible. KeywordsLaser Cool Laser Polarization Raman Resonance Great Practical Interest Flux Gate Magnetometer Unable to display preview. Download preview PDF. - P Gould et al., in Laser Spectroscopy VIII,S. Svanberg and W. Persson, Eds. (Springer Verlag, Berlin, 1987) p. 64. and references therein.Google Scholar - J. Dalibard and C. Cohen-Tannoudji, presented at the Eleventh International Conference on Atomic Physics, Paris, July 1988; presented at this conference; and private communications.Google Scholar - .S. Chu, presented at the Eleventh International Conference on Atomic Physics, Paris, July 1988.Google Scholar
<urn:uuid:da614360-1a03-4b58-a1d2-b6c0a551b60c>
2.53125
299
Truncated
Science & Tech.
36.722957
95,565,214
A Compiler for an Implicitly Parallel Functional Language
Dr. Greg Wolffe, email@example.com
Functional programming presents a relatively unexplored approach to achieving high-performance computing. Typically, the field has been dominated by imperative languages such as C/C++ and FORTRAN. However, purely functional languages use functions without side effects, a characteristic that can prove useful when parallelizing code. The goal of this research was to create an automatic parallelizing compiler for functional programs. The compiler uses the LLVM infrastructure to transform Lisp-like source code into parallelized LLVM bytecode. The LLVM bytecode is then used to generate machine code that executes on multiple processors with multiple cores. Parallelism is clearly a critical technology of the future, but presents new challenges to developers. Much as high-level languages with optimizing compilers have supplanted hand-written assembly, automatic parallelization optimized for specific architectures is poised to eliminate error-prone manual multiprogramming.
Fisk, Sean, "A Compiler for an Implicitly Parallel Functional Language" (2014). Technical Library. 183.
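The key property the abstract relies on, that side-effect-free functions can be evaluated in parallel without changing the result, can be illustrated with a short sketch. This is not the Lisp-like language or the LLVM pipeline described above, only a Python illustration of the same idea.

# Sketch: f is pure (its output depends only on its input), so its calls are
# independent and can be mapped across processes in any order. This mirrors the
# property an implicitly parallelizing compiler exploits; it is not that compiler.
from multiprocessing import Pool

def f(n):
    """A pure function: no global state is read or written."""
    return sum(i * i for i in range(n))

if __name__ == "__main__":
    inputs = [10_000, 20_000, 30_000, 40_000]
    with Pool() as pool:
        parallel = pool.map(f, inputs)        # evaluated concurrently
    sequential = [f(n) for n in inputs]       # same results, same order
    assert parallel == sequential
    print(parallel)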
<urn:uuid:23d570a0-ad97-47c2-a50c-877e8b183d28>
2.71875
229
Academic Writing
Software Dev.
9.870629
95,565,215
Astronomers may have finally settled a long-standing controversy when it comes to the Pleiades, a famous star cluster. They've accurately measured the distance from our planet to the star cluster - within one percent - which may correct models of star formation. In the case of supernovae, it was long thought that dying white dwarf stars were left out of the equation, simply too small to spark the awe-inspiring explosion. Now researchers believe they figured out how some stars managed to still pull off the self-destructive stunt - re-igniting with the help of a nearby buddy. Scientists have discovered traces of material released by the death of one of the Universe's first stars, a new study reports. A series of pseudo-3D maps may help astronomers better understand the world outside our solar system, otherwise known as interstellar space. The maps pinpoint materials between stars in the Milky Way galaxy, and could help reveal the composition of these interstellar mediums. Astrophysicists have a very general and extremely theoretical idea of what happened after the Big Bang, including the formation of our solar system's Sun and stars like it. However, a team of experts from Monash University now believe that they have discovered something that will take us a step closer to understanding what the Sun's birth was truly like. Astrophysicists have recently identified a new source of high-energy gamma ray emissions using NASA's Fermi Gamma-ray Space Telescope. Stellar explosions, called novae, have been found to release a surprising amount of gamma radiation, showing just how little we know about these rays. NASA's Hubble Space Telescope has allowed astronomers to get a more in-depth look of an outer galaxy's halo of stars than ever before, thanks to new research. NASA's Fermi Gama-ray Space Telescope recently identified an "exceptional" binary system that not only contains a rapidly spinning neutron star called a pulsar, but also a relatively small yellow star. Interestingly, close examination revealed that this second star serves much like a dance partner for the pulsar, causing it to exhibit some unusual behavior. Researchers at NASA have not only successfully mapped intimate details of the Eta Carinae Homunculus Nebula for the first time, but they even went ahead and 3D printed themselves a model, enabling people to literally hold the results of a massive stellar explosion in their hands. If you listen closely, can you hear a star age? Probably not. But researchers do say that sound waves all on their own can help experts distinguish young stars from adolescent ones. Researchers have recently identified a portion of a prolific stellar nursery which has chemical signatures that would indicate a cold environment utterly impossible for its location. These signatures they say, could be explained by an unusual burst of stellar winds. Researchers from the Herschel Space Observatory (HSO) have discovered a mysterious ring of dusty material while taking some of the sharpest images of pre-star dust and gas formations to date. Astronomers have discovered that they have been overlooking a great number of incredibly small galaxies, mainly because these celestial formations make incredibly use of a very small amount of space. A Thorne-Żytkow object - a unique "hybrid star" - has been discovered in the Universe, proving the existence of a once purely theoretical celestial object.
<urn:uuid:16d97584-ff1c-429f-885a-876d8bd4e2fe>
3.28125
684
Content Listing
Science & Tech.
29.081185
95,565,241
'Invisible' Protein Structure Explains the Power of Enzymes
News Jul 06, 2015
The discovery lays the groundwork for developing designed enzymes as catalysts for new chemical reactions, for instance in biotechnological applications.
Enzymes are extraordinary biocatalysts able to speed up cellular chemical reactions several million times. This increase in speed is absolutely necessary for all biological life, which would otherwise be limited by the slow pace of vital chemical reactions. Now, a research group at the Department of Chemistry has discovered a new aspect of enzymes that, in part, explains how enzymes manage their tasks with unmatched efficiency and selectivity.
So-called high-energy states in enzymes are regarded as necessary for catalysing chemical reactions. A high-energy state is a protein structure that occurs only transiently and for a short period of time, which makes it invisible to traditional spectroscopic techniques. The Umeå researchers have managed to find a way to maintain a high-energy state in the enzyme adenylate kinase by mutating the protein.
"Thanks to this enrichment, we have been able to study both the structure and the dynamics of this state. The study shows that enzymatic high-energy states are necessary for chemical catalysis," says Magnus Wolf-Watz, research group leader at the Department of Chemistry.
The study also indicates that it is possible to fine-tune the dynamics of an enzyme, a possibility that can be useful for researchers developing new enzymes to catalyse new chemical reactions.
"Research on Bioenergy is an active field at Umeå University. An important, practical application of the new knowledge can be enzymatic digestion of useful molecules from wooden raw materials," says Magnus Wolf-Watz.
The discovery was made possible by a broad scientific approach in which numerous advanced biophysical techniques were used, Nuclear Magnetic Resonance (NMR) spectroscopy and X-ray crystallography being the main ones.
"One of the strengths of Umeå University is the open cooperative climate with low or no barriers between research groups. It means that exciting research can be conducted in the borderland of differing expertise," says Magnus Wolf-Watz.
<urn:uuid:55f22db5-616c-4030-ae80-fce18d8a770c>
3.03125
611
News Article
Science & Tech.
16.922371
95,565,244
Probability theory is the branch of mathematics concerned with probability. Although there are several different probability interpretations, probability theory treats the concept in a rigorous mathematical manner by expressing it through a set of axioms. Typically these axioms formalise probability in terms of a probability space, which assigns a measure taking values between 0 and 1, termed the probability measure, to a set of outcomes called the sample space. Any specified subset of these outcomes is called an event.
Central subjects in probability theory include discrete and continuous random variables, probability distributions, and stochastic processes, which provide mathematical abstractions of non-deterministic or uncertain processes or measured quantities that may either be single occurrences or evolve over time in a random fashion. Although it is not possible to perfectly predict random events, much can be said about their behaviour. Two major results in probability theory describing such behaviour are the law of large numbers and the central limit theorem.
As a mathematical foundation for statistics, probability theory is essential to many human activities that involve quantitative analysis of data. Methods of probability theory also apply to descriptions of complex systems given only partial knowledge of their state, as in statistical mechanics. A great discovery of twentieth-century physics was the probabilistic nature of physical phenomena at atomic scales, described in quantum mechanics.
Christiaan Huygens published a book on the subject in 1657, and in the 19th century Pierre Laplace completed what is today considered the classic interpretation. Initially, probability theory mainly considered discrete events, and its methods were mainly combinatorial. Eventually, analytical considerations compelled the incorporation of continuous variables into the theory. This culminated in modern probability theory, on foundations laid by Andrey Nikolaevich Kolmogorov. Kolmogorov combined the notion of sample space, introduced by Richard von Mises, with measure theory and presented his axiom system for probability theory in 1933.
Most introductions to probability theory treat discrete probability distributions and continuous probability distributions separately. The more mathematically advanced measure theory-based treatment of probability covers the discrete, the continuous, a mix of the two, and more.
Consider an experiment that can produce a number of outcomes. The set of all outcomes is called the sample space of the experiment. For example, rolling an honest die produces one of six possible results. One collection of possible results corresponds to getting an odd number.
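As a concrete illustration of the sample-space and event language above, added here as a sketch rather than part of the original passage, the die example can be written out directly:

# Sketch: a finite sample space, an event as a subset of outcomes, and the law
# of large numbers observed empirically for a fair six-sided die.
import random
from fractions import Fraction

sample_space = {1, 2, 3, 4, 5, 6}
odd = {x for x in sample_space if x % 2 == 1}       # the event "an odd number"

p_odd = Fraction(len(odd), len(sample_space))       # equally likely outcomes
print("P(odd) =", p_odd)                            # 1/2

# The relative frequency of the event approaches P(odd) as trials increase.
for n in (100, 10_000, 1_000_000):
    hits = sum(random.randint(1, 6) in odd for _ in range(n))
    print(f"{n:>9} rolls: relative frequency = {hits / n:.4f}")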
<urn:uuid:1daf09fd-6814-4418-a0c7-36ef85bac785>
3.765625
485
Knowledge Article
Science & Tech.
5.906878
95,565,247
Researchers are studying how environmental context can help determine whether oxygen (O2) detected in extrasolar planetary observations is more likely to have a biological source This illustration shows the possible surface of TRAPPIST-1f, one of the newly discovered planets in the TRAPPIST-1 system. Scientists using the Spitzer Space Telescope and ground-based telescopes have discovered that there are seven Earth-size planets in the system. Credits: NASA/JPL-Caltech NASA’s Spitzer Space Telescope has revealed the first known system of seven Earth-size planets around a single star. Three of these planets are firmly located in the habitable zone, the area around the parent star where a rocky planet is most likely to have liquid water. The discovery sets a new record for greatest number of habitable-zone planets found around a single star outside our solar system. All of these seven planets could have liquid water – key to life as we know it – under the right atmospheric conditions, but the chances are highest with the three in the habitable ... Source: [NASA]February 22, 2017 / Written by: NASA Hydrogen in Mars’ upper atmosphere comes from water vapor in the lower atmosphere. An atmospheric water molecule can be broken apart by sunlight, releasing the two hydrogen atoms from the oxygen atom that they had been bound to. Several processes at work in Mars’ upper atmosphere may then act on the hydrogen, leading to its escape. Image source: NASA/GSFC; CU/LASP Researchers at the University of Colorado, Boulder Laboratory for Atmospheric and Space Physics (LASP) have discovered an atmospheric escape route for hydrogen on Mars, a mechanism that may have played a significant role in the planet’s loss of liquid water. The findings describe a process in which water molecules rise to the middle layers of the planet’s atmosphere during warmer seasons of the year and then break apart, triggering a large increase in the rate of hydrogen escape from the atmosphere to space in a span of just weeks. The study, which appears in the journal Nature Geoscience ... Source: [University of Colorado, Boulder]February 21, 2017 / Written by: University of Colorado, Boulder Scientists assemble data from shale samples worldwide ranging as far back as 3 billion years old to trace the levels (and scarcity) of phosphorus in Earth's ancient oceans. Image source: Wikimedia Commons Life’s list of essential nutrients is long, but carbon, nitrogen, and phosphorus are the big three. Carbon and nitrogen, both easily extracted from the atmosphere, have usually been in ample supply in the ocean over Earth’s history. Carbon dioxide readily dissolves in seawater, and that carbon is then converted to the molecules of life through photosynthesis. Nitrogen is nearly 80 percent of the air we breathe, and diverse microorganisms are able to convert nitrogen to compounds more widely useful to life. Phosphorus is much harder to get: it must be delivered to the oceans by rivers fed through ... Source: [UC Riverside]February 16, 2017 / Written by: Sean Nealon Analysis of Martian meteorite NWA 7635 dates it at 2.4 billion years old. Image source: University of Houston Scientists have confirmed the long-lived nature of volcanoes on Mars, finding meteoric evidence that a Martian volcano or volcanic system was active for over 2 billion years. 
A Mars meteorite, a shergottite rock called Northwest Africa (NWA) 7635 originally discovered in Algiers in 2012, was analyzed by Tom Lapen, a geology professor at the University of Houston, with members of the NASA Astrobiology Institute team based at the University of Wisconsin. NWA 7635 is one of eleven Martian shergottites that have been discovered on Earth, sharing similar chemical composition which points to a similar location of origin and time of ... Source: [University of Houston]February 09, 2017 / Posted by: Miki Huynh Summit of the Simba volcano (19,400 ft) – The summit crater lake is shallow and its water column completely transparent. The red color of the lake is from an algae that has developed special pigments in response to extreme levels of short wavelength (UVA and UVB) radiation. Source: SETI Institute/ NAI High Lakes Project From October through November 2016, Nathalie Cabrol, director of the Carl Sagan Center at the SETI Institute, with members of the NASA Astrobiology Institute team based at SETI, went on a month-long expedition to Chile, visiting Mars-analogue sites between 800 and 6,000 km above sea level to collect samples and test in situ instruments in preparation for the Mars 2020 and ExoMars science payloads. Photos and posts from the field sites written by Nathalie Cabrol are available at the SETI institute website, and are linked to below. The Search for Biosignatures on Mars Starts High on Earth: http://www ... Source: [SETI]February 07, 2017 / Posted by: Miki Huynh UC Riverside 2016-2017 Science Lecture Series, Are We Alone?, presents monthly topics about the search for life in the Universe and what it means for humanity. Source: UC Riverside The University of California, Riverside has a lecture series entitled Are We Alone?, discussing the search for life in the universe—from analyzing our cosmic origins and early Earth analogues, to exploring Mars, icy moons, and other Earth-like planets. Presenters include members of the NASA Astrobiology Institute (NAI) Alternative Earths team. The next installment in the series will be “Mars 2020 & Beyond: Will We Find Life on the Red Planet?” presented by Ken Williford, Deputy Project Scientist, Mars 2020 Mission and Director, Astrobiogeochemistry Laboratory, NASA Jet Propulsion Laboratory. The lecture takes place March 23, 2017, 6-7:30PM PST ... Source: [UC Riverside]February 01, 2017 / Posted by: Miki Huynh Release of the NASA Astrobiology Institute CAN 8 has been delayed to February 2017. Stay tuned! The NASA Science Mission Directorate Planetary Science Division intends to release a Cooperative Agreement Notice (CAN) soliciting team-based proposals for membership in the NASA Astrobiology Institute (NAI) in February 2017. Step-1 proposals will be due ~8 weeks after the final CAN release, and Step-2 proposals will be due ~18 weeks after the CAN release. A preproposal conference will be scheduled ~2 weeks after the CAN release. Questions and comments related to this announcement should be addressed to Mary Voytek, NASA Astrobiology Institute Program Scientist, at email@example.com.January 31, 2017 / Written by: NASA Science Mission Directorate A map showing the thickness of sediment (in meters) for a temperature range under 80 degrees Celsius. Source: Amend and LaRowe Our Earth is about 70% covered in ocean, and the seafloor is a blanket of unconsolidated sediment made up of a wide range of organic matter, minerals, and chemistries. 
The habitable portions of ocean sediment provide living space for an estimated 3×1029 microbial cells. Scientists with the NASA Astrobiology Institute (NAI) Life Underground team based at the University of Southern California have used data on global sediment thickness, ocean depth, heat flow, and bottom water temperatures to developed a model to calculate the three-dimensional distribution of temperature in sediments. The research, “Temperature and volume of global marine sediments ... Source: [Geology]January 30, 2017 / Posted by: Miki Huynh In the summer of 2016, Penny Boston, Director of the NASA Astrobiology Institute (NAI), presented the seminar, Subsurface Astrobiology: Cave Habitat on Earth, Mars, and Beyond at NASA Ames Research Center in Mountain View, CA. She talked about her past work exploring and studying caves around the world, where the extreme subsurface conditions and the discovered microbial life forms held possible clues for future Mars exploration and the search for life in our solar system. Boston’s talk is part of the annual NASA Ames Summer Series, which invites subject leaders from around the world to present science and technology discoveries ... Source: [NASA Ames Summer Series]January 27, 2017 / Posted by: Miki Huynh Luis Campos is the 2016-2017 Baruch S. Blumberg NASA/Library of Congress Chair in Astrobiology. Luis Campos, science historian and the current Baruch S. Blumberg NASA/Library of Congress Chair in Astrobiology, was interviewed by Dan Turello at the Kluge Center, where Campos will spend his one year residency as Chair. The conversation is available at the Library of Congress blog. Campos talked about how his interests and academic path led up to the position of Astrobiology Chair, giving details about his childhood inspirations and college and graduate work that incorporated both science and the humanities. Campos discussed his book, Radium and the Secret of Life, which explores how the discovery of the radioactive element ... Source: [Library of Congress]January 24, 2017 / Written by: Miki Huynh Compelling terrestrial evidence records active and ancient serpentinization, the process that occurs when ultramafic rocks come into contact with water. This process may have been active on the surface and subsurface of Mars, beneath the surface of icy satellites such as Enceladus and Europa, and beyond. On Earth, these geochemical interactions support distinct microbial ecosystems. The purpose of this workshop is to highlight recent advances in understanding how Serpentinizing Systems function chemically and biologically within our Solar System. Each day of this 3-day workshop will begin with an overview by a Theme Lead, followed by several invited talks (list below ... Source: [NAI Seminars and Workshops]January 23, 2017 / Posted by: Miki Huynh Abstracts and Student Travel Grant Applications are due January 18, 2017. Opportunities are also available to be a mentor at AbSciCon 2017. Deadlines for the Astrobiology Science Conference (AbSciCon) 2017 are coming soon! Don’t miss your chance to participate in the conference in Mesa, Arizona on April 24–28, 2017. Abstracts are due January 18, 2017. Instructions for submitting an abstract can be found at: http://www.hou.usra.edu/meetings/abscicon2017/program-abstracts/abstracts/. When filling out the submission form, students can additionally register for the poster competition. Further details about submitting a poster can be found at: http://www.hou.usra.edu/meetings/abscicon2017/program-abstracts/posters/. 
Qualified students can apply for grants to cover travel expenses for AbSciCon 2017. Applications for ... Source: [USRA]January 11, 2017 / Posted by: Miki Huynh These 1.9 billion-year-old marine sediments are from the East Arm of the Great Slave Lake, Canada. Thousands of samples for this study were collected from the few places on Earth that have such remaining slivers of ancient seafloor. Credit: Georgia Tech / Yale - Reinhard / Planavsky For three billion years or more, the evolution of the first animal life on Earth was ready to happen, practically waiting in the wings. But the breathable oxygen it likely required wasn’t there, and a lack of simple nutrients may have been to blame. Then came a fierce planetary metamorphosis. Roughly 800 million years ago, in the late Proterozoic Eon, phosphorus, a chemical element essential to all life, began to accumulate in shallow ocean zones near coastlines widely considered to be the birthplace of animals and other complex organisms, according to a new study by geoscientists from the Georgia ... Source: [Georgia Tech]January 10, 2017 / Written by: Ben Brumfield The 5th ELSI International Symposium, Expanding Views on the Emergence of the Biosphere, takes place January 11-13, 2017 in Tokyo, Japan. Talks will be webcast via SAGANet.org. The Earth-Life Science Institute (ELSI) presents its 5th International Symposium: Expanding Views on the Emergence of the Biosphere. January 11th-13th, 9AM – 5PM (GMT+9) Earth-Life Science Institute, Tokyo Institute of Technology, Japan Conference website: www.elsi5sympo.org The emergence of a biosphere on Earth, and possibly elsewhere in the universe, remains one of the great unsolved scientific questions. Research into the origin and subsequent evolution of life takes place across an array of scientific disciplines, including but not limited to planetary sciences, astronomy, theoretical physics, chemistry and biology. The goal of this Symposium is to provide a forum for ... Source: [Earth-Life Science Institute]January 09, 2017 / Posted by: Miki Huynh An image of Saturn's moon Titan, which is surrounded by a thick haze. Scientists speculate that a similar haze surrounding early Earth may have helped to make it habitable. Source: NASA. Before it became visible as the Pale Blue Dot, early Earth may have been aglow in orange, and this might have helped to make it habitable. Scientists at the Virtual Planetary Laboratory, the NASA Astrobiology Institute (NAI) team based at the University of Washington, have developed a simulation of Earth during the Archaen era (3.8-2.5 billion years ago), with the atmosphere supporting an organic-rich and orange-colored haze that—shifting from previous haze studies— provided UV and temperature shielding to support the existence of life. The paper, “The Pale Orange Dot: The Spectrum and Habitability of Hazy Archaen Earth,” was ... 
Source: [Astrobiology]January 04, 2017 / Written by: Miki Huynh - July 17 - Abstract Submission Deadline for The First Billion Years: Bombardment - July 18 - Abstract Submission Deadline for Late Mars Workshop - July 30 - Early Registration Deadline for Comparative Climatology of Terrestrial Planets III - July 30 - Early Registration Deadline for Comparative Climatology of Terrestrial Planets: From Stars to Surfaces (CCTP-3) - August 1 - Abstract Submission Deadline for AGU 2018 Session P046: “The New Mars Underground”: Science and Exploration of a New Deep Frontier - August 1 - Abstract Submission Deadline for AGU 2018 Fall Meeting - August 1 - Abstract Submission Deadline for AGU 2018 Session B092: Understanding the Biogeochemistry of Nitrogen Inputs and Outputs from Molecular to Global Scales - August 1 - Abstract Submission Deadline for 9th Planetary Crater Consortium Meeting - August 1 - Registration Deadline for Experimental Analysis of the Outer Solar System Workshop - August 1 - Abstract Submission Deadline for AGU 2018 Session P044: Super-Earth Detection, Characterization and Modeling - How Habitable Are They? - August 1 - Abstract Submission Deadline for AGU 2018 Session P049: The Interiors of Jupiter and Saturn in the Era of Juno and Cassini - August 1 - Application Deadline: AAAS Early Career Award for Public Engagement with Science - August 1 - Application Deadline: NASA Astrobiology Postdoctoral Program (NPP) Opportunity at NASA Ames Astrochemistry Laboratory - August 13 - Registration Deadline for Comparative Climatology of Terrestrial Planets III - August 13 - Registration Deadline for Comparative Climatology of Terrestrial Planets: From Stars to Surfaces (CCTP-3) - August 14 - Abstract Submission Deadline for Geological Society of America (GSA) 2018 Meeting - August 15 - Application Deadline for European Planetary Science Congress 2018 - August 17 - Application Deadline: Postdoctoral Scholar Position Available in Evolutionary and Isotopic Enzymology
<urn:uuid:17e2057c-48b8-45a6-9b0e-72aeaa6f05de>
3.671875
3,323
Content Listing
Science & Tech.
33.856667
95,565,257
The evolutionary origins of microsatellites are not well understood. Some investigators have suggested that point mutations that expand repeat arrays beyond a threshold size trigger microsatellites to become variable. However, little empirical data has been brought forth on this and related issues. In this study, we examine the evolutionary history of microsatellites in six species within the obscura group of Drosophila, tracing changes in microsatellite alleles using both PCR product size and sequence data. We found little evidence supporting a general role of point mutations triggering initial microsatellite expansion, and no consistent threshold size for expansion was observed. Flanking region length variation was extensive when alleles were sequenced in distantly related species, and some species possessed altogether different repeat arrays between the same primer binding sites. Our results suggest extreme caution in using microsatellite allele sizes for phylogenetic analyses or to infer divergences between populations.
<urn:uuid:57c1f574-0df8-41c9-82d7-f23bd31da39c>
3.40625
204
Academic Writing
Science & Tech.
-2.524907
95,565,282
The "substr" expression extracts portions of a string. The first argument is the string to be processed, the second argument is the initial position of the substring, and the third argument is the number of characters to extract. Note that the initial position argument is zero-based (i.e., the first character is referenced via a "0"). To obtain the last n characters of a string, use a negative initial position (the offset then counts back from the end of the string, as in PHP's substr).

"Ftime" expressions are used for date and time formatting. The generic form takes a formatting string fmt and a time when to be formatted. The arguments can be in either order and may use the optional "fmt=" and "when=" labels. The fmt parameter is whatever is given by "fmt=", the first parameter containing a '%', or else the site's default. The formatting codes are described at http://php.net/strftime. In addition to those, 's' produces Unix timestamps. Some common formatting strings:
%F                 # ISO-8601 dates          "2018-07-19"
%s                 # Unix timestamp          "1531971792"
%H:%M:%S           # time as hh:mm:ss        "05:43:12"
%m/%d/%Y           # date as mm/dd/yyyy      "07/19/2018"
"%A, %B %d, %Y"    # in words                "Thursday, July 19, 2018"
The when parameter understands many different date formats. The when parameter is whatever is given by "when=", or whatever parameter remains after determining the format parameter. Some examples:
2007-04-11         # ISO-8601 dates
20070411           # dates without hyphens, slashes, or dots
2007-03            # months
@1176304315        # Unix timestamps (seconds since 1-Jan-1970 00:00 UTC)
now                # the current time
today              # today @ 00:00:00
yesterday          # yesterday @ 00:00:00
"next Monday"      # relative dates
"last Thursday"    # relative dates
"-3 days"          # three days ago
"+2 weeks"         # two weeks from now
Note: to convert a Unix timestamp, you must prefix it with @ (as in the @1176304315 example above).

The "strlen" expression returns the length of a string. The first argument is the string to be measured.

The "rand" expression returns a random integer. The first argument is the minimum number to be returned and the second argument is the maximum number to be returned. If called without the optional min and max arguments, rand() returns a pseudo-random integer between 0 and RAND_MAX. If you want a random number between 5 and 15 (inclusive), for example, use rand(5, 15).

toupper / tolower
The "toupper" and "tolower" expressions convert a string into uppercase or lowercase. The first argument is the string to be processed.

The "ucfirst" expression converts the first character of a string to uppercase. The first argument is the string to be processed.

The "ucwords" expression converts the first character of each word in a string to uppercase. The first argument is the string to be processed.

The "pagename" expression builds a pagename from a string. The first argument is the string to be processed.

The "asspaced" expression formats wikiwords. The first argument is the string to be processed.

Markup expressions can be nested.
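The date codes listed above follow the C strftime conventions that PHP's strftime shares with, for example, Python's time.strftime, so they can be tried outside the wiki as well. The sketch below is only an illustration of those conventions (availability of %F and %s varies by platform, so only portable codes are used); it is not part of the markup-expression engine itself.

# Sketch: strftime-style codes as documented above, exercised in Python, plus
# plain slicing as the rough equivalent of the zero-based "substr" behaviour.
# Output of the date lines depends on the local time zone.
import time

when = 1531971792                 # the Unix-timestamp example used above
t = time.localtime(when)

for fmt in ("%H:%M:%S", "%m/%d/%Y", "%A, %B %d, %Y"):
    print(fmt, "->", time.strftime(fmt, t))

s = "markup expressions"
print(s[0:6])     # characters 0..5, like substr with start 0 and length 6
print(s[-4:])     # the last four characters (negative start counts from the end)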
<urn:uuid:9a968cf8-0312-4af2-95e9-2a0417b722c1>
2.5625
726
Documentation
Software Dev.
64.96187
95,565,305
Scientists from around the world have joined forces to lay the foundations for an experiment of truly astronomical proportions: putting together the biggest map of the Universe ever made. The experiment will combine signals from hundreds of radio dishes to make cosmic atlas. In a series of papers published today on the arXiv.org astrophysics pre-print website, an international team of researchers set out their plans for the mammoth survey. Researchers from the Cosmology Science Working Group of the Square Kilometre Array (SKA) have worked out how to use the world's largest telescope for the task. "The team has produced an exciting collection of cutting-edge ideas that will help shape the future of cosmology", said Working Group chair Roy Maartens, from the University of Western Cape in South Africa. The SKA will be a collection of thousands of radio receivers and dishes spread across two sites in South Africa and Western Australia. When the first phase is completed in 2020, the SKA will have a total collecting area equivalent to 15 football pitches, and will produce more data in one day than several times the daily traffic of the entire internet. A second phase, due in 2025, will be ten times larger still. The key to mapping the cosmos is to detect the faint radio emission from hydrogen gas. "Hydrogen is the most common element in the Universe, so we see it everywhere" said Phil Bull, from the University of Oslo in Norway. "This makes it ideal for tracing the way matter is distributed throughout space". This includes the mysterious dark matter, which is completely invisible to telescopes, but can be detected through its gravitational pull on other objects, like hydrogen-containing galaxies. How to map the cosmos: Speed, or accuracy? The standard way to map the positions of galaxies is to painstakingly detect the faint radio signals from many individual galaxies, staring at them for long enough to measure properties like their distance. Though time consuming, this method is the most accurate, allowing highly detailed 3D maps of the matter distribution to be made. By the late 2020's, the researchers hope to have found almost a billion galaxies in this way; in comparison, the largest galaxy surveys to date have mapped the positions of only around a million galaxies. An exciting alternative option being developed by SKA researchers, and others, is to rapidly scan the telescopes across the sky, sacrificing accuracy but surveying a much larger area in a short period of time. "This will only give us a low-resolution map" said Mario Santos (University of Western Cape), "but that's already enough to start answering some serious questions about the geometry of the Universe and the nature of gravity". The results from this type of "intensity mapping" survey could be ready as early as 2022. New window on cosmic mysteries For the astrophysicists, some of the biggest questions relate to dark energy, an enigmatic substance that appears to be making the Universe expand at an ever faster rate. "The SKA will allow the most precise investigations of dark energy to date" said Alvise Raccanelli, from Johns Hopkins University, USA. "By using 3D maps of the distribution of galaxies, we can study dark energy and test Einstein's General Relativity better than any experiment so far", he added. Characteristic patterns in the galaxy distribution allow researchers to make extremely accurate measurements of how the cosmic expansion has changed over periods of billions of years. 
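The "faint radio emission from hydrogen gas" referred to above is the 21 cm hyperfine line of neutral hydrogen. The arithmetic below is a generic illustration of why mapping that line also maps distance; the rest frequency used is the standard 1420.4 MHz value, not a number taken from this press release.

# Sketch: the observed frequency of the neutral-hydrogen 21 cm line falls with
# redshift, letting an intensity-mapping survey slice the Universe into shells.
F_REST_MHZ = 1420.405752          # standard 21 cm rest frequency, assumed here

def observed_frequency_mhz(z):
    """Observed frequency (MHz) of 21 cm emission from redshift z."""
    return F_REST_MHZ / (1.0 + z)

for z in (0.0, 0.5, 1.0, 2.0, 3.0):
    print(f"z = {z:3.1f}: {observed_frequency_mhz(z):7.1f} MHz")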
Testing Einstein's theory is another top priority for cosmologists. "This will shed light on whether there is a '5th force' of nature", said Gongbo Zhao from National Astronomical Observatories of China. "Seeing it would be the smoking gun if General Relativity is breaking down over cosmological distances". Such a huge atlas of the distribution of matter in the Universe will also open a new window to investigate the first moments after the Big Bang. "What happens on ultra-large distance scales tells us something about how the newborn Universe behaved when it was only a tiny fraction of a second old," said Stefano Camera, at the Jodrell Bank Centre for Astrophysics in Manchester, UK. The measurements will allow researchers to more closely scrutinise "cosmic inflation", the process that is believed to have sown the seeds of structures like galaxies and superclusters that we see today. According to the scientists, it's not only by looking into the past that we can figure out how the Universe works. "By observing a billion galaxies at two different dates, ten years apart, the SKA will be able to measure the expansion of the Universe directly" said Hans-Rainer Klöckner from the Max-Planck Institute for Radioastronomy in Germany. The cosmic expansion happens relatively slowly compared to the timescale of, say, a human lifetime, so performing a direct measurement like this "would be a major technical achievement", as well as providing more information on the nature of dark energy, said Klöckner. The shape of the Universe In addition to 3D maps of the hydrogen radio emission, the SKA will also make two-dimensional maps using the total radio-wave emissions of galaxies. "These maps will contain hundreds of millions of galaxies, and billions in Phase 2, allowing us to test whether the shape of the Universe is as simple as our theory predicts", said Matt Jarvis from Oxford University, UK. Jarvis is referring to a series of fundamental physical principles, dating back to Copernicus in the 16th Century, which state that the shape of the matter distribution should look about the same on average, regardless of the direction you point your telescope. Recent observations have revealed troubling hints that this property, called "statistical isotropy", may not hold however. "If this turns out to be the case, there would be very serious ramifications for our understanding of the cosmos" concludes Dominik Schwarz, from the University of Bielefeld in Germany. In addition to „Cosmology“ there are a number of SKA preprints referring to other Science topics of the SKA available on astro-ph, where scientists from several Max Planck Institutes and German Universities are participating. Dr. Hans-Rainer Klöckner Max-Planck-Institut für Radioastronomie, Bonn. Fon: +49 228-525-31 Prof. Dr. Dominik Schwarz Fon: +49 521-106-6226 Dr. Norbert Junkes, Press and Public Outreach, Max-Planck-Institut für Radioastronomie. Norbert Junkes | Max-Planck-Institut für Radioastronomie Computer model predicts how fracturing metallic glass releases energy at the atomic level 20.07.2018 | American Institute of Physics What happens when we heat the atomic lattice of a magnet all of a sudden? 18.07.2018 | Forschungsverbund Berlin A new manufacturing technique uses a process similar to newspaper printing to form smoother and more flexible metals for making ultrafast electronic devices. 
The low-cost process, developed by Purdue University researchers, combines tools already used in industry for manufacturing metals on a large scale, but uses... For the first time ever, scientists have determined the cosmic origin of highest-energy neutrinos. A research group led by IceCube scientist Elisa Resconi, spokesperson of the Collaborative Research Center SFB1258 at the Technical University of Munich (TUM), provides an important piece of evidence that the particles detected by the IceCube neutrino telescope at the South Pole originate from a galaxy four billion light-years away from Earth. To rule out other origins with certainty, the team led by neutrino physicist Elisa Resconi from the Technical University of Munich and multi-wavelength... For the first time a team of researchers have discovered two different phases of magnetic skyrmions in a single material. Physicists of the Technical Universities of Munich and Dresden and the University of Cologne can now better study and understand the properties of these magnetic structures, which are important for both basic research and applications. Whirlpools are an everyday experience in a bath tub: When the water is drained a circular vortex is formed. Typically, such whirls are rather stable. Similar... Physicists working with Roland Wester at the University of Innsbruck have investigated if and how chemical reactions can be influenced by targeted vibrational excitation of the reactants. They were able to demonstrate that excitation with a laser beam does not affect the efficiency of a chemical exchange reaction and that the excited molecular group acts only as a spectator in the reaction. A frequently used reaction in organic chemistry is nucleophilic substitution. It plays, for example, an important role in in the synthesis of new chemical... Optical spectroscopy allows investigating the energy structure and dynamic properties of complex quantum systems. Researchers from the University of Würzburg present two new approaches of coherent two-dimensional spectroscopy. "Put an excitation into the system and observe how it evolves." According to physicist Professor Tobias Brixner, this is the credo of optical spectroscopy.... 13.07.2018 | Event News 12.07.2018 | Event News 03.07.2018 | Event News 20.07.2018 | Power and Electrical Engineering 20.07.2018 | Information Technology 20.07.2018 | Materials Sciences
<urn:uuid:279eafa7-fb6c-451a-afcc-70e07c6dd8dc>
3.703125
1,942
Content Listing
Science & Tech.
37.845853
95,565,340
The Komodo dragon Varanus komodoensis is the world's largest living lizard. Despite numerous collections exhibiting these animals, only a small number of institutions worldwide have managed to breed them. Here, we discuss the new, purpose-built breeding facility at ZSL London Zoo and recent husbandry changes that resulted in the oviposition of two fertile clutches of eggs and five hatchling Komodo dragons for the first time in the United Kingdom. Four of the Komodo dragons were produced parthenogenetically, which is the first time this has been documented in this species.
<urn:uuid:26e75ac7-18ca-40a0-9636-3ec2e97040fe>
3
136
Academic Writing
Science & Tech.
20.762255
95,565,342
How Clear Is the Water? - Grade Level: - Seventh Grade-Twelfth Grade - Biology: Animals, Biology: Plants, Chemistry, Climate Change, Ecological Engineering, Ecology, Environment, Physical Science - 6-8 Class periods (50 minutes each) - Group Size: - Up to 36 (6-12 breakout groups) - National/State Standards: HS-LS2 Ecosystems: Interactions, Energy, and Dynamics HS-ESS2 Earths Systems HS-ETS1 Engineering Design - ecosystem, turbidity, water quality, secchi disk OverviewIn this lesson students will test their knowledge of ecosystems and the qualities necessary to sustain life by creating Secchi disks, testing turbidity (water clarity), and making predictions about the habitat that might exist. This fun, hands-on lesson allows students to be the scientist and make predictions based on their findings in their lab reports. - Students will be able to explain the impact of water clarity on ecosystems functionality. - Students will create a product to analyze a solution. - Students will complete a lab report. BackgroundMany marine or aquatic ecosystems depend on the quality or productivity of water. Water heavy with glacial sediment is often not highly productive because of the silt concentration and temperature, where salt water tends to be warmer and teeming with life. Students will be working in a local lake or ocean (or classroom if you don’t have a nearby body of water) to test its turbidity and analyze the life that resides in the body of water. The final result should be a lab report and an engineered product that students have taken time to create and explain. This lesson fits well with an engineering project or into an ecology lesson. The goals are to engage students in the water near them,with the life around them and to have them think of creative solutions to sustain their environment. MaterialsThe materials with the PDF lesson are all inclusive. It is a presentation to start students off and spearhead the lesson. There is an activity manual with all the portions suggested in the procedure as a template for teachers to use and hand out. The activity manual is a complete student handbook of materials to use while working through the idea of water quality. It includes a Turbidity Lab, Engineering process outline, Lab Write-up outline, and teacher notes and suggestions. This is the main portion of the activity. Download Use this presentation to kick off your class exploration of our lesson, "How Many Salmon Are Enough? Download Part 1: Introduce the topic to students as either a part of a stand-alone lesson or as a part of a climate change, ecology, or engineering lesson. Starting with the introductory presentation, talk students through the ideas of water quality and its importance to the ecological world. Take time to chat and have students respond to the “Let’s Consider” sections found in the presentation. (50 min) The goal behind the engineering portion is for students to realize a problem related to the turbidity of water, brainstorm a creative solution, and test a product they have created. There is a template to take students through the engineering process in the activity manual. With this activity, you can suggest students look at the ecology of an area, the plants and animals who live within a body of water, a better Secchi disk design that makes it easier to record, or a Secchi disk that tests for something other than turbidity (dissolved oxygen, pH, etc.). The idea is that students are using their brains to create something new. (2-3 50 min periods)
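For classes that want to push one step beyond recording Secchi depth, the sketch below converts a reading into a rough light-extinction coefficient and euphotic depth. It is an optional add-on, not part of the packaged activity manual, and the Poole–Atkins factor of about 1.7 is an empirical rule of thumb that varies between water bodies.

# Sketch: rough optics from a Secchi-disk reading.
# Assumes the empirical relation k ~ 1.7 / secchi_depth and the common rule
# that the euphotic zone ends where light drops to ~1% of its surface value.
import math

def extinction_coefficient(secchi_depth_m):
    """Approximate vertical light-extinction coefficient k, in 1/m."""
    return 1.7 / secchi_depth_m

def euphotic_depth_m(secchi_depth_m):
    """Depth where light is ~1% of the surface value: z = ln(100) / k."""
    return math.log(100) / extinction_coefficient(secchi_depth_m)

for z_sd in (0.5, 2.0, 8.0):      # murky, moderate, and clear water
    k = extinction_coefficient(z_sd)
    print(f"Secchi {z_sd:4.1f} m -> k = {k:.2f} 1/m, euphotic depth ~ {euphotic_depth_m(z_sd):4.1f} m")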
<urn:uuid:08038d18-e3e1-48af-9f19-8bcd8436b74d>
4
740
Tutorial
Science & Tech.
37.785987
95,565,343
Molar mass: 1.01 g·mol−1
Except where otherwise noted, data are given for materials in their standard state (at 25 °C [77 °F], 100 kPa).
The hydrogen anion, H−, is a negative ion of hydrogen, that is, a hydrogen atom that has captured an extra electron. The hydrogen anion is an important constituent of the atmosphere of stars, such as the Sun. In chemistry, this ion is called hydride.
The ion has two electrons bound by the electromagnetic force to a nucleus containing one proton. The binding energy of H− equals the binding energy of an extra electron to a hydrogen atom, called the electron affinity of hydrogen. It is measured to be 0.754195(19) eV or 0.0277161(62) Ha. The total ground state energy thus becomes −14.359888 eV.
The hydrogen anion is an important species in the photosphere of the Sun. It absorbs energies in the range 0.75–4.0 eV, which ranges from the infrared into the visible spectrum (Rau 1999, Srinivasan 1999). It also occurs in the Earth's ionosphere (Rau 1999), and can be produced in particle accelerators. Its existence was first proven theoretically by Hans Bethe in 1929 (Bethe 1929). H− is unusual because, in its free form, it has no bound excited states, as was finally proven in 1977 (Hill 1977). It has been studied experimentally using particle accelerators (Bryant 1977).
The term hydride is probably most often used to describe compounds of hydrogen with other elements in which the hydrogen is in the formal −1 oxidation state. In most such compounds the bonding between the hydrogen and its nearest neighbor is covalent. An example of a hydride is the borohydride anion (BH4−).
- Bethe, H. (1929). "Berechnung der Elektronenaffinität des Wasserstoffs". Zeitschrift für Physik (in German). 57 (11–12): 815–821. Bibcode:1929ZPhy...57..815B. doi:10.1007/BF01340659.
- Bryant, H. C.; Dieterle, B. D.; Donahue, J.; Sharifian, H.; Tootoonchi, H.; Wolfe, D. M.; Gram, P. A. M.; Yates-Williams, M. A. (1977). "Observation of Resonances near 11 eV in the Photodetachment Cross Section of the H− Ion". Physical Review Letters. 38 (5): 228. Bibcode:1977PhRvL..38..228B. doi:10.1103/PhysRevLett.38.228.
- Hill, R. N. (1977). "Proof that the H− Ion Has Only One Bound State". Physical Review Letters. 38 (12): 643. Bibcode:1977PhRvL..38..643H. doi:10.1103/PhysRevLett.38.643.
- Rau, A. R. P. (1996). "The Negative Ion of Hydrogen" (PDF). Journal of Astrophysics and Astronomy. 17 (3): 113–145. Bibcode:1996JApA...17..113R. doi:10.1007/BF02702300.
- Rau, A. (1999). The Negative Ion of Hydrogen.
- Srinivasan, G. (1999). "Chapter 5". From White Dwarfs to Black Holes: The Legacy of S. Chandrasekhar. Chicago: University of Chicago Press.
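A quick check of the numbers quoted above, added as a sketch rather than taken from the source: converting the 0.75–4.0 eV absorption range into wavelengths with E = hc/λ shows why H− opacity matters from the infrared up through the visible.

# Sketch: photon wavelength from energy, using hc ~ 1239.84 eV·nm.
HC_EV_NM = 1239.84

def wavelength_nm(energy_ev):
    return HC_EV_NM / energy_ev

for e in (0.754, 0.75, 4.0):
    print(f"{e:5.3f} eV -> {wavelength_nm(e):6.0f} nm")
# ~1644 nm at the 0.754 eV detachment threshold (infrared), down to ~310 nm at 4.0 eV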
<urn:uuid:d88d8e11-591a-4d15-89be-f2b71a898552>
3.453125
816
Knowledge Article
Science & Tech.
83.94973
95,565,354
The Mechanism of Plate Motion The Mechanism of Plate Motion More Essay Examples on Geography Rubric It is clear by now that there is movement deep in the earth as evidenced by earthquakes and volcanoes - The Mechanism of Plate Motion introduction. The question that comes to mind then is “What is moving in the earth’s interior? Why and how is it moving?” These questions prompted a series of scientific geologic studies, and in the 1960’s a plate tectonic theory (plate motion) was formulated (Weil 2006). “In 1965, Tuzo Wilson introduced the term plate for the broken pieces of the Earth’s lithosphere…in 1967, Jason Morgan proposed that the Earth’s surface consists of 12 rigid plates that move relative to each other…in 1967, Xavier Le Pichon published a synthesis showing the location and type of plate boundaries and their direction of movement” (Hawaiian Natural History Association, “The Birth of Plate Tectonics”, 2005). In order to understand plate motion, it is first necessary to know the physical composition of the earth’s interior, for these elements are all involved in plate motion. Basic knowledge in science revealed that the interior of the earth is made up of layers of mantle and crust. The ones that are in the surface of the earth are the oceanic crust and the continental crust. The oceanic crust is what we call the “ocean floor” and the continental crust is what made up the “continental masses” or lands that we know so well (refer to figure1) (“Theory” 1). From them we have the oceanic plates and the continental plates. These two “crusts” along with the “rigid” upper mantle is what we call the lithosphere. Beneath the lithosphere is the athenosphere, which is made of “plastic rocks under great pressure” (“Theory” 1). In other words, athenosphere area is a very hot place. Although it is now clear with the geologists regarding the existence and movement of the plates, they are not still sure which exactly causes plate motion (Weil 2006). They are puzzled between two possibilities: 1. Mantle convection currents causes plate motion (Weil 2006) 2. Surface boundary and plate forces causes plate motion (Weil 2006) Between these two possibilities the geologists are primarily interested in proving or disproving the arguments that plates move only as a response to the athenosphere mantle convection currents , or that the plates themselves are the ones who causes the movement ( due to surrounding forces ) and consequently affecting the athenosphere mantle below(Weil 2006). Let us now consider the mechanism of plate motion in detail by discussing the two possibilities. In the first possibility, convection (release of heat by boiling) currents in the athenosphere are thought to be the cause of plate motion (“Theory” 2). Most geologists consider this phenomenon as the most likely cause of plate motions. As stated earlier, the area in the athenosphere is very hot and the molten rocks inside become less dense so that it rises up towards the edge of the athenosphere. As it rises, it left a void behind it so that the surrounding molten rocks move to fill the void. But when the other molten rocks fill the void, it also left its own corresponding void so that the other rocks or elements on the side move to fill the avoid and so on until the first molten rocks that move up does the filling ( for it then cools and sink back) completing a circular process known as “convection cell”. 
The convection cell movement creates convection currents that naturally drive the plates in motion for it sits on top of the lithosphere rigid mantle crust that is just above the athenosphere ( Strickler 1997). The next possibility involves surface boundary and plate forces. These forces are thought to be strong enough so as to make it able to move a wide variety of plate sizes. These forces may move the plates slowly usually with observable effect ranging from tens of millions years or the force may cause a sudden movement causing earthquakes. Identified plate forces are “ridge push, slab pull, trench suction, collisional resistance and basal drag” (see figure 2) (Weil 2006). Ridge push may exhibit itself as either a boundary or a body force. The body force is horizontal force acting on the ocean floor as a direct effect of the “cooling and thickening of the oceanic lithosphere with age” (Weil 2006). On the other hand, Bott (1993) stated that the Ridge Push boundary force, is caused by the “gravity wedging” effect when hot, “buoyant” mantle from below the ridge crest rises up to cause horizontal pressure( qtd in Weil 2006 ). In this case, the effect of the force is limited at the edge of the lithospheric plate, felt only in the area covering the length of the ridge (Weil 2006). The Slab Pull forces are found in the subduction (depression) zone. Subduction zones are created when the more dense oceanic crust collides with the less dense continental crust so that the former sinks forming a subducting slab below. According to Capple and Tullis (1977), the slab pull force is the force that pulls the slab deeper and is “dependent on the angle, temperature, age and volume of the subducting slab…” (qtd in Weil 2006). Wilson (1993) said that it is believed that slab pull is a very strong boundary force (qtd in Weil 2006). The Collisional Resistance may be said to be the negative counterpart of slab pull. Whenever a subducting slab exists, there is a corresponding “resistive force” exerted by the viscous, more ductile upper mantle. According to Ziegler (1992) the sum of the two opposing forces (slab pull and resistive) equals the Net Slab force exerting at the” colliding margin”. Richards (1992) revealed that in recent studies , however, the slab itself balance the slab force so that the slab force do not actually contribute to plate movements (qtd in Weil 2006). Trench Suction forces occur in the trenches created by the subduction zones and it is more often referred to as the net trenchward pull (qtd Forsyth and Uyeda (1975) and Chase (978) in Weil 2006). Along with the formation of subduction zone is the “small-scale convection in the mantle wedge”, in the shallow subsurface, resulting to trench suction. Basal Shear Traction or Basal Drag forces plays a significant role in plate motion for they are the ones who gave an indication whether plate motions are “active” or “passive”. Basal drag is created as a resulting resistance or dragging force when the two surface layers of the upper mantle and the lithosphere meet. Geologists consider the effect of basal drag to be small but when it involves big surface plates, it can create a big total resistance (Weil 2006). Whichever causes the plate motion, there are three sure ways of plate movement as dictated by the corresponding plate boundaries. Plates either “move away from, toward, or slide past each other”. 
Geologists respectively identify these movements as “divergent, convergent, and transform plate boundaries” (Hawaiian Natural History Association, “Types of Plate Motion”, 2005). In the case of divergent plate boundary (see figure 3), oceanic or continental plates move away from each other. Prime example of this is the mid-Atlantic Ridge, near the middle of the Atlantic Ocean (“Theory” 1). On the other hand, there are three types of convergent boundaries (see figure 4), depending on which of the lithospheric plates collides. The first type is when denser oceanic plates collide with the continental plates resulting in the formation of subduction zones and volcano. Example of this is the subducting oceanic Nazca plate in the South American continent. The next type is when two continental plates collide resulting to the formation of mountains. The third type is when two oceanic plates collide resulting to the subducting of a more dense oceanic plate, which then eventually leads to the formation of volcanic islands (“Theory” 1). In the last plate movement, transform plate boundaries (see figure 5), the prime example is the San Andreas Fault in California. This fault was created when the “Pacific Plate slides past the North American Plate” (Hawaiian Natural History Association, “Types of Plate Motion”, 2005). The issue of which of the two possible mechanisms is responsible for plate motion is what the geologists are trying to find out for sure at present. Scientific geologic studies are still ongoing to prove or disprove any of the two possibilities. Figure 1 . (Taken from http://volcano.und.nodak.edu/vwdocs/vwlessons/plate_tectonics/part13.html ) Figure 2: Basic schematic of different Plate Driving Forces (Taken from http://www.umich.edu/~gs265/tecpaper.htm) Figure 3 (Taken from http://volcano.und.nodak.edu/vwdocs/vwlessons/plate_tectonics/part13.html) Figure 4. (Taken from http://volcano.und.nodak.edu/vwdocs/vwlessons/plate_tectonics/part13.html) Figure 5. (Taken from http://volcano.und.nodak.edu/vwdocs/vwlessons/plate_tectonics/part13.html) Hawaii Natural History Association. “The Birth of Plate Tectonics”. A Teacher’s Guide to the Geology of Hawaii Volcanoes National Park. North Dakota and Oregon Space Grant Consortia. 2005. Accessed February 19, 2008 <http://volcano.und.nodak.edu/vwdocs/vwlessons/plate_tectonics/part11.html> Hawaii Natural History Association. “Types of Plate Motion”. A Teacher’s Guide to the Geology of Hawaii Volcanoes National Park. North Dakota and Oregon Space Grant Consortia. 2005. Accessed February 19, 2008 <http://volcano.und.nodak.edu/vwdocs/vwlessons/plate_tectonics/part13.html> Strickler, Mike. “What causes the plates to move?” University of Oregon. April 1997. Accessed February 19, 2008 from http://jersey.uoregon.edu/~mstrick/AskGeoMan/geoQuerry35.html __________. “Theory of Plate Tectonics”. Accessed February 19, 2008 <bin.lps.org/manila/lnesci/Ch4.2Notes.pdf> Weil, Arlo Brandon. “Plate Driving Forces and Tectonic Stress”. University of Michigan. 2006. Accessed February 19, 2008 <http://www.umich.edu/~gs265/tecpaper.htm>
<urn:uuid:063c465e-5b11-4695-859d-5b3599e30b45>
4.0625
2,421
Academic Writing
Science & Tech.
51.290887
95,565,356
Hinge benefits: ions pour through this synthetic chloride channel Chemists copy from cells to make a tunnel for salt Chemists have finally achieved what every human cell can do. They have designed and built from scratch a gate for electrically charged chlorine atoms to pass through1. George Gokel and colleagues at Washington University in St Louis, Missouri, based their gate on biological proteins that transport chloride ions from one side of our cell membranes to the other. Like these, the synthetic channel can be opened and closed by applying a voltage. How this happens is not clear, even in natural ion channels. In nature, voltage regulates ion flow to control how salty cells become. If there are more chloride ions on one side of a membrane than the other, the imbalance of electrical charge sets up a voltage across the membrane that can start or stop ions passing. Cells use ion channels to produce electrical signals such as nerve impulses and the muscle movements that produce the heart beat. Many channels transport only one kind of ion, sodium, say, or chloride. Similarly, the artificial channels transport chloride ions much more effectively than other ions, such as potassium or sulphate. Gokel’s group tested them in artificial particles called liposomes, which are hollow shells with walls like real cell membranes. Several different types of protein-based chloride channel in the human body serve functions ranging from salt uptake to muscle contraction. Genetic mutations that make channels faulty are linked to heritable diseases such as cystic fibrosis and some muscle and kidney complaints. Artificial chloride channels might one day serve as drugs against such diseases, but that’s a distant goal. At the moment, Gokel and his colleagues are simply trying to build simple molecules that can do the same job as real ion channels. Another motivation is that natural and synthetic ion transporters can act as antibiotics. Cell membranes have an oily inside edge that repels water, so water-soluble substances such as ions need help getting across. Protein ion channels are embedded in a membrane, creating a kind of tunnel that lets ions through. The new synthetic chloride channel tries to copy this. The molecule has a fatty, oil-soluble tail and a protein-like, ion-transporting head. The fatty tail anchors it in the membrane. The head contains a string of seven amino acids, like those that make up natural chloride channels. In particular, an amino acid known as proline is in the middle of the sequence. Gokel’s team think that the proline is the hinge-like apex of an arch-shaped structure, and that two prolines stick together in the membrane to form a pore just wide enough for a chloride ion to pass through. PHILIP BALL | © Nature News Service Scientists uncover the role of a protein in production & survival of myelin-forming cells 19.07.2018 | Advanced Science Research Center, GC/CUNY NYSCF researchers develop novel bioengineering technique for personalized bone grafts 18.07.2018 | New York Stem Cell Foundation A new manufacturing technique uses a process similar to newspaper printing to form smoother and more flexible metals for making ultrafast electronic devices. The low-cost process, developed by Purdue University researchers, combines tools already used in industry for manufacturing metals on a large scale, but uses... For the first time ever, scientists have determined the cosmic origin of highest-energy neutrinos. 
A research group led by IceCube scientist Elisa Resconi, spokesperson of the Collaborative Research Center SFB1258 at the Technical University of Munich (TUM), provides an important piece of evidence that the particles detected by the IceCube neutrino telescope at the South Pole originate from a galaxy four billion light-years away from Earth. To rule out other origins with certainty, the team led by neutrino physicist Elisa Resconi from the Technical University of Munich and multi-wavelength... For the first time a team of researchers have discovered two different phases of magnetic skyrmions in a single material. Physicists of the Technical Universities of Munich and Dresden and the University of Cologne can now better study and understand the properties of these magnetic structures, which are important for both basic research and applications. Whirlpools are an everyday experience in a bath tub: When the water is drained a circular vortex is formed. Typically, such whirls are rather stable. Similar... Physicists working with Roland Wester at the University of Innsbruck have investigated if and how chemical reactions can be influenced by targeted vibrational excitation of the reactants. They were able to demonstrate that excitation with a laser beam does not affect the efficiency of a chemical exchange reaction and that the excited molecular group acts only as a spectator in the reaction. A frequently used reaction in organic chemistry is nucleophilic substitution. It plays, for example, an important role in in the synthesis of new chemical... Optical spectroscopy allows investigating the energy structure and dynamic properties of complex quantum systems. Researchers from the University of Würzburg present two new approaches of coherent two-dimensional spectroscopy. "Put an excitation into the system and observe how it evolves." According to physicist Professor Tobias Brixner, this is the credo of optical spectroscopy.... 13.07.2018 | Event News 12.07.2018 | Event News 03.07.2018 | Event News 20.07.2018 | Power and Electrical Engineering 20.07.2018 | Information Technology 20.07.2018 | Materials Sciences
<urn:uuid:4c6eea47-5372-4046-b5c7-f03e2c1dc76d>
3.5
1,146
Content Listing
Science & Tech.
40.633219
95,565,359
Wildlife-Friendly Solar PV for Massachusetts Large solar photovoltaic (PV) arrays can be planted with native vegetation to provide habitat to pollinators and wildlife species. A number of states have established voluntary "pollinator-friendly" certification programs to help solar developers implement, maintain, and promote native meadow habitats under and around solar panels. CEE is working with state and federal agencies, pollinator experts, and stakeholders in the agriculture, wildlife biology, and solar energy communities to develop a wildlife-friendly designation program for solar PV facilities in Massachusetts. Why wildlife-friendly solar? It’s Good for Native Wildlife and Plants: Native flowering herbs and shrubs provide habitat and food to pollinators and other species. Grassland habitats support over 70 animals and plants designated as Species of Greatest Conservation Need in Massachusetts. It’s Cost-Effective: Establishing native plants under solar PV arrays may require higher upfront costs, but these practices can result in lower maintenance costs over time, due to reduced mowing schedules, and reduced needs for watering and herbicide application. It’s Prettier: Wildflower meadows and vegetation screens of native shrub species are aesthetically more appealing than grass or gravel. They may make solar PV facilities more acceptable to neighbors and visitors. What is UMass Clean Energy Extension doing to develop wildlife-friendly solar PV in Massachusetts? - Working with state wildlife and native plant organizations to determine best management practices for establishing and maintaining native plant and animal communities under solar arrays. - Working with agricultural organizations and beekeepers to help support pollinators important to farming. - Working with solar PV developers to ensure designation standards are economically feasible and compatible with solar PV array operation and maintenance. - Working with the state Attorney General’s office to develop a proposed legal path to voluntary designation. What have other states done? - Created voluntary designation programs for solar PV facilities to establish habitats friendly to pollinators and native grassland birds - Developed best management practices, as well as establishment, maintenance, and monitoring guidance. - Check out what's happening in other states: Vermont Maryland Minnesota Have questions about the project, or interested in becoming involved? Contact Zara Dowling (email@example.com; 413-545-8516).
<urn:uuid:788408fe-1443-4eb6-af0d-c860eeaf1f02>
2.6875
475
News (Org.)
Science & Tech.
11.454441
95,565,361
Drunk and alcohol
One drunk measured 2.7 ‰ alcohol in the blood, another 1.75 ‰. How many grams of alcohol did each have in the blood if each has 6 kg of blood?
To solve this example, the following mathematical knowledge is needed.
Next similar examples:
- Beer permille
After three 10° beers drunk shortly after one another, the 5 kg of blood of an adult human contains 6.6 g of alcohol. How much is that in per mille?
After three 10° beers consumed in a short time, there are 5.1 g of alcohol in 5 kg of adult human blood. How much is that in per mille?
- Motion problem
A car travels from Levice to Košice at a speed of 81 km/h. Another car travels from Košice to Levice at a speed of 69 km/h. How many minutes before they meet will the cars be 27 km apart?
Calculate how many minutes per year, on average, a web server is unavailable if its availability is 99.99%.
How many times a day do the hands of a clock overlap?
Calculate the average fall of the river Vltava in per mille, if over a section 928 km long the water flows from 1592 m AMSL down to 108 m AMSL.
- Road - permille
A road 5 km long begins at an altitude of 500 meters above sea level and ends at an altitude of 521 m ASL. By how many per mille does the road rise?
If you walk at a speed of 5.1 km/h, you reach the station 37 minutes after the train has left. If you ride a bike to the station at a speed of 28 km/h, you arrive 38 minutes before its departure. How far away is the train station?
Between points A and B, whose horizontal distance is 1.5 km, a railway line climbs at 8 per mille. Between points B and C, with a horizontal distance of 900 m, the climb is 14 per mille. Calculate the difference in altitude between points A and C.
- Slope of track
Calculate the average slope (in per mille and also in degrees) of the rail track between Prievidza (309 m AMSL) and Nitra (167 m AMSL), if the track is 77 km long.
- Climb in percentage
The height difference between points A and B is 587 m. Calculate the climb of the route as a percentage if the horizontal distance between A and B is 4.8 km.
- Mountain railway
The height difference between points A and B of a railway line is 38.5 meters; their horizontal distance is 3.5 km. Determine the average climb of the track in per mille.
The funicular on Petřín hill (Prague) is 408 meters long and overcomes an altitude difference of 106 meters. Calculate the angle of climb.
- A clock
A clock was set right at 6:00 AM. If it gains 3 1/2 minutes per hour, what time will it show at 6:00 PM on the same day? Show your solution.
- Descent of road
A road sign indicates a gradient of 10.3%. Calculate the average angle of descent.
- Scientific notation
Approximately 7.5×10^5 gallons of water flow over a waterfall each second. There are 8.6×10^4 seconds in 1 day. Select the approximate number of gallons of water that flow over the waterfall in 1 day.
A rectangle is 11 cm long and 45 cm wide. Determine the radius of the circle circumscribed around the rectangle.
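The headline problem can be checked directly (assuming, as the wording suggests, that the per-mille value is taken by mass of blood): 2.7 ‰ of 6 kg is 0.0027 × 6000 g = 16.2 g, and 1.75 ‰ gives 10.5 g. A minimal Python sketch of the same calculation:

```python
def alcohol_grams(permille, blood_kg):
    """Grams of alcohol implied by a blood alcohol level given in per mille (by mass)."""
    blood_grams = blood_kg * 1000          # 1 kg = 1000 g
    return permille / 1000 * blood_grams   # per mille = parts per thousand

# The two cases from the problem: 2.7 permille and 1.75 permille in 6 kg of blood.
for level in (2.7, 1.75):
    print(f"{level} permille of 6 kg blood -> {alcohol_grams(level, 6):.1f} g of alcohol")
# Prints 16.2 g and 10.5 g, respectively.
```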
<urn:uuid:8ef99e42-ed6d-4527-8e24-3cf1e58d6a3b>
2.5625
757
Tutorial
Science & Tech.
79.037364
95,565,365
“It’s not the kind of thing you’d want to spread on a slice of toast!” This kind of event has only been clearly seen at the centre of a galaxy before, not at the outer edges Where are the stars in various constellations located? Analysis of over 800 galaxies shows that there is a relationship between a galaxy’s age and size One supernova explosion is enough to pack a punch, but can two go off at the same time? This study revealed that unusual X-ray results of galaxy clusters could be explained by dark matter Dwarf galaxy J0811+4730 may serve well as a proxy for better understanding the developing chemistry of the young cosmos The research has thrown a wrench into what we know about how galaxies form No-nonsense astrophysicist Neil deGrasse Tyson reveals his thoughts on the universe, fears for the American space program and George R. R. Martin – available now! A research team has discovered new evidence of stars forming in the Milky Way With all that gas in the galactic nucleus, shouldn’t stars be made faster? Three detector systems for the Euclid mission, led by the European Space Agency, have been delivered to Europe for the spacecraft’s near-infrared instrument NASA’s Chandra X-ray Observatory and other telescopes have revealed details about a giant black hole, located some 145 million light years away Some 290 million years ago, a star much like the Sun wandered too close to the central black hole of its galaxy Data from NASA’s Swift spacecraft, amongst other telescopes, suggest black holes swallow stellar debris in bursts The finding was made after recent analysis of data for millions of stars from the Gaia space mission Across the universe, galaxies are being killed and the question scientists want answered is, what’s killing them? Astronomers have found that the earliest galaxies have been turning an unusual colour An unexpected contribution to the universe’s relic radiation has been detected for the very first time NASA’s Hubble Space Telescope captured two festive-looking nebulae brightening up our galaxy
<urn:uuid:7d5f33a6-02a1-4f85-8398-c19b7f22fce2>
3.390625
444
Content Listing
Science & Tech.
22.594131
95,565,369
The evolution of supernova remnants (SNRs) has not been well understood for many years, and only magneto-hydrodynamic (MHD) simulations can tell us their history. Recently, by applying MHD simulation, researchers from the National Astronomical Observatories of CAS (NAOC) successfully explained the radio emission evolution of an SNR. In addition, the simulation predicts that there should be a new shell of the SNR, which is confirmed by analysis of polarization observations. An SNR is the result of the interaction between a supernova and the surrounding interstellar medium. Its evolution depends not only on the parameters of the progenitor but also on the distribution of the surrounding gas and magnetic field. However, progenitors vary widely and the distribution of the surrounding gas and magnetic field is poorly known, so it is very difficult to set reasonable initial conditions. This study focuses on the SNR W51C, which has been observed by various telescopes. Building on previous work, it becomes easier to obtain appropriate conditions and thus run a more accurate simulation. SNR W51C has been taken to be a one-edge SNR next to two HII regions, W51A and W51B. However, the simulation shows that there should be another edge overlapping with W51A (see Fig. 1). W51A is so luminous that the new edge cannot be detected directly. In fact, many years ago some researchers found non-thermal emission toward W51A, which, as an HII region, should not produce non-thermal emission. This problem persisted for many years, and the MHD simulation offers the first hint of a solution. Further analysis of polarization data from the Effelsberg 100 m radio telescope in Germany shows clear polarized emission toward W51A, which is impossible for an HII region. However, an SNR can generate both non-thermal and polarized emission. This result confirms the prediction of the MHD simulation and resolves the long-standing problem. The work also simulates the interaction between SNR W51C and a surrounding molecular cloud, explains the origin of the non-thermal emission toward the compact HII region G49.2-0.35 in W51B, and further studies the OH maser distribution around G49.2-0.35. These results help us better understand this SNR and enrich research on the evolution of SNRs. The study has been published in The Astrophysical Journal. The e-print can be accessed on arXiv: https://arxiv.org/abs/1710.04770 Fig. 1. The images of W51C. The left panel is the observed radio image, and the right panel is the image generated from the simulation. In the left panel, only the diffuse emission on the lower right was previously taken to be SNR W51C. The two bright regions in the upper part are the HII regions W51A and W51B, respectively. Address: 20A Datun Road, Chaoyang District, Beijing, China code: 100012 Tel: 010-64888708 E-mail: email@example.com
<urn:uuid:add9da21-d126-41c6-b235-580046c4b3c8>
3.0625
664
Academic Writing
Science & Tech.
52.445962
95,565,370
What systems can you find within the Earth sciences? How do they work? How do they interact with each other? Within its new online Earth sciences theme, SEED has collected articles, activities, animations, and simulations to highlight the many systems of Earth. Soils are critical for many aspects of our daily life. They provide food such as grains, vegetables, and animal feed. They provide fiber for clothing, as in cotton, flax-linen, and hemp. And they provide shelter materials like wood and brick. But did you realize that soils also are an important part of the energy cycle? Soil is often overlooked as a natural resource. Like fossil fuels, we depend on it for energy in the form of foods. And, like fossil fuels, it is nonrenewable. Soil is a delicate balance of inorganic minerals, organic matter, living organisms, soil water, and soil atmosphere. The natural development of soil is an exceedingly slow process. In a few hours, a heavy rain falling on exposed soil can remove inches of what took hundreds of years to form. Here is a simple exercise that will allow you to compare the rates and amounts of erosion that result from various land uses. MY NASA DATA microsets are created using data from NASA Earth science satellite missions. A microset is a small amount of data extracted from a much larger data file. Data is available on the atmosphere, biosphere, cryosphere, ocean, and land surface. Data and related lessons can be used with existing curriculum to help students practice science inquiry and math or technology skills using real measurements of Earth system variables and processes. In this activity, students use NASA data to determine areas of the country that are most likely to produce solar energy by analyzing differences in incoming solar radiation graphs. Want to be an archaeologist without leaving your school? No problem! Use a computer to become a space archaeologist (no spacesuit required)! Crucial to our existence, water sustains all life on Earth. Following the old adage, "What goes around comes around," water moves continuously through the stages of the hydrologic cycle (evaporation, condensation, and precipitation). How does our drinking water fit into this hydrologic cycle? Where did the water we drink fall as precipitation? Did this water percolate down into the ground as part of a groundwater system, or did it remain on the surface as part of a surface water system? What path did this water follow in order to become our drinking water? This lesson will explore the hydrologic cycle and water's journey to our glass. As a citizen scientist, you can take your own air temperatures with an outdoor thermometer and compare your readings to the official ones from the National Weather Service. It is important that you follow the correct procedures, however, for placing your thermometer. This activity will help you to do that, as well as find out what the normal yearly average temperature is for each day. Various types of sediments, or “surficial features,” lie above the bedrock in many places. The following activity shows how a visualization map of surficial features can be used to consider the interactions of the geosphere, hydrosphere, biosphere, and atmosphere. Geodesy is the science that measures and represents the size and shape of Earth. In the United States, survey reference points are developed and maintained by NOAA’s National Geodetic Survey (NGS). 
In this activity, you will find data on the location and description of survey marks in your area and—if you like—search for them through a variation of geocaching. The following activity can leverage SeisMac technology to help students understand how a seismometer records ground motions.
<urn:uuid:9b5091b4-7d83-4ee0-be36-3ccd6b6c343d>
3.640625
763
Content Listing
Science & Tech.
40.823183
95,565,381
Using o(N) = Σ c_a for some choices of a in N, prove that in A5 there is no normal subgroup N other than (e) and A5.
© BrainMass Inc. brainmass.com July 18, 2018, 3:12 am
Please see the attached file for the complete solution. Thanks for using BrainMass.
Group Theory (CVII): Another Counting Principle. By: Thokchom Sarojkumar Sinha
Using o(N) = Σ c_a for some choices of a in N, prove that in A5 there is no normal subgroup N other than (e) and A5. Chains are investigated. The solution is detailed and well presented. Permutation groups and counting principles are determined.
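The standard argument uses the conjugacy class sizes of A5, which are 1, 15, 20, 12 and 12 (summing to 60). A normal subgroup is a union of whole conjugacy classes, always contains the class of the identity, and by Lagrange's theorem its order must divide 60; checking every possible union of classes shows that only the orders 1 and 60 survive. A small illustrative Python check of that counting step (not part of the attached BrainMass solution):

```python
from itertools import combinations

# Conjugacy class sizes of A5: identity, double transpositions, 3-cycles, two classes of 5-cycles.
CLASS_SIZES = [1, 15, 20, 12, 12]
GROUP_ORDER = sum(CLASS_SIZES)  # 60

# A normal subgroup N is a union of conjugacy classes containing the identity class,
# and o(N) must divide o(A5) by Lagrange's theorem.
valid_orders = set()
non_identity = CLASS_SIZES[1:]
for r in range(len(non_identity) + 1):
    for combo in combinations(non_identity, r):
        order = 1 + sum(combo)           # the identity class is always included
        if GROUP_ORDER % order == 0:     # Lagrange's theorem
            valid_orders.add(order)

print(sorted(valid_orders))  # -> [1, 60]: only the trivial subgroup and A5 itself are possible
```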
<urn:uuid:6936fba5-989d-4cb9-be29-21e5778c2b9c>
2.859375
146
Q&A Forum
Science & Tech.
65.050455
95,565,406
© Copyright Dan L. Perlman, 2005-2007
Aposematic coloration in Monarch butterfly, Costa Rica. Monarch butterflies are distasteful and toxic. When they are caterpillars, they feed on milkweed plants, which contain a host of toxins and are largely immune to attack from other insect herbivores. The caterpillars sequester the toxins within parts of their bodies, where they cause no harm, and these toxins stay in the animals when they become adult monarchs. The adults are strikingly colored, and after a bird has tried to eat one it will typically spit the butterfly out and will avoid them thereafter. This type of bright and memorable color pattern is called aposematic coloring, and acts as a warning. Monarchs are also famous for their annual migrations; late in the summer, all monarchs from the eastern two-thirds of the USA and Canada migrate south to overwinter in a small area of pine forests high in the mountains of Mexico. When spring arrives, the individuals that have survived begin a flight northward. Once they reach the southern USA, they stop to lay eggs and there they die. Their offspring and grandoffspring head north as the summer progresses. Although it was once thought that Viceroy butterflies, which look very similar to Monarchs, were palatable, now scientists believe that both species are unpalatable to predators, making them a classic example of Mullerian mimicry.
<urn:uuid:e4253969-7912-493e-bebd-912f115fd0e1>
3.421875
352
Truncated
Science & Tech.
48.704907
95,565,420
Overwhelming scientific evidence supports reducing carbon pollution that causes global warming as much as possible and as quickly as possible. Global warming is happening faster than predicted even several years ago, with many natural systems already seriously impacted. Sea-level rise by the end of the century may be two to three times previous projections. Arctic sea ice is melting faster than anticipated even a few years ago. Northern forests are under attack from heat, drought, insects, and fires. And, many of the changes in our climate may be with us for hundreds and thousands of years. New scientific findings indicate that holding further increases in global temperatures to no more than 2°F above today’s levels, which many believe will allow us to avoid dangerous interference with the climate system, may not be enough to protect people and the planet from significant harm after all. Furthermore, a target of 450 ppm CO2, widely thought to be sufficient for keeping warming below 2°F, only gives us a 50 percent chance of keeping warming that low. More alarming are the early warning signs that we could be approaching tipping points that would cause global warming to accelerate even faster. The United States and the international community must come to terms with an increased sense of urgency to address climate change.http://www.youtube.com/watch?v=vrLXXioNiD8 Aileo Weinmann | Newswise Science News Upcycling of PET Bottles: New Ideas for Resource Cycles in Germany 25.06.2018 | Fraunhofer-Institut für Betriebsfestigkeit und Systemzuverlässigkeit LBF Dry landscapes can increase disease transmission 20.06.2018 | Forschungsverbund Berlin e.V. For the first time ever, scientists have determined the cosmic origin of highest-energy neutrinos. A research group led by IceCube scientist Elisa Resconi, spokesperson of the Collaborative Research Center SFB1258 at the Technical University of Munich (TUM), provides an important piece of evidence that the particles detected by the IceCube neutrino telescope at the South Pole originate from a galaxy four billion light-years away from Earth. To rule out other origins with certainty, the team led by neutrino physicist Elisa Resconi from the Technical University of Munich and multi-wavelength... For the first time a team of researchers have discovered two different phases of magnetic skyrmions in a single material. Physicists of the Technical Universities of Munich and Dresden and the University of Cologne can now better study and understand the properties of these magnetic structures, which are important for both basic research and applications. Whirlpools are an everyday experience in a bath tub: When the water is drained a circular vortex is formed. Typically, such whirls are rather stable. Similar... Physicists working with Roland Wester at the University of Innsbruck have investigated if and how chemical reactions can be influenced by targeted vibrational excitation of the reactants. They were able to demonstrate that excitation with a laser beam does not affect the efficiency of a chemical exchange reaction and that the excited molecular group acts only as a spectator in the reaction. A frequently used reaction in organic chemistry is nucleophilic substitution. It plays, for example, an important role in in the synthesis of new chemical... Optical spectroscopy allows investigating the energy structure and dynamic properties of complex quantum systems. 
Researchers from the University of Würzburg present two new approaches of coherent two-dimensional spectroscopy. "Put an excitation into the system and observe how it evolves." According to physicist Professor Tobias Brixner, this is the credo of optical spectroscopy.... Ultra-short, high-intensity X-ray flashes open the door to the foundations of chemical reactions. Free-electron lasers generate these kinds of pulses, but there is a catch: the pulses vary in duration and energy. An international research team has now presented a solution: Using a ring of 16 detectors and a circularly polarized laser beam, they can determine both factors with attosecond accuracy. Free-electron lasers (FELs) generate extremely short and intense X-ray flashes. Researchers can use these flashes to resolve structures with diameters on the... 13.07.2018 | Event News 12.07.2018 | Event News 03.07.2018 | Event News 18.07.2018 | Materials Sciences 18.07.2018 | Life Sciences 18.07.2018 | Health and Medicine
<urn:uuid:11b3ba36-d6f7-4563-b790-0d4d40515c26>
3.828125
938
Content Listing
Science & Tech.
42.349299
95,565,428
Perception of stimuli and activation of a signaling cascade is an intrinsic characteristic of all living organisms. To date, several signaling pathways have been elucidated that are involved in multiple facets of an organism's growth and development. Exposure to unfavorable stimuli or stress conditions activates different signaling cascades in both plants and animals. Being sessile, plants cannot move away from an unfavorable condition and hence activate molecular machinery to cope with or adjust to that particular stress. In plants, the role of calcium as a second messenger has been studied in detail in both abiotic and biotic stress signaling. Several calcium sensor proteins, such as calmodulin (CaM), calcium-dependent protein kinases (CDPK) and calcineurin B-like proteins (CBL), were discovered to play a crucial role in abiotic stress signaling in plants. Unlike CDPK, CBL and CaM are calcium-binding proteins that have no protein kinase activity of their own; they interact with target protein kinases, termed CBL-interacting protein kinases (CIPK) and CaM kinases, respectively. Genome sequence analysis of Arabidopsis and rice has led to the identification of multigene families of these calcium signaling protein kinases. Expression of individual family members, and of the families as a whole, has been analyzed under several developmental and different abiotic stress conditions. In this review, we provide an overview of, and emphasize, the expression analysis of calcium signaling protein kinases under different abiotic stresses and developmental stages, and link their expression to possible functions of these kinases.
<urn:uuid:71cc52ea-b7e5-464a-b04a-80b18dd046a0>
2.59375
340
Academic Writing
Science & Tech.
6.115405
95,565,429
Nodulated Tree Legumes and Their Symbiotic Bradyrhizobium in African and South-American Tropical Rainforests
Leguminosae, the third largest family of angiosperms, are of major agricultural, ecological and economic importance. Several recent studies have clarified the taxonomic and phylogenetic relationships among the 19,400 species that constitute this family and traced the "road map of legume diversity" (Doyle and Lückow, 2003).
Keywords: Tropical Rainforest; Plant Taxon; Rhizobial Inoculation; Bradyrhizobium Strain; Specific Host Plant
<urn:uuid:953aabcc-f6c0-451c-9a0b-5196fb387472>
3.203125
138
Truncated
Science & Tech.
-2.765192
95,565,439
Published on September 18th, 2017 | by Guest Contributor0 What Hurricane Harvey Taught Us About Risk, Climate & Resilience September 18th, 2017 by Guest Contributor Originally published on The Conversation. By Andrew Dessler, Daniel Cohan, & Katharine Hayhoe Hurricane Harvey has taught us many lessons, but the most valuable may be the oldest lesson of all, one we humans have been learning – and forgetting – since the dawn of time: how much we all have to lose when climate and weather disasters strike. The risks we face from disasters depend on three factors: hazard, exposure and vulnerability. In the case of Harvey, the hazard was the hurricane with its associated winds, storm surge and, most of all, rain. Houston is one of North America’s biggest metro areas, making 6.6 million people exposed to this hazard. Finally, there’s our vulnerability to heavy rainfall events, in this case exacerbated by the city’s rapid expansion that has paved over former grasslands, overloaded critical infrastructure, challenged urban planning and limited evacuation routes. These three factors explain the immense costs associated with tragedies like Hurricane Harvey. As atmospheric scientists in Texas, we already know the hazards are real. Once the effects of Harvey have been added up, Texas and Louisiana will have been hit by more billion-plus dollar flooding events since 1980 than any other states. We also know that many of these hazards are intensifying. In a warmer world, heavy precipitation is on the rise, which increases the amount of rain associated with a given storm. Sea level is rising, worsening the risks of coastal flooding and storm surge. At the cutting edge of climate research, scientists are also exploring how human-induced change may affect storm intensity and the winds that steer the hurricanes. This is why catastrophes like Harvey – in which every extra inch of rain can lead to additional damage and harm – highlight exactly how and why climate change matters to each and every one of us. People know the climate is changing, but they don’t know how serious it is. Over 70 percent of Americans agree that the climate is changing, but less than half of us believe it will affect us personally. Why? Perhaps because the image we associate most often with a changing climate is not the devastation left by a flood in our own state but rather a polar bear perched on a chunk of melting ice or an African farmer bearing silent witness to the impacts of a disaster that’s taken place on the other side of the world. As tragedy unfolds, we must focus on the immediate response. But in the weeks and months that follow, we need to remember that, despite our air conditioners, our insurance and the politicized discourse that suggests that the science is somehow a matter of opinion rather than fact, we are incredibly vulnerable to natural disasters – disasters that are increasingly being amplified in a warming world. What sensible, pragmatic, bipartisan steps can we take to increase our resilience to risks that a disaster like Hurricane Harvey represents? This question must be asked, because the current administration has proposed cutting the budget of the National Weather Service and other agencies that study and forecast weather and climate disasters and has rescinded regulations designed to address rising sea levels when constructing infrastructure. First and foremost, we should reduce our exposure and build resilience to the hazards we already face today. 
We can’t continue building in places that we know will flood. We need to build and modernize infrastructure to make our water management systems more resilient to both floods and droughts. We must continue to invest in the weather forecasting systems that provided advance warning and in the public services that build community resilience and provide disaster response. Ultimately, though, even these practical steps may not be enough. In a changing climate, building capacity and resilience to cope with today’s risks leave us unprepared for future extremes. That’s why, in order to reduce the risk of disasters both here and abroad, we need to minimize the climate change that is turbocharging these events. And that means reducing our emissions of the heat-trapping greenhouse gases. Changing the risk equation Here again Texas can lead the way. We’re already number one in wind power production by state, thanks to targeted investments that boosted the power grid connecting cities with windy regions. And we’ve only begun to tap our abundant solar resources. The innovations that energy companies have pioneered to build offshore oil platforms can inform the development of, and investment in, offshore wind turbines and their knowledge of producing petrochemicals could be applied to more sustainably produced biofuels. There will always be those who claim that the costs of moving to cleaner energy sources and reducing carbon emissions are too high. But the U.S. has improved air quality in ways in which the benefits greatly exceed the costs and replaced ozone-depleting chemicals, all while the economy has grown. Today, wind and solar power prices are now competitive with fossil fuelsacross Texas. Across the country, these industries already employ far more people than coal mining. Electric cars may soon be as affordable as gasoline ones and be charged in ways that help balance the fluctuations in wind and solar power. Only someone profoundly pessimistic would bet against the ability of American ingenuity to repower our economy. Hurricane Harvey exemplifies the risks we all face – and a more dangerous future if we don’t take actions now. More people and vulnerable infrastructure exposed to more frequent and intense hazards equals even greater risk for us in the future. The time to rethink the equation is now. Andrew Dessler is Professor of Atmospheric Sciences at Texas A&M University, Daniel Cohan is Associate Professor of Environmental Engineering at Rice University, and Katharine Hayhoe is a Professor and Director of the Climate Science Center at Texas Tech University. Reprinted with permission.
<urn:uuid:dd18a78e-9ab3-4e2d-841a-04e9c4813d4d>
2.78125
1,196
Truncated
Science & Tech.
33.514896
95,565,449
Answer to Question #8047 in Electric Circuits for Mukul
How does the direction of electric current differ from the direction of electron flow in a circuit?
Although it is electrons that are the mobile charge carriers responsible for electric current in conductors such as wires, it has long been the convention to take the direction of electric current as if it were the positive charges that are moving. Some texts reverse this convention and take the electric current direction as the direction the electrons move, an obviously more physically realistic direction, but the vast majority of references use the conventional current direction, and that convention will be followed in most of this material. In common applications, such as determining the direction of force on a current-carrying wire, treating current as positive charge motion or negative charge motion gives identical results. Besides the advantage of agreeing in direction with most texts, the conventional current direction is the direction from high voltage to low voltage and from high energy to low energy, and thus has some appeal in its parallel to the flow of water from high pressure to low pressure.
<urn:uuid:616ebda9-25cb-4280-bcee-31ab552f4679>
3.46875
250
Q&A Forum
Science & Tech.
19.472895
95,565,455
A beautiful 67 km stretch of River Ganga in Bihar's Bhagalpur district is home to Vikramshila Gangetic Dolphin Sanctuary, the only reserve in India dedicated to the country's national aquatic animal: the blind, side-swimming, endangered Gangetic river dolphin or Platanista gangetica. The most ancient of all cetaceans, Gangetic dolphins are fascinating animals. Some 30 million years ago, they diverged from other toothed whales, making them one of the oldest species of aquatic mammals that use echolocation. In echolocation, the animal sends out sound waves that bounce off underwater obstacles and darting fish, helping it navigate and find food. These gentle freshwater creatures are also known for swimming at an angle, nodding their heads rhythmically and trailing a flipper along the riverbed to dislodge potential prey. Gangetic dolphins were once found in the thousands in the river Ganga, whose basin is one of the world's most densely populated areas, but decades of hunting, destructive fishing practices, hydro-projects, increasing boat traffic, and pollution have pushed this shy, long-snouted species onto the endangered list of the International Union for Conservation of Nature (IUCN). The threats to the Gangetic dolphin are identical to the ones faced by another freshwater cetacean in China not so long ago, the Yangtze river dolphin or Baiji, which was declared functionally extinct by IUCN in 2007. The good news is that several conservation organisations and communities in India are working to prevent the same fate for the most elusive inhabitant of the river Ganga. In an innovative approach, a team of Japanese and Indian sonar engineers has come together to build and deploy a custom-built sonar system that can track these reclusive creatures by the high-frequency clicks they use to navigate and hunt. This unique conservation initiative is led by Harumi Sugimatsu, an acoustical engineer from the University of Tokyo's Institute of Industrial Science, and Rajendar Bahl, a professor at the Center for Applied Research in Electronics at the Indian Institute of Technology, Delhi. The duo created the blueprint for this experimental project while they were studying and tracking the migration of humpback whales around the islands of Japan in the early 2000s. Sugimatsu and Bahl's sonar-monitoring project brought together a team of dolphin experts and sonar engineers from both Japan and India. This team had two main objectives: one, to reveal little-known details about the dolphins' activities, and two, to provide better and more reliable tallies of the dolphin population. Harumi Sugimatsu and Rajendar Bahl
Because of the plummeting numbers of these aquatic mammals and the vastness of their habitat, researchers and conservationists generally struggle to keep tabs on these dolphins. The team believes that by eavesdropping on the dolphins' underwater lives and gathering data about their daily routine, behaviour and geographical range (i.e. the locations where they hunt, play, and nurse young calves), it will be able to focus protective efforts in the right areas. Also, the present system of estimating the dolphin population depends heavily on visual surveys. For instance, in a government census, four men in a boat look in different directions, keeping watch for surfacing dolphins (they surface to breathe about once every 4 minutes). This method can lead to a single dolphin being counted multiple times as it surfaces in different spots. So, sonar monitoring can definitely result in a huge improvement in the reliability of the survey data. 
The team members first started work in 2006 when they heard about a solitary Gangetic dolphin that had somehow ended up in the Budhabalanga River in Odisha. Cut off from all known populations of his species, it was an unlucky situation for the male dolphin, but his predicament also provided an opportunity for researchers to study how an individual dolphin, isolated within the confines of a narrow river habitat, used his echolocation system in the wild. To do this, the Indo-Japanese team installed hydrophones in the muddy shallows of the Budhabalanga river. By triangulating the incoming signals, they were able to chart the solitary dolphin's movements and learn more about his bio-sonar abilities. For instance, they learnt that the Gangetic dolphin produces a narrow sonar beam that it sweeps back and forth, like a swinging flashlight used to illuminate a large area. The engineers used information such as this to build a better dolphin detector, which was then placed in a peaceful 12-km stretch of river between two dams about 150 km south of New Delhi. For the next six years, the team used the data they gathered here to refine their technique and sonar equipment for long-term monitoring. Finally, the solar-powered dolphin detector was placed in a polluted stretch of river Ganga near the industrial city of Kanpur in Uttar Pradesh. Fixed securely to a steel pole, the hydrophone's underwater monitor sends its raw data to a signal processor that analyses, stores, and uploads data to a server in real time. Much of the sophisticated signal processing is a result of Bahl's expertise in submarines; he used to track them for the Indian Navy. The team's first placement of the dolphin detector was a success, with several episodes of sonar activity in its vicinity lighting up their laptops' screens with blue streaks. Each of these streaks represented a Gangetic dolphin gliding through its underwater world – unseen by human eye, but now accounted for. In the months after the successful deployment of their first dolphin detector, the team worked with local conservationists to find locations suitable for long-term sonar monitoring of dolphins. Awareness campaigns informing local fishermen of 'dolphin hot spots' pinpointed by the deployed sonar systems were also put into action. In the future, Sugimatsu, Bahl and their hardworking team members hope that a series of stationary sonar monitors along the river Ganga will help record the migration pattern of these endangered animals, provide an accurate estimate of their numbers and help marine biologists learn more about this little-known species. For those who would like to do their bit to protect India's national aquatic animal, there are two other organisations working to conserve the Gangetic dolphin. 
Society for Conservation of Nature (SCoN) is working with Sugimatsu and Bahl’s team to conduct sonar-monitored surveys, assess threats to the dolphins in the river and help plan improvements in its habitat. To contact them, click here. Aaranyak’s Gangetic Dolphin Research and Conservation Division (GDRCD) works for long term conservation of the endangered Gangetic dolphin in the Brahmaputra river system. Other than conducting dolphin counts, this NGO has also launched a dedicated research-cum-education boat for dolphin conservation in India. To increase community engagement in the project, this survey boat travels along the Brahmaputra to stage “dolphin dramas” and other educational events in riverside villages. To contact them, click here.
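For readers curious about the triangulation step described above, here is a toy sketch of time-difference-of-arrival (TDOA) localization with a few hydrophones. It is purely illustrative: the hydrophone layout, the sound speed and the brute-force grid search are assumptions made for the example, not details of the actual system built by the IIT Delhi and University of Tokyo team.

```python
import itertools
import numpy as np

SOUND_SPEED = 1500.0  # m/s in water (approximate; an assumption for this sketch)

# Hypothetical hydrophone positions (x, y) in metres, and a "true" click position to recover.
hydrophones = np.array([[0.0, 0.0], [40.0, 0.0], [0.0, 30.0], [40.0, 30.0]])
true_source = np.array([12.0, 21.0])

# Simulated arrival-time differences, measured relative to the first hydrophone.
dists = np.linalg.norm(hydrophones - true_source, axis=1)
tdoas = (dists - dists[0]) / SOUND_SPEED

# Brute-force grid search: pick the point whose predicted TDOAs best match the measured ones.
xs = np.linspace(-10, 60, 141)
ys = np.linspace(-10, 50, 121)
best, best_err = None, np.inf
for x, y in itertools.product(xs, ys):
    d = np.linalg.norm(hydrophones - np.array([x, y]), axis=1)
    err = np.sum(((d - d[0]) / SOUND_SPEED - tdoas) ** 2)
    if err < best_err:
        best, best_err = (x, y), err

print("estimated click position:", best)  # lands close to (12, 21)
```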
<urn:uuid:41808179-37aa-4406-af8f-4da84ce85018>
3.53125
1,626
News Article
Science & Tech.
26.408699
95,565,474
The work by Sankaran Thayumanavan and colleagues at UMass Amherst, with others at the University of California-Riverside, is highlighted in the current issue of the Journal of the American Chemical Society (JACS), a premier chemistry journal, for the clever way it mimics nature’s way of harnessing solar energy. To achieve the breakthrough, Thayumanavan and co-workers took inspiration from plants and experimented with organic molecules to mimic the photosynthetic machinery of plants. Their new paper demonstrates how a photosynthesis-style photovoltaic device can be designed using large, highly branched, non-biological organic molecules called dendrimers, based on plant anatomy. Branches allow the dendrimer to absorb photons from a wide area and funnel this energy to the dendrimer’s core where it is connected to a polymer “wire.” At the core, charge is separated and the electrons travel down the polymer “wire” to an electrode where electricity is produced. As Thayumanavan explains, “Our method is inspired by an energy-harnessing process that plants use in nature, which evolved over millions of years to be efficient in terms of capturing a lot of energy and transporting it short distances without power loss. In the future, photovoltaic devices may no longer rely on slower, less efficient human-made semiconductors. Our work should lead to lighter, more efficient and sustainable photovoltaics.” Thayumanavan, known to colleagues as “Thai,” is director of the UMass Amherst’s Fueling the Future Center for Chemical Innovation. He adds, “The hope is that such a bio-inspired design could approach the conversion efficiency that plants achieve naturally.” The recent JACS article by him and colleagues titled, “Dendritic and linear macromolecular architectures for photovoltaics: A photoinduced charge transfer investigation,” was selected by the journal editors to appear in a special section, “Harnessing Energy for a Sustainable World.” They predict that the research will transform the way engineers design future photovoltaic devices. The editors add, “Innovation through scientific discovery is a necessary component of much societal advancement. To truly implement sustainable practices, energy must be harnessed more cleanly and stored for efficient distribution and use. This systems-level change sometimes referred to as the New Industrial Revolution, will require novel materials as well as savvy analysis and modeling to ensure success.” "Thai" Thayumanavan | Newswise Science News Scientists uncover the role of a protein in production & survival of myelin-forming cells 19.07.2018 | Advanced Science Research Center, GC/CUNY NYSCF researchers develop novel bioengineering technique for personalized bone grafts 18.07.2018 | New York Stem Cell Foundation A new manufacturing technique uses a process similar to newspaper printing to form smoother and more flexible metals for making ultrafast electronic devices. The low-cost process, developed by Purdue University researchers, combines tools already used in industry for manufacturing metals on a large scale, but uses... For the first time ever, scientists have determined the cosmic origin of highest-energy neutrinos. A research group led by IceCube scientist Elisa Resconi, spokesperson of the Collaborative Research Center SFB1258 at the Technical University of Munich (TUM), provides an important piece of evidence that the particles detected by the IceCube neutrino telescope at the South Pole originate from a galaxy four billion light-years away from Earth. 
To rule out other origins with certainty, the team led by neutrino physicist Elisa Resconi from the Technical University of Munich and multi-wavelength... For the first time a team of researchers have discovered two different phases of magnetic skyrmions in a single material. Physicists of the Technical Universities of Munich and Dresden and the University of Cologne can now better study and understand the properties of these magnetic structures, which are important for both basic research and applications. Whirlpools are an everyday experience in a bath tub: When the water is drained a circular vortex is formed. Typically, such whirls are rather stable. Similar... Physicists working with Roland Wester at the University of Innsbruck have investigated if and how chemical reactions can be influenced by targeted vibrational excitation of the reactants. They were able to demonstrate that excitation with a laser beam does not affect the efficiency of a chemical exchange reaction and that the excited molecular group acts only as a spectator in the reaction. A frequently used reaction in organic chemistry is nucleophilic substitution. It plays, for example, an important role in in the synthesis of new chemical... Optical spectroscopy allows investigating the energy structure and dynamic properties of complex quantum systems. Researchers from the University of Würzburg present two new approaches of coherent two-dimensional spectroscopy. "Put an excitation into the system and observe how it evolves." According to physicist Professor Tobias Brixner, this is the credo of optical spectroscopy.... 13.07.2018 | Event News 12.07.2018 | Event News 03.07.2018 | Event News 20.07.2018 | Power and Electrical Engineering 20.07.2018 | Information Technology 20.07.2018 | Materials Sciences
<urn:uuid:593200b2-48db-4f37-8981-ff58cb6fa760>
3.484375
1,136
Content Listing
Science & Tech.
29.515591
95,565,486
A typical Priority Queue requires the following operations to be efficient.
- Get Top Priority Element (Get minimum or maximum)
- Insert an element
- Remove top priority element
- Decrease Key
A Binary Heap supports the above operations with the following time complexities: O(1) for getting the top-priority element and O(Logn) for insertion, removal and decrease key. A Self-Balancing Binary Search Tree can also support the above operations with the same time complexities:
- Finding the minimum and maximum is not naturally O(1), but can easily be implemented in O(1) by keeping an extra pointer to the minimum or maximum and updating the pointer with insertion and deletion if required. With deletion we can update it by finding the inorder predecessor or successor.
- Inserting an element is naturally O(Logn).
- Removing the maximum or minimum is also O(Logn).
- Decrease key can be done in O(Logn) by doing a deletion followed by an insertion. See this for details.
So why is a Binary Heap preferred for a Priority Queue?
- Since a Binary Heap is implemented using arrays, there is always better locality of reference and operations are more cache friendly.
- Although the operations have the same time complexity, the constants in a Binary Search Tree are higher.
- We can build a Binary Heap in O(n) time. Self-Balancing BSTs require O(nLogn) time to construct.
- A Binary Heap doesn't require extra space for pointers.
- A Binary Heap is easier to implement.
- There are variations of the Binary Heap, like the Fibonacci Heap, that can support insert and decrease-key in Θ(1) time.
Is a Binary Heap always better? Although a Binary Heap is preferred for Priority Queues, BSTs have their own advantages, and their list of advantages is in fact longer:
- Searching for an element in a self-balancing BST is O(Logn), which is O(n) in a Binary Heap.
- We can print all elements of a BST in sorted order in O(n) time, but a Binary Heap requires O(nLogn) time.
- Floor and ceil can be found in O(Logn) time.
- The k'th largest/smallest element can be found in O(Logn) time by augmenting the tree with an additional field.
This article is contributed by Vivek Gupta. Please write comments if you find anything incorrect, or if you want to share more information about the topic discussed above.
- Binomial Heap
- Binary Heap
- k largest(or smallest) elements in an array | added Min Heap method
- Fibonacci Heap | Set 1 (Introduction)
- Time Complexity of building a heap
- LFU (Least Frequently Used) Cache Implementation
- Convert BST to Max Heap
- Program for Preemptive Priority CPU Scheduling
- Minimum increment/decrement to make array non-Increasing
- Height of a complete binary tree (or Heap) with N nodes
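A minimal illustration of the array-backed binary heap behaviour described above, using Python's heapq module (a binary min-heap). heapq does not expose a decrease-key operation directly, so the sketch uses the common lazy-deletion workaround (push the updated entry and skip stale ones on pop); this is one possible approach rather than the article's delete-then-insert method.

```python
import heapq

# A binary min-heap stored in a plain Python list: good locality of reference, no pointers.
pq = []
for priority, task in [(5, "write report"), (1, "fix outage"), (3, "review PR")]:
    heapq.heappush(pq, (priority, task))          # insert: O(log n)

print(pq[0])                                      # peek at the minimum: O(1) -> (1, 'fix outage')
print(heapq.heappop(pq))                          # remove the minimum: O(log n)

# "Decrease key" via lazy deletion: push the updated entry and ignore stale ones when popping.
heapq.heappush(pq, (2, "review PR"))              # review PR now has a higher priority
seen = set()
while pq:
    priority, task = heapq.heappop(pq)
    if task in seen:
        continue                                  # skip the stale (3, 'review PR') entry
    seen.add(task)
    print(priority, task)
```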
<urn:uuid:0785d194-0b31-4c30-ad18-ec0b63bced4b>
3
598
Knowledge Article
Software Dev.
36.88847
95,565,500
DNA evidence often implicates violent criminals. Now it can do the same for poachers harvesting wood from protected forests. The ocean contributes $1.5 trillion to the global economy every year. But there's another reason to protect marine ecosystems—they’re crucial for curbing climate change. Intact forest landscapes (IFLs), or vast stretches of unbroken forest wilderness, are some of the most important ecosystems in the world. The fact that the world lost an area of IFLs twice the size of California over the past decade spells trouble for nature, the climate and human well-being. Indonesia will continue to ban new licenses to clear key forest areas. The policy brings benefits for the country's forests, climate and the economy. Forest loss threatens the survival of endangered and endemic species like Madame Berthe's mouse lemur, the sky blue poison dart frog and the whooping crane. A new WRI paper finds bioenergy can play a modest role using wastes and other niche fuelstocks, but recommends against dedicating land to produce bioenergy. The lesson: do not grow food or grass crops for ethanol or diesel or cut down trees for electricity. FORMA, DETER, and PRODES The advent of near-real-time forest monitoring can dramatically strengthen efforts by governments, businesses, and communities to conserve and sustainably manage the world’s forests. This issue brief introduces a system called FORest Monitoring for Action (FORMA), which provides near-... This issue brief reports on the mechanics of and lessons learned from a conservation incentive program focused on the gopher tortoise. Its aim is to inform the successful design and implementation of other candidate programs emerging throughout the southern forests and greater United States. A joint collaboration between WRI and the World Business Council on Sustainable Development (WBCSD) This WRI/WBCSD publication is an information and decision-making tool to help customers develop their own sustainable procurement policies for wood and paper-based products. It also... We are on a collision course between ecosystems and food. How we resolve this issue over the coming years will be a key to preserving biodiversity and human well-being.
<urn:uuid:c49cfd9f-8b15-40bf-83cc-a64d1d0c0247>
2.859375
450
Content Listing
Science & Tech.
40.680546
95,565,523
Phosphoryl amino acids: Common origin for nucleic acids and protein
A series of compounds (DAP-AA), each composed of an amino acid (AA) and a dialkyl phosphoryl group (DAP), are basic elements of life chemistry. Self-catalysis of DAP-AA gives self-assembled oligopeptides, even in aqueous medium at 38°C. Oligonucleotides could also be assembled through the phosphorylation of nucleosides by DAP-AA. DAP-AA acts as the energy source as well as the phosphoryl donor for the synthesis of nucleic acids and protein. A general expression for the self-assembly system is proposed.
Keywords: basic element of life chemistry; model expression; phosphoryl amino acids; origin of life
- 1. Fox, S. W. and Dose, K. (1977) Molecular Evolution and the Origin of Life, Marcel Dekker, New York.
- 2. Miller, S. L. and Orgel, L. E. (1974) The Origins of Life on the Earth, Prentice Hall Inc., Englewood Cliffs, New Jersey.
- 3. Ji, G. J., Xue, C. B., Zeng, J. N., Li, L. P., Chai, W. G. and Zhao, Y. F. (1988) Synthesis of N-O, O-diisopropyl phosphoryl amino acids and dipeptides, Synthesis 6, 444–8.
- 4. Xue, C. B., Yin, Y. W. and Zhao, Y. F. (1988) Studies on phosphoserine and phosphothreonine derivatives, Tetrahedron Letters, 29, 1145–8.
- 5. Li, Y. M., Zhang, D. Q., Zhang, H. W., Ji, G. J. and Zhao, Y. F. (1992) B-Carboxyl catalytic effect of N-phosphoryl aspartic acid, Bioorganic Chemistry, 20, 285–295.
- 6. Li, Y. M., Yin, Y. W. and Zhao, Y. F. (1992) Phosphoryl group participation leads to peptide formation from N-phosphoryl amino acids, Int. J. Peptide Protein Res. 39, 375–81.
- 7. Li, Y. C., Tan, B. and Zhao, Y. F. (1993) Phosphoryl transfer reaction of phosphohistidine, Heteroatom Chemistry, 4, 415–9.
- 8. Zhao, Y. F., Li, Y. M., Yin, Y. W. and Li, Y. C. (1993) The regulation effect of phosphoryl group on amino acids side chain, Science in China, 36, 1453–8.
<urn:uuid:9243fb9f-e688-43c9-a14d-b55345a3f1dd>
2.625
646
Academic Writing
Science & Tech.
82.164872
95,565,545
- 21st International Conference on Condensed Matter Nuclear Science Thursday's lectures began with Akito Takahashi chairing the Diverse Experiments session and introducing Jacques Ruer with Considerations on Chemical Reactions and LENR. Ruer has investigated explosions in LENR experiments in an attempt to discern whether the origins are chemical or nuclear. Shock waves can be created by blasts of hot gas, and this is what Ruer reproduced. He believes this is what might be behind the explosion that JP Biberian experienced in 2014. When an H-loaded Pd cathode rod is removed from the cell, it retains its heat. Ruer simulated the system and found that small rods get hotter, but not for long. High loading increases the temperature, too. De-sorption is slow if the surface is non-reactive. The maximum temperature can exceed 1000 degrees. The peak is reached after several minutes. Large Pd pieces glow longer, but give lower peaks. Ruer referenced the March 1985 1-cm-cube Pd cathode explosive meltdown reported by Fleischmann and Pons. The bench was destroyed and a hole of about 40 cm was made. He speculates that it may be a SWACER (shock wave __ ). The glowing cube of Pd may have introduced sufficient stresses in the flow to break pieces of concrete. Ruer ended by looking at the names given to this science over the years: Cold Fusion, LANR, LENR, AHE, MHE… The last one is important, he says, "because it does not contain the word nuclear". "Nuclear reactions are not the root cause of what we observe", says Ruer, "as that is the result of effects of quantum physics." He wants to say "quantum effects energy" as it avoids the word nuclear. It breaks the wall for the "4 miracles problem". Ruer believes it describes accurately what is the focus of our theoretical studies. He quotes Niels Bohr — "Anyone not shocked by quantum mechanics has not yet understood!" — and then asks what would be the best acronym for this now-called-LENR reaction. Ruer proposes QUEEN: QUantum-Effects-ENergy. Xing Zhong Li talks about Temperature Dependence of Excess Heat in Gas-Loading Experiments by Z. M. Dong et al. Li wanted to show additional support for Edmund Storms' linear relationship between excess heat and temperature, so they decided to use F&P's "heat after death" effect experiment. One bottle was filled with H and one with D, with a palladium wire in both. Li expected to find each cooling at a different rate, due to the difference in H or D. Two sets of data supported not only the temperature dependence of excess heat in the gas-loading experiment, but also revealed the diffusion nature of this straight line. He repeats again: the FPHE is a real fact. Katsuaki Tanabe of Kyoto University presents Direct Joule Heating of D-Loaded Bulk Pd Plates in Vacuum by Y. Kitagawa et al. A bias voltage is applied across multi-layered, deuterium-containing Pd plates to provide a current injection through the Pd and stimulate the nuclear reaction by Joule heating. Temperatures were significantly higher with D loading than without. The group also observed excess heat, of around 0.5 W, lasting >10 hours. The temporal behavior of the heat generation cannot be explained by known chemical processes, which indicates the existence of some nuclear process or unknown phenomenon. Neutron signal peaks, reproducible at about 70%, corresponded to the elevated temperatures and were found to vary in proportion to the distance to the Pd sample. Excess heat bursts coincided temporally with the neutron peaks. The plate also bent by about 1 mm.
After excluding all trivial causes, they found the change in the lattice constant was as large as 3%, and it is hypothesized that the Pd bend is possibly related to the alpha–beta phase transition. Tom Claytor presents Stringham Sono-Cell Replication by Roger Stringham et al. Claytor had tested some of Stringham's samples early on for tritium and other nuclear signatures, and found nothing. Then Stringham sent his sample to Brian Oliver, who found some interesting helium effects. Years later, Claytor saw Stringham with a very small (1″ x 1″) demonstration cell, and Claytor recognized that he would be able to easily test the foil for evidence of nuclear effects. He wanted to verify the heat results of the cavitation method and had help from IH, Edmund Storms, and Coolescence. Conventional cavitation is collapsing bubbles in liquid, but Stringham found multi-bubble cavitation on metals in D2O: bubbles collapse near a metal surface and are energetic enough to work the surface. The bubbles are only a few microns in diameter. At first, they saw nothing. Analyzing the system, they saw there are many factors affecting the cavitation effect, including the chemical composition of the foil. After managing more of the parameters, a significant amount of heat is dissipated in the transducer. Still, various samples gave an average of ~3 W excess. The effect is probably a near-surface effect and is material dependent; it's very dynamic and needs to be tamed. Excess heat can probably increase, but how much? Roger Stringham's claims are confirmed. George Egely with Changes of Isotope Ratios in Transmutations. He began by speculating whether the heat in Saturn could be LENR generated. He has a model of "dusty plasma fusion". Egely has experimentally transmuted lighter-mass elements like carbon, oxygen and nitrogen via one or two fusion steps into medium-mass elements like iron, zinc and copper, with no major energy release and neutron-rich daughter elements. Egely says that transmutation is far more frequent in nature than expected. He showed video of his glass reactor, all glowing and moving, and afterwards finds transmutations in the chamber. He said, "This is the way to do transmutations in the simple way." Bob Greenyer commented after that he repeated Egely's experiment successfully. After a short break, Pamela Mosier-Boss introduced Malcolm Fowler to talk about Development of a Sensitive Detection System for the Measurement of Trace Amounts of He4 in Deuterium or Hydrogen. Malcolm Fowler and Tom Claytor developed a low-cost and compact system that allows measurement of He4 down to sub-100 ppb levels in D2 using a column of activated carbon at LN2 temperature that effectively absorbs everything but helium. A typical sample size required to achieve low-ppb sensitivity to He4 is 50 cc at 50 Torr. Currently, they can measure the amount of helium in air at ambient levels (5.26 ppm) with an uncertainty of about 10% using a sample of 50 cc at 50 Torr. They detect a small amount of helium in almost every gas sample they run, and it's mostly atmospheric helium. Design changes, including the use of 304 stainless steel, are planned to inhibit atmospheric helium from entering the system. Bob Higgins presents Modeling & Simulation of a Gas Discharge LENR Prototype by Bob Higgins and Dennis Letts.
Presentation .pdf: http://lenr-canr.org/acrobat/HigginsRmodelingsi.pdf Higgins began by reviewing the gas discharge system, which is a foot-long coaxial stainless steel gas discharge tube – though the whole thing is not active. It is inserted in a copper block, and heat exits via a Seebeck TEG. For higher temperatures, four cartridge heaters apply heat directly to the copper block. He noted that there is a thermal issue that makes it difficult to determine when the excess power started, and that there are three different modeling choices, of which he chose equivalent-circuit modeling using SPICE. Measured data (not what you want it to be) are placed in a file to drive the model. An iterative process of parameter extraction shows that two heat propagation modes are present – non-Fourier heat transfer! This was confirmed by a stair-step method. Modeling tips: model only what you don't measure, and sample 10x faster than the shortest time constant; the thermal capacitors and metallic conduction R's are linear. This all turned up some error sources, like the heater lead wire dissipation – heat that was not deposited in the calorimeter – and they also found new sources of other heat in the system. Jirota Kasagi et al. presented Search for Gamma-ray Radiation in NiCuZr Nano-materials and H2 gas System Generating Large Excess Heat. Hydrogen gas absorption (or discharge) by Ni-containing complex nano-metals produces large thermal energy far beyond that of chemical reaction. However, the origin of this excess heat generation has not been known. Yet Sergio Focardi et al. reported some radiations from the Ni-H system, including a discrete gamma emission, as evidence of nuclear reaction. Gamma rays were detected in parallel with precise measurement of heat generation from the system using H2 gas and a CNZ6s sample (the sample with CuNiZr composition ratio the same as the CNZ5s reported in ): averaged excess power was about 2.1 W and total thermal energy 5.3 MJ/mol-H. A Ge detector (ORTEC) was placed outside the wall of the chamber with its front face at 5 cm from the wall, and gammas up to 2.7 MeV were measured. For 1 MeV of energy release per reaction, the reaction rate should be 6.24×10^12 reactions/sec for 1 W of output (1 W divided by 1.602×10^-13 J per MeV). No discrete gammas are emitted in the reaction which produces anomalous excess heat. In conclusion, Kasagi has determined that Ni+p reactions are not a source of the heat. What about p+p? Too small to make a significant contribution to the heat source. In conclusion, there was no gamma-ray transition down to 50 keV during the heat generation of 1.3 MJ. Only upper limits on gammas were obtained. Measurements for E gamma < 50 keV are very important for investigating the possibility of multi-body reactions as well as other possibilities. Fabrice David talks about Alternatives to Calorimetry by Fabrice David and John Giles of Deuo Dynamics. Presentation paper: http://lenr-canr.org/acrobat/DavidFalternativ.pdf David assumes no new particle, a nuclear force, and a nuclear interaction. Calorimetry is difficult, but even for Marie Curie, calorimetry was important. New alloys for hydrides have been discovered by hydrogen battery research. The fusion diode effect: deuterated alloys in contact with a semiconductor cause the appearance of an easy-to-measure electrical voltage. If this voltage is actually due to the direct conversion of LENR to electricity, we have a simple method to select the most promising alloys. Palladium is placed in close contact with a semiconductor; this forms a semiconducting diode. A stacked silicon-alloy wafer design is also in the works.
His first experiment was negative, so they changed the design to include a strong tube for high pressure. They had trouble characterizing a possible deuterium leak, and they plan to use a glass tube. We were unable to attend the Experiment and Theory session chaired by P. L. Hagelstein or the Theory session chaired by George H. Miley. We do have R. Blake with Understanding LENR Using QST. Thursday evening was the Awards Banquet Ceremony. Coming Soon! ICCF-21 Break time is where the action is:
<urn:uuid:d75265c2-7a68-4057-b65c-dbf254bc5abe>
2.609375
2,533
Content Listing
Science & Tech.
49.116403
95,565,552
University of Cincinnati student Shujie Wang has discovered that a good way to monitor the environmental health of Antarctica is to go with the flow – the ice flow, that is. It's an important parameter to track because as Antarctica's health goes, so goes the world's. "The ice sheet in Antarctica is the largest fresh water reservoir on Earth, and if it were totally melted, the sea level would rise by more than 60 meters. So it is quite important to measure the ice mass loss there," says Wang, a doctoral student in geography in UC's McMicken College of Arts & Sciences. Wang will present her research, "Analysis of Ice Flow Velocity Variations on the Antarctic Peninsula during 1986-2012 Based on Multi-Sensor Remote Sensing Image Time Series," at the Association of American Geographers annual meeting to be held April 9-13 in Los Angeles. The interdisciplinary forum is attended by more than 7,000 scientists from around the world and features an array of geography-related presentations, workshops and field trips. Antarctica is 5.5 million square miles of windswept, mountainous ice desert. The fifth largest continent is covered in a sheet of ice that is on average more than a mile thick. Across this province of penguins, outlet glaciers and ice streams funnel chunks of ice into the ocean where they eventually melt in warmer waters. If the ice begins to melt at an abnormally high rate and the sea level rises, a chain reaction of negative ecological effects could take place worldwide. For her research, Wang uses remote-sensing images recorded by satellites to gather data on Antarctica's ice motion. She's particularly interested in determining changes in the ice flow velocity, because the faster ice moves, the faster it's lost. By calculating that velocity at different time intervals, Wang hopes to further understand the process of ice motion and be able to predict changes to Antarctica's landscape. She's planning models that simulate the ice sheet dynamics and estimate any influence on the sea level. "I hope to provide valuable research to the academia of global change studies," Wang says. Additional contributors to Wang's research paper were Hongxing Liu (UC), Lei Wang (Louisiana State University) and Xia Li (Sun Yat-Sen University, China). Funding for the research was provided by University Graduate Scholarship allocations from UC's Graduate School and the Department of Geography. In 2012, UC was named among the nation's top "green" schools by The Princeton Review due to its strong commitment to sustainability in academic offerings, campus infrastructure, activities and career preparation. It was the third year in a row that UC earned a spot on the prestigious list. Tom Robinette | EurekAlert!
<urn:uuid:6626cb1c-74d0-4a98-b578-4d32398fabc1>
3.1875
1,159
Content Listing
Science & Tech.
39.955444
95,565,579
One of the most basic laws of quantum mechanics is that a system can be in more than one state – it can exist in multiple realities – at once. This phenomenon, known as the superposition principle, exists only so long as the system is not observed or measured in any way. As soon as such a system is measured, its superposition collapses into a single state. Thus, we, who are constantly observing and measuring, experience the world around us as existing in a single reality. All spin directions (represented by the spheres) collapse on one or the opposite direction depending on the measured photon polarization. The principle of superposition was first demonstrated in 1922 by Otto Stern and Walther Gerlach, who observed the phenomenon in the spin of silver atoms. Spin is the intrinsic magnet in quantum particles, and when a particle's spin is in superposition, it points in more than one direction at the same time. (Instead of the north and south of magnets, these are referred to as up and down.) Dr. Roee Ozeri and research students Yinnon Glickman, Shlomi Kotler and Nitzan Akerman, of the Physics of Complex Systems Department, studied how the spin of a single atom collapsed from superposition to one state when it was observed with light. They "measured" the atom by shining laser light on it. Just as our eyes observe the world by absorbing the photons – light particles – scattered in our direction by objects, the researchers observed the process of spin collapse in the atoms by measuring the scattered photons. In results that appeared recently in Science, they showed that the direction that a photon takes as it leaves the atom is the direction that the spin adopts when superposition collapses. Dr. Roee Ozeri's research is supported by the Crown Photonics Center; David Dickstein, France; Martin Kushner Schnur, Mexico; the Wolfson Family Charitable Trust; and the Yeda-Sela Center for Basic Research. Yivsam Azgad | EurekAlert!
<urn:uuid:c599c50b-a72c-4e59-b1b3-11bc98b64ec1>
3.78125
993
Content Listing
Science & Tech.
41.02156
95,565,604
A web-based interface to your SQL machine can be useful. Fortunately, plenty of options are available to you. You can connect to your SQL database in a number of ways. You can use the command-line interface as shown in [Hack #1], and you can execute SQL from a programming language as shown in [Hack #2]. Another option is to work from a web browser. Either each vendor has its own mechanism for this, or a third-party product is available. phpMyAdmin (http://www.phpmyadmin.net), shown in Figure 12-1, is a tool that allows MySQL administration over the Web. It is popular with web hosting companies because it allows their clients to control MySQL accounts without requiring shell access. Figure 12-1. The phpMyAdmin user interface The phpMyAdmin tool set includes step-by-step forms for most of the commonly used facilities of SQL. Creating a table is relatively intuitive; setting permissions is a breeze. After you click the Go button, it shows you the SQL that has been generated so that you can easily find the exact syntax for those obscure SQL commands that you rarely use. But you also have the opportunity to execute arbitrary SQL if you find the interface inadequate. 12.5.2. SQL Server For SQL Server, you can use the Web Data Administrator utility available from Microsoft, and shown in Figure 12-2. Figure 12-2. Microsoft Web Data Administrator You can also use WebSQL Console, available from http://www.websqlconsole.com, and shown in Figure 12-3. Figure 12-3. WebSQL console Oracle provides a web interface: iSQL*Plus, which is shown in Figure 12-4. Figure 12-4. iSQL*Plus By default, the iSQL*Plus interface is available at http://localhost:5500/em/console/logon/logon for administration and at http://localhost:5560/isqlplus/dynamic for general SQL access. The program phpPgAdmin, available at http://www.phppgadmin.org, allows you to run queries from a web page, and it provides access to other administrative functions. It is shown in Figure 12-5. Figure 12-5. phpPgAdmin 12.5.5. Hacking the Hack The user can specify the name of a country, his SQL username, and his password. The client-side script formulates an SQL query to get the population for that country. It sends the query to the general-purpose SQL web interface and then displays the result without refreshing the whole page. Figure 12-6. AJAX demonstration The code for this example is a static HTML page that does not need to be interpreted at the web server; all the processing is done on the client. A sketch of such a page ("SQL Hacks AJAX Demo") is shown below. Here's an explanation of how this works: phpMyAdmin encodes the SQL and other parameters as CGI get variables, so the URL contains each value required. It is also possible to perform POST requests from an AJAX application. You use responseText to get the data. A hidden element holds the page returned by the query; the function show(i) extracts the TD element i from the table with the ID table_results. The entire result set is stored in this array; if the SQL statement resulted in more than one value, the other results would also be available. 12.5.6. Using Other Web Interfaces You can use a similar technique for the Oracle, PostgreSQL, and SQL Server web interfaces; however, a little investigation is required. 12.5.6.1. CGI parameters You need to know the name of the CGI parameters. In phpMyAdmin the SQL statement is in sql_query, and for Oracle's iSQL the SQL is in the CGI parameter script. 12.5.6.2. Processing results This technique relies on the user having access to an SQL account on the database server.
You might safely use an anonymous account as long as you take the precautions outlined in "Allow an Anonymous Account" [Hack #97].
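The listing below is a minimal sketch of the kind of static AJAX page described in "Hacking the Hack". Only the sql_query CGI variable, the table_results table ID and the use of responseText come from the text above; the phpMyAdmin URL and sql.php path, the element IDs on the form, and the omission of the username/password and database-selection parameters are simplifications made for illustration.

```html
<!-- Illustrative only: the sql.php path and localhost URL are placeholders; check
     the CGI variables your own phpMyAdmin installation actually expects. -->
<html>
<head>
<title>SQL Hacks AJAX Demo</title>
<script type="text/javascript">
function makeRequest() {
  // Works in modern browsers; the ActiveX fallback covers very old Internet Explorer.
  if (window.XMLHttpRequest) { return new XMLHttpRequest(); }
  return new ActiveXObject("Microsoft.XMLHTTP");
}

function getPopulation() {
  var country = document.getElementById('country').value;
  // Beware: concatenating user input into SQL like this invites SQL injection.
  var sql = "SELECT population FROM world WHERE name='" + country + "'";
  // The SQL statement travels in the sql_query GET variable, as described above.
  var url = 'http://localhost/phpmyadmin/sql.php?sql_query=' + encodeURIComponent(sql);

  var req = makeRequest();
  req.onreadystatechange = function () {
    if (req.readyState == 4 && req.status == 200) {
      // Park the returned page in a hidden element, then read the first result cell.
      document.getElementById('hidden_output').innerHTML = req.responseText;
      document.getElementById('answer').innerHTML = show(0);
    }
  };
  req.open('GET', url, true);
  req.send(null);
}

// Extract TD element i from the table with the ID table_results in the returned page.
function show(i) {
  var table = document.getElementById('table_results');
  if (!table) { return 'no results'; }
  return table.getElementsByTagName('td')[i].innerHTML;
}
</script>
</head>
<body>
  Country: <input type="text" id="country"/>
  <input type="button" value="Go" onclick="getPopulation()"/>
  <div id="answer"></div>
  <div id="hidden_output" style="display:none"></div>
</body>
</html>
```

In the full example described above, the page also collects the SQL username and password and passes them along with the query; those inputs are omitted here to keep the sketch short.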
<urn:uuid:ab2bca48-4fe5-4245-8b8b-b80bf5a17e15>
2.65625
879
Tutorial
Software Dev.
64.398553
95,565,615
Almost 150 different genomes have been sequenced to date, including the human genome. But sequencing needs are growing faster than ever: In March 2003, the Bush administration announced it will spend $1 billion over five years to increase forensic analysis of DNA, including a backlog of up to 300,000 samples. And the success of the growing field of genomic medicine, which promises to deliver better therapies and diagnostics, depends on faster sequencing technology. This fall, researchers at Whitehead Institute will test new technology that could aid these and other endeavors. The BioMEMS 768 Sequencer can sequence the entire human genome in only one year, processing up to 7 million DNA letters a day, about seven times faster than its nearest rival. Scientists began working on the project in 1999 with a $7 million National Human Genome Research Institute grant. The technology eventually will help scientists quickly determine the exact genetic sequence of the DNA of many different organisms, and could lead to faster forensic analysis of DNA gathered in criminal cases. The heart of the new BioMEMs machine is a large glass chip etched with tiny microchannels called "lanes." It tests 384 lanes of DNA at a time, four times more than existing capillary sequencers. Each lane can accommodate longer strands of DNA: about 850 bases (the nucleic acids found in DNA, abbreviated by the letters A, C, T or G), compared to the current 550 bases per lane. David Appell | EurekAlert!
<urn:uuid:9448ea79-6457-4341-9977-d11190a4261a>
3.59375
946
Content Listing
Science & Tech.
41.111048
95,565,621
To put things into perspective, here are the moons of our solar system (including our moon) and their sizes compared to Earth. Continue reading The Moons of the Solar System in Perspective According to a new study, microbes like those found in Earth's deep ocean could potentially thrive in the underground ocean of Saturn's icy moon Enceladus. Both molecular hydrogen (H2) and methane (CH4) already have been detected in the plume. Researchers have shown that Methanothermococcus okinawensis, a methanogenic archaeon first isolated from a deep-sea hydrothermal vent in the western Pacific Ocean, can produce methane under conditions known to exist on Enceladus. Continue reading We May Have Already Detected Signs of Alien Microbes on Saturn's Moon Enceladus NASA (the National Aeronautics and Space Administration) has published a video that contains highlights of important events and the space agency's achievements over the year 2017. In the video description, NASA has stated that "2017: A year of groundbreaking discoveries and record-setting exploration at NASA. The Moon became a focal point for the agency, we brought you unique coverage of the first coast-to-coast total solar eclipse in the U.S. in 99 years, we announced the most Earth-size planets ever found in the habitable zone of a star outside our solar system, and more!" Continue reading Watch: NASA's 2017 Highlights In September 2017, Elon Musk, the founder and CEO of SpaceX, revealed a new plan to colonize the Moon and Mars with giant reusable spaceships. They are ambitiously planning to send the first humans to Mars as early as 2024 to build the foundations for the first Martian city. But is Mars really the best place for humans to settle? Some scientists, like Amanda Hendrix, the American planetary scientist, think it's not, and that we should be looking somewhere else and colonize Titan, Saturn's largest moon, instead. Continue reading Living on Mars is a terrible idea, scientists say. We should colonize Titan, instead Earth is actually a fragile and isolated rock, a "blue marble" in a vast, cold and hostile space. But only after seeing our planet from space did we truly understand that. Seeing the Earth for the first time from a distance was a powerful experience which has changed the way we see our planet. Here are the top 10 most iconic photos of Earth from space. Continue reading Top 10 Most Iconic Photos of Earth from Space Launched on October 15, 1997, NASA's Cassini spacecraft went into orbit around Saturn on July 1, 2004. Since then, it has taken thousands of photos of Saturn, the second-largest planet in the Solar System, its prominent rings, and moons. And on September 15, 2017, Cassini plunged into Saturn's atmosphere and disintegrated. Here are the 20 most beautiful photos that the spacecraft has sent back to Earth during its 13-year voyage around the gas giant. Continue reading 20 Best Photos of Cassini's voyage around Saturn
<urn:uuid:b4f79b2d-513f-4f30-92ed-9f830d691ac2>
3.078125
648
Content Listing
Science & Tech.
44.664181
95,565,638
Surface melt ponds form intermittently on several Antarctic ice shelves. Although implicated in ice-shelf break up, the consequences of such ponding for ice formation and ice-shelf structure have not been evaluated. Here we report the discovery of a massive subsurface ice layer, at least 16 km across, several kilometres long and tens of metres deep, located in an area of intense melting and intermittent ponding on Larsen C Ice Shelf, Antarctica. We combine borehole optical televiewer logging and radar measurements with remote sensing and firn modelling to investigate the layer, found to be ∼10 °C warmer and ∼170 kg m−3 denser than anticipated in the absence of ponding and hitherto used in models of ice-shelf fracture and flow. Surface ponding and ice layers such as the one we report are likely to form on a wider range of Antarctic ice shelves in response to climatic warming in forthcoming decades. The ∼49,000 km2 Larsen C Ice Shelf (LCIS) is considered susceptible to future collapse because of its exposure to intense surface melting1 across the northern sector of the Antarctic Peninsula (Fig. 1). Satellite and airborne radar data on the LCIS indicate low firn-air content2, promoting the formation of melt ponds and the potential for hydrofracturing3,4. Analysis of remotely sensed images of Cabinet Inlet, located in the northwest sector of the LCIS, reveals the presence during some summer months of surface melt ponds that generally form in flow-parallel troughs that are some tens of kilometres long and hundreds of metres wide (Fig. 1). These ponds form in Cabinet Inlet as a result of melting by föhn winds that blow from the Graham Land mountains and eastwards through the shelf’s northernmost-fringing inlets5,6,7. Although appearing to generate substantial surface melt, these winds do not persist for more than a few days even through the summer, and ponds appear intermittently on satellite images (Fig. 1). The additional stress and hydrofracturing potential of such ponded surface water has been proposed as a means of shelf destabilization8,9,10 and has been implicated in the collapse of Larsen B Ice Shelf in 200211,12. However, the implications of such ponding for the formation of new ice and its influence on ice-shelf structure have not been reported. In this study, we report the presence of a massive ice layer, at least 16 km across, several kilometres long and tens of metres deep, present beneath an area of intermittent pond formation on LCIS, Antarctica. We combine field data with firn-modelling and remotely sensed data to investigate the layer’s properties and formation. The layer is found to be composed of two units: an upper, solid ice unit formed largely from the continual refreezing of ponded water; and a lower, infiltration ice unit formed largely from the refreezing of meltwater that has percolated into very dense firn. The layer is found to be ∼10 °C warmer and ∼170 kg m−3 denser than that which would have been present in the absence of the influence of intense surface melting and pond formation. The implications of the layer’s presence for ice-shelf thickness estimates, flow and stability are explored. Borehole drilling and logging In early austral summer 2014/2015, we drilled a ∼100-m long borehole into the flank of a Cabinet Inlet trough indicated by satellite imagery to have repeatedly hosted melt ponds over the past 15 years, but not since 2008/2009. 
Although the shelf was snow-covered at the time of drilling, inspection of the wall of a 2-m deep pit revealed the presence of both numerous ice layers within the snowpack and an unusually thick ice layer at a depth of 2.0 m that prevented continued excavation. The borehole was logged to a depth of ∼97 m by optical televiewer (OPTV)13, providing a geometrically accurate image of the complete borehole wall at a vertical and lateral resolution of ∼1 mm. The resulting OPTV log (Fig. 2a) contrasts starkly with those retrieved from other accumulating ice-shelf and ice-sheet locations (for example, Fig. 2b), including the interior of the Greenland Ice Sheet14 and an East Antarctic ice shelf and ice rise15,16. In each of these other cases, OPTV log luminosity decreases gradually with depth as surface snow metamorphoses through firn to dense ice over depths of several tens of metres. In contrast, our Cabinet Inlet OPTV log (Fig. 2a) shows a sharp contact between high-luminosity surface snow or firn and low-luminosity ice at a depth of only 2.9 m below the ice-shelf surface. This ice extends to the 97-m deep base of the log. Converting the luminosity of this OPTV log to density reveals a mean density for this entire ice layer of 870 kg m−3. Figure 2a also reveals a transition in ice type at a depth of ∼45 m, with the overlying layer (named Unit 1) hosting more frequent horizontal layers and being more dense (mean=888 kg m−3) than the underlying ice layer (Unit 2; mean=854 kg m−3), which is principally composed of bubbly host ice containing coarse, irregularly dipping bubble-free ice layers. The proximity of this generally massive ice layer to the shelf surface, and the sharpness of its contact with the overlying snowpack, preclude formation by the usual process of compaction–metamorphism. Instead, we interpret this Unit 1 ice as having formed as a consequence of refreezing, following periods of intense surface melting and intermittent pond formation. The characteristics revealed by the OPTV image are consistent with this unit (2.9 to ∼45 m) being largely bubble-free pond ice formed from the refreezing of surface water ponded during extended periods of intense, presumably summer, melting. Here the fine-scale horizontal layering apparent in Fig. 2a likely reflects the episodic nature of the process, which would involve four general stages: (a) snow accumulation, (b) surface melting, (c) meltwater infiltration into underlying snow, eventually resulting in its saturation and pond formation, and (d) the freezing of that meltwater layer to form pond ice, probably during early austral autumn. In contrast, Unit 2 (approximately 45–97-m depth) also contains layers of bubble-poor ice, but in this zone they are typically decimetres thick, contorted and isolated within host ice that is optically brighter (likely due to the presence of reflective bubbles; Fig. 2a). We interpret this lower unit as ice dominated by infiltration refreezing formed by meltwater percolating into underlying firn. This still involves the influence of intense surface melting but, in contrast to the Unit 1, of generally insufficient magnitude to form a continuous quasi-massive layer of largely bubble-free pond ice. These physical characteristics and depth ranges of the two units we identify are consistent with the upper (Unit 1) ‘pond ice’ forming within the Cabinet Inlet region of pond formation and the lower (Unit 2) ‘infiltration ice’ forming, and being inherited from, up-flow of the region of pond formation (Fig. 1). 
Firn modelling and satellite image analysis We evaluate this hypothesis for the recent past through a one-dimensional firn densification and hydrology model17, driven by surface mass fluxes and temperature data from the RACMO2.3 regional climate model for Cabinet Inlet18. Model results (Fig. 3a,b) indicate the occurrence of intermittent, but substantial, surface melt events at the borehole location, consistent with Unit 1 forming from the refreezing of surface meltwater. As well as predicting the presence of such a layer, the model predicts (Fig. 3b) that the layer’s upper surface was, in summer 2014–2015, expected to be 2.9-m below the snow surface, and that the overlying snowpack contains a substantial ice layer between ∼1.9 and 2.2 m, agreeing with our direct observations from snow-pit digging, the OPTV log (Fig. 2a) and ground-penetrating radar (GPR) data (Fig. 4) addressed below. Further, this analysis is supported by a time series of moderate resolution imaging spectroradiometer (MODIS) satellite images (Fig. 3c), indicating that melt ponds formed annually between 2001 and 2009, while none appears in any of the images available between early 2009 and the time of fieldwork in late 2014. This reconstruction matches very closely the RACMO-based firn densification reconstruction (Fig. 3b) with ‘pond years’, coinciding with periods of intense near-surface firn densification (for example, 2005–2007). An approximation of the lateral extent of the massive subsurface ice layer we report is provided by the area known from satellite images to host melt ponds: ∼60-km across-flow and ∼20-km along-flow (Fig. 1). However, the precise degree to which this zone is underlain by massive pond and/or infiltration ice, and the depth of such layers where they are present beyond our borehole, cannot be determined from temporally discrete satellite images. To evaluate this we carried out GPR profiling at 200 MHz along three transects focused on the borehole location (Fig. 1). The resulting radargrams (Fig. 4) reveal the presence of numerous near-surface reflectors and one substantial reflector at a depth that varies between ∼1 and ∼3 m along the transects. Very little radar energy above the background noise level was received from below this reflection. This absence of signal return is consistent with the presence of a refrozen ice layer that, although characterized by minor density stratification, is physically and chemically uniform compared with firn layering unmodified by meltwater. This main reflector is ∼2.9-m deep at the borehole location, coincident with field-based digging, drilling and OPTV logging, as well as with firn modelling (above). We therefore infer that this strong GPR reflection represents the upper surface of a spatially extensive ice layer that is both thick and relatively homogeneous. Since meltwater ponds are confined to surface troughs that persist in MODIS images for decades within this region of Cabinet Inlet, it is unlikely that this ice layer is ubiquitously composed of Unit 1 pond ice. Away from the troughs, this near-surface reflector therefore probably indicates the uppermost surface of an ice layer that is more similar to Unit 2 infiltration ice, that is, still influenced by intense surface melting and subsurface refreezing, but in these areas not actually forming standing ponds. 
In the absence of further direct investigation, such as by ice coring or borehole analysis, it is not yet possible to determine at high resolution the lateral variability of the two units we identify. Nonetheless, our OPTV and GPR data together indicate that a widespread ice layer extends at least across all of our ∼16-km flow-orthogonal and ∼6-km flow-parallel GPR study area. Our borehole OPTV log also indicates that Units 1 and 2 combined extend to a depth of at least 97 m at the location of our borehole. While it is highly likely that this thickness—defined by the depth of the base of Unit 2—varies spatially across Cabinet Inlet, it is not currently possible to specify this distribution without further borehole and/or GPR data. The presence of the refrozen ice layer we report above affects the physical properties of the LCIS in at least two ways. First, the layer’s density is substantially higher than that of the snow and firn that would otherwise have formed over this depth range by standard compaction–metamorphism. For example, a recent LCIS model19 used a density of ∼700 kg m−3 for the shelf’s uppermost ∼100 m, based on inverting seismic data recorded in the shelf’s southern sector. In contrast, our OPTV-derived densities indicate a measured density of ∼870 kg m−3 over this depth range in the Cabinet Inlet area, 24% higher. Such a density enhancement influences calculations of the shelf’s thickness based on the surface elevation data20. In this case, using an ice density of 917 kg m−3, a sea water density of 1,026 kg m−3, a surface elevation of 63.5 m (Fig. 4) and the assumed firn density of 700 kg m−3 for the uppermost 97 m of the shelf yields a total shelf thickness of 382 m. Repeating the calculation with our OPTV-reconstructed density of 870 kg m3 for the uppermost 97 m yields a total shelf thickness of 551 m. Although substantially thicker than that calculated from the ‘standard firn’ model, a thickness of 551 m is close to that of 564 m reconstructed for the area by the Bedmap2 consortium21. This close correspondence reflects the fact that the (altimetry-based) Bedmap reconstructions include a correction to account for enhanced densification, resulting from active surface melting on ice shelves22. Our OPTV-based density reconstruction also yields a firn-air content (the column-length equivalent accounted for by material with a density lower than that of bubble-free glacial ice) of 5.0 m. While this value is substantially lower than the 23.0 m that would result from firn of density 700 kg m−3, it is far closer to the spatially distributed range of 0–4 m predicted for the area based on recent combined analyses of remotely sensed surface elevation and shelf thickness fields9,10. However, direct comparisons such as these are somewhat confounded by our firn-air content of 5.0 m being based on a single-point measurement (recorded, as noted above, on the limb of an elongate trough hosting an ephemeral surface pond), whereas the reconstructions based on remotely sensed data integrate data over a coarser spatial field. Second, as well as altering its density, the ice column will be warmed by latent heat released by the freezing of ponded and percolating meltwater. This effect is quantified by a thermistor string installed into the borehole following OPTV logging (Fig. 2c). 
The mean annual temperature measured at a depth of 11 m was −5.9 °C, while the mean annual surface temperature at the site (which normally defines that at a depth of 10–15 m in the absence of significant refreezing) is −16.9 °C. Further, the entire ice profile recorded by our thermistor string (Fig. 2c) is warmer than would be expected without factoring in refreezing of ponded meltwater. This effect is confirmed by our firn model, which under-predicts englacial temperatures despite accounting for heat released by the refreezing of percolating meltwater. In this case, modelled pore-water refreezing contributes ∼1.5 °C to the firn, yielding a predicted temperature at 11 m of −15.4 °C (Fig. 2c). This is still almost 10 °C colder than our measured firn temperature at that depth, demonstrating that the refreezing of ponded meltwater (not included in the model) provides significant, hitherto unconsidered, englacial heating in this region. As well as being denser, this ice is therefore also warmer than that which would otherwise be used in numerical models of the flow of the LCIS19. Replacing on the order of 10 × 10 × 0.1 km of standard firn with relatively warm and dense ice will exert some influence on ice shelf flow and stability. However, evaluating this influence at the ice-shelf scale is not straightforward. In the first instance, the warmer ice layer we identify will be less viscous than colder ice, and this may go some way to accounting for the anomalously high-rate factor that has in the past been necessary for numerical models of the flow of the LCIS, and in particular its northern sector, to match empirical data23,24. A consequence of such a temperature-induced acceleration would be a general reduction in back stress within confined embayments, such as Cabinet Inlet, potentially increasing shear stresses along the flow unit’s lateral margins. This process is consistent with observations that the northern sector of Larsen B Ice Shelf experienced an increase in lateral rifting before its break up in 2002 (ref. 25). In contrast, the influence of the layer’s presence on brittle deformation may be in the opposite direction, with enhanced ductile flow accommodating strain and the solid ice we report being more resistant to tensile fracture than lower-density and finer-grained firn. In such cases, the flow-parallel alignment of elongate solid ice bodies such as that we report herein might serve to resist the flow-orthogonal crevassing that commonly develops in response to longitudinal tensile stresses, approaching the marine limit of ice shelves. Finally, the spatial extent of this layer at the ice-shelf scale, and the way in which its deformation interacts with that of other material units, such as suture ice, and basal channels and crevasses, will also influence the way in which the layer’s presence affects overall shelf stability. In particular, longitudinal troughs located on the surface of ice shelves have been related to spatially coincident basal channels26, which have themselves been associated with shelf instability27 through local thinning and crevassing28,29. Although it is not yet known whether the surface troughs on LCIS investigated herein are associated with such basal channels, the recent finding that material density is locally enhanced below similar surface troughs on the Roi Baudouin Ice Shelf, Antarctica30, is consistent with the enhanced melting and massive ice formation reported herein. 
In the light of these complexities, identifying and exploring the net influence of pond ice formation on ice-shelf stability can only be achieved with confidence by including the full spatial extent and physical properties of the refrozen ice layer into a shelf-wide flow and fracture model, which may be regarded as a future research priority. The surface ponding responsible for the subsurface ice layer we report herein is currently restricted to warmer regions of Antarctica’s fringing ice shelves and in particular, but not exclusively, to the northern and western sectors of the Antarctic Peninsula. However, regional warming is predicted to spread southwards and intensify substantially over forthcoming decades5,31, so ponding is expected also to become more widespread. Similarly, massive layers of warm, dense ice are therefore highly likely to form within ice shelves present across a substantial area of Antarctica, perhaps the entire continent if warming continues into the 22nd century5, with important consequences for ice-shelf flow, hydrofracture and stability. Borehole drilling and logging The borehole was drilled by pressurized hot water and logged by OPTV13. The resulting OPTV log, analysed by WellCAD and BIFAT software32, provides a geometrically accurate image, with a pixel size of ∼1 mm, of the material composition of the complete borehole wall, as well as the structural geometry of layers and inclusions intersecting it33. Since the OPTV log records an image of reflected light, it also provides a proxy for the density of compacted snow, firn and ice due to the progressive decrease in reflectivity, as voids close and bubbles are occluded and collapsed13,15. Hubbard et al.15 exploited this relationship and identified an exponential relationship between OPTV luminosity (L) and material density (D) on the basis of samples recovered from a core retrieved from a borehole logged by OPTV on the Roi Baudouin Ice Shelf, East Antarctica. However, due to equipment loss, the light-emitting diode brightness of the OPTV probe used in the current study was different from that of Hubbard et al.15, and a new calibration was undertaken. This was based on the correlation of 40 core samples recovered from a logged borehole, again located on the Roi Baudouin Ice Shelf (Fig. 5). The new calibration yields a best-fit regression equation of D=950–40.1 e(0.0101L) (R2=0.82), with root mean square values of the residuals of 40.4, 35.2 and 21.7 kg m−3 for the density ranges 600–700, 700–800 and 800–900 kg m−3, respectively. While these values provide an error range for absolute densities derived from OPTV luminosity, the relative changes in density reported herein, that is, along a single borehole log, reduce to the precision of the method. This is approximated by the density range (910 kg m−3) divided by the luminosity range (256), or ∼3.5 kg m−3. Borehole temperatures (Fig. 2c) were recorded by negative temperature coefficient thermistors, recorded across a Wheatstone half-bridge by Campbell Scientific micro-loggers. Resistances were converted to temperature using a polynomial34 fitted to the manufacturer’s calibration curve refined via a second-stage calibration in a distilled water/ice bath. Once recalibrated, sensors were replaced into a new water/ice bath to determine temperature error, yielding a root mean square error of ±0.03 °C. Borehole temperatures were logged for at least 100 days, and the undisturbed ice temperature was calculated from an exponential function fitted to the cooling curve35. 
By this time, all temperatures recorded below the near-surface zone of seasonal thermal disturbance had stabilized to values that varied (over timescales of days to weeks) by <0.05 °C. Thus, the englacial temperatures we report herein were not influenced by the hot-water drilling process. The firn model used is IMAU-FDM v1.0, which takes into account firn compaction, meltwater percolation and refreezing17,18. At the surface, the firn model is forced with mass fluxes (snowfall, snowmelt, rain, sublimation, and snowdrift sublimation and erosion) and surface temperature from the regional climate model RACMO2.3, run at a 5.5-km horizontal resolution in a domain over the Antarctic Peninsula. Satellite image analysis MODIS Terra and Aqua level 1 (250 m) data were ordered via the level 1 and atmosphere archive and distribution system, and used to check for the presence of melt ponds in Cabinet Inlet. A total of 577 cloud-free band 2 (848 nm) images between 1st December and 30th April from 2000 (Aqua) or 2002 (Terra) to 2015 were classified to generate the time series shown in Fig. 3c. GPR data (Fig. 4) were collected using a Sensors and Software Pulse Ekko Pro system operated in common offset mode with a 0.8-ns sample interval and a 4,000-ns sample window. The console was mounted on the skidoo and the 200 MHz antennae towed 15 m behind on a plastic sledge at ∼10 km h−1 with an eight stack trace recorded every 3 m. A Leica VIVA GS10 GNSS rover unit was connected directly to the GPR console; the base station was located at the drill site. Surface elevation was referenced to the Earth Gravitational Model 1996 geoid, taken to represent sea level, and no correction was made for dynamic topography. GPR data were processed in Reflex-W. Processing steps included de-wow of 25-ns filter length, spherical divergence compensation, spectral whitening between 70 and 200 MHz, and a two-dimensional mean-averaging filter. The static correction was based on Leica Geo Office post-processed GNSS data, resulting in a vertical accuracy of ±0.5 m. Radar wave propagation velocities used to convert travel times to the depths shown in Fig. 4 were 0.2 m ns−1 for the uppermost 3 m (based on a local 500 MHz common midpoint gather from the uppermost reflector of Unit 1) and a typical value for glacial ice36 of 0.17 m ns−1 below that. The data that support the findings of this study are available from http://www.projectmidas.org/data/hubbard2016/ and from the corresponding author upon request. How to cite this article: Hubbard, B. et al. Massive subsurface ice formed by refreezing of ice-shelf melt ponds. Nat. Commun. 7:11897 doi: 10.1038/ncomms11897 (2016). This research was funded by the Natural Environment Research Council, grants NE/L005409/1 and NE/L006707/1, and Aberystwyth University’s Capital Equipment fund. The Leica VIVA GS10 GNSS system and Sensors and Software Pulse Ekko Pro radar transmitter were loaned by the Natural Environment Research Council Geophysical Equipment Facility, loan number 1028. We thank the British Antarctic Survey for logistical support, and in particular the project’s field assistants Ashly Fusiarski and Nicholas Gillett.
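A note on the shelf-thickness figures quoted in the Results (382 m for an assumed near-surface density of 700 kg m−3 versus 551 m for the OPTV-derived 870 kg m−3): the text does not spell out the formulation used, but the standard hydrostatic (flotation) relation with a firn-air correction reproduces the quoted values to within rounding. The symbols below take the values given in the text; the form of the equation is an assumption, not a statement of the authors' exact procedure.

```latex
% Hydrostatic thickness from surface elevation h and firn-air content d
% (assumed standard flotation relation; not stated explicitly in the paper).
\[
  H \approx \frac{\rho_{sw}}{\rho_{sw} - \rho_i}\,(h - d),
  \qquad
  d = z_f \,\frac{\rho_i - \rho_f}{\rho_i},
\]
% where \rho_i = 917 kg m^{-3} (solid ice), \rho_{sw} = 1026 kg m^{-3} (sea water),
% h = 63.5 m (surface elevation) and z_f = 97 m (logged firn/ice-layer thickness).
%
% \rho_f = 700 kg m^{-3}:  d = 97(917-700)/917 = 23.0 m,
%                          H = (1026/109)(63.5 - 23.0) = 381 m   (quoted: 382 m)
% \rho_f = 870 kg m^{-3}:  d = 97(917-870)/917 = 5.0 m,
%                          H = (1026/109)(63.5 - 5.0)  = 551 m   (quoted: 551 m)
```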
<urn:uuid:d7d50645-eaf1-4994-b594-3e15e13982cb>
3.109375
5,323
Academic Writing
Science & Tech.
42.577203
95,565,647
A progressive build of a Shiny application to simulate and fit linear models (folders Shiny 1-5) and a markdown document detailing how to download a SNODAS raster, download a shapefile and perform a few basic spatial manipulations (SpatialDownload_Markdown). A link to the Shiny5 application and the SpatialDownload document can be found at: http://popr.cfc.umt.edu/

Shiny Application Demo

The purpose of this application is to demonstrate the use of Shiny while using some of the skills learned during the workshop.

- To use the application, define the number of observations to simulate. Think of this as your sample size.
- Define a formula, without the response variable
- Most any valid R formula will do, but this application cannot handle random effects
- If you need help with R formulas, try ?formula at the console or google it
- An example formula might be: ~snow*rain
- Once you have defined the formula, the application will create a series of numeric inputs that will allow you to enter the true value of the coefficient(s)
- Also, you will notice that your formula is pseudo-validated at the top of the screen; the result of this quick check is shown to the user
- At this time the model matrix is presented to the user as a data.table on the right side of the screen
- The residual error distribution is fixed to Normal for the moment, but I could imagine including Binomial and others in the future
- When the residual error choice is Normal, we need to define the standard deviation of the distribution; put that in the next box
- When you are happy with the data and your formula, click on the Fit Model button to run the model
- You may notice a progress bar that appears ever so briefly in the bottom right corner of the screen
- Now you can view the results of the fitting procedure by changing to the Model Fit tab near the top center of the screen

Interesting Shiny bits

- The application is built in a progressive manner; see the folders of this repository
- Many of the really cool features of Shiny are built into this application
- Dynamic UI components take two forms, conditional and uiOutput types
- UI elements are created using a loop
- The new modal dialog was incorporated
- There is a progress bar
- I made efforts to use many of the existing widgets
- And much more...hope it is a good example of the components of an application

Bugs and feature requests

- Report bugs and feature requests to this repository's Issues page

By the way, for those of you in the workshop this is another example of Markdown. The syntax is really close to that we used in R, but there are a few differences.
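The application itself is written in R/Shiny, so the repository contains no Python. Purely as a language-agnostic illustration of the simulate-then-fit loop the app wraps (build a model matrix, set true coefficients, add Normal residual error, fit, compare), here is a rough sketch in Python; numpy and statsmodels are assumed installed, and all names and numbers are hypothetical.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(42)

# Simulate predictors named after the example formula ~snow*rain
n = 200                                   # the "number of observations" input
snow = rng.normal(size=n)
rain = rng.normal(size=n)

# Model matrix with intercept, main effects and interaction
X = np.column_stack([np.ones(n), snow, rain, snow * rain])
true_beta = np.array([1.0, 2.0, -0.5, 0.25])   # user-specified "true" coefficients
sigma = 1.0                                     # residual standard deviation

y = X @ true_beta + rng.normal(scale=sigma, size=n)

# Fit the model and compare estimates with the truth
fit = sm.OLS(y, X).fit()
print(fit.params)   # should be close to true_beta
print(fit.bse)      # standard errors shrink as n grows
```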
<urn:uuid:20e29e56-a4fb-4e38-a8cb-d463a9dd9147>
2.625
576
Documentation
Software Dev.
34.998478
95,565,651
Amazing Science Images You Must See

A graceful three-layered cloud structure develops over the Indian Ocean in this award-winning photo snapped in 2011. As part of a project called DYNAMO, researchers are studying the dynamics of the Madden-Julian Oscillation, a travelling atmospheric pattern over the Indian and Pacific Oceans. The pattern creates anomalous phases of tropical rain and then unusual dryness in cycles lasting a month or two. Understanding this pattern helps scientists build better models for climate and weather.
<urn:uuid:db49962c-9467-4dc5-a263-31877964052f>
2.765625
101
Truncated
Science & Tech.
31.862907
95,565,673
Because we have compiler programs, software developers often take the process of compilation for granted. However, as a software developer, you should cultivate a solid understanding of how compilers work in order to develop the strongest code possible and fully understand its underlying language. In addition, the compilation process comprises techniques that are applicable to the development of many software applications. As such, this course will introduce you to the compilation process, present foundational topics on formal languages and outline each of the essential compiler steps: scanning, parsing, translation and semantic analysis, code generation, and optimization. By the end of the class, you will have a strong understanding of what it means to compile a program, what happens in the process of translating a higher-level language into a lower-level language, and the applicability of the steps of the compilation process to other applications.
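To make the first of those steps concrete, here is a deliberately tiny, hypothetical example of scanning, written in Python; it is not drawn from the course materials and simply illustrates what a scanner produces before parsing begins.

```python
import re

# A toy token specification: order matters, most specific patterns first.
TOKEN_SPEC = [
    ("NUMBER", r"\d+"),
    ("IDENT",  r"[A-Za-z_]\w*"),
    ("OP",     r"[+\-*/=]"),
    ("LPAREN", r"\("),
    ("RPAREN", r"\)"),
    ("SKIP",   r"\s+"),
]
MASTER = re.compile("|".join(f"(?P<{name}>{pat})" for name, pat in TOKEN_SPEC))

def scan(source: str):
    """Yield (kind, text) pairs for each token in the source string."""
    for match in MASTER.finditer(source):
        kind = match.lastgroup
        if kind != "SKIP":
            yield kind, match.group()

print(list(scan("x = (y + 42) * z")))
# [('IDENT', 'x'), ('OP', '='), ('LPAREN', '('), ('IDENT', 'y'), ('OP', '+'), ...]
```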
<urn:uuid:196784f5-75ed-499a-b9e7-a43f7afdfe60>
3.359375
186
Product Page
Software Dev.
19.19375
95,565,682
Each winter, wide swaths of the Arctic Ocean freeze to form sheets of sea ice that spread over millions of square miles. This ice acts as a massive sun visor for the Earth, reflecting solar radiation and shielding the planet from excessive warming. The Arctic ice cover reaches its peak each year in mid-March, before shrinking with warmer spring temperatures. But over the last three decades, this winter ice cap has shrunk: Its annual maximum reached record lows, according to satellite observations, in 2007 and again in 2011.

Understanding the processes that drive sea-ice formation and advancement can help scientists predict the future extent of Arctic ice coverage — an essential factor in detecting climate fluctuations and change. But existing models vary in their predictions for how sea ice will evolve.

Now researchers at MIT have developed a new method for optimally combining models and observations to accurately simulate the seasonal extent of Arctic sea ice and the ocean circulation beneath. The team applied its synthesis method to produce a simulation of the Labrador Sea, off the southern coast of Greenland, that matched actual satellite and ship-based observations in the area.

Through their model, the researchers identified an interaction between sea ice and ocean currents that is important for determining what’s called “sea ice extent” — where, in winter, winds and ocean currents push newly formed ice into warmer waters, growing the ice sheet. Furthermore, springtime ice melt may form a “bath” of fresh seawater more conducive for ice to survive the following winter. Accounting for this feedback phenomenon is an important piece in the puzzle to precisely predict sea-ice extent, says Patrick Heimbach, a principal research scientist in MIT’s Department of Earth, Atmospheric and Planetary Sciences.

“Until a few years ago, people thought we might have a seasonal ice-free Arctic by 2050,” Heimbach says. “But recent observations of sustained ice loss make scientists wonder whether this ice-free Arctic might occur much sooner than any models predict … and people want to understand what physical processes are implicated in sea-ice growth and decline.”

Heimbach and former MIT graduate student Ian Fenty, now a postdoc at NASA’s Jet Propulsion Laboratory, will publish a paper, "Hydrographic Preconditioning for Seasonal Sea Ice Anomalies in the Labrador Sea," in the Journal of Physical Oceanography.

An icy forecast

As Arctic temperatures drop each winter, seawater turns to ice — starting as thin, snowflake-like crystals on the ocean surface that gradually accumulate to form larger, pancake-shaped sheets. These ice sheets eventually collide and fuse to create massive ice floes that can span hundreds of miles.

When seawater freezes, it leaches salt, which mixes with deeper waters to create a dense, briny ocean layer. The overlying ice is fresh and light in comparison, with very little salt in its composition. As ice melts in the spring, it creates a freshwater layer on the ocean surface, setting up ideal conditions for sea ice to form the following winter.

Heimbach and Fenty constructed a model to simulate ice cover, thickness and transport in response to atmospheric and ocean circulation. In a novel approach, they developed a method known in computational science and engineering as “optimal state and parameter estimation” to plug in a variety of observations to improve the simulations.
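The article does not spell out the mathematics, but the flavour of "optimal state and parameter estimation" can be conveyed with a toy example: choose model parameters that minimize the misfit between simulated and observed quantities. The sketch below (Python, with numpy and scipy assumed available) is entirely illustrative and is not the MIT/ECCO code; it fits a single growth-rate parameter of a trivial ice-extent model to noisy synthetic "observations".

```python
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 50)             # one winter season, arbitrary units

def toy_ice_extent(growth_rate, t):
    """A deliberately simple stand-in model: extent grows, then saturates."""
    return 1.0 - np.exp(-growth_rate * t)

# Synthetic "satellite observations": the true model plus noise
true_rate = 3.0
obs = toy_ice_extent(true_rate, t) + rng.normal(scale=0.05, size=t.size)

def misfit(rate):
    """Sum-of-squares model-data misfit, the quantity a state estimate minimizes."""
    return np.sum((toy_ice_extent(rate, t) - obs) ** 2)

best = minimize_scalar(misfit, bounds=(0.1, 10.0), method="bounded")
print(best.x)   # recovered growth rate, close to 3.0
```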
A tight fit

The researchers tested their approach on data originally taken in 1996 and 1997 in the Labrador Sea, an arm of the North Atlantic Ocean that lies between Greenland and Canada. They included satellite observations of ice cover, as well as local readings of wind speed, water and air temperature, and water salinity. The approach produced a tight fit between simulated and observed sea-ice and ocean conditions in the Labrador Sea — a large improvement over existing models. The optimal synthesis of model and observations revealed not just where ice forms, but also how ocean currents transport ice floes within and between seasons.

From its simulations, the team found that, as new ice forms in northern regions of the Arctic, ocean currents push this ice to the south in a process called advection. The ice migrates further south, into unfrozen waters, where it melts, creating a fresh layer of ocean water that eventually insulates more incoming ice from warmer subsurface waters of subtropical Atlantic origin.

Knowing that this model fits with observations suggests to Heimbach that researchers may use the method of model-data synthesis to predict sea-ice growth and transport in the future — a valuable tool for climate scientists, as well as oil and shipping industries.

“The Northwest Passage has for centuries been considered a shortcut shipping route between Asia and North America — if it was navigable,” Heimbach says. “But it’s very difficult to predict. You can just change the wind pattern a bit and push ice, and suddenly it’s closed. So it’s a tricky business, and needs to be better understood.”

Martin Losch, a research scientist at the Alfred Wegener Institute for Polar and Marine Research in Bremerhaven, Germany, says the feedback mechanism identified by the MIT group is important for predicting sea-ice extent on a regional scale. “The dynamics of climate are complicated and nonlinear, and are due to many different feedback processes,” says Losch, who was not involved with the research. “Identifying these feedbacks and their impact on the system is at the heart of climate research.”

As part of the “Estimating the Circulation and Climate of the Ocean” (ECCO) project, Heimbach and his colleagues are now applying their model to larger regions in the Arctic. This research was supported in part by the National Science Foundation and NASA.

Written by: Jennifer Chu, MIT News Office
<urn:uuid:43981aad-19ff-48d9-a339-01c2ce54ea75>
4
1,780
Content Listing
Science & Tech.
36.991392
95,565,699
Diploblastic Metazoans: Porifera and Archaeocyatheans

These two groups have frequently been considered together in the same phylum as simple forms of metazoans. The affinities and taxonomic position of the exclusively fossil archaeocyatheans have long been discussed. However, sponges and archaeocyatheans are now interpreted as two separate metazoan phyla which originated directly but independently from protozoan ancestors. The receptaculids, another enigmatic group, are placed close to these two groups, albeit in a still rather uncertain position.

Keywords: Lower Cambrian, Skeletal Element, Calcareous Sponge, Calcareous Skeleton, Sponge Reef
<urn:uuid:e6a0c986-01bd-4eec-83a8-e47996054625>
2.578125
148
Truncated
Science & Tech.
3.680362
95,565,708
The Human Genome in 3-D

New technology reveals how DNA molecules pack themselves inside a cell nucleus.

Unfurled, the human genome would contain approximately six feet of DNA. Amazingly, all of that length is packed into a cell nucleus about three micrometers in diameter–roughly one-tenth the width of a human hair. New technology that makes it possible to assess the three-dimensional interactions among different parts of the genome has revealed how these molecules are packed into such a tiny space. The findings could also yield new clues to genome regulation–how specific genes are turned on and off.

While scientists have previously been able to resolve the three-dimensional structure of parts of the genome, a new study is the first to do so on a genome-wide scale. “Our technology is kind of like MRI for genomes,” says Erez Lieberman-Aiden, a researcher in the Harvard-MIT Division of Health Sciences and Technology and one of the authors of a new paper detailing the work. (Lieberman was named to Technology Review’s TR35 list of young innovators this year.)

DNA has multiple levels of organization–the linear sequence of bases, its famous helical structure, and higher-order formations that wrap it around proteins and coil it to form chromosomes. But identifying how DNA is organized at these higher levels across the genome has been difficult. “We have the entire linear sequence of the genome, but no one knows even the principles of how DNA is organized in higher-order space,” says Tom Misteli, a scientist at the National Cancer Institute, in Bethesda, MD, who was not involved in the study. A growing pool of research also shows that this organization is crucial for regulating gene activity. For example, genes must be unwound before they can be transcribed into proteins. And some genes are turned on only when bound to DNA sequences on entirely different chromosomes, says Misteli. “That means they have to come together in three-dimensional space.”

In a new method, dubbed Hi-C, scientists first use a preservative such as formaldehyde to fix the three-dimensional structure of a folded DNA molecule in place. This way, gene sequences that are close together in the three-dimensional structure but not necessarily adjacent in the linear sequence become bonded together. The fixed genome is then broken into a million pieces using a DNA-cutting enzyme. But the DNA segments that were stuck together during the fixation process remain bonded together. Researchers then add a marker called biotin to the ends of the bonded genome fragments and use another enzyme to glue the ends of each fragment together, making a circle of DNA. The biotin-marked pieces are then sequenced, revealing which pieces of DNA were physically close together in the three-dimensional conformation.

While scientists have been working on some aspects of the Hi-C technology for several years, the rapidly declining cost of gene sequencing has just recently made it possible to tackle the whole genome. “Only now, with the development of novel sequencing technologies, can we pull this off,” says Job Dekker, a biologist at the University of Massachusetts Medical School, in Amherst, and senior author of the paper. The findings are published today in the journal Science.

Using this new technology, the researchers identified two organizing principles in DNA.
Chromosomes appear to be folded in such a way that active genes–those that are being made into proteins–are close together, and inactive genes are also close together, properties that had previously only been observed on a smaller scale. “The active stuff tends to be in one compartment that is not so densely packed,” says Lieberman. “The second compartment is like a storage compartment–it’s a bit denser and holds most of the genome.” Adds Dekker: “We think this is an efficient way for cells to organize chromatin within the nucleus.”

The researchers also developed a model for how they think DNA is organized within these active and inactive compartments. DNA molecules appear to form a polymer structure known as a fractal globule, in which segments that are close to each other in the linear sequence are also close in the three-dimensional globule. Lieberman likens the structure to a fresh packet of ramen noodles, before they are stirred into a tangled glob. “It suggests there is a kind of beautiful un-entangled structure that the genome folds into,” says Lieberman. “It has no knots, and a very simple physical process can be used to pull out a piece of fractal globule and then put it back.”

The technology makes it possible to tackle a number of questions, such as how the three-dimensional structure of the genome varies among cell types, among organisms, and between normal and cancerous cells. “Maybe this could help explain why cancer genomes are so misregulated,” says Dekker. But it’s not yet clear how quickly the technology will catch on. While fast, cheap sequencing has made such experiments possible, “it is still a major undertaking,” says Misteli. That may change as prices continue to fall.

The researchers now hope to improve the resolution of the technology. Currently, they can examine the three-dimensional structure of the genome on a megabase scale–in units of a million DNA letters–but they are ultimately aiming for a kilobase resolution. “I think there are more structural features we haven’t discovered,” says Dekker. Increasing the resolution by a factor of 10 will require a hundredfold more sequencing, he says.

Scientists also want to explore exactly how the three-dimensional structure of the genome affects regulation. “What happens when you move a gene artificially from an inactive to an active area?” asks Dekker. “People have started to develop methods to move genes around in the nucleus, but the results are generally mixed.”
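Although the article stays at the conceptual level, the final computational step it describes (turning sequenced, biotin-marked junction fragments into a genome-wide interaction map) amounts to binning paired positions into a symmetric contact matrix. Here is a minimal, hypothetical sketch in Python (numpy assumed; all names, positions and the toy genome length are invented), with the bin size set to 1 Mb to match the megabase resolution mentioned above.

```python
import numpy as np

BIN_SIZE = 1_000_000          # 1 Mb bins, the resolution quoted in the article
GENOME_LENGTH = 50_000_000    # toy single-chromosome genome for illustration
n_bins = GENOME_LENGTH // BIN_SIZE

def contact_matrix(pairs):
    """Build a symmetric bin-by-bin contact matrix from (pos_a, pos_b) pairs."""
    m = np.zeros((n_bins, n_bins), dtype=int)
    for a, b in pairs:
        i, j = a // BIN_SIZE, b // BIN_SIZE
        m[i, j] += 1
        if i != j:
            m[j, i] += 1      # keep the matrix symmetric
    return m

# Hypothetical ligation-junction pairs (positions in base pairs)
pairs = [(1_200_000, 1_900_000), (1_250_000, 34_000_000), (48_000_000, 47_500_000)]
m = contact_matrix(pairs)
print(m.sum(), m[1, 1], m[1, 34])
```

In practice the compartments and fractal-globule behaviour described above are inferred from statistical analysis of such a matrix, for instance from how contact frequency decays with genomic distance.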
<urn:uuid:0e49ad66-7ade-41eb-9a2e-94af72f90105>
3.75
1,257
News Article
Science & Tech.
36.323718
95,565,714
Physicists from the University of Regensburg (Germany), Kanazawa University (Japan) and the Linnaeus University in Kalmar (Sweden) have studied the vibrations of a carbon monoxide molecule (CO, black and red ball in Figure below) that is bonded to a copper surface under the influence of an external force field exerted by the tip of a scanning probe microscope. The measurements were conducted at the University of Regensburg using combined scanning tunneling microscopy, scanning tunneling spectroscopy and atomic force microscopy at liquid helium temperatures and under ultrahigh vacuum.

The CO molecule bonds with its carbon atom to the copper underneath and stands upright on the surface such that the oxygen atom points away from the surface. The CO molecule can oscillate just like an inverted pendulum. The vibration of a molecule on a surface contains critical information about the bond between the molecule and the surface, which is crucial for understanding surface phenomena and for technologically important processes such as catalysis and epitaxial growth.

As expected, the force that originates from the probe tip (pointed object from above in Figure) changes the vibrational frequencies – attractive forces increase the oscillation frequency, and repulsive interactions decrease it. The data revealed that the strength of the bond between carbon monoxide and copper decreased as the probe tip pulled the molecule away from the surface, marking the direct observation of the weakening of a single atomic bond by an external influence. The result is important because chemical reactions often evolve by loosening an existing bond before forming a new one.

The result of the research has been reported in “Vibrations of a molecule in an external force field” by N. Okabayashi, A. Peronio, M. Paulsson, T. Arai and F. J. Giessibl in Proceedings of the National Academy of Sciences of the United States of America, April xx, 2018, www.pnas.org

Christina Glaser | idw - Informationsdienst Wissenschaft
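The frequency shifts described above follow the general textbook behaviour of a harmonic mode whose effective stiffness is modified by the curvature of the tip-molecule interaction potential; a stiffening contribution raises the frequency and a softening one lowers it. The sketch below (Python, numpy assumed) illustrates only this generic relation, not the specific model used in the paper, and all numerical values are purely illustrative.

```python
import numpy as np

def vibrational_frequency(k0, k_int, m_eff):
    """Frequency (Hz) of a harmonic mode with intrinsic stiffness k0 (N/m),
    an additional interaction stiffness k_int (N/m) from the external force
    field, and effective mass m_eff (kg)."""
    return np.sqrt((k0 + k_int) / m_eff) / (2.0 * np.pi)

# Purely illustrative numbers, not values from the study
k0 = 2.0                       # N/m, intrinsic restoring stiffness of the mode
m_eff = 28 * 1.66054e-27       # kg, CO molecular mass as a rough effective mass
for k_int in (-0.2, 0.0, 0.2):   # softening, unperturbed, stiffening
    f = vibrational_frequency(k0, k_int, m_eff)
    print(f"k_int = {k_int:+.1f} N/m  ->  f = {f / 1e9:.1f} GHz")
```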
<urn:uuid:dd66be46-94b5-41f8-afdb-cf35ded0de55>
3.15625
995
Content Listing
Science & Tech.
32.365609
95,565,723