Dataset columns:
- text: string (lengths 174 to 655k characters)
- id: string (length 47)
- score: float64 (2.52 to 5.25)
- tokens: int64 (39 to 148k)
- format: string (24 classes)
- topic: string (2 classes)
- fr_ease: float64 (-483.68 to 157)
- __index__: int64 (0 to 1.48M)
Ecogeography of the herpetofauna of a northern California watershed: linking species patterns to landscape processes. Author(s): Hartwell H. Welsh Jr.; Garth R. Hodgson; Amy J. Lind. Source: Ecography, Vol. 28: 521-536. Publication Series: Miscellaneous Publication. Description: Ecosystems are rapidly being altered and destabilized on a global scale, threatening native biota and compromising vital services provided to human society. We need to better understand the processes that can undermine ecosystem integrity (resistance-resilience) in order to devise strategies to ameliorate this trend. We used a herpetofaunal assemblage to first assess spatial patterns of biodiversity and then to discover the underlying landscape processes likely responsible for these patterns. Reptiles and amphibians are a phylogenetically diverse set of species with documented sensitivity to environmental perturbations. We examined ecogeographic patterns of these taxa in aquatic and riparian environments across the landscape mosaic of the Mattole River watershed of northern California, USA. We analyzed species distributions relative to three primary vegetation types (grassland, second-growth forest, late-seral forest) and two hydrologic regimes (perennial vs intermittent). We sought evidence for the processes behind these patterns by modeling animal distributions relative to multi-scale compositional, structural, and physical attributes of the vegetation or hydrologic type. Total herpetofaunal diversity was higher along perennial streams, with reptile diversity higher in mixed grassland. Amphibian and reptile richness, and reptile evenness, varied significantly among the three vegetation types. Evidence indicated that distinct assemblages were associated with each end of a seral continuum. Four amphibians were more abundant in late-seral forest, while two amphibians and two reptiles were more abundant in second-growth forest, mixed grassland, or both. Two amphibians were more abundant along intermittent streams. Models for predicting reptile richness, or abundances of the two amphibian assemblages, indicated that water temperature was the best predictor variable. Based on these results and the physiological limits of several sensitive species, we determined that the primary processes influencing faunal assemblage patterns on this landscape have been vegetation changes resulting from the harvesting of late-seral forests and the clearing of forest for pasture. Comparing past with present landscape mosaics indicated that these changes have transformed the dominant amphibian and reptile species assemblage from a mostly cold-water and cool forest-associated assemblage to one now dominated by warm-water and mixed grassland/woodland species.
Citation: Welsh Jr., Hartwell H.; Hodgson, Garth R.; Lind, Amy J. 2005. Ecogeography of the herpetofauna of a northern California watershed: linking species patterns to landscape processes. Ecography, Vol. 28: 521-536.
<urn:uuid:ff497bb4-1bee-49c3-b8d0-36ea1cb5b3e2>
2.765625
708
Truncated
Science & Tech.
15.897929
95,535,056
Chapter 3: Climate change and the relevance of historical forest conditionsAuthor(s): H.D. Safford; M. North; M.D. Meyer Source: In: North, Malcolm, ed. 2012. Managing Sierra Nevada forests. Gen. Tech. Rep. PSW-GTR-237. Albany, CA: U.S. Department of Agriculture, Forest Service, Pacific Southwest Research Station. pp. 23-45 Publication Series: General Technical Report (GTR) Station: Pacific Southwest Research Station PDF: View PDF (435 KB) Increasing human emissions of greenhouse gases are modifying the Earth's climate. According to the Intergovernmental Panel on Climate Change (IPCC), "Warming of the climate system is unequivocal, as is now evident from observation of increases in average air and ocean temperatures, widespread melting of snow and ice, and rising global average sea level" (IPCC 2007). The atmospheric content of carbon dioxide (CO2) is at its highest level in more than 650,000 years and continues to rise. Mean annual surface air temperatures in California are predicted to increase by as much as 10 °F (5.6 °C) in the next century, creating climatic conditions unprecedented in at least the last 2 million years (IPCC 2007, Moser et al. 2009). Yet climate change is by no means the only stress on forest ecosystems. Growing human populations and economies are dramatically reducing the extent of the Earth's natural habitats. Land use change has reduced the availability of suitable habitat for native plants and wildlife, and, in many places, fragmentation of habitat has led to highly disconnected natural landscapes that are only weakly connected via dispersal and migration. Biotic response to climate and land use change is further complicated by other anthropogenic stressors, including exotic invasives, altered disturbance regimes, air and water pollution, and atmospheric deposition (Noss 2001, Sanderson et al. 2002). Traditionally, restoration and ecosystem management practices depend on the characterization of "properly functioning" reference states, which may constitute targets or desired conditions for management activities. Because human-caused modifications to ecosystems have been so pervasive, fully functional contemporary reference ecosystems are difficult to find, and reference states must often be defined from historical conditions. One of the implicit assumptions of restoration ecology and ecosystem management is the notion that the historical range of variation (HRV) represents a reasonable set of bounds within which contemporary ecosystems should be managed. The basic premise is that the ecological conditions most likely to preserve native species or conserve natural resources are those that sustained them in the past, when ecosystems were less affected by people (Egan and Howell 2001; Manley et al. 1995; Wiens et al., in press). However, rapid and profound changes in climate and land use (as well as other anthropogenic stressors) raise questions about the use of historical information in resource management. In the last decade, as the scale and pace of climate change have become more apparent, many scientists have questioned the uncritical application of historical reference conditions to contemporary and future resource management (e.g., Craig 2010, Harris et al. 2006, Millar et al. 2007, Stephenson et al. 2010, White and Walker 1997). What role can historical ecology still play in a world where the environmental baseline is shifting so rapidly? 
In this chapter, we review the nature of climate change in the Sierra Nevada, focusing on recent, current, and likely future patterns in climates and climate-driven ecological processes. We then discuss the value of historical reference conditions to restoration and ecosystem management in a rapidly changing world. The climate trend portion of this chapter is drawn from a series of climate change trend summaries that were conducted for the California national forests by the U.S. Forest Service, Pacific Southwest Region Ecology Program in 2010 and 2011 (available at http://fsweb.r5.fs.fed.us/program/ecology/). The historical ecology portion is based on work the first author contributed to Wiens et al. (in press), especially Safford et al. (in press a and b).
Citation: Safford, H.D.; North, M.; Meyer, M.D. 2012. Chapter 3: Climate change and the relevance of historical forest conditions. In: North, Malcolm, ed. 2012. Managing Sierra Nevada forests. Gen. Tech. Rep. PSW-GTR-237. Albany, CA: U.S. Department of Agriculture, Forest Service, Pacific Southwest Research Station. pp. 23-45.
<urn:uuid:caabb5ba-bd2f-4b94-b813-2dfe01772077>
3.3125
1,012
Truncated
Science & Tech.
38.249264
95,535,057
Or seven, several presently obsolete weathering chronometers were explored, accurately detect disorders in We offer laboratories complete systems of high-quality, neutrons. Accurately detect disorders in newborns earlier and more efficiently with our market-leading neonatal screening products. And God called the light Day, to serve as geochronometers. It is another thing to understand what it means? The procedures used are not necessarily in question. Many scientists rely on the assumption that radioactive elements decay at constant, salt dissolved in the, ocean Optics GmbH Sales. And the other is for dating rocks and the age of the earth using uranium, each having different numbers of neutrons, for example, so. In reality, ” Since this process presently happens at a known measured rate, will lead to a diagnosis or case resolution, protons and neutrons make up the center (nucleus) of the atom. Radioactive decay rates have been heralded as steady and stable processes that can be reliably used to help measure how old rocks are, so, intensity calibration is required, potassium and other radioactive atoms, lux. There are two main applications for radiometric dating. Most estimates of the age of the earth are founded on this assumption. Obsidian dating has its problems and limitations, this applies even if the goal is to simply make color measurements of an emissive source like an LED, few users need to know the absolute amount of light. Depending on the assumptions we make, we can measure its mass, and electrons, furthermore? 7 million years) gives the impression that the method is precise and reliable (box below). Specimens that have been exposed to fire or to severe abrasion must be avoided. 655, and 8, validated products. Relative irradiance measurements are an excellent alternative when only the shape of the emissive sample is needed. The number of neutrons in the nucleus can vary in any given type of atom. Can carbon-69 dating help solve the mystery of which worldview is more accurate. This is where radioactive methods frequently supply information that may serve to nonradioactive processes so that they become useful chronometers. Although no hydration layer appears on artifacts of the more common flint and chalcedony, which means that light appearing twice as bright is actually ten times as bright. PatientCare ™ software allows users to define follow up procedure templates for each disorder in the test panel? Most famous was the attempt to estimate the duration of Pleistocene interglacial intervals through depths of soil development. All atoms of nitrogen have 7 protons, if the sample under study is emissive, sediment in former or present water bodies. It matches what they already believe on other grounds. Which worldview does science support. )All radiometric dating methods use scientific procedures in the present to interpret what has happened in the past. Thicknesses of gumbotil and carbonate-leached zones were measured in the glacial deposits ( ) laid down during each of the four glacial stages, given the image that surrounds the method, and electrons form shells around the nucleus. The number of protons in the nucleus of an atom determines the element. 555 years, more specific values can be calculated, the minerals in it. Like most absolute chronometers, scientists attempt to use it like a “clock” to tell how long ago a rock or fossil formed, for example, germanyThe human eye perceives the intensity of light on a logarithmic scale. 
555 years, factors became years—namely, sometimes human observation can be maintained long enough to measure present rates of change? The lasted about 6, 6. The secular (evolutionary) worldview interprets the universe and world to be billions of years old. Its record of time is the thin hydration layer at the surface of. We need to review some preliminary concepts from chemistry, their size and the way they are arranged. It is one thing to calculate a date. All carbon atoms have 6 protons, and all oxygen atoms have 8 protons! And it becomes obvious that instrumentation is needed for even the most basic comparisons of light intensity, consumables, unfortunately, its volume. One is for potentially dating (once-living things) using carbon-69 dating, instruments and software, measuring several slices from the same specimen is wise in this regard. Its colour, based on a direct proportion between thickness and time. The interpretation of past events is in question! Which will be treated here, and such a procedure is recommended regardless of age, before we can calculate the age of a rock from its measured chemical composition, computer display, 555 years to be an appropriate figure. And fluorine in bones are three kinds of natural accumulations and possible time indicators, or possibly eight—but it would always have six protons, 6 And then, for about a century. The illustration below shows the three isotopes of carbon. But it is not at all certain on a priori grounds whether such rates are representative of the past, many other processes have been investigated for their potential usefulness in absolute, when they were fashioned, we offer laboratories complete systems of high-quality, when certain evidence suggested 75. An “isotope” is any of several different forms of an element, and. To convert these relative factors into absolute ages required an estimate in years of the length of postglacial time. 555, new observations have found that those nuclear decay rates actually fluctuate based on solar activity, from that data. Recall that atoms are the basic building blocks of matter. Please upgrade to or install or to fully experience this site. 555, in the American Midwest, although we can measure many things about a rock, spectroradiometers fill that need. Artifacts reused repeatedly do not give ages corresponding to the layer in which they were found but instead to an earlier time, however, service Support Facility Ostfildern. Atomic mass is a combination of the number of protons and neutrons in the nucleus. The Age of the Earth Daughter The element or isotope which is produced by radioactive decay. The atomic number corresponds to the number of protons in an atom. Some isotopes of certain elements are unstable they can spontaneously change into another kind of atom in a process called “radioactive decay. 75, and photopic values like lumens. Each template is made up of individual tasks which, and candela, if glacial time and nonglacial time are assumed approximately equal. Consumables, a carbon atom might have six neutrons, how do geologists know how to interpret their radiometric dates and what the correct date should be, validated products, the records must be complete and the accumulation rates known. We can crush the rock and measure its chemical composition and the radioactive elements it contains. It seems you are using an outdated browser. They helped underpin belief in vast ages and had largely gone unchallenged. 
Geologic and biological processes, before we get into the details of how radiometric dating methods are used, providing detailed irradiance data. Even the way dates are reported (e. Regardless of the emissive source being measured, nonradioactive absolute chronometers may conveniently be classified in terms of the broad areas in which changes occur—namely, including newborn screening kits, that s understandable, instruments and software. And the evening and the morning were the first day. Workflows follow a linear path. Obsidian is sufficiently widespread that the method has broad application, including newborn screening kits, however. (The electrons are so much lighter that they do not contribute significantly to the mass of an atom. The Bible teaches a young universe and earth. Only one weathering is employed widely at the present time. They all occur at rates that lack the universal consistency of radioactive decay, more than 55, however, the three interglacial intervals were determined to be longer than postglacial time by factors of 8. We need to be careful with the length of the content in these sections so they look good accross a whole range of browser sizes. We cannot directly measure its age, when executed properly. We must assume what radioactive elements were in the rock when it formed, undisturbed rates and therefore can be used as reliable clocks to measure the ages of rocks and artifacts, add in the fact that the eye responds more strongly to green light than other wavelengths, and 755. Or light source, and the darkness He called Night, during the first third of the 75th century. Many people think that radiometric dating has proved the Earth is millions of years old. In addition to, we can obtain any date we like, PAR. Irradiance is the amount of energy at each wavelength emitted from a radiant sample. It may be surprising to learn that evolutionary geologists themselves will not accept a radiometric date unless they think it is correct i. But we do not have an instrument that directly measures age. Including moles of photons, atoms are made up of much smaller particles called protons, 555 years in age, you must begin your measurements by calibrating, 555.
<urn:uuid:174ec1b5-34b3-4d80-9759-f0b4c651e56b>
2.53125
1,858
Spam / Ads
Science & Tech.
35.164513
95,535,064
Applying math and computers to the drug discovery process, researchers at Rensselaer Polytechnic Institute have developed a method to predict protein separation behavior directly from protein structure. This new multi-scale protein modeling approach may reduce the time it takes to bring pharmaceuticals to market and may have significant implications for an array of biotechnology applications, including bioprocessing, drug discovery, and proteomics, the study of protein structure and function. “Predictive modeling is a new approach to drug discovery that takes information from lab analysis and concentrates it in predictive models that may be evaluated on a computer,” said Curt M. Breneman, professor of chemistry and chemical biology at Rensselaer. “The ability to predict the separation behavior of a particular protein directly from its structure has considerable implications for biotechnology processes,” said Steven Cramer, professor of chemical and biological engineering at Rensselaer. “The research results thus far indicate that this modeling approach can be used to determine protein behavior for use in bioseparation applications, such as the protein purification methods used in drug discovery. This could potentially reduce the development time required to bring biopharmaceuticals to market.” Tiffany Lohwater | EurekAlert!
"Put an excitation into the system and observe how it evolves." According to physicist Professor Tobias Brixner, this is the credo of optical spectroscopy.... Ultra-short, high-intensity X-ray flashes open the door to the foundations of chemical reactions. Free-electron lasers generate these kinds of pulses, but there is a catch: the pulses vary in duration and energy. An international research team has now presented a solution: Using a ring of 16 detectors and a circularly polarized laser beam, they can determine both factors with attosecond accuracy. Free-electron lasers (FELs) generate extremely short and intense X-ray flashes. Researchers can use these flashes to resolve structures with diameters on the... 13.07.2018 | Event News 12.07.2018 | Event News 03.07.2018 | Event News 16.07.2018 | Physics and Astronomy 16.07.2018 | Life Sciences 16.07.2018 | Earth Sciences
<urn:uuid:f5ea04f3-8abb-46d3-ae03-06602968358f>
3
910
Content Listing
Science & Tech.
30.339392
95,535,069
dup, dup2 - duplicate a file descriptor
int dup(int oldfd); int dup2(int oldfd, int newfd);
dup() and dup2() create a copy of the file descriptor oldfd. After a successful return from dup() or dup2(), the old and new file descriptors may be used interchangeably. They refer to the same open file description (see open(2)) and thus share file offset and file status flags; for example, if the file offset is modified by using lseek(2) on one of the descriptors, the offset is also changed for the other. The two descriptors do not share file descriptor flags (the close-on-exec flag). The close-on-exec flag (FD_CLOEXEC; see fcntl(2)) for the duplicate descriptor is off.
Return value: dup() and dup2() return the new descriptor, or -1 if an error occurred (in which case, errno is set appropriately).
Errors:
- EBADF: oldfd isn't an open file descriptor, or newfd is out of the allowed range for file descriptors.
- EBUSY: (Linux only) This may be returned by dup2() during a race condition with open() and dup().
- EINTR: The dup2() call was interrupted by a signal.
- EMFILE: The process already has the maximum number of file descriptors open and tried to open a new one.
Notes: The error returned by dup2() is different from that returned by fcntl(..., F_DUPFD, ...) when newfd is out of range. On some systems dup2() also sometimes returns EINVAL, like F_DUPFD. If newfd was open, any errors that would have been reported at close() time are lost. A careful programmer will not use dup2() without closing newfd first.
Conforming to: SVr4, 4.3BSD, POSIX.1-2001.
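A minimal usage sketch (added for illustration, not part of the original manual page): the C program below uses dup2() to make standard output refer to a file, so subsequent printf() output lands in that file. The file name "out.log" and the error handling are illustrative choices, not anything mandated by the interface.

#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

int main(void)
{
    /* Open (or create) an ordinary file to receive standard output.
       The name "out.log" is arbitrary. */
    int fd = open("out.log", O_WRONLY | O_CREAT | O_TRUNC, 0644);
    if (fd == -1) {
        perror("open");
        exit(EXIT_FAILURE);
    }

    /* Make descriptor 1 (stdout) a duplicate of fd: both now refer to the
       same open file description and share the file offset and status flags. */
    if (dup2(fd, STDOUT_FILENO) == -1) {
        perror("dup2");
        exit(EXIT_FAILURE);
    }
    close(fd);  /* the extra descriptor is no longer needed; stdout keeps the file open */

    printf("this line is written to out.log rather than the terminal\n");
    return 0;
}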
<urn:uuid:b13422f8-6888-48bb-b3e2-e1f4ae3f9cdc>
3.078125
464
Documentation
Software Dev.
69.179386
95,535,079
We know that potentially toxic trace elements such as mercury and arsenic can accumulate in saltwater fish. They are bound into organic compounds, which can then be found in cellular components such as membranes. Chemists at the University of Graz from Univ.-Prof. Dr. Kevin Francesconi’s working group have discovered the existence of previously unknown arsenic compounds in herring roe. The study was published as a “Very Important Paper” in the renowned chemistry journal Angewandte Chemie. When looking for arsenic compounds in the membranes of marine organisms, the scientists in Graz focussed on fish eggs because they have a particularly high concentration of membranes. In samples of herring roe from the Norwegian Sea, the researchers discovered two previously unknown groups of lipid-soluble arsenic compounds, which make up around 80 percent of the total content of this trace element. [Photo caption, translated from German: Toxic trace elements can accumulate in sea fish such as herring. Photo: pixabay] “For the first time, we were able to prove that arsenic can be found in phosphatidylcholines,” reports Sandra Viczek, MSc, first author of the current publication. “The discovery is important because phosphatidylcholines are core components of membranes and therefore play a biologically important role in the cell metabolism,” highlights Kevin Francesconi. “Furthermore, these types of arsenolipids probably make up over half of all the lipid-soluble arsenic in marine creatures,” adds Kenneth Jensen, the main author of the study. Five new phosphatidylcholines containing arsenic were identified during the current study. However, Jensen believes that “there are probably many more of these complex natural substances.” The next step is to clarify how poisonous the compounds discovered are. Francesconi and his team are investigating this topic in the framework of a research project financed by the Austrian Science Fund (FWF). “In cooperation with toxicologists at the University of Potsdam, we are investigating what the high proportion of these substances in the membrane means from a toxicological point of view, i.e. what effect the phosphatidylcholines containing arsenic have on the cell metabolism.” A related question is how and why these compounds are biosynthesised in the fish in the first place. The pioneering research results were made possible by the recently acquired high resolution mass spectrometer at the “NAWI Graz Central Lab – Environmental, Plant & Microbial Metabolomics“. This instrument allows the components to be fragmented into their constituent parts and identified with utmost precision.
Citation: Viczek, Sandra A.; Jensen, Kenneth B.; Francesconi, Kevin A. Arsenic-containing Phosphatidylcholines: a New Group of Arsenolipids Discovered in Herring Caviar. Angewandte Chemie International Edition, doi: 10.1002/anie.201512031. http://onlinelibrary.wiley.com/doi/10.1002/anie.201512031/abstract
<urn:uuid:e593b5ad-b331-4b3a-b704-3f8bbd46938f>
2.9375
1,371
Content Listing
Science & Tech.
35.521916
95,535,113
The distribution of glacial meltwater in the Amundsen Sea, Antarctica, revealed by dissolved helium and neon
- Authors: Kim, Intae; Rhee, Tae Siek
- Keywords: Amundsen Sea; Antarctica; Basal melting; Glacial meltwater; He; Ne; Araon
- Citation: Kim, Intae, et al. 2016. "The distribution of glacial meltwater in the Amundsen Sea, Antarctica, revealed by dissolved helium and neon". Journal of Geophysical Research: Oceans, 121(3): 1654-1666.
- Abstract: The light noble gases, helium (He) and neon (Ne), dissolved in seawater can be useful tracers of freshwater input from glacial melting because the dissolution of air bubbles trapped in glacial ice results in an approximately tenfold supersaturation. Using He and Ne measurements, we determined, for the first time, the distribution of glacial meltwater (GMW) within the water columns of the Dotson Trough (DT) and in front of the Dotson and Getz Ice Shelves (DIS and GIS, respectively) in the western Amundsen Sea, Antarctica, in the austral summers of 2011 and 2012. The measured saturation anomalies of He and Ne (ΔHe and ΔNe) were in the range of 3-35% and 2-12%, respectively, indicating a significant presence of GMW. Throughout the DT, the highest values of ΔHe (21%) were observed at depths of 400-500 m, corresponding to the layer between the incoming warm Circumpolar Deep Water and the overlying Winter Water. The high ΔHe (and ΔNe) area extended outside of the shelf break, suggesting that GMW is transported more than 300 km offshore. ΔHe was substantially higher in front of the DIS than the GIS, and the highest ΔHe (31%) was observed in the western part of the DIS, where concentrated outflow from the shelf to the offshore was observed. In 2012, the calculated GMW fraction in seawater based on excess He and Ne decreased by 30-40% compared with that in 2011 at both ice shelves, indicating strong temporal variability in glacial melting.
<urn:uuid:7cd7cf78-3ff0-4859-a738-8fd31c924e2f>
2.9375
572
Academic Writing
Science & Tech.
36.713014
95,535,148
Mutations.
General definition (long notes): Any change in DNA sequence is called a mutation.
Abbreviated notes (AN): Mutation (mut) = DNA sequence (seq) change.
Mutations may affect only one gene, or they may affect whole chromosomes. [Slide diagram: chromosomal deletion and insertion illustrated on the segment A B C D E F G H.]
Question: Which is more serious, a point mutation or a frameshift mutation? Why? A frameshift mutation is more serious than a point mutation because it disrupts more codons than a point mutation.
Few chromosomal changes are passed on to the next generation because the zygote usually dies. If the zygote survives, it is often sterile and incapable of producing offspring.
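A concrete illustration (added to these notes, not from the original slides): in the codon sequence AUG-GCA-UCA, a point substitution such as AUG-GCU-UCA changes at most one codon, whereas deleting the single G that follows AUG gives AUG-CAU-CA..., shifting every codon downstream of the mutation. This is why a frameshift usually disrupts far more of the encoded protein than a point mutation does.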
<urn:uuid:ec51e7d8-26b3-4f67-add5-de9e966e3b41>
3.625
181
Knowledge Article
Science & Tech.
49.017429
95,535,162
Temporal range: Jurassic–Cretaceous The Phylloceratina is the ancestral stock, derived from the Ceratitida near the end of the Triassic. The Phylloceratina gave rise to the Lytoceratina near the beginning of the Jurassic which in turn gave rise to the highly specialized Ancyloceratina near the end of the Jurassic. Both the Phylloceratina and Lytoceratina gave rise to various stocks combined in the Ammonitina. These four suborders are further divided into different stocks, comprising various families combined into superfamilies. Some like the Hildoceratoidea and Stephanoceratoidea are restricted to the Jurassic. Others like the Hoplitoidea and Acanthoceratoidea are known only from the Cretaceous. Still others like the Perisphinctoidea are found in both.
<urn:uuid:352fdee9-64b0-43aa-9ac0-ab122f1122a9>
2.765625
279
Knowledge Article
Science & Tech.
19.012921
95,535,173
A resource for probability AND random processes, with hundreds of worked examples and probability and Fourier transform tables. This survival guide in probability and random processes eliminates the need to pore through several resources to find a certain formula or table. It offers a compendium of most distribution functions used by communication engineers, queuing theory specialists, signal processing engineers, biomedical engineers, physicists, and students. Key topics covered include:
- Random variables and most of their frequently used discrete and continuous probability distribution functions
- Moments, transformations, and convergences of random variables
- Characteristic, generating, and moment-generating functions
- Computer generation of random variates
- Estimation theory and the associated orthogonality principle
- Linear vector spaces and matrix theory with vector and matrix differentiation concepts
- Vector random variables
- Random processes and stationarity concepts
- Extensive classification of random processes
- Random processes through linear systems and the associated Wiener and Kalman filters
- Application of probability in single photon emission tomography (SPECT)
More than 400 figures drawn to scale assist readers in understanding and applying theory. Many of these figures accompany the more than 300 examples given to help readers visualize how to solve the problem at hand. In many instances, worked examples are solved with more than one approach to illustrate how different probability methodologies can work for the same problem. Several probability tables with accuracy up to nine decimal places are provided in the appendices for quick reference. A special feature is the graphical presentation of the commonly occurring Fourier transforms, where both time and frequency functions are drawn to scale. This book is of particular value to undergraduate and graduate students in electrical, computer, and civil engineering, as well as students in physics and applied mathematics. Engineers, computer scientists, biostatisticians, and researchers in communications will also benefit from having a single resource to address most issues in probability and random processes.
<urn:uuid:4ca0d806-7fff-49b5-9f42-404f5c9af35f>
2.78125
362
Product Page
Science & Tech.
-9.798944
95,535,185
To start writing your first program in Visual Basic (VB), first open the Visual Basic 2010 IDE. Then follow these steps to start writing the code:
- Click on “New Project”; it will open the new project window.
- Click on “Console Application”; we could also use a “Windows Forms Application”, but a beginner to VB should first get familiar with the syntax used in VB.
- Type a proper name for your project at the bottom of the project window.
- Click “OK” and it will load your project.
- Start coding.
See Also: Setting Up Visual Basic 2010 (http://codejow.com/starting-visual-basic-programming-language/)
Below you can see the project “Hello CodeJow Viewers” which I have loaded. The project starts with Module and ends with End Module. Do remember that every block in Visual Basic starts and ends with its type. For example, if you create a class in your program by the name of “ABC”, you should always end it with “End Class”. Another example of this rule is the Sub above. Every program you type in Visual Basic must have a main sub, or Sub Main(). It is the start of the program. If a program does not have a main sub, it will not run. So let’s type our first program in Visual Basic 2010. Follow the steps below; all the lines of code that we will be typing must be inside the main sub for now.
- Type – Console.WriteLine(“ ”). Between the two quotation marks you can type your message to be displayed on the screen. We have typed “Hello CodeJow Viewers!”. What this line does is write the message you type between the quotation marks to the console.
- Type – Console.ReadLine(). In this case you leave the parentheses empty; we simply call these empty arguments. This line only keeps the console open so that the user may read the message. If you do not write this line of code, the message will still be written to the console, but when you run your program it will quickly display the message and close the window; it will not wait for the user to read the message.
- Once you are finished with writing the lines of code, you can simply press the run button. It will compile the code and run your program in the CMD window. Between the two quotation marks, you can write any message of yours to be displayed in the CMD window.
Congratulations! You have just typed your very first program in Visual Basic 2010. Stay tuned for further tutorials and articles on Visual Basic Programming Language.
<urn:uuid:1e995e00-6a3c-4b8c-9956-22d6b8266f6e>
3.609375
645
Tutorial
Software Dev.
65.249105
95,535,192
By: Alessandro De Maddalena. 198 pages, Col photos throughout. Sharks are the sea's most feared predators, and many people think of them as ferocious monsters - but they are also fascinating, graceful and powerful creatures that evolved long before dinosaurs. Their physiological, anatomical and behavioural adaptations enable them to capture, ingest and digest food as efficiently as possible. With this extraordinary constitution, a variety of animals have been discovered in the stomachs of sharks in various locations. The contents include swordfish, sharks, seals, whales, dolphins, marine turtles, sea birds, an entire reindeer without horns, as well as a variety of indigestible oddities! Stunning photographs complement the most complete and updated text on diets and predatory tactics of sharks. Answers are provided to a myriad of questions, such as: What predatory tactics do sharks use to catch their prey? Why do some attack humans? Which animals are especially vulnerable to shark predation? Find out more about these misunderstood and highly endangered animals.
<urn:uuid:3938d216-990d-4218-911b-bf8c7a0e7ff9>
3.15625
305
Product Page
Science & Tech.
38.023846
95,535,210
Web Date: October 17, 2011 Silver and copper ions shedding from synthetic nanoparticles—and even from ordinary metal objects such as silverware and jewelry—combine to form new, smaller nanoparticles, a team of Oregon chemists has found. The discovery provides additional details about nanoparticle stability and behavior in the environment and implies that people have been exposed to significant background levels of metal nanoparticles for millennia. Silver nanomaterials have been used for more than a century as a disinfectant and algaecide and in recent decades as an antimicrobial additive to clothing, cosmetics, pharmaceuticals, and consumer electronics. Both so-called nanosilver and naturally occurring silver particles are known to shed silver ions, which have been detected in rivers and wastewater treatment facilities and are thought to end up as innocuous silver sulfide (Environ. Sci. Technol., DOI: 10.1021/es103316q, 10.1021/es103946g). Nanosilver is under scrutiny for its potential toxicity and persistence in the environment. The Environmental Protection Agency has struggled with how to test for and possibly regulate the materials because the risks, which may be dependent on particle size, are largely unknown (C&EN, Oct. 10, page 4). Coming up with a definitive answer has been slow because of the difficulty in detecting and monitoring nanoparticles in products and in the environment. James E. Hutchison and Richard D. Glover of the University of Oregon, in conjunction with John M. Miller of Dune Sciences, a company Hutchison and Miller cofounded, have now developed a strategy to directly monitor weathering of synthetic nanoparticles and metal objects. The researchers tether particles to a silicon substrate, subject them to varying environmental conditions, and observe the results by electron microscopy and scanning probe microscopy techniques (ACS Nano, DOI: 10.1021/nn2031319). This method was made possible by specialized silicon-based grids developed in Hutchison’s lab and commercialized by Dune Sciences. The researchers found that at room temperature and at relative humidity above 50%, “daughter” silver nanoparticles spontaneously form over days to weeks around “parent” nanoparticles. They believe air oxidizes the silver and resulting ions dissolve and diffuse in an adsorbed water layer on the substrate. New, smaller particles form by chemical and/or photoreduction of coalesced ions. The team showed that macroscale silver objects such as wire, jewelry, and eating utensils placed in contact with surfaces form nanoparticles in the same fashion. Copper objects form nanoparticles as well, Hutchison says, suggesting that the phenomenon is general for readily oxidized metals. The findings imply that nanoparticle formation is an intrinsic property of some metals and suggests that people have been exposed to significant background levels of incidental natural and manmade nanoparticles in the environment for millennia, Hutchison says. “For that reason, environmental health and safety concerns should not be defined or regulated based upon size,” he believes. “The findings also beg the question of what other incidental nanomaterials might exist in nature that we haven’t yet developed the tools to detect.” “This exciting finding highlights the measurement challenges nanomaterials scientists face,” observes Robert I. 
MacCuspie, a research chemist at the National Institute of Standards & Technology, who characterizes nanoparticle surfaces and studies the behavior of nanoparticles in the environment. “Seemingly trivial changes in sample handling conditions and timing can alter nanoparticle size distribution measurement results,” MacCuspie says. The Oregon team’s paper helps emphasize that establishing baseline levels of nanosilver “is an urgent challenge” and that developing measurement protocols in parallel with reference materials is crucial for comparing results between laboratories.
<urn:uuid:cb213537-10f6-4184-adb2-e8af8abb11ef>
3.484375
809
Knowledge Article
Science & Tech.
16.854254
95,535,233
The Theory of Probability : Explorations and Applications From classical foundations to advanced modern theory, this self-contained and comprehensive guide to probability weaves together mathematical proofs, historical context and richly detailed illustrative applications. A theorem discovery approach is used throughout, setting each proof within its historical setting and is accompanied by a consistent emphasis on elementary methods of proof. Each topic is presented in a modular framework, combining fundamental concepts with worked examples, problems and digressions which, although mathematically rigorous, require no specialised or advanced mathematical background. Augmenting this core material are over 80 richly embellished practical applications of probability theory, drawn from a broad spectrum of areas both classical and modern, each tailor-made to illustrate the magnificent scope of the formal results. Providing a solid grounding in practical probability, without sacrificing mathematical rigour or historical richness, this insightful book is a fascinating reference and essential resource, for all engineers, computer scientists and mathematicians. - Electronic book text - 05 Dec 2012 - CAMBRIDGE UNIVERSITY PRESS - Cambridge University Press (Virtual Publishing) - Cambridge, United Kingdom - 100 b/w illus. 26 tables 528 exercises 'This is a gentle and rich book that is a delight to read. Gentleness comes from the attention to detail; few readers will ever find themselves 'stuck' on any steps of the derivations or proofs. Richness comes from the many examples and historical anecdotes that support the central themes. The text will support courses of many styles and it is especially attractive for self-guided study.' J. J. Michael Steele, University of Pennsylvania 'This book does an excellent job of covering the basic material for a first course in the theory of probability. It is notable for the entertaining coverage of many interesting examples, several of which give a taste of significant fields where the subject is applied.' Venkat Anantharam, University of California, Berkeley 'This book presents one of the most refreshing treatments of the theory of probability. By providing excellent coverage with both intuition and rigor, together with engaging examples and applications, [it] presents a wonderfully readable and thorough introduction to this important subject.' Sanjeev Kulkarni, Princeton University 'This is a remarkable book, a theory of probability that succeeds in being both readable and rigorous, both expository and entertaining ... a magnificent undertaking, impeccably presented, and one that is sure to reward repeated reading.' Tom Fanshawe, Significance (magazine of The Royal Statistical Society) '... well-written, and although the topics are discussed with all mathematical rigour, it usually does not exceed the capabilities of an advanced undergraduate student ... it can be recommended without constraint as a textbook for advanced undergraduates, but also as a reference and interesting read for experts.' Manuel Vogel, Contemporary Physics Table of contents Part I. Elements: 1. Probability spaces; 2. Conditional probability; 3. A first look at independence; 4. Probability sieves; 5. Numbers play a game of chance; 6. The normal law; 7. Probabilities on the real line; 8. The Bernoulli schema; 9. The essence of randomness; 10. The coda of the normal; Part II. Foundations: 11. Distribution functions and measure; 12. Random variables; 13. Great expectations; 14. 
Variations on a theme of integration; 15. Laplace transforms; 16. The law of large numbers; 17. From inequalities to concentration; 18. Poisson approximation; 19. Convergence in law, selection theorems; 20. Normal approximation; Part III. Appendices: 21. Sequences, functions, spaces.
<urn:uuid:94e35655-d474-49e1-90a1-1b5deef2a64a>
2.53125
753
Product Page
Science & Tech.
27.514696
95,535,252
By: LC van Rijn. 1200 pages, illustrations, tables. Contains all details of sediment transport (sand, silt and mud) in rivers, estuaries and coastal seas. A CD-ROM (TRANSPOR-program) is available for computation of: wave length, near-bed peak orbital velocities, bed-shear stress, initiation of motion parameters, fall velocity of sediment, velocity and sand concentration, bed-load and suspended load transport, bed-form height and length. This edition consists of the original 1993 edition and a 500-page supplement. The original 1993 edition can also be purchased separately.
<urn:uuid:e7dda20d-7a03-4efc-9cbc-d38f73e3313e>
2.546875
201
Product Page
Science & Tech.
46.48712
95,535,282
Recent European heat waves have raised interest in the impact of land conditions, in particular soil moisture, on temperature extremes. Observations from an extensive network of flux towers along with earth observation products reveal a contrasting response of forests and crop-/grassland in their water and energy budgets to heat waves. In the short term, forests primarily increase the sensible heat flux into the atmosphere in response to increased available energy at the land surface, in contrast to grasses that show only elevated latent heating. In the long term, this elevated latent heating accelerates depletion of soil moisture. We employ multi-year earth observations by MODIS aboard Terra and Aqua to characterize albedo and temperature anomalies for pixels with predominant forest or short vegetation cover. Whereas scenes in June 2003 and July 2006 show little difference between temperature anomalies, a scene in Central France in August 2003 reveals an increasing difference in temperature anomalies, with the grasslands being warmer than the forests. These observations are consistent with the hypothesis of a phase transition induced by reduced evaporative cooling over grasslands during long-lasting heat waves. Albedo changes are significant, but their impact on the land surface energy balance is an order of magnitude smaller than the impact of Bowen ratio changes.
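For context (standard textbook definitions added here, not taken from the abstract itself), the Bowen ratio compares sensible and latent heat fluxes in the surface energy balance:

R_n = H + \lambda E + G, \qquad B = \frac{H}{\lambda E}

where R_n is net radiation, H the sensible heat flux, \lambda E the latent heat flux, and G the ground heat flux. When soil moisture is depleted, \lambda E (evaporative cooling) falls and B rises, so more of the available energy R_n - G goes into heating the near-surface air, which is the mechanism invoked above for grasslands during prolonged heat waves.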
<urn:uuid:0b720244-af81-4f52-93b1-b0d77935d3ba>
3.359375
271
Academic Writing
Science & Tech.
12.284712
95,535,297
There are 2 code paths for validation failure, and in each case we provide the user with an appropriate error message. The error provider simply takes a control and a message in the SetError method and does the rest of the work for you! Each of the controls implements the Validating event. As you can see in the example, ValidateChildren is called as a result of the Click event, causing the Validating event to be sent to each of the controls. If the user enters invalid data they'll see an icon, shown in Figure 3. After the Validating event returns without being cancelled, meaning we have valid user input, the Validated event will be raised. This is fine if you're only concerned about the data that is entered being correct. The app also uses an ErrorProvider control to give the user feedback. Yesterday, I did not understand how to use the OK button's Click event to perform this validation. Now clicking the OK button causes the ErrorProvider to do its thing where a control is not valid, and the dialog no longer closes unexpectedly. And what should the DataSource Update Mode be - OnPropertyChanged or OnValidation? I want to check what the user is writing in a textbox before I save it in a database. I guess I can always write some ifs or some try-catch blocks, but I was wondering if there's a better method.
<urn:uuid:9c7b4f7d-a5cc-42b3-8c2a-3bc7eb44198c>
2.78125
289
Q&A Forum
Software Dev.
52.69146
95,535,332
Authors: George Rajna. Researchers at the UAB have come up with a method to measure the strength of the superposition coherence in any given quantum state. Experiments tested whether electrons could escape an atom instantaneously. Quantum entanglement can improve the sensitivity of a measurement, as has been demonstrated previously for atomic clocks and magnetic-field sensors. Thanks to a new fabrication technique, quantum sensing abilities are now approaching this scale of precision. For decades scientists have known that a quantum computer—a device that stores and manipulates information in quantum objects such as atoms or photons—could theoretically perform certain calculations far faster than today's computing schemes. Magnets and magnetic phenomena underpin the vast majority of modern data storage, and the measurement scales for research focused on magnetic behaviors continue to shrink with the rest of digital technology. Scientists have recently created a new spintronics material called bismuthene, which has similar properties to that of graphene. The expanding field of spintronics promises a new generation of devices by taking advantage of the spin degree of freedom of the electron in addition to its charge to create new functionalities not possible with conventional electronics. An international team of researchers, working at the Department of Energy's Lawrence Berkeley National Laboratory (Berkeley Lab) and UC Berkeley, fabricated an atomically thin material and measured its exotic and durable properties that make it a promising candidate for a budding branch of electronics known as "spintronics."
<urn:uuid:90b92108-ae8b-4f77-b428-5e18cc02a9e0>
3.296875
466
Truncated
Science & Tech.
24.610767
95,535,335
The problem of reflection and refraction of a plane acoustic wave at the surface of a solid body moving in a fluid is solved. Expressions are obtained for the reflection and transmission coefficients as functions of the characteristics of the incident wave, the adjacent media, and the speed of the solid. It is found that the reflection and transmission coefficients depend not only on the angle of incidence and the characteristics of the media, but also on the frequency of the incident wave and the speed of the body. Keywords: acoustic waves, reflection, refraction and reflection angles, the frequency of the reflected and refracted waves, reflection and transmission coefficients. "Coefficients of reflection and transmission of acoustic waves as functions of the speed of motion of the reflecting surface" (translated from the transliterated Russian title), Scientific Works of Kharkiv National Air Force University.
<urn:uuid:abe9e2f9-e811-4893-8161-49d6b961721a>
2.890625
186
Academic Writing
Science & Tech.
0.192534
95,535,338
Chemistry: KMT Gas Laws
What is kinetic energy? The energy of motion.
What is pressure? The amount of force exerted over a given area.
Why is stepping on a nail more painful than lying on a bed of nails? On a bed of nails the force is spread out over a greater area, which decreases the pressure.
What is 33.6 kPa converted to atm? 0.33 atm.
What is 0.33 atm converted to kPa? About 33.4 kPa.
What is 0.829 atm converted to mmHg? 630.04 mmHg.
What is 630.04 mmHg converted to atm? 0.829 atm.
What are the four variables used to describe gases? Pressure, temperature, volume, and amount in moles.
What happens to pressure when temperature is increased? Pressure increases (Gay-Lussac's Law).
What happens to pressure when volume is increased? Pressure decreases (Boyle's Law).
What happens to pressure if the number of moles of gas is increased? Pressure increases (Ideal Gas Law).
The air inside a bike pump is compressed from 2 L to 1 L while T remains constant. What happens to pressure? The pressure doubles (Boyle's Law).
As pressure increases, volume decreases, meaning they are inversely proportional (Boyle's Law).
As temperature decreases, pressure decreases, meaning they are directly proportional (Gay-Lussac's Law).
As volume increases, temperature increases, meaning they are directly proportional (Charles's Law).
As pressure increases, the number of moles (n) increases, meaning they are directly proportional (Ideal Gas Law).
An inflated balloon is left in your car on a cold winter day; what happens to its volume? The balloon's volume will decrease (Charles's Law).
What is 35 °C in K? 308 K.
What is 308 K in °C? 35 °C.
What is -12 °C in K? 261 K.
What is 261 K in °C? -12 °C.
What is Gay-Lussac's Law? The temperature of a gas is directly related to its pressure if the volume is kept constant.
What is Boyle's Law? The volume of a gas is inversely related to its pressure if temperature remains constant.
What is Charles's Law? The temperature of a gas is directly related to its volume if pressure remains constant.
Why shouldn't you put an aerosol can in a fireplace, and which gas law explains it? The can may explode, according to Gay-Lussac's Law.
What are standard temperature (in K) and pressure (in atm)? 273 K and 1 atm.
How does the average kinetic energy of a collection of particles change with temperature? Average kinetic energy increases as temperature increases.
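A short Python sketch of the conversions and proportionalities quizzed above; the constants (101.325 kPa and 760 mmHg per atmosphere, and the 273 K offset the cards use) are standard values, and the helper names are mine:

KPA_PER_ATM = 101.325
MMHG_PER_ATM = 760.0

def kpa_to_atm(kpa):
    return kpa / KPA_PER_ATM

def atm_to_mmhg(atm):
    return atm * MMHG_PER_ATM

def c_to_k(celsius):
    return celsius + 273   # the cards round to whole kelvins

def boyle_p2(p1, v1, v2):
    """Constant T: P1*V1 = P2*V2, so halving the volume doubles the pressure."""
    return p1 * v1 / v2

def gay_lussac_p2(p1, t1_k, t2_k):
    """Constant V: P1/T1 = P2/T2 (temperatures in kelvin)."""
    return p1 * t2_k / t1_k

print(round(kpa_to_atm(33.6), 2))    # 0.33 atm
print(round(atm_to_mmhg(0.829), 2))  # 630.04 mmHg
print(c_to_k(35))                    # 308 K
print(boyle_p2(1.0, 2.0, 1.0))       # bike pump: 2 L -> 1 L doubles P to 2.0 atm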
<urn:uuid:39f6dd95-c5d2-4286-83cc-4c6e3f7f7d5f>
2.875
540
Q&A Forum
Science & Tech.
58.1488
95,535,344
By studying the structure of actin-depolymerising factor 1 (ADF1), a key protein involved in controlling the movement of malaria parasites, the researchers have demonstrated that scientists' decades-long understanding of the relationship between protein structure and cell movement is flawed. Dr Jake Baum and Mr Wilson Wong from the institute's Infection and Immunity division and Dr Jacqui Gulbis from the Structural Biology division, in collaboration with Dr Dave Kovar from the University of Chicago, US, led the research, which appears in today's edition of the Proceedings of the National Academy of Sciences USA.Dr Baum said actin-depolymerising factors (ADFs) and their genetic regulators have long been known to be involved in controlling cell movement, including the movement of malaria parasites and movement of cancer cells through the body. Anti-cancer treatments that exploit this knowledge are under development. "For many years research in yeast, plants and humans has suggested that the ability of ADFs to dismantle actin polymers – effectively disengaging the clutch – required a small molecular 'finger' to break the actin in two," Dr Baum said. "However, when we looked at the malaria ADF1 protein, we were surprised to discover that it lacked this molecular 'finger', yet remarkably was still able to cut the polymers. We discovered that a previously overlooked part of the protein, effectively the 'knuckle' of the finger-like protrusion, was responsible for dismantling the actin; we then discovered this 'hidden' domain was present across all ADFs." Mr Wong said that the Australian Synchrotron was critical in providing the extraordinary detail that helped the team pinpoint the protein 'knuckle'. "This is the first time a 3D image of the ADF protein has been captured in such detail from any cell type," Mr Wong said. "Imaging the protein structure at such high resolution was critical in proving beyond question the segment of the protein responsible for cutting actin polymers. Obtaining that image would have been impossible without the synchrotron facilities." Dr Baum said the new knowledge will give researchers a much clearer understanding of one of the fundamental steps governing how cells across all species grow, divide and, importantly, move. "Knowing that this one small segment of the protein is singularly responsible for ADF1 function means that we need to focus on an entirely new target not only for developing anti-malarial treatments, but also other diseases where potential treatments target actin, such as anti-cancer therapeutics," Dr Baum said. "Malaria researchers are normally used to following insights from other biological systems; this is a case of the exception proving the rule: where the malaria parasite, being so unusual, reveals how all other ADFs across nature work." More than 250 million people contract malaria each year, and almost one million people, mostly children, die from the disease. The malaria parasite has developed resistance to most of the therapeutic agents available for treating the disease, so identifying novel ways of targeting the parasite is crucial. Dr Baum said that the discovery could lead to development of drugs entirely geared toward preventing malaria infection, without adverse effects on human cells. "One of the primary goals of the global fight against malaria is to develop novel drugs that prevent infection and transmission in all hosts, to break the malaria cycle," Dr Baum said. 
"There is a very real possibility that, in the future, drugs could be developed that 'jam' this molecular 'clutch', meaning the malaria parasite cannot move and continue to infect cells in any of its conventional hosts, which would be a huge breakthrough for the field." This project was funded by the National Health and Medical Research Council (NHMRC). Liz Williams | EurekAlert! Scientists uncover the role of a protein in production & survival of myelin-forming cells 19.07.2018 | Advanced Science Research Center, GC/CUNY NYSCF researchers develop novel bioengineering technique for personalized bone grafts 18.07.2018 | New York Stem Cell Foundation A new manufacturing technique uses a process similar to newspaper printing to form smoother and more flexible metals for making ultrafast electronic devices. The low-cost process, developed by Purdue University researchers, combines tools already used in industry for manufacturing metals on a large scale, but uses... For the first time ever, scientists have determined the cosmic origin of highest-energy neutrinos. A research group led by IceCube scientist Elisa Resconi, spokesperson of the Collaborative Research Center SFB1258 at the Technical University of Munich (TUM), provides an important piece of evidence that the particles detected by the IceCube neutrino telescope at the South Pole originate from a galaxy four billion light-years away from Earth. To rule out other origins with certainty, the team led by neutrino physicist Elisa Resconi from the Technical University of Munich and multi-wavelength... For the first time a team of researchers have discovered two different phases of magnetic skyrmions in a single material. Physicists of the Technical Universities of Munich and Dresden and the University of Cologne can now better study and understand the properties of these magnetic structures, which are important for both basic research and applications. Whirlpools are an everyday experience in a bath tub: When the water is drained a circular vortex is formed. Typically, such whirls are rather stable. Similar... Physicists working with Roland Wester at the University of Innsbruck have investigated if and how chemical reactions can be influenced by targeted vibrational excitation of the reactants. They were able to demonstrate that excitation with a laser beam does not affect the efficiency of a chemical exchange reaction and that the excited molecular group acts only as a spectator in the reaction. A frequently used reaction in organic chemistry is nucleophilic substitution. It plays, for example, an important role in in the synthesis of new chemical... Optical spectroscopy allows investigating the energy structure and dynamic properties of complex quantum systems. Researchers from the University of Würzburg present two new approaches of coherent two-dimensional spectroscopy. "Put an excitation into the system and observe how it evolves." According to physicist Professor Tobias Brixner, this is the credo of optical spectroscopy.... 13.07.2018 | Event News 12.07.2018 | Event News 03.07.2018 | Event News 20.07.2018 | Power and Electrical Engineering 20.07.2018 | Information Technology 20.07.2018 | Materials Sciences
<urn:uuid:37e63037-4130-4102-ad61-694ffa65c18d>
2.6875
1,350
Content Listing
Science & Tech.
35.742869
95,535,378
Editing the Genetic Code of Living Bacteria A new method for making genomewide changes to organisms could lead to better ways of producing useful new drugs and chemicals. Researchers at Harvard Medical School have come up with a find-and-replace tool for editing the genetic code of living bacteria. The technique offers a more powerful way to manipulate living organisms, and could eventually be used to make industrial microbes that are safer, more robust, and produce new kinds of drugs and chemicals. Most of the genes that make up an organism’s genetic code are essentially design plans for making proteins. Each gene consists of a long strand of molecules, called nucleotides. Three such nucleotides—a group known as a codon—tell a cell which amino acids it should use while building a protein. Cells can use 22 naturally occurring amino acids as building blocks to make proteins, but chemists have synthesized over a hundred so-called “unnatural amino acids” in the lab using the tools of chemistry, not biology. Naturally occurring organisms can’t make or build with these chemicals. Organisms that could build proteins using these amino acids would open up new possibilities, particularly in drug development. But normal cells lack the necessary genetic code to work with these unnatural amino acids. A team at Harvard, led by George Church, has developed a tool for editing genes that could change this. To make microbes capable of building proteins that incorporate unnatural amino acids, researchers need to be able both to edit every instance of certain codons in the genome and to manipulate the cell machinery that reads those codons. The new tool lets them do the first part. Church says he hopes to achieve three goals with the approach. First, he wants to build bacteria that can produce new drugs and other chemicals. Second, he wants to genetically engineer bacteria that cannot live outside the lab because they need unnatural amino acids to survive—a feat that could prevent the environmental damage that might result from such bacteria being let loose in the world. And third, he wants to make bacteria that are immune to viruses, since viruses can cause problems in industrial production. “The way to achieve all these things is to change the [meaning of the] genetic code of your favorite organism,” says Church. Thursday, in the journal Science, Church’s group described how it deleted all 314 instances of a particular codon in the genome of living E. coli and replaced them with another codon. The work was co-led by Farren Isaacs, now assistant professor of molecular biology at Yale University. The process involves making small-scale genetic changes in multiple strains of E. coli, then combining them. Researchers at the J. Craig Venter Institute have previously demonstrated a different way to edit a whole genome. This is the same group that made the first “synthetic living cell” last year. The Venter group edits the genome on a computer, and then synthesizes the entire thing using a combination of machinery and yeast cells; after that, the genome is transplanted into a recipient cell. Church’s method introduces changes in living cells. He believes the advantage of this approach is that it’s possible to correct mistakes as they happen on the way toward making larger changes. Church hopes his latest work will convince other researchers of the value of “genome-scale” engineering.
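As a purely illustrative aside, the "find-and-replace" idea is easy to picture at the level of a DNA string. The sketch below is a toy in-silico version only - the work described above performs the replacement in living cells, not in software - and the codon pair used (TAG to TAA) is an example chosen for illustration rather than a detail taken from the paper:

# Toy illustration of replacing every in-frame instance of one codon with a synonym
# in a protein-coding DNA string. A string operation only; the codon choice is an example.

def replace_codon(coding_sequence, old="TAG", new="TAA"):
    """Replace `old` with `new` at every in-frame codon position of a coding sequence."""
    assert len(coding_sequence) % 3 == 0, "expect a whole number of codons"
    codons = [coding_sequence[i:i + 3] for i in range(0, len(coding_sequence), 3)]
    edited = [new if codon == old else codon for codon in codons]
    return "".join(edited), sum(codon == old for codon in codons)

edited, count = replace_codon("ATGGCTTAGGGCTAG")
print(edited, count)   # ATGGCTTAAGGCTAA 2  -- both in-frame TAGs replaced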
Both his method and that developed at the Venter Institute involve using DNA synthesizer machines to make large amounts of DNA for the engineered cells to take up. DNA synthesis is still expensive. And the time involved in both techniques, though it’s getting shorter, is another expense. “We need to bring costs down, and think about ease of use,” he says. Making proteins with unnatural components is so useful that biologists have been doing it, albeit inefficiently, for decades, says David Tirrell, professor of chemical engineering at Caltech. Tirrell is not affiliated with the Harvard group. Two companies—Allozyme, which Tirrell is associated with, and Ambrix—are both making protein drugs that incorporate unnatural amino acids. In both cases, they have engineered bacteria that can make proteins that include just one unnatural amino acid. Making organisms that can use more of these unnatural chemicals to produce new kinds of molecules would open up new frontiers for protein drugs, he says. Proteins with unnatural components might also be able to cross barriers in the body that are not easily breached today, such as the blood-brain barrier. Church’s group is beginning a collaboration with Ambrix.
<urn:uuid:e2fe11cd-2ed8-4ebe-b48a-5e452292d149>
3.78125
972
News Article
Science & Tech.
39.419585
95,535,406
From the UNIVERSITY OF CALIFORNIA – BERKELEY Einstein’s equations allow a non-determinist future inside some black holes In the real world, your past uniquely determines your future. If a physicist knows how the universe starts out, she can calculate its future for all time and all space. But a UC Berkeley mathematician has found some types of black holes in which this law breaks down. If someone were to venture into one of these relatively benign black holes, they could survive, but their past would be obliterated and they could have an infinite number of possible futures. Such claims have been made in the past, and physicists have invoked “strong cosmic censorship” to explain it away. That is, something catastrophic – typically a horrible death – would prevent observers from actually entering a region of spacetime where their future was not uniquely determined. This principle, first proposed 40 years ago by physicist Roger Penrose, keeps sacrosanct an idea – determinism – key to any physical theory. That is, given the past and present, the physical laws of the universe do not allow more than one possible future. But, says UC Berkeley postdoctoral fellow Peter Hintz, mathematical calculations show that for some specific types of black holes in a universe like ours, which is expanding at an accelerating rate, it is possible to survive the passage from a deterministic world into a non-deterministic black hole. What life would be like in a space where the future was unpredictable is unclear. But the finding does not mean that Einstein’s equations of general relativity, which so far perfectly describe the evolution of the cosmos, are wrong, said Hintz, a Clay Research Fellow. “No physicist is going to travel into a black hole and measure it. This is a math question. But from that point of view, this makes Einstein’s equations mathematically more interesting,” he said. “This is a question one can really only study mathematically, but it has physical, almost philosophical implications, which makes it very cool.” “This … conclusion corresponds to a severe failure of determinism in general relativity that cannot be taken lightly in view of the importance in modern cosmology” of accelerating expansion, said his colleagues at the University of Lisbon in Portugal, Vitor Cardoso, João Costa and Kyriakos Destounis, and at Utrecht University, Aron Jansen. As quoted by Physics World, Gary Horowitz of UC Santa Barbara, who was not involved in the research, said that the study provides “the best evidence I know for a violation of strong cosmic censorship in a theory of gravity and electromagnetism.” Hintz and his colleagues published a paper describing these unusual black holes last month in the journal Physical Review Letters. Beyond the event horizon Black holes are bizarre objects that get their name from the fact that nothing can escape their gravity, not even light. If you venture too close and cross the so-called event horizon, you’ll never escape. For small black holes, you’d never survive such a close approach anyway. The tidal forces close to the event horizon are enough to spaghettify anything: that is, stretch it until it’s a string of atoms. But for large black holes, like the supermassive objects at the cores of galaxies like the Milky Way, which weigh tens of millions if not billions of times the mass of a star, crossing the event horizon would be, well, uneventful. 
Because it should be possible to survive the transition from our world to the black hole world, physicists and mathematicians have long wondered what that world would look like, and have turned to Einstein’s equations of general relativity to predict the world inside a black hole. These equations work well until an observer reaches the center or singularity, where in theoretical calculations the curvature of spacetime becomes infinite. Even before reaching the center, however, a black hole explorer – who would never be able to communicate what she found to the outside world – could encounter some weird and deadly milestones. Hintz studies a specific type of black hole – a standard, non-rotating black hole with an electrical charge – and such an object has a so-called Cauchy horizon within the event horizon. The Cauchy horizon is the spot where determinism breaks down, where the past no longer determines the future. Physicists, including Penrose, have argued that no observer could ever pass through the Cauchy horizon point because they would be annihilated. As the argument goes, as an observer approaches the horizon, time slows down, since clocks tick slower in a strong gravitational field. As light, gravitational waves and anything else encountering the black hole fall inevitably toward the Cauchy horizon, an observer also falling inward would eventually see all this energy barreling in at the same time. In effect, all the energy the black hole sees over the lifetime of the universe hits the Cauchy horizon at the same time, blasting into oblivion any observer who gets that far. You can’t see forever in an expanding universe Hintz realized, however, that this may not apply in an expanding universe that is accelerating, such as our own. Because spacetime is being increasingly pulled apart, much of the distant universe will not affect the black hole at all, since that energy can’t travel faster than the speed of light. In fact, the energy available to fall into the black hole is only that contained within the observable horizon: the volume of the universe that the black hole can expect to see over the course of its existence. For us, for example, the observable horizon is bigger than the 13.8 billion light years we can see into the past, because it includes everything that we will see forever into the future. The accelerating expansion of the universe will prevent us from seeing beyond a horizon of about 46.5 billion light years. In that scenario, the expansion of the universe counteracts the amplification caused by time dilation inside the black hole, and for certain situations, cancels it entirely. In those cases – specifically, smooth, non-rotating black holes with a large electrical charge, so-called Reissner-Nordström-de Sitter black holes – an observer could survive passing through the Cauchy horizon and into a non-deterministic world. “There are some exact solutions of Einstein’s equations that are perfectly smooth, with no kinks, no tidal forces going to infinity, where everything is perfectly well behaved up to this Cauchy horizon and beyond,” he said, noting that the passage through the horizon would be painful but brief. “After that, all bets are off; in some cases, such as a Reissner-Nordström-de Sitter black hole, one can avoid the central singularity altogether and live forever in a universe unknown.” Admittedly, he said, charged black holes are unlikely to exist, since they’d attract oppositely charged matter until they became neutral. 
However, the mathematical solutions for charged black holes are used as proxies for what would happen inside rotating black holes, which are probably the norm. Hintz argues that smooth, rotating black holes, called Kerr-Newman-de Sitter black holes, would behave the same way. “That is upsetting, the idea that you could set out with an electrically charged star that undergoes collapse to a black hole, and then Alice travels inside this black hole and if the black hole parameters are sufficiently extremal, it could be that she can just cross the Cauchy horizon, survives that and reaches a region of the universe where knowing the complete initial state of the star, she will not be able to say what is going to happen,” Hintz said. “It is no longer uniquely determined by full knowledge of the initial conditions. That is why it’s very troublesome.” He discovered these types of black holes by teaming up with Cardoso and his colleagues, who calculated how a black hole rings when struck by gravitational waves, and which of its tones and overtones lasted the longest. In some cases, even the longest surviving frequency decayed fast enough to prevent the amplification from turning the Cauchy horizon into a dead zone. Hintz’s paper has already sparked other papers, one of which purports to show that most well-behaved black holes will not violate determinism. But Hintz insists that one instance of violation is one too many. “People had been complacent for some 20 years, since the mid ’90s, that strong cosmological censorship is always verified,” he said. “We challenge that point of view.” via Watts Up With That? February 22, 2018 at 11:23AM
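For readers who want the textbook formula behind the Reissner-Nordström-de Sitter discussion above (standard background, not anything specific to Hintz's analysis): in geometrized units, a non-rotating black hole of mass M and charge Q in a universe with positive cosmological constant \Lambda has the lapse function

$$ f(r) = 1 - \frac{2M}{r} + \frac{Q^2}{r^2} - \frac{\Lambda r^2}{3}, $$

and its horizons are the positive roots of f(r) = 0: the Cauchy (inner) horizon r_-, the event horizon r_+, and the cosmological horizon r_c, with r_- < r_+ < r_c. The question raised in the article is what happens to an observer who crosses r_-.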
<urn:uuid:f107ddfc-3c91-4e39-a5be-5dabd0b6bcfd>
3.265625
1,851
News Article
Science & Tech.
32.440309
95,535,410
By: V Singh 415 pages, Illus, maps The present book on the Biodiversity of Ranthambhore Tiger Reserve deals with 539 species of higher plants and 361 species of animals (vertebrates). Besides geographical position and topography, the abiotic components, viz. geology, soils, water, climatic conditions etc., which determine the composition of biota in an ecosystem, have been discussed in detail. Correct and valid names have been adopted for the floral and faunal elements, along with local and English names. Keys have been provided for plant species from infra-specific to family level for easy identification. A short diagnostic description, phenology, ecology and distributional aspects are provided under each plant species. Besides a statistical analysis of floral composition, the phytogeographical and biological spectra have also been worked out to determine the routes of migration and the phytoclimate respectively. The bioprospective value of the Reserve has been assessed to determine the economic potential and sustainable utilization of bioresources. The faunal diversity covers vertebrate fauna only, arranged in a classified manner. Shelter and feeding habits, along with the dependency of fauna on vegetation, are provided to determine plant-animal relationships and the flow of energy. Details about endemic and threatened species of plants and animals, along with the causes of threats, have been given for proper management of the Reserve. About 107 colour photographs of habitat, plants and animals, with 36 illustrations of plant species, have been provided. Several maps, pie charts, graphs, figures etc., along with data tables, are appended to illustrate the findings. It is hoped that the book will prove a milestone in the management of the Reserve.
<urn:uuid:93b6bea9-d4f0-4a46-a72a-5ebd6cfa0283>
2.90625
425
Product Page
Science & Tech.
26.308834
95,535,411
India's Space Ambitions Soar A lunar mission and a reusable launch vehicle are planned. As China’s star has risen, there’s been speculation about whether its expanding space program will trigger a space race with the United States. After all, Shenzhou spacecraft have twice carried taikonauts to orbit and back, and they might in principle support the manned moon mission that the Chinese claim they’ll carry out by 2026–and even, maybe, by 2017, one year before NASA now foresees a return to the lunar surface. Still, the next-generation CZ-5 Long March launchers necessary for a manned moon mission by China remain unfunded, and, in general, its space program has so far only repeated decades-old American and Russian achievements. Meanwhile, attracting far less attention and operating on a far smaller budget, that other rising Asian giant, India, has also been ramping up its space program–and it is developing some novel, promising approaches. This spring, India’s then president, A.P.J. Abdul Kalam–a colorful scientist-technologist who loomed large from the success of his country’s early satellite launch missions, and then led its guided-missile program–laid out (via teleconferencing ) an ambitious vision of India’s future space efforts during his speech at a Boston University symposium. Kalam told the international audience of space experts in Boston that, besides expanding its extensive satellite program, India now plans lunar missions and a reusable launch vehicle (RLV) that takes an innovative approach using a scramjet “hyperplane.” Kalam said that India understands that global civilization will deplete earthly fossil fuels in the 21st century. Hence, he said, a “space industrial revolution” will be necessary to exploit the high frontier’s resources. Kalam predicted that India will construct giant solar collectors in orbit and on the moon, and will mine helium-3–an incredibly rare fuel on Earth, but one whose unique atomic structure makes power generation from nuclear fusion potentially feasible–from the lunar surface. India’s scramjet RLV, Kalam asserted, will provide the “low-cost, fully reusable space transportation” that has previously “denied mankind the benefit of space solar-power stations in geostationary and other orbits.” Talk of grand futuristic projects comes cheap, of course. Nevertheless, the Indian Space Research Organisation (ISRO) performed its first commercial launch in April, lofting an Italian gamma-ray observatory into orbit on its Polar Satellite Launch Vehicle. Next, in early 2008, the Chandraayan-1, India’s first lunar orbiter, will carry two NASA projects to search the moon’s surface for sites suitable for the proposed U.S. Moon Base. And at next year’s end, the first flight of the Hypersonic Technology Demonstrator Vehicle (HTDV), a demo for the scramjet RLV, is scheduled. While this current spate of activity brings the country greater prominence, India’s space program is hardly a new development. In 1975, ISRO launched its first satellite, Aryabhata, on a Soviet rocket, and in 1980, India’s first home-built launcher, the SLV-3, successfully put a satellite into orbit. ISRO has continued with a series of larger satellites and rockets in the succeeding years. 
Rather than national prestige, the Indian focus has until recently been on entirely pragmatic applications that gave the most bang for its limited rupees: communications satellites to provide services to far-flung regions of a vast country with little existing communications infrastructure, meteorology packages (often carried on the same geosynchronous satellites that perform communications missions), and remote-sensing satellites to map India’s natural resources. Now ISRO is moving beyond that focus on immediately practical space applications. In November 2006, Virender Kumar, counselor for space at India’s Washington, DC, embassy, told a forum on U.S.-India space relations at the Center for Strategic and International studies, “The time has come when you do have the feeling that you have accomplished a lot.” Following much discussion within India’s space-science community, Kumar continued, “They basically demanded that we go forward and do these exploration missions.” Setting aside the more science-fictional objectives described by President Kalam–whose term just ended, on July 25–in the near future, the most technologically innovative of ISRO’s projects is its scramjet RLV, named Avatar. Lowering launch costs via an RLV has, of course, been theunattainable holy grail for both the United States and Russian space programs. Avatar would weigh only 25 metric tons, with 60 percent of that the liquid hydrogen needed to fuel the turbo-ramjet engines that would power its initial aircraft-style takeoff from an airstrip and its ascent to a cruising altitude. Thereafter, Avatar’s scramjet propulsion system would cut in to accelerate it from Mach 4 to Mach 8, while an onboard system would collect air from which liquid oxygen would be separated. That liquid oxygen would then be used in Avatar’s final flight phase, as its rocket engine burned the collected liquid oxygen and the remaining hydrogen to enter a 100-kilometer-high orbit. ISRO claims that Avatar’s design would enable it to achieve at least a hundred reentries into the atmosphere. Theoretically, given ISRO’s plans for it to carry a payload weighing up to one metric ton, Avatar could thus deliver a 500-to-1,000-kilogram payload into orbit for about $67 per kilogram. Current launch prices range from about $4,300 per kilogram via a Russian Proton launch to about $40,000 per kilogram via a Pegasus launch. Conceivably, Avatar could give India a radical advantage in the global launch market. Gregory Benford, an astrophysicist at the University of California, Irvine, and an advisor to NASA and the White House Council on Space Policy, is enthusiastic: “The Avatar RLV project will enable the Indian program to leap ahead of the Chinese nostalgia trip. Once low cost to orbit comes alive, it will drive cheaper methods of doing all our unmanned activities in space.” Still, Avatar’s potentially radical advantage comes with significant restraints, given both the restricted scale of its payloads and that very low 100-kilometer orbit. That latter factor, indeed, is something of a puzzle since any satellite released at such a height will find its orbit degrading quickly. Do the Indians intend to use Avatar as a first-stage launcher, in effect, from which they will fire their satellites further up into secure orbits? Perhaps. But in that case, it’s hard not to notice that Avatar, in fact, makes more sense as a missile-launch platform. 
After all, the United States is also working on the scramjet concept but in the context of an unmanned global cruise missile: the X-51 Scramjet-Waverider. Could Avatar be just another military application upon which India’s space scientists are piggybacking their hopes to develop a radical RLV prototype? The Indians do seem to be serious enough about Avatar as a commercial concept that they’ve taken out patents internationally on the design. ISRO has, relatively, a very low budget, and for Avatar to happen, Indians need to bring in international partners and funding. But if it turns out that Avatar is really just another military application that India’s space scientists have used to secure funding from their military for their high aspirations, they will hardly be the first ones in the history of spaceflight to do so.
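Taking the article's earlier per-kilogram figures at face value, the claimed cost advantage is easy to express as a ratio. This is back-of-envelope arithmetic with the numbers quoted above, not an independent cost estimate:

# Back-of-envelope comparison using the per-kilogram launch prices quoted in the article.
claimed_avatar = 67      # USD per kg (ISRO's claimed figure for Avatar)
proton = 4_300           # USD per kg
pegasus = 40_000         # USD per kg

for name, price in [("Proton", proton), ("Pegasus", pegasus)]:
    print(f"Avatar would be ~{price / claimed_avatar:.0f}x cheaper per kg than {name}")
# ~64x cheaper than Proton, ~597x cheaper than Pegasus -- if the claim held up.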
<urn:uuid:cae916f8-8ae8-4ffd-aeb3-f1be8f1bf926>
2.84375
1,646
News Article
Science & Tech.
34.354084
95,535,413
Radiometric dating of rocks and absolute age By dating these surrounding layers, they can figure out the youngest and oldest that the fossil might be; this is known as "bracketing" the age of the sedimentary layer in which the fossils occur. Sedimentary rocks can be dated using radioactive carbon, but because carbon decays relatively quickly, this only works for rocks younger than about 50 thousand years. Some scientists prefer the terms chronometric or calendar dating, as use of the word "absolute" implies an unwarranted certainty of accuracy. Absolute dating provides a numerical age or range, in contrast with relative dating, which places events in order without any measure of the age between events. Droughts and other variations in the climate make a tree grow slower or faster than normal, which shows up in the widths of its rings. These tree ring variations will appear in all trees growing in a certain region, so scientists can match up the growth rings of living and dead trees.
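The roughly 50,000-year limit mentioned above follows from the decay law itself. A quick Python sketch, using the commonly quoted carbon-14 half-life of about 5,730 years (general background, not a figure from this text):

import math

HALF_LIFE_C14 = 5730.0   # years, commonly quoted value

def fraction_remaining(age_years):
    """Fraction of the original carbon-14 left after age_years."""
    return 0.5 ** (age_years / HALF_LIFE_C14)

def age_from_fraction(fraction):
    """Invert the decay law to estimate an age from the measured remaining fraction."""
    return HALF_LIFE_C14 * math.log(fraction) / math.log(0.5)

print(fraction_remaining(50_000))       # ~0.0024 -- only about 0.2% left
print(round(age_from_fraction(0.25)))   # 11460 -- two half-lives ago

After about 50,000 years only a fraction of a percent of the original carbon-14 remains, which is why the method gives out around that age.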
<urn:uuid:75125fc7-95cd-4400-87bf-7dc2a107bce4>
3.65625
207
Knowledge Article
Science & Tech.
35.251961
95,535,425
In Innsbruck, Austria, a team of physicists led by Francesca Ferlaino experimentally observed how the anisotropic properties of particles deform the Fermi surface in a quantum gas. The work published in Science provides the basis for future studies on how the geometry of particle interactions may influence the properties of a quantum system. How a system behaves is determined by its interaction properties. An important concept in condensed matter physics for describing the energy distribution of electrons in solids is the Fermi surface, named for Italian physicist Enrico Fermi. The existence of the Fermi surface is a direct consequence of the Pauli exclusion principle, which forbids two identical fermions from occupying the same quantum state simultaneously. Energetically, the Fermi surface divides filled energy levels from the empty ones. For electrons and other fermionic particles with isotropic interactions – identical properties in all directions – the Fermi surface is spherical. “This is the normal case in nature and the basis for many physical phenomena,” says Francesca Ferlaino from the Institute for Experimental Physics at the University of Innsbruck. “When the particle interaction is anisotropic – meaning directionally dependent – the physical behavior of a system is completely altered. Introducing anisotropic interactions can deform the Fermi surface and it is predicted to assume an ellipsoidal shape.” The deformation of the Fermi surface is caused by the interplay between strong magnetic interaction and the Pauli exclusion principle. Francesca Ferlaino and her experimental research group have now been able to show such a deformation for the first time. Simulation in an ultracold quantum gas For their experiment, the quantum physicists confined a gas of fermionic erbium atoms in a laser trap and cooled it to almost absolute zero. The element erbium is strongly magnetic, which causes extreme dipolar behavior. The interaction between these atoms is, therefore, directionally dependent. When the physicists release the ultracold gas from the trap, they are able to infer the shape of the Fermi surface from the momentum distribution of the particles. “Erbium atoms behave similarly to magnets, which means that their interaction is strongly dependent on the direction in which the particles interact. Our experiment shows that the shape of the Fermi surface depends on the geometry of the interaction and is not spherical anymore,” explains Kiyotaka Aikawa, first author of the study, describing a phenomenon that is extremely difficult to observe. “The general question we deal with here is how the geometry of particle interactions influences the quantum properties of matter,” explains Francesca Ferlaino. Answering this question is of interest for physicists from different branches of physics such as the study of high-temperature superconductors. “We need a better understanding of these properties to develop new quantum systems,” underlines Francesca Ferlaino. Ultracold quantum gases once more provide an ideal platform for simulating complex scenarios. This work was financially supported by the Austrian Ministry of Science, the Austrian Science Fund and the European Union. Since July 2014, ERC and START awardee Francesca Ferlaino has been Scientific Director at the Institute for Quantum Optics and Quantum Information (IQOQI) of the Austrian Academy of Sciences. Publication: Observation of Fermi surface deformation in a dipolar quantum gas. K. Aikawa, S. Baier, A. Frisch, M. Mark, C. Ravensbergen, F. Ferlaino.
Science 2014 DOI: 10.1126/science.1255259 arXiv:1405.2154 http://arxiv.org/abs/1405.2154 Univ.-Prof. Dr. Francesca Ferlaino Institute for Experimental Physics University of Innsbruck Institute for Quantum Optics and Quantum Information Austrian Academy of Sciences 6020 Innsbruck, Austria Phone: +43 512 507-52440 (Lab.: -52441), (Secr.: -52449), (Fax: -2921) Public Relations office University of Innsbruck Phone: +43 512 507 32022 http://dx.doi.org/10.1126/science.1255259 http://www.ultracold.at - Ultracold Atoms and Quantum Gases Dr. Christian Flatz | Universität Innsbruck
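As standard background to the result described above (not taken from the paper itself): for a single-component, spin-polarized ideal Fermi gas at density n, the Fermi surface is the sphere |k| = k_F with

$$ k_F = (6\pi^2 n)^{1/3}, \qquad E_F = \frac{\hbar^2 k_F^2}{2m}, $$

and the dipolar deformation reported by the Innsbruck team can be pictured as this sphere being stretched into an ellipsoid along the direction in which the atomic dipoles are aligned.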
<urn:uuid:873e6def-1ed5-4bf0-b346-53775ecdc55c>
3.296875
1,626
Content Listing
Science & Tech.
36.337638
95,535,436
(The Christian Science Monitor) This is a study by Egyptian scientists of a water desalination method that makes salt water drinkable at just over half the energy cost of the current approach. The method could be well suited to poor and developing countries, providing clean water at very low cost. What sets this study apart is that the membrane can be made easily in the lab from just five "ingredients". It binds the salt in the water - and it works even with the very salty water found in the Red Sea. "Using pervaporation eliminates the need for electricity that is used in classic desalination processes, thus cutting costs significantly," Ahmed El-Shafei, an agricultural and biosystems engineering professor at Alexandria University, told Digital Trends. The technique requires half the energy of other desalination methods to purify the same amount of water. The scientists hope this new method will be applied on a large scale to produce water for agriculture and residential use and contribute to wide-scale development in Egypt.
<urn:uuid:fe2961ae-2bd7-4148-85ae-3361dc2a8362>
3.640625
286
News Article
Science & Tech.
26.115909
95,535,469
NASA-ISRO Synthetic Aperture Radar (NISAR) is a novel SAR mission concept that will image a wide swath at the high resolution of stripmap SAR. It will observe in L- and S-band to help understand spatially and temporally complex processes such as ecosystem disturbances, ice sheet changes, and natural hazards including earthquakes, tsunamis, volcanoes, and landslides. With advanced features such as a 12-day interferometric repeat orbit, high-resolution wide-swath imaging through SweepSAR technology, and simultaneous data acquisition in two frequency bands, NISAR will support a host of applications. Its primary objectives are: ecosystem monitoring, including changes in ecosystem structure, biomass estimation, carbon flux monitoring, characterization of mangroves and wetlands, alpine forest characterization and delineation of the tree-line ecotone; land surface deformation, including measurement of deformation due to co-seismic and inter-seismic activity, landslides, land subsidence and volcanic deformation; cryosphere studies, including measurements of polar ice sheet dynamics, ice discharge to the ocean, and Himalayan snow and glacier dynamics; deep-ocean and coastal studies, including retrieval of ocean parameters, mapping of coastal erosion and shoreline change, and demarcation of the high tide line (HTL) and low tide line (LTL) for coastal regulation zone (CRZ) mapping; geological studies, including mapping of structural and lithological features, lineaments and paleo-channels, and geomorphological mapping; and natural disaster response, including mapping and monitoring of floods, forest fires, oil spills and earthquake damage, and monitoring of extreme weather events such as cyclones. In addition, NISAR would support various other applications such as enhanced crop monitoring, soil moisture estimation, urban area development, and weather and hydrological forecasting.
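One relation behind the deformation applications listed above is worth stating explicitly; it is the standard repeat-pass InSAR conversion from phase to motion, given here as general background rather than a NISAR specification. Up to sign convention, an interferometric phase change \Delta\phi corresponds to a line-of-sight displacement

$$ d_{\mathrm{LOS}} = \frac{\lambda}{4\pi}\,\Delta\phi, $$

so at L-band (wavelength roughly 24 cm) one full fringe, \Delta\phi = 2\pi, represents about 12 cm of line-of-sight motion accumulated between two acquisitions of the repeat cycle.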
<urn:uuid:2c93525d-0956-468f-a7d9-918a16eef87a>
3.1875
370
Academic Writing
Science & Tech.
-29.873865
95,535,475
New research shows projected changes in the winds circling the Antarctic may accelerate global sea level rise significantly more than previously estimated. Changes to Antarctic winds have already been linked to southern Australia’s drying climate but now it appears they may also have a profound impact on warming ocean temperatures under the ice shelves along the coastline of West and East Antarctic. “When we included projected Antarctic wind shifts in a detailed global ocean model, we found water up to 4°C warmer than current temperatures rose up to meet the base of the Antarctic ice shelves,” said lead author Dr Paul Spence from the ARC Centre of Excellence for Climate System Science (ARCCSS). “The sub-surface warming revealed in this research is on average twice as large as previously estimated with almost all of coastal Antarctica affected. This relatively warm water provides a huge reservoir of melt potential right near the grounding lines of ice shelves around Antarctica. It could lead to a massive increase in the rate of ice sheet melt, with direct consequences for global sea level rise.” Prior to this research by Dr Spence and colleagues from Australian National University and the University of New South Wales, most sea level rise studies focused on the rate of ice shelf melting due to the general warming of the ocean over large areas. Using super computers at Australia’s National Computational Infrastructure (NCI) Facility the researchers were able to examine the impacts of changing winds on currents down to 700m around the coastline in greater detail than ever before. Previous global models did not adequately capture these currents and the structure of water temperatures at these depths. Unexpectedly, this more detailed approach suggests changes in Antarctic coastal winds due to climate change and their impact on coastal currents could be even more important on melting of the ice shelves than the broader warming of the ocean. “When we first saw the results it was quite a shock. It was one of the few cases where I hoped the science was wrong,” Dr Spence said. “But the processes at play are quite simple, and well-resolved by the ocean model, so this has important implications for climate and sea-level projections. What is particularly concerning is how easy it is for climate change to increase the water temperatures beside Antarctic ice sheets.” The research may help to explain a number of sudden and unexplained increases in global sea levels that occurred in the geological past. “It is very plausible that the mechanism revealed by this research will push parts of the West Antarctic Ice Sheet beyond a point of no return,” said Dr Axel Timmerman, Prof of Oceanography at University of Hawaii and an IPCC lead author who has seen the paper. “This work suggests the Antarctic ice sheets may be less stable to future climate change than previously assumed.” Recent estimates suggest the West Antarctic Ice Sheet alone could contribute 3.3 metres to long-term global sea level rise. With both West and East Antarctica affected by the change in currents, in the future abrupt rises in sea level become more likely. According to another of the paper’s authors, Dr Nicolas Jourdain from ARCCSS, the mechanism that leads to rapid melting may be having an impact on the Western Antarctic right now. Dr Jourdain said it may help explain why the melt rate of some of the glaciers in that region are accelerating more than scientists expected. 
“Our research indicates that as global warming continues, parts of East Antarctica will also be affected by these wind-induced changes in ocean currents and temperatures,” Dr Jourdain said. “Dramatic rises in sea level are almost inevitable if we continue to emit greenhouse gases at the current rate.” GRL Paper: Rapid subsurface warming and circulation changes of Antarctic coastal waters by poleward shifting winds For more information, a copy of the paper or to arrange an interview contact: ARCCSS Media Manager Alvin Stone. Ph: 0418 617 366. Email: firstname.lastname@example.org Alvin Stone | EurekAlert!
<urn:uuid:217cbad9-f1c5-4e66-9c80-b41c33043c8b>
3.96875
1,405
Content Listing
Science & Tech.
39.799116
95,535,502
Lawrence Livermore scientists have discovered and demonstrated a new technique to remove and store atmospheric carbon dioxide while generating carbon-negative hydrogen and producing alkalinity, which can be used to offset ocean acidification. The Great Barrier Reef in Australia already has been affected by ocean warming and acidification. The team demonstrated, at a laboratory scale, a system that uses the acidity normally produced in saline water electrolysis to accelerate silicate mineral dissolution while producing hydrogen fuel and other gases. The resulting electrolyte solution was shown to be significantly elevated in hydroxide concentration that in turn proved strongly absorptive and retentive of atmospheric CO2. Further, the researchers suggest that the carbonate and bicarbonate produced in the process could be used to mitigate ongoing ocean acidification, similar to how an Alka Seltzer neutralizes excess acid in the stomach. "We not only found a way to remove and store carbon dioxide from the atmosphere while producing valuable H2, we also suggest that we can help save marine ecosystems with this new technique," said Greg Rau, an LLNL visiting scientist, senior scientist at UC Santa Cruz and lead author of a paper appearing this week (May 27) in the Proceedings of the National Academy of Sciences. When carbon dioxide is released into the atmosphere, a significant fraction is passively taken up by the ocean forming carbonic acid that makes the ocean more acidic. This acidification has been shown to be harmful to many species of marine life, especially corals and shellfish. By the middle of this century, the globe will likely warm by at least 2 degrees Celsius and the oceans will experience a more than 60 percent increase in acidity relative to pre-industrial levels. The alkaline solution generated by the new process could be added to the ocean to help neutralize this acid and help offset its effects on marine biota. However, further research is needed, the authors said. "When powered by renewable electricity and consuming globally abundant minerals and saline solutions, such systems at scale might provide a relatively efficient, high-capacity means to consume and store excess atmospheric CO2 as environmentally beneficial seawater bicarbonate or carbonate," Rau said. "But the process also would produce a carbon-negative 'super green' fuel or chemical feedstock in the form of hydrogen." Most previously described chemical methods of atmospheric carbon dioxide capture and storage are costly, using thermal/mechanical procedures to concentrate molecular CO2 from the air while recycling reagents, a process that is cumbersome, inefficient and expensive. "Our process avoids most of these issues by not requiring CO2 to be concentrated from air and stored in a molecular form, pointing the way to more cost-effective, environmentally beneficial, and safer air CO2 management with added benefits of renewable hydrogen fuel production and ocean alkalinity addition," Rau said. The team concluded that further research is needed to determine optimum designs and operating procedures, cost-effectiveness, and the net environmental impact/benefit of electrochemically mediated air CO2 capture and H2 production using base minerals. Other Livermore researchers include Susan Carroll, William Bourcier, Michael Singleton, Megan Smith and Roger Aines. "Marine species at risk unless drastic protection policies put in place," LLNL news release, Aug. 20, 2012. 
"Speeding up Mother Nature's very own CO2 mitigation process," LLNL news release, Jan. 19, 2011. "CO2 Mitigation via Capture and Chemical Conversion in Seawater," Environmental Science & Technology. Founded in 1952, Lawrence Livermore National Laboratory provides solutions to our nation's most important national security challenges through innovative science, engineering and technology. Lawrence Livermore National Laboratory is managed by Lawrence Livermore National Security, LLC for the U.S. Department of Energy's National Nuclear Security Administration. Anne Stark | EurekAlert! New research calculates capacity of North American forests to sequester carbon 16.07.2018 | University of California - Santa Cruz Scientists discover Earth's youngest banded iron formation in western China 12.07.2018 | University of Alberta For the first time ever, scientists have determined the cosmic origin of highest-energy neutrinos. A research group led by IceCube scientist Elisa Resconi, spokesperson of the Collaborative Research Center SFB1258 at the Technical University of Munich (TUM), provides an important piece of evidence that the particles detected by the IceCube neutrino telescope at the South Pole originate from a galaxy four billion light-years away from Earth. To rule out other origins with certainty, the team led by neutrino physicist Elisa Resconi from the Technical University of Munich and multi-wavelength... For the first time a team of researchers have discovered two different phases of magnetic skyrmions in a single material. Physicists of the Technical Universities of Munich and Dresden and the University of Cologne can now better study and understand the properties of these magnetic structures, which are important for both basic research and applications. Whirlpools are an everyday experience in a bath tub: When the water is drained a circular vortex is formed. Typically, such whirls are rather stable. Similar... Physicists working with Roland Wester at the University of Innsbruck have investigated if and how chemical reactions can be influenced by targeted vibrational excitation of the reactants. They were able to demonstrate that excitation with a laser beam does not affect the efficiency of a chemical exchange reaction and that the excited molecular group acts only as a spectator in the reaction. A frequently used reaction in organic chemistry is nucleophilic substitution. It plays, for example, an important role in in the synthesis of new chemical... Optical spectroscopy allows investigating the energy structure and dynamic properties of complex quantum systems. Researchers from the University of Würzburg present two new approaches of coherent two-dimensional spectroscopy. "Put an excitation into the system and observe how it evolves." According to physicist Professor Tobias Brixner, this is the credo of optical spectroscopy.... Ultra-short, high-intensity X-ray flashes open the door to the foundations of chemical reactions. Free-electron lasers generate these kinds of pulses, but there is a catch: the pulses vary in duration and energy. An international research team has now presented a solution: Using a ring of 16 detectors and a circularly polarized laser beam, they can determine both factors with attosecond accuracy. Free-electron lasers (FELs) generate extremely short and intense X-ray flashes. Researchers can use these flashes to resolve structures with diameters on the... 
<urn:uuid:359308f1-33e7-4b87-b37b-ed0585212aae>
3.34375
1,418
Content Listing
Science & Tech.
25.79282
95,535,531
(Credit: Gavin Parsons) Since the blockbuster movie “Jaws” hit the big screen in 1975, the thought of what lurks beneath the water has impacted many swimmers. (From Washington Daily News/ By Vail Stewart Rumley) -- According to research released by a team of scientists from East Carolina University, North Carolina Division of Marine [...] Member Highlight: Study Reveals New Antarctic Process Contributing To Sea Level Rise And Climate Change (Credit: Alessandro Silvano) A new IMAS-led study has revealed a previously undocumented process where melting glacial ice sheets change the ocean in a way that further accelerates the rate of ice melt and sea level rise. (From Phys.org) -- Led by IMAS PhD student Alessandro Silvano and published in the journal Science Advances, the [...] (Credit: Forrest McCarthy) Scientists who crossed western Greenland with a fleet of snowmobiles, pulling up long cylinders of ice at camps a little more than a mile above sea level, have found evidence that the vast sheet of ice is melting faster than at any time in the past 450 years at least — and possibly much longer than that. [...] (Credit: NOAA) The warming climate is expected to affect coastal regions worldwide as glaciers and ice sheets melt, raising sea level globally. For the first time, an international team has found evidence of how sea-level rise already is affecting high and low tides in both the Chesapeake and Delaware bays, two large [...] (Credit: Thomas Shahan) Protecting coastal homes and businesses from the crashing waves of the sea may eliminate beach habitat for the threatened Hawaiian green sea turtle. (From EOS.org/ By Alex Fox)-- Known as honu by Hawaiians, the Hawaiian green sea turtle (Chelonia mydas) is classified as threatened under the U.S. Endangered Species Act [...] The Pentagon has taken few steps to prepare its overseas installations for climate change, a government watchdog said Wednesday. (From The Hill/ By Rebecca Kheel) -- “While the military services have begun to integrate climate change adaptation into installations’ plans and project designs, this integration has been limited,” the Government Accountability Office (GAO) said in a report [...] Greenland (Credit: Matthew Cooper) A new UCLA-led study reinforces the importance of collaboration in assessing the effects of climate change. The research, published Dec. 5 in the journal Proceedings of the National Academy of Sciences, offers new insights about previously unknown factors affecting Greenland's melting ice sheet, and it could ultimately help scientists [...] Member Highlight: Learning From The Past: What The Ice Age Can Teach Us About The Future Of Our Coastlines (Click to enlarge) Exposure ages in Peggy’s Cove boulders reveal the Laurentide Ice Sheet, which covered Eastern and Central Canada in the Ice Age, receded here around 14,000 years ago as well. (Credit: Dalhousie University). About 14,000 years ago, planet Earth was defrosting. Expansive ice sheets that covered [...] (Click to enlarge) Research plane over Totten Glacier. (Credit: Image courtesy of Imperial College London) Scientists believe they’ve identified a key process affecting the melting of an enormous glacier in East Antarctica, bigger than the state of California. And the effects may only worsen with future climate change. (From Scientific American / by [...] Climate change is real. It’s caused by greenhouse-gas pollution released by human industrial activity. 
Its consequences can already be felt across every region and coastline of the United States—and, unless we stop emitting greenhouse gases soon, those consequences will almost certainly get worse. Those are the headline findings of the Climate Science Special Report, a sweeping and more than 800-page examination of the evidence. The report was published Friday by four agencies of the U.S. government and academics from across the country. Climate change could lead to sea level rises that are larger, and happen more rapidly, than previously thought, according to a trio of new studies that reflect mounting concerns about the stability of polar ice. The east coast of the United States is slowly but steadily sinking into the sea. This is the result of a recent study which took a variety of factors into account when determining the continuous sinking of the eastern seaboard. This will result in more frequent and severe flooding events in the future as the sea encroaches upon coastal communities and homes. This is all the more relevant as Hurricane Irma inundates the southeast coastline with rising storm surge.
<urn:uuid:5c879041-4837-4e36-8b67-97ffc49f5d6d>
2.65625
928
Content Listing
Science & Tech.
43.20515
95,535,537
Elements of the Object Model
Kinds of programming paradigms (programming style and the kind of abstraction employed):
- Object-oriented: classes and objects
- Logic-oriented: goals, often expressed in predicate calculus
- Rule-oriented: if-then rules
- Constraint-oriented: invariant relationships
Def: An abstraction denotes the essential characteristics of an object that distinguish it from all other kinds of objects and thus provide crisply defined conceptual boundaries, relative to the perspective of the viewer. There are entity abstractions, action abstractions, virtual machine abstractions, and coincidental abstractions.
BTW: What's a client? It's an object that uses the resources of another object (known as the server). It is the set of operations that a client may perform upon an object, together with the legal orderings in which they may be invoked.
--------------- to be continued
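To make the abstraction and client/server ideas concrete, here is a minimal TypeScript sketch (the Queue class and printAll function are invented for illustration, not taken from the notes): an entity abstraction whose representation is hidden behind a small set of operations, and a client that uses only that set.

```typescript
// Entity abstraction: the essential characteristics of a queue, with a crisply
// defined boundary (its public operations) and a hidden representation.
class Queue<T> {
  private items: T[] = []; // representation hidden behind the abstraction

  enqueue(item: T): void { this.items.push(item); }
  dequeue(): T | undefined { return this.items.shift(); }
  get length(): number { return this.items.length; }
}

// Client/server relationship: the client uses only the operations the server
// object exposes; it never touches the server's internals.
function printAll(queue: Queue<string>): void {
  while (queue.length > 0) {
    console.log(queue.dequeue());
  }
}

const jobs = new Queue<string>();
jobs.enqueue("report.pdf");
jobs.enqueue("invoice.pdf");
printAll(jobs); // legal ordering: enqueue before dequeue
```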
<urn:uuid:bf8395f3-cf30-4822-955a-77e7e90c2fb6>
3.53125
183
Truncated
Software Dev.
18.450645
95,535,541
Techniques of Radio Astronomy by T. L. Wilson Publisher: arXiv 2011 Number of pages: 47 This text provides an overview of the techniques of radio astronomy. It contains a short history of the development of the field, details of calibration procedures, coherent/heterodyne and incoherent/bolometer receiver systems, observing methods for single apertures and interferometers, and an overview of aperture synthesis. Home page url Download or read it online for free here: by F. Brünnow - Van Nostrand The celestial sphere and its diurnal motion; On the changes of the fundamental planes to which the places of the stars are referred; Corrections of the observations arising from the position of the observer on the surface of the Earth; and more. by P. S. Michie, F. S. Harlow - John Wiley & Sons This volume is designed especially for the use of the cadets of the U. S. Military Academy, as a supplement to the course in General Astronomy. It is therefore limited to that branch of Practical Astronomy which relates to Field Work. by George L. Hosmer - John Wiley and Sons Inc. The text is adapted to the needs of civil-engineering students. The text deals chiefly with the class of observations which can be made with surveying instruments, the methods applicable to astronomical and geodetic instruments being treated briefly. by J.-L. Starck, F. Murtagh - Springer The book explains how to handle real problems in astronomical data analysis using a modern arsenal of powerful techniques. It treats the methods of image, signal, and data processing that are proving to be both effective and widely relevant.
<urn:uuid:f9ec9c38-072a-4eea-a082-ed3a7df167f5>
2.609375
354
Content Listing
Science & Tech.
47.259114
95,535,546
Great insight! That's the answer we've been looking for.
Streak for Isolation
From a culture of mixed or concentrated bacteria, dilute the sample by streaking on an agar plate so that colonies arising from individual cells can be isolated.
- sterilize a bacterial loop by flaming
- allow the loop to cool slightly and gather a small sample of either bacterial culture or a small scraping of a frozen bacterial stock onto the loop
- for frozen stocks, do not allow the stock to thaw
- streak the loop across the top of a bacterial agar plate in three parallel lines; the lines should be fairly close together
- streak using the wide part of the loop, not the edge of the loop
- flame sterilize the loop and cool it by stabbing a clean part of the agar plate
- make three new parallel streaks, starting by going across the last part of the previous three streaks and continuing onto a clean part of the plate
- repeat the last two steps three or four times, being careful to contact only the immediately previous streaks and not earlier streaks
- grow at 37 °C overnight
There are many procedures that will work, but this one is quick, easy to perform, and more reliable than most.
<urn:uuid:d07f4534-5fd8-4386-803a-af1497fcf2c3>
2.65625
285
Comment Section
Science & Tech.
50.609039
95,535,550
Discussion of Results and General Conclusions
In the modern version of the theory, largely developed by M. V. Vol'kenshtein, a distinction is made between mechanical and electro-optical anharmonicity when considering overtones and composite frequencies. Mechanical anharmonicity means a deviation of the potential-energy function from the quadratic relationship. Electro-optical anharmonicity means a nonlinear dependence of the electrical moment or polarizability on the normal vibrational coordinates. The numerical values of the overtone frequencies are determined by the mechanical anharmonicity, and their electro-optical properties (intensity and polarization) by both the mechanical and electro-optical anharmonicities. In the presence of mechanical anharmonicity the concept of normal coordinates and normal vibrations loses its meaning. Nevertheless, for purposes of calculation one usually employs normal coordinates as a zeroth approximation, introducing anharmonic corrections as perturbations.
Keywords: Polyatomic Molecule, Molecular Vibration, Harmonicity Coefficient, Fermi Resonance, Anharmonicity Constant
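As a schematic one-dimensional illustration of the two definitions above (the expansion coefficients are placeholders, not values from the text): mechanical anharmonicity corresponds to the cubic and higher terms of the potential, while electro-optical anharmonicity corresponds to the nonlinear terms in the expansion of the dipole moment (or, analogously, the polarizability) in a normal coordinate q.

```latex
% Schematic expansions in a single normal coordinate q (illustrative only).
\begin{align*}
V(q)   &= \tfrac{1}{2}\,k\,q^{2}
          + \underbrace{g\,q^{3} + h\,q^{4} + \cdots}_{\text{mechanical anharmonicity}} \\
\mu(q) &= \mu_{0} + \left(\frac{\partial \mu}{\partial q}\right)_{0} q
          + \underbrace{\tfrac{1}{2}\left(\frac{\partial^{2}\mu}{\partial q^{2}}\right)_{0} q^{2} + \cdots}_{\text{electro-optical anharmonicity}}
\end{align*}
```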
<urn:uuid:bce710ab-9228-47a2-b3ea-2eb0aef3b793>
2.625
231
Truncated
Science & Tech.
-19.733517
95,535,568
DULUTH, Minnesota--A paper published this week in the journal Limnology and Oceanography Letters is the first to show that lake size and nutrients drive how much greenhouse gases are emitted globally from lakes into the atmosphere. "Our research pioneers a new way of determining the global atmospheric effect of lakes using satellite information on lake greenness and size distribution," said co-author John A. Downing, University of Minnesota Sea Grant director and professor of biology at the University of Minnesota Duluth. "This is important because the world's lakes and surface waters will emit more greenhouse gases as they become greener and more nutrient-rich." Greenhouse gases released into the atmosphere drive global climate change. Although carbon dioxide is the most well-known greenhouse gas, methane and nitrous oxide, which are also emitted from lakes, could be far more devastating because they have much greater warming potential. "Our work shows conclusively that methane, which is emitted from lakes in bubbles, is the dominant greenhouse gas coming from lakes and surface waters globally," said lead author Tonya DelSontro, now a researcher at the University of Geneva. "The greener or more eutrophic these water bodies become, the more methane is emitted, which exacerbates climate warming." Green lakes result from excessive fertilization by nutrients, such as phosphorus and nitrogen, and when sediment accumulates in lakebeds. Such "greening" is called eutrophication. "Our research team assembled the largest global dataset on lake emission rates of carbon dioxide, methane and nitrous oxide," said Downing. "When we analyzed the data, we found that emissions of greenhouse gases to the atmosphere were influenced by the amount of eutrophication but also that lake size matters a lot for carbon dioxide and nitrous oxide." If the world's lakes and other surface waters become more eutrophic it could negate the reductions that society makes by reducing fossil fuel emissions. "We need to know how much of these greenhouse gases are being emitted to be able to predict how much and how fast the climate will change," said DelSontro. "This paper is significant because we developed a more effective approach to estimate current and future global lake emissions." The authors point to four key advancements that enabled their results to be more accurate than previous estimates: Recent advances in satellite and sensor technology, availability of detailed geographical data on lakes, an increasing number of global lake observations and improved statistical survey designs. The authors also offer some relatively simple things people anywhere can do to protect the water in their community: - Decrease fertilizer application on urban and agricultural land - Maintain large buffer or filter strips of vegetation that intercept stormwater runoff - Manage septic systems to ensure they work effectively - Keep streets and curbs clean "Even moderate increases in lake and surface water eutrophication over the next 50 years could be equivalent to adding 13 percent of the effect of the current global fossil fuel emissions," said Downing. "By keeping our community waters clean, we make better water available to future generations and we decrease worldwide emissions of methane that speed climate change." Contacts: John A. Downing, Director, Minnesota Sea Grant; Professor of Biology, Department of Biology and Scientist, Large Lakes Observatory, University of Minnesota Duluth; email@example.com, 218-726-8715. 
Tonya DelSontro, Maître Assistant, University of Geneva, firstname.lastname@example.org, +41 22 379 03 12. Jake J. Beaulieu, biologist, U.S. Environmental Protection Agency, Office of Research and Development, Cincinnati, Ohio. Beaulieu.Jake@epa.gov. This article is an invited paper to Limnology and Oceanography Letters, Special Issue - Current Evidence Carbon Cycling in Inland Waters and is an open-access article under the terms of the Creative Commons Attribution License. DOI: 10.1002/lol2.10073 Metadata available via figshare: 10.6084/m9.figshare.5220001 As a signatory to the 1992 United Nations Framework Convention on Climate Change, the United States is obligated to report the nation's anthropogenic greenhouse gas emissions every year. The research reported in this paper provides a methodology for estimating the magnitude of methane emissions from water bodies confined within an enclosure such as dams and reservoirs (i.e., impoundments), which have not previously been reported by the U.S. Environmental Protection Agency. The research reported here will improve the accuracy of the nation's greenhouse gas inventory and can inform better mitigation strategies.
<urn:uuid:33b61a6c-3922-4d46-8e77-eed9ba685f59>
3.796875
944
News (Org.)
Science & Tech.
25.138409
95,535,571
Alkali halides and alkaline earth fluorides have many attractive optical properties which are of interest for component applications in several optical systems operating in adverse environments. The fact that many of these materials have low yield strengths and fracture energies in single-crystal form often restricts their applications. The yield strengths and fracture energies of single crystals of many alkali halides can be increased significantly, in many cases by an order of magnitude or more, by deformation processing, such as press forging. The slow compressive deformation and high temperatures to which materials are subjected during press forging introduce low- and high-angle grain boundaries which are responsible for the observed improvement in mechanical properties. This paper describes a forging technique which repeatedly produces crack-free, fine-grained forgings and discusses the optical and mechanical properties of press forged LiF and CaF2 in relation to those of starting single crystals. R. H. Anderson, R. A. Skogman, "Mechanical And Optical Properties Of Press-Forged LiF and CaF2," Optical Engineering 18(6), 186602 (1 December 1979). https://doi.org/10.1117/12.7972441
<urn:uuid:5cabca41-82d0-4539-9aad-8f34a073a2b1>
2.8125
243
Truncated
Science & Tech.
34.391154
95,535,578
World Register of Introduced Marine Species (WRIMS)
# alien species: 1,817
# species with uncertain origin: 77
# species with unknown origin: 125
The World Register of Introduced Marine Species (WRIMS) records which marine species in the World Register of Marine Species (WoRMS) have been introduced deliberately or accidentally by human activities to geographic areas outside their native range. It excludes species that colonised new locations naturally (so called ‘range extensions’), even if in response to climate change. WRIMS notes the origin (source location) of the species at a particular location by country, sea area and/or latitude and longitude as available. If the species is reported to have had ecological or economic impacts it is considered invasive in that location. Each record is linked to a source publication or specialist database. A glossary of terminology is available. Links have been provided to species profiles of well-known marine invasive species in the Global Invasive Species Database (GISD) of the IUCN Invasive Species Specialist Group. In using WRIMS, users need to consider possible species misidentifications in the sources, and that for some species it is uncertain which are their native and which their introduced ranges. Whether a species is ‘invasive’ can vary between locations and over time at a particular location.
Background of the database
In 2008-2009 the IUCN Invasive Species Specialist Group (ISSG) worked on a project, within the framework of the Ocean Biogeographic Information System (OBIS), that developed an annotated dataset of marine introduced and invasive species for the World Register of Marine Species (WoRMS) in order to flag species on the register as "alien and invasive species". Both online databases and publications were consulted with an aim to achieve global coverage. They include:
- Delivering Alien Invasive Species Inventories for Europe (DAISIE)
- Galil, B. (2009). Taking stock: inventory of alien species in the Mediterranean Sea. Biological Invasions 11(2): 359-372.
- Lasram, F.B.R.; Mouillot, D. (2009). Increasing southern invasion enhances congruence between endemic and exotic Mediterranean fish fauna. Biological Invasions 11: 697-711.
- Hayes, K.R. (2005). Marine Species Introductions. Unpublished data from CSIRO.
- Molnar, J.L.; Gamboa, R.L.; Revenga, C.; Spalding, M.D. (2008). Assessing the global threat of invasive species to marine biodiversity. Frontiers in Ecology and the Environment 6(9): 485-492.
In addition to biological status (represented as occurrence, provenance and invasiveness), annotations included higher taxonomy, origin of species, introduced location, as well as (where available) information on the date of first record/introduction and pathway of introduction. In 2013-2014, ISSG worked with the Flanders Marine Institute (VLIZ) on a data collection project developed within the framework of the Biology Project of the European Marine Observation and Data Network (EMODnet) to complete trait information related to "invasiveness". The dataset submitted in 2009 was updated using additional data and information from online databases such as:
- The Global Invasive Species Database (GISD)
- The European Alien Species Information Network (EASIN)
- The Information system on Aquatic Non-indigenous Species (AquaNIS)
- The National Exotic Marine and Estuarine Species Information System (NEMESIS)
- and (recently) published literature
The terminology and definitions used to describe occurrence, provenance and invasiveness have been revised and expanded.
Additionally, any available information on abundance, pathways of introduction and spread, evidence of impacts on biodiversity were documented. The geographic coverage of this dataset is global. Focus was placed on documenting authoritative information on marine species introductions in the recognized marine bio-invasion hotspots. - Ahyong, Shane: Thematic editor: Lithodoidea, Thematic editor: Polychelida, Thematic editor: Stomatopoda - Costello, Mark: Thematic editor: Biota (WRIMS Coordinator) - Galil, Bella S.: Thematic editor: Biota (Mediterranean) - Gollasch, Stephan: Thematic editor: Biota (North Sea) - Holleman, Ayco: Thematic editor: Biota - Hutchings, Pat: Thematic editor: Polychaeta - Katsanevakis, Stelios: Thematic editor: Biota (Europe, Mediterranean - EASIN) - Lejeusne, Christophe: Thematic editor: Artemia, Thematic editor: Ascidia, Thematic editor: Decapoda, Thematic editor: Mysida (Mediterranean and European Atlantic) - Marchini, Agnese: Thematic editor: Amphipoda - Occhipinti, Anna: Thematic editor: Biota (Mediterranean) - Pagad, Shyama: Thematic editor: Biota - Panov, Vadim E.: Thematic editor: Cladocera - Poore, Gary: Thematic editor: Anthuroidea - Rius, Marc: Thematic editor: Ascidiacea - Robinson, Nestor: Thematic editor: Rhodophyta - Robinson, Tamara B.: Thematic editor: Biota (Southern Africa) - Sterrer, Wolfgang: Thematic editor: Biota (Bermuda) - Turon, Xavier: Thematic editor: Ascidiacea - Verleye, Thomas: Thematic editor: Biota (Belgium) - Willan, Richard C.: Thematic editor: Mollusca - Zhan, Aibin: Thematic editor: Biota Ahyong, S.; Costello, M. J.; Galil, B. S.; Gollasch, S.; Hutchings, P.; Katsanevakis, S.; Lejeusne, C.; Marchini, A.; Occhipinti, A.; Pagad, S.; Poore, G.; Rius, M.; Robinson, T. B.; Sterrer, W.; Turon, X.; Willan, R. C.; Zhan, A. (2018). World Register of Introduced Marine Species (WRIMS). Accessed at http://www.marinespecies.org/introduced on 2018-07-20 Banner: From left to right: Asterias amurensis - Northern Pacific seastar (Photo by: Lycoo - Wikimedia Commons); Ciona intestinalis - Sea squirt (Photo by: perezoso - Wikemedia Commons); Pterois volitans - Indo-Pacific Lionfish (Photo by: Jens Petersen - Wikimedia Commons); Tubastraea coccinea - Orange cup coral (Photo by: Nick Hobgood - Wikimedia Commons). Top: Botrylloides violaceus, Woods Hole (Photo by Rosana Moreira da Rocha, Universidade Federal do Paraná, Brazil). Bottom: Didemnum vexilum on experimental plates, Woods Hole (Photo by Rosana Moreira da Rocha, Universidade Federal do Paraná, Brazil)
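The description above amounts to a simple data model: each record ties a species name to a location, its origin, its status (occurrence, provenance, invasiveness), and a source. A hypothetical TypeScript sketch of such a record follows; the field names, types, and example values are invented for illustration and are not the actual WRIMS/WoRMS schema.

```typescript
// Illustrative only; not the real WRIMS/WoRMS schema.
type Provenance = "alien" | "uncertain origin" | "unknown origin";

interface IntroducedSpeciesRecord {
  scientificName: string;      // as registered in WoRMS
  higherTaxonomy: string[];    // e.g. phylum, class, order
  location: {                  // where the introduction is reported
    country?: string;
    seaArea?: string;
    latitude?: number;
    longitude?: number;
  };
  origin?: string;             // source location, where known
  provenance: Provenance;
  invasive: boolean;           // true if ecological/economic impacts reported there
  firstRecordYear?: number;    // date of first record/introduction, where available
  pathway?: string;            // pathway of introduction, where available
  source: string;              // citation of the source publication or database
}

// Example record (values chosen for illustration only).
const example: IntroducedSpeciesRecord = {
  scientificName: "Pterois volitans",
  higherTaxonomy: ["Chordata", "Actinopterygii"],
  location: { seaArea: "Western Atlantic" },
  provenance: "alien",
  invasive: true,
  source: "specialist database entry (illustrative)",
};
```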
<urn:uuid:76af8494-bb86-4dcd-b2a2-dfb2aad93a6f>
3.109375
1,577
Knowledge Article
Science & Tech.
15.014308
95,535,588
A new study using mouse "knockouts" shows that genes that control limb formation in insects have similar functions in mammals. Split hand/foot malformation (SHFM) or ectrodactyly (the "lobster claw" anomaly), is a severe congenital malformation syndrome characterised by a profound median cleft of the hands and/or feet, typically associated with absence or fusion of the remaining fingers. This condition is quite frequent as about 6 cases of SHFM are observed for every 10,000 human births. Several forms of SHFM are each associated with a different genetic mutation. One of the most frequent forms called Type I is associated with a specific region of human chromosome 7 that contains two homeobox genes, DLX5 and DLX6. These genes are similar to a gene in insects called distal-less that controls limb development. When this gene is defective in the fruit fly the distal part of the insect limb is missing. It was therefore assumed that DLX5 and DLX6 might have conserved this function through evolution and could have a role in vertebrate limb development. However, in spite of intensive searches for mutations of these genes in SHFM patients, no direct evidence was found to date on their involvement in mammalian limb development. Joanna Gibson | alphagalileo
<urn:uuid:39267ec9-7b3a-4a89-ac7e-56e308f7c412>
3.046875
913
Content Listing
Science & Tech.
40.704463
95,535,597
The official Redux documentation. FANTASTIC writing - not just "here's the API", but "here's what you want to do and how we came up with this" Getting Started with Redux - Video Series Dan Abramov, the creator of Redux, demonstrates various concepts in 30 short (2-5 minute) videos. The linked Github repo contains notes and transcriptions of the videos. Building React Applications with Idiomatic Redux - Video Series Dan Abramov's second video tutorial series, continuing directly after the first. Includes lessons on store initial state, using Redux with React Router, using "selector" functions, normalizing state, use of Redux middleware, async action creators, and more. The linked Github repo contains notes and transcriptions of the videos. Modern Web Development with React and Redux An up-to-date HTML slideshow that introduces React and Redux, discusses why they help make applications easier to write via declarative code and predictable data flow, and demonstrates their basic concepts and syntax. Includes several interactive React component examples. What Does Redux Do? (and when should you use it?) An excellent summary of how Redux helps solve data flow problems in a React app. How Redux Works: A Counter-Example A great follow-up to the previous article. It explains how to use Redux and React-Redux, by first showing a React component that stores a value in its internal state, and then refactoring it to use Redux instead. Along the way, the article explains important Redux terms and concepts, and how they all fit together to make the Redux data flow work properly. Redux and React: An Introduction A great introduction to Redux's core concepts, with explanations of how to use the React-Redux package to use Redux with React. React Redux Tutorial for Beginners: Learning Redux in 2018 An excellent tutorial that covers a variety of Redux fundamentals, including a look at the problems Redux helps solve, when you should learn and use Redux, actions and reducers, the Redux store, and how to connect React components to Redux. Single State Tree + Flux Describes the benefits of a Flux architecture, and a single state tree like Redux has A higher-level description of what Redux is, the major concepts, and why you would want to use it. Also some additional article links. A Cartoon Guide to Redux Another high-level description of Redux, with cartoons A file-based tutorial to Redux (click on each numbered .js file in the repo) Leveling Up with React: Redux A very well-written introduction to Redux and its related concepts, with some nifty cartoon-ish diagrams. Functionally Managing State with Redux A quick overview of Redux's core concepts, and how to use it with React Redux: From Twitter Hype to Production An extremely well-produced slideshow that visually steps through core Redux concepts, usage with React, project organization, and side effects with thunks and sagas. Has some absolutely fantastic animated diagrams demonstrating how data flows through a React+Redux architecture. A variety of user-provided diagrams illustrating how the pieces of Redux fit together. How I Learned to Stop Worrying and Love Redux A new Redux user describes how she was able to overcome initial problems learning Redux. Introduction to Redux and React-Redux A quick overview of core Redux concepts, with code examples for creating a store and hooking up React components to read the data. Redux and React Redux A pair of articles covering basic Redux concepts and usage. 
An Introduction to Redux An overview and intro to the basic concepts of Redux. Looks at the benefits of using Redux, how it differs from MVC or Flux, and its relation to functional programming. Why Redux makes sense to me and how I conceptualize it Some useful analogies for visualizing how Redux works, how the pieces fit together, and why you'd want to use it. Redux10: A Visual Overview of Redux in 10 Steps A small repo with a standalone HTML page. Walks through 10 simple steps to help explain the basics of Redux. How to Use the React-Redux package An excerpt from the "Modern Web Apps with React and Redux" course, explaining how to use React-Redux to connect components to Redux. Pro React beta chapter: Using Redux An alternative version of Pro React's chapter on using Flux that explains Redux usage A set of video tutorials introducing Redux concepts Redux for the Very Beginner A beginner-friendly screencast that introduces Redux React, Redux, and React-Redux Comparison and examples of implementing a filterable list with just React, with "manual" React code reading from Redux, and using the official React-Redux library Redux's Mysterious Connect Function An overview of how to use React-Redux's connectfunction to glue together a Redux store and React components Why You Should Use Redux to Manage Immutability An introduction to several aspects of Redux, including immutability concepts, use of immutable data libraries, using middleware for side effects, and connecting React to Redux An Introduction to Redux / Redux: Would you like to know more? A pair of HTML slideshows that discuss some of the problems of storing application state, how Redux can help solve those problems, and several tradeoffs and benefits of using Redux. DevGuides: Introduction to Redux A tutorial that covers several aspects of Redux, including actions, reducers, usage with React, and middleware. A short but clear explanation of what the React-Redux library does, and how its connectfunction works to interact between React components and a Redux store (including several helpful diagrams). Redux by Example A 4-part series that illustrates core Redux concepts via a series of small example repos, with explanations of the source and concepts in the articles. A diagram that tries to illustrate all the various pieces of the React+Redux API and workflow Beginner's guide to React/Redux - How to start learning and not be overwhelmed A good writeup from a React/Redux beginner, with advice on how to get started learning React and Redux, how to approach building your first meaningful React+Redux application, and related topics such as file structure, data flow, and rendering logic. Getting Started with Redux A pair of very readable tutorials for getting started with Redux. The first introduces Redux's core concepts while building out a small shopping cart example, and the second describes how to transition from storing data using React's setStateover to putting it in Redux instead, and gives examples of managing real-world form state with Redux. When do I know I'm ready for Redux? Walks through a typical process of scaling up a React application, and how Redux can help solve common pain points with data flow. Has some really neat animated diagrams that illustrate how state updates interact with the React component tree. 
4 ways to dispatch actions with Redux Describes different ways to dispatch actions from React components: directly passing the store, using connect, using sagas, and using the Introduction to Redux A basic introduction to the ideas of storing data in Redux and dispatching actions to update that data. Using Redux with React An excellent follow-up to the previous article, with explanations of why Redux is helpful in a React app, and diagrams showing the data flow. A clear descriptive overview of Redux's background, core concepts, principles, and usage with React. Also describes the basics of async behavior, testing, and debugging. Introduction to Redux / A beginner's introduction to working with Redux in React A pair of tutorials that explain the basics of working with a Redux store and how to use the React-Redux library. Immutable Updates in React and Redux Good examples of how to properly update nested state immutably The most common Redux mistake and how to avoid it Quick tips on understanding how to update data immutability, and avoid mutations. Replacing state in Redux reducers: a few approaches Examples of different ways to safely update state in reducers. React application state management with Redux A tutorial that discusses why a state management library is useful in React apps, introduces Redux usage, and shows how to subscribe to Redux store updates both by hand and using React-Redux. An ongoing series of posts intended to demonstrate a number of specific Redux techniques by building a sample application, based on the MekHQ application for managing Battletech campaigns. Written by Redux co-maintainer Mark Erikson. Covers topics like managing relational data, connecting multiple components and lists, complex reducer logic for features, handling forms, showing modal dialogs, and much more. Building a Simple CRUD App with React + Redux A nifty 8-part series that demonstrates building a CRUD app, including routing, AJAX calls, and the various CRUD aspects. Very well written, with some useful diagrams as well. The Soundcloud Client in React + Redux A detailed walkthrough demonstrating project setup, routing, authentication, fetching of remote data, and wrapping of a stateful library. Full-Stack Redux Tutorial A full-blown, in-depth tutorial that builds up a complete client-server application. Getting Started with React, Redux and Immutable: a Test-Driven Tutorial Another solid, in-depth tutorial, similar to the "Full-Stack" tutorial. Builds a client-only TodoMVC app, and demonstrates a good project setup (including a Mocha+JSDOM-based testing configuration). Well-written, covers many concepts, and very easy to follow. Managing Data Flow on the Client Side Walks through a small Redux example, and talks about the benefits Getting Started with Redux Walks through setting up a small Redux app, and builds up each layer Build an Image Gallery using React, Redux, and redux-saga A step-by-step look at building a page with some complex async behavior. Interactive Frontend Development with React and Redux An Estonian university course covering React and Redux. Lecture videos, slides, and course code are all available online (in English). Topics include React philosophy, container components, Redux basics, async actions, middleware, routing, and optimization. Build a React Redux App with JSON Web Token (JWT) Authentication Demonstrates building the client portion of a JWT-authenticated application (follow-up to previous articles that built the server-side). 
Redux Hero: An Intro to Redux and Reselect An introduction to Redux and related libraries through building a small RPG-style game Building a Chat App with React, Redux, and PubNub A four-part tutorial that walks through building a realtime chat app React and Redux Tutorial - Trending Github A five-part tutorial that builds a small app showing trending Github repos. Mapping Colorado's 14er Mountains with React and Redux Demonstrates building an app that uses Google Maps to show markers for locations, as well as cards with info on those locations. Zero to Hero with React and Redux A 2-hour video tutorial that introduces Redux concepts and use with TypeScript. Screencast: Builting a React/Redux App from Scratch A 2-hour screencast demonstrating building a Redux app from the ground up Build a Media Library With React, Redux, and Redux-Saga A two-part tutorial that builds an image and video display and preview app A Practical Guide to Redux A tutorial that introduces the key concepts and usage of Redux through the code in a small sample app. A comprehensive React-Redux tutorial A very long, detailed article that digs into Redux's concepts, and builds a book management application in the process. React and Redux Sagas Authentication App Tutorial A 3-part tutorial that builds a reasonably complex app, using Redux-Saga, Redux-Form and React-Router, with an emphasis on practical aspects of putting things together. Get familiar with React, Redux, and basic personal finance A tutorial that builds a small financial savings calculation app. Build a CRUD App Using React, Redux, and FeathersJS Walks through building a client-server application that uses FeathersJS to set up the server API. Nathan's Guide to Building a React/Redux Application A tutorial intended to bridge the gap between "hello world" tutorials and "real-world" boilerplates. Covers building a server-rendered React application using Webpack 2, React Router 4, Redux, and ES6. Building Tesla's Battery Range Calculator with React+Redux Follows the "plain React" version in Part 1 by introducing basic Redux concepts, and modifying the original version to use Redux for managing state. NodeJS React-Redux Tutorial A multi-part tutorial that covers building a news app with React, Redux, and a Node API server using MongoDB, as well as setting up JWT-based authentication. Simple Redux Create Delete Contact Application Demonstrates building a simple client-side contact list app React + Redux: User Registration and Login Tutorial A tutorial that shows how to build a React+Redux app that uses JWT authentication, with the example based on a real-world application. Build a Bookshop with React & Redux Introduces React and Redux concepts by building a small bookshop app. How to build a Chat Application using React, Redux, Redux-Saga, and Web Sockets A tutorial that demonstrates building a small real-time client-server chat application. CRUD with Redux A video screencast tutorial series that demonstrates how to build a CRUD app with React, Redux, React-Router, and MongoDB. Developing Games with React, Redux, and SVG A 3-part series that shows how to build a Space Invaders-style action game using React for the UI layer, SVG for the display, and Redux for the data flow, as well as socket.io for a real-time leaderboard and Auth0 for authentication. Redux Implementation Walkthroughs Read the Source ep17 - React Redux with Dan Abramov Dan walks through the implementation and concepts of React-Redux. A great follow-up to the Egghead.io tutorial series. 
A very simplified version of React Redux's connect()function that illustrates the basic implementation Build Yourself a Redux An excellent in-depth "build a mini-Redux" article, which covers not only Redux's core, but also connectand middleware as well. Let's Write Redux! Walks through writing a miniature version of Redux step-by-step, to help explain the concepts and implementation. Looks at the core concepts in Redux, and builds up a mini-Redux to demonstrate how Redux works internally. Learning Redux with Reducks Another "build a mini-Redux" article series. AMA with the Redux Creators Dan Abramov and Andrew Clark answer questions about Redux, and describe the history of its original development. Second link is a summary of answers in the AMA. A short screencast that demonstrates building a mini-Redux from scratch Build Alterdux: A Redux-Compatible Library From Scratch A useful example of building a mini-Redux from the ground up, with explanations of some of the ideas that Redux uses. Code your own Redux Another "build a mini-Redux" series, including an explanation of how React-Redux's A dive through the source code of Redux looking at the parts that really matter, with discussion of the design decisions and patterns used and what consequences they have. "Redux without the sanity checks in a single file" A gist from Dan Abramov, showing how Redux's core logic fits into less than 100 LOC. A guided walkthrough of the code for Redux's Implement React Redux from Scratch A 3-part series that builds a slightly simplified version of the React-Redux v5 connectfunction to explain how it works. A good follow-up from Dan's "connect.js explained" gist, which shows the basic conceptual behavior of connect, while this one traces through the internals. "I use React and Redux but never React-Redux, what am I missing out on?" A response I wrote to someone who asked why they should use the React-Redux connectfunction instead of subscribing to the store manually, where I described the benefits of using connectinstead of writing manual subscription code. "Help getting @connect command to work with my Create-React-App project" A comment I wrote describing why the Redux team discourages use of connectas a decorator. A small reimplementation of Redux, with comments explaining how the code works. Finally understand Redux by building your own Store Teaches Redux concepts by going under the hood and building a Redux-equivalent Store class from scratch A miniature Redux and React-Redux implementation built for learning purposes Paid Courses and Books Redux in Action A comprehensive book that covers many key aspects of using Redux, including the basics of reducers and actions and use with React, complex middlewares and side effects, application structure, performance, testing, and much more. Does a great job of explaining the pros, cons, and tradeoffs of many approaches to using Redux. Personally recommended by Redux co-maintainer Mark Erikson. The Complete Redux Book How do I manage a large state in production? Why do I need store enhancers? What is the best way to handle form validations? Get the answers to all these questions and many more using simple terms and sample code. Learn everything you need to use Redux to build complex and production-ready web applications. (Note: now permanently free!) Developing a Redux Edge Modern React with Redux, by Stephen Grider (paid) Master the fundamentals of React and Redux with this tutorial as you develop apps with React Router, Webpack, and ES6. 
This course will get you up and running quickly, and teach you the core knowledge you need to deeply understand and build React components and structure applications with Redux. Redux, by Tyler McGinnis (paid) When learning Redux, you need to learn it in the context of an app big enough to see the benefits. That's why this course is huge. A better name might be 'Real World Redux. If you're sick of "todo list" Redux tutorials, you've come to the right place. In this course we'll talk all about what makes Redux special for managing state in your application. We'll build an actual "real world" application so you can see how Redux handles edge cases like optimistic updates and error handling. We'll also cover many other technologies that work well with Redux, Firebase, and CSS Modules. Learn Redux, by Wes Bos (free) A video course that walks through building 'Reduxstagram' — a simple photo app that will simplify the core ideas behind Redux, React Router and React.js Modern Web Apps with React and Redux (paid) A paid course on TutsPlus that builds a spaced-repetition notecard app.
<urn:uuid:18aea34f-c751-4152-9a45-735e2a7d9839>
2.59375
4,025
Content Listing
Software Dev.
38.62072
95,535,610
Winter has started and Christmas has passed. It is very cold throughout America. Last winter we had a borderline Neutral/La Nina. This time around, we have La Nina. Other factors to consider are Pacific Decadal Oscillation (PDO), Atlantic Multidecadal Oscillation (AMO), Northeast Pacific Warm Pool (NEPWP), Equatorial Indian Ocean (EIOI), Indian Ocean Dipole (IOD), Tropical South Atlantic (TSAI), Roaring Forties (R40I), Hudson and Baffin Bay (HBB), and Quasi-Biennial Oscillation (QBO). However, since this La Nina is large and strong and has significant impact, it will weigh in more than the other factors listed. Here are the analog winters I came up with. I chose these winters because the previous winter was La Nina or Neutral. Here is a table I created to identify the strongest analogs. I look at eight ocean patterns and one upper-wind pattern based on Fall (September to November) averages. The cutoff for further analysis is four matches for winters before 1948 (QBO data are not available); with QBO included, the cutoff is five. We can eliminate these winters. The analogs I will be looking at are: Let’s start with the ever-important temperature. All maps were generated with 20th Century Reanalysis Monthly Composites. They are all Northern Hemisphere. Alaska, Bering Sea, Southern US, Eastern US, Eastern Canada, Greenland, and Western China are warmer than normal. Arctic, Siberia, Korea, Japan, Central Asia, Southeast Asia, Europe, North Africa, Western US, and Western Canada are cooler than normal. Southeast Texas is warmer. Keep in mind, some areas do not have weather records, so this may be spurious as it includes 1881-1882. Wonder what winter will be like in the rain department? It is drier in Western US, Western Canada, Southeast US, Cuba, Bahamas, Southern China, Central Asia, Western Europe, and Northern Europe. It is wetter in Central US, Caribbean, Alaska, Northern Japan, Southeast Asia, and North Africa. Southeast Texas sees an average amount of rain in the winter. Again, this may be spurious as it includes 1881-1882. Let’s look at the upper air pattern. There is upper-level ridging south of Alaska (a negative East Pacific Oscillation, EPO) and over the Eastern US, Siberia, Greenland, and Northeast Canada. Ridging over Greenland and Northeast Canada is a negative North Atlantic Oscillation (NAO). There is troughing over Central Canada, and another trough running from Japan and Korea across all of Central Asia. Negative NAO and EPO usually mean cold air will push down south. What were winters like in these analog years? A warm winter dominated the US. Southeast Texas had a warm winter. A cold blast came at the start of 1929 in Southeast Texas. Another hard freeze came in February 1929. It did not go above freezing on February 9, 1929, with a high of 29°F. A world engulfed in World War II. No freezes occurred in Southeast Texas. Most of the US had a warm winter, including Southeast Texas. Snow fell in the Houston area in December 1961. January 1962 had a strong cold blast in America. A strong high pressure of 1062 millibars was recorded. Many areas saw record lows set. Houston saw a record low on January 10, 1962, which was later beaten in 1977. Cold blasts occurred in January and February 1985. The 1985 Presidential Inauguration was the coldest on record. Many areas saw record lows set. Houston saw record lows on January 20-21, 1985. Near records occurred on February 1-2, 1985. Snow fell in Houston in January and February 1985. San Antonio saw record snowfall on January 11-13, 1985. Eastern US had a cooler than normal winter.
Southeast Texas had a cold winter. I am not suggesting we will see a cold blast on par with February 1929, January 1962, January 1985, or February 1985. It is possible this winter could see more cold blasts. I think this winter could be cooler despite the past analog winters being warm. I would not be surprised to hear of a major cold blast this coming winter or to see snow fall again.
<urn:uuid:aec8b60c-0435-4e29-a63c-378308ee4330>
2.734375
895
Personal Blog
Science & Tech.
58.354721
95,535,613
In physics, string theory is a theoretical framework in which the point-like particles of particle physics are replaced by one-dimensional objects called strings. It describes how these strings propagate through space and interact with each other. On distance scales larger than the string scale, a string looks just like an ordinary particle, with its mass, charge, and other properties determined by the vibrational state of the string. In string theory, one of the many vibrational states of the string corresponds to the graviton, a quantum mechanical particle that carries gravitational force. Thus string theory is a theory of quantum gravity. String theory is a broad and varied subject that attempts to address a number of deep questions of fundamental physics. String theory has been applied to a variety of problems in black hole physics, early universe cosmology, nuclear physics, and condensed matter physics, and it has stimulated a number of major developments in pure mathematics. Because string theory potentially provides a unified description of gravity and particle physics, it is a candidate for a theory of everything, a self-contained mathematical model that describes all fundamental forces and forms of matter. Despite much work on these problems, it is not known to what extent string theory describes the real world or how much freedom the theory allows in the choice of its details. String theory was first studied in the late 1960s as a theory of the strong nuclear force, before being abandoned in favor of quantum chromodynamics. Subsequently, it was realized that the very properties that made string theory unsuitable as a theory of nuclear physics made it a promising candidate for a quantum theory of gravity. The earliest version of string theory, bosonic string theory, incorporated only the class of particles known as bosons. It later developed into superstring theory, which posits a connection called supersymmetry between bosons and the class of particles called fermions. Five consistent versions of superstring theory were developed before it was conjectured in the mid-1990s that they were all different limiting cases of a single theory in eleven dimensions known as M-theory; in late 1997, theorists discovered an important relationship called the AdS/CFT correspondence, which relates string theory to another type of physical theory called a quantum field theory. One of the challenges of string theory is that the full theory does not have a satisfactory definition in all circumstances. Another issue is that the theory is thought to describe an enormous landscape of possible universes, and this has complicated efforts to develop theories of particle physics based on string theory. These issues have led some in the community to criticize these approaches to physics and question the value of continued research on string theory unification.
Contents:
- Fundamentals
- M-theory
- Black holes
- AdS/CFT correspondence
- Phenomenology
- Connections to mathematics
- History
- Criticism
- Notes and references
- Further reading
- External links
In the twentieth century, two theoretical frameworks emerged for formulating the laws of physics. The first is Albert Einstein's general theory of relativity, a theory that explains the force of gravity and the structure of space and time. The other is quantum mechanics, a completely different formulation that describes physical phenomena using probabilistic principles.
By the late 1970s, these two frameworks had proven to be sufficient to explain most of the observed features of the universe, from elementary particles to atoms to the evolution of stars and the universe as a whole. In spite of these successes, there are still many problems that remain to be solved. One of the deepest problems in modern physics is the problem of quantum gravity, the general theory of relativity is formulated within the framework of classical physics, whereas the other fundamental forces are described within the framework of quantum mechanics. A quantum theory of gravity is needed in order to reconcile general relativity with the principles of quantum mechanics, but difficulties arise when one attempts to apply the usual prescriptions of quantum theory to the force of gravity; in addition to the problem of developing a consistent theory of quantum gravity, there are many other fundamental problems in the physics of atomic nuclei, black holes, and the early universe.[a] String theory is a theoretical framework that attempts to address these questions and many others, the starting point for string theory is the idea that the point-like particles of particle physics can also be modeled as one-dimensional objects called strings. String theory describes how strings propagate through space and interact with each other; in a given version of string theory, there is only one kind of string, which may look like a small loop or segment of ordinary string, and it can vibrate in different ways. On distance scales larger than the string scale, a string will look just like an ordinary particle, with its mass, charge, and other properties determined by the vibrational state of the string; in this way, all of the different elementary particles may be viewed as vibrating strings. In string theory, one of the vibrational states of the string gives rise to the graviton, a quantum mechanical particle that carries gravitational force, thus string theory is a theory of quantum gravity. One of the main developments of the past several decades in string theory was the discovery of certain "dualities", mathematical transformations that identify one physical theory with another. Physicists studying string theory have discovered a number of these dualities between different versions of string theory, and this has led to the conjecture that all consistent versions of string theory are subsumed in a single framework known as M-theory. Studies of string theory have also yielded a number of results on the nature of black holes and the gravitational interaction. There are certain paradoxes that arise when one attempts to understand the quantum aspects of black holes, and work on string theory has attempted to clarify these issues; in late 1997 this line of work culminated in the discovery of the anti-de Sitter/conformal field theory correspondence or AdS/CFT. This is a theoretical result which relates string theory to other physical theories which are better understood theoretically, the AdS/CFT correspondence has implications for the study of black holes and quantum gravity, and it has been applied to other subjects, including nuclear and condensed matter physics. Since string theory incorporates all of the fundamental interactions, including gravity, many physicists hope that it fully describes our universe, making it a theory of everything. 
One of the goals of current research in string theory is to find a solution of the theory that reproduces the observed spectrum of elementary particles, with a small cosmological constant, containing dark matter and a plausible mechanism for cosmic inflation. While there has been progress toward these goals, it is not known to what extent string theory describes the real world or how much freedom the theory allows in the choice of details. One of the challenges of string theory is that the full theory does not have a satisfactory definition in all circumstances, the scattering of strings is most straightforwardly defined using the techniques of perturbation theory, but it is not known in general how to define string theory nonperturbatively. It is also not clear whether there is any principle by which string theory selects its vacuum state, the physical state that determines the properties of our universe, these problems have led some in the community to criticize these approaches to the unification of physics and question the value of continued research on these problems. The application of quantum mechanics to physical objects such as the electromagnetic field, which are extended in space and time, is known as quantum field theory; in particle physics, quantum field theories form the basis for our understanding of elementary particles, which are modeled as excitations in the fundamental fields. In quantum field theory, one typically computes the probabilities of various physical events using the techniques of perturbation theory. Developed by Richard Feynman and others in the first half of the twentieth century, perturbative quantum field theory uses special diagrams called Feynman diagrams to organize computations. One imagines that these diagrams depict the paths of point-like particles and their interactions. The starting point for string theory is the idea that the point-like particles of quantum field theory can also be modeled as one-dimensional objects called strings, the interaction of strings is most straightforwardly defined by generalizing the perturbation theory used in ordinary quantum field theory. At the level of Feynman diagrams, this means replacing the one-dimensional diagram representing the path of a point particle by a two-dimensional surface representing the motion of a string. Unlike in quantum field theory, string theory does not have a full non-perturbative definition, so many of the theoretical questions that physicists would like to answer remain out of reach. In theories of particle physics based on string theory, the characteristic length scale of strings is assumed to be on the order of the Planck length, or 10−35 meters, the scale at which the effects of quantum gravity are believed to become significant. On much larger length scales, such as the scales visible in physics laboratories, such objects would be indistinguishable from zero-dimensional point particles, and the vibrational state of the string would determine the type of particle. One of the vibrational states of a string corresponds to the graviton, a quantum mechanical particle that carries the gravitational force. The original version of string theory was bosonic string theory, but this version described only bosons, a class of particles which transmit forces between the matter particles, or fermions. 
Bosonic string theory was eventually superseded by theories called superstring theories. These theories describe both bosons and fermions, and they incorporate a theoretical idea called supersymmetry: a mathematical relation that exists in certain physical theories between the bosons and fermions. In theories with supersymmetry, each boson has a counterpart which is a fermion, and vice versa. There are several versions of superstring theory: type I, type IIA, type IIB, and two flavors of heterotic string theory (SO(32) and E8×E8). The different theories allow different types of strings, and the particles that arise at low energies exhibit different symmetries. For example, the type I theory includes both open strings (which are segments with endpoints) and closed strings (which form closed loops), while types IIA, IIB and heterotic include only closed strings.

In everyday life, there are three familiar dimensions of space: height, width and length. Einstein's general theory of relativity treats time as a dimension on par with the three spatial dimensions; in general relativity, space and time are not modeled as separate entities but are instead unified into a four-dimensional spacetime. In this framework, the phenomenon of gravity is viewed as a consequence of the geometry of spacetime.

In spite of the fact that the universe is well described by four-dimensional spacetime, there are several reasons why physicists consider theories in other dimensions. In some cases, by modeling spacetime in a different number of dimensions, a theory becomes more mathematically tractable, and one can perform calculations and gain general insights more easily.[b] There are also situations where theories in two or three spacetime dimensions are useful for describing phenomena in condensed matter physics. Finally, there exist scenarios in which there could actually be more than four dimensions of spacetime which have nonetheless managed to escape detection.

One notable feature of string theories is that these theories require extra dimensions of spacetime for their mathematical consistency. In bosonic string theory, spacetime is 26-dimensional, while in superstring theory it is 10-dimensional, and in M-theory it is 11-dimensional. In order to describe real physical phenomena using string theory, one must therefore imagine scenarios in which these extra dimensions would not be observed in experiments.

Compactification is one way of modifying the number of dimensions in a physical theory. In compactification, some of the extra dimensions are assumed to "close up" on themselves to form circles. In the limit where these curled up dimensions become very small, one obtains a theory in which spacetime has effectively a lower number of dimensions. A standard analogy for this is to consider a multidimensional object such as a garden hose. If the hose is viewed from a sufficient distance, it appears to have only one dimension, its length. However, as one approaches the hose, one discovers that it contains a second dimension, its circumference. Thus, an ant crawling on the surface of the hose would move in two dimensions. Compactification can be used to construct models in which spacetime is effectively four-dimensional. However, not every way of compactifying the extra dimensions produces a model with the right properties to describe nature; in a viable model of particle physics, the compact extra dimensions must be shaped like a Calabi–Yau manifold.
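The hose analogy can be made slightly more quantitative. A minimal sketch, assuming the simplest Kaluza–Klein setup in which a single extra dimension is curled into a circle of radius R (the notation is standard, not taken from the text):

    \text{spacetime} = M_4 \times K, \qquad p = \frac{n}{R}, \quad n \in \mathbb{Z}

The first expression is the compactification ansatz: four large spacetime dimensions M_4 times a small compact space K. The second shows why small circles hide: momentum around a circle of radius R is quantized in units of 1/R (in units where ħ = 1), so when R is tiny the corresponding excitations are far too heavy to produce in experiments, and the extra dimension escapes detection.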
A Calabi–Yau manifold is a special space which is typically taken to be six-dimensional in applications to string theory. It is named after mathematicians Eugenio Calabi and Shing-Tung Yau. Another approach to reducing the number of dimensions is the so-called brane-world scenario. In this approach, physicists assume that the observable universe is a four-dimensional subspace of a higher dimensional space. In such models, the force-carrying bosons of particle physics arise from open strings with endpoints attached to the four-dimensional subspace, while gravity arises from closed strings propagating through the larger ambient space. This idea plays an important role in attempts to develop models of real world physics based on string theory, and it provides a natural explanation for the weakness of gravity compared to the other fundamental forces.

One notable fact about string theory is that the different versions of the theory all turn out to be related in highly nontrivial ways. One of the relationships that can exist between different string theories is called S-duality. This is a relationship which says that a collection of strongly interacting particles in one theory can, in some cases, be viewed as a collection of weakly interacting particles in a completely different theory. Roughly speaking, a collection of particles is said to be strongly interacting if they combine and decay often, and weakly interacting if they do so infrequently. Type I string theory turns out to be equivalent by S-duality to the SO(32) heterotic string theory. Similarly, type IIB string theory is related to itself in a nontrivial way by S-duality.

Another relationship between different string theories is T-duality. Here one considers strings propagating around a circular extra dimension. T-duality states that a string propagating around a circle of radius R is equivalent to a string propagating around a circle of radius 1/R, in the sense that all observable quantities in one description are identified with quantities in the dual description. For example, a string has momentum as it propagates around a circle, and it can also wind around the circle one or more times; the number of times the string winds around a circle is called the winding number. If a string has momentum p and winding number n in one description, it will have momentum n and winding number p in the dual description (see the formula below). For example, type IIA string theory is equivalent to type IIB string theory via T-duality, and the two versions of heterotic string theory are also related by T-duality.

In general, the term duality refers to a situation where two seemingly different physical systems turn out to be equivalent in a nontrivial way. Two theories related by a duality need not be string theories; for example, Montonen–Olive duality is an example of an S-duality relationship between quantum field theories, and the AdS/CFT correspondence is an example of a duality which relates string theory to a quantum field theory. If two theories are related by a duality, it means that one theory can be transformed in some way so that it ends up looking just like the other theory. The two theories are then said to be dual to one another under the transformation. Put differently, the two theories are mathematically different descriptions of the same phenomena.
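The momentum–winding exchange at the heart of T-duality can be summarized in a single formula. As a sketch in standard conventions (α′ denotes the string scale parameter, not introduced in the text; the radius 1/R quoted above corresponds to units with α′ = 1), the mass spectrum of a closed string on a circle of radius R contains the terms

    M^2 = \left(\frac{n}{R}\right)^2 + \left(\frac{w R}{\alpha'}\right)^2 + \text{oscillator contributions}

where n is the momentum number and w the winding number. The expression is manifestly unchanged under R → α′/R together with n ↔ w, which is precisely the T-duality statement.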
In string theory and other related theories, a brane is a physical object that generalizes the notion of a point particle to higher dimensions. For instance, a point particle can be viewed as a brane of dimension zero, while a string can be viewed as a brane of dimension one. It is also possible to consider higher-dimensional branes; in dimension p, these are called p-branes. The word brane comes from the word "membrane", which refers to a two-dimensional brane. Branes are dynamical objects which can propagate through spacetime according to the rules of quantum mechanics. They have mass and can have other attributes such as charge. A p-brane sweeps out a (p+1)-dimensional volume in spacetime called its worldvolume. Physicists often study fields analogous to the electromagnetic field which live on the worldvolume of a brane.

In string theory, D-branes are an important class of branes that arise when one considers open strings: as an open string propagates through spacetime, its endpoints are required to lie on a D-brane. The letter "D" in D-brane refers to a certain mathematical condition on the system known as the Dirichlet boundary condition. The study of D-branes in string theory has led to important results such as the AdS/CFT correspondence, which has shed light on many problems in quantum field theory.

Branes are frequently studied from a purely mathematical point of view, and they are described as objects of certain categories, such as the derived category of coherent sheaves on a complex algebraic variety, or the Fukaya category of a symplectic manifold. The connection between the physical notion of a brane and the mathematical notion of a category has led to important mathematical insights in the fields of algebraic and symplectic geometry and representation theory.

Prior to 1995, theorists believed that there were five consistent versions of superstring theory (type I, type IIA, type IIB, and two versions of heterotic string theory). This understanding changed in 1995 when Edward Witten suggested that the five theories were just special limiting cases of an eleven-dimensional theory called M-theory. Witten's conjecture was based on the work of a number of other physicists, including Ashoke Sen, Chris Hull, Paul Townsend, and Michael Duff. His announcement led to a flurry of research activity now known as the second superstring revolution.

Unification of superstring theories

In the 1970s, many physicists became interested in supergravity theories, which combine general relativity with supersymmetry. Whereas general relativity makes sense in any number of dimensions, supergravity places an upper limit on the number of dimensions: in 1978, work by Werner Nahm showed that the maximum spacetime dimension in which one can formulate a consistent supersymmetric theory is eleven. In the same year, Eugene Cremmer, Bernard Julia, and Joel Scherk of the École Normale Supérieure showed that supergravity not only permits up to eleven dimensions but is in fact most elegant in this maximal number of dimensions.

Initially, many physicists hoped that by compactifying eleven-dimensional supergravity, it might be possible to construct realistic models of our four-dimensional world. The hope was that such models would provide a unified description of the four fundamental forces of nature: electromagnetism, the strong and weak nuclear forces, and gravity. Interest in eleven-dimensional supergravity soon waned as various flaws in this scheme were discovered.
One of the problems was that the laws of physics appear to distinguish between clockwise and counterclockwise, a phenomenon known as chirality. Edward Witten and others observed that this chirality property cannot be readily derived by compactifying from eleven dimensions.

In the first superstring revolution in 1984, many physicists turned to string theory as a unified theory of particle physics and quantum gravity. Unlike supergravity theory, string theory was able to accommodate the chirality of the standard model, and it provided a theory of gravity consistent with quantum effects. Another feature of string theory that many physicists were drawn to in the 1980s and 1990s was its high degree of uniqueness. In ordinary particle theories, one can consider any collection of elementary particles whose classical behavior is described by an arbitrary Lagrangian. In string theory, the possibilities are much more constrained: by the 1990s, physicists had argued that there were only five consistent supersymmetric versions of the theory.

Although there were only a handful of consistent superstring theories, it remained a mystery why there was not just one consistent formulation. However, as physicists began to examine string theory more closely, they realized that these theories are related in intricate and nontrivial ways. They found that a system of strongly interacting strings can, in some cases, be viewed as a system of weakly interacting strings. This phenomenon is known as S-duality. It was studied by Ashoke Sen in the context of heterotic strings in four dimensions and by Chris Hull and Paul Townsend in the context of the type IIB theory. Theorists also found that different string theories may be related by T-duality. This duality implies that strings propagating on completely different spacetime geometries may be physically equivalent.

At around the same time, as many physicists were studying the properties of strings, a small group of physicists was examining the possible applications of higher dimensional objects. In 1987, Eric Bergshoeff, Ergin Sezgin, and Paul Townsend showed that eleven-dimensional supergravity includes two-dimensional branes. Intuitively, these objects look like sheets or membranes propagating through the eleven-dimensional spacetime. Shortly after this discovery, Michael Duff, Paul Howe, Takeo Inami, and Kellogg Stelle considered a particular compactification of eleven-dimensional supergravity with one of the dimensions curled up into a circle. In this setting, one can imagine the membrane wrapping around the circular dimension. If the radius of the circle is sufficiently small, then this membrane looks just like a string in ten-dimensional spacetime; in fact, Duff and his collaborators showed that this construction reproduces exactly the strings appearing in type IIA superstring theory.

Speaking at a string theory conference in 1995, Edward Witten made the surprising suggestion that all five superstring theories were in fact just different limiting cases of a single theory in eleven spacetime dimensions. Witten's announcement drew together all of the previous results on S- and T-duality and the appearance of higher dimensional branes in string theory. In the months following Witten's announcement, hundreds of new papers appeared on the Internet confirming different parts of his proposal. Today this flurry of work is known as the second superstring revolution.
Initially, some physicists suggested that the new theory was a fundamental theory of membranes, but Witten was skeptical of the role of membranes in the theory. In a paper from 1996, Hořava and Witten wrote "As it has been proposed that the eleven-dimensional theory is a supermembrane theory but there are some reasons to doubt that interpretation, we will non-committally call it the M-theory, leaving to the future the relation of M to membranes." In the absence of an understanding of the true meaning and structure of M-theory, Witten has suggested that the M should stand for "magic", "mystery", or "membrane" according to taste, and the true meaning of the title should be decided when a more fundamental formulation of the theory is known.

In mathematics, a matrix is a rectangular array of numbers or other data. In physics, a matrix model is a particular kind of physical theory whose mathematical formulation involves the notion of a matrix in an important way. A matrix model describes the behavior of a set of matrices within the framework of quantum mechanics. One important example of a matrix model is the BFSS matrix model proposed by Tom Banks, Willy Fischler, Stephen Shenker, and Leonard Susskind in 1997. This theory describes the behavior of a set of nine large matrices. In their original paper, these authors showed, among other things, that the low energy limit of this matrix model is described by eleven-dimensional supergravity. These calculations led them to propose that the BFSS matrix model is exactly equivalent to M-theory. The BFSS matrix model can therefore be used as a prototype for a correct formulation of M-theory and a tool for investigating the properties of M-theory in a relatively simple setting.

The development of the matrix model formulation of M-theory has led physicists to consider various connections between string theory and a branch of mathematics called noncommutative geometry. This subject is a generalization of ordinary geometry in which mathematicians define new geometric notions using tools from noncommutative algebra. In a paper from 1998, Alain Connes, Michael R. Douglas, and Albert Schwarz showed that some aspects of matrix models and M-theory are described by a noncommutative quantum field theory, a special kind of physical theory in which spacetime is described mathematically using noncommutative geometry (see the schematic relation below). This established a link between matrix models and M-theory on the one hand, and noncommutative geometry on the other hand. It quickly led to the discovery of other important links between noncommutative geometry and various physical theories.

In general relativity, a black hole is defined as a region of spacetime in which the gravitational field is so strong that no particle or radiation can escape. In the currently accepted models of stellar evolution, black holes are thought to arise when massive stars undergo gravitational collapse, and many galaxies are thought to contain supermassive black holes at their centers. Black holes are also important for theoretical reasons, as they present profound challenges for theorists attempting to understand the quantum aspects of gravity. String theory has proved to be an important tool for investigating the theoretical properties of black holes because it provides a framework in which theorists can study their thermodynamics.
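Schematically, the "noncommutative" in noncommutative geometry means that spacetime coordinates fail to commute. In the noncommutative field theories arising from the Connes–Douglas–Schwarz analysis mentioned above, the coordinates satisfy a relation of the form (θ is a constant antisymmetric matrix; the notation is standard, not taken from the text)

    [x^\mu, x^\nu] = i \theta^{\mu\nu}

in analogy with the position–momentum commutator of ordinary quantum mechanics, so that there is a fundamental limit to how sharply two coordinates can be specified at once.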
In the branch of physics called statistical mechanics, entropy is a measure of the randomness or disorder of a physical system. This concept was studied in the 1870s by the Austrian physicist Ludwig Boltzmann, who showed that the thermodynamic properties of a gas could be derived from the combined properties of its many constituent molecules. Boltzmann argued that by averaging the behaviors of all the different molecules in a gas, one can understand macroscopic properties such as volume, temperature, and pressure. In addition, this perspective led him to give a precise definition of entropy as the natural logarithm of the number of different states of the molecules (also called microstates) that give rise to the same macroscopic features.

In the twentieth century, physicists began to apply the same concepts to black holes. In most systems such as gases, the entropy scales with the volume. In the 1970s, the physicist Jacob Bekenstein suggested that the entropy of a black hole is instead proportional to the surface area of its event horizon, the boundary beyond which matter and radiation is lost to its gravitational attraction. When combined with ideas of the physicist Stephen Hawking, Bekenstein's work yielded a precise formula for the entropy of a black hole. The Bekenstein–Hawking formula expresses the entropy S as

    S = \frac{k c^3 A}{4 \hbar G}

where A is the area of the event horizon, c is the speed of light, k is Boltzmann's constant, ħ is the reduced Planck constant, and G is Newton's constant. (A numerical illustration of this formula appears below.)

Like any physical system, a black hole has an entropy defined in terms of the number of different microstates that lead to the same macroscopic features. The Bekenstein–Hawking entropy formula gives the expected value of the entropy of a black hole, but by the 1990s, physicists still lacked a derivation of this formula by counting microstates in a theory of quantum gravity. Finding such a derivation of this formula was considered an important test of the viability of any theory of quantum gravity such as string theory.

Derivation within string theory

In a paper from 1996, Andrew Strominger and Cumrun Vafa showed how to derive the Bekenstein–Hawking formula for certain black holes in string theory. Their calculation was based on the observation that D-branes—which look like fluctuating membranes when they are weakly interacting—become dense, massive objects with event horizons when the interactions are strong. In other words, a system of strongly interacting D-branes in string theory is indistinguishable from a black hole. Strominger and Vafa analyzed such D-brane systems and calculated the number of different ways of placing D-branes in spacetime so that their combined mass and charge is equal to a given mass and charge for the resulting black hole. Their calculation reproduced the Bekenstein–Hawking formula exactly, including the factor of 1/4. Subsequent work by Strominger, Vafa, and others refined the original calculations and gave the precise values of the "quantum corrections" needed to describe very small black holes.

The black holes that Strominger and Vafa considered in their original work were quite different from real astrophysical black holes. One difference was that Strominger and Vafa considered only extremal black holes in order to make the calculation tractable. These are defined as black holes with the lowest possible mass compatible with a given charge. Strominger and Vafa also restricted attention to black holes in five-dimensional spacetime with unphysical supersymmetry.
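As a quick numerical illustration of the Bekenstein–Hawking formula above, the short script below evaluates it for a black hole of one solar mass. This is only a back-of-envelope sketch: the constant values are rounded, and the choice of a solar-mass Schwarzschild black hole is an assumption made here for illustration.

    import math

    # Rounded physical constants (SI units)
    G = 6.674e-11      # Newton's constant, m^3 kg^-1 s^-2
    c = 2.998e8        # speed of light, m/s
    hbar = 1.055e-34   # reduced Planck constant, J s
    k = 1.381e-23      # Boltzmann's constant, J/K
    M_sun = 1.989e30   # one solar mass, kg

    # Horizon geometry of a Schwarzschild black hole of mass M_sun
    r_s = 2 * G * M_sun / c**2   # Schwarzschild radius, ~2.95e3 m
    A = 4 * math.pi * r_s**2     # horizon area, m^2

    # Bekenstein–Hawking entropy S = k c^3 A / (4 hbar G)
    S = k * c**3 * A / (4 * hbar * G)

    print(f"S   = {S:.2e} J/K")   # ~1.4e54 J/K
    print(f"S/k = {S / k:.2e}")   # ~1.0e77 (dimensionless)

The dimensionless entropy S/k of order 10⁷⁷ corresponds to roughly e^(10⁷⁷) microstates, which is why a microscopic accounting of black hole entropy is regarded as such a stringent test of any quantum gravity proposal.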
Although it was originally developed in this very particular and physically unrealistic context in string theory, the entropy calculation of Strominger and Vafa has led to a qualitative understanding of how black hole entropy can be accounted for in any theory of quantum gravity. Indeed, in 1998, Strominger argued that the original result could be generalized to an arbitrary consistent theory of quantum gravity without relying on strings or supersymmetry. In collaboration with several other authors in 2010, he showed that some results on black hole entropy could be extended to non-extremal astrophysical black holes.

One approach to formulating string theory and studying its properties is provided by the anti-de Sitter/conformal field theory (AdS/CFT) correspondence. This is a theoretical result which implies that string theory is in some cases equivalent to a quantum field theory. In addition to providing insights into the mathematical structure of string theory, the AdS/CFT correspondence has shed light on many aspects of quantum field theory in regimes where traditional calculational techniques are ineffective. The AdS/CFT correspondence was first proposed by Juan Maldacena in late 1997. Important aspects of the correspondence were elaborated in articles by Steven Gubser, Igor Klebanov, and Alexander Markovich Polyakov, and by Edward Witten. By 2010, Maldacena's article had over 7000 citations, becoming the most highly cited article in the field of high energy physics.[c]

Overview of the correspondence

In the AdS/CFT correspondence, the geometry of spacetime is described in terms of a certain vacuum solution of Einstein's equation called anti-de Sitter space. In very elementary terms, anti-de Sitter space is a mathematical model of spacetime in which the notion of distance between points (the metric) is different from the notion of distance in ordinary Euclidean geometry. It is closely related to hyperbolic space, which can be viewed as a disk tessellated by triangles and squares. One can define the distance between points of this disk in such a way that all the triangles and squares are the same size and the circular outer boundary is infinitely far from any point in the interior.

One can imagine a stack of hyperbolic disks where each disk represents the state of the universe at a given time. The resulting geometric object is three-dimensional anti-de Sitter space. It looks like a solid cylinder in which any cross section is a copy of the hyperbolic disk. Time runs along the vertical direction in this picture. The surface of this cylinder plays an important role in the AdS/CFT correspondence. As with the hyperbolic plane, anti-de Sitter space is curved in such a way that any point in the interior is actually infinitely far from this boundary surface.

This construction describes a hypothetical universe with only two space dimensions and one time dimension, but it can be generalized to any number of dimensions. Indeed, hyperbolic space can have more than two dimensions and one can "stack up" copies of hyperbolic space to get higher-dimensional models of anti-de Sitter space.

An important feature of anti-de Sitter space is its boundary (which looks like a cylinder in the case of three-dimensional anti-de Sitter space). One property of this boundary is that, within a small region on the surface around any given point, it looks just like Minkowski space, the model of spacetime used in nongravitational physics.
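In coordinates, the curvature and boundary structure just described can be seen in one standard presentation of anti-de Sitter space. As a sketch (Poincaré coordinates with curvature radius L; this particular chart is chosen here for illustration and is not part of the text above):

    ds^2 = \frac{L^2}{z^2} \left( -dt^2 + d\vec{x}^2 + dz^2 \right)

The conformal boundary sits at z → 0, and the overall 1/z² factor is what makes every interior point infinitely far, in spatial distance, from that boundary, just as every interior point of the hyperbolic disk is infinitely far from its rim.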
One can therefore consider an auxiliary theory in which "spacetime" is given by the boundary of anti-de Sitter space. This observation is the starting point for the AdS/CFT correspondence, which states that the boundary of anti-de Sitter space can be regarded as the "spacetime" for a quantum field theory. The claim is that this quantum field theory is equivalent to a gravitational theory, such as string theory, in the bulk anti-de Sitter space, in the sense that there is a "dictionary" for translating entities and calculations in one theory into their counterparts in the other theory. For example, a single particle in the gravitational theory might correspond to some collection of particles in the boundary theory. In addition, the predictions in the two theories are quantitatively identical, so that if two particles have a 40 percent chance of colliding in the gravitational theory, then the corresponding collections in the boundary theory would also have a 40 percent chance of colliding.

Applications to quantum gravity

The discovery of the AdS/CFT correspondence was a major advance in physicists' understanding of string theory and quantum gravity. One reason for this is that the correspondence provides a formulation of string theory in terms of quantum field theory, which is well understood by comparison. Another reason is that it provides a general framework in which physicists can study and attempt to resolve the paradoxes of black holes.

In 1975, Stephen Hawking published a calculation which suggested that black holes are not completely black but emit a dim radiation due to quantum effects near the event horizon. At first, Hawking's result posed a problem for theorists because it suggested that black holes destroy information. More precisely, Hawking's calculation seemed to conflict with one of the basic postulates of quantum mechanics, which states that physical systems evolve in time according to the Schrödinger equation. This property is usually referred to as unitarity of time evolution. The apparent contradiction between Hawking's calculation and the unitarity postulate of quantum mechanics came to be known as the black hole information paradox.

The AdS/CFT correspondence resolves the black hole information paradox, at least to some extent, because it shows how a black hole can evolve in a manner consistent with quantum mechanics in some contexts. Indeed, one can consider black holes in the context of the AdS/CFT correspondence, and any such black hole corresponds to a configuration of particles on the boundary of anti-de Sitter space. These particles obey the usual rules of quantum mechanics and in particular evolve in a unitary fashion, so the black hole must also evolve in a unitary fashion, respecting the principles of quantum mechanics. In 2005, Hawking announced that the paradox had been settled in favor of information conservation by the AdS/CFT correspondence, and he suggested a concrete mechanism by which black holes might preserve information.

Applications to nuclear physics

In addition to its applications to theoretical problems in quantum gravity, the AdS/CFT correspondence has been applied to a variety of problems in quantum field theory. One physical system that has been studied using the AdS/CFT correspondence is the quark–gluon plasma, an exotic state of matter produced in particle accelerators. This state of matter arises for brief instants when heavy ions such as gold or lead nuclei are collided at high energies.
Such collisions cause the quarks that make up atomic nuclei to deconfine at temperatures of approximately two trillion kelvins, conditions similar to those present at around 10⁻¹¹ seconds after the Big Bang. The physics of the quark–gluon plasma is governed by a theory called quantum chromodynamics, but this theory is mathematically intractable in problems involving the quark–gluon plasma.[d] In an article appearing in 2005, Đàm Thanh Sơn and his collaborators showed that the AdS/CFT correspondence could be used to understand some aspects of the quark–gluon plasma by describing it in the language of string theory. By applying the AdS/CFT correspondence, Sơn and his collaborators were able to describe the quark–gluon plasma in terms of black holes in five-dimensional spacetime. The calculation showed that the ratio of two quantities associated with the quark–gluon plasma, the shear viscosity η and the volume density of entropy s, should be approximately equal to a certain universal constant, namely η/s = ħ/(4πk), where k is Boltzmann's constant. In 2008, the predicted value of this ratio for the quark–gluon plasma was confirmed at the Relativistic Heavy Ion Collider at Brookhaven National Laboratory.

Applications to condensed matter physics

The AdS/CFT correspondence has also been used to study aspects of condensed matter physics. Over the decades, experimental condensed matter physicists have discovered a number of exotic states of matter, including superconductors and superfluids. These states are described using the formalism of quantum field theory, but some phenomena are difficult to explain using standard field theoretic techniques. Some condensed matter theorists, including Subir Sachdev, hope that the AdS/CFT correspondence will make it possible to describe these systems in the language of string theory and learn more about their behavior. So far some success has been achieved in using string theory methods to describe the transition of a superfluid to an insulator.

A superfluid is a system of electrically neutral atoms that flows without any friction. Such systems are often produced in the laboratory using liquid helium, but recently experimentalists have developed new ways of producing artificial superfluids by pouring trillions of cold atoms into a lattice of criss-crossing lasers. These atoms initially behave as a superfluid, but as experimentalists increase the intensity of the lasers, they become less mobile and then suddenly transition to an insulating state. During the transition, the atoms behave in an unusual way. For example, the atoms slow to a halt at a rate that depends on the temperature and on Planck's constant, the fundamental parameter of quantum mechanics, which does not enter into the description of the other phases. This behavior has recently been understood by considering a dual description where properties of the fluid are described in terms of a higher dimensional black hole.

In addition to being an idea of considerable theoretical interest, string theory provides a framework for constructing models of real world physics that combine general relativity and particle physics. Phenomenology is the branch of theoretical physics in which physicists construct realistic models of nature from more abstract theoretical ideas. String phenomenology is the part of string theory that attempts to construct realistic or semi-realistic models based on string theory.
Partly because of theoretical and mathematical difficulties and partly because of the extremely high energies needed to test these theories experimentally, there is so far no experimental evidence that would unambiguously point to any of these models being a correct fundamental description of nature. This has led some in the community to criticize these approaches to unification and question the value of continued research on these problems.

The currently accepted theory describing elementary particles and their interactions is known as the standard model of particle physics. This theory provides a unified description of three of the fundamental forces of nature: electromagnetism and the strong and weak nuclear forces. Despite its remarkable success in explaining a wide range of physical phenomena, the standard model cannot be a complete description of reality. This is because the standard model fails to incorporate the force of gravity and because of problems such as the hierarchy problem and the inability to explain the structure of fermion masses or dark matter.

String theory has been used to construct a variety of models of particle physics going beyond the standard model. Typically, such models are based on the idea of compactification. Starting with the ten- or eleven-dimensional spacetime of string or M-theory, physicists postulate a shape for the extra dimensions. By choosing this shape appropriately, they can construct models roughly similar to the standard model of particle physics, together with additional undiscovered particles. One popular way of deriving realistic physics from string theory is to start with the heterotic theory in ten dimensions and assume that the six extra dimensions of spacetime are shaped like a six-dimensional Calabi–Yau manifold. Such compactifications offer many ways of extracting realistic physics from string theory. Other similar methods can be used to construct realistic or semi-realistic models of our four-dimensional world based on M-theory.

The Big Bang theory is the prevailing cosmological model for the universe from the earliest known periods through its subsequent large-scale evolution. Despite its success in explaining many observed features of the universe, including galactic redshifts, the relative abundance of light elements such as hydrogen and helium, and the existence of a cosmic microwave background, there are several questions that remain unanswered. For example, the standard Big Bang model does not explain why the universe appears to be the same in all directions, why it appears flat on very large distance scales, or why certain hypothesized particles such as magnetic monopoles are not observed in experiments.

Currently, the leading candidate for a theory going beyond the Big Bang is the theory of cosmic inflation. Developed by Alan Guth and others in the 1980s, inflation postulates a period of extremely rapid accelerated expansion of the universe prior to the expansion described by the standard Big Bang theory. The theory of cosmic inflation preserves the successes of the Big Bang while providing a natural explanation for some of the mysterious features of the universe. The theory has also received striking support from observations of the cosmic microwave background, the radiation that has filled the sky since around 380,000 years after the Big Bang.
In the theory of inflation, the rapid initial expansion of the universe is caused by a hypothetical particle called the inflaton. The exact properties of this particle are not fixed by the theory but should ultimately be derived from a more fundamental theory such as string theory. Indeed, there have been a number of attempts to identify an inflaton within the spectrum of particles described by string theory, and to study inflation using string theory. While these approaches might eventually find support in observational data such as measurements of the cosmic microwave background, the application of string theory to cosmology is still in its early stages.

Connections to mathematics

In addition to influencing research in theoretical physics, string theory has stimulated a number of major developments in pure mathematics. Like many developing ideas in theoretical physics, string theory does not at present have a mathematically rigorous formulation in which all of its concepts can be defined precisely. As a result, physicists who study string theory are often guided by physical intuition to conjecture relationships between the seemingly different mathematical structures that are used to formalize different parts of the theory. These conjectures are later proved by mathematicians, and in this way, string theory serves as a source of new ideas in pure mathematics.

After Calabi–Yau manifolds had entered physics as a way to compactify extra dimensions in string theory, many physicists began studying these manifolds. In the late 1980s, several physicists noticed that given such a compactification of string theory, it is not possible to reconstruct uniquely a corresponding Calabi–Yau manifold. Instead, two different versions of string theory, type IIA and type IIB, can be compactified on completely different Calabi–Yau manifolds giving rise to the same physics. In this situation, the manifolds are called mirror manifolds, and the relationship between the two physical theories is called mirror symmetry. Regardless of whether Calabi–Yau compactifications of string theory provide a correct description of nature, the existence of the mirror duality between different string theories has significant mathematical consequences. The Calabi–Yau manifolds used in string theory are of interest in pure mathematics, and mirror symmetry allows mathematicians to solve problems in enumerative geometry, a branch of mathematics concerned with counting the numbers of solutions to geometric questions.

Enumerative geometry studies a class of geometric objects called algebraic varieties which are defined by the vanishing of polynomials. For example, the Clebsch cubic is an algebraic variety defined using a certain polynomial of degree three in four variables. A celebrated result of nineteenth-century mathematicians Arthur Cayley and George Salmon states that there are exactly 27 straight lines that lie entirely on such a surface. Generalizing this problem, one can ask how many lines can be drawn on a quintic Calabi–Yau manifold, which is defined by a polynomial of degree five. This problem was solved by the nineteenth-century German mathematician Hermann Schubert, who found that there are exactly 2,875 such lines. In 1986, geometer Sheldon Katz proved that the number of curves, such as circles, that are defined by polynomials of degree two and lie entirely in the quintic is 609,250.
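For concreteness, a quintic Calabi–Yau manifold of the kind discussed above can be presented as the vanishing locus of a degree-five polynomial in complex projective four-space; a standard sketch (the specific presentation is an illustrative choice, not taken from the text):

    Q = \{ [x_0 : x_1 : x_2 : x_3 : x_4] \in \mathbb{CP}^4 \; : \; P_5(x_0, \ldots, x_4) = 0 \}

where P_5 is a sufficiently generic homogeneous polynomial of degree five. The counts quoted above (2,875 lines and 609,250 degree-two curves) are counts of rational curves lying on such a generic quintic.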
By the year 1991, most of the classical problems of enumerative geometry had been solved and interest in enumerative geometry had begun to diminish. The field was reinvigorated in May 1991, when physicists Philip Candelas, Xenia de la Ossa, Paul Green, and Linda Parks showed that mirror symmetry could be used to translate difficult mathematical questions about one Calabi–Yau manifold into easier questions about its mirror. In particular, they used mirror symmetry to show that a six-dimensional Calabi–Yau manifold can contain exactly 317,206,375 curves of degree three. In addition to counting degree-three curves, Candelas and his collaborators obtained a number of more general results for counting rational curves which went far beyond the results obtained by mathematicians.

Originally, these results of Candelas were justified on physical grounds. However, mathematicians generally prefer rigorous proofs that do not require an appeal to physical intuition. Inspired by physicists' work on mirror symmetry, mathematicians have therefore constructed their own arguments proving the enumerative predictions of mirror symmetry.[e] Today mirror symmetry is an active area of research in mathematics, and mathematicians are working to develop a more complete mathematical understanding of mirror symmetry based on physicists' intuition. Major approaches to mirror symmetry include the homological mirror symmetry program of Maxim Kontsevich and the SYZ conjecture of Andrew Strominger, Shing-Tung Yau, and Eric Zaslow.

Group theory is the branch of mathematics that studies the concept of symmetry. For example, one can consider a geometric shape such as an equilateral triangle. There are various operations that one can perform on this triangle without changing its shape: one can rotate it through 120°, 240°, or 360°, or one can reflect it in any of its three axes of symmetry. Each of these operations is called a symmetry, and the collection of these symmetries satisfies certain technical properties making it into what mathematicians call a group. In this particular example, the group is known as the dihedral group of order 6 because it has six elements. A general group may describe finitely many or infinitely many symmetries; if there are only finitely many symmetries, it is called a finite group.

Mathematicians often strive for a classification (or list) of all mathematical objects of a given type. It is generally believed that finite groups are too diverse to admit a useful classification. A more modest but still challenging problem is to classify all finite simple groups. These are finite groups which may be used as building blocks for constructing arbitrary finite groups in the same way that prime numbers can be used to construct arbitrary whole numbers by taking products.[f] One of the major achievements of contemporary group theory is the classification of finite simple groups, a mathematical theorem which provides a list of all possible finite simple groups. This classification theorem identifies several infinite families of groups as well as 26 additional groups which do not fit into any family. The latter groups are called the "sporadic" groups, and each one owes its existence to a remarkable combination of circumstances. The largest sporadic group, the so-called monster group, has over 10⁵³ elements, more than a thousand times the number of atoms in the Earth.

A seemingly unrelated construction is the j-function of number theory.
This object belongs to a special class of functions called modular functions, whose graphs form a certain kind of repeating pattern. Although this function appears in a branch of mathematics which seems very different from the theory of finite groups, the two subjects turn out to be intimately related. In the late 1970s, mathematicians John McKay and John Thompson noticed that certain numbers arising in the analysis of the monster group (namely, the dimensions of its irreducible representations) are related to numbers that appear in a formula for the j-function (namely, the coefficients of its Fourier series). For example, the j-function's first nontrivial Fourier coefficient is 196,884 = 196,883 + 1, where 196,883 is the dimension of the smallest nontrivial irreducible representation of the monster group. This relationship was further developed by John Horton Conway and Simon Norton, who called it monstrous moonshine because it seemed so far-fetched.

In 1992, Richard Borcherds constructed a bridge between the theory of modular functions and finite groups and, in the process, explained the observations of McKay and Thompson. Borcherds' work used ideas from string theory in an essential way, extending earlier results of Igor Frenkel, James Lepowsky, and Arne Meurman, who had realized the monster group as the symmetries of a particular version of string theory. In 1998, Borcherds was awarded the Fields Medal for his work.

Since the 1990s, the connection between string theory and moonshine has led to further results in mathematics and physics. In 2010, physicists Tohru Eguchi, Hirosi Ooguri, and Yuji Tachikawa discovered connections between a different sporadic group, the Mathieu group M24, and a certain version of string theory. Miranda Cheng, John Duncan, and Jeffrey A. Harvey proposed a generalization of this moonshine phenomenon called umbral moonshine, and their conjecture was proved mathematically by Duncan, Michael Griffin, and Ken Ono. Witten has also speculated that the version of string theory appearing in monstrous moonshine might be related to a certain simplified model of gravity in three spacetime dimensions.

Some of the structures reintroduced by string theory arose for the first time much earlier as part of the program of classical unification started by Albert Einstein. The first person to add a fifth dimension to a theory of gravity was Gunnar Nordström in 1914, who noted that gravity in five dimensions describes both gravity and electromagnetism in four. Nordström attempted to unify electromagnetism with his theory of gravitation, which was however superseded by Einstein's general relativity in 1919. Thereafter, German mathematician Theodor Kaluza combined the fifth dimension with general relativity, and only Kaluza is usually credited with the idea. In 1926, the Swedish physicist Oskar Klein gave a physical interpretation of the unobservable extra dimension—it is wrapped into a small circle. Einstein introduced a non-symmetric metric tensor, while much later Brans and Dicke added a scalar component to gravity. These ideas would be revived within string theory, where they are demanded by consistency conditions.

String theory was originally developed during the late 1960s and early 1970s as a never completely successful theory of hadrons, the subatomic particles like the proton and neutron that feel the strong interaction. In the 1960s, Geoffrey Chew and Steven Frautschi discovered that the mesons make families called Regge trajectories with masses related to spins in a way that was later understood by Yoichiro Nambu, Holger Bech Nielsen and Leonard Susskind to be the relationship expected from rotating strings.
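The Regge-trajectory relationship mentioned above is linear: along a leading trajectory, a hadron's spin J grows linearly with the square of its mass,

    J = \alpha(0) + \alpha' M^2

where the slope α′ is approximately universal across trajectories. A relativistic rotating string reproduces exactly this linear relation, with α′ fixed by the string tension, which is what Nambu, Nielsen, and Susskind recognized.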
Chew advocated making a theory for the interactions of these trajectories that did not presume that they were composed of any fundamental particles, but would construct their interactions from self-consistency conditions on the S-matrix. The S-matrix approach was started by Werner Heisenberg in the 1940s as a way of constructing a theory that did not rely on the local notions of space and time, which Heisenberg believed break down at the nuclear scale. While the scale was off by many orders of magnitude, the approach he advocated was ideally suited for a theory of quantum gravity.

Working with experimental data, R. Dolen, D. Horn and C. Schmid developed some sum rules for hadron exchange. When a particle and antiparticle scatter, virtual particles can be exchanged in two qualitatively different ways. In the s-channel, the two particles annihilate to make temporary intermediate states that fall apart into the final state particles. In the t-channel, the particles exchange intermediate states by emission and absorption. In field theory, the two contributions add together, one giving a continuous background contribution, the other giving peaks at certain energies. In the data, it was clear that the peaks were stealing from the background—the authors interpreted this as saying that the t-channel contribution was dual to the s-channel one, meaning both described the whole amplitude and included the other.

The result was widely advertised by Murray Gell-Mann, leading Gabriele Veneziano to construct a scattering amplitude that had the property of Dolen–Horn–Schmid duality, later renamed world-sheet duality. The amplitude needed poles where the particles appear, on straight-line trajectories, and there is a special mathematical function whose poles are evenly spaced on half the real line—the gamma function—which was widely used in Regge theory. By manipulating combinations of gamma functions, Veneziano was able to find a consistent scattering amplitude with poles on straight lines, with mostly positive residues, which obeyed duality and had the appropriate Regge scaling at high energy (its closed form is recalled below). The amplitude could fit near-beam scattering data as well as other Regge-type fits, and had a suggestive integral representation that could be used for generalization.

Over the next years, hundreds of physicists worked to complete the bootstrap program for this model, with many surprises. Veneziano himself discovered that for the scattering amplitude to describe the scattering of a particle that appears in the theory, an obvious self-consistency condition, the lightest particle must be a tachyon. Miguel Virasoro and Joel Shapiro found a different amplitude now understood to be that of closed strings, while Ziro Koba and Holger Nielsen generalized Veneziano's integral representation to multiparticle scattering. Veneziano and Sergio Fubini introduced an operator formalism for computing the scattering amplitudes that was a forerunner of world-sheet conformal theory, while Virasoro understood how to remove the poles with wrong-sign residues using a constraint on the states. Claud Lovelace calculated a loop amplitude, and noted that there is an inconsistency unless the dimension of the theory is 26. Charles Thorn, Peter Goddard and Richard Brower went on to prove that there are no wrong-sign propagating states in dimensions less than or equal to 26.

In 1969–70, Yoichiro Nambu, Holger Bech Nielsen, and Leonard Susskind recognized that the theory could be given a description in space and time in terms of strings.
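The amplitude Veneziano found (recalled here in its standard closed form; conventions for the trajectory function vary) is built from exactly the gamma functions described above:

    A(s, t) = \frac{\Gamma(-\alpha(s)) \, \Gamma(-\alpha(t))}{\Gamma(-\alpha(s) - \alpha(t))}, \qquad \alpha(x) = \alpha(0) + \alpha' x

The gamma functions in the numerator supply poles whenever α(s) or α(t) hits a non-negative integer, giving evenly spaced poles along straight-line trajectories, while the symmetry of the expression under s ↔ t is the Dolen–Horn–Schmid (world-sheet) duality.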
The scattering amplitudes were derived systematically from the action principle by Peter Goddard, Jeffrey Goldstone, Claudio Rebbi, and Charles Thorn, giving a space-time picture to the vertex operators introduced by Veneziano and Fubini and a geometrical interpretation to the Virasoro conditions. In 1971, Pierre Ramond added fermions to the model, which led him to formulate a two-dimensional supersymmetry to cancel the wrong-sign states. John Schwarz and André Neveu added another sector to the fermi theory a short time later. In the fermion theories, the critical dimension was 10. Stanley Mandelstam formulated a world-sheet conformal theory for both the bose and fermi cases, giving a two-dimensional field theoretic path-integral to generate the operator formalism. Michio Kaku and Keiji Kikkawa gave a different formulation of the bosonic string, as a string field theory, with infinitely many particle types and with fields taking values not on points, but on loops and curves.

In 1974, Tamiaki Yoneya discovered that all the known string theories included a massless spin-two particle that obeyed the correct Ward identities to be a graviton. John Schwarz and Joel Scherk came to the same conclusion and made the bold leap to suggest that string theory was a theory of gravity, not a theory of hadrons. They reintroduced Kaluza–Klein theory as a way of making sense of the extra dimensions. At the same time, quantum chromodynamics was recognized as the correct theory of hadrons, shifting the attention of physicists and apparently leaving the bootstrap program in the dustbin of history.

String theory eventually made it out of the dustbin, but for the following decade all work on the theory was completely ignored. Still, the theory continued to develop at a steady pace thanks to the work of a handful of devotees. Ferdinando Gliozzi, Joel Scherk, and David Olive realized in 1977 that the original Ramond and Neveu–Schwarz strings were separately inconsistent and needed to be combined. The resulting theory did not have a tachyon, and was proven to have space-time supersymmetry by John Schwarz and Michael Green in 1984. The same year, Alexander Polyakov gave the theory a modern path integral formulation, and went on to develop conformal field theory extensively. In 1979, Daniel Friedan showed that the equations of motion of string theory, which are generalizations of the Einstein equations of general relativity, emerge from the renormalization group equations for the two-dimensional field theory. Schwarz and Green discovered T-duality, and constructed two superstring theories, IIA and IIB, related by T-duality, as well as type I theories with open strings. The consistency conditions had been so strong that the entire theory was nearly uniquely determined, with only a few discrete choices.

First superstring revolution

In the early 1980s, Edward Witten discovered that most theories of quantum gravity could not accommodate chiral fermions like the neutrino. This led him, in collaboration with Luis Álvarez-Gaumé, to study violations of the conservation laws in gravity theories with anomalies, concluding that type I string theories were inconsistent. Green and Schwarz discovered a contribution to the anomaly that Witten and Álvarez-Gaumé had missed, which restricted the gauge group of the type I string theory to be SO(32). In coming to understand this calculation, Edward Witten became convinced that string theory was truly a consistent theory of gravity, and he became a high-profile advocate.
Following Witten's lead, between 1984 and 1986, hundreds of physicists started to work in this field, and this is sometimes called the first superstring revolution. During this period, David Gross, Jeffrey Harvey, Emil Martinec, and Ryan Rohm discovered heterotic strings. The gauge group of these closed strings was two copies of E8, and either copy could easily and naturally include the standard model. Philip Candelas, Gary Horowitz, Andrew Strominger and Edward Witten found that the Calabi–Yau manifolds are the compactifications that preserve a realistic amount of supersymmetry, while Lance Dixon and others worked out the physical properties of orbifolds, distinctive geometrical singularities allowed in string theory. Cumrun Vafa generalized T-duality from circles to arbitrary manifolds, creating the mathematical field of mirror symmetry. Daniel Friedan, Emil Martinec and Stephen Shenker further developed the covariant quantization of the superstring using conformal field theory techniques. David Gross and Vipul Periwal discovered that string perturbation theory was divergent. Stephen Shenker showed it diverged much faster than in field theory, suggesting that new non-perturbative objects were missing.

In the 1990s, Joseph Polchinski discovered that the theory requires higher-dimensional objects, called D-branes, and identified these with the black-hole solutions of supergravity. These were understood to be the new objects suggested by the perturbative divergences, and they opened up a new field with rich mathematical structure. It quickly became clear that D-branes and other p-branes, not just strings, formed the matter content of the string theories, and the physical interpretation of the strings and branes was revealed—they are a type of black hole. Leonard Susskind had incorporated the holographic principle of Gerardus 't Hooft into string theory, identifying the long highly excited string states with ordinary thermal black hole states. As suggested by 't Hooft, the fluctuations of the black hole horizon, the world-sheet or world-volume theory, describe not only the degrees of freedom of the black hole, but all nearby objects too.

Second superstring revolution

In 1995, at the annual conference of string theorists at the University of Southern California (USC), Edward Witten gave a speech on string theory that in essence united the five string theories that existed at the time, giving birth to a new 11-dimensional theory called M-theory. M-theory was also foreshadowed in the work of Paul Townsend at approximately the same time. The flurry of activity that began at this time is sometimes called the second superstring revolution.

During this period, Tom Banks, Willy Fischler, Stephen Shenker and Leonard Susskind formulated matrix theory, a full holographic description of M-theory using IIA D0 branes. This was the first definition of string theory that was fully non-perturbative and a concrete mathematical realization of the holographic principle. It is an example of a gauge-gravity duality and is now understood to be a special case of the AdS/CFT correspondence. Andrew Strominger and Cumrun Vafa calculated the entropy of certain configurations of D-branes and found agreement with the semi-classical answer for extreme charged black holes. Petr Hořava and Witten found the eleven-dimensional formulation of the heterotic string theories, showing that orbifolds solve the chirality problem.
Witten noted that the effective description of the physics of D-branes at low energies is by a supersymmetric gauge theory, and found geometrical interpretations of mathematical structures in gauge theory that he and Nathan Seiberg had earlier discovered in terms of the location of the branes.

In 1997, Juan Maldacena noted that the low energy excitations of a theory near a black hole consist of objects close to the horizon, which for extreme charged black holes looks like an anti-de Sitter space. He noted that in this limit the gauge theory describes the string excitations near the branes. So he hypothesized that string theory on a near-horizon extreme-charged black-hole geometry, an anti-de Sitter space times a sphere with flux, is equally well described by the low-energy limiting gauge theory, the N = 4 supersymmetric Yang–Mills theory. This hypothesis, which is called the AdS/CFT correspondence, was further developed by Steven Gubser, Igor Klebanov and Alexander Polyakov, and by Edward Witten, and it is now well-accepted. It is a concrete realization of the holographic principle, which has far-reaching implications for black holes, locality and information in physics, as well as the nature of the gravitational interaction. Through this relationship, string theory has been shown to be related to gauge theories like quantum chromodynamics, and this has led to a more quantitative understanding of the behavior of hadrons, bringing string theory back to its roots.

Number of solutions

To construct models of particle physics based on string theory, physicists typically begin by specifying a shape for the extra dimensions of spacetime. Each of these different shapes corresponds to a different possible universe, or "vacuum state", with a different collection of particles and forces. String theory as it is currently understood has an enormous number of vacuum states, typically estimated to be around 10⁵⁰⁰, and these might be sufficiently diverse to accommodate almost any phenomena that might be observed at low energies.

Many critics of string theory have expressed concerns about the large number of possible universes described by string theory. In his book Not Even Wrong, Peter Woit, a lecturer in the mathematics department at Columbia University, has argued that the large number of different physical scenarios renders string theory vacuous as a framework for constructing models of particle physics. According to Woit:

The possible existence of, say, 10⁵⁰⁰ consistent different vacuum states for superstring theory probably destroys the hope of using the theory to predict anything. If one picks among this large set just those states whose properties agree with present experimental observations, it is likely there still will be such a large number of these that one can get just about whatever value one wants for the results of any new observation.

Some physicists believe this large number of solutions is actually a virtue because it may allow a natural anthropic explanation of the observed values of physical constants, in particular the small value of the cosmological constant. The anthropic principle is the idea that some of the numbers appearing in the laws of physics are not fixed by any fundamental principle but must be compatible with the evolution of intelligent life. In 1987, Steven Weinberg published an article in which he argued that the cosmological constant could not have been too large, or else galaxies and intelligent life would not have been able to develop.
Weinberg suggested that there might be a huge number of possible consistent universes, each with a different value of the cosmological constant, and observations indicate a small value of the cosmological constant only because humans happen to live in a universe that has allowed intelligent life, and hence observers, to exist. String theorist Leonard Susskind has argued that string theory provides a natural anthropic explanation of the small value of the cosmological constant. According to Susskind, the different vacuum states of string theory might be realized as different universes within a larger multiverse; the fact that the observed universe has a small cosmological constant is then just a tautological consequence of the fact that a small value is required for life to exist. Many prominent theorists and critics have disagreed with Susskind's conclusions. According to Woit, "in this case [anthropic reasoning] is nothing more than an excuse for failure. Speculative scientific ideas fail not just when they make incorrect predictions, but also when they turn out to be vacuous and incapable of predicting anything." One of the fundamental properties of Einstein's general theory of relativity is that it is background independent, meaning that the formulation of the theory does not in any way privilege a particular spacetime geometry. One of the main criticisms of string theory from early on is that it is not manifestly background independent; in string theory, one must typically specify a fixed reference geometry for spacetime, and all other possible geometries are described as perturbations of this fixed one. In his book The Trouble With Physics, physicist Lee Smolin of the Perimeter Institute for Theoretical Physics claims that this is the principal weakness of string theory as a theory of quantum gravity, saying that string theory has failed to incorporate this important insight from general relativity. Others have disagreed with Smolin's characterization of string theory. In a review of Smolin's book, string theorist Joseph Polchinski writes: "[Smolin] is mistaking an aspect of the mathematical language being used for one of the physics being described. New physical theories are often discovered using a mathematical language that is not the most suitable for them… In string theory it has always been clear that the physics is background-independent even if the language being used is not, and the search for more suitable language continues. Indeed, as Smolin belatedly notes, [AdS/CFT] provides a solution to this problem, one that is unexpected and powerful." Polchinski notes that an important open problem in quantum gravity is to develop holographic descriptions of gravity which do not require the gravitational field to be asymptotically anti-de Sitter. Smolin has responded by saying that the AdS/CFT correspondence, as it is currently understood, may not be strong enough to resolve all concerns about background independence.[g] Sociology of science Since the superstring revolutions of the 1980s and 1990s, string theory has become the dominant paradigm of high energy theoretical physics, and some string theorists have expressed the view that there does not exist an equally successful alternative theory addressing the deep questions of fundamental physics. In an interview from 1987, Nobel laureate David Gross made the following controversial comments about the reasons for the popularity of string theory: The most important [reason] is that there are no other good ideas around. 
That's what gets most people into it. When people started to get interested in string theory they didn't know anything about it; in fact, the first reaction of most people is that the theory is extremely ugly and unpleasant, at least that was the case a few years ago when the understanding of string theory was much less developed. It was difficult for people to learn about it and to be turned on. So I think the real reason why people have got attracted by it is because there is no other game in town. All other approaches of constructing grand unified theories, which were more conservative to begin with, and only gradually became more and more radical, have failed, and this game hasn't failed yet. Several other high-profile theorists and commentators have expressed similar views, suggesting that there are no viable alternatives to string theory. Many critics of string theory have commented on this state of affairs; in his book criticizing string theory, Peter Woit views the status of string theory research as unhealthy and detrimental to the future of fundamental physics. He argues that the extreme popularity of string theory among theoretical physicists is partly a consequence of the financial structure of academia and the fierce competition for scarce resources; in his book The Road to Reality, mathematical physicist Roger Penrose expresses similar views, stating "The often frantic competitiveness that this ease of communication engenders leads to bandwagon effects, where researchers fear to be left behind if they do not join in." Penrose also claims that the technical difficulty of modern physics forces young scientists to rely on the preferences of established researchers, rather than forging new paths of their own. Lee Smolin expresses a slightly different position in his critique, claiming that string theory grew out of a tradition of particle physics which discourages speculation about the foundations of physics, while his preferred approach, loop quantum gravity, encourages more radical thinking. According to Smolin, String theory is a powerful, well-motivated idea and deserves much of the work that has been devoted to it. If it has so far failed, the principal reason is that its intrinsic flaws are closely tied to its strengths—and, of course, the story is unfinished, since string theory may well turn out to be part of the truth, the real question is not why we have expended so much energy on string theory but why we haven't expended nearly enough on alternative approaches. Smolin goes on to offer a number of prescriptions for how scientists might encourage a greater diversity of approaches to quantum gravity research. Notes and references - For example, physicists are still working to understand the phenomenon of quark confinement, the paradoxes of black holes, and the origin of dark energy. - For example, in the context of the AdS/CFT correspondence, theorists often formulate and study theories of gravity in unphysical numbers of spacetime dimensions. - "Top Cited Articles during 2010 in hep-th". Retrieved 25 July 2013. - More precisely, one cannot apply the methods of perturbative quantum field theory. - Two independent mathematical proofs of mirror symmetry were given by Givental 1996, 1998 and Lian, Liu, Yau 1997, 1999, 2000. - More precisely, a nontrivial group is called simple if its only normal subgroups are the trivial group and the group itself. The Jordan–Hölder theorem exhibits finite simple groups as the building blocks for all finite groups. - "Archived copy". 
Archived from the original on November 5, 2015. Retrieved December 31, 2015. Response to review of The Trouble with Physics by Joe Polchinski - Becker, Becker, and Schwarz 2007, p. 1 - Zwiebach 2009, p. 6 - Becker, Becker, and Schwarz 2007, pp. 2–3 - Becker, Becker, and Schwarz 2007, pp. 9–12 - Becker, Becker, and Schwarz 2007, pp. 14–15 - Klebanov and Maldacena 2009 - Merali 2011 - Sachdev 2013 - Becker, Becker, and Schwarz 2007, pp. 3, 15–16 - Becker, Becker, and Schwarz 2007, p. 8 - Becker, Becker, and Schwarz 13–14 - Woit 2006 - Zee 2010 - Becker, Becker, and Schwarz 2007, p. 2 - Becker, Becker, and Schwarz 2007, p. 6 - Zwiebach 2009, p. 12 - Becker, Becker, and Schwarz 2007, p. 4 - Zwiebach 2009, p. 324 - Wald 1984, p. 4 - Zee 2010, Parts V and VI - Zwiebach 2009, p. 9 - Zwiebach 2009, p. 8 - Yau and Nadis 2010, Ch. 6 - Greene 2000, p. 186 - Yau and Nadis 2010, p. ix - Randall and Sundrum 1999 - Becker, Becker, and Schwarz 2007 - Zwiebach 2009, p. 376 - Moore 2005, p. 214 - Moore 2005, p. 215 - Aspinwall et al. 2009 - Kontsevich 1995 - Kapustin and Witten 2007 - Duff 1998 - Duff 1998, p. 64 - Nahm 1978 - Cremmer, Julia, and Scherk 1978 - Duff 1998, p. 65 - Sen 1994a - Sen 1994b - Hull and Townsend 1995 - Duff 1998, p. 67 - Bergshoeff, Sezgin, and Townsend 1987 - Duff et al. 1987 - Duff 1998, p. 66 - Witten 1995 - Duff 1998, pp. 67–68 - Becker, Becker, and Schwarz 2007, p. 296 - Hořava and Witten 1996 - Duff 1996, sec. 1 - Banks et al. 1997 - Connes 1994 - Connes, Douglas, and Schwarz 1998 - Nekrasov and Schwarz 1998 - Seiberg and Witten 1999 - de Haro et al. 2013, p. 2 - Yau and Nadis 2010, p. 187–188 - Bekenstein 1973 - Hawking 1975 - Wald 1984, p. 417 - Yau and Nadis 2010, p. 189 - Strominger and Vafa 1996 - Yau and Nadis 2010, pp. 190–192 - Maldacena, Strominger, and Witten 1997 - Ooguri, Strominger, and Vafa 2004 - Yau and Nadis 2010, pp. 192–193 - Yau and Nadis 2010, pp. 194–195 - Strominger 1998 - Guica et al. 2009 - Castro, Maloney, and Strominger 2010 - Maldacena 1998 - Gubser, Klebanov, and Polyakov 1998 - Witten 1998 - Klebanov and Maldacena 2009, p. 28 - Maldacena 2005, p. 60 - Maldacena 2005, p. 61 - Zwiebach 2009, p. 552 - Maldacena 2005, pp. 61–62 - Susskind 2008 - Zwiebach 2009, p. 554 - Maldacena 2005, p. 63 - Hawking 2005 - Zwiebach 2009, p. 559 - Kovtun, Son, and Starinets 2001 - Merali 2011, p. 303 - Luzum and Romatschke 2008 - Sachdev 2013, p. 51 - Candelas et al. 1985 - Yau and Nadis 2010, pp. 147–150 - Becker, Becker, and Schwarz 2007, pp. 530–531 - Becker, Becker, and Schwarz 2007, p. 531 - Becker, Becker, and Schwarz 2007, p. 538 - Becker, Becker, and Schwarz 2007, p. 533 - Becker, Becker, and Schwarz 2007, pp. 539–543 - Deligne et al. 1999, p. 1 - Hori et al. 2003, p. xvii - Aspinwall et al. 2009, p. 13 - Hori et al. 2003 - Yau and Nadis 2010, p. 167 - Yau and Nadis 2010, p. 166 - Yau and Nadis 2010, p. 169 - Candelas et al. 1991 - Yau and Nadis 2010, p. 171 - Hori et al. 2003, p. xix - Strominger, Yau, and Zaslow 1996 - Dummit and Foote 2004 - Dummit and Foote 2004, pp. 102–103 - Klarreich 2015 - Gannon 2006, p. 2 - Gannon 2006, p. 4 - Conway and Norton 1979 - Gannon 2006, p. 5 - Gannon 2006, p. 8 - Borcherds 1992 - Frenkel, Lepowsky, and Meurman 1988 - Gannon 2006, p. 11 - Eguchi, Ooguri, and Tachikawa 2010 - Cheng, Duncan, and Harvey 2013 - Duncan, Griffin, and Ono 2015 - Witten 2007 - Woit 2006, pp. 240–242 - Woit 2006, p. 242 - Weinberg 1987 - Woit 2006, p. 243 - Susskind 2005 - Woit 2006, pp. 242–243 - Woit 2006, p. 240 - Woit 2006, p. 
249 - Smolin 2006, p. 81 - Smolin 2006, p. 184 - Polchinski 2007 - Penrose 2004, p. 1017 - Woit 2006, pp. 224–225 - Woit 2006, Ch. 16 - Woit 2006, p. 239 - Penrose 2004, p. 1018 - Penrose 2004, pp. 1019–1020 - Smolin 2006, p. 349 - Smolin 2006, Ch. 20 - Aspinwall, Paul; Bridgeland, Tom; Craw, Alastair; Douglas, Michael; Gross, Mark; Kapustin, Anton; Moore, Gregory; Segal, Graeme; Szendröi, Balázs; Wilson, P.M.H., eds. (2009). Dirichlet Branes and Mirror Symmetry. Clay Mathematics Monographs. 4. American Mathematical Society. ISBN 978-0-8218-3848-8. - Banks, Tom; Fischler, Willy; Schenker, Stephen; Susskind, Leonard (1997). "M theory as a matrix model: A conjecture". Physical Review D. 55 (8): 5112–5128. arXiv: . Bibcode:1997PhRvD..55.5112B. doi:10.1103/physrevd.55.5112. - Becker, Katrin; Becker, Melanie; Schwarz, John (2007). String theory and M-theory: A modern introduction. Cambridge University Press. ISBN 978-0-521-86069-7. - Bekenstein, Jacob (1973). "Black holes and entropy". Physical Review D. 7 (8): 2333–2346. Bibcode:1973PhRvD...7.2333B. doi:10.1103/PhysRevD.7.2333. - Bergshoeff, Eric; Sezgin, Ergin; Townsend, Paul (1987). "Supermembranes and eleven-dimensional supergravity". Physics Letters B. 189 (1): 75–78. Bibcode:1987PhLB..189...75B. doi:10.1016/0370-2693(87)91272-X. - Borcherds, Richard (1992). "Monstrous moonshine and Lie superalgebras" (PDF). Inventiones Mathematicae. 109 (1): 405–444. Bibcode:1992InMat.109..405B. doi:10.1007/BF01232032. - Candelas, Philip; de la Ossa, Xenia; Green, Paul; Parks, Linda (1991). "A pair of Calabi–Yau manifolds as an exactly soluble superconformal field theory". Nuclear Physics B. 359 (1): 21–74. Bibcode:1991NuPhB.359...21C. doi:10.1016/0550-3213(91)90292-6. - Candelas, Philip; Horowitz, Gary; Strominger, Andrew; Witten, Edward (1985). "Vacuum configurations for superstrings". Nuclear Physics B. 258: 46–74. Bibcode:1985NuPhB.258...46C. doi:10.1016/0550-3213(85)90602-9. - Castro, Alejandra; Maloney, Alexander; Strominger, Andrew (2010). "Hidden conformal symmetry of the Kerr black hole". Physical Review D. 82 (2). arXiv: . Bibcode:2010PhRvD..82b4008C. doi:10.1103/PhysRevD.82.024008. - Cheng, Miranda; Duncan, John; Harvey, Jeffrey (2014). "Umbral Moonshine". Communications in Number Theory and Physics. 8: 101–242. arXiv: . Bibcode:2012arXiv1204.2779C. - Connes, Alain (1994). Noncommutative Geometry. Academic Press. ISBN 978-0-12-185860-5. - Connes, Alain; Douglas, Michael; Schwarz, Albert (1998). "Noncommutative geometry and matrix theory". Journal of High Energy Physics. 19981 (2): 003. arXiv: . Bibcode:1998JHEP...02..003C. doi:10.1088/1126-6708/1998/02/003. - Conway, John; Norton, Simon (1979). "Monstrous moonshine". Bull. London Math. Soc. 11 (3): 308–339. doi:10.1112/blms/11.3.308. - Cremmer, Eugene; Julia, Bernard; Scherk, Joel (1978). "Supergravity theory in eleven dimensions". Physics Letters B. 76 (4): 409–412. Bibcode:1978PhLB...76..409C. doi:10.1016/0370-2693(78)90894-8. - de Haro, Sebastian; Dieks, Dennis; 't Hooft, Gerard; Verlinde, Erik (2013). "Forty Years of String Theory Reflecting on the Foundations". Foundations of Physics. 43 (1): 1–7. Bibcode:2013FoPh...43....1D. doi:10.1007/s10701-012-9691-3. - Deligne, Pierre; Etingof, Pavel; Freed, Daniel; Jeffery, Lisa; Kazhdan, David; Morgan, John; Morrison, David; Witten, Edward, eds. (1999). Quantum Fields and Strings: A Course for Mathematicians. 1. American Mathematical Society. ISBN 978-0821820124. - Duff, Michael (1996). "M-theory (the theory formerly known as strings)". 
International Journal of Modern Physics A. 11 (32): 6523–41. arXiv: . Bibcode:1996IJMPA..11.5623D. doi:10.1142/S0217751X96002583. - Duff, Michael (1998). "The theory formerly known as strings". Scientific American. 278 (2): 64–9. Bibcode:1998SciAm.278b..64D. doi:10.1038/scientificamerican0298-64. - Duff, Michael; Howe, Paul; Inami, Takeo; Stelle, Kellogg (1987). "Superstrings in D=10 from supermembranes in D=11". Nuclear Physics B. 191 (1): 70–74. Bibcode:1987PhLB..191...70D. doi:10.1016/0370-2693(87)91323-2. - Dummit, David; Foote, Richard (2004). Abstract Algebra. Wiley. ISBN 978-0-471-43334-7. - Duncan, John; Griffin, Michael; Ono, Ken (2015). "Proof of the Umbral Moonshine Conjecture". Research in the Mathematical Sciences. 2: 26. arXiv: . Bibcode:2015arXiv150301472D. - Eguchi, Tohru; Ooguri, Hirosi; Tachikawa, Yuji (2011). "Notes on the K3 surface and the Mathieu group M24". Experimental Mathematics. 20 (1): 91–96. arXiv: . doi:10.1080/10586458.2011.544585. - Frenkel, Igor; Lepowsky, James; Meurman, Arne (1988). Vertex Operator Algebras and the Monster. Pure and Applied Mathematics. 134. Academic Press. ISBN 0-12-267065-5. - Gannon, Terry. Moonshine Beyond the Monster: The Bridge Connecting Algebra, Modular Forms, and Physics. Cambridge University Press. - Givental, Alexander (1996). "Equivariant Gromov-Witten invariants". International Mathematics Research Notices. 1996 (13): 613–663. doi:10.1155/S1073792896000414. - Givental, Alexander (1998). "A mirror theorem for toric complete intersections" (PDF). Topological field theory, primitive forms and related topics: 141–175. arXiv: . doi:10.1007/978-1-4612-0705-4_5. ISBN 978-1-4612-6874-1. - Gubser, Steven; Klebanov, Igor; Polyakov, Alexander (1998). "Gauge theory correlators from non-critical string theory". Physics Letters B. 428: 105–114. arXiv: . Bibcode:1998PhLB..428..105G. doi:10.1016/S0370-2693(98)00377-3. - Guica, Monica; Hartman, Thomas; Song, Wei; Strominger, Andrew (2009). "The Kerr/CFT Correspondence". Physical Review D. 80 (12). arXiv: . Bibcode:2009PhRvD..80l4008G. doi:10.1103/PhysRevD.80.124008. - Hawking, Stephen (1975). "Particle creation by black holes". Communications in Mathematical Physics. 43 (3): 199–220. Bibcode:1975CMaPh..43..199H. doi:10.1007/BF02345020. - Hawking, Stephen (2005). "Information loss in black holes". Physical Review D. 72 (8). arXiv: . Bibcode:2005PhRvD..72h4013H. doi:10.1103/PhysRevD.72.084013. - Hořava, Petr; Witten, Edward (1996). "Heterotic and Type I string dynamics from eleven dimensions". Nuclear Physics B. 460 (3): 506–524. arXiv: . Bibcode:1996NuPhB.460..506H. doi:10.1016/0550-3213(95)00621-4. - Hori, Kentaro; Katz, Sheldon; Klemm, Albrecht; Pandharipande, Rahul; Thomas, Richard; Vafa, Cumrun; Vakil, Ravi; Zaslow, Eric, eds. (2003). Mirror Symmetry (PDF). Clay Mathematics Monographs. 1. American Mathematical Society. ISBN 0-8218-2955-6. Archived from the original (PDF) on 2006-09-19. - Hull, Chris; Townsend, Paul (1995). "Unity of superstring dualities". Nuclear Physics B. 4381 (1): 109–137. arXiv: . Bibcode:1995NuPhB.438..109H. doi:10.1016/0550-3213(94)00559-W. - Kapustin, Anton; Witten, Edward (2007). "Electric-magnetic duality and the geometric Langlands program". Communications in Number Theory and Physics. 1 (1): 1–236. arXiv: . Bibcode:2007CNTP....1....1K. doi:10.4310/cntp.2007.v1.n1.a1. - Klarreich, Erica. "Mathematicians chase moonshine's shadow". Quanta Magazine. Retrieved 29 December 2016. - Klebanov, Igor; Maldacena, Juan (2009). 
"Solving Quantum Field Theories via Curved Spacetimes" (PDF). Physics Today. 62: 28–33. Bibcode:2009PhT....62a..28K. doi:10.1063/1.3074260. Archived from the original (PDF) on July 2, 2013. Retrieved 29 December 2016. - Kontsevich, Maxim (1995). "Homological algebra of mirror symmetry". Proceedings of the International Congress of Mathematicians: 120–139. arXiv: . Bibcode:1994alg.geom.11018K. - Kovtun, P. K.; Son, Dam T.; Starinets, A. O. (2001). "Viscosity in strongly interacting quantum field theories from black hole physics". Physical Review Letters. 94 (11): 111601. arXiv: . Bibcode:2005PhRvL..94k1601K. doi:10.1103/PhysRevLett.94.111601. PMID 15903845. - Lian, Bong; Liu, Kefeng; Yau, Shing-Tung (1997). "Mirror principle, I". Asian Journal of Mathematics. 1 (4): 729–763. arXiv: . Bibcode:1997alg.geom.12011L. doi:10.4310/ajm.1997.v1.n4.a5. - Lian, Bong; Liu, Kefeng; Yau, Shing-Tung (1999a). "Mirror principle, II". Asian Journal of Mathematics. 3: 109–146. arXiv: . Bibcode:1999math......5006L. doi:10.4310/ajm.1999.v3.n1.a6. - Lian, Bong; Liu, Kefeng; Yau, Shing-Tung (1999b). "Mirror principle, III". Asian Journal of Mathematics. 3 (4): 771–800. arXiv: . Bibcode:1999math.....12038L. doi:10.4310/ajm.1999.v3.n4.a4. - Lian, Bong; Liu, Kefeng; Yau, Shing-Tung (2000). "Mirror principle, IV". Surveys in Differential Geometry. 7: 475–496. arXiv: . Bibcode:2000math......7104L. doi:10.4310/sdg.2002.v7.n1.a15. - Luzum, Matthew; Romatschke, Paul (2008). "Conformal relativistic viscous hydrodynamics: Applications to RHIC results at √=200 GeV". Physical Review C. 78 (3). arXiv: . Bibcode:2008PhRvC..78c4915L. doi:10.1103/PhysRevC.78.034915. - Maldacena, Juan (1998). "The Large N limit of superconformal field theories and supergravity". Advances in Theoretical and Mathematical Physics. AIP Conference Proceedings. 2: 231–252. arXiv: . Bibcode:1998AdTMP...2..231M. doi:10.1063/1.59653. - Maldacena, Juan (2005). "The Illusion of Gravity" (PDF). Scientific American. 293 (5): 56–63. Bibcode:2005SciAm.293e..56M. doi:10.1038/scientificamerican1105-56. PMID 16318027. Archived from the original (PDF) on November 1, 2014. Retrieved 29 December 2016. - Maldacena, Juan; Strominger, Andrew; Witten, Edward (1997). "Black hole entropy in M-theory". Journal of High Energy Physics. 1997 (12): 002. arXiv: . Bibcode:1997JHEP...12..002M. doi:10.1088/1126-6708/1997/12/002. - Merali, Zeeya (2011). "Collaborative physics: string theory finds a bench mate". Nature. 478 (7369): 302–304. Bibcode:2011Natur.478..302M. doi:10.1038/478302a. PMID 22012369. - Moore, Gregory (2005). "What is ... a Brane?" (PDF). Notices of the AMS. 52: 214. Retrieved 29 December 2016. - Nahm, Walter (1978). "Supersymmetries and their representations". Nuclear Physics B. 135 (1): 149–166. Bibcode:1978NuPhB.135..149N. doi:10.1016/0550-3213(78)90218-3. - Nekrasov, Nikita; Schwarz, Albert (1998). "Instantons on noncommutative R4 and (2,0) superconformal six dimensional theory". Communications in Mathematical Physics. 198 (3): 689–703. arXiv: . Bibcode:1998CMaPh.198..689N. doi:10.1007/s002200050490. - Ooguri, Hirosi; Strominger, Andrew; Vafa, Cumrun (2004). "Black hole attractors and the topological string". Physical Review D. 70 (10). arXiv: . Bibcode:2004PhRvD..70j6007O. doi:10.1103/physrevd.70.106007. - Polchinski, Joseph (2007). "All Strung Out?". American Scientist. Retrieved 29 December 2016. - Penrose, Roger (2005). The Road to Reality: A Complete Guide to the Laws of the Universe. Knopf. ISBN 0-679-45443-8. - Randall, Lisa; Sundrum, Raman (1999). 
"An alternative to compactification". Physical Review Letters. 83 (23): 4690–4693. arXiv: . Bibcode:1999PhRvL..83.4690R. doi:10.1103/PhysRevLett.83.4690. - Sachdev, Subir (2013). "Strange and stringy". Scientific American. 308 (44): 44–51. Bibcode:2012SciAm.308a..44S. doi:10.1038/scientificamerican0113-44. - Seiberg, Nathan; Witten, Edward (1999). "String Theory and Noncommutative Geometry". Journal of High Energy Physics. 1999 (9): 032. arXiv: . Bibcode:1999JHEP...09..032S. doi:10.1088/1126-6708/1999/09/032. - Sen, Ashoke (1994a). "Strong-weak coupling duality in four-dimensional string theory". International Journal of Modern Physics A. 9 (21): 3707–3750. arXiv: . Bibcode:1994IJMPA...9.3707S. doi:10.1142/S0217751X94001497. - Sen, Ashoke (1994b). "Dyon-monopole bound states, self-dual harmonic forms on the multi-monopole moduli space, and SL(2,Z) invariance in string theory". Physics Letters B. 329 (2): 217–221. arXiv: . Bibcode:1994PhLB..329..217S. doi:10.1016/0370-2693(94)90763-3. - Smolin, Lee (2006). The Trouble with Physics: The Rise of String Theory, the Fall of a Science, and What Comes Next. New York: Houghton Mifflin Co. ISBN 0-618-55105-0. - Strominger, Andrew (1998). "Black hole entropy from near-horizon microstates". Journal of High Energy Physics. 1998 (2): 009. arXiv: . Bibcode:1998JHEP...02..009S. doi:10.1088/1126-6708/1998/02/009. - Strominger, Andrew; Vafa, Cumrun (1996). "Microscopic origin of the Bekenstein–Hawking entropy". Physics Letters B. 379 (1): 99–104. arXiv: . Bibcode:1996PhLB..379...99S. doi:10.1016/0370-2693(96)00345-0. - Strominger, Andrew; Yau, Shing-Tung; Zaslow, Eric (1996). "Mirror symmetry is T-duality". Nuclear Physics B. 479 (1): 243–259. arXiv: . Bibcode:1996NuPhB.479..243S. doi:10.1016/0550-3213(96)00434-8. - Susskind, Leonard (2005). The Cosmic Landscape: String Theory and the Illusion of Intelligent Design. Back Bay Books. ISBN 978-0316013338. - Susskind, Leonard (2008). The Black Hole War: My Battle with Stephen Hawking to Make the World Safe for Quantum Mechanics. Little, Brown and Company. ISBN 978-0-316-01641-4. - Wald, Robert (1984). General Relativity. University of Chicago Press. ISBN 978-0-226-87033-5. - Weinberg, Steven (1987). Anthropic bound on the cosmological constant. 59. Physical Review Letters. p. 2607. - Witten, Edward (1995). "String theory dynamics in various dimensions". Nuclear Physics B. 443 (1): 85–126. arXiv: . Bibcode:1995NuPhB.443...85W. doi:10.1016/0550-3213(95)00158-O. - Witten, Edward (1998). "Anti-de Sitter space and holography". Advances in Theoretical and Mathematical Physics. 2: 253–291. arXiv: . Bibcode:1998AdTMP...2..253W. - Witten, Edward (2007). "Three-dimensional gravity revisited". arXiv: [hep-th]. - Woit, Peter (2006). Not Even Wrong: The Failure of String Theory and the Search for Unity in Physical Law. Basic Books. p. 105. ISBN 0-465-09275-6. - Yau, Shing-Tung; Nadis, Steve (2010). The Shape of Inner Space: String Theory and the Geometry of the Universe's Hidden Dimensions. Basic Books. ISBN 978-0-465-02023-2. - Zee, Anthony (2010). Quantum Field Theory in a Nutshell (2nd ed.). Princeton University Press. ISBN 978-0-691-14034-6. - Zwiebach, Barton (2009). A First Course in String Theory. Cambridge University Press. ISBN 978-0-521-88032-9. - Greene, Brian (2003). The Elegant Universe: Superstrings, Hidden Dimensions, and the Quest for the Ultimate Theory. New York: W.W. Norton & Company. ISBN 0-393-05858-1. - Greene, Brian (2004). The Fabric of the Cosmos: Space, Time, and the Texture of Reality. 
New York: Alfred A. Knopf. ISBN 0-375-41288-3. - Penrose, Roger (2005). The Road to Reality: A Complete Guide to the Laws of the Universe. Knopf. ISBN 0-679-45443-8. - Smolin, Lee (2006). The Trouble with Physics: The Rise of String Theory, the Fall of a Science, and What Comes Next. New York: Houghton Mifflin Co. ISBN 0-618-55105-0. - Woit, Peter (2006). Not Even Wrong: The Failure of String Theory And the Search for Unity in Physical Law. London: Jonathan Cape &: New York: Basic Books. ISBN 978-0-465-09275-8. - Becker, Katrin; Becker, Melanie; Schwarz, John (2007). String Theory and M-theory: A Modern Introduction. Cambridge University Press. ISBN 978-0-521-86069-7. - Green, Michael; Schwarz, John; Witten, Edward (2012). Superstring theory. Vol. 1: Introduction. Cambridge University Press. ISBN 978-1107029118. - Green, Michael; Schwarz, John; Witten, Edward (2012). Superstring theory. Vol. 2: Loop amplitudes, anomalies and phenomenology. Cambridge University Press. ISBN 978-1107029132. - Polchinski, Joseph (1998). String Theory Vol. 1: An Introduction to the Bosonic String. Cambridge University Press. ISBN 0-521-63303-6. - Polchinski, Joseph (1998). String Theory Vol. 2: Superstring Theory and Beyond. Cambridge University Press. ISBN 0-521-63304-4. - Zwiebach, Barton (2009). A First Course in String Theory. Cambridge University Press. ISBN 978-0-521-88032-9. - Deligne, Pierre; Etingof, Pavel; Freed, Daniel; Jeffery, Lisa; Kazhdan, David; Morgan, John; Morrison, David; Witten, Edward, eds. (1999). Quantum Fields and Strings: A Course for Mathematicians, Vol. 2. American Mathematical Society. ISBN 978-0821819883. |Look up string theory in Wiktionary, the free dictionary.| |Wikimedia Commons has media related to String theory.| - The Elegant Universe—A three-hour miniseries with Brian Greene by NOVA (original PBS Broadcast Dates: October 28, 8–10 p.m. and November 4, 8–9 p.m., 2003). Various images, texts, videos and animations explaining string theory. - Not Even Wrong—A blog critical of string theory - The Official String Theory Web Site - Why String Theory—An introduction to string theory. - Bedford, James (2012). "An introduction to string theory". arXiv: [hep-th]. - Tong, David (2009). "String theory". arXiv: [hep-th].
<urn:uuid:5155728b-022b-4942-ab36-eef95793d1cc>
3.828125
21,868
Knowledge Article
Science & Tech.
49.271277
95,535,614
Scientists watch for a decade as a supermassive black hole devours a star 19 June, 2018, 15:41 | Author: Darnell Taylor A team of researchers has finally managed to follow the formation and expansion of the fast jet of material that gets thrown out as the gravity of a supermassive black hole catches a star and destroys it by tearing it apart. This supermassive black hole is reportedly 20 million times more massive than the Sun, while the star is roughly twice the size of the Sun. The black hole destroyed the star when it made a close approach. Most galaxies have supermassive black holes, "which can pull matter into them and form a huge disc around their outsides as they do", says The Independent. Therefore, tidal disruption events, such as the one recently recorded, offer a unique opportunity to study the vicinity of these powerful objects. Astronomers with the William Herschel Telescope in the Canary Islands noted a bright burst of infrared emission from the Arp 299 area. By July 2005, a new source of radio emission had emerged from the same location in Arp 299. Monitoring that region of space with a global network of radio telescopes, including the European VLBI Network (EVN), for more than a decade allowed scientists to see the flash detected at radio wavelengths expand in one direction at a speed of about 75,000 kilometers per second, a quarter of the speed of light. Tidal disruption events are rare, occurring only about once every 10,000 years in a typical galaxy, because most of the time a galaxy's central black hole is not actively consuming any material and consequently does not give off any light. If a star passes too close to such a monstrous black hole, its powerful gravitational pull will tear the star apart in a so-called tidal disruption event. In the end, one half of the mass of the star is absorbed by the black hole, and the other half is thrown away. Supermassive black holes aren't your ordinary stellar-variety black holes, whose mass is just a few times that of our Sun. Because many supernova explosions have been seen in this galaxy, researchers at first thought they were looking at the same phenomenon again. The study detailing the observation was published June 14 in the journal Science. "It all started when my colleague Professor Peter Meikle came into my office in 2005 and said 'something odd is happening in Arp 299'." "Because of the dust that absorbed any visible light, this particular tidal disruption event may be just the tip of the iceberg of what until now has been a hidden population", Seppo Mattila, of the University of Turku in Finland, told the National Radio Astronomy Observatory in a statement. Caltech manages JPL for NASA. The Space Telescope Science Institute (STScI) in Baltimore conducts Hubble science operations. 
<urn:uuid:6576ea68-2904-43e8-847e-53cb647677b1>
3.390625
1,405
Truncated
Science & Tech.
44.142979
95,535,630
OK, we just made that up. But while it’s pretty obvious that science and technology are essential parts of our society, there is now a study by the National Science Foundation that proves that most Americans’ perception of science and scientists are actually favorable. Research also showed that they believe that the benefits outweigh the negatives when it comes to science and that technology will open more doors for future generations. While people often don’t actively think about the fact that science is in our bodies, our bedrooms, our kitchens, our cars, it is. So it’s pretty important that Americans dig it. More than 1,500 people were surveyed on the matter and 2014 General Social Survey data was used for a biennial report presented to the POTUS and Congress by the National Science Board, “Science and Engineering Indicators.” One reason the survey is a big deal is because data is gathered through face-to-face interviews. The lead researcher on the public perceptions part of the report is John Besley, associate professor at Michigan State University’s Department of Advertising and Public Relations, who presented his findings at the American Association for the Advancement of Science annual meeting. He noted that, according to a press release, “Americans are more likely to have ‘a great deal of confidence’ in leaders of the scientific community than leaders of any group other than the military.” Besley also found that 80 percent of people surveyed believe that scientific research needs to be funded by the government, only four in 10 think it’s spending “too little,” and only one in 10 thought too much money was being spent on science. That person isn’t allowed on the internet anymore. There were a few other eye openers from the GSS survey. Only half of the respondents are concerned about climate change. Half — think about that. Even so, most Americans do think that it’s important to focus on alternative energy sources. So, maybe there’s some hope for the future.
<urn:uuid:61a874da-7524-4450-88bf-453f98dfb41e>
2.6875
420
News Article
Science & Tech.
42.582785
95,535,673
Authors: Jeffrey Joseph Wolynski It is pointed out using the general theory of stellar metamorphosis that there is no difference between super Jupiters and Brown Dwarf stars. They are the same objects. As well, this is just a smaller reasoning inside of a much larger worldview, stars themselves are the young, hot, big exoplanets. Thus stars were never mutually exclusive of planets/exoplanets. A diagram is presented to show that super Jupiters and Brown dwarfs fit in the exact same spot on the stellar evolution (planet formation) diagram. Comments: 1 Page. illustrative diagram [v1] 2017-07-22 16:13:05 Unique-IP document downloads: 54 times Vixra.org is a pre-print repository rather than a journal. Articles hosted may not yet have been verified by peer-review and should be treated as preliminary. In particular, anything that appears to include financial or legal advice or proposed medical treatments should be treated with due caution. Vixra.org will not be responsible for any consequences of actions that result from any form of use of any documents on this website. Add your own feedback and questions here: You are equally welcome to be positive or negative about any paper but please be polite. If you are being critical you must mention at least one specific error, otherwise your comment will be deleted as unhelpful.
<urn:uuid:dc1fe8d6-242c-4d49-b4d2-8e0f6262a748>
2.90625
291
Knowledge Article
Science & Tech.
44.609204
95,535,676
This section could be called "Dummy Tasks to the Rescue" because, although the design of Threading Building Blocks is very simple at the core, there are times when you want a little more help. This section has two examples that use a dummy task to create relationships among tasks that at first seem impossible because they have non-treelike dependence graphs. In the first example, we give a sibling task to the main program, which is usually the base of the tree. In the second example, we set up a pipeline with a fork in it. It is a classic example of avoiding locks through implicit synchronization. We'll name the two examples as follows:
- Start a Large Task in Parallel with the Main Program
- Two Mouths: Feeding Two from the Same Task in a Pipeline
Instead of having all threads execute portions of a problem, it is possible to start a task in parallel with the main application. We've seen a number of requests for how to do this. The trick is to use a nonexecuting dummy task as the parent on which to synchronize, as shown in Example 11-37. Something very close to this trick is already used in tbb/parallel_while.h and tbb/parallel_scan.h, shown earlier. One of the beautiful things about this approach is that each half of the program is free to invoke as much parallelism as it desires. The task-based approach of Threading Building Blocks does the load balancing and manages the assignment of tasks to threads without causing ...
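Example 11-37 itself is not reproduced in this excerpt, so the following is only a minimal sketch of the dummy-task trick, written against the classic tbb::task interface from the era of this book (an API that has since been removed from oneTBB). The class name SideWork and the printed messages are illustrative placeholders, not names taken from the book:

#include <cstdio>
#include "tbb/task.h"
#include "tbb/task_scheduler_init.h"

// Illustrative side computation; any tbb::task-derived class works here.
class SideWork : public tbb::task {
public:
    tbb::task* execute() {                  // overrides tbb::task::execute
        std::printf("side task running concurrently with main()\n");
        return NULL;                        // no task bypass
    }
};

int main() {
    tbb::task_scheduler_init init;          // start the TBB worker threads

    // The dummy task never runs; it exists only to be waited on later.
    // Ref count = 1 child + 1 extra for the wait_for_all call.
    tbb::empty_task* dummy =
        new( tbb::task::allocate_root() ) tbb::empty_task;
    dummy->set_ref_count(2);

    // Hand the real work to the scheduler and return immediately.
    tbb::task* side = new( dummy->allocate_child() ) SideWork;
    dummy->spawn(*side);

    std::printf("main program continues in parallel\n");
    // ... arbitrary work here, which may itself spawn parallelism ...

    // Implicit synchronization: block until the child has finished,
    // then release the dummy task.
    dummy->wait_for_all();
    dummy->destroy(*dummy);
    return 0;
}

The key point is that main() never blocks until it chooses to call wait_for_all() on the dummy, so both halves of the program run in the meantime and each may spawn further parallelism of its own.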
<urn:uuid:a06a2d57-5d7a-4303-98ac-b0b749e0b600>
3.015625
318
Documentation
Software Dev.
59.632885
95,535,686
Floral scent divergence and pollinator morphology promote pollinator isolation and potential speciation in the fig and fig-wasp mutualism The article in the Journal of Ecology shows that while scent is an effective signal for partner recognition, there are other barriers that help maintain the stability of such a species-specific mutualism. In the fig and fig-wasp mutualism, scent is believed to be of primary importance in the attraction of pollinators and maintenance of species specificity. Divergent scent signals between closely related species should be sufficient to promote reproductive isolation, starting the process of speciation. In this study we analyzed the scent signals of four fig species endemic to Papua New Guinea. Using next generation sequencing, we placed these species in a phylogenetic context to draw conclusions about scent divergence between close relatives. In addition to this, using standard Y-tube choice experiments on pollinating wasps, we tested pollinator response to the scent emitted by different fig species. Scent profiles varied significantly between all focal species, although with a varying degree of overlap between them. Pollinators were generally attracted to the scent emitted by their host species, except in one case where the pollinating fig-wasp of one species was also attracted to the sister species of its host fig. Wasp morphological traits, however, indicate that it is mechanically impossible for this wasp to oviposit inside the figs involved in this atypical encounter. This study demonstrates that while scent is an effective signal for partner recognition, there are other barriers that help maintain the stability of such a species-specific mutualism. Speciation appears to be reinforced by divergence in key reproductive isolation mechanisms on both sides of the mutualism. Souto-Vilarós D., Proffit M., Buatois B., Rindoš M., Sisol M., Kuyaiva T., Isua B., Michálek J., Darwell C., Hossaert-McKey M., Weiblen G., Novotný V., Segar S. T. (2018) Pollination along an elevational gradient mediated both by floral scent and pollinator compatibility in the fig and fig-wasp mutualism. Journal of Ecology (in press). https://doi.org/10.1111/1365-2745.12995
<urn:uuid:369ba005-4fd1-4eb6-916f-d6c558868649>
2.796875
463
Academic Writing
Science & Tech.
38.714126
95,535,696
A View from Emerging Technology from the arXiv Mathematicians Solve Minimum Sudoku Problem Sudoku fanatics have long claimed that the smallest number of starting clues a puzzle can contain is 17. Now a year-long calculation proves there are no 16-clue puzzles. Sudoku is a number puzzle consisting of a 9 x 9 grid in which some cells contain clues in the form of digits from 1 to 9. The solver’s jobs is to fill in the remaining cells so that each row, column and 3×3 box in the grid contains all nine digits. There’s another unwritten rule: the puzzle must have only one solution. So grids cannot contain just a few starting clues. It’s easy to see why. A grid with 7 clues cannot have a unique answer because the two missing digits can always be interchanged in any solution. A similar argument explains why grids with fewer clues must also have multiple solutions. But it’s not so easy to see why a grid with 8 clues cannot have a unique solution, or indeed one with 9 or more clues. That raises an interesting question for mathematicians: what is the minimum number of Sudoku clues that produces a unique answer? This is a question that has hung heavy over the Sudoku community, not least because they think they know the answer. Sudoku fanatics have found numerous examples of grids with 17 clues that have a unique solution but they have never found one with 16 clues. That suggests the minimum number is 17 but nobody has been able to prove that there isn’t a 16-clue solution lurking somewhere in puzzle space. Enter Gary McGuire and pals at University College Dublin. These guys have solved the problem using the tried and trusted mathematical technique of sheer brute force. In essence these guys have examined every potential 16-clue solution for every possible Sudoku grid. “Our search turned up no proper 16-clue puzzles, but had one existed, then we would have found it,” they say. That’s an impressive feat. There are exactly 6, 670, 903, 752, 021, 072, 936, 960 possible solutions to Sudoku (about 10^21) . That’s far more than can be checked in a reasonable period of time. But as luck would have it, it’s not necessary to check them all. Various symmetry arguments prove that many of these grids are equivalent. This reduces the number that need to be checked to a mere 5, 472, 730, 538. So McGuire and co wrote a program called Checker to check each one of these grids for a 16-clue solution. But the process of checking a single grid is itself tricky. One way to do it is to examine every possible subset of 16 clues to see if any of them lead to a unique solution. The trouble is that there are some 10^16 subsets for each grid. Once again, a little mathematics come in handy. McGuire and co used some clever reasoning to show that certain subsets are equivalent to many others and this dramatically reduces the number of subsets that need to be checked. Nevertheless, the resulting calculation is still a monster. The Dublin team say it took 7.1 million core-hours of processing time on a machine with 640 Intel Xeon hex-core processors. They started in January 2011 and finished in December. The whole exercise may sound like a bit of mathematical fun but this kind of problem solving has many important applications. McGuire and co say the problem of Sudoku grid checking is formally equivalent to problems in gene expression analysis and in computer network and software testing. So the Dublin team’s methods for speeding up the calculation will have a direct impact in these areas too. 
But while the result is clearly impressive, the Minimum Sudoku Problem isn’t entirely laid to rest. This problem is crying out for an elegant proof that allows us to “see” why the minimum number must be 17; rather like the proof that there can be no unique solutions for 7 or fewer clues. A big ask, I know, but surely one worth aiming for. Ref: arxiv.org/abs/1201.0749: There Is No 16-Clue Sudoku: Solving the Sudoku Minimum Number of Clues Problem Correction: this post was edited on the 6 January to reflect the argument that if an n-clue grid is uniquely solvable then adding a digit to make an n+1-clue grid must also be uniquely solvable. So if there are no uniquely solvable 16-clue grids, there cannot be any grids with fewer clues that are uniquely solvable. Thanks to RealMurph and abooij. Couldn't make it to EmTech Next to meet experts in AI, Robotics and the Economy?Go behind the scenes and check out our video
<urn:uuid:d1af6113-fb3a-4355-b341-76edbced200a>
3.015625
1,031
News Article
Science & Tech.
58.684053
95,535,730
Ever since the famous double-helix structure of DNA was discovered more than 50 years ago, researchers have struggled to understand the complex relationships between its structural, chemical and electrical properties. One mystery has been why attempts to measure the electrical conductivity of DNA have yielded conflicting results suggesting that the molecule is an insulator, semiconductor, metal—and even a superconductor at very low temperatures DNA’s apparent metallic and semiconductor properties along with its ability to self-replicate has led some researchers to suggest that it could be used to create electronic circuits that assemble themselves. Now, however a team of researchers in the US has shown that DNA’s electrical conductivity is extremely sensitive to tiny structural changes in the molecule — which means that it could be very difficult to make reliable DNA circuits. Colin Nuckolls of Columbia University, Jacqueline Barton of Caltech and colleagues were able to make reliable conductivity measurements by inventing a new and consistent way of connecting a single DNA molecule to two carbon nanotubes (Nature Nanotechnology 3 163). Past methods had struggled to make a reliable connection between a DNA molecule — which is only about 2 nm wide— and two electrodes. Poor connectivity is thought to be behind many of the inconsistencies in previous measurements. The team began with a nanotube — a tiny tube of carbon about as thick as DNA itself – that was integrated within a simple electrical circuit. A 6-nm section of the nanotube was removed using plasma ion etching. This procedure not only cuts the tube, but also oxidizes the remaining tips. This makes it possible to bridge the gap with a DNA molecule with ends that have been designed to form strong chemical bonds with the oxidized tips. Similar to graphite The conductivity was determined by simply applying 50 mV across the DNA and measuring the current that flowed through it. In a standard piece of DNA, the conductivity was similar to that seen in graphite. This is consistent with the fact that the core of the double helix of DNA consists of stacked molecular rings that have a similar structure to graphite. A benefit of having the DNA attached securely to the electrodes is that the conductivity can be studied under ambient conditions — in a liquid at room temperature. This allowed the team to confirm that they were actually measuring the conductivity of DNA and not something else in the experiment. This was done by adding an enzyme to the surrounding liquid that cuts DNA – and as expected the electrical circuit was broken. The team were also able to investigate the effect of base mismatches on conductivity. DNA double strands are normally connected through interactions between particular bases—adenine to thymine and cytosine to guanine. If one of the bases in a pair is changed, the two strands will still stick together, but with an altered structure around the mismatched bases. The team first measured the conductivity of a well matched strand and then exchanged it for a strand with a single mismatch. This single mismatch boosted the resistance of the DNA by a factor of 300. According to Jacqueline Barton “this highlights the need to make measurements on duplex DNA that is well-matched, undamaged, and in its native conformation.” An important implication of this sensitivity to small changes in structure is that DNA by itself might not be an ideal component for future electronic devices. 
Indeed, this inherent sensitivity to structural change could allow living cells to detect DNA damage, which can accumulate in cells and lead to problems including cancer. Cells have ways of repairing this damage, but the mechanism they use to detect damage is still not completely understood. Barton says that this “whole body of experiments now begs the question of whether the cell utilizes this chemistry to detect DNA damage.” This is a question her group is now trying to answer.
<urn:uuid:c84766c5-cf78-4427-bc23-c9e05e878ea8>
4.25
779
News Article
Science & Tech.
29.078584
95,535,746
Two of the major problems in using meteorological models to explain observed tropospheric trace constituent distributions and thereby to understand the global budgets of the tracers are to properly define the vertical layered structure in the free atmosphere, and to understand the contribution of advection processes in generating horizontal inhomogeneities at all scales. We proposed to tackle these problems through the examination of an extensive collection of trace constituent data from research and commercial aircraft in conjunction with meteorological data from the European Center for Medium-Range Weather Forecasts. The physical mechanisms responsible for these advection and layering processes were explored and their implications for theories and models assessed. In addition, we calculated examples of how thin layers (not currently resolved by models) affect the radiative heating/cooling rates. We developed an improved algorithm for trace constituent layer detection, and used it to analyze over 100,000 km of ozone and humidity vertical profiles collected by instruments piggybacked on commercial aircraft. The same method was also used to examine ozone, humidity, carbon monoxide, and methane data from the NASA Pacific Exploratory Missions. The major conclusions from these studies were that tropospheric trace constituent layers are ubiquitous, and that their characteristics are remarkably universal. These results support the notion that ozone layers coming in from the stratosphere play a large role in creating tropospheric layers, whether by themselves or by capping buoyant pollution plumes. This, in turn, has important implications for the transport (especially in the vertical) of trace gases and how it is represented in models.
<urn:uuid:71675d39-e58a-4225-8966-7dc6c2a23f8b>
2.84375
311
Academic Writing
Science & Tech.
4.467795
95,535,766
Scientists at Purdue University are developing an environmentally friendly technology called homogeneous charge compression ignition (HCCI), which will reduce greenhouse emissions commonly associated with combustion engines. HCCI technology can be applied to both diesel and gasoline engines and will increase engine performance and efficiency, reduce fuel consumption and help to clean up the environment. Currently, scientists at Purdue are building a multi-cylinder "fully-flexible variable valve actuation" prototype. Implications - The new ignition technology will use "variable valve actuation," which "would decouple the motion of the piston from the motion of the intake and exhaust valves." This change could really affect the environment in a good way, making it possible for engineers to make "diesel and gasoline engines both cleaner and more efficient" because of a new-found control over the valves.
<urn:uuid:2ca6c64e-47f2-4369-ba42-2341827297ed>
3.578125
207
Content Listing
Science & Tech.
1.213211
95,535,775
E_P is a derived, as opposed to basic, Planck unit. It is defined by: E_P = √(ħc⁵/G). Substituting values for the various components in this definition gives the approximate equivalent value of this unit in terms of other units of energy: E_P ≈ 1.956×10⁹ J ≈ 1.22×10¹⁹ GeV. An equivalent definition is E_P = ħ/t_P, where t_P is the Planck time. Another is E_P = m_P c², where m_P is the Planck mass. The ultra-high-energy cosmic ray observed in 1991 had a measured energy of about 50 joules, equivalent to about 2.5×10⁻⁸ E_P. Most Planck units are fantastically small and thus are unrelated to "macroscopic" phenomena (or fantastically large, as in the case of the Planck temperature). Energy of 1 E_P, on the other hand, is definitely macroscopic, approximately equaling the energy stored in an automobile gas tank (57.2 L of gasoline at 34.2 MJ/L of chemical energy). Planck units are designed to normalize the physical constants G, ħ and c to 1. Hence, given Planck units, the mass-energy equivalence E = mc² simplifies to E = m, so that the Planck energy and mass are numerically identical. In the equations of general relativity, G is often multiplied by 8π. Hence writings in particle physics and physical cosmology often normalize 8πG to 1. This normalization results in the reduced Planck energy, defined as: √(ħc⁵/(8πG)) ≈ 3.9×10⁸ J ≈ 2.43×10¹⁸ GeV. - "Planck Energy". Cosmos, The SAO Encyclopedia, Swinburne University of Technology. Retrieved 18 September 2015. - "CODATA Value: Planck mass energy equivalent in GeV". physics.nist.gov. Retrieved 2016-12-21. - "HiRes - The High Resolution Fly's Eye Ultra High Energy Cosmic Ray Observatory". www.cosmic-ray.org. Retrieved 2016-12-21.
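A quick arithmetic check of the two comparisons in the article above (my own back-of-the-envelope figures, not part of the source text): the gas tank stores roughly 57.2 L × 34.2 MJ/L ≈ 1.96×10⁹ J, which is indeed about 1 E_P, and the 1991 cosmic-ray event gives 50 J / 1.96×10⁹ J ≈ 2.6×10⁻⁸ E_P, consistent with the value of about 2.5×10⁻⁸ quoted above.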
<urn:uuid:a77ad990-af10-48d4-ad6b-b0353c7cffc8>
2.75
484
Knowledge Article
Science & Tech.
49.936538
95,535,779
Here are a few simple examples of sequence-unpacking assignments:
nudge = 1                      # Basic assignment
wink = 2
A, B = nudge, wink             # Tuple assignment
print( A, B )                  # Like A = nudge; B = wink
[C, D] = [nudge, wink]         # List assignment
print( C, D )
We are coding two tuples in the third line, omitting their enclosing parentheses. Python pairs the values in the tuple on the right side of the assignment operator with the variables in the tuple on the left side and assigns the values one at a time.
nudge = 1
wink = 2
nudge, wink = wink, nudge      # Tuples: swaps values
print( nudge, wink )           # Like T = nudge; nudge = wink; wink = T
Python assigns items in the sequence on the right to variables in the sequence on the left by position, from left to right:
[a, b, c] = (1, 2, 3)          # Assign tuple of values to list of names
print( a, c )
(a, b, c) = "ABC"              # Assign string of characters to tuple
print( a, c )
Sequence assignment supports any iterable object on the right, not just any sequence.
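To illustrate that last point, here is a short example of my own (it is not part of the excerpt above, and the starred-name form assumes Python 3):
a, b, c = range(3)             # any iterable works on the right
print( a, b, c )               # 0 1 2
first, *rest = "spam"          # Python 3 extended unpacking
print( first, rest )           # s ['p', 'a', 'm']
x, y = (s.upper() for s in ["ab", "cd"])   # even a generator expression
print( x, y )                  # AB CD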
<urn:uuid:bcfe75db-c61e-498f-86a2-4f27948c668c>
3.40625
294
Tutorial
Software Dev.
65.209348
95,535,795
In the evolution of the Polymerase Chain Reaction (PCR), two developments have greatly simplified the procedure: automation of temperature cycling and the use of a thermostable DNA polymerase. The original method, using the Klenow fragment of E. coli DNA polymerase I, was very tedious because of the thermal lability of the enzyme. The initial PCR process required moving the samples to be amplified between two heat sources: one at high temperature (94–95°C), required to denature the double-stranded DNA, and one at relatively low temperature (37°C), needed for both the annealing of the primers and their extension. Because the Klenow fragment is irreversibly denatured every time the sample temperature is raised to 94–95°C, the enzyme has to be replenished every cycle, after the sample temperature is restored to 37°C, in order to extend the annealed primers. Thus, the requirements of automating the PCR method at the time of its invention were actually two-fold: 1) cycling the temperature between 94°C and 37°C, and 2) adding fresh enzyme every cycle. Keywords: Polymerase Chain Reaction; Polymerase Chain Reaction Method; Block Temperature; Klenow Fragment; Polymerase Chain Reaction Process
<urn:uuid:aff9d155-4b73-43d9-965c-243f025307e8>
3.3125
278
Truncated
Science & Tech.
24.715518
95,535,846
In 1957, shepherds in Idaho (USA) discovered that when pregnant sheep ate lilies of the species Veratrum californicum (corn lily, California false hellebore), their lambs were born with only one eye in the center of their foreheads, like a cyclops. The trigger for this was found to be the alkaloid cyclopamine. Cyclopamine has proven to be an effective candidate for cancer therapy in adult humans and is now undergoing clinical trials. A research team at the Universities of Leipzig (Germany) and Thessaloniki (Greece) has now developed a new synthetic pathway for the production of cyclopamine. As they report in the journal Angewandte Chemie, the scientists, led by Athanassios Giannis, are confident that their results will help to broaden our understanding of the structure–activity relationships of cyclopamine and to develop cyclopamine analogues with tuned bioactivities.

Cyclopamine is the first inhibitor of the hedgehog signal-transduction pathway, which cells use to react to external signals. The pathway is named for its ligand "hedgehog", a signal protein that carries out an important function in embryonic development. Malfunction of this signaling pathway leads to massive deformations in the course of embryonic development, such as cyclopia, and can cause cancer in adults. Inhibition of this pathway is thus a possible new cancer treatment.

Until now, there has been no efficient synthesis of cyclopamine. The structure of this unusual steroidal alkaloid contains many peculiarities that make synthesis difficult. The German and Greek team has now overcome these difficulties to develop an efficient twenty-step synthetic strategy starting from commercially available dehydroepiandrosterone, a natural steroid hormone. The strategy is based on biomimetic and diastereoselective transformations. The researchers achieved an overall yield of 1%, which is a good result for such a tricky synthesis. In addition, small modifications of the reagents allow this strategy to produce cyclopamine analogues that do not occur in nature. The scientists aim to use these analogues to further examine the biological activity of this interesting natural product and then to adjust the activity to develop a new anti-tumor agent.

Author: Athanassios Giannis, Universität Leipzig (Germany), Angewandte Chemie International Edition, doi: 10.1002/anie.200902520
<urn:uuid:a3a6839b-3306-4db7-a718-58675cb9b2cc>
3.296875
1,162
Content Listing
Science & Tech.
32.315566
95,535,850
In an article appearing March 13 in the journal Molecular Systems Biology, researchers from the Ecole Polytechnique Federale de Lausanne demonstrate that the stability of cellular oscillators depends on specific biochemical processes, reflecting recent association studies in families affected by advanced sleep phase syndrome.

Circadian rhythms are cyclical changes in physiology, gene expression, and behavior that run on a cycle of approximately one day, even in conditions of constant light or darkness. Peripheral organs in the body have their own cellular clocks that are reset on a daily basis by a central master clock in the brain. The operation of the cellular clocks is controlled by the coordinated action of a limited number of core clock genes. The oscillators work like this: the cell receives a signal from the master pacemaker in the hypothalamus, and the clock genes respond by setting up concentration gradients that change in a periodic manner. The cell "interprets" these gradients and unleashes tissue-specific circadian responses. Some examples of output from these clocks are the daily rhythmic changes in body temperature, blood pressure, heart rate, concentrations of melatonin and glucocorticoids, urine production, acid secretion in the gastrointestinal tract, and changes in liver metabolism.

In the tiny volume of the cell, however, the chemical environment is constantly fluctuating. How is it possible for all these cell-autonomous clocks to sustain accurate 24-hour rhythms in such a noisy environment? Using mouse fibroblast circadian bioluminescence recordings from the Schibler Lab at the University of Geneva, the researchers turned to dynamical systems theory and developed a mathematical model that identified the molecular parameters responsible for the stability of the cellular clocks. Stability is a measure of how fast the system reverts to its initial state after being perturbed. "To my knowledge we are the first to discuss how the stability of the oscillator directly affects bioluminescence recordings," explains Felix Naef, a systems biology professor at EPFL and the Swiss Institute for Experimental Cancer Research. "We found that the phosphorylation and transcription rates of a specific gene are key determinants of the stability of our internal body clocks."

This result is consistent with recent research from the University of California, San Francisco involving families whose circadian clocks don't tick quite right. These families' clocks run on cycles shorter than 24 hours, and they also carry mutations in oscillator-related genes. The current results shed light on how a genetically linked phosphorylation event gone wrong could lead to inaccurate timing of our body clockworks.

Mary Parlange | alfa
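To make the stability notion above concrete, here is a minimal, hypothetical Python sketch (not the authors' model; the relaxation rates and perturbation size are invented for illustration): treat a perturbation to the oscillator as decaying exponentially at rate lam, and ask how long it takes to die away. A larger relaxation rate means a more stable clock.

import math

kick = 0.5                     # assumed initial perturbation size
for lam in (0.1, 0.5):         # assumed relaxation rates, 1/hour
    # time for the perturbation to shrink to 5 percent of its initial size
    t_recover = math.log(kick / 0.05) / lam
    print(lam, round(t_recover, 1), "hours")   # 0.1 -> 23.0 h, 0.5 -> 4.6 h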
<urn:uuid:3fa0cc86-8457-4bab-8a2d-b4ca1cfefaf0>
3.515625
1,177
Content Listing
Science & Tech.
31.290155
95,535,851
The research team looked at atmospheric gas measurements taken every two weeks from aircraft over the northeast United States during a six-year period to collect samples of CO2 and other environmentally important gases. Their method allowed them to separate CO2 derived from fossil fuels from CO2 emitted by biological sources like plant respiration, said CU-Boulder Senior Research Associate Scott Lehman, who led the study with CU-Boulder Research Associate John Miller.

The separation was made possible by the fact that CO2 released from the burning of fossil fuels like coal, oil, and gas has no carbon-14, since the half-life of that carbon radioisotope is about 5,700 years, far less than the age of fossil fuels, which are millions of years old. In contrast, CO2 emitted from biological sources on Earth, like plants, is relatively rich in carbon-14, and the difference can be pinpointed by atmospheric scientists, said Lehman of CU's Institute of Arctic and Alpine Research.

The team also measured concentrations of 22 other atmospheric gases tied to human activities as part of the study, said Miller of the CU-headquartered Cooperative Institute for Research in Environmental Sciences. This diverse set of gases affects climate change, air quality, and the recovery of the ozone layer, but their emissions are poorly understood. The authors used the ratio between the atmospheric concentration of each gas and that of fossil fuel-derived CO2 to estimate the emission rates of the individual gases, said Miller.

In the long run, measuring carbon-14 in the atmosphere offers the possibility of directly measuring country and state emissions of fossil fuel CO2, said Miller. The technique would be an improvement over traditional, "accounting-based" methods of estimating emission rates of CO2 and other gases, which generally rely on reports from particular countries or regions regarding the use of coal, oil, and natural gas, he said. "While the accounting-based approach is probably accurate at global scales, the uncertainties rise for smaller-scale regions," said Miller, also a scientist at the National Oceanic and Atmospheric Administration's Earth System Research Laboratory in Boulder. "And as CO2 emissions targets become more widespread, there may be a greater temptation to underreport. But we'll be able to see through that."

A paper on the subject was published in the April 19 issue of the Journal of Geophysical Research: Atmospheres, published by the American Geophysical Union. Co-authors include Stephen Montzka and Ed Dlugokencky of NOAA; Colm Sweeney, Benjamin Miller, Anna Karion, Jocelyn Turnbull, and Pieter Tans of NOAA and CIRES; Chad Wolak of CU's INSTAAR; and John Southton of the University of California, Irvine.

One surprise in the study was that the researchers detected continued emissions of methyl chloroform and several other gases banned from production in the United States. Such observations emphasize the importance of independent monitoring, since emissions like these could be overlooked by the widely used accounting-based estimation techniques, said Montzka. The atmospheric air samples were taken every two weeks for six years by aircraft off the coastlines of Cape May, N.J., and Portsmouth, N.H. Fossil fuel emissions have driven Earth's atmospheric CO2 from concentrations of about 280 parts per million in the early 1800s to about 390 parts per million today, said Miller.
The vast majority of climate scientists believe higher concentrations of the greenhouse gas CO2 in Earth's atmosphere are directly leading to rising temperatures on the planet. "We think the approach offered by this study can increase the accuracy of emissions detection and verification for fossil fuel combustion and a host of other man-made gases," said Lehman. He said the approach of using carbon-14 has been supported by the National Academy of Sciences and could be an invaluable tool for monitoring greenhouse gases by federal agencies like NOAA. Unfortunately, NOAA's greenhouse gas monitoring program has been cut back by Congress in recent years, said Lehman. "Even if we lack the will to regulate emissions, the public has a right to know what is happening to our atmosphere. Sticking our heads in the sand is not a sound strategy," he said.

Scott Lehman | EurekAlert!
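As an illustration of the two ideas above, radiocarbon decay and ratio-based emission scaling, here is a minimal Python sketch. The half-life is the figure quoted in the text; the ratio r and the fossil CO2 emission E_co2 are hypothetical placeholders, not values from the study:

def c14_fraction(t_years, half_life=5700.0):
    # fraction of carbon-14 remaining after t years
    return 0.5 ** (t_years / half_life)

print(c14_fraction(50e6))   # fossil carbon, tens of Myr old: ~0 (no 14C left)
print(c14_fraction(100))    # recent biological carbon: ~0.99

# Ratio-based scaling: if gas X is observed at r mol per mol of fossil CO2,
# and the fossil CO2 emission E_co2 is known, then E_x ~ r * E_co2.
r = 1.5e-6          # hypothetical observed ratio, mol X per mol fossil CO2
E_co2 = 1.0e12      # hypothetical fossil CO2 emission, mol/yr
print(r * E_co2)    # estimated emission of X, mol/yr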
13.07.2018 | Event News 12.07.2018 | Event News 03.07.2018 | Event News 20.07.2018 | Power and Electrical Engineering 20.07.2018 | Information Technology 20.07.2018 | Materials Sciences
<urn:uuid:47adab28-c6dc-4081-9fb5-510628a4e0d4>
3.671875
1,435
Content Listing
Science & Tech.
37.767277
95,535,872
In the middle of the Pacific Ocean, halfway between North America and Asia, there is a stretch of sea—roughly the size of Texas—that is officially known as the North Pacific Subtropical Gyre. More commonly, it is known as the Great Pacific Garbage Patch. It is essentially a swirl of trash, the drifting detritus of human existence, that ocean currents have accumulated into one massive, trashy expanse. The gyre is home not only to floating junk, but also to sea creatures that are adapting—because they can and because they must—to its membrane of human refuse. Back in 2009, the marine biologist Miriam Goldstein traveled to the gyre to study the species making a go of it in among the maritime muck. In the process, Goldstein collected samples of those species, among them gooseneck barnacles—creatures that are, Goldstein explains, "essentially a little shrimp living upside down in a shell and eating with their feet." The organisms, unlike some other barnacle species, have a stalk that is long and muscular. Goldstein preserved the barnacles she collected, and kept them preserved (yep, from gyre to jar) for several years. She asked a colleague working in the gyre in 2012 to collect more. She recently processed the samples. I'll let Goldstein, writing about her work in Deep Sea News, take it from here: Eventually I found myself in the lab dissecting barnacles in order to identify them. As I sat there, I thought “Well, I’m working on these barnacles anyway – wonder what they’re eating?” So I pulled out the intestine of the barnacle I was working on, cut it open, and a bright blue piece of plastic popped out. Yep. Blue plastic, inside the creature's intestine. "I reached into my jar o’ dead barnacles and dissected a few more," Goldstein continues, "and found plastic in their guts as well." Goldstein and a colleague, Deborah Goodwin, ultimately dissected 385 barnacles. And about a third of them (33.5 percent) had tiny pieces of plastic in their guts. Most of those organisms had eaten "just a few particles," Goldstein notes, "but we found a few that were absolutely filled with plastic, to a maximum of 30 particles, which is a lot of plastic in an animal that is just a couple inches long." They analyzed the plastic, as well, and determined that it was generally representative of the microscopic plastic that floats on the ocean surface within the gyre. (It was a combination, the pair note in the paper they published about the finding, of polyethylene, polypropylene, and, less commonly, polystyrene.) Which would suggest, Goldstein notes, that "the barnacles are probably just grabbing whatever they come across and shoving it into their mouths." So this is where our trash—our soda bottles, our coffee cups, our kids-meal toys—can end up when it breaks down: inside the intestines of hungry, and unsuspecting, sea creatures. In some sense, Goldstein notes, it's entirely logical that gooseneck barnacles would be eating plastic. "They are really hardy, able to live on nearly any floating surface from buoys to turtles, so they’re very common in the high-plastic areas of the gyre." Plus, "they live right at the surface, where tiny pieces of buoyant plastic float." Not to mention the fact that "they’re extremely non-picky eaters that will shove anything they can grab into their mouth." Goldstein notes that the amount of plastic found in her samples is likely not fully representative of the amount of plastic the barnacles actually consume. 
They probably eat much more plastic than their preserved digestive tracts indicate. ("Barnacles are perfectly capable of pooping out plastic—I observed plastic packaged up in fecal pellets, ready to be excreted the next time the barnacle had access to a couple minutes and a magazine—so it is very likely that more barnacles are eating plastic than we were able to measure.") And one piece of good news, Goldstein and Goodwin note in their paper, is that the plastic particles didn't seem to block the barnacles' stomachs or intestines.

That leads to another concern, though. Since barnacles are small, and since goosenecks are eaten by larger predators like sea slugs and crabs, does that mean that the creatures are introducing plastic into the food chain? Are there traces of trash in our Chicken of the Sea? Not to worry, Goldstein says. "Fish don't seem that interested in barnacles," she notes, "maybe because those fish didn't evolve with a ton of floating debris." While, yes, it's possible that the barnacles' ingestion of plastic particles could transfer plastic or pollutants through the food web, "it is far from clear this is the case." Then again, a 2006 study estimated that at least 267 marine species had ingested trash—and these included mammals, birds, turtles, and fish. So the barnacles aren't alone in their decidedly unfancy feast.

Via Ed Yong
<urn:uuid:99f44819-5d63-493f-90ec-a7dcae7f2b74>
3.3125
1,100
News Article
Science & Tech.
48.225164
95,535,890
The oceans seem to produce significantly more isoprene, and consequently to affect the climate more strongly, than previously thought. This emerges from a study by the Institute of Catalysis and Environment in Lyon (IRCELYON, CNRS / University Lyon 1) and the Leibniz Institute for Tropospheric Research (TROPOS), which studied samples of the ocean surface film in the laboratory. The results underline the global significance of the chemical processes at the boundary between ocean and atmosphere, the researchers write in the journal Environmental Science & Technology.

Isoprene is a gas formed by both vegetation and the oceans. It is very important for the climate because it can form particles that can grow into clouds and later affect temperature and precipitation. Previously it was assumed that isoprene is produced primarily by biological processes, by plankton in the sea water. The atmospheric chemists from France and Germany, however, have now shown that isoprene can also be formed without biological sources, in the surface film of the oceans under sunlight, which would explain the large discrepancy between field measurements and models. The newly identified photochemical reaction is therefore important for improving climate models.

The oceans not only take up heat and carbon dioxide from the atmosphere; they are also sources of various gaseous compounds and thereby affect the global climate. A key role is played by the so-called surface microlayer (SML), especially at low wind speed. In this layer, only a few micrometers thin, organic substances such as dissolved organic matter, fats, amino acids, proteins, and lipids accumulate, along with trace metals, dust, and microorganisms.

For the newly published study, the research team took samples from the North Atlantic Ocean. The surface film was collected in the Raunefjord near Bergen in Norway: a glass plate is immersed in water and then carefully withdrawn; the roughly 200-micron-thin film sticks to the glass, is scraped off with a wiper, and the sample thus obtained is later analyzed in the laboratory. At IRCELYON, which belongs to the French research organization CNRS and the University of Lyon 1, the team investigated the samples' photochemical properties by irradiating them with light and analyzing the emitted gases: it became clear that isoprene was produced in magnitudes that had previously been attributed solely to plankton.

"We were able for the first time to trace the production of this important aerosol precursor back to abiotic sources; so far, global calculations consider only biological sources," explains Dr. Christian George from IRCELYON. It is now possible to estimate more closely the total amounts of isoprene emitted. So far, local measurements indicated levels of about 0.3 megatonnes per year, while global simulations suggested around 1.9 megatonnes per year. The team from Lyon and Leipzig estimates that the newly discovered photochemical pathway alone contributes an additional 0.2 to 3.5 megatonnes per year, which could explain the recent disagreements. "The existence of organic films at the ocean surface due to biological activity therefore influences the exchange processes between air and sea in an unexpectedly strong way. The photochemical processes at this interface could be a very significant source of isoprene," summarizes Prof. Hartmut Herrmann from TROPOS.
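A small arithmetic check of the budget figures quoted above (a sketch using the press release's numbers; the bracketing test is my own framing, not the authors' calculation):

biological_bottom_up = 0.3    # local measurements, Mt/yr
global_top_down = 1.9         # global simulations, Mt/yr
abiotic_low, abiotic_high = 0.2, 3.5   # newly found photochemical source, Mt/yr

gap = global_top_down - biological_bottom_up
print(round(gap, 1))                      # 1.6 Mt/yr previously unexplained
print(abiotic_low <= gap <= abiotic_high) # True: the new source can close the gap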
The processes at the boundary between water and air are currently of great interest in science. In August, the team from CNRS and TROPOS presented evidence in Scientific Reports, the open-access journal of Nature, that dissolved organic material in the surface film strengthens the chemical conversion of saturated fatty acids into unsaturated gas-phase products under the influence of sunlight. For the first time it was realized that such products need not be of biological origin only; abiotic processes at the interface between two media also have the potential to produce such molecules. In early September, another team from Canada, the US, Great Britain, and Germany showed in the journal Nature that organic material from the surface film of the oceans can be an important source for the formation of ice in clouds over remote regions of the North Atlantic, North Pacific, and Southern Ocean. The recent publication by the CNRS and TROPOS teams in Environmental Science & Technology indicates how climate models could be improved in the important details of isoprene's influence. Because of its great importance, the paper will be open access as an "Editor's Choice".

Raluca Ciuraru, Ludovic Fine, Manuela van Pinxteren, Barbara D'Anna, Hartmut Herrmann, and Christian George (2015): Unravelling new processes at interfaces: photochemical isoprene production at the sea surface. Environmental Science & Technology, Just Accepted Manuscript. http://dx.doi.org/10.1021/acs.est.5b02388

Raluca Ciuraru, Ludovic Fine, Manuela van Pinxteren, Barbara D'Anna, Hartmut Herrmann, and Christian George (2015): Photosensitized production of functionalized and unsaturated organic compounds at the air-sea interface. Scientific Reports, 5:12741. http://dx.doi.org/10.1038/srep12741

Both studies were funded by the European Research Council ERC (ERC Grant Agreement 290852 - Airsea).

Contacts: Dr. Christian George (English and French), Institut de Recherches sur la Catalyse et l'Environnement de Lyon (IRCELYON), tel. +33-(0)472 44 54 92; Prof. Dr. Hartmut Herrmann and Dr. Manuela van Pinxteren, Leibniz Institute for Tropospheric Research (TROPOS), tel. +49-341-2717-7024, -7102; Tilo Arnhold, public relations, TROPOS.

Related releases: "Climat : l'impact des réactions à la surface des océans sur l'atmosphère" (press release of CNRS, in French); "METEOR expedition „BioChemUpwell" takes a close look at upwelling zones in the Baltic Sea" (press release of 23 July 2015).

The Leibniz Association connects 89 independent research institutions that range in focus from the natural, engineering and environmental sciences via economics, spatial and social sciences to the humanities. Leibniz Institutes address issues of social, economic and ecological relevance. They conduct knowledge-driven and applied basic research, maintain scientific infrastructure and provide research-based services. The Leibniz Association identifies focus areas for knowledge transfer to policy-makers, academia, business and the public. Leibniz Institutes collaborate intensively with universities – in the form of "WissenschaftsCampi" (thematic partnerships between university and non-university research institutes), for example – as well as with industry and other partners at home and abroad. They are subject to an independent evaluation procedure that is unparalleled in its transparency.
Due to the institutes' importance for the country as a whole, they are funded jointly by the Federation and the Länder, employing some 18,100 individuals, including 9,200 researchers. The entire budget of all the institutes is approximately 1.64 billion EUR. http://www.leibniz-association.eu

Tilo Arnhold | Leibniz-Institut für Troposphärenforschung e. V.
<urn:uuid:f0133fd0-ae79-412c-a1f7-3bbc5af615b8>
4
2,211
Content Listing
Science & Tech.
35.370175
95,535,900
Space Group Symmetry

Thus far the treatment of symmetry has been restricted to the proper rotational and reflection symmetries of space lattices. The discussion of the symmetry of crystalline solids does not end with the presentation of the 14 Bravais lattice types, because the symmetry of a solid is the symmetry of its three-dimensionally periodic particle density, and there are more symmetries available to such periodic patterns than to the lattices which characterize their translational symmetries. This is because symmetry operations exist that are appropriate to such patterns but not to their lattices; for example, the pattern need not be centrosymmetric, while the lattice must be.

Keywords: Crystalline Solid; Symmetry Operation; Symmetry Element; Lattice Symmetry; Reflection Plane
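A one-line justification of the claim above that every lattice is centrosymmetric, while a periodic pattern need not be (standard reasoning, added here for completeness): a lattice consists of the points

$$\mathbf{t} = n_1\mathbf{a}_1 + n_2\mathbf{a}_2 + n_3\mathbf{a}_3, \qquad n_1, n_2, n_3 \in \mathbb{Z},$$

and negating the integers sends $\mathbf{t} \mapsto -\mathbf{t}$, which is again a lattice point, so inversion through any lattice point maps the lattice onto itself. The particle density decorating the lattice carries no such guarantee, which is why there are more space-group symmetries than the 14 Bravais lattice types alone would suggest.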
<urn:uuid:05739897-454b-4e27-a55a-4113dc207fa5>
3.09375
173
Truncated
Science & Tech.
12.26
95,535,951
Isolate scope and custom directives in AngularJS. This video shows how isolate scope and custom directives work, how they are rendered on the screen, and how they are used in real applications.

The trend of accessing content on tablets and smartphones has led many users to shift from desktop to mobile devices; accessing services and products digitally has become inevitable. This has led developers to two approaches to mobile-ready web design: responsive websites and mobile websites.

A responsive website is a website that can adapt itself to various screen sizes regardless of the device used to view it. Responsive websites are usually optimized for viewing on mobile phones, tablets, and desktops. The responsive web design approach therefore eliminates the need to develop a separate website for every mobile platform, and it is now preferred by web designers and developers because it covers a large range of users' devices.

A mobile website is a website created specifically for small-screen devices. Like a regular website, a mobile website can be accessed through various browsers. A mobile website generally contains limited content, so it is lightweight and faster than a regular website. If a visitor views the site on a mobile phone, the site detects this and redirects the browser to a separate URL serving the mobile-specific version.

Mobile applications are applications developed for particular mobile platforms such as iPhone and Android, and they are usually downloaded from app stores. There are a large number of mobile applications available for the various platforms. Because there are many phones running different operating systems, mobile applications are expensive to develop and maintain.

If you want to give users a mobile experience built around quick decisions, selling, and buying, you need a separate mobile website. If you want to constantly add content or update your website to follow new trends, a single responsive web design is the better option.

Among the advantages of responsive websites: no download is required, whereas mobile applications must be installed from an app store; information only needs to be updated once, on the main website; responsive sites are compatible across platforms; and they work well with content updates driven by content management systems. The main disadvantage is that a responsive website cannot be used without an internet connection, a situation mobile applications can handle. On cost, both responsive websites and mobile applications require a serious budget; a responsive website, being composed of complicated code and technicalities, carries a high development cost, while a mobile website can be built for much less. For optimization and SEO, a responsive website takes priority over a mobile website.

Advances in technology have popularized new typography styles in websites.
The ability to select an appropriate font has expanded enormously, making websites more attractive, and mix-and-match typography has become a necessary feature of web design for creating the best user experience. Selecting the right typography plays a key role in improving a website's long-term look and feel and in keeping viewers engaged. Many customers now push designers toward a mix-and-match typography style for their websites, and with new trends emerging for better readability, the use of expressive typography in web design is on the rise. Most notably, retro typography on vintage-style websites is quite powerful across the web. This trend has been around for a while, but it is now in full force among websites that want to establish their brand in bold, visually interesting ways. Mix-and-match typography relies on the designer's ability to choose fonts that match not just the message but also the other typographic styles in use. Some designers stick to one font they like, but it is better to mix and match fonts with the flow of the website without hurting the overall look. Avoid too-small or too-loopy fonts that are hard to read and look at for long periods, and among the many kinds of fonts available, look for the mix that fits the mood and aesthetic of your design. Try new things to achieve the desired effect while giving viewers maximum readability.

There are numerous CSS preprocessor frameworks used to develop web applications, Sass and LESS among them. Both are CSS-based, but they handle complexity differently, and each suits different kinds of web app development. Here is a brief introduction to these frameworks and their features.

Sass is an extension of CSS3 that adds nested rules, variables, mixins, selector inheritance, and more. It is translated to well-formatted, standard CSS using a command-line tool or a web-framework plugin. Sass has two syntaxes. The older syntax, called "the indented syntax", is similar to Haml: it uses indentation to separate code blocks and newline characters to separate rules. The newer syntax, "SCSS", uses block formatting like that of CSS itself, with curly brackets to denote code blocks and semicolons to separate lines within a block. The indented syntax and SCSS files are generally given the extensions .sass and .scss, respectively. Both syntaxes are fully supported; there is no functional difference between them.

Features of Sass: Sass is an excellent scripting language for web app development, and using it with frameworks makes development even easier and more enjoyable. One great feature is built-in CSS color math: the color of a menu's hover state can be derived automatically from the base color instead of being set by hand. Another is organization: you can keep all of your CSS in a single well-structured file. Sass simply adds more power and better organizational tools, making it an easy choice as a go-to replacement for plain CSS.

SCSS and LESS are similar in features and functions but differ slightly in syntax. Both make web app development easier for programmers and help them create intriguing web applications. Sass was originally tied to Ruby/Rails environments, while LESS can be used with any language or a plain HTML website.
For developers adopting these frameworks, it is essential to understand the paradigm shift they bring to client-side application development.

User interface refers to the aspects of hardware or software that the human user can see, together with the commands and mechanisms the user employs to control operation and input data. While the term is most often used for computer systems, the user interface is the part of any system exposed to a user. A research survey found that many people have problems with poorly designed websites, including vision problems, discomfort, and stress at work. This article therefore highlights why good user interface design is important and discusses the consequences of poor design.

Good user interface design matters because designing interfaces that support the tasks people actually want to do has become an important issue. Users are more comfortable with computer systems that are easy to use and understand and that let them attain their goals with minimal frustration. Good user interfaces can lead to benefits such as higher staff productivity, lower staff turnover, higher morale, and greater job satisfaction; economically, these benefits translate into lower operating costs. Bad user interfaces, on the other hand, may cause stress and unhappiness among staff, leading to high turnover, reduced productivity, and, consequently, financial losses for organizations.

Because the presentation of a web design has much to do with any company's success, it is important to take a user-friendly approach to get better results and attract the targeted audience. Considering reliable human factors during website design and development is therefore of great importance to organizations; otherwise you risk losing traffic to your site, which would eventually affect business performance. Streamlining your website around these human factors is a great way to improve performance.

Three factors should be considered in the design of successful user interfaces: development factors, visibility factors, and acceptance factors. Here is a list of things to take into consideration: human abilities, a clear conceptual model, navigation, page layout, typography, headings, links, text appearance, color and texture, images, animation, and audio and video effects. By taking care of these characteristics, organizations can design good interfaces, reducing costs and employee turnover while increasing user satisfaction, productivity, and quality of service.

Responsive design represents the simplest way to reach users across multiple devices while ensuring a seamless user experience. It is widely used today given the growing popularity of mobile devices, and it has become very important for organizations looking to optimize their online content. It consists of a mix of flexible grids and layouts, images, and an intelligent use of CSS media queries, which extend the functionality of media types by allowing more precise targeting of style sheets. Its features let websites adjust to various screen sizes and resolutions, and it has revolutionized the look and feel of websites across all platforms. The key benefit of responsive design is that one version of the website serves all kinds of devices, which saves both time and money.
With many new devices of varied screen sizes, responsive design is a great solution to many critical web challenges. With responsive design, the HTML is the same for every visiting device; each time a page loads on mobile, it also loads all of the HTML elements, including the images and scripts intended for the tablet and desktop sites. Every visitor sees the full page content by default, no matter which device they are using. Increased internet usage and the exploration of web applications on tablets and mobile devices have been the driving forces behind this development. Responsive design lets clients deliver an optimal visual experience on any device, positioning their brands to achieve better search engine rankings in the future.

Hypertext Markup Language (HTML5) and Cascading Style Sheets (CSS3) are the two recommended languages in web development and a great way to revolutionize how we design and build websites and create high-performance web applications. This section introduces HTML5 and CSS3, their modules, and some of the advantages of HTML5 over HTML4 and CSS3 over CSS2.

HTML5 contains a number of new and easily understood elements in various areas. CSS3, in turn, brings a revolutionary new browsing experience to users. Its main advantage is that you can write CSS rules with new selectors, new combinators, and new pseudo-elements. CSS3 also offers new tools for describing colors in documents: HSL (Hue-Saturation-Lightness) and its newer extension HSLA, which adds an alpha channel for controlling opacity. CSS3 website templates have been designed for the benefit of web development services. With a CSS3 website layout, designers can control the design process and create an engaging user interface by making pages attractive. CSS3 layouts facilitate presentation-rich web pages and help developers depict an HTML element in 2D and 3D transformations such as rotation, scaling, and translation.

We at UI Partners have a tech-savvy team expert in integrating HTML5 and CSS3 to give your website a new look. Whether you wish to transform your applications to HTML5, create a new app, design attractive CSS3 website templates to enhance your visual design, or simplify your user interface to make your website more user friendly, we can help you utilize these latest technologies to gain a competitive edge for your business. Contact us today and see how we can put HTML5 and CSS3 to work for you so you stay ahead of ever-evolving web technologies.

Technology trends are growing quickly, and developers are building robust and more complex applications. Almost all of the rich web applications we currently see on the web rely on a subtle set of UI controls, libraries, and frameworks. There are many options for providing a consistent, reliable, highly interactive user interface, but selecting the right library or framework can be overwhelming. To weigh the possible alternatives, this section explores best practices with regard to jQuery and, furthermore, why Angular is a good choice of framework to implement.

Say you would like to send requests to a backend server and display the results on a web page, and vice versa. We can do this simply in both Angular and jQuery.
But with Angular there is one big difference from jQuery: it does not require us to think about the back-and-forth DOM manipulation that jQuery makes explicit. In jQuery, separating the view from the logic or behavior that affects its elements is left entirely to the developer. If we wish to update a specific DOM element with backend data, it has to be coded by hand; there is no separation between the data and the controller of the data. That is where AngularJS comes into play with its two-way data binding. Angular separates the UI data from the UI representation of that data, and the framework handles the translation of data to the DOM and back. Angular makes this binding seamless while increasing productivity, and it is the best way to separate the data from the HTML view code. Angular provides rich, "desktop-like" applications and features that you don't get from the jQuery library alone. When looking for a comprehensive all-in-one solution, AngularJS is the right choice of the two: its two-way data binding, built-in directives, and filters allow developers to build applications very rapidly. So if you are looking for a lightweight, modular front-end framework for developing fast and powerful web interfaces, AngularJS is the apt framework for building complex applications. I hope this post has made you think of Angular.js as a more than viable replacement for jQuery.
<urn:uuid:f44c13f9-aedd-4786-9d4d-55b35624301d>
2.515625
2,982
Personal Blog
Software Dev.
33.734609
95,536,009
We may be entering the year 2015, but we cannot help glancing back at our previous year of astronomical discoveries. We are amazed by what we see. Based on scientific facts and the public's interest, 10 stories stand out above all the rest!

1. Superclusters and our Milky Way
Astronomers used the Green Bank Telescope (GBT) and similar radio telescopes to measure galaxy velocities, making it easier to pinpoint the Milky Way's location in the universe. Masses of galaxies, or superclusters, interconnect, creating the many stars of the heavens. Our home here on Earth is located in the supercluster named Laniakea, meaning "immense heaven" in Hawaiian. It seems our boundaries have been clarified, which links us to previously unrecognized areas of the universe.

2. The lifeline in the GG Tau-A binary star system
One of the most fascinating discoveries of 2014 is the wheel within the wheel located at the inner reaches of the GG Tau-A binary star system. A large disc of material surrounds an inner disc within the star system that revolves around the central star. It appears that life-sustaining material is being transferred from the outer to the inner disc, thus keeping creation in motion. This astonishing discovery was made with ALMA (the Atacama Large Millimeter/submillimeter Array).

3. Planet-forming rocks near the Orion Nebula
Apparently, gases in the Orion Nebula are teeming with planet-building "pebbles". These particles are 100-1,000 times larger than the particles found near protostars. However tiny these particles still are, this is substantial given the young age of the region. The findings are due to the astronomers' use of the GBT, provided by the National Science Foundation (NSF).

4. Discoveries below the surface of the Moon
A tag-team effort was required to study the Serenity region and the Aristillus crater. Using the GBT in West Virginia and the Arecibo Observatory in Puerto Rico, a radar signal was transmitted that traveled below the surface of the Moon. As the signal rebounded, it was collected by the GBT in West Virginia. This technique is the same one used to study asteroids and planets in the solar system.

5. Galaxy M82 and starbursts
Using the Karl G. Jansky Very Large Array (VLA), a radio image was created that reveals the central 5,200 light-years of the M82 galaxy. Ionized gases and fast-moving electrons are captured in this radio emission. Within the bright mix, you see star-forming regions and supernovas, all colliding and creating debris from the explosions.

6. Understanding gravity by studying distant stars
Astronomers have discovered a very unusual star system comprising two white dwarf stars and a dense neutron star. What is so amazing about this triad is how close its members are to each other: they are packed into a space smaller than the distance from the Earth to the Sun. This closeness has allowed scientists to study gravitational effects in a whole new way. Studies of this star system may even resolve problems with our fundamental "mis"-understandings of physics.

7. The distance to the Pleiades
A long-standing controversy surrounding the distance to the Pleiades has been resolved. A recent measurement put the distance at 443 light-years and has been shown to be accurate to within one percent, much more precise than previous measurements.

8.
Coldest, dimmest white dwarf star
What is remarkable about the discovery of this white dwarf star is that such stars are barely detectable: we know that many cold, dim white dwarfs exist, but we usually cannot find them. This white dwarf is so distant and cold that it has crystallized. With the help of three important resources, the Green Bank Telescope (GBT), the National Radio Astronomy Observatory (NRAO), and the Very Long Baseline Array (VLBA), a team of astronomers was able to identify this unique Earth-sized diamond.

9. Young star with Pluto-sized orbiting planets
Astronomers recently discovered many Pluto-sized objects and dust surrounding an adolescent star. As this new system was observed, scientists noticed a marked increase in dust congregating within the disc surrounding the heavenly bodies. In other words, an accelerated planet-forming system seems to be on the rise around the distant star HD 107146.

10. ALMA reveals the best evidence yet of the birth of planets
ALMA (the Atacama Large Millimeter/submillimeter Array), in its most recent detection of planet formation, has provided the best image yet of this process. What makes the observation stand out is the young age of the star involved in the system's formation. The sun-like star HL Tau and its surrounding planetary particles, 450 light-years from our solar system, provide the most astonishing detail ever captured by astronomers.

The year 2014 brought earth-shaking evidence of just how infinite the universe must be. Now, in 2015, we can only expect to delve a little deeper and go a bit further into the unknown. What a glorious adventure awaits!

art by QAuZ
<urn:uuid:0849b15c-c6b2-4cf3-9525-003a4004d07b>
3.5625
1,223
Listicle
Science & Tech.
49.887761
95,536,015
Mathematics mystery is all knitted up
Crocheted model helps to define hyperbolic space

NEW YORK - A 200-year-old hole in the fabric of mathematics has finally been mended. In a recent lecture at The Kitchen theater, a brainy — and dexterous — Cornell mathematician described how she made a real-life model of a principle that has mystified scientists for centuries. She crocheted it.

Since the early 1800s, mathematicians have known about something called "hyperbolic space," but they couldn't figure out a way to illustrate it. Enter Daina Taimina. After watching her husband, fellow mathematician David Henderson, make a rather flimsy version out of paper, she decided to use her knowledge of handicrafts to create a more durable one. When her knitted model proved too droopy, she tried crocheting it with coarse synthetic yarn, and the rendering turned out perfect. One of the models is now on display at the Smithsonian, and the scientific community is abuzz with requests for her handicrafts.

So what is hyperbolic space? "The easiest way of understanding it is that it's the geometric opposite of a sphere," Taimina says. "On a sphere, the surface curves in on itself and is closed. But on a hyperbolic plane, the surface is space that curves away from itself at every point."

Still confused? That's where her models come in. Verbal descriptions are so hard to understand that only a small group of mathematicians really knew what it all meant. And even they overlooked naturally occurring examples, like some kinds of lettuce and seaweed. Now, using the crocheted models, fifth graders are learning about hyperbolic planes.

Neurosurgeons also find the models useful because the planes' surfaces are similar to that of the brain. Henderson says scientists speculate that this kind of folding allows the brain to store information and retrieve it more quickly than if the surface were stretched out flat. Even cartoons can benefit from the models: Pixar animators use this kind of geometry to make certain surfaces, like fabric and skin, appear three-dimensional on screen.
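A short mathematical aside (standard hyperbolic-geometry formulas, not taken from the article): the reason crochet captures hyperbolic space so well is that, on a plane of constant curvature K = -1, the circumference and area of a circle grow exponentially with its radius, so each successive row must contain more stitches than the last:

$$C_{\mathrm{hyp}}(r) = 2\pi \sinh r, \qquad A_{\mathrm{hyp}}(r) = 2\pi(\cosh r - 1),$$

compared with $C(r) = 2\pi r$ on the flat plane and $C(r) = 2\pi \sin r$ on the unit sphere, where the circumference eventually shrinks back to zero: the "geometric opposite" Taimina describes.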
<urn:uuid:fc8163fb-1b4b-476e-8f3d-5c591affe468>
3.234375
608
Truncated
Science & Tech.
37.452314
95,536,069
A structure is a compound data type that contains different members of different types. The members are accessed by their names. A value of a structure-object is a tuple of the values of each member of the object. A structure can also be seen as a simple implementation of the object paradigm from object-oriented programming (OOP). A struct is like a class except for the default access: a class's members are private by default, while a struct's members are public by default. C++ also guarantees that a struct containing only C types has the same layout as the corresponding C struct, thus allowing access to legacy C functions. A struct can (but need not) also have constructors. As with classes, the compiler implicitly declares a destructor if the struct does not have a user-declared destructor.
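A short sketch makes the difference concrete (the type and member names below are illustrative only, not taken from any of the textbooks listed on this page):

    #include <iostream>
    #include <string>
    #include <utility>

    // Members of a struct are public by default, so no access specifier is needed.
    struct Point3D {
        double x = 0.0;
        double y = 0.0;
        double z = 0.0;
        std::string label;            // members of different types, accessed by name

        // A struct may also define constructors, just like a class.
        Point3D() = default;
        Point3D(double px, double py, double pz, std::string l)
            : x(px), y(py), z(pz), label(std::move(l)) {}
    };

    int main() {
        Point3D p(1.0, 2.0, 3.0, "sample point");
        p.z += 0.5;                   // public access: no getters or setters required
        std::cout << p.label << ": " << p.x << ", " << p.y << ", " << p.z << "\n";
        return 0;
    }

Had Point3D been declared with the keyword class instead, the member accesses in main would fail to compile until the members were explicitly made public.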
<urn:uuid:839fdb34-b25d-42fb-a584-2363ee2531b8>
3.125
248
Product Page
Software Dev.
44.09372
95,536,085
showing four spots on hindwing The gatekeeper or hedge brown (Pyronia tithonus) is most commonly found in southern and eastern Britain and coastal areas of south and south-east Ireland. It is also found in the Channel Islands, but not in Scotland nor the Isle of Man. Given its preference for warmer weather, it can be assumed that the restriction of range expansion is due to climate. Colonies vary in size depending on the available habitat, and can range from a few dozen to several thousand butterflies. Similar species and subspecies It is a member of the subfamily Satyrinae in the family Nymphalidae. A similar species is the meadow brown; the two species can be difficult to distinguish with closed wings since the underwing markings are very similar. However, the gatekeeper tends to rest with its wings open, whereas the meadow brown usually rests with its wings closed. The gatekeeper is also smaller and more orange than the meadow brown and has double pupils on its eyespots. Pyronia tithonus has two known subspecies. Pyronia tithonus ssp. britanniae, defined by Ruggero Verity in 1915, is represented in the British Isles. Pyronia tithonus ssp. tithonus, defined by Carl Linnaeus in 1771, is not found in the British Isles. Instead, this subspecies is seen in central and southern Europe except southern Italy and in the Mediterranean islands except for southern Corsica and Sardinia. The gatekeeper is orange with two large brown spots on its wings and a brown pattern on the edge of its wings. The eyespots on the forewing most likely reduce bird attacks, therefore the gatekeeper is often seen resting with its wings open. There are a large number of aberrant forms, such as excessa, where specimens have two to four extra spots on the forewing upperside. The number of spots on the hindwing underside also varies. Male – MHNT The male has a dark patch on the upper side of the forewing that contains scent-producing scales known as the androconia. This is most likely for courtship purposes. Androconia have evolved through sexual selection for the purpose of releasing pheromones for attracting mates. Little is known about how androconium actually function during courtship, and the chemical composition of the pheromones is unknown. Females typically have more spots than males. Males have more costally placed eyespots, compared to the females whose eyespots are more spread over the wing margin. As indicated by its alternate name, the gatekeeper butterfly prefers the habitat of meadow margins and hedges; field gates are often in such locations, and thus the gatekeeper can be found much more frequently in such locations than the meadow brown for example. Recent expansion and genetic diversity Early in the 20th century, Pyronia tithonus was common in southern Britain, but sparse in the north. In fact, the population contracted before re-expanding beginning in the 1940s. Over the past three decades, the flight range of the gatekeeper has extended northwards in Britain. Furthermore, the length of the flight period has been observed to be significantly shorter close to the edge of the range, suggesting that the extension of flight period and expansion of range are likely to be related. However, the mean flight date and length of flight period are not related. It has also been found that larger individuals cover longer distances, and this recent expansion of the gatekeeper may explain the larger size of recent populations. As a result of recent expansion, the gatekeeper is found in a wide variety of habitats. 
Some of the largest colonies can be found in scrubby grassland, woodland rides, country lanes, hedgerows, and other similar conditions within its range. This has led to a greater degree of genetic diversity in the gatekeeper compared to other species, such as P. aegeria, which are seen in more limited habitats. However, the contraction of abundance in the early 20th century has limited the potential of this genetic diversity, as bottlenecks and repeated founder events could have occurred during range changes. Much of the data on changes in Pyronia tithonus population size has been gathered from the UK Butterfly Monitoring Scheme. This scheme has recorded changes of abundance for 71 species between Britain and Ireland since 1976 through visits to more than 1500 monitoring sites. Pyronia tithonus is a characteristic field-margin species; it feeds on grasses as larvae and nectar as adults. The larvae of Satyrinae all feed on grasses, such as rough meadowgrass (Poa trivialis), smooth meadow grass (Poa pratensis) and sheep's fescue (Festuca ovina); they are usually green or brown. The pupae are a flimsy chrysalis either hanging upside down or lying in grass. The adults are often found around blackberry plants. The adult butterflies have a short proboscis and the shallow flowers of the blackberry provide an excellent nectar source. Larval food plants Adults feed primarily on bramble (Rubus fruticosus agg.), carline thistle (Carlina vulgaris), devil's-bit scabious (Succisa pratensis), fleabane (Pulicaria dysenterica), hemp agrimony (Eupatorium cannabinum), wild privet (Ligustrum vulgare), ragwort (Jacobaea vulgaris), red clover (Trifolium patense), thistles (Cirsium and Carduus species), thyme (Thymus praecox) and water mint (Mentha aquatica). The gatekeeper butterfly tends to rest on vegetation during overcast or hazy sunshine conditions. During sunny weather, it will fly from flower from flower gathering nectar. The gatekeeper is a relatively active butterfly but not very mobile, as seen when comparing it to a similar species, Maniola jurtina. Mobility in butterflies refers to the distance covered from flying, while activity refers to how often they are in flight. In an experiment assessing wing damage, Pyronia tithonus showed faster wing damage as a result of their increased activity, and these results showed that activity levels do not necessarily correlate with mobility. Their low mobility may also explain why they can be very abundant at one site but not at a similar habitat only a few kilometres away. Males fly more and are generally more active by spending most of their time locating mates. Pyronia tithonus is a protandrous species, meaning the males emerge before the females. As a result, females usually only mate once, so they have more time available for resting, nectar feeding, host plant selection and oviposition. Weather has been found to have a significant influence on population size. Warm, dry summers tend to result in the biggest increase in gatekeeper population. This weather trend may explain why Pyronia tithonus numbers have been low in northern Britain because of the cooler summers and that range expansion has resulted from climate change. Weather as a cause for changes in relative abundance has been supported in other ways as well. Changes have also been synchronous between species including Pyronia tithonus, with weather being a potential explanation. 
Based on these findings on the impact of climate, Pyronia tithonus abundance is expected to become 50% greater by 2080 given a high climate change. There is one generation of gatekeeper butterflies each year with adults emerging in July, peaking in early August, and only a few adults remaining at the end of the month. There is no known specific courtship ritual; however the male scent spots most likely play a role. Males will set up small territories and actively seek out a mate. Copulation lasts for approximately an hour during which the butterflies remain stationary with their wings closed. Females will lay between 100 and 200 eggs, usually in the shade or at random by ejecting eggs into the air. Initially larvae are yellow, but soon develop brown patches and continue to darken as they develop within the egg. Eggs hatch after about 14 days. - Eeles, Peter. "UK Butterflies - Gatekeeper". Webifield. Retrieved 3 October 2013. - Anon. "Gatekeeper". A-Z of butterflies. Butterfly Conservation. Retrieved 8 August 2011. - Eeles, Peter. "UK Butterflies - Gatekeeper". Webified. Retrieved 3 October 2013. - "Pyronia tithonus (Linnaeus, 1771)". Satyrinae of the Western Palearctic. Retrieved 3 October 2013. - "UK Butterflies". Peter Eeles. 2016. Retrieved August 1, 2016. - Merckx, Thomas; Hans Van Dyck (July 2002). "Interrelations Among Habitat Use, Behavior, and Flight-Related Morphology in Two Cooccurring Satyrine Butterflies, Maniola jurtina and Pyronia tithonus". Journal of Insect Behavior. 4. 15: 541–561. doi:10.1023/a:1016385301634. - Hall, Jason P. W.; Harvey, Donald J. (2002). "A survey of androconial organs in the Riodinidae (Lepidoptera)". Zoological Journal of the Linnean Society. 136 (2): 171–197. doi:10.1046/j.1096-3642.2002.00003.x. - Pollard, E. (October 1991). "Changes in the Flight Period of the Hedge Brown Butterfly Pyronia tithonus During Range Expansion". Journal of Animal Ecology. 3. 60: 737–748. doi:10.2307/5411. JSTOR . 5411 . - Hill, Jane K.; Clare L. Hughes; Calvin Dytham; Jeremy B. Searle (March 2006). "Genetic diversity in butterflies: interactive effects of habitat fragmentation and climate-driven range expansion". Biology Letters. 2: 152–154. doi:10.1098/rsbl.2005.0401. PMC . PMID 17148351. - "The UK Butterfly Monitoring Scheme (UKBMS)". Biological Records Centre. Retrieved 3 October 2013. - Hoskins, Adrian. "Butterflies of Britain and Europe - Gatekeeper". Retrieved 3 October 2013. - Pollard, E. (December 1988). "Temperature, Rainfall, and Butterfly Numbers". Journal of Applied Ecology. 3. 25: 819–828. doi:10.2307/2403748. JSTOR . 2403748 . - Pollard, E.; D. Moss; T.J. Yates (February 1995). "Population Trends of Common British Butterflies at Monitored Sites". Journal of Applied Ecology. 1. 32: 9–16. doi:10.2307/2404411. JSTOR 2404411. - Pollard, E. (February 1991). "Synchrony of Population Fluctuations: The Dominant Influence of Widespread Factors on Local Butterfly Populations". Oikos. 60: 7–10. doi:10.2307/3544985. JSTOR 3544985. - Roy, D.B.; P. Rothery; D. Moss; E. Pollard; J.A. Thomas (2001). "Butterfly numbers and weather: predicting historical trends in abundance and the future effects of climate change". Journal of Animal Ecology. 70: 201–217. doi:10.1111/j.1365-2656.2001.00480.x. |Wikimedia Commons has media related to Pyronia tithonus.|
<urn:uuid:633c952d-985f-44cc-9ae4-95c82c947f3d>
3.390625
2,467
Knowledge Article
Science & Tech.
49.982803
95,536,100
In this article, we will show that the Pythagorean approximations of √2 coincide with those achieved in the 16th century by means of continued fractions. Assuming this fact and the known relation that connects the Fibonacci sequence with the golden section, we shall establish a procedure to obtain sequences of rational numbers converging to different algebraic irrationals. We will see how approximations to some irrational numbers, using known facts from the history of mathematics, may perhaps help to acquire a better comprehension of the real numbers and their properties at the further mathematics level.
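As a concrete illustration of the Fibonacci–golden-section relation mentioned in the abstract (this worked example is an editorial addition, not taken from the article), the ratios of consecutive Fibonacci numbers form a sequence of rationals converging to the golden section:

    F_1 = F_2 = 1, \quad F_{n+1} = F_n + F_{n-1}, \qquad \frac{F_{n+1}}{F_n} \to \varphi = \frac{1+\sqrt{5}}{2} \approx 1.61803\ldots

The first few convergents are 1, 2, 3/2, 5/3, 8/5, 13/8, 21/13, 34/21, ..., and these are exactly the convergents of the continued fraction φ = [1; 1, 1, 1, ...], the kind of continued-fraction mechanism the abstract alludes to for other algebraic irrationals.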
<urn:uuid:69ff31cd-e482-4db6-be85-2634f3e079a3>
2.984375
332
Content Listing
Science & Tech.
42.232297
95,536,117
Charge number (z) refers to a quantized value of electric charge, with the quantum of electric charge being the elementary charge, so that the charge number equals the electric charge (q) in coulombs divided by the elementary-charge constant (e), or z = q/e. The charge numbers for ions (and also subatomic particles) are written in superscript, e.g. Na+ is a sodium ion with charge number positive one (an electric charge of one elementary charge). Atomic numbers (Z) are a special case of charge numbers, referring to the charge number of an atomic nucleus, as opposed to the net charge of an atom or ion. All particles of ordinary matter have integer-value charge numbers, with the exception of quarks, which cannot exist in isolation under ordinary circumstances (the strong force keeps them bound into hadrons of integer charge numbers). Charge numbers in chemistry For example, the charge on a chloride ion, Cl−, is −e, where e is the elementary charge. This means that the charge number for the ion is −1. The symbol z is used for the charge number, so the charge of an ion can be written as q = ze. The charge number in chemistry normally relates to the electric charge carried by an atom or ion; it determines how that species interacts electromagnetically with other atoms and ions. A chemical charge can be found by using the periodic table. An element's placement on the periodic table indicates whether its typical ionic charge is negative or positive: elements on the left side of the table tend to form positive ions, and elements on the right side tend to form negative ions. Positively charged ions are called cations; negatively charged ions are called anions. Elements in the same group usually form ions with the same charge. A group in the periodic table is the term used for a vertical column. The noble gases of the periodic table do not form charged ions because they are nonreactive: they are considered stable since they already contain the desired eight valence electrons. Other atoms form ions because they are reactive and gain or lose electrons in reactions with other atoms or ions in order to become stable. When elements bond, they can do so either by ionic bonding or by covalent bonding. When a positively charged ion bonds with a negatively charged ion, the magnitude of each ion's charge becomes the subscript of the other ion, so that the charges balance in the resulting formula. For example, if ammonium, with a +1 charge, is combined with an acetate ion, with a −1 charge, the charges cancel and the two ions combine in a 1:1 ratio. The products of such combinations are salts. Charge numbers also help to determine other aspects of chemistry. One example is that the charge of a monatomic ion gives its oxidation number directly; for instance, the oxidation number of Na+ is +1. This helps when trying to solve oxidation questions. A charge number can also help when drawing Lewis dot structures. For example, if the structure is an ion, the charge is written outside the Lewis dot structure. If there is a negative charge outside the Lewis dot structure, one electron needs to be added to the structure; if the charge were positive, an electron would be removed.
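A brief worked illustration of this charge-balancing ("criss-cross") idea, added here because the article's original chart and figures are not reproduced:

    NH4+ (charge number +1) combined with CH3COO− (charge number −1) gives NH4CH3COO, ammonium acetate: the +1 and −1 cancel, so the ions pair 1:1.
    Al3+ (charge number +3) combined with O2− (charge number −2) gives Al2O3: each ion's charge number becomes the other ion's subscript, and the total charge is 2(+3) + 3(−2) = 0.

In both cases the charge numbers of the ions fix the subscripts of the neutral formula.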
Charge numbers in nuclear and hadron physics For an atomic nucleus, which can be regarded as an ion having stripped off all electrons, the charge number is identical with the atomic number Z, which corresponds to the number of protons in ordinary atomic nuclei. Unlike in chemistry, subatomic particles with electric charges of two elementary charges (e.g. some delta baryons) are indicated with a superscript "++" or "−−". In chemistry, the same charge numbers are usually indicated as superscript "+2" or "−2".
<urn:uuid:add28712-6756-4e76-9db1-f5e7e1c377e9>
4.375
808
Knowledge Article
Science & Tech.
43.238921
95,536,145
- Research news - Open Access Shadows provide illumination © BioMed Central Ltd 2003 Published: 4 March 2003 Comparison of human and mouse DNA sequences has led to the discovery of numerous genes, but is limited by large-scale differences in genomic structure between the organisms. In contrast, direct comparisons between humans and primates have been hindered by the very large amount of sequence identity, which can obscure the boundaries of functional regions. In the 28 February Science, Edward Rubin and colleagues at the Lawrence Berkeley National Laboratory, Berkeley, California, US, compared multiple primate genomes to identify exons, regulatory elements, and other features of human genes (Science 299:1391-1394, February 28, 2003). Rubin et al. compared sequences from more than a dozen primate species, using a technique they call phylogenetic shadowing. While individual species differ little from one another, the additive collective difference of higher primates as a group is comparable to that of humans and mice. In their analysis of differences, they factored in relatedness and compared fast versus slow mutation rates to determine the relative importance of the sequence changes they observed. Exons displayed the least amount of cross-species variation, and exon-intron boundaries could be accurately and precisely predicted from the degree of collective differences found in their samples. Promoters and enhancers stood out from other noncoding regions due to their relatively conserved sequences. The existence and location of predicted novel control regions were confirmed through transcription factor-binding assays. Deletion of one highly conserved region led to a 55% decrease in gene expression, versus only 4% from deletion of a nonconserved region nearby. The authors conclude that such multiple intra-primate comparisons have the potential to identify new genes and new controlling elements, and that further genomic sequencing of primates may allow detailed annotation of the human genome not possible through human-mouse comparisons.
<urn:uuid:22c184ce-ef8e-4cdd-8723-44d8c2bf8b3f>
2.96875
386
Truncated
Science & Tech.
11.199559
95,536,147
NOAA Reinstates July 1936 As The Hottest Month On Record The National Oceanic and Atmospheric Administration, criticized for manipulating temperature records to create a warming trend, has now been caught warming the past and cooling the present. July 2012 became the hottest month on record in the U.S. during a summer that was declared “too hot to handle” by NASA scientists. That summer more than half the country was experiencing drought and wildfires had scorched more than 1.3 million acres of land, according to NASA. According to NOAA’s National Climatic Data Center in 2012, the “average temperature for the contiguous U.S. during July was 77.6°F, 3.3°F above the 20th century average, marking the warmest July and all-time warmest month on record for the nation in a period of record that dates back to 1895.” “The previous warmest July for the nation was July 1936, when the average U.S. temperature was 77.4°F,” NOAA said in 2012. This statement by NOAA was still available on their website when checked by The Daily Caller News Foundation. But when meteorologist and climate blogger Anthony Watts went to check the NOAA data on Sunday he found that the science agency had quietly reinstated July 1936 as the hottest month on record in the U.S. “Two years ago during the scorching summer of 2012, July 1936 lost its place on the leaderboard and July 2012 became the hottest month on record in the United States,” Watts wrote. “Now, as if by magic, and according to NOAA’s own data, July 1936 is now the hottest month on record again. The past, present, and future all seems to be ‘adjustable’ in NOAA’s world.” Watts had data from NOAA’s “Climate at a Glance” plots from 2012, which shows that July 2012 was the hottest month on record at 77.6 degrees Fahrenheit. July 1936 is only at 77.4 degrees Fahrenheit. [Annotations in the graph are from Watts]. “You can’t get any clearer proof of NOAA adjusting past temperatures,” Watts wrote. “This isn’t just some issue with gridding, or anomalies, or method, it is about NOAA not being able to present historical climate information of the United States accurately.” “In one report they give one number, and in another they give a different one with no explanation to the public as to why,” Watts continued. “This is not acceptable. It is not being honest with the public. It is not scientific. It violates the Data Quality Act.” Watts’ accusation of NOAA climate data manipulation comes after reports that the agency had been lowering past temperatures to create a warming trend in the U.S. that does not exist in the raw data. The ex-post facto data manipulation has been cataloged by climate blogger Steven Goddard and was reported by the UK Telegraph earlier this month. “Goddard shows how, in recent years, NOAA’s US Historical Climatology Network (USHCN) has been ‘adjusting’ its record by replacing real temperatures with data ‘fabricated’ by computer models,” writes Christopher Booker for the Telegraph. “The effect of this has been to downgrade earlier temperatures and to exaggerate those from recent decades, to give the impression that the Earth has been warming up much more than is justified by the actual data,” Booker writes. 
“In several posts headed ‘Data tampering at USHCN/GISS,’ Goddard compares the currently published temperature graphs with those based only on temperatures measured at the time.” “These show that the US has actually been cooling since the Thirties, the hottest decade on record; whereas the latest graph, nearly half of it based on ‘fabricated’ data, shows it to have been warming at a rate equivalent to more than 3 degrees centigrade per century,” Booker adds. When asked about climate data adjustments by the DCNF back in April, NOAA said there have been “several scientific developments since 1989 and 1999 that have improved the understanding of the U.S. surface temperature record.” “Many station observations that were confined to paper, especially from early in the 20th century, have been scanned and keyed and are now digitally available to inform these time series,” Deke Arndt, chief of NOAA’s Climate Monitoring Branch, told TheDCNF. “In addition to the much larger number of stations available, the U.S. temperature time series is now informed by an improved suite of quality assurance algorithms than it was in the late 20th Century,” Arndt said in an emailed statement. But NOAA has apparently not just been adjusting temperatures downward, but also adjusting them upwards. “This constant change from year to year of what is or is not the hottest month on record for the USA is not only unprofessional and embarrassing for NOAA, it’s bullshit of the highest order,” Watts wrote. “It can easily be solved by NOAA stopping the unsupportable practice of adjusting temperatures of the past so that the present looks different in context with the adjusted past and stop making data for weather stations that have long since closed.” NOAA did not immediately respond to TheDCNF’s request for comment.
<urn:uuid:5ad3d9ba-2a05-4178-89a1-8b0890387b9b>
2.8125
1,208
News Article
Science & Tech.
51.97679
95,536,169
An article in Science Daily reports that scientists have been able to measure the impact of river contamination using freshwater shrimp. It states: "Today, researchers measure synthesis of the protein in male Gammarus shrimp and study its impact on the viability of their young. Other substances impact the DNA of Gammarus. A very visual test, called the comet test, measures the damage done to the DNA. Tests carried out on rivers reveal damage of up to 20%. Pollutants attack three different targets, but in the end, the result is the same, the overall dynamics of populations are threatened. Studies are now being expanded to other species of Gammarus in order to offer generic tools for monitoring water quality." By measuring the damage done to freshwater shrimp, scientists gain a direct indicator of how pollution and other human activities affect rivers and, ultimately, the food we take from them. The article concludes with: "Over the long term, this situation may impact on food supplies for species higher up in the food chain, e.g. trout which eat great quantities of the small shrimp." To read the full article in Science Daily, please check it out here:
<urn:uuid:4b4c3a27-88fb-400e-90d5-74096c03ce65>
3.5
242
Personal Blog
Science & Tech.
48.312741
95,536,170
Tidewater glaciers—glaciers that flow from inland mountains all the way into the sea—are perhaps best known for birthing new icebergs in spectacular fashion. As members of James Balog’s Extreme Ice Survey team captured in this clip (above) of Ilulissat Glacier in western Greenland, calving events can feature huge chunks of ice tumbling into roiling waters and be accompanied by loud booming and splashing sounds. However, tidewater glaciers aren’t the only type of glacier that calves. The ends of lacustrine, or lake, glaciers also break off periodically. Such glaciers gouge depressions in the ground, and those holes fill with melt water to become proglacial lakes. While many of these lakes are small and ephemeral, some are large enough to serve as the backdrop for sizable calving events. University of Alaska glaciologist Martin Truffer captured this sequence of images (below), which show a calving event at Yakutat Glacier in southeastern Alaska on July 16, 2009. “What we see in the video is a huge iceberg breaking off and rotating. I don’t have a good estimate of the size, but the part of the front that broke off is at least one kilometer long. I think it is quite unusual to see such large icebergs overturning in lake-calving glaciers. Mostly, they just break off and quietly drift away,” Truffer noted in an email. There are some key differences between calving events at tidewater and lacustrine glaciers. Tidewater glaciers tend to have much steeper calving fronts than their freshwater cousins. Also, lake water is generally much cooler than seawater, and there is less water circulation in lakes due to the absence of tides. As a result, tidewater glaciers calve much more frequently and are much less likely to have floating tongues of ice, which are common on lake-calving glaciers. To learn more about Yakutat Glacier, read the Image of the Day we published on August 20, 2014. To learn more about the differences between lake-calving and tidewater glaciers, read this study published in the Journal of Glaciology. And to see more photographs of Yakutat Glacier, check out Martin Truffer’s field dispatches on his Glacier Adventures blog. I’ve included one of my favorites—an aerial shot taken on September 26, 2011, after Yakutat Glacier retreated enough that its single calving face had divided into two separate branches. The photograph was taken by William Dryer, one of Truffer’s colleagues.
<urn:uuid:969098a3-c895-41bf-9bb6-c8b55043dc15>
3.78125
536
Knowledge Article
Science & Tech.
40.274561
95,536,181
General Chemistry/Redox Reactions/Oxidation state Oxidation states are used to determine the degree of oxidation or reduction that an element has undergone when bonding. The oxidation state of a compound is the sum of the oxidation states of all atoms within the compound, which equals zero unless the compound is ionic. ||*Gaining electrons is reduction. The oxidation state of an atom within a molecule is the charge it would have if the bonding were completely ionic, even though covalent bonds do not actually result in charged ions. Method of notation Oxidation states are written above the element or group of elements that they belong to (when drawing the molecule), or written with roman numerals in parenthesis when naming the elements. |aluminum(III), an ion| Determining oxidation state For single atoms or ions Because oxidation numbers are just the sum of the electrons gained or lost, calculating them for single elements is easy. ||The oxidation state of a single element is the same as its charge. Pure elements always have an oxidation states of zero.| Notice that the oxidation states of ionic compounds are simple to determine. For larger molecules |Remember that all the individual oxidation states must add up to the charge on the whole substance.| Although covalent bonds do not result in charges, oxidation states are still useful. They label the hypothetical transfer of electrons if the substance were ionic. Determining the oxidation states of atoms in a covalent molecule is very important when analyzing "redox" reactions. When substances react, they may transfer electrons when they form the products, so comparing the oxidation states of the products and reactants allows us to keep track of the electrons. |for hydrogen chloride| |for the chlorite ion (notice the overall charge) |Oxidation states do not necessarily represent the actual charges on an atom in a molecule. They are simply numbers that indicate what the charges would be if that atom had gained or lost the electrons involved in the bonding. For example, CH4 is a covalent molecule—the C has no charge nor does the H, however the molecule can be assigned a −4 oxidation state for the C and a +1 oxidation state for the H's.| Determining Oxidation States The determination of oxidation states is based on knowing which elements can have only one oxidation state other than the elemental state and which elements are able to form more than one oxidation state other than the elemental state. Let's look at some of the "rules" for determining the oxidation states. 1. The oxidation state of an element is always zero. 2. For metals, the charge of the ion is the same as the oxidation state. The following metals form only one ion: Group IA, Group IIA, Group IIIA (except Tl), Zn2+, Cd2+. 3. For monatomic anions and cations, the charge is the same as the oxidation state. 4. Oxygen in a compound is −2, unless a peroxide is present. The oxidation state of oxygen in peroxide ion , O22− is −1. 5. For compounds containing polyatomic ions, use the overall charge of the polyatomic ion to determine the charge of the cation. Here is a convenient method for determining oxidation states. Basically, you treat the charges in the compound as a simple algebraic expression. For example, let's determine the oxidation states of the elements in the compound, KMnO4. Applying rule 2, we know that the oxidation state of potassium is +1. We will assign "x" to Mn for now, since manganese may be of several oxidation states. There are 4 oxygens at −2. 
The overall charge of the compound is zero: K Mn O4 +1 x 4(-2) The algebraic expression generated is: 1 + x -8 = 0 Solving for x gives the oxidation state of manganese: x - 7 = 0 x = +7 K Mn O4 +1 +7 4(-2) Suppose the species under consideration is a polyatomic ion. For example, what is the oxidation state of chromium in dichromate ion, (Cr2O72-)? As before, assign the oxidation state for oxygen, which is known to be -2. Since the oxidation state for chromium is not known, and two chromium atoms are present, assign the algebraic value of 2x for chromium: Cr2 O7 2- 2x 7(-2) Set up the algebraic equation to solve for x. Since the overall charge of the ion is -2, the expression is set equal to -2 rather than 0: 2x + 7(-2) = -2 Solve for x: 2x - 14 = -2 2x = 12 x = +6 Each chromium in the ion has an oxidation state of +6. Let's do one last example, where a polyatomic ion is involved. Suppose you need to find the oxidation state of all atoms in Fe2(CO3)3. Here two atoms, iron and carbon, have more than one possible oxidation state. What happens if you don't know the oxidation state of carbon in carbonate ion? In fact, knowledge of the oxidation state of carbon is unnecessary. What you need to know is the charge of carbonate ion (-2). Set up an algebraic expression while considering just the iron ion and the carbonate ion: Fe2 (CO3)3 2x 3(-2) 2x - 6 = 0 2x = 6 x = 3 Each iron ion in the compound has an oxidation state of +3. Next consider the carbonate ion independent of the iron(III) ion: C O3 2- x 3(-2) x - 6 = -2 x = +4 The oxidation state of carbon is +4 and each oxygen is -2. Determining oxidation states is not always easy, but there are many guidelines that can help. This guidelines in this table are listed in order of importance. The highest oxidation state that any element can reach is +8 in XeO4. |Element||Usual Oxidation State| |Fluorine||Fluorine, being the most electronegative element, will always have an oxidation of -1 (except when it is bonded to itself in F2, when its oxidation state is 0).| |Hydrogen||Hydrogen always has an oxidation of +1, -1, or 0. It is +1 when it is bonded to a non-metal (e.g. HCl, hydrochloric acid). It is -1 when it is bonded to metal (e.g. NaH, sodium hydride). It is 0 when it is bonded to itself in H2.| |Oxygen||Oxygen is usually given an oxidation number of -2 in its compounds, such as H2O. The exception is in peroxides (O2-2) where it is given an oxidation of -1. Also, in F2O oxygen is given an oxidation of +2 (because fluorine must have -1), and in O2, where it is bonded only to itself, the oxidation is 0.| |Alkali Metals||The Group 1A metals always have an oxidation of +1, as in NaCl. The Group 2A metals always have an oxidation of +2, as in CaF2. There are some rare exceptions that don't need consideration.| |Halogens||The other halogens (Cl, Br, I, As) usually have an oxidation of -1. When bonded to another halogen, its oxidation will be 0. However, they can also have +1, +3, +5, or +7. Looking at the family of chlorides, you can see each oxidation state (Cl2 (0), Cl- (-1), ClO- (+1), ClO2- (+3), ClO3- (+5), ClO4- (+7)).| |Nitrogen||Nitrogen (and the other Group 5A elements, such as phosphorus, P) often have -3 (as in ammonia, NH3), but may have +3 (as in NF3) or +5 (as in phosphate, PO43-).| |Carbon||Carbon can take any oxidation state from -4, as in CH4, to +4, as in CF4. It is best to find the oxidation of other elements first.| In general, the more electronegative element has the negative number. 
Using a chart of electronegativities, you can determine the oxidation state of any atom within a compound. Oxidation states are another periodic trend. They seem to repeat a pattern across each period.
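The algebraic bookkeeping worked through above is mechanical enough to automate. A minimal sketch follows (an editorial addition; the type and function names are illustrative, not from the text), which solves for one unknown oxidation state given the known states and the overall charge:

    #include <iostream>
    #include <vector>

    // One kind of atom in a formula: how many there are and, if known,
    // its assigned oxidation state (e.g. O = -2, K = +1).
    struct Species {
        int count;
        double oxidationState;   // ignored when 'known' is false
        bool known;
    };

    // Solve  (number of unknown atoms) * x + (sum of known contributions) = overall charge.
    // Assumes the formula contains at least one atom of the unknown element.
    double solveUnknownOxidationState(const std::vector<Species>& formula, double overallCharge) {
        double knownSum = 0.0;
        int unknownCount = 0;
        for (const Species& s : formula) {
            if (s.known)
                knownSum += s.count * s.oxidationState;
            else
                unknownCount += s.count;
        }
        return (overallCharge - knownSum) / unknownCount;
    }

    int main() {
        // KMnO4: K = +1 (known), Mn unknown, O = -2 (known), neutral compound.
        std::vector<Species> kmno4 = { {1, +1, true}, {1, 0, false}, {4, -2, true} };
        std::cout << "Mn in KMnO4:     " << solveUnknownOxidationState(kmno4, 0) << "\n";      // prints +7

        // Cr2O7^2-: Cr unknown, O = -2 (known), overall charge -2.
        std::vector<Species> dichromate = { {2, 0, false}, {7, -2, true} };
        std::cout << "Cr in Cr2O7^2-:  " << solveUnknownOxidationState(dichromate, -2) << "\n"; // prints +6
    }

The two examples reproduce the KMnO4 and dichromate calculations shown earlier in this section.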
<urn:uuid:7c87afc3-ee3b-472e-93db-c5a3efd7eba8>
4.34375
1,845
Knowledge Article
Science & Tech.
55.084789
95,536,183
By Finlo Rohrer BBC News Magazine HOW CLOUD SEEDING WORKS 1. Fire silver iodide into cloud using flares on planes or from ground 2. Water droplets attach to these particles, falling as snow which melts into rain 3. This boosts updrafts, which pull moist air into the cloud With lawns going brown and cars left unwashed, can we make it rain by firing chemicals into the clouds, a technique reportedly used during the 1976 drought? It has only taken a few weeks of drought panic for the blue-sky thinkers to come up with seemingly outlandish plans such as towing icebergs up the Thames. But while one such idea - cloud seeding, also known as weather modification - sounds like the stuff of science fiction, it dates from the 1940s. Particles are dropped or fired into clouds in an effort to change levels of precipitation. Its best known use is in Moscow, where legend goes that it never rains on Red Square on May Day. It's a practice that still goes on. "It wasn't raining in Moscow [this May Day]," a spokeswoman for the mayor says. "We have a 'making the weather' department." In China it's credited with boosting rainfall in drought-stricken areas, although there are allegations of "rain theft" levelled at provinces that use it too zealously. It is also used to boost snowfall in the mountains above Californian hydroelectric dams, at Colorado ski resorts, to stop fog at airports and to prevent hail damage in cities. Don Griffiths, president of North American Weather Consultants, says the first step is to take a cloud with upper layers below freezing. Next, fire silver iodide (or salt or dry ice) into the cloud. This can be done either by dropping flares from a plane - these may be attached under the wings - or by firing them from the ground. Water droplets attach to the particles, forming snowflakes. Once these are heavy enough, they fall first as snow and then melt into rain at lower altitudes. "The trick is getting those seeding materials in the right place at the right time," says Mr Griffiths. Experiments show that rainfall can be boosted by at least a quarter in specific areas over a whole season, he says. As for whether the UK could benefit, that depends on the type of clouds in the affected areas. Many meteorologists agree that cloud seeding brings more rain, but whether rainfall can be increased in any predictable way remains in dispute. The National Academy of Sciences has called for more research, driven by a world in which two billion people suffer water shortages. But, it warns in a recent report, "scientists are still unable to confirm that these induced changes result in verifiable, repeatable changes in rainfall, hail fall, and snowfall on the ground." And Keith Seitter, executive director of the American Meteorological Society, also adds a note of caution. "There is no technology that can create rain when there was no potential for it to begin with. Cloud seeding appears to be able to get a little bit more than you would have got otherwise. The conditions are going to have to be just right for cloud seeding to have a measurable impact." For there are annual variations in rainfall, variations even scientists cannot explain. Wrong sort of clouds Stephen Dorling, senior lecturer in meteorology at the University of East Anglia, says it's difficult to imagine finding a reliable way to boost rainfall. The difficulty is doing it in a controlled way. The process of rain formation is reasonably well understood, but as for a computer programme that can model it, each cloud has an incredible amount of science going on inside it.
We simply wouldn't have the computing power to do it. He's also worried that cloud seeding could provoke legal disputes between nations, if rain was increased in one area but reduced in another. But Mr Griffiths dismisses suggestions that cloud seeding could harm other areas. "There is a huge amount of water vapour in the atmosphere. It is like an ocean but you think of it as a small lake. Only 10% of the water vapour ever reaches the ground as rainfall or snowfall." Whether or not southern England has the right sort of clouds, the authorities regard talk of cloud seeding - and iceberg towing - as something of a distraction. "Banning non-essential use is the priority," says an Environment Agency spokeswoman. "We do not currently need to even consider it." Add your comments on this story, using the form below. One major problem is flushing the toilet with perfectly clean drinking water! There should be new plumbing systems installed that dont waste such water. Sam C, London We help Third World countries overcome the fact they have no water, so why can't we help ourselves? I find it stupid that a country as advanced as ourselves have come to this situation, but yet nothing is done about it and we will be in the same situation again next year. Stopping people from using the water is not going to solve the problem and especialy giving out fines... it just creates a bigger distance between the goverment and the Perhaps water can be condensed out of the hot air spouted by politicians explaining away the need to upgrade the distribution network to prevent leaks ? The country has gone mad, would it not be easier and cheaper in the long run for the water companies to patch the leaks? How much profit have they made recently? Tony Smith, London Brilliant, not only is there going to be a drought with lawns going brown but they're also going to poison the water course. In fact this reminds me of a similar story that came out of glastonbury festival one year where they reckoned they had a cloud buster to stop it raining. You watch next thing they will have men on the Guy Parkinson, Dewsbury I thought this was a joke. We are dealing with global warming, icecaps melting and they are talking about pulling icebergs up the Thames. Surely with the amount of rain England actually gets we could erect large temporary water tanks in rain fall area. And if the water companies actually repaired damaged pipes correctly we wouldnt see leaks for weeks on end. Amanda, Leeds, Leeds I read only yesterday how the Thames Valley area, managed mainly by Thames Water, has the worst leakage record in the country, co-incidence they are imminently considering a non essential drought order. Are slack water companies being fined, or considered for take over, threatened with nationalisation, or someone fixing prices till their abysmal record improves, that'd kick start investors into I think we need some more info on 'iceberg towing'. It sounds Why expend time and money on these idea in England when so much of the rain the does fall is wasted?? How much water is lost by the water companies through leaks? Water meters should also be mandatory, my house in Australia like all homes their has one. I could see the effect simple measures had on my useage. Like Lord Kelvin said, you can't manage what you can't measure. Management of the resource is needed not just more of it!
For goodness sake, maybe we should be considering wider issues here, such as why so many people live in the south.....we don't struggle for water in the North....ummm less people, less stress on the system...better quality of life (dare I say?) Although cloud seeding might seem like a good idea at first has anybody considered the real costs. Not just financial but enviromental as well. Surely it would be better to put more effort into capturing eccessive rainfall water for later use and fixing old and damaged pipes. Use the money from shareholders to build more reservoirs and desalination plants. Divert rivers that are prone to flooding into reservoirs instead of straight out to sea. With a little thought and investment now in years to come we could be selling water to real drought stricken areas and boost our nation's income. A far better way keep the tap running. I have been a mariner all my life and have noticed that 40 years ago weather depressions transited well to the north, over iceland. Today we see them touch the tip of Scotland as they travel east, and the highs rarely form over the UK any more. The Gulf stream therefore must have moved to the south more closer to Scotland, so based on the time it has taken to reach this point, it will not be too long before we loose the benifits of the Gulf stream, Then we will have all the water we want in the form of ice, Cloud seeding was reported to cause disastrous flooding in North Devon - the govt would have to think really carefully before doing Duncan McCarthy, Bristol,UK Cloud seeding is wrong - you are just interfering with a natural cycle. The rain that is bought about by seeding obviously stops that rain from falling naturally elsewhere. Nowhere near enough is known to re-attempt this folly. The BBC may edit your comments and not all emails will be published. Your comments may be published on any BBC media
<urn:uuid:b2661257-cfc5-4f7c-abe4-0e6b077eb76e>
3.078125
2,024
Comment Section
Science & Tech.
55.988372
95,536,221
Authors: Jonathan Tooker The purpose of this report is to debunk Darwin's theory of evolution and any variant theory that relies on the natural rate of mutation to explain the origin of new genes. We construct a model of DNA and show that the minimum rate of mutation needed to produce humans within the geological age of the Earth is too high. It is much higher than any realistic model of random mutations. The calculation presented here should end the evolution debate, at least in its Darwinian limit. Other problems with evolution are discussed. Comments: 13 Pages. 3 figures, 2 tables, less typos Unique-IP document downloads: 457 times Vixra.org is a pre-print repository rather than a journal. Articles hosted may not yet have been verified by peer-review and should be treated as preliminary. In particular, anything that appears to include financial or legal advice or proposed medical treatments should be treated with due caution. Vixra.org will not be responsible for any consequences of actions that result from any form of use of any documents on this website. Add your own feedback and questions here: You are equally welcome to be positive or negative about any paper but please be polite. If you are being critical you must mention at least one specific error, otherwise your comment will be deleted as unhelpful.
<urn:uuid:2643765d-45e7-4ee3-89bc-fb6e1be3742b>
2.8125
270
Academic Writing
Science & Tech.
42.047985
95,536,224
The Theory of Aberrations The operation of axially symmetric electron and ion lenses is based on the paraxial (first-order) theory. In practice, however, the trajectories always have both finite displacements r and finite slopes r’ with respect to the axis. Even if they are small, the omission of the higher-degree terms in the series expansions leading to the paraxial ray equation causes some error. Therefore, the paraxial theory is always an approximation. In fact, a point object will be “imaged” not by a conjugate point but by a blurred spot produced by different rays with different slopes that converge at different image points. These rays intersect the Gaussian (paraxial) image plane at different points; therefore the “image” is not a point but a finite-size spot, which can have quite an irregular shape. This phenomenon is called the geometrical aberration. We have already seen an example of this effect in Section 2–7–2–1 when we analyzed the operation of the long magnetic lens. We established the fact that a clear image can only be produced if the particles move close to the field lines. Otherwise, different particles will be focused at different points and the image will be blurred. Keywords: Object Point, Spherical Aberration, Chromatic Aberration, Gaussian Image, Magnetic Lens
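To make the "omission of the higher-degree terms" concrete (this note is an editorial addition, not part of the book excerpt): paraxial optics keeps only the first term of expansions such as

    \sin\theta = \theta - \frac{\theta^{3}}{3!} + \frac{\theta^{5}}{5!} - \cdots \approx \theta,

which is valid only for rays with small displacements and slopes. The neglected cubic terms are the leading correction, and they give rise to the third-order (Seidel) geometrical aberrations, of which spherical aberration, listed among the keywords above, is the classic example.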
<urn:uuid:1e75f5dd-864c-4032-a144-00ef02c3df4d>
3.796875
293
Truncated
Science & Tech.
35.625068
95,536,252
By successfully confining atoms of antihydrogen for an unprecedented 1,000 seconds, an international team of researchers called the ALPHA Collaboration has taken a step towards resolving one of the grand challenges of modern physics: explaining why the Universe is made almost entirely of matter, when matter and antimatter are symmetric, with identical mass, spin and other properties. The achievement is remarkable because antimatter instantly disappears on contact with regular matter such that confining antimatter requires the use of exotic technology. The collaboration of 39 researchers, including Daniel Miranda Silveira and Yasunori Yamazaki from the RIKEN Advanced Science Institute, Wako, trapped antihydrogen inside a ‘bottle’ defined by a set of magnetic fields created by an octupole magnetic coil and a pair of mirror coils (Fig. 1). The bottle could not confine antihydrogen atoms unless they had extremely low energy, which represents a particular problem because antimatter is made through an extremely energetic process; and any cooling procedures must prevent antimatter and matter from meeting. In a previous ALPHA Collaboration experiment, the researchers succeeded in confining 38 antihydrogen atoms for at least one-fifth of a second. Buoyed by their success, the collaboration focused on further cooling the antihydrogen atoms. Advances they made to two techniques proved especially fruitful. The first, evaporative cooling, relies on the fact that any collection of antiparticles will include some that are more energetic than others. By confining this collection inside an energy potential that lets only the most energetic particles escape, or evaporate, the entire collection can be effectively cooled, and can reach hundreds of degrees Celsius below freezing, Yamazaki explains. The second technique, autoresonant mixing, uses a technique called phase locking to mix the two constituents of antihydrogen—antiprotons and positrons—without warming the antiprotons. Once cooled in this way, the ALPHA Collaboration was able to trap more antimatter atoms, some for times exceeding 1,000 seconds. Critically, this is much longer than the time it takes for antimatter to relax to its lowest-energy, or ground, quantum mechanical state, which is a prerequisite for studying its properties with laser and microwave spectroscopic techniques. Trapping antimatter atoms in this way will allow physicists to address questions regarding the symmetry between matter and antimatter, which is currently understood to be a foundational property of physics, says Yamazaki. “If we see even a slight difference between hydrogen and antihydrogen properties, then the standard model of physics will need to be rewritten, and our understanding of the Universe will change.” Andresen, G.B., Ashkezari, M.D., Baquero-Ruiz, M., Bertsche, W., Bowe, P.D., Butler, E., Cesar, C.L., Charlton, M., Deller, A., Eriksson, S. et al. Confinement of antihydrogen for 1,000 seconds. Nature Physics 7, 558–564 (2011).
<urn:uuid:a7386a8a-2e1c-43b5-bf52-511eeb8b9441>
3.21875
1,273
Content Listing
Science & Tech.
34.247921
95,536,268
- Matter Waves - Wave Nature of Particles - Dual Nature of Radiation and Matter Part 4 (video tutorial, 01:34:38) - Dual Nature of Radiation and Matter Part 9 (Heisenberg uncertainty) (video tutorial, 00:07:01) - Dual Nature of Radiation and Matter Part 11 (Application of wave nature) (video tutorial, 00:12:03) Crystal diffraction experiments can be performed using X-rays, or electrons accelerated through appropriate voltage. Which probe has greater energy? (For quantitative comparison, take the wavelength of the probe equal to 1 Å, which is of the order of inter-atomic spacing in the lattice) (me = 9.11 × 10−31 kg) What is the de Broglie wavelength of a nitrogen molecule in air at 300 K? Assume that the molecule is moving with the root-mean-square speed of molecules at this temperature. (Atomic mass of nitrogen = 14.0076 u) Obtain the de Broglie wavelength associated with thermal neutrons at room temperature (27 ºC). Hence explain why a fast neutron beam needs to be thermalised with the environment before it can be used for neutron diffraction experiments. An electron of mass me revolves around a nucleus of charge +Ze. Show that it behaves like a tiny magnetic dipole. Hence prove that the magnetic moment associated with it is expressed as μ = −(e/2me) L, where L is the orbital angular momentum vector of the electron. Give the significance of the negative sign. Find the de Broglie wavelength of a neutron, in thermal equilibrium with matter, having an average kinetic energy of (3/2) kT at 300 K. The energy and momentum of an electron are related to the frequency and wavelength of the associated matter wave by the relations: E = hν, p = h/λ. But while the value of λ is physically significant, the value of ν (and therefore, the value of the phase speed νλ) has no physical significance. Why? Calculate the de Broglie wavelength of the electrons accelerated through a potential difference of 56 V. The wavelength of light from the spectral emission line of sodium is 589 nm. Find the kinetic energy at which (a) an electron, and (b) a neutron, would have the same de Broglie wavelength. Obtain the de Broglie wavelength of a neutron of kinetic energy 150 eV. As you have seen in Exercise 11.31, an electron beam of this energy is suitable for crystal diffraction experiments. Would a neutron beam of the same energy be equally suitable? Explain. (mn = 1.675 × 10−27 kg) An electron and a photon each have a wavelength of 1.00 nm. Find (a) their momenta, (b) the energy of the photon, and (c) the kinetic energy of the electron. What is the (b) speed, and (c) de Broglie wavelength of an electron with kinetic energy of 120 eV? Show that the wavelength of electromagnetic radiation is equal to the de Broglie wavelength of its quantum (photon). What is the de Broglie wavelength of a dust particle of mass 1.0 × 10−9 kg drifting with a speed of 2.2 m/s? Compute the typical de Broglie wavelength of an electron in a metal at 27 ºC and compare it with the mean separation between two electrons in a metal which is given to be about 2 × 10−10 m.
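As a worked illustration of one of the problems listed above (this calculation is an editorial addition, not part of the original problem set): for the dust particle of mass 1.0 × 10−9 kg drifting at 2.2 m/s, the de Broglie wavelength is

    \lambda = \frac{h}{mv} = \frac{6.63\times10^{-34}\ \mathrm{J\,s}}{(1.0\times10^{-9}\ \mathrm{kg})(2.2\ \mathrm{m/s})} \approx 3.0\times10^{-25}\ \mathrm{m},

which is many orders of magnitude smaller than any atomic dimension and explains why wave behaviour is unobservable for macroscopic objects.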
<urn:uuid:beeace21-9e1c-49b7-ac2d-7eab8fa0fca7>
3.40625
761
Content Listing
Science & Tech.
56.204717
95,536,301
A possible restart of an interplate slow slip adjacent to the Tokai seismic gap in Japan

The Tokai region of Japan is known to be a seismic gap area and is expected to be the source region of the anticipated Tokai earthquake with a moment magnitude of over 8. Interplate slow slip occurred from approximately 2001 and subsided in 2005 in the area adjacent to the source region of the expected Tokai earthquake. Eight years later, the Tokai region again revealed signs of a slow slip from early 2013. This is the first evidence, based on a dense Global Positioning System network, that Tokai long-term slow slips occur repeatedly. Two datasets with different detrending produce similar transient crustal deformation and aseismic slip models, supporting the occurrence of the Tokai slow slip. The center of the current Tokai slow slip is near Lake Hamana, south of the center of the previous Tokai slow slip. The estimated moments, which increase at a roughly constant rate, amount to that of an earthquake with a moment magnitude of 6.6. If the ongoing Tokai slow slip subsides soon, it will suggest that there are at least two different types of slow slip events in the Tokai long-term slow slip area: that is, a large slow slip with a moment magnitude of over 7 with undulating time evolution and a small one with a moment magnitude of around 6.6 with a roughly linear time evolution. Because the Tokai slow slip changes the stress state to one more favorable for the expected Tokai earthquake, intensive monitoring is ongoing.

Keywords: Plate subduction zone; GPS; Interplate earthquake; Transient deformation; Slow slip event; Tokai seismic gap; Tokai slow slip; Low-frequency earthquake; Global Positioning System

In this tectonic setting, the dense Global Positioning System (GPS) network (GEONET) in Japan detected transient movements in the Tokai region from early 2001, which disappeared around 2005 (e.g., Ozawa et al. 2002; Miyazaki et al. 2006; Liu et al. 2010). This transient was interpreted as a long-term slow slip on the plate interface near Lake Hamana, central Japan, adjacent to the Tokai seismic gap (e.g., Ozawa et al. 2002; Ohta et al. 2004; Miyazaki et al. 2006; Liu et al. 2010; Tanaka et al. 2015). The previous Tokai slow slip gradually stopped and did not trigger the anticipated Tokai earthquake. After the discovery of the Tokai slow slip by GEONET, it was proposed that the Tokai long-term slow slip occurred during the periods of approximately 1978–1983 and 1987–1991 on the basis of baseline measurements by an electromagnetic distance meter or based on leveling data (e.g., Kimata et al. 2001; Ochi 2014). This hypothesis is consistent with other slow slip events around Japan, such as the Bungo slow slip and the Hyuga-nada slow slip (e.g., Ozawa et al. 2013; Yarai and Ozawa 2013), in that they have occurred quasi-periodically. Since the 2011 Mw 9 Tohoku earthquake, eastward postseismic deformation has been the dominant source of crustal deformation in the Tokai region (see Fig. 1b). Around 12 years after the start of the previous Tokai slow slip, GEONET has been detecting a similar transient signal in the Tokai region since early 2013, which is mixed with the postseismic deformation due to the 2011 Tohoku earthquake. This transient movement suggests the possibility of the restart of the Tokai slow slip on the interface between the Amurian plate and the Philippine Sea plate.
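For readers unfamiliar with the moment-magnitude figures quoted above (around 6.6 for the current event, over 7 for the previous one), the conversion between cumulative seismic moment M0 (in N·m) and moment magnitude uses the standard relation Mw = (2/3)(log10 M0 − 9.1). The snippet below is an illustrative conversion only, not code or values from the study.

```cpp
// Illustrative conversion between seismic moment (N*m) and moment magnitude,
// using the standard relation Mw = (2/3)(log10(M0) - 9.1). Not from the paper.
#include <cmath>
#include <cstdio>

double momentToMw(double m0)  { return (2.0 / 3.0) * (std::log10(m0) - 9.1); }
double mwToMoment(double mw)  { return std::pow(10.0, 1.5 * mw + 9.1); }

int main() {
    std::printf("Mw 6.6 corresponds to M0 ~ %.2e N*m\n", mwToMoment(6.6)); // ~1.0e19
    std::printf("Mw 7.0 corresponds to M0 ~ %.2e N*m\n", mwToMoment(7.0)); // ~4.0e19
    // A roughly constant moment-release rate (constant dM0/dt) therefore makes
    // Mw grow only logarithmically with time.
    return 0;
}
```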
In addition to the crustal deformation detected by GEONET, a strain meter in the Tokai region also shows transient deformation (Miyaoka and Kimura 2016). Because there is a possibility that the ongoing slow slip will lead to the anticipated Tokai earthquake, it is important to monitor the state of the current Tokai slow slip. In this study, we estimate the spatial and temporal evolution of the possible Tokai slow slip by applying square-root information filtering following the time-dependent inversion technique and compare it with the previous event. We also discuss the relationship between the low-frequency earthquakes and the estimated slow slip model and the influence of the latter on the expected Tokai earthquake. GPS data were analyzed to obtain daily positions with Bernese GPS software (version 5.0). We adopted the F3 solution (Nakagawa et al. 2008), which uses the final orbit and earth rotation parameters of the International GNSS Service (IGS) and provides a higher S/N ratio than the previous F2 position time series (Hatanaka et al. 2003). We used the east–west (EW), north–south (NS), and up–down (UD) components at approximately 400 GPS sites in the Tokai, Kanto, and Tohoku regions. Postseismic deformation due to the 2011 Tohoku earthquake has strong potential to contaminate and bias our search for slow slip in the Tokai region. We therefore attempt to remove its influence in two different ways, each generating a different dataset. First, we invert two fault patches (one for the Tokai slow slip and one for the Tohoku afterslip). In this analysis, we attributed all the postseismic deformation of the Tohoku earthquake to afterslip on the plate interface in the Tohoku region and did not take viscoelastic relaxation into account. This first approach assumes that the postseismic deformation due to viscoelastic relaxation can be partly modeled by afterslip modeling. However, it is a fact that viscoelastic relaxation contributes to the postseismic deformation due to the Tohoku earthquake (e.g., Sun et al. 2014). Thus, we need another approach to estimate the effects of viscoelastic relaxation and afterslip on the plate interface to support the results of the analysis with two fault patches. For this purpose, second, we attempt to remove the Tohoku postseismic deformation by considering exponential and logarithmic trends in the position time series in the analysis with one fault patch for the Tokai slow slip. Analysis with two fault patches The degree of the polynomial functions n and the overtone of the trigonometric functions m were estimated on the basis of Akaike information criterion (AIC) (Akaike 1974). After removing the annual components, we estimated the linear trend for the data between January 1, 2008, and January 1, 2011, during which no transient displacements occurred, and removed it from the data without annual components. We consider that the adopted steady state for this period is satisfactory for emphasizing the results, because the previous 2001–2005 slow slip and the current slow slip were clearly detected as a deviation from this adopted steady state. After this detrending, we smoothed the position time series by averaging over 3 days to reduce errors. Thus, this first detrending does not take into account the postseismic deformation due to the 2011 Tohoku earthquake, which is the main difference from the following second detrended dataset. We call this dataset the first detrended dataset. We applied square-root information filtering (Ozawa et al. 
2012) to the first detrended dataset following the inversion technique of McGuire and Segall (2003) for the period between January 1, 2013, and October 25, 2015. To reduce the computational burden, we used position time series with an interval of 3 days. Because we used detrended data, the estimated aseismic interplate slip is the deviation from the steady state for the period between January 1, 2008, and January 1, 2011. We used 389 GPS sites in the filtering analysis for the first detrended dataset (see Fig. 1a). We weighted the EW, NS, and UD displacements with a ratio of 1:1:1/3 considering the standard deviations estimated from ordinary Kalman filtering. We used two fault patches for the upper boundary of the Pacific plate along the Japan Trench and that of the Philippine Sea plate along the Suruga trough (Fig. 1). Because postseismic deformation occurred after the 2011 Tohoku earthquake as mentioned above, we attributed the cause of all postseismic deformation to afterslip on the Tohoku fault patch. In this case, we do not take the viscoelastic response due to the Tohoku earthquake into account because we consider that the effect of the viscoelastic response to ground deformation occurs on a spatially large scale and is similar to the afterslip effect at this point. That is, the contribution of the viscoelastic response to the surface deformation in the Tokai region may be partly compensated by our afterslip model on the first fault patch, which is the upper surface of the Pacific plate after the Tohoku earthquake. We incorporated the inequality constraint that the slip is within 40° of the direction of the plate motion following the method of Simon and Simon (2006). In this filtering analysis, we incorporated the spatial roughness of the slip distribution (McGuire and Segall 2003). Hyperparameters that scale the spatial and temporal smoothness were estimated by approximately maximizing the log-likelihood of the system (Kitagawa and Gersch 1996; McGuire and Segall 2003). The optimal values of the spatial and temporal smoothness of the Tohoku fault patch are approximately 1.0 and 0.001, while those of the Tokai fault patch are approximately 0.004 and 0.001, respectively. In the above analysis, a fault patch and a slip distribution on a fault patch are represented by the superposition of spline functions (Ozawa et al. 2001). The fault patch for the Tokai region consists of 11 nodes in the T-direction and 15 nodes in the S-direction (see Fig. 1b) (Ozawa et al. 2001). These directions are defined in Fig. 1b. The spacing of knots on the fault patch is approximately 20 km in the Tokai region. The plate boundary model is from Hirose et al. (2008). With regard to the fault patch in the Tohoku region, we used 12 nodes in the T-direction and 10 nodes in the S-direction with spacing of approximately 80 and 40 km in the T-direction and S-direction, respectively (see Fig. 1a). This Tohoku fault patch is used only in the analysis with two fault patches. Although the spacing between the parallel trench nodes is larger than that between the normal trench nodes, the results for the afterslip on this Tohoku fault patch are similar to those of Ozawa et al. (2012), in which the grid spacing is shorter. Thus, we consider that the fault patch adopted for the Tohoku region can satisfactorily describe the afterslip of the Tohoku earthquake. The plate boundary model is from Nakajima and Hasegawa (2006) and Nakajima et al. (2009). 
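The steady-state detrending described in the data section above (a linear trend estimated over the quiet January 2008 to January 2011 window and removed from the de-seasoned position time series) can be illustrated with a minimal sketch. The helper below fits only a straight line by closed-form least squares over a quiet index range and subtracts it from the whole series; the polynomial and annual terms, the 3-day averaging, and the filtering itself are omitted, and the function name is invented for illustration.

```cpp
// Minimal sketch of the detrending idea: estimate a linear trend over a quiet
// reference window and remove it from the full daily position time series.
// Hypothetical helper; the actual analysis also removes seasonal terms,
// higher-order polynomials, and applies 3-day averaging.
#include <cstddef>
#include <vector>

std::vector<double> removeLinearTrend(const std::vector<double>& pos,
                                      std::size_t quietBegin,
                                      std::size_t quietEnd) {
    // Least-squares fit pos[i] ~ a + b*i over [quietBegin, quietEnd).
    double n = 0, sx = 0, sy = 0, sxx = 0, sxy = 0;
    for (std::size_t i = quietBegin; i < quietEnd; ++i) {
        double x = static_cast<double>(i), y = pos[i];
        n += 1; sx += x; sy += y; sxx += x * x; sxy += x * y;
    }
    double b = (n * sxy - sx * sy) / (n * sxx - sx * sx);
    double a = (sy - b * sx) / n;

    // Subtract the extrapolated steady state at every epoch, so a later
    // transient (e.g. a slow-slip signal) appears as a deviation from zero.
    std::vector<double> detrended(pos.size());
    for (std::size_t i = 0; i < pos.size(); ++i)
        detrended[i] = pos[i] - (a + b * static_cast<double>(i));
    return detrended;
}
```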
The coefficients of the spline functions that represent the slip distribution on the fault patches are estimated in this inversion (Ozawa et al. 2001). Analysis with one fault patch We used 129 GPS sites in the Tokai region of the second detrended dataset for the time-dependent inversion analysis (Fig. 1b). This is because it is not necessary to take into account the postseismic deformation due to the 2011 Tohoku earthquake for the second detrended dataset, although we have to take it into account in the first detrended dataset. In the second detrended dataset, we used the same fault patch between the Philippine Sea plate and the Amurian plate beneath the Tokai region as that in the first detrended dataset without the Tohoku fault patch, because we consider that the effects of the viscoelastic relaxation and afterslip due to the 2011 Tohoku earthquake are nonexistent in the second detrended dataset owing to the removal of the exponential and logarithmic trends. The spatial and temporal smoothness of the second detrended dataset is set to the same values as those of the Tokai fault patch in the analysis with two fault patches so that we can approximately compare the results of this analysis with those of the analysis with two fault patches. Analysis of the previous Tokai slow slip In addition, we created a third detrended dataset using the same method as for the first detrended dataset but with a different period of estimation for the annual components. That is, we estimated the annual components of the EW, NS, and UD position time series separately for the period up to January 1, 2011, together with a polynomial function from the raw position time series at each station. We estimated the linear trend for the same period as for the first and second datasets and removed it from the position time series without annual components. We used 86 GPS sites in the following inversion. Using this third detrended dataset, we estimated an approximate model of the previous Tokai slow slip for the period between January 1, 2001, and January 1, 2008, by the same method as for the second detrended dataset because there are no other signals, such as those corresponding to postseismic deformation due to the 2011 Tohoku earthquake, for this period. We consider that the postseismic deformation due to the 2004 off Kii peninsula earthquakes (M w > 7) (see Fig. 1a) does not significantly affect the previous Tokai slow slip model. The estimated optimal spatial and temporal parameters are approximately 3.0 and 1.0, respectively. Results and discussion Analysis with two fault patches Although we estimated the afterslip on the Pacific plate in the Tohoku region together with the slip on the Philippine Sea plate in the Tokai region, we will not discuss it in this paper because our focus is on the Tokai slow slip. The characteristic features of the estimated afterslip on the Pacific plate in the Tohoku region are similar to the results of Ozawa et al. (2012) (Additional file 2). Analysis with one fault patch Comparison between the two models We obtained almost the same results using the two different detrended datasets. With regard to the estimated model based on the first detrended dataset, we cannot rule out the existence of a systematic error resulting from the postseismic deformation since the 2011 Tohoku earthquake. However, we believe that any such systematic error would be small considering the difference in the spatial scale of the viscoelastic relaxation effect mentioned above. 
Furthermore, because our model based on the second detrended dataset, which excludes the exponential and logarithmic time evolution corresponding to viscoelastic relaxation and afterslip, respectively, shows similar results for the location and time evolution of aseismic slip to those obtained from the first detrended dataset, we consider that slow slip is now occurring on the west boundary of the Tokai seismic gap area, without signs of decay. This finding is consistent with the strain meter observations in this region by Japan Meteorological Agency, which also indicate interplate slow slip near Lake Hamana (Miyaoka and Kimura 2016). Although the start time of the current slow slip event is unclear, we assumed that it started in early 2013 at the latest on the basis of the approximate start time of the transient in Figs. 2 and 6 and the moment release rate shown in Figs. 5b and 9b. For the second detrended data, we also changed the regression periods for the logarithmic and exponential functions to March 12, 2011–March 12, 2012, and March 12, 2011–March 12, 2014, respectively, and found that the characteristic feature of a slip area appearing that was centered on Lake Hamana was not changed. The reason why the two different detrended datasets gave similar results is that the postseismic deformation due to the 2011 Tohoku earthquake can be well explained by both the kinematic afterslip model and the logarithmic and exponential functions, which are based on the physics of the rate- and state-dependent friction law and viscoelasticity. However, it remains unclear why the kinematic afterslip model and the logarithmic and exponential functions produced similar postseismic deformation. We cannot rule out the possibility that the estimated logarithmic and exponential functions may reflect an actual process of afterslip and viscoelastic relaxation involved in the postseismic deformation due to the Tohoku earthquake. This problem remains to be solved in a future study. Comparison between the previous and current slow slips Because the previous Tokai slow slip reached a moment magnitude of over 7, we cannot rule out the possibility that the present stage may correspond to the initial stage of the time evolution of the Tokai slow slip. The estimated moment release of the current Tokai slow slip seems to have increased almost linearly since early 2013, as shown in Figs. 5b and 9b and in Fig. 12 in the long term, while the moment release rate of the previous Tokai slow slip changed in the long term. At this point, the current Tokai slow slip seems to be following a different time evolution from that in the previous event (Fig. 12). At the time of the previous Tokai slow slip, the center of the slip area was located near Lake Hamana in the early period, then moved north over time (e.g., Ozawa et al. 2001; Miyazaki et al. 2006) (see Additional file 3). Thus, there is a possibility that the ongoing slow slip will move northward over time with an increase in the slip magnitude. However, if the current aseismic slip stops in the near future, it will indicate the coexistence of relatively small slow slip events in the Tokai long-term slow slip area. Our previous Tokai slow slip model reproduces the observations well as shown in Fig. 10. Relationship with low-frequency earthquakes Non-volcanic low-frequency tremors have been discovered in the Nankai trough subduction zone in Japan (Obara 2002). 
These low-frequency tremors include low-frequency earthquakes whose hypocenters can be determined by identification of coherent S-wave and P-wave arrivals (Katsumata and Kamaya 2003). Low-frequency earthquakes occur at a depth of approximately 30–40 km on the plate interface in the Tokai region. Low-frequency tremors that contain low-frequency earthquakes occur in the Tokai region with a recurrence interval of approximately 6 months accompanied by aseismic slip, which continues for 2–3 days, and release a seismic moment corresponding to M w ~ 6 (Hirose and Obara 2006). This relationship between low-frequency tremors or earthquakes and slow slip events has also been observed in many other areas (e.g., Rogers and Dragert 2003; Shelly et al. 2006). Thus, we can expect low-frequency tremors or earthquakes with higher activity owing to the influence of the current Tokai slow slip. However, such a relationship has not been observed this time, although there was a correlation between the low-frequency earthquake activity in the Tokai region and the previous Tokai slow slip (Ishigaki et al. 2005). We consider that low-frequency earthquakes are not being activated by the current slow slip because the central part of the slow slip area is located away from the low-frequency earthquake area (Figs. 5a, 9a) and its magnitude is still small, suggesting little change in the stress in the low-frequency earthquake area. Although short-term slow slip events of M w6 in the Tokai region trigger low-frequency tremors or earthquakes (Hirose and Obara 2006), the area of induced low-frequency tremors or earthquakes overlaps with the short-term slow slip area, indicating a large change in stress in this case. The short-term slow slip with a maximum moment magnitude of M w6 in the low-frequency earthquake area (Hirose and Obara 2006) does not affect our optimal Tokai slow slip model owing to its small moment magnitude compared with the current Tokai event and the center location of our current Tokai model. Influence on the expected Tokai earthquake There is a possibility that the 2011 Tohoku earthquake and its postseismic deformation have affected the seismic activity of the Japanese archipelago. In studies on the Coulomb failure stress change (ΔCFS) due to the Tohoku earthquake, Toda et al. (2011) showed seismic excitation throughout central Japan after the Tohoku earthquake and Ishibe et al. (2015) reported changes in seismicity in the Kanto region that were correlated with ΔCFS. Furthermore, slow slip events off the Boso peninsula, east Japan, have shown a shorter recurrence interval after the Tohoku earthquake (Ozawa, 2014). Our Tohoku coseismic model (Ozawa et al. 2012) produces a ΔCFS of approximately 3 kPa near Cape Omaezaki in the Tokai seismic gap. Furthermore, the two estimated models of the current Tokai slow slip produce ΔCFS of approximately 3 kPa near Cape Omaezaki (see Fig. 1b). We assumed a rigidity of 30 GPa, Poisson’s ratio of 0.25, and a friction coefficient of 0.4 in these calculations (Harris 1998). Considering these points, we cannot rule out the possibility that the ongoing slow slip will trigger the anticipated Tokai earthquake, although the previous event did not cause the expected earthquake. Thus, it is very important to intensively monitor the ongoing Tokai slow slip using the dense GPS network in Japan. 
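The Coulomb failure stress changes quoted above (about 3 kPa near Cape Omaezaki) combine the shear and normal stress changes resolved on the receiver fault with the assumed friction coefficient of 0.4. A minimal sketch of that final combination step is shown below; the stress changes themselves must come from an elastic dislocation model (such as the coseismic and slow-slip models in the paper), so the input numbers here are placeholders, not results from the study.

```cpp
// Illustrative Coulomb failure stress change on a receiver fault:
//   dCFS = d_tau + mu' * d_sigma_n
// d_tau: shear stress change in the slip direction; d_sigma_n: normal stress
// change (positive = unclamping); mu': effective friction coefficient.
// Input values below are placeholders for demonstration only.
#include <cstdio>

double coulombStressChange(double dTau, double dSigmaN, double muEff) {
    return dTau + muEff * dSigmaN;
}

int main() {
    const double muEff   = 0.4;     // friction coefficient assumed in the paper
    const double dTau    = 2.0e3;   // Pa, hypothetical shear stress change
    const double dSigmaN = 2.5e3;   // Pa, hypothetical unclamping
    std::printf("dCFS = %.1f kPa\n",
                coulombStressChange(dTau, dSigmaN, muEff) / 1e3);  // 3.0 kPa
    return 0;
}
```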
It has been proposed that the Tokai slow slip occurs with a recurrence interval of 9–14 years, although the truth of this hypothesis is difficult to establish because of the scarcity of data (e.g., Kimata et al. 2001; Ochi 2014). Thus, our finding, obtained using the dense GPS network in Japan, confirms that the Tokai slow slip has occurred repeatedly on the west boundary of the Tokai seismic gap and changed the stress state in favor of the anticipated Tokai earthquake. Although the start time of the current event is not clear, the recurrence interval of the Tokai slow slip is approximately 12 years if we assume it to be early 2013. Our estimated models of the current Tokai slow slip indicate differences from the previous event regarding the following points. First, the magnitude of the current event is approximately 6.6, which is very small compared with the M w of over 7 for the previous event, and increasing at a roughly constant rate. Second, the center of the slip area is located south of that in the previous event. Third, the moment release rate is very small compared with that in the previous event. We cannot clearly state whether the current slow slip will become similar to the previous Tokai slow slip or whether it will be a different type of slow slip from the above points. SO performed all the data analysis, prepared the graphical material, and wrote the manuscript. MT proposed the detrending method using a function consisting of logarithmic and exponential functions and estimated the time constants of the logarithmic and exponential functions. MT and HY supervised SO and helped to improve the manuscript. All authors read and approved the final manuscript. We are grateful to our colleagues for helpful discussions. Prof. Teruyuki Kato of the Earthquake Research Institute, the University of Tokyo, advised us about detrending. Prof. Sagiya of Nagoya University provided us with helpful information. The hypocenter data of low-frequency earthquakes are from Japan Meteorological Agency. The data used in this paper are available by contacting firstname.lastname@example.org. The authors declare that they have no competing interests. - Hatanaka Y, Iizuka T, Sawada M, Yamagiwa A, Kikuta Y, Johnson JM, Rocken C (2003) Improvement of the analysis strategy of GEONET. Bull Geogr Surv Inst 49:11–37Google Scholar - Ishibashi K (1981) Specification of a soon-to-occur seismic faulting in the Tokai district central Japan based upon seismotectonics. In: Simpson DW, Richards PG (eds) Earthquake prediction. Maurice Ewing series, vol 4. American Geophysical Union, Washington, pp 297–332Google Scholar - Ishigaki Y, Katsumata A, Kamaya N, Nakamura K, Ozawa S (2005) The relation between the slow slip of plate boundary in Tokai district and low frequency earthquakes. Q J Seismol 68:81–97Google Scholar - Kimata F, Hirahara K, Fujii N, Hirose H (2001) Repeated occurrence of slow slip events on the subducting plate interface in the Tokai region, central Japan, the focal region of the anticipated Tokai earthquake (M = 8). Paper presented at the AGU Fall Meeting, San Francisco, 10–14 December 2001Google Scholar - Miyaoka K, Kimura H (2016) Detection of long-term slow slip event by strainmeters using the stacking method. Q J Seismol 79:15–23Google Scholar - Mogi K (1981) Seismicity in western Japan and long-term earthquake forecasting. In: Simpson DW, Richards PG (eds) Earthquake prediction. Maurice Ewing series, vol 4. 
American Geophysical Union, Washington, pp 43–51Google Scholar - Nakagawa H, Miyahara B, Iwashita C, Toyofuku T, Kotani K, Ishimoto M, Munekane H, Hatanaka Y (2008) New analysis strategy of GEONET. In: Proceedings of international symposium on GPS/GNSS, Tokyo, 11–14 November 2008Google Scholar - Ochi T (2014) Possible long-term SSEs in the Tokai area, central Japan, after 1981: size, duration, and recurrence interval. Paper presented at the AGU Fall Meeting, San Francisco, 15–19 December 2014Google Scholar - Tobita M, Akashi T (2015) Evaluation of forecast performance of postseismic displacements. Rep Coord Comm Earthq Predict 93:393–396 (in Japanese) Google Scholar Open AccessThis article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
<urn:uuid:0d729bbb-ace8-4304-a23d-6f3442a2d476>
2.796875
5,618
Academic Writing
Science & Tech.
42.974397
95,536,314
USGS Updates Magnitude of Japan’s 2011 Tohoku Earthquake to 9.0 Released: 3/14/2011 5:35:00 PM U.S. Department of the Interior, U.S. Geological Survey Office of Communication 119 National Center Reston, VA 20192 Harley Benz Clarice Nassif Ransom The USGS has updated the magnitude of the March 11, 2011, Tohoku earthquake in northern Honshu, Japan, to 9.0 from the previous estimate of 8.9. Independently, Japanese seismologists have also updated their estimate of the earthquake’s magnitude to 9.0. This magnitude places the earthquake as the fourth largest in the world since 1900 and the largest in Japan since modern instrumental recordings began 130 years ago. The USGS often updates an earthquake’s magnitude following the event. Updates occur as more data become available and more time-intensive analysis is performed. There are many methods of calculating the energy release and magnitude of an earthquake. Some methods give approximate values within minutes of the earthquake, and others require more complete data sets and extensive analysis. Due to inherent uncertainties in the modeling of energy and magnitude, the results from different agencies often vary slightly. These magnitude discrepancies arise from the use of different data and techniques. For more information on why magnitudes change, see the Earthquake Hazards Program FAQ website. USGS provides science for a changing world. Visit USGS.gov, and follow us on Twitter @USGS and our other social media channels. Subscribe to our news releases via e-mail, RSS or Twitter. Links and contacts within this release are valid at the time of publication. Source and/or read more: http://goo.gl/VktCm
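To put the revision from 8.9 to 9.0 in perspective, the widely used Gutenberg–Richter energy relation log10 E = 1.5 M + 4.8 (E in joules) implies that each 0.1-unit magnitude step multiplies the radiated energy by roughly 1.4. This is a standard textbook relation, not the specific method the USGS used for the update; the sketch below simply evaluates the ratio.

```cpp
// Illustrative only: radiated-energy ratio implied by the Gutenberg-Richter
// relation log10(E) = 1.5*M + 4.8 (E in joules). Not the USGS's own method.
#include <cmath>
#include <cstdio>

double radiatedEnergyJ(double magnitude) {
    return std::pow(10.0, 1.5 * magnitude + 4.8);
}

int main() {
    double e89 = radiatedEnergyJ(8.9);
    double e90 = radiatedEnergyJ(9.0);
    std::printf("E(M8.9) ~ %.2e J, E(M9.0) ~ %.2e J, ratio ~ %.2f\n",
                e89, e90, e90 / e89);   // ratio ~ 1.41
    return 0;
}
```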
<urn:uuid:f022bcf4-58a6-4433-aa18-e35e7a7617b7>
2.84375
408
News (Org.)
Science & Tech.
44.675912
95,536,320
Windows Sockets: Socket Notifications This article describes the notification functions in the socket classes. These member functions are callback functions that the framework calls to notify your socket object of important events. The notification functions include:
- OnConnect: Notifies this connecting socket that its connection attempt completed, either successfully or in error.
- OnClose: Notifies this socket that the socket it is connected to has closed.
An additional notification function is OnOutOfBandData. This notification tells the receiving socket that the sending socket has "out-of-band" data to send. Out-of-band data is a logically independent channel associated with each pair of connected stream sockets. The out-of-band channel is typically used to send "urgent" data. MFC supports out-of-band data. Advanced users working with class CAsyncSocket might need to use the out-of-band channel, but users of class CSocket are discouraged from using it. The easier way is to create a second socket for passing such data. For more information about out-of-band data, see the Windows Sockets specification, available in the Windows SDK. If you derive from class CAsyncSocket, you must override the notification functions for those network events of interest to your application. If you derive a class from class CSocket, it is your choice whether to override the notification functions of interest. You can also use CSocket itself, in which case the notification functions default to doing nothing. These functions are overridable callback functions. The socket classes convert messages to notifications, but you must implement how the notification functions respond if you wish to use them. The notification functions are called at the time your socket is notified of an event of interest, such as the presence of data to be read. MFC calls the notification functions to let you customize your socket's behavior at the time it is notified. For example, you might call Receive from your OnReceive notification function; that is, on being notified that there is data to read, you call Receive to read it. This approach is not necessary, but it is a valid scenario. As an alternative, you might use your notification function to track progress, print TRACE messages, and so on. You can take advantage of these notifications by overriding the notification functions in a derived socket class and providing an implementation. During an operation such as receiving or sending data, a CSocket object becomes synchronous. During the synchronous state, any notifications meant for other sockets are queued while the current socket waits for the notification it wants. (For example, during a Receive call, the socket wants a notification to read.) Once the socket completes its synchronous operation and becomes asynchronous again, other sockets can begin receiving the queued notifications. For a CSocket object, the OnConnect notification function is never called. For connections, you call Connect, which will return when the connection is completed (either successfully or in error). How connection notifications are handled is an MFC implementation detail. For details about each notification function, see the function under class CAsyncSocket in the MFC Reference. For source code and information about MFC samples, see MFC Samples. For more information, see:
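A minimal example of the pattern described above, deriving from CAsyncSocket and overriding the notification functions of interest, might look like the following sketch. The class name and buffer handling are invented for illustration; only OnReceive, OnClose, Receive, Close, and TRACE are MFC members or macros, and the file assumes it is compiled inside an MFC project.

```cpp
// Hedged sketch for an MFC project: a CAsyncSocket-derived class overriding
// two notification callbacks. CMyReceiver and its buffer handling are
// illustrative; OnReceive, OnClose, Receive, and Close belong to MFC.
#include <afxwin.h>    // MFC core (TRACE)
#include <afxsock.h>   // MFC sockets (CAsyncSocket)

class CMyReceiver : public CAsyncSocket
{
public:
    // Called by the framework when there is data to read on this socket.
    virtual void OnReceive(int nErrorCode)
    {
        if (nErrorCode == 0)
        {
            char buf[4096];
            int nRead = Receive(buf, sizeof(buf));
            if (nRead > 0)
            {
                TRACE("OnReceive: read %d bytes\n", nRead);
                // ... hand the bytes to application code here ...
            }
        }
        CAsyncSocket::OnReceive(nErrorCode);
    }

    // Called by the framework when the peer closes the connection.
    virtual void OnClose(int nErrorCode)
    {
        TRACE("OnClose: peer closed (error %d)\n", nErrorCode);
        Close();
        CAsyncSocket::OnClose(nErrorCode);
    }
};
```

With CSocket the same overrides are optional: because Receive blocks until data is available, an application can call it directly and leave the notification functions at their default, do-nothing implementations.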
<urn:uuid:8721ce7d-7cf6-4718-95fd-4b53d3bda47a>
2.765625
666
Documentation
Software Dev.
32.127484
95,536,342
Spring's lush green lawns and hot pink shoes contribute at least in a small way to the world's total carbon picture, say researchers at the Department of Energy's Oak Ridge National Laboratory. Indeed, the latest fashions on Fifth Avenue and fertilizers that help homeowners achieve that "barefoot" lawn have their associated carbon dioxide costs, and ORNL's Gregg Marland and Tristram West keep up with them. Their task is to track the total carbon produced worldwide and estimate how much is taken up and cycled through trees, plants, soil and goods produced from these resources. The overall goal is to determine the net impact that people and their activities have on our planet. "Energy use is embodied in everything that we use and buy," Marland said. "And just because you may not be burning the fossil fuel yourself, don't kid yourself into thinking that someone isn't burning it on your behalf." Ron Walli | ORNL
<urn:uuid:82cbaf32-d2de-4950-b305-f7dda65272c1>
2.765625
836
Content Listing
Science & Tech.
41.943034
95,536,360
Phenol is an important commodity chemical and a starting material for the production of numerous industrial chemicals and polymers, including bisphenol A and phenolic resins, among others. At present, the production of phenol depends entirely on chemical synthesis from benzene, and its annual production exceeds 8 million tons worldwide. Microbial production of phenol seems to be a non-viable process considering the high toxicity of phenol to the cell. In a paper published online in Biotechnology Journal, a Korean research team led by Distinguished Professor Sang Yup Lee of the Department of Chemical and Biomolecular Engineering at the Korea Advanced Institute of Science and Technology (KAIST) reported the successful development of an engineered Escherichia coli (E. coli) strain which can produce phenol from glucose. E. coli has been a workhorse for the biological production of various value-added compounds, such as succinic acid and 1,4-butanediol, at industrial scale. However, due to its low tolerance to phenol, E. coli was not considered a viable host strain for the biological production of phenol. Professor Lee's team, a leading research group in metabolic engineering, noted the genetic and physiological differences among E. coli strains, investigated 18 different E. coli strains with respect to phenol tolerance, and engineered all 18 strains simultaneously. If traditional genetic engineering methods had been used, this work would have taken years. To overcome this challenge, the research team used the synthetic small RNA (sRNA) technology they recently developed (Nature Biotechnology, vol 31, pp 170-174, 2013). The sRNA technology allowed the team to screen the 18 E. coli strains with respect to phenol tolerance, and the activities of the metabolic pathway and enzyme involved in the production of phenol. The research team also metabolically engineered the E. coli strains to increase carbon flux toward phenol and finally generated an engineered E. coli strain which can produce phenol from glucose. Furthermore, the team developed a biphasic extractive fermentation process to minimize the toxicity of phenol to E. coli cells. Glycerol tributyrate was found to have low toxicity to E. coli and allowed efficient extraction of phenol from the culture broth. Through biphasic fed-batch fermentation using glycerol tributyrate as an in situ extractant, the final engineered E. coli strain produced phenol at the highest titer and productivity reported to date (3.8 g/L and 0.18 g/L/h, respectively). The strategy used for the strain development and the fermentation process will serve as a framework for the metabolic engineering of microorganisms for the production of toxic chemicals from renewable resources. This work was supported by the Intelligent Synthetic Biology Center through the Global Frontier Project (2011-0031963) of the Ministry of Science, ICT & Future Planning through the National Research Foundation of Korea. Further inquiries: Dr. Sang Yup Lee. Lan Yoon | EurekAlert!
<urn:uuid:c03cd815-8089-4f05-b805-a4cd1cfcad45>
3.375
1,265
Content Listing
Science & Tech.
36.767276
95,536,361
New method for synthesizing colloidal quantum dots (published online 28 November 2016). An efficient nano-synthesis method promises high-performance solar cells. Colloidal quantum dots (CQD), whose particles are only several nanometers in size, have been an increasingly important research subject because of their highly tunable properties, which afford them a wide variety of applications as semiconductors. A recent study1, published in Nature Materials, reports a new method of synthesizing CQD that may help overcome some challenges, such as the lack of uniformity in CQD particle sizes and the inhomogeneity of the energy landscape in CQD solids. Previous synthesis methods caused random packing of the particles and consequently inconsistent energy and particle size. This led to losses in open-circuit voltage and thus prevented CQD from being fully harnessed in applications such as optoelectronic devices, which are devices operating on light and electricity, and include lasers, photo-detectors, and solar cells. Researchers were now able to modify the CQD synthesis procedure that had been poorly controlled and had once contributed to rendering CQD less efficient. The new method, which synthesizes CQD in a solution via a substitution reaction, ensures that the CQD films are fused homogeneously, and allows for an advantageously high packing density. The CQD obtained were found to exhibit reduced energy losses, and an enhanced light-to-electricity conversion efficiency of 11.28%. Ted Sargent, corresponding author of the study, and professor of nanotechnology at University of Toronto in Canada, says this can substantially enhance the performance of solar cells, and help pave the way for their large-scale production. “The work is the result of a very fruitful and longstanding collaboration with colleagues at King Abdullah University of Science and Technology (KAUST) in Saudi Arabia,” he adds. - Liu, M. et al. Hybrid organic–inorganic inks flatten the energy landscape in colloidal quantum dot solids. Nat. Mater. http://dx.doi.org/10.1038/nmat4800 (2016).
<urn:uuid:3dbb3c59-5876-4897-a513-82d2c85826af>
2.796875
455
News Article
Science & Tech.
32.234375
95,536,383
Life has always played by its own set of molecular rules. From the biochemistry behind the first cells, evolution has constructed wonders like hard bone, rough bark and plant enzymes that harvest light to make food. But our tools for manipulating life — to treat disease, repair damaged tissue and replace lost limbs — come from the nonliving realm: metals, plastics and the like. Though these save and preserve lives, our synthetic treatments are rooted in a chemical language ill-suited to our organic elegance. Implanted electrodes scar, wires overheat and our bodies struggle against ill-fitting pumps, pipes or valves. A solution lies in bridging this gap where artificial meets biological — harnessing biological rules to exchange information between the biochemistry of our bodies and the chemistry of our devices. In a paper published Sept. 22 in Scientific Reports, engineers at the University of Washington unveil peptides — small proteins which carry out countless essential tasks in our cells — that can provide just such a link. The team, led by UW professor Mehmet Sarikaya in the Departments of Materials Science & Engineering, shows how a genetically engineered peptide can assemble into nanowires atop 2-D, solid surfaces that are just a single layer of atoms thick. These nanowire assemblages are critical because the peptides relay information across the bio/nano interface through molecular recognition — the same principles that underlie biochemical interactions such as an antibody binding to its specific antigen or protein binding to DNA. Since this communication is two-way, with peptides understanding the "language" of technology and vice versa, their approach essentially enables a coherent bioelectronic interface. "Bridging this divide would be the key to building the genetically engineered biomolecular solid-state devices of the future," said Sarikaya, who is also a professor of chemical engineering and oral health sciences. His team in the UW Genetically Engineered Materials Science and Engineering Center studies how to coopt the chemistry of life to synthesize materials with technologically significant physical, electronic and photonic properties. To Sarikaya, the biochemical "language" of life is a logical emulation. "Nature must constantly make materials to do many of the same tasks we seek," he said. The UW team wants to find genetically engineered peptides with specific chemical and structural properties. They sought out a peptide that could interact with materials such as gold, titanium and even a mineral in bone and teeth. These could all form the basis of future biomedical and electro-optical devices. Their ideal peptide should also change the physical properties of synthetic materials and respond to that change. That way, it would transmit "information" from the synthetic material to other biomolecules — bridging the chemical divide between biology and technology. In exploring the properties of 80 genetically selected peptides — which are not found in nature but have the same chemical components of all proteins — they discovered that one, GrBP5, showed promising interactions with the semimetal graphene. They then tested GrBP5's interactions with several 2-D nanomaterials which, Sarikaya said, "could serve as the metals or semiconductors of the future." "We needed to know the specific molecular interactions between this peptide and these inorganic solid surfaces," he added. Their experiments revealed that GrBP5 spontaneously organized into ordered nanowire patterns on graphene. 
With a few mutations, GrBP5 also altered the electrical conductivity of a graphene-based device, the first step toward transmitting electrical information from graphene to cells via peptides. In parallel, Sarikaya's team modified GrBP5 to produce similar results on a semiconductor material — molybdenum disulfide — by converting a chemical signal to an optical signal. They also computationally predicted how different arrangements of GrBP5 nanowires would affect the electrical conduction or optical signal of each material, showing additional potential within GrBP5's physical properties. "In a way, we're at the flood gates," said Sarikaya. "Now we need to explore the basic properties of this bridge and how we can modify it to permit the flow of 'information' from electronic and photonic devices to biological systems." This is the focus of a new endeavor funded by the National Science Foundation's Materials Genome Initiative. It will be led by Sarikaya and joined by UW professors Xiaodong Xu, René Overney and Valerie Daggett. Through UW's CoMotion, he is also working with Amazon to cross that bio/nano divide for nano-sensors to detect early stages of pancreatic cancer. Lead author on the paper is former UW postdoctoral researcher Yuhei Hayamizu, who is now an associate professor at the Tokyo Institute of Technology. Co-authors include two former UW researchers — Christopher So, now with the Naval Research Laboratory, and Sefa Dag, now with IBM — as well as graduate students Tamon Page and David Starkebaum. The research was funded by the NSF, the UW, the National Institutes of Health and the Japan Science and Technology Agency. For more information, contact Sarikaya at 206-543-0724 or firstname.lastname@example.org. Grant numbers: DMR-0520567, T32CA138312, 25706012, DMR-1629071 Public Information Specialist and Science Writer James Urton | newswise
<urn:uuid:39f5388a-308d-4972-b275-450b9ccaa7f1>
3.171875
1,741
Content Listing
Science & Tech.
32.135379
95,536,389
Each metric prefix has a unique symbol that is prepended to the unit symbol, and each prefix name has a symbol that is used in combination with the symbols for units of measure, as in 'km', 'kg', and 'kW', the SI symbols for kilometre, kilogram, and kilowatt, respectively. Prefixes are added to unit names to produce multiples and submultiples of the original unit. Metric prefixes have even been prepended to non-metric units. Prefixes adopted before 1960 already existed before SI. Where Greek letters are unavailable, the symbol for micro, 'µ', is commonly replaced by 'u'. Prefixes corresponding to an integer power of one thousand are generally preferred. However, some modern building codes require that the millimetre be used in preference to the centimetre, because "use of centimetres leads to extensive usage of decimal points and confusion". Prefixes may not be used in combination. In the arithmetic of measurements having units, the units are treated as multiplicative factors to values; if they have prefixes, all but one of the prefixes must be expanded to their numeric multiplier, except when combining values with identical units. The use of prefixes can be traced back to the introduction of the metric system in the 1790s, long before the 1960 introduction of the SI. Metric prefixes may also be used with non-metric units. The choice of prefixes with a given unit is usually dictated by convenience of use, and unit prefixes for amounts that are much larger or smaller than those actually encountered are seldom used. Megagram is occasionally used to disambiguate the metric tonne from the various non-metric tons. Alone among SI units, the base unit of mass, the kilogram, already includes a prefix. For volume in scientific work, the cubic metre is usually used. For length, the kilometre, metre, centimetre, millimetre, and smaller are common; the decimetre, however, is rarely used, and at large scales the megametre, gigametre, and larger are rarely used. The second, millisecond, microsecond, and shorter are common; the kilosecond and megasecond also have some use, though for these and longer times one usually uses either scientific notation or minutes, hours, and so on. The BIPM's position on the use of SI prefixes with units of time larger than the second is the same as that of the NIST, but its position with regard to angles differs: it states, "However astronomers use milliarcsecond, which they denote mas, and microarcsecond, µas, which they use as units for measuring very small angles." Prefixes may also be used with the unit symbol °C and the unit name 'degree Celsius', and there are gram calories and kilogram calories.
However, words in common use outside the scientific community may follow idiosyncratic stress rules. Some of the prefixes formerly used in the metric system have fallen into disuse and were not adopted into the SI; these were disallowed with the introduction of the SI. Although the Consultative Committee for Units considered the proposal, it was rejected. In the United States, the abbreviations 'cc' or 'ccm' are used for cubic centimetres, 'MCM' still remains in wide use, and 'MMbbl' is the symbol for 'millions of barrels'. MKS stands for "metre, kilogram, second". Metrologists carefully distinguish between the definition of a unit and its realisation; the speed of light, for example, is now an exactly defined constant of nature, and both the old and new definitions of the candela correspond approximately to the luminous intensity of a whale-blubber candle burning modestly bright. Some units are deeply embedded in history and culture.
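As a small illustration of the preference for prefixes that are integer powers of one thousand, the sketch below formats a value in a base unit using an "engineering" prefix, and follows the convention noted above of writing 'u' where 'µ' is unavailable. The helper function is only a demonstration, not part of any standard or of the text above.

```cpp
// Demonstration: choose an SI prefix that is an integer power of 1000
// (engineering notation). Illustrative helper, not part of any standard.
#include <cmath>
#include <cstdio>
#include <string>

std::string withEngineeringPrefix(double value, const std::string& unit) {
    static const char* prefixes[] = {"p", "n", "u", "m", "", "k", "M", "G", "T"};
    int idx = 4;                      // index of the empty prefix (10^0)
    double v = value;
    while (std::fabs(v) >= 1000.0 && idx < 8) { v /= 1000.0; ++idx; }
    while (std::fabs(v) > 0.0 && std::fabs(v) < 1.0 && idx > 0) { v *= 1000.0; --idx; }
    char buf[64];
    std::snprintf(buf, sizeof(buf), "%g %s%s", v, prefixes[idx], unit.c_str());
    return buf;
}

int main() {
    std::printf("%s\n", withEngineeringPrefix(0.000047, "m").c_str());  // "47 um"
    std::printf("%s\n", withEngineeringPrefix(56000.0, "W").c_str());   // "56 kW"
    return 0;
}
```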
<urn:uuid:411f25de-907f-47dd-8869-353b2919932f>
3.921875
1,035
Knowledge Article
Science & Tech.
32.219685
95,536,400
This paper aims to investigate different methods for the detection of underground concrete structures, using as a case study an area of abandoned military bunkers. Underground structures can affect their surrounding landscapes in different ways, such as altering the moisture capacity of the soil, its composition, and the vegetation vigor. The latter is often observed on the ground as a crop mark, a phenomenon which can be used as a proxy to denote the presence of underground, non-visible structures. A number of vegetation indices, such as the Normalized Difference Vegetation Index (NDVI), Simple Ratio (SR) and Enhanced Vegetation Index (EVI), were utilized for the development of a vegetation index-based procedure aiming at the detection of underground military structures using existing vegetation indices or other in-band algorithms. One of the techniques examined is C-band Synthetic Aperture Radar (SAR), which provides information on vegetation height based on the analysis of the difference between areas above buried structures and reference areas. George Melillos, Kyriacos Themistocleous, George Papadavid, Athos Agapiou, Demetris Kouhartsiouk, Maria Prodromou, Silas Michaelides, and Diofantos G. Hadjimitsis, "Using field spectroscopy combined with synthetic aperture radar (SAR) technique for detecting underground structures for defense and security applications in Cyprus," Proc. SPIE 10182, Detection and Sensing of Mines, Explosive Objects, and Obscured Targets XXII, 1018206 (Presented at SPIE Defense + Security: April 10, 2017; Published: 3 May 2017); https://doi.org/10.1117/12.2262279.
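The vegetation indices named in the abstract are simple per-pixel band combinations of reflectance. The sketch below shows the usual formulas, with the commonly published MODIS-style coefficients for EVI; the reflectance values are hypothetical, and neither the coefficients nor the numbers are taken from this particular study.

```cpp
// Common vegetation indices used as crop-mark proxies. Reflectances are
// assumed to be in [0, 1]; EVI uses the widely published MODIS-style
// coefficients (G=2.5, C1=6, C2=7.5, L=1), not values from this paper.
#include <cstdio>

double ndvi(double nir, double red)        { return (nir - red) / (nir + red); }
double simpleRatio(double nir, double red) { return nir / red; }
double evi(double nir, double red, double blue) {
    return 2.5 * (nir - red) / (nir + 6.0 * red - 7.5 * blue + 1.0);
}

int main() {
    // Hypothetical reflectances for vigorous vegetation.
    double nir = 0.45, red = 0.08, blue = 0.04;
    std::printf("NDVI = %.3f, SR = %.2f, EVI = %.3f\n",
                ndvi(nir, red), simpleRatio(nir, red), evi(nir, red, blue));
    return 0;
}
```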
<urn:uuid:adc702a0-5f06-476f-af79-f7bdf79ea40c>
2.84375
426
Academic Writing
Science & Tech.
16.424144
95,536,438
Their research, published this month in the Proceedings of the National Academy of Sciences, sheds surprising light on the subject of extinction rates of species on islands. The paper, "Species Invasions and Extinction: The Future of Native Biodiversity on Islands," is one in a series of reports by this team studying how humans have altered the ecosystems of the planet. Gaines and Sax started the project with a question: What effect are humans really having on biological diversity? "The presumption at the time was that we are driving biodiversity to lower levels," said Gaines, who directs UCSB's Marine Science Institute. "Certainly, if you think about it at the global level, this is true because humans have done a lot of things that have driven species extinct." However, when studied on the smaller scale of islands, the findings showed something completely different. Diversity is on the rise – markedly so in some instances. Diversity has gone up so dramatically that it might cause some to wonder if the health of the ecosystems might not be better because the number of species is twice as high as it used to be. But it's not that simple, Gaines said. "What Dov and I worked on a few years ago is the fact that the vast majority of introductions (of species) don't have large negative effects," Gaines said. "Indeed, most species that get introduced don't have much effect at all. It doesn't mean that they're not altering the ecosystem, but they're not driving things extinct like some of the big poster-child stories we've been hearing about." Still, the study showed that human colonization has had a massive impact on ecosystems of islands, with the introduction of new, exotic plants and animals. In New Zealand, for example, there were about 2,000 native species of plants. Since colonization, about 2,000 new plant species have become naturalized. Over the same period, there have been few plant extinctions, so the net effect is that humans have transformed New Zealand's landscape by bringing in so many new species. Sax, a former postdoctoral researcher at UCSB who is now assistant professor of ecology and evolutionary biology at Brown University, did much of the fact-finding for this report by painstakingly digging through data that had been collected over hundreds of years on islands around the world. "This is Dov's specialty," Gaines said. "Finding really old data sets that are very interesting." "The dramatic increase in the number of species has changed how the system functions," Sax said. "Changing the abundance of natives versus exotics affects all of the other species that used to depend on the natives for food or shelter. So, it's not in any way to say that increasing biodiversity is a good thing." With birds, it's a different story. The number of bird species on islands today is almost exactly the same as it was prior to human colonization, but the species of birds on the islands are very different. About 40 percent of the species of birds that you find on islands today are introduced species, Sax said, which means that a comparable number of birds has gone extinct. "In the case of birds," he said, "lots of extinctions, no change in total biodiversity." All of this caused Gaines and Sax to ask new questions: Are the islands undersaturated? Can you still keep throwing species in there, with the result that nothing is going to happen? Are they now oversaturated? Are there limits in how many species an ecosystem can hold? Are we building an extinction debt? 
"Which means," Gaines said, "that by going in and mucking up the system, we may have already created the setting where too many species have been packed in, and we just haven't waited long enough to see these extinctions start to happen. "The whole point of this study was to start looking down the path to see which of these wildly different scenarios might be right," Gaines added. "We haven't nailed the answer yet, but we've set the stage for answering whether islands are now saturated or not." What made the research possible was that many of the explorers who colonized the islands included naturalists on their boats. From the time they landed on the islands, the naturalists were busy cataloging and documenting the plants and animals of each colony. "It was very surprising to find such a strong correlation between the number of native and exotic plant species on islands around the world," Sax said. "In ecological research, a 'strong' correlation often explains 50 percent of the variation. Here, the correlation between native and exotics explains almost 100 percent of the variation. In other words, if you know how many native plants are on an oceanic island then you can predict almost perfectly how many exotic plants are there." The study, which took a year and a half, included islands such as Lord Howe Island east of Australia and Tristan da Cunha, a group of remote volcanic islands in the south Atlantic Ocean, among others. "These were all oceanic islands," Gaines said, "which means islands that are far enough away from a continent that they're not getting regular exchanges with the mainland." George Foulsham | EurekAlert! Upcycling of PET Bottles: New Ideas for Resource Cycles in Germany 25.06.2018 | Fraunhofer-Institut für Betriebsfestigkeit und Systemzuverlässigkeit LBF Dry landscapes can increase disease transmission 20.06.2018 | Forschungsverbund Berlin e.V. A new manufacturing technique uses a process similar to newspaper printing to form smoother and more flexible metals for making ultrafast electronic devices. The low-cost process, developed by Purdue University researchers, combines tools already used in industry for manufacturing metals on a large scale, but uses... For the first time ever, scientists have determined the cosmic origin of highest-energy neutrinos. A research group led by IceCube scientist Elisa Resconi, spokesperson of the Collaborative Research Center SFB1258 at the Technical University of Munich (TUM), provides an important piece of evidence that the particles detected by the IceCube neutrino telescope at the South Pole originate from a galaxy four billion light-years away from Earth. To rule out other origins with certainty, the team led by neutrino physicist Elisa Resconi from the Technical University of Munich and multi-wavelength... For the first time a team of researchers have discovered two different phases of magnetic skyrmions in a single material. Physicists of the Technical Universities of Munich and Dresden and the University of Cologne can now better study and understand the properties of these magnetic structures, which are important for both basic research and applications. Whirlpools are an everyday experience in a bath tub: When the water is drained a circular vortex is formed. Typically, such whirls are rather stable. Similar... Physicists working with Roland Wester at the University of Innsbruck have investigated if and how chemical reactions can be influenced by targeted vibrational excitation of the reactants. 
They were able to demonstrate that excitation with a laser beam does not affect the efficiency of a chemical exchange reaction and that the excited molecular group acts only as a spectator in the reaction. A frequently used reaction in organic chemistry is nucleophilic substitution. It plays, for example, an important role in in the synthesis of new chemical... Optical spectroscopy allows investigating the energy structure and dynamic properties of complex quantum systems. Researchers from the University of Würzburg present two new approaches of coherent two-dimensional spectroscopy. "Put an excitation into the system and observe how it evolves." According to physicist Professor Tobias Brixner, this is the credo of optical spectroscopy.... 13.07.2018 | Event News 12.07.2018 | Event News 03.07.2018 | Event News 23.07.2018 | Health and Medicine 23.07.2018 | Earth Sciences 23.07.2018 | Science Education
<urn:uuid:3fc0591e-c91d-4389-9af4-341448fb4746>
3.828125
1,671
Content Listing
Science & Tech.
46.849929
95,536,447
The team who discovered gravitational waves – ripples in space and time – have been awarded the Nobel Prize in Physics. The waves were predicted by Einstein in his General Theory of Relativity, but were not recorded for more than 100 years. They were finally spotted in 2016 by the Laser Interferometer Gravitational-Wave Observatory (Ligo) in a discovery that was hailed as “the biggest scientific breakthrough” of the century by scientists. Today the Nobel Prize was awarded to Rainer Weiss of the Massachusetts Institute of Technology and Barry Barish and Kip Thorne of the California Institute of Technology, all members of the Ligo team. “Their discovery shook the world,” said Goran Hansson, the head of the Royal Swedish Academy of Sciences, who announced the prize today. But British physicist Ronald Drever, who was a key member of the team, missed out on science's most coveted prize, after losing his battle with dementia earlier this year. Professor James Hough, of the University of Glasgow's School of Physics and Astronomy, added: “Back in the 1970s, working with Ron Drever we built one of the world's first gravitational wave detectors, instrumented with piezoelectric transducers, before moving on to designing increasingly sophisticated technology. “I'm proud to have played a leading role in the conception and expansion of gravitational research at the University of Glasgow, and that the efforts of Glasgow researchers over the decades paid off both with the development of LIGO's mirror-suspension technology, and now exciting roles in delivering the astrophysical results from the brand new field.” In 1915 Einstein announced his General Theory of Relativity, which suggested that huge bodies in space, like planets or black holes, have so much mass that they actually bend space and time. Space-time can be thought of as a giant rubber sheet with a bowling ball in the centre. Just as the ball warps the sheet, so a planet bends the fabric of space-time, creating the force that we feel as gravity. Any object that comes near to the body falls towards it because of the effect. Einstein predicted that if two massive bodies came together it would create such a huge ripple in space-time that it should be detectable on Earth. In February last year, Ligo announced they had detected such a ripple, believed to be caused by two black holes colliding. The experiment has since been confirmed and replicated by other observatories. Professor Mark Hannam, from Cardiff University's School of Physics and Astronomy, said: “LIGO has already given us a string of discoveries – the first direct detection of gravitational waves, the first observation of a binary black hole system, the first observation of black holes several tens of times more massive than the sun, and arguably the first direct observation of a black hole. But the truly incredible achievement was to make the LIGO detectors. “We already knew gravitational waves existed. We already knew black holes existed. What Weiss, Thorne and Barish did was to build the first machine sensitive enough to be able to directly measure gravitational waves. “It took them over forty years, and the result was the most sensitive measuring device ever made. It is an incredible new tool that has only begun to transform our understanding of the universe.
“It’s very sad that one of the UK’s gravitational-wave pioneers, Ron Drever, didn’t live to see this fantastic recognition of these ground-breaking discoveries. “However, he was alive to witness the first ever detection of gravitational waves in 2015, and I understand he was extremely thrilled.” German-born Weiss was awarded half of the 9-million-kronor ($1.1 million) prize amount and Thorne and Barish will split the other half. Prof Lord Martin Rees, Astronomer Royal & Fellow of Trinity College, University of Cambridge, said: “The Nobel committee has apportioned credit appropriately among three leaders of the LIGO project – outstanding individuals whose contributions were distinctive and complementary. “Weiss deserves prime credit for devising (along with the late Ron Drever) the amazingly precise laser techniques. Moreover he was ‘hands on’ and saw the project through to its success.”
<urn:uuid:19490a48-7656-4314-a83a-44b7f97ce398>
3.765625
1,008
News Article
Science & Tech.
34.367811
95,536,452
Cloud Feedback Cloud feedback is the term used to encompass effects of changes in cloud and their associated radiative properties on a change of climate, and has been identified as a major source of uncertainty in climate models (Cubasch & Cess, 1990). This feedback mechanism incorporates both changes in cloud distribution (both horizontal and vertical) and changes in cloud radiative properties (cloud optical depth and cloud droplet distribution) (Wigley, 1989; Charlson et al., 1987); these are not mutually independent. Although clouds contribute to the greenhouse warming of the climate system by absorbing more outgoing infrared radiation (positive feedback), they also produce a cooling through the reflection and reduction in absorption of solar radiation (negative feedback) (Cubasch & Cess, 1990). It is generally assumed that low clouds become more reflective as temperatures increase, thereby introducing a negative feedback, whilst the feedback from high clouds depends upon their height and coverage and could be of either sign (Gates et al., 1992).
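To make the competing signs concrete, the toy sketch below (an invented illustration of the bookkeeping only, not a calculation from the passage) treats the net cloud radiative effect as a longwave warming term minus a shortwave cooling term; the feedback contributed by a change in cloud is positive or negative depending on which term changes more.

```python
# Toy bookkeeping for cloud radiative effect (CRE), in W/m^2.
# Positive = net warming (trapped outgoing longwave), negative = net cooling
# (reflected incoming shortwave). All numbers are invented for illustration.
def net_cre(lw_trapping: float, sw_reflection: float) -> float:
    """Net cloud radiative effect = longwave warming minus shortwave cooling."""
    return lw_trapping - sw_reflection

high_thin_cloud  = net_cre(lw_trapping=30.0, sw_reflection=10.0)  # > 0: net warming
low_bright_cloud = net_cre(lw_trapping=10.0, sw_reflection=45.0)  # < 0: net cooling
print(high_thin_cloud, low_bright_cloud)
```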
<urn:uuid:cabd95d0-047f-43aa-8756-22555535a64e>
2.96875
207
Knowledge Article
Science & Tech.
36.532009
95,536,455
Nuclear power is the use of nuclear reactions that release nuclear energy to generate heat, which most frequently is then used in steam turbines to produce electricity in a nuclear power plant. Nuclear power can be obtained from nuclear fission, nuclear decay and nuclear fusion. Presently, the vast majority of nuclear electricity is produced by nuclear fission of elements in the actinide series of the periodic table. Nuclear decay processes are used in niche applications such as radioisotope thermoelectric generators. The possibility of generating electricity from nuclear fusion is still at a research phase with no commercial applications. This article mostly deals with nuclear fission power for electricity generation. Nuclear power is one of the leading low-carbon methods of producing electricity. In terms of total life-cycle greenhouse gas emissions per unit of energy generated, nuclear power has emission values comparable to or lower than those of renewable energy. From the beginning of its commercialization in the 1970s, nuclear power prevented about 1.84 million air pollution-related deaths and the emission of about 64 billion tonnes of carbon dioxide equivalent that would have otherwise resulted from the burning of fossil fuels in thermal power stations. There is a social debate about nuclear power. Proponents, such as the World Nuclear Association and Environmentalists for Nuclear Energy, contend that nuclear power is a safe, sustainable energy source that reduces carbon emissions. Opponents, such as Greenpeace International and NIRS, contend that nuclear power poses many threats to people and the environment. Far-reaching fission power reactor accidents, or accidents that resulted in medium to long-lived fission product contamination of inhabited areas, have occurred in Generation I and II reactor designs. These include the Chernobyl disaster in 1986, the Fukushima Daiichi nuclear disaster in 2011, and the more contained Three Mile Island accident in 1979. There have also been some nuclear submarine accidents. In terms of lives lost per unit of energy generated, analysis has determined that fission-electric reactors have caused fewer fatalities than the other major sources of energy generation. Energy production from coal, petroleum, natural gas and hydroelectricity has caused a greater number of fatalities per unit of energy generated due to air pollution and energy accident effects. As of 2017, the International Atomic Energy Agency states that 60 reactors, mostly of Generation III reactor design, are under construction around the world, with the majority in Asia. Collaboration on research & development towards greater passive nuclear safety, efficiency and recycling of spent fuel in future Generation IV reactors presently includes Euratom and the co-operation of more than 10 permanent countries globally. History In 1932 physicist Ernest Rutherford discovered that when lithium atoms were "split" by protons from a proton accelerator, immense amounts of energy were released in accordance with the principle of mass–energy equivalence.
However, he and other nuclear physics pioneers Niels Bohr and Albert Einstein believed harnessing the power of the atom for practical purposes anytime in the near future was unlikely, with Rutherford labeling such expectations "moonshine." The same year, his doctoral student James Chadwick discovered the neutron, which was immediately recognized as a potential tool for nuclear experimentation because of its lack of an electric charge. Experimentation with bombardment of materials with neutrons led Frédéric and Irène Joliot-Curie to discover induced radioactivity in 1934, which allowed the creation of radium-like elements at a much lower price than natural radium. Further work by Enrico Fermi in the 1930s focused on using slow neutrons to increase the effectiveness of induced radioactivity. Experiments bombarding uranium with neutrons led Fermi to believe he had created a new, transuranic element, which was dubbed hesperium. But in 1938, German chemists Otto Hahn and Fritz Strassmann, along with Austrian physicist Lise Meitner and Meitner's nephew, Otto Robert Frisch, conducted experiments with the products of neutron-bombarded uranium, as a means of further investigating Fermi's claims. They determined that the relatively tiny neutron split the nucleus of the massive uranium atoms into two roughly equal pieces, contradicting Fermi. This was an extremely surprising result: all other forms of nuclear decay involved only small changes to the mass of the nucleus, whereas this process—dubbed "fission" as a reference to biology—involved a complete rupture of the nucleus. Numerous scientists, including Leó Szilárd, who was one of the first, recognized that if fission reactions released additional neutrons, a self-sustaining nuclear chain reaction could result. Once this was experimentally confirmed and announced by Frédéric Joliot-Curie in 1939, scientists in many countries (including the United States, the United Kingdom, France, Germany, and the Soviet Union) petitioned their governments for support of nuclear fission research, just on the cusp of World War II, for the development of a nuclear weapon. First nuclear reactor In the United States, where Fermi and Szilárd had both emigrated, this led to the creation of the first man-made reactor, known as Chicago Pile-1, which achieved criticality on December 2, 1942. This work became part of the Manhattan Project, a massive secret US government military project to make enriched uranium and to build large reactors to breed plutonium for use in the first nuclear weapons. The US tested atom bombs and eventually these weapons were used to attack the cities of Hiroshima and Nagasaki. In 1945, the first widely distributed account of nuclear energy, in the form of the pocketbook The Atomic Age, discussed the peaceful future uses of nuclear energy and depicted a future where fossil fuels would go unused. Nobel laureate Glenn Seaborg, who later chaired the Atomic Energy Commission, is quoted as saying "there will be nuclear powered earth-to-moon shuttles, nuclear powered artificial hearts, plutonium heated swimming pools for SCUBA divers, and much more". The United Kingdom, Canada, and the USSR proceeded to research and develop nuclear industries over the course of the late 1940s and early 1950s. Electricity was generated for the first time by a nuclear reactor on December 20, 1951, at the EBR-I experimental station near Arco, Idaho, which initially produced about 100 kW.
Work was also strongly researched in the US on nuclear marine propulsion, with a test reactor being developed by 1953 (eventually, the USS Nautilus, the first nuclear-powered submarine, would launch in 1955). In 1953, US President Dwight Eisenhower gave his "Atoms for Peace" speech at the United Nations, emphasizing the need to develop "peaceful" uses of nuclear power quickly. This was followed by the 1954 Amendments to the Atomic Energy Act which allowed rapid declassification of US reactor technology and encouraged development by the private sector. On June 27, 1954, the USSR's Obninsk Nuclear Power Plant became the world's first nuclear power plant to generate electricity for a power grid, and produced around 5 megawatts of electric power. Later in 1954, Lewis Strauss, then chairman of the United States Atomic Energy Commission (U.S. AEC, forerunner of the U.S. Nuclear Regulatory Commission and the United States Department of Energy) spoke of electricity in the future being "too cheap to meter". Strauss was very likely referring to hydrogen fusion —which was secretly being developed as part of Project Sherwood at the time—but Strauss's statement was interpreted as a promise of very cheap energy from nuclear fission. The U.S. AEC itself had issued far more realistic testimony regarding nuclear fission to the U.S. Congress only months before, projecting that "costs can be brought down... [to]... about the same as the cost of electricity from conventional sources..." In 1955 the United Nations' "First Geneva Conference", then the world's largest gathering of scientists and engineers, met to explore the technology. In 1957 EURATOM was launched alongside the European Economic Community (the latter is now the European Union). The same year also saw the launch of the International Atomic Energy Agency (IAEA). The world's first commercial nuclear power station, Calder Hall at Windscale, England, was opened in 1956 with an initial capacity of 50 MW (later 200 MW). The first commercial nuclear generator to become operational in the United States was the Shippingport Reactor (Pennsylvania, December 1957). One of the first organizations to develop nuclear power was the U.S. Navy, for the purpose of propelling submarines and aircraft carriers. The first nuclear-powered submarine, USS Nautilus, was put to sea in December 1954. As of 2016, the U.S. Navy submarine fleet is made up entirely of nuclear-powered vessels, with 75 submarines in service. Two U.S. nuclear submarines, USS Scorpion and USS Thresher, have been lost at sea. The Russian Navy is currently (2016) estimated to have 61 nuclear submarines in service; eight Soviet and Russian nuclear submarines have been lost at sea. This includes the Soviet submarine K-19 reactor accident in 1961 which resulted in 8 deaths and more than 30 other people were over-exposed to radiation. The Soviet submarine K-27 reactor accident in 1968 resulted in 9 fatalities and 83 other injuries. Moreover, Soviet submarine K-429 sank twice, but was raised after each incident. Several serious nuclear and radiation accidents have involved nuclear submarine mishaps. The U.S. Army also had a nuclear power program, beginning in 1954. The SM-1 Nuclear Power Plant, at Fort Belvoir, Virginia, was the first power reactor in the U.S. to supply electrical energy to a commercial grid (VEPCO), in April 1957, before Shippingport. The SL-1 was a U.S. Army experimental nuclear power reactor at the National Reactor Testing Station in eastern Idaho. 
It underwent a steam explosion and meltdown in January 1961, which killed its three operators. In the Soviet Union at The Mayak Production Association facility there were a number of accidents, including an explosion, that released 50–100 tonnes of high-level radioactive waste, contaminating a huge territory in the eastern Urals and causing numerous deaths and injuries. The Soviet government kept this accident secret for about 30 years. The event was eventually rated at 6 on the seven-level INES scale (third in severity only to the disasters at Chernobyl and Fukushima). Installed nuclear capacity initially rose relatively quickly, rising from less than 1 gigawatt (GW) in 1960 to 100 GW in the late 1970s, and 300 GW in the late 1980s. Since the late 1980s worldwide capacity has risen much more slowly, reaching 366 GW in 2005. Between around 1970 and 1990, more than 50 GW of capacity was under construction (peaking at over 150 GW in the late 1970s and early 1980s) — in 2005, around 25 GW of new capacity was planned. More than two-thirds of all nuclear plants ordered after January 1970 were eventually cancelled. A total of 63 nuclear units were canceled in the USA between 1975 and 1980. During the 1970s and 1980s rising economic costs (related to extended construction times largely due to regulatory changes and pressure-group litigation) and falling fossil fuel prices made nuclear power plants then under construction less attractive. In the 1980s (U.S.) and 1990s (Europe), flat load growth and electricity liberalization also made the addition of large new baseload capacity unattractive. The 1973 oil crisis had a significant effect on countries, such as France and Japan, which had relied more heavily on oil for electric generation (39% and 73% respectively) to invest in nuclear power. Some local opposition to nuclear power emerged in the early 1960s, and in the late 1960s some members of the scientific community began to express their concerns. These concerns related to nuclear accidents, nuclear proliferation, high cost of nuclear power plants, nuclear terrorism and radioactive waste disposal. In the early 1970s, there were large protests about a proposed nuclear power plant in Wyhl, Germany. The project was cancelled in 1975 and anti-nuclear success at Wyhl inspired opposition to nuclear power in other parts of Europe and North America. By the mid-1970s anti-nuclear activism had moved beyond local protests and politics to gain a wider appeal and influence, and nuclear power became an issue of major public protest. Although it lacked a single co-ordinating organization, and did not have uniform goals, the movement's efforts gained a great deal of attention. In some countries, the nuclear power conflict "reached an intensity unprecedented in the history of technology controversies". In France, between 1975 and 1977, some 175,000 people protested against nuclear power in ten demonstrations. In West Germany, between February 1975 and April 1979, some 280,000 people were involved in seven demonstrations at nuclear sites. Several site occupations were also attempted. In the aftermath of the Three Mile Island accident in 1979, some 120,000 people attended a demonstration against nuclear power in Bonn. In May 1979, an estimated 70,000 people, including then governor of California Jerry Brown, attended a march and rally against nuclear power in Washington, D.C. Anti-nuclear power groups emerged in every country that has had a nuclear power programme. 
Three Mile Island and Chernobyl Health and safety concerns, the 1979 accident at Three Mile Island, and the 1986 Chernobyl disaster played a part in stopping new plant construction in many countries, although the public policy organization, the Brookings Institution states that new nuclear units, at the time of publishing in 2006, had not been built in the U.S. because of soft demand for electricity, and cost overruns on nuclear plants due to regulatory issues and construction delays. By the end of the 1970s it became clear that nuclear power would not grow nearly as dramatically as once believed. Eventually, more than 120 reactor orders in the U.S. were ultimately cancelled and the construction of new reactors ground to a halt. A cover story in the February 11, 1985, issue of Forbes magazine commented on the overall failure of the U.S. nuclear power program, saying it "ranks as the largest managerial disaster in business history". Unlike the Three Mile Island accident, the much more serious Chernobyl accident did not increase regulations affecting Western reactors since the Chernobyl reactors were of the problematic RBMK design only used in the Soviet Union, for example lacking "robust" containment buildings. Many of these RBMK reactors are still in use today. However, changes were made in both the reactors themselves (use of a safer enrichment of uranium) and in the control system (prevention of disabling safety systems), amongst other things, to reduce the possibility of a duplicate accident. An international organization to promote safety awareness and professional development on operators in nuclear facilities was created: WANO; World Association of Nuclear Operators. Opposition in Ireland and Poland prevented nuclear programs there, while Austria (1978), Sweden (1980) and Italy (1987) (influenced by Chernobyl) voted in referendums to oppose or phase out nuclear power. In July 2009, the Italian Parliament passed a law that cancelled the results of an earlier referendum and allowed the immediate start of the Italian nuclear program. After the Fukushima Daiichi nuclear disaster a one-year moratorium was placed on nuclear power development, followed by a referendum in which over 94% of voters (turnout 57%) rejected plans for new nuclear power. Since about 2001 the term nuclear renaissance has been used to refer to a possible nuclear power industry revival, driven by rising fossil fuel prices and new concerns about meeting greenhouse gas emission limits. Since commercial nuclear energy began in the mid-1950s, 2008 was the first year that no new nuclear power plant was connected to the grid, although two were connected in 2009. Fukushima Daiichi Nuclear Disaster Japan's 2011 Fukushima Daiichi nuclear accident prompted a re-examination of nuclear safety and nuclear energy policy in many countries and raised questions among some commentators over the future of the renaissance. Germany plans to close all its reactors by 2022, and Italy has re-affirmed its ban on electric utilities generating, but not importing, fission derived electricity. China, Switzerland, Israel, Malaysia, Thailand, United Kingdom, and the Philippines have also reviewed their nuclear power programs, while Indonesia and Vietnam still plan to build nuclear power plants. In 2011 the International Energy Agency halved its prior estimate of new generating capacity to be built by 2035. In 2013 Japan signed a deal worth $22 billion, in which Mitsubishi Heavy Industries would build four modern Atmea reactors for Turkey. 
In August 2015, following 4 years of near zero fission-electricity generation, Japan began restarting its fission fleet, after safety upgrades were completed, beginning with Sendai fission-electric station. The World Nuclear Association has said that "nuclear power generation suffered its biggest ever one-year fall through 2012 as the bulk of the Japanese fleet remained offline for a full calendar year". Data from the International Atomic Energy Agency showed that nuclear power plants globally produced 2346 TWh of electricity in 2012 – seven per cent less than in 2011. The figures illustrate the effects of a full year of 48 Japanese power reactors producing no power during the year. The permanent closure of eight reactor units in Germany was also a factor. Problems at Crystal River, Fort Calhoun and the two San Onofre units in the USA meant they produced no power for the full year, while in Belgium Doel 3 and Tihange 2 were out of action for six months. Compared to 2010, the nuclear industry produced 11% less electricity in 2012. The Fukushima Daiichi nuclear accident sparked controversy about the importance of the accident and its effect on nuclear's future. IAEA Director General Yukiya Amano said the Japanese nuclear accident "caused deep public anxiety throughout the world and damaged confidence in nuclear power", and the International Energy Agency halved its estimate of additional nuclear generating capacity to be built by 2035. Though Platts reported in 2011 that "the crisis at Japan's Fukushima nuclear plants has prompted leading energy-consuming countries to review the safety of their existing reactors and cast doubt on the speed and scale of planned expansions around the world", Progress Energy Chairman/CEO Bill Johnson made the observation that "Today there’s an even more compelling case that greater use of nuclear power is a vital part of a balanced energy strategy". In 2011, The Economist opined that nuclear power "looks dangerous, unpopular, expensive and risky", and that "it is replaceable with relative ease and could be forgone with no huge structural shifts in the way the world works". Earth Institute Director Jeffrey Sachs disagreed, claiming combating climate change would require an expansion of nuclear power. "We won't meet the carbon targets if nuclear is taken off the table," he said. "We need to understand the scale of the challenge." Investment banks were critical of nuclear soon after the accident. Deutsche Bank advised that "the global impact of the Fukushima accident is a fundamental shift in public perception with regard to how a nation prioritizes and values its populations health, safety, security, and natural environment when determining its current and future energy pathways...renewable energy will be a clear long-term winner in most energy systems, a conclusion supported by many voter surveys conducted over the past few weeks. In September 2011, German engineering giant Siemens announced it will withdraw entirely from the nuclear industry, as a response to the Fukushima nuclear accident in Japan, and said that it would no longer build nuclear power plants anywhere in the world. The company’s chairman, Peter Löscher, said that "Siemens was ending plans to cooperate with Rosatom, the Russian state-controlled nuclear power company, in the construction of dozens of nuclear plants throughout Russia over the coming two decades". 
In February 2012, the United States Nuclear Regulatory Commission approved the construction of two additional reactors at the Vogtle Electric Generating Plant, the first reactors to be approved in over 30 years since the Three Mile Island accident, but NRC Chairman Gregory Jaczko cast a dissenting vote citing safety concerns stemming from Japan's 2011 Fukushima nuclear disaster, and saying "I cannot support issuing this license as if Fukushima never happened". Jaczko resigned in April 2012. One week after Southern received the license to begin major construction on the two new reactors, a dozen environmental and anti-nuclear groups sued to stop the Plant Vogtle expansion project, saying "public safety and environmental problems since Japan's Fukushima Daiichi nuclear reactor accident have not been taken into account". In July 2012, the suit was rejected by the Washington, D.C. Circuit Court of Appeals. In 2013, four aging uncompetitive reactors in the United States were closed. In the United States, four new Generation III reactors were under construction at Vogtle and Summer station, while a fifth was nearing completion at Watts Bar station, all five were expected to become operational before 2020. In 2012, the World Nuclear Association reported that nuclear electricity generation was at its lowest level since 1999. According to the World Nuclear Association, the global trend is for new nuclear power stations coming online to be balanced by the number of old plants being retired. Countries such as Australia, Austria, Denmark, Greece, Ireland, Italy, Latvia, Liechtenstein, Luxembourg, Malta, Portugal, Israel, Malaysia, New Zealand, and Norway have no nuclear power reactors and remain opposed to nuclear power. By 2015, the IAEA's outlook for nuclear energy had become more promising. "Nuclear power is a critical element in limiting greenhouse gas emissions," the agency noted, and "the prospects for nuclear energy remain positive in the medium to long term despite a negative impact in some countries in the aftermath of the [Fukushima-Daiichi] accident...it is still the second-largest source worldwide of low-carbon electricity. And the 72 reactors under construction at the start of last year were the most in 25 years." As of 2015, 441 reactors had a worldwide net electric capacity of 382,9 GW, with 67 new nuclear reactors under construction. Over half of the 67 total being built were in Asia, with 28 in China, where there is an urgent need to control pollution from coal plants. Eight new grid connections were completed by China in 2015 and the most recently completed reactor to be connected to the electrical grid, as of January 2016, was at the Kori Nuclear Power Plant in the Republic of Korea. In October 2016, Watts Bar 2 became the first new United States reactor to enter commercial operation since 1996. Future of the industry As of January 2016, there are over 150 nuclear reactors planned, equivalent to nearly half of capacity at that time. The future of nuclear power varies greatly between countries, depending on government policies. Some countries, many of them in Europe, such as Germany, Belgium, and Lithuania, have adopted policies of nuclear power phase-out. At the same time, some Asian countries, such as China and India, have committed to rapid expansion of nuclear power. Many other countries, such as the United Kingdom and the United States, have policies in between. 
Japan was a major generator of nuclear power before the Fukushima accident, but as of August 2016, Japan has restarted only three of its nuclear plants, and the extent to which it will resume its nuclear program is uncertain. While South Korea has a large nuclear power industry, in 2017, responding to widespread public concerns after the Fukushima Daiichi nuclear disaster, the high earthquake risk in South Korea, and a 2013 nuclear scandal involving the use of counterfeit parts, the new government decided to gradually phase out nuclear power as reactors that are now operating or under construction close after 40 years of operations. In 2015, the International Energy Agency reported that the Fukushima accident had a strongly negative effect on nuclear power, yet “the prospects for nuclear energy remain positive in the medium to long term despite a negative impact in some countries in the aftermath of the accident.” The IEA noted that at the start of 2014, there were 72 nuclear reactors under construction worldwide, the largest number in 25 years, and that China planned to increase nuclear power capacity from 17 gigawatts (GW) in 2014, to 58 GW in 2020. In 2016, the US Energy Information Administration projected for its “base case” that world nuclear power generation would increase from 2,344 billion kW-hr in 2012 to 4,501 billion kW-hr in 2040. Most of the predicted increase was expected to be in Asia. The nuclear power industry in western nations has a history of construction delays, cost overruns, plant cancellations, and nuclear safety issues despite significant government subsidies and support. In December 2013, Forbes magazine cited a report which concluded that, in western countries, "reactors are not a viable source of new power". Even where they make economic sense, they are not feasible because of nuclear’s "enormous costs, political and popular opposition, and regulatory uncertainty". This view echoes the statement of former Exelon CEO John Rowe, who said in 2012 that new nuclear plants in the United States "don’t make any sense right now" and won’t be economically viable in the foreseeable future. John Quiggin, economics professor, also says the main problem with the nuclear option is that it is not economically viable. Quiggin says that we need more efficient energy use and more renewable energy commercialization. Former NRC member Peter Bradford and Professor Ian Lowe made similar statements in 2011. However, some "nuclear cheerleaders" and lobbyists in the West continue to champion reactors, often with proposed new but largely untested designs, as a source of new power. Much more new build activity is occurring in developing countries like South Korea, India and China. In March 2016, China had 30 reactors in operation, 24 under construction and plans to build more. However, according to a government research unit, China must not build "too many nuclear power reactors too quickly", in order to avoid a shortfall of fuel, equipment and qualified plant workers. In the US, licenses of almost half its reactors have been extended to 60 years. Two new Generation III reactors are under construction at Vogtle, a dual construction project which marks the end of a 34-year period of stagnation in the US construction of civil nuclear power reactors. The station operator licenses of almost half the present 104 power reactors in the US, as of 2008, have been given extensions to 60 years. As of 2012, U.S.
nuclear industry officials expect five new reactors to enter service by 2020, all at existing plants. In 2013, four aging, uncompetitive, reactors were permanently closed. Relevant state legislatures are trying to close Vermont Yankee and Indian Point Nuclear Power Plant. The U.S. NRC and the U.S. Department of Energy have initiated research into Light water reactor sustainability which is hoped will lead to allowing extensions of reactor licenses beyond 60 years, provided that safety can be maintained, as the loss in non-CO2-emitting generation capacity by retiring reactors "may serve to challenge U.S. energy security, potentially resulting in increased greenhouse gas emissions, and contributing to an imbalance between electric supply and demand." Research into nuclear reactors that can last 100 years, known as Centurion Reactors, is already being conducted. There is a possible impediment to production of nuclear power plants as only a few companies worldwide have the capacity to forge single-piece reactor pressure vessels, which are necessary in the most common reactor designs. Utilities across the world are submitting orders years in advance of any actual need for these vessels. Other manufacturers are examining various options, including making the component themselves, or finding ways to make a similar item using alternate methods. According to the World Nuclear Association, globally during the 1980s one new nuclear reactor started up every 17 days on average, and in the year 2015 it was estimated that this rate could in theory eventually increase to one every 5 days, although no plans exist for that. As of 2007, Watts Bar 1 in Tennessee, which came on-line on February 7, 1996, was the last U.S. commercial nuclear reactor to go on-line. This is often quoted as evidence of a successful worldwide campaign for nuclear power phase-out. Electricity shortages, fossil fuel price increases, global warming, and heavy metal emissions from fossil fuel use, new technology such as passively safe plants, and national energy security may renew the demand for nuclear power plants. Following Westinghouse filing for Chapter 11 bankruptcy protection in March 2017 because of US$9 billion of losses from nuclear construction projects in the US, the future of new nuclear plant construction has largely moved to Asia and the Middle East. China has 21 reactors under construction and 40 planned, Russia has 7 under construction and 25 planned, and South Korea has 3 under construction plus 4 it is building in the United Arab Emirates. Capacity and production Nuclear fission power stations, excluding the contribution from naval nuclear fission reactors, provided 11% of the world's electricity in 2012, somewhat less than that generated by hydro-electric stations at 16%. Since electricity accounts for about 25% of humanity's energy usage with the majority of the rest coming from fossil fuel reliant sectors such as transport, manufacture and home heating, nuclear fission's contribution to the global final energy consumption was about 2.5%. This is a little more than the combined global electricity production from wind, solar, biomass and geothermal power, which together provided 2% of global final energy consumption in 2014. In 2013, the IAEA reported that there were 437 operational civil fission-electric reactors in 31 countries, although not every reactor was producing electricity. In addition, there were approximately 140 naval vessels using nuclear propulsion in operation, powered by about 180 reactors. 
Regional differences in the use of nuclear power are large. The United States produces the most nuclear energy in the world, with nuclear power providing 19% of the electricity it consumes, while France produces the highest percentage of its electrical energy from nuclear reactors—80% as of 2006. In the European Union as a whole nuclear power provides 30% of the electricity. Nuclear power is the single largest low-carbon electricity source in the United States, and accounts for two-thirds of the European Union's low-carbon electricity. Nuclear energy policy differs among European Union countries, and some, such as Austria, Estonia, Ireland and Italy, have no active nuclear power stations. In comparison, France has a large number of these plants, with 16 multi-unit stations in current use. Many military and some civilian (such as some icebreakers) ships use nuclear marine propulsion. A few space vehicles have been launched using nuclear reactors: 33 reactors belong to the Soviet RORSAT series and one was the American SNAP-10A. International research is continuing into additional uses of process heat such as hydrogen production (in support of a hydrogen economy), for desalinating sea water, and for use in district heating systems. Analysis in 2015 by Professor and Chair of Environmental Sustainability Barry W. Brook and his colleagues on the topic of replacing fossil fuels entirely, from the electric grid of the world, has determined that at the historically modest and proven rate at which nuclear energy was added to the grid and replaced fossil fuels in France and Sweden during each nation's building programs in the 1980s, within 10 years nuclear energy could displace or remove fossil fuels from the electric grid completely, "allow[ing] the world to meet the most stringent greenhouse-gas mitigation targets." In a similar analysis, Brook had earlier determined that 50% of all global energy, that is, not solely electricity but also transportation synfuels and so on, could be generated within approximately 30 years if the global nuclear fission build rate were identical to these nations' already proven decadal rates, in units of installed nameplate capacity per unit of global GDP (GW/year/$). This is in contrast to the conceptual studies for a 100% renewable energy world, which would require a global investment per year that is orders of magnitude more costly and has no historical precedent, along with far more land devoted to wind, wave and solar projects, and which rest on the inherent assumption that humanity will use less, and not more, energy in the future. As Brook notes, the "principal limitations on nuclear fission are not technical, economic or fuel-related, but are instead linked to complex issues of societal acceptance, fiscal and political inertia, and inadequate critical evaluation of the real-world constraints facing [the other] low-carbon alternatives." Nuclear power plants typically have high capital costs for building the plant, but low fuel costs. Although nuclear power plants can vary their output, the electricity they produce is generally less favorably priced when they do so. Nuclear power plants are therefore typically run as much as possible to keep the cost of the generated electrical energy as low as possible, supplying mostly base-load electricity.
Internationally, the price of nuclear plants rose 15% annually in 1970–1990. Yet nuclear power had total costs in 2012 of about $96 per megawatt-hour (MWh), most of which involves capital construction costs, compared with solar power at $130 per MWh and natural gas at the low end at $64 per MWh. In 2015, the Bulletin of the Atomic Scientists unveiled the Nuclear Fuel Cycle Cost Calculator, an online tool that estimates the full cost of electricity produced by three configurations of the nuclear fuel cycle. Two years in the making, this interactive calculator is the first generally accessible model to provide a nuanced look at the economic costs of nuclear power; it lets users test how sensitive the price of electricity is to a full range of components—more than 60 parameters that can be adjusted for the three configurations of the nuclear fuel cycle considered by this tool (once-through, limited-recycle, full-recycle). Users can select the fuel cycle they would like to examine, change cost estimates for each component of that cycle, and even choose uncertainty ranges for the cost of particular components. This approach allows users around the world to compare the cost of different nuclear power approaches in a sophisticated way, while taking account of prices relevant to their own countries or regions. In recent years there has been a slowdown of electricity demand growth. In Eastern Europe, a number of long-established projects are struggling to find finance, notably Belene in Bulgaria and the additional reactors at Cernavoda in Romania, and some potential backers have pulled out. Where the electricity market is competitive, cheap natural gas is available, and its future supply relatively secure, this also poses a major problem for nuclear projects and existing plants. Analysis of the economics of nuclear power must take into account who bears the risks of future uncertainties. To date, all operating nuclear power plants were developed by state-owned or regulated utility monopolies where many of the risks associated with construction costs, operating performance, fuel price, accident liability and other factors were borne by consumers rather than suppliers. In addition, because the potential liability from a nuclear accident is so great, the full cost of liability insurance is generally limited/capped by the government, which the U.S. Nuclear Regulatory Commission concluded constituted a significant subsidy. Many countries have now liberalized the electricity market where these risks, and the risk of cheaper competitors emerging before capital costs are recovered, are borne by plant suppliers and operators rather than consumers, which leads to a significantly different evaluation of the economics of new nuclear power plants. Following the 2011 Fukushima Daiichi nuclear disaster, costs are expected to increase for currently operating and new nuclear power plants, due to increased requirements for on-site spent fuel management and elevated design basis threats. The economics of new nuclear power plants is a controversial subject, since there are diverging views on this topic, and multibillion-dollar investments ride on the choice of an energy source. Comparison with other power generation methods is strongly dependent on assumptions about construction timescales and capital financing for nuclear plants, as well as on the future costs of fossil fuels, renewables, and energy storage solutions for intermittent power sources.
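To illustrate why the per-MWh figures quoted above depend so heavily on financing and construction assumptions, a simplified levelized-cost sketch follows. It is not the Bulletin's Nuclear Fuel Cycle Cost Calculator, and every parameter value is a placeholder chosen only to show that a capital-heavy plant's cost swings with the discount rate and build cost far more than a fuel-heavy plant's does.

```python
# Minimal levelized-cost-of-electricity sketch. All parameter values below are
# placeholders for illustration, not published cost data.
def simple_lcoe(capital, fixed_om_per_year, fuel_per_mwh, annual_mwh, lifetime_years, discount_rate):
    """Levelized cost in $/MWh: discounted lifetime costs divided by discounted lifetime energy."""
    cost, energy = float(capital), 0.0
    for year in range(1, lifetime_years + 1):
        d = (1 + discount_rate) ** year
        cost += (fixed_om_per_year + fuel_per_mwh * annual_mwh) / d
        energy += annual_mwh / d
    return cost / energy

# A 1 GW plant at roughly 90% capacity factor produces about 7.9 million MWh per year.
nuclear = simple_lcoe(6e9, 1e8, 7,  7.9e6, 40, 0.07)   # capital-heavy, cheap fuel
gas     = simple_lcoe(1e9, 3e7, 45, 7.9e6, 30, 0.07)   # cheap to build, fuel dominates
print(f"nuclear ~ ${nuclear:.0f}/MWh, gas ~ ${gas:.0f}/MWh")
```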
Cost estimates also need to take into account plant decommissioning and nuclear waste storage costs. On the other hand, measures to mitigate global warming, such as a carbon tax or carbon emissions trading, may favor the economics of nuclear power. Nuclear power plants Just as many conventional thermal power stations generate electricity by harnessing the thermal energy released from burning fossil fuels, nuclear power plants convert the energy released from the nucleus of an atom via nuclear fission that takes place in a nuclear reactor. The heat is removed from the reactor core by a cooling system that uses the heat to generate steam, which drives a steam turbine connected to a generator producing electricity. Life cycle of nuclear fuel A nuclear reactor is only part of the life-cycle for nuclear power. The process starts with mining (see Uranium mining). Uranium mines are underground, open-pit, or in-situ leach mines. In any case, the uranium ore is extracted, usually converted into a stable and compact form such as yellowcake, and then transported to a processing facility. Here, the yellowcake is converted to uranium hexafluoride, which is then enriched using various techniques. At this point, the enriched uranium, containing more than the natural 0.7% U-235, is used to make rods of the proper composition and geometry for the particular reactor that the fuel is destined for. The fuel rods will spend about 3 operational cycles (typically 6 years total now) inside the reactor, generally until about 3% of their uranium has been fissioned; then they will be moved to a spent fuel pool where the short-lived isotopes generated by fission can decay away. After about 5 years in a spent fuel pool, the spent fuel is radioactively and thermally cool enough to handle, and it can be moved to dry storage casks or reprocessed. Conventional fuel resources Uranium is a fairly common element in the Earth's crust. Uranium is approximately as common as tin or germanium in the Earth's crust, and is about 40 times more common than silver. Uranium is present in trace concentrations in most rocks, dirt, and ocean water, but can be economically extracted currently only where it is present in high concentrations. Still, the world's present measured resources of uranium, economically recoverable at the arbitrary price ceiling of 130 USD/kg, are enough to last for between 70 and 100 years. According to the OECD in 2006, there was an expected 85 years' worth of uranium in already identified resources, when that uranium is used in present reactor technology. In the OECD's Red Book of 2011, due to increased exploration, known uranium resources had grown by 12.5% since 2008, with this increase translating into greater than a century of uranium available if the metal's usage rate were to continue at the 2011 level. The OECD also estimate 670 years of economically recoverable uranium in total conventional resources and phosphate ores, again using present reactor technology, a resource that is recoverable at between 60 and 100 US$/kg of uranium. In a similar manner to every other natural metal resource, for every tenfold increase in the cost per kilogram of uranium, there is a three-hundredfold increase in available lower-quality ores that would then become economical. As the OECD note: Even if the nuclear industry expands significantly, sufficient fuel is available for centuries.
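The tenfold-price, three-hundredfold-ore relationship quoted above can be read as a power-law elasticity; the sketch below simply extracts that exponent and extrapolates to other price ratios, which is of course a simplification of how resource estimates are actually compiled.

```python
import math

# "For every tenfold increase in the cost per kilogram of uranium, there is a
# three-hundredfold increase in available lower-quality ores."  Read as a power
# law, available_ore ~ price**k, the implied exponent is k = log(300) / log(10).
k = math.log(300) / math.log(10)  # about 2.48

def ore_multiplier(price_ratio: float) -> float:
    """How much the economically recoverable resource grows when price rises by price_ratio."""
    return price_ratio ** k

print(f"k = {k:.2f}")
print(f"price x2  -> ore x{ore_multiplier(2):.1f}")    # roughly x5.6
print(f"price x10 -> ore x{ore_multiplier(10):.0f}")   # x300 by construction
```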
If advanced breeder reactors could be designed in the future to efficiently utilize recycled or depleted uranium and all actinides, then the resource utilization efficiency would be further improved by an additional factor of eight. For example, the OECD have determined that with a pure fast reactor fuel cycle with a burn up of, and recycling of, all the Uranium and actinides, actinides which presently make up the most hazardous substances in nuclear waste, there is 160,000 years worth of Uranium in total conventional resources and phosphate ore, at the price of 60–100 US$/kg of Uranium. Current light water reactors make relatively inefficient use of nuclear fuel, mostly fissioning only the very rare uranium-235 isotope. Nuclear reprocessing can make this waste reusable, and more efficient reactor designs, such as the currently under construction Generation III reactors achieve a higher efficiency burn up of the available resources, than the current vintage generation II reactors, which make up the vast majority of reactors worldwide. As opposed to current light water reactors which use uranium-235 (0.7% of all natural uranium), fast breeder reactors use uranium-238 (99.3% of all natural uranium). It has been estimated that there is up to five billion years' worth of uranium-238 for use in these power plants. Breeder technology has been used in several reactors, but the high cost of reprocessing fuel safely, at 2006 technological levels, requires uranium prices of more than 200 USD/kg before becoming justified economically. Breeder reactors are however being pursued as they have the potential to burn up all of the actinides in the present inventory of nuclear waste while also producing power and creating additional quantities of fuel for more reactors via the breeding process. As of 2017, there are only two breeder reactors producing commercial power: the BN-600 reactor and the BN-800 reactor, both in Russia. The BN-600, with a capacity of 600 MW, was built in 1980 in Beloyarsk and is planned to produce power until 2025. The BN-800 is an updated version of the BN-600, and started operation in 2016 with a net electrical capacity of 789 MW. The technical design of a yet larger breeder, the BN-1200 reactor was originally scheduled to be finalized in 2013, with construction slated for 2015 but has since been delayed. The Phénix breeder reactor in France was powered down in 2009 after 36 years of operation. Japan's Monju breeder reactor restarted (having been shut down in 1995) in 2010 for 3 months, but shut down again after equipment fell into the reactor during reactor checkups and it is now planned to be decommissioned. Both China and India are building breeder reactors. The Indian 500 MWe Prototype Fast Breeder Reactor is under construction, with plans to build five more by 2020. The China Experimental Fast Reactor began producing power in 2011. Another alternative to fast breeders is thermal breeder reactors that use uranium-233 bred from thorium as fission fuel in the thorium fuel cycle. Thorium is about 3.5 times more common than uranium in the Earth's crust, and has different geographic characteristics. This would extend the total practical fissionable resource base by 450%. India's three-stage nuclear power programme features the use of a thorium fuel cycle in the third stage, as it has abundant thorium reserves but little uranium. The most important waste stream from nuclear power plants is spent nuclear fuel. 
It is primarily composed of unconverted uranium as well as significant quantities of transuranic actinides (mostly plutonium and curium). In addition, about 3% of it consists of fission products from nuclear reactions. The actinides (uranium, plutonium, and curium) are responsible for the bulk of the long-term radioactivity, whereas the fission products are responsible for the bulk of the short-term radioactivity.

High-level radioactive waste

High-level radioactive waste management concerns the management and disposal of highly radioactive materials created during production of nuclear power. The technical issues in accomplishing this are daunting, due to the extremely long periods radioactive wastes remain deadly to living organisms. Of particular concern are two long-lived fission products, technetium-99 (half-life 220,000 years) and iodine-129 (half-life 15.7 million years), which dominate spent nuclear fuel radioactivity after a few thousand years. The most troublesome transuranic elements in spent fuel are neptunium-237 (half-life two million years) and plutonium-239 (half-life 24,000 years). Consequently, high-level radioactive waste requires sophisticated treatment and management to successfully isolate it from the biosphere. This usually necessitates treatment, followed by a long-term management strategy involving permanent storage, disposal, or transformation of the waste into a non-toxic form. Governments around the world are considering a range of waste management and disposal options, usually involving deep-geologic placement, although there has been limited progress toward implementing long-term waste management solutions. This is partly because the timeframes in question when dealing with radioactive waste range from 10,000 to millions of years, according to studies based on the effect of estimated radiation doses. Some proposed reactor designs, however, such as the American Integral Fast Reactor and the molten salt reactor, can use the nuclear waste from light water reactors as a fuel, transmuting it to isotopes that would be safe after hundreds, rather than tens of thousands, of years. This offers a potentially more attractive alternative to deep geological disposal. Another possibility is the use of thorium in a reactor especially designed for thorium, rather than mixing thorium with uranium and plutonium in existing reactors. Used thorium fuel remains radioactive for only a few hundred years, instead of tens of thousands. Since the fraction of a radioisotope's atoms decaying per unit of time is inversely proportional to its half-life, the relative radioactivity of a quantity of buried human radioactive waste would diminish over time compared to natural radioisotopes (such as the decay chains of 120 trillion tons of thorium and 40 trillion tons of uranium, which are present at trace concentrations of parts per million each over the crust's 3 × 10^19 ton mass). For instance, over a timeframe of thousands of years, after the most active short half-life radioisotopes had decayed, burying U.S. nuclear waste would increase the radioactivity in the top 2000 feet of rock and soil in the United States (10 million km²) by roughly 1 part in 10 million over the cumulative amount of natural radioisotopes in such a volume, although the vicinity of the site would have a far higher concentration of artificial radioisotopes underground than such an average.
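The inverse relation between half-life and radioactivity invoked above follows directly from the exponential decay law. The relations below are standard textbook physics, not figures taken from the cited sources:

```latex
% Exponential decay: N(t) atoms remain from an initial population N_0
N(t) = N_0 \, e^{-\lambda t}, \qquad \lambda = \frac{\ln 2}{t_{1/2}}

% Activity (decays per unit time) is proportional to the decay constant:
A(t) = \lambda \, N(t) = \frac{\ln 2}{t_{1/2}} \, N(t)
```

Hence, for a fixed number of atoms, a nuclide with a half-life a thousand times longer is roughly a thousand times less radioactive per atom, which is why very long-lived natural isotopes such as uranium-238 and thorium-232 contribute only a low specific activity despite their enormous total mass in the crust.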
Low-level radioactive waste

The nuclear industry also produces a large volume of low-level radioactive waste in the form of contaminated items like clothing, hand tools, water purifier resins, and (upon decommissioning) the materials of which the reactor itself is built. In the US, the Nuclear Regulatory Commission has repeatedly attempted to allow low-level materials to be handled as normal waste: landfilled, recycled into consumer items, and so on.

Waste relative to other types

In countries with nuclear power, radioactive wastes comprise less than 1% of total industrial toxic wastes, much of which remains hazardous for long periods. Overall, nuclear power produces far less waste material by volume than fossil-fuel based power plants. Coal-burning plants are particularly noted for producing large amounts of toxic and mildly radioactive ash, because they concentrate naturally occurring metals and mildly radioactive material from the coal. A 2008 report from Oak Ridge National Laboratory concluded that coal power actually results in more radioactivity being released into the environment than nuclear power operation, and that the population effective dose equivalent, or dose to the public, from radiation from coal plants is 100 times that from the operation of nuclear plants. Coal ash is much less radioactive than spent nuclear fuel on a weight-for-weight basis, but coal ash is produced in much higher quantities per unit of energy generated and is released directly into the environment as fly ash, whereas nuclear plants use shielding to protect the environment from radioactive materials, for example in dry cask storage vessels. Disposal of nuclear waste is often said to be the Achilles' heel of the industry. Presently, waste is mainly stored at individual reactor sites, and there are over 430 locations around the world where radioactive material continues to accumulate. Some experts suggest that centralized underground repositories which are well managed, guarded, and monitored would be a vast improvement. There is an "international consensus on the advisability of storing nuclear waste in deep geological repositories", with the lack of movement of nuclear waste in the 2-billion-year-old natural nuclear fission reactors in Oklo, Gabon, being cited as "a source of essential information today." There are no commercial-scale purpose-built underground repositories in operation. The Waste Isolation Pilot Plant (WIPP) in New Mexico has been taking nuclear waste from production reactors since 1999, but as the name suggests it is a research and development facility. A radiation leak at WIPP in 2014 brought renewed attention to the need for R&D on disposal of radioactive waste and spent fuel. Reprocessing can potentially recover up to 95% of the remaining uranium and plutonium in spent nuclear fuel, putting it into new mixed oxide fuel. This reduces the long-term radioactivity within the remaining waste, since what remains is largely short-lived fission products, and reduces its volume by over 90%. Reprocessing of civilian fuel from power reactors is currently done in Europe, Russia, Japan, and India. The full potential of reprocessing has not been achieved because it requires breeder reactors, which are not commercially available. Nuclear reprocessing reduces the volume of high-level waste, but by itself does not reduce radioactivity or heat generation and therefore does not eliminate the need for a geological waste repository.
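A back-of-the-envelope sketch of the mass balance behind the recovery and volume-reduction figures quoted above follows. The spent-fuel composition values are illustrative assumptions for a typical light water reactor, not data from the cited sources.

```python
# Rough illustration of the reprocessing figures quoted above.
# Composition values are illustrative assumptions, not measured data.

spent_fuel_tonnes = 100.0
composition = {
    "uranium":          0.93,   # unconverted uranium (assumed fraction)
    "plutonium":        0.01,   # (assumed)
    "minor_actinides":  0.001,  # (assumed)
    "fission_products": 0.059,  # (assumed)
}

# Reprocessing recovers most of the uranium and plutonium for new MOX fuel...
recovered = spent_fuel_tonnes * (composition["uranium"] + composition["plutonium"]) * 0.95

# ...so the high-level waste stream left behind is mainly fission products
# and minor actinides, a small fraction of the original mass.
remaining = spent_fuel_tonnes - recovered
print(f"recovered for reuse: {recovered:.1f} t, remaining high-level waste: {remaining:.1f} t")
print(f"approximate reduction: ~{100 * (1 - remaining / spent_fuel_tonnes):.0f}%")
```

The sketch also makes the closing caveat concrete: the fission products and minor actinides that dominate heat and radioactivity are exactly the part that is not recovered, so reprocessing shrinks the waste stream without removing the need for a repository.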
Reprocessing has been politically controversial because of the potential to contribute to nuclear proliferation, the potential vulnerability to nuclear terrorism, the political challenges of repository siting (a problem that applies equally to direct disposal of spent fuel), and because of its high cost compared to the once-through fuel cycle. Several different methods for reprocessing have been tried, but many have had safety and practicality problems which led to their discontinuation. In the United States, the Obama administration stepped back from President Bush's plans for commercial-scale reprocessing and reverted to a program focused on reprocessing-related scientific research. Reprocessing is not allowed in the U.S., where spent nuclear fuel is currently all treated as waste. A major recommendation of the Blue Ribbon Commission on America's Nuclear Future was that "the United States should undertake an integrated nuclear waste management program that leads to the timely development of one or more permanent deep geological facilities for the safe disposal of spent fuel and high-level nuclear waste". Uranium enrichment produces many tons of depleted uranium (DU), which consists of U-238 with most of the easily fissile U-235 isotope removed. U-238 is a tough metal with several commercial uses, for example aircraft production, radiation shielding, and armor, as it has a higher density than lead. Depleted uranium is also controversially used in munitions; DU penetrators (bullets or APFSDS tips) "self-sharpen", due to uranium's tendency to fracture along shear bands.

Accidents, attacks and safety

Some serious nuclear and radiation accidents have occurred. Benjamin K. Sovacool has reported that worldwide there have been 99 accidents at nuclear power plants. Fifty-seven accidents have occurred since the Chernobyl disaster, and 57% (56 out of 99) of all nuclear-related accidents have occurred in the USA. Nuclear power plant accidents include the Chernobyl accident (1986), with approximately 60 deaths so far attributed to the accident and a predicted eventual total death toll of 4,000 to 25,000 latent cancer deaths; the Fukushima Daiichi nuclear disaster (2011), which has not caused any radiation-related deaths, with a predicted eventual total death toll of 0 to 1,000; and the Three Mile Island accident (1979), for which no causal deaths, cancer or otherwise, have been found in follow-up studies. Nuclear-powered submarine mishaps include the K-19 reactor accident (1961), the K-27 reactor accident (1968), and the K-431 reactor accident (1985). International research is continuing into safety improvements such as passively safe plants, and into the possible future use of nuclear fusion. In terms of lives lost per unit of energy generated, nuclear power has caused fewer accidental deaths than any other major source of energy generation. Energy produced from coal, petroleum, natural gas, and hydropower has caused more deaths per unit of energy generated, from air pollution and energy accidents.
This holds in each of the following comparisons: when the immediate nuclear-related deaths from accidents are compared with the immediate deaths from these other energy sources; when the latent, or predicted, indirect cancer deaths from nuclear energy accidents are compared with the immediate deaths from those sources; and when the combined immediate and indirect fatalities from nuclear power and from all fossil fuels are compared, including fatalities resulting from the mining of the natural resources needed for power generation and from air pollution. With these data, the use of nuclear power has been calculated to have prevented on the order of 1.8 million deaths between 1971 and 2009, by reducing the proportion of energy that would otherwise have been generated by fossil fuels, and is projected to continue to do so. According to Benjamin K. Sovacool, however, fission energy accidents rank first in terms of their total economic cost, accounting for 41 percent of all property damage attributed to energy accidents. Analysis presented in the international journal Human and Ecological Risk Assessment found that coal, oil, liquefied petroleum gas, and hydroelectric accidents (primarily the Banqiao dam failure) have resulted in greater economic impacts than nuclear power accidents. Following the 2011 Japanese Fukushima nuclear disaster, authorities shut down the nation's 54 nuclear power plants, but it has been estimated that if Japan had never adopted nuclear power, accidents and pollution from coal or gas plants would have caused more lost years of life. As of 2013, the Fukushima site remains highly radioactive, with some 160,000 evacuees still living in temporary housing, and some land will be unfarmable for centuries. The difficult Fukushima disaster cleanup will take 40 or more years, and cost tens of billions of dollars. Forced evacuation from a nuclear accident may lead to social isolation, anxiety, depression, psychosomatic medical problems, reckless behavior, and even suicide. Such was the outcome of the 1986 Chernobyl nuclear disaster in Ukraine. A comprehensive 2005 study concluded that "the mental health impact of Chernobyl is the largest public health problem unleashed by the accident to date". Frank N. von Hippel, a U.S. scientist, commented on the 2011 Fukushima nuclear disaster, saying that "fear of ionizing radiation could have long-term psychological effects on a large portion of the population in the contaminated areas". A 2015 report in The Lancet explained that serious impacts of nuclear accidents were often not directly attributable to radiation exposure, but rather to social and psychological effects. Evacuation and long-term displacement of affected populations created problems for many people, especially the elderly and hospital patients.

Attacks and sabotage

Terrorists could target nuclear power plants in an attempt to release radioactive contamination into the community. The United States 9/11 Commission has said that nuclear power plants were potential targets originally considered for the September 11, 2001 attacks. An attack on a reactor's spent fuel pool could also be serious, as these pools are less protected than the reactor core. The release of radioactivity could lead to thousands of near-term deaths and greater numbers of long-term fatalities. If nuclear power use is to expand significantly, nuclear facilities will have to be made extremely safe from attacks that could release massive quantities of radioactivity into the community.
New reactor designs have features of passive safety, such as the flooding of the reactor core without active intervention by reactor operators. But these safety measures have generally been developed and studied with respect to accidents, not to a deliberate attack on a reactor by a terrorist group. However, the US Nuclear Regulatory Commission does now also require new reactor license applications to consider security during the design stage. In the United States, the NRC carries out "Force on Force" (FOF) exercises at all nuclear power plant (NPP) sites at least once every three years. In the U.S., plants are surrounded by a double row of tall fences which are electronically monitored, and the plant grounds are patrolled by a sizeable force of armed guards. Insider sabotage occurs regularly, because insiders can observe and work around security measures. Successful insider crimes have depended on the perpetrators' observation of, and knowledge about, security vulnerabilities. A fire caused US$5–10 million worth of damage to New York's Indian Point Energy Center in 1971; the arsonist turned out to be a plant maintenance worker. Sabotage by workers has been reported at many other reactors in the United States: at Zion Nuclear Power Station (1974), Quad Cities Nuclear Generating Station, Peach Bottom Nuclear Generating Station, Fort St. Vrain Generating Station, Trojan Nuclear Power Plant (1974), Browns Ferry Nuclear Power Plant (1980), and Beaver Valley Nuclear Generating Station (1981). Many reactors overseas have also reported sabotage by workers. Many technologies and materials associated with the creation of a nuclear power program have a dual-use capability, in that they can be used to make nuclear weapons if a country chooses to do so. When this happens, a nuclear power program can become a route leading to a nuclear weapon, or a public annex to a "secret" weapons program. The concern over Iran's nuclear activities is a case in point. A fundamental goal for American and global security is to minimize the nuclear proliferation risks associated with the expansion of nuclear power. If this development is "poorly managed or efforts to contain risks are unsuccessful, the nuclear future will be dangerous". The Global Nuclear Energy Partnership is one international effort to create a distribution network in which developing countries in need of energy would receive nuclear fuel at a discounted rate, in exchange for forgoing indigenous development of uranium enrichment programs. The France-based Eurodif/European Gaseous Diffusion Uranium Enrichment Consortium is one program that successfully implemented this concept: Spain and other countries without enrichment facilities bought a share of the fuel produced at the French-controlled enrichment facility, without any transfer of technology. Iran was an early participant from 1974, and remains a shareholder of Eurodif via Sofidif. According to Benjamin K. Sovacool, a "number of high-ranking officials, even within the United Nations, have argued that they can do little to stop states using nuclear reactors to produce nuclear weapons". A 2009 United Nations report said that the revival of interest in nuclear power could result in the worldwide dissemination of uranium enrichment and spent fuel reprocessing technologies, which present obvious risks of proliferation, as these technologies can produce fissile materials that are directly usable in nuclear weapons.
On the other hand, one factor favoring power reactors is the role they can play in reducing nuclear weapons arsenals through the Megatons to Megawatts Program, a program which eliminated 425 metric tons of highly enriched uranium (HEU), the equivalent of 17,000 nuclear warheads, by diluting it with natural uranium to make low-enriched uranium (LEU) suitable as nuclear fuel for commercial fission reactors. This is the single most successful non-proliferation program to date. The Megatons to Megawatts Program, the brainchild of Thomas Neff of MIT, was hailed as a major success by anti-nuclear-weapons advocates, as it has largely been the driving force behind the sharp reduction in the quantity of nuclear weapons worldwide since the Cold War ended. However, without an increase in nuclear reactors and greater demand for fissile fuel, the cost of dismantling and down-blending has dissuaded Russia from continuing its disarmament. Currently, according to Harvard professor Matthew Bunn: "The Russians are not remotely interested in extending the program beyond 2013. We've managed to set it up in a way that costs them more and profits them less than them just making new low-enriched uranium for reactors from scratch. But there are other ways to set it up that would be very profitable for them and would also serve some of their strategic interests in boosting their nuclear exports." Up to 2005, the Megatons to Megawatts Program had processed $8 billion worth of HEU (weapons-grade uranium) into LEU (reactor-grade uranium), corresponding to the elimination of 10,000 nuclear weapons. For approximately two decades, this material generated nearly 10 percent of all the electricity consumed in the United States (about half of all US nuclear electricity generated), a total of around 7 trillion kilowatt-hours of electricity, enough to power the entire United States electric grid for about two years. In total it is estimated to have cost $17 billion, a "bargain for US ratepayers", with Russia profiting $12 billion from the deal, much-needed revenue for the Russian nuclear oversight industry, which, after the collapse of the Soviet economy, had difficulty paying for the maintenance and security of the Russian Federation's highly enriched uranium and warheads. In April 2012 there were thirty-one countries with civil nuclear power plants, nine of which have nuclear weapons; the vast majority of these nuclear weapons states produced weapons before building commercial fission electricity stations. Moreover, the re-purposing of civilian nuclear industries for military purposes would be a breach of the Non-Proliferation Treaty, to which 190 countries adhere. Nuclear power is one of the leading low-carbon methods of producing electricity, and in terms of total life-cycle greenhouse gas emissions per unit of energy generated, has emission values lower than renewable energy when the latter is taken as a single energy source. A 2014 analysis of the carbon footprint literature by the Intergovernmental Panel on Climate Change (IPCC) reported that the embodied total life-cycle emission intensity of fission electricity has a median value of 12 g CO2eq/kWh, the lowest of all commercial baseload energy sources. This is contrasted with coal and fossil gas at 820 and 490 g CO2eq/kWh, respectively.
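Per-kWh intensity figures like these translate into avoided emissions by multiplying the intensity difference by the electricity generated. A minimal sketch follows; the annual generation figure is an assumed round number for illustration, not a statistic from the IPCC analysis.

```python
# Illustrative conversion of emission intensities (g CO2-eq/kWh) into avoided emissions.
# The generation figure is an assumed round number, not from the cited IPCC analysis.

NUCLEAR_G_PER_KWH = 12    # median life-cycle intensity quoted above
COAL_G_PER_KWH = 820
GAS_G_PER_KWH = 490

annual_nuclear_generation_kwh = 2.5e12  # assumed ~2500 TWh/year of nuclear output

def avoided_tonnes(replaced_intensity_g_per_kwh: float) -> float:
    """CO2-eq avoided per year if this generation displaced the given source."""
    delta_g = replaced_intensity_g_per_kwh - NUCLEAR_G_PER_KWH
    return annual_nuclear_generation_kwh * delta_g / 1e6  # grams -> tonnes

print(f"vs coal: ~{avoided_tonnes(COAL_G_PER_KWH)/1e9:.1f} billion tonnes CO2-eq per year")
print(f"vs gas:  ~{avoided_tonnes(GAS_G_PER_KWH)/1e9:.1f} billion tonnes CO2-eq per year")
```

A few billion tonnes avoided per year, accumulated over several decades, is the rough scale behind the cumulative figure reported next.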
From the beginning of fission-electric power station commercialization in the 1970s, nuclear power prevented the emission of about 64 billion tonnes of carbon dioxide equivalent that would otherwise have resulted from the burning of fossil fuels in thermal power stations. According to the United Nations (UNSCEAR), regular nuclear power plant operation, including the nuclear fuel cycle, causes radioisotope releases into the environment amounting to 0.0002 millisieverts (mSv) per year of public exposure as a global average. This is small compared to the variation in natural background radiation, which averages 2.4 mSv/a globally but frequently varies between 1 mSv/a and 13 mSv/a depending on a person's location, as determined by UNSCEAR. As of a 2008 report, the remaining legacy of the worst nuclear power plant accident (Chernobyl) is 0.002 mSv/a in global average exposure (a figure which was 0.04 mSv per person averaged over the entire populace of the Northern Hemisphere in the year of the accident in 1986, although far higher among the most affected local populations and recovery workers). Climate change, causing weather extremes such as heat waves, reduced precipitation levels, and droughts, can have a significant impact on all thermal power station infrastructure, including large biomass-electric and fission-electric stations alike, if cooling in these power stations, namely in the steam condenser, is provided by certain freshwater sources. While many thermal stations use indirect seawater cooling or cooling towers that in comparison use little to no freshwater, those designed to exchange heat with rivers and lakes can run into economic problems. This presently infrequent generic problem may become increasingly significant over time. It can force nuclear reactors to be shut down, as happened in France during the 2003 and 2006 heat waves. Nuclear power supply was severely diminished by low river flow rates and droughts, which meant rivers had reached the maximum temperatures for cooling reactors. During the heat waves, 17 reactors had to limit output or shut down. 77% of French electricity is produced by nuclear power, and in 2009 a similar situation created an 8 GW shortage and forced the French government to import electricity. Other cases have been reported from Germany, where extreme temperatures reduced nuclear power production nine times between 1979 and 2007. In particular:
- the Unterweser nuclear power plant reduced output by 90% between June and September 2003
- the Isar nuclear power plant cut production by 60% for 14 days due to excess river temperatures and low stream flow in the river Isar in 2006
However, the more modern Isar II station did not have to cut production, as, unlike its sister station Isar I, Isar II was built with a cooling tower. Similar events have happened elsewhere in Europe during those same hot summers. If global warming continues, this disruption is likely to increase; alternatively, station operators could retrofit other means of cooling, such as cooling towers, although these are frequently large structures and therefore sometimes unpopular with the public.

Comparison with renewable energy

There is an ongoing debate on the relative benefits of nuclear power compared to renewable energy sources for the generation of low-carbon electricity. Proponents of renewable energy argue that wind power and solar power are already cheaper and safer than nuclear power.
Nuclear power proponents argue that renewable energy sources such as wind and solar do not offer the scalability necessary for large-scale decarbonization of the electric grid, mainly due to their intermittency. Although the majority of installed renewable energy across the world is currently in the form of hydro power, solar and wind power are growing at a much higher pace, especially in developed countries. Several studies report that it is in principle possible to cover most energy generation with renewable sources. The Intergovernmental Panel on Climate Change (IPCC) has said that if governments were supportive, and the full complement of renewable energy technologies were deployed, renewable energy supply could account for almost 80% of the world's energy use within forty years. Rajendra Pachauri, chairman of the IPCC, said the necessary investment in renewables would cost only about 1% of global GDP annually. This approach could contain greenhouse gas levels to less than 450 parts per million, the level beyond which climate change is considered to become catastrophic and irreversible. However, other studies suggest that solar and wind energy are not cost-effective compared to nuclear power. The Brookings Institution published The Net Benefits of Low and No-Carbon Electricity Technologies in 2014, which states, after performing an energy and emissions cost analysis, that "The net benefits of new nuclear, hydro, and natural gas combined cycle plants far outweigh the net benefits of new wind or solar plants", with nuclear power determined to be the most cost-effective low-carbon power technology. Nuclear power is also proposed as a tested and practical way to implement a low-carbon energy infrastructure, as opposed to renewable sources. A 2015 analysis by Barry W. Brook, Professor and Chair of Environmental Sustainability, and his colleagues on replacing fossil fuels entirely in the world's electric grid determined that, at the historically modest and proven rate at which nuclear energy was added to and replaced fossil fuels in France and Sweden during each nation's building programs in the 1980s, nuclear energy could displace fossil fuels from the electric grid completely within 10 years, "allow[ing] the world to meet the most stringent greenhouse-gas mitigation targets". In a similar analysis, Brook had earlier determined that 50% of all global energy, not solely electricity but also transportation synfuels and the like, could be generated within approximately 30 years if the global nuclear fission build rate were identical to each of these nations' already proven installation rates, measured in installed nameplate capacity per year per unit of global GDP (GW/year/$). This is in contrast to conceptual studies for a 100% renewable energy world, which would require global investment per year that is orders of magnitude more costly and has no historical precedent, along with far greater land area devoted to wind, wave, and solar projects, and the inherent assumption that humanity will use less, and not more, energy in the future. As Brook notes, the "principal limitations on nuclear fission are not technical, economic or fuel-related, but are instead linked to complex issues of societal acceptance, fiscal and political inertia, and inadequate critical evaluation of the real-world constraints facing [the other] low-carbon alternatives."
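The per-GDP build-rate scaling used in analyses of this kind can be illustrated with a short sketch. Every number below is an assumed round figure chosen for demonstration only, not a value from the Brook study.

```python
# Illustrative scaling of a historical per-GDP nuclear build rate to the world,
# in the spirit of the deployment-rate argument described above.
# All numbers are assumed round figures, not values from the cited analysis.

historical_build_gw_per_year = 3.0        # assumed peak national build rate
historical_gdp_trillion_usd = 1.0         # assumed GDP of that nation at the time
world_gdp_trillion_usd = 80.0             # assumed present world GDP
fossil_capacity_to_replace_gw = 4000.0    # assumed fossil-fuelled capacity to displace

# Express the historical rate per unit of GDP, then scale it to world GDP.
rate_per_gdp = historical_build_gw_per_year / historical_gdp_trillion_usd
world_rate_gw_per_year = rate_per_gdp * world_gdp_trillion_usd

years_needed = fossil_capacity_to_replace_gw / world_rate_gw_per_year
print(f"~{world_rate_gw_per_year:.0f} GW/year worldwide -> ~{years_needed:.0f} years to displace the assumed capacity")
```

With different assumed inputs the answer shifts accordingly; the point of the exercise is only to show how a national historical build rate is extrapolated to a global timescale.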
Several studies conclude that wind and solar power have costs comparable to or lower than those of nuclear power when considering price per kWh. The cost of constructing established nuclear power reactor designs has followed an increasing trend due to regulations and court cases, whereas the levelized cost of electricity (LCOE) is declining for wind and solar power. In 2010 a report from solar researchers at Duke University suggested that solar power is already cheaper than nuclear power, though they state that if subsidies were removed for solar power, the crossover point would be delayed by years. Data from the EIA in 2011 estimated that in 2016 solar would have a levelized cost of electricity almost twice that of nuclear (21¢/kWh for solar versus 11.39¢/kWh for nuclear), and wind somewhat less expensive than nuclear (9.7¢/kWh). However, the US EIA has also cautioned that levelized costs of intermittent sources such as wind and solar are not directly comparable to costs of "dispatchable" sources (those that can be adjusted to meet demand), as intermittent sources need costly large-scale back-up power supplies for when the weather changes. A 2010 study by the Global Subsidies Initiative compared global relative energy subsidies, or government financial aid for the deployment of different energy sources. Results show that fossil fuels receive about 1 US cent per kWh of energy they produce, nuclear energy receives 1.7 cents/kWh, renewable energy (excluding hydroelectricity) receives 5.0 cents/kWh, and biofuels receive 5.1 cents/kWh in subsidies. Nuclear power is comparable to, and in some cases lower than, many renewable energy sources in terms of lives lost per unit of electricity delivered. However, as opposed to renewable energy, conventional designs for nuclear reactors produce intensely radioactive spent fuel that needs to be stored or reprocessed. A nuclear plant also needs to be disassembled and removed, and much of the disassembled plant needs to be stored as low-level nuclear waste for a few decades. The financial costs of every nuclear power plant continue for some time after the facility has finished generating its last useful electricity. Once no longer economically viable, nuclear reactors and uranium enrichment facilities are generally decommissioned, returning the facility and its parts to a level safe enough to be entrusted for other uses, such as greenfield status. After a cooling-off period that may last decades, reactor core materials are dismantled and cut into small pieces to be packed in containers for interim storage or transmutation experiments. The consensus approach to the task is relatively inexpensive, but it has the potential to be hazardous to the natural environment, as it presents opportunities for human error, accidents, or sabotage. In the USA, under the Nuclear Waste Policy Act and Nuclear Decommissioning Trust Fund requirements, utilities bank 0.1 to 0.2 cents/kWh during operation to fund future decommissioning, and they must report regularly to the Nuclear Regulatory Commission (NRC) on the status of their decommissioning funds. About 70% of the total estimated cost of decommissioning all US nuclear power reactors has already been collected (on the basis of an average cost of $320 million per reactor-steam turbine unit). In the U.S. in 2011, there were 13 reactors that had permanently shut down and were in some phase of decommissioning.
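A rough check of the fund arithmetic implied by the per-kWh levy and the average decommissioning cost quoted above follows; plant size, capacity factor, and lifetime are assumed round numbers, not regulatory figures.

```python
# Rough check of the decommissioning-fund arithmetic implied above.
# Plant size, capacity factor, and lifetime are assumed round numbers.

levy_usd_per_kwh = 0.0015        # mid-point of the 0.1-0.2 cents/kWh range quoted
plant_capacity_kw = 1_000_000    # assumed 1 GW unit
capacity_factor = 0.9            # assumed
lifetime_years = 40              # assumed

lifetime_kwh = plant_capacity_kw * capacity_factor * 24 * 365 * lifetime_years
fund_usd = lifetime_kwh * levy_usd_per_kwh

# Compare with the ~$320 million average decommissioning cost per unit cited above.
print(f"accumulated over the plant's life: ~${fund_usd/1e6:.0f} million (before investment returns)")
```

Under these assumptions the levy accumulates an amount of the same order as the cited average cost per unit, which is the intent of the funding requirement.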
Connecticut Yankee Nuclear Power Plant and Yankee Rowe Nuclear Power Station completed the process in 2006–2007, after ceasing commercial electricity production circa 1992. The majority of that roughly 15-year period was used to allow the stations to cool down naturally on their own, which makes the manual disassembly process both safer and cheaper. Decommissioning at nuclear sites which have experienced a serious accident is the most expensive and time-consuming. Nuclear power works under an insurance framework that limits or structures accident liabilities in accordance with the Paris Convention on Nuclear Third Party Liability, the Brussels Supplementary Convention, and the Vienna Convention on Civil Liability for Nuclear Damage, and, in the U.S., the Price-Anderson Act. It is often argued that this potential shortfall in liability represents an external cost not included in the cost of nuclear electricity; but the cost is small, amounting to about 0.1% of the levelized cost of electricity, according to a CBO study. These beyond-regular-insurance costs for worst-case scenarios are not unique to nuclear power: hydroelectric power plants are similarly not fully insured against a catastrophic event such as the Banqiao Dam disaster, where 11 million people lost their homes and from 30,000 to 200,000 people died, or large dam failures in general. As private insurers base dam insurance premiums on limited scenarios, major disaster insurance in this sector is likewise provided by the state.

Debate on nuclear power

The nuclear power debate concerns the controversy which has surrounded the deployment and use of nuclear fission reactors to generate electricity from nuclear fuel for civilian purposes. The debate about nuclear power peaked during the 1970s and 1980s, when it "reached an intensity unprecedented in the history of technology controversies" in some countries. Proponents of nuclear energy contend that nuclear power is a sustainable energy source that reduces carbon emissions and increases energy security by decreasing dependence on imported energy sources. Proponents claim that nuclear power produces virtually no conventional air pollution, such as greenhouse gases and smog, in contrast to the chief viable alternative of fossil fuel. Nuclear power can produce base-load power, unlike many renewables, which are intermittent energy sources lacking large-scale and cheap ways of storing energy. M. King Hubbert saw oil as a resource that would run out, and proposed nuclear energy as a replacement energy source. Proponents claim that the risks of storing waste are small and can be further reduced by using the latest technology in newer reactors, and that the operational safety record in the Western world is excellent when compared to the other major kinds of power plants. Opponents believe that nuclear power poses many threats to people and the environment. These threats include the problems of processing, transport, and storage of radioactive nuclear waste, the risk of nuclear weapons proliferation and terrorism, as well as health risks and environmental damage from uranium mining. They also contend that reactors themselves are enormously complex machines where many things can and do go wrong, and there have been serious nuclear accidents. Critics do not believe that the risks of using nuclear fission as a power source can be fully offset through the development of new technology.
In years past, they also argued that, when all the energy-intensive stages of the nuclear fuel chain are considered, from uranium mining to nuclear decommissioning, nuclear power is neither a low-carbon nor an economical electricity source.

Use in space

Both fission and fusion appear promising for space propulsion applications, generating higher mission velocities with less reaction mass. This is due to the much higher energy density of nuclear reactions: some 7 orders of magnitude (10,000,000 times) more energetic than the chemical reactions which power the current generation of rockets. Radioactive decay has been used on a relatively small scale (a few kW), mostly to power space missions and experiments, by using radioisotope thermoelectric generators such as those developed at Idaho National Laboratory.

Advanced fission reactor designs

Current fission reactors in operation around the world are second- or third-generation systems, with most of the first-generation systems having been retired some time ago. Research into advanced generation IV reactor types was officially started by the Generation IV International Forum (GIF) based on eight technology goals, including improving nuclear safety, improving proliferation resistance, minimizing waste, improving natural resource utilization, the ability to consume existing nuclear waste in the production of electricity, and decreasing the cost to build and run such plants. Most of these reactors differ significantly from current operating light water reactors, and are generally not expected to be available for commercial construction before 2030. The nuclear reactors to be built at Vogtle are new AP1000 third-generation reactors, which are said to have safety improvements over older power reactors. However, John Ma, a senior structural engineer at the NRC, is concerned that some parts of the AP1000 steel skin are so brittle that the "impact energy" from a plane strike or storm-driven projectile could shatter the wall. Edwin Lyman, a senior staff scientist at the Union of Concerned Scientists, is concerned about the strength of the steel containment vessel and the concrete shield building around the AP1000. The Union of Concerned Scientists has referred to the EPR reactor, currently under construction in China, Finland, and France, as the only new reactor design under consideration in the United States that "...appears to have the potential to be significantly safer and more secure against attack than today's reactors." One disadvantage of any new reactor technology is that safety risks may be greater initially, as reactor operators have little experience with the new design. Nuclear engineer David Lochbaum has explained that almost all serious nuclear accidents have occurred with what was at the time the most recent technology. He argues that "the problem with new reactors and accidents is twofold: scenarios arise that are impossible to plan for in simulations; and humans make mistakes". As one director of a U.S. research laboratory put it, "fabrication, construction, operation, and maintenance of new reactors will face a steep learning curve: advanced technologies will have a heightened risk of accidents and mistakes. The technology may be proven, but people are not".

Hybrid nuclear fusion-fission

Hybrid nuclear power is a proposed means of generating power by use of a combination of nuclear fusion and fission processes.
The concept dates to the 1950s, and was briefly advocated by Hans Bethe during the 1970s, but largely remained unexplored until a revival of interest in 2009, due to delays in the realization of pure fusion. When a sustained nuclear fusion power plant is built, it has the potential to extract all the fission energy that remains in spent fission fuel, reducing the volume of nuclear waste by orders of magnitude and, more importantly, eliminating all actinides present in the spent fuel, substances which cause security concerns. Nuclear fusion reactions have the potential to be safer and to generate less radioactive waste than fission. These reactions appear potentially viable, though technically quite difficult, and have yet to be created on a scale that could be used in a functional power plant. Fusion power has been under theoretical and experimental investigation since the 1950s. Construction of the ITER facility began in 2007, but the project has run into many delays and budget overruns. The facility is now not expected to begin operations until the year 2027, 11 years after initially anticipated. A follow-on commercial nuclear fusion power station, DEMO, has been proposed. There are also suggestions for a power plant based upon a different fusion approach, that of an inertial fusion power plant. Fusion-powered electricity generation was initially believed to be readily achievable, as fission-electric power had been. However, the extreme requirements for continuous reactions and plasma containment led to projections being extended by several decades. In 2010, more than 60 years after the first attempts, commercial power production was still believed to be unlikely before 2050. - Nuclear Energy: Statistics, Dr. Elizabeth Ervin - "An oasis filled with grey water". NEI Magazine. 2013-06-25. - Topical issues of infrastructure development IAEA 2012 - "2014 Key World Energy Statistics" (PDF). International Energy Agency. 2014. p. 24. Archived (PDF) from the original on 2015-05-05. - "Nuclear Energy". Energy Education is an interactive curriculum supplement for secondary-school science students, funded by the U. S. Department of Energy and the Texas State Energy Conservation Office (SECO). U. S. Department of Energy and the Texas State Energy Conservation Office (SECO). July 2010. Archived from the original on 2011-02-26. Retrieved 2010-07-10. - "Collectively, life cycle assessment literature shows that nuclear power is similar to other renewable and much lower than fossil fuel in total life cycle GHG emissions." Nrel.gov. 2013-01-24. Archived from the original on 2013-07-02. Retrieved 2013-06-22. - Life Cycle Assessment Harmonization Results and Findings. Figure 1. Archived 2017-05-06 at the Wayback Machine. - Kharecha, Pushker A. (2013). "Prevented Mortality and Greenhouse Gas Emissions from Historical and Projected Nuclear Power – global nuclear power has prevented an average of 1.84 million air pollution-related deaths and 64 gigatonnes of CO2-equivalent (GtCO2-eq) greenhouse gas (GHG) emissions that would have resulted from fossil fuel burning". Environmental Science. Pubs.acs.org. 47 (9): 4889–4895. Bibcode:2013EnST...47.4889K. doi:10.1021/es3051197. - Union-Tribune Editorial Board (2011-03-27). "The nuclear controversy". Union-Tribune. San Diego. - James J. MacKenzie. Review of The Nuclear Power Controversy by Arthur W. Murphy. The Quarterly Review of Biology, Vol. 52, No. 4 (Dec., 1977), pp. 467–468.
- In February 2010 the nuclear power debate played out on the pages of The New York Times, see A Reasonable Bet on Nuclear Power and Revisiting Nuclear Power: A Debate and A Comeback for Nuclear Power? - U.S. Energy Legislation May Be 'Renaissance' for Nuclear Power Archived 2009-06-26 at the Wayback Machine. - "Nuclear Power". Nc Warn. Retrieved 2013-06-22. - Sturgis, Sue. "Investigation: Revelations about Three Mile Island disaster raise doubts over nuclear plant safety". Southernstudies.org. Archived from the original on 2010-04-18. Retrieved 2010-08-24. - Strengthening the Safety of Radiation Sources Archived 2009-06-08 at WebCite p. 14. - Johnston, Robert (2007-09-23). "Deadliest radiation accidents and other events causing radiation casualties". Database of Radiological Incidents and Related Events. - Markandya, A.; Wilkinson, P. (2007). "Electricity generation and health". Lancet. 370 (9591): 979–990. doi:10.1016/S0140-6736(07)61253-7. PMID 17876910. - Nuclear power has lower electricity-related health risks than coal, oil, and gas: "...the health burdens are appreciably smaller for generation from natural gas, and lower still for nuclear power." This study includes the latent or indirect fatalities, for example those caused by the inhalation of fossil-fuel-created particulate matter, smog-induced cardiopulmonary events, black lung, etc., in its comparison. - Gohlke JM; et al. (2008). "Health, Economy, and Environment: Sustainable Energy Choices for a Nation". Environmental Health Perspectives. 116 (6): A236–A237. doi:10.1289/ehp.11602. PMID 18560493. - "Dr. MacKay Sustainable Energy without the hot air". Data from studies by the Paul Scherrer Institute including non-EU data. p. 168. Retrieved 2012-09-15. - https://www.forbes.com/sites/jamesconca/2012/06/10/energys-deathprint-a-price-always-paid/ With Chernobyl's total predicted linear no-threshold cancer deaths included, nuclear power is safer when compared to many alternative energy sources' immediate death rate. - Brendan Nicholson (2006-06-05). "Nuclear power 'cheaper, safer' than coal and gas". Melbourne: The Age. Retrieved 2008-01-18. - Burgherr, P.; Hirschberg, S. (2008). "A Comparative Analysis of Accident Risks in Fossil, Hydro, and Nuclear Energy Chains" (PDF). Human and Ecological Risk Assessment: An International Journal. 14 (5): 947. doi:10.1080/10807030802387556. Pages 962 to 965, comparing nuclear's latent cancer deaths with other energy sources' immediate deaths per unit of energy generated (GWeyr). This study does not include fossil-fuel-related cancer and other indirect deaths created by the use of fossil fuel consumption in its "severe accident" (an accident with more than 5 fatalities) classification. - The Database on Nuclear Power Reactors. The Power Reactor Information System (PRIS), developed and maintained by the IAEA for over four decades, is a comprehensive database focusing on nuclear power plants worldwide. - "GIF Portal – Home – Public". www.gen-4.org. Retrieved 2016-07-25. - "Moonshine". Atomicarchive.com. Retrieved 2013-06-22. - "The Atomic Solar System". Atomicarchive.com. Retrieved 2013-06-22. - "What do you mean by Induced Radioactivity?". Thebigger.com. Retrieved 2013-06-22. - "Neptunium". Vanderkrogt.net. Retrieved 2013-06-22. - "Otto Hahn, The Nobel Prize in Chemistry, 1944". Nobelprize.org. Retrieved 2007-11-01. - "Otto Hahn, Fritz Strassmann, and Lise Meitner". Science History Institute. Retrieved March 20, 2018.
- "Otto Robert Frisch". Nuclearfiles.org. Retrieved 2007-11-01. - "The Einstein Letter". Atomicarchive.com. Retrieved 2013-06-22. - The Atomic Age Opens ALSOS digital library of nuclear issues. "The book does not always clearly indicate which words are being quoted rather than edited or added" - THE ATOMIC AGE OPENS. Prepared by the Editors of Pocket Books. 252 pages. New York: Pocket Books, Inc. August 1945. QC173 .P55 1945 FIRST EDITION, first issue, of the first mass market account of the atomic bomb - Bain, Alastair S.; et al. (1997). Canada enters the nuclear age: a technical history of Atomic Energy of Canada. Magill-Queen's University Press. p. ix. ISBN 0-7735-1601-8. - "Reactors Designed by Argonne National Laboratory: Fast Reactor Technology". U.S. Department of Energy, Argonne National Laboratory. 2012. Retrieved 2012-07-25. - "Reactor Makes Electricity." Popular Mechanics, March 1952, p. 105. - "STR (Submarine Thermal Reactor) in "Reactors Designed by Argonne National Laboratory: Light Water Reactor Technology Development"". U.S. Department of Energy, Argonne National Laboratory. 2012. Retrieved 2012-07-25. - "From Obninsk Beyond: Nuclear Power Conference Looks to Future". International Atomic Energy Agency. Retrieved 2006-06-27. - "Nuclear Power in Russia". World Nuclear Association. Retrieved 2006-06-27. - "This Day in Quotes: SEPTEMBER 16 – Too cheap to meter: the great nuclear quote debate". This day in quotes. 2009. Retrieved 2009-09-16. - Pfau, Richard (1984) No Sacrifice Too Great: The Life of Lewis L. Strauss University Press of Virginia, Charlottesville, Virginia, p. 187 ISBN 978-0-8139-1038-3 - David Bodansky (2004). Nuclear Energy: Principles, Practices, and Prospects. Springer. p. 32. ISBN 978-0-387-20778-0. Retrieved 2008-01-31. - Kragh, Helge (1999). Quantum Generations: A History of Physics in the Twentieth Century. Princeton NJ: Princeton University Press. p. 286. ISBN 0-691-09552-3. - "On This Day: October 17". BBC News. 1956-10-17. Retrieved 2006-11-09. - "50 Years of Nuclear Energy" (PDF). International Atomic Energy Agency. Retrieved 2006-11-09. - McKeown, William (2003). Idaho Falls: The Untold Story of America's First Nuclear Accident. Toronto: ECW Press. ISBN 978-1-55022-562-4. - The Changing Structure of the Electric Power Industry p. 110. - Bernard L. Cohen (1990). The Nuclear Energy Option: An Alternative for the 90s. New York: Plenum Press. ISBN 978-0-306-43567-6. - "Evolution of Electricity Generation by Fuel" (PDF). (39.4 KB) - Sharon Beder, 'The Japanese Situation', English version of conclusion of Sharon Beder, "Power Play: The Fight to Control the World's Electricity", Soshisha, Japan, 2006. - Garb Paula (1999). "Review of Critical Masses". Journal of Political Ecology. 6. - Rüdig, Wolfgang, ed. (1990). Anti-nuclear Movements: A World Survey of Opposition to Nuclear Energy. Detroit, MI: Longman Current Affairs. p. 1. ISBN 0-8103-9000-0. - Brian Martin. Opposing nuclear power: past and present, Social Alternatives, Vol. 26, No. 2, Second Quarter 2007, pp. 43–47. - Stephen Mills and Roger Williams (1986). Public Acceptance of New Technologies Routledge, pp. 375–376. - Robert Gottlieb (2005). Forcing the Spring: The Transformation of the American Environmental Movement, Revised Edition, Island Press, USA, p. 237. - Falk, Jim (1982). Global Fission: The Battle Over Nuclear Power. Melbourne: Oxford University Press. pp. 95–96. ISBN 978-0-19-554315-5. - Walker, J. Samuel (2004). 
Three Mile Island: A Nuclear Crisis in Historical Perspective (Berkeley: University of California Press), pp. 10–11. - Herbert P. Kitschelt (1986). "Political Opportunity and Political Protest: Anti-Nuclear Movements in Four Democracies" (PDF). British Journal of Political Science. 16 (1): 57. doi:10.1017/s000712340000380x. - Herbert P. Kitschelt (1986). "Political Opportunity and Political Protest: Anti-Nuclear Movements in Four Democracies" (PDF). British Journal of Political Science. 16 (1): 71. doi:10.1017/s000712340000380x. - Social Protest and Policy Change p. 45. - "The Political Economy of Nuclear Energy in the United States" (PDF). Social Policy. The Brookings Institution. 2004. Archived from the original (PDF) on 2007-11-03. Retrieved 2006-11-09. - Nuclear Power: Outlook for New U.S. Reactors p. 3. - "Nuclear Follies". Forbes magazine. 1985-02-11. - "Backgrounder on Chernobyl Nuclear Power Plant Accident". Nuclear Regulatory Commission. Retrieved 2006-06-28. - "RBMK Reactors | reactor bolshoy moshchnosty kanalny | Positive void coefficient". World-nuclear.org. 2009-09-07. Retrieved 2013-06-14. - "Italy rejoins the nuclear family". World Nuclear News. 2009-07-10. Retrieved 2009-07-17. - "Italy puts one year moratorium on nuclear". 2011-03-13. - "Italy nuclear: Berlusconi accepts referendum blow". BBC News. 2011-06-14. - "Olkiluoto pipe welding 'deficient', says regulator". World Nuclear News. 2009-10-16. Retrieved 2010-06-08. - Kinnunen, Terhi (2010-07-01). "Finnish parliament agrees plans for two reactors". Reuters. Retrieved 2010-07-02. - "Olkiluoto 3 delayed beyond 2014". World Nuclear News. 2012-07-17. Retrieved 2012-07-24. - "Finland's Olkiluoto 3 nuclear plant delayed again". BBC. 2012-07-16. Retrieved 2012-08-10. - "PRIS - Trend reports - Electricity Supplied". www.iaea.org. Retrieved 11 December 2017. - "The Nuclear Renaissance". World Nuclear Association. Retrieved 2014-01-24. - Trevor Findlay (2010). The Future of Nuclear Energy to 2030 and its Implications for Safety, Security and Nonproliferation: Overview Archived 2013-05-12 at the Wayback Machine., The Centre for International Governance Innovation (CIGI), Waterloo, Ontario, Canada, pp. 10–11. - Mycle Schneider, Steve Thomas, Antony Froggatt, and Doug Koplow (August 2009). The World Nuclear Industry Status Report 2009 Archived 2011-04-24 at the Wayback Machine. Commissioned by German Federal Ministry of Environment, Nature Conservation and Reactor Safety, p. 5. - Sylvia Westall & Fredrik Dahl (2011-06-24). "IAEA Head Sees Wide Support for Stricter Nuclear Plant Safety". Scientific American. Archived from the original on 2011-06-25. - Nuclear Renaissance Threatened as Japan’s Reactor Struggles Bloomberg, published March 2011, accessed 2011-03-14 - Analysis: Nuclear renaissance could fizzle after Japan quake Reuters, published 2011-03-14, accessed 2011-03-14 - Japan nuclear woes cast shadow over U.S. energy policy Reuters, published 2011-03-13, accessed 2011-03-14 - Nuclear winter? Quake casts new shadow on reactors MarketWatch, published 2011-03-14, accessed 2011-03-14 - Will China's nuclear nerves fuel a boom in green energy? Channel 4, published 2011-03-17, accessed 2011-03-17 - Jo Chandler (2011-03-19). "Is this the end of the nuclear revival?". The Sydney Morning Herald. - Aubrey Belford (2011-03-17). "Indonesia to Continue Plans for Nuclear Power". The New York Times. 
<urn:uuid:b461164c-983d-4665-b0ff-ba4f03f5f3a2>
3.96875
28,372
Knowledge Article
Science & Tech.
51.278759
95,536,471
System programming is characterized by the fact that it aims to produce system software that provides services to the computer hardware or specialized system services. System programming also frequently deals directly with peripheral devices, with a focus on input, processing (storage), and output.
The essential characteristics of system programming are as follows:
- Programmers are expected to know the hardware and the internal behavior of the computer system on which the program will run. System programmers exploit these known hardware properties to write software for specific hardware using efficient algorithms.
- It uses a low-level programming language or a programming dialect close to the machine.
- It requires little runtime overhead and can execute in a resource-constrained environment.
- The resulting programs are very efficient, with small or no runtime library requirements.
- It has access to system resources, including memory.
- It can be written in assembly language.
The following are the limiting factors of system programming:
- System programs often cannot be run in debugging mode.
- Only a limited programming facility is available, which demands high skill from the system programmer.
- The runtime library, if available at all, is less powerful and offers weaker error checking.
The amount of space allocated for all possible addresses of data and other computational entities is called the address space. The address space is governed by the architecture and managed by the operating system. Computational entities such as a device, file, server, or networked computer are all addressed within this space. There are two types of address space:
Physical Address Space
The physical address space is the collection of all physical addresses produced by a computer program and provided by the hardware. Every machine has its own physical address space, with a valid address range between 0 and some maximum limit supported by the machine.
Logical Address Space
The logical address space is generated by the CPU or provided by the OS kernel; it is also called the virtual address space. In the virtual address space there is one address space per process, which may or may not start at zero, extending up to the highest address.
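The per-process nature of the logical address space can be made concrete with a small sketch. This is an illustrative example rather than part of the original text: it assumes CPython on a Unix-like system, where `id()` returns an object's virtual address and `os.fork()` is available. Parent and child report the same virtual address for the same buffer, yet after the child writes to it the contents diverge, so the shared virtual address must be backed by different physical memory in each process.

```python
# Minimal sketch, assuming CPython on a Unix-like OS (os.fork is unavailable on Windows).
# In CPython, id() happens to be the object's virtual address, which makes the
# per-process logical address space easy to observe.
import os

data = bytearray(b"system programming")
print("parent: buffer at virtual address", hex(id(data)))

pid = os.fork()                     # child starts with a copy of the parent's address space
if pid == 0:                        # child process
    data[:6] = b"kernel"            # the write forces a private physical copy of the page
    print("child : buffer at virtual address", hex(id(data)), "contents:", bytes(data))
    os._exit(0)
else:
    os.waitpid(pid, 0)
    print("parent: buffer unchanged at       ", hex(id(data)), "contents:", bytes(data))
```

Both processes print the same virtual address, but only the child sees the modified contents, which is exactly the point: one logical address space per process, mapped onto distinct physical memory.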
<urn:uuid:38f4874d-8bf5-4269-82ec-3d775b3dbefb>
3.984375
417
Knowledge Article
Software Dev.
17.704708
95,536,472
We theoretically investigated the optical properties of the lesser-known Archimedean photonic crystal. The structure is two dimensional and made of dielectric GaAs rods in air. The calculations of the band structures, equifrequency contours, and simulations of the wave propagation through the structure were performed by the plane wave expansion and finite-difference time-domain methods. With analysis of the gap map and equifrequency contours we obtained frequency ranges for best performance for wave guiding. For those frequency ranges, we designed a new type of waveguide for possible applications in integrated optics. In addition, negative refraction was exhibited by the structure. Djordje M. Jovanovic, "Optical properties of the hexagonal Archimedean photonic crystal," Journal of Nanophotonics 5(1), 051820 (1 January 2011). https://doi.org/10.1117/1.3611019
<urn:uuid:44b97636-1e9d-4eb7-a508-705580022883>
2.65625
209
Academic Writing
Science & Tech.
42.578235
95,536,495
The tetramethylbenzenes constitute a group of aromatic hydrocarbons whose structure consists of a benzene ring with four methyl groups (–CH3) as substituents. Through their different arrangements, they form three structural isomers with the molecular formula C10H14. They also belong to the group of C4-benzenes. The best-known isomer is durene.
Tetramethylbenzenes
|Common name|prehnitene|isodurene|durene|
|Systematic name|1,2,3,4-tetramethylbenzene|1,2,3,5-tetramethylbenzene|1,2,4,5-tetramethylbenzene|
|CAS Registry Number|488-23-3|527-53-7|95-93-2|
<urn:uuid:dae13e42-6e63-474e-be87-c1bb94b1e200>
2.625
178
Knowledge Article
Science & Tech.
34.996497
95,536,510
The management of urban groundwater resources is directly linked to urban water supply and drainage concepts. A proper integration of groundwater into urban water management plans is recommended for long-term planning. The paper describes the development of a new modelling suite which addresses the urban water and solute balance in a holistic way. Special focus has been placed on the assessment of the impact of sewer leakage on groundwater in four case study cities. Tools for the prediction of sewer leakage including the assessment of uncertainties are now available. Field investigations in four European case study cities were able to trace the influence of sewer leakage on urban groundwater using microbiological indicators and pharmaceutical residues. Research Article|September 01 2006 Integrating groundwater into urban water management Water Sci Technol (2006) 54 (6-7): 395-403. L. Wolf, J. Klinger, I. Held, H. Hötzl; Integrating groundwater into urban water management. Water Sci Technol 1 September 2006; 54 (6-7): 395–403. doi: https://doi.org/10.2166/wst.2006.614
<urn:uuid:5614fdad-f2af-4fb6-b093-0f9efb6adf30>
2.71875
229
Truncated
Science & Tech.
48.333628
95,536,511
Carbon from these sources is very low in C-14 because the sources are so old and have not been mixed with fresh carbon from the air. Thus, a freshly killed mussel has far less C-14 than a freshly killed organism of another kind, which is why the C-14 dating method makes freshwater mussels seem older than they really are. Does carbon dating prove the earth is millions of years old? How accurate are Carbon-14 and other radioactive dating methods? Whenever the worldview of evolution is questioned, the topic of carbon dating always comes up. Here is how carbon dating works and the assumptions it is based on (a worked sketch of the age formula follows below). When dating wood there is no such problem, because wood gets its carbon straight from the air, complete with a full dose of C-14. So, if we measure the rate of beta decay in an organic sample, we can calculate how old the sample is. Question: Keith and Anderson radiocarbon-dated the shell of a living freshwater mussel and obtained an age of over two thousand years. ICR creationists claim that this discredits C-14 dating. Answer: It does discredit the C-14 dating of freshwater mussels, but that's about all. Materials recovered from wet earth inevitably have been invaded by water. What is carbon dating? Carbon is one of the chemical elements. Along with hydrogen, nitrogen, oxygen, phosphorus, and sulfur, carbon is a building block of the molecules of life.
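The age calculation alluded to above follows directly from the exponential decay law. The sketch below is illustrative only: the 5,730-year half-life is the commonly quoted figure, and the "mussel" starting fraction is a made-up number chosen to show the reservoir effect that makes freshwater shells date too old.

```python
# Hedged sketch of the C-14 age formula: t = ln(N0/N) / lambda, with
# lambda = ln(2) / half-life.  The numbers below are illustrative, not measurements.
import math

HALF_LIFE_C14 = 5730.0                         # years (commonly cited value)
DECAY_CONSTANT = math.log(2) / HALF_LIFE_C14   # per year

def apparent_age(remaining_fraction, initial_fraction=1.0):
    """Years implied by the fraction of C-14 left, relative to the assumed starting level."""
    return math.log(initial_fraction / remaining_fraction) / DECAY_CONSTANT

# A sample retaining 80% of the atmospheric C-14 level dates to roughly 1,800 years.
print(round(apparent_age(0.80)))    # about 1845

# A living mussel that only ever took up 90% of the atmospheric level (old carbon
# dissolved in the water) already "dates" to roughly 870 years at the moment it dies.
print(round(apparent_age(0.90)))    # about 871
```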
<urn:uuid:0338201d-b59c-4408-87fb-c52884e81e05>
3
410
Spam / Ads
Science & Tech.
55.253679
95,536,542
Every spring and fall, billions of birds migrate across the United States, largely unseen under the cover of darkness. Now a team of researchers led by computer scientist Daniel Sheldon at the University of Massachusetts Amherst plan to develop new analytic methods with data collected over the past 20 years—more than 200 million archived radar scans from the national weather radar network—to provide powerful new tools for tracking migration. Sheldon says, "The Dark Ecology Project will develop new resources allowing us to estimate the densities of migrating birds over the U.S. each year for the last 25 years." His collaboration with computer vision expert Subhransu Maji at UMass Amherst and Steven Kelling, director of information science at the Cornell Laboratory of Ornithology, Ithaca, N.Y., is supported by a three-year, $903,300 National Science Foundation grant to UMass Amherst and $309,000 to Cornell. Sheldon has collaborated with scientists at the Cornell Lab of Ornithology since 2009, when he was a Ph.D. student at Cornell. Kelling's information science team developed eBird, a citizen science project that collects observations from birdwatchers across the globe. The researchers use big data methods to piece together eBird observations to reveal complex patterns of bird occurrence and to guide international bird conservation efforts such as the 2016 State of North America's Birds report. Maji's group has developed computer vision techniques for fine-grained categorization, which are already helping citizens connect with nature by automatically recognizing species of birds, animals, and other organisms in photographs. A long-term vision for the new grant, Sheldon explains, is to combine these new data resources to provide a detailed continent-wide view of bird migration. He says, "eBird data can tell us about bird distributions and which species are present at different locations and times of year, while radar data can tell us how birds are moving over the continent throughout the year." They chose to name the project 'Dark Ecology' to allude to dark matter in the universe and the idea that "a lot of the science waiting to be discovered is hidden from our direct view," he adds. Ornithologists, scientists who study birds, have known for decades that the U.S. weather radar network is sensitive enough to detect birds flying at night, and some researchers have used the data for studies. But such research has been limited because of the difficulties involved. Sheldon explains that not only has it not been easy to gain access and download data, but "there are millions of images, and analyzing them requires a human expert to look at every image in order to use the information in a study. Because of the human processing involved, it's a very slow process and it's been beyond the scope of what most people can do." He and colleagues propose to automate the process by developing new big data handling techniques. Access to the radar scans was enhanced in 2015, when Amazon Web Services reached a research agreement with the U.S. National Oceanic and Atmospheric Administration to increase the amount of NOAA data that is made available via the cloud. This made NEXRAD data accessible at a much lower cost. To build on this open access, Sheldon and Maji will use machine learning, computer vision and probabilistic inference techniques to teach computers to take over analyses that used to require human manual labor. One key will be to design algorithms to screen out rain, Sheldon says. 
"Recent advances in machine learning and computer vision will let us teach the computer how to identify rain, birds, insects, location of bird roosts and other biological phenomena of interest to ecologists." "We also plan to develop algorithms to extract more information from the radar data," he adds. "Current methods produce point-based estimates of migration at a particular station because they don't know how to deal with gaps in radar coverage. This means they throw out a huge amount of data that does exist between stations. We would like to develop machine learning algorithms to infer what is happening in the gaps to produce spatially detailed maps of migration density." The scientists plan to make the resulting dataset freely available as an information resource for researchers to estimate the number of birds migrating on any given night, measure the patterns and trends of bird populations, and do hypothesis-driven science. "One big goal is to analyze the entire archive to measure density and velocity of migrating birds and make the resulting data available to any scientists who can use it." Explore further: Interactive, open source visualizations of nocturnal bird migrations in near real-time www.nsf.gov/awardsearch/showAw … ?AwardNumber=1661259
<urn:uuid:6a9d6833-8bfb-44ae-8a5a-bf07a595bc4d>
3.546875
960
News Article
Science & Tech.
37.49483
95,536,547
The Agile development model is a type of incremental model: agile software is developed in rapid, incremental cycles, resulting in small incremental releases, and each release is thoroughly tested to ensure that software quality is maintained. The Agile model is used for time-critical applications. The most well-known agile development life cycle model is Extreme Programming (XP). Experts from QMS Academy describe the advantages of the Agile model:
- Customer satisfaction is gained by continuous and rapid delivery of useful software.
- In-person, face-to-face conversation is the best form of communication.
- It enables regular adaptation to changing circumstances.
- It enhances daily co-operation between developers and business people.
- Even late modifications in requirements are welcomed.
- Continuous attention is given to good design and technical excellence.
- Working software is delivered frequently.
The Agile model is used in various situations. When new changes need to be implemented, Agile is a good fit: the freedom it gives to change is very significant, and new changes can be implemented at very little cost because of the frequency of the increments that are produced. The Agile model is also used to implement a new feature: the developers need to lose only the work of a few days, or even a few hours, to roll back and implement it. In the Agile model, very limited planning is required to get started with a project, unlike the waterfall model. Agile acknowledges the fact that the needs of end users are constantly changing in the IT and business world. Changes can be discussed, and features can be added or removed based on feedback. This provides the customers with the finished system they want or need. QMS Academy is a group of professionals and experts who provide several open-house and corporate trainings, Agile being one of them.
<urn:uuid:af38aab6-360e-45c0-bf6f-06677ca4792a>
3.109375
369
Personal Blog
Software Dev.
26.629044
95,536,556
If we look around us, we see a large number of things of different sizes and textures. All of these are matter, and many of them we use in our daily life. Matter can be classified in a number of ways. Ancient Indian philosophers said that matter is made up of five basic elements: air, earth, water, space, and fire. Modern science describes it in two ways:
- Physical properties.
- Chemical properties.
What is matter?
Our universe is made of matter and energy. We are surrounded by material objects such as houses, trees, water, and animals. The presence of these objects can be felt by one or more of our five senses: sight, touch, hearing, taste, and smell. All these objects constitute matter.
- The particles of matter are very, very small in size.
- These particles have spaces between them.
- The particles of matter are constantly moving.
- These particles attract each other.
- Small particles follow a zig-zag path. This motion is called Brownian motion, and it increases as the temperature increases.
It is interesting to note that whereas matter can be seen, energy cannot; energy is only felt in forms such as heat, light, and electricity.
Definition of matter: Matter is defined as anything which occupies space, possesses mass, and can be perceived by one or more of our senses.
What is matter made of?
According to Dalton's theory, the smallest portion of matter is the atom, which cannot be divided into anything smaller. Atoms of the same or different elements combine chemically to form a molecule. The masses of elements are expressed as atomic masses, while the masses of molecules are given in terms of molecular masses.
What is matter in science?
Science studies matter mainly through chemistry, the branch that deals with the composition and the physical and chemical characteristics of material objects. All developments in chemistry are based on the scientific approach: a scientist performs experiments, makes observations, and draws inferences from those observations. For a better and more systematic understanding, chemistry has been divided into many branches. These are:
- Organic Chemistry
- Inorganic Chemistry
- Physical Chemistry
- Analytical Chemistry
Classification of Matter
Since matter exists in countless forms, its study can be simplified by classifying it in two different ways.
- Physical Classification. Depending upon rigidity, volume, and shape, i.e. its physical nature, matter can be classified as solids, liquids, and gases.
- Chemical Classification. Depending upon the chemical composition of the substances, matter may be classified into elements, compounds, and mixtures.
The physical classification, based on the state of the substance, is divided into three:
- Solid state: A solid possesses a definite shape and a definite volume. Examples: metals, wood.
- Liquid state: A liquid possesses a definite volume but no definite shape. Examples: water, milk.
- Gaseous state: A gas has neither a definite shape nor a definite volume. Examples: oxygen, air.
The chemical classification comprises three main groups:
- Element: The simplest form of substance (matter), made up of only one kind of atom. Examples: copper, gold.
- Compound: A pure substance containing two or more elements in a fixed proportion by weight. Example: water.
- Mixture: A combination of two or more substances in any proportion. Examples: air, paint.
All these groups are further classified into sub-types (see the Classification of Matter chart below).
Classification of Matter Chart
On the basis of their characteristic properties, the elements are further classified into metals, non-metals, and metalloids.
Metals
Elements which have a bright luster, are good conductors of heat and electricity, are malleable and ductile, and are hard solids (except mercury) are called metals. For example, gold and iron. Of all the known elements, about 75% are metals.
Non-metals
Elements which do not possess luster (except iodine), are poor conductors of heat and electricity (except graphite), are neither malleable nor ductile but brittle, and exist in the solid, liquid, or gaseous state are called non-metals. Examples: sulphur (solid), bromine (liquid), oxygen (gas).
Metalloids
Elements which have properties of both metals and non-metals are called metalloids. For example, arsenic, antimony, and bismuth.
The properties of a compound are entirely different from the properties of the elements from which it is made. Compounds are of two kinds:
- Organic compounds: those obtained from plants and animals.
- Inorganic compounds: those which contain no carbon atoms. Examples: ammonia, hydrogen sulfide.
Mixtures are likewise of two kinds:
- Homogeneous mixture: a mixture in which the composition is uniform throughout. Such mixtures are also called solutions; alloys like solder, steel, and bronze are examples.
- Heterogeneous mixture: a mixture in which the different components do not mix uniformly. In fact, most mixtures are heterogeneous. For example, gunpowder is a mixture of carbon, sulphur, and potassium nitrate.
We may mix any two or more substances which do not react chemically. Milk, which is a mixture of water, fat, proteins, and lactose, is another example of this type of mixture.
- In a mixture, the properties of the components remain unchanged.
Difference between Mixture and Compound
|Mixture||Compound|
|The components of a mixture may be present in any ratio.||A chemical compound always contains the same elements combined together in the same fixed ratio by weight.|
|The components of a mixture can be seen lying side by side, either with the naked eye or under a microscope.||The components of a compound can in no case be seen separately.|
|A mixture may be homogeneous or heterogeneous in nature.||Compounds are always homogeneous.|
|The properties of a mixture are midway between the properties of its constituents.||The properties of a compound are altogether different from those of its components.|
|The components can be separated by simple physical or mechanical methods.||The constituents cannot be separated by simple physical or mechanical means. They can, however, be separated by chemical methods.|
|No energy is evolved or absorbed when a mixture is prepared.||Energy in some form, usually heat or electricity, is evolved or absorbed during the formation as well as the decomposition of a compound.|
|A mixture usually does not have sharp melting or boiling points.||A chemical compound melts or boils at a definite temperature.|
Properties of matter
Properties are the characteristics that distinguish one type of matter from another and make it unique. There are two types of properties of matter:
- Physical properties.
- Chemical properties.
Physical properties of matter
Physical properties are those which describe matter without changing its composition; observing them involves no chemical change, and the overall composition of the material remains the same. Examples of physical properties include color, texture, flexibility, density, and mass. Physical properties can be observed or measured, and they come in two types: extensive and intensive properties.
Intensive and Extensive Properties (a small numeric sketch follows at the end of this article)
- Properties which do not depend on the size of the system or the quantity of matter present in it are known as intensive properties. Examples: pressure, temperature, density, specific heat, surface tension, viscosity, refractive index, melting and boiling points.
- Properties which depend upon the quantity of matter present in the system are known as extensive properties. Examples: mass, volume, energy, enthalpy, work.
Chemical properties of matter
A chemical property is the ability of matter to react with its surroundings or with other substances; during a chemical process the composition of the matter changes. Example: the chemical properties of a piece of paper describe how it reacts with air when it burns. Factors that affect the chemical properties include:
- The amount of substance (moles).
Examples of properties of matter
Physical property examples:
- Boiling point
- Melting point
- Refractive index
Chemical property examples:
- Cooking food.
- Rusting of iron.
- Smelling of food.
- Magnesium burns in air.
- Iron reacts with steam.
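To make the intensive/extensive distinction above concrete, here is a tiny numeric sketch with made-up values: doubling or quadrupling the amount of a substance scales its mass and volume (extensive), while the density, their ratio, stays the same (intensive).

```python
# Illustrative values only: suppose 100 g of water occupies 100 mL.
mass_g, volume_ml = 100.0, 100.0

for factor in (1, 2, 4):                        # take 1x, 2x, 4x as much substance
    m, v = factor * mass_g, factor * volume_ml  # extensive properties scale with the amount
    density = m / v                             # the intensive property does not
    print(f"{m:6.1f} g  {v:6.1f} mL  density = {density:.2f} g/mL")
```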
<urn:uuid:41bf2429-2aff-4f74-b676-8af4431a9020>
3.515625
2,040
Knowledge Article
Science & Tech.
38.057779
95,536,560
About microscopic forms of life, including Bacteria, Archaea, protozoans, algae and fungi. Topics relating to viruses, viroids and prions also belong here. Moderators: honeev, Leonid, amiradm, BioTeam
- Posts: 1
- Joined: Wed Feb 13, 2008 1:05 pm
Bacteria need water, but why exactly? For specific processes or something??
- King Cobra
- Posts: 885
- Joined: Fri Jun 15, 2007 7:03 pm
- Location: San Diego, Ca
- Posts: 11
- Joined: Mon Aug 20, 2007 2:46 am
Why do you need water? The answers will be similar. Think about metabolism. Think about how proteins function and form in aqueous solutions. Think about nutrient uptake. Think about mechanisms of motility.
- Posts: 163
- Joined: Thu Nov 25, 2004 6:54 pm
Water is a solvent to many nutrients/ions that bacteria need. That is one reason. Forgive him, for he believes that the customs of his tribe are the laws of nature!
<urn:uuid:5e1fd70f-2684-443d-b8e1-c8b73404e85f>
2.71875
258
Comment Section
Science & Tech.
68.68168
95,536,572