This feature is obsolete. Although it may still work in some browsers, its use is discouraged since it could be removed at any time. Try to avoid using it.

The obsolete HTML Base Font element (<basefont>) sets a default font face, size, and color for the other elements which are descended from its parent element. With this set, the font's size can then be varied relative to the base size using the <font> element.

Like all other HTML elements, this element supports the global attributes.

Do not use this element! Though once (imprecisely) normalized in HTML 3.2, it wasn't supported in all major browsers. Further, browsers, and even successive versions of browsers, never implemented it in the same way: practically, using it has always brought indeterminate results.

The <basefont> element was deprecated in the standard at the same time as all elements related to styling only. Starting with HTML 4, HTML does not convey styling information anymore (outside the <style> element or the style attribute of each element). In HTML5, this element has been removed completely. For any new web development, styling should be written using CSS only.

This element implements the HTMLBaseFontElement interface.

Example:

<basefont color="#FF0000" face="Helvetica" size="+2" />
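As a rough sketch of the CSS-only approach recommended above, the following reproduces the effect of the <basefont> example with a style rule instead; the markup and property values here are illustrative, not taken from the original page:

```html
<!-- Instead of <basefont color="#FF0000" face="Helvetica" size="+2">,
     set the base font on a containing element with CSS. -->
<body style="color: #FF0000; font-family: Helvetica, sans-serif; font-size: 1.2em;">
  <p>This text inherits the base color, face, and size.</p>
  <p style="font-size: larger;">Sizes can still be varied relative to the base.</p>
</body>
```

In practice the rule would normally live in a stylesheet rather than a style attribute; the point is simply that the base face, size, and color are inherited by descendants, which is what <basefont> once provided.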
Antioxidants are natural food ingredients that protect cells from harmful influences. Their main task is to neutralize so-called "free radicals", which are produced in the process of oxidation and which are responsible for cell degeneration. Scientists at the Max Planck Institute for Chemical Ecology in Jena, Germany, and the University of Lund, Sweden, now show that vinegar flies are able to detect these protective substances by using olfactory cues. Odors that are exclusively derived from antioxidants attract flies, increase feeding behavior and trigger oviposition in female flies.

[Image: Hany Dweck stimulating olfactory sensory neurons in fruit flies with different odors using the single sensillum recording (SSR) technique.]

Hydroxycinnamic acids are secondary plant metabolites and important dietary antioxidants. For animals as well as humans, antioxidants are essential components of a healthy diet, because they protect the cells and boost the immune system. Notably, they prevent the emergence of too many free radicals, mostly oxygen compounds, and therefore a metabolic condition which is generally called oxidative stress. If an organism suffers from oxidative stress, free radicals attack its cells and weaken its immune system. In fruit flies, oxidative stress is induced by immune responses to toxins produced by pathogens in the food.

Hydroxycinnamic acids are found in high amounts in fruit. Since fruit is the preferred breeding substrate of fruit flies, scientists in the Department of Evolutionary Neuroethology at the Max Planck Institute for Chemical Ecology in Jena, Germany, took a closer look at these substances and their possible effect on the flies.

Fruit flies are not able to smell hydroxycinnamic acids directly. However, yeasts metabolize the antioxidants and produce ethylphenols. These volatile substances activate targeted olfactory neurons housed on the maxillary palps of the fruit flies, which express the odorant receptor Or71a. Interestingly, fly larvae, which are also attracted to yeasts enriched with hydroxycinnamic acids and likewise use ethylphenols as olfactory cues, employ another odorant receptor for binding ethylphenols: Or94b, which is exclusively found in larvae and which is co-expressed with Or94a, a receptor binding a general yeast odor.

Because flies cannot smell the antioxidants directly, ethylphenols provide reliable cues for the presence of these protective compounds in the food. The perception of these odorant signals has a direct impact on the flies' behavior: they are attracted by the odor sources, show increased feeding behavior and choose oviposition sites where ethylphenols indicate that antioxidants are present in the breeding substrate.

"This form of olfactory proxy detection is not only a phenomenon in insects. It has also been shown in humans that odors that we perceive as pleasant or appetizing are in fact derived from important and healthy nutrients, such as essential amino acids, fatty acids and vitamins," Marcus Stensmyr explains. The scientist, who carried out the studies in the Department of Evolutionary Neuroethology together with his colleagues, has recently moved to a position as senior lecturer at the University of Lund.
These findings demonstrate a further example of an individual neuronal pathway which has a profound effect on the flies: from the odorant signal to olfactory neurons and dedicated odorant receptors to behavior (see also our press release "A Direct Line through the Brain to Avoid Rotten Food – A Full STOP Signal for Drosophila − Odor activation of a dedicated neural pathway by geosmin, an odor produced by toxic microorganisms, activates a hard-wired avoidance response in the fly": http://www.ice.mpg.de/ext/971.html, December 7, 2012). The ethylphenol pathway as an olfactory proxy detection of dietary antioxidants shows yet another facet of the complex odor-guided behavior in fruit flies. The scientists will now try to identify further neural pathways involved in the detection of essential nutrients, which ultimately trigger the flies' behavior. [AO]

Dweck, H., Ebrahim, S. A. M., Farhan, A., Hansson, B. S., Stensmyr, M. C. (2015). Olfactory proxy detection of dietary antioxidants in Drosophila. Current Biology, DOI: 10.1016/j.cub.2014.11.062

Prof. Dr. Bill S. Hansson, Max Planck Institute for Chemical Ecology, email@example.com
Dr. Marcus C. Stensmyr, Department of Biology, Lund University, firstname.lastname@example.org

Contact and image requests: Angela Overmeyer M.A., Max Planck Institute for Chemical Ecology, Hans-Knöll-Str. 8, 07743 Jena, +49 3641 57-2110, e-mail email@example.com. High-resolution photos can be downloaded at http://www.ice.mpg.de/ext/735.html

Angela Overmeyer | Max-Planck-Institut für chemische Ökologie
In This Chapter

- Reviewing the compilation process
- Installing the Code::Blocks development environment
- Testing your installation with a default program
- Reviewing the common installation errors

In this chapter, you will review what it takes to create executable programs from C++ source code that you can run on a Windows, Linux, or Macintosh computer. You will then install the Code::Blocks integrated development environment used in the remainder of the book, and you will build a default test program to check out your installation. If all is working, by the time you reach the end of this chapter, you will be ready to start writing and building C++ programs of your own — with a little help, of course!

You need two programs to create your own C++ programs. First, you need a text editor that you can use to enter your C++ instructions. Any editor capable of generating straight ASCII text will work. I have written programs using the Notepad editor that comes with Windows. However, an editor that knows something about the syntax of C++ is preferable, since it can save you a lot of typing and sometimes highlight mistakes that you might be making as you type, in much the same way that a spelling checker highlights misspelled words in a word processor.

The second program you will need is a compiler that converts your C++ source statements into machine language that the computer can understand and interpret. This process of ...
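As a minimal sketch of the editor-plus-compiler workflow just described (the file name and greeting text are illustrative, not taken from the book, though they are in the spirit of Code::Blocks' default console template):

```cpp
// hello.cpp - a minimal C++ program to verify that the toolchain works.
#include <iostream>

int main()
{
    std::cout << "Hello world!" << std::endl;
    return 0;  // a zero exit code tells the operating system the program succeeded
}
```

Compiled from a command line with a GCC-style compiler (one common option; Code::Blocks typically drives such a compiler for you behind the scenes), this would be `g++ hello.cpp -o hello`. Running the resulting executable prints the greeting, confirming that both the editor's output and the compiler are wired up correctly.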
• natural selection occurs slowly, need very old Earth
• lived ~50 miles NW of Dover
• erosion of the Weald Dome (in: The Origin of Species)
• "Hence, under ordinary circumstances, I conclude that for a cliff 500 feet in height, a denudation of one inch per century for the whole length would be an ample allowance. At this rate, on the above data, the denudation of the Weald must have required 306,662,400 years; or say three hundred million years."
• not accurate, but at the time no one even knew how many digits (thousands, millions, or billions)
• "order of magnitude" estimate: hundreds of millions

Three examples of geologic estimates:
- Darwin: 300 million years (minimum bound)
- Phillips: 96 million years (minimum bound)
- Joly: 95 (+/- 5) million years

note: similar "order of magnitude"; all are lower bounds, not precise estimates

aside: much very fundamental science just establishes orders of magnitude; precise values come later.
• ex. How many light-years, atoms, neurons, genes?
• An error of an extra digit (+900%) or one less digit (−90%) is trivial at this initial level of science

5. cooling of Earth
• Isaac Newton (1643–1727)
  o mathematician and physicist
  o "one of the foremost scientific intellects of all time"
  o 1687 Philosophiae naturalis principia mathematica
  o "a red hot iron equal to our earth, that is, about 40,000,000 feet in diameter, would scarcely cool in above 50,000 years"

Georges-Louis Leclerc, Comte de Buffon
• studied botany, math, medicine in Angers
• duel in 1730, left for Dijon; there he met Duke of Kingston with whom he traveled, returned to estate near Dijon in
• 1749 first volume of "Histoire Naturelle" (39 volumes in all)
• 1774 member of Academie Royale

Buffon postulated evolution of Earth in 7 stages:
1. formation of molten Earth
2. cooling to hand-hot
3. enveloped in universal sea
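A back-of-envelope reconstruction of the arithmetic behind Darwin's Weald figure quoted above (the intermediate inputs are inferred from the quote, not stated in these notes): at one inch of denudation per century, the quoted age corresponds to

```latex
306{,}662{,}400\ \text{yr} \times \frac{1\ \text{inch}}{100\ \text{yr}}
  = 3{,}066{,}624\ \text{inches}
  = \frac{3{,}066{,}624}{63{,}360}\ \text{miles}
  \approx 48.4\ \text{miles}
```

i.e., roughly 48 miles of total cliff retreat, consistent with erosion working inward from both escarpments of the roughly 22-mile-wide Weald. The eight-digit precision of the printed number is spurious, which is exactly the "order of magnitude" point the notes make.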
Three years after landing in a giant Martian crater, NASA's Curiosity rover has found what scientists call proof that the basin had repeatedly filled with water, bolstering chances for life on Mars, a study published on Thursday showed.

The research offered the most comprehensive picture of how Gale Crater, an ancient, 87-mile (140-km) wide impact basin, formed and left a 3-mile (5-km) mound of sediment standing on the crater floor. Early in its mission, Curiosity discovered the gravel remnants of streams and deposits from a shallow lake.

The new research, published in the journal Science, showed that the crater floor rose over time, the result of sediments in water settling down, layer after layer, for what may have been thousands of years, California Institute of Technology geologist John Grotzinger said.

"We knew that we had a lake there, but we hadn't grasped just how big it was," Grotzinger said.

Water from north of the crater regularly filled the basin, creating long-lasting lakes that could have been a haven for life. Scientists suspect the water came from rain or snow. "If one discovers evidence of lakes, that's a very positive sign for life," Grotzinger said.

Eventually, the crater filled with sediments. Then the winds took over and eroded the lakebed, leaving behind just a mound at the center. That mound, named Mount Sharp, is why Curiosity was sent to Gale Crater to look for ancient habitats suitable for microbial life. Scientists have learned that Mars had all the ingredients thought to be necessary for life.

Exactly how Mars managed to support long-lived surface water is a mystery. Billions of years ago, the planet lost its global magnetic field, which allowed solar and cosmic radiation to gradually blast away its protective atmosphere. Under those conditions, liquid water evaporates quickly. "If you have a body of standing water that lasts more than hours to days without boiling off, that is a huge surprise," Grotzinger said.

Current computer models of Mars fall well short of an atmospheric blanket thick enough to support the long-lived lakes, the researchers noted. Grotzinger suspects that Mars may have had greenhouse gases or some other chemistry that so far has gone undetected.

Last week, another team of scientists published research showing that trickles of briny water seasonally flow on present-day Mars, carving narrow streaks into cliff walls throughout the equator. The source of the water is not yet known.
High-temperature superconductors (abbreviated high-Tc or HTS) are materials that behave as superconductors at unusually high temperatures. The first high-Tc superconductor was discovered in 1986 by IBM researchers Georg Bednorz and K. Alex Müller, who were awarded the 1987 Nobel Prize in Physics "for their important break-through in the discovery of superconductivity in ceramic materials".

Whereas "ordinary" or metallic superconductors usually have transition temperatures (temperatures below which they are superconductive) below 30 K (−243.2 °C) and must be cooled using liquid helium in order to achieve superconductivity, HTS have been observed with transition temperatures as high as 138 K (−135 °C) and can be cooled to superconductivity using liquid nitrogen.

Until 2008, only certain compounds of copper and oxygen (so-called "cuprates") were believed to have HTS properties, and the term high-temperature superconductor was used interchangeably with cuprate superconductor for compounds such as bismuth strontium calcium copper oxide (BSCCO) and yttrium barium copper oxide (YBCO). Several iron-based compounds (the iron pnictides) are now known to be superconducting at high temperatures. In 2015, hydrogen sulfide (H2S) under extremely high pressure (around 150 gigapascals) was found to undergo a superconducting transition near 203 K (−70 °C), the highest-temperature superconductor known to date.

History

The phenomenon of superconductivity was discovered by Kamerlingh Onnes in 1911, in metallic mercury below 4 K (−269.15 °C). Ever since, researchers have attempted to observe superconductivity at increasing temperatures with the goal of finding a room-temperature superconductor. In the late 1970s, superconductivity was observed in certain metal oxides at temperatures as high as 13 K (−260.1 °C), which were much higher than those for elemental metals.

In 1986, J. Georg Bednorz and K. Alex Müller, working at the IBM research lab near Zurich, Switzerland, were exploring a new class of ceramics for superconductivity. Bednorz encountered a barium-doped compound of lanthanum and copper oxide whose resistance dropped to zero at a temperature around 35 K (−238.2 °C). Their results were soon confirmed by many groups, notably Paul Chu at the University of Houston and Shoji Tanaka at the University of Tokyo.

Shortly after, P. W. Anderson, at Princeton University, came up with the first theoretical description of these materials, using the resonating valence bond (RVB) theory, but a full understanding of these materials is still developing today. These superconductors are now known to possess a d-wave pair symmetry. The first proposal that high-temperature cuprate superconductivity involves d-wave pairing was made in 1987 by Bickers, Scalapino and Scalettar, followed by three subsequent theories in 1988: by Inui, Doniach, Hirschfeld and Ruckenstein, using spin-fluctuation theory, and by Gros, Poilblanc, Rice and Zhang, and by Kotliar and Liu, identifying d-wave pairing as a natural consequence of the RVB theory.
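For concreteness, the d-wave (dx2−y2) gap on a square CuO2 lattice with spacing a is commonly parametrized as follows (a standard form from the literature, not a formula given in this article):

```latex
\Delta(\mathbf{k}) = \frac{\Delta_0}{2}\left[\cos(k_x a) - \cos(k_y a)\right]
```

The gap vanishes along the Brillouin-zone diagonals |kx| = |ky|; these are the d-wave nodes whose direct observation is described next.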
The confirmation of the d-wave nature of the cuprate superconductors was made by a variety of experiments, including the direct observation of the d-wave nodes in the excitation spectrum through angle-resolved photoemission spectroscopy (ARPES), the observation of a half-integer flux in tunneling experiments, and indirectly from the temperature dependence of the penetration depth, specific heat and thermal conductivity.

Until 2015, the superconductor with the highest transition temperature that had been confirmed by multiple independent research groups (a prerequisite to being called a discovery, verified by peer review) was mercury barium calcium copper oxide (HgBa2Ca2Cu3O8), at around 133 K.

After more than twenty years of intensive research, the origin of high-temperature superconductivity is still not clear, but it seems that instead of electron-phonon attraction mechanisms, as in conventional superconductivity, one is dealing with genuine electronic mechanisms (e.g. by antiferromagnetic correlations), and instead of conventional, purely s-wave pairing, more exotic pairing symmetries are thought to be involved (d-wave in the case of the cuprates; primarily extended s-wave, but occasionally d-wave, in the case of the iron-based superconductors). In 2014, evidence showing that fractional particles can occur in quasi-two-dimensional magnetic materials was found by EPFL scientists, lending support for Anderson's theory of high-temperature superconductivity.

Crystal structures of high-temperature ceramic superconductors

The structures of high-Tc copper oxide or cuprate superconductors are often closely related to the perovskite structure, and the structure of these compounds has been described as a distorted, oxygen-deficient multi-layered perovskite structure. One of the properties of the crystal structure of oxide superconductors is an alternating multi-layer of CuO2 planes, with superconductivity taking place between these layers. The more layers of CuO2, the higher the Tc. This structure causes a large anisotropy in normal conducting and superconducting properties, since electrical currents are carried by holes induced in the oxygen sites of the CuO2 sheets. The electrical conduction is highly anisotropic, with a much higher conductivity parallel to the CuO2 plane than in the perpendicular direction. Generally, critical temperatures depend on the chemical composition, cation substitutions and oxygen content. They can be classified as superstripes, i.e., particular realizations of superlattices at the atomic limit made of superconducting atomic layers, wires and dots separated by spacer layers, giving multiband and multigap superconductivity.

YBaCuO superconductors

The first superconductor found with Tc > 77 K (the liquid nitrogen boiling point) is yttrium barium copper oxide (YBa2Cu3O7−x); the proportions of the three different metals in the YBa2Cu3O7 superconductor are in the mole ratio of 1 to 2 to 3 for yttrium to barium to copper, respectively. Thus, this particular superconductor is often referred to as the 123 superconductor.

The unit cell of YBa2Cu3O7 consists of three pseudocubic elementary perovskite unit cells. Each perovskite unit cell contains a Y or Ba atom at the center: Ba in the bottom unit cell, Y in the middle one, and Ba in the top unit cell. Thus, Y and Ba are stacked in the sequence [Ba–Y–Ba] along the c-axis. All corner sites of the unit cell are occupied by Cu, which has two different coordinations, Cu(1) and Cu(2), with respect to oxygen.
There are four possible crystallographic sites for oxygen: O(1), O(2), O(3) and O(4). The coordination polyhedra of Y and Ba with respect to oxygen are different. The tripling of the perovskite unit cell leads to nine oxygen atoms, whereas YBa2Cu3O7 has seven oxygen atoms and, therefore, is referred to as an oxygen-deficient perovskite structure. The structure has a stacking of different layers: (CuO)(BaO)(CuO2)(Y)(CuO2)(BaO)(CuO). One of the key features of the unit cell of YBa2Cu3O7−x (YBCO) is the presence of two layers of CuO2. The role of the Y plane is to serve as a spacer between two CuO2 planes. In YBCO, the Cu–O chains are known to play an important role for superconductivity. Tc is maximal near 92 K when x ≈ 0.15 and the structure is orthorhombic. Superconductivity disappears at x ≈ 0.6, where the structural transformation of YBCO occurs from orthorhombic to tetragonal.

Bi-, Tl- and Hg-based high-Tc superconductors

The crystal structures of Bi-, Tl- and Hg-based high-Tc superconductors are very similar. Like YBCO, the perovskite-type feature and the presence of CuO2 layers also exist in these superconductors. However, unlike YBCO, Cu–O chains are not present in these superconductors. The YBCO superconductor has an orthorhombic structure, whereas the other high-Tc superconductors have a tetragonal structure.

The Bi–Sr–Ca–Cu–O system has three superconducting phases forming a homologous series as Bi2Sr2Can−1CunO4+2n+x (n = 1, 2 and 3). These three phases are Bi-2201, Bi-2212 and Bi-2223, having transition temperatures of 20, 85 and 110 K, respectively, where the numbering system represents the number of atoms for Bi, Sr, Ca and Cu, respectively. The two phases have a tetragonal structure which consists of two sheared crystallographic unit cells. The unit cell of these phases has double Bi–O planes which are stacked in a way that the Bi atom of one plane sits below the oxygen atom of the next consecutive plane. The Ca atom forms a layer within the interior of the CuO2 layers in both Bi-2212 and Bi-2223; there is no Ca layer in the Bi-2201 phase. The three phases differ from each other in the number of CuO2 planes; the Bi-2201, Bi-2212 and Bi-2223 phases have one, two and three CuO2 planes, respectively. The c-axis lattice constants of these phases increase with the number of CuO2 planes (see the table below). The coordination of the Cu atom is different in the three phases: the Cu atom forms an octahedral coordination with respect to oxygen atoms in the 2201 phase, whereas in 2212 the Cu atom is surrounded by five oxygen atoms in a pyramidal arrangement. In the 2223 structure, Cu has two coordinations with respect to oxygen: one Cu atom is bonded with four oxygen atoms in a square planar configuration and another Cu atom is coordinated with five oxygen atoms in a pyramidal arrangement.

Tl–Ba–Ca–Cu–O superconductor: The first series of the Tl-based superconductors, containing one Tl–O layer, has the general formula TlBa2Can−1CunO2n+3, whereas the second series, containing two Tl–O layers, has the formula Tl2Ba2Can−1CunO2n+4 with n = 1, 2 and 3. In the structure of Tl2Ba2CuO6 (Tl-2201), there is one CuO2 layer with the stacking sequence (Tl–O)(Tl–O)(Ba–O)(Cu–O)(Ba–O)(Tl–O)(Tl–O). In Tl2Ba2CaCu2O8 (Tl-2212), there are two Cu–O layers with a Ca layer in between. Similar to the Tl2Ba2CuO6 structure, Tl–O layers are present outside the Ba–O layers. In Tl2Ba2Ca2Cu3O10 (Tl-2223), there are three CuO2 layers enclosing Ca layers between each of these.
In Tl-based superconductors, Tc is found to increase with the number of CuO2 layers. However, the value of Tc decreases after four CuO2 layers in TlBa2Can−1CunO2n+3, and in the Tl2Ba2Can−1CunO2n+4 compound it decreases after three CuO2 layers.

Hg–Ba–Ca–Cu–O superconductor: The crystal structures of HgBa2CuO4 (Hg-1201), HgBa2CaCu2O6 (Hg-1212) and HgBa2Ca2Cu3O8 (Hg-1223) are similar to those of Tl-1201, Tl-1212 and Tl-1223, with Hg in place of Tl. It is noteworthy that the Tc of the Hg compound containing one CuO2 layer (Hg-1201) is much higher than that of the one-CuO2-layer compound of thallium (Tl-1201). In the Hg-based superconductors, Tc is also found to increase as the number of CuO2 layers increases. For Hg-1201, Hg-1212 and Hg-1223, the values of Tc are 94, 128 and (the record value at ambient pressure) 134 K, respectively, as shown in the table below. The observation that the Tc of Hg-1223 increases to 153 K under high pressure indicates that the Tc of this compound is very sensitive to the structure of the compound.

| Formula | Notation | Tc (K) | No. of CuO2 planes in unit cell |
|---|---|---|---|
| YBa2Cu3O7 | 123 | 92 | 2 |
| Bi2Sr2CuO6 | Bi-2201 | 20 | 1 |
| Bi2Sr2CaCu2O8 | Bi-2212 | 85 | 2 |
| Bi2Sr2Ca2Cu3O10 | Bi-2223 | 110 | 3 |
| HgBa2CuO4 | Hg-1201 | 94 | 1 |
| HgBa2CaCu2O6 | Hg-1212 | 128 | 2 |
| HgBa2Ca2Cu3O8 | Hg-1223 | 134 | 3 |

Preparation of high-Tc superconductors

The simplest method for preparing high-Tc superconductors is a solid-state thermochemical reaction involving mixing, calcination and sintering. The appropriate amounts of precursor powders, usually oxides and carbonates, are mixed thoroughly using a ball mill. Solution chemistry processes such as coprecipitation, freeze-drying and sol-gel methods are alternative ways of preparing a homogeneous mixture. These powders are calcined in the temperature range from 800 °C to 950 °C for several hours. The powders are cooled, reground and calcined again. This process is repeated several times to get homogeneous material. The powders are subsequently compacted to pellets and sintered. The sintering environment, such as temperature, annealing time, atmosphere and cooling rate, plays a very important role in getting good high-Tc superconducting materials.

The YBa2Cu3O7−x compound is prepared by calcination and sintering of a homogeneous mixture of Y2O3, BaCO3 and CuO in the appropriate atomic ratio. Calcination is done at 900–950 °C, whereas sintering is done at 950 °C in an oxygen atmosphere. The oxygen stoichiometry in this material is very crucial for obtaining a superconducting YBa2Cu3O7−x compound. At the time of sintering, the semiconducting tetragonal YBa2Cu3O6 compound is formed, which, on slow cooling in an oxygen atmosphere, turns into superconducting YBa2Cu3O7−x. The uptake and loss of oxygen are reversible in YBa2Cu3O7−x. A fully oxygenated orthorhombic YBa2Cu3O7−x sample can be transformed into tetragonal YBa2Cu3O6 by heating in a vacuum at temperatures above 700 °C.

The preparation of Bi-, Tl- and Hg-based high-Tc superconductors is difficult compared to YBCO. Problems in these superconductors arise because of the existence of three or more phases having a similar layered structure. Thus, syntactic intergrowth and defects such as stacking faults occur during synthesis, and it becomes difficult to isolate a single superconducting phase. For Bi–Sr–Ca–Cu–O, it is relatively simple to prepare the Bi-2212 (Tc ≈ 85 K) phase, whereas it is very difficult to prepare a single phase of Bi-2223 (Tc ≈ 110 K). The Bi-2212 phase appears only after a few hours of sintering at 860–870 °C, but the larger fraction of the Bi-2223 phase is formed after a long reaction time of more than a week at 870 °C.
Although the substitution of Pb in the Bi–Sr–Ca–Cu–O compound has been found to promote the growth of the high-Tc phase, a long sintering time is still required.

"High-temperature" has two common definitions in the context of superconductivity:

- Above the temperature of 30 K that had historically been taken as the upper limit allowed by BCS theory (1957). This is also above the 1973 record of 23 K that had lasted until copper-oxide materials were discovered in 1986.
- Having a transition temperature that is a larger fraction of the Fermi temperature than for conventional superconductors such as elemental mercury or lead. This definition encompasses a wider variety of unconventional superconductors and is used in the context of theoretical models.

The label high-Tc may be reserved by some authors for materials with critical temperature greater than the boiling point of liquid nitrogen (77 K or −196 °C). However, a number of materials – including the original discovery and recently discovered pnictide superconductors – had critical temperatures below 77 K but are commonly referred to in publications as being in the high-Tc class.

Technological applications could benefit from both the higher critical temperature being above the boiling point of liquid nitrogen and also the higher critical magnetic field (and critical current density) at which superconductivity is destroyed. In magnet applications, the high critical magnetic field may prove more valuable than the high Tc itself. Some cuprates have an upper critical field of about 100 tesla. However, cuprate materials are brittle ceramics which are expensive to manufacture and not easily turned into wires or other useful shapes. Also, high-temperature superconductors do not form large, continuous superconducting domains, but only clusters of microdomains within which superconductivity occurs. They are therefore unsuitable for applications requiring actual superconducting currents, such as magnets for magnetic resonance spectrometers.

Properties

After two decades of intense experimental and theoretical research, with over 100,000 published papers on the subject, several common features in the properties of high-temperature superconductors have been identified. As of 2011, no widely accepted theory explains their properties. Relative to conventional superconductors, such as elemental mercury or lead, which are adequately explained by the BCS theory, cuprate superconductors (and other unconventional superconductors) remain distinctive. There also has been much debate as to high-temperature superconductivity coexisting with magnetic ordering in YBCO, iron-based superconductors, several ruthenocuprates and other exotic superconductors, and the search continues for other families of materials.

HTS are Type-II superconductors, which allow magnetic fields to penetrate their interior in quantized units of flux, meaning that much higher magnetic fields are required to suppress superconductivity. The layered structure also gives a directional dependence to the magnetic field response.

Cuprate superconductors are generally considered to be quasi-two-dimensional materials with their superconducting properties determined by electrons moving within weakly coupled copper-oxide (CuO2) layers. Neighbouring layers containing ions such as lanthanum, barium, strontium, or other atoms act to stabilize the structure and dope electrons or holes onto the copper-oxide layers.
The undoped "parent" or "mother" compounds are Mott insulators with long-range antiferromagnetic order at low enough temperature. Single band models are generally considered to be sufficient to describe the electronic properties. The cuprate superconductors adopt a perovskite structure. The copper-oxide planes are checkerboard lattices with squares of O2− ions with a Cu2+ ion at the centre of each square. The unit cell is rotated by 45° from these squares. Chemical formulae of superconducting materials generally contain fractional numbers to describe the doping required for superconductivity. There are several families of cuprate superconductors and they can be categorized by the elements they contain and the number of adjacent copper-oxide layers in each superconducting block. For example, YBCO and BSCCO can alternatively be referred to as Y123 and Bi2201/Bi2212/Bi2223 depending on the number of layers in each superconducting block (n). The superconducting transition temperature has been found to peak at an optimal doping value (p=0.16) and an optimal number of layers in each superconducting block, typically n=3. Possible mechanisms for superconductivity in the cuprates are still the subject of considerable debate and further research. Certain aspects common to all materials have been identified. Similarities between the antiferromagnetic low-temperature state of the undoped materials and the superconducting state that emerges upon doping, primarily the dx2-y2 orbital state of the Cu2+ ions, suggest that electron-electron interactions are more significant than electron-phonon interactions in cuprates – making the superconductivity unconventional. Recent work on the Fermi surface has shown that nesting occurs at four points in the antiferromagnetic Brillouin zone where spin waves exist and that the superconducting energy gap is larger at these points. The weak isotope effects observed for most cuprates contrast with conventional superconductors that are well described by BCS theory. Similarities and differences in the properties of hole-doped and electron doped cuprates: - Presence of a pseudogap phase up to at least optimal doping. - Different trends in the Uemura plot relating transition temperature to the superfluid density. The inverse square of the London penetration depth appears to be proportional to the critical temperature for a large number of underdoped cuprate superconductors, but the constant of proportionality is different for hole- and electron-doped cuprates. The linear trend implies that the physics of these materials is strongly two-dimensional. - Universal hourglass-shaped feature in the spin excitations of cuprates measured using inelastic neutron diffraction. - Nernst effect evident in both the superconducting and pseudogap phases. The electronic structure of superconducting cuprates is highly anisotropic (see the crystal structure of YBCO or BSCCO). Therefore, the Fermi surface of HTSC is very close to the Fermi surface of the doped CuO2 plane (or multi-planes, in case of multi-layer cuprates) and can be presented on the 2D reciprocal space (or momentum space) of the CuO2 lattice. The typical Fermi surface within the first CuO2 Brillouin zone is sketched in Fig. 1 (left). It can be derived from the band structure calculations or measured by angle resolved photoemission spectroscopy (ARPES). Fig. 1 (right) shows the Fermi surface of BSCCO measured by ARPES. 
In a wide range of charge-carrier concentrations (doping levels) in which the hole-doped HTSC are superconducting, the Fermi surface is hole-like (i.e. open, as shown in Fig. 1). This results in an inherent in-plane anisotropy of the electronic properties of HTSC.

Iron-based superconductors

Iron-based superconductors contain layers of iron and a pnictogen—such as arsenic or phosphorus—or a chalcogen. This is currently the family with the second highest critical temperature, behind the cuprates. Interest in their superconducting properties began in 2006 with the discovery of superconductivity in LaFePO at 4 K, and gained much greater attention in 2008 after the analogous material LaFeAs(O,F) was found to superconduct at up to 43 K under pressure. The highest critical temperatures in the iron-based superconductor family exist in thin films of FeSe, where a critical temperature in excess of 100 K was reported in 2014.

Since the original discoveries, several families of iron-based superconductors have emerged:

- LnFeAs(O,F) or LnFeAsO1−x (Ln = lanthanide) with Tc up to 56 K, referred to as 1111 materials. A fluoride variant of these materials was subsequently found with similar Tc values.
- (Ba,K)Fe2As2 and related materials with pairs of iron-arsenide layers, referred to as 122 compounds. Tc values range up to 38 K. These materials also superconduct when iron is replaced with cobalt.
- LiFeAs and NaFeAs with Tc up to around 20 K. These materials superconduct close to stoichiometric composition and are referred to as 111 compounds.
- FeSe with small off-stoichiometry or tellurium doping.

Most undoped iron-based superconductors show a tetragonal-orthorhombic structural phase transition followed at lower temperature by magnetic ordering, similar to the cuprate superconductors. However, they are poor metals rather than Mott insulators, and have five bands at the Fermi surface rather than one. The phase diagram emerging as the iron-arsenide layers are doped is remarkably similar, with the superconducting phase close to or overlapping the magnetic phase. Strong evidence that the Tc value varies with the As-Fe-As bond angles has already emerged and shows that the optimal Tc value is obtained with undistorted FeAs4 tetrahedra. The symmetry of the pairing wavefunction is still widely debated, but an extended s-wave scenario is currently favoured.
Some organic superconductors and heavy fermion compounds are considered to be high-temperature superconductors because of their high Tc values relative to their Fermi energy, despite the Tc values being lower than for many conventional superconductors. This description may relate better to common aspects of the superconducting mechanism than the superconducting properties. Theoretical work by Neil Ashcroft in 1968 predicted that solid metallic hydrogen at extremely high pressure should become superconducting at approximately room-temperature because of its extremely high speed of sound and expected strong coupling between the conduction electrons and the lattice vibrations. As of 2016[update] this prediction is yet to be experimentally verified. All known high-Tc superconductors are Type-II superconductors. In contrast to Type-I superconductors, which expel all magnetic fields due to the Meissner effect, Type-II superconductors allow magnetic fields to penetrate their interior in quantized units of flux, creating "holes" or "tubes" of normal metallic regions in the superconducting bulk called vortices. Consequently, high-Tc superconductors can sustain much higher magnetic fields. The question of how superconductivity arises in high-temperature superconductors is one of the major unsolved problems of theoretical condensed matter physics. The mechanism that causes the electrons in these crystals to form pairs is not known. Despite intensive research and many promising leads, an explanation has so far eluded scientists. One reason for this is that the materials in question are generally very complex, multi-layered crystals (for example, BSCCO), making theoretical modelling difficult. Improving the quality and variety of samples also gives rise to considerable research, both with the aim of improved characterisation of the physical properties of existing compounds, and synthesizing new materials, often with the hope of increasing Tc. Technological research focuses on making HTS materials in sufficient quantities to make their use economically viable and optimizing their properties in relation to applications. There have been two representative theories for high-temperature or unconventional superconductivity. Firstly, weak coupling theory suggests superconductivity emerges from antiferromagnetic spin fluctuations in a doped system. According to this theory, the pairing wave function of the cuprate HTS should have a dx2-y2 symmetry. Thus, determining whether the pairing wave function has d-wave symmetry is essential to test the spin fluctuation mechanism. That is, if the HTS order parameter (pairing wave function) does not have d-wave symmetry, then a pairing mechanism related to spin fluctuations can be ruled out. (Similar arguments can be made for iron-based superconductors but the different material properties allow a different pairing symmetry.) Secondly, there was the interlayer coupling model, according to which a layered structure consisting of BCS-type (s-wave symmetry) superconductors can enhance the superconductivity by itself. By introducing an additional tunnelling interaction between each layer, this model successfully explained the anisotropic symmetry of the order parameter as well as the emergence of the HTS. Thus, in order to solve this unsettled problem, there have been numerous experiments such as photoemission spectroscopy, NMR, specific heat measurements, etc. 
To date, the results have been ambiguous: some reports supported the d symmetry for the HTS, whereas others supported the s symmetry. This muddy situation possibly originated from the indirect nature of the experimental evidence, as well as from experimental issues such as sample quality, impurity scattering, twinning, etc.

Junction experiments supporting the d symmetry

An experiment based on flux quantization of a three-grain ring of YBa2Cu3O7 (YBCO) was proposed to test the symmetry of the order parameter in the HTS. The symmetry of the order parameter could best be probed at the junction interface as the Cooper pairs tunnel across a Josephson junction or weak link. It was expected that a half-integer flux, that is, a spontaneous magnetization of half a flux quantum (Φ0/2, where Φ0 = h/2e), could only occur for a junction of d-symmetry superconductors. But even if the junction experiment is the strongest method to determine the symmetry of the HTS order parameter, the results have been ambiguous. J. R. Kirtley and C. C. Tsuei thought that the ambiguous results came from the defects inside the HTS, so they designed an experiment where both the clean limit (no defects) and the dirty limit (maximal defects) were considered simultaneously. In the experiment, the spontaneous magnetization was clearly observed in YBCO, which supported the d symmetry of the order parameter in YBCO. But since YBCO is orthorhombic, it might inherently have an admixture of s symmetry. So, by tuning their technique further, they found that there was an admixture of s symmetry in YBCO within about 3%. Also, they found that there was a pure dx2−y2 order-parameter symmetry in the tetragonal Tl2Ba2CuO6.

Qualitative explanation of the spin-fluctuation mechanism

Despite all these years, the mechanism of high-Tc superconductivity is still highly controversial, mostly due to the lack of exact theoretical computations on such strongly interacting electron systems. However, most rigorous theoretical calculations, including phenomenological and diagrammatic approaches, converge on magnetic fluctuations as the pairing mechanism for these systems. The qualitative explanation is as follows:

In a superconductor, the flow of electrons cannot be resolved into individual electrons, but instead consists of many pairs of bound electrons, called Cooper pairs. In conventional superconductors, these pairs are formed when an electron moving through the material distorts the surrounding crystal lattice, which in turn attracts another electron and forms a bound pair. This is sometimes called the "water bed" effect. Each Cooper pair requires a certain minimum energy to be displaced, and if the thermal fluctuations in the crystal lattice are smaller than this energy, the pair can flow without dissipating energy. This ability of the electrons to flow without resistance leads to superconductivity.

In a high-Tc superconductor, the mechanism is extremely similar to that of a conventional superconductor, except that in this case phonons play virtually no role and their role is replaced by spin-density waves. Just as all known conventional superconductors are strong phonon systems, all known high-Tc superconductors are strong spin-density-wave systems, within close vicinity of a magnetic transition to, for example, an antiferromagnet. When an electron moves in a high-Tc superconductor, its spin creates a spin-density wave around it. This spin-density wave in turn causes a nearby electron to fall into the spin depression created by the first electron (the water-bed effect again). Hence, again, a Cooper pair is formed.
When the system temperature is lowered, more spin-density waves and Cooper pairs are created, eventually leading to superconductivity. Note that in high-Tc systems, as these systems are magnetic systems due to the Coulomb interaction, there is a strong Coulomb repulsion between electrons. This Coulomb repulsion prevents pairing of the Cooper pairs on the same lattice site. The pairing of the electrons occurs at near-neighbor lattice sites as a result. This is the so-called d-wave pairing, where the pairing state has a node (zero) at the origin.

Examples

Examples of high-Tc cuprate superconductors include La1.85Ba0.15CuO4 and YBCO (yttrium barium copper oxide), which is famous as the first material to achieve superconductivity above the boiling point of liquid nitrogen.

| Transition temperature (K) | (in degrees Celsius) | Material or reference point | Class |
|---|---|---|---|
| 203 | −70 | H2S (at 150 GPa pressure) | Hydrogen-based superconductor |
| 195 | −78 | Sublimation point of dry ice | |
| 184 | −89.2 | Lowest temperature recorded on Earth | |
| 145 | −128 | Boiling point of tetrafluoromethane | |
| 90 | −183 | Boiling point of liquid oxygen | |
| 77 | −196 | Boiling point of liquid nitrogen | |
| 20 | −253 | Boiling point of liquid hydrogen | |
| 18 | −255 | Nb3Sn | Metallic low-temperature superconductor |
| 4.2 | −268.8 | Boiling point of liquid helium | |
| 4.2 | −268.8 | Hg (mercury) | Metallic low-temperature superconductor |

See also
- Cooper pair
- Flux pumping
- Macroscopic quantum phenomena
- Mixed conduction
- Superconducting wire

References
- Timmer, John (May 2011). "25 years on, the search for higher-temp superconductors continues". Ars Technica.
- Ford, P. J.; Saunders, G. A. (2005). The Rise of the Superconductors. Boca Raton, Fla.: CRC Press. ISBN 9780748407729.
- Bednorz, J. G.; Müller, K. A. (1986). "Possible high Tc superconductivity in the Ba–La–Cu–O system". Zeitschrift für Physik B 64 (2): 189–193. doi:10.1007/BF01303701.
- "The Nobel Prize in Physics 1987: J. Georg Bednorz, K. Alex Müller". Nobelprize.org.
- Leggett, A. (2006). "What DO we know about high Tc?". Nature Physics 2 (3): 134–136. doi:10.1038/nphys254.
- Choi, Charles Q. (23 April 2008). "Iron Exposed as High-Temperature Superconductor". Scientific American.
- Ren, Zhi-An; et al. (2008). "Superconductivity and phase diagram in iron-based arsenic-oxides ReFeAsO1−δ (Re = rare-earth metal) without fluorine doping". EPL 83: 17002. doi:10.1209/0295-5075/83/17002.
- Drozdov, A. P.; Eremets, M. I.; Troyan, I. A.; Ksenofontov, V.; Shylin, S. I. (2015). "Conventional superconductivity at 203 kelvin at high pressures in the sulfur hydride system". Nature 525: 73–76. doi:10.1038/nature14964. PMID 26280333.
- Cartlidge, Edwin (18 August 2015). "Superconductivity record sparks wave of follow-up physics". Nature News.
- Heil, Christoph; Boeri, Lilia (2015). "Influence of bonding on superconductivity in high-pressure hydrides". Physical Review B 92: 060508. doi:10.1103/PhysRevB.92.060508.
- Nisbett, Alec (producer) (1988). Superconductor: The race for the prize (television episode).
- Mourachkine, A. (2004). Room-Temperature Superconductivity. Cambridge International Science Publishing. ISBN 1-904602-27-4.
- Wolf, Stuart A.; Kresin, Vladimir Z. (eds.) (1987). Novel Superconductivity. Springer.
- Tanaka, Shoji (2001). "High temperature superconductivity: History and outlook". JSAP International.
- Anderson, Philip (1987). "The resonating valence bond state in La2CuO4 and superconductivity". Science 235 (4793): 1196–1198. doi:10.1126/science.235.4793.1196. PMID 17818979.
- Bickers, N. E.; Scalapino, D. J.; Scalettar, R. T. (1987). "CDW and SDW mediated pairing interactions". International Journal of Modern Physics B 1 (3–4): 687–695. doi:10.1142/S0217979287001079.
- Inui, Masahiko; Doniach, Sebastian; Hirschfeld, Peter J.; Ruckenstein, Andrei E. (1988). "Coexistence of antiferromagnetism and superconductivity in a mean-field theory of high-Tc superconductors". Physical Review B 37 (10): 5182–5185. doi:10.1103/PhysRevB.37.5182.
- Gros, Claudius; Poilblanc, Didier; Rice, T. Maurice; Zhang, F. C. (1988). "Superconductivity in correlated wavefunctions". Physica C 153–155: 543–548. doi:10.1016/0921-4534(88)90715-0.
- Kotliar, Gabriel; Liu, Jialin (1988). "Superexchange mechanism and d-wave superconductivity". Physical Review B 38 (7): 5142. doi:10.1103/PhysRevB.38.5142.
- Schilling, A.; Cantoni, M.; Guo, J. D.; Ott, H. R. (1993). "Superconductivity above 130 K in the Hg–Ba–Ca–Cu–O system". Nature 363 (6424): 56–58. doi:10.1038/363056a0.
- Dalla Piazza, B.; Mourigal, M.; Christensen, N. B.; Nilsen, G. J.; Tregenna-Piggott, P.; Perring, T. G.; Enderle, M.; McMorrow, D. F.; Ivanov, D. A.; Rønnow, H. M. (2015). "Fractional excitations in the square-lattice quantum antiferromagnet". Nature Physics 11 (1): 62–68. doi:10.1038/nphys3172.
- "How electrons split: New evidence of exotic behaviors" (23 December 2014). Nanowerk / École Polytechnique Fédérale de Lausanne.
- Hazen, R.; Finger, L.; Angel, R.; Prewitt, C.; Ross, N.; Mao, H.; Hadidiacos, C.; Hor, P.; Meng, R.; Chu, C. (1987). "Crystallographic description of phases in the Y–Ba–Cu–O superconductor". Physical Review B 35 (13): 7238–7241. doi:10.1103/PhysRevB.35.7238.
- Khare, Neeraj (2003). Handbook of High-Temperature Superconductor Electronics. CRC Press. ISBN 0-8247-0823-7.
- Hermann, Allen M.; Yakhmi, J. V. (eds.) (1994). Thallium-Based High-Temperature Superconductors. Marcel Dekker.
- Hazen, R.; et al. (1988). "Superconductivity in the high-Tc Bi–Ca–Sr–Cu–O system: Phase identification". Physical Review Letters 60 (12): 1174–1177. doi:10.1103/PhysRevLett.60.1174.
- Tarascon, J.; et al. (1988). "Preparation, structure, and properties of the superconducting compound series Bi2Sr2Can−1CunOy with n = 1, 2, and 3". Physical Review B 38 (13): 8885–8892. doi:10.1103/PhysRevB.38.8885.
- Sheng, Z. Z.; et al. (1988). "Superconductivity at 90 K in the Tl–Ba–Cu–O system". Physical Review Letters 60 (10): 937–940. doi:10.1103/PhysRevLett.60.937. PMID 10037895.
- Sheng, Z. Z.; Hermann, A. M. (1988). "Superconductivity in the rare-earth-free Tl–Ba–Cu–O system above liquid-nitrogen temperature". Nature 332 (6159): 55–58. doi:10.1038/332055a0.
- Putilin, S. N.; Antipov, E. V.; Chmaissem, O.; Marezio, M. (1993). "Superconductivity at 94 K in HgBa2CuO4+δ". Nature 362 (6417): 226–228. doi:10.1038/362226a0.
- Chu, C. W.; Gao, L.; Chen, F.; Huang, Z. J.; Meng, R. L.; Xue, Y. Y. (1993). "Superconductivity above 150 K in HgBa2Ca2Cu3O8+δ at high pressures". Nature 365 (6444): 323–325. doi:10.1038/365323a0.
- Shi, Donglu; et al. (1989). "Origin of enhanced growth of the 110 K superconducting phase by Pb doping in the Bi–Sr–Ca–Cu–O system". Applied Physics Letters 55 (7): 699. doi:10.1063/1.101573.
- Norman, Michael R. (2008). "Trend: High-temperature superconductivity in the iron pnictides". Physics 1: 21. doi:10.1103/Physics.1.21.
- "High-Temperature Superconductivity: The Cuprates". Devereaux group, Stanford University.
- Graser, S.; Hirschfeld, P. J.; Kopp, T.; Gutser, R.; Andersen, B. M.; Mannhart, J. (2010). "How grain boundaries limit supercurrents in high-temperature superconductors". Nature Physics 6 (8): 609–614. doi:10.1038/nphys1687.
- Buchanan, M. (2001). "Mind the pseudogap". Nature 409 (6816): 8–11. doi:10.1038/35051238. PMID 11343081.
- Sanna, S.; Allodi, G.; Concas, G.; Hillier, A.; De Renzi, R. (2004). "Nanoscopic coexistence of magnetism and superconductivity in YBa2Cu3O6+x detected by muon spin rotation". Physical Review Letters 93 (20): 207001. doi:10.1103/PhysRevLett.93.207001.
- Hartinger, C. "DFG FG 538 – Doping dependence of phase transitions and ordering phenomena in cuprate superconductors". wmi.badw-muenchen.de.
- Luetkens, H.; et al. (2009). "Electronic phase diagram of the LaO1−xFxFeAs superconductor". Nature Materials 8 (4): 305–309. doi:10.1038/nmat2397. PMID 19234445.
- Drew, A. J.; et al. (2009). "Coexistence of static magnetism and superconductivity in SmFeAsO1−xFx as revealed by muon spin rotation". Nature Materials 8 (4): 310–314. doi:10.1038/nmat2396. PMID 19234446.
- Sanna, S.; De Renzi, R.; Lamura, G.; Ferdeghini, C.; Palenzona, A.; Putti, M.; Tropeano, M.; Shiroka, T. (2009). "Competition between magnetism and superconductivity at the phase boundary of doped SmFeAsO pnictides". Physical Review B 80 (5): 052503. doi:10.1103/PhysRevB.80.052503.
- Zhao, J.; et al. (2008). "Structural and magnetic phase diagram of CeFeAsO1−xFx and its relation to high-temperature superconductivity". Nature Materials 7 (12): 953–959. doi:10.1038/nmat2315. PMID 18953342.
- Chu, Jiun-Haw; Analytis, James; Kucharczyk, Chris; Fisher, Ian (2009). "Determination of the phase diagram of the electron-doped superconductor Ba(Fe1−xCox)2As2". Physical Review B 79: 014506. doi:10.1103/PhysRevB.79.014506.
- Kamihara, Y.; Hiramatsu, H.; Hirano, M.; Kawamura, R.; Yanagi, H.; Kamiya, T.; Hosono, H. (2006). "Iron-based layered superconductor: LaOFeP". Journal of the American Chemical Society 128 (31): 10012–10013. doi:10.1021/ja063355c. PMID 16881620.
- Kamihara, Y.; Watanabe, T.; Hirano, M.; Hosono, H. (2008). "Iron-based layered superconductor La[O1−xFx]FeAs (x = 0.05–0.12) with Tc = 26 K". Journal of the American Chemical Society 130 (11): 3296–3297. doi:10.1021/ja800073m. PMID 18293989.
- Takahashi, H.; Igawa, K.; Arii, K.; Kamihara, Y.; Hirano, M.; Hosono, H. (2008). "Superconductivity at 43 K in an iron-based layered compound LaO1−xFxFeAs". Nature 453 (7193): 376–378. doi:10.1038/nature06972. PMID 18432191.
- Wang, Qing-Yan; et al. (2012). "Interface-induced high-temperature superconductivity in single unit-cell FeSe films on SrTiO3". Chinese Physics Letters 29 (3): 037402. doi:10.1088/0256-307X/29/3/037402.
- Liu, Defa; et al. (2012). "Electronic origin of high-temperature superconductivity in single-layer FeSe superconductor". Nature Communications 3: 931. doi:10.1038/ncomms1946.
- He, Shaolong; et al. (2013). "Phase diagram and electronic indication of high-temperature superconductivity at 65 K in single-layer FeSe films". Nature Materials 12 (7): 605–610. doi:10.1038/nmat3648.
- Ge, Jian-Feng; et al. (2015). "Superconductivity in single-layer films of FeSe with a transition temperature above 100 K". Nature Materials 14: 285. doi:10.1038/nmat4153.
- Wu, G.; et al. (2009). "Superconductivity at 56 K in samarium-doped SrFeAsF". Journal of Physics: Condensed Matter 21 (14): 142203. doi:10.1088/0953-8984/21/14/142203.
- Rotter, M.; Tegel, M.; Johrendt, D. (2008). "Superconductivity at 38 K in the iron arsenide (Ba1−xKx)Fe2As2". Physical Review Letters 101 (10): 107006. doi:10.1103/PhysRevLett.101.107006. PMID 18851249.
- Sasmal, K.; Lv, B.; Lorenz, B.; Guloy, A. M.; Chen, F.; Xue, Y. Y.; Chu, C. W. (2008). "Superconducting Fe-based compounds (A1−xSrx)Fe2As2 with A = K and Cs with transition temperatures up to 37 K". Physical Review Letters 101 (10): 107007. doi:10.1103/PhysRevLett.101.107007. PMID 18851250.
- Pitcher, M. J.; et al. (2008). "Structure and superconductivity of LiFeAs". Chemical Communications (45): 5918–5920. doi:10.1039/b813153h. PMID 19030538.
- Tapp, Joshua H.; Tang, Zhongjia; Lv, Bing; Sasmal, Kalyan; Lorenz, Bernd; Chu, Paul C. W.; Guloy, Arnold M. (2008). "LiFeAs: An intrinsic FeAs-based superconductor with Tc = 18 K". Physical Review B 78 (6): 060505. doi:10.1103/PhysRevB.78.060505.
- Parker, D. R.; et al. (2009). "Structure, antiferromagnetism and superconductivity of the layered iron arsenide NaFeAs". Chemical Communications (16): 2189–2191. doi:10.1039/b818911k. PMID 19360189.
- Hsu, F. C.; et al. (2008). "Superconductivity in the PbO-type structure α-FeSe". Proceedings of the National Academy of Sciences 105 (38): 14262–14264. doi:10.1073/pnas.0807325105. PMID 18776050.
- Kordyuk, A. A. (2012). "Iron-based superconductors: Magnetism, superconductivity, and electronic structure (review article)". Low Temperature Physics 38 (9): 888–899. doi:10.1063/1.4752092.
- Lee, Chul-Ho; et al. (2008). "Effect of structural parameters on superconductivity in fluorine-free LnFeAsO1−y (Ln = La, Nd)". Journal of the Physical Society of Japan 77 (8): 083704. doi:10.1143/JPSJ.77.083704.
- Drozdov, A.; Eremets, M. I.; Troyan, I. A. (2014). "Conventional superconductivity at 190 K at high pressures". arXiv preprint.
- Ge, Y. F.; Zhang, F.; Yao, Y. G. (2016). "First-principles demonstration of superconductivity at 280 K in hydrogen sulfide with low phosphorus substitution". Physical Review B 93 (22): 224513. doi:10.1103/PhysRevB.93.224513.
- Preuss, Paul. "A Most Unusual Superconductor and How It Works". Berkeley Lab.
- Hebard, A. F.; Rosseinsky, M. J.; Haddon, R.
C.; Murphy, D. W.; Glarum, S. H.; Palstra, T. T. M.; Ramirez, A. P.; Kortan, A. R. (1991). "Superconductivity at 18 K in potassium-doped C60". Nature. 350 (6319): 600–601. Bibcode:1991Natur.350..600H. doi:10.1038/350600a0. - Ganin, A. Y.; Takabayashi, Y; Khimyak, Y. Z.; Margadonna, S; Tamai, A; Rosseinsky, M. J.; Prassides, K (2008). "Bulk superconductivity at 38 K in a molecular system". Nature Materials. 7 (5): 367–71. Bibcode:2008NatMa...7..367G. doi:10.1038/nmat2179. PMID 18425134. - Ashcroft, N. W. (1968). "Metallic Hydrogen: A High-Temperature Superconductor?". Physical Review Letters. 21 (26): 1748–1749. Bibcode:1968PhRvL..21.1748A. doi:10.1103/PhysRevLett.21.1748. - Monthoux, P.; Balatsky, A.; Pines, D. (1992). "Weak-coupling theory of high-temperature superconductivity in the antiferromagnetically correlated copper oxides". Physical Review B. 46 (22): 14803–14817. Bibcode:1992PhRvB..4614803M. doi:10.1103/PhysRevB.46.14803. - Chakravarty, S; Sudbø, A; Anderson, P. W.; Strong, S (1993). "Interlayer Tunneling and Gap Anisotropy in High-Temperature Superconductors". Science. 261 (5119): 337–340. Bibcode:1993Sci...261..337C. doi:10.1126/science.261.5119.337. PMID 17836845. - Geshkenbein, V.; Larkin, A.; Barone, A. (1987). "Vortices with half magnetic flux quanta in heavy-fermion superconductors". Physical Review B. 36 (1): 235–238. Bibcode:1987PhRvB..36..235G. doi:10.1103/PhysRevB.36.235. PMID 9942041. - Kirtley, J. R.; Tsuei, C. C.; Sun, J. Z.; Chi, C. C.; Yu-Jahnes, Lock See; Gupta, A.; Rupp, M.; Ketchen, M. B. (1995). "Symmetry of the order parameter in the high-Tc superconductor YBa2Cu3O7−δ". Nature. 373 (6511): 225–228. Bibcode:1995Natur.373..225K. doi:10.1038/373225a0. - Kirtley, J. R.; Tsuei, C. C.; Ariando, A.; Verwijs, C. J. M.; Harkema, S.; Hilgenkamp, H. (2006). "Angle-resolved phase-sensitive determination of the in-plane gap symmetry in YBa2Cu3O7−δ". Nature Physics. 2 (3): 190–194. Bibcode:2006NatPh...2..190K. doi:10.1038/nphys215. - Tsuei, C. C.; Kirtley, J. R.; Ren, Z. F.; Wang, J. H.; Raffy, H.; Li, Z. Z. (1997). "Pure dx2-y2 order-parameter symmetry in the tetragonal superconductor Tl2Ba2CuO6+δ". Nature. 387 (6632): 481–483. Bibcode:1997Natur.387..481T. doi:10.1038/387481a0. - High temperature superconductivity research at Cornell University - Superconductor Science and Technology - American Superconductor and Consolidaded Edison laying first superconductor grid in New York - Video of a magnet floating on a HTSC - High-Temperature Superconductor Technologies - High-Temperature Superconductivity in Cuprates (2002) Book - New LaOFeAs HTS SciAm
<urn:uuid:f1129395-bf01-4275-b490-f0e0b7f5d6fc>
4.15625
14,665
Knowledge Article
Science & Tech.
62.164489
95,600,119
Karl Weierstraß and the theory of Abelian and elliptic functions

The starting point for Weierstraß's mathematical research and his career was the theory of elliptic and, more generally, Abelian functions: his final decision to devote his life to mathematics resulted from his success in finding an alternative proof of one of Abel's results on elliptic functions. Weierstraß then won recognition in the academic community through his complete solution of the prestigious Jacobi inversion problem for hyperelliptic Abelian functions in his 1854 paper. Weierstraß's contributions to the foundations of analysis, in particular to the theory of analytic functions of one and several complex variables, mainly originated in the foundational problems that he saw himself confronted with when attacking problems concerning the types of functions mentioned above. This was in particular the case after Riemann had given his view of the theory of general Abelian functions in 1857, which represented another approach to the problem than Weierstraß's but had some deficits in the details of the argumentation.

Keywords: Elliptic function, power series expansion, elliptic integral, theta series, elliptic case
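For orientation, the central object of the elliptic theory that bears Weierstraß's name is the ℘-function attached to a period lattice Λ (a standard definition, added here for context rather than quoted from the paper):

\[
\wp(z) \;=\; \frac{1}{z^{2}} \;+\; \sum_{\omega \in \Lambda \setminus \{0\}} \left( \frac{1}{(z-\omega)^{2}} - \frac{1}{\omega^{2}} \right),
\]

a doubly periodic meromorphic function whose power series expansion at 0 encodes the invariants of the lattice.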
<urn:uuid:9503b931-e04b-4d96-aac4-aa5ac122a549>
2.609375
258
Truncated
Science & Tech.
14.182805
95,600,135
This site offers new information pertaining to the Big Bang Theory. The new idea being proposed is that inflation took place in less than a tiny fraction of a second, during which space underwent an enormous expansion. Over the past 13 billion years, space has continued to expand, and at increasing rates; that is, the Universe is accelerating. Most of this expansion is driven not by the mass in protons and neutrons but by a cosmological constant and by mysterious, unknown dark matter. The site discusses how cosmologists came to these conclusions.

Jupiter Scientific. The Understanding of the History of Our Universe by Cosmologists Evolves. Jupiter Scientific, 1999. http://www.jupiterscientific.org/sciinfo/newcosmology.html (accessed 22 July 2018).
<urn:uuid:b6d398ba-5f82-4b2b-84c9-a7a84b5ee2f8>
3.328125
266
Knowledge Article
Science & Tech.
46.636458
95,600,164
Ray Radio Tomography

The problems of ray radio tomography of large-scale structures are usually formulated as follows: recover the structure of some ionospheric region from linear integrals measured along a series of rays intersecting this region. Since the sizes of large-scale irregularities, both natural (such as, e.g., the ionospheric trough) and artificial (spacecraft traces, technological emissions), are of the order of dozens to thousands of kilometers, diffraction effects can be neglected in VHF/UHF probing.

Keywords: Total electron content, initial guess, ionospheric irregularity, radio tomography, algebraic reconstruction technique
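The algebraic reconstruction technique named in the keywords is, at its core, the Kaczmarz iteration applied to the ray-integral system Ax = b, where each row of A holds the path lengths of one ray through the grid cells and b holds the measured integrals (for example, total electron content). A minimal sketch in Python; the damping factor and sweep count are illustrative assumptions, not values from the text:

import numpy as np

def art(A, b, sweeps=50, damping=0.1):
    # Kaczmarz / ART: cycle over rays, nudging the current image
    # toward the hyperplane defined by each measured line integral.
    x = np.zeros(A.shape[1])            # initial guess: empty region
    for _ in range(sweeps):
        for i in range(A.shape[0]):     # one row = one ray path
            a = A[i]
            norm2 = a @ a
            if norm2 > 0.0:
                x += damping * (b[i] - a @ x) / norm2 * a
    return x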
<urn:uuid:9c7e5427-3b9b-4eeb-948a-b279b2bf5df4>
2.53125
143
Truncated
Science & Tech.
11.560748
95,600,182
Sunday, November 25, 2012

A supernova explosion can eject radioactive titanium weighing 100 Earth masses

According to scientists, a single supernova explosion can scatter radioactive titanium into the surrounding area with a total mass exceeding 100 times the mass of our planet. These new data on how radioactive material is dispersed during a supernova explosion may help explain the mysterious processes occurring inside stars just before they undergo a catastrophic explosion, an explosion that releases the elements from which almost everything in the universe is made, from stars and planets to human beings.

The most powerful explosions of stars are the births of supernovae. In the process, heavy elements are synthesized in what is called explosive nucleosynthesis. The birth of a supernova is extremely complex and in large part still not understood; for today's scientists it holds many unknowns. However, according to astrophysicists, by monitoring supernova explosions, in particular the remnants of the explosions, one can try to understand the processes that lead to them.

In a recent study, researchers examined the remnant of supernova SN 1987A, whose explosion was observed in 1987. This supernova was located on the outskirts of the Tarantula Nebula in the Large Magellanic Cloud, a dwarf galaxy at a distance of 168,000 light-years from the solar system.

In their study, the researchers focused on the radioactive isotope titanium-44, which is created during a supernova explosion. According to computer simulations, supernovae of the same class as SN 1987A can throw up to 100 Earth masses of titanium-44 into their surroundings. Using the INTEGRAL telescope, which belongs to the European Space Agency, the scientists looked for the very specific X-ray wavelengths emitted by this particular isotope of titanium, measuring radiation levels in order to estimate the quantity of radioactive material in space.

The titanium-44 discovered in the space around the site of the SN 1987A explosion is enough for the remnant to shine about as brightly as a small sun. As a result, scientists have yet another starting point for modeling the processes that immediately precede a supernova explosion, as well as additional material for specialists working in the field of explosive nucleosynthesis.
<urn:uuid:9c933c5e-3cf0-4ee6-8500-e23fe1026948>
3.140625
611
Spam / Ads
Science & Tech.
35.545672
95,600,209
Now showing items 1-6 of 6

Urbanization alters the influence of weather and primary productivity on avian populations in the Seattle metropolitan area
Two novel, abiotic challenges that affect primary productivity, and the biodiversity dependent upon it, are urbanization and climate change. Increased levels of urbanization cause an inversely proportional decrease in primary productivity, while climate change promises concurrent changes in temperature and precipitation. ...

Quantifying vertical and horizontal stand structure using terrestrial LiDAR in Pacific Northwest forests
Abstract Quantifying vertical and horizontal stand structure using terrestrial LiDAR in Pacific Northwest forests Alexandra Kazakova Chair of Supervisory Committee: Dr. L. Monika Moskal School of Environmental and Forest Science Stand level spatial distribution is a fundamental part of forest structure that influences many ...

Challenges and Opportunities for the Development of a Sustainable Forest Sector in the Russian Far East
Twenty-three percent of global forests are contained within the Russian Federation's nine time zones, which is more than the combined forest area of Canada and Brazil. Despite the fact that Russia contains the largest area of natural forests in the world, its current share in the trade of world forest products is below 4 ...

Very large wildfires in the western contiguous United States: probabilistic models for historical and future conditions
Wildfires, especially the largest ones, can have lasting ecological and social effects both directly on the landscape and indirectly on the atmosphere and climate. Both climate and fire regimes are expected to change into the future while air quality, the composition of the atmosphere, continues to be regulated. It is necessary ...

Establishment Histories and Structural Development of Mature and Early Old-Growth Douglas-fir Forests of Western Washington and Oregon
Regeneration of tree populations following stand-replacing wildfires is an important process in the multi-century development of Douglas-fir and western hemlock forests. Temporal patterns of tree establishment in naturally regenerated, mid-aged (100 to 350 years) Douglas-fir-dominated forests have received ...

The Effect of Agricultural Riparian Buffer Width on Generalist Natural Enemy Diversity
University of Washington Abstract The Effect of Agricultural Riparian Buffer Width on Generalist Natural Enemy Diversity Matthew Chabot Maria Chair of the Supervisory Committee: Professor Sarah E. Reichard School of Environmental and Forest Sciences Wooded stream buffers support food production, from fish to farms. While ...
<urn:uuid:3448f714-e105-40fb-b201-dfa7f816f4dd>
2.625
494
Content Listing
Science & Tech.
15.371793
95,600,231
Species Detail - Purple Bar (Cosmorhoe ocellata) - Species information displayed is based on all datasets.
Terrestrial Map - 10km: Distribution of the number of records recorded within each 10km grid square (ITM).
Marine Map - 50km: Distribution of the number of records recorded within each 50km grid square (WGS84).
insect - moth
24 April (recorded in 2009)
10 October (recorded in 2010)
National Biodiversity Data Centre, Ireland, Purple Bar (Cosmorhoe ocellata), accessed 18 July 2018, <https://maps.biodiversityireland.ie/Species/78726>
<urn:uuid:3481b121-980d-4db9-b26a-ec11489b147d>
2.53125
140
Structured Data
Science & Tech.
38.202
95,600,235
These changes, driven by new wind patterns, are consistent with predictive models of global climate change, scientists said this week at the annual meeting of the American Association for the Advancement of Science. But the researchers stopped short of saying that climate change was the definitive cause. "This coming year will be important," said Jack Barth, a professor of oceanic and atmospheric sciences at Oregon State University. "If the persistent wind patterns of the last few years continue through 2007, it might be enough to tip the scales in favor of climate change as a cause for these extreme variations in our West Coast marine environment. "Our research has shown there is a 'wobble' in the Jet Stream that in some years has tended to overpower the more historic day-to-day variations in climate in favor of these two- to three-week wind patterns that influence upwelling and ultimately, ocean production." Eight scientists, including five with ties to Oregon State University, are part of a AAAS symposium, "Predicting the Unpredictable: Marine Die-Offs along the West Coast." This week, they outlined how marine ecosystems are responding to widely different climate-driven variables, beginning in 1997-98 with one of the most powerful El Nino episodes on record. During that El Niño, ocean waters off the West Coast grew warmer, nutrients decreased, biological production was reduced, and species from zooplankton to salmon disappeared, were drastically reduced or moved from their typical habitats. The El Niño capped what had been a series of years through the 1990s characterized by warm waters and weak upwelling. That regime ended abruptly in late 1998, and the California Current system entered a four-year period of cold ocean conditions, according to Bill Peterson, a NOAA oceanographer who works out of OSU's Hatfield Marine Science Center in Newport, Ore. The ecosystem response to this change, Peterson said, was immediate and dramatic. "Zooplankton stocks more than doubled in biomass, and the zooplankton community structure suddenly changed to one dominated by cold-water, lipid-rich species," Peterson said. "Salmon stocks rebounded immediately and the good conditions lasted for four years. But the cold-water period ended as quickly as it began, in late 2002, and the ecosystem began to revert to conditions seen during the 1990s." Before the change, however, the West Coast experienced an unprecedented invasion of sub-arctic water in the summer of 2002. This cold, nutrient-rich water triggered massive phytoplankton production in the surface waters, and as the organisms decayed and sank to the bottom, they sucked oxygen out of the lower water column, leading to hypoxia and marine die-offs. And though the ocean waters warmed over the next four years, the West Coast experienced hypoxia events every summer, according to Francis Chan, a senior research assistant professor at Oregon State University. "When it comes to upwelling and phytoplankton production, there can be too much of a good thing," Chan said. "Although the low-oxygen zone has varied in intensity from year to year, 2006 saw an unexpected expansion and degradation in oxygen conditions. At least 3,000 square kilometers of the continental shelf along the Oregon coast were affected. "This latest hypoxic event," he added, "was off the charts." Nature threw a different wrinkle at the California Current system in 2005, when the spring upwelling was delayed by a month. 
Winds that normally cause upwelling were absent, creating the lowest "upwelling-favorable wind stress" in 20 years. Near-shore waters were two degrees (C) warmer than average, surf zone chlorophyll levels were 50 percent of normal, and nutrient levels were reduced by one-third. Changes in water movement, triggered by the wind shifts, had a drastic effect on mussel and barnacle larvae, which decreased by 83 and 66 percent respectively. What this showed scientists is that changes to the system are multi-faceted. Large-scale changes have an imprint on the entire ecosystem, but there are surprises in local systems that may depend on the timing of winds as much as their overall strength and duration. "We used to think we could look at the wind and predict runs of salmon," Peterson said. "That's not necessarily the case. It's a lot more complex out there." Bruce Menge, an Oregon State marine ecologist, said another lesson scientists have learned is that there are ecologic winners and losers during these climatic variations. The general perception that cold water cycles are good for the ocean may be true for the open ocean environment, he said, but they can wreak havoc on near-shore communities such as kelp forests and rocky intertidal zones. And while El Niño events and warm water cycles lower ocean production in general, they also can boost near-shore food webs. "I think what we're seeing is that the Pacific Decadal Oscillation has shifted," Menge said. "The 20- to 30-year cycles are becoming less prominent than these four-year cycles. What we don't yet know is whether these last couple of four-year cycles are just blips, or the whole system has gone haywire." Oregon State University's Jane Lubchenco, a co-organizer of the West Coast variability symposium and past president of the AAAS, said the bottom line is that the dramatic events of the past few years have shown how vulnerable our oceans are to changes in overall climate – and how quickly ecosystems respond. "Wild fluctuations in the timing and intensity of the winds that drive the system are wreaking havoc with the historically rich ocean ecosystems off the West Coast," Lubchenco said. "As climate continues to change, these arrhythmias may become more erratic. Improved monitoring and understanding of the connection between temperatures, winds, upwelling and ecosystem responses will greatly facilitate our capacity to manage those parts of the system we can control."
Jack Barth | EurekAlert!
<urn:uuid:c6191555-97bb-4d56-80b1-c730ac0a750f>
3.234375
1,844
Content Listing
Science & Tech.
39.774527
95,600,274
A View from David Talbot
Playing the Odds on Tornado Warnings
Pinpoint predictions are a long way off, but taking daily odds into account might help make the public more alert.
The devastation in Moore, Oklahoma, shows the limits of sensing, modeling, and warning technologies. While some technologies promise somewhat more accurate hurricane tracks and thus sharper evacuation orders (see "A Model for Hurricane Evacuation"), tornado warnings are another story altogether (see "The Limits of Tornado Predictions"). Twisters can form, become more dangerous, and change direction in a matter of tens of seconds, wiping out one neighborhood but leaving another a quarter-mile away unscathed. And the conditions that form them often don't exist for very long. Thus, the warnings necessarily cover larger geographic areas and time periods. Forecasters are wary of issuing too many warnings, lest they be seen as crying wolf and lose effectiveness.
But if micro-scale warnings are a long way off, a macro view might help. If you can't get a street-level five-minute warning before the twister comes, at least you can know which days to be especially alert. This visualization put together by the National Oceanic and Atmospheric Administration shows, day by day and week by week, where the harsh weather is more likely to hit. And it shows just how bad late May can be in Oklahoma.
<urn:uuid:9550a054-3c6e-431f-ba6d-3c3dc0ddcead>
3.3125
319
Truncated
Science & Tech.
40.645693
95,600,296
doi:10.1038/nindia.2018.65 Published online 25 May 2018 Researchers have synthesised nanorods that can move on their own in a fluidic environment without the need for any external energy1. Chemical reactions on the surface of the nanorods trigger their motion, making them suitable for transporting tiny cargo, such as catalysts, across short distances. In separate experiments, other researchers have used external stimuli such as chemicals, magnetic fields, light and ultrasound to move tiny particles in fluid. However, the particles showed poor motility in these experiments. Scientists from the Indian Institute of Science Education and Research, Kolkata, and the Institute of Mathematical Sciences, Chennai, both in India, led by Soumyajit Roy and Ronojoy Adhikari, made the nanorods using ammonium heptamolybdate tetrahydrate, a metal-oxide-based soft material. They then tested the nanorods’ ability to move by adding hydrazine sulphate solution to the nanorods. The hydrazine solution set off chemical reactions on the surface of the nanorods. These reactions produced nitrogen gas that, in turn, generated an uneven osmotic stress on the fluid that surrounded the nanorods, triggering their motion. With an increase in hydrazine concentration, the velocity of the nanorods increased, reaching a maximum value at a critical concentration. Above the critical concentration, nanorods’ velocity began to drop. The nanorods were able to retain their motion for three days. The nanorods could clump together, creating large surface areas – a property potentially useful for making nanocarpets and nanorafts. “In the future, such nanostructures with large surface areas can be utilised to ferry different kinds of micro objects,” says Roy. 1. Mallick, A. et al. Redox reaction triggered nanomotors based on soft-oxometalates with high and sustained motility. Front. Chem. 6, 152 (2018)
<urn:uuid:f9e3963e-20d4-4955-b8a7-28cbecce3a02>
3.75
431
Truncated
Science & Tech.
27.567808
95,600,305
Photo: B. Christensen/Azote
In the world of arcades, the whack-a-mole is a classic. The game, in which players use a mallet to hit randomly appearing toy moles back into their holes, is an innocent reminder that fixing a problem in one place may only cause others to pop up elsewhere. But within sustainability, such problem solving comes with more serious consequences. Coined environmental leakage, it refers to how interventions aimed at reducing environmental pressures at one site may be locally successful, but increase pressures elsewhere. One example is how the recovery of fish stocks in Europe has led to increased fishing pressure in West African waters. Another is how improved regulations of Chinese and European forests have led to deforestation in the tropics due to increased Chinese and European biomass imports. This not only has global environmental consequences but social ones as well, since people's livelihoods in those distant places are often negatively impacted.
An "out-of-sight-out-of-mind" approach can mean big problems when dealing with complex social-ecological challenges and can put into question well-intended place-based sustainability practices. In a study recently published in the journal Environmental Research Letters, centre researcher Tim Daw, together with a team of colleagues from Spain, Canada, Germany, the Netherlands and the UK, shows how ecosystem assessments often overlook what they describe as "distant, diffuse and delayed" impacts. These impacts, termed "off-stage ecosystem service burdens" by Tim Daw and his colleagues, may be critical for global sustainability. To succeed, these burdens must be better recognised and incorporated in ecosystem assessments such as those led by the Intergovernmental Platform on Biodiversity and Ecosystem Services and the Intergovernmental Panel on Climate Change.
Some policies do recognise environmental leakage of particular impacts, for example where protection of a coral reef from fishing leads to more fishing in neighbouring sites. However, 'off-stage burdens' also include impacts that differ from the 'on-stage impacts': for example, where people displaced from fishing revert to activities that cause other types of environmental impacts, such as diffuse pollution or climate-changing emissions. Off-stage burdens ultimately impact people's quality of life, but these are often in distant populations or even future generations. As such they are difficult to measure and are generally outside the scope of most environmental policies and ecosystem service assessments.
"We've made great progress in understanding how human wellbeing is affected by ecosystems in many different places. But in our tightly interconnected world, we really can't achieve any sustainability transition unless we take better account of 'off-stage burdens' that are felt elsewhere or in the future," says Tim Daw, co-author.
Lead author Unai Pascual from the Basque Centre for Climate Change argues that neglecting these off-stage burdens may jeopardise achieving the Sustainable Development Goals. "For global sustainability to be achieved, assessments and policies need to account for impacts on ecosystems and people across sites and scales. The lack of attention to off-stage burdens is partly because of the methodological difficulties and costs involved in systematically addressing them and the absence of effective institutions.
But also because they have not been recognized as important components in ecosystem assessment frameworks," he says.
In the study, Pascual and his colleagues suggest various ways for science and decision-makers to deal with these "burdens" in ecosystem assessments. Fundamentally, there is a need to merge work on environmental impacts and risk analysis with ecosystem service assessments across time and space. This then must be converted into relevant policy action. In addition, we can measure and visualise burdens by using existing concepts such as 'virtual water' which, for example, captures how consuming imported goods in one place impacts water supplies in regions where these goods are produced.
Pascual and his colleagues argue that "off-stage ecosystem service burdens may well be an inconvenient truth, but neglecting them does not help to avoid their impacts". In fact, ignoring them will hamper science-policy efforts for transitioning towards global sustainability and for living within critical global biophysical boundaries, they argue.
"Our planet is a large, coupled human and natural system consisting of many smaller coupled social-ecological systems linked in complex ways through flows of information, matter and energy. These unprecedented interdependencies link the management of ecosystems and the wellbeing of people in distant places," the authors conclude.
Pascual, U., Palomo, I., Adams, W., Chan, K., Daw, T., Garmendia, E., Gómez-Baggethun, E., de Groot, R., Mace, G., Martín-López, B., Phelps, J. (2017). Off-stage ecosystem service burdens: A blind spot for global sustainability. Environmental Research Letters. https://doi.org/10.1088/1748-9326/aa7392
Tim Daw studies the interaction between ecological and social aspects of coastal systems and how these contribute to human wellbeing and development.
<urn:uuid:8daa0447-0748-46ef-9aea-bcf446222c14>
3.25
1,310
News (Org.)
Science & Tech.
27.254488
95,600,334
in Python Introduction to Classes -17, I am stuck... It says an error message like below.
"Oops, try again. Did you create an instance of Triangle called my_triangle?"
I can't figure out why... This is the code I wrote below.

class Triangle(object):
    number_of_sides = 3
    def __int__(self, angle1, angle2, angle3):
        self.angle1 = angle1
        self.angle2 = angle2
        self.angle3 = angle3
    def check_angle(self):
        if self.angle1 + self.angle2 + self.angle3 == 180:
            return True
        else:
            return False

my_triangle = Triangle(90,30,60)
print my_triangle.number_of_sides
print my_triangle.check_angles()

Please give me some advice
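For reference, the code fails for two reasons visible in the post itself: the constructor is named __int__ instead of __init__, so Triangle(90, 30, 60) raises a TypeError before my_triangle is ever created, and the method is defined as check_angle but called as check_angles. A corrected sketch (Python 2, to match the print statements used above):

class Triangle(object):
    number_of_sides = 3

    def __init__(self, angle1, angle2, angle3):  # was __int__
        self.angle1 = angle1
        self.angle2 = angle2
        self.angle3 = angle3

    def check_angles(self):  # name must match the call below
        # A valid triangle's interior angles sum to 180 degrees.
        return self.angle1 + self.angle2 + self.angle3 == 180

my_triangle = Triangle(90, 30, 60)
print my_triangle.number_of_sides   # 3
print my_triangle.check_angles()    # True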
<urn:uuid:cae712b5-efe2-42f9-8e30-1cb4bddfd37d>
3.109375
182
Comment Section
Software Dev.
64.229543
95,600,335
- Paper report - Open Access
A census of E. coli sequence repeats
- Rachel Brem
© BioMed Central Ltd 2000
Received: 1 February 2000 Published: 27 April 2000
A statistical analysis of simple repeat motifs in DNA across the Escherichia coli genome is described.
Significance and context
Many genomes contain kilobases of DNA sequence that do not code for protein or RNA. Much of this DNA is simple sequence repeats (SSRs), long repeats of simple nucleotide patterns. Recent biochemical work suggests that SSRs might have a role in the regulation of gene expression, histone binding or DNA replication. Because the environment of one base in an SSR is a lot like that of its neighbors, the DNA replication machinery can confuse one base with another, skip a base or repeat replication of a base. As a consequence, SSRs are more likely to mutate than are other DNA sequences. Here, Gur-Arie et al. identify putative SSRs in the genome of E. coli. They then compare SSR sequences from several strains of E. coli, and report the polymorphisms - the sequence differences - between strains. The proposed SSRs are of interest as the basis for further biochemical and diagnostic work with bacteria.
Gur-Arie et al. first identify tracts of one-, two-, three- and four-nucleotide patterns in the E. coli genome that might be genuine SSRs. They weed out artifacts from this set in two ways. First, they compare against control genomes. These are computer-generated random strings of sequence with the same nucleotide composition as the real E. coli genome. From this experiment they find several thousand mononucleotide repeats of up to eight bases (for example, AAAAAA) and about 2300 trinucleotide repeats (for example GAT) which appear more often in the E. coli genome than expected by chance. A and T appear more often in these repeats than G and C. The putative SSRs were not randomly distributed in the genome. Many appeared in noncoding regions; of these, 50% lay within 200 bases of the start site of a gene. Finally, the authors compare the sequences of 14 SSRs among 23 strains of E. coli. They find four mononucleotide SSRs that vary in length among the strains, some by as many as six bases.
The authors' newly developed SSR-identifying software is available by FTP (named ssr.exe). The authors obtained the E. coli genome from the Escherichia coli WWW homepage.
The authors hypothesize that, as has been found in other organisms, SSRs in E. coli might have a role in the rapid mutation of the regulation of gene expression or histone binding. They postulate that the SSRs they find close to genes are good candidates for such regulators. They also suggest that the polymorphisms they have discovered might be used to make probes to pick out infectious strains of E. coli, for example in screening packaged food for contamination.
The list of new SSRs is a useful starting point for constructing hypotheses to be tested by experiment. Gur-Arie et al. argue that the SSRs they find near gene sequences are biologically important, but this needs to be confirmed by mutational analysis in vitro. Similarly, their polymorphism results might be diagnostically useful as probes to distinguish between E. coli strains, but probes to long repeat sequences can cross-react, so it might be difficult to put this idea into practice.
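The repeat scan itself is simple to prototype. A minimal Python sketch of the kind of search described (this is not the authors' ssr.exe; the motif lengths and minimum copy number are illustrative assumptions):

import re

def find_ssrs(genome, unit_lengths=(1, 2, 3, 4), min_repeats=3):
    # Tandem repeats of short motifs: a captured unit of k bases
    # followed by at least (min_repeats - 1) further copies of itself.
    hits = []
    for k in unit_lengths:
        pattern = re.compile(r"([ACGT]{%d})\1{%d,}" % (k, min_repeats - 1))
        for m in pattern.finditer(genome):
            hits.append((m.start(), m.group(1), len(m.group(0)) // k))
    return hits  # (position, motif, copy number)

print(find_ssrs("GGAAAAAACCGATGATGATCC"))
# [(2, 'A', 6), (2, 'AA', 3), (10, 'GAT', 3)]

Note that the same poly-A tract is reported at several unit lengths; real SSR tools deduplicate such overlapping calls before counting repeats.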
<urn:uuid:561250dc-ed3d-4dd6-bed2-742f5bd3ff59>
2.984375
731
Academic Writing
Science & Tech.
48.654669
95,600,336
The Lengths of the Quaternion and Octonion Algebras
The classical Hurwitz theorem claims that there are exactly four normed algebras with division: the real numbers (ℝ), complex numbers (ℂ), quaternions (ℍ), and octonions (𝕆). The length of ℝ as an algebra over itself is zero; the length of ℂ as an ℝ-algebra equals one. The purpose of the present paper is to prove that the lengths of the ℝ-algebras of quaternions and octonions equal two and three, respectively.
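The term "length" here has a standard meaning in the literature on generating sets of algebras: the smallest k such that, for every generating set S, products of at most k elements of S (together with 1) span the algebra. Assuming that definition (it is not restated in the excerpt), the lower bound for ℍ can be checked directly:

\[
S=\{i,j\}: \qquad \operatorname{span}\{1,\,i,\,j\} \subsetneq \mathbb{H}, \qquad \operatorname{span}\{1,\,i,\,j,\,ij\} = \operatorname{span}\{1,i,j,k\} = \mathbb{H},
\]

so words of degree 1 in this generating set do not suffice, and the length of ℍ is at least 2; the paper's result is that it equals 2.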
<urn:uuid:b7ee8263-9867-4dae-8cb6-f9078779af7f>
2.921875
152
Truncated
Science & Tech.
58.541034
95,600,337
It may sound counter-intuitive, but adding drops of humble old water into solid materials can actually increase their strength and change their other properties in interesting and useful ways.
Researchers from Yale have discovered that adding droplets of water into some solids can strengthen and stiffen them, by virtue of the droplets' surface tension. The force is often observed in fluids, and its role is typically to reduce the surface area of the material. Embedding small droplets of water, each around a micron in size, into solids means they can make use of this force. It's a kind of restoring force, always attempting to keep the droplets as small as possible and so in turn resisting extension, stiffening the lump of water-infused material.
Eric Dufresne, the researcher behind the work, explains to The Speaker:
"As the solid gets stiffer, the liquid droplets need to be smaller in order to have this stiffening or cloaking effect. By embedding the solid with droplets of different materials, one can give it new electrical, optical or mechanical properties. On the simple scale, they could lower the cost by replacing expensive polymers with simple liquids. More excitingly, embedded droplets could provide an electromagnetic handle to actuate structures."
So far, the team has shown that it works with silicone, making it up to 30 per cent stronger by adding water droplets. Obviously it still needs to be tested with other materials, but this is certainly a neat and affordable method for tweaking material properties.
[Nature Physics via The Speaker]
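A rough scaling argument makes the size dependence plausible (an illustrative back-of-the-envelope estimate, not taken from the paper): a droplet of radius R with surface tension γ resists deformation with a capillary stress of order γ/R, which matters only when it is comparable to the solid's elastic modulus E. With water's γ ≈ 0.07 N/m and a soft silicone of E ≈ 10^5 Pa (an assumed value),

\[
\frac{\gamma}{R} \gtrsim E \quad\Longrightarrow\quad R \lesssim \frac{\gamma}{E} \approx \frac{0.07\ \mathrm{N\,m^{-1}}}{10^{5}\ \mathrm{Pa}} = 0.7\ \mu\mathrm{m},
\]

which is consistent with the micron-sized droplets described above.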
<urn:uuid:507258da-0da0-41dd-aae5-1e95b59ae579>
3.96875
319
Truncated
Science & Tech.
36.33369
95,600,338
Walking with my son at 3:14pm the other day, I mentioned to him, “Hey, it’s Pi Time”. My son knows 35 digits of $\pi$ (don’t ask), and knows that it’s transcendental. He replied, “is it exactly $\pi$ time?” This led to a discussion about whether there is ever a time each afternoon that is exactly $\pi$, meaning 3:14:15.926535… This feels like some kind of Zeno’s Paradox. I told him that (assuming time is continuous) it had to be $\pi$ time at some point between 3:14:00 and 3:15:00, but the length of that moment was 0. However, this discussion left him confused. Can anyone suggest a good way to explain this to a child? Pi time is not like a concrete slab on the sidewalk that you can stand on; it’s not even like the crack between the slabs, which have a width. It is like the line precisely down the middle of that crack. When you’re walking on the sidewalk, you cross right over it without stopping on it. And so is every other precise time: like exactly noon or midnight. Of course, the analogy fails a little bit because when you stop on the sidewalk, you cover a whole range of positions. As far as we can tell in everyday life, that’s not true of time… not that we can stop in time, anyway… That is how I would explain it to a non-mathematician.
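One way to make the "length 0" remark concrete is with a little measure arithmetic (standard, not from the post): the clock spends a positive amount of time in any window around π o'clock, but the window can be shrunk without limit,

\[
\lambda\big((\pi-\varepsilon,\ \pi+\varepsilon)\big) = 2\varepsilon \xrightarrow{\ \varepsilon\to 0\ } 0, \qquad \lambda\big(\{\pi\}\big)=0,
\]

so the clock passes through the exact instant without resting on it for any positive duration.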
<urn:uuid:3022d89f-b72d-4469-8110-5d1971756563>
2.53125
340
Personal Blog
Science & Tech.
74.376942
95,600,344
As MIT professor Sara Seager explained, NASA's new satellite will be the ideal tool for discovering which new exoplanets we should be studying next. "TESS will cast a wider net than ever before for enigmatic worlds whose properties can be probed by NASA's upcoming James Webb Space Telescope and other missions," she added.
On Monday, April 16, 2018, NASA will launch their new Transiting Exoplanet Survey Satellite (TESS) to search for alien planets orbiting distant stars. Much like NASA's Kepler space observatory, TESS will use its high-spec tech to pinpoint undiscovered planets. More than 500,000 stars will come under its gaze during its two-year lifespan. "We expect TESS will discover a number of planets whose atmospheric compositions, which hold potential clues to the presence of life, could be precisely measured by future observers."
The Transiting Exoplanet Survey Satellite (TESS), the heir to NASA's Kepler exoplanet mission throne, is set to orbit Earth while pointing its viewfinders out to space. "There's no science that will tell us life is out there right now, except that small rocky planets appear to be incredibly common," MIT exoplanet hunter Sara Seager said.
The TESS satellite will be launched on a SpaceX Falcon 9 rocket and will hunt for new exoplanets to determine if they could harbor life. NASA's Kepler spacecraft used the same method to spot more than 2,600 confirmed exoplanets, the majority orbiting faint stars 300 to 3,000 light-years away. The spacecraft will scan our solar neighborhood looking for stars that exhibit "temporary drops in brightness caused by planetary transits", which is a sign that a previously undiscovered planet may be crossing in front of a star, NASA's website explains.
"We learned from Kepler that there are more planets than stars in our sky, and now TESS will open our eyes to the variety of planets around some of the closest stars." TESS will also be primed to identify the worlds circling red dwarfs, the small, dim stars that make up roughly three-quarters of the stars in the sky. The satellite will spend full 13.7-day orbits observing a segment, then move on to the next one.
Now it's TESS's turn to take Kepler's discoveries one step further and narrow down the possible contenders for the next Planet Earth. That's because Ricker's team designed a new kind of orbit, a highly elliptical 13.7-day trip that allows the spacecraft to avoid damage from Earth's Van Allen radiation belts while also bringing it close enough to regularly send back loads of image data. But since the 13 observation strips in each hemisphere overlap at the poles, TESS will have eyes on both the northern and southern polar skies for almost a year at a time.
"TESS is going to essentially provide the catalog, like the phone book, if you will, of all the best planets for following up, for looking at their atmospheres and studying more about them."
<urn:uuid:0afda5ff-264b-4b20-afe5-7c26ab8a09f6>
2.59375
813
News Article
Science & Tech.
51.881433
95,600,348
Delicate and translucent as a puff of air, yet mechanically stable, flexible, and possessing amazing heat-insulation properties—these are the properties of a new aerogel made of cellulose and silica gel. Researchers led by Jie Cai have introduced this novel material, which consists almost completely of air, in the journal Angewandte Chemie.
Gels are familiar to us in forms like Jell-O or hair gel. A gel is a loose molecular network that holds liquids within its cavities. Unlike a sponge, it is not possible to squeeze the liquid out of a gel. An aerogel is a gel that holds air instead of a liquid. For example, aerogels made from silicon dioxide may consist of 99.98 % air-filled pores. This type of material is nearly as light as air and is translucent like solidified smoke. In addition, it is not flammable and is a very good insulator—even at high temperatures. One prominent application for aerogels was the insulation used on space shuttles. Because of their extremely high inner surface area, aerogels are also potential supports for catalysts or pharmaceuticals. Silica-based aerogels are also nontoxic and environmentally friendly.
The researchers at Wuhan University (China) and the University of Tokyo (Japan) have now developed a special composite aerogel from cellulose and silicon dioxide. They begin by producing a cellulose gel from an alkaline urea solution. This causes the cellulose to dissolve, and to regenerate to form a nanofibrillar gel. The cellulose gel then acts as a scaffold for the silica gel prepared by a standard sol–gel process, in which a dissolved organosilicate precursor is cross-linked, gelled, and deposited onto the cellulose nanofibers. The resulting liquid-containing composite gel is then dried with supercritical carbon dioxide to make an aerogel.
The novel composite aerogel demonstrates an interesting combination of advantageous properties: mechanical stability, flexibility, very low thermal conductivity, semitransparency, and biocompatibility. If required, the cellulose part can be removed through combustion, leaving behind a silicon dioxide aerogel. The researchers are optimistic: "Our new method could be a starting point for the synthesis of many new porous materials with superior properties, because it is simple and the properties of the resulting aerogels can be varied widely."
Angewandte Chemie International Edition, Permalink to the article: http://dx.doi.org/10.1002/anie.201105730
Jie Cai | Angewandte Chemie
<urn:uuid:eec6763d-215c-4026-98ad-13aa92860985>
3.484375
1,182
Content Listing
Science & Tech.
33.656187
95,600,356
About the e-Book Python Programming for Teens pdf
If you want to learn how to program in Python, one of today's most popular computer programming languages, PYTHON PROGRAMMING FOR TEENS is the perfect first step. Written by teacher, author, and Python expert Kenneth Lambert, this book will help you build a solid understanding of programming and prepare you to make the jump to other languages and more advanced instruction. In PYTHON PROGRAMMING FOR TEENS, you will learn problem solving, program development, the basics of using classes and objects, and more. Special topics include 2-D geometry, fractals, animations, and recursion. The book's topics are illustrated using turtle graphics, a system that provides graphical output from programs and makes learning more fun. Get started programming today with PYTHON PROGRAMMING FOR TEENS.
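To give a flavor of the turtle-graphics-and-recursion combination the description mentions, here is a tiny sketch using Python's standard turtle module (an illustrative snippet, not an excerpt from the book):

import turtle

def tree(t, length, depth):
    # Draw a fractal tree: a trunk, then two smaller trees at angles.
    if depth == 0:
        return
    t.forward(length)
    t.left(30)
    tree(t, length * 0.7, depth - 1)   # left branch (recursion)
    t.right(60)
    tree(t, length * 0.7, depth - 1)   # right branch
    t.left(30)                         # restore heading
    t.backward(length)                 # return to the branch base

pen = turtle.Turtle()
pen.left(90)                           # point the turtle upward
tree(pen, 80, 6)
turtle.done()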
<urn:uuid:8fb8ce9f-85a7-42c9-8e39-efe16ccefdf7>
3.515625
281
Product Page
Software Dev.
46.445302
95,600,364
Posted 2 years, 5 months and 3 days ago

Following web standards is very important for projects that are intended to be used as web applications over the World Wide Web (WWW) and the internet. Standards make things easier for everyone, developers and users alike, in various ways: they improve the speed and quality of the application, ease conversion to various formats, simplify Google indexing, and so on.

For developers, following web standards makes development easier, and the code can be clearly understood by anyone else who needs to work on or review it. It results in few (or no) browser-compatibility issues, so developers need not code for each browser separately. The development process is simplified because it follows globally understood coding patterns. Writing code to web standards is also cost-effective: there is a notable decrease in work hours compared with similar development done without web standards. The benefits show even more clearly in the long run for bigger projects that must be kept under support and maintenance.

For users, an application built on web standards provides a better user experience. It renders consistently in all browsers, so users viewing it in different browsers will not face user interface (UI) issues. This improves the performance of the web application, resulting in an increased user base and a great deal of traffic, leading to the eventual success of the application.

Organizations involved in standardization

Web standards are universally accepted across communities and regions and are not owned by any individual or organization. They are formal, technical specifications that define and describe the features and facts of the World Wide Web. These standards are built and enforced for the betterment and coexistence of different technologies. A few organizations formulate, maintain, and support these standards on the sole idea of the coexistence of different useful technologies; some of them are listed below:
<urn:uuid:47518323-aca5-4e75-9a94-d48277870d2e>
3.0625
378
Knowledge Article
Software Dev.
27.304967
95,600,379
The new study shows that the oil content of sediments is highest closest to the seeps and tails off with distance, creating an oil fallout shadow. It estimates the amount of oil in the sediments down-current from the seeps to be the equivalent of approximately 8-80 Exxon Valdez oil spills. The paper is being published in the May 15 issue of Environmental Science & Technology.

"Farwell developed and mapped out our plan for collecting sediment samples from the ocean floor," said WHOI marine chemist Chris Reddy, referring to lead author Chris Farwell, at the time an undergraduate working with UCSB's Dave Valentine. "After conducting the analysis of the samples, we were able to make some spectacular findings."

There is an oil spill every day at Coal Oil Point (COP), the natural seeps off Santa Barbara, California, where 20-25 tons of oil have leaked from the seafloor each day for the last several hundred thousand years. Earlier research by Reddy and Valentine at the site found that microbes were capable of degrading a significant portion of the oil molecules as they traveled from the reservoir to the ocean bottom, and that once the oil floated to the sea surface, about 10 percent of the molecules evaporated within minutes.

"One of the natural questions is: What happens to all of this oil?" Valentine said. "So much oil seeps up and floats on the sea surface. It's something we've long wondered. We know some of it will come ashore as tar balls, but it doesn't stick around. And then there are the massive slicks. You can see them, sometimes extending 20 miles from the seeps. But what really is the ultimate fate?"

Based on their previous research, Valentine and Reddy surmised that the oil was sinking "because this oil is heavy to begin with," Valentine said. "It's a good bet that it ends up in the sediments because it's not ending up on land. It's not dissolving in ocean water, so it's almost certain that it is ending up in the sediments."

To conduct their sampling, the team used the research vessel Atlantis, the 274-foot ship that serves as the support vessel for the Alvin submersible. "We were conducting research at the seeps using Alvin during the summer of 2007," recalls Reddy. "One night during that two-week cruise, after the day's Alvin dive was complete and its crew prepared the sub for the next day's dive, Captain AD Colburn guided the Atlantis on an all-night sediment sampling campaign. It was no easy task for the crew of the Atlantis. We were operating at night, awfully close to land with a big ship where hazards are frequent. I tip my hat to Captain Colburn, his crew, and the shipboard technician for making this sampling effort so seamless."

The research team sampled 16 locations in a 90 km² (35 square mile) grid starting 4 km west of the active seeps. Sample stations were arranged in five longitudinal transects with three water depths (40, 60, and 80 m) for each transect, with one additional comparison sample obtained from within the seep field. To be certain that the oil they measured in the sediments came from the natural seeps, Farwell worked in Reddy's lab at WHOI using a comprehensive two-dimensional gas chromatograph (GC×GC), which allowed them to identify specific compounds in the oil that can differ depending on where the oil originates. "The instrument reveals distinct biomarkers or chemical fossils -- like bones for an archeologist -- present in the oil.
These fossils were a perfect match for the oil from the reservoir, the oil collected leaking into the ocean bottom, oil on the sea surface, and oil back in the sediment. We could say with confidence that the oil we found in the sediments was genetically connected to the oil reservoir and not from an accidental spill or runoff from land."

The oil that remained in the sediments represents what was not removed by "weathering" -- dissolving into the water, evaporating into the air, or being degraded by microbes. Next steps for this research team involve investigating why microbes consume most, but not all, of the compounds in the oil. "Nature does an amazing job acting on this oil but somehow the microbes stopped eating, leaving a small fraction of the compounds in the sediments," said Reddy. "Why this happens is still a mystery, but we are getting closer."

Support for this research came from the Department of Energy, the National Science Foundation, and the Seaver Institute.

The Woods Hole Oceanographic Institution is a private, independent organization in Falmouth, Mass., dedicated to marine research, engineering, and higher education. Established in 1930 on a recommendation from the National Academy of Sciences, its primary mission is to understand the oceans and their interaction with the Earth as a whole, and to communicate a basic understanding of the oceans' role in the changing global environment.

WHOI Media Relations | Newswise Science News
<urn:uuid:3ea2670d-d6ad-4c29-a41f-7f18ff062791>
3.4375
1,675
Content Listing
Science & Tech.
46.026246
95,600,396
Chemistry & Environmental Dictionary

Quantum - Quantum Number

Quantum: The tiniest amount of physical energy that can exist independently, especially a finite amount of electromagnetic radiation; the basic unit of electromagnetic energy.

Quantum (wave) Mechanics: A branch of physics that describes the wave properties of subatomic particles mathematically.

Quantum Number: This characterizes the wave properties of electrons, as distinct from their particulate properties; the principal quantum number determines the principal energy level of an electron.

Citing this page

If you need to cite this page, you can copy this text:

Kenneth Barbalace. Chemistry & Environmental Dictionary - Quantum - Quantum Number. EnvironmentalChemistry.com. 1995 - 2018. Accessed on-line: 7/15/2018

Linking to this page

If you would like to link to this page from your website, blog, etc., copy and paste this link code and modify it to suit your needs:

<a href="https://EnvironmentalChemistry.com/yogi/chemistry/dictionary/Q01.html">Chemistry & Environmental Dictionary (EnvironmentalChemistry.com)</a> - Contains definitions for most chemistry, environmental and other technical terms used on EnvironmentalChemistry.com, as well as many other chemistry and environmental terms.

NOTICE: While linking to articles is encouraged, OUR ARTICLES MAY NOT BE COPIED TO OR REPUBLISHED ON ANOTHER WEBSITE UNDER ANY CIRCUMSTANCES. PLEASE, if you like an article we published, simply link to it on our website; do not republish it.
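The entries above are purely qualitative. The standard quantitative statement behind "the basic unit of electromagnetic energy" is the Planck relation E = hf; the following minimal sketch uses ordinary physical constants and an example frequency that are not taken from the dictionary page:

```python
# Photon energy from the Planck relation E = h * f.
PLANCK_H = 6.62607015e-34   # Planck constant in joule-seconds (exact SI value)

def photon_energy_joules(frequency_hz):
    """Energy carried by a single quantum of light at the given frequency."""
    return PLANCK_H * frequency_hz

# One quantum of green light (~5.45e14 Hz) carries roughly 3.6e-19 joules.
print(f"E = {photon_energy_joules(5.45e14):.3e} J")
```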
<urn:uuid:912a8923-46d9-47b3-a0b1-860014cc62a2>
2.578125
330
Structured Data
Science & Tech.
23.591142
95,600,413
Extreme Weather and Flood Forecasting and Modelling for Eastern Tana Sub Basin, Upper Blue Nile Basin, Ethiopia

Received Date: Aug 13, 2016 / Accepted Date: Aug 31, 2016 / Published Date: Sep 07, 2016

River flooding is a natural disaster that occurs each year in the Fogera floodplain, causing enormous damage to human life and property. Overflow of the Ribb and Gummara rivers and backwater effects from Lake Tana have affected and displaced thousands of people since 2006. Heavy rainfall over several days in the upstream part of the catchment causes the rivers to spill and inundate the floodplain. Three models were used in this research: the numerical weather prediction model WRF, the physically based, semi-distributed hydrological model SWAT, and the LISFLOOD-FP 1D/2D hydrodynamic model, to forecast extreme weather and to forecast and model the resulting flood. Daily rainfall, maximum temperature, and minimum temperature for the forecast period range from 0 to 95.8 mm, 18°C to 28°C, and 9°C to 18°C, respectively. The maximum forecast flows of the Ribb and Gummara rivers are 141 m³/s and 185 m³/s, respectively. The flood extent for the forecast period is 329 km²; depth ranges from 0.01 m to 3.5 m; and velocity ranges from 0 to 2.375 m/s. This technique has shown itself to be an effective way of forecasting and modeling floods: integrating a rainfall-runoff model with a hydrodynamic model provides a good alternative for flood forecasting and modeling.

Keywords: SWAT; LISFLOOD; WRF; Extreme weather; Forecasting and modeling

Weather-related disasters are increasing in intensity and are expected to increase further with climate change. Approximately 70% of all disasters occurring in the world are related to hydro-meteorological events. Death and destruction due to flooding continue to be all too common throughout the world: floods affect millions of people annually, account for about a third of all natural disasters, and are responsible for more than half of the fatalities. Scientists agree that changes in the Earth's climate will hit developing countries like Ethiopia first and hardest, because their economies are strongly dependent on crude forms of natural resources and their economic structures are less flexible in adjusting to such drastic changes.

In Ethiopia, floods are common, occurring throughout the country with varying timing and magnitude. Flood disasters arise when rivers overflow or burst their banks and inundate the downstream floodplain; large-scale (riverine) flooding is common in the flat lowland parts of the country and is driven by high-intensity rainfall in the highlands. As recently as 2006, flooding occurred in almost all parts of the country, and the Lake Tana region remains one of the areas regularly inundated. In spite of the recurrent flood problem, the existing disaster management mechanism has focused primarily on strengthening rescue and relief arrangements during and after flood disasters. Little scientific work has been done on minimizing the incidence and extent of flood damage, yet there is a clear need to forecast extreme weather and the disasters it triggers. Hence, it is essential to forecast and model extreme-weather-related disasters in order to protect human life and property.
Therefore, the objective of this study is to forecast extreme weather and floods, and to evaluate the applicability of the integrated WRF-SWAT-LISFLOOD-FP model chain for forecasting flooding in the Fogera floodplain, Eastern Tana sub-basin.

Materials and Methods

The study was conducted in the upper Blue Nile part of Ethiopia, in the Amhara Region, South Gondar Zone. Geographically, the area lies between 10°57' and 12°47'N and 36°38' and 38°14'E (Figure 1). It has an areal extent of about 4174.33 km², is drained by the Ribb and Gummara rivers, and lies nearly 600 km from Addis Ababa. It is characterized by diverse geographic features: floodplain, high mountainous land with a cold climate (Mount Guna), plateau, and rivers. The basin's topography ranges from 1783 m near Lake Tana up to 4089 m above mean sea level on Mount Guna. The climate is tropical highland monsoon. The seasonal distribution of rainfall is controlled by the northward and southward movement of the Inter-Tropical Convergence Zone (ITCZ): moist air masses are driven in from the Atlantic and Indian Oceans during summer (June-September), while during the rest of the year the ITCZ shifts southward and dry conditions persist in the region between October and May.

The data set: Time-series daily rainfall and temperature data for the selected stations from 1951-2014 were obtained from the National Meteorological Agency of Ethiopia (NMA). The other variables (evapotranspiration, solar radiation, wind speed, and relative humidity) were simulated with the SWAT weather generator. Similarly, daily streamflow data for the Ribb and Gummara rivers for 1973 to 2014 were obtained from the Ethiopian Ministry of Water, Irrigation and Energy. A land-use image with 30 × 30 m spatial resolution was downloaded from the Landsat 8 OLI sensor (Path 169, Row 52, acquired 01/02/2014) and reclassified using the supervised maximum-likelihood classification method in a GIS. In addition, soil data were extracted from the Blue Nile Basin soil data set (Soil90) obtained from the Ministry of Water, Irrigation and Energy of Ethiopia (MoWIE). River cross-section data for the Ribb and Gummara rivers and survey data for the Fogera floodplain were obtained from the Tana Sub Basin Office (TaSBO), as collected by MoWIE. River widths were obtained by digitizing from the Esri high-resolution World Imagery basemap (resolution 1 m and better, down to 15 cm and 60 cm) in an ArcGIS map window.

Extreme weather forecasting

Extreme weather was forecast for the entire period of August 20, 2006-September 10, 2006 using the WRF numerical weather prediction model. Three nested domains were selected: Ethiopia (45 km), the northern part of the country (15 km), and Fogera (5 km), assuming that 1 degree ≈ 111 km near the equator. The model handled the three domains at the same nest level (no overlapping nests) and/or at three nest levels (telescoping). The nesting ratio for WRF-ARW is three, so the grid spacing of each nest was 1/3 that of its parent. The initial and lateral boundary conditions and the meteorological and terrestrial gridded data used to run the WRF-ARW model were downloaded from the Global Forecast System (GFS) produced by the National Centers for Environmental Prediction (NCEP), which is updated every six hours.
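As a quick check of the nesting arithmetic described above (the 3:1 parent/child ratio and the 1 degree ≈ 111 km rule of thumb), a minimal sketch; the function name and structure are our own illustration:

```python
# Nested WRF domain spacings for a fixed 3:1 parent/child grid ratio.
KM_PER_DEGREE = 111.0   # rough length of one degree of latitude at the equator
PARENT_RATIO = 3        # WRF-ARW nesting ratio used in the study

def nest_spacings(outer_km, levels):
    """Grid spacing in km for each telescoping nest level."""
    return [outer_km / PARENT_RATIO ** i for i in range(levels)]

spacings = nest_spacings(45.0, 3)                  # -> [45.0, 15.0, 5.0]
print(spacings)                                    # the three domains above
print([round(s / KM_PER_DEGREE, 3) for s in spacings])  # ~[0.405, 0.135, 0.045] degrees
```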
The real data were interpolated for the NWP run using the WRF Preprocessing System (WPS). The WRF model (ARW dynamical core) was then initialized for real-data numerical integration. The WRF-ARW output was post-processed and visualized using the Grid Analysis and Display System (GrADS). From the model output, a few weather variables were selected and used as SWAT model input (Figure 2).

A conceptual, physically based, continuous-time SWAT model was employed to simulate streamflow. The SWAT (Soil and Water Assessment Tool) model was developed by the USDA (U.S. Department of Agriculture) ARS (Agricultural Research Service) and represents a continuation of roughly 40 years of modeling efforts. SWAT is a public-domain, watershed-scale model developed to predict the effects of land management on water, sediment, nutrients, pesticides, and agricultural chemicals in small to large, complex basins. It is a physically based, semi-distributed-parameter model with robust hydrologic and pollution components that has been successfully applied in a number of watersheds. SWAT is widely used to simulate watershed hydrology, water quality, sediment yield, and plant growth in relation to watershed management practices, but it has also been applied to flow forecasting. SWAT can forecast watershed flow, although it has performed below artificial neural network (ANN) models in comparisons. SWAT hydrological modeling has been used in flash flood forecasting driven by three-day weather forecasts from numerical weather prediction (NWP); NWP data can be used with the SWAT model and provide relatively sound results. SWAT has also been coupled with HEC-RAS to predict flood hazard areas and to assess damage reduction from flood inundation and sediment [11,12].

The major components of SWAT are climate, hydrology, erosion, land cover and plant growth, nutrients, pesticides, and land management. SWAT was used to simulate the hydrologic processes of the study watershed. Simulation of watershed hydrology can be separated into two major divisions: the land phase of the hydrologic cycle and the water (routing) phase. For the land phase, the hydrologic cycle is based on the water balance equation:

$$SW_t = SW_0 + \sum_{i=1}^{t}\left(R_i - Q_{surf,i} - E_{a,i} - w_{seep,i} - Q_{gw,i}\right) \qquad (1)$$

where SW_t is the final soil water content (mm), SW_0 the initial soil water content (mm), t the time (in days, months, or years), R_i the amount of rainfall in period i (mm), Q_surf,i the surface runoff in period i (mm), E_a,i the evapotranspiration in period i (mm), w_seep,i the water entering the vadose zone from the soil profile in period i (mm), and Q_gw,i the return (base) flow in period i (mm).

Surface runoff: Also known as overland flow, this is the part of the rainfall, in excess of infiltration, that flows along the slopes. SWAT uses the Soil Conservation Service (SCS) curve number (CN) method to calculate surface runoff:

$$Q_{surf} = \frac{(R - I_a)^2}{R - I_a + S} \qquad (2)$$

where S is the soil storage or retention parameter and I_a the initial abstraction, which includes surface storage, interception, and infiltration into a moist soil surface prior to runoff generation, all in mm of water.
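As a numerical illustration of the curve-number method, here is a minimal sketch using the common assumption I_a = 0.2S, which equation (4) below makes explicit; the CN value chosen is purely illustrative:

```python
def scs_runoff_mm(rain_mm, curve_number):
    """Daily surface runoff (mm) from the SCS curve-number method.

    Uses the standard initial-abstraction assumption Ia = 0.2 * S,
    so Q = (R - 0.2S)^2 / (R + 0.8S) whenever R > 0.2S, else 0.
    """
    s = 25.4 * (1000.0 / curve_number - 10.0)  # retention parameter, mm
    ia = 0.2 * s                               # initial abstraction, mm
    if rain_mm <= ia:
        return 0.0                             # all rain abstracted; no runoff
    return (rain_mm - ia) ** 2 / (rain_mm + 0.8 * s)

# Example: the peak forecast day (95.8 mm) on a moderately permeable soil.
print(round(scs_runoff_mm(95.8, curve_number=75.0), 1), "mm of runoff")
```

With CN = 75, the retention parameter S is about 84.7 mm, so roughly 38 mm of the 95.8 mm of rain becomes surface runoff.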
The retention parameter S is expressed in terms of the curve number CN:

$$S = 25.4\left(\frac{1000}{CN} - 10\right) \qquad (3)$$

With the standard approximation I_a = 0.2S, substituting I_a and S into equation (2) gives:

$$Q_{surf} = \frac{(R - 0.2S)^2}{R + 0.8S} \qquad (4)$$

Surface runoff occurs only when the amount of rainfall exceeds the initial abstraction and the infiltration into the root zone of the soil. The CN is a function of land use, soil, and antecedent soil moisture content; these functional relationships and CN values can be obtained from the SWAT manual and user guide.

Before forecasting, the model was calibrated and validated using observed flow data. From the available record, 2 years (1994-1996) were used for warm-up, 9 years (1996-2004) for calibration, and 5 years (2005-2009) for validation. Model calibration was performed using the Sequential Uncertainty Fitting version 2 (SUFI-2) interface of SWAT-CUP, a separate calibration and uncertainty program developed by Abbaspour that is a commonly used procedure for calibration and uncertainty analysis. Setegn et al. [14,15] compared different procedures and found that SUFI-2 gives good results even with the smallest number of runs. Model performance was evaluated using the dimensionless Nash-Sutcliffe efficiency (NSE) (Nash and Sutcliffe) and the coefficient of determination (R²).

Flood modeling and forecasting

Among the most widely used hydraulic models, LISFLOOD was selected for this research. LISFLOOD is a distributed, raster-based, combined rainfall-runoff and hydrodynamic model embedded in a dynamic GIS environment [16-18], developed for the simulation of hydrological processes and floods in European drainage basins. It is a flexible tool capable of simulating hydrological processes over a wide range of spatial and temporal scales, maintaining high resolution even when simulating large catchment areas. LISFLOOD-FP (FloodPlain) is its raster-based inundation module: it performs a 2D simulation of floodplain inundation and runs at a time step of seconds. The inputs to this module are a high-resolution DEM including all the topographic detail of features inside the modeled area considered necessary to produce a plausible flood inundation prediction, the river hydrographs, a map of flood source areas, and the outputs of the flood simulation model. LISFLOOD-FP includes a number of numerical schemes (solvers) that simulate the propagation of flood waves along channels and across floodplains using the shallow water equations; the choice of scheme depends on the characteristics of the system to be modeled, execution-time requirements, and the type of data available. The momentum and continuity equations for the 1D full shallow water flow are given below (equations 5 and 6, respectively):

$$\frac{\partial Q_x}{\partial t} + \frac{\partial}{\partial x}\left(\frac{Q_x^2}{A}\right) + gA\,\frac{\partial h}{\partial x} + \frac{g\,n^2\,Q_x\,|Q_x|}{R^{4/3}\,A} = 0 \qquad (5)$$

$$\frac{\partial A}{\partial t} + \frac{\partial Q_x}{\partial x} = 0 \qquad (6)$$

where Q_x is the volumetric flow rate in the x Cartesian direction, A the cross-sectional area of flow, h the water surface height, g gravitational acceleration, n Manning's coefficient of friction, R the hydraulic radius, and x the distance in the x Cartesian direction.

Floodplain flow solvers: LISFLOOD-Roe: The "Roe" solver, which includes all of the terms in the full shallow water equations, was selected for this research. The method is based on the Godunov approach and uses the approximate Riemann solver of Roe, following the TRENT model presented in Villanueva and Wright. The explicit discretisation is first order in space on a raster grid; it solves the full shallow water equations with a shock-capturing scheme.
LISFLOOD-Roe uses pointwise friction based on Manning's equation, while the domain boundary and internal boundaries (walls) use the ghost-cell approach. The stability of the scheme is governed by the Courant-Friedrichs-Lewy (CFL) condition for shallow water models, in which the permissible time step scales with the grid spacing divided by the gravity-wave speed computed from the cross-sectional area A and the water free-surface height h_t.

Channel flow solvers: The "diffusive" channel flow solver selected for this research uses the 1D diffusive wave equations; because it retains the water-slope term, it is able to predict backwater effects. With the 1D channel solvers, once the channel water depth reaches bankfull height, water is routed onto adjacent floodplain cells and distributed according to the chosen floodplain solver. Only mass, not momentum, is transferred between the channel and the floodplain. The 1D diffusive solvers assume that the in-channel flow component can be represented by a diffusive 1D wave equation with the channel geometry simplified to a rectangle. The solver further assumes the channel to be wide and shallow, so the wetted perimeter is approximated by the channel width and lateral friction is neglected.

Results and Discussion

Extreme weather forecasting

Extreme weather for the study area was forecast using the WRF-ARW numerical weather prediction model from 20 August 2006 to 10 September 2006. The weather parameters were forecast at a six-hour time step and converted to daily values for SWAT model input. Air temperature, wind speed at two meters, solar radiation, relative humidity, precipitation, geopotential height, sea surface temperature, and surface temperature were among the outputs of the WRF model; of these, precipitation and temperature were selected as SWAT inputs to forecast the flood.

The results in Figure 3 show that the Eastern Tana sub-basin was subjected to intense and heavy rain during the selected period. The intense weather events that invaded the sub-basin during 20 August 2006-7 September 2006 were characterized by exceptional and extremely heavy rainfall, which affected almost all parts of the Eastern Tana sub-basin.

Forecast rainfall for the selected stations was obtained from the gridded WRF output. Because the station locations do not coincide with the grid points, station rainfall was estimated from the neighboring grid points using a regression method. The cumulative forecast rainfall is similar to the cumulative observed rainfall. Forecast daily rainfall over the period ranges from 0 to 95.8 mm. Very intense and heavy rain occurred on the 25th over almost the entire sub-basin. Although the WRF model captures the rainfall climatology of the study area in both space and time, the forecast rainfall varies among the selected stations. The maximum rainfall was recorded at the climatological stations in the upper part of the sub-basin (D/Tabor, Lewaye, K/Dnigay, and M/Eyesus) throughout the period. The maximum daily rainfall was observed at the K/Dnigay station, about 95.8 mm, followed by the Lewaye and D/Tabor stations with about 60 mm and 40 mm, respectively. Meanwhile, the minimum daily rainfall was recorded at the stations in the lower part of the study area, Yifag and Wanzaye (Figure 4). Similarly, forecast temperature for the station points was obtained from the neighboring grid points using the regression method (Figure 5).
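The paper does not give the regression details, so the following is only a plausible sketch of such a grid-to-station transfer: fit a linear regression between a station's observations and the nearest grid cell's values over a training period, then apply it to the forecast. All names and numbers are invented for illustration:

```python
import numpy as np

def grid_to_station(grid_hist, station_hist, grid_forecast):
    """Map gridded values to a station with a least-squares linear fit.

    grid_hist, station_hist: paired historical samples (1D arrays).
    grid_forecast: gridded forecast values to transfer to the station.
    """
    slope, intercept = np.polyfit(grid_hist, station_hist, deg=1)
    return slope * np.asarray(grid_forecast) + intercept

# Toy numbers only (mm/day at the nearest grid cell vs. at the station).
grid_hist = np.array([0.0, 5.0, 12.0, 30.0, 55.0])
station_hist = np.array([0.0, 6.5, 14.0, 33.0, 60.0])
print(grid_to_station(grid_hist, station_hist, [10.0, 40.0]))
```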
Figure 6 shows the spatial variation of average temperature over the Tana sub-basin. The maximum forecast air temperature over the entire sub-basin for the selected period ranges from 18°C to 28°C. In general, the WRF model forecast the maximum temperature well compared with the observed data for the sub-basin.

Flow modelling and forecasting

Hydrological model calibration and validation: Calibration and validation of the model were key to reducing uncertainty and increasing confidence in its predictive ability, which makes the application an effective model. Information on sensitivity analysis, calibration, and validation of multivariable SWAT models has been provided in the literature to assist watershed modelers in developing their models to achieve their watershed-management goals. The SWAT simulation was executed for the 1994-2009 period, with the first two years used for initialization; calibration was performed for 1996-2004, while 2005-2009 was used for validation. Goodness of fit was evaluated using the coefficient of determination (R²) and the Nash-Sutcliffe efficiency (NSE). The model was found to have strong predictive capability, as shown in Table 1; the statistical efficiency criteria fulfilled the requirements R² > 0.6 and NSE > 0.5 recommended by the SWAT developers. This showed that the model parameters represent the processes occurring in the watershed as well as the available data allow, and that the model can be used to predict watershed response for various outputs (Figures 7 and 8).

Table 1: SWAT model calibration.

Flow forecasting: The forecast weather data from the NWP-WRF model were used as input to the SWAT model, and the simulated flow was taken as the forecast flow. The simulated flow rates using NWP-WRF data were lower than the observations for both watersheds for five consecutive days, from 27 August 2006 to 1 September 2006, because the rainfall from the NWP-WRF model was lower than the measured rainfall. In summary, the simulated flow rates driven by NWP-WRF data were higher than the observed flows at the Ribb River and lower at the Gummara River. The maximum forecast flow at the Ribb was 141 m³/s against a maximum observed flow of 93 m³/s; similarly, for the Gummara, the maximum forecast flow was 185 m³/s against a maximum observed flow of 206 m³/s (Figure 9).

Both upstream and downstream boundary conditions were specified for the diffusive channel solver. The upstream boundary is the forecast flow rate at the gauging sites of the Ribb and Gummara rivers, and the downstream boundary condition is the Lake Tana water level. An advantage of the diffusive channel solver over the kinematic solver is that tributaries are handled automatically by LISFLOOD-FP. To simulate a dynamic flood wave, time-varying boundary conditions were used at both ends (QVAR upstream and HVAR downstream).

The forecast flood extent for the design period 20 August 2006-10 September 2006 was computed by integrating the hydrological model (SWAT) and the hydrodynamic model (LISFLOOD): the SWAT output hydrograph was used as the upper boundary for the LISFLOOD model, and the lake level, in terms of elevation, as the lower boundary. LISFLOOD then computes the flood extent from the boundary conditions, the river widths, the river cross-sections, and Manning's friction coefficient. The flood extent obtained from LISFLOOD-FP was processed in a GIS environment. The flood extent for the forecast period is 329 km².
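Stepping back to the model-efficiency criteria used in Table 1: a minimal sketch of the two standard measures, NSE and R², where the flow arrays are placeholders rather than the study's data:

```python
import numpy as np

def nash_sutcliffe(obs, sim):
    """NSE = 1 - sum((obs - sim)^2) / sum((obs - mean(obs))^2)."""
    obs = np.asarray(obs, dtype=float)
    sim = np.asarray(sim, dtype=float)
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

def r_squared(obs, sim):
    """Coefficient of determination as the squared Pearson correlation."""
    return np.corrcoef(obs, sim)[0, 1] ** 2

obs = [12.0, 30.0, 55.0, 93.0, 40.0]   # placeholder observed flows (m3/s)
sim = [10.0, 33.0, 50.0, 88.0, 45.0]   # placeholder simulated flows (m3/s)
print(f"NSE = {nash_sutcliffe(obs, sim):.2f}, R2 = {r_squared(obs, sim):.2f}")
```

An NSE of 1 means a perfect match, while values below 0 mean the simulation is worse than simply predicting the observed mean; hence the NSE > 0.5 acceptance threshold cited above.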
The flood depth ranges from 0.01 m to 3.5 m, with the maximum depths along the rivers, and the flood velocity for the forecast period ranges from 0 to 2.375 m/s. The model does not account for rainfall falling directly on the floodplain or for the small rivers that are not tributaries of the main rivers (Ribb and Gummara) (Figure 10); this may lead to some underestimation of the flood extent.

Flood model verification

The goodness of fit between the flood map created by the flood model and the flood map extracted from satellite images was assessed using the relative error (RE) and the F-statistic (F). As shown in Figure 11, the inundation area extracted from the satellite images is 259.7 km² and the predicted flood inundation area is 256.9 km². The overlapping portion of the two inundation maps is 236.55 km², giving an RE of 0.01 and an F-statistic of 84.47%. This shows that the two flood inundation areas are similar in extent, though not geospatially identical: the satellite image shows more flooded area on the Ribb River side, while the forecast shows more flooding on the Gummara River side. This seems reasonable, because the satellite image also captures water ponded on the plain by rainfall and by other tributaries; near the Ribb River and the center of the floodplain there are tributaries that cause flooding. The goodness-of-fit measures in Table 2 show that the model fits well.

Table 2: Goodness of fit between the satellite-derived and forecast flood extents. A_o is the inundation area extracted from the satellite images, A_p the predicted flood inundation area, and A_op the intersection of A_o and A_p.

Conclusion and Recommendations

Flooding is the main natural hazard in the Eastern Tana sub-basin, severely affecting human life and property. Thousands of people are displaced each year, and flooding keeps the area's population in misery. The Ethiopian government has taken operational measures to control the impact of floods, but these measures are undermined by farmers who welcome the floods for the fertile soil they deposit. Extreme weather over the study area is controlled by the position and orientation of the Inter-Tropical Convergence Zone (ITCZ), the monsoon trough, the low-level (Somali) jet, Southern Hemisphere high-pressure systems, southerly (cross-equatorial) moisture flows, the strength and frequency of the Tropical Easterly Jet (TEJ), ENSO events, and seasonal rainfall features (wet/dry summers associated with La Niña/El Niño). Flooding was estimated by integrating three models (WRF-SWAT-LISFLOOD), an approach that gave good results. Although the main flood sources were included in the flood model domain, a few sources were left out (rainfall on the floodplain and the tributaries crossing it), which slightly underestimates the estimated flood. This is the first work of its kind in Ethiopia, and the method can be extended to other regions with similar or different climates. The ongoing rapid land-use change and expansion of agricultural area in the study region will negatively affect runoff properties. To attenuate flooding on the Fogera floodplain, a better land-use management system is required, one that can impede the unregulated conversion from one land use to another.
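Returning to the flood-model verification above: the reported RE and F values follow from the areas in Table 2 under the usual definitions (RE as the relative difference of the total areas, F as the overlap divided by the union). These definitions are inferred from the reported numbers rather than stated explicitly in the paper:

```python
def relative_error(a_obs_km2, a_pred_km2):
    """RE = |A_o - A_p| / A_o (relative difference of the total areas)."""
    return abs(a_obs_km2 - a_pred_km2) / a_obs_km2

def fit_statistic(a_obs_km2, a_pred_km2, a_overlap_km2):
    """F = A_op / (A_o + A_p - A_op), the overlap over the union."""
    return a_overlap_km2 / (a_obs_km2 + a_pred_km2 - a_overlap_km2)

a_o, a_p, a_op = 259.7, 256.9, 236.55               # km2, from the verification
print(f"RE = {relative_error(a_o, a_p):.2f}")        # -> 0.01
print(f"F  = {fit_statistic(a_o, a_p, a_op):.2%}")   # -> 84.47%
```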
To avoid future flood disasters, a flood early-warning and forecasting system, together with flood management and mitigation plans, is needed so that responders can react quickly in areas affected by flooding. A flood monitoring system is required to assess, on a continuous basis, the areas affected by floods, along with an emergency-measures plan to reduce the damage of exceptional floods. The community and farmers also need to be made strongly aware of flood effects in order to solve problems related to dyke breaking. Further investigations should consider the possibility of flood forecasting and modeling that includes other events in the area, and future work is needed on establishing an early-warning system based on the outputs of this study.

I would like to thank the Blue Nile Water Institute for funding this research and providing the necessary facilities. I would like to express my sincere gratitude to my advisors, Dr. Solomon Demissie and Dr. Seifu Admassu, for their continuous support of my study and research, and for their patience, motivation, enthusiasm, and immense knowledge. Their guidance helped me throughout the research and the writing of this thesis; I could not have imagined having better advisors. My sincere thanks also go to Dr. Essayas Kaba, Mr. Fasikaw Atanaw, and Mr. Mamaru Moges for their guidance and help with the land-use classification and with the write-up of this thesis under difficult conditions and a crowded schedule.

- Parry M (2007) Climate Change 2007: Impacts, Adaptation and Vulnerability. Contribution of Working Group II to the Fourth Assessment Report of the Intergovernmental Panel on Climate Change. Cambridge University Press.
- Barrientos HG, Swain A (2014) Linking flood management to integrated water resource management in Guatemala: A critical review. International Journal of Water Governance 4: 53-74.
- Berz G (2000) Flood disasters: lessons from the past - worries for the future. Proc Inst Civ Eng Water Marit Energy 142: 3-8.
- Bryan E, Deressa T, Gbetibouo G, Ringler C (2009) Adaptation to climate change in Ethiopia and South Africa: Options and constraints. Environ Sci Policy 12: 413-426.
- Deressa TT, Hassan RM, Ringler C, Alemu T, Yesuf M (2009) Determinants of farmers' choice of adaptation methods to climate change in the Nile Basin of Ethiopia. Global Environmental Change 19: 248-255.
- Kebede S, Travi Y, Alemayehu T, Marc V (2006) Water balance of Lake Tana and its sensitivity to fluctuations in rainfall, Blue Nile basin, Ethiopia. Journal of Hydrology 16: 233-247.
- Williams JR, Arnold JG, Kiniry JR, Gassman PW, Green CH (2008) History of model development at Temple, Texas. Hydrological Sciences Journal 53: 948-960.
- Arnold J, Srinivasan RS, Muttiah RS, Williams JR (1998) Large area hydrologic modeling and assessment part I: Model development. Journal of the American Water Resources Association (JAWRA) 34: 73-89.
- Demirel MC, Venancio A, Kahya E (2009) Flow forecast by SWAT model and ANN in Pracana basin, Portugal. Advances in Engineering Software 40: 467-473.
- Wangpimool W, Pongput K, Supriyasilp T, Kamol PN, Sakolnakhon S, et al. (2013) Hydrological evaluation with SWAT model and numerical weather prediction for flash flood warning system in Thailand. Journal of Earth Science and Engineering 3: 349.
- Rivera S, Hernandez A, Ramsey RD, Suarez G (2007) Predicting flood hazard areas: a SWAT and HEC-RAS simulation conducted in the Aguan river basin of Honduras, Central America. Paper presented at the ASPRS 2007 Annual Conference.
- Jung CG, Joh HK, Yu YS, Park JY, Kim SJ (2012) Study on damage reduction by flood inundation and the sediments by SWAT and HEC-RAS modeling of flow dynamics with watershed hydrology - for the 27 July 2011 heavy storm event at the Gonjiam Cheon watershed. Journal of the Korean Society of Agricultural Engineers 54: 87-94.
- Neitsch SL, Arnold JG, Kiniry JR, Williams JR (2011) Soil and Water Assessment Tool theoretical documentation version 2009. Texas Water Resources Institute Technical Report No. 406, Texas Water Resources Institute.
- Setegn SG, Srinivasan R, Dargahi B (2008) Hydrological modelling in the Lake Tana Basin, Ethiopia using SWAT model. The Open Hydrology Journal 2: 49-62.
- Yang J, Peter R, Abbaspour KC, Xia J, Yang H (2008) Comparing uncertainty analysis techniques for a SWAT application to the Chaohe Basin in China. Journal of Hydrology 358: 1-23.
- Roo APJD, Wesseling C, Deursen WV (2000) Physically based river basin modelling within a GIS: the LISFLOOD model. Hydrological Processes 14: 1981-1992.
- Roo AD, Gouweleeuw B, Pozo JT, Sattler K (2003) Development of a European flood forecasting system. International Journal of River Basin Management 1: 49-59.
- Roo APJD (1999) LISFLOOD: a rainfall-runoff model for large river basins to assess the influence of land use changes on flood risk. In: RIBAMOD: river basin modelling, management and flood mitigation, concerted action by the European Communities, pp. 349-358.
- Trigg MA, Wilson MD, Bates PD, Horritt MS, Alsdorf DE, et al. (2009) Amazon flood wave hydraulics. Journal of Hydrology 374: 92-105.
- White KL, Chaubey I (2005) Sensitivity analysis, calibration, and validations for a multisite and multivariable SWAT model. Journal of the American Water Resources Association (JAWRA) 41: 1077-1089.
- Santhi C, Arnold JG, Williams JR, Dugas WA, Srinivasan R, et al. (2001) Validation of the SWAT model on a large river basin with point and nonpoint sources. Journal of the American Water Resources Association (JAWRA) 37: 1169-1188.

Citation: Desalegn A, Demissie S, Admassu S (2016) Extreme Weather and Flood Forecasting and Modelling for Eastern Tana Sub Basin, Upper Blue Nile Basin, Ethiopia. Hydrol Current Res 7: 257. doi: 10.4172/2157-7587.1000257

Copyright: © 2016 Desalegn A, et al. This is an open-access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.
<urn:uuid:955a65e3-6a13-47f9-9837-ebad6982c806>
3.203125
6,745
Truncated
Science & Tech.
40.591514
95,600,434
Forest ecosystems may produce large volumes of nitrous oxide (N2O), an important greenhouse gas that affects the atmosphere's chemical and radiative properties. Yet our understanding of the controls on forest N2O emissions is insufficient. This study investigates the quantitative and qualitative relationships between nitrogen cycling and N2O production in European forests. The authors conclude that changes in forest composition in response to land-use activities and global change may have serious implications for regional budgets of greenhouse gases. It also became clear that the accelerated nitrogen inputs predicted for forest ecosystems in Europe may lead to increased greenhouse gas emissions from those ecosystems.

Read article: http://www.biogeosciences.net/3/135/2006/bg-3-135-2006.html

Bacterial carbon sources in coastal sediments: a cross-system analysis based on stable isotope data of biomarkers.

Coastal ecosystems are among the most productive regions in the world ocean. Because of the ample nutrient supplies, the coastal zone accounts for about 20% of oceanic primary production, despite its small geographic extent. Local organic producers span from phytoplankton to bottom-dwelling algae to seagrasses and mangroves. Because of the high rates of sediment accumulation, among other factors, a comparatively large percentage of this new organic matter survives early decay and is buried into the geologic record. Coastal regions also receive large inputs of organic material reworked and transported from surrounding regions by strong currents, including contributions from rivers that drain adjacent land areas. Through the combined effects of high production, large inputs of reworked material, and efficient sequestration, the vast majority of the world's organic carbon burial occurs in these marginal marine settings. As the dominant site of oceanic organic carbon burial, the coastal zone factors prominently in most models for short- and long-term carbon cycling and, correspondingly, in scientists' estimates for CO2 variation in the atmosphere on a variety of time scales.

In this paper, Bouillon and Boschker explore this complex organic reservoir through carbon isotope analysis of its many constituents, including large plant fragments and lipid biomarkers that are chemically extracted from the sediments and fingerprint bacterial sources. Using this approach the authors explored which of the organic components bacteria most easily degrade and thus which have the potential for burial and removal from at least the short-term carbon cycle. Importantly, the authors compared the carbon isotope properties of bacterial biomarkers from a wide range of coastal settings and concluded that the microbes are feeding on a diverse assortment of organic constituents. In fact, at most sites where organic matter is readily available, bacteria show little selectivity in the compounds they decompose. In light of the previous consensus that such materials should show widely varying biodegradability, this result will certainly raise questions, fuel future work, and ultimately refine our understanding of how carbon flows through its global biogeochemical cycle and impacts the composition of the atmosphere.

Read article: http://www.biogeosciences.net/3/175/2006/bg-3-175-2006.html
<urn:uuid:e117c8b0-77e3-48d4-8f9c-7800979d1f30>
2.984375
1,293
Content Listing
Science & Tech.
32.729898
95,600,468
The Anthropocene, global change and sleeping giants: where on Earth are we going?

Keywords: Tropical Cyclone; Soil Respiration; Carbon Cycle; Carbon Cycle Model; Pest Outbreak

The "climate problem" has come to the fore in public policy debates over the last year or so. The continuing high temperatures, the spate of intense tropical cyclones and deepening droughts in some parts of the world have focused attention on the issue of defining "dangerous climate change". This is often conceptualised as an upper limit to the rise in global mean temperature, for example 2°C above pre-industrial levels, which in turn leads to a back calculation of the permissible concentration of CO2 in the atmosphere and then to the trajectories of the corresponding maximum anthropogenic carbon emissions. Although a very important exercise, this approach to defining dangerous climate change can itself be dangerous, in particular because it often ignores the systemic nature of the global environment. Feedbacks and nonlinearities are the rule, not the exception, in the functioning of the Earth System, and in this Anthropocene era, where human activities have become a global geophysical force in their own right, there is no doubt that surprises await those who apply linear logic to the climate problem.

The carbon cycle is centrally involved in many of these feedbacks and nonlinearities. Here we briefly review several of the more important so-called "sleeping giants" in the carbon cycle: processes that have the potential to accelerate the rate of warming beyond that attributed to human emissions of greenhouse gases.

The first of these is based on the impact of rising temperature and changing soil moisture on soil respiration, an example of a response of ecosystem physiology to climate change. Although there is still debate about the magnitude of the increase in soil respiration with temperature, and whether there are compensating effects of enhanced plant growth due to mobilisation of nitrogen in the process, the general consensus is that increasing temperature will cause an increase in the emission of CO2 from soil carbon.

A second "sleeping giant" is the increase in disturbance in terrestrial ecosystems, often associated with pulses of carbon to the atmosphere. The most notable of these are wildfires and pest outbreaks, both sensitive to warming and to changes in the moisture regime. Although these are natural phenomena in the dynamics of terrestrial ecosystems, an increase in the frequency or extent of these disturbances results in a net loss of carbon to the atmosphere. Observations of the large areas of boreal forest in the northern high latitudes suggest that over the past couple of decades these forests have experienced enhanced rates and/or areas of disturbance, and hence their capacity to act as sinks for atmospheric CO2 has weakened significantly. Human-driven land-use and land-cover change is another type of disturbance with important implications for the terrestrial carbon cycle, with perhaps even more complex dynamics than wildfires or pest outbreaks.

Coupling global climate models to carbon cycle models that account for these responses to a warming climate provides first estimates of the magnitude of these carbon cycle feedbacks. The results vary considerably, from an additional 20 ppmV of CO2 in the atmosphere by 2100 to over 200 additional ppmV. However, all simulations showed a positive feedback: the warming is accelerated beyond that due solely to anthropogenic carbon emissions.
In a worst case scenario, the additional warming by 2100 due to these carbon cycle feedbacks could be about 1.5°C. Potentially important carbon cycle feedbacks do not, however, end with soil respiration, fire and insect outbreaks. Permafrost soils in the high latitudes of the northern hemisphere and moist peatlands in both the tropics and the high latitudes contain hundreds of gigatons of carbon that is currently stored away from contact with the atmosphere. However, both of these pools of carbon are vulnerable to rising temperatures, which could melt much of the current permafrost area and dry out peatlands, leading to the emission of CO2 and CH4 to the atmosphere.

Carbon in its elemental form (soot, or black carbon) plays a complex role in the climate system. For example, black carbon produced by wildfires and stored in soils or water systems acts as a sink for carbon; however, fine soot particles released to the atmosphere can also act to warm the climate further, unlike many other aerosol particles.

The marine carbon cycle may also provide surprises in the future. The dissolution of atmospheric CO2 in the surface waters of the ocean increases their acidity through the formation of carbonic acid. This, in turn, affects the saturation state of calcium carbonate, a basic building block for organisms such as corals, shellfish, sea urchins, starfish and some forms of phytoplankton that form calcium carbonate shells. The rising acidity of the ocean will no doubt have significant effects on the trophic structure of marine ecosystems, but it will also affect the functioning of these systems. The implications for the marine carbon cycle are not yet clear, but a weakening oceanic sink for carbon due to the increasing concentration of carbonic acid is a likely result.

In an Earth System context the carbon cycle can act as a buffer to keep the planetary environment within well-defined limits, but if critical thresholds are crossed, the carbon cycle can act as a giant flywheel that helps to push the Earth System into another state. As the 21st century progresses, the concentration of CO2 in the atmosphere will depend not only on the amount of anthropogenic emissions, but increasingly on the response of natural carbon cycle dynamics to the changing climate. Clearly there is an urgent need for the carbon cycle policy and management communities to move beyond the simple human emissions-atmospheric CO2 equation and to take a much more holistic view of the carbon cycle: its natural dynamics, feedbacks, nonlinearities and potential surprises.

- 1. Schellnhuber HJ, Cramer W, Nakicenovic N, Wigley T, Yohe G (Eds): Avoiding Dangerous Climate Change. Cambridge, UK: Cambridge University Press; 2006.
- 2. Steffen W, Sanderson A, Tyson P, Jäger J, Matson P, Moore B III, Oldfield F, Richardson K, Schellnhuber H-J, Turner BL II, Wasson R: Global Change and the Earth System: A Planet Under Pressure. IGBP Global Change Series. Berlin, Heidelberg, New York: Springer-Verlag; 2004.
- 3. Field C, Raupach M (Eds): Global Carbon Cycle, Integrating Humans, Climate and the Natural World. Washington, DC: Island Press; 2004.
- 4. Rustad LE, Campbell J, Marion GM, Norby RJ, Mitchell MJ, Hartley AE, Cornelissen JHC, Gurevitch J, GCTE-NEWS: A meta-analysis of the response of soil respiration, net N mineralization, and aboveground plant growth to experimental ecosystem warming.
Oecologia 2000, 126: 543-562.
- 6. Friedlingstein P, Cox P, Betts R, Bopp L, von Bloh W, Brovkin V, Doney VS, Eby MI, Fung I, Govindasamy B, John J, Jones C, Joos F, Kato T, Kawamiya M, Knorr W, Lindsay K, Matthews HD, Raddatz T, Rayner P, Reick C, Roeckner E, Schnitzler K-G, Schnur R, Strassmann K, Weaver AJ, Yoshikawa C, Zeng N: Climate-carbon cycle feedback analysis, results from the C4MIP model intercomparison. Climatic Change, in press.
- 7. Gruber N, Friedlingstein P, Field CB, Valentini R, Heimann M, Richey JE, Lankao PR, Schulze E-D, Chen C-TA: The vulnerability of the carbon cycle in the 21st century: an assessment of carbon-climate-human interactions. In: Global Carbon Cycle, Integrating Humans, Climate and the Natural World. Edited by Field C, Raupach M. Washington, DC: Island Press; 2004.
- 8. Royal Society: Ocean acidification due to increasing atmospheric carbon dioxide. Policy document 12/05. London: The Royal Society UK; 2005.

This article is published under license to BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
<urn:uuid:9c5dd80c-b9ce-40b8-9c72-231779548849>
2.625
1,814
Academic Writing
Science & Tech.
35.098185
95,600,478
The recent retreat of Arctic sea ice is likely to accelerate so rapidly that the Arctic Ocean could become nearly devoid of ice during summertime as early as 2040, according to new research published in the December 12 issue of Geophysical Research Letters. The study, by a team of scientists from the National Center for Atmospheric Research (NCAR), the University of Washington, and McGill University, analyzes the impact of greenhouse gas emissions on the Arctic. Scenarios run on supercomputers show that the extent of sea ice each September could be reduced so abruptly that, within about 20 years, it may begin retreating four times faster than at any time in the observed record. "We have already witnessed major losses in sea ice, but our research suggests that the decrease over the next few decades could be far more dramatic than anything that has happened so far," says NCAR scientist Marika Holland, the study's lead author. "These changes are surprisingly rapid." Arctic sea ice has retreated in recent years, especially in the late summer, when ice thickness and area are at a minimum. To analyze how global warming will affect the ice in coming decades, the team studied a series of seven simulations run on the NCAR-based Community Climate System Model, one of the world's leading tools for studying climate change. The scientists first tested the model by simulating fluctuations in ice cover since 1870, including a significant shrinkage of late-summer ice from 1979 to 2005. The simulations closely matched observations, a sign that the model was accurately capturing the present-day climate variability in the Arctic. The team then simulated future ice loss. The model results indicate that, if greenhouse gases continue to build up in the atmosphere at the current rate, the Arctic's future ice cover will go through periods of relative stability followed by abrupt retreat. For example, in one model simulation, the September ice shrinks from about 2.3 million to 770,000 square miles in a 10-year period. By 2040, only a small amount of perennial sea ice remains along the north coasts of Greenland and Canada, while most of the Arctic basin is ice-free in September. The winter ice also thins from about 12 feet thick to less than 3 feet.

Why expect abrupt change?

The research team points to several reasons for the abrupt loss of ice in a gradually warming world. Open water absorbs more sunlight than does ice, meaning that the growing regions of ice-free water will accelerate the warming trend. In addition, global climate change is expected to influence ocean circulations and drive warmer ocean currents into the Arctic. "As the ice retreats, the ocean transports more heat to the Arctic and the open water absorbs more sunlight, further accelerating the rate of warming and leading to the loss of more ice," Holland explains. "This is a positive feedback loop with dramatic implications for the entire Arctic region."

Avoiding abrupt change

The scientists also conclude that different rates of greenhouse gas emissions can affect the probability of abrupt ice loss. By examining 15 additional leading climate models, they found that if emissions of carbon dioxide and other greenhouse gases were to slow, the likelihood of rapid ice loss would decrease. Instead, summer sea ice would probably undergo a much slower retreat. "Our research indicates that society can still minimize the impacts on Arctic ice," Holland says.
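The feedback loop Holland describes can be caricatured in a few lines of code. This is a toy sketch of my own, with made-up parameters chosen only to show the "slow, then abrupt" behavior, not values from the Community Climate System Model:

```python
# Toy ice-albedo feedback: lost ice exposes dark open water, which absorbs
# more sunlight and accelerates warming. All numbers are illustrative.

def simulate(years=80, ghg_warming_per_year=0.03):
    temp = 0.0   # regional temperature anomaly, deg C
    ice = 1.0    # September ice extent as a fraction of today's
    rows = []
    for year in range(years + 1):
        rows.append((year, temp, ice))
        albedo_feedback = 0.08 * (1.0 - ice)   # open water adds extra warming
        temp += ghg_warming_per_year + albedo_feedback
        ice = max(0.0, 1.0 - 0.1 * temp**2)    # ice responds nonlinearly
    return rows

for year, temp, ice in simulate()[::10]:
    print(f"year {year:2d}: anomaly {temp:+.2f} C, ice fraction {ice:.2f}")
```

The printed series loses its first 20 percent of ice slowly and the remainder quickly, a qualitative analogue of the periods of stability followed by abrupt retreat seen in the simulations.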
The study drew on extensive and sophisticated computer modeling recently carried out for the Intergovernmental Panel on Climate Change. The IPCC's next assessment report will be released early in 2007. Source: University Corporation for Atmospheric Research
<urn:uuid:d253f3ad-e91b-40a6-a2c3-5512f229d897>
3.875
731
News Article
Science & Tech.
33.907391
95,600,502
Radioactive dating requires the use of a decay curve. Introduction: Scientists use radioactive dating to determine how many years ago an event happened. Knowing this first: that scoffers will come in the last days, walking according to their own lusts, and saying, "Where is the promise of His coming?" While this may sound questionable at first, keep in mind that we also accept the Law of Gravity without direct proof. Visit the following site and read about each of Steno's laws (principles). Each atom is thought to be made up of three basic parts. The nucleus contains protons (tiny particles each with a single positive electric charge) and neutrons (particles without any electric charge), while electrons (particles each carrying a single negative charge) surround the nucleus. Following this introduction, there are several links to different sites concerning the methods scientists use to estimate the age of the Earth.
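As an illustration of what a decay curve encodes (this sketch is mine, not part of the page being described), here is the curve for a hypothetical sample, using the commonly cited 5,730-year half-life of carbon-14:

```python
import math

HALF_LIFE_C14 = 5730  # years, a commonly cited value for carbon-14

def remaining_fraction(t_years, half_life=HALF_LIFE_C14):
    """Point on the decay curve: fraction of the parent isotope left."""
    return 0.5 ** (t_years / half_life)

def age_from_fraction(fraction, half_life=HALF_LIFE_C14):
    """Invert the decay curve: how old is a sample with this fraction left?"""
    return -half_life * math.log2(fraction)

# A sample retaining 25% of its original carbon-14 is two half-lives old.
print(remaining_fraction(11460))       # 0.25
print(age_from_fraction(0.25))         # 11460.0 years
```

Reading an age off the curve is exactly this inversion: measure the surviving fraction, then solve for the elapsed time.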
<urn:uuid:d4650a67-8549-48e9-b487-28205ca9eebe>
3.71875
178
Knowledge Article
Science & Tech.
46.114253
95,600,505
Even though the moon circles the Earth at an average distance of 378,000 kilometers (234,878 miles), its gravity still has a noticeable effect on the planet. The moon's gravitational pull is the major driving force behind the ocean's tides, raising and lowering ocean levels and contributing to the flow of water around the globe. In areas like the Bay of Fundy in Canada, the moon's effects shift water levels by as much as 16 meters (53 feet) during a single cycle. When the moon is directly above any point on the Earth, its gravity pulls on the surface. This force pulls water toward the moon, creating a "sublunar" high tide on that side of the planet. As the water flows toward the moon, it draws water from the sides of the planet perpendicular to the moon's position, creating low tides. The water responds most visibly, but the moon's gravity tugs at the solid Earth as well, causing the two bodies to accelerate toward each other and creating a 30-centimeter (about 1 foot) shift in the Earth's solid surface. On the other side of the planet, the moon's gravitational pull is weakest, because that side is farthest from the moon. In addition, the Earth as a whole accelerates toward the moon, which in effect pulls the solid planet away from the water on the far side. These effects combine to create an "antipodal" high tide on the side opposite the moon. Because the moon returns to the same position in the sky every 24 hours and 50 minutes (a lunar day, set by the combination of Earth's rotation and the moon's orbital motion), each point on Earth receives two high tides each day, 12 hours and 25 minutes apart. While the moon's gravitational force remains constant, its distance from the surface of Earth does not. The moon's distance varies by almost 50,000 kilometers (31,000 miles) over the course of its orbit, and when the moon is closest, the sublunar tide is highest. In addition, geographic features affect the flow of water, contributing to differences in high tide levels over the course of the lunar cycle. The moon is not the only body that affects the tides. The sun, though much farther away, has its own gravitational influence, raising and lowering water levels correspondingly over the course of a year. When the moon's gravitational pull lines up with the sun's, it can significantly increase tidal variations, causing "spring" tides. When these two forces are perpendicular to one another, they reduce the tidal differences, creating "neap" tides. The Earth's distance to the sun also varies over the course of a year, increasing or decreasing this effect accordingly.
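The relative strengths discussed here can be checked with the standard tidal approximation, in which the tide-raising (differential) acceleration scales as 2GMR/d^3. A short sketch, using commonly cited masses and average distances:

```python
G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
R_EARTH = 6.371e6  # Earth's radius, m

def tidal_acceleration(mass_kg, distance_m):
    """Difference in pull between Earth's near side and its center,
    using the standard tidal approximation 2*G*M*R / d**3."""
    return 2 * G * mass_kg * R_EARTH / distance_m ** 3

moon = tidal_acceleration(7.35e22, 3.844e8)    # average Earth-moon distance
sun = tidal_acceleration(1.989e30, 1.496e11)   # average Earth-sun distance

print(f"moon: {moon:.2e} m/s^2, sun: {sun:.2e} m/s^2, sun/moon = {sun/moon:.2f}")
# sun/moon comes out near 0.46: the sun's tide-raising effect is roughly
# half the moon's, which is why aligned pulls give strong spring tides
# while perpendicular pulls partly cancel into weaker neap tides.
```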
<urn:uuid:93d2afe6-ef12-4c6f-8cf9-798d47526a75>
4.4375
559
Knowledge Article
Science & Tech.
50.963619
95,600,518
SOCS3 (Suppressor of Cytokine Signalling 3) controls the responses of cells to cytokines (growth factors). It is important that cytokine signalling is properly regulated within the human body. If SOCS3 permits cytokine signalling to be too "loud", then the excess of growth signals can cause crippling inflammatory diseases such as rheumatoid arthritis, or diseases where cells multiply uncontrollably – cancer. Conversely, if cytokine signalling is overly repressed by SOCS3, then the bone marrow cannot supply the white blood cells required to rejuvenate the damaged immune system following chemotherapy. An unfortunate side effect of chemotherapy is damage caused to the bone marrow that produces the white blood cells of the immune system. This leaves cancer patients prey to opportunistic infections that can delay and adversely affect their recovery. A cytokine called G-CSF (Granulocyte Colony Stimulating Factor, developed in previous years at WEHI) is in clinical use worldwide to stimulate the restoration of bone marrow and the reinvigoration of the immune system in chemotherapy patients. The success of G-CSF depends on the complementary proper functioning of SOCS3. A research team at WEHI has determined the three-dimensional structure of SOCS3. This discovery may enable the design of selective inhibitors of SOCS3 that could be useful in extending the activity of G-CSF in restoring white blood cells. The structure also showed that SOCS3 contains a region that could be engineered out, improving the stability of SOCS3. This newly engineered version of SOCS3 also has the potential to enhance its repressive functions, which may allow inflammatory diseases to be treated more effectively. Brad Allen | EurekAlert!
<urn:uuid:0e7ebb56-1c64-475c-9ec7-53f35494ca7b>
2.875
937
Content Listing
Science & Tech.
31.722579
95,600,519
Recent Examples of mathematical logic from the Web
- However, let's look at this using pure mathematical logic.
- Germany ran a current account surplus of 8.4 percent of GDP in 2016, which implies by inexorable mathematical logic that other countries are running deficits.
- Before the war, Turing had disrupted the rarefied world of mathematical logic with his vision of a universal computing machine.
<urn:uuid:d37392e9-5ebb-437f-a0ea-c1747107a43c>
2.546875
180
Structured Data
Science & Tech.
31.1235
95,600,561
Wave–particle duality is the concept in quantum mechanics that every particle or quantum entity may be partly described in terms not only of particles, but also of waves. It expresses the inability of the classical concepts "particle" or "wave" to fully describe the behavior of quantum-scale objects. As Albert Einstein wrote: It seems as though we must use sometimes the one theory and sometimes the other, while at times we may use either. We are faced with a new kind of difficulty. We have two contradictory pictures of reality; separately neither of them fully explains the phenomena of light, but together they do. Through the work of Max Planck, Albert Einstein, Louis de Broglie, Arthur Compton, Niels Bohr and many others, current scientific theory holds that all particles exhibit a wave nature (and vice versa). This phenomenon has been verified not only for elementary particles, but also for compound particles like atoms and even molecules. For macroscopic particles, because of their extremely short wavelengths, wave properties usually cannot be detected. Although the use of wave-particle duality has worked well in physics, the meaning or interpretation has not been satisfactorily resolved; see Interpretations of quantum mechanics. Bohr regarded the "duality paradox" as a fundamental or metaphysical fact of nature. A given kind of quantum object will exhibit sometimes wave, sometimes particle, character, in respectively different physical settings. He saw such duality as one aspect of the concept of complementarity, and regarded renunciation of the cause-effect relation, or complementarity, of the space-time picture as essential to the quantum mechanical account. Werner Heisenberg considered the question further. He saw the duality as present for all quantum entities, but not quite in the usual quantum mechanical account considered by Bohr. He saw it in what is called second quantization, which generates an entirely new concept of fields that exist in ordinary space-time, with causality still being visualizable. Classical field values (e.g. the electric and magnetic field strengths of Maxwell) are replaced by an entirely new kind of field value, as considered in quantum field theory. Turning the reasoning around, ordinary quantum mechanics can be deduced as a specialized consequence of quantum field theory.

Brief history of wave and particle viewpoints

Democritus—the original atomist—argued that all things in the universe, including light, are composed of indivisible sub-components (light being some form of solar atom). At the beginning of the 11th century, the Arabic scientist Alhazen wrote the first comprehensive treatise on optics, describing refraction, reflection, and the operation of a pinhole lens via rays of light traveling from the point of emission to the eye. He asserted that these rays were composed of particles of light. In 1630, René Descartes popularized the opposing wave description in his treatise on light, showing that the behavior of light could be re-created by modeling wave-like disturbances in a universal medium ("plenum").
Beginning in 1670 and progressing over three decades, Isaac Newton developed and championed his corpuscular hypothesis, arguing that the perfectly straight lines of reflection demonstrated light's particle nature; only particles could travel in such straight lines. He explained refraction by positing that particles of light accelerated laterally upon entering a denser medium. Around the same time, Newton's contemporaries Robert Hooke and Christiaan Huygens—and later Augustin-Jean Fresnel—mathematically refined the wave viewpoint, showing that if light traveled at different speeds in different media (such as water and air), refraction could be easily explained as the medium-dependent propagation of light waves. The resulting Huygens–Fresnel principle was extremely successful at reproducing light's behavior and was subsequently supported by Thomas Young's 1801 discovery of double-slit interference. The wave view did not immediately displace the ray and particle view, but began to dominate scientific thinking about light in the mid 19th century, since it could explain polarization phenomena that the alternatives could not. James Clerk Maxwell found that the equations of electromagnetism he had earlier formulated, with a slight modification, described self-propagating waves of oscillating electric and magnetic fields. It quickly became apparent that visible light, ultraviolet light, and infrared light (phenomena thought previously to be unrelated) were all electromagnetic waves of differing frequency. The wave theory had prevailed—or at least it seemed to.

While the 19th century had seen the success of the wave theory at describing light, it had also witnessed the rise of the atomic theory at describing matter. Antoine Lavoisier deduced the law of conservation of mass and categorized many new chemical elements and compounds; and Joseph Louis Proust advanced chemistry towards the atom by showing that elements combined in definite proportions. This led John Dalton to propose that elements were made of indivisible sub-components; Amedeo Avogadro discovered diatomic gases and completed the basic atomic theory, allowing the correct molecular formulae of most known compounds—as well as the correct weights of atoms—to be deduced and categorized in a consistent manner. Dmitri Mendeleev saw an order in recurring chemical properties, and created a table presenting the elements in unprecedented order and symmetry.

Turn of the 20th century and the paradigm shift

Particles of electricity

At the close of the 19th century, the reductionism of atomic theory began to advance into the atom itself; determining, through physics, the nature of the atom and the operation of chemical reactions. Electricity, first thought to be a fluid, was now understood to consist of particles called electrons. This was first demonstrated by J. J. Thomson in 1897 when, using a cathode ray tube, he found that an electrical charge would travel across a vacuum (which would possess infinite resistance in classical theory). Since the vacuum offered no medium for an electric fluid to travel, this discovery could only be explained via a particle carrying a negative charge and moving through the vacuum. This electron flew in the face of classical electrodynamics, which had successfully treated electricity as a fluid for many years (leading to the invention of batteries, electric motors, dynamos, and arc lamps).
More importantly, the intimate relation between electric charge and electromagnetism had been well documented following the discoveries of Michael Faraday and James Clerk Maxwell. Since electromagnetism was known to be a wave generated by a changing electric or magnetic field (a continuous, wave-like entity itself) an atomic/particle description of electricity and charge was a non sequitur. Furthermore, classical electrodynamics was not the only classical theory rendered incomplete. In 1901, Max Planck published an analysis that succeeded in reproducing the observed spectrum of light emitted by a glowing object. To accomplish this, Planck had to make an ad hoc mathematical assumption of quantized energy of the oscillators (atoms of the black body) that emit radiation. Einstein later proposed that electromagnetic radiation itself is quantized, not the energy of radiating atoms. Black-body radiation, the emission of electromagnetic energy due to an object's heat, could not be explained from classical arguments alone. The equipartition theorem of classical mechanics, the basis of all classical thermodynamic theories, stated that an object's energy is partitioned equally among the object's vibrational modes. But applying the same reasoning to the electromagnetic emission of such a thermal object was not so successful. That thermal objects emit light had been long known. Since light was known to be waves of electromagnetism, physicists hoped to describe this emission via classical laws. This became known as the black body problem. Since the equipartition theorem worked so well in describing the vibrational modes of the thermal object itself, it was natural to assume that it would perform equally well in describing the radiative emission of such objects. But a problem quickly arose: if each mode received an equal partition of energy, the short wavelength modes would consume all the energy. This became clear when plotting the Rayleigh–Jeans law which, while correctly predicting the intensity of long wavelength emissions, predicted infinite total energy as the intensity diverges to infinity for short wavelengths. This became known as the ultraviolet catastrophe. In 1900, Max Planck hypothesized that the frequency of light emitted by the black body depended on the frequency of the oscillator that emitted it, and the energy of these oscillators increased linearly with frequency (according to his constant h, where E = hν). This was not an unsound proposal considering that macroscopic oscillators operate similarly: when studying five simple harmonic oscillators of equal amplitude but different frequency, the oscillator with the highest frequency possesses the highest energy (though this relationship is not linear like Planck's). By demanding that high-frequency light must be emitted by an oscillator of equal frequency, and further requiring that this oscillator occupy higher energy than one of a lesser frequency, Planck avoided any catastrophe; giving an equal partition to high-frequency oscillators produced successively fewer oscillators and less emitted light. And as in the Maxwell–Boltzmann distribution, the low-frequency, low-energy oscillators were suppressed by the onslaught of thermal jiggling from higher energy oscillators, which necessarily increased their energy and frequency. The most revolutionary aspect of Planck's treatment of the black body is that it inherently relies on an integer number of oscillators in thermal equilibrium with the electromagnetic field. 
These oscillators give their entire energy to the electromagnetic field, creating a quantum of light, as often as they are excited by the electromagnetic field, absorbing a quantum of light and beginning to oscillate at the corresponding frequency. Planck had intentionally created an atomic theory of the black body, but had unintentionally generated an atomic theory of light, where the black body never generates quanta of light at a given frequency with an energy less than hν. However, once realizing that he had quantized the electromagnetic field, he denounced particles of light as a limitation of his approximation, not a property of reality.

Photoelectric effect illuminated

While Planck had solved the ultraviolet catastrophe by using atoms and a quantized electromagnetic field, most contemporary physicists agreed that Planck's "light quanta" represented only flaws in his model. A more complete derivation of black body radiation would yield a fully continuous and "wave-like" electromagnetic field with no quantization. However, in 1905 Albert Einstein took Planck's black body model to produce his solution to another outstanding problem of the day: the photoelectric effect, wherein electrons are emitted from atoms when they absorb energy from light. Since the electron's existence had been theorized eight years previously, the phenomenon had been studied with the electron model in mind in physics laboratories worldwide. In 1902 Philipp Lenard discovered that the energy of these ejected electrons did not depend on the intensity of the incoming light, but instead on its frequency. So if one shines a little low-frequency light upon a metal, a few low-energy electrons are ejected. If one now shines a very intense beam of low-frequency light upon the same metal, a whole slew of electrons are ejected; however, they possess the same low energy, there are merely more of them. The more light there is, the more electrons are ejected. In order to obtain high-energy electrons, one must instead illuminate the metal with high-frequency light. Like blackbody radiation, this was at odds with a theory invoking continuous transfer of energy between radiation and matter. However, it can still be explained using a fully classical description of light, as long as matter is quantum mechanical in nature. If one used Planck's energy quanta, and demanded that electromagnetic radiation at a given frequency could only transfer energy to matter in integer multiples of an energy quantum hν, then the photoelectric effect could be explained very simply. Low-frequency light only ejects low-energy electrons because each electron is excited by the absorption of a single photon. Increasing the intensity of the low-frequency light (increasing the number of photons) only increases the number of excited electrons, not their energy, because the energy of each photon remains low. Only by increasing the frequency of the light, and thus increasing the energy of the photons, can one eject electrons with higher energy. Thus, using Planck's constant h to determine the energy of the photons based upon their frequency, the energy of ejected electrons should also increase linearly with frequency; the gradient of the line being Planck's constant. These results were not confirmed until 1915, when Robert Andrews Millikan, who had previously determined the charge of the electron, produced experimental results in perfect accord with Einstein's predictions.
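Einstein's linear relation can be checked with a few lines of arithmetic. The sketch below is my own illustration, not part of the original article; the 2.3 eV work function is an approximate textbook value for potassium, the metal used as the example further on:

```python
H = 6.626e-34   # Planck's constant, J*s
C = 2.998e8     # speed of light, m/s
EV = 1.602e-19  # joules per electronvolt

def max_electron_energy_ev(wavelength_m, work_function_ev=2.3):
    """Einstein's relation: KE_max = h*f - phi (negative => no emission).
    2.3 eV is an approximate, commonly quoted work function for potassium."""
    photon_ev = H * C / wavelength_m / EV
    return photon_ev - work_function_ev

print(max_electron_energy_ev(450e-9))  # blue light: ~ +0.46 eV -> electrons ejected
print(max_electron_energy_ev(700e-9))  # red light:  ~ -0.53 eV -> none, however bright
```

Brightness changes only the photon count, so in this picture a brighter red lamp ejects exactly as many electrons as a dim one: zero.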
While the energy of ejected electrons reflected Planck's constant, the existence of photons was not explicitly proven until the discovery of the photon antibunching effect, of which a modern experiment can be performed in undergraduate-level labs. This phenomenon could only be explained via photons, and not through any semi-classical theory (which could alternatively explain the photoelectric effect). When Einstein received his Nobel Prize in 1921, it was not for his more difficult and mathematically laborious special and general relativity, but for the simple, yet totally revolutionary, suggestion of quantized light. Einstein's "light quanta" would not be called photons until 1925, but even in 1905 they represented the quintessential example of wave-particle duality. Electromagnetic radiation propagates following linear wave equations, but can only be emitted or absorbed as discrete elements, thus acting as a wave and a particle simultaneously.

Einstein's explanation of the photoelectric effect

In 1905, Albert Einstein provided an explanation of the photoelectric effect, a hitherto troubling experiment that the wave theory of light seemed incapable of explaining. He did so by postulating the existence of photons, quanta of light energy with particulate qualities. In the photoelectric effect, it was observed that shining a light on certain metals would lead to an electric current in a circuit. Presumably, the light was knocking electrons out of the metal, causing current to flow. However, using the case of potassium as an example, it was also observed that while a dim blue light was enough to cause a current, even the strongest, brightest red light available with the technology of the time caused no current at all. According to the classical theory of light and matter, the strength or amplitude of a light wave was in proportion to its brightness: a bright light should have been easily strong enough to create a large current. Yet, oddly, this was not so. Einstein explained this enigma by postulating that the electrons can receive energy from the electromagnetic field only in discrete portions (quanta that were called photons): an amount of energy E related to the frequency f of the light by E = hf, where h is Planck's constant (6.626 × 10⁻³⁴ J·s). Only photons of a high enough frequency (above a certain threshold value) could knock an electron free. For example, photons of blue light had sufficient energy to free an electron from the metal, but photons of red light did not. One photon of light above the threshold frequency could release only one electron; the higher the frequency of a photon, the higher the kinetic energy of the emitted electron, but no amount of light (using technology available at the time) below the threshold frequency could release an electron. To "violate" this law would require extremely high-intensity lasers which had not yet been invented. Intensity-dependent phenomena have now been studied in detail with such lasers. Einstein was awarded the Nobel Prize in Physics in 1921 for his discovery of the law of the photoelectric effect.

De Broglie's wavelength

In 1924, Louis-Victor de Broglie formulated the de Broglie hypothesis, claiming that all matter, not just light, has a wave-like nature; he related wavelength (denoted as λ) and momentum (denoted as p) by λ = h/p. This is a generalization of Einstein's equation above, since the momentum of a photon is given by p = E/c = hf/c and the wavelength (in a vacuum) by λ = c/f, where c is the speed of light in vacuum.
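De Broglie's relation also makes it easy to see why, as noted earlier, wave properties of macroscopic objects go undetected. A short sketch, with illustrative speeds of my own choosing:

```python
H = 6.626e-34  # Planck's constant, J*s

def de_broglie_wavelength_m(mass_kg, speed_m_s):
    """lambda = h / p, with p = m*v (non-relativistic momentum)."""
    return H / (mass_kg * speed_m_s)

# Electron at about 1% of light speed vs. a thrown baseball.
electron = de_broglie_wavelength_m(9.109e-31, 3.0e6)  # ~2.4e-10 m, atom-sized
baseball = de_broglie_wavelength_m(0.145, 40.0)       # ~1.1e-34 m, unobservable

print(f"electron: {electron:.2e} m, baseball: {baseball:.2e} m")
```

The electron's wavelength is comparable to atomic spacings in a crystal, which is exactly why the diffraction experiments described next worked; the baseball's is some twenty orders of magnitude below anything measurable.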
De Broglie's formula was confirmed three years later for electrons (which differ from photons in having a rest mass) with the observation of electron diffraction in two independent experiments. At the University of Aberdeen, George Paget Thomson passed a beam of electrons through a thin metal film and observed the predicted interference patterns. At Bell Labs, Clinton Joseph Davisson and Lester Halbert Germer guided their beam through a crystalline grid. De Broglie was awarded the Nobel Prize for Physics in 1929 for his hypothesis. Thomson and Davisson shared the Nobel Prize for Physics in 1937 for their experimental work.

Heisenberg's uncertainty principle

Heisenberg's uncertainty principle states that σx · σp ≥ ħ/2, where:
- σ indicates standard deviation, a measure of spread or uncertainty;
- x and p are a particle's position and linear momentum respectively;
- ħ is the reduced Planck's constant (Planck's constant divided by 2π).
Heisenberg originally explained this as a consequence of the process of measuring: Measuring position accurately would disturb momentum and vice versa, offering an example (the "gamma-ray microscope") that depended crucially on the de Broglie hypothesis. The thought is now, however, that this only partly explains the phenomenon, but that the uncertainty also exists in the particle itself, even before the measurement is made. In fact, the modern explanation of the uncertainty principle, extending the Copenhagen interpretation first put forward by Bohr and Heisenberg, depends even more centrally on the wave nature of a particle: Just as it is nonsensical to discuss the precise location of a wave on a string, particles do not have perfectly precise positions; likewise, just as it is nonsensical to discuss the wavelength of a "pulse" wave traveling down a string, particles do not have perfectly precise momenta (which correspond to the inverse of wavelength). Moreover, when position is relatively well defined, the wave is pulse-like and has a very ill-defined wavelength (and thus momentum). And conversely, when momentum (and thus wavelength) is relatively well defined, the wave looks long and sinusoidal, and therefore it has a very ill-defined position.

de Broglie–Bohm theory

De Broglie himself had proposed a pilot wave construct to explain the observed wave-particle duality. In this view, each particle has a well-defined position and momentum, but is guided by a wave function derived from Schrödinger's equation. The pilot wave theory was initially rejected because it generated non-local effects when applied to systems involving more than one particle. Non-locality, however, soon became established as an integral feature of quantum theory (see EPR paradox), and David Bohm extended de Broglie's model to explicitly include it. In the resulting representation, also called the de Broglie–Bohm theory or Bohmian mechanics, the wave-particle duality vanishes; the wave behaviour is explained as scattering with wave appearance, because the particle's motion is subject to a guiding equation or quantum potential. "This idea seems to me so natural and simple, to resolve the wave-particle dilemma in such a clear and ordinary way, that it is a great mystery to me that it was so generally ignored," wrote J. S. Bell.
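Before turning to heavier particles, the position-momentum trade-off described above can be verified numerically: discretize a Gaussian wavepacket, take its Fourier transform to obtain the momentum-space wavefunction, and compute both spreads. This sketch (mine, not from the article) uses NumPy and natural units with ħ = 1; a Gaussian saturates the bound, so the product comes out at about 0.5:

```python
import numpy as np

HBAR = 1.0  # natural units

# A Gaussian position-space wavepacket; |psi|^2 has standard deviation sigma.
sigma = 2.0
x = np.linspace(-40.0, 40.0, 4096)
dx = x[1] - x[0]
psi = np.exp(-x**2 / (4.0 * sigma**2))
psi /= np.sqrt((np.abs(psi)**2).sum() * dx)  # normalize

# The momentum-space wavefunction is the Fourier transform of psi.
p = 2.0 * np.pi * HBAR * np.fft.fftfreq(x.size, d=dx)
phi = np.fft.fft(psi) * dx / np.sqrt(2.0 * np.pi * HBAR)
dp = 2.0 * np.pi * HBAR / (x.size * dx)

def std(values, prob, step):
    mean = (values * prob).sum() * step
    return np.sqrt(((values - mean)**2 * prob).sum() * step)

sigma_x = std(x, np.abs(psi)**2, dx)
sigma_p = std(p, np.abs(phi)**2, dp)
print(sigma_x * sigma_p, ">=", HBAR / 2)  # ~0.5: the Gaussian saturates the bound
```

Narrowing the packet (smaller sigma) shrinks the position spread and widens the momentum spread in lockstep, which is the Fourier-transform statement of the principle.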
Wave behavior of large objects

Since the demonstrations of wave-like properties in photons and electrons, similar experiments have been conducted with neutrons and protons. Among the most famous experiments are those of Estermann and Otto Stern in 1929. Authors of similar recent experiments with atoms and molecules, described below, claim that these larger particles also act like waves. A dramatic series of experiments emphasizing the action of gravity in relation to wave–particle duality was conducted in the 1970s using the neutron interferometer. Neutrons, one of the components of the atomic nucleus, provide much of the mass of a nucleus and thus of ordinary matter. In the neutron interferometer, they act as quantum-mechanical waves directly subject to the force of gravity. While the results were not surprising since gravity was known to act on everything, including light (see tests of general relativity and the Pound–Rebka falling photon experiment), the self-interference of the quantum mechanical wave of a massive fermion in a gravitational field had never been experimentally confirmed before. In 1999, the diffraction of C60 fullerenes by researchers from the University of Vienna was reported. Fullerenes are comparatively large and massive objects, having an atomic mass of about 720 u. The de Broglie wavelength of the incident beam was about 2.5 pm, whereas the diameter of the molecule is about 1 nm, about 400 times larger. In 2012, these far-field diffraction experiments could be extended to phthalocyanine molecules and their heavier derivatives, which are composed of 58 and 114 atoms respectively. In these experiments the build-up of such interference patterns could be recorded in real time and with single-molecule sensitivity. In 2003, the Vienna group also demonstrated the wave nature of tetraphenylporphyrin, a flat biodye with an extension of about 2 nm and a mass of 614 u. For this demonstration they employed a near-field Talbot Lau interferometer. In the same interferometer they also found interference fringes for C60F48, a fluorinated buckyball with a mass of about 1600 u, composed of 108 atoms. Large molecules are already so complex that they give experimental access to some aspects of the quantum-classical interface, i.e., to certain decoherence mechanisms. In 2011, the interference of molecules as heavy as 6910 u could be demonstrated in a Kapitza–Dirac–Talbot–Lau interferometer. In 2013, the interference of molecules beyond 10,000 u was demonstrated. Whether objects heavier than the Planck mass (about the weight of a large bacterium) have a de Broglie wavelength is theoretically unclear and experimentally unreachable; above the Planck mass a particle's Compton wavelength would be smaller than the Planck length and its own Schwarzschild radius, a scale at which current theories of physics may break down or need to be replaced by more general ones. Recently, Couder, Fort, et al. showed that macroscopic oil droplets on a vibrating surface can serve as a model of wave–particle duality: a localized droplet creates periodic waves around itself, and interaction with them leads to quantum-like phenomena, including interference in a double-slit experiment, unpredictable tunneling (depending in a complicated way on the practically hidden state of the field), orbit quantization (the particle has to "find a resonance" with the field perturbations it creates: after one orbit, its internal phase has to return to the initial state) and the Zeeman effect.

Treatment in modern quantum mechanics

Wave–particle duality is deeply embedded into the foundations of quantum mechanics. In the formalism of the theory, all the information about a particle is encoded in its wave function, a complex-valued function roughly analogous to the amplitude of a wave at each point in space.
This function evolves according to a differential equation (generically called the Schrödinger equation). For particles with mass this equation has solutions that follow the form of the wave equation. Propagation of such waves leads to wave-like phenomena such as interference and diffraction. Massless particles, like photons, are not described by the Schrödinger equation; their wave behavior follows from other wave equations, such as Maxwell's equations for light. The particle-like behavior is most evident due to phenomena associated with measurement in quantum mechanics. Upon measuring the location of the particle, the particle will be forced into a more localized state as given by the uncertainty principle. When viewed through this formalism, the measurement of the wave function will randomly "collapse", or rather "decohere", to a sharply peaked function at some location. For particles with mass the likelihood of detecting the particle at any particular location is equal to the squared amplitude of the wave function there. The measurement will return a well-defined position (subject to uncertainty), a property traditionally associated with particles. It is important to note that a measurement is only a particular type of interaction where some data is recorded and the measured quantity is forced into a particular eigenstate. The act of measurement is therefore not fundamentally different from any other interaction. Following the development of quantum field theory the ambiguity disappeared. The field permits solutions that follow the wave equation, which are referred to as the wave functions. The term particle is used to label the irreducible representations of the Lorentz group that are permitted by the field. An interaction as in a Feynman diagram is accepted as a calculationally convenient approximation where the outgoing legs are known to be simplifications of the propagation and the internal lines are for some order in an expansion of the field interaction. Since the field is non-local and quantized, the phenomena which previously were thought of as paradoxes are explained. Within the limits of the wave-particle duality, quantum field theory gives the same results. There are two ways to visualize the wave-particle behaviour: by the "standard model", described below; and by the de Broglie–Bohm model, where no duality is perceived. Below is an illustration of wave–particle duality as it relates to de Broglie's hypothesis and Heisenberg's uncertainty principle (above), in terms of the position and momentum space wavefunctions for one spinless particle with mass in one dimension. These wavefunctions are Fourier transforms of each other. The more localized the position-space wavefunction, the more likely the particle is to be found with the position coordinates in that region, and correspondingly the momentum-space wavefunction is less localized, so the possible momentum components the particle could have are more widespread. Conversely, the more localized the momentum-space wavefunction, the more likely the particle is to be found with those values of momentum components in that region, and correspondingly the less localized the position-space wavefunction, so the position coordinates the particle could occupy are more widespread.

Alternative views

Wave–particle duality is an ongoing conundrum in modern physics. Most physicists accept wave-particle duality as the best explanation for a broad range of observed phenomena; however, it is not without controversy. Alternative views are also presented here.
These views are not generally accepted by mainstream physics, but serve as a basis for valuable discussion within the community. The pilot wave model, originally developed by Louis de Broglie and further developed by David Bohm into the hidden variable theory, proposes that there is no duality, but rather a system exhibits both particle properties and wave properties simultaneously, and particles are guided, in a deterministic fashion, by the pilot wave (or its "quantum potential"), which will direct them to areas of constructive interference in preference to areas of destructive interference. This idea is held by a significant minority within the physics community. At least one physicist considers the "wave-duality" as not being an incomprehensible mystery. L. E. Ballentine, Quantum Mechanics, A Modern Development, p. 4, explains: When first discovered, particle diffraction was a source of great puzzlement. Are "particles" really "waves?" In the early experiments, the diffraction patterns were detected holistically by means of a photographic plate, which could not detect individual particles. As a result, the notion grew that particle and wave properties were mutually incompatible, or complementary, in the sense that different measurement apparatuses would be required to observe them. That idea, however, was only an unfortunate generalization from a technological limitation. Today it is possible to detect the arrival of individual electrons, and to see the diffraction pattern emerge as a statistical pattern made up of many small spots (Tonomura et al., 1989). Evidently, quantum particles are indeed particles, but their behaviour is very different from what classical physics would have us expect. The Afshar experiment (2007) may suggest that it is possible to simultaneously observe both wave and particle properties of photons. This claim is, however, disputed by other scientists. Carver Mead, an American scientist and professor at Caltech, proposes that the duality can be replaced by a "wave-only" view. In his book Collective Electrodynamics: Quantum Foundations of Electromagnetism (2000), Mead purports to analyze the behavior of electrons and photons purely in terms of electron wave functions, and attributes the apparent particle-like behavior to quantization effects and eigenstates. According to reviewer David Haddon: Mead has cut the Gordian knot of quantum complementarity. He claims that atoms, with their neutrons, protons, and electrons, are not particles at all but pure waves of matter. Mead cites as the gross evidence of the exclusively wave nature of both light and matter the discovery between 1933 and 1996 of ten examples of pure wave phenomena, including the ubiquitous laser of CD players, the self-propagating electrical currents of superconductors, and the Bose–Einstein condensate of atoms. This double nature of radiation (and of material corpuscles) ... has been interpreted by quantum-mechanics in an ingenious and amazingly successful fashion. This interpretation ... appears to me as only a temporary way out... The three wave hypothesis of R. Horodecki relates the particle to the wave. The hypothesis implies that a massive particle is an intrinsically spatially, as well as temporally extended, wave phenomenon governed by a nonlinear law. Still in the days of the old quantum theory, a pre-quantum-mechanical version of wave–particle duality was pioneered by William Duane, and developed by others including Alfred Landé.
Duane explained diffraction of x-rays by a crystal in terms solely of their particle aspect. The deflection of the trajectory of each diffracted photon was explained as due to quantized momentum transfer from the spatially regular structure of the diffracting crystal. It has been argued that there are never exact particles or waves, but only some compromise or intermediate between them. For this reason, in 1928 Arthur Eddington coined the name "wavicle" to describe the objects, although it is not regularly used today. One consideration is that zero-dimensional mathematical points cannot be observed. Another is that the formal representation of such points, the Dirac delta function, is unphysical, because it cannot be normalized. Parallel arguments apply to pure wave states. Roger Penrose states: "Such 'position states' are idealized wavefunctions in the opposite sense from the momentum states. Whereas the momentum states are infinitely spread out, the position states are infinitely concentrated. Neither is normalizable [...]."

Relational approach to wave–particle duality

Relational quantum mechanics has been developed as a point of view that regards the event of particle detection as having established a relationship between the quantized field and the detector. The inherent ambiguity associated with applying Heisenberg's uncertainty principle is consequently avoided; hence there is no wave-particle duality.

Applications

Although it is difficult to draw a line separating wave–particle duality from the rest of quantum mechanics, it is nevertheless possible to list some applications of this basic idea.
- Wave–particle duality is exploited in electron microscopy, where the small wavelengths associated with the electron can be used to view objects much smaller than what is visible using visible light.
- Similarly, neutron diffraction uses neutrons with a wavelength of about 0.1 nm, the typical spacing of atoms in a solid, to determine the structure of solids.
- Photographs are now able to show this dual nature, which may lead to new ways of examining and recording this behaviour.

See also
- Arago spot
- Afshar experiment
- Basic concepts of quantum mechanics
- Complementarity (physics)
- Einstein's thought experiments
- Electron wave-packet interference
- Englert–Greenberger–Yasin duality relation
- Faraday wave
- Hanbury Brown and Twiss effect
- Kapitsa–Dirac effect
- Photon polarization
- Scattering theory
- Wheeler's delayed choice experiment

Notes and references
- Harrison, David (2002). "Complementarity and the Copenhagen Interpretation of Quantum Mechanics". UPSCALE. Dept. of Physics, U. of Toronto. Retrieved 2008-06-21.
- Walter Greiner (2001). Quantum Mechanics: An Introduction. Springer. ISBN 3-540-67458-6.
- R. Eisberg & R. Resnick (1985). Quantum Physics of Atoms, Molecules, Solids, Nuclei, and Particles (2nd ed.). John Wiley & Sons. pp. 59–60. ISBN 047187373X. For both large and small wavelengths, both matter and radiation have both particle and wave aspects.... But the wave aspects of their motion become more difficult to observe as their wavelengths become shorter.... For ordinary macroscopic particles the mass is so large that the momentum is always sufficiently large to make the de Broglie wavelength small enough to be beyond the range of experimental detection, and classical mechanics reigns supreme.
- Kumar, Manjit (2011). Quantum: Einstein, Bohr, and the Great Debate about the Nature of Reality (Reprint ed.). W. W. Norton & Company. pp. 242, 375–376. ISBN 978-0393339888.
- Bohr, N. (1927/1928).
The quantum postulate and the recent development of atomic theory, Nature Supplement April 14 1928, 121: 580–590. - Camilleri, K. (2009). Heisenberg and the Interpretation of Quantum Mechanics: the Physicist as Philosopher, Cambridge University Press, Cambridge UK, ISBN 978-0-521-88484-6. - Preparata, G. (2002). An Introduction to a Realistic Quantum Physics, World Scientific, River Edge NJ, ISBN 978-981-238-176-7. - Nathaniel Page Stites, M.A./M.S. "Light I: Particle or Wave?," Visionlearning Vol. PHY-1 (3), 2005. http://www.visionlearning.com/library/module_viewer.php?mid=132 - Young, Thomas (1804). "Bakerian Lecture: Experiments and calculations relative to physical optics". Philosophical Transactions of the Royal Society. 94: 1–16. Bibcode:1804RSPT...94....1Y. doi:10.1098/rstl.1804.0001. - Thomas Young: The Double Slit Experiment - Buchwald, Jed (1989). The Rise of the Wave Theory of Light: Optical Theory and Experiment in the Early Nineteenth Century. Chicago: University of Chicago Press. ISBN 0-226-07886-8. OCLC 18069573. - Lamb, Willis E.; Scully, Marlan O. (1968). "The photoelectric effect without photons" (PDF). - "Observing the quantum behavior of light in an undergraduate laboratory". American Journal of Physics. 72: 1210. Bibcode:2004AmJPh..72.1210T. doi:10.1119/1.1737397. - Zhang, Q (1996). "Intensity dependence of the photoelectric effect induced by a circularly polarized laser beam". Physics Letters A. 216 (1–5): 125–128. Bibcode:1996PhLA..216..125Z. doi:10.1016/0375-9601(96)00259-9. - Donald H Menzel, "Fundamental formulas of Physics", volume 1, page 153; Gives the de Broglie wavelengths for composite particles such as protons and neutrons. - Brian Greene, The Elegant Universe, page 104 "all matter has a wave-like character" - See this Science Channel production (Season II, Episode VI "How Does The Universe Work?"), presented by Morgan Freeman, https://www.youtube.com/watch?v=W9yWv5dqSKk - Bohmian Mechanics, Stanford Encyclopedia of Philosophy. - Bell, J. S., "Speakable and Unspeakable in Quantum Mechanics", Cambridge: Cambridge University Press, 1987. - Y. Couder, A. Boudaoud, S. Protière, Julien Moukhtar, E. Fort: Walking droplets: a form of wave-particle duality at macroscopic level?, doi:10.1051/epn/2010101, (PDF) - Estermann, I.; Stern O. (1930). "Beugung von Molekularstrahlen". Zeitschrift für Physik. 61 (1–2): 95–125. Bibcode:1930ZPhy...61...95E. doi:10.1007/BF01340293. - R. Colella, A. W. Overhauser and S. A. Werner, Observation of Gravitationally Induced Quantum Interference, Phys. Rev. Lett. 34, 1472–1474 (1975). - Arndt, Markus; O. Nairz; J. Voss-Andreae, C. Keller, G. van der Zouw, A. Zeilinger (14 October 1999). "Wave–particle duality of C60". Nature. 401 (6754): 680–682. Bibcode:1999Natur.401..680A. doi:10.1038/44348. PMID 18494170. - Juffmann, Thomas; et al. (25 March 2012). "Real-time single-molecule imaging of quantum interference". Nature Nanotechnology. Retrieved 27 March 2012. - Quantumnanovienna. "Single molecules in a quantum interference movie". Retrieved 2012-04-21. - Hackermüller, Lucia; Stefan Uttenthaler; Klaus Hornberger; Elisabeth Reiger; Björn Brezger; Anton Zeilinger; Markus Arndt (2003). "The wave nature of biomolecules and fluorofullerenes". Phys. Rev. Lett. 91 (9): 090408. arXiv: . Bibcode:2003PhRvL..91i0408H. doi:10.1103/PhysRevLett.91.090408. PMID 14525169. - Clauser, John F.; S. Li (1994). "Talbot von Lau interefometry with cold slow potassium atoms". Phys. Rev. A. 49 (4): R2213–17. Bibcode:1994PhRvA..49.2213C. 
doi:10.1103/PhysRevA.49.R2213. PMID 9910609. - Brezger, Björn; Lucia Hackermüller; Stefan Uttenthaler; Julia Petschinka; Markus Arndt; Anton Zeilinger (2002). "Matter-wave interferometer for large molecules". Phys. Rev. Lett. 88 (10): 100404. arXiv: . Bibcode:2002PhRvL..88j0404B. doi:10.1103/PhysRevLett.88.100404. PMID 11909334. Archived from the original on 2016-05-21. - Hornberger, Klaus; Stefan Uttenthaler; Björn Brezger; Lucia Hackermüller; Markus Arndt; Anton Zeilinger (2003). "Observation of Collisional Decoherence in Interferometry". Phys. Rev. Lett. 90 (16): 160401. arXiv: . Bibcode:2003PhRvL..90p0401H. doi:10.1103/PhysRevLett.90.160401. PMID 12731960. Archived from the original on 2016-05-21. - Hackermüller, Lucia; Klaus Hornberger; Björn Brezger; Anton Zeilinger; Markus Arndt (2004). "Decoherence of matter waves by thermal emission of radiation". Nature. 427 (6976): 711–714. arXiv: . Bibcode:2004Natur.427..711H. doi:10.1038/nature02276. PMID 14973478. - Gerlich, Stefan; et al. (2011). "Quantum interference of large organic molecules". Nature Communications. 2 (263). Bibcode:2011NatCo...2E.263G. doi:10.1038/ncomms1263. PMC . PMID 21468015. - Eibenberger, S.; Gerlich, S.; Arndt, M.; Mayor, M.; Tüxen, J. (2013). "Matter–wave interference of particles selected from a molecular library with masses exceeding 10 000 amu". Physical Chemistry Chemical Physics. 15 (35): 14696–14700. arXiv: . Bibcode:2013PCCP...1514696E. doi:10.1039/c3cp51500a. PMID 23900710. - Peter Gabriel Bergmann, The Riddle of Gravitation, Courier Dover Publications, 1993 ISBN 0-486-27378-4 online - https://www.youtube.com/watch?v=W9yWv5dqSKk - You Tube video - Yves Couder Explains Wave/Particle Duality via Silicon Droplets - Y. Couder, E. Fort, Single-Particle Diffraction and Interference at a Macroscopic Scale, PRL 97, 154101 (2006) online - A. Eddi, E. Fort, F. Moisy, Y. Couder, Unpredictable Tunneling of a Classical Wave–Particle Association, PRL 102, 240401 (2009) - Fort, E.; Eddi, A.; Boudaoud, A.; Moukhtar, J.; Couder, Y. (2010). "Path-memory induced quantization of classical orbits". PNAS. 107 (41): 17515–17520. arXiv: . Bibcode:2010PNAS..10717515F. doi:10.1073/pnas.1007386107. - http://prl.aps.org/abstract/PRL/v108/i26/e264503 - Level Splitting at Macroscopic Scale - (Buchanan pp. 29–31) - Afshar, S.S.; et al. (2007). "Paradox in Wave Particle Duality". Found. Phys. 37: 295. arXiv: . Bibcode:2007FoPh...37..295A. doi:10.1007/s10701-006-9102-8. - Kastner, R (2005). "Why the Afshar experiment does not refute complementarity". Studies in History and Philosophy of Science Part B: Studies in History and Philosophy of Modern Physics. 36 (4): 649–658. arXiv: . Bibcode:2005SHPMP..36..649K. doi:10.1016/j.shpsb.2005.04.006 – via Elsevier Science Direct. - Steuernagel, Ole (2007-08-03). "Afshar's Experiment Does Not Show a Violation of Complementarity". Foundations of Physics. 37 (9): 1370–1385. arXiv: . Bibcode:2007FoPh...37.1370S. doi:10.1007/s10701-007-9153-5. ISSN 0015-9018. - Jacques, V.; Lai, N. D.; Dréau, A.; Zheng, D.; Chauvat, D.; Treussart, F.; Grangier, P.; Roch, J.-F. (2008-01-01). "Illustration of quantum complementarity using single photons interfering on a grating". New Journal of Physics. 10 (12): 123009. arXiv: . Bibcode:2008NJPh...10l3009J. doi:10.1088/1367-2630/10/12/123009. ISSN 1367-2630. - Georgiev, Danko (2012-01-26). "Quantum Histories and Quantum Complementarity". ISRN Mathematical Physics. 2012: 1–37. doi:10.5402/2012/327278. - David Haddon. "Recovering Rational Science". Touchstone. 
Retrieved 2007-09-12. - Paul Arthur Schilpp, ed, Albert Einstein: Philosopher-Scientist, Open Court (1949), ISBN 0-87548-133-7, p 51. - See section VI(e) of Everett's thesis: The Theory of the Universal Wave Function, in Bryce Seligman DeWitt, R. Neill Graham, eds, The Many-Worlds Interpretation of Quantum Mechanics, Princeton Series in Physics, Princeton University Press (1973), ISBN 0-691-08131-X, pp 3–140. - Horodecki, R. (1981). "De broglie wave and its dual wave". Phys. Lett. A. 87 (3): 95–97. Bibcode:1981PhLA...87...95H. doi:10.1016/0375-9601(81)90571-5. - Horodecki, R. (1983). "Superluminal singular dual wave". Lett. Novo Cimento. 38: 509–511. - Duane, W. (1923). The transfer in quanta of radiation momentum to matter, Proc. Natl. Acad. Sci. 9(5): 158–164. - Landé, A. (1951). Quantum Mechanics, Sir Isaac Pitman and Sons, London, pp. 19–22. - Heisenberg, W. (1930). The Physical Principles of the Quantum Theory, translated by C. Eckart and F.C. Hoyt, University of Chicago Press, Chicago, pp. 77–78. - Eddington, Arthur Stanley (1928). The Nature of the Physical World. Cambridge, UK.: MacMillan. p. 201. - Penrose, Roger (2007). The Road to Reality: A Complete Guide to the Laws of the Universe. Vintage. p. 521, §21.10. ISBN 978-0-679-77631-4. - http://www.quantum-relativity.org/Quantum-Relativity.pdf. See Q. Zheng and T. Kobayashi, Quantum Optics as a Relativistic Theory of Light; Physics Essays 9 (1996) 447. Annual Report, Department of Physics, School of Science, University of Tokyo (1992) 240. - "Press release: The first ever photograph of light as both a particle and wave". Ecole Polytechnique Federale de Lausanne. 2 March 2015. - Animation, applications and research linked to the wave-particle duality and other basic quantum phenomena (Université Paris Sud) - H. Nikolic. "Quantum mechanics: Myths and facts". Foundations of Physics. 37: 1563–1611. arXiv: . Bibcode:2007FoPh...37.1563N. doi:10.1007/s10701-007-9176-y. - Young & Geller. "College Physics". - B. Crowell. "Ch. 34, Light as a Particle" (Web page). Retrieved December 10, 2006. - E.H. Carlson, Wave–Particle Duality: Light on Project PHYSNET - R. Nave. "Wave–Particle Duality" (Web page). HyperPhysics. Georgia State University, Department of Physics and Astronomy. Retrieved December 12, 2005. - Juffmann, Thomas; et al. (25 March 2012). "Real-time single-molecule imaging of quantum interference". Nature Nanotechnology. Retrieved 21 January 2014.
<urn:uuid:d33f8649-a80c-464b-9975-e191fb24ce16>
3.296875
10,177
Knowledge Article
Science & Tech.
45.454896
95,600,570
A hydrogen nucleus, which has a charge e, is situated to the left of a carbon nucleus, which has a charge 6e. Which statement is true?

The answer is that the electrical force experienced by the hydrogen nucleus is to the left, and its magnitude is equal to the magnitude of the force exerted on the carbon nucleus. But why is the electrical force experienced by the hydrogen nucleus to the left? Explain please.

Both nuclei are positively charged, so by Coulomb's law they repel each other: each nucleus is pushed directly away from the other. Because the hydrogen nucleus sits to the left of the carbon nucleus, the repulsive force on it points further to the left (and the force on the carbon nucleus points to the right). By Newton's third law the two forces are equal in magnitude and opposite in direction even though the charges differ: the product of the charges, (e)(6e), is the same in both applications of Coulomb's law.
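To see why the magnitudes match even though the charges differ, here is a minimal numeric sketch of Coulomb's law. The separation r below is an invented value purely for illustration, since the question gives no distance:

```typescript
// Coulomb's law: F = k * q1 * q2 / r^2 (repulsive for like charges).
// The charge product is (e)(6e) = 6e² no matter which nucleus you ask
// about, so the two forces have the same magnitude (Newton's third law).
const k = 8.9875e9;  // Coulomb constant, N·m²/C²
const e = 1.602e-19; // elementary charge, C
const r = 1.0e-10;   // separation in m, assumed purely for illustration

const forceMagnitude = (k * (1 * e) * (6 * e)) / (r * r);

console.log(`on hydrogen: ${forceMagnitude.toExponential(3)} N, pointing left`);
console.log(`on carbon:   ${forceMagnitude.toExponential(3)} N, pointing right`);
// Both lines print ≈ 1.384e-7 N at this assumed separation.
```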
<urn:uuid:9f56e352-3937-4080-88f6-ab95ea4f488a>
3.296875
179
Q&A Forum
Science & Tech.
46.310418
95,600,617
0327 GMT July 21, 2018 That's the conclusion of a citizen science project called Project MERCCURI that analyzed bacteria found on 15 locations on the ISS and compared them with samples from homes on Earth as well as the Human Microbiome Project, news.xinhuanet.com wrote. This study, titled ‘A microbial survey of the International Space Station’ was published in PeerJ, a peer-reviewed open access journal. Study author David Coil, a microbiologist at the University of California (UC), Davis, said, "So 'is it gross?' and 'will you see microbes from space?' are probably the two most common questions we get about this work. "As to the first, we are completely surrounded by mostly harmless microbes on Earth, and we see a broadly similar microbial community on the ISS. So it is probably no more or less gross than your living room." Regarding finding microbes from space, he said, "Since the ISS is completely enclosed, the microbes inside the station come from the people on the ISS and the supplies sent to them.” Overall, the ISS is home to at least 12,554 distinct microbial species, and the proportion of species that are closely related to known human pathogens is on par with similar built environments on Earth, the study said. Lead author Jenna Lang, former postdoctoral scholar at UC Davis, said, "The microbiome on the surfaces on the ISS looks very much like the surfaces of its inhabitants, which is not surprising, given that they are the primary source. "What we were also pleased to see is that the diversity was fairly high, indicating that it did not look like a 'sick' microbial community." Project MERCCURI is a collaboration between UC Davis and other organizations including Science Cheerleader, a group of current and former professional cheerleaders pursuing careers in science and math. Previous work from the same Project MERCCURI team sent 48 bacterial samples collected in the US to the ISS and described a bacteria that grew better in space than on Earth.
<urn:uuid:e7c9037b-843c-403c-895a-a00cfb72f21c>
3.34375
421
News Article
Science & Tech.
44.76607
95,600,623
Extensive fields of hydrocarbon-rich gas seepage, mud volcanoes and pockmarks have all been mapped by the EUROCORES programme EUROMARGINS. On 4 - 6 October 2006, scientists from 50 different research groups in 12 different countries came together in Bologna, Italy to discuss future cross-discipline, pan-European and pan-World research following in the footsteps of this four year programme as EUROMARGINS is coming to an end. Collaboration in the ‘cold’ As ocean sediments compact in cold seeps, fluids ooze out of the sediment and into the water. The cold-seep fluids contain chemical compounds produced by the decomposition of organic materials or by inorganic chemical reactions which occur at high temperatures and pressures. Near cold seeps in the Eastern Mediterranean, Sébastien Duperron from Université Pierre et Marie Curie in France has found unique bacterial symbiosis with mussels. Symbiotic associations between bivalves (mussels) and bacteria allow the former to benefit from the bacteria’s ability to chemosynthetically (without light) derive energy from the chemical compounds produced and use this energy to ensure primary production. “In the bivalve species Idas sp., we have found an association with six different symbionts. This is the widest diversity of symbionts ever described in a bivalve species,” said Duperron. This means that the mussel, depending on which type of symbionts it carries, can derive its energy from either sulphide or methane. In addition, Duperron has also found that in the Idas sp., three of the symbionts belong to bacterial groups previously not reported to include symbiotic bacteria. They seem to provide their hosts with nutrient from a yet unidentified source. But life in these alien environments can also exist without symbionts as Ian MacDonald from Texas A&M University, Corpus Christi US has demonstrated. His observations of the fauna around coastal margin hydrocarbon seeps in the Gulf of Mexico have revealed a habitat rich in biological activity and without a need for symbionts to extract nutrients. MacDonald found that the productivity of deep-water seeps is overwhelmingly based on chemosynthesis (deriving energy from chemical compounds instead of light) and also some chemoautotrophic symbiosis (using a symbiont to derive energy from chemical compounds). However some communities of deep-sea corals associated with many seeps are probably filter feeders. Recent research findings indicate that the corals around the seeps may be much more widespread at seeps than previously realised. This fact adds to the biological diversity and ecological complexity of seep communities. Underwater mud volcanoes In the Nile deep-sea fan, mud volcanoes were discovered in the mid-1990s and they are still being investigated by a EUROMARGINS project. In the Gulf of Cadiz, the first mud volcanoes were discovered in 1999. The deepest mud volcano in this area is located at 3890m. Luis Pinheiro from the University of Aveiro in Portugal participated in the 1999 cruise when mud volcanoes were first discovered. Pinheiro and his team have been investigating this area in close collaboration with Spain, France and Belgium. So far they have mapped 40 mud volcanoes, some as big as over 4km across and a few hundred meters high supporting characteristic ecosystems with particular faunal communities, living directly or indirectly on methane, some of which appear to represent completely new species to science. 
Over four years, EUROMARGINS has gathered about 75 teams from 12 countries around a variety of complementary topics dedicated to the imaging, monitoring, reconstruction and modelling of the physical and chemical processes that occur in the passive margin system. Further information is available at www.esf.org/euromargins or by contacting email@example.com. When it comes to an end in late 2007, EUROMARGINS will be succeeded by new EUROCORES Programmes such as EuroMARC and Topo-Europe, which will both contribute to the future of European geosciences. Sofia Valleley | alfa
<urn:uuid:801460ba-4dbd-4c65-b3b2-5e8d97ad0e9a>
3.453125
1,450
Content Listing
Science & Tech.
34.220065
95,600,633
Understanding Nullable Types in TypeScript with examples

Nullable types were introduced in TypeScript 2.0. They help ensure that a variable initialized with a value keeps holding values other than null.

Default number | Assigning null:

let home2OfficeKm = 10;
home2OfficeKm = null;

This works fine in TypeScript versions before 2.0, and in later versions when strict null checking is off. If you do not want null to be assignable to home2OfficeKm at any time, add the following entry alongside your existing compilerOptions properties in the tsconfig.json file:

"strictNullChecks": true

With that enabled, the snippet above throws the "Type 'null' is not assignable to type 'number'" error.

But the same thing won't happen with the snippet below.

Default undefined | Assigning null:

let home2BeachKm;
home2BeachKm = null;

This is because the variable is not initialized with a default value (unlike the number = 10 case above), so its inferred type is not number and it accepts null.

Say you have many variables: enabling strictNullChecks affects all of them in the same way, so variables that legitimately need to hold null at some point would now raise errors. In those cases you can use a union type.

Default union (null | number) | Assigned with null:

let km: null | number = 18;
km = null;

This won't create any issue, because km is a union type of null and number, so it allows both.

Default null | Assigned with number:

let home2OfficeKm = null;
home2OfficeKm = 10;

This also throws an error.

Nullable Types – Things to keep in mind:
- Introduced in TypeScript 2.0.
- "strictNullChecks": true must be enabled in tsconfig.json to make use of this feature.
- Variables that should reject null must be initialized with a default value (a number, a string, etc.); only then will the compiler prevent null from being assigned to them.
- The check does not constrain variables that are only declared and never given a default value: such variables are undefined by default, and nullable-type checking does not apply to them.

A short example of safely consuming a nullable union type follows the list below.
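Here is a short, self-contained sketch (the variable and function names are invented for illustration) of the pattern the union type enables under "strictNullChecks": true. The compiler forces a null check before the value can be used as a number:

```typescript
// Requires "strictNullChecks": true in tsconfig.json.
let km: number | null = 18;
km = null; // allowed: null is part of the declared union type

function describeDistance(distance: number | null): string {
  if (distance === null) {
    // Without this branch, using `distance` as a number below
    // would be a compile-time error under strictNullChecks.
    return "distance unknown";
  }
  // Here `distance` has been narrowed from `number | null` to `number`.
  return `${distance} km`;
}

console.log(describeDistance(km)); // "distance unknown"
console.log(describeDistance(10)); // "10 km"
```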
<urn:uuid:27264720-3de6-4c19-a533-bfa17599c161>
2.890625
530
Documentation
Software Dev.
46.690882
95,600,642
Ocean historians affiliated with the Census of Marine Life have painted the first detailed portrait of a burst of fishing from 1900 to 1950 that preceded the collapse of once abundant bluefin tuna populations off the coast of northern Europe. The chronicle of decimation of the bluefin tuna population in the North Atlantic is being published as other affiliated researchers release the latest results of modern electronic fish tagging efforts off Ireland and in the Gulf of Mexico, revealing remarkable migrations and life-cycle secrets of the declining species. Tuna Past: Case of the Disappearing Bluefins Dusting off sales records, fishery yearbooks and other sources, researchers Brian R. MacKenzie of the Technical University of Denmark and the late Ransom Myers of Canada’s Dalhousie University show majestic bluefins teemed in northern European waters (North Sea, Norwegian Sea, Skaggerak, Kattegat, and Oresund ) for a few months each summer until an industrialized fishery geared up in the 1920s and literally filled the floors of European market halls with them. The research, to appear in a special edition of the peer-reviewed journal Fisheries Research, shows that generations ago Atlantic bluefins typically arrived in the northern waters by the thousands in late June and departed by October at the latest, their foraging travels likely related to seasonal warming. Danish fishers from the mid-1800s welcomed the bluefin tuna as a partner in the catch of the garfish species. The bluefins pursued garfish into nets that fishers set close to shore. Before World War I, the bluefins were rarely captured and even coastal sightings were exciting events. One measuring 2.7 meters (almost 9 feet) washed up on a German shore in 1903. Those captured in the 1920s ranged from 40 kilograms to giants of 700 kilograms, with an average weight of 50 to 100 kilograms. After World War I, ever better know-how and equipment, including harpoon rifles and hydraulic net lifts, helped northern European fishers land burgeoning quantities of tuna. Major tuna fishing countries at the time such as Norway, Denmark, Sweden and Germany, recorded virtually no bluefin landings in 1910 and almost 5,500 tonnes by 1949. In 1915 nearly 8,000 bluefin (690 tons) were landed in Gothenburg, Sweden alone. In 1929, Denmark built its first tuna cannery – a milestone in the new industrial approach echoing elsewhere in Europe. In the 1920s, the catch peaked at Boulogne, France, homeport for the French bluefin fishers in the North Sea. Landings of bluefin tuna by northern European boats in record numbers soared through the 1940s and by decade’s end approached the catch levels of traditional Mediterranean fisheries. In 1949, Norway had 43 boats in pursuit of the bluefin; the next year it had 200. Norwegian catches briefly exceeded 10,000 tons per year in the early 1950s. Better means of catching the tuna early in the 20th Century also attracted sport fishing pursuits and businesses, beginning in the 1920s. One sport fisher caught 62 bluefins near the Danish island of Anholt in 1928 as enthusiasm spread quickly through the U.K., Norway and elsewhere. Abundant fish allowed recreational fishermen to establish the Scandinavian Tuna Club, which arranged bluefin tournaments in the straits between Denmark and Sweden until the early 1960s. The authors say the fish was a top predator in the North Atlantic ecosystem, feeding largely on herring and mackerel, squid and other species. 
By one estimate from the 1950s, some 30% of all herring consumed by its predators in the area was ingested by bluefin tuna. The booming catches helped strip the Atlantic bluefin population in a relative blink of time – 1910 to 1950. The species virtually disappeared from the region in the early 1960s and is still rare today. Financed by the CONWOY project of Denmark’s National Science Foundation and the European Union’s MARBEF Network of Excellence, the work is part of the History of Marine Animal Populations, one of 17 projects that comprise the global Census of Marine Life. Says Dr. MacKenzie: “We can’t say with certainty that over-exploitation is the smoking gun in the bluefin tuna’s disappearance – but clearly there’s been a murder. “We’ve shown bluefin tuna were here for a long time in high numbers. High fishing pressure preceded the species’ virtual disappearance from the area and apparently played a key role but other factors under study might have compounded the fishery’s demise –the catch of juvenile tuna in subsequent years, for example. “As well, it’s important to note that fishers from many other countries in Europe as well as Japan, the U.S. and Canada, contributed to the soaring increase in North Atlantic bluefin tuna fishing throughout the last century. We hope our work will inspire a more precautionary approach to the management of bluefin tuna in the Atlantic, with more concern about re-establishing and maintaining the historical range of the species.” Another new study suggests that the bluefin tuna that historically foraged in the Norwegian and North Seas in summer likely commuted to and from the area via waters off the northwest coast of Ireland. CoML-associated scientists working with the Tag-A-Giant (TAG) program used electronic tags to track migrations of giant bluefins found off the north of Ireland – fish targeted by Irish sport fishers and Japanese long line vessels today. In a paper for the journal Hydrobiologia, CoML and marine scientists from Canada, Ireland and the U.S. (Michael Stokesbury, Ronan Cosgrove, Andre Boustany, Daragh Browne, Steven L. H. Teo, Ron O’Dor and Barbara A. Block) report on the remarkable migrations of Atlantic bluefin tuna tagged off western Ireland in 2003 and 2004. Two fish tagged within minutes of each other wound up more than 5,000 km apart eight months later. One traveled 6,000 km southwest in 177 days past Bermuda to waters about 300 kilometres northeast of Cuba; the other remained in the eastern Atlantic and moved off the coasts of Portugal. A third tagged bluefin swam into the Mediterranean Sea and was caught by fishers southeast of Malta in 2005. Experts believe there are two stocks of Atlantic bluefin tuna, one that spawns in the Mediterranean Sea, the other in the Gulf of Mexico and the Florida Straits. It’s theorized that the two stocks forage together in the North Atlantic and travel to opposite sides of the ocean to reproduce. “These tagging data potentially provides new evidence that mixing is occurring in the northern waters of the eastern Atlantic and complement prior data showing that the western and eastern stocks of north Atlantic bluefin mingle in rich foraging grounds of the central Atlantic,” says Barbara Block, chief scientist of the CoML Tagging of Pacific Predators program and TAG. “It’s possible all three fish were spawned in the Mediterranean,” she emphasized. 
“We’re conducting genetic tests on fin clips taken while tagging to help confirm the origins of the specific bluefin tuna that we followed electronically.” The research is funded by Ireland’s Bord Iascaigh Mhara, the Packard Foundation and the Monterey Bay Aquarium Foundation. Understanding the Gulf of Mexico Bluefin Tuna Breeding Area Yet another modern tagging experiment has yielded important new insights into the vital bluefin breeding area in the Gulf of Mexico and the oceanographic conditions in which the adult tuna court and spawn. The work by U.S. researchers Steven L. H. Teo, Andre M. Boustany and Barbara Block, for publication in the journal Marine Biology, sheds light on habitat used by spawning bluefin tuna and may one day lead to better protection in U.S. waters. The spawning stock has dropped 90% in the past 30 years. Using data from 28 tagged Atlantic bluefins (recording location, depth, water temperature, light level and the tuna’s abdominal temperature), the researchers correlated the breeding behaviours revealed by the tags with a suite of variables at those locations, including sea surface temperature, current and wind speed, topography of the ocean floor, eddies, and surface chlorophyll concentrations. They found a majority of bluefin (some weighing up to 300 kilograms) gravitated to areas near the Florida straits and the western part of the Gulf of Mexico to reproduce and that the favourite places tend to be where sea surface temperatures are 24 to 27 degrees Celsius and where the slope of the shelf is steep. Even a relatively small change in the distribution of sea surface temperatures in the Gulf of Mexico will likely cause a change in the timing and location of spawning, the authors say. Companion research says the tags recorded remarkable behaviour shifts among tuna breeding in this region, including shallower, more oscillatory diving and a tendency to stay near the surface. Lingering in the region, the courting and breeding fish also made more sinuous tracks. As well, says lead author Steve Teo: “We found that bluefin get warm when they undergo courtship and our physiological measurements indicate they are vulnerable to overheating while in the Gulf of Mexico.” Gulf of Mexico eddies – tornado-like features that move counter-clockwise and cause a retention of food -- may also play an important role in the bluefin’s breeding process. “The bluefin showed a preference for regions of the Gulf of Mexico – information that can help map where they spawn,” says Dr. Teo. Funded by NOAA Fisheries, the National Science Foundation and the Packard, Pew, Disney and Monterey Bay Aquarium Foundations, the researchers say bluefin tuna breeding areas could be better protected thanks to tagging data showing where the species spawns from March to June. Past research showed breeding locations are similar to areas where the bluefin by-catch is high. “Together these new reports help define where bluefin spawn and provide evidence for their trans-oceanic migrations. A fisherman from the Scandinavian Tuna Club might have chased the same giant bluefin as a Cuban fisherman,” says Dr. O’Dor, a Dalhousie University professor and senior scientist of the Census of Marine Life. 
“Part of the lesson here is that restoring bluefin tuna populations to health requires us to consider and manage activities one-fifth of the way around the world.” Upcycling of PET Bottles: New Ideas for Resource Cycles in Germany 25.06.2018 | Fraunhofer-Institut für Betriebsfestigkeit und Systemzuverlässigkeit LBF Dry landscapes can increase disease transmission 20.06.2018 | Forschungsverbund Berlin e.V. A new manufacturing technique uses a process similar to newspaper printing to form smoother and more flexible metals for making ultrafast electronic devices. The low-cost process, developed by Purdue University researchers, combines tools already used in industry for manufacturing metals on a large scale, but uses... For the first time ever, scientists have determined the cosmic origin of highest-energy neutrinos. A research group led by IceCube scientist Elisa Resconi, spokesperson of the Collaborative Research Center SFB1258 at the Technical University of Munich (TUM), provides an important piece of evidence that the particles detected by the IceCube neutrino telescope at the South Pole originate from a galaxy four billion light-years away from Earth. To rule out other origins with certainty, the team led by neutrino physicist Elisa Resconi from the Technical University of Munich and multi-wavelength... For the first time a team of researchers have discovered two different phases of magnetic skyrmions in a single material. Physicists of the Technical Universities of Munich and Dresden and the University of Cologne can now better study and understand the properties of these magnetic structures, which are important for both basic research and applications. Whirlpools are an everyday experience in a bath tub: When the water is drained a circular vortex is formed. Typically, such whirls are rather stable. Similar... Physicists working with Roland Wester at the University of Innsbruck have investigated if and how chemical reactions can be influenced by targeted vibrational excitation of the reactants. They were able to demonstrate that excitation with a laser beam does not affect the efficiency of a chemical exchange reaction and that the excited molecular group acts only as a spectator in the reaction. A frequently used reaction in organic chemistry is nucleophilic substitution. It plays, for example, an important role in in the synthesis of new chemical... Optical spectroscopy allows investigating the energy structure and dynamic properties of complex quantum systems. Researchers from the University of Würzburg present two new approaches of coherent two-dimensional spectroscopy. "Put an excitation into the system and observe how it evolves." According to physicist Professor Tobias Brixner, this is the credo of optical spectroscopy.... 13.07.2018 | Event News 12.07.2018 | Event News 03.07.2018 | Event News 20.07.2018 | Power and Electrical Engineering 20.07.2018 | Information Technology 20.07.2018 | Materials Sciences
<urn:uuid:50340ae0-2f01-4941-89bc-b0eff67e242b>
3.34375
2,795
Content Listing
Science & Tech.
43.213712
95,600,652
Near Realtime Maps of Possible Earthquake-Triggered Landslides USGS scientists have been developing a system to quickly identify areas where landslides may have been triggered by a significant earthquake. U.S. Geological Survey natural hazards science strategy: promoting the safety, security, and economic well-being of the Nation The mission of the U.S. Geological Survey (USGS) in natural hazards is to develop and apply hazard science to help protect the safety, security, and economic well-being of the Nation. The costs and consequences of natural hazards can be enormous, and each year more people and infrastructure are at risk. USGS scientific research—founded on...Holmes, Robert R.; Jones, Lucile M.; Eidenshink, Jeffery C.; Godt, Jonathan W.; Kirby, Stephen H.; Love, Jeffrey J.; Neal, Christina A.; Plant, Nathaniel G.; Plunkett, Michael L.; Weaver, Craig S.; Wein, Anne; Perry, Suzanne C. Characterization of the Hosgri Fault Zone and adjacent structures in the offshore Santa Maria Basin, south-central California: Chapter CC of Evolution of sedimentary basins/onshore oil and gas investigations - Santa Maria province The Hosgri Fault Zone trends subparallel to the south-central California coast for 110 km from north of Point Estero to south of Purisima Point and forms the eastern margin of the present offshore Santa Maria Basin. Knowledge of the attributes of the Hosgri Fault Zone is important for petroleum development, seismic engineering, and environmental...Willingham, C. Richard; Rietman, Jan D.; Heck, Ronald G.; Lettis, William R. Evaluation of ISCCP multisatellite radiance calibration for geostationary imager visible channels using the moon Since 1983, the International Satellite Cloud Climatology Project (ISCCP) has collected Earth radiance data from the succession of geostationary and polar-orbiting meteorological satellites operated by weather agencies worldwide. Meeting the ISCCP goals of global coverage and decade-length time scales requires consistent and stable calibration of...Stone, Thomas C.; William B. Rossow; Joseph Ferrier; Laura M. Hinkelman Soil diversity and hydration as observed by ChemCam at Gale crater, Mars The ChemCam instrument, which provides insight into martian soil chemistry at the submillimeter scale, identified two principal soil types along the Curiosity rover traverse: a fine-grained mafic type and a locally derived, coarse-grained felsic type. The mafic soil component is representative of widespread martian soils and is similar in...Meslin, P.-Y.; Gasnault, O.; Forni, O.; Schroder, S.; Cousin, A.; Berger, G.; Clegg, S.M.; Lasue, J.; Maurice, S.; Sautter, V.; Le Mouélic, S.; Wiens, R.C.; Fabre, C.; Goetz, W.; Bish, D.L.; Mangold, N.; Ehlmann, B.; Lanza, N.; Harri, A.-M.; Anderson, Ryan Bradley; Rampe, E.; McConnochie, T.H.; Pinet, P.; Blaney, D.; Archer, D.; Barraclough, B.; Bender, S.; Blake, D.; Blank, J.G.; Bridges, N.; Clark, B. C.; DeFlores, L.; Delapp, D.; Dromart, G.; Dyar, M.D.; Fisk, M. R.; Gondet, B.; Grotzinger, J.; Herkenhoff, K.; Johnson, J.; Lacour, J.-L.; Langevin, Y.; Leshin, L.; Lewin, E.; Madsen, M.B.; Melikechi, N.; Mezzacappa, Alissa; Mischna, M.A.; Moores, J.E.; Newsom, H.; Ollila, A.; Renno, N.; Sirven, J.B.; Tokar, R.; de la Torre, M.; d'Uston, L.; Vaniman, D.; Yingst, A.
Operational Group Sandy technical progress report Hurricane Sandy made US landfall near Atlantic City, NJ on 29 October 2012, causing 72 direct deaths, displacing thousands of individuals from damaged or destroyed dwellings, and leaving over 8.5 million homes without power across the northeast and mid-Atlantic. To coordinate federal rebuilding activities in the affected region, the President... The role of photogeologic mapping in traverse planning: Lessons from DRATS 2010 activities We produced a 1:24,000 scale photogeologic map of the Desert Research and Technology Studies (DRATS) 2010 simulated lunar mission traverse area and surrounding environments located within the northeastern part of the San Francisco Volcanic Field (SFVF), north-central Arizona. To mimic an exploratory mission, we approached the region “blindly” by...Skinner, James A.; Fortezzo, Corey M. Stratigraphic architecture of bedrock reference section, Victoria Crater, Meridiani Planum, Mars The Mars Exploration Rover Opportunity has investigated bedrock outcrops exposed in several craters at Meridiani Planum, Mars, in an effort to better understand the role of surface processes in its geologic history. Opportunity has recently completed its observations of Victoria crater, which is 750 m in diameter and exposes cliffs up to ~15 m...Edgar, Lauren A.; Grotzinger, John P.; Hayes, Alex G.; Rubin, David M.; Squyres, Steve W.; Bell, James F.; Herkenhoff, Ken E. An atlas of Mars sedimentary rocks as seen by HiRISE Images of distant and unknown places have long stimulated the imaginations of both explorers and scientists. The atlas of photographs collected during the Hayden (1872) expedition to the Yellowstone region was essential to its successful advocacy and selection in 1872 as America's first national park. Photographer William Henry Jackson of the...Grotzinger, John P.; Milliken, Ralph E.; Beyer, Ross; Stack, Kathryn M.; Griffes, Jennifer L.; Milliken, Ralph E.; Herkenhoff, Ken E.; Byrne, Shane; Holt, John W.; Grotzinger, John P. Note: Rotaphone, a new self-calibrated six-degree-of-freedom seismic sensor We have developed and tested (calibration, linearity, and cross-axis errors) a new six-degree-of-freedom mechanical seismic sensor for collocated measurements of three translational and three rotational ground motion velocity components. The device consists of standard geophones arranged in parallel pairs to detect spatial gradients. The...Brokešová, Johana; Málek, Jiří; Evans, John R. Mars global digital dune database: MC-30 The Mars Global Digital Dune Database (MGD3) provides data and describes the methodology used in creating the global database of moderate- to large-size dune fields on Mars. The database is being released in a series of U.S. Geological Survey Open-File Reports. The first report (Hayward and others, 2007) included dune fields from lat 65° N. to...Hayward, R.K.; Fenton, L.K.; Titus, T.N.; Colaprete, A.; Christensen, P.R. Earthquake recurrence models fail when earthquakes fail to reset the stress field Parkfield's regularly occurring M6 mainshocks, about every 25 years, have over two decades stoked seismologists' hopes to successfully predict an earthquake of significant size. However, with the longest known inter-event time of 38 years, the latest M6 in the series (28 Sep 2004) did not conform to any of the applied forecast models, questioning...Tormann, Thessa; Wiemer, Stefan; Hardebeck, Jeanne L.
Coseismic and postseismic stress rotations due to great subduction zone earthquakes The three largest recent great subduction zone earthquakes (2011 M9.0 Tohoku, Japan; 2010 M8.8 Maule, Chile; and 2004 M9.2 Sumatra-Andaman) exhibit similar coseismic rotations of the principal stress axes. Prior to each mainshock, the maximum compressive stress axis was shallowly plunging, while immediately after the mainshock, both the maximum...Hardebeck, Jeanne L. Fissure 8 continues to feed lava into multiple flow lobes. One lobe is advancing through agricultural lands toward the northeast, as shown in this image taken from a helicopter overflight on June 1, 2018, at 6:21 AM. This animated GIF shows a pair of radar amplitude images that were acquired by the Italian Space Agency's Cosmo-SkyMed satellite system. The images illustrate changes to the... USGS Hawaiian Volcano Observatory Status of Kīlauea Volcano June 1, 2018 Jessica Ball, USGS Volcanologist Deployment of an Edgetech 512i subbottom profiler from the deck of Stockton University's R/V Petrel near Little Egg Inlet, NJ. The USGS Coastal and Marine Geology Program is working to characterize the sea floor and shallow substrate in nearshore waters, using high-resolution geophysical techniques, sediment sampling, and sea-floor photography and videography. This long, thin strand of volcanic glass is called Pele's hair. Named for Pele, the Hawaiian goddess of fire, Pele's hair is formed from lava fountains and rapidly moving lava flows. This strand of Pele’s hair was found on Kupono Street in Leilani Estates, Hawaii, during the Kīlauea volcano eruption. Helicopter overflight shows advancing lobes from fissure 8 (fissure 8 is not pictured but located to the right, out of view). Advance rates were less than 100 yards per hour for the three lobes of the flow, as measured during the overnight hours. The flow moved north of Highway 132 in the vicinity of Noni Farms and Halekamahina roads, from which the two easternmost lobes... After more than three years of monitoring the towering granite cliffs of Yosemite National Park, scientists have new insights into a potentially important mechanism that can trigger rockfalls in the park. Although many conditions can trigger rockfalls, some rockfalls are more likely to happen in the hottest part of the day, during the hottest part of the year. On March 28, USGS scientists will release a report and the first-ever maps showing potential ground-shaking hazards from both human-induced and natural earthquakes. In the past, USGS maps only identified natural earthquake hazards. California's hotter droughts are a preview of a warmer future world. El Niño is a phenomenon that occurs when unusually warm ocean water piles up along the equatorial west coast of South America. When this phenomenon develops, it affects weather patterns around the globe, including the winter weather along the west coast of North America. This unusual pattern of sea surface temperatures occurs in irregular cycles about three to seven years apart. WASHINGTON—The President’s fiscal year (FY) 2017 budget request for the U.S. Geological Survey reflects the USGS's vital role in addressing some of the most pressing challenges of the 21st Century by advancing scientific discovery and innovation. The U.S. Geological Survey is implementing new measures that will improve public access to USGS-funded science as detailed in its new public access plan. Globally there were 14,588 earthquakes of magnitude 4.0 or greater in 2015.
This worldwide number is on par with prior year averages of about 40 earthquakes per day of magnitude 4.0, or about 14,500 annually. The 2015 number may change slightly as the final results are completed by seismic analysts at the NEIC in Golden, Colorado. Magnetic storms can interfere with the operation of electric power grids and damage grid infrastructure. They can also disrupt directional drilling for oil and gas, radio communications, communication satellites and GPS systems.
<urn:uuid:31d97581-5706-4d97-b738-2cf2fc68b536>
3.015625
2,572
Content Listing
Science & Tech.
52.724892
95,600,656
A superconducting magnetic levitation device for the transport of light payloads

A device for the frictionless transport of light payloads, e.g., silicon wafers, using superconducting magnetic levitation has been demonstrated. The device consists of an array of rigidly connected “carrier” magnets levitating above a corresponding array of superconducting discs. Silicon wafers placed on the carrier have been linearly transported with velocities up to 50 cm/s. The configuration provides for excellent lateral stability. The height and lateral friction effects (caused by flux pinning) were measured as a function of payload mass. Future applications may include the frictionless transport of silicon wafers in vacuum environments.

Key words: transport, magnetic levitation, superconductor
<urn:uuid:f2e48bd2-f40d-45b4-95b3-2d2010239bf0>
2.53125
599
Academic Writing
Science & Tech.
92.363629
95,600,664
Coexistence and displacement in consumer-resource systems with local and shared resources

Competition for local and shared resources is widespread. For example, colonial waterbirds consume local prey in the immediate vicinity of their colony, as well as shared prey across multiple colonies. However, there is little understanding of the conditions facilitating coexistence vs. displacement in such systems. Extending traditional models based on type I and type II functional responses, we simulate consumer-resource systems in which resources are “substitutable,” “essential,” or “complementary.” It is shown that when resources are complementary or essential, a small increase in carrying capacity or decrease in handling time of a local resource may displace a spatially separate consumer species, even when the effect on shared resources is small. This work underscores the importance of determining both the nature of resource competition (substitutable, essential, or complementary) and appropriate scale-dependencies when studying metacommunities. We discuss model applicability to complex systems, e.g., urban wildlife that consume natural and anthropogenic resources and may thereby displace rural competitors by depleting shared prey.

Keywords: Holling’s disc, Functional response, Consumer-resource model, Coexistence, Displacement, Zero net growth isocline

The authors thank Rosalyn Rael, Paul Orlando, and two anonymous reviewers for providing very helpful comments and insights that improved the manuscript.
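The abstract above builds on Holling type I and type II functional responses. As a rough illustration of the machinery involved (not the authors' actual model; all parameter values are invented for the sketch), here is a one-consumer, one-resource system with a type II response, f(R) = aR / (1 + ahR), integrated with simple Euler steps:

```typescript
// Minimal consumer-resource sketch with a Holling type II response.
// Parameters are illustrative assumptions, not taken from the paper.
const a = 0.5;  // attack rate
const h = 0.2;  // handling time
const r = 1.0;  // resource intrinsic growth rate
const K = 10;   // resource carrying capacity
const e = 0.3;  // conversion efficiency
const m = 0.1;  // consumer mortality

let R = 5;      // resource density
let C = 1;      // consumer density
const dt = 0.01;

// Euler integration of dR/dt = rR(1 - R/K) - f(R)C and dC/dt = e f(R)C - mC.
for (let step = 0; step < 10000; step++) {
  const intake = (a * R) / (1 + a * h * R); // type II functional response
  R += (r * R * (1 - R / K) - intake * C) * dt;
  C += (e * intake * C - m * C) * dt;
}
console.log(`R = ${R.toFixed(3)}, C = ${C.toFixed(3)} after t = 100`);
```

Lowering the handling time h raises the consumer's intake at a given resource density, which is the kind of perturbation the abstract identifies as capable of displacing a spatially separate competitor.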
<urn:uuid:c4bd584e-1e9c-418c-8ac9-cc14fb8d2311>
2.5625
871
Academic Writing
Science & Tech.
37.251171
95,600,665
The angle of lines

Calculate the angle of the two lines y = x - 21 and y = -2x + 14. A worked solution follows the list of similar examples below.

To solve this example, you need the following knowledge from mathematics: the slope of a line and the arctangent function.

Next similar examples:

A straight line p is given by the equation ?. Calculate the size of the angle in degrees between line p and the y-axis.

What is the slope of a line with an inclination of -221°?

What is the slope of the perpendicular bisector of line segment AB if A[-4,-5] and B[1,-1]?

- Right angled triangle 2: LMN is a right-angled triangle with vertices at L(1,3), M(3,5) and N(6,n). Given that angle LMN is 90°, find n.

- Slope RR: A line has a rise of 2 and a run of 11. What is the slope?

At how many points will 14 different lines intersect, if no two of them are parallel?

In a circle with a radius of 7.5 cm, two parallel chords are constructed whose lengths are 9 cm and 12 cm. Calculate the distance between these chords (if there are two possible solutions, write both).

Calculate the slope of a line that passes through the points (-84,41) and (-76,-32).

Tatra's tachometer shows the initial state of 886123 km this morning. Tatra travels today at an average speed of 44 km/h. Determine the function that describes Tatra's tachometer as a function of time. What is the state of the tachometer after 4 hours?

- XY triangle: Determine the area of the triangle given by the line 7x+8y-69=0 and the coordinate axes x and y.

Determine the slope of the line perpendicular to the line p: y = -x + 4.

A straight line passes through points A[-3; 22] and B[33; -2]. Determine the total number of points on the line for which both coordinates are positive integers.

If the segment of the line y = -3x + 4 that lies in quadrant I is rotated about the y-axis, a cone is formed. What is the volume of the cone?

Line p passes through A[-10, 6] and has direction vector v=(3, 2). Is point B[7, 30] on line p?

- Lie/do not lie: The function is given by the rule f(x) = 8x + 16. Determine whether point D[-1; 8] lies on this function. Solve graphically or numerically and give reasons for your answer.

- V - slope: The slope of the line whose equation is -3x - 9 = 0 is

What is the slope of the line defined by the equation -2x + 3y = -1?
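Here is the worked solution for the problem stated above. The angle φ between two lines with slopes m₁ and m₂ follows from the standard tangent-difference identity; here m₁ = 1 and m₂ = -2:

```latex
\tan\varphi
  = \left|\frac{m_1 - m_2}{1 + m_1 m_2}\right|
  = \left|\frac{1 - (-2)}{1 + (1)(-2)}\right|
  = \left|\frac{3}{-1}\right| = 3
\qquad\Rightarrow\qquad
\varphi = \arctan 3 \approx 71.57^\circ \approx 71^\circ 34'
```

Note that the denominator 1 + m₁m₂ = -1 is nonzero here; if it were zero, the lines would be perpendicular and the angle would simply be 90°.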
<urn:uuid:79a7a0d9-bc75-4a07-9f0c-cf59bbdefe91>
3.59375
631
Content Listing
Science & Tech.
81.629673
95,600,677
|Scientific Name:||Dasycercus cristicauda (Krefft, 1867)| Chaetocercus cristicauda Krefft, 1867 Dasycercus hillieri (Thomas, 1905) The taxonomy of Dasycercus has been confusing, but is now resolved (Woolley 2005). Historically, three species, D. cristicauda, D. hillieri and D. blythi, have been described; were then synonymized (under D. cristicauda); and re-split (to Mulgara D. cristicauda and Ampurta D. hillieri). However, Woolley (2005) demonstrated that the correct names for the two species were Crest-tailed Mulgara D. cristicauda and Brush-tailed Mulgara D. blythi, and that there was no straightforward linkage between the previously applied ascription of names and the current classification (in many to most cases, what was referred to previously as D. cristicauda is now considered to be D. blythi, and what was previously referred to as D. hillieri is now D. cristicauda); with the issue further clouded by co-occurrence across some regions. Many observations or studies in which voucher specimens were not collected are now ambiguous; however, Woolley (2006) provided interpretation of the currently-accepted nomenclature to names used in a series of previous studies. |Red List Category & Criteria:||Near Threatened ver 3.1| |Assessor(s):||Woinarski, J. & Burbidge, A.A.| |Reviewer(s):||Hawkins, C. & Johnson, C.N.| |Contributor(s):||Bluff, L., Brandle, R., Dickman, C., Masters, P., Pedler, R., Southgate, R. & Woolley, P.| Ascription of Red List status to the Crest-tailed Mulgara is difficult, in part because previous taxonomic confusion has made assessment of historic changes in population size and distribution difficult to interpret. Furthermore, there are no reliable estimates of population size or trends across the species geographic range, and the detection and interpretation of long-term trends may be complicated by substantial short-term fluctuations in numbers of individuals related to rainfall conditions. |Previously published Red List assessments:| The Crest-tailed Mulgara has (or had) a wide distribution across central and inland Australia. However, precise circumscription of distribution is hampered by long-standing nomenclatural confusion (see Taxonomic notes), which renders many previous non-vouchered records ambiguous. As such, distribution maps are likely to be an imperfect representation, particularly of the former distribution. A current study by P. Woolley (pers. comm. 2014) seeks to review and re-attribute all museum records, and will substantial increase the reliability of such mapping. Native:Australia (New South Wales - Possibly Extinct, Northern Territory, Queensland, South Australia, Western Australia - Possibly Extinct) |Range Map:||Click here to open the map viewer and explore range.| There has been no robust assessment of population size, nor that of individual subpopulations, and the abundance probably varies substantially in association with rainfall conditions. Woolley (2008) considered that it has ‘a presumed large population’. Masters (2008) considered it ‘sparse’. |Current Population Trend:||Stable| |Habitat and Ecology:| The Crest-tailed Mulgara is a mostly nocturnal marsupial, with a diet comprising a broad range of invertebrates and small vertebrates (Masters 2008). During the day it shelters in burrow systems, typically located at the base of grass clumps or bushes (Woolley 1990). 
It mostly occurs in sand dunes, with sparse vegetation (including the tall grass Zygochloa paradoxa), and in herblands and sparse grasslands bordering salt lakes (Masters 2008; Pavey et al. 2011). In an area of sympatry, the Brush-tailed Mulgara occupied sand plain and gibber plain, and the Crest-tailed Mulgara occupied sand ridges with tussock grasses (Woolley 2005; Pavey et al. 2011). |Continuing decline in area, extent and/or quality of habitat:||Yes| |Generation Length (years):||2| |Movement patterns:||Not a Migrant| |Major Threat(s):||Threats are poorly understood but include predation by and competition with feral cats and Red Foxes, habitat degradation due to livestock and feral herbivores, and, possibly, inappropriate fire regimes.| The Crest-tailed Mulgara is present in some conservation reserves, where it is protected from some threats. |Citation:||Woinarski, J. & Burbidge, A.A. 2016. Dasycercus cristicauda. The IUCN Red List of Threatened Species 2016: e.T6266A21945813.Downloaded on 17 July 2018.| |Feedback:||If you see any errors or have any questions or suggestions on what is shown on this page, please provide us with feedback so that we can correct or extend the information provided|
<urn:uuid:a5a928ab-9d04-45a7-859c-f10ded99e10f>
3.140625
1,142
Knowledge Article
Science & Tech.
40.578522
95,600,684
Chemistry - Atomic absorption spectroscopy - AAS

Atomic absorption spectroscopy is a form of quantitative analysis which makes use of the fact that specific quantities of energy are required to promote electrons to higher energy levels. AAS is used throughout the world to detect the presence of up to 70 metals. The main components of an atomic absorption spectrometer are shown below:

In atomic absorption spectroscopy, the light source must contain wavelengths that the element being tested for will definitely absorb. Hence if testing the nickel content of an iron ore sample, a nickel cathode lamp would be used. This is consistent with the fact that the wavelengths of light emitted by excited Ni atoms will be exactly the same as those absorbed by Ni atoms in their ground states.

The atomiser / flame combination is a key part of the instrument. A sample of the substance being analysed, say iron ore for nickel content, is dissolved in acid and sprayed into the flame, where the atoms of nickel absorb some of the light also passing into the flame. When the light beam is shone through the flame, the nickel atoms absorb particular wavelengths, and consequently the intensity of the light of those wavelengths hitting the detector is less than the intensity with which they left the lamp. The monochromator acts as a wavelength (or colour) selector and controls the wavelengths of light that reach the detector.

The amount of light absorbed is found by comparing the intensity of the incident radiation, i.e. the light hitting the atomised sample, with the intensity of the transmitted radiation, i.e. the light of particular wavelength that passes through the atomised sample. The amount of light absorbed is known as the absorbance. As in colorimetry, the absorbance is directly related to the concentration of the absorbing atoms. Knowing the absorbance for a particular sample of iron ore will not, however, provide the nickel content unless the instrument has been calibrated to determine the relationship between absorbance and amount of nickel present.
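As a small illustration of the calibration step described above, the sketch below fits a one-parameter (through-the-origin) calibration line to made-up nickel standards and uses it to convert a sample's absorbance into an estimated concentration. All numbers are invented for the example:

```typescript
// Hypothetical calibration standards: known Ni concentrations (mg/L)
// and the absorbances measured for them on the instrument.
const standards = [
  { concentration: 0, absorbance: 0.0 },
  { concentration: 2, absorbance: 0.11 },
  { concentration: 4, absorbance: 0.22 },
  { concentration: 6, absorbance: 0.33 },
];

// Least-squares slope through the origin, which suits a calibration
// where absorbance is directly proportional to concentration.
const slope =
  standards.reduce((s, p) => s + p.concentration * p.absorbance, 0) /
  standards.reduce((s, p) => s + p.concentration * p.concentration, 0);

// Convert an unknown sample's absorbance into a concentration estimate.
const sampleAbsorbance = 0.17;
const estimatedConcentration = sampleAbsorbance / slope;
console.log(`≈ ${estimatedConcentration.toFixed(2)} mg/L Ni`); // ≈ 3.09 mg/L
```

In practice the linear relationship only holds over a limited concentration range, which is why standards bracketing the expected sample concentration are used.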
<urn:uuid:3817f8f5-e269-4735-9701-bb1c9abbbaf9>
3.578125
521
Truncated
Science & Tech.
32.854726
95,600,700
The peculiar cosmic object known as 47 Tuc W (denoted by arrow in the X-ray image) is a double star system consisting of a normal star and a neutron star that makes a complete rotation every 2.35 milliseconds. Blink your eye and a superdense star the size of Manhattan Island will have rotated 25 or more times! Image credit: X-ray: NASA/CXC/Northwestern U./C.Heinke et al.; Optical: ESO/Danish 1.54-m/W.Keel et al. New Chandra observations give the best information yet on why such neutron stars, called millisecond pulsars, are rotating so fast. The key, as in real estate, is location, location, location - in this case the crowded confines of the globular star cluster 47 Tucanae, where stars are less than a tenth of a light year apart. Almost two dozen millisecond pulsars are located there. This large sample is a bonanza for astronomers seeking to test theories for the origin of millisecond pulsars, and increases the chances that they will find a critical transitional object such 47 Tuc W. 47 Tuc W stands out from the crowd because it produces more high-energy X-rays than the others. This anomaly points to a different origin of the X-rays, namely a shock wave due to a collision between matter flowing from a companion star and particles racing away from the pulsar at near the speed of light. Regular variations in the optical and X-ray light corresponding to the 3.2-hour orbital period of the stars support this interpretation. A team of astronomers from the Harvard-Smithsonian Center for Astrophysics in Cambridge, MA pointed out that the X-ray signature and variability of the light from 47 Tuc W are nearly identical to those observed from an X-ray binary source known as J1808. They suggest that these similarities between a known millisecond pulsar and a known X-ray binary provide the long-sought link between these types of objects. In theory, the first step toward producing a millisecond pulsar is the formation of a neutron star when a massive star goes supernova. If the neutron star is in a globular cluster, it will perform an erratic dance around the center of the cluster, picking up a companion star which it may later swap for another. As on a crowded dance floor, the congestion in a globular cluster can cause the neutron star to move closer to its companion, or to swap partners to form an even tighter pair. When the pairing becomes close enough, the neutron star begins to pull matter away from its partner. As matter falls onto the neutron star, it gives off X-rays. An X-ray binary system has been formed, and the neutron star has made the crucial second step toward becoming a millisecond pulsar. The matter falling onto the neutron star slowly spins it up, in the same way that a child's carousel can be spun up by pushing it every time it comes around. After 10 to 100 million years of pushing, the neutron star is rotating once every few milliseconds. Finally, due to the rapid rotation of the neutron star, or the evolution of the companion, the infall of matter stops, the X-ray emission declines, and the neutron star emerges as a radio-emitting millisecond pulsar. Illustration of Shock Wave Around Millisecond Pulsar In binary star systems such as 47 Tuc W, which contains a normal star and an extremely rapidly rotating neutron star called a millisecond pulsar (MSP), matter is pulled from the normal star by the gravitational tug of the more massive neutron star. In contrast to X-ray binary systems, this matter (yellow streamer in illustration) does not form a hot disk around the neutron star. 
Instead, it is pushed back by the pressure of a wind of fast-moving particles (blue) produced by the pulsar. The resulting shock wave (white) is a source of high-energy X-rays. (Illustration: NASA/CfA/S.Bogdanov) It is likely that the companion star in 47 Tuc W - a normal star with a mass greater than about an eighth that of the Sun - is a new partner, rather than the companion that spun up the pulsar. The new partner, acquired fairly recently in an exchange that ejected the previous companion, is trying to dump matter onto the already spun-up pulsar, creating the observed shock wave. In contrast, the X-ray binary J1808 is not in a globular cluster, and is very likely making do with its original companion, which has been depleted to brown dwarf size with a mass less than 5% that of the Sun. Most astronomers accept the binary spin-up scenario for creating millisecond pulsars because they have observed neutron stars speeding up in X-ray binary systems, and almost all radio millisecond pulsars are observed to be in binary systems. Until now, definitive proof has been lacking, because very little is known about transitional objects between the second and final steps. That is why 47 Tuc W is hot. It links a millisecond pulsar with many of the properties of an X-ray binary to J1808, an X-ray binary that behaves in many ways like a millisecond pulsar, thus providing a strong chain of evidence to support the theory. Source: Chandra X-ray Observatory
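As a quick arithmetic check of the caption's figure (a 2.35-millisecond rotation period versus the blink of an eye), assuming a blink lasts roughly 0.1 s:

```latex
f = \frac{1}{T} = \frac{1}{2.35\times 10^{-3}\,\mathrm{s}} \approx 426\ \mathrm{rotations/s},
\qquad
N_{\mathrm{blink}} \approx 426\,\mathrm{s^{-1}} \times 0.1\,\mathrm{s} \approx 43 \;(\geq 25).
```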
<urn:uuid:38deb07d-f025-4998-b982-a267a9a7ce30>
3.34375
1,134
Knowledge Article
Science & Tech.
51.034603
95,600,705
Jian Lin, a WHOI senior scientist in geology and geophysics, said that even though the quake was “large but not huge,” there were three factors that made it particularly devastating: First, it was centered just 10 miles southwest of the capital city, Port au Prince; second, the quake was shallow—only about 10-15 kilometers below the land’s surface; third, and more importantly, many homes and buildings in the economically poor country were not built to withstand such a force and collapsed or crumbled. All of these circumstances made the Jan. 12 earthquake a “worst-case scenario,” Lin said. Preliminary estimates of the death toll ranged from thousands to hundreds of thousands. “It should be a wake-up call for the entire Caribbean,” Lin said.

The quake struck on a 50-60-km stretch of the more than 500-km-long Enriquillo-Plantain Garden Fault, which runs generally east-west through Haiti, to the Dominican Republic to the east and Jamaica to the west. It is a “strike-slip” fault, according to the U.S. Geological Survey, meaning the plates on either side of the fault line were sliding in opposite directions. In this case, the Caribbean Plate south of the fault line was sliding east and the smaller Gonâve platelet north of the fault was sliding west.

But most of the time, the earth’s plates do not slide smoothly past one another. They stick in one spot for perhaps years or hundreds of years, until enough pressure builds along the fault and the landmasses suddenly jerk forward to relieve the pressure, releasing massive amounts of energy throughout the surrounding area. A similar, more familiar, scenario exists along California’s San Andreas Fault.

Such seismic areas “accumulate stresses all the time,” says Lin, who has extensively studied a nearby major fault, the Septentrional Fault, which runs east-west along the northern side of Hispaniola, the island that Haiti and the Dominican Republic share. In 1946, an 8.1 magnitude quake, more than 30 times more powerful than this week’s quake, struck near the northeastern corner of Hispaniola.

Compounding the problem, he says, is that in addition to the Caribbean and North American plates, a wide zone between the two is made up of a patchwork of smaller “block” plates, or “platelets”—such as the Gonâve platelet—that make it difficult to assess the forces in the region and how they interact with one another. “If you live in adjacent areas, such as the Dominican Republic, Jamaica and Puerto Rico, you are surrounded by faults.”

Residents of such areas, Lin says, should focus on ways to save their lives and the lives of their families in the event of an earthquake. “The answer lies in basic earthquake education,” he says. Those who can afford it should strengthen the construction and stability of their houses and buildings, he says. But in a place like Haiti, where even the Presidential Palace suffered severe damage, there may be more realistic solutions.

Some residents of earthquake zones know that after the quake’s faster, but smaller, primary, or “p,” wave hits, there is usually a few-second-to-one-minute wait until a larger, more powerful secondary, or “s,” wave strikes, Lin says. P waves come first but have smaller amplitudes and are less destructive; S waves, though slower, are larger in amplitude and, hence, more destructive. “At least make sure you build a strong table somewhere in your house and school,” said Lin.
When a quake comes, “duck quickly under that table.”

Lin said the Haiti quake did not trigger an extreme ocean wave such as a tsunami, partly because it was large but not huge and was centered under land rather than the sea.

The geologist says that aftershocks, some of them significant, can be expected in the coming days, weeks, months, years, “even tens of years.” But now that the stress has been relieved along that 50-60-km portion of the Enriquillo-Plantain Garden Fault, Lin says this particular fault patch should not experience another quake of equal or greater magnitude for perhaps 100 years. However, the other nine-tenths of that fault and the myriad networks of faults throughout the Caribbean are, definitely, “active.”

“A lot of people,” Lin says, “forget [earthquakes] quickly and do not take the words of geologists seriously. But if your house is close to an active fault, it is best that you do not forget where you live.”

The Woods Hole Oceanographic Institution is a private, independent organization in Falmouth, Mass., dedicated to marine research, engineering, and higher education. Established in 1930 on a recommendation from the National Academy of Sciences, its primary mission is to understand the oceans and their interaction with the Earth as a whole, and to communicate a basic understanding of the oceans’ role in the changing global environment.

WHOI Media Relations | EurekAlert!
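The few seconds between the “p” and “s” waves described above follow directly from their different propagation speeds. A minimal sketch, assuming typical crustal velocities of about 6 km/s for P waves and 3.5 km/s for S waves (representative textbook values, not figures from this article):

```python
# Warning time between P-wave and S-wave arrivals at a given distance.
VP_KM_S = 6.0   # assumed typical crustal P-wave speed
VS_KM_S = 3.5   # assumed typical crustal S-wave speed

def ps_lag_seconds(epicentral_distance_km: float) -> float:
    """Seconds between the first P arrival and the first S arrival."""
    return epicentral_distance_km * (1.0 / VS_KM_S - 1.0 / VP_KM_S)

for d_km in (10, 50, 100):
    print(f"{d_km:>3} km from the epicenter -> ~{ps_lag_seconds(d_km):.1f} s of warning")
# ~1 s at 10 km, ~6 s at 50 km: enough time to duck under a strong table, not much more.
```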
"Put an excitation into the system and observe how it evolves." According to physicist Professor Tobias Brixner, this is the credo of optical spectroscopy.... Ultra-short, high-intensity X-ray flashes open the door to the foundations of chemical reactions. Free-electron lasers generate these kinds of pulses, but there is a catch: the pulses vary in duration and energy. An international research team has now presented a solution: Using a ring of 16 detectors and a circularly polarized laser beam, they can determine both factors with attosecond accuracy. Free-electron lasers (FELs) generate extremely short and intense X-ray flashes. Researchers can use these flashes to resolve structures with diameters on the... 13.07.2018 | Event News 12.07.2018 | Event News 03.07.2018 | Event News 19.07.2018 | Earth Sciences 19.07.2018 | Power and Electrical Engineering 19.07.2018 | Materials Sciences
<urn:uuid:ac7394eb-a19c-4b40-9b35-032c6bc47d5a>
3.828125
1,740
Content Listing
Science & Tech.
45.783313
95,600,735
Built: about 1 month ago
Size: 1.77 MB
Home page: http://aspell.net/
Summary: An Open Source interactive spelling checker program

GNU Aspell is a spell checker designed to eventually replace Ispell. It can either be used as a library or as an independent spell checker. Its main feature is that it does a much better job of coming up with possible suggestions than just about any other spell checker out there for the English language, including Ispell and Microsoft Word. It also has many other technical enhancements over Ispell, such as using shared memory for dictionaries and intelligently handling personal dictionaries when more than one Aspell process is open at once.

Changelog:
- Fix build with gcc7 (patch from Fedora)
- NMU: added BR: texinfo
- Updated to 0.60.6.1
- Make interpackage dependencies strict
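Because Aspell can run as an independent checker, one convenient way to script it is through its Ispell-compatible pipe mode (`aspell -a`). A minimal sketch in Python; it assumes the `aspell` binary and an English dictionary are installed on the system:

```python
import subprocess

def suggest(word: str) -> list[str]:
    """Ask `aspell -a` (Ispell-compatible pipe mode) about a single word."""
    result = subprocess.run(
        ["aspell", "-a"], input=word + "\n",
        capture_output=True, text=True, check=True,
    )
    for line in result.stdout.splitlines()[1:]:  # first line is a version banner
        if line.startswith("*"):                 # word is spelled correctly
            return []
        if line.startswith("&"):                 # "& <word> <n> <offset>: s1, s2, ..."
            return line.split(":", 1)[1].strip().split(", ")
    return []

print(suggest("accomodate"))  # typically suggests 'accommodate' first
```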
<urn:uuid:36319f0b-471e-4c0b-9c36-28d4ce51f14b>
2.609375
198
Product Page
Software Dev.
47.502693
95,600,741
For about two decades, astronomers have known about an object called VLA J213002.08+120904 (VLA J2130+12 for short). Although it is close to the line of sight to the globular cluster M15, most astronomers had thought that this source of bright radio waves was probably a distant galaxy.

By combining data from Chandra and several other telescopes, astronomers have identified the true nature of this unusual source in the Milky Way galaxy. The discovery implies that the Galaxy could contain a much larger number of black holes than have previously been accounted for.

The images on the left show X-rays from Chandra and an optical image from Hubble of a large area around the source VLA J2130+12, including M15. The images on the right show the source VLA J2130+12, which is bright in radio waves but can be giving off only a very small amount of X-rays. These pieces of information indicate the source contains a black hole with a few times the mass of the Sun. (Credit: X-ray: NASA/CXC/Univ. of Alberta/B.Tetarenko et al; Optical: NASA/STScI; Radio: NSF/AUI/NRAO/Curtin Univ./J. Miller-Jones)

Thanks to recent distance measurements with an international network of radio telescopes, including the EVN (European Very Long Baseline Interferometry Network) telescopes, the NSF's Green Bank Telescope and Arecibo Observatory, astronomers realized that VLA J2130+12 is at a distance of 7,200 light years, showing that it is well within our own Milky Way galaxy and about five times closer than M15. A deep image from Chandra reveals that it can be giving off only a very small amount of X-rays, while recent VLA data indicate the source remains bright in radio waves.

This new study indicates that VLA J2130+12 is a black hole a few times the mass of our Sun that is very slowly pulling in material from a companion star. At this paltry feeding rate, VLA J2130+12 was not previously flagged as a black hole, since it lacks some of the telltale signs that black holes in binaries typically display.

"Usually, we find black holes when they are pulling in lots of material. Before falling into the black hole this material gets very hot and emits brightly in X-rays," said Bailey Tetarenko of the University of Alberta, Canada, who led the study. "This one is so quiet that it's practically a stealth black hole."

This is the first time a black hole binary system outside of a globular cluster has been initially discovered while it is in such a quiet state. Hubble observations identified VLA J2130+12 with a star having only about one-tenth to one-fifth the mass of the Sun. The observed radio brightness and the limit on the X-ray brightness from Chandra allowed the researchers to rule out other possible interpretations, such as an ultra-cool dwarf star, a neutron star, or a white dwarf pulling material away from a companion star.

Because this study only covered a very small patch of sky, the implication is that there should be many of these quiet black holes around the Milky Way. The estimates are that tens of thousands to millions of these black holes could exist within our Galaxy, about three to thousands of times as many as previous studies have suggested.

"Unless we were incredibly lucky to find one source like this in a small patch of the sky, there must be many more of these black hole binaries in our Galaxy than we used to think," said co-author Arash Bahramian, also of the University of Alberta.

There are other implications of finding that VLA J2130+12 is relatively near to us.
"Some of these undiscovered black holes could be closer to the Earth than we previously thought," said Robin Arnason, a co-author from Western University, Canada "However there's no need to worry as even these black holes would still be many light years away from Earth." Sensitive radio and X-ray surveys covering large regions of the sky will need to be performed to uncover more of this missing population. If, like many others, this black hole was formed in the plane of the Milky Way's disk, it would have needed a large kick at birth to launch it to its current position about 3,000 light years above the plane of the Galaxy. These results appear in a paper in The Astrophysical Journal. NASA's Marshall Space Flight Center in Huntsville, Alabama, manages the Chandra program for NASA's Science Mission Directorate in Washington. The Smithsonian Astrophysical Observatory in Cambridge, Massachusetts, controls Chandra's science and flight operations. Dr James Miller-Jones ICRAR-Curtin University, research co-author ICRAR media contact Chandra X-ray Center, Cambridge, Mass. Kirsten Gottschalk | EurekAlert! Computer model predicts how fracturing metallic glass releases energy at the atomic level 20.07.2018 | American Institute of Physics What happens when we heat the atomic lattice of a magnet all of a sudden? 18.07.2018 | Forschungsverbund Berlin A new manufacturing technique uses a process similar to newspaper printing to form smoother and more flexible metals for making ultrafast electronic devices. The low-cost process, developed by Purdue University researchers, combines tools already used in industry for manufacturing metals on a large scale, but uses... For the first time ever, scientists have determined the cosmic origin of highest-energy neutrinos. A research group led by IceCube scientist Elisa Resconi, spokesperson of the Collaborative Research Center SFB1258 at the Technical University of Munich (TUM), provides an important piece of evidence that the particles detected by the IceCube neutrino telescope at the South Pole originate from a galaxy four billion light-years away from Earth. To rule out other origins with certainty, the team led by neutrino physicist Elisa Resconi from the Technical University of Munich and multi-wavelength... For the first time a team of researchers have discovered two different phases of magnetic skyrmions in a single material. Physicists of the Technical Universities of Munich and Dresden and the University of Cologne can now better study and understand the properties of these magnetic structures, which are important for both basic research and applications. Whirlpools are an everyday experience in a bath tub: When the water is drained a circular vortex is formed. Typically, such whirls are rather stable. Similar... Physicists working with Roland Wester at the University of Innsbruck have investigated if and how chemical reactions can be influenced by targeted vibrational excitation of the reactants. They were able to demonstrate that excitation with a laser beam does not affect the efficiency of a chemical exchange reaction and that the excited molecular group acts only as a spectator in the reaction. A frequently used reaction in organic chemistry is nucleophilic substitution. It plays, for example, an important role in in the synthesis of new chemical... Optical spectroscopy allows investigating the energy structure and dynamic properties of complex quantum systems. 
<urn:uuid:1e28b303-1de7-4fb4-b533-aaa960d10793>
3.4375
1,614
Content Listing
Science & Tech.
46.600242
95,600,760
|Wind farms are no-go territory for most gannets|

To what extent are birds threatened by the proliferation of offshore wind farms? The BTO has this month released preliminary findings of a survey it carried out in partnership with the University of the Highlands and Islands' Environmental Research Institute.

The review found that more than 99 per cent of seabirds were likely to alter their flight paths in order to avoid collision. However, BTO research ecologist Aonghais Cook, who led the study, said: “It is important not to get lulled into a false sense of security by these figures. Whilst most may avoid turbines, collision may still be a significant risk at sites with large numbers of birds. Furthermore, there are still a number of key gaps in knowledge for some vulnerable species.”

The review indicated species-specific differences in the way in which seabirds respond to wind farms. A significant proportion of gannets will avoid even entering a wind farm, but gulls are much less cautious and may even be attracted to the sites as a result of the foraging opportunities they offer. Despite this, once inside the wind farms even gulls seem to show a strong avoidance of the turbine blades.

The work was carried out on behalf of Marine Scotland Science.

Photo credit: Mmo iwdg via Wikipedia Commons
<urn:uuid:09317ec7-6f39-4511-a5e7-85d9fea9156a>
2.625
281
News Article
Science & Tech.
38.203889
95,600,769
The (light) compensation point is the light intensity on the light curve where the rate of photosynthesis exactly matches the rate of cellular respiration. At this point, the uptake of CO2 through photosynthetic pathways is equal to the respiratory release of carbon dioxide, and the uptake of O2 by respiration is equal to the photosynthetic release of oxygen. In assimilation terms, at the compensation point the net carbon dioxide assimilation is zero.

Leaves release CO2 by photorespiration and cellular respiration, but CO2 is also converted into carbohydrate by photosynthesis. Assimilation is therefore the difference in the rate of these processes. At a normal partial pressure of CO2 (0.343 hPa in 1980), there is an irradiation at which the net assimilation of CO2 is zero. For instance, in the early morning and late evenings, the compensation point may be reached as photosynthetic activity decreases and respiration increases. Therefore, the partial pressure of CO2 at the compensation point, also known as gamma, is a function of irradiation.

The irradiation dependence of the compensation point is explained by the RuBP (ribulose-1,5-bisphosphate) concentration. When the acceptor RuBP is at a saturating concentration, gamma is independent of irradiation. However, at low irradiation only a small fraction of the sites on RuBP carboxylase-oxygenase (RuBisCO) hold the electron acceptor RuBP. This decreases photosynthetic activity and therefore affects gamma. The intracellular concentration of CO2 also affects the rates of photosynthesis and photorespiration: higher CO2 concentrations favor photosynthesis, whereas low CO2 concentrations favor photorespiration.

The compensation point is typically reached during early mornings and late evenings. Respiration is relatively constant, whereas photosynthesis depends on the intensity of sunlight. When the rate of photosynthesis equals the combined rate of respiration and photorespiration, the compensation point occurs. At the compensation point, products of photosynthesis are used up by respiration at the same rate they are produced, so the organism is neither consuming nor building biomass, and the net gaseous exchange is zero.

For aquatic plants, where the level of light at any given depth is roughly constant for most of the day, the compensation point is the depth at which light penetrating the water creates the same balanced effect.

The marine environment

Respiration occurs in both plants and animals throughout the water column, resulting in the destruction, or usage, of organic matter, but photosynthesis can only take place via photosynthetic algae in the presence of light, nutrients and CO2. In well-mixed water columns plankton are evenly distributed, but net production only occurs above the compensation depth. Below the compensation depth there is a net loss of organic matter. The total population of photosynthetic organisms cannot increase if the loss exceeds the net production.

The compensation depth between photosynthesis and respiration of phytoplankton in the ocean depends on several factors: the illumination at the surface, the transparency of the water, the biological character of the plankton present, and the temperature. The compensation point was found nearer to the surface as you move closer to the coast.
It is also lower in the winter seasons in the Baltic Sea, according to a study that examined the compensation point of multiple photosynthetic species. The blue portion of the visible spectrum, between 455 and 495 nanometers, dominates light at the compensation depth.

A concern regarding the concept of the compensation point is that it assumes phytoplankton remain at a fixed depth throughout a 24-hour period (the time frame in which compensation depth is measured), but phytoplankton experience displacement as isopycnals move them tens of meters.

- ESRL / Mauna Loa CO2 annual mean data.
- Farquhar, G. D.; et al. (1982). "Modelling of Photosynthetic Response to Environmental Conditions". In Lange, O.L.; et al. Physiological Plant Ecology II. Water Relations and Carbon Assimilation. New York: Springer-Verlag. pp. 556–558.
- Sverdrup, H.U. (1953). "On conditions of the vernal blooming of phytoplankton". Journal du Conseil. 18 (3): 287–295. doi:10.1093/icesjms/18.3.287.
- Gran, H.H. & Braarud, T. (1935). "A quantitative study of the phytoplankton in the Bay of Fundy and the Gulf of Maine (including observations on hydrography, chemistry and turbidity)". Journal of the Biological Board of Canada. 1 (5): 279–467. doi:10.1139/f35-012.
- King, R.J. & Schramm, W. (1976). "Photosynthetic rates of benthic marine algae in relation to light intensity and seasonal variations". Marine Biology. 37 (3): 215–222. doi:10.1007/bf00387606.
- Laws, E.A.; Letelier, R.M. & Karl, D.M. (2014). "Estimating the compensation irradiance in the ocean: The importance of accounting for non-photosynthetic uptake of inorganic carbon". Deep-Sea Research Part I: Oceanographic Research Papers. 93: 35–40. doi:10.1016/j.dsr.2014.07.011.
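To make the definition above concrete, here is a minimal numerical sketch. It assumes a simple saturating (rectangular-hyperbola) light-response curve for gross photosynthesis and a constant respiration rate; the parameter values are illustrative, not measurements from the studies cited above.

```python
# Net assimilation A(I) = gross photosynthesis - respiration.
# Illustrative parameters (umol CO2 m^-2 s^-1 for rates, umol photons m^-2 s^-1 for light):
P_MAX = 10.0   # light-saturated gross photosynthesis
K = 100.0      # half-saturation irradiance
R = 1.0        # respiration, assumed constant

def net_assimilation(irradiance: float) -> float:
    return P_MAX * irradiance / (K + irradiance) - R

# Compensation point: the irradiance where A(I) = 0, i.e.
# P_MAX * I / (K + I) = R  =>  I_comp = R * K / (P_MAX - R)
i_comp = R * K / (P_MAX - R)
print(f"compensation irradiance ~ {i_comp:.1f}")                   # ~11.1 in these units
print(f"net assimilation there: {net_assimilation(i_comp):.2e}")   # ~0 by construction
```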
<urn:uuid:879e9417-f2e5-452d-a109-10d3d40b198c>
3.484375
1,197
Knowledge Article
Science & Tech.
43.438878
95,600,779
Coral reefs provide billions of dollars a year in resources for humans, with an overwhelming amount in the area of food from various fish populations. Developing countries owe twenty five percent of their fish catch solely to coral reef fish (Hoegh-Culdberg 2005). However, a majority of fishing techniques used to catch large quantities of fish employ the use of cyanide. Both unregulated international fish trade and countries that lack laws banning the use of cyanide fuel cyanide use in fishing. Cyanide stuns fish and makes them easier to catch, but in the process kills non-target species and harms coral (Best 2005).

The use of cyanide creates issues that not only “snowball” into more problems for marine ecosystems, but also for the fishermen who use it. The cyanide allows fishermen to catch fish in areas already under-populated, but damages the food source of the target fish. Also, over fishing creates a “funnel” effect on the fish population by making it less and less likely for fish to encounter mates and replenish populations (ETE 2004). Since the 1960’s, more than one million kilograms of cyanide has been spread onto Philippine reefs alone (Best 2005).

Cyanide use is not the only problem that results from over fishing. A popular fish in coral reefs is the grouper, which is very popular to eat. The grouper is a natural predator of the damselfish (ETE 2004). Damselfish make their home in coral by making pockets in the reef itself. These pockets create an area for algae to grow, which the damselfish feeds on. Studies have shown damselfish are able to cultivate and modify algae populations in their territories to suit their needs. This increase in the algae population comes despite the fact that the surrounding area consists of an environment hostile to excessive algae growth (Ceccarelli 2005). If left unchecked, the damselfish will create many pockets in which algae can start to overgrow surrounding coral.

While poor fishing habits create problems for coral reefs, man’s quest for more food from terrestrial environments has also damaged coral reef environments. Both agricultural and coastal development contribute to increased sediment output from watersheds into marine zones where coral occurs. Coastal development destroys vegetation that, when left in place, secures soil and sediments and prevents them from entering the ocean. With coastal populations set to double in the next 50 years, sediment runoff only shows signs of increasing in the future (Best 2005). Sediment blocks out light that coral need in order for their photosynthetic algae to produce the necessary sugars for the coral to survive.

Agriculture not only contributes to increased levels of marine sediment, but also increases the amount of nutrients and poisons in the water from pesticides and fertilizer. Nitrogen and phosphorous are key components of fertilizer and manure, and while in certain levels beneficial to all organisms, they can lead to coral reef damage. In large amounts, the nitrogen and phosphorous fuel algae blooms that go out of control, causing devastating effects like the Red Tide. In other extreme cases, high levels of nutrients produce “dead zones” as organisms consume all the available oxygen in the nutrient-rich water.
“Dead zone” events in the Mississippi Delta have been regularly recorded, and “dead zones” have also been documented on a smaller scale along a majority of the Florida coastline. The pesticides that are introduced from agricultural runoff devastate coral communities by killing sea grass beds and change the reef community structure by making conditions more hospitable to sponges and algae. Sewage, human waste, is also a huge problem, especially in the area of the Florida Keys. With almost no sewage treatment in developing countries and vast septic systems in areas like the Florida Keys, enough human waste is introduced to have effects similar to those of fertilizer and manure runoff (Risk 2004).

Key ecosystems destroyed by coastal development are the mangrove forests. Mangroves and coral reefs can in many cases form an interdependence with one another. A key function of mangrove forests for coral reef systems is that mangrove roots clump together sediment and reduce the amount of nutrients introduced into the marine environment. Mangroves also provide a safe haven for many coral reef organisms to develop to the point where they can better survive on the reef. In addition, mangroves provide valuable resources to humans, such as anti-inflammatories and other pharmaceuticals, along with wood and honey. Mangrove forests also provide excellent barriers for land against sea storms that would easily erode coastal areas (Saenger 2002).

Another human interference in coral reef health and activity is a combination of tourism and organism collection for aquariums. Tourism results in increased human interaction with coral reefs, many times to the disadvantage of the coral. Many things happen when humans visit reefs: boats that drop anchor often damage coral reefs below, while divers break fragile branched corals (ETE 2004). Tourism also contributes to coastal development by requiring new “pristine” land to be cleared and housing developed for the tourists, as well as for those that reside permanently in the area (Risk 2004).

Collection of specimens for aquariums is far more harmful than most people think. While the United States prohibits the collection of specimens from its own reefs, it imported eighty percent of the collected live coral in the 1990s (Best 2005). Periodic removal of shells has been linked to disrupting the natural predators of the crown-of-thorns starfish. The crown-of-thorns starfish eats coral polyps, and in recent years the population of this starfish has skyrocketed. This overpopulation of starfish has caused massive devastation to many coral reef communities.

Even the periodic removal of coral itself has had profound effects on coral reef communities. Usually the target species for collection are rare, and therefore are vital to the reef for any hope of the species bouncing back to a safe population size. With coral, the more valuable species are usually the ones that have the slowest rate of growth. Combine a slow growth rate with an ever-increasing market demand for these exotic corals, and the result is a major loss of coral species in many areas of the world (Risk 2004).

Despite the many actions humans take that have adverse effects on coral reefs, there is also an effort to have positive impacts. The United Nations Environment Programme established a Coral Reef Unit in December of 2000. This organization is leading international efforts to save the earth’s coral reefs by working actively with international partners around the globe to increase coral conservation.
One big focus of the Coral Reef Unit is to help educate people on the benefits of conserving coral reefs and regulating fishing tactics. The four regional areas the Coral Reef Unit is active in are the Caribbean, Eastern Africa, East Asia, and the South Pacific (UN 2006).

Coral reefs offer humans a wide variety of things, from a beautiful natural wonder to a vital source of food. Many humans would be worse off without coral reefs around, yet many still choose not to respect them. While efforts have been made to save coral reefs, without widespread proper management of land development and responsible use of resources, we may be looking at the possible extinction of these natural phenomena. With only 5 percent of the possible medical applications of coral reefs applied (Baker 2005), and the ever-growing needs for food and coastal protection, humans continue to have an ever-growing dependence on coral reefs. Hopefully humanity can find some way in the near future to become less of a negative impact on coral reefs, and maybe even a positive influence.

Best, Barbara; Moore, Franklin. The World’s Coral Reefs Are Threatened. In: Are the World’s Coral Reefs Threatened? 2005. Greenhaven Press, Farmington Hills, MI. Book Editor: Charlene Ferguson. At Issue Series.

Hoegh-Culdberg, Ove. Global Warming Causes Coral Bleaching. In: Are the World’s Coral Reefs Threatened? 2005. Greenhaven Press, Farmington Hills, MI. Book Editor: Charlene Ferguson. At Issue Series.

United Nations System-Wide Earthwatch. 2006. http://earthwatch.unep.net/emergingissues/oceans/coralreefs.php Accessed: May 19.

ETE. Exploring the Environment: Coral Reefs. 2004. http://www.cet.edu/ete/modules/coralreef/CRanthro.html Accessed: May 21.

Ceccarelli, Daniela; Jones, Geoffrey; McCook, Laurence. 2005. Effects of territorial damselfish on an algal-dominated coastal coral reef.

Burke, Lauretta; Maidens, Jonathan. 2004. Reefs at Risk. World Resources Institute, Washington D.C.

Saenger, Peter. 2002. Mangrove Ecology, Silviculture and Conservation. Kluwer Academic Publishers, Norwell, MA.
<urn:uuid:21e61faa-a19a-4634-97c7-f100cdd74883>
3.71875
1,972
Academic Writing
Science & Tech.
38.096095
95,600,785
January 10, 2018 at 2:00 PM

The two paleontologists dissolving rock cores more than 200 million years old were looking for vestiges of freshwater algae. Instead, tiny fragments of insect scales caught their eye — remnants that a report published Wednesday identifies as the oldest evidence of butterflies and moths. A series of fortunate events led to this discovery, which dates the insects to around 70 million years earlier than previously known, well before there were flowers around that they could pollinate.

In the fall of 2012, Paul K. Strother of Boston College, an expert in prehistoric pollen and spores, traveled to Germany to the lab of Bas van de Schootbrugge, a fellow microfossil paleontologist. There they dissolved cores dating to the late Triassic and early Jurassic periods by exposing the material to a nasty acid. The acid erased everything but fossilized organic material. What grabbed Strother's attention were infinitesimally tiny scales. “It struck me that these looked like butterfly scales,” he recalled.

Butterfly and moth wings are covered in tiny scales that overlap like shingles on a roof. The scientists contacted experts studying modern insects, who deflated their hopes of identifying butterflies. The scales were described as “not diagnostic,” Strother said — meaning such parts did not belong only to a specific insect group. Some mosquitoes and flies have scales, too.

About a year later in Paris, Strother found himself seated at a dinner near a man named Torsten Wappler. The University of Bonn scientist, an expert in extinct insects, had just published an extensive phylogeny, or family tree, describing 479 million years of insect evolution. Strother, who kept images of the curious scales on his computer, whipped out the photos. Wappler examined them and told Strother that it would be possible to classify the insects. It would just take a lot of grunt work down the barrel of a microscope.

Van de Schootbrugge enlisted an undergraduate student named Timo J. B. van Eldijk for the task. “Timo is the guy that did all the work,” Strother said.

The acid reduced the rock cores to what van Eldijk called “black organic smudge.” Out of this smudge he had to isolate the scales, which to the naked eye look just like a pile of dust. He embedded the dust in a mixture of glycerol and water. Then, using a needle tipped with a human nose hair, van Eldijk managed to prod the scales into view beneath an electron microscope.

His investigation revealed that the scales were divided into two types. One set of scales was solid all the way through and compact as steamrolled almonds — “primitive,” van Eldijk called them. The other scales were hollow, which proved to be the critical discovery — “the real shocker,” he said.

Previous studies of insect phylogeny had shown the earliest moth and butterfly families with solid scales, van Eldijk said. They also had mandibles for chewing food. The insects that later split off the family tree developed hollow scales in their wings. And these younger moths and butterflies also grew proboscises: long sucking tubes for drinking plant nectar that curled like crazy straws beneath their heads.

In the textbook example of coevolution, butterflies developed proboscises in response to plants that developed flowers. The more intricate the flower's nectar spur, the more intricate the insect slurper became. Charles Darwin once received a box containing an orchid with an exceptionally long and slender spur.
“Good Heavens, what insect can suck it,” he wrote in an 1862 letter to a friend. Four decades later, biologists in Madagascar discovered an African hawkmoth with a wiry proboscis more than 10 inches long.

But plants did not evolve flowers until 130 million years ago, according to the earliest fossil flowers. “There’s two possible scenarios if these [insects] really are pollinators” with proboscises, Strother said. Maybe there is a missing record of Triassic or early Jurassic flowers. Or maybe the proboscis came first — the scenario that the study authors hypothesize is more probable.

“During the Jurassic, the most dominant group of plants were the gymnosperms, like your classic pine tree,” van Eldijk said. Conifer cones have indentations to catch male pollen. Insects might have drunk this pollen, if they had tubular mouths.

The new research, published in Science Advances, also reveals that moths and butterflies are survivors. At the end of the Triassic period, 201 million years ago, the world was going through an upheaval. Many marine species and some land animals went extinct. Some scientists suggest that intense volcanic activity wracked the planet, altering its climate. But moth and butterfly scales are present in the rock cores on both sides of the extinction divide.

“If anything, these butterflies probably profited” from the ecological niches left open by vanished species, van Eldijk said. “If we are to understand how this dramatic climate change, how this mass extinction, might affect insects right now, look to the past.”
<urn:uuid:d60d3b60-7feb-4626-a95d-56c2ff884e03>
3.890625
1,092
News Article
Science & Tech.
45.711289
95,600,811
A collaborative study, recently published in Elsevier's journal Rangeland Ecology and Management, suggests many land managers in the Flint Hills need to increase burning frequency to more than once every three years to keep the tallgrass prairie ecosystem from transitioning to woodland.

The study applied 40 years of data collected at Konza Prairie Biological Station, an 8,600-acre native tallgrass prairie jointly owned by Kansas State University and The Nature Conservancy, to satellite fire maps of the Flint Hills from 2000 to 2010. The satellite data used in the study—"Assessing the Potential for Transitions from Tallgrass Prairie to Woodland: Are We Operating Beyond Critical Fire Thresholds?"—indicated at least 50 percent of the tallgrass prairie in the Flint Hills is burned every three to four years or less frequently and is susceptible to becoming shrubland. Fire intervals greater than ten years apart, or complete fire suppression, have drastic effects—particularly in the absence of grazing.

"In this area, if we completely exclude fire, the landscape can go from tallgrass prairie to a cedar forest in as little as 30-40 years," said John Briggs, director of Konza Prairie and one of the authors of the study. "Once it gets to that point, we are not confident that fire alone is going to bring that back."

According to Briggs, also a professor of biology, the tallgrass prairie is one of the most altered ecosystems in North America, with only four percent remaining. The grasslands are conducive to cattle ranching and provide economic stability for the area. Native grasses filter freshwater, prevent soil erosion, provide wildlife habitat for grassland birds like the prairie chicken, and mitigate nutrient loading. Briggs also said that if woody vegetation increases near human settlements, so will the chances of dangerous wildfire.

"We knew some areas around the Flint Hills were beyond these fire thresholds, but we were still surprised how much of the region is susceptible to shrub and tree expansion," said Zak Ratajczak, the study's lead author and Kansas State University doctoral alumnus.

Ratajczak, now a National Science Foundation postdoctoral fellow at the University of Virginia, started comparing the results from the Konza Prairie fire experiments with the fire maps from K-State's geography researchers as part of his doctoral studies at Kansas State University. Assisting with the study were Doug Goodin, professor of geography, Lei Luo, master's student in geography, and Jesse Nippert, associate professor of biology, all from Kansas State University; Rhett Mohler, Kansas State University alumnus and assistant professor of geography at Saginaw Valley State University; and Brian Obermeyer, director of The Nature Conservancy's Flint Hills Initiative.

"Prescribed fire is the most effective tool owners have to manage their land," Briggs said. "Other means, such as mechanically removing woody vegetation or using herbicides, are very expensive and very harmful. Fire is pennies per acre to implement; the other methods can be dollars per acre. That can really add up."

Managed by the university's Division of Biology, Konza Prairie has more than 50 sections of land called watersheds—because they are partitioned based on water flow—that have been burned at varying frequencies, from annually to every 20 years, since the land was donated in 1971.
The areas of the station with one- and two-year fire intervals have minimal large shrubs compared to a nearby watershed that is burned at three-and-a-half-year intervals and that has lost 40 percent of its area to shrub expansion. This comparison, combined with the satellite data of the region, is one reason the researchers are advising an increase in burning in many areas, even though they realize this might stimulate discussion locally and in communities downwind.

"This comes at a time where people are really concerned about smoke, and our suggestion to increase burning comes with a trade-off," Briggs said. "We are going to have more fire and more smoke, which can affect the air quality in the region and other parts of North America."

To find solutions for this problem, Briggs said land managers are working with fire cooperatives and Kansas Flint Hills Smoke Management to find best practices and compromise. Briggs said a tour of Konza can give land managers access to research data and might help them establish collaborative practices to reduce the abundance of smoke.

"There is always a conflict to burning," Briggs said. "Most people think that the remaining tallgrass prairie should be a fenced-off preserve. They think that it will take care of itself, but this system is fire derived and historically fire maintained. Aside from the sustainable and ecological aspects, it is critical to people's livelihoods and necessary to ranching communities."
<urn:uuid:4116e764-dcc3-43ca-a2fa-1295c9a75a8b>
3.203125
1,039
News Article
Science & Tech.
26.450957
95,600,841
Zitterbewegung ("trembling motion" in German) is a hypothetical rapid motion of elementary particles, in particular electrons, that obey the Dirac equation. The existence of such motion was first proposed by Erwin Schrödinger in 1930 as a result of his analysis of the wave packet solutions of the Dirac equation for relativistic electrons in free space, in which an interference between positive and negative energy states produces what appears to be a fluctuation (at the speed of light) of the position of an electron around the median, with a frequency of 2mc2/, or approximately ×1021 radians per second. A reexamination of Dirac theory, however, shows that interference between positive and negative energy states may not be a necessary criterion for observing zitterbewegung. 1.6 For the hydrogen atom, the zitterbewegung produces the Darwin term which plays the role in the fine structure as a small correction of the energy level of the s-orbitals. Zitterbewegung of a free relativistic particle has never been observed. However, it has been simulated twice. First, with a trapped ion, by putting it in an environment such that the non-relativistic Schrödinger equation for the ion has the same mathematical form as the Dirac equation (although the physical situation is different). Then, in 2013, it was simulated in a setup with Bose–Einstein condensates.. The time-dependent Dirac equation where H is the Dirac Hamiltonian for an electron in free space in the Heisenberg picture implies that any operator Q obeys the equation In particular, the time-dependence of the position operator is given by The above equation shows that the operator αk can be interpreted as the kth component of a "velocity operator". To add time-dependence to αk, one implements the Heisenberg picture, which says The time-dependence of the velocity operator is given by Now, because both pk and H are time-independent, the above equation can easily be integrated twice to find the explicit time-dependence of the position operator. First: where xk(t) is the position operator at time t. The resulting expression consists of an initial position, a motion proportional to time, and an unexpected oscillation term with an amplitude equal to the Compton wavelength. That oscillation term is the so-called zitterbewegung. The zitterbewegung term vanishes on taking expectation values for wave-packets that are made up entirely of positive- (or entirely of negative-) energy waves. This can be achieved by taking a Foldy–Wouthuysen transformation. Thus, we arrive at the interpretation of the zitterbewegung as being caused by interference between positive- and negative-energy wave components. References and notesEdit - David Hestenes (1990). "The zitterbewegung interpretation of quantum mechanics". Foundations of Physics. 20 (10): 1213–1232. Bibcode:1990FoPh...20.1213H. doi:10.1007/BF01889466. - "Quantum physics: Trapped ion set to quiver". Nature News and Views. 463 (7277). - Gerritsma; Kirchmair; Zähringer; Solano; Blatt; Roos (2010). "Quantum simulation of the Dirac equation". Nature. 463 (7277): 68–71. arXiv: . Bibcode:2010Natur.463...68G. doi:10.1038/nature08688. - Leblanc; Beeler; Jimenez-Garcia; Perry; Sugawa; Williams; Spielman (2013). "Direct observation of zitterbewegung in a Bose–Einstein condensate". New Journal of Physics. 15 (7): 073011. doi:10.1088/1367-2630/15/7/073011. - Schrödinger, E. (1930). Über die kräftefreie Bewegung in der relativistischen Quantenmechanik [On the free movement in relativistic quantum mechanics] (in German). pp. 
418–428. OCLC 881393652. - Schrödinger, E. (1931). Zur Quantendynamik des Elektrons [Quantum Dynamics of the Electron] (in German). pp. 63–72. - Messiah, A. (1962). "XX, Section 37". Quantum Mechanics (pdf). II. pp. 950–952. ASIN B001Q71VQS. ISBN 9780471597681.
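For orientation, the characteristic zitterbewegung scales quoted in the article follow directly from the constants in the expressions above. A quick numerical check using standard values ($mc^2 \approx 0.511$ MeV, $\hbar \approx 6.58\times10^{-22}$ MeV·s), not figures taken from the references:

$$\omega_{\mathrm{ZB}} = \frac{2mc^2}{\hbar} \approx \frac{2 \times 0.511\ \mathrm{MeV}}{6.58\times10^{-22}\ \mathrm{MeV\,s}} \approx 1.6\times10^{21}\ \mathrm{rad/s}, \qquad A_{\mathrm{ZB}} \sim \frac{\hbar}{2mc} \approx 1.9\times10^{-13}\ \mathrm{m},$$

i.e., an oscillation at roughly $10^{21}$ radians per second with an amplitude of about half the reduced Compton wavelength of the electron.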
<urn:uuid:bd25306d-f3e1-4aab-87a9-15710ff6e9b8>
3.609375
1,012
Knowledge Article
Science & Tech.
46.949816
95,600,860
Quantifying the light sensitivity of Calanus spp. during the polar night: potential for orchestrated migrations conducted by ambient light from the sun, moon, or aurora borealis?

Recent studies have shown that the biological activity during the Arctic polar night is higher than previously thought. Zooplankton perform diel vertical migration during the dark period/winter, with the calanoid copepods Calanus spp. being one of the main taxa assumed to contribute to the observed diel vertical migration. We investigated the sensitivity of field-collected Calanus spp. to irradiance by keeping individuals in an aquarium and exposing them to gradually increasing irradiance in white, blue, green, and red wavebands, recording their phototactic response with a near-infrared-sensitive video camera. Experiments were performed with the two oldest copepodite stages as well as adult males and females. The copepods were negatively phototactic, and the lowest irradiance eliciting a significant phototactic response was of the order of 10⁻⁸–10⁻⁶ μmol photons m⁻² s⁻¹ for white, green, and blue wavebands, whereas the comparative irradiance for red wavebands was up to three orders of magnitude higher. The different copepod developmental stages displayed different sensitivities to irradiance. During the darkest part of the polar night, the lowest irradiance for significant response corresponded to 0.0005–0.5 % of the ambient surface irradiance, depending on light source. Accordingly, Calanus spp. may respond to irradiance from the night sky down to 70–80 m, moonlight to 120–170 m, and aurora borealis down to 80–120 m depth. The high sensitivity to blue and green light may explain the Calanus’ ability to perform diel vertical migration during the polar night, when the intensity and diurnal variation of ambient irradiance are low.

Keywords: Phototaxis · Light response · Spectral sensitivity · Copepods · Arctic

Funding for the Ph.D. project of A. S. Båtnes was provided by the Faculty of Natural Sciences and Technology (SO funding), NTNU, and the field work was funded by the Arctic Field Grant (Svalbard Science Forum, Norwegian Polar Institute). The Ph.D. project of C. Miljeteig was funded by VISTA, a basic research programme funded by Statoil, conducted in close collaboration with The Norwegian Academy of Science and Letters (Project No. 6156). J. Berge is supported by the Norwegian Research Council project Circa (Project No. 214271). M. Greenacre’s research is partially supported by the BBVA Foundation in Madrid and grant MTM2012-37195 of the Spanish Ministry of Education and Competitiveness.

- Bates D, Maechler M, Bolker B (2011) lme4: linear mixed-effects models using S4 classes. R package version 0.999375-42. http://CRAN.R-project.org/package=lme4
- Clarke GL (1971) Light conditions in the sea in relation to the diurnal vertical migration of animals. In: Farquhar GB (ed) Proceedings of the International Symposium on Biological Sound Scattering in Ocean. Maury Center for Ocean Science, Washington, pp 41–50
- Falk-Petersen S, Hopkins CCE, Sargent JR (1990) Trophic relationships in the pelagic, Arctic food web. In: Barnes M, Gibson RN (eds) Trophic relationships in the marine environment. Aberdeen University Press, Aberdeen, pp 315–333
- Forward RB (1988) Diel vertical migration: zooplankton photobiology and behaviour.
Oceanogr Mar Biol Annu Rev 26:361–393
- Frost BW (1988) Variability and possible significance of diel vertical migration in Calanus pacificus, a planktonic marine copepod. Bull Mar Sci 43:675–694
- Jensen HW, Durand F, Stark M, Premoze S, Dorsey J, Shirley P (2001) A physically-based night sky model. Proc SIGGRAPH. doi:10.1145/383259.383306
- Jerlov NG (1968) Optical oceanography. Elsevier, Amsterdam
- Johnsen G, Volent Z, Sakshaug E, Sigernes F, Pettersson LH (2009) Remote sensing in the Barents Sea. In: Sakshaug E, Johnsen G, Kovacs K (eds) Ecosystem Barents Sea. Tapir Academic Press, Trondheim, pp 139–168
- Müller A, Wuchterl G, Sarazin M (2011) Measuring the night sky brightness with the lightmeter. RevMexAA (Serie de Conferencias) 41:46–49
- R Development Core Team (2012) R: A language and environment for statistical computing. R Foundation for Statistical Computing, Vienna, Austria. ISBN 3-900051-07-0
- Rasband WS (1997–2012) ImageJ. U.S. National Institutes of Health, Bethesda, Maryland, USA. http://imagej.nih.gov/ij/
- Sakshaug E, Johnsen G, Zsolt V (2009) Light. In: Sakshaug E, Johnsen G, Kovacs K (eds) Ecosystem Barents Sea. Tapir Academic Press, Trondheim, pp 117–138
- Tande KS (1982) Ecological investigations on the zooplankton community in Balsfjorden, northern Norway: generation cycles, and variations in body weight and body content of carbon and nitrogen related to overwintering and reproduction in the copepod Calanus finmarchicus (Gunnerus). J Exp Mar Biol Ecol 62:129–142
- Vadstein (2009) Interactions in the planktonic food web. In: Sakshaug E, Johnsen G, Kovacs KM (eds) Ecosystem Barents Sea. Tapir Academic Press, Trondheim, pp 251–266
- Wallace MI, Cottier FR, Berge J, Tarling GA, Griffiths C, Brierley AS (2010) Comparison of zooplankton vertical migration in an ice-free and a seasonally ice-covered Arctic fjord: an insight into the influence of sea ice cover on zooplankton behavior. Limnol Oceanogr 55:831–845
- Webster CN, Varpe Ø, Falk-Petersen S, Berge J, Stübner E, Brierley AS (in press) Moonlit swimming: vertical distributions of macrozooplankton and nekton during the polar night. Polar Biol
- Yamagutchi A, Ikeda T, Watanabe Y, Ishizaka J (2004) Vertical distribution patterns of pelagic copepods as viewed from the predation pressure hypothesis. Zool Stud 43:475–485
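The response depths quoted in the abstract follow from exponential attenuation of light with depth, I(z) = I₀e^(−Kz). A minimal sketch of that calculation; the attenuation coefficient and surface irradiances below are illustrative placeholders, not the paper's measured values:

```python
import math

# I(z) = I0 * exp(-K * z): depth where irradiance falls to the response threshold.
K = 0.12          # assumed diffuse attenuation coefficient (m^-1) for clear Arctic water
THRESHOLD = 1e-7  # assumed lowest irradiance eliciting a response (umol photons m^-2 s^-1)

def max_response_depth(surface_irradiance: float) -> float:
    """Depth (m) at which I(z) drops to THRESHOLD."""
    return math.log(surface_irradiance / THRESHOLD) / K

for label, i0 in [("night sky", 1e-3), ("moonlight", 1e-2)]:
    print(f"{label}: phototactic response possible down to ~{max_response_depth(i0):.0f} m")
# With these placeholder numbers the night-sky depth comes out near the
# 70-80 m figure reported in the abstract; the real study used measured spectra.
```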
<urn:uuid:e2b94891-9d50-41e8-91ed-8195f0f21d94>
2.9375
1,516
Academic Writing
Science & Tech.
47.981923
95,600,861
A newly discovered structure found along the San Andreas Fault near the Salton Sea could trigger the next big Southern California earthquake. A study by the Geological Society of America concludes the so-called Durmid ladder structure could be ground zero for the next “big one” on the San Andreas fault.

San Francisco-based TV station KRON4 asked the U.S. Geological Survey’s Dr. Walter Mooney to comment on the discovery. “If you have a ladder and you break a rung or a step, then the ladder begins to deform, and if you break another rung, well I wouldn’t want to be on that ladder because the whole thing, both sides could go,” Dr. Mooney said.

While fraught with numerous small temblors through the years, the far southern end of the San Andreas Fault is long overdue for a large earthquake. In a worst-case scenario, the collapse of the Durmid ladder structure, which runs for about 15 miles through a region known for numerous small temblors, could trigger a much larger earthquake with devastating consequences.

“We would be very concerned if we begin to see movement on that ladder structure because it could trigger or influence or encourage other associated faults to move, and what we’re really afraid of is a big one, a magnitude 8 in Southern California,” Dr. Mooney said.

Dr. Mooney says it’s unlikely the collapse of the ladder structure would affect areas outside of Southern California, where USGS scientists have concluded there is a 72 percent chance of one or more magnitude 6.7 or greater earthquakes in the next 30 years.
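The 72 percent, 30-year figure quoted above can be turned into a rough per-year rate if one assumes such quakes arrive as a Poisson process (a common modeling assumption, not something stated in the article):

```python
import math

# P(at least one event in T years) = 1 - exp(-rate * T) under a Poisson model.
P_30YR = 0.72   # from the article: chance of one or more M6.7+ quakes in 30 years
T_YEARS = 30.0

rate = -math.log(1.0 - P_30YR) / T_YEARS
print(f"implied rate: ~{rate:.4f} events/year (~{100 * rate:.1f}% in any given year)")
# ~0.042 events/year, i.e. roughly a 4% chance each year.
```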
<urn:uuid:57618c4a-52a6-426f-8f9f-efae04d4b40f>
2.890625
345
News Article
Science & Tech.
56.758588
95,600,886
Electron Energy Loss Spectroscopy Methodology for Boron Localisation in Plant Cell Walls

Electron energy loss spectroscopy (EELS) depends on the phenomenon that when an electron beam interacts with electrons in matter, as in a conventional transmission electron microscope, each beam electron can lose a characteristic amount of its energy. The magnitude of the energy loss depends on which element has been struck by the beam electron, and on which transition occurs between inner-shell atomic orbitals of that element. For a specific element there are one, or sometimes more, characteristic features or ‘edges’ in the EELS spectrum. When there are two distinct EELS edges (e.g. the K and L edges) for an element, these are derived from different orbital transitions. Valence (outer) electrons are not involved in these transitions, but the fine structure of the L edges, in particular, contains information derived from minor interactions with the valence electron configuration and hence on the bonding environment of the atom.

Keywords: Background subtraction · Secondary cell wall · Electron energy loss spectroscopy · Flax fibre · Fibre cell wall

- Bode, E., Kozik, S., Kunz, U., Lehmann, H., 1994, Comparative electron-microscopic studies on the process of silification in leaves of two different grass species. Deut. Travarztl. Wochenschr. 101: 367–372.
- Brydson, R., 2000, A brief review of quantitative aspects of electron energy loss spectroscopy and imaging. Mater. Sci. Technol. 16: 1187–1198.
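One of the listed keywords, background subtraction, refers to the routine step of stripping the smoothly decaying background from under a core-loss edge before its intensity is measured; the usual approach fits a power law A·E^(−r) to a pre-edge window and extrapolates it beneath the edge. A minimal sketch with synthetic data; the spectrum and all parameter values here are invented for illustration (only the ~188 eV position of the boron K edge is a standard reference value):

```python
import numpy as np

# Synthetic spectrum: power-law background plus a step-like "edge" near 188 eV
# (roughly where the boron K edge sits); all other numbers are invented.
E = np.linspace(120.0, 260.0, 700)            # energy loss axis (eV)
rng = np.random.default_rng(0)
spectrum = 5e8 * E**-2.8 + np.where(E > 188.0, 40.0, 0.0) + rng.normal(0.0, 2.0, E.size)

# Fit A * E**r on a pre-edge window via linear regression in log-log space.
pre = (E > 150.0) & (E < 185.0)
r, log_a = np.polyfit(np.log(E[pre]), np.log(spectrum[pre]), 1)   # r comes out ~ -2.8
background = np.exp(log_a) * E**r

# Strip the extrapolated background and integrate the remaining edge signal.
signal = spectrum - background
post = (E > 188.0) & (E < 230.0)
dE = E[1] - E[0]
print(f"fitted exponent: {r:.2f}")
print(f"integrated edge intensity: {signal[post].sum() * dE:.0f}")
```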
<urn:uuid:fcdf14b2-b772-4766-b341-8fa1595c614c>
2.703125
353
Truncated
Science & Tech.
38.944373
95,600,899
In software engineering, a software development process is the process of dividing software development work into distinct phases to improve design, product management, and project management. It is also known as a software development life cycle. The methodology may include the pre-definition of specific deliverables and artifacts that are created and completed by a project team to develop or maintain an application.

Most modern development processes can be loosely described as agile. Other methodologies include waterfall, prototyping, iterative and incremental development, spiral development, rapid application development, and extreme programming.

Some people consider a life-cycle "model" a more general term for a category of methodologies and a software development "process" a more specific term to refer to a specific process chosen by a specific organization. For example, there are many specific software development processes that fit the spiral life-cycle model. The field is often considered a subset of the systems development life cycle.

The software development methodology (also known as SDM) framework didn't emerge until the 1960s. According to Elliott (2004), the systems development life cycle (SDLC) can be considered to be the oldest formalized methodology framework for building information systems. The main idea of the SDLC has been "to pursue the development of information systems in a very deliberate, structured and methodical way, requiring each stage of the life cycle--from inception of the idea to delivery of the final system--to be carried out rigidly and sequentially" within the context of the framework being applied. The main target of this methodology framework in the 1960s was "to develop large scale functional business systems in an age of large scale business conglomerates. Information systems activities revolved around heavy data processing and number crunching routines".

Methodologies, processes, and frameworks range from specific prescriptive steps that can be used directly by an organization in day-to-day work, to flexible frameworks that an organization uses to generate a custom set of steps tailored to the needs of a specific project or group. In some cases a "sponsor" or "maintenance" organization distributes an official set of documents that describe the process.

It is notable that since DSDM in 1994, nearly all of the widely used named methodologies except RUP have been agile methodologies - yet many organisations, especially governments, still use pre-agile processes (often waterfall or similar). Software process and software quality are closely interrelated; some unexpected facets and effects have been observed in practice.

Since the early 2000s, scaling agile delivery processes has become the biggest challenge for teams using agile processes. Among these, another software development process has been established in open source. The adoption of these best practices (known and established processes) within the confines of a company is called inner source.

Several software development approaches have been used since the origin of information technology, in two main categories.
Typically an approach or a combination of approaches is chosen by management or a development team. "Traditional" methodologies such as waterfall that have distinct phases are sometimes known as software development life cycle (SDLC) methodologies, though this term could also be used more generally to refer to any methodology. A "life cycle" approach with distinct phases is in contrast to Agile approaches, which define a process of iteration but where design, construction, and deployment of different pieces can occur simultaneously.

Continuous integration is the practice of merging all developer working copies to a shared mainline several times a day. Grady Booch first named and proposed CI in his 1991 method, although he did not advocate integrating several times a day. Extreme programming (XP) adopted the concept of CI and did advocate integrating more than once per day - perhaps as many as tens of times per day.

Software prototyping is about creating prototypes, i.e. incomplete versions of the software program being developed. One basic principle is that a basic understanding of the fundamental business problem is necessary to avoid solving the wrong problems, but this is true for all software methodologies.

Various methods are acceptable for combining linear and iterative systems development methodologies, with the primary objective of each being to reduce inherent project risk by breaking a project into smaller segments and providing more ease-of-change during the development process. There are three main variants of incremental development.

Rapid application development (RAD) is a software development methodology which favors iterative development and the rapid construction of prototypes instead of large amounts of up-front planning. The "planning" of software developed using RAD is interleaved with writing the software itself. The lack of extensive pre-planning generally allows software to be written much faster, and makes it easier to change requirements. The rapid development process starts with the development of preliminary data models and business process models using structured techniques. In the next stage, requirements are verified using prototyping, eventually to refine the data and process models. These stages are repeated iteratively; further development results in "a combined business requirements and technical design statement to be used for constructing new systems". The term was first used to describe a software development process introduced by James Martin in 1991. According to Whitten (2003), it is a merger of various structured techniques, especially data-driven information engineering, with prototyping techniques to accelerate software systems development.

"Agile software development" refers to a group of software development methodologies based on iterative development, where requirements and solutions evolve via collaboration between self-organizing cross-functional teams. The term was coined in 2001, when the Agile Manifesto was formulated. Agile software development uses iterative development as a basis but advocates a lighter and more people-centric viewpoint than traditional approaches. Agile processes fundamentally incorporate iteration and the continuous feedback that it provides to successively refine and deliver a software system.
There are many agile methodologies, including Scrum, Kanban, and extreme programming.

The waterfall model is a sequential development approach, in which development is seen as flowing steadily downwards (like a waterfall) through several phases, typically requirements analysis, design, implementation, testing, integration, and maintenance. The first formal description of the method is often cited as an article published by Winston W. Royce in 1970, although Royce did not use the term "waterfall" in this article. Royce presented this model as an example of a flawed, non-working model.

The waterfall model is a traditional engineering approach applied to software engineering. A strict waterfall approach discourages revisiting and revising any prior phase once it is complete. This "inflexibility" in a pure waterfall model has been a source of criticism by supporters of other, more "flexible" models. It has been widely blamed for several large-scale government projects running over budget and over time, and sometimes failing to deliver on requirements, due to the Big Design Up Front approach. Except when contractually required, the waterfall model has been largely superseded by more flexible and versatile methodologies developed specifically for software development. See Criticism of Waterfall model.

In 1988, Barry Boehm published a formal software system development "spiral model," which combines some key aspects of the waterfall model and rapid prototyping methodologies, in an effort to combine advantages of top-down and bottom-up concepts. It placed emphasis on a key area many felt had been neglected by other methodologies: deliberate iterative risk analysis, particularly suited to large-scale complex systems.

Some "process models" are abstract descriptions for evaluating, comparing, and improving the specific process adopted by an organization. A variety of such frameworks have evolved over the years, each with its own recognized strengths and weaknesses. One software development methodology framework is not necessarily suitable for use by all projects. Each of the available methodology frameworks is best suited to specific kinds of projects, based on various technical, organizational, project and team considerations.

Software development organizations implement process methodologies to ease the process of development. Sometimes contractors may require that particular methodologies be employed; one example is the U.S. defense industry, which requires a rating based on process models to obtain contracts. The international standard for describing the method of selecting, implementing and monitoring the life cycle for software is ISO/IEC 12207.

A decades-long goal has been to find repeatable, predictable processes that improve productivity and quality. Some try to systematize or formalize the seemingly unruly task of designing software. Others apply project management techniques to designing software. Large numbers of software projects do not meet their expectations in terms of functionality, cost, or delivery schedule - see List of failed and overbudget custom software projects for some notable examples.

Organizations may create a Software Engineering Process Group (SEPG), which is the focal point for process improvement. Composed of line practitioners who have varied skills, the group is at the center of the collaborative effort of everyone in the organization who is involved with software engineering process improvement.
A particular development team may also agree to programming environment details, such as which integrated development environment is used, and one or more dominant programming paradigms, programming style rules, or choice of specific software libraries or software frameworks. These details are generally not dictated by the choice of model or general methodology.
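As a concrete illustration of the continuous-integration practice described earlier, here is a minimal Python sketch of the polling loop a CI service might run. The repository URL is hypothetical, the build is assumed to use make, and real CI servers typically react to version-control hooks rather than polling.

import subprocess
import time

REPO = "https://example.org/project.git"  # hypothetical repository

def run(cmd, cwd=None):
    # Run a command and capture its output and exit status.
    return subprocess.run(cmd, cwd=cwd, capture_output=True, text=True)

def ci_loop():
    run(["git", "clone", REPO, "work"])
    last_built = ""
    while True:
        run(["git", "pull"], cwd="work")
        head = run(["git", "rev-parse", "HEAD"], cwd="work").stdout.strip()
        if head != last_built:  # a new commit reached the shared mainline
            build = run(["make"], cwd="work")          # assumed build step
            tests = run(["make", "test"], cwd="work")  # assumed test step
            ok = build.returncode == 0 and tests.returncode == 0
            print(f"{head[:8]}: {'PASS' if ok else 'FAIL'}")  # report result
            last_built = head
        time.sleep(60)  # poll roughly once a minute

ci_loop()

The point of integrating this often is that a merge that breaks the mainline is reported within minutes of being pushed, while the change is still small enough to diagnose easily.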
<urn:uuid:c4de0149-ee70-402e-aa5d-6749fc9c6bd6>
3.203125
2,007
Knowledge Article
Software Dev.
9.587658
95,600,904
TUNICATA : APLOUSOBRANCHIA : Didemnidae (SEA SQUIRTS)
Description: This ascidian forms transparent gelatinous colonies on algae. The small zooids are scattered densely throughout the sheet. Each zooid has a small inhalant pore and there are a few larger exhalant openings, but these openings are not conspicuously pigmented. There is a pattern of small yellow pigment bodies in the surface layer which can be seen on close inspection. Colonies form sheets about 4 mm thick and 50 mm wide.
Habitat: A common ascidian in a wide variety of habitats in shallow water.
Distribution: Widespread around the British Isles.
Similar Species: Diplosoma spongiforme has often been mistaken for this species in the past. It forms much larger, more heavily pigmented sheets on hard substrata. Diplosoma singulare forms similar smaller transparent colonies on seaweeds but has not yet been reported from the British Isles.
Distribution Map from NBN: Interactive map: National Biodiversity Network mapping facility, data for UK.
WoRMS: Species record: World Register of Marine Species.
Picton, B.E. & Morrow, C.C. (2016). Diplosoma listerianum (Milne-Edwards, 1841). [In] Encyclopedia of Marine Life of Britain and Ireland. http://www.habitas.org.uk/marinelife/species.asp?item=ZD970 Accessed on 2018-07-19
<urn:uuid:d08a3731-b90f-4f0d-960f-812cb995ea0c>
3.046875
343
Knowledge Article
Science & Tech.
42.802501
95,600,919
From Event: SPIE Optical Engineering + Applications, 2016
The use of Pseudo Invariant Calibration Sites (PICS) for establishing the radiometric trending of optical remote sensing systems has a long history of successful implementation. Past studies have shown that the PICS method is useful for evaluating sensor trends over time or for cross-calibrating sensors, but it was not considered until recently for deriving absolute calibration. Current interest in using this approach to establish absolute radiometric calibration stems from recent research indicating that, with empirically derived models of the surface properties and careful atmospheric characterisation, Top of Atmosphere (TOA) reflectance values can be predicted and used for absolute sensor radiometric calibration. Critical to the continued development of this approach is the accurate characterization of the Bidirectional Reflectance Distribution Function (BRDF) of PICS sites. This paper presents BRDF data collected by a high-performance portable goniometer system in order to develop a temporal BRDF model for the Algodones Dunes in California. The results demonstrated that the BRDF of a reasonably simple sand surface was complex, with changes in anisotropy taking place in response to changing solar zenith angles. The nature of these complex interactions would present challenges to future model development.
Craig A. Coburn, Gordon Logie, and Jason Beaver, "Temporal dynamics of sand dune bidirectional reflectance characteristics for absolute radiometric calibration of optical remote sensing data," Proc. SPIE 9972, Earth Observing Systems XXI, 99720J (Presented at SPIE Optical Engineering + Applications: August 30, 2016; Published: 19 September 2016); https://doi.org/10.1117/12.2237069.
<urn:uuid:e72730a6-8b8e-4ef1-8374-2a24610e90e1>
2.78125
428
Academic Writing
Science & Tech.
14.752118
95,600,953
Dec 24, 2016 10:52 PM EST
NASA astronauts are reportedly preparing for the first manned mission in 2035. How does living in an igloo, or "ice home," relate to the mission? Will human beings survive on Mars by making themselves comfortable surrounded by ice? Read more details here!

Astronauts' Preparation for NASA's First Manned Mission in 2035
Astronauts who will compose the team for NASA's first manned mission are reportedly preparing for their expedition in 2035. According to The Daily Caller, as part of the preparation, NASA has designed a habitat called the "Ice Home" (commonly known as an igloo) for the astronauts to settle in when they finally land on Mars. Such an abode is likely to be more cost-effective because the materials needed to build an "Ice Home" are already found on Mars; thus, shipping materials from Earth will not be necessary, as NASA presumes that ice is readily available in the Utopia Planitia region of the Red Planet. Furthermore, NASA presumes that this region of Mars holds more buried water in the form of ice than Earth's Lake Superior.

How Could an "Ice Home" Protect the Astronauts?
Living in an "Ice Home" for longer periods would give biological benefits to the astronauts. According to Kevin Krempton, NASA's Ice Home principal investigator, an "Ice Home" can lower the dose of galactic cosmic rays absorbed by the astronauts, something that structures built from aluminum cannot do, Space.com reported. Aside from shielding the astronauts from galactic cosmic rays, an "Ice Home" also provides a pressurized working environment where the scientists won't have to breathe dust from the equipment they are working with while wearing their environmental suits. This spares the astronauts the stress of working in pressurized gloves.

Inside Look at the Mars "Ice Home"
Generally, NASA's concept for the "Ice Home" is not a typical structure akin to a cave. As Krempton added, the structure will be made of translucent materials so that light can pass through it and dwellers won't feel trapped in a dark cave. The "Ice Home" will be made to house four dwellers and will have areas for work, recreation, logistics, food production and sleeping.
<urn:uuid:e4a91cfc-b7fc-4e09-898c-069e75ee4cec>
3.109375
498
News Article
Science & Tech.
39.729886
95,600,967
December 4, 2009
Stem Cell Biology and Its Complications
By GINA KOLATA

The cells are a sort of blank slate, plucked from human embryos just a few days after fertilization. They tantalize scientists because they could in theory turn into any of the body’s 200 mature cell types, from blood to brain to liver to heart. They could be used to study and treat diseases and to study the basic biology of what determines a cell’s destiny — why a heart cell becomes a heart cell, for example, instead of a brain cell. The problem is their origin — human embryos. In order to get stem cells, embryos must be destroyed. It is this fact that led to the court ruling on Monday blocking most federal financing for embryonic stem cell research.

The scientist who isolated human embryonic stem cells in 1998 struggled with this dilemma, consulting ethicists before proceeding. But in the end, the scientist, Dr. James Thomson of the University of Wisconsin, decided to go ahead because the embryos were from fertility clinics and were going to be destroyed anyway. And, he reasoned, the work could greatly benefit humanity. Yet despite the high hopes for embryonic stem cells, progress has been slow — so far there are no treatments with the cells. The Food and Drug Administration just approved the first clinical study, a dose and safety test, of human embryonic stem cells to treat spinal cord injuries.

All along, though, scientists wondered if they could sidestep the ethical debate by creating embryonic stem cells without the embryos. Every cell has the same DNA. A heart cell is different from a liver cell because it uses different genes. But all the genes to make a liver cell, or any other cell, are there in the cell. The liver genes are masked in a heart cell and vice versa. Why can’t scientists find a way to unmask all of a cell’s genes and turn it directly into a stem cell without using an embryo? A few years ago, two groups of researchers — one led by Dr. Thomson — did just that. They discovered that all they had to do was add four genes and a cell would reprogram itself back to its original state when it was a stem cell in an embryo. Like an embryonic stem cell, that reprogrammed cell seemed to be able to then turn into the many kinds of specialized cells in the body, an ability called pluripotency.

What has happened since that discovery, scientists say, is that stem cell biology turned out to be more complicated than they anticipated. Besides the stem cells from embryos, there are so-called adult stem cells found in all tissues but with limited potential because they can only turn into cells from their tissue of origin. And there are these newer cells made by reprogramming mature cells. Now researchers are trying to figure out whether stem cells made by this reprogramming process really are the same as ones taken from embryos. Some say they found subtle differences between these cells, known as induced pluripotent stem cells, or I.P.S.C.’s, and embryonic stem cells. Others are not so sure. They say they need embryonic stem cells as a basis of comparison, a gold standard to see if the newer reprogrammed cells are as good. “We are not at the stage where you will find many investigators saying, ‘We don’t need embryonic stem cells because I.P. cells are the same,” said Dr. Timothy Kamp, a stem cell researcher and professor of medicine at the University of Wisconsin School of Medicine and Public Health.
“We don’t know that yet.” One complication is that different labs use different methods to obtain the reprogrammed cells and to study them, Dr. Kamp said. As a result, he said, “not all I.P. cells are the same.” John Gearhart, director of the Institute for Regenerative Medicine at the University of Pennsylvania, and one of the first to isolate human embryonic stem cells, said some investigators ended up with reprogrammed cells “that will have little utility.” They are only partly reprogrammed, he explains. “One worries about how safe and effective they are going to be” if they are ever used in therapies, Dr. Gearhart said.

Dr. George Q. Daley, a stem cell researcher at Children’s Hospital in Boston, saw subtle differences in a recent study. When he just compared the two types of cells side by side with molecular tests, they looked identical. Then he tried turning them into various types of mature cells and comparing the results. Dr. Daley published a paper in March, in Nature Biotechnology, reporting that mouse I.P.S.C.’s from different tissues remembered, in a sense, where they came from. He has a similar paper under review showing the same effect with human induced pluripotent stem cells. In the mouse study, it was harder to get pluripotent mouse cells derived from a skin cell, for example, to turn into blood cells than it was to get pluripotent stem cells made from blood cells to turn into blood cells. “They tended to remember their tissue of origin,” Dr. Daley said. Researchers need to find ways to make the cells forget where they came from, he said.

Rudolf Jaenisch, a stem cell researcher and biology professor at M.I.T., said he was not certain there were meaningful differences between human embryonic stem cells and human induced pluripotent cells. But to answer that question will require the use of embryonic stem cells for comparisons, Dr. Jaenisch said. “Things are very much in flux,” he said. “We will probably need human embryonic stem cells for a while. And then we probably will not need them anymore.”

http://www.nytimes.com/2010/08/25/healt ... cleic_acid
<urn:uuid:08ebb305-6c51-4fb9-9405-92b2e00fc2b9>
3.484375
1,400
Comment Section
Science & Tech.
59.872076
95,600,974
Los Alamos National Laboratory played a part in two major discoveries and powered the spacecraft during its 20-year flight

Los Alamos National Laboratory scientists led the development of two scientific sensors on NASA’s spacecraft Cassini that provided key measurements of the space environment around Saturn after its launch in 1997 and arrival in 2004, and throughout the mission that ended Friday, when the spacecraft burned up in Saturn’s atmosphere. The Laboratory also provided the plutonium heat sources in the spacecraft’s Radioisotope Thermoelectric Generator (RTG), which provided electrical power to Cassini throughout its mission.

“Ultimately, the Cassini mission was about exploring to the very edges of our solar system,” said Terry Wallace, head of Global Security at Los Alamos National Laboratory. “It’s an extraordinarily difficult mission, but one with extraordinary rewards: giving us a glimpse into how our solar system formed and how it operates today. Watching a mission come to an end is always bittersweet, but we’re proud to have been a part of something so successful that will continue to inform our understanding of our universe.”

The two instruments developed by Los Alamos were the ion beam spectrometer, which grew from a space technology that Los Alamos first developed and flew in the 1970s, and an ion mass spectrometer, which featured a completely new design at the time that allowed mission scientists to sort out the composition of the rings and moons orbiting within Saturn’s magnetic influence. The two sensors were part of the Cassini Plasma Spectrometer, or CAPS, a microwave oven-sized unit that was one of 12 scientific instruments on the two-story-tall Cassini spacecraft. Cassini was a joint effort of NASA and the European Space Agency.

“Developing sensors for spacecraft is something Los Alamos has done since the launch of Vela, the first nuclear treaty monitoring satellite, in 1963,” said Wallace. “The sensors on Cassini were an extension of that work and have led to notable discoveries about Saturn that have really changed the way we view the planet.”

The RTGs produced by Los Alamos that powered the spacecraft were also part of the space power systems aboard the Mars Science Laboratory-Curiosity Rover, Pioneer 10, Pioneer 11, Voyager 1, Voyager 2, Galileo, Ulysses, Cassini and New Horizons. RTGs were also used to power the two Viking landers and the scientific experiments left on the Moon by the crews of Apollo 12 through 17.

In 2006, Cassini data obtained during a close flyby of the Saturn moon Enceladus revealed that large amounts of water are spewing into space from the tiny moon’s surface. This water originates near south polar “hot spots” on the moon, possible locations for the development of primitive life in the solar system. The finding, supported by CAPS measurements, was announced by the Cassini Imaging Science Team in Science magazine and reported in the same issue by a team led by Robert Tokar of Los Alamos National Laboratory. In addition, in 2012, the Los Alamos sensors on Cassini “sniffed” molecular oxygen ions around Saturn’s icy moon Dione for the first time, confirming the presence of a very tenuous atmosphere. The oxygen ions are quite sparse—one for every 0.67 cubic inches of space (one for every 11 cubic centimeters of space) or about 2,550 per cubic foot (90,000 per cubic meter)—and show that Dione has an extremely thin neutral atmosphere.
“We now know that Dione, in addition to Saturn’s rings and the moon Rhea, is a source of oxygen molecules,” said Robert Tokar, a Cassini team member based at Los Alamos National Laboratory, and the lead author of the paper. “This shows that molecular oxygen is actually common in the Saturn system and reinforces that it can come from a process that doesn’t involve life.”
<urn:uuid:d996e629-5b58-48e5-a97d-5c2c65a552fe>
3.640625
829
News Article
Science & Tech.
26.870607
95,601,017
Publisher: Wikipedia 2014
A supernova is a stellar explosion that briefly outshines an entire galaxy, radiating as much energy as the Sun or any ordinary star is expected to emit over its entire life span, before fading from view over several weeks or months.

by James Schombert - University of Oregon: This course studies the birth, evolution and death of stars in our galaxy, emphasizing the underlying science behind stellar and galactic evolution, the observational aspect and our knowledge of how the Universe operates on the stellar scale.

by F. Thielemann, R. Hirschi, M. Liebendorfer, R. Diehl - arXiv: The authors focus on the astrophysical aspects, i.e. a description of the evolution of massive stars and their endpoints, with a special emphasis on the composition of their ejecta in form of stellar winds during the evolution or of explosive ejecta.

by James N. Pierce - Minnesota State University: This book provides an introduction to the details of the structure, operation, and evolution of stars. It will be most useful to undergraduates in upper-level astronomy courses and graduate students taking Stellar Interiors or Stellar Atmospheres.

by J. B. Tatum: A course on stellar atmospheres: radiation theory, blackbody radiation, flux, specific intensity and other astrophysical terms, absorption, scattering, extinction, the equation of transfer, limb darkening, atomic spectroscopy, and more.
<urn:uuid:a527ee62-331c-4ac7-b555-10803efa3ea3>
2.921875
313
Content Listing
Science & Tech.
29.728411
95,601,021
Land-Atmosphere Transfer Parameters in the Brazilian Pantanal during the Dry Season
Abstract: The Brazilian region of Pantanal is one of the largest wetlands in the world, characterized by a wet season, in which it is covered by a shallow water layer, and a dry season, in which the water layer disappears. The aim of this study is the estimation of the main parameters (drag coefficients and surface scale lengths) involved in modelling the surface-atmosphere transfer of momentum, heat and water vapor from the dataset of the second Interdisciplinary Pantanal Experiment (IPE2). The roughness parameters and the stability correction parameters have been estimated in the framework of the similarity theory for the vertical profiles of wind speed and temperature. Thus, a previously developed methodology was adapted to the available dataset from the IPE2 five-level mast. The results are in reasonable agreement with the available literature. An attempt to obtain the scalar transfer parameters for water vapor has been performed by a Penman–Monteith approach using a two-component surface resistance in parallel between a vegetation and a bare soil part. The parameters of the model have been calibrated using a non-linear regression method. The scalar drag coefficient retrieved in this way is in agreement with that calculated by the flux-gradient approach for the sensible heat flux. Eventually, an evaluation of the vegetation contribution to the total vapor flux is given.
Martano, P.; Filho, E.P.M.; de Abreu Sá, L.D. Land-Atmosphere Transfer Parameters in the Brazilian Pantanal during the Dry Season. Atmosphere 2015, 6, 805-821.
<urn:uuid:1f98c210-6d5b-4c9a-a6d2-778ee67b8474>
2.546875
465
Academic Writing
Science & Tech.
43.632626
95,601,026
Supported recipe types (also known as modes), to be given as argument to recipe_type:

recipe_type=configure is used for Programs based on "configure" scripts (autoconf or not). Some options are only relevant for configure:

Flags to be passed to the configure script. These flags are passed in addition to default flags detected by PrepareProgram (such as --prefix and --sysconfdir on autoconf-based configure scripts), unless the override_default_options declaration is used.

Use it if you need to run ./autogen.sh in order to generate the configure script. The program to run for the above. Defaults to "autogen.sh".

By default the configure script is assumed to be called "configure". Use this variable to override this value. Remember that the current directory during execution will still be the one set by the dir variable, even if a directory path is given. If the behavior you intend is for Compile to "cd" into a subdirectory and run its build sequence there, use dir instead.

recipe_type=cabal is used for Programs based on "Haskell Cabal". Some options are only relevant for cabal:

Flags to be passed to the Cabal configure operation. These flags are passed in addition to default flags detected by PrepareProgram (such as --prefix) unless the override_default_options declaration is used.

Specifies the method of invoking Haskell to perform a Cabal-based compilation. The default is "runhaskell".

recipe_type=cmake is used for Programs based on CMake. Some options are only relevant for cmake:

Flags to be passed to the CMake configure operation. These flags are passed in addition to default flags (such as -DCMAKE_INSTALL_PREFIX).

Variables to be defined in the environment during the execution of cmake.

recipe_type=makefile is used for Programs based on Makefiles. No options are relevant only for makefile.

recipe_type=python is used for Programs based on Python Distutils. Some options are only relevant for python:

Array of options to be passed to the Python Distutils build script. This works similarly to the configure_options array.

Specify the same for the Python build script. If none is given, Compile tries a few default ones, such as setup.py.

recipe_type=scons is used for Programs based on SCons. Some options are only relevant for scons:

Variables to be passed to scons.

recipe_type=xmkmf is used for Programs based on X11 Imake. No options are relevant only for xmkmf.

recipe_type=manifest is used to directly copy appropriate files from the archive into place. Some options are only relevant for manifest:

Specify which files should be copied over, and to where. Destination is relative to target.

manifest=(
    "some_script:bin"
    "include/a_header.h:include"
    "lib/libfoo.so:lib"
    "some_script.1:man/man1/some_script.1"
)

recipe_type=meta only depends on other Recipes. All included recipes are built relative to the same installation prefix. Some options are only relevant for meta:

In a meta-recipe, this array holds the list of recipes that should be built to constitute the complete program. Recipe names should be in the format "App--1.0". The order of the entries in the array is significant, because it is the order in which the recipes are built.
Note: be careful with the order, because re-building a meta-package that's already installed may cover up ordering problems. Indicates that this recipe is generally included as part of a meta-recipe. Unless Compile is called with "-i"/"--install-separately", the Program will be installed into the parent Program's directory. Implies keep_existing_target. In meta-recipes, Compile only calls UpdateSettings for the meta-recipe and not for its sub-recipes. Set this variable to override this behavior and have UpdateSettings called in every sub-recipe.
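To tie these declarations together, here is a sketch of what a minimal configure-type recipe might look like. The program, URL, and flags are invented for illustration, and the url field name is an assumption about the recipe format; recipe_type, configure_options, and override_default_options are the declarations documented above.

url="https://example.org/hello-1.0.tar.gz"  # hypothetical source tarball
recipe_type=configure
configure_options=(
    "--disable-static"   # illustrative flags only
    "--enable-shared"
)

Because PrepareProgram already detects defaults such as --prefix, a recipe normally lists only package-specific flags, as here, unless override_default_options is declared.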
<urn:uuid:1076ec0f-4698-4ad0-8bd5-18bf5be6e0d8>
2.8125
951
Documentation
Software Dev.
35.707453
95,601,028
Authors: M. Hladik
Affiliation: Brewer Science, United States
Pages: 547 - 549
Keywords: CNT, dispersion, fugitive
Carbon nanotubes (CNTs) do not disperse very easily in organic media or at low viscosities. Thus, to produce a nanomaterial coating that can be applied by various methods, such as screen printing or ink-jet printing, the dispersed material must have the appropriate chemical properties and the dispersion method must be carefully selected. Moreover, the dispersion medium’s chemistry must allow for formulation of non-viscous and viscous materials alike and possibly function as a fugitive material to avoid interfering with the properties of the deposited nanomaterials. Most electronic nanomaterials have properties such that any surfactant or polymer used in the dispersion medium to disperse the nanomaterials, stabilize the dispersion, or increase its viscosity hinders the electronic nature of the nanomaterial and therefore must be removed. The nanomaterial’s chemistry must also allow for adjustments to the formulation that make the coating thermally stable at up to 300°C or give it the ability to be fugitive at as low as 130°C, while still being stable enough at room temperature to maintain its form and function. In other words, dispersing CNTs poses many challenges. In this work, we have devised a polymer-based system for dispersing CNTs that addresses these issues.
<urn:uuid:58135298-2f59-4015-8018-ee22d514c1ed>
2.90625
320
Academic Writing
Science & Tech.
15.257857
95,601,030
Global warning: Fossilised rainforest reveals how climate change can devastate the planet

Prehistoric rainforests which grew on Earth more than 300 million years ago were wiped out in the first known case of global warming, according to a new study. British scientists believe vast swathes of tropical trees and plants were destroyed as the planet's climate rose to extreme temperatures. They were then replaced over time by huge swathes of hardy ferns. The researchers made the link after studying the fossilised remains of long-extinct plants, trees and mosses, which were discovered in 2005. They were found inside a network of underground coal mines near Chicago in the U.S.

Fossilised ferns were preserved in the tunnels of more than 50 coal mines in Illinois, U.S.

Scientists at the University of Bristol said the discovery would be invaluable in the fight to stave off global warming and save modern rainforests like the Amazon. Speaking yesterday at the seven-day BA Festival of Science in Liverpool, Dr Falcon-Lang from the Department of Earth Sciences said: 'It is the first tropical rainforest to evolve in this manner and will go a long way to providing information on how current rainforests will be affected by global warming. In geological terms the transformation 300 million years ago happened very abruptly - in real terms probably over the course of hundreds or thousands of years. It's very difficult to know how these changes will affect rainforests like the Amazon but shows the collapse could be very abrupt indeed.'

Dr Falcon-Lang is set to give a lecture about his findings at the Festival later this week. The research, which was supported by the Paleontological Association, found that huge areas of the US were blanketed in lush, tropical rainforests. Long-extinct species of trees, plants and shrubs grew to more than 100 ft in height and thrived during the Carboniferous period, pre-dating the first dinosaurs by 50 million years. Dr Falcon-Lang and his team discovered the fossilised remains in 2005 but unveiled the incredible link between their extinction and global warming for the first time last week. They claim the forests died out because of extreme heat and were gradually replaced by huge swathes of hardy ferns. The scientists will now spend the next five years researching the cause of the global warming, thanks to a grant from the Natural Environment Research Council, based in Swindon, Wilts.
<urn:uuid:bb4dbdb7-d58c-4948-b348-d9ed5b536c5b>
3.515625
681
Truncated
Science & Tech.
37.034158
95,601,053
In a turbofan engine, the baseballs that the engine is throwing out are air molecules. The air molecules are already there, so the airplane does not have to carry them around at least. An individual air molecule does not weigh very much, but the engine is throwing a lot of them and it is throwing them at very high speed. Thrust comes from two components in the turbofan:
- The gas turbine itself - Generally a nozzle is formed at the exhaust end of the gas turbine to generate a high-speed jet of exhaust gas. A typical speed for air molecules exiting the engine is 1,300 mph (2,092 kph).
- The bypass air generated by the fan - This bypass air moves at a slower speed than the exhaust from the turbine, but the fan moves a lot of air.
As you can see, gas turbine engines are quite common. They are also quite complicated, and they stretch the limits of both fluid dynamics and materials sciences. If you want to learn more, one worthwhile place to go would be the library of a university with a good engineering department. Books on the subject tend to be expensive, but two well-known texts are "Aircraft Gas Turbine Engine Technology" and "Elements of Gas Turbine Propulsion." There is a surprising amount of activity in the home-built gas-turbine arena, and you can find other people interested in the same topic by participating in newsgroups or mailing lists on the subject.
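To put rough numbers on this momentum picture, here is a small Python sketch of the estimate thrust = mass flow x velocity change for the two components. Every figure except the roughly 1,300 mph (about 580 m/s) core exhaust speed quoted above is an assumption chosen purely for illustration.

# Rough turbofan thrust estimate from momentum change: F = mdot * (v_out - v_in)
flight_speed = 250.0     # m/s, assumed cruise speed
core_flow = 50.0         # kg/s through the gas turbine core (assumed)
core_exhaust = 580.0     # m/s, roughly the 1,300 mph quoted above
bypass_flow = 500.0      # kg/s moved by the fan (assumed high-bypass design)
bypass_exhaust = 300.0   # m/s, slower than the core jet (assumed)

core_thrust = core_flow * (core_exhaust - flight_speed)
bypass_thrust = bypass_flow * (bypass_exhaust - flight_speed)
print(f"core jet:   {core_thrust / 1000:.1f} kN")
print(f"bypass air: {bypass_thrust / 1000:.1f} kN")
print(f"total:      {(core_thrust + bypass_thrust) / 1000:.1f} kN")

With these assumed numbers the slower bypass stream contributes more thrust than the hot core jet, which is exactly the point made above: the fan moves a great deal of air.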
<urn:uuid:42f8a612-3382-4cec-9fc4-8e4106b28bf2>
4.09375
334
Knowledge Article
Science & Tech.
47.082212
95,601,083
Shaker-type K(+) channels in plants display distinct voltage-sensing properties despite sharing sequence and structural similarity. For example, an Arabidopsis K(+) channel (SKOR) and a tomato K(+) channel (LKT1) share high amino acid sequence similarity and identical domain structures; however, SKOR conducts outward K(+) current and is activated by positive membrane potentials (depolarization), whereas LKT1 conducts inward current and is activated by negative membrane potentials (hyperpolarization). The structural basis for the "opposite" voltage-sensing properties of SKOR and LKT1 remains unknown. Using a screening procedure combined with random mutagenesis, we identified in the SKOR channel single amino acid mutations that converted an outward-conducting channel into an inward-conducting channel. Further domain-swapping and random mutagenesis produced similar results, suggesting functional interactions between several regions of SKOR protein that lead to specific voltage-sensing properties. Dramatic changes in rectifying properties can be caused by single amino acid mutations, providing evidence that the inward and outward channels in the Shaker family from plants may derive from the same ancestor.
<urn:uuid:ac7018b7-ba34-42a5-baab-69a93ba82958>
2.953125
260
Academic Writing
Science & Tech.
-1.434887
95,601,085
Researchers were able to predict a violent storm 90 minutes before it hit
Forecast system being improved

Researchers running highly detailed simulations using satellite images, radar and ground-based weather stations were able to predict a specific violent storm 90 minutes before it hit a western Oklahoma town and killed a man two months ago. The National Severe Storms Laboratory in Norman, Okla., said Friday the still-experimental forecast system could give emergency planners up to three hours’ notice of upcoming bad weather, including tornadoes. Such notice is key for hospitals, schools and other places where large crowds gather.

“The theoretical groundwork was laid in the 1980s and 1990s,” said Patrick Skinner, a research meteorologist with the University of Oklahoma’s Cooperative Institute for Mesoscale Meteorological Studies. “When this project started in the late 2000s, there was some worry this wasn’t feasible. We are becoming more confident that we can do it.”

After general forecasts showed that severe weather could develop in the eastern Texas Panhandle or western Oklahoma on May 16, National Severe Storms Laboratory meteorologists in central Oklahoma ran data through 36 simulations, tweaking the data for each scenario. “On May 16, we had a large number that all predicted the same evolution of a supercell. That gave us a lot of confidence,” Skinner said. For the first time, the National Weather Service used a “Warn on Forecast” to notify emergency planners around Elk City, Okla., that tornado warnings would likely be issued later in the day. Todd Lindley, the science operations officer at the Norman weather office, said the special weather advisory was essentially “forecasting that we would be issuing warnings.” Beckham County, Okla., emergency managers sounded sirens 30 minutes before the tornado hit. Currently, most communities receive about 13 minutes’ notice that a tornado is on the way. “This could open up a new door for a very detailed increased lead time,” said Steve Koch, director of the National Severe Storms Laboratory. “That’s the direction we’re heading.”

Using computer models to predict weather isn’t new. General forecasts sent out every day are created with the help of computers, but detailed forecasts such as the one at Elk City require the researchers to run millions of calculations in a short period of time. In addition to images from satellites, radar units and weather stations, “the one thing we need is computational power,” Skinner said. In their work, the lab moves about 1.2 terabytes of data a day to ensure that modeling is done in a timely manner. After all, if it takes 12 hours to run the various simulations, the storm will be gone before any prediction comes out. “We aim to do this in a half-hour,” Skinner said.

The May 16 tornado formed from a classic supercell in the relatively desolate Plains. Forecasting in other areas of the country might not be so easy because storms can form under several conditions such as cold fronts and sharp contrasts between dry and humid air. After this year’s experiment period was over, a tornado hit Jonesboro in northeastern Arkansas on July 3. It was never detected ahead of time. “We had a little spin-up tornado that never showed up on the radar,” said Jeff Presley, director of the E911 system in Jonesboro. “It got really dark.
We anticipated roads getting flooded, then people started calling saying there were power outages, and trees down, then a roof off an apartment complex.” Such pop-up storms will often elude forecasters. “Smaller storms like the 30-second tornado, that is an extremely big challenge,” said Koch, the Severe Storms Laboratory director. At a minimum, the researchers said they expect to do a better job reading the atmosphere ahead of a storm. “It’s not a game changer at this point because the system is still quite experimental, but it is a big step,” Lindley said.

A home sits destroyed in Elk City, Okla., on May 17 after a tornado swept through the area, killing one. Researchers at the National Weather Center in Norman, Okla., said Friday they were able to tell 90 minutes beforehand that a specific storm cell would cause significant weather in the area.
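The ensemble logic described above - run many simulations from slightly perturbed inputs and treat strong agreement as confidence - can be sketched in a few lines of Python. This is a toy stand-in, not the laboratory's actual model, and every number in it is an assumption.

import random

def toy_storm_model(instability, shear):
    # Hypothetical stand-in for one convection-allowing model run:
    # returns True if this member produces a rotating storm.
    return instability * shear > 0.5

observed_instability = 0.9   # assumed analysis values
observed_shear = 0.7
members = []
for _ in range(36):          # 36 members, as run on May 16
    i = observed_instability + random.gauss(0, 0.1)  # perturb the inputs
    s = observed_shear + random.gauss(0, 0.1)
    members.append(toy_storm_model(i, s))

agreement = sum(members) / len(members)
print(f"{agreement:.0%} of members produce a supercell")

In the real system each member is a full numerical weather model assimilating radar, satellite and surface data; the sketch only captures the "large number all predicted the same evolution" reasoning that gave forecasters confidence.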
<urn:uuid:8723da2e-b460-4763-9b30-a3ffa8d9f44d>
2.90625
925
News Article
Science & Tech.
42.65654
95,601,102
Seventeen Rare Siamese Crocodiles Released in Lao PDR by WCS and Partners
Fewer than 1,000 critically endangered Siamese crocodiles remain in the wild

The Wildlife Conservation Society announced today the successful release of 17 juvenile critically endangered Siamese crocodiles into a protected wetland in Lao PDR. The one-to-two-year-old crocodiles, which range between 50-100 cm (20-39 inches) in length, were raised in facilities managed by local communities working with WCS to protect the endangered reptiles and their habitat. The juvenile crocodiles were released this week into the Xe Champhone wetland, Than Soum village, Savannakhet Province. This is one of two RAMSAR wetland sites in the country. Lao PDR became a signatory to the RAMSAR convention in 2010. A ceremony observing cultural traditions was held prior to the release and involved participants from local communities, government and WCS staff. Local communities have traditional beliefs about Siamese crocodiles, and events on the day included welcoming the crocodiles to the village area and wishing both them and community residents good luck in the future. Following the completion of the release ceremony, the crocodiles were transported by boat into the heart of the wetland complex that is managed by local communities to provide habitat and protect the species. It is estimated that there may be fewer than 1,000 Siamese crocodiles remaining in the wild, with a significant proportion of this population located in Lao PDR. The release of these crocodiles is the culmination of several years of conservation action implemented by WCS, local communities, and the Government of Lao PDR, Ministry of Natural Resources and Environment, Department of Forest Resources and Environment.

Alex McWilliam of the WCS’s Lao PDR Program said: “We are extremely pleased with the success of this collaborative program and believe it is an important step in contributing to the conservation of the species by involving local communities in long term wetland and species management.”

Classified as Critically Endangered by the IUCN, the Siamese crocodile grows up to 10 feet in length. The species has been eliminated from much of its former range through Southeast Asia and parts of Indonesia by overhunting and habitat degradation and loss. WCS’s Lao PDR Program designed and implemented the Community-based Crocodile Recovery and Livelihood Improvement Project, whose goal is the recovery of the local Siamese crocodile population and restoration of associated wetlands, linked by socio-economic incentives that improve local livelihoods. The program has three key objectives: contributing to local livelihoods by improving coordination of water resource use and zoning of lands used in local agriculture; conserving and restoring crocodile wetland habitat important for local livelihoods, crocodiles, and other species; and replenishing the crocodile population in the wetland complex and surveying and monitoring the current population. The program has worked with nine villages – each village has a “Village Crocodile Conservation Group” (VCCG) to coordinate implementation of program activities in the Xe Champone wetland complex and surrounding areas. The program has received extensive financial support from MMG Lane Xang Minerals Limited Sepon. The Critical Ecosystem Partnership Fund and IUCN support ongoing components of the program.
The Critical Ecosystem Partnership Fund is a joint initiative of l’Agence Francaise de Développement, Conservation International, the European Union, the Global Environment Facility, the Government of Japan, the John D. and Catherine T. MacArthur Foundation, and the World Bank. A fundamental goal is to ensure society is engaged in biodiversity conservation.

Wildlife Conservation Society (WCS)
MISSION: WCS saves wildlife and wild places worldwide through science, conservation action, education, and inspiring people to value nature. VISION: WCS envisions a world where wildlife thrives in healthy lands and seas, valued by societies that embrace and benefit from the diversity and integrity of life on earth. To achieve our mission, WCS, based at the Bronx Zoo, harnesses the power of its Global Conservation Program in more than 60 nations and in all the world’s oceans and its five wildlife parks in New York City, visited by 4 million people annually. WCS combines its expertise in the field, zoos, and aquarium to achieve its conservation mission. Visit: www.wcs.org; http://www.facebook.com/TheWCS; http://www.youtube.com/user/WCSMedia Follow: @thewcs.

The MacArthur Foundation supports creative people and effective institutions committed to building a more just, verdant, and peaceful world. In addition to selecting the MacArthur Fellows, the Foundation works to defend human rights, advance global conservation and security, make cities better places, and understand how technology is affecting children and society. More information is at www.macfound.org.

Director of Communications Stephen Sautner
<urn:uuid:2f7b5091-5eda-4f6f-afc8-f9a0f08bde04>
3.265625
1,690
Content Listing
Science & Tech.
32.496055
95,601,108
As the decline of pollinator populations continues, the risk to these insects and animals is becoming more and more apparent. With this information becoming newsworthy, Environmental Education in Georgia has announced that the Fernbank Science Center is now accepting milkweed seeds from those able to collect the seed pods from surrounding areas. If citizens are able to collect and ship the milkweed seeds, the new plants grown from them will be put towards replenishing the habitats of vital pollinators.

Monarch numbers at the overwintering sites in Mexico were reported to be at an all-time low during the winter of 2013. One factor could be the upkeep of large farms in the United States, where herbicides and pesticides kill milkweed, which monarchs require for reproduction. As milkweeds are the only plants on which monarchs will lay their eggs, and the only source of food for the larvae when they hatch, the preservation of milkweeds is very important. Similarly, Georgia's suitable habitats are rare and not sufficient for large populations of monarch butterflies. The citizen initiative to send in these milkweed seeds will help the native plants of Georgia in upcoming years. The Fernbank Science Center is partnering on the project with Monarch Watch, which will help propagate these native species. Obtaining these seeds can revitalize the habitats of monarchs across Greater Atlanta and the outskirts of Georgia.

Please visit this website to help you identify your Georgia native milkweed!

If you, or anyone you know, could help with the collection of milkweed seeds in Georgia, the Fernbank Science Center asks that you include: your name, street address, email, date, county, state where the seed was collected, and species.

Send seeds to contact: Trecia Neal, Fernbank Science Center, Milkweed Seeds, 156 Heaton Park Dr. NE, Atlanta, GA 30307

On behalf of the Monarchs of Georgia, the Fernbank Science Center, and the Greater Atlanta Pollinator Partnership, thank you for your assistance and participation!
<urn:uuid:eb6dac41-dccf-4f03-9e1c-c4c5bbda9327>
3.5
442
News (Org.)
Science & Tech.
32.804992
95,601,118
HTML5 is a markup language (a markup language is a set of markup tags) used for structuring and presenting content on the World Wide Web. It is the fifth revision and newest version of the HTML standard. It offers new features that provide not only rich media support, but also enhanced support for creating web applications that can interact with the user, his/her local data, and servers more easily and effectively than the previous version.

How it is different from its predecessors

There is a term used frequently in HTML called "tag soup", which refers to structurally and syntactically incorrect code. This malformed code still works. Almost 90% of webpages on the internet are malformed in some way or another, because no rules were ever defined for handling tag soup. Previously, any new browser had to test all such malformed pages in existing browsers and reproduce or reverse-engineer their error-handling mechanisms to display a webpage's content correctly. HTML5 is an attempt to discover and codify this error handling, so that content displays consistently across browsers.

HTML5 reaches almost all devices, such as desktop, mobile, TV, etc. It can also be used to create cross-platform mobile applications, due to its low power usage.

HTML5 introduced Application Programming Interfaces (APIs) for complex web applications. To make it multimedia- and graphics-friendly, the <video>, <audio> and <canvas> elements were added. For better document semantics, elements such as <section>, <article>, <header>, <footer>, <aside>, and <nav> were added. Some other attributes and elements were either removed, changed, redefined or standardized. The idea here was to make anything that previously needed a browser plugin an integral part of the browser standard itself.

Programming with HTML5

HTML programming needs nothing but a computer. There are many HTML editors available. You can even write HTML code in Notepad (for Windows) or in TextEdit (for Mac), and it is recommended to do so if you are taking your first step into coding. If you are already familiar with it, you can also use advanced editors such as Visual Studio 2010 SP1, Microsoft WebMatrix or Sublime Text, to name a few.

We will start where previous versions of HTML leave us, i.e. with the technical and practical differences.

The declaration <!DOCTYPE> helps the browser to determine both the type and version of the document and to display a web page correctly. It is not case sensitive, so you can type it any way you want, but the common declaration is <!DOCTYPE html>.

W3C dug through huge numbers of webpages and added commonly used IDs and classes as elements. These are: section, article, header, footer, main, figure, aside and nav. These are largely self-explanatory: each element names the page region it marks up. So now there is no need to define various divisions (done with <div>) and then separately mention the ID and/or class names for each division. These newly defined elements convey the information to the browser by themselves. You can also create custom elements and style them using CSS.

Below is the declaration for a canvas; give it attributes like an ID (for reference while scripting), width and height, and you are good to go.

<canvas id="myCanvas" width="300" height="300"> </canvas>

Previously, web browsers used Scalable Vector Graphics (SVG) as a standard for drawing.
The difference between canvas and SVG is that canvas renders graphics directly to the display, whereas SVG retains a complete model of the object to be rendered. This means you cannot alter graphics created with canvas; if you want to make a change, you have to start again from the beginning. In SVG, alterations are easier to implement: you can change a specific step and the browser will re-render the graphics accordingly. This makes for less work, but the retained model is heavier to maintain. Multimedia is the most talked-about feature. Unlike its predecessors, which relied on plug-ins like Silverlight or Flash, HTML5 has new <audio> and <video> tags that are simpler to use and supported by all major browsers. These tags have become very popular: YouTube, which previously used Flash to play videos, has been fully redesigned to use the HTML5 video tag. It isn't perfect, however. The issue is that audio and video files need codecs to play, and different browsers support different codecs. So, as a workaround, you need to provide multiple sources of audio and video (using the src attribute) to accommodate various browsers. The browser will try the different files and play the one for which it has a supporting codec. Both audio and video tags have a common controls attribute which, when enabled, displays a play-pause button, a progress bar and volume controls; its look and feel varies with browsers. Among other attributes, both audio and video elements have autoplay, loop and preload. Other than that, you can also change the size of the player screen in the video tag using the height and width attributes.
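To make the multiple-source idea concrete, here is a minimal sketch; the file names are placeholders, not files referenced by this article:

<video width="640" height="360" controls loop>
  <!-- The browser tries each source in order and plays the first one it can decode -->
  <source src="clip.webm" type="video/webm">
  <source src="clip.mp4" type="video/mp4">
  Your browser does not support the HTML5 video element.
</video>
<audio controls>
  <source src="tune.ogg" type="audio/ogg">
  <source src="tune.mp3" type="audio/mpeg">
</audio>

The text inside the tags is shown only by browsers that do not recognise the video or audio element at all, so it doubles as a fallback message.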
<urn:uuid:6fc5466d-df5e-47b3-a5e1-4ff758e833ed>
3.703125
1,070
Documentation
Software Dev.
48.929906
95,601,122
This book is dedicated to the multiple aspects, that is, biological, physical and computational, of DNA and RNA molecules. These molecules, central to vital processes, have been experimentally studied by molecular biologists for five decades since the discovery of the structure of DNA by Watson and Crick in 1953. Recent progress (e.g. the use of DNA chips, manipulation at the single-molecule level, the availability of huge genomic databases...) has revealed a pressing need for theoretical modelling. Further progress will clearly not be possible without an integrated understanding of all DNA and RNA aspects and studies. The book is intended to be a desktop reference for advanced graduate students or young researchers willing to acquire a broad interdisciplinary understanding of the multiple aspects of DNA and RNA. It is divided into three main sections: The first section comprises an introduction to the biochemistry and biology of nucleic acids. The structure and function of DNA are reviewed in R. Lavery's chapter. The next contribution, by V. Fritsch and E. Westhof, concentrates on the folding properties of RNA molecules. The cellular processes involving these molecules are reviewed by J. Kadonaga, with special emphasis on the regulation of transcription. These chapters do not require any preliminary knowledge in the field (except that of elementary biology and chemistry). The second section covers the biophysics of DNA and RNA, starting with basics in polymer physics in the contribution by R. Khokhlov. A large space is then devoted to the presentation of recent experimental and theoretical progress in the field of single-molecule studies. T. Strick's contribution presents a detailed description of the various micro-manipulation techniques, and reviews recent experiments on the interactions between DNA and proteins (helicases, topoisomerases, ...). The theoretical modeling of single molecules is presented by J. Marko, with special attention paid to the elastic and topological properties of DNA. Finally, advances in the understanding of electrophoresis, a technique of crucial importance in everyday molecular biology, are exposed in T. Duke's contribution. The third section provides an overview of the main computational approaches to integrate, analyse and simulate molecular and genetic networks. First, J. van Helden introduces a series of statistical and computational methods allowing the identification, from sets of promoter sequences controlling co-expressed genes, of short nucleic fragments putatively involved in the regulation of gene expression. Next, the chapter by Samsonova et al. connects this issue of transcriptional regulation with that of the control of cell differentiation and pattern formation during embryonic development. Finally, H. de Jong and D. Thieffry review a series of mathematical approaches to model the dynamical behaviour of complex genetic regulatory networks. This contribution includes brief descriptions and references to successful applications of these approaches, including the work of B. Novak on the dynamical modelling of the cell cycle in different model organisms, from yeast to mammals.
<urn:uuid:0a919782-e983-40d1-9b10-f93d693440a1>
2.765625
584
Truncated
Science & Tech.
26.129359
95,601,159
|Preferred IUPAC name||Formic acid|
|Systematic IUPAC name||Methanoic acid|
|Other names||Hydrogen carboxylic acid|
|Molar mass||46.03 g/mol|
|Density||1.220 g/cm3 (20 °C)|
|Melting point||8.4 °C (47.1 °F; 281.5 K)|
|Boiling point||100.8 °C (213.4 °F; 373.9 K)|
|Solubility||Reacts with amines; miscible with acetone, diethyl ether, ethanol, ethyl acetate, glycerol, methanol; partially soluble in benzene, toluene, xylene|
|Vapor pressure||35 mmHg (20 °C)|
|Safety data sheet||Sigma-Aldrich (85% solution)|
|Flash point||69 °C (156 °F; 342 K)|
|Lethal dose or concentration (LD, LC)||LD50 (median dose): 700 mg/kg (mouse, oral); 1,100 mg/kg (rat, oral); 4,000 mg/kg (dog, oral). LC50 (median concentration): 7,853 ppm (rat, 15 min); 3,246 ppm (mouse, 15 min)|
|Related compounds||Acetic acid|
Except where otherwise noted, data are given for materials in their standard state (at 25 °C [77 °F], 100 kPa). Formic acid, or methanoic acid, is an organic compound with the chemical formula HCOOH and the simplest carboxylic acid. In the presence of concentrated sulfuric acid it decomposes into carbon monoxide and water: HCOOH → CO + H2O (the H2SO4 acts as a dehydrating agent and is not consumed). As with other carboxylic acids, formic acid is easily esterified with primary alcohols, often forming pleasant-smelling compounds such as methyl formate. Because formic acid is a relatively strong carboxylic acid, esterifications with it generally don't require an added acid catalyst. In organic chemistry, formic acid is used to introduce a formyl group. Formic acid is a clear liquid with a highly irritating, pungent odor, and is responsible for the painful sensation of many ant stings. Its boiling point is nearly the same as that of water (100.8 °C), though it forms an azeotrope with water (containing about 22.4% water) that boils instead at 107.3 °C. Since formic acid slowly decomposes when boiled, distillation must be done at low temperature and under near vacuum (25 °C at 40 mmHg). While not typically found as a consumer product, formic acid can be bought online for relatively low prices, typically mixed with about 5-10% water. A good supplier is Duda Diesel. Formic acid can also be purchased at beekeeping stores, at 60-85% concentration, where it is used for the treatment of varroosis. Aqueous formic acid can be distilled from a mixture of anhydrous glycerol and oxalic acid, producing carbon dioxide as a byproduct. It can also be obtained by acidifying sodium formate with dilute sulfuric acid, producing formic acid in solution that can then be distilled over. Using concentrated sulfuric acid or heating too strongly will produce carbon monoxide and potentially create a very dangerous situation. Uses:
- Make formate esters
- Generate carbon monoxide for reactions
- Formic anhydride synthesis (stable only as a solution in diethyl ether)
Concentrated solutions of formic acid are corrosive to human skin, as well as to the nose, mouth and eyes, and they also slowly decompose to form water and carbon monoxide, which can cause an explosion from pressure buildup in a sealed container. Adding a dehydrating agent will also generate large amounts of carbon monoxide, a deadly gas that is impossible to detect with human senses. Formic acid is not inherently very toxic if ingested, though long exposure to it by any means can cause chronic bodily effects. It is especially important that formic acid is kept away from the eyes, as it readily damages the optic nerve and can cause permanent blindness. Formic acid should be stored in closed bottles, away from any heat source. Keep it away from dwelling areas, as it slowly gives off carbon monoxide over time.
<urn:uuid:889951e3-bb46-4591-ad3c-dc49efbcc20d>
2.578125
962
Knowledge Article
Science & Tech.
49.829289
95,601,196
To get a super-detailed X-ray view inside a cell — right down to the individual molecules — scientists dunk the cell they're looking at in preservative chemicals. That not only kills the cell, it changes its internal structure ever so slightly, meaning researchers aren't getting an exact look at the cell's natural state. Now, scientists at Germany's DESY Research Center have found a way around that, with a technique that's produced the world's first X-ray images of an individual living cell. In a paper published this week in Physical Review Letters, the team describes a system for keeping cancer cells from the adrenal cortex alive during X-ray study. They grew the cells on silicon nitride plates, which are nearly invisible to X-rays, pumping nutrients to the cells and evacuating metabolic waste through incredibly tiny 0.5mm channels. Because long exposure to high-energy X-rays can damage or kill a living cell, the researchers used tiny, 0.05-second X-ray blasts to produce images so clear that even nanometre-scale structures are visible. When compared with images of chemically fixed cells, these X-rays prove that the chemical fixation process makes significant changes to the tiny, 30- to 50-nanometre structures within the cell. Yes, technically speaking, the X-ray you got at your last dentist's appointment was looking at (and through) living cells. But the high-energy, super-fine X-rays needed to view nanometre-size structures have never produced images of living cells before. A technique like this could revolutionise our view of the structures inside cells. In fact, by proving that standard fixation techniques change the cellular structure, it already has. [DESY via EurekAlert]
<urn:uuid:9f52956e-01b4-4295-9b1e-dafe32aba0f0>
4.03125
361
News Article
Science & Tech.
44.293811
95,601,200
Migratory songbirds enjoy the best of both worlds—food-rich summers and balmy winters—but they pay for it with a tough commute. Their twice-a-year migrations span thousands of miles and are the most dangerous, physically demanding parts of their year. Surprisingly, for many North American species the best route between summer and winter homes is not a straight line, according to new research published in the Proceedings of the Royal Society B. In spring, the study shows, birds follow areas of new plant growth—a so-called "green wave" of new leaves and numerous insects. In fall, particularly in the western U.S., they stick to higher elevations and head directly southward, making fewer detours along the way for food. "We're discovering that many more birds than anyone ever suspected fly these looped migrations, where their spring and fall routes are not the same," said Frank La Sorte, a research associate at the Cornell Lab of Ornithology. "And now we're finding out why—they have different seasonal priorities and they're trying to make the best of different ecological conditions." The research—the first to reveal this as a general pattern common to many species—may help land managers improve conservation efforts by improving their understanding of how birds use habitat seasonally. "All this information helps us understand where we should focus conservation across time," La Sorte said. "Then we can drill down and make local and regional recommendations. In the West particularly, the systems are very complicated, but we're starting to build a nice foundation of knowledge." In a 2013 study, La Sorte and his colleagues discovered that many species of North American birds flew looping, clockwise migration routes. But they could only partially explain why. For eastern species, it was clear from atmospheric data that the birds were capitalizing on strong southerly tailwinds in spring over the Gulf of Mexico and less severe headwinds in fall. By adding the effect of plant growth, the new study helps explain why western species also fly looped routes. The study examined 26 species of western birds, including the Rufous Hummingbird and Lazuli Bunting, and 31 species of eastern birds such as the Wood Thrush and Black-throated Blue Warbler. Birds on both sides of the continent showed a strong tendency to follow the flush of green vegetation in spring. In the relatively continuous forests of the eastern U.S. this tight association with green vegetation persisted all summer and into fall. In the West, however, green space occurs along rivers and mountains, and is often isolated by expanses of desert or rangeland. "Western migrants can't necessarily cross big stretches of desert to get to the greenest habitat when it's the most green," La Sorte said. "So in spring, they stick to the foothills where insects are already out. But in fall they tend to migrate along browner, higher-elevation routes that take them more directly south." For decades scientists have known that some herbivorous species, including geese and deer, follow the "green wave" of spring vegetation on their northward migrations. La Sorte's study is the first to extend that idea to insectivorous species, which are tiny (most weigh an ounce or less) and much harder to study using tracking devices. The researchers solved that problem by using sightings data—lots of it—to substitute for tracking data. 
They analyzed 1.7 million crowdsourced bird checklists from eBird, a free online birding-list program, to construct a detailed picture of species occurrence for each week of the year. Then they used satellite imagery to determine the ecological productivity — or amount of new plant growth — across the U.S. What emerged was a composite picture of where each species occurred, week by week, which the scientists then compared with satellite-derived estimates of where the greenest or most productive habitats were. "Up till eBird data became available, people have had to look at migration on a species by species basis, by tracking individual birds," La Sorte said. "We're bringing in the population perspective using big data, and that's enabling us to describe general mechanisms across species." In addition to La Sorte, the paper's authors include Daniel Fink, Wesley Hochachka, and Steve Kelling of the Cornell Lab, and John DeLong of the University of Nebraska, Lincoln. The research was supported in part by grants from the Leon Levy Foundation, Wolf Creek Foundation, and the National Science Foundation. Pat Leonard | EurekAlert!
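A minimal sketch of the kind of weekly occurrence-versus-greenness analysis described above, with hypothetical file and column names standing in for the eBird and satellite data (this is not the authors' actual pipeline):

import pandas as pd

# Hypothetical inputs: one row per checklist with a species presence flag,
# plus a table of weekly mean NDVI ("greenness") for each grid cell.
checklists = pd.read_csv("ebird_checklists.csv")  # columns: week, cell_id, species, present
ndvi = pd.read_csv("weekly_ndvi.csv")             # columns: week, cell_id, ndvi

# Weekly occurrence frequency per grid cell for one species.
occ = (checklists[checklists.species == "Wood Thrush"]
       .groupby(["week", "cell_id"])["present"].mean()
       .reset_index(name="occurrence"))

# Join with greenness and measure how tightly occurrence tracks it each week.
merged = occ.merge(ndvi, on=["week", "cell_id"])
weekly_corr = merged.groupby("week").apply(lambda g: g.occurrence.corr(g.ndvi))
print(weekly_corr)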
<urn:uuid:56cb17d5-d02a-4b4b-a35e-741ebbee7944>
3.875
1,538
Content Listing
Science & Tech.
43.633846
95,601,201
The effectiveness of crystalline pharmaceuticals is not only influenced by molecular composition; the structure of the crystals is also important, because it determines both the solubility and the rate of dissolution, which in turn affect the bioavailability. Researchers from Cambridge, Massachusetts (USA) have recently developed a method by which different crystals can be separated by their density in a magnetic field. In the journal Angewandte Chemie, they have now demonstrated the extraordinary efficiency of separation through "magnetic levitation". Many organic substances crystallize in multiple crystal structures known as polymorphs. Drugs are not the only class of products for which this can lead to problems: different crystal structures can lead to color variation in pigments and dyes, and in explosives they can lead to changes in sensitivity. It is not always possible to control the crystallization process to obtain only the desired polymorph. Clean separation is often difficult, and occurs either by chance or through long and complex procedures. A team led by Allan S. Myerson at the Massachusetts Institute of Technology and George M. Whitesides at Harvard University has recently developed a simple method that makes it possible to separate polymorphs conveniently and reliably within minutes through magnetic levitation. The technique is based on the fact that different crystal modifications almost always have different densities. Their clever method works like this: two magnets are placed one over the other, 4.5 cm apart, with like poles facing. This produces a magnetic field with a linear gradient and a minimum in the middle, between the two magnets. The crystals to be separated are suspended in a solution of paramagnetic ions and placed in a tube within the magnetic field. Gravity causes the crystals to sink toward the bottom of the tube. In doing so, a crystal "displaces" its own volume of the paramagnetic fluid "upwards". Yet this is unfavorable, because the paramagnetic fluid is attracted by the magnet, and the attraction gets stronger closer to the face of the magnet. A crystal therefore sinks only until it reaches a height above the magnet where the gravitational force and the magnetic attraction on the equivalent volume of paramagnetic fluid are balanced. At this point, the crystal "floats" in the fluid. As the strength of the gravitational force depends on the density of the crystal, the floating point is different for different crystal modifications. The solution is then removed from the tube with a cannula and divided into multiple fractions. Through separation of different polymorphs of 5-methyl-2-[(2-nitrophenyl)amino]-3-thiophenecarbonitrile, sulfathiazole, carbamazepine, and trans-cinnamic acid, the scientists have presented impressive evidence of the efficiency of their new technique, which allows for the separation of crystal forms with a difference in density as low as 0.001 g/cm3. About the author: George M. Whitesides | Angewandte Chemie
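The force balance described above has a simple closed form if the field really is linear between the magnets: the crystal settles where its excess weight equals the magnetic body force on the displaced fluid. A minimal sketch, with all material values assumed for illustration (they are not taken from the paper):

import numpy as np

MU0 = 4e-7 * np.pi   # vacuum permeability (T*m/A)
G = 9.81             # gravitational acceleration (m/s^2)

def levitation_height(rho_s, rho_m, chi_s, chi_m, b0=0.4, d=0.045):
    """Equilibrium height (m above the bottom magnet) of a crystal of
    density rho_s (kg/m^3) and susceptibility chi_s suspended in a
    paramagnetic solution (rho_m, chi_m), assuming the field decreases
    linearly from b0 at each magnet face to zero midway between them."""
    return d / 2 + (rho_s - rho_m) * G * MU0 * d**2 / (4 * (chi_s - chi_m) * b0**2)

# Two hypothetical polymorphs differing by 0.001 g/cm^3 (= 1 kg/m^3):
for rho in (1150.0, 1151.0):
    h = levitation_height(rho, rho_m=1100.0, chi_s=-9e-6, chi_m=3.5e-4)
    print(f"{rho} kg/m^3 floats at {h * 1000:.2f} mm")

With these assumed values the two polymorphs settle roughly a tenth of a millimetre apart, which gives a feel for how a 0.001 g/cm3 density difference can become a visible separation in the tube.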
<urn:uuid:f581f2f8-80c9-413d-b8ab-10548dfead0c>
3.4375
1,190
Content Listing
Science & Tech.
34.163611
95,601,202
The dew point temperature is the temperature to which the air must be cooled at constant pressure for it to become saturated. The higher the dew point, the more uncomfortable people feel. This is because people cool themselves by sweating, and when the dew point is high it becomes more difficult for sweat to evaporate off the skin. High summertime dew point readings were frequent news stories during the sweltering summer months of the late 1990s to mid 2000s. Are summertime Twin Cities dew points increasing? Here is a look at part of the Twin Cities dew point record that may answer some questions and bring up new ones. Dew point measurements began in September 1902 at the Minneapolis Weather Bureau in downtown Minneapolis. As far as can be determined, the dew point was measured at the top of the old U.S. Court House building. Dew point readings were measured only once per day, at 1800 hours. Later, a noon observation was added. At 1:00 pm on April 9, 1938, the official record was transferred to the Wold-Chamberlain terminal building at what would become the International Airport. The measurements were made on the roof of the terminal building until 1960, when they were moved to five feet above the ground on the airport grounds. On June 1, 1996, an ASOS (Automated Surface Observing System) was installed and took over as the official measurement at the International Airport. Historical data were analyzed for the "meteorological summer", the months of June, July and August. During the summer, the minimum dew point temperature typically occurs at sunrise. There are two maxima, one just before noon and another near sunset. Thus the early evening dew point is typically not the maximum for the day, but tends to be close. The graph below shows some interesting results. The first is the minimum beginning in 1924 and lasting until 1937. This stretch of lower dew points matches well with the Dust Bowl era, when precipitation was also at a minimum. The period 1938 to 1945 corresponds with a period of higher precipitation that immediately followed the Dust Bowl. Note the similarity with the 1990s. The high dew point period of the 1990s to the early 2000s also reflects an era of higher precipitation. The lack of dry dew point years from 1993 to 2002 stands out. Both 2003 and 2004 were somewhat drier in the Twin Cities, and the dew point values reflect that accordingly. In 2005, average summer dew point temperatures returned to above the long-term mean. 2006 through 2009 fell below the long-term mean. In fact, 2009 had the lowest 1800-hour summertime average of the 107-year record, at 52.3 degrees. Dew point temperature data used to construct this graph are available as a comma-separated values (CSV) file. Maximum Daily Dew Point Temperature Records: 80-degree dew point temperatures are rare in the Twin Cities historical record. Since 1945, there have been only twenty-eight hours of 80-degree dew points recorded. Ten of those twenty-eight hours came in a ten-hour period on July 12 and 13, 1995. Eight hours are from July 17-19, 2011. The highest dew point temperature ever recorded in the Twin Cities was 82 degrees, at 3 pm and 4 pm on July 19, 2011. This broke the old record of 81 degrees at 11 am on July 30, 1999.
Maximum Observed Dew Point Temperatures for the Twin Cities in Recent Years
2003: 77 degrees F
2004: 75 degrees F
2005: 81 degrees F
2006: 76 degrees F
2007: 78 degrees F
2008: 73 degrees F
2009: 75 degrees F
2010: 79 degrees F
2011: 82 degrees F
2012: 77 degrees F
2013: 77 degrees F
2014: 76 degrees F
2015: 76 degrees F
2016: 77 degrees F
2017: 74 degrees F
2018: 78 degrees F
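The relationship between air temperature, relative humidity, and dew point can be approximated with the Magnus formula. This is a standard textbook approximation, not the observation method behind the historical record above:

import math

def dew_point_c(temp_c, rh_percent):
    """Approximate dew point (deg C) from air temperature (deg C) and
    relative humidity (percent), using Magnus coefficients over water."""
    a, b = 17.625, 243.04
    gamma = math.log(rh_percent / 100.0) + a * temp_c / (b + temp_c)
    return b * gamma / (a - gamma)

# A muggy Twin Cities afternoon: 95 deg F (35 deg C) at 55% relative humidity
print(round(dew_point_c(35.0, 55.0), 1))  # about 24.6 deg C, i.e. mid-70s deg F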
<urn:uuid:537327e8-02c4-49b4-a517-08250accd36e>
3.40625
788
Knowledge Article
Science & Tech.
67.061148
95,601,221
Challenging decades of scientific belief that the decoding of sound originates from a preferred side of the brain, UCLA and University of Arizona scientists have demonstrated that right-left differences in the auditory processing of sound start at the ear. Reported in the Sept. 10 edition of Science, the new research could hold profound implications for the rehabilitation of persons with hearing loss in one or both ears, and could help doctors enhance speech and language development in hearing-impaired newborns. "From birth, the ear is structured to distinguish between various types of sounds and to send them to the optimal side of the brain for processing," explained Yvonne Sininger, Ph.D., visiting professor of head and neck surgery at the David Geffen School of Medicine at UCLA. "Yet no one has looked closely at the role played by the ear in processing auditory signals." Scientists have long understood that the auditory regions of the two halves of the brain sort out sound differently. The left side dominates in deciphering speech and other rapidly changing signals, while the right side leads in processing tones and music. Because of how the brain's neural network is organized, the left half of the brain controls the right side of the body, and the left ear is more directly connected to the right side of the brain. Elaine Schmidt | EurekAlert!
"Put an excitation into the system and observe how it evolves." According to physicist Professor Tobias Brixner, this is the credo of optical spectroscopy.... Ultra-short, high-intensity X-ray flashes open the door to the foundations of chemical reactions. Free-electron lasers generate these kinds of pulses, but there is a catch: the pulses vary in duration and energy. An international research team has now presented a solution: Using a ring of 16 detectors and a circularly polarized laser beam, they can determine both factors with attosecond accuracy. Free-electron lasers (FELs) generate extremely short and intense X-ray flashes. Researchers can use these flashes to resolve structures with diameters on the... 13.07.2018 | Event News 12.07.2018 | Event News 03.07.2018 | Event News 13.07.2018 | Event News 13.07.2018 | Materials Sciences 13.07.2018 | Life Sciences
<urn:uuid:6f633054-e1ec-46b7-acf0-590dfcd7672d>
3.625
914
Content Listing
Science & Tech.
41.744129
95,601,222
Asked by: Martin Simpson, Huddersfield In the open ocean, whatever water evaporates must eventually return. Rivers continually wash more salt in from the land, but the sea has reached equilibrium now, and extra salt just precipitates out of solution onto the ocean floor. The concentration of salt in the Dead Sea is almost 10 times higher than the average for the rest of the oceans. This is far too salty for fish and plants, but even here there are some bacteria and fungi that can survive. The Dead Sea’s high salinity is because the water is evaporating much faster than fresh water flows in. - How much salt would I need to float in my bath? - Do seabirds drink seawater? And if so, how do they prevent salt poisoning?
<urn:uuid:234f01d7-898e-4d5c-bf90-7436f5767235>
3.4375
162
Q&A Forum
Science & Tech.
57.993706
95,601,234
Friday, March 7, 2008 Neutron stars and magnetars Considering my PhD research, I can appreciate the importance of statistics, so when there are only 12 of something - in this case, magnetars - which have been discovered in the entire universe, I can understand why a possible thirteenth is an amazing discovery (a free article may be found here). Neutron stars, objects so dense that they basically can't get any more so (they are supported merely by neutron degeneracy pressure), exist throughout the universe. They are typically the remnants of supernova explosions, a tiny, rapidly spinning core where a giant once stood. Pulsars specifically are those which, by their quick rotation, produce a telltale EM signature of rapid, regular pulses (typically in the radio frequency range). Magnetars are theorized to be neutron stars with such strong magnetic fields that "the magnetic field actually slows the star's rotation and causes starquakes that pump enough energy into the surrounding gases to generate bursts of soft gamma radiation" (NASA) that exceed, in fractions of a second, the energy output of the sun in a thousand years. They are so rare that the first was not discovered until 1979, and only twelve are known, though millions to hundreds of millions could exist. The discovery of a possible "evolutionary step" between normal (in, of course, a funny context) neutron stars, pulsars, and magnetars is therefore a momentous one. Perhaps, since this intermediate form seems to have a strong magnetic field, but not quite as strong as the twelve known magnetars, there is some mechanism by which the magnetic field strength grows throughout the neutron star's lifetime. Perhaps all magnetars begin as pulsars, or perhaps they evolve completely separately, but to differing degrees. Perhaps, since my research deals instead with novae and X-ray bursts, I should get back to work. Gavriil, F.P., Gonzalez, M.E., Gotthelf, E.V., Kaspi, V.M., Livingstone, M.A., Woods, P.M. (2008). Magnetar-like emission from the young pulsar in Kes 75. Science. DOI: 10.1126/science.1153465
<urn:uuid:1d0957ca-33db-4a8c-8b49-415b8bdc6630>
2.859375
470
Personal Blog
Science & Tech.
52.241816
95,601,256
The energies involved in radioactive decay are so large, and the nucleus is so small, that physical conditions in the Earth (i.e., temperature and pressure) cannot affect the rate of decay. The rate of decay, i.e., the rate of change of the number N of parent atoms, is proportional to the number present at any time: dN/dt = -λN. The half-life is the amount of time it takes for one half of the initial amount of the parent, radioactive isotope to decay to the daughter isotope. The oldest fossils (of bacteria) are 3.8 billion years old. Thus, if we start out with 1 gram of the parent isotope, after the passage of 1 half-life there will be 0.5 gram of the parent isotope left. After the passage of two half-lives only 0.25 gram will remain, and after 3 half-lives only 0.125 gram will remain, etc. The highest rate of carbon-14 production takes place at altitudes of 9 to 15 km (30,000 to 50,000 ft). The first lesson, Isotopes of Pennies, deals with isotopes and atomic mass. The second lesson, Radioactive Decay: A Sweet Simulation of Half-life, introduces the idea of half-life. Prior to 1905, the best and most widely accepted age of the Earth was that proposed by Lord Kelvin, based on the amount of time necessary for the Earth to cool to its present temperature from a completely liquid state. Although we now recognize lots of problems with that calculation, the age of 25 my was accepted by most physicists, but considered too short by most geologists. Recognition that radioactive decay of atoms occurs in the Earth was important in two respects: Principles of Radiometric Dating: Radioactive decay is described in terms of the probability that a constituent particle of the nucleus of an atom will escape through the potential (energy) barrier which binds it to the nucleus.
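Written out in standard notation (reconstructed here in the usual form, since the page's own equations did not survive transcription), the decay law and half-life are:

\[ \frac{dN}{dt} = -\lambda N \quad\Longrightarrow\quad N(t) = N_0\, e^{-\lambda t}, \qquad t_{1/2} = \frac{\ln 2}{\lambda} \]

so after n half-lives the remaining fraction is \(1/2^n\), which is exactly the 0.5, 0.25, 0.125 gram sequence given above for a 1 gram starting amount.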
<urn:uuid:15a7a65f-a5c2-4fbf-8aeb-f0966b820e19>
4
385
Knowledge Article
Science & Tech.
58.041681
95,601,260
The world has never had a nuclear war. Therefore, we do not know exactly what would happen. But if the major powers engaged in a nuclear war, it would probably be the last for at least a thousand years, because human civilization would be destroyed. Preventing nuclear war is the most important issue facing humankind today. On the road toward prevention, we must first do our best to estimate what would happen in a nuclear war. Keywords: Nuclear Weapon; Blast Wave; Nuclear Explosion; Electromagnetic Pulse; Ground Zero. References: 1. Hiroo Kato, "Cancer mortality," in Shigematsu and Kagan, 1986 - see Bibliography. 2. Pierce, 1989 - see Bibliography. 3. Sohei Kondo, 1988, "Mutation and cancer in relation to the atomic-bomb radiation effects," Japanese Journal of Cancer Research 79, 785-799. 4. BEIR V, 1990 - see Bibliography. 5. J. V. Neel, W. J. Schull, A. A. Awa, C. Satoh, H. Kato, M. Otake, and Y. Yoshimoto, 1990, "The children of parents exposed to atomic bombs: Estimates of the genetic doubling dose of radiation for humans," American Journal of Human Genetics 46, 1053-1072. 7. Information presented here is derived chiefly from Ehrlich, Sagan, Kennedy, and Roberts, 1984 - see Bibliography, and from references cited therein. 9. See pp. 128-130, in Mark A. Harwell and Christine C. Harwell, "Nuclear famine: The indirect effects of nuclear war," pp. 117-135, in Solomon and Marston, 1986 - see Bibliography. 10. Around the year 1000, there was a warming trend throughout the Northern Hemisphere. The average temperature rose about 3° F, and the tree line moved 60 miles farther north! (Barry Lopez, 1986, Arctic Dreams. New York: Chas. Scribner's Sons, p. 184) 11. A 1985 report of a 2-year study by the Scientific Committee on Problems of the Environment (SCOPE) of the International Council of Scientific Unions, in which more than 300 scientists from 30 countries participated, estimated that only 1% of the world population could survive without organized agriculture. (Mark A. Harwell and Christine C. Harwell, "Nuclear famine: The indirect effects of nuclear war," pp. 117-135, in Solomon and Marston, 1986 - see Bibliography.) 12. Alan Robock, 1989, "New models confirm nuclear winter," Bulletin of the Atomic Scientists 45 (September), 32-35.
<urn:uuid:81427dcb-24ad-472f-ba78-39938ad71501>
3.25
614
Truncated
Science & Tech.
64.587377
95,601,264
As the Earth’s climate continues to warm, researchers are working to understand how human-driven emissions of carbon dioxide will affect the release of naturally occurring greenhouse gases from arctic permafrost. As the perennially frozen soil continues to thaw, the increase in greenhouse gas emissions could significantly accelerate warming on Earth. An estimated 1,330 billion to 1,580 billion tons of organic carbon are stored in permafrost soils of Arctic and subarctic regions, with the potential for even higher quantities stored deep in the frozen soil. The carbon is made up of plant and animal remnants stored in soil for thousands of years. Thawing and decomposition by microbes cause the release of the greenhouse gases carbon dioxide and methane into the atmosphere. "Our big question is how much, how fast and in what form will this carbon come out," said Ted Schuur, Northern Arizona University biology professor and lead author on a paper published in Nature. The rate of carbon release can directly affect how fast climate change happens. Schuur and fellow researchers synthesized new studies to conclude that thawing permafrost in the Arctic and sub-Arctic regions will likely produce a gradual and prolonged release of substantial quantities of greenhouse gases spanning decades, as opposed to an abrupt release in a decade or less. Modern climate change is often attributed to human activities as a result of fossil fuel burning and deforestation, but natural ecosystems also play a role in the global carbon cycle. "Human activities might start something in motion by releasing carbon gases but natural systems, even in remote places like the Arctic, may add to this problem of climate change," Schuur said. During the past 30 years, temperatures in the Arctic have increased twice as fast as in other parts of the planet. Schuur and his team of researchers from around the world also present next steps for improving knowledge of permafrost carbon and how its dynamics will affect the global carbon cycle. Approaches include improving climate change models by integrating newly created databases, changing models to differentiate between carbon dioxide and methane emissions, and improving observations of carbon release from the landscape as the Arctic continues to warm. Public Affairs Coordinator Theresa Bierer | newswise
<urn:uuid:d535e3e6-fb7e-4aa8-848c-ad9149d3d326>
4.0625
1,024
Content Listing
Science & Tech.
35.299559
95,601,303
Satellite-based thermal infrared remote sensing is the main method used to acquire distributed land surface thermal information. However, the spatial resolution of existing satellite-based thermal infrared images is generally low. For example, the spatial resolution of thermal infrared imagery collected by the moderate resolution imaging spectroradiometer (MODIS) and the advanced very high resolution radiometer (AVHRR) is about 1000 m, and the resolution of thermal infrared images collected by the Geostationary Operational Environmental Satellite Imager is 4000 m. Such low spatial resolution significantly restricts the usefulness of these images in drought monitoring and other agricultural and forestry practices,1-4 particularly in mountainous regions with complex landscapes. High-resolution images can be obtained through sharpening of coarse-resolution images, under the premise that most physical information is maintained. Sharpening is an effective method used to solve the problem of the low spatial resolution of thermal images.3 Thermal image sharpening, which is also called thermal information downscaling or subpixel temperature estimation, has been an important topic in the international remote sensing field in recent years.1,2,6-15 The two following paragraphs provide an analysis of previous studies in terms of study area types and explanatory variables. The studied areas in previous studies can be classified into agricultural areas (e.g., see Refs. 2, 3, and 5), urban areas (e.g., see Refs. 11 and 16-19), and areas with complex landscapes (e.g., see Refs. 8 and 20). The performance of a given method in different landscapes has been reported.14,21 Most of these areas featured flat terrain. Thus far, studies on thermal image sharpening in mountainous areas have been relatively rare.1 Functioning as complicated energy budget systems, mountainous regions are regions characterized by a certain altitude, relative height, and gradient.22 Different from other geographical units, mountainous regions follow their own laws in weather and climate, including: (1) temperature, water vapor pressure, solar radiation, rainfall, and other factors vary with altitude.23,24 (2) Solar radiation, wind speed, and other factors vary with slope aspect and gradient.23-26 (3) Different terrains exert differentiated aggregating effects on water, mineral substances, plant litter, and other substances.22 (4) Terrain has a significant effect on plant species and growth status.22 The aforementioned influences have direct or indirect effects on the surface energy balance (SEB) in mountainous regions, and the incoming and outgoing terms of the SEB vary with topographical change. As a state variable in the surface energy budgeting process, land surface temperature (LST) in mountainous regions shows more pronounced spatial variation than in flat areas and has intrinsic relationships with more environmental factors. At the beginning of this research field, disTrad (later known as TsHARP) was the typical thermal sharpening method.27 It used the normalized difference vegetation index (NDVI) as the explanatory variable (the term "explanatory variable" used in this paper is essentially equivalent to the "kernel" and "scale factor" of similar literature, as summarized in Ref. 1). Subsequent scholars improved TsHARP and used transformed NDVI as explanatory variables [e.g., Refs.
5 and 28 used vegetation coverage (VC) as the explanatory variable] or introduced new explanatory variables (e.g., albedo11 and the normalized multiband drought index20). Some scholars still used NDVI as the explanatory variable but introduced new regression approaches or interpolation methods (such as least-median-square regression downscaling,2 regression-kriging,3 the combination of TsHARP and thin plate spline,21 and so on). An emerging tendency in this research field is to use multiple factors as explanatory variables and adopt machine learning algorithms for modeling.8,14 Most previous studies in this field used merely the optical image, without further exploring the role that external data [such as a digital elevation model (DEM), meteorological data, microwave data, and so on] play in thermal image sharpening (see the discussion in Ref. 1). As stated earlier, LST in mountainous regions follows special laws of spatial variation and is subject to many environmental factors. Viewed from this perspective, it is impossible to satisfactorily sharpen thermal images of mountainous regions if only the explanatory factors of previous studies are used. Further studies should therefore be conducted on issues like the selection of explanatory variables suited to thermal image sharpening in mountainous regions and the ways of spatially depicting these explanatory variables at high resolution. To improve the effectiveness and accuracy of the thermal sharpening technique in mountainous regions, this study proposes a method based on a geographical statistical model (GSM). Given that LST is the result of the land SEB,29,30 it is determined by land surface properties and the energy balance together. This study selected various parameters of the SEB process as the explanatory variables. These variables were spatially simulated to generate their raster layers. On this basis, the locally adapted statistical relationship between low-resolution top-of-atmosphere brightness temperature (BT) and the explanatory variables, i.e., the GSM, was established through multivariate adaptive regression splines (MARS).31 Then, the GSM was applied to the high-resolution explanatory variables, thereby obtaining a high-resolution BT image. In this paper, the GSM method is assessed by using it to sharpen a simulated coarse-resolution (1026 m) BT image to fine resolution (57 m). A series of experimental designs was made in this study to analyze the influences of explanatory variable combination, sampling method, and residual error correction on the sharpening results, as well as their influence mechanisms. Study Area and Data Source: The study area, located in western Oregon in the northwestern United States, is an approximately rectangular region between 123 and 124 deg west longitude and 43 and 44 deg north latitude. The elevation of the region is 0 to 1299 m (335 m on average), and the slope is 0 to 54.5 deg (15.9 deg on average). The main land cover types (LCTs) in the region include forest, cutover land, grassland, farmland, and water bodies. This region was selected as the study area because (1) it is a typical mountainous region, (2) it has rich LCTs and a rather complicated landscape, and (3) sufficient data were available for it, especially a DEM with high spatial resolution and accuracy. The Landsat enhanced thematic mapper plus (ETM+) remote sensing images used, whose path and row number was P46R30, were acquired on September 7, 1999.
These images were L1T terrain-corrected data products downloaded from the United States Geological Survey (USGS) Earth Explorer. The resolution of these images is 57 m for the thermal band and 28.5 m for the visible and near-infrared (Vis-NIR) bands. The thermal band was used to simulate a coarse-resolution (1026 m) thermal image (see Sec. 2.2 for details). In the following process, the coarse-resolution thermal image would be sharpened back to 57-m resolution, and the sharpening result then verified against the initial 57-m resolution thermal image. Using ETM+ data in this study is justified by the following considerations: first, it provides a resolution (57 m) in the thermal infrared band much higher than most legacy and current satellite-borne sensors do. For regions having strong spatial heterogeneity in LCT, more practical value may be provided if the study effort focuses on how to sharpen thermal images from the 1000-m to the 60-m resolution level. Second, from May 2003 to the present, about a quarter of the data in ETM+ images have been missing because of a scan line corrector (SLC) failure, but fortunately the remaining three quarters are accurate. Using these data to evaluate the thermal image sharpening effect creates a new application for them. (As a basic study, this paper used ETM+ data from before the SLC failure; however, the approach developed in this way is likewise suitable for post-SLC-failure data.) A DEM with a spatial resolution of 10 m was obtained from the National Elevation Dataset of the USGS. Precipitation data for the 2 weeks before the imaging date, and air temperature (AT) and atmospheric precipitable water data for the imaging date, were collected from the U.S. National Climatic Data Center of the National Oceanic and Atmospheric Administration. The solar radiation data of the stations on the imaging date, and the atmospheric data related to solar radiation, were obtained from the solar radiation database of the American National Renewable Energy Laboratory (ANREL). The National Land Cover Database (NLCD; Edition 2) of the USGS was used as a reference in the classification of the remote sensing images. Preparation of Images for Sharpening: The sharpening object in previous studies can be divided into three classes: digital number (DN) images (e.g., Ref. 6), top-of-atmosphere BT images (e.g., Refs. 11 and 14), and land surface real-temperature images (e.g., Refs. 2, 5, and 10). All these classes can be called a "thermal image." Because there are various retrieval methods from DN or top-of-atmosphere BT to land surface real temperature (e.g., Refs. 31-36), and different methods will get different results,37,38 the relatively original data, top-of-atmosphere BT images, are better sharpening objects than land surface real-temperature images. This study used the top-of-atmosphere BT image as the sharpening object. First, the original DN image of ETM+ Band 6 with 57-m resolution was converted into the radiance image at the entrance pupil according to the gain and bias parameters in its header file. Then, this radiance image was converted into a top-of-atmosphere BT image (abbreviated as BT57) according to the Planck formula. Finally, the BT image with 1026-m resolution (abbreviated as BT1026) was simulated based on BT57. In the following procedure (see Sec. 2.7), BT1026 would be used as the sharpening object. The procedure for generating BT1026 was as follows:14 every 18 x 18 block of BT57 pixels aggregates into one pixel with a resolution of 1026 m (1026 = 18 x 57), and the pixel value is calculated from the values of the corresponding BT57 pixels.
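A minimal sketch of this aggregation step, assuming a plain block mean over each 18 x 18 window (the paper's exact aggregation equation, taken from its Ref. 14, is not given in this excerpt, so the mean is an assumption):

import numpy as np

def aggregate_bt(bt57, factor=18):
    """Aggregate a 57-m brightness-temperature array to 1026-m resolution
    (1026 = 18 * 57) by averaging each factor-by-factor block."""
    rows = bt57.shape[0] - bt57.shape[0] % factor  # trim partial blocks at the edges
    cols = bt57.shape[1] - bt57.shape[1] % factor
    blocks = bt57[:rows, :cols].reshape(rows // factor, factor,
                                        cols // factor, factor)
    return blocks.mean(axis=(1, 3))

bt57 = np.random.uniform(290, 310, size=(1800, 1800))  # synthetic BT field (K)
print(aggregate_bt(bt57).shape)  # (100, 100)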
The reasons we selected 1026 m as the source resolution were as follows: (1) 1026 m is an integral multiple of the resolution of BT57, which made the aggregating process relatively easy; (2) 1026 m is approximately equal to the spatial resolution of thermal infrared images collected by MODIS or AVHRR, so via the GSM's performance in sharpening the simulated 1026-m resolution image, its performance in sharpening MODIS or AVHRR thermal images can be evaluated. Selection of Explanatory Variables and Establishment of Their Raster Layers: LST is related to many natural factors, and a good understanding of the intrinsic relationships between LST and its determinants is the basis of downscaling. Since the basic idea of this method was to treat LST as the result of the SEB, the main environmental factors of the SEB were selected as explanatory variables. These environmental factors were chosen on the basis of previous studies focusing on the SEB.29,30,39,40 In addition, some other explanatory variables were proxy variables, used in place of environmental factors that previous studies have shown to be directly related to BT but whose raster layers are difficult to obtain. This approach guarantees the simplicity and feasibility of the method. In this study, 13 kinds of environmental factors were selected according to the aforementioned principles. The specific selection basis and the raster layer establishment method for these factors are detailed in Secs. 2.3.1-2.3.6. Total solar radiation: In mountainous regions, the spatial variation of the total solar radiation received by the land surface, caused by topographic relief,25,26 is a significant cause of the small-scale spatial variation of surface temperature. For instance, a simulation study based on an ecohydrological model shows that, for the climate conditions considered in that study, the incoming solar radiation is one of the major factors controlling the LST spatial distribution.41 In addition, because there is a time lag between the intraday change of surface temperature and the intraday change of solar radiation, the effect of the solar radiation received in some period before imaging on the thermal image is possibly superior to that of the solar radiation received simultaneously with imaging;42 Ref. 42 shows that, under the conditions of their experiment, a time lag of 1 h considerably increases the linear relation between vegetation canopy temperature and local insolation angle. Therefore, the total solar radiation in several tiny periods before imaging was selected as an alternative explanatory variable, and the simulation method was as follows: 1. Tiny period definition: to facilitate calculation, we divided the period from 8:06:00 to 10:36:00 (i.e., the 2.5 h before ETM+ imaging) into 30 tiny periods, i.e., each tiny period is 5 min. These tiny periods were named 5-min-periods. In each 5-min-period, all factors related to solar radiation were assumed to remain unchanged at their status at the 5-min-period's midpoint. 2. Astronomical solar radiation: it was determined whether each DEM grid cell is irradiated directly in a specific 5-min-period with the method indicated in Ref. 43.
For each DEM grid cell that can be irradiated directly, the astronomical solar radiation was calculated following Ref. 25 as

Ra = Isc E0 cos i,

where Isc is the solar constant, E0 is the eccentricity correction factor of the Earth's orbit, and cos i is the cosine of the insolation angle on the tilted surface. In the standard form,25,43 the latter reads

cos i = sin δ sin φ cos s − sin δ cos φ sin s cos γ + cos δ cos φ cos s cos ω + cos δ sin φ sin s cos γ cos ω + cos δ sin s sin γ sin ω,

where δ is the solar declination, φ is the latitude, s is the slope, γ is the surface aspect, and ω is the solar hour angle (rad).

3. Direct solar radiation: the direct radiation of each grid cell that is irradiated directly under clear-sky conditions was calculated according to the Iqbal model C.25 Among the critical parameters needed in the calculation, the atmospheric moisture content and aerosol optical thickness came from the solar radiation database of ANREL, and the vertical thickness of the ozone layer was calculated from latitude and date according to Ref. 44.

4. Scattering radiation: first, the clear-sky solar scattering radiation was calculated under the assumption of a "level ground surface without obstruction around" on the basis of the Iqbal model C.25 Then, it was recalculated for the condition of a "possibly inclined ground surface with potential obstruction around toward the sky" according to Ref. 43.

5. Total solar radiation: the total is the sum of the direct and scattering components. The total solar radiation of each of the six 5-min-periods immediately before imaging was used as an alternative explanatory variable. In addition, every three adjacent 5-min-periods were taken as one group and their totals were summed, generating eight 15-min totals, which were also used as alternative explanatory variables. In other words, two kinds of total solar radiation variables with different durations were generated: those with a 5-min period and those with a 15-min period.

The initial resolution of these solar radiation layers was 10 m, matching the DEM used. We converted these layers to 57-m resolution via the following process: first, via a bilinear resample method, we converted them into 11.4-m layers (11.4, a number close to 10, was obtained by dividing 57 by the integer 5). Then, we converted the 11.4-m-resolution layers into 57-m-resolution layers using an aggregation method, in which each 5 × 5 set of pixels was grouped into one pixel whose average formed the value of the corresponding pixel in the new layer.

Precipitation before imaging and topographic wetness index

Soil moisture affects the plant canopy temperature by influencing the transpiration intensity of vegetation and has a profound effect on the partitioning of energy between the sensible and latent heat fluxes.30,39,45,46 Generally, the moister the soil, the more heat evaporation removes. Therefore, soil moisture is an indispensable factor in thermal image modeling.41 Because it is difficult to simulate soil moisture accurately, precipitation data for the 2 weeks before the imaging date (PBI) and the topographic wetness index (TWI) were used as proxy variables for soil moisture. Spatial interpolation was applied to the PBI of 37 meteorological stations in the study area and its surroundings using the ordinary kriging method, thereby obtaining 57-m-resolution raster data of PBI. The 57-m-resolution layer of TWI was generated via the following process (a sketch of this computation is given below): first, using the bilinear resample method, the 10-m-resolution DEM was converted to an 11.4-m-resolution DEM. Second, the latter was depression-filled. Third, the flow direction was calculated using the single-flow-direction algorithm47 on the basis of the depression-filled DEM, thereby deriving the flow accumulation. TWI could then be calculated as

TWI = ln(a / tan β),

where a is the specific catchment area derived from the flow accumulation and β is the local slope.

Vegetation index and vegetation coverage

Transpiration, a major expenditure term in the energy balance of the vegetation canopy, controls the vegetation canopy temperature.29 Transpiration is largely determined by vegetation quantity and vigor.
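A minimal sketch of the TWI computation just described, assuming the flow accumulation (cell counts) and slope (radians) grids have already been derived from the depression-filled 11.4-m DEM; the function and variable names are illustrative.

```python
import numpy as np

def twi(flow_acc, slope_rad, cell_size=11.4):
    """Topographic wetness index: TWI = ln(a / tan(beta))."""
    a = (flow_acc + 1.0) * cell_size             # specific catchment area proxy
    tan_b = np.tan(np.maximum(slope_rad, 1e-6))  # guard against flat cells
    return np.log(a / tan_b)
```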
Given the role of transpiration, the terrain-corrected NDVI (NDVI′) and the vegetation coverage (VC) were selected as explanatory variables. Their computation methods are as follows. The original DN images of the third and fourth bands were converted into at-sensor radiance images, and on this basis the top-of-atmosphere reflectance was calculated. A calibration model was then used for topographic correction;48,49 such corrections are commonly written in the C-correction form

ρ′ = ρ (cos θz + c) / (cos i + c),

where ρ is the uncorrected reflectance, θz is the solar zenith angle, i is the local solar incidence angle, and c is an empirical coefficient. The raster layer of VC was calculated from the 57-m-resolution NDVI′ with the dimidiate pixel model

VC = (NDVI′ − NDVIsoil) / (NDVIveg − NDVIsoil),

where NDVIsoil and NDVIveg are the bare-soil and full-vegetation endmember values.

Emissivity, water vapor content of the air column at imaging, and air temperature at imaging

Given that the sharpening object is a top-of-atmosphere BT image, according to the radiative transfer equation,32,38 the atmospheric transmissivity, the atmospheric downward and upward long-wave radiation, and the emissivity should be considered. Their simulation or proxy methods were as follows:

1. Although the atmospheric transmissivity of the thermal infrared band is influenced by many factors (e.g., air pressure, aerosol content, and the content of various atmospheric components), its spatial variation mainly depends on the atmospheric water vapor content (AWVC).50 Therefore, the AWVC at imaging was used as a proxy for the atmospheric transmissivity. The atmospheric precipitable water data of 19 meteorological stations in the study area and its surroundings were used to generate the 57-m-resolution AWVC raster data on the basis of the macrofactor regression and residual interpolation method.51

2. The AT at imaging was used to represent the atmospheric downward and upward long-wave radiation indirectly. Based on the AT data of 12 meteorological stations in the study area and its surroundings, 57-m-resolution AT raster data were generated by using the combined method.52

3. Based on VC, the 57-m-resolution emissivity raster data were generated by using the method introduced in Ref. 37.

Profile curvature, slope degree, and elevation

Profile curvature (PC) affects the acceleration or deceleration of flow across the surface, and many environmental factors, such as soil thickness, litter thickness, and soil humidity, correspond to some degree with PC.53,54 The slope degree and the elevation (ELEV) were employed as proxy variables for terrain-related factors that are currently unknown, and their raster data were generated. The 57-m-resolution layers of these three variables were generated via the following process: first, via the bilinear resample method, we converted the 10-m-resolution DEM to an 11.4-m-resolution DEM. Then, we generated 11.4-m-resolution layers of PC and slope degree from the 11.4-m-resolution DEM using the corresponding tools in ArcGIS. Finally, we converted these 11.4-m-resolution layers to 57-m-resolution layers using the aggregation method mentioned in Sec. 2.3.1.
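Returning to the vegetation variables above, a hedged sketch of the dimidiate-pixel VC computation; the endmember values are hypothetical placeholders rather than values from the paper.

```python
import numpy as np

NDVI_SOIL, NDVI_VEG = 0.05, 0.85   # assumed endmembers, not from the paper

def vegetation_coverage(ndvi):
    """Fractional vegetation cover from NDVI via the dimidiate pixel model."""
    vc = (ndvi - NDVI_SOIL) / (NDVI_VEG - NDVI_SOIL)
    return np.clip(vc, 0.0, 1.0)   # bound to the physical range [0, 1]
```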
Land cover type

Aside from VC, the vegetation index, soil moisture, and emissivity, several other thermodynamic factors of the land surface (e.g., specific heat, soil density, thermal conductivity, surface roughness, and albedo) also affect the surface temperature. Unfortunately, the spatial distributions of these factors are difficult to describe accurately. Previous studies proved that there is a significant relationship between these factors and LCT.55–57 Therefore, the spatial variation of LCT is one of the major factors causing the spatial variation of LST and the SEB.41,58,59 Reference 41 showed that, in addition to solar radiation, land cover variability is another major factor causing LST spatial variation.

The procedure for generating the LCT raster layer was as follows: first, clustering was conducted with unsupervised classification (ISODATA, the iterative self-organizing data analysis algorithm). Second, the clusters were assigned LCT labels according to high-resolution remote sensing images (from Google Earth) and the NLCD. As a result, eight LCTs (i.e., evergreen forest, mixed forest, open forest, shrubbery, grassland, farmland, water body, and construction land) were obtained. LCT is a categorical variable, which is unsuitable for scale conversion. Therefore, the LCT data were converted to quantitative variables, namely the area percentage of each LCT in each 57-m-resolution pixel; these eight variables are denoted PLCT below.

Radiance of visible and near-infrared bands

The at-sensor radiance of the first to fifth and seventh bands of ETM+ (denoted LVis-NIR; six bands in total) could also be used as explanatory variables, to determine whether adding them to the aforementioned variables could further improve the sharpening effect. Thus, the at-sensor radiance layers of these six bands were calculated using the gain and bias parameters in the metadata, and the layers' resolution was converted from 28.5 to 57 m using the aggregation method.

Through the aforementioned steps, thirty-eight 57-m-resolution raster layers of the explanatory variables were acquired: the six 5-min solar radiation totals (R5), the eight 15-min solar radiation totals (R15), PBI, TWI, NDVI′, VC, AWVC, AT, the emissivity (ε), PC, the slope degree (S), ELEV, the eight LCT fractions (PLCT), and the six Vis-NIR radiances (LVis-NIR). For every 57-m-resolution raster layer of the explanatory variables, every 18 × 18 set of pixels was grouped (corresponding to one BT1026 pixel) for pixel-value averaging, thereby obtaining thirty-eight 1026-m-resolution raster layers of the explanatory variables.

Combining Alternative Explanatory Variables

Considering that testing all combinations of the 38 variables would be laborious and unnecessary, only specific variable groups with particular purposes were tested (Table 1). Among these groups, group 1 was the default explanatory variable group of the GSM, and groups 2 to 9 were used to test the influence of removing or adding explanatory variables on sharpening.

Table 1. Different combinations of alternative explanatory variables (notation as defined above).

|Group No.|Alternative explanatory variables|Description|
|1|R5, R15, NDVI′, VC, TWI, PC, ELEV, S, PLCT, PBI, AT, ε, AWVC|Basic explanatory variable group|
|2|NDVI′, VC, TWI, PC, ELEV, S, PLCT, PBI, AT, ε, AWVC|Subtract solar radiation based on group 1|
|3|R5, R15, TWI, PC, ELEV, S, PLCT, PBI, AT, ε, AWVC|Subtract NDVI′ and VC based on group 1|
|4|R5, R15, NDVI′, VC, PC, ELEV, S, PLCT, PBI, AT, ε, AWVC|Subtract TWI based on group 1|
|5|R5, R15, NDVI′, VC, TWI, PC, ELEV, S, PLCT, PBI, AT, AWVC|Subtract ε based on group 1|
|6|R5, R15, TWI, PC, ELEV, S, PBI, AT, AWVC|Subtract all variables derived from the remote sensing image based on group 1|
|7|PBI, AT, AWVC, NDVI′, VC, PLCT, ε|Subtract the variables derived from the DEM based on group 1|
|8|R5, R15, NDVI′, VC, TWI, PC, ELEV, S, PLCT, ε|Subtract the macrovariables based on group 1|
|9|R5, R15, NDVI′, VC, TWI, PC, ELEV, S, PLCT, PBI, AT, ε, AWVC, LVis-NIR|Add radiance at the Vis-NIR bands based on group 1|
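As an illustration of how the categorical LCT map becomes the quantitative PLCT layers used above, a sketch of per-class area-fraction aggregation; the block factor is illustrative (e.g., 18 × 18 when going from 57-m to 1026-m resolution), and class codes 0 to 7 stand for the eight LCTs.

```python
import numpy as np

def class_fractions(lct, n_classes=8, factor=18):
    """Per-class area fractions of a categorical map within coarse blocks."""
    h, w = lct.shape
    blocks = lct.reshape(h // factor, factor, w // factor, factor)
    fractions = np.empty((n_classes, h // factor, w // factor))
    for c in range(n_classes):
        fractions[c] = (blocks == c).mean(axis=(1, 3))
    return fractions   # fractions.sum(axis=0) == 1 everywhere
```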
Selecting the Sample Pixels

After this process, the images of BT and the explanatory variables at 1026-m resolution were prepared. All or a subset of the pixels in these images could be selected as sample pixels, whose values (i.e., the values of BT and the explanatory variables) would be used to build the multivariate adaptive regression spline (MARS) model. The internal homogeneity degree varied among sample pixels. Previous research posited that a statistical model built from samples with high homogeneity degrees is less scale-dependent than one built from samples with low homogeneity degrees.14 To observe the influence of the homogeneity degree of the sample pixels on the model for a mountainous region, two groups of pixel samples were selected. The first group was the homogeneous pixel sample (HoPS) group. To select HoPSs, for each kind of explanatory variable the variation coefficients of the 57-m-resolution values corresponding to each 1026-m-resolution pixel were calculated and ordered from high to low; if the variation coefficients of all explanatory variables of a 1026-m-resolution pixel fell outside the top 30% (i.e., were relatively low), this pixel was regarded as a HoPS. The second group was the heterogeneous pixel sample (HePS) group, in which all pixels at 1026-m resolution were selected as sample pixels without considering their homogeneity degrees.

Multivariate Adaptive Regression Spline

MARS is an automatic regression modeling method that predicts an output variable from a group of explanatory variables. The method excels at finding the complex structure that often hides in high-dimensional data: it divides the entire high-dimensional data space into several regions and fits an independent function within each region. The core formula of the MARS model is

f(x) = β0 + Σ(m=1..M) βm Bm(x),

where β0 is a constant, βm are coefficients, and each basis function (BF) Bm(x) is a hinge function of the form max(0, x − c) or max(0, c − x), or a product of such hinge functions.

In this study, the nine groups of the aforementioned explanatory variables (see Table 1) were taken in turn as the explanatory (independent) variables, with BT1026 as the response, to build the MARS models one by one. Given that two sampling methods (i.e., the HoPS and HePS methods) were used, 18 MARS models were developed.

Simulation of Thermal Images at 57-m Resolution and Residual Redistributing

The 57-m-resolution explanatory variable raster layers were input into the 18 developed MARS models using the raster calculation tool in GIS software, and 18 simulation results of BT at 57-m resolution (i.e., sharpening results, written as BT57′ images) were obtained. The BT1026 image was used to modify each BT57′ image through the following procedure. First, a BT57′ image was reaggregated into an image at 1026-m resolution (each 18 × 18 set of BT57′ pixels was aggregated into one 1026-m pixel whose value was the average of the corresponding BT57′ pixels), written as BT1026′. Second, the BT1026′ image was subtracted from the BT1026 image, yielding a residual image; each coarse-pixel residual was then added back to the corresponding BT57′ pixels to obtain the modified result, written as BT57″. This process is called residual redistributing (RR) below. The BT57′ and BT57″ images are the final sharpening results.

Validation of the Sharpening Results

The mean absolute error (MAE) and root-mean-square error (RMSE) of the sharpening results (BT57′ and BT57″) were calculated with the BT57 image as the reference:

MAE = (1/n) Σ(i=1..n) |Ti′ − Ti|,
RMSE = [(1/n) Σ(i=1..n) (Ti′ − Ti)²]^(1/2),

where Ti′ is the sharpened value (BT57′ or BT57″) at pixel i, Ti is the reference BT57 value, and n is the number of pixels.

Evaluation of the Sharpening Effect

As previously mentioned, 18 MARS models and 36 sharpening results (18 BT57′ images and 18 BT57″ images) were obtained in this study. Table 2 lists the basic parameters of these MARS models and the MAEs and RMSEs of the corresponding sharpening results. In Table 2, the sharpening results of groups 1, 3, 4, and 9 based on the HePS method have the lowest MAEs and RMSEs. These MAEs before and after RR are as low as about 1.3 and 1.2 K, respectively, showing that the sharpening results are of high accuracy.
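A hedged numpy sketch of the RR step and the error metrics just defined; the names are illustrative, and the array dimensions are assumed to divide evenly by the block factor.

```python
import numpy as np

def block_mean(arr, factor=18):
    h, w = arr.shape
    return arr.reshape(h // factor, factor, w // factor, factor).mean(axis=(1, 3))

def residual_redistribute(bt57_pred, bt1026, factor=18):
    """BT57'' = BT57' + (BT1026 - aggregate(BT57')) spread over each block."""
    residual = bt1026 - block_mean(bt57_pred, factor)
    return bt57_pred + np.kron(residual, np.ones((factor, factor)))

def mae(pred, ref):
    return np.mean(np.abs(pred - ref))

def rmse(pred, ref):
    return np.sqrt(np.mean((pred - ref) ** 2))
```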
Table 2. Basic parameters of the MARS models and the sharpening errors. (The data before and after "\" are the errors of the results of the HePS and HoPS methods, respectively.)

|Group No.|Basic parameters of the MARS model|||Sharpening error (K)||||
||Number of alternative explanatory variables|Number of basis functions (BFs)|Self error of the model|BT57′ image MAE|BT57″ image MAE|BT57′ image RMSE|BT57″ image RMSE|

To measure the sharpening effect of the proposed method, we compared it with two simple processing methods (bilinear resampling and cubic convolution resampling), which can be regarded as benchmarks for measuring the effect of any thermal sharpening method. The results show that the MAE of both simple processing methods is 1.776 K; the MAE of the best results obtained in this study is smaller than that of the simple processing methods by roughly 0.6 K (about 1.2 K versus 1.776 K).

We also evaluated the sharpening effect qualitatively by visual inspection. The visual quality of the various sharpening results is closely correlated with their error: the smaller the error, the better the visual quality. To save space, we present only part of the sharpening results of the models constructed from the first group of explanatory variables (Fig. 1). Comparing Fig. 1(e) with Fig. 1(b) reveals that the sharpening result based on the default explanatory variable group and the HePS method shows a significant improvement in its ability to reflect the details of the spatial variation of the thermal information. Comparing Fig. 1(e) with Fig. 1(c) reveals that this sharpening result is very similar to the reference image, and Fig. 1(e) is much better than the result of cubic convolution resampling [Fig. 1(d)]. Figures 1(g) and 1(h) are less similar to the reference image than Figs. 1(e) and 1(f), indicating that the HoPS method is worse than the HePS method in a mountainous region. Very slight artificial box-like traces are found in specific areas of Fig. 1(f), and rather serious artificial box-like traces are found in Fig. 1(h).

Effect of Sampling on Sharpening

Table 2 shows that, in terms of the self errors, the models built by the HoPS method have smaller errors than those built by the HePS method. However, in terms of the errors of the sharpening results (MAE and RMSE), the HoPS-based models have larger errors than the HePS-based models, both before and after RR. The reason is that the HoPS method pursues consistency of the explanatory variables among the 57-m-resolution pixels covered by each 1026-m-resolution sample pixel; consequently, most of the extracted sample pixels belonged to flat regions and poorly represented the large areas of rugged terrain. The values of the HoPSs were more concentrated than the overall value distribution. The MARS models built by the HoPS method had a simple structure (i.e., a small number of BFs) and small self errors; nonetheless, given the obvious difference between the value distribution of the HoPSs and the overall value distribution, large errors were generated when the HoPS-based models were applied to the entire research region. Thus, the HoPS method is not suited to thermal image sharpening in mountainous regions, and the subsequent analysis considers only the HePS-based models.
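A sketch of the HoPS screening criterion described above: a 1026-m pixel qualifies only if, for every explanatory variable, the coefficient of variation (CV) of its underlying 57-m values lies outside the top 30%. The names and the block factor are illustrative.

```python
import numpy as np

def select_hops(var_layers_57m, factor=18, top_frac=0.30):
    """Boolean mask of homogeneous 1026-m sample pixels (HoPSs)."""
    masks = []
    for layer in var_layers_57m:                   # one 57-m layer per variable
        h, w = layer.shape
        blocks = layer.reshape(h // factor, factor, w // factor, factor)
        mean = blocks.mean(axis=(1, 3))
        std = blocks.std(axis=(1, 3))
        cv = std / np.maximum(np.abs(mean), 1e-9)  # coefficient of variation
        cutoff = np.quantile(cv, 1.0 - top_frac)   # boundary of the top 30%
        masks.append(cv < cutoff)                  # outside the top 30%
    return np.logical_and.reduce(masks)            # homogeneous in every variable
```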
Effect of Explanatory Variables on Sharpening

Table 2 indicates that the errors of the sharpening results obtained with different explanatory variable groups varied significantly. The variables in group 1 comprised all of the alternative variables except the Vis-NIR radiances, and this group generated a sharpening result with very low MAE and RMSE. Groups 2 to 8 each eliminated one or several variables from group 1; the change in MAE and RMSE after specific variables were eliminated reflects the importance of the eliminated variables for sharpening. The elimination of solar radiation increased the MAE (see group 2), indicating that this variable is valuable for thermal sharpening. Eliminating NDVI′ and VC only slightly influenced the MAE (see group 3); the reason is that another explanatory variable, highly correlated with NDVI′, replaced the explanatory effect of NDVI′ and VC. After the elimination of TWI, the MAE did not rise but instead decreased slightly (see group 4); this may be because TWI inaccurately reflected the terrain-controlled spatial differentiation of soil moisture and introduced new noise. After the emissivity was eliminated, the MAE rose (see group 5), implying that this variable is very valuable for thermal sharpening. If all variables derived from the images at the Vis-NIR bands were eliminated [i.e., NDVI′, VC, the LCT fractions, and the emissivity (see group 6)], the MAE of the sharpening result would exceed 2 K; this suggests that thermal sharpening cannot be separated from these variables completely. If all variables reflecting topographic features were eliminated (see group 7), the MAE would increase by nearly 0.2 K. If the three macrovariables (i.e., PBI, AT, and AWVC) were eliminated simultaneously, the MAEs of the BT57′ and BT57″ images would increase markedly (by 0.6 K for the BT57″ image; see group 8). The common characteristic of the macrovariables is that they vary spatially only at the macroscale and are nearly uniform inside each 1026-m pixel; as such, the BT variation inside each coarse pixel should bear no relation to these macrovariables. However, the results of group 8 show the importance of the macrovariables for sharpening; the reason is that the loss of the macrovariables sharply increased the self errors of the MARS models (see Table 2), which in turn caused large errors in the sharpening results. Group 9 added the Vis-NIR radiance variables to the variables in group 1, and the MAE of the sharpening results decreased slightly. Nevertheless, group 9 does not remove the dependence on synchronously acquired Landsat Vis-NIR images: the Vis-NIR radiance changes rapidly with date, so the Vis-NIR radiance of the target date cannot be accurately calculated from Vis-NIR images of other dates. This makes such an explanatory variable group less feasible in practical applications.

The explanatory variable group built on prior knowledge of the land surface energy budget and thermal infrared radiative transfer (i.e., group 1 in this study) can sharpen the thermal image accurately. In practice, choosing the explanatory variables is difficult, especially in regions with complex environmental conditions. This study suggests that in such cases the explanatory variable group should be determined based on the laws of the land surface energy budget and thermal infrared electromagnetic radiation transmission, thereby avoiding a blind choice of explanatory variables.
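The group-wise analysis above is, in effect, an ablation experiment. A sketch of such a harness is shown below, assuming the open-source py-earth package supplies the MARS implementation (its Earth estimator follows the scikit-learn fit/predict convention); all data names are hypothetical.

```python
import numpy as np
from pyearth import Earth   # assumed MARS implementation (py-earth package)

def ablation_maes(groups, coarse_vars, bt1026, fine_vars, bt57):
    """Fit one MARS model per variable group; score its 57-m prediction MAE.

    groups      : dict name -> list of variable names (mirrors Table 1)
    coarse_vars : dict name -> flattened 1026-m sample values
    fine_vars   : dict name -> flattened 57-m values
    """
    maes = {}
    for name, variables in groups.items():
        X = np.column_stack([coarse_vars[v] for v in variables])
        model = Earth().fit(X, bt1026)                 # train at coarse scale
        Xf = np.column_stack([fine_vars[v] for v in variables])
        maes[name] = np.mean(np.abs(model.predict(Xf) - bt57))
    return maes
```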
Comparison with Previous Methods

In previous studies of thermal imagery sharpening based on statistical models, the NDVI or a transformed form of it was most commonly used as the scale factor.2,3,5,21,27 Reference 27 performed sharpening at multiple resolutions using the NDVI as the scale factor; similarly to the current study, it downscaled the resolution from 1536 to 192 m, and the RMSE of the results was 1.61 K. Reference 5 used four forms of NDVI-based functions to sharpen the resolution from 960 to 60 m, with RMSEs ranging from 1.20 to 3.27 K. Reference 10 used a global model, a resolution-adjusted global model, a piecewise regression model, a stratified model, and a local model to sharpen the resolution from 990 to 90 m, with RMSEs ranging from 3.24 to 3.68 K. Using an artificial neural network and 11 explanatory variables, including the NDVI, the difference vegetation index, and the leaf area index, Ref. 8 sharpened LST images; the RMSEs of the results sharpened from 1080 to 90 m were 2.443 K (vegetated area) and 1.573 K (bare area). Using a regression tree method and six Vis-NIR bands as the explanatory variables, Ref. 14 sharpened BT images; the MAEs of the sharpening results in flat areas were generally smaller than 1 K, whereas large errors were obtained in mountainous regions (960 to 120 m, MAE 1.67 K).

Previous research has shown that thermal imagery is more difficult to sharpen within mountainous regions.14,15 In the current study, we performed sharpening of thermal imagery in a typical mountainous region, and the errors of the results could be kept at a relatively low level. For the model built with the default explanatory variable group of the GSM and based on the HePS method, the MAEs were 1.258 K (before RR) and 1.134 K (after RR), and the RMSEs were 1.985 K (before RR) and 1.871 K (after RR). Thus, we tentatively put forward that the GSM-based method is superior to previous methods for mountainous regions.

For a more direct comparison with TsHARP, we performed sharpening using five TsHARP-based methods proposed by previous studies (an NDVI-based linear function model,5,10,27 an NDVI-based quadratic function model,5 a vegetation cover model,5 a simplified vegetation cover model,5 and a local NDVI-based model10); the thermal image to be sharpened was BT1026. The MAEs of the sharpening results were 1.631, 1.627, 1.668, 1.668, and 1.591 K, and the RMSEs were 2.223, 2.425, 2.226, 2.226, and 2.214 K, respectively. These MAEs and RMSEs are larger than the corresponding values for the GSM-based method.
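For reference, a minimal sketch of the first of these baselines, the NDVI-based linear TsHARP variant with residual redistribution; this is a generic reading of the method rather than the exact implementations of Refs. 5, 10, and 27, and the names are illustrative.

```python
import numpy as np

def block_mean(arr, factor=18):
    h, w = arr.shape
    return arr.reshape(h // factor, factor, w // factor, factor).mean(axis=(1, 3))

def tsharp_linear(bt1026, ndvi57, factor=18):
    """Fit BT ~ NDVI at coarse scale, apply at fine scale, redistribute residuals."""
    ndvi1026 = block_mean(ndvi57, factor)
    slope, intercept = np.polyfit(ndvi1026.ravel(), bt1026.ravel(), 1)
    bt57_pred = intercept + slope * ndvi57
    residual = bt1026 - block_mean(bt57_pred, factor)
    return bt57_pred + np.kron(residual, np.ones((factor, factor)))
```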
Localization of the Model

Statistical sharpening models can be divided into two kinds: global and local models. A global model is a single model built from sample pixels drawn from the entire study area. Thermal image sharpening with a global model assumes that the relation between the target variable and the explanatory variables is uniform across the entire area, but this assumption often contradicts reality: in the real world, the target variable–explanatory variable relation does not remain the same globally. If samples are collected in each local region in the form of a sliding window and a specific model is established for each local region based on those samples, the model is a local model. Several studies have shown that, for thermal image sharpening, a local model performs better than a global one.14,18 In this study, the statistical models between BT and the explanatory variables were established through MARS rather than with a sliding-window method; in essence, however, the MARS models in this study are local models. To see why, consider the model built with the variables in group 1: each of its basis functions is a hinge function of the form max(0, x − c), or a product of such functions, which is nonzero only on one side of its knot c. Each term therefore adjusts the prediction only within a limited region of the explanatory-variable space, so different regions of the data space are effectively fitted by different local combinations of terms.
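A tiny numpy illustration of this locality property:

```python
import numpy as np

def hinge(x, c):
    """MARS hinge basis function: zero below the knot c, linear above it."""
    return np.maximum(0.0, x - c)

x = np.linspace(0.0, 1.0, 5)
print(hinge(x, 0.5))   # [0. 0. 0. 0.25 0.5]: the term is silent over half the domain
```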
Feasibility of the Geographical Statistical Model in Practice

The presented results indicate that the GSM method is capable of accurately generating a fine-spatial-resolution thermal image from a coarse-spatial-resolution image (on the order of 1000-m resolution, e.g., the thermal images of MODIS and AVHRR). However, the following issue needs further discussion. In this study, the NDVI′, VC, and LCT raster data at 57-m resolution used by the GSM were generated from the ETM+ Vis-NIR data. How can these data be obtained in practical applications? For a given region, most satellite-borne moderate-resolution sensors (e.g., ETM+, the Operational Land Imager, and the Advanced Spaceborne Thermal Emission and Reflection Radiometer) cannot produce Vis-NIR data every day. Thus, when conducting thermal image sharpening on a given day, we may need the Vis-NIR data acquired by moderate-resolution sensors several days earlier to generate NDVI′, VC, and LCT. Generally, NDVI′ and VC are stable over several days, and LCT is stable within a year; it is therefore feasible to use the NDVI′, VC, and LCT of a nearby earlier day to serve as the corresponding variables on the target day. The other variables in the default explanatory variable group of the GSM are either stable in the time domain (e.g., slope and elevation) or can be observed or simulated every day (e.g., precipitation, AT, and solar radiation). Therefore, the GSM sharpening method can eliminate the dependence on synchronous imagery at the Vis-NIR bands.

It should be noted that the GSM-based method needs raster layers of many variables involved in the land surface energy budget and thermal infrared electromagnetic radiation transmission. Establishing these layers requires sufficient initial data (DEM, meteorological data, and so on) and considerable computation time. In terms of data availability and implementation simplicity, several methods proposed by previous studies (e.g., Refs. 3, 5, and 14) are better than the GSM-based method. The robustness of this method needs further investigation; specifically, future work should address how the importance of the explanatory variables varies across regions. In addition, some satellite remote-sensing data products (such as the MODIS water vapor, MODIS atmospheric profile, and TRMM precipitation products) possess well-defined physical meaning; they are likely suitable as explanatory variables and could enhance the thermal sharpening effect. If these data are used as explanatory variables, they should be carefully evaluated with respect to spatial resolution, data quality, and the degree to which they are temporally synchronized with the sharpened objects.

Conclusions

A thermal image sharpening method based on the GSM was proposed in this study. The method attaches more importance to geographical laws and obtained sharpening results with small errors and good visual quality in mountainous areas with complex environments. The GSM-based sharpening method can avoid a blind choice of explanatory variables and reduce the dependence on synchronous imagery at the Vis-NIR bands, which makes it more feasible in practice. The method is helpful for agricultural and forestry studies and practice (e.g., drought monitoring and wildfire danger modeling) in mountainous regions using coarse-spatial-resolution thermal images (e.g., the images of MODIS and AVHRR). Furthermore, this study found that (1) the homogeneous-sample-pixel model is not suitable for sharpening thermal images in mountainous areas; (2) although macrovariables such as PBI, AT, and AWVC exhibit spatial variability only at the macroscale, they are indispensable for thermal image sharpening; and (3) the MARS modeling method is capable of reflecting the locality of the relationship between the target variable and the explanatory variables, thereby efficiently reducing the errors of the sharpening results.

Acknowledgments

This work was supported by the National Natural Science Foundation of China (No. 41201099) and the Postdoctoral Research Funds of Henan University (Grant No. BH2012042). We thank the United States Geological Survey (USGS) for providing the ETM+ data, the National Elevation Dataset, and the National Land Cover Database, and the National Climatic Data Center of the U.S. National Oceanic and Atmospheric Administration for providing the meteorological data. We also deeply appreciate the American National Renewable Energy Laboratory for providing the solar radiation data and the atmospheric data related to solar radiation, and we thank the two anonymous reviewers for their insightful comments, which improved this paper.

Pengcheng Qi received his PhD degree in physical geography from Lanzhou University, Lanzhou, China, in 2009. Since 2010, he has been working at Nanyang Normal University, where he is currently an associate professor with the Laboratory of Remote Sensing Monitoring of Natural Disaster. His research interests include remote sensing digital image processing and applications of thermal infrared remote sensing in ecology.

Shixiong Hu received his PhD degree in physical geography and environmental analysis from the State University of New York at Buffalo, United States, in 2004. Since 2004, he has been working in the Department of Geography, East Stroudsburg University of Pennsylvania. His research interests include hillslope processes, environmental modeling with GIS, and soil and water conservation.

Haijun Zhang received his BS degree in geographic information systems and his ME degree in cartography and geographic information engineering from Chang'an University, China. He is currently a lecturer in the School of Environmental Science and Tourism, Nanyang Normal University, China. His research interests focus on the integration of remote sensing and GIS in addressing environmental issues.

Guangmeng Guo received his PhD from the Institute of Geographic Sciences and Natural Resources Research, Chinese Academy of Sciences, in 2004. He is currently a professor with the Laboratory of Remote Sensing Monitoring of Natural Disaster, Nanyang Normal University. His research interests focus on applications of remote sensing in disaster monitoring.
<urn:uuid:5a7ede03-f752-4430-91c3-ab1afabf4932>
3.90625
9,637
Academic Writing
Science & Tech.
40.443057
95,601,304
Scientists use a technique called radiometric dating to estimate the ages of rocks, fossils, and the earth. Many people have been led to believe that radiometric dating methods have proved the earth to be billions of years old. This has caused many in the church to reevaluate the biblical creation account, specifically the meaning of the word “day” in Genesis 1. Note that, contrary to a popular misconception, carbon dating is not used to date rocks at millions of years old.

Before we get into the details of how radiometric dating methods are used, we need to review some preliminary concepts from chemistry. Recall that atoms are the basic building blocks of matter. Atoms are made up of much smaller particles called protons, neutrons, and electrons. Protons and neutrons make up the center (nucleus) of the atom, and electrons form shells around the nucleus. The number of protons in the nucleus of an atom determines the element. For example, all carbon atoms have 6 protons, all atoms of nitrogen have 7 protons, and all oxygen atoms have 8 protons. The number of neutrons in the nucleus can vary in any given type of atom. So, a carbon atom might have six neutrons, or seven, or possibly eight—but it would always have six protons. An “isotope” is any of several different forms of an element, each having different numbers of neutrons. The illustration below shows the three isotopes of carbon.
<urn:uuid:30107e12-6041-4cd4-b6ce-3e1f71caa5ba>
3.984375
310
Tutorial
Science & Tech.
46.6725
95,601,306
ENCORE: Birds do it, bees do it, but humans may not do it for much longer. At least not for having children. Relying on sex to reproduce could be supplanted by making babies in the lab, where parents-to-be can select genomes that will ensure ideal physical and behavioral traits. Men hoping to be fathers should act sooner rather than later. These same advancements in biotechnology could allow women to fertilize their own eggs, making the need for male sperm obsolete. Some animals already reproduce asexually. Find out how female African bees can opt to shut out male bees intent on expanding the hive.

Will engineering our offspring have a downside? Sex creates vital genetic diversity, as demonstrated by the evolution of wild animals in urban areas. Find out how birds, rodents and insects use sex in the city to adapt and thrive.

Menno Schilthuizen – Biologist and ecologist at the Naturalis Biodiversity Center and Leiden University in The Netherlands. His New York Times op-ed, “Evolution is Happening Faster Than We Thought,” is here

Matthew Webster – Evolutionary biologist, Uppsala University, Sweden

Hank Greely – Law professor and ethicist, Stanford University, who specializes in the ethical, legal and social implications of biomedical technologies. His book is “The End of Sex and The Future of Reproduction.”

This encore podcast was first released on 09/19/2017
<urn:uuid:58e38831-d37d-42e9-a6d5-0e10b1a95007>
2.703125
327
Truncated
Science & Tech.
37.069491
95,601,324
Planet in Peril. "Where science gets respect."

This recent satellite image shows a river of Saharan dust streaming out over the Mediterranean toward Italy. Credit: NASA

Sunday, 19 June 2016

37 Million Bees Instantly Dropped Dead After Farms Started Spraying "Neonics" on Crops

Nation of Change

Honeybee on milkthistle plant. Photo credit: Fir0002

Dave Schuit, a honey producer in Elmwood, Canada, said that his farm lost 37 million bees almost immediately after a nearby farm began planting GMO corn and spraying neonics on their crops. Story here.
<urn:uuid:8c37860c-fa52-4ff0-ae59-d840f3df7baf>
2.609375
123
Truncated
Science & Tech.
49.647132
95,601,328
Today's "Planet Earth Report" --Our Planet's Deep Hidden Oceans --"Where Did All the Water Come From?"

Water-bearing minerals reveal that Earth’s mantle could hold more water than all its oceans. Researchers now ask: Where did it all come from?

Yet, for researchers like Jacobsen, continues Marcus Woo in Quanta Magazine, these fragments of crystalline carbon are every bit as precious — not for the diamond itself, but for what is locked inside: specks of minerals forged hundreds of kilometers underground, deep in Earth’s mantle. These mineral flecks — some too small to see even under a microscope — offer a peek into Earth’s otherwise unreachable interior.

In 2014, researchers glimpsed something embedded in these minerals that, if not for its deep origins, would’ve been unremarkable: water. Not actual drops of water, or even molecules of H2O, but its ingredients, atoms of hydrogen and oxygen embedded in the crystal structure of the mineral itself. This hydrous mineral isn’t wet, but when it melts, out spills water. The discovery was the first direct proof that water-rich minerals exist this deep, between 410 and 660 kilometers down, in a region called the transition zone, sandwiched between the upper and lower mantles.

Since then, scientists have found more tantalizing evidence of water. In March, a team announced that they had discovered diamonds from Earth’s mantle that have actual water encased inside. Seismic data has also mapped water-friendly minerals across a large portion of Earth’s interior. Some scientists now argue that a huge reservoir of water could be lurking far beneath our feet. If we consider all of the planet’s surface water as one ocean, and there turn out to be even a few oceans underground, it would change how scientists think of Earth’s interior. But it also raises another question: Where could it have all come from?

Without water, life as we know it would not exist. Neither would the living, dynamic planet we’re familiar with today. Water plays an integral role in plate tectonics, triggering volcanoes and helping parts of the upper mantle flow more freely. Still, most of the mantle is relatively dry. The upper mantle, for instance, is primarily made of a mineral called olivine, which can’t store much water. But below 410 kilometers, in the transition zone, high temperatures and pressures squeeze the olivine into a new crystal configuration called wadsleyite.

In 1987, Joe Smyth, a mineralogist at the University of Colorado, realized that wadsleyite’s crystal structure would be riddled with gaps. These gaps turn out to be perfect fits for hydrogen atoms, which can snuggle into the defects and bond with the adjacent oxygen atoms already in the mineral. Wadsleyite, Smyth found, can potentially grab onto lots of hydrogen, turning it into a hydrous mineral that produces water when it melts. For scientists like Smyth, hydrogen means water.
<urn:uuid:3c170455-5b84-4d58-ae9d-9c7559db6d1c>
4.09375
631
News Article
Science & Tech.
45.403026
95,601,364
A new study suggests that compounds that might be indicative of life may be located within the scarred, jumbled areas that make up Europa’s so-called “chaos terrain.” Jupiter’s moon Europa is believed to possess a large salty ocean beneath its icy exterior, and that ocean, scientists say, has the potential to harbor life. Indeed, a mission recently suggested by NASA would visit the icy moon’s surface to search for compounds that might be indicative of life. But where is the best place to look? New research by Caltech graduate student Patrick Fischer; Mike Brown, the Richard and Barbara Rosenberg Professor and Professor of Planetary Astronomy; and Kevin Hand, an astrobiologist and planetary scientist at JPL, suggests that it might be within the scarred, jumbled areas that make up Europa’s so-called “chaos terrain.” A paper about the work has been accepted to The Astronomical Journal. “We have known for a long time that Europa’s fresh icy surface, which is covered with cracks and ridges and transform faults, is the external signature of a vast internal salty ocean,” Brown says. The areas of chaos terrain show signatures of vast ice plates that have broken apart, shifted position, and been refrozen. These regions are of particular interest, because water from the oceans below may have risen to the surface through the cracks and left deposits there. “Directly sampling Europa’s ocean represents a major technological challenge and is likely far in the future,” Fischer says. “But if we can sample deposits left behind in the chaos areas, it could reveal much about the composition and dynamics of the ocean below.” That ocean is thought to be as deep as 100 kilometers. “This could tell us much about activity at the boundary of the rocky core and the ocean,” Brown adds. In a search for such deposits, the researchers took a new look at data from observations made in 2011 at the W. M. Keck Observatory in Hawaii using the OSIRIS spectrograph. Spectrographs break down light into its component parts and then measure their frequencies. Each chemical element has unique light-absorbing characteristics, called spectral or absorption bands. The spectral patterns resulting from light absorption at particular wavelengths can be used to identify the chemical composition of Europa’s surface minerals by observing reflected sunlight. The OSIRIS instrument measures spectra in infrared wavelengths. “The minerals we expected to find on Europa have very distinct spectral fingerprints in infrared light,” Fischer says. “Combine this with the extraordinary abilities of the adaptive optics in the Keck telescope, and you have a very powerful tool.” Adaptive optics mechanisms reduce blurring caused by turbulence in the earth’s atmosphere by measuring the image distortion of a bright star or laser and mechanically correcting it. The OSIRIS observations produced spectra from 1600 individual spots on Europa’s surface. To make sense of this collection of data, Fischer developed a new technique to sort and identify major groupings of spectral signatures. “Patrick developed a very clever new mathematical tool that allows you to take a collection of spectra and automatically, and with no preconceived human biases, classify them into a number of distinct spectra,” Brown says. The software was then able to correlate these groups of readings with a surface map of Europa from NASA’s Galileo mission, which mapped the Jovian moon beginning in the late 1990s. 
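As a purely generic, hypothetical illustration of this kind of automated spectral grouping (not Fischer's actual algorithm; the data here are random stand-ins for calibrated spectra):

```python
import numpy as np
from sklearn.cluster import KMeans   # generic clustering, for illustration only

rng = np.random.default_rng(0)
spectra = rng.normal(size=(1600, 200))   # 1600 spots x 200 wavelength samples
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(spectra)
print(np.bincount(labels))   # how many spots fall into each spectral grouping
```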
The composite that resulted from this correlation provided a visual guide to the composition of the regions the team was interested in. Three compositionally distinct categories of spectra emerged from the analysis. The first was water ice, which dominates Europa’s surface. The second category includes chemicals formed when ionized sulfur and oxygen—thought to originate from volcanic activity on the neighboring moon Io—bombard the surface of Europa and react with the native ices. These findings were consistent with results of previous work done by Brown, Hand and others in identifying Europa’s surface chemistry.

But the third grouping of chemical indicators was more puzzling. It did not match either set of ice or sulfur groupings, nor was it an easily identified set of salt minerals such as they might have expected from previous knowledge of Europa. Magnesium is thought to reside on the surface but has a weak spectral signature, and this third set of readings did not match that either. “In fact, it was not consistent with any of the salt materials previously associated with Europa,” Brown says. When this third group was mapped to the surface, it overlaid the chaos terrain. “I was looking at the maps of the third grouping of spectra, and I noticed that it generally matched the chaos regions mapped with images from Galileo. It was a stunning moment,” Fischer says. “The most important result of this research was understanding that these materials are native to Europa, because they are clearly related to areas with recent geological activity.”

The composition of the deposits is still unclear. “Unique identification has been difficult,” Brown says. “We think we might be looking at salts left over after a large amount of ocean water flowed out onto the surface and then evaporated away.” He compares these regions to their earthly cousins. “They may be like the large salt flats in the desert regions of the world, in which the chemical composition of the salt reflects whatever materials were dissolved in the water before it evaporated.” Similar deposits on Europa could provide a view into the oceans below, according to Brown. “If you had to suggest an area on Europa where ocean water had recently melted through and dumped its chemicals on the surface, this would be it. If we can someday sample and catalog the chemistry found there, we may learn something of what’s happening on the ocean floor of Europa and maybe even find organic compounds, and that would be very exciting.”

PDF copy of the Study: Spatially Resolved Spectroscopy of Europa: The Distinct Spectrum of Large-scale Chaos

Source: Rod Pyle, Caltech
<urn:uuid:28e22863-f21f-42c2-9b04-6c0c480a7239>
3.453125
1,269
News Article
Science & Tech.
37.469967
95,601,367
The National Human Genome Research Institute (NHGRI), one of the National Institutes of Health (NIH), today announced that the first draft version of the honey bee genome sequence has been deposited into free public databases. The sequence of the honey bee, Apis mellifera, was assembled by a team led by Richard Gibbs, Ph.D., director of the Human Genome Sequencing Center at Baylor College of Medicine in Houston.

The honey bee genome is about one-tenth the size of the human genome, containing about 300 million DNA base pairs. Researchers have deposited the initial assembly, which is based on six-fold sequence coverage of the honey bee genome, into the NIH-run public database, GenBank (www.ncbi.nih.gov/Genbank). In turn, GenBank will distribute the sequence data to the European Molecular Biology Laboratory's Nucleotide Sequence Database, EMBL-Bank (www.ebi.ac.uk/embl/index.html), and the DNA Data Bank of Japan, DDBJ (www.ddbj.nig.ac.jp).
An international research team has now presented a solution: Using a ring of 16 detectors and a circularly polarized laser beam, they can determine both factors with attosecond accuracy. Free-electron lasers (FELs) generate extremely short and intense X-ray flashes. Researchers can use these flashes to resolve structures with diameters on the... 13.07.2018 | Event News 12.07.2018 | Event News 03.07.2018 | Event News 17.07.2018 | Information Technology 17.07.2018 | Materials Sciences 17.07.2018 | Power and Electrical Engineering
<urn:uuid:e9b9f432-7155-4bbd-b13b-7a7c6be49b71>
3.40625
891
Content Listing
Science & Tech.
42.963397
95,601,368
Any parent knows that newborns still have a lot of neurological work to do to attain fully acute vision. In a wide variety of nascent animals, genes provide them with only a rough wiring plan and then leave it to the developing nervous system to do its own finish work. Two studies by Brown University researchers provide new evidence of a role for exposure to light in the environment as mouse pups and tadpoles organize and refine the circuitry of their vision systems. “Through a combination of light-independent and light-dependent processes, the visual system is getting tuned up over time,” said David Berson, professor of neuroscience. His new work, published in advance online June 5 in Nature Neuroscience, offers the surprising result that light exposure can enhance how well mice can organize the nerve endings from their left eye and their right eye in an area of the brain where they start out somewhat jumbled. Neuroscientists had thought that mammals were unable to see at this stage, but a new type of light-sensitive cell that Berson discovered a decade ago turns out to let in the light. Meanwhile, Berson’s colleague Carlos Aizenman, assistant professor of neuroscience, co-authored a paper online May 31 in the Journal of Neuroscience showing that newborn tadpoles depend on light to coordinate and improve the response speed, strength and reliability of a network of neurons in a vision-processing region of their brains. “This is how activity is allowing visual circuits to refine and sort themselves out,” said Aizenman. “Activity is fine-tuning all these connections. It’s making the circuit function in a much more efficient, synchronous way.” Not completely blind mice Berson, postdoctoral scholar Jordan Renna, and former postdoctoral researcher Shijun Weng conducted several experiments in newborn mice to see whether light influences the process by which the mice rewire to distinguish between their eyes. “For certain functions, the brain wants to keep track of which eye is which,” Berson said. Among those functions are the perception of depth and distance. At a circuit level, the brain keeps signals from the two eyes distinct by segregating their nerve endings into separate regions in the dorsal lateral geniculate nucleus (dLGN), a key waystation on the path to the visual cortex and conscious visual perception. Scientists have long known this sorting-out process depends on waves of activity that spontaneously excite cells in the inner retina. They did not know until now that the waves are influenced by a light-sensitive type of cell called intrinsically photosensitive retinal ganglion cells (ipRGCs). About a decade ago, a team Berson led at Brown discovered the ipRGCs, which are the first light-sensitive cells to develop in the eye. They reside in the inner retina, the home of retinal cells that send visual information directly to the brain. The outer retina is where the more familiar rods and cones sense light. Early in life, when the brain is segregating nerve endings into distinct regions in the dLGN, the two retinal layers are not connected, so until ipRGCs were discovered there was no reason to believe that light would affect the sorting process. The new research doesn’t say anything definitive about the consequences of light exposure at this stage for eyesight in adults, especially given that some mammals (such as monkeys) experience this developmental stage in utero. 
“Whether different animals in nature are exposed to enough light to induce a change in segregation patterns is unclear,” Renna said. But the research shows that light exposure does improve how well the sorting goes, Berson said, and the work advances neuroscientists’ understanding of the eye-distinction process, which is widely studied as a model of “activity-driven” neural development.

To assess the effect of light on retinal waves, Renna used electrodes to record the activity of cells in the inner retinas of newborn mice, first recording in the dark, then in the light, and then again in the dark. In every case retinas experienced waves, but when the retinas were exposed to light, the waves lasted about 50 percent longer. Renna then tested whether the light-sensitive cells were really creating this wave-lengthening effect by repeating the study in “knock-out” mice in which the ability of the ipRGCs to sense light had been genetically abolished. With the cells disabled, exposure to light no longer made any difference in the duration of the waves.

Finally, to assess the effect of light on the left-right sorting process in the dLGN, Renna examined the tissues from normal mice and the mice whose ipRGCs couldn’t sense light. In each case he fluorescently labeled the nerve endings from one eye red and the other green. A computer comparison of the tissues showed that the normal mice developed a higher degree of segregation between red and green than the knockout mice. In other words, the ability of ipRGCs to sense light improved the sorting of one eye from the other in the dLGN.

[Image: When tadpoles see the light. Credit: David Orenstein/Brown University]

Twinkling tadpoles

In his study, Aizenman collaborated with Arto Nurmikko, professor of engineering and physics, to investigate the function of developing neural networks in the optic tectum of tadpole brains. They flooded the tectal neurons in live tadpoles with a molecule that makes calcium ions fluoresce. As whole networks of neurons became active, they’d take in the ions and glow. The researchers recorded the tadpoles with a high-resolution, high-speed camera that could capture the millisecond-to-millisecond activity of the neurons.

Led in the lab by engineering graduate student Heng Xu, the lead author, and postdoctoral researcher Arseny Khakhalin, the team reared some young tadpoles under normal conditions of 12 hours of light and 12 hours of darkness during the crucial days of development when the tectum is forming. They reared others in the dark, and still others with a chemical that blocks the activity of NMDA receptors, a subtype of receptor for the neurotransmitter glutamate that is known to promote neural rewiring. Then they exposed all the tadpoles, however they were reared, to blue LED light flashes delivered via a fiber optic cable mounted next to the eye.

What they found over the course of several experiments was that the neural networks in the tectums of tadpoles reared under normal conditions developed a faster, more cohesive, and stronger response (in terms of the number of neurons) to light. The tectal neural networks of tadpoles kept in the dark during development failed to progress at all. Those whose NMDA receptors were blocked occupied a middle ground, showing more progress than dark-reared tadpoles but less than normal tadpoles. Tadpoles, they found, train their brains with the light they see.
Aizenman said he hopes the calcium ion imaging technique will prove useful in a wide variety of other neuroscience experiments, including studying how tadpoles neurally encode behaviors such as fleeing when they see certain stimuli. In the meantime, his team and Berson’s have added to the understanding scientists have been building of how creatures turn the somewhat mushy approximations of their brains at birth into high-functioning animal minds. “That’s what everybody is after,” Aizenman said. “How do you get this fine-tuned, finely wired brain in the first place?”

Berson and Renna’s work was funded by the National Institutes of Health. Aizenman and Nurmikko’s research received support from the National Science Foundation, the NIH’s National Eye Institute, and the Whitehall Foundation.

David Orenstein | EurekAlert!
<urn:uuid:8589f713-5e48-41d2-98c6-da6938bffeec>
3.375
2,347
Content Listing
Science & Tech.
38.450109
95,601,382
Don’t panic. Cosmic voids are actually all around us.

Imagine an especially hole-y block of Swiss cheese, and you have a pretty good visual for the leading theory for the structure of the universe. Voids, vast expanses of nearly empty space, account for about 80 percent of the observable universe. The other stuff, like dust and stars and galaxies like the Milky Way, exists in thread-like filaments between these voids.

As the universe expanded, gravity drew matter into clumps, leaving behind cavernous spheres. These empty regions, which can measure hundreds of millions of light years across, do contain some galaxies, but they’re dark caverns compared to the dense, bright bands of millions of galaxies ringing their edges.

According to researchers at the University of Wisconsin-Madison, our very own Milky Way galaxy may float near the center of one of these voids. Using data from large-scale telescope surveys that count galaxies, the researchers concluded that the Milky Way exists near the center of a region that has fewer galaxies than other parts of the universe. They estimated the size of this void to have a radius of about 1 billion light-years. If they’re right, humans are living in the middle of the largest known void in the observable universe.

The researchers first advanced this idea in 2013, but they took it a step further this week, in findings presented at a meeting of the American Astronomical Society in Austin. The Milky Way’s place inside a void, they said, would help explain a question in the way astronomers measure how fast the universe is expanding.

The universe has been expanding ever since the Big Bang, more than 13 billion years ago, and evidence suggests its expansion rate is accelerating. There is, however, dispute about the precise rate of expansion. Some astronomers observe bright objects like Cepheid stars or supernovae in the nearby cosmic neighborhood, studying their light to determine how fast they’re moving away from Earth. Others peer deeper into the universe’s history and study the cosmic microwave background, the radiation leftover from the Big Bang that fills the universe to this day. Different measurements yield different results, and the measurements from the local universe turn out to be higher than those gleaned from the early universe. Astronomers don’t know whether the discrepancies are a result of statistical fluctuations or hints of new physics we don’t yet understand.

If the Milky Way were in a void, the difference in results would make sense, according to Ben Hoscheit, one of the University of Wisconsin-Madison researchers, who graduated from the school this spring. “If you’re living inside this void, you’re going to see things being pulled away from you, towards the more dense regions of the universe,” he said. So if you’re sitting in this void, your surroundings expand faster than the rest of the universe. From this vantage point, observers would calculate a higher rate of expansion compared to what they find in the distant, early universe—like they do now.

“We should take into account our place in this very large universe, and we should be aware of how our place could potentially influence the measurements we make on Earth,” Hoscheit said.

Previous research has suggested the Milky Way may exist in a region less dense than others, said Peter Melchior, an astrophysicist at Princeton University who studies the distribution of matter in the universe.
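The measurement tension described above comes down to Hubble's law, v = H0 · d: recession velocity grows linearly with distance, so H0 is the slope of a velocity-versus-distance fit. As a minimal sketch — with invented distance-velocity pairs, not real survey data — the local-universe estimate looks like this:

```python
import numpy as np

# Hypothetical (distance, velocity) pairs for nearby supernovae:
# distances in megaparsecs, recession velocities in km/s. Invented values.
distances_mpc = np.array([30.0, 80.0, 150.0, 240.0, 400.0])
velocities_kms = np.array([2200.0, 5700.0, 11000.0, 17500.0, 29000.0])

# Hubble's law v = H0 * d has no intercept, so the least-squares slope
# is sum(v*d) / sum(d*d).
h0 = np.dot(velocities_kms, distances_mpc) / np.dot(distances_mpc, distances_mpc)
print(f"H0 ≈ {h0:.1f} km/s/Mpc")  # roughly 73 with these toy numbers
```

The dispute in the article is precisely that this kind of local fit yields a larger H0 than fits anchored to the cosmic microwave background.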
The idea that the Milky Way might exist in a void is not unreasonable, given the abundance of voids out there. Astronomers believe the Milky Way and its neighboring galaxies reside near the Local Void, a region 150 million light-years across and so empty that it’s pushing galaxies like ours away. But there’s no way to zoom out far enough to pinpoint the galaxy’s spot in the wider universe.

The size of the void Hoscheit and his team have proposed, though, is remarkable, Melchior said. The void is far larger than any previously found in telescope surveys, like the Sloan Digital Sky Survey, which in 2000 allowed astronomers to start investigating the large, Swiss-cheese scale of the universe for the first time. Most voids measure between 90 million and 450 million light-years in radius.

“They’re kind of like bubbles—they get bigger and bigger as the universe not only expands, but as more galaxies get pulled out over time,” said Greg Aldering, an astrophysicist at the Lawrence Berkeley National Laboratory who studies cosmological measurements and dark energy. “But it gets harder and harder to make a really, really big void.”

The suggestion that we might be in the center of that is a little uncomfortable, too, Melchior said. “It’s mathematically unlikely,” he explained over email. “But even more so, being in the center of the largest void in the observable universe would put us at a very special place, and since the times when we learned that the Earth (or the sun) isn’t at the center of the universe, astronomers try to avoid theories that put us at special places.”

Eventually, astronomers hope to use such voids’ shapes and distributions to explore the nature of dark energy, the unknown force responsible for accelerating the cosmos’ expansion. Last year, astronomers used data from the Sloan Digital Sky Survey to compile a catalog of cosmic voids, characterizing their sizes and densities; from observations of a quarter of the night sky, they found hundreds. The effects of dark energy may be easier to detect in these darkest territories of the universe, away from the glare of stars and galaxies.

The study of cosmic voids, whether we live in them or not, serves as a reminder of how young scientists’ understanding remains of humanity’s place in the cosmos. “I don’t know how many people realize just how filamentary and Swiss cheese-y the universe is,” Aldering said.
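To see why a 1-billion-light-year radius stands out against the 90-to-450-million-light-year voids quoted above, compare volumes rather than radii, since a sphere's volume grows as the cube of its radius. A quick back-of-the-envelope check, using only the figures from the article:

```python
# Volume scales as r**3, so compare the proposed void to the largest
# "typical" voids (radii in millions of light-years, per the article).
proposed_r = 1000.0     # ~1 billion light-years
typical_r_max = 450.0

ratio = (proposed_r / typical_r_max) ** 3
print(f"~{ratio:.0f}x the volume of the largest typical voids")  # ~11x
```

On these numbers, the proposed void would enclose roughly eleven times the volume of the largest voids previously catalogued.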
<urn:uuid:87fbb8d6-1e7e-4874-a215-f1b5c9776d01>
3.734375
1,302
News Article
Science & Tech.
41.559991
95,601,384
Definition of Newton's law of gravitation

1. Noun. (physics) The law that states any two bodies attract each other with a force that is directly proportional to the product of their masses and inversely proportional to the square of the distance between them.

Generic synonyms: Law, Law Of Nature
Group relationships: Gravitational Theory, Newton's Theory Of Gravitation, Theory Of Gravitation, Theory Of Gravity
Category relationships: Natural Philosophy, Physics
Terms within: Constant Of Gravitation, G, Gravitational Constant, Universal Gravitational Constant

Literary usage of Newton's law of gravitation

Below you will find example usage of this term as found in modern and/or classical literature:

1. Theoretical Chemistry from the Standpoint of Avogadro's Rule & Thermodynamics by Walther Nernst (1904): "Mass may be referred to length and time by means of Newton's law of gravitation (Maxwell): The gas equation pv = KT may by putting R = 1 be used to define ..."

2. The Phase Rule and Its Applications by Alexander Findlay (1904): "For example, Newton's law of 'gravitation'—that two masses approach each other as if impelled by a force which varies directly as their masses, ..."

3. Electrical Engineering: First Course by Ernst Julius Berg, Walter Lyman Upson (1916): "It is similar to Newton's law of gravitation and is: where F is the force acting upon the point charges Q1 and Q2, or the magnet poles of strength m1 and m2 ..."

4. Mechanics, Molecular Physics and Heat: A Twelve Weeks' College Course by Robert Andrews Millikan (1903): "The mathematical statement of Newton's law of gravitation is f ∝ mM/d², in which f is the force acting between any two bodies, m and M their respective masses ..."

5. Mechanics: An Elementary Text-book, Theoretical and Practical, for Colleges by Richard Glazebrook (1895): "... from the definition of force that the weight of a body is proportional to its mass; this result also is in accordance with Newton's law of gravitation. ..."

6. Transaction by Texas Medical Association (1876): "Before the announcement of Newton's law of gravitation, astronomy was the most conjectural of all the sciences, but with the thread furnished us by this ..."

7. Applications of the Calculus to Mechanics by Earle Raymond Hedrick, Oliver Dimon Kellogg (1909): "... 107 Momentum, 106, 107 Motion. See Rectilinear motion, Plane motion, etc. Newton's law of gravitation, 49 Newton's third law, 89 Nonconcurrent forces, ..."
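As a numeric companion to the proportionality reconstructed in quote 4 above, here is a short Python check of the full law F = G·m·M/d², applied to the Earth-Moon system. The constants are standard textbook values, not taken from the entry itself:

```python
G = 6.674e-11          # gravitational constant, N·m²/kg²
m_earth = 5.972e24     # mass of Earth, kg
m_moon = 7.342e22      # mass of Moon, kg
d = 3.844e8            # mean Earth-Moon distance, m

# Newton's law of gravitation: force directly proportional to the product
# of the masses, inversely proportional to the square of the distance.
force = G * m_earth * m_moon / d**2
print(f"{force:.3e} N")  # ≈ 1.98e20 N
```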
<urn:uuid:ee2e96cc-af62-4d6b-b8f1-53c688a6768f>
3.515625
635
Structured Data
Science & Tech.
53.024068
95,601,493
Scientists May Soon Have Evidence for Exotic Predictions of String Theory

Researchers at Northeastern University and the University of California, Irvine say that scientists might soon have evidence for extra dimensions and other exotic predictions of string theory. Early results from a neutrino detector at the South Pole, called AMANDA, show that ghostlike particles from space could serve as probes to a world beyond our familiar three dimensions, the research team says.

No more than a dozen high-energy neutrinos have been detected so far. However, the current detection rate and energy range indicate that AMANDA’s larger successor, called IceCube, now under construction, could provide the first evidence for string theory and other theories that attempt to build upon our current understanding of the universe.
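The inference from "a dozen events so far" to "IceCube could settle it" is a counting-statistics argument: a more sensitive detector multiplies the expected event count. The sketch below illustrates the shape of that argument with invented numbers — the exposure, sensitivity ratio, and thresholds are hypothetical, not AMANDA or IceCube specifications:

```python
import math

# Hypothetical numbers for illustration only -- not AMANDA/IceCube specs.
observed_events = 12     # "no more than a dozen" events seen so far
observed_years = 3.0     # assumed exposure that produced them
sensitivity_ratio = 30.0 # assumed IceCube-to-AMANDA scale-up
future_years = 5.0

rate_per_year = observed_events / observed_years
expected = rate_per_year * sensitivity_ratio * future_years

def prob_at_least(k, mu):
    """Poisson probability of observing at least k events given mean mu."""
    return 1.0 - sum(math.exp(-mu) * mu**i / math.factorial(i) for i in range(k))

print(f"expected events: {expected:.0f}")            # 600 with these inputs
print(f"P(>= 100 events): {prob_at_least(100, expected):.3f}")  # ~1.000
```

With any scale-up of this order, a handful of marginal detections turns into hundreds of expected events, which is why a larger detector can discriminate between competing models.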
<urn:uuid:8ee8ffdc-fcbe-43f0-be73-1d7a0c7d0689>
3.171875
726
Content Listing
Science & Tech.
35.894247
95,601,535