Through the Human Genome Project, the HapMap Project and other efforts, we are beginning to identify genes that are modified in some diseases. More difficult to measure and identify are the regulatory regions in DNA – the 'managers' of genes – that control gene activity and might be important in causing disease. Today, a team led by the Wellcome Trust Sanger Institute, together with colleagues in the USA and Switzerland, provides a measure of just how important regulatory-region variation might be, in a pilot study based on some 2% of the human genome. As many as 40 of 374 genes showed alterations in genetic activity that could be related to changes in DNA sequence called SNPs. "We were amazed at the power of this study to detect associations between SNP variations and gene activity," commented Dr Manolis Dermitzakis, Investigator, Division of Informatics at the Wellcome Trust Sanger Institute. "We were even more amazed at the number of genes affected: more than 10% of our sample – or perhaps 3,000 genes across the genome – could be subject to modification of activity in human populations due to common genetic variations."
An electromagnetic field (also EMF or EM field) is a physical field produced by electrically charged objects. It affects the behavior of charged objects in the vicinity of the field. The electromagnetic field extends indefinitely throughout space and describes the electromagnetic interaction. It is one of the four fundamental forces of nature (the others are gravitation, the weak interaction and the strong interaction).

The field can be viewed as the combination of an electric field and a magnetic field. The electric field is produced by stationary charges, and the magnetic field by moving charges (currents); these two are often described as the sources of the field. The way in which charges and currents interact with the electromagnetic field is described by Maxwell's equations and the Lorentz force law. The force created by the electric field is much stronger than the force created by the magnetic field.

From a classical perspective in the history of electromagnetism, the electromagnetic field can be regarded as a smooth, continuous field, propagated in a wavelike manner; whereas from the perspective of quantum field theory, the field is seen as quantized, being composed of individual particles.

Structure

The electromagnetic field may be viewed in two distinct ways: as a continuous structure or as a discrete structure. Classically, electric and magnetic fields are thought of as being produced by smooth motions of charged objects. For example, oscillating charges produce electric and magnetic fields that may be viewed in a 'smooth', continuous, wavelike fashion. In this case, energy is viewed as being transferred continuously through the electromagnetic field between any two locations.
For instance, the metal atoms in a radio transmitter appear to transfer energy continuously. This view is useful to a certain extent (radiation of low frequency), but problems are found at high frequencies (see ultraviolet catastrophe).

The electromagnetic field may also be thought of in a more 'coarse' way. Experiments reveal that in some circumstances electromagnetic energy transfer is better described as being carried in the form of packets called quanta (in this case, photons) with a fixed frequency. Planck's relation links the photon energy E of a photon to its frequency ν through the equation

E = hν

where h is Planck's constant and ν is the frequency of the photon. Although modern quantum optics tells us that there is also a semi-classical explanation of the photoelectric effect (the emission of electrons from metallic surfaces subjected to electromagnetic radiation), the photon was historically (although not strictly necessarily) used to explain certain observations. It is found that increasing the intensity of the incident radiation (so long as one remains in the linear regime) increases only the number of electrons ejected, and has almost no effect on the energy distribution of their ejection. Only the frequency of the radiation is relevant to the energy of the ejected electrons.

This quantum picture of the electromagnetic field (which treats it as analogous to harmonic oscillators) has proved very successful, giving rise to quantum electrodynamics, a quantum field theory describing the interaction of electromagnetic radiation with charged matter. It also gives rise to quantum optics, which differs from quantum electrodynamics in that the matter itself is modelled using quantum mechanics rather than quantum field theory.

Dynamics

In the past, electrically charged objects were thought to produce two different, unrelated types of field associated with their charge property.
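The Planck relation and the photoelectric-effect observations described above can be checked with a short numerical sketch. The work function value used below is an assumed illustrative figure, not a quantity from this article:

```python
# Photon energy from the Planck relation E = h*nu, and the maximum
# photoelectron energy K_max = h*nu - phi (phi = work function).
H = 6.62607015e-34      # Planck's constant, J*s
EV = 1.602176634e-19    # joules per electronvolt

def photon_energy_ev(freq_hz):
    """Energy of a single photon of frequency nu, in eV."""
    return H * freq_hz / EV

def max_ejection_energy_ev(freq_hz, work_function_ev):
    """Maximum kinetic energy of an ejected electron, in eV (0 below threshold)."""
    return max(0.0, photon_energy_ev(freq_hz) - work_function_ev)

green = 5.5e14                              # green light, ~545 nm
print(photon_energy_ev(green))              # ~2.27 eV per photon
print(max_ejection_energy_ev(green, 2.1))   # ~0.17 eV, for an assumed 2.1 eV work function
# Doubling the intensity changes neither number; doubling the frequency does.
```

Note that intensity never enters these formulas: it only scales the number of photons, which is exactly the experimental observation the text describes.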
An electric field is produced when the charge is stationary with respect to an observer measuring the properties of the charge, and a magnetic field as well as an electric field is produced when the charge moves (creating an electric current) with respect to this observer. Over time, it was realized that the electric and magnetic fields are better thought of as two parts of a greater whole: the electromagnetic field. Until 1820, when the Danish physicist H. C. Ørsted discovered the effect of an electric current in a wire on a compass needle, electricity and magnetism had been viewed as unrelated phenomena. In 1831, Michael Faraday, one of the great thinkers of his time, made the seminal observation that time-varying magnetic fields could induce electric currents, and then, in 1864, James Clerk Maxwell published his famous paper A Dynamical Theory of the Electromagnetic Field.

Once this electromagnetic field has been produced from a given charge distribution, other charged objects in this field will experience a force, in a similar way that planets experience a force in the gravitational field of the Sun. If these other charges and currents are comparable in size to the sources producing the above electromagnetic field, then a new net electromagnetic field will be produced. Thus, the electromagnetic field may be viewed as a dynamic entity that causes other charges and currents to move, and which is also affected by them. These interactions are described by Maxwell's equations and the Lorentz force law. (This discussion ignores the radiation reaction force.)

Feedback loop

The behavior of the electromagnetic field can be divided into four different parts of a loop:
- the electric and magnetic fields are generated by electric charges,
- the electric and magnetic fields interact with each other,
- the electric and magnetic fields produce forces on electric charges,
- the electric charges move in space.
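The third step of this loop, fields producing forces on charges, is governed by the Lorentz force law, F = q(E + v × B). A minimal numeric sketch, with all field and velocity values chosen purely for illustration:

```python
# Lorentz force F = q(E + v x B) on a point charge.
def cross(a, b):
    """Cross product of two 3-vectors given as tuples."""
    return (a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0])

def lorentz_force(q, E, v, B):
    """Force (N) on charge q (C) in fields E (V/m) and B (T), moving at v (m/s)."""
    vxB = cross(v, B)
    return tuple(q * (E[i] + vxB[i]) for i in range(3))

q = 1.602e-19                    # one elementary charge
E = (0.0, 0.0, 1.0e3)            # 1 kV/m along z (assumed value)
v = (1.0e5, 0.0, 0.0)            # charge moving along x (assumed value)
B = (0.0, 1.0e-2, 0.0)           # 10 mT along y (assumed value)
# Here v x B = (0, 0, 1e3), so the electric and magnetic parts add along z.
print(lorentz_force(q, E, v, B))
```

The example illustrates the two list items below it: the electric part of the force is parallel to E, while the magnetic part is perpendicular to both B and v.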
A common misunderstanding is that (a) the quanta of the fields act in the same manner as (b) the charged particles that generate the fields. In our everyday world, charged particles such as electrons move slowly through matter, with a drift velocity of a fraction of a centimeter (or inch) per second, but fields propagate at the speed of light, approximately 300 thousand kilometers (or 186 thousand miles) per second. The speed difference between charged particles and field quanta is thus enormous: roughly twelve orders of magnitude for an ordinary wire. Maxwell's equations relate (a) the presence and movement of charged particles with (b) the generation of fields. Those fields can then exert forces on, and thereby move, other slowly moving charged particles. Charged particles can move at relativistic speeds nearing field propagation speeds, but, as Einstein showed, this requires enormous field energies, which are not present in our everyday experiences with electricity, magnetism, matter, and time and space.

The feedback loop can be summarized in a list, including phenomena belonging to each part of the loop:
- charged particles generate electric and magnetic fields
- the fields interact with each other
- fields act upon particles
  - Lorentz force: force due to the electromagnetic field
    - electric force: same direction as the electric field
    - magnetic force: perpendicular both to the magnetic field and to the velocity of the charge
- particles move
  - current is movement of particles
- particles generate more electric and magnetic fields; the cycle repeats

Mathematical description

There are different mathematical ways of representing the electromagnetic field. The first views the electric and magnetic fields as three-dimensional vector fields. These vector fields each have a value defined at every point of space and time and are thus often regarded as functions of the space and time coordinates.
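The everyday speed gap described above can be estimated from the free-electron model, v_d = I/(nAq). The carrier density is the standard value for copper; the 1 A current through a 1 mm² wire is an assumed example:

```python
# Drift velocity of conduction electrons versus the field propagation speed.
N_COPPER = 8.5e28     # conduction electrons per m^3 in copper
Q_E = 1.602e-19       # elementary charge, C
C_LIGHT = 2.998e8     # speed of light, m/s

def drift_velocity(current_a, area_m2, n=N_COPPER, q=Q_E):
    """Mean drift speed (m/s) of carriers for a given current and cross-section."""
    return current_a / (n * area_m2 * q)

v = drift_velocity(1.0, 1.0e-6)   # 1 A through a 1 mm^2 wire (assumed example)
print(v)                          # ~7.3e-5 m/s, a fraction of a millimeter per second
print(C_LIGHT / v)                # ~4e12: the fields outrun the carriers by ~12 orders of magnitude
```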
As such, they are often written as E(x, y, z, t) (electric field) and B(x, y, z, t) (magnetic field). If only the electric field (E) is non-zero and constant in time, the field is said to be an electrostatic field. Similarly, if only the magnetic field (B) is non-zero and constant in time, the field is said to be a magnetostatic field. However, if either the electric or the magnetic field has a time-dependence, then both fields must be considered together as a coupled electromagnetic field using Maxwell's equations. With the advent of special relativity, physical laws became susceptible to the formalism of tensors. Maxwell's equations can be written in tensor form, generally viewed by physicists as a more elegant means of expressing physical laws.

The behaviour of electric and magnetic fields, whether in cases of electrostatics, magnetostatics, or electrodynamics (electromagnetic fields), is governed by Maxwell's equations. In the vector field formalism, these are:

∇ · E = ρ/ε₀ (Gauss's law)
∇ · B = 0 (Gauss's law for magnetism)
∇ × E = −∂B/∂t (Faraday's law)
∇ × B = μ₀J + μ₀ε₀ ∂E/∂t (Ampère–Maxwell law)

where ρ is the charge density, which can (and often does) depend on time and position, ε₀ is the permittivity of free space, μ₀ is the permeability of free space, and J is the current density vector, also a function of time and position. The units used above are the standard SI units.

Inside a linear material, Maxwell's equations change by switching the permeability and permittivity of free space with the permeability and permittivity of the linear material in question. Inside other materials which possess more complex responses to electromagnetic fields, these terms are often represented by complex numbers, or tensors. The Lorentz force law governs the interaction of the electromagnetic field with charged matter. When a field travels across to different media, the properties of the field change according to the various boundary conditions. These equations are derived from Maxwell's equations.
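As a quick numerical sanity check of Gauss's law from Maxwell's equations above: the flux of the Coulomb field of a point charge through any sphere centered on it equals q/ε₀, independent of the sphere's radius. A minimal sketch, with the charge value assumed for illustration:

```python
# Flux of the Coulomb field of a point charge through concentric spheres.
# By Gauss's law the result should equal q/epsilon_0 at every radius.
import math

EPS0 = 8.8541878128e-12   # permittivity of free space, F/m

def coulomb_field(q, r):
    """Radial electric field (V/m) of a point charge q at distance r."""
    return q / (4 * math.pi * EPS0 * r**2)

def flux_through_sphere(q, r):
    """E is radial and uniform over the sphere, so flux = E * 4*pi*r^2."""
    return coulomb_field(q, r) * 4 * math.pi * r**2

q = 1e-9   # 1 nC test charge (assumed example value)
for r in (0.01, 1.0, 100.0):
    print(flux_through_sphere(q, r))   # same value at every radius
print(q / EPS0)                        # matches: the enclosed charge over epsilon_0
```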
The tangential components of the electric and magnetic fields at the boundary of two media are related as follows:

E₁ₜ = E₂ₜ and B₁ₜ/μ₁ = B₂ₜ/μ₂ (in the absence of surface currents)

The angle of refraction of an electric field between media is related to the permittivity ε of each medium:

tan α₁ / tan α₂ = ε₁/ε₂

The angle of refraction of a magnetic field between media is related to the permeability μ of each medium:

tan α₁ / tan α₂ = μ₁/μ₂

Properties of the field

Reciprocal behavior of electric and magnetic fields

The two Maxwell equations, Faraday's Law and the Ampère–Maxwell Law, illustrate a very practical feature of the electromagnetic field. Faraday's Law may be stated roughly as 'a changing magnetic field creates an electric field'. This is the principle behind the electric generator. Ampère's Law roughly states that 'a changing electric field creates a magnetic field'. Thus, this law can be applied to generate a magnetic field and run an electric motor.

Behavior of the fields in the absence of charges or currents

Maxwell's equations take the form of an electromagnetic wave in a volume of space not containing charges or currents (free space), that is, where ρ and J are zero. Under these conditions, the electric and magnetic fields satisfy the electromagnetic wave equation:

∇²E − μ₀ε₀ ∂²E/∂t² = 0 and ∇²B − μ₀ε₀ ∂²B/∂t² = 0

Relation to and comparison with other physical fields

Being one of the four fundamental forces of nature, it is useful to compare the electromagnetic field with the gravitational, strong and weak fields. The word 'force' is sometimes replaced by 'interaction' because modern particle physics models electromagnetism as an exchange of particles known as gauge bosons.

Electromagnetic and gravitational fields

Sources of electromagnetic fields consist of two types of charge: positive and negative. This contrasts with the sources of the gravitational field, which are masses. Masses are sometimes described as gravitational charges, the important feature of them being that there are only positive masses and no negative masses.
Further, gravity differs from electromagnetism in that positive masses attract other positive masses, whereas like charges in electromagnetism repel each other. The relative strengths and ranges of the interactions and other information are tabulated below:

|Theory|Interaction|Mediator|Relative magnitude|Behavior|Range|
|Chromodynamics|Strong interaction|gluon|10³⁸|1|10⁻¹⁵ m|
|Flavordynamics|Weak interaction|W and Z bosons|10²⁵|1/r⁵ to 1/r⁷|10⁻¹⁶ m|

Applications

The electromagnetic field can be used to record data on static electricity. Old televisions can be traced with electromagnetic fields.

Static E and M fields and static EM fields

When an EM field (see electromagnetic tensor) is not varying in time, it may be seen as a purely electric field, a purely magnetic field, or a mixture of both. However, the general case of a static EM field with both electric and magnetic components present is the case that appears to most observers. Observers who see only an electric or magnetic field component of a static EM field have the other (electric or magnetic) component suppressed, due to the special case of the immobile state of the charges that produce the EM field in that case. In such cases the other component becomes manifest in other observer frames.

A consequence of this is that any case that seems to consist of a 'pure' static electric or magnetic field can be converted to an EM field with both E and M components present, by simply moving the observer into a frame of reference which is moving with respect to the frame in which only the 'pure' electric or magnetic field appears. That is, a pure static electric field will show the familiar magnetic field associated with a current in any frame of reference where the charge moves. Likewise, any new motion of a charge in a region that seemed previously to contain only a magnetic field will show that the space now contains an electric field as well, which will be found to produce an additional Lorentz force upon the moving charge.
Thus, electrostatics, as well as magnetism and magnetostatics, are now seen as studies of the static EM field when a particular frame has been selected to suppress the other type of field; since an EM field with both electric and magnetic components will appear in any other frame, these 'simpler' effects are merely artifacts of the observer's frame of reference. The 'applications' of all such non-time-varying (static) fields are discussed in the main articles linked in this section.

Time-varying EM fields in Maxwell's equations

An EM field that varies in time has two 'causes' in Maxwell's equations. One is charges and currents (so-called 'sources'), and the other cause for an E or M field is a change in the other type of field (this last cause also appears in 'free space' very far from currents and charges). An electromagnetic field very far from currents and charges (sources) is called electromagnetic radiation (EMR), since it radiates from the charges and currents in the source, has no 'feedback' effect on them, and is not affected directly by them in the present time (rather, it is indirectly produced by a sequence of changes in fields radiating out from them in the past). EMR consists of the radiations in the electromagnetic spectrum, including radio waves, microwaves, infrared, visible light, ultraviolet light, X-rays, and gamma rays. The many commercial applications of these radiations are discussed in the named and linked articles. A notable application of visible light is that this type of energy from the Sun powers all life on Earth that either makes or uses oxygen.

A changing electromagnetic field which is physically close to currents and charges (see near and far field for a definition of 'close') will have a dipole characteristic that is dominated by either a changing electric dipole or a changing magnetic dipole. This type of dipole field near sources is called an electromagnetic near-field.
Changing electric dipole fields, as such, are used commercially as near-fields, mainly as a source of dielectric heating. Otherwise, they appear parasitically around conductors which absorb EMR, and around antennas whose purpose is to generate EMR at greater distances. Changing magnetic dipole fields (i.e., magnetic near-fields) are used commercially for many types of magnetic induction devices. These include motors and electrical transformers at low frequencies, and devices such as metal detectors and MRI scanner coils at higher frequencies. Sometimes these high-frequency magnetic fields change at radio frequencies without being far-field waves and thus radio waves; see RFID tags. See also near-field communication. Further commercial uses of near-field EM effects may be found in the article on virtual photons, since at the quantum level these fields are represented by those particles. Far-field effects (EMR), in the quantum picture of radiation, are represented by ordinary photons.

Health and safety

The potential health effects of the very low frequency EMFs surrounding power lines and electrical devices are the subject of ongoing research and a significant amount of public debate. The US National Institute for Occupational Safety and Health (NIOSH) and other US government agencies do not consider EMFs a proven health hazard. NIOSH has issued some cautionary advisories but stresses that the data are currently too limited to draw good conclusions. The potential effects of electromagnetic fields on human health vary widely depending on the frequency and intensity of the fields.
For more information on the health effects due to specific parts of the electromagnetic spectrum, see the following articles:
- Static electric fields: see Electric shock
- Static magnetic fields: see MRI#Safety
- Extremely low frequency (ELF): see Power lines#Health concerns
- Radio frequency (RF): see Electromagnetic radiation and health
- Mobile telephony: see Mobile phone radiation and health
- Light: see Laser safety
- Ultraviolet (UV): see Sunburn
- Gamma rays: see Gamma ray

See also
- Afterglow plasma
- Antenna factor
- Classification of electromagnetic fields
- Electric field
- Electromagnetic propagation
- Electromagnetic tensor
- Electromagnetic therapy
- Free space
- Fundamental interaction
- Electromagnetic radiation
- Electromagnetic spectrum
- Electromagnetic field measurements
- Gravitational field
- List of environment topics
- Magnetic field
- Maxwell's equations
- Photoelectric effect
- Quantization of the electromagnetic field
- Quantum electrodynamics
- Riemann–Silberstein vector
- SI units
Radiative cooling is the process by which a body loses heat by thermal radiation.

Terrestrial radiative cooling

In the case of the Earth-atmosphere system, radiative cooling is the process by which long-wave (infrared) radiation is emitted to balance the absorption of short-wave (visible) energy from the Sun. The exact process by which Earth loses heat is rather more complex than often portrayed. In particular, convective transport of heat and evaporative transport of latent heat are both important in removing heat from the surface and redistributing it in the atmosphere. Pure radiative transport is more important higher up. Diurnal and geographical variation further complicate the picture.

The large-scale circulation of the Earth's atmosphere is driven by the difference in absorbed solar radiation per square meter, as the Sun heats the Earth more in the tropics, mostly because of geometrical factors. The atmospheric and oceanic circulation redistributes some of this energy as sensible heat and latent heat, partly via the mean flow and partly via eddies, known as cyclones in the atmosphere. Thus the tropics radiate less to space than they would if there were no circulation, and the poles radiate more; however, in absolute terms the tropics radiate more energy to space.

Nocturnal surface cooling

Radiative cooling is commonly experienced on cloudless nights, when heat is radiated into space from the surface of the Earth, or from the skin of a human observer. The effect is well known among amateur astronomers and can be felt on the skin on a cloudless night. To feel the effect, one compares the difference between looking straight up into a cloudless night sky for several seconds and placing a sheet of paper between one's face and the sky.
Since outer space radiates at a temperature of about 3 kelvin (−270 degrees Celsius or −454 degrees Fahrenheit), and the sheet of paper radiates at about 300 kelvin (room temperature), the sheet of paper radiates more heat to one's face than does the darkened cosmos. The effect is blunted by Earth's surrounding atmosphere, and particularly the water vapor it contains, so the apparent temperature of the sky is far warmer than outer space. It is not correct to say that the sheet 'blocks the cold' of the night sky; rather, the sheet radiates heat to your face, just as a campfire warms your face. The only difference is that a campfire is several hundred degrees warmer than a sheet of paper, just as a sheet of paper (at approximately air temperature) is warmer than the night sky.

Kelvin's estimate of the Earth's age

The term radiative cooling is generally used for local processes, though the same principles apply to cooling over geological time. Kelvin first used this principle to estimate the age of the Earth (although his estimate ignored the substantial heat released by radioisotope decay, not known at the time).

Radiative cooling is one of the few ways an object in space can give off energy. In particular, white dwarf stars are no longer generating energy by fusion or gravitational contraction, and have no solar wind. So the only way their temperature changes is by radiative cooling. This makes their temperature as a function of age very predictable, so by observing the temperature, astronomers can deduce the age of the star.

Cool roofs combine high optical reflectance with high infrared emissivity, thereby simultaneously reducing heat gain from the sun and increasing heat removal through radiation. Radiative cooling thus offers immense potential for supplementary passive cooling of residential and commercial buildings.
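The paper-versus-sky comparison above can be made quantitative with the Stefan-Boltzmann law, P/A = εσT⁴. In this sketch the surfaces are treated as ideal black bodies, and the 255 K apparent sky temperature is an assumed, atmosphere-blunted illustrative value:

```python
# Radiant exitance P/A = sigma * T^4 for the surfaces in the face-vs-sky
# comparison, plus the net cooling flux of an ideal upward-facing surface.
SIGMA = 5.670374419e-8   # Stefan-Boltzmann constant, W/(m^2*K^4)

def exitance(temp_k, emissivity=1.0):
    """Power radiated per unit area (W/m^2) by a gray body at temp_k."""
    return emissivity * SIGMA * temp_k**4

print(exitance(300.0))   # ~459 W/m^2 from a sheet of paper at room temperature
print(exitance(3.0))     # ~5e-6 W/m^2 from the 3 K cosmos: effectively nothing

# Net radiative loss of a 300 K surface facing a clear sky whose apparent
# temperature is 255 K (assumed value; the real value varies with humidity):
net = exitance(300.0) - exitance(255.0)
print(net)               # ~220 W/m^2 available for passive cooling
```

The huge asymmetry between the first two numbers is exactly why the paper feels warmer than the open sky, and the net figure shows why a clear night can freeze water even in above-freezing air.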
In 2017, researchers announced a metamaterial that embeds resonant polar dielectric microspheres randomly in a polymeric matrix. The material is transparent to the solar spectrum and achieves an infrared emissivity greater than 0.93. When backed with a silver coating, the material produced a midday radiative cooling power of 93 W/m² under direct sunshine, and it lends itself to high-throughput, economical roll-to-roll manufacturing.

Nocturnal ice making

In India, before the invention of artificial refrigeration technology, ice making by nocturnal cooling was common. The apparatus consisted of a shallow ceramic tray with a thin layer of water, placed outdoors with a clear exposure to the night sky. The bottom and sides were insulated with a thick layer of hay. On a clear night the water would lose heat by radiation upwards. Provided the air was calm and not too far above freezing, heat gain from the surrounding air by convection was low enough for the water to freeze.

See also
- Optical solar reflector, used for thermal control of spacecraft
- Passive cooling
- Radiative forcing
- Stefan-Boltzmann law
- Terrestrial albedo effect
- Urban heat island
- Urban thermal plume
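The ice-tray apparatus described above can be bounded by a simple energy balance: the heat to remove is the sensible heat down to 0 °C plus the latent heat of fusion. Every scenario number here (layer depth, starting temperature, net cooling flux) is an assumed illustrative value:

```python
# Time to freeze a thin water layer by radiating to the night sky.
RHO_WATER = 1000.0    # kg/m^3
C_WATER = 4186.0      # specific heat of water, J/(kg*K)
L_FUSION = 334000.0   # latent heat of fusion, J/kg

def freeze_time_hours(depth_m, start_c, net_flux_w_m2):
    """Hours to cool a water layer from start_c to 0 C and freeze it completely."""
    mass = RHO_WATER * depth_m                       # kg per m^2 of tray
    energy = mass * (C_WATER * start_c + L_FUSION)   # J per m^2 to remove
    return energy / net_flux_w_m2 / 3600.0

# 5 mm of water starting at 5 C, with ~60 W/m^2 net loss remaining after
# convective gains from the air are subtracted (all assumed figures):
print(freeze_time_hours(0.005, 5.0, 60.0))   # ~8.2 hours: feasible in one calm night
```

The estimate shows why the technique needed shallow trays, insulation, and calm air: a deeper layer or a stronger convective gain would push the freeze time past a single night.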
Rutherford Atomic Model

We have all seen plums in a pudding. It was previously thought that the electrons in an atom are distributed over a positive charge just like plums in a pudding: the positive charge exists throughout the atom, and the negative electrons are unevenly distributed in it. This concept of the atom is known as the plum pudding model, and it was introduced by J. J. Thomson, who was also the discoverer of the electron. According to the plum pudding model, the positive and negative charges of an atom are distributed throughout the body of the atom, and there should not be any concentrated mass in the atom.

In 1899, Ernest Rutherford of Manchester University discovered alpha particles, which are positively charged helium ions emitted from radioactive substances such as uranium. These alpha particles create bright spots when they strike a zinc sulphide coated screen. Since the plum pudding model has no concentrated mass in the atom, it was predicted that if a thin metallic foil were bombarded with positively charged alpha particles, all of the alpha particles would pass through the foil without much deflection of their path: the tiny electric fields inside the atoms could not affect the motion of the particles much, so deflections of less than about 1° were expected. This prediction inspired Rutherford to conduct experiments to test the plum pudding model. He instructed his colleagues Ernest Marsden and Hans Geiger to bombard a thin metallic foil with alpha particles.

Following his instructions, Marsden and Geiger conducted the experiment and made history. They placed a very thin gold film in front of an alpha-ray gun, and surrounded the gold film with a zinc sulphide screen to observe the bright spots produced when alpha particles struck it.
They conducted the experiment in a dark room. As predicted, they observed alpha particles crossing the foil and striking the zinc sulphide screen behind it. But when they counted the bright spots on the screen, an unexpected result emerged: not all of the alpha particles had crossed the foil in a straight line. A small percentage of the bombarding particles changed direction while crossing the gold foil, and a very few of them bounced straight back towards the source.

After a detailed study of these observations, Geiger and Marsden submitted a report to Rutherford. On studying their report, Rutherford proposed a different model of the atom, now known as the Rutherford model. He reasoned that the alpha particles which bounced straight back must have collided with something much heavier than themselves, and that this mass must be positively charged. Some of the deflected particles had not bounced back, but had been turned through very large angles. By observing the different angles of deflection and the number of particles deflected at each angle, he concluded that the positive alpha particles were being repelled by a comparatively huge, concentrated positive charge. He stated that the mass and the positive charge of the atom are concentrated at the same place, the centre of the atom, which he called the nucleus; apart from this central nucleus, the space in the atom is essentially empty.

After this gold foil experiment, Rutherford gave a more realistic model of the atom, proposed in 1911 and also called the Nuclear Atomic Model or the Planetary Model of the atom. According to Rutherford's atomic model, almost all the mass of an atom is concentrated in the nucleus.
This nucleus is positively charged and is surrounded by tiny, light, negatively charged particles called electrons. The electrons circulate around the nucleus in the same manner as planets circulate around the Sun in the planetary system, which is why this model is also referred to as the Planetary Model of the atom. The radius of the nucleus is about 10⁻¹³ cm, while the radius of the atom as a whole, and hence of the electron orbits, is about 10⁻⁸ cm. Thus, like a planetary system, the atom is of an exceedingly open nature, which is why it can be penetrated by high-speed particles of various kinds. Rutherford's planetary atomic model is shown in the figure below.

A force of attraction exists between the positively charged nucleus and the negatively charged electrons travelling around it. This electrostatic force between nucleus and electrons is similar to the gravitational attraction between the Sun and the planets revolving around it. Most of this planetary atom is open space, which offers no resistance to the passage of tiny positively charged particles such as alpha particles. The nucleus, being very small, dense and positively charged, repels and scatters positively charged particles that pass close to it; this explains the scattering of the positively charged alpha particles by the gold foil observed by Rutherford. Rutherford's atomic model thus succeeded the Plum Pudding model given by the English physicist Sir J. J. Thomson. According to Rutherford's atomic model, the electrons are not attached to the mass of the atom: they are either stationary in space or rotate in circular paths around the nucleus.
But if the electrons were stationary, they would fall into the nucleus under the attractive force between electron and nucleus. If, on the other hand, the electrons move in circular paths, then according to classical electromagnetic theory the accelerated charge of each electron would continuously radiate away its energy, and the electron would spiral down into the nucleus, as shown in the figure below. The Rutherford Atomic Model fails to explain why the electrons do not fall into the positively charged nucleus. The deficiencies of Rutherford's atomic model can thus be summarised as follows:
- The Rutherford atomic model does not explain the distribution of the electrons in their orbits.
- The Rutherford atomic model does not explain the stability of the atom as a whole.
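The backscattering argument above can be made quantitative with a simple energy-balance estimate. The sketch below is an added illustration (the 5 MeV alpha energy and the function name are assumptions, not from the original text): it computes the head-on distance of closest approach to a gold nucleus by equating the alpha particle's kinetic energy to the Coulomb potential energy at the turning point.

```python
import math

K = 8.9875517873681764e9   # Coulomb constant 1/(4*pi*eps0), N*m^2/C^2
E_CHARGE = 1.602176634e-19 # elementary charge, C

def closest_approach(z_proj, z_target, kinetic_energy_j):
    """Head-on distance of closest approach: the projectile stops where
    all of its kinetic energy has become Coulomb potential energy,
    E_k = K * (z1*e) * (z2*e) / d."""
    return K * z_proj * z_target * E_CHARGE**2 / kinetic_energy_j

# A typical 5 MeV alpha particle (Z = 2) against a gold nucleus (Z = 79):
e_k = 5e6 * E_CHARGE              # 5 MeV converted to joules
d = closest_approach(2, 79, e_k)
print(f"closest approach: {d:.2e} m")  # roughly 4.5e-14 m, i.e. ~4.5e-12 cm
```

The result, a few tens of femtometres, is some ten thousand times smaller than the atomic radius of about 10⁻⁸ cm, which is exactly why only a nucleus-sized concentration of positive charge can turn an alpha particle around.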
A cluster of double layers forming in an Alfvén wave, about a sixth of the distance from the left. Legend: red = electrons, green = ions, yellow = electric potential, orange = parallel electric field, pink = charge density, blue.

In plasma physics, an Alfvén wave, named after Hannes Alfvén, is a type of magnetohydrodynamic wave in which ions oscillate in response to a restoring force provided by an effective tension on the magnetic field lines.

An Alfvén wave in a plasma is a low-frequency (compared to the ion cyclotron frequency) travelling oscillation of the ions and the magnetic field. The ion mass density provides the inertia and the magnetic field line tension provides the restoring force. The wave propagates in the direction of the magnetic field, although waves exist at oblique incidence and smoothly change into the magnetosonic wave when the propagation is perpendicular to the magnetic field. The motion of the ions and the perturbation of the magnetic field are in the same direction and transverse to the direction of propagation. The wave is dispersionless.

The low-frequency relative permittivity ε of a magnetized plasma is given by

  ε = 1 + c²μ₀ρ/B²

where B is the magnetic field strength, c is the speed of light, μ₀ is the permeability of the vacuum, and ρ = Σₛ nₛmₛ is the total mass density of the charged plasma particles. Here the sum over s runs over all plasma species, both electrons and the (few types of) ions. Therefore, the phase velocity of an electromagnetic wave in such a medium is

  v = c/√ε = vₐ/√(1 + vₐ²/c²)

where

  vₐ = B/√(μ₀ρ)

is the Alfvén velocity. If vₐ ≪ c, then v ≈ vₐ. On the other hand, when vₐ ≫ c, then v ≈ c. That is, at high field or low density, the velocity of the Alfvén wave approaches the speed of light, and the Alfvén wave becomes an ordinary electromagnetic wave.

Neglecting the contribution of the electrons to the mass density and assuming that there is a single ion species, we get

  vₐ = B/√(μ₀nᵢmᵢ)  (in SI)
  vₐ = B/√(4πnᵢmᵢ)  (in Gauss)

where nᵢ is the ion number density and mᵢ is the ion mass.

In plasma physics, the Alfvén time τₐ is an important timescale for wave phenomena.
It is related to the Alfvén velocity by

  τₐ = a/vₐ

where a denotes the characteristic scale of the system; for example, a is the minor radius of the torus in a tokamak.

The general Alfvén wave velocity is defined by Gedalin (1993):

  v = cB/√(4π(e + p) + B²)

where e is the total energy density of the plasma particles, p is the total plasma pressure, and B²/8π is the magnetic field pressure. In the non-relativistic limit p ≪ e ≈ ρc², and we immediately recover the expression vₐ = B/√(4πρ) from the previous section.

Heating the Corona

Cold plasma floating in the corona above the solar limb. Here Alfvén waves were observed for the first time, extrapolated from fluctuations of the plasma.

The coronal heating problem is a longstanding question in heliophysics. It is unknown why the Sun's corona sits at temperatures above one million degrees while the Sun's surface (the photosphere) is only a few thousand degrees. Natural intuition would predict a decrease in temperature with increasing distance from a heat source, but it is theorized that the photosphere, influenced by the Sun's magnetic fields, emits waves which carry energy (i.e. heat) up to the corona and solar wind. It is important to note that, because the density of the corona is far smaller than that of the photosphere, the total heat and energy content of the photosphere is much higher than that of the corona. Temperature depends only on the average speed of a species, and less energy is required to heat the few particles of the coronal atmosphere to higher temperatures.

Alfvén first proposed the existence of an electromagnetic-hydrodynamic wave in 1942 in the journal Nature. He claimed the Sun had all the necessary criteria to support such waves and that they might in turn be responsible for sunspots.

Magnetic waves, called Alfvén S-waves, flow from the base of black hole jets.

From his paper: If a conducting liquid is placed in a constant magnetic field, every motion of the liquid gives rise to an E.M.F. which produces electric currents.
Owing to the magnetic field, these currents give mechanical forces which change the state of motion of the liquid. Thus a kind of combined electromagnetic-hydrodynamic wave is produced. -- Hannes Alfvén, "Existence of Electromagnetic-Hydrodynamic Waves"

Beneath the Sun's photosphere lies the convection zone. The rotation of the Sun, as well as varying pressure gradients beneath the surface, produces periodic electromagnetism in the convection zone which can be observed on the Sun's surface. This random motion of the surface gives rise to Alfvén waves. The waves travel through the chromosphere and transition zone and interact with much of the ionized plasma. The wave itself carries energy as well as some of the electrically charged plasma.

De Pontieu and Haerendel suggested in the early 1990s that Alfvén waves may also be associated with the plasma jets known as spicules. It was theorized that these brief spurts of superheated gas were carried by the combined energy and momentum of their own upward velocity and the oscillating transverse motion of the Alfvén waves.

In 2007, Alfvén waves were reportedly observed travelling towards the corona for the first time, by Tomczyk et al.; however, the observed wave amplitudes were not high enough to conclude that the energy carried by the waves was sufficient to heat the corona to its enormous temperatures. In 2011, though, McIntosh et al. reported the observation of highly energetic Alfvén waves combined with energetic spicules which could sustain heating of the corona to its million-kelvin temperature. The observed amplitudes (20.0 km/s, against the 0.5 km/s observed in 2007) carried over one hundred times more energy than those observed in 2007, and the short period of the waves also allowed more energy transfer into the coronal atmosphere. The 50,000 km long spicules may also play a part in accelerating the solar wind past the corona.
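As a concrete illustration of the Alfvén velocity and Alfvén time formulas, the following sketch evaluates vₐ = B/√(μ₀nᵢmᵢ) and τₐ = a/vₐ in SI units. The field strength, ion density, and length scale used here are illustrative order-of-magnitude choices for the solar corona, not values from the text:

```python
import math

MU0 = 4e-7 * math.pi        # vacuum permeability, H/m
M_PROTON = 1.67262192e-27   # proton mass, kg

def alfven_velocity(b_tesla, n_ions, m_ion=M_PROTON):
    """SI Alfvén velocity v_A = B / sqrt(mu0 * rho), with rho = n_i * m_i."""
    return b_tesla / math.sqrt(MU0 * n_ions * m_ion)

def alfven_time(scale_m, b_tesla, n_ions, m_ion=M_PROTON):
    """Alfvén time tau_A = a / v_A for a characteristic scale a."""
    return scale_m / alfven_velocity(b_tesla, n_ions, m_ion)

# Assumed coronal numbers: B ~ 10 G = 1e-3 T, n_i ~ 1e15 m^-3 (protons),
# and a ~ 100,000 km structure as the characteristic scale.
v = alfven_velocity(1e-3, 1e15)
t = alfven_time(1e8, 1e-3, 1e15)
print(f"v_A ~ {v/1e3:.0f} km/s, tau_A ~ {t:.0f} s")
```

With these assumed inputs vₐ comes out near 700 km/s, in line with the hundreds of km/s speeds typical of coronal wave discussions.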
However, the above-mentioned discoveries of Alfvén waves in the Sun's complex atmosphere, beginning with the Hinode era in 2007 and continuing over the next ten years, mostly fall in the realm of Alfvénic waves, essentially generated as a mixed mode due to transverse structuring of the magnetic and plasma properties in localized flux tubes.

In 2009, David Jess from Queen's University Belfast and colleagues reported the periodic variation of H-alpha line-width above chromospheric bright points, as observed by the Swedish Solar Telescope (SST). They claimed the first direct detection of long-period (126-700 s) incompressible torsional Alfvén waves in the lower solar atmosphere.

In 2017, Abhishek Kumar Srivastava from IIT (BHU), India, and colleagues detected the existence of high-frequency torsional Alfvén waves in the Sun's chromospheric fine-structured flux tubes. They discovered that these high-frequency waves carry substantial energy, capable of heating the Sun's corona and of driving the supersonic solar wind.

Using spectral imaging observations, non-LTE inversions and magnetic field extrapolations of sunspot atmospheres, Samuel Grant and colleagues (Grant et al. 2018, Nature Physics) found evidence for elliptically-polarized Alfvén waves forming fast-mode shocks in the outer regions of the chromospheric umbral atmosphere. They provided a quantification of the physical heat delivered by the dissipation of such Alfvén wave modes above active region spots.

How this phenomenon became understood

- 1942: Alfvén suggests the existence of electromagnetic-hydromagnetic waves in a paper published in Nature 150, 405-406 (1942).
- 1949: Laboratory experiments by S. Lundquist produce such waves in magnetized mercury, with a velocity that approximated Alfvén's formula.
- 1949: Enrico Fermi uses Alfvén waves in his theory of cosmic rays. According to Alexander J. Dessler, in a 1970 Science article, when Fermi heard the lecture at the University of Chicago he nodded his head and exclaimed "of course", and the next day the physics world said "of course".
- 1950: Alfvén publishes the first edition of his book, Cosmical Electrodynamics, detailing hydromagnetic waves and discussing their application to both laboratory and space plasmas.
- 1952: Additional confirmation appears in experiments by Winston Bostick and Morton Levine with ionized helium.
- 1954: Bo Lehnert produces Alfvén waves in liquid sodium.
- 1958: Eugene Parker suggests hydromagnetic waves in the interstellar medium.
- 1958: Berthold, Harris, and Hope detect Alfvén waves in the ionosphere after the Argus nuclear test, generated by the explosion and travelling at speeds predicted by Alfvén's formula.
- 1958: Eugene Parker suggests hydromagnetic waves in the solar corona extending into the solar wind.
- 1959: D. F. Jephcott produces Alfvén waves in a gas discharge.
- 1959: C. H. Kelley and J. Yenser produce Alfvén waves in the ambient atmosphere.
- 1960: Coleman et al. report the measurement of Alfvén waves by the magnetometer aboard the Pioneer and Explorer satellites.
- 1960: Sugiura suggests evidence of hydromagnetic waves in the Earth's magnetic field.
- 1961: Normal Alfvén modes and resonances in liquid sodium are studied by Jameson.
- 1966: R. O. Motz generates and observes Alfvén waves in mercury.
- 1970: Hannes Alfvén wins the 1970 Nobel Prize in Physics for "fundamental work and discoveries in magneto-hydrodynamics with fruitful applications in different parts of plasma physics".
- 1973: Eugene Parker suggests hydromagnetic waves in the intergalactic medium.
- 1974: Hollweg suggests the existence of hydromagnetic waves in interplanetary space.
- 1974: Ip and Mendis suggest the existence of hydromagnetic waves in the coma of Comet Kohoutek.
- 1984: Roberts et al.
predict the presence of standing MHD waves in the solar corona, thus leading to the field of coronal seismology.
- 1999: Aschwanden et al. and Nakariakov et al. report the detection of damped transverse oscillations of solar coronal loops observed with the EUV imager on board the Transition Region And Coronal Explorer (TRACE), interpreted as standing kink (or "Alfvénic") oscillations of the loops. This fulfilled the prediction of Roberts et al. (1984).
- 2007: Tomczyk et al. report the detection of Alfvénic waves in images of the solar corona with the Coronal Multi-Channel Polarimeter (CoMP) instrument at the National Solar Observatory, New Mexico. These waves were interpreted as propagating kink waves by Van Doorsselaere et al. (2008).
- 2007: Alfvén wave discoveries appear in articles by Jonathan Cirtain and colleagues, Takenori J. Okamoto and colleagues, and Bart De Pontieu and colleagues. De Pontieu's team proposed that the energy associated with the waves is sufficient to heat the corona and accelerate the solar wind. These results appear in a special collection of 10 articles, by scientists in Japan, Europe and the United States, in the 7 December issue of the journal Science. It was demonstrated that those waves should be interpreted in terms of kink waves of coronal plasma structures by Van Doorsselaere et al. (2008), Ofman and Wang (2008), and Vasheghani Farahani et al. (2009).
- 2008: Kaghashvili et al. propose how the detected oscillations can be used to deduce the properties of Alfvén waves. The mechanism is based on the formalism developed by Kaghashvili and his collaborators. An alternative explanation of various observationally discovered Alfvén waves has been given in the form of a proclaimed new wave phenomenon, the Kaghashvili waves (claimed to have been discovered in 2002-2003).
- 2009: Torsional Alfvén waves in the structured chromosphere of the Sun are first directly detected by David Jess and colleagues, using observations from the Swedish Solar Telescope (D. Jess et al., 2009, Science, 323, 1582).
- 2011: Experimental evidence of Alfvén wave propagation in a gallium alloy.
- 2017: First direct detection of high-frequency torsional Alfvén waves by Abhishek K. Srivastava from the Indian Institute of Technology (BHU; Varanasi, India) and colleagues, using observations from the Swedish Solar Telescope. Using a stringent 3-D numerical model, they found that these observed high-frequency (12-42 mHz) waves carry substantial energy to heat the Sun's inner corona (Srivastava et al., 2017, Nature SR, Volume 7, id. 43147).
- 2018: Using spectral imaging observations, non-LTE inversions and magnetic field extrapolations of sunspot atmospheres, Samuel Grant and colleagues (Grant et al. 2018, Nature Physics) found evidence for elliptically-polarized Alfvén waves forming fast-mode shocks in the outer regions of the chromospheric umbral atmosphere. For the first time, these authors provided a quantification of the physical heat delivered by the dissipation of such Alfvén wave modes.
- ^ Iwai, K.; Shinya, K.; Takashi, K.; Moreau, R. (2003) "Pressure change accompanying Alfvén waves in a liquid metal" Magnetohydrodynamics 39(3): pp. 245-250, page 245
- ^ Gedalin, M. (1993), "Linear waves in relativistic anisotropic magnetohydrodynamics", Physical Review E, 47 (6): 4354-4357, Bibcode:1993PhRvE..47.4354G, doi:10.1103/PhysRevE.47.4354
- ^ Hannes Alfvén (1942). "Existence of Electromagnetic-Hydrodynamic Waves". Nature. 150 (3805): 405-406. Bibcode:1942Natur.150..405K. doi:10.1038/150405a0.
- ^ Bart de Pontieu (18 December 1997). "Chromospheric Spicules driven by Alfvén waves". Max-Planck-Institut für extraterrestrische Physik. Archived from the original on 16 July 2002. Retrieved 2012.
- ^ Gerhard Haerendel (1992).
"Weakly damped Alfven waves as drivers of solar chromospheric spicules". Nature. 360: 241-243. Bibcode:1992Natur.360..241H. doi:10.1038/360241a0. - ^ Tomczyk, S.; McIntosh, S.W.; Keil, S.L.; Judge, P.G.; Schad, T.; Seeley, D.H.; Edmondson, J. (2007). "Alfven waves in the solar corona". Science. 317 (5842): 1192-1196. Bibcode:2007Sci...317.1192T. doi:10.1126/science.1143304. - ^ McIntosh; et al. (2011). "Alfvenic waves with sufficient energy to power the quiet solar corona and fast solar wind". Nature. 475 (7357): 477-480. Bibcode:2011Natur.475..477M. doi:10.1038/nature10235. PMID 21796206. - ^ Karen Fox (27 July 2011). "SDO Spots Extra Energy in the Sun's Corona". NASA. Retrieved 2012. - ^ Kaghashvili, Edisher Kh.; Quinn, Richard A.; Hollweg, Joseph V. (2009). "Driven Waves as a Diagnostics Tool in the Solar Corona". The Astrophysical Journal. 703: 1318. Bibcode:2009ApJ...703.1318K. doi:10.1088/0004-637x/703/2/1318. - ^ Thierry Alboussière; Philippe Cardin; François Debray; Patrick La Rizza; Jean-Paul Masson; Franck Plunian; Adolfo Ribeiro; Denys Schmitt (2011). "Experimental evidence of Alfvén wave propagation in a Gallium alloy". Phys. Fluids. 23 (9): 096601. arXiv:1106.4727 . Bibcode:2011PhFl...23i6601A. doi:10.1063/1.3633090. - ^ Grant, Samuel D. T.; Jess, David B.; Zaqarashvili, Teimuraz V.; Beck, Christian; Socas-Navarro, Hector; Aschwanden, Markus J.; Keys, Peter H.; Christian, Damian J.; Houston, Scott J.; Hewitt, Rebecca L. (2018), "Alfvén Wave Dissipation in the Solar Chromosphere", Nature Physics, Bibcode:2018NatPh..14..480G, doi:10.1038/s41567-018-0058-3 11. The electromagnetodynamics of fluids by W F Hughes and F J Young, pp. 159 - 161, p. 308, p. 311, p. 335, p. 446 John Wiley & Sons, New York, LCCC #66-17631 - Alfvén, H. (1942), "Existence of electromagnetic-hydrodynamic waves", Nature, 150 (3805): 405-406, Bibcode:1942Natur.150..405A, doi:10.1038/150405d0 - ------ (1981), Cosmic Plasma, Holland: Reidel, ISBN 90-277-1151-8 - Aschwanden, M. 
J.; Fletcher, L.; Schrijver, C. J.; Alexander, D. (1999), "Coronal Loop Oscillations Observed with the Transition Region and Coronal Explorer", The Astrophysical Journal, 520 (2): 880-894, Bibcode:1999ApJ...520..880A, doi:10.1086/307502 - Berthold, W. K.; Harris, A. K.; Hope, H. J. (1960), "World-Wide Effects of Hydromagnetic Waves Due to Argus", Journal of Geophysical Research, 65 (8): 2233-2239, Bibcode:1960JGR....65.2233B, doi:10.1029/JZ065i008p02233 - Bostick, Winston H.; Levine, Morton A. (1952), "Experimental Demonstration in the Laboratory of the Existence of Magneto-Hydrodynamic Waves in Ionized Helium", Physical Review, 87 (4): 671, Bibcode:1952PhRv...87..671B, doi:10.1103/PhysRev.87.671 - Coleman, P. J., Jr.; Sonett, C. P.; Judge, D. L.; Smith, E. J. (1960), "Some Preliminary Results of the Pioneer V Magnetometer Experiment", Journal of Geophysical Research, 65 (6): 1856-1857, Bibcode:1960JGR....65.1856C, doi:10.1029/JZ065i006p01856 - Cramer, N. F.; Vladimirov, S. V. (1997), "Alfvén Waves in Dusty Interstellar Clouds", Publications of the Astronomical Society of Australia, 14 (2): 170-178, Bibcode:1997PASA...14..170C, doi:10.1071/AS97170 - Dessler, A. J. (1970), "Swedish iconoclast recognized after many years of rejection and obscurity", Science, 170 (3958): 604-606, Bibcode:1970Sci...170..604D, doi:10.1126/science.170.3958.604, PMID 17799293 - Falceta-Gonçalves, D.; Jatenco-Pereira, V. (2002), "The Effects of Alfvén Waves and Radiation Pressure in Dust Winds of Late-Type Stars", Astrophysical Journal, 576 (2): 976-981, arXiv:astro-ph/0207342 , Bibcode:2002ApJ...576..976F, doi:10.1086/341794 - Fermi, E. (1949), "On the Origin of the Cosmic Radiation", Physical Review, 75 (8): 1169-1174, Bibcode:1949PhRv...75.1169F, doi:10.1103/PhysRev.75.1169 - Galtier, S. (2000), "A weak turbulence theory for incompressible magnetohydrodynamics", J. 
Plasma Physics, 63 (5): 447-488, arXiv:astro-ph/0008148 , Bibcode:2000JPlPh..63..447G, doi:10.1017/S0022377899008284 - Hollweg, J. V. (1974), "Hydromagnetic waves in interplanetary space", Publications of the Astronomical Society of the Pacific, 86 (October 1974): 561-594, Bibcode:1974PASP...86..561H, doi:10.1086/129646 - Ip, W.-H.; Mendis, D. A. (1975), "The cometary magnetic field and its associated electric currents", Icarus, 26 (4): 457-461, Bibcode:1975Icar...26..457I, doi:10.1016/0019-1035(75)90115-3 - Jephcott, D. F. (1959), "Alfvén waves in a gas discharge", Nature, 183 (4676): 1652-1654, Bibcode:1959Natur.183.1652J, doi:10.1038/1831652a0 - Lehnert, Bo (1954), "Magneto-Hydrodynamic Waves in Liquid Sodium", Physical Review, 94 (4): 815-824, Bibcode:1954PhRv...94..815L, doi:10.1103/PhysRev.94.815 - Lundquist, S. (1949), "Experimental Investigations of Magneto-Hydrodynamic Waves", Physical Review, 76 (12): 1805-1809, Bibcode:1949PhRv...76.1805L, doi:10.1103/PhysRev.76.1805 - Mancuso, S.; Spangler, S. R. (1999), "Coronal Faraday Rotation Observations: Measurements and Limits on Plasma Inhomogeneities", The Astrophysical Journal, 525 (1): 195-208, Bibcode:1999ApJ...525..195M, doi:10.1086/307896 - Motz, R. O. (1966), "Alfven Wave Generation in a Spherical System", Physics of Fluids, 9 (2): 411-412, Bibcode:1966PhFl....9..411M, doi:10.1063/1.1761687 - Nakariakov, V. M.; Ofman, L.; Deluca, E. E.; Roberts, B.; Davila, J. M. (1999), "TRACE observation of damped coronal loop oscillations: Implications for coronal heating", Science, 285 (5429): 862-864, Bibcode:1999Sci...285..862N, doi:10.1126/science.285.5429.862, PMID 10436148 - Ofman, L.; Wang, T. J. (2008), "Hinode observations of transverse waves with flows in coronal loops", Astronomy and Astrophysics, 482 (2): L9-L12, Bibcode:2008A&A...482L...9O, doi:10.1051/0004-6361:20079340 - Otani, N. F. 
(1988a), "The Alfvén ion-cyclotron instability, simulation theory and techniques", Journal of Computational Physics, 78 (2): 251-277, Bibcode:1988JCoPh..78..251O, doi:10.1016/0021-9991(88)90049-6 - ------ (1988b), "Application of Nonlinear Dynamical Invariants in a Single Electromagnetic Wave to the Study of the Alfvén-Ion-Cyclotron Instability", Physics of Fluids, 31 (6): 1456-1464, Bibcode:1988PhFl...31.1456O, doi:10.1063/1.866736 - Parker, E. N. (1955), "Hydromagnetic Waves and the Acceleration of Cosmic Rays", Physical Review, 99 (1): 241-253, Bibcode:1955PhRv...99..241P, doi:10.1103/PhysRev.99.241 - ------ (1958), "Suprathermal Particle Generation in the Solar Corona", Astrophysical Journal, 128: 677, Bibcode:1958ApJ...128..677P, doi:10.1086/146580 - ------ (1973), "Extragalactic Cosmic Rays and the Galactic Magnetic Field", Astrophysics and Space Science, 24 (1): 279-288, Bibcode:1973Ap&SS..24..279P, doi:10.1007/BF00648691 - Silberstein, M.; Otani, N. F. (1994), "Computer simulation of Alfvén waves and double layers along auroral magnetic field lines" (PDF), Journal of Geophysical Research, 99 (A4): 6351-6365, Bibcode:1994JGR....99.6351S, doi:10.1029/93JA02963 - Sugiura, Masahisa (1961), "Some Evidence of Hydromagnetic Waves in the Earth's Magnetic Field", Physical Review Letters, 6 (6): 255-257, Bibcode:1961PhRvL...6..255S, doi:10.1103/PhysRevLett.6.255 - Tomczyk, S.; McIntosh, S. W.; Keil, S. L.; Judge, P. G.; Schad, T.; Seeley, D. H.; Edmondson, J. (2007), "Waves in the Solar Corona", Science, 317 (5842): 1192-1196, Bibcode:2007Sci...317.1192T, doi:10.1126/science.1143304 - Jess, David B.; Mathioudakis, Mihalis; Erdélyi, Robert; Crockett, Philip J.; Keenan, Francis P.; Christian, Damian J. 
(2009), "Alfvén Waves in the Lower Solar Atmosphere", Science, 323: 1582, arXiv:0903.3546 , Bibcode:2009Sci...323.1582J, doi:10.1126/science.1168680 - Srivastava, Abhishek K.; Shetye, Juie; Murawski, Krzysztof; Doyle, John Gerard; Stangalini, Marco; Scullion, Eamon; Ray, Tom; Wójcik, Dariusz Patryk; Dwivedi, Bhola N. (2017), "High-frequency torsional Alfvén waves as an energy source for coronal heating", Scientific Reports, 7: id.43147, Bibcode:2017NatSR...743147S, doi:10.1038/srep43147 - Grant, Samuel D. T.; Jess, David B.; Zaqarashvili, Teimuraz V.; Beck, Christian; Socas-Navarro, Hector; Aschwanden, Markus J.; Keys, Peter H.; Christian, Damian J.; Houston, Scott J.; Hewitt, Rebecca L. (2018), "Alfvén Wave Dissipation in the Solar Chromosphere", Nature Physics, Bibcode:2018NatPh..14..480G, doi:10.1038/s41567-018-0058-3
The physical constant ε0 (pronounced "epsilon nought"), commonly called the vacuum permittivity, permittivity of free space, electric constant, or the distributed capacitance of the vacuum, is an ideal (baseline) physical constant, equal to the value of the absolute dielectric permittivity of classical vacuum. It has an exactly defined value, given below. It expresses the capability of the vacuum to permit electric field lines, and it relates the units for electric charge to mechanical quantities such as length and force. For example, the force between two separated electric charges (in the vacuum of classical electromagnetism) is given by Coulomb's law:

  FC = (1/(4πε0)) ⋅ q1q2/r²

The value of the constant fraction 1/(4πε0) is approximately 9 × 10⁹ N⋅m²⋅C⁻², q1 and q2 are the charges, and r is the distance between them. Likewise, ε0 appears in Maxwell's equations, which describe the properties of electric and magnetic fields and electromagnetic radiation, and relate them to their sources.

The value of ε0 is currently defined by the formula

  ε0 = 1/(μ0c²)

where c is the defined value for the speed of light in classical vacuum in SI units, and μ0 is the parameter that international standards organizations call the "magnetic constant" (commonly called vacuum permeability). Since μ0 has the defined value 4π × 10⁻⁷ H/m, and c has the defined value 299792458 m⋅s⁻¹, it follows that ε0 has a defined value given approximately by

  ε0 ≈ 8.854187817620... × 10⁻¹² F⋅m⁻¹ (or A²⋅s⁴⋅kg⁻¹⋅m⁻³ in SI base units, or C²⋅N⁻¹⋅m⁻² or C⋅V⁻¹⋅m⁻¹ using other SI coherent units).

The historical origins of the electric constant ε0, and its value, are explained in more detail below.

Redefinition of the SI units

Under the proposals to redefine the ampere as a fixed number of elementary charges per second, the electric constant would no longer have an exact fixed value. The value of the electron charge would become a defined number, not measured, making μ0 a measured quantity. Consequently, ε0 also would not be exact.
As before, it would be defined by the equation ε0 = 1/(μ0c²), but now with a measurement error related to the error in μ0, the magnetic constant. This measurement error can be related to that in the fine-structure constant α: the relative uncertainty in the value of ε0 would therefore be the same as that for the fine-structure constant, currently 6.8 × 10⁻¹⁰.

Historically, the parameter ε0 has been known by many different names. The terms "vacuum permittivity" or its variants, such as "permittivity in/of vacuum", "permittivity of empty space", or "permittivity of free space", are widespread. Standards organizations worldwide now use "electric constant" as a uniform term for this quantity, and official standards documents have adopted the term (although they continue to list the older terms as synonyms). Another historical synonym was "dielectric constant of vacuum", as "dielectric constant" was sometimes used in the past for the absolute permittivity. However, in modern usage "dielectric constant" typically refers exclusively to a relative permittivity ε/ε0, and even this usage is considered "obsolete" by some standards bodies in favor of relative static permittivity. Hence, the term "dielectric constant of vacuum" for the electric constant ε0 is considered obsolete by most modern authors, although occasional examples of continuing usage can be found.

Historical origin of the parameter ε0

As indicated above, the parameter ε0 is a measurement-system constant. Its presence in the equations now used to define electromagnetic quantities is the result of the so-called "rationalization" process described below. But the method of allocating a value to it is a consequence of the result that Maxwell's equations predict that, in free space, electromagnetic waves move with the speed of light. Understanding why ε0 has the value it does requires a brief understanding of the history.
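The defining relation ε0 = 1/(μ0c²) stated above can be checked numerically with the defined (pre-2019-redefinition) values of μ0 and c. This is a quick verification sketch added here, not part of the original article:

```python
import math

C = 299792458.0          # defined speed of light, m/s
MU0 = 4e-7 * math.pi     # magnetic constant mu0 (defined value), H/m

eps0 = 1.0 / (MU0 * C**2)           # the defining relation eps0 = 1/(mu0 c^2)
k_e = 1.0 / (4 * math.pi * eps0)    # Coulomb's-law prefactor 1/(4 pi eps0)

print(f"eps0 = {eps0:.12e} F/m")               # ~8.854187817620e-12 F/m
print(f"1/(4 pi eps0) = {k_e:.6e} N m^2/C^2")  # ~8.987552e9
```

The printed prefactor is the "approximately 9 × 10⁹ N⋅m²⋅C⁻²" constant that appears in Coulomb's law.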
Rationalization of units

The experiments of Coulomb and others showed that the force F between two equal point-like "amounts" of electricity, situated a distance r apart in free space, should be given by a formula that has the form

  F = keQ²/r²

where Q is a quantity that represents the amount of electricity present at each of the two points, and ke is Coulomb's constant. If one is starting with no constraints, then the value of ke may be chosen arbitrarily. For each different choice of ke there is a different "interpretation" of Q: to avoid confusion, each different "interpretation" has to be allocated a distinctive name and symbol.

In one of the systems of equations and units agreed in the late 19th century, called the "centimetre-gram-second electrostatic system of units" (the cgs esu system), the constant ke was taken equal to 1, and a quantity now called "gaussian electric charge" qs was defined by the resulting equation

  F = qs²/r²

The unit of gaussian charge, the statcoulomb, is such that two units, a distance of 1 centimetre apart, repel each other with a force equal to the cgs unit of force, the dyne. Thus the unit of gaussian charge can also be written 1 dyne^(1/2)⋅cm. "Gaussian electric charge" is not the same mathematical quantity as modern (MKS and subsequently SI) electric charge and is not measured in coulombs.

The idea subsequently developed that it would be better, in situations of spherical geometry, to include a factor 4π in equations like Coulomb's law, and write it in the form:

  F = ke′qs′²/(4πr²)

This idea is called "rationalization". The quantities qs′ and ke′ are not the same as those in the older convention. Putting ke′ = 1 generates a unit of electricity of different size, but it still has the same dimensions as the cgs esu system.
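The practical size difference between the conventions can be seen by converting one charge unit into the other. The sketch below is an added illustration (the function name is ours, and the CODATA value of ε0 is assumed): equating the esu force law F = qs²/r² with the rationalized SI form F = q²/(4πε0r²) gives qs = q/√(4πε0), up to the change of mechanical units from N^(1/2)⋅m to dyn^(1/2)⋅cm.

```python
import math

EPS0 = 8.8541878128e-12  # vacuum permittivity, F/m (assumed CODATA value)

# 1 N^(1/2)*m expressed in dyn^(1/2)*cm: 1 N = 1e5 dyn, 1 m = 100 cm.
MECH = math.sqrt(1e5) * 100.0

def coulomb_to_statcoulomb(q_coulomb):
    """Convert an SI charge in coulombs to gaussian statcoulombs,
    using qs = q / sqrt(4*pi*eps0) plus the mechanical unit change."""
    return q_coulomb / math.sqrt(4 * math.pi * EPS0) * MECH

print(coulomb_to_statcoulomb(1.0))              # ~2.998e9 statC per coulomb
print(coulomb_to_statcoulomb(1.602176634e-19))  # elementary charge, ~4.80e-10 statC
```

The familiar conversion factors drop out: about 2.998 × 10⁹ statcoulombs per coulomb, and about 4.80 × 10⁻¹⁰ statC for the elementary charge.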
The next step was to treat the quantity representing "amount of electricity" as a fundamental quantity in its own right, denoted by the symbol q, and to write Coulomb's law in its modern form:

F = q1q2/(4πε0r²).

The system of equations thus generated is known as the rationalized metre–kilogram–second (rmks) equation system, or "metre–kilogram–second–ampere (mksa)" equation system. This is the system used to define the SI units. The new quantity q is given the name "rmks electric charge", or (nowadays) just "electric charge". Clearly, the quantity qs used in the old cgs esu system is related to the new quantity q by

qs = q/√(4πε0).

Determination of a value for ε0

One now adds the requirement that one wants force to be measured in newtons, distance in metres, and charge to be measured in the engineers' practical unit, the coulomb, which is defined as the charge accumulated when a current of 1 ampere flows for one second. This shows that the parameter ε0 should be allocated the unit C²⋅N⁻¹⋅m⁻² (or equivalent units – in practice "farads per metre"). In order to establish the numerical value of ε0, one makes use of the fact that if one uses the rationalized forms of Coulomb's law and Ampère's force law (and other ideas) to develop Maxwell's equations, then the relationship ε0 = 1/(μ0c0²) stated above is found to exist between ε0, μ0 and c0. In principle, one has a choice of deciding whether to make the coulomb or the ampere the fundamental unit of electricity and magnetism. The decision was taken internationally to use the ampere. This means that the value of ε0 is determined by the values of c0 and μ0, as stated above. For a brief explanation of how the value of μ0 is decided, see the article about μ0.

Permittivity of real media

By convention, the electric constant ε0 appears in the relationship that defines the electric displacement field D in terms of the electric field E and classical electrical polarization density P of the medium.
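A minimal numerical sketch of the modern form (the charge values and separation are arbitrary illustrative choices): ε0 follows from the pre-2019 exact values of μ0 and c0, and Coulomb's law in rationalized form then gives the force directly:

```python
import math

# Pre-2019 SI conventions as described above: mu0 and c0 are exact,
# so eps0 = 1/(mu0*c0^2) is fixed by them.
mu0 = 4 * math.pi * 1e-7            # magnetic constant, N/A^2
c0 = 299_792_458                    # speed of light in vacuum, m/s
eps0 = 1 / (mu0 * c0**2)            # electric constant, ~8.854e-12 F/m

# Rationalized Coulomb's law: F = q1*q2 / (4*pi*eps0*r^2)
q1 = q2 = 1e-6                      # two 1 microcoulomb charges (arbitrary)
r = 0.01                            # separated by 1 cm
F = q1 * q2 / (4 * math.pi * eps0 * r**2)
print(f"eps0 = {eps0:.6e} F/m, F = {F:.2f} N")
```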
In general, this relationship has the form:

D = ε0E + P.

For a linear dielectric, P is assumed to be proportional to E, but a delayed response is permitted and a spatially non-local response, so one has:

P(r, t) = ε0 ∫ dt′ ∫ d³r′ χ(r, r′, t, t′) E(r′, t′).

In the event that nonlocality and delay of response are not important, the result is:

D = ε0εrE = εE,

where εr is the relative permittivity of the medium and ε = ε0εr its absolute permittivity.
NIST Increases Mass Spectral Library

One of the world’s largest and most widely used databases of molecular fingerprints is the NIST Mass Spectral Library, and that library just got larger still. On June 6, NIST added fingerprints from more than 25,000 compounds to the library, bringing the total number to more than 265,000. This library contains fingerprints of organic compounds—a class of carbon-containing molecules that exist in an endless variety, both natural and man-made.

“This library is used by scientists and engineers in virtually every industry,” said Stephen Stein, the NIST chemist who oversees the Mass Spectral Library. He rattled off just a few uses: diagnosing medical conditions, conducting forensic investigations, identifying environmental pollutants and developing new fuels. “And anything having to do with food,” he said, since the taste of a food is determined by the complex mixture of organic molecules within it. “The flavor and fragrance industries live and die by this stuff.”

Among the important compounds whose fingerprints are included in this upgrade are many dangerous drugs. These include dozens of synthetic cannabinoids—aka “synthetic marijuana”—which can cause psychotic episodes, seizures and death. Also included are more than 30 types of fentanyl, the synthetic opioid that is driving an epidemic of overdoses nationwide. Having the fingerprints of these compounds in the Mass Spectral Library will help law enforcement and public health officials fight the spread of these new and dangerous substances.

This article has been republished from materials provided by NIST. Note: material may have been edited for length and content. For further information, please contact the cited source.
Today the OPERA collaboration has also made their data public through the CERN Open Data Portal. By releasing the data into the public domain, researchers outside the OPERA Collaboration have the opportunity to conduct novel research with them. The datasets provided come with rich context information to help interpret the data, also for educational use. A visualizer enables users to see the different events and download them. This is the first non-LHC data release through the CERN Open Data portal, a service launched in 2014. There are three kinds of neutrinos in nature: electron, muon and tau neutrinos. They can be distinguished by the property that, when interacting with matter, they typically convert into the electrically charged lepton carrying their name: electron, muon and tau leptons. It is these leptons that are seen by detectors, such as the OPERA detector, unique in its capability of observing all three. Experiments carried out around the turn of the millennium showed that muon neutrinos, after travelling long distances, create fewer muons than expected, when interacting with a detector. This suggested that muon neutrinos were oscillating into other types of neutrinos. Since there was no change in the number of detected electrons, physicists suggested that muon neutrinos were primarily oscillating into tau neutrinos. This has now been unambiguously confirmed by OPERA, through the direct observation of tau neutrinos appearing hundreds of kilometres away from the muon neutrino source. The clarification of the oscillation patterns of neutrinos sheds light on some of the properties of these mysterious particles, such as their mass. The OPERA collaboration observed the first tau-lepton event (evidence of muon-neutrino oscillation) in 2010, followed by four additional events reported between 2012 and 2015, when the discovery of tau neutrino appearance was first assessed. 
Thanks to a new analysis strategy applied to the full data sample collected between 2008 and 2012 – the period of neutrino production – a total of 10 candidate events have now been identified, with an extremely high level of significance. “We have analysed everything with a completely new strategy, taking into account the peculiar features of the events,” said Giovanni De Lellis, Spokesperson for the OPERA collaboration. “We also report the first direct observation of the tau neutrino lepton number, the parameter that discriminates neutrinos from their antimatter counterpart, antineutrinos. It is extremely gratifying to see today that our legacy results largely exceed the level of confidence we had envisaged in the experiment proposal.”

Beyond the contribution of the experiment to a better understanding of the way neutrinos behave, the development of new technologies is also part of the legacy of OPERA. The collaboration was the first to develop fully automated, high-speed readout technologies with sub-micrometric accuracy, which pioneered the large-scale use of the so-called nuclear emulsion films to record particle tracks. Nuclear emulsion technology finds applications in a wide range of other scientific areas, from dark matter searches to volcano and glacier investigation. It is also applied to optimise hadron therapy for cancer treatment and was recently used to map out the interior of the Great Pyramid, one of the oldest and largest monuments on Earth, built during the dynasty of the pharaoh Khufu, also known as Cheops.
Authors: J.C. Sung and M-C Kan
Affiliation: KINIK Company, Taiwan
Pages: 239–243
Keywords: amorphous diamond, solar cell, diamond electrode, thermal radiation, heat spreader, electroluminescence

Amorphous diamond is the only material that can emit electrons in vacuum when an electric field of only a few volts per micron is applied, and the emission current can increase hundreds or even millions of times when the material is heated to only a few hundred degrees centigrade. This remarkable behavior is due to amorphous diamond’s extreme atomic structure, with the highest atomic density and the largest configurational entropy. Amorphous diamond is made of carbon atoms with electrical resistivity that ranges from graphitic conductor to diamond-like insulator. But unlike ordinary composite materials that contain domains of relatively uniform material (e.g. metal and ceramic), each carbon atom in amorphous diamond is unique in electronic bonding and energy state. In fact, amorphous diamond can be viewed as a self-doped variable conductor. In comparison, semiconductors are chemically doped crystalline materials. The numerous energy states in amorphous diamond allow electrons to possess discrete energies, so the input energy can be absorbed over a wide spectrum. Hence even thermal energy may be absorbed to increase the energy of electrons. As a result, amorphous diamond can act as a thermal generator, such as that for a solar cell. In this case, the energy conversion can reach much higher efficiency (e.g. 50%) than that (e.g. 15%) of silicon-based solar cells, which can absorb only a narrow spectrum of sunlight.
As a solar cell material, amorphous diamond has another advantage: its radiation hardness is the highest of all materials, so its thermoelectric conversion efficiency will not degrade the way the photoelectric efficiency of semiconductor-based solar cells does. An immediate application of amorphous diamond is to coat it on electron-emitting electrodes, such as those used in cold cathode fluorescent lamps (CCFL) that illuminate liquid crystal displays (LCD), as used in notebooks and television sets. Amorphous diamond can dramatically reduce the voltage needed to light a CCFL, so the lamp life can be greatly extended. Moreover, the electrical current can be simultaneously increased to enhance the brightness of the light.

Nanotech Conference Proceedings are now published in the TechConnect Briefs
Focus: Why We Can’t Remember the Future

Why can we remember the past but not the future? It might seem like a bizarre question, but it’s not obvious why our psychological “arrow of time” should move in the same direction as that dictated by the second law of thermodynamics, which implies that events unfold in the direction that increases net entropy. A report in Physical Review E suggests that these two arrows of time are forced to coincide by the constraints on what it actually means to remember something.

The fundamental laws of physics are symmetrical in time: in Newtonian classical mechanics time is in principle reversible, and in general relativity it is just a coordinate much like those of space. Given the positions and velocities of a classical system of interacting particles, the past and future can in principle each be completely calculated from the laws of physics. So predictions of the future are just as accurate as descriptions of the past—they are equally “knowable” based on the present.

The existence of an arrow of time is usually explained in terms of the thermodynamic concept of entropy. In systems of many components, it is overwhelmingly more probable that changes will occur in the direction that increases the total entropy of the universe. How we actually perceive the flow of time is another matter. Theorists have argued that recording information always involves erasing—for example, initializing a computer memory at the start. Since erasure always increases entropy, the psychological arrow of time aligns with the thermodynamic one. But Leonard Mlodinow of the California Institute of Technology in Pasadena and Todd Brun of the University of Southern California in Los Angeles say that this argument is not quite complete. You can, in principle, get rid of any need for erasure and initialization just by remembering everything—which means that recording information in the memory is then fully reversible in time.
But even in that case the arrows of time must align because, says Brun, “there is a broader principle at work.” The researchers argue that this extra ingredient is something they call generality. They illustrate the argument with a rotating turnstile that records the passage of gas molecules from one chamber to another. The system starts with most of the molecules in the left-hand chamber, and at any instant the rotor reveals the net number that have passed from left to right since some earlier reference time. But since the system follows predictable and reversible Newtonian laws, the readout could also be interpreted as showing the number of molecules that will pass between the time of the reading and some future reference time. One can show that this would be a correct anticipation, since that number can, in principle, be calculated. “Why can’t we call that a memory of the future?” asks Brun. The reason we cannot is that for the rotor to work as a memory of the past, the system’s state at an earlier reference time need not be precisely specified; any slight changes in the molecules’ positions at that time will not affect the readout at a later time. But equivalent small changes in the state at a future reference time—say, due to some unforeseen influence intervening—lead to inconsistencies. To see this, recall that the molecules started mostly in the left-hand chamber and are gradually equalizing their numbers on both sides of the rotor. Imagine “running the movie backward” (according to Newtonian equations) from the future reference time to the readout time and seeing the molecules collectively move back toward the left-hand chamber. That extremely improbable event can only occur from one very specific arrangement of the molecules at the future time. If, before running time backward, you made any small changes, say, in the molecules’ positions, new collisions would occur during the time reversal that would rapidly set the system on a completely different course. 
The molecules would take the much more probable path of equalizing the populations and would not get close to the original state of the system at the readout time. As Mlodinow and Brun put it, this kind of “future memory” lacks generality—a requirement that the memory accurately reflects the future state of the system regardless of unexpected events. The readout indicates a future state, but only one specific future state. They compare it to a camera that needs a different type of memory card to accommodate each photo. A real memory, they say, cannot be contingent on the system behaving a certain way.

“They have emphasized a very important problem in the meaning-of-time debate and provided an interesting solution,” says Lorenzo Maccone of the University of Pavia in Italy, who has previously considered the origin of the thermodynamic arrow of time in quantum physics. But he isn’t yet persuaded by the answer, because the researchers allow the memory to track the system only in the “forward” time direction. “It seems to me that they are somehow introducing surreptitiously an arrow of time when they say that the memory tracks the system only in one direction.” But Maccone adds that “in such a difficult field, even highlighting what are the relevant questions to ask is already big progress.”

Philip Ball is a freelance science writer in London; his latest book is Beyond Weird, a survey of interpretations of quantum mechanics.

- D. H. Wolpert, “Memory Systems, Computation, and the Second Law of Thermodynamics,” Int. J. Theor. Phys. 31, 743 (1992)
- R. Landauer, “Irreversibility and Heat Generation in the Computing Process,” IBM J. Res. Dev. 5, 183 (1961)
- Lorenzo Maccone, “Quantum Solution to the Arrow-of-Time Dilemma,” Phys. Rev. Lett. 103, 080401 (2009)
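As an illustrative aside (my sketch, not from the paper): the one-way drift that the turnstile argument relies on shows up already in the Ehrenfest urn model, where at each step one randomly chosen molecule switches chambers. Started with every molecule on the left, the count drifts toward half and, for all practical purposes, never returns:

```python
import random

random.seed(0)                # fixed seed so the run is reproducible
N = 1000                      # total number of molecules (arbitrary)
left = N                      # start with every molecule in the left chamber
for _ in range(20_000):
    # pick one molecule uniformly at random and move it to the other chamber
    if random.randrange(N) < left:
        left -= 1             # it was on the left, so it crosses to the right
    else:
        left += 1             # it was on the right, so it crosses back
print(left)                   # hovers near N/2 = 500, with sqrt(N)-size jitter
```

Reversing this evolution from a typical near-equal state back to the all-left state would require one very specific sequence of moves, which is the sense in which the equalized state is overwhelmingly more probable.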
Quantum physics is a branch of physics which deals with physical phenomena at microscopic scales, where the action is on the order of the Planck constant. Quantum mechanics departs from classical mechanics primarily at the quantum realm of atomic and subatomic length scales. Quantum mechanics provides a mathematical description of much of the dual particle-like and wave-like behavior and interactions of energy and matter. The mathematical formulations of quantum mechanics are abstract. A mathematical function known as the wavefunction provides information about the probability amplitude of position, momentum, and other physical properties of a particle. The wavefunction treats the object as a quantum harmonic oscillator, and the mathematics is akin to that describing acoustic resonance. Many properties in quantum physics cannot be easily visualized in terms of classical mechanics. According to Planck, each energy element E is proportional to its frequency ν:

E = hν,

where h is Planck’s constant. Planck insisted that this was simply an aspect of the processes of absorption and emission of radiation and had nothing to do with the physical reality of the radiation itself. Quantum mechanics has had great success in explaining many of the features of our world. It is often the only tool available that can reveal the individual behaviors of the subatomic particles that make up all forms of matter. Quantum physics has also strongly influenced string theories, candidates for a theory of everything.
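Planck’s relation can be evaluated directly; the frequency below (roughly that of green light) is an illustrative choice, not a value from the text:

```python
h = 6.62607015e-34         # Planck constant, J*s
nu = 5.5e14                # ~green light frequency, Hz (illustrative)
E = h * nu                 # energy of a single quantum, E = h*nu, in joules
print(f"E = {E:.4e} J")    # on the order of 1e-19 J, i.e. a few eV
```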
A Numerical Study of DMS-Oxidation in the Marine Boundary Layer

During the last decade, dimethyl sulfide (DMS) has been invoked as an important source of non-anthropogenic sulfur (Andreae et al., 1983; Nguyen et al., 1983). Produced by marine phytoplankton, DMS is transferred to the marine boundary layer (MBL), where it is oxidized to sulfur dioxide (SO2), methanesulfonic acid (MSA) and sulfuric acid (H2SO4). The relationship between DMS emission, cloud condensation nuclei (CCN) and cloud albedo is a problem of climatic interest (Charlson et al., 1987) to be treated on the mesoscale, due to the fact that the processes involved in DMS-oxidation in the MBL have timescales typically of the order of some hours or less.

Keywords: diurnal cycle, boundary layer height, cloud condensation nucleus, marine boundary layer, cloud albedo
The wave equation is an important second-order linear partial differential equation for the description of waves—as they occur in classical physics—such as mechanical waves (e.g. water waves, sound waves and seismic waves) or light waves. It arises in fields like acoustics, electromagnetics, and fluid dynamics. Historically, the problem of a vibrating string such as that of a musical instrument was studied by Jean le Rond d'Alembert, Leonhard Euler, Daniel Bernoulli, and Joseph-Louis Lagrange. In 1746, d’Alembert discovered the one-dimensional wave equation, and within ten years Euler discovered the three-dimensional wave equation.

The wave equation is a hyperbolic partial differential equation. It typically concerns a time variable t, one or more spatial variables x1, x2, …, xn, and a scalar function u = u(x1, x2, …, xn; t), whose values could model, for example, the mechanical displacement of a wave. The wave equation for u is

∂²u/∂t² = c²(∂²u/∂x1² + ∂²u/∂x2² + ⋯ + ∂²u/∂xn²),

or, more compactly, utt = c²∇²u. Solutions of this equation describe propagation of disturbances out from the region at a fixed speed in one or in all spatial directions, as do physical waves from plane or localized sources; the constant c is identified with the propagation speed of the wave. This equation is linear. Therefore, the sum of any two solutions is again a solution: in physics this property is called the superposition principle.

The wave equation alone does not specify a physical solution; a unique solution is usually obtained by setting a problem with further conditions, such as initial conditions, which prescribe the amplitude and phase of the wave. Another important class of problems occurs in enclosed spaces specified by boundary conditions, for which the solutions represent standing waves, or harmonics, analogous to the harmonics of musical instruments.
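The defining property can be checked numerically in the one-dimensional case (a sketch; the wave speed, wavenumber, and sample point are arbitrary choices): a travelling sinusoid u(x, t) = sin(k(x − ct)) should satisfy utt = c²uxx, which central finite differences confirm to within discretization error:

```python
import math

c, k = 2.0, 3.0                                  # arbitrary speed and wavenumber
u = lambda x, t: math.sin(k * (x - c * t))       # right-travelling wave

x0, t0, h = 0.7, 0.3, 1e-4                       # sample point and step size
# central second differences approximate the second partial derivatives
u_tt = (u(x0, t0 + h) - 2 * u(x0, t0) + u(x0, t0 - h)) / h**2
u_xx = (u(x0 + h, t0) - 2 * u(x0, t0) + u(x0 - h, t0)) / h**2
print(abs(u_tt - c**2 * u_xx))                   # ~0, up to discretization error
```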
Scalar wave equation in one space dimension

The wave equation in one space dimension can be written as follows:

∂²u/∂t² = c² ∂²u/∂x².

This equation is typically described as having only one space dimension "x", because the only other independent variable is the time "t". Nevertheless, the dependent variable "u" may represent a second space dimension, if, for example, the displacement "u" takes place in the y-direction, as in the case of a string that is located in the x–y plane.

Derivation of the wave equation

The wave equation in one space dimension can be derived in a variety of different physical settings. Most famously, it can be derived for the case of a string that is vibrating in a two-dimensional plane, with each of its elements being pulled in opposite directions by the force of tension. Another physical setting for derivation of the wave equation in one space dimension utilizes Hooke's law. In the theory of elasticity, Hooke's law is an approximation for certain materials, stating that the amount by which a material body is deformed (the strain) is linearly related to the force causing the deformation (the stress).

From Hooke's law

The wave equation in the one-dimensional case can be derived from Hooke's law in the following way: imagine an array of little weights of mass m interconnected with massless springs of length h, each spring having spring constant k. Here the dependent variable u(x) measures the distance from the equilibrium of the mass situated at x, so that u(x) essentially measures the magnitude of a disturbance (i.e. strain) that is traveling in an elastic material. The forces exerted on the mass m at the location x + h are the Newtonian force and the Hooke's-law force:

F_Newton = m ∂²u(x + h, t)/∂t²,
F_Hooke = k[u(x + 2h, t) − u(x + h, t)] − k[u(x + h, t) − u(x, t)].

The equation of motion for the weight at the location x + h is given by equating these two forces:

m ∂²u(x + h, t)/∂t² = k[u(x + 2h, t) − 2u(x + h, t) + u(x, t)],

where the time-dependence of u(x) has been made explicit.
If the array of weights consists of N weights spaced evenly over the length L = Nh of total mass M = Nm, and the total spring constant of the array is K = k/N, we can write the above equation as:

∂²u(x + h, t)/∂t² = (KL²/M) · [u(x + 2h, t) − 2u(x + h, t) + u(x, t)]/h².

Taking the limit N → ∞, h → 0 and assuming smoothness, one gets:

∂²u(x, t)/∂t² = (KL²/M) ∂²u(x, t)/∂x²,

where the difference quotient has become a second derivative, by the definition of the second derivative. (KL²)/M is the square of the propagation speed in this particular case.

Stress pulse in a bar

In the case of a stress pulse propagating through a beam, the beam acts much like an infinite number of springs in series and can be taken as an extension of the equation derived for Hooke's law. A beam of constant cross-section made from a linear elastic material has a stiffness K given by

K = EA/L,

where A is the cross-sectional area and E is the Young's modulus of the material. The wave equation becomes

∂²u/∂t² = (EAL/M) ∂²u/∂x².

AL is equal to the volume of the beam, and therefore M/(AL) = ρ, where ρ is the density of the material. The wave equation reduces to

∂²u/∂t² = (E/ρ) ∂²u/∂x².

The speed of a stress wave in a bar is therefore √(E/ρ).

General solution

The change of variables ξ = x − ct, η = x + ct changes the wave equation into ∂²u/∂ξ∂η = 0, which leads to the general solution

u(x, t) = F(x − ct) + G(x + ct).

In other words, solutions of the 1D wave equation are sums of a right-traveling function F and a left-traveling function G. "Traveling" means that the shape of these individual arbitrary functions with respect to x stays constant; however, the functions are translated left and right with time at the speed c. This was derived by Jean le Rond d'Alembert. Another way to arrive at this result is to note that the wave equation may be "factored":

(∂/∂t − c ∂/∂x)(∂/∂t + c ∂/∂x) u = 0.

As a result, if we define v = ∂u/∂t + c ∂u/∂x, then v must have the form v(x + ct), and from this the correct form of the full solution u can be deduced. For an initial value problem with u(x, 0) = f(x) and ut(x, 0) = g(x), the arbitrary functions F and G can be determined to satisfy the initial conditions. The result is d'Alembert's formula:

u(x, t) = ½[f(x − ct) + f(x + ct)] + (1/(2c)) ∫ from x−ct to x+ct of g(s) ds.

In the classical sense, if f(x) ∈ Ck and g(x) ∈ Ck−1 then u(t, x) ∈ Ck. However, the waveforms F and G may also be generalized functions, such as the delta-function.
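d'Alembert's solution can be spot-checked in a few lines (a sketch; the Gaussian initial profile and the wave speed are arbitrary choices). With zero initial velocity, u(x, t) = ½[f(x − ct) + f(x + ct)]: the formula reproduces the initial data at t = 0, and the initial bump later splits into two half-height copies travelling in opposite directions:

```python
import math

c = 1.5                                    # arbitrary wave speed
f = lambda x: math.exp(-x * x)             # initial displacement u(x, 0) = f(x)

# d'Alembert's formula with zero initial velocity (g = 0):
u = lambda x, t: 0.5 * (f(x - c * t) + f(x + c * t))

print(u(0.3, 0.0) - f(0.3))   # 0: the formula reproduces the initial data
t = 4.0
print(u(c * t, t))            # ~0.5: a half-height bump riding at x = +ct
```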
In that case, the solution may be interpreted as an impulse that travels to the right or the left. The basic wave equation is a linear differential equation and so it will adhere to the superposition principle. This means that the net displacement caused by two or more waves is the sum of the displacements which would have been caused by each wave individually. In addition, the behavior of a wave can be analyzed by breaking up the wave into components, e.g. the Fourier transform breaks up a wave into sinusoidal components.

Plane wave eigenmodes

Another way to solve the one-dimensional wave equation is to first analyze its frequency eigenmodes. A so-called eigenmode is a solution that oscillates in time with a well-defined constant angular frequency ω, so that the temporal part of the wave function takes the form e−iωt. The rest of the wave function is then only dependent on the spatial variable x, hence amounting to separation of variables. Writing the wave function as

u(x, t) = e−iωt f(x),

we can obtain an ordinary differential equation for the spatial part:

f″(x) + k²f(x) = 0,

with wave number k = ω/c. The total wave function for this eigenmode is then the linear combination

u(x, t) = e−iωt (A e+ikx + B e−ikx),

where the complex numbers A and B depend in general on any initial and boundary conditions of the problem.

Eigenmodes are useful in constructing a full solution to the wave equation, because each of them evolves in time trivially with the phase factor e−iωt, so that a full solution can be decomposed into an eigenmode expansion, or in terms of plane waves, which is exactly the same form as in the algebraic approach. The Fourier components are determined by initial and boundary conditions. This is a so-called frequency-domain method, alternative to direct time-domain propagations of the wave packet, such as the FDTD method, and it is complete for representing waves in the absence of time dilations.
Completeness of the Fourier expansion for representing waves in the presence of time dilations has been challenged by chirp wave solutions allowing for time variation of ω. The chirp wave solutions seem particularly implied by very large but previously inexplicable radar residuals in the flyby anomaly, and differ from the sinusoidal solutions in being receivable at any distance only at proportionally shifted frequencies and time dilations, corresponding to past chirp states of the source.

Scalar wave equation in three space dimensions

A solution of the initial-value problem for the wave equation in three space dimensions can be obtained from the corresponding solution for a spherical wave. The result can then also be used to obtain the same solution in two space dimensions.

The wave equation can be solved using the technique of separation of variables. To obtain a solution with constant frequencies, let us first Fourier transform the wave equation in time:

u(r, t) = ∫ U(r, ω) e−iωt dω.

So we get

∇²U(r, ω) + (ω/c)² U(r, ω) = 0.

This is the Helmholtz equation and can be solved using separation of variables. If spherical coordinates are used to describe a problem, then the solution to the angular part of the Helmholtz equation is given by spherical harmonics, and the radial equation is solved by the spherical Hankel functions, so the complete solution is a sum over products of spherical harmonics and spherical Hankel functions.

To gain a better understanding of the nature of these spherical waves, let us go back and look at the case when there is no angular dependence and the amplitude depends only on the radial distance, i.e. u = u(r, t). In this case, the wave equation reduces to

∂²(ru)/∂t² = c² ∂²(ru)/∂r²,

where the quantity ru satisfies the one-dimensional wave equation. Therefore, there are solutions in the form

u(r, t) = (1/r)[F(r − ct) + G(r + ct)],

where F and G are general solutions to the one-dimensional wave equation, and can be interpreted as respectively an outgoing or incoming spherical wave.
Such waves are generated by a point source, and they make possible sharp signals whose form is altered only by a decrease in amplitude as r increases. Such waves exist only in cases of space with odd dimensions. For physical examples of non-spherical wave solutions to the 3D wave equation that do possess angular dependence, see dipole radiation.

Monochromatic spherical wave

Although the word "monochromatic" is not exactly accurate, since it refers to light or electromagnetic radiation with well-defined frequency, the spirit is to discover the eigenmodes of the wave equation in three dimensions. Following the derivation in the previous section on plane wave eigenmodes, if we again restrict our solutions to spherical waves that oscillate in time with well-defined constant angular frequency ω, then the transformed function ru(r, t) has simply plane wave solutions, e.g.

u(r, t) = (A/r) e i(ωt − kr).

From this we can observe that the peak intensity of the spherical-wave oscillation, characterized as the squared wave amplitude |u|² = |A|²/r², drops at a rate proportional to 1/r², an example of the inverse-square law.

Solution of a general initial-value problem

The wave equation is linear in u and is left unaltered by translations in space and time. Therefore, we can generate a great variety of solutions by translating and summing spherical waves. Let φ(ξ, η, ζ) be an arbitrary function of three independent variables, and let the spherical wave form F be a delta-function: that is, let F be a weak limit of continuous functions whose integral is unity, but whose support (the region where the function is non-zero) shrinks to the origin. Let a family of spherical waves have center at (ξ, η, ζ), and let r be the radial distance from that point. Thus a typical such wave is δ(r − ct)/r. If u is a superposition of such waves with weighting function φ, then

u(t, x, y, z) = (1/(4πc)) ∭ φ(ξ, η, ζ) [δ(r − ct)/r] dξ dη dζ;

the denominator 4πc is a convenience.
From the definition of the delta-function, u may also be written as

u(t, x, y, z) = \frac{t}{4\pi} \iint_S \varphi(x + ct\alpha, y + ct\beta, z + ct\gamma) \, d\omega,

where α, β, and γ are coordinates on the unit sphere S, and ω is the area element on S. This result has the interpretation that u(t,x) is t times the mean value of φ on a sphere of radius ct centered at x:

u(t, x) = t \, M_{ct}[\varphi](x).

It follows that

u(0, x) = 0, \qquad u_t(0, x) = \varphi(x).

The mean value is an even function of t, and hence if we set, for an arbitrary function ψ,

v(t, x) = \frac{\partial}{\partial t} \left( t \, M_{ct}[\psi](x) \right),

then

v(0, x) = \psi(x), \qquad v_t(0, x) = 0.

These formulas provide the solution for the initial-value problem for the wave equation. They show that the solution at a given point P, given by (t,x,y,z), depends only on the data on the sphere of radius ct that is intersected by the light cone drawn backwards from P. It does not depend upon data on the interior of this sphere. Thus the interior of the sphere is a lacuna for the solution. This phenomenon is called Huygens' principle. It is true for odd numbers of space dimension, where for one dimension the integration is performed over the boundary of an interval with respect to the Dirac measure. It is not satisfied in even space dimensions. The phenomenon of lacunas has been extensively investigated in Atiyah, Bott and Gårding (1970, 1973).

Scalar wave equation in two space dimensions

In two space dimensions, the wave equation is

u_{tt} = c^2 \left( u_{xx} + u_{yy} \right).

We can use the three-dimensional theory to solve this problem if we regard u as a function in three dimensions that is independent of the third dimension. If

u(0, x, y) = 0, \qquad u_t(0, x, y) = \varphi(x, y),

then the three-dimensional solution formula becomes

u(t, x, y) = \frac{t}{4\pi} \iint_S \varphi(x + ct\alpha, y + ct\beta) \, d\omega,

where α and β are the first two coordinates on the unit sphere, and dω is the area element on the sphere. This integral may be rewritten as a double integral over the disc D with center (x,y) and radius ct:

u(t, x, y) = \frac{1}{2\pi c} \iint_D \frac{\varphi(\xi, \eta)}{\sqrt{(ct)^2 - (\xi - x)^2 - (\eta - y)^2}} \, d\xi \, d\eta.

It is apparent that the solution at (t,x,y) depends not only on the data on the light cone where

(x - \xi)^2 + (y - \eta)^2 = c^2 t^2,

but also on data that are interior to that cone.

Scalar wave equation in general dimension and Kirchhoff's formulae

We want to find solutions to utt − Δu = 0 for u : Rn × (0, ∞) → R with u(x, 0) = g(x) and ut(x, 0) = h(x). See Evans for more details.
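The mean-value representation above (u(t, x) equal to t times the mean of φ over the sphere of radius ct) can be evaluated numerically. The sketch below is an illustration under assumed data, not part of the original derivation: it takes zero initial displacement and a compactly supported initial velocity ψ (a smooth bump inside |x| < 1), approximates the spherical mean by Monte Carlo sampling, and exhibits the lacuna predicted by Huygens' principle — once ct exceeds the support radius, the solution at the origin is exactly zero again:

```python
import numpy as np

rng = np.random.default_rng(0)

def sphere_mean(func, center, radius, n=20000):
    """Monte Carlo average of func over the sphere of given center and radius."""
    v = rng.normal(size=(n, 3))
    v /= np.linalg.norm(v, axis=1, keepdims=True)  # uniform unit directions
    return func(center + radius * v).mean()

def psi(pts):
    """Initial velocity: smooth bump supported in |x| < 1."""
    r2 = (pts**2).sum(axis=1)
    return np.where(r2 < 1.0, (1.0 - r2)**2, 0.0)

c = 1.0
origin = np.zeros(3)

# u(t, x) = t * M_{ct}[psi](x) for zero initial displacement
u_during = 0.5 * sphere_mean(psi, origin, c * 0.5)  # sphere inside the support
u_after = 2.0 * sphere_mean(psi, origin, c * 2.0)   # sphere entirely outside it

print(u_during)  # ~0.28: the wave is still passing the origin
print(u_after)   # 0.0 exactly: the lacuna behind the wavefront
```

In two space dimensions the corresponding disc integral would not vanish for large t, which is the failure of Huygens' principle in even dimensions noted above.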
Assume n ≥ 3 is an odd integer and g ∈ Cm+1(Rn), h ∈ Cm(Rn) for m = (n+1)/2. Let γ_n = 1 · 3 · 5 ⋯ (n − 2) and let

u(x, t) = \frac{1}{\gamma_n} \left[ \partial_t \left( \frac{1}{t} \partial_t \right)^{\frac{n-3}{2}} \left( t^{n-2} \fint_{\partial B(x,t)} g \, dS \right) + \left( \frac{1}{t} \partial_t \right)^{\frac{n-3}{2}} \left( t^{n-2} \fint_{\partial B(x,t)} h \, dS \right) \right].

Then
- u ∈ C2(Rn × [0, ∞))
- utt−Δu = 0 in Rn × (0, ∞)
- u(x, t) → g(x⁰) and ut(x, t) → h(x⁰) as (x, t) → (x⁰, 0⁺), for each x⁰ ∈ Rn.

Assume n ≥ 2 is an even integer and g ∈ Cm+1(Rn), h ∈ Cm(Rn), for m = (n+2)/2. Let γ_n = 2 · 4 ⋯ (n − 2) · n and let

u(x, t) = \frac{1}{\gamma_n} \left[ \partial_t \left( \frac{1}{t} \partial_t \right)^{\frac{n-2}{2}} \left( t^n \fint_{B(x,t)} \frac{g(y)}{\sqrt{t^2 - |y - x|^2}} \, dy \right) + \left( \frac{1}{t} \partial_t \right)^{\frac{n-2}{2}} \left( t^n \fint_{B(x,t)} \frac{h(y)}{\sqrt{t^2 - |y - x|^2}} \, dy \right) \right].

Then
- u ∈ C2(Rn × [0, ∞))
- utt−Δu = 0 in Rn × (0, ∞)
- u(x, t) → g(x⁰) and ut(x, t) → h(x⁰) as (x, t) → (x⁰, 0⁺), for each x⁰ ∈ Rn.

Problems with boundaries

One space dimension

The Sturm–Liouville formulation

A flexible string that is stretched between two points x = 0 and x = L satisfies the wave equation for t > 0 and 0 < x < L. On the boundary points, u may satisfy a variety of boundary conditions. A general form that is appropriate for applications is

-u_x(t, 0) + a \, u(t, 0) = 0, \qquad u_x(t, L) + b \, u(t, L) = 0,

where a and b are non-negative. The case where u is required to vanish at an endpoint is the limit of this condition when the respective a or b approaches infinity. The method of separation of variables consists in looking for solutions of this problem in the special form

u(t, x) = T(t) \, v(x).

A consequence is that

\frac{T''}{c^2 T} = \frac{v''}{v} = -\lambda.

The eigenvalue λ must be determined so that there is a non-trivial solution of the boundary-value problem

v'' + \lambda v = 0, \qquad -v'(0) + a \, v(0) = 0, \qquad v'(L) + b \, v(L) = 0.

This is a special case of the general problem of Sturm–Liouville theory. If a and b are positive, the eigenvalues are all positive, and the solutions are trigonometric functions. A solution that satisfies square-integrable initial conditions for u and ut can be obtained from expansion of these functions in the appropriate trigonometric series.

Investigation by numerical methods

Approximating the continuous string with a finite number of equidistant mass points, one gets the following physical model: If each mass point has the mass m, the tension of the string is f, the separation between the mass points is Δx, and ui, i = 1, ..., n are the offsets of these n points from their equilibrium points (i.e.
their position on a straight line between the two attachment points of the string), the vertical component of the force towards point i+1 is

\frac{f}{\Delta x} \left( u_{i+1} - u_i \right),

and the vertical component of the force towards point i−1 is

\frac{f}{\Delta x} \left( u_{i-1} - u_i \right).

Taking the sum of these two forces and dividing by the mass m, one gets for the vertical motion:

\ddot{u}_i = \frac{f}{m \, \Delta x} \left( u_{i+1} + u_{i-1} - 2u_i \right).

As the mass density is

\rho = \frac{m}{\Delta x},

this can be written

\ddot{u}_i = \frac{f}{\rho \, \Delta x^2} \left( u_{i+1} + u_{i-1} - 2u_i \right).

The wave equation

u_{tt} = c^2 u_{xx}, \qquad c^2 = \frac{f}{\rho},

is obtained by letting Δx → 0, in which case ui(t) takes the form u(x, t), where u(x, t) is a continuous function of two variables. The boundary condition

u(0, t) = u(L, t) = 0,

where L is the length of the string, takes in the discrete formulation the form that for the outermost points u₁ and u_n the equations of motion are

\ddot{u}_1 = \left( \frac{c}{\Delta x} \right)^2 \left( u_2 - 2u_1 \right) \quad (6)

and

\ddot{u}_n = \left( \frac{c}{\Delta x} \right)^2 \left( u_{n-1} - 2u_n \right), \quad (7)

while for 1 < i < n

\ddot{u}_i = \left( \frac{c}{\Delta x} \right)^2 \left( u_{i+1} + u_{i-1} - 2u_i \right). \quad (5)

If the string is approximated with 100 discrete mass points, one gets the 100 coupled second-order differential equations (5), (6) and (7), or equivalently 200 coupled first-order differential equations. Propagating these forward in time using an 8th-order multistep method, the 6 states displayed in figure 2 are found: The red curve is the initial state at time zero, at which the string is "let free" in a predefined shape with all \dot{u}_i = 0. The blue curve is the state at time L/(4c), i.e. after a time that corresponds to the time a wave moving with the nominal wave velocity c = \sqrt{f/\rho} would need for one fourth of the length of the string. Figure 3 displays the shape of the string at a sequence of later times. The wave travels towards the right with the speed c = \sqrt{f/\rho}, without being actively constrained by the boundary conditions at the two extremes of the string. The shape of the wave is constant, i.e. the curve is indeed of the form f(x−ct). Figure 4 displays the shape of the string at still later times. The constraint on the right extreme starts to interfere with the motion, preventing the wave from raising the end of the string. Figure 5 displays the shape of the string at the times when the direction of motion is reversed.
The red, green and blue curves are the states at three successive times, while the 3 black curves correspond to the states at later times, with the wave starting to move back towards the left. Figure 6 and figure 7 finally display the shape of the string at two later times. The wave now travels towards the left, and the constraints at the end points are not active any more. When the wave finally reaches the other extreme of the string, the direction will again be reversed, in a way similar to what is displayed in figure 6.

Several space dimensions

The one-dimensional initial-boundary value theory may be extended to an arbitrary number of space dimensions. Consider a domain D in m-dimensional x space, with boundary B. Then the wave equation is to be satisfied if x is in D and t > 0. On the boundary of D, the solution u shall satisfy

\frac{\partial u}{\partial n} + a u = 0,

where n is the unit outward normal to B, and a is a non-negative function defined on B. The case where u vanishes on B is a limiting case for a approaching infinity. The initial conditions are

u(0, x) = f(x), \qquad u_t(0, x) = g(x),

where f and g are defined in D. This problem may be solved by expanding f and g in the eigenfunctions of the Laplacian in D, which satisfy the boundary conditions. Thus the eigenfunction v satisfies

\nabla \cdot \nabla v + \lambda v = 0

in D, and

\frac{\partial v}{\partial n} + a v = 0

on B.

In the case of two space dimensions, the eigenfunctions may be interpreted as the modes of vibration of a drumhead stretched over the boundary B. If B is a circle, then these eigenfunctions have an angular component that is a trigonometric function of the polar angle θ, multiplied by a Bessel function (of integer order) of the radial component. Further details are in Helmholtz equation.

Inhomogeneous wave equation in one dimension

The inhomogeneous wave equation in one dimension is the following:

u_{tt}(x, t) - c^2 u_{xx}(x, t) = s(x, t),

with initial conditions given by

u(x, 0) = f(x), \qquad u_t(x, 0) = g(x).

The function s(x, t) is often called the source function because in practice it describes the effects of the sources of waves on the medium carrying them.
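The coupled equations (5), (6) and (7) from the numerical investigation above can be integrated directly. The sketch below is a simplified re-creation, not the original experiment: it uses velocity-Verlet time stepping instead of the 8th-order multistep method, and an assumed Gaussian initial shape rather than the quadratic splines used for the figures. Released from rest, the pulse splits into two half-amplitude waves travelling left and right at the nominal speed c:

```python
import numpy as np

n = 100               # number of mass points
L = 1.0               # length of the string
c = 1.0               # nominal wave speed sqrt(f/rho)
dx = L / (n + 1)      # separation of the mass points
x = np.linspace(dx, L - dx, n)

# String "let free" from rest in a predefined shape (assumed Gaussian pulse)
u = np.exp(-(((x - 0.5) / 0.05) ** 2))
v = np.zeros(n)

def accel(u):
    """Right-hand sides of equations (5), (6) and (7) (fixed end points)."""
    a = np.empty_like(u)
    a[0] = (c / dx) ** 2 * (u[1] - 2 * u[0])                   # equation (6)
    a[-1] = (c / dx) ** 2 * (u[-2] - 2 * u[-1])                # equation (7)
    a[1:-1] = (c / dx) ** 2 * (u[2:] + u[:-2] - 2 * u[1:-1])   # equation (5)
    return a

dt = 0.5 * dx / c               # time step satisfying the CFL stability limit
steps = int(0.25 * L / c / dt)  # a quarter of a string traversal

for _ in range(steps):          # velocity-Verlet integration
    a = accel(u)
    v += 0.5 * dt * a
    u += dt * v
    v += 0.5 * dt * accel(u)

print(u.max())  # ~0.5: each travelling pulse has half the initial amplitude
```

After this time the two pulses are centred near x ≈ 0.25 and x ≈ 0.75, consistent with the d'Alembert form ½[f(x − ct) + f(x + ct)].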
Physical examples of source functions include the force driving a wave on a string, or the charge or current density in the Lorenz gauge of electromagnetism. One method to solve the initial-value problem (with the initial values as posed above) is to take advantage of a special property of the wave equation in an odd number of space dimensions, namely that its solutions respect causality. That is, for any point (xi, ti), the value of u(xi, ti) depends only on the values of f(xi+cti) and f(xi−cti) and the values of the function g(x) between (xi−cti) and (xi+cti). This can be seen in d'Alembert's formula, stated above, where these quantities are the only ones that show up in it. Physically, if the maximum propagation speed is c, then no part of the wave that can't propagate to a given point by a given time can affect the amplitude at the same point and time.

In terms of finding a solution, this causality property means that for any given point on the line being considered, the only area that needs to be considered is the area encompassing all the points that could causally affect the point being considered. Denote the area that causally affects point (xi, ti) as R_C. Suppose we integrate the inhomogeneous wave equation over this region:

\iint_{R_C} \left( u_{tt}(x, t) - c^2 u_{xx}(x, t) \right) dx \, dt = \iint_{R_C} s(x, t) \, dx \, dt.

To simplify this greatly, we can use Green's theorem to simplify the left side to get the following line integral over the boundary of the causality region, traversed counterclockwise:

\oint_{\partial R_C} \left( -c^2 u_x(x, t) \, dt - u_t(x, t) \, dx \right) = \iint_{R_C} s(x, t) \, dx \, dt.

The left side is now the sum of three line integrals along the bounds of the causality region. These turn out to be fairly easy to compute. Along the base of the triangle,

\int_{x_i - ct_i}^{x_i + ct_i} -u_t(x, 0) \, dx = -\int_{x_i - ct_i}^{x_i + ct_i} g(x) \, dx.

In the above, the term to be integrated with respect to time disappears because the time interval involved is zero, thus dt = 0. For the other two sides of the region, it is worth noting that x ± ct is a constant, namely xi ± cti, where the sign is chosen appropriately.
Using this, we can get the relation dx ± c dt = 0, again choosing the right sign. For the boundary segment L₁, on which x + ct = xi + cti,

\int_{L_1} \left( -c^2 u_x \, dt - u_t \, dx \right) = c \int_{L_1} \left( u_x \, dx + u_t \, dt \right) = c \, u(x_i, t_i) - c \, f(x_i + ct_i).

And similarly for the final boundary segment L₂, on which x − ct = xi − cti:

\int_{L_2} \left( -c^2 u_x \, dt - u_t \, dx \right) = c \, u(x_i, t_i) - c \, f(x_i - ct_i).

Adding the three results together and putting them back in the original integral:

2c \, u(x_i, t_i) - c \, f(x_i + ct_i) - c \, f(x_i - ct_i) - \int_{x_i - ct_i}^{x_i + ct_i} g(x) \, dx = \iint_{R_C} s(x, t) \, dx \, dt.

Solving for u(xi, ti), we arrive at

u(x_i, t_i) = \frac{f(x_i + ct_i) + f(x_i - ct_i)}{2} + \frac{1}{2c} \int_{x_i - ct_i}^{x_i + ct_i} g(x) \, dx + \frac{1}{2c} \int_0^{t_i} \int_{x_i - c(t_i - t)}^{x_i + c(t_i - t)} s(x, t) \, dx \, dt.

In the last equation of the sequence, the bounds of the integral over the source function have been made explicit. Looking at this solution, which is valid for all choices (xi, ti) compatible with the wave equation, it is clear that the first two terms are simply d'Alembert's formula, as stated above as the solution of the homogeneous wave equation in one dimension. The difference is in the third term, the integral over the source.

Other coordinate systems

The elastic wave equation in three dimensions describes the propagation of waves in an isotropic homogeneous elastic medium. Most solid materials are elastic, so this equation describes such phenomena as seismic waves in the Earth and ultrasonic waves used to detect flaws in materials. While linear, this equation has a more complex form than the equations given above, as it must account for both longitudinal and transverse motion:

\rho \ddot{\mathbf{u}} = \mathbf{f} + (\lambda + 2\mu) \nabla (\nabla \cdot \mathbf{u}) - \mu \nabla \times (\nabla \times \mathbf{u}),

where:
- λ and μ are the so-called Lamé parameters describing the elastic properties of the medium,
- ρ is the density,
- f is the source function (driving force),
- and u is the displacement vector.

Note that in this equation, both force and displacement are vector quantities. Thus, this equation is sometimes known as the vector wave equation. As an aid to understanding, the reader will observe that if f and ∇⋅u are set to zero, this becomes (effectively) Maxwell's equation for the propagation of the electric field E, which has only transverse waves.

In dispersive wave phenomena, the speed of wave propagation varies with the wavelength of the wave, which is reflected by a dispersion relation

\omega = \omega(\mathbf{k}),

where ω is the angular frequency and k is the wavevector describing plane wave solutions.
For light waves, the dispersion relation is ω = ±c |k|, but in general, the constant speed c gets replaced by a variable phase velocity:

v_{\mathrm{p}} = \frac{\omega(k)}{k}.

See also

- Acoustic attenuation
- Acoustic wave equation
- Bateman transform
- Electromagnetic wave equation
- Helmholtz equation
- Inhomogeneous electromagnetic wave equation
- Laplace operator
- Mathematics of oscillation
- Maxwell's equations
- Schrödinger equation
- Standing wave
- Vibrations of a circular membrane
- Wheeler–Feynman absorber theory

- Cannon, John T.; Dostrovsky, Sigalia (1981). The Evolution of Dynamics, Vibration Theory from 1687 to 1742. Studies in the History of Mathematics and Physical Sciences 6. New York: Springer-Verlag. ix + 184 pp. ISBN 0-3879-0626-6. Reviewed in: Gray, J. W. (July 1983). "Book Reviews". Bulletin (New Series) of the American Mathematical Society 9 (1). (Retrieved 13 Nov 2012.)
- Gerard F. Wheeler (1987). "The Vibrating String Controversy". Am. J. Phys. 55 (1): 33–37. (Retrieved 13 Nov 2012.)
- For a special collection of the 9 groundbreaking papers by the three authors, see First Appearance of the wave equation: D'Alembert, Leonhard Euler, Daniel Bernoulli — the controversy about vibrating strings. Herman H. J. Lynge and Son. (Retrieved 13 Nov 2012.)
- For de Lagrange's contributions to the acoustic wave equation, one can consult: Allan D. Pierce, Acoustics: An Introduction to Its Physical Principles and Applications, Acoustical Society of America, 1989; page 18. (Retrieved 9 Dec 2012.)
- Speiser, David. Discovering the Principles of Mechanics 1600–1800, p. 191 (Basel: Birkhäuser, 2008).
- Tipler, Paul and Mosca, Gene. Physics for Scientists and Engineers, Volume 1: Mechanics, Oscillations and Waves; Thermodynamics, pp. 470–471 (Macmillan, 2004).
- Eric W. Weisstein. "d'Alembert's Solution". MathWorld. Retrieved 2009-01-21.
- D'Alembert (1747). "Recherches sur la courbe que forme une corde tenduë mise en vibration" (Researches on the curve that a tense cord forms [when] set into vibration), Histoire de l'académie royale des sciences et belles lettres de Berlin, vol. 3, pages 214–219.
- See also: D'Alembert (1747). "Suite des recherches sur la courbe que forme une corde tenduë mise en vibration" (Further researches on the curve that a tense cord forms [when] set into vibration), Histoire de l'académie royale des sciences et belles lettres de Berlin, vol. 3, pages 220–249.
- See also: D'Alembert (1750). "Addition au mémoire sur la courbe que forme une corde tenduë mise en vibration", Histoire de l'académie royale des sciences et belles lettres de Berlin, vol. 6, pages 355–360.
- V. Guruprasad (2015). "Observational evidence for travelling wave modes bearing distance proportional shifts", EPL, 110 (5): 54001, Bibcode:2015EL....11054001G, doi:10.1209/0295-5075/110/54001.
- John David Jackson, Classical Electrodynamics, 3rd Edition, Wiley, page 425. ISBN 978-0-471-30932-1.
- The initial state for "Investigation by numerical methods" is set with quadratic splines as follows:
- M. F. Atiyah, R. Bott, L. Gårding, "Lacunas for hyperbolic differential operators with constant coefficients I", Acta Math., 124 (1970), 109–189.
- M. F. Atiyah, R. Bott, and L. Gårding, "Lacunas for hyperbolic differential operators with constant coefficients II", Acta Math., 131 (1973), 145–206.
- R. Courant, D. Hilbert, Methods of Mathematical Physics, vol. II. Interscience (Wiley), New York, 1962.
- L. Evans, Partial Differential Equations. American Mathematical Society, Providence, 1998.
- "Linear Wave Equations", EqWorld: The World of Mathematical Equations.
- "Nonlinear Wave Equations", EqWorld: The World of Mathematical Equations.
- William C. Lane, "MISN-0-201 The Wave Equation and Its Solutions", Project PHYSNET.
- Nonlinear Wave Equations by Stephen Wolfram and Rob Knapp; Nonlinear Wave Equation Explorer by Wolfram Demonstrations Project.
- Mathematical aspects of wave equations are discussed on the Dispersive PDE Wiki.
- Graham W. Griffiths and William E. Schiesser (2009). Linear and nonlinear waves. Scholarpedia, 4(7):4308. doi:10.4249/scholarpedia.4308
The extracellular matrix is a meshwork of proteins and carbohydrates that binds cells together or divides one tissue from another. The extracellular matrix is the product principally of connective tissue, one of the four fundamental tissue types, but may also be produced by other cell types, including those in epithelial tissues. In the connective tissue, matrix is secreted by connective tissue cells into the space surrounding them, where it serves to bind cells together. The extracellular matrix forms the basal lamina, a complex sheet of extracellular matrix molecules that separates different tissue types, such as binding the epithelial tissue of the outer layer of skin to the underlying dermis, which is connective tissue. Cartilage is a connective tissue type that is principally composed of matrix, with relatively few cells. Collagens are the principal proteins of the extracellular matrix. They are structural proteins that provide tissues with strength and flexibility, and serve other essential roles as well. They are the most abundant proteins found in many vertebrates. There are at least nineteen collagen family members whose subunits, termed α chains, are encoded by at least twenty-five genes. The primary protein sequence of all collagen subunits contains repeating sequences of three amino acids, the first being glycine with the second and third being any amino acid residue (sometimes referred to as a GLY–X–Y motif). Most, if not all, collagens assemble as trimers, with three subunits coming together to form a tightly coiled helix that confers rigidity on each collagen molecule. Assembly of the collagen trimer occurs in the cell by a self-assembly process, which is mediated by short amino acid sequences at both ends of each subunit, called propeptides. Some collagens, most notably collagen types I, II, III, and V, assemble into large, ropelike macrofibrils once they are secreted into the extracellular matrix.
In these cases, the propeptides are cleaved off following secretion, permitting the trimeric molecules to undergo further self-assembly into fibrils. In the electron microscope each of these macrofibrils has a characteristic banded appearance and can be very large (up to 300 nanometers in diameter). Type IV collagen, which is found in the basal lamina, does not assemble into a fibril since its subunits retain their propeptides following secretion from a cell. Its triple helix has a series of interruptions in the GLY–X–Y repeating motif, preventing the subunits from binding quite as tightly, and giving the molecule more flexibility. Type IV collagen forms a scaffold around which other basal lamina molecules assemble. In contrast to the fibril-forming collagens and type IV collagen, type XVII collagen is a membrane-spanning protein. It is a component of a cell/matrix junction called the hemidesmosome. The fibrillar collagens are also associated with a class of collagen molecules that themselves do not form fibrils but that appear to play an important role in organizing the highly ordered arrays of collagen fibrils that occur in some connective tissues. Examples of this collagen class include type IX and type XII collagen. Collagens do not simply provide filler for tissues. Both fibrillar and basal lamina collagens interact with other extracellular matrix proteins and play important roles in regulating the activities of the cells with which they interact. Cells associate with collagen via cell surface receptors, and through such interactions collagens may have a profound impact on cell proliferation, migration, and differentiation. Fibers and meshworks of collagen molecules also act as a repository of growth factors and matrix-degrading enzymes.
These are often present in inactive form and become activated in order for tissues to undergo remodeling, for example in development, during cyclical changes in the female reproductive system, and in pathological conditions such as cancer. See also: Amino Acid; Connective Tissue; Epithelium; Protein Structure.
Magnetic structure of the southern Boso Peninsula, Honshu, Japan, and its implications for the formation of the Mineoka Ophiolite Belt

© The Society of Geomagnetism and Earth, Planetary and Space Sciences (SGEPSS); The Seismological Society of Japan; The Volcanological Society of Japan; The Geodetic Society of Japan; The Japanese Society for Planetary Sciences. 1999

Received: 13 February 1998; Accepted: 17 June 1999; Published: 6 June 2014

We conducted onshore and offshore magnetic surveys on and around the southern Boso Peninsula, Honshu, Japan, and observed prominent large-amplitude anomalies along the Mineoka Ophiolite Belt, and long-wavelength low anomalies to the south of the belt containing short-wavelength isolated anomalies. The magnetic structure was modeled by using three-dimensional magnetic prisms and a basement with about 1 A/m of magnetization. At the Mineoka Belt, the tops of the magnetic prisms are located at the ground surface, and these bodies are elongate in the vertical direction, with high-angle magnetic inclinations. Magnetic basement exists at shallow depth beneath the belt. The magnetic basement traces the bottom surfaces of the magnetic prisms and forms a graben structure. To the south of the Mineoka Belt, thin sheet-like magnetic prisms with low magnetic inclinations are assumed at 1–3 km depth. The magnetic structure implies the tectonic process of the formation of the Mineoka Ophiolite Belt: the belt could be fragmented pieces of an oceanic plate emplaced at a paleo-plate boundary, which originated at low latitude and was transported by obduction to its present location via northward drift.
Researchers traced transition of ice crystals into snowflakes

For the first time, scientists have obtained direct, quantifiable observations of cloud seeding for increased snowfall -- from the growth of ice crystals, through the processes that occur in clouds, to the eventual snowfall. The National Science Foundation (NSF)-supported project, dubbed SNOWIE (Seeded and Natural Orographic Wintertime Clouds -- the Idaho Experiment), took place from Jan. 7 to March 17, 2017, in and near Idaho's Payette Basin, located approximately 50 miles north of Boise. The research was conducted in concert with the Boise-based Idaho Power Company, which provides a large percentage of its electrical power through hydroelectric dams.

Throughout the Western U.S. and in other semi-arid mountain regions across the globe, water supplies are maintained primarily through snowmelt. Growing human populations place a higher demand on water, while warmer winters and earlier springs reduce snowpack and water supplies. Water managers see cloud seeding as a potential way of increasing winter snowfall. "But no one has had a comprehensive set of observations of what really happens after you seed a cloud," says Jeff French, an atmospheric scientist at the University of Wyoming (UW) and SNOWIE principal investigator. "There have only been hypotheses. There have never been observations that show all the steps in cloud seeding." French is the lead author of a paper reporting the results, published in today's issue of the journal Proceedings of the National Academy of Sciences. Co-authors of the paper are affiliated with the University of Colorado Boulder, the University of Illinois at Urbana-Champaign, the National Center for Atmospheric Research, and the Idaho Power Company.
French credited modern technology with making the detailed cloud-seeding observations possible, citing the use of ground-based radar as well as radar on UW's King Air research aircraft and multiple flights over the mountains near Boise. "This research shows that modern tools can be applied to longstanding scientific questions," says Nick Anderson, a program director in NSF's Division of Atmospheric and Geospace Sciences, which funded the study. "We now have direct observations that seeding of certain clouds follows a pathway first theorized in the mid-20th century." Cloud seeding stimulates snowfall by releasing silver iodide into clouds from the air or from ground-based generators. In the SNOWIE project, an aircraft supported by the Idaho Power Company released the silver iodide, while the UW King Air took measurements to monitor the silver iodide's impact. Cloud seeding occurred during 21 flights. During three flights, Idaho Power was forced to suspend cloud seeding because there was already so much snow in the Idaho mountains, French says. The UW King Air made 24 flights lasting four to six hours each, the last three monitoring natural snowfall activity. Numerical modeling of precipitation measurements was conducted using a supercomputer nicknamed Cheyenne at the NCAR-Wyoming Supercomputing Center. The numerical models simulated clouds and snowfall over the Payette Basin, as created both in natural storms and with cloud seeding. The models are enabling researchers to study storms where measurements have not been obtained in the field. "In the long-term, we will be able to answer questions about how effective cloud seeding is, and what conditions may be needed," says French. "Water managers and state and federal agencies can make decisions about whether cloud seeding is a viable option to add additional water to supplies from snowpack in the mountains." Cheryl Dybas | EurekAlert! 
Woodlouse

Porcellio scaber (left) and Oniscus asellus (centre) living on fallen wood

A woodlouse (plural woodlice) is a terrestrial isopod crustacean with a rigid, segmented, long exoskeleton and fourteen jointed limbs. Woodlice mostly feed on dead plant material, and they are usually active at night. Woodlice form the suborder Oniscidea within the order Isopoda, with over 5,000 known species. Woodlice in the genus Armadillidium and in the family Armadillidae can roll up into an almost perfect sphere as a defensive mechanism, hence some of the common names such as pill bug or roly-poly. Most woodlice, however, cannot do this.

Common names for woodlice vary throughout the English-speaking world. A number of common names make reference to the fact that some species of woodlice can roll up into a ball. Other names compare the woodlouse to a pig.

- "armadillo bug"
- "boat-builder" (Newfoundland, Canada)
- "butcher boy" or "butchy boy" (Australia, mostly around Melbourne)
- "carpenter" or "cafner" (Newfoundland and Labrador, Canada)
- "cheeselog" (Reading, England)
- "cheesy bobs" (Guildford, England)
- "cheesy bug" (North West Kent, England)
- "chiggy pig" (Devon, England)
- "chucky pig" (Devon, Gloucestershire, Herefordshire, England)
- "doodlebug" (also used for the larva of an antlion)
- "gramersow" (Cornwall, England)
- "granny grey" (South Wales)
- "monkey-peas" (Kent, England)
- "pea bug" or "peasie-bug" (Kent, England)
- "pill bug" (usually applied only to the genus Armadillidium)
- "potato bug"
- "roll up bug"
- "sow bug"
- "slater" (Scotland, Northern Ireland, New Zealand and Australia)
- "wood bug" (British Columbia, Canada)

Description and life cycle

The woodlouse has a shell-like exoskeleton, which it must progressively shed as it grows. The moult takes place in two stages; the back half is lost first, followed two or three days later by the front.
This method of moulting is different from that of most arthropods, which shed their cuticle in a single process. A female woodlouse will keep fertilised eggs in a marsupium on the underside of her body until they hatch into offspring that look like small white woodlice curled up in balls. The mother then appears to "give birth" to her offspring. Females are also capable of reproducing asexually.

Pillbugs and pill millipedes

Pillbugs (woodlice of the family Armadillidiidae, also known as pill woodlice) can be confused with pill millipedes of the order Glomerida. Both of these groups of terrestrial segmented arthropods are about the same size. They live in very similar habitats, and they can both roll up into a ball. Pill millipedes and pillbugs appear superficially similar to the naked eye. This is an example of convergent evolution. Pill millipedes can be distinguished from woodlice on the basis of having two pairs of legs per body segment instead of one pair like all isopods. Pill millipedes have twelve to thirteen body segments and about eighteen pairs of legs, whereas woodlice have eleven segments and only seven pairs of legs. In addition, pill millipedes are smoother, and resemble normal millipedes in overall colouring and the shape of the segments.

Living in a terrestrial environment, woodlice breathe through trachea-like lungs in their paddle-shaped hind legs (pleopods), called pleopodal lungs. Woodlice need moisture because they rapidly lose water by excretion and through their cuticle, and so are usually found in damp, dark places, such as under rocks and logs, although one species, Hemilepistus reaumuri, inhabits "the driest habitat conquered by any species of crustacean". They are usually nocturnal and are detritivores, feeding mostly on dead plant matter. A few woodlice have returned to water.
Evolutionarily ancient species are amphibious, such as the marine-intertidal sea slater (Ligia oceanica), which belongs to the family Ligiidae. Other examples include some Haloniscus species from Australia (family Scyphacidae), and in the northern hemisphere several species of Trichoniscidae and Thailandoniscus annae (family Styloniscidae). Species for which aquatic life is assumed include Typhlotricholigoides aquaticus (Mexico) and Cantabroniscus primitivus (Spain).

Woodlice are eaten by a wide range of insectivores, and some animals are known to prey exclusively on woodlice; examples include spiders of the genus Dysdera, such as the woodlouse spider Dysdera crocata, and land planarians of the genus Luteostriata, such as Luteostriata abundans.

Although woodlice, like earthworms, are generally considered beneficial in gardens for their role in controlling pests, producing compost and overturning the soil, they have also been known to feed on cultivated plants, such as ripening strawberries and tender seedlings. Woodlice can also invade homes en masse in search of moisture, and their presence can indicate dampness problems. They are not generally regarded as a serious household pest, as they do not spread disease and do not damage sound wood or structures.

There are over 45 native or naturalised species of woodlouse in the British Isles, ranging in colour and in size (3–30 millimetres or 0.1–1.2 inches). Of these 45 species, only five are common: Oniscus asellus (the common shiny woodlouse), Porcellio scaber (the common rough woodlouse), Philoscia muscorum (the common striped woodlouse), Trichoniscus pusillus (the common pygmy woodlouse), and Armadillidium vulgare (the common pill bug).

Infraorders and sections

- Infraorder Holoverticata
- Section: Tylida
- Section: Microcheta
- Section: Synocheta
- Section: Crinocheta
“Extraordinary.” “Stellar.” “Truly awesome.” “A world-class find.” That’s how paleontologists are reacting to the discovery of several hundred ridiculously well-preserved pterosaur eggs in China, some of them still containing the remains of embryos. “My first thought was extreme jealousy,” said David Unwin, a pterosaur expert and paleobiologist at the University of Leicester. “Really.” To understand why Unwin and others are freaking out about the discovery, published Thursday in the journal Science, you have to first appreciate how rare pterosaur eggs are. The pterosaurs were an order of flying reptiles that went extinct some 66 million years ago. They were not actually dinosaurs, but they went extinct at the same time. Along with bats and birds, they are the only vertebrates to truly fly. And though these creatures lorded over the skies for around 162 million years, only a handful of pterosaur egg fossils have ever been unearthed. And of those, paleontologists have just six three-dimensional eggs – that is, eggs not completely flattened by millions of years of being crushed under younger sediments. But now, we have a pterosaur egg extravaganza. According to the new research, a site in China’s Turpan-Hami Basin in Xinjiang has coughed up 215 beautiful, pliable and miraculously three-dimensional eggs – 16 of which contain embryonic remains. The researchers also suspect there could be as many as 300 more eggs within the same sandstone block. No wonder Xiaolin Wang, the study’s lead author and a paleontologist at the Chinese Academy of Sciences, said the discovery could be described as a sort of “pterosaur Eden.” Aside from breaking records, Unwin said there are practical reasons for why having more eggs is better. 
“When you have a really unique find, you basically can’t do anything to it because that’s all you’ve got.” But now that we have literally hundreds of eggs to work with, we have more options – such as cutting different eggs into cross-sections to study growth rates. What’s more, the egg treasure trove also boasts skeletons from what appear to be hatchlings, juveniles and adults. This, too, is an embarrassment of riches because it means scientists now have more information about how pterosaurs progressed from egg to adult than ever before. “This is by far the most exciting discovery that I know of,” said Alexander Kellner, co-author of the new study and paleontologist at the National Museum of the Federal University of Rio de Janeiro in Brazil. And Kellner isn’t some newbie to pterosaur discoveries. He’s been studying these ancient animals for more than 30 years and has personally been a part of naming or describing more than 20 species. This includes the species in question, Hamipterus tianshanensis, which Kellner, Wang and a team of their colleagues discovered in 2014. With a wingspan of approximately 11 feet, an adult H. tianshanensis may have been something like an albatross. You know, if albatrosses had large crests running the lengths of their heads and spikelike teeth. This species does not appear to have had feathers, Kellner said. Probably a fish-eater, H. tianshanensis inhabited hot and dry environments but would have buried its eggs in the sand and vegetation found on the shores of lakes or rivers. This is also likely why so many eggs have been found together. Kellner suspects recent storms caused torrential flooding which unearthed the eggs and washed them into the fossil record, along with other, older pterosaurs that fell victim to the deluge. While all four of the outside researchers contacted for this story seemed genuinely wowed by the team’s findings, they did not agree with all of the conclusions the study’s authors drew from the fossils. 
The paper says that H. tianshanensis hatchlings wouldn’t have been able to fly immediately because several of the eggs the team examined showed wing bones that were less developed than expected. This may have meant that baby pterosaurs would have spent some time on the ground hunting insects and generally trying not to be eaten before learning how to take wing. Kellner even speculated that the pterosaur females and maybe even males would have stuck around to feed the hatchlings to help get them through such a precarious stage. Michael Habib, a paleontologist at the University of Southern California, said the authors make a good argument but that it doesn’t necessarily prove the young pterosaurs were flightless. “It is important to note that while the wings were less mature than the bones of the thighs in some respects, the wing bones are still much more robust than the bones of the hind limbs,” Habib said. Furthermore, he said, bone shape and structure are integral to strength, which means it’s possible for a comparatively underdeveloped bone to in fact be stronger than one that is seemingly more developed. Unwin is similarly unconvinced that the hatchlings were grounded. For starters, he said, the embryos in question were likely only halfway done growing, so they would have developed more before they hatched out. Furthermore, none of these embryos have their teeth yet, and if pterosaurs are similar to other reptile groups in development – as most experts agree they are – then lack of teeth is a pretty good indicator that they weren’t fully baked, so to speak. Unwin said this makes the find even more important because all of the pterosaur embryos that have been discovered thus far have been late-stage and nearly ready to hatch. “And while it’s wonderful to have those, they’re not much different from hatchlings really,” he said. 
“So I think these new embryonic finds are really exciting because with these, we can begin to reconstruct the embryonic development of pterosaurs inside the egg. I just think it’ll take time to do that.” Hundreds of pterosaur eggs in one place is impressive, but Unwin said we’d need more evidence to demonstrate another suggestion in the paper: that this species of pterosaur was a communal nester, like penguins. Instead, Unwin thinks it more likely that a bunch of female pterosaurs simply laid their eggs in the same general area, much like female sea turtles returning to the same beaches year after year. Luis Chiappe, a paleontologist at the Natural History Museum of Los Angeles County, said this paper is probably just the tip of the iceberg and that a site like this could sustain a decade or more of research. “Pterosaurs are incredibly diverse, not just in the shape but also in sizes,” said Chiappe, who was part of a team that described a 100-million-year-old pterosaur egg in central Argentina in 2004. Pterosaurs ran the gamut from the gigantic, aircraftlike Quetzalcoatlus all the way down to animals about the size of a sparrow, such as Nemicolopterus. Some had the long, pointy snouts we typically associate with the flying reptiles. Others boasted wild and crazy crests, which may have been used to attract the opposite sex, as has been suggested with H. tianshanensis. And still other pterosaur species had short, squat skulls more like that of a frog, Chiappe said. (Albeit, a very scary frog.) Pterosaurs were the very first animals with backbones to master powered flight, Kellner said. How they did what they did, and did it for as long as they did, is just one of the mysteries he and his colleagues hope to solve.
Jutting out of rock in a hiking area of Alaska's Talkeetna Mountains were the bones of an ancient, long-necked swimmer. A fossil-finder recently ran across them.

A new, near-complete fossil of a velociraptor relative tells a lot about film dinosaurs versus the very large banty-rooster truth.

Our writer James Sullivan talks about how a woolly mammoth could potentially roam Siberia, and the ethics of such cases.

As far as scientists know, there have been a grand total of five mass extinctions over the last 500 million years - world-changing events during which the great majority of Earth's life was eliminated to make way for new organisms and evolutionary paths. However, for several decades, some experts have suspected that a sixth mass extinction is hidden among these "Big Five." Now researchers are claiming to have found extremely compelling proof of its existence.

Meat-eaters and plant-eaters roamed the Kamitaki region in Japan's Hyogo prefecture, dinosaur eggs indicate.

No, you can't actually go and become a velociraptor trainer, like Hollywood actor Chris Pratt does in the new blockbuster "Jurassic World." However, you can certainly learn to be a 'raptor tracker.' In a new and intriguing study, paleontologist Scott Persons details how he and his colleagues have learned to follow 75-million-year-old dinosaur trails.

This is bound to make any male cringe. Experts recently determined that the ancient 'penis worm,' Ottoia - a phallus-like creature more than 500 million years old - boasted a throat lined with deadly razor-sharp, serrated teeth. In fact, these teeth were so varied and numerous that researchers crafted a "dentists' handbook" to help them identify new species in the genus.

Spelunking and traditional cave diving are both a lot of fun. There is danger in scuba-diving flooded caves, but experienced explorers will tell you that the unique sights that can be found just beneath waves and stone are worth the risk. 
Such was the case for Ryan Dart, an Australian diver who made the paleontological discovery of a lifetime after stumbling upon a "treasure trove" of ancient and massive lemur bones.

You've definitely heard of the woolly mammoth, but did you know that 10,000 years ago, some particularly hairy rhinoceroses were stomping around the Sleeping Lands as well? Researchers recently got their hands on an incredibly well-preserved carcass of a baby woolly rhino - one that had been trapped in ice for thousands upon thousands of years.

Researchers have now determined that hippos were likely some of the first large animals to migrate from Asia to Africa, swimming from one continent to the other roughly 35 million years ago. However, they certainly weren't the semi-aquatic giants they are today. Fossil evidence indicates that ancient hippos were no larger than modern sheep.

You may think that people were the original psychedelic sojourners, 'tripping' on acid and mushrooms in a time of spirituality and rock n' roll. However, fossil evidence now indicates that dinosaurs could have been tripping too, albeit inadvertently, 100 million years ago.

Paleontologists have unearthed the fossils of our modern day scorpions' most ancient forefathers - creatures that have long been suspected to stalk the ocean's depths hundreds of millions of years ago. Interestingly, the latest specimen appears to have legs that would have been ideal for steady strides on land as well.

Fossilized plaque, of all things, may have solved a mystery that has left archaeologists scratching their heads for years. Known for its iconic Moai statues, Easter Island is suspected to have been colonized around the 13th century. However, palm trees, one of the only primary crops on the island, are believed to have become extinct not long after colonization. So how did the islanders survive? 
It has long been suspected that most mammals, primates included, really started down the evolutionary fast-track after the majority of dinosaurs went extinct. However, new fossil finds from 66 million years ago suggest that primates may have started evolving earlier, with one primate boasting a particularly large body size during a time of exceptionally tiny mammals.
Washington: The origin of the Russian Chelyabinsk meteor remains elusive two years after the incident that injured hundreds of people. Astronomers had originally predicted that a 2-km near-Earth asteroid (NEA) designated (86039) 1999 NC43 could be the source body from which the Chelyabinsk meteoroid was ejected prior to its encounter with the Earth. However, reanalysis of the orbital parameters and spectral data by an international team of researchers led by Reddy has shown that the link between Chelyabinsk and 1999 NC43 is unlikely. The study also showed that linking specific meteorites to an asteroid is extremely difficult due to the chaotic nature of the orbits of these bodies. The paper is published in the journal Icarus.
Oh, OK: We’re More Likely to Be Hit by “Space Rock” Than Originally Thought

By "space rock," I don't mean Aerosmith playing on the ISS. According to a report shared with Discovery, we’re at a “higher risk of a space rock impact than widely thought.” In other words: prepare yourselves, the disaster movies are making a comeback.

But no, seriously. With the recent discovery of hundreds of new giant comets called “centaurs,” astronomers are strongly recommending that we reconsider Earth’s risk of a catastrophic impact with one of these giant balls of ice and dust. These “centaurs” apparently average out at 31-62 miles wide. For context, the asteroid that killed the dinosaurs was about 6 miles wide. The difference here is that comets are made up of ice and dust, so they’re likely to diminish in size should they enter Earth’s atmosphere. However, that may not be enough to prevent a major catastrophe.

The orbits these comets are on are unstable, and it would require a chance deflection from the gravitational pull of Jupiter, Saturn, Uranus, or Neptune to redirect one towards Earth. Imagine if those planets were big bullies tossing around a giant dirt clod that they want to throw at poor, nerdy Earth: they’re just waiting for the right moment. It is said that this chance occurrence only happens once every 40,000-100,000 years. Are… are we overdue?

The astronomers also provided a theory as to how this would play out, because they all know we don’t like sleeping well at night. Thanks. According to them, as the comet hurtles towards Earth, it would begin to break up, since it’s coming closer to the sun and it’s made up of ice and dust. Oh, and make no mistake, it wouldn’t be one massive impact. Said the astronomers in the Royal Astronomical Society journal: The disintegration of such giant comets would produce intermittent but prolonged periods of bombardment lasting up to 100,000 years. 
But you’d be dead before the end of the first one, so whatever. You can bet that if this news catches on, we’ll see more and more people turning their gaze skyward. With the renewed interest in exploring space, it’s likely these findings will catch more than a few eyes. What it comes down to, though, is really prioritizing getting the hell off Earth. At the very least, these findings might inspire Hollywood to make another Armageddon. How great was that movie, though?
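That "once every 40,000-100,000 years" figure can be turned into a rough probability with textbook arithmetic. A minimal sketch, assuming impacts behave like a Poisson process with the quoted recurrence interval (an assumption the article itself does not make):

```python
import math

# Probability of at least one event in a time window, treating events
# as a Poisson process with a given mean recurrence interval.
def prob_at_least_one(mean_interval_years: float, window_years: float) -> float:
    rate = 1.0 / mean_interval_years          # events per year
    return 1.0 - math.exp(-rate * window_years)

# Chance of at least one giant-comet deflection in the next century,
# for both ends of the quoted 40,000-100,000-year range:
for interval in (40_000, 100_000):
    p = prob_at_least_one(interval, 100)
    print(f"1-in-{interval}-year event over 100 years: {p:.4%}")
```

Under this model the chance over any given century is a fraction of a percent, which is also why "overdue" is the wrong intuition: a Poisson process has no memory, so the odds don't grow just because the last event was long ago.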
The study, in the June 19 issue of the journal Science, is the latest to rule out a drop in CO2 as the cause for earth's ice ages growing longer and more intense some 850,000 years ago. But it also confirms many researchers' suspicion that higher carbon dioxide levels coincided with warmer intervals during the study period. The planet has undergone cyclic ice ages for millions of years, but about 850,000 years ago, the cycles of ice grew longer and more intense—a shift that some scientists have attributed to falling CO2 levels. But the study found that CO2 was flat during this transition and unlikely to have triggered the change. "Previous studies indicated that CO2 did not change much over the past 20 million years, but the resolution wasn't high enough to be definitive," said Hönisch. "This study tells us that CO2 was not the main trigger, though our data continues to suggest that greenhouse gases and global climate are intimately linked." The timing of the ice ages is believed to be controlled mainly by the earth's orbit and tilt, which determines how much sunlight falls on each hemisphere. Two million years ago, the earth underwent an ice age every 41,000 years. But some time around 850,000 years ago, the cycle grew to 100,000 years, and ice sheets reached greater extents than they had in several million years—a change too great to be explained by orbital variation alone. A global drawdown in CO2 is just one theory proposed for the transition. A second theory suggests that advancing glaciers in North America stripped away soil in Canada, causing thicker, longer lasting ice to build up on the remaining bedrock. A third theory challenges how the cycles are counted, and questions whether a transition happened at all. 
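The cycle lengths described above can be sanity-checked with simple arithmetic. The figures come from the text; the variable names are mine, and the calculation is an illustration, not part of the study's analysis:

```python
# Glacial cycle lengths before and after the mid-Pleistocene transition,
# as quoted in the article.
EARLY_PERIOD_YR = 41_000      # cycle length two million years ago
LATE_PERIOD_YR = 100_000      # cycle length after the transition
TRANSITION_AGE_YR = 850_000   # approximate age of the transition

cycles_since_transition = TRANSITION_AGE_YR / LATE_PERIOD_YR
cycles_at_old_pace = TRANSITION_AGE_YR / EARLY_PERIOD_YR

print(f"~{cycles_since_transition:.1f} long cycles since the transition")
print(f"same span at the old 41-kyr pace: ~{cycles_at_old_pace:.1f} cycles")
```

The contrast (roughly eight long cycles where there would have been about twenty short ones) is the shift the competing theories in the text are trying to explain.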
The low carbon dioxide levels outlined by the study through the last 2.1 million years make modern-day levels, caused by industrialization, seem even more anomalous, says Richard Alley, a glaciologist at Pennsylvania State University, who was not involved in the research. "We know from looking at much older climate records that a large and rapid increase in CO2 in the past (about 55 million years ago) caused large extinctions in bottom-dwelling ocean creatures, and dissolved a lot of shells as the ocean became acidic," he said. "We're heading in that direction now." The idea to approximate past carbon dioxide levels using boron, an element released by erupting volcanoes and used in household soap, was pioneered over the last decade by the paper's coauthor Gary Hemming, a researcher at Lamont-Doherty and Queens College. The study's other authors are Jerry McManus, also at Lamont; David Archer at the University of Chicago; and Mark Siddall, at the University of Bristol, UK. Funding for the study was provided by the National Science Foundation. Copies of the paper, "Atmospheric Carbon Dioxide Concentrations Across the Mid-Pleistocene Transition", are available from the authors or from Science at 202-326-6440 or email@example.com. Scientist contacts: Bärbel Hönisch, firstname.lastname@example.org Kim Martineau | EurekAlert!
Scientists at the Cornell Lab of Ornithology have noticed a decline in certain species of birds over the past three decades, although an exact cause is unknown. To help better understand what might be causing the downward trend, they are inviting the public to participate in “nest watching”. Anyone can go on the website and begin tracking data on nests that they spot. By recording information on how many eggs are laid, how many hatch, and how many survive, the scientists will get a better idea of what is happening to birds around the country. So what are you waiting for? Grab a notebook and binoculars and start saving the birds!
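A record of the kind described, eggs laid, hatched, and surviving, might be kept in a structure like the hypothetical sketch below. The field names and example species are assumptions for illustration, not the Lab's actual data schema:

```python
from dataclasses import dataclass

# Hypothetical shape of one volunteer's nest observation.
@dataclass
class NestObservation:
    species: str
    eggs_laid: int
    eggs_hatched: int
    young_fledged: int

    def hatch_rate(self) -> float:
        """Fraction of laid eggs that hatched."""
        return self.eggs_hatched / self.eggs_laid if self.eggs_laid else 0.0

    def survival_rate(self) -> float:
        """Fraction of laid eggs that produced surviving young."""
        return self.young_fledged / self.eggs_laid if self.eggs_laid else 0.0

nest = NestObservation("American Robin", eggs_laid=4, eggs_hatched=3, young_fledged=2)
print(f"hatch rate: {nest.hatch_rate():.0%}, survival: {nest.survival_rate():.0%}")
```

Aggregating these simple per-nest rates across many observers is what lets researchers spot regional or long-term trends.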
Professor of Physics Mark Stockman worked with Professor Vadym Apalkov of Georgia State and a group led by Ferenc Krausz at the prestigious Max Planck Institute for Quantum Optics and other well-known German institutions.

There are three basic types of solids: metals; semiconductors, used in today's transistors; and insulators, also called dielectrics. Dielectrics do not conduct electricity and are damaged or break down if excessively high fields are applied to them. The scientists discovered that when dielectrics were given very short, intense laser pulses, they began conducting electricity while remaining undamaged. The fastest time a dielectric can process signals is on the order of 1 femtosecond, the same time as the light wave oscillates and millions of times faster than the second hand of a watch jumps. Dielectric devices hold promise to allow for much faster computing than is possible today with semiconductors. Such a device can work at 1 petahertz, while the processor of today's computer runs at slightly more than 3 gigahertz.

"Now we can fundamentally have a device that works 10 thousand times faster than a transistor that can run at 100 gigahertz," Stockman said. "This is a field effect, the same type that controls a transistor. The material becomes conductive as a very high electrical field of light is applied to it, but dielectrics are 10,000 times faster than semiconductors."

The results were published online Dec. 5 in Nature. The research institutions include the Max Planck Institute for Quantum Optics, the Department of Physics at the Munich Technical University, the Physics Department at Ludwig Maximilian University in Munich and the Fritz Haber Institute in Berlin, Germany.

At one time, scientists thought dielectrics could not be used in signal processing, breaking down when the required high electric fields were applied. Instead, Stockman said, it is possible for them to work if such extreme fields are applied for a very short time. 
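Stockman's speed comparison can be checked with back-of-envelope arithmetic. The figures come from the text; the variable names are my own:

```python
# Frequencies quoted in the article.
PETAHERTZ = 1e15
dielectric_hz = 1 * PETAHERTZ   # ~1 fs switching time implies ~1 PHz
fast_transistor_hz = 100e9      # a 100 GHz transistor
consumer_cpu_hz = 3e9           # a ~3 GHz desktop processor clock

# Ratio vs. a 100 GHz transistor matches the quoted "10 thousand times".
print(dielectric_hz / fast_transistor_hz)  # 10000.0
# Ratio vs. a consumer CPU clock is larger still (~3.3e5).
print(dielectric_hz / consumer_cpu_hz)
```

The first ratio reproduces the "10 thousand times faster" claim in the quote; the comparison with a 3 GHz consumer chip is even more dramatic, which is the sense in which petahertz dielectric switching would be transformative.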
In a second paper also published online Dec. 5 in Nature, Stockman and his fellow researchers experimented with probing optical processes in a dielectric – silica – with very short extreme ultraviolet pulses. They discovered the fastest process that can fundamentally exist in condensed matter physics, unfolding at about 100 attoseconds – millions of times faster than the blink of an eye. The scientists were able to show that very short, highly intense light pulses can cause on-off electric currents in dielectrics – the switching needed to produce the 1s and 0s of computing's binary language – making extremely swift processing possible. Stockman's work was supported by the U.S. Department of Energy. The first paper, "Optical-field-induced current in dielectrics", is available through http://dx.doi.org/10.1038/nature11567. The second, "Controlling dielectrics with the electric field of light", is available through http://dx.doi.org/10.1038/nature11720. Jeremy Craig | EurekAlert!
Physicists of the Technical Universities of Munich and Dresden and the University of Cologne can now better study and understand the properties of these magnetic structures, which are important for both basic research and applications. Whirlpools are an everyday experience in a bath tub: when the water is drained, a circular vortex is formed. Typically, such whirls are rather stable. Similar... Physicists working with Roland Wester at the University of Innsbruck have investigated if and how chemical reactions can be influenced by targeted vibrational excitation of the reactants. They were able to demonstrate that excitation with a laser beam does not affect the efficiency of a chemical exchange reaction and that the excited molecular group acts only as a spectator in the reaction. A frequently used reaction in organic chemistry is nucleophilic substitution. It plays, for example, an important role in the synthesis of new chemical... Optical spectroscopy allows investigating the energy structure and dynamic properties of complex quantum systems. Researchers from the University of Würzburg present two new approaches of coherent two-dimensional spectroscopy. "Put an excitation into the system and observe how it evolves." According to physicist Professor Tobias Brixner, this is the credo of optical spectroscopy.... Ultra-short, high-intensity X-ray flashes open the door to the foundations of chemical reactions. Free-electron lasers generate these kinds of pulses, but there is a catch: the pulses vary in duration and energy. An international research team has now presented a solution: using a ring of 16 detectors and a circularly polarized laser beam, they can determine both factors with attosecond accuracy. Free-electron lasers (FELs) generate extremely short and intense X-ray flashes. Researchers can use these flashes to resolve structures with diameters on the...
Lecture 34: The Wonderful Quantum World - Breakdown of Classical Mechanics. Recorded by: Massachusetts Institute of Technology (MIT); published: Oct. 10, 2008; recorded: December 1999; released under the terms of the Creative Commons Attribution Non-Commercial Share Alike (CC-BY-NC-SA) license. 1. Discrete Energy Levels: Electrons orbit their atomic nucleus in well-defined orbits corresponding to discrete energy levels. The electrons can jump from one energy level to a vacant energy level, but they cannot exist in between. Transitions between these energy levels give rise to absorption and emission of light in discrete spectral lines (wavelengths). The students are encouraged to look through their diffraction gratings at helium and neon light sources to see evidence of these discrete wavelengths of emitted light. 2. Particles and Waves: Quantum mechanics introduces some very non-intuitive concepts, e.g. light behaves as both a particle (a photon) and a wave, and a particle behaves like a wave with a wavelength inversely proportional to its momentum. Interference is a wave phenomenon, and indeed particles can interfere with each other. Both the position and momentum of a particle cannot be accurately specified at the same time (Heisenberg's uncertainty principle). 3.
Diffraction by a Slit: Diffraction of light by a narrow vertical slit is a well-understood classical wave phenomenon consistent with Heisenberg's uncertainty principle. The narrower the slit, the smaller the uncertainty in the horizontal position of the photons that have to sneak through the narrow opening, and so the greater the horizontal spread of the transmitted photons (the uncertainty in their momentum). Quantum mechanics only allows you to predict positions of particles with certain probabilities. In the classical, Newtonian world you can predict the position and movement of a particle to any degree of accuracy – NOT in the microscopic quantum world. The Newtonian picture is perfect for describing the behaviour of basketballs and planets in the macroscopic world.
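The two quantitative ideas in this summary – wavelength inversely proportional to momentum (de Broglie, λ = h/p) and the slit-width trade-off (Δx·Δp ≳ ħ/2) – are easy to put into numbers. The masses and speeds below are illustrative choices, not values from the lecture:

```python
# de Broglie wavelength (lambda = h / p) and the uncertainty trade-off
# (delta_x * delta_p >= hbar / 2), with illustrative numbers.
import math

h = 6.626e-34            # Planck constant, J*s
hbar = h / (2 * math.pi)

def de_broglie(mass_kg, speed_m_s):
    """Wavelength of a particle: inversely proportional to its momentum."""
    return h / (mass_kg * speed_m_s)

# An electron at 1% of light speed has a wavelength comparable to atomic
# spacings (~2.4e-10 m), which is why electrons diffract off crystals...
electron = de_broglie(9.109e-31, 3e6)
# ...while a basketball's wavelength is immeasurably small, which is why
# Newtonian mechanics works perfectly at macroscopic scales.
basketball = de_broglie(0.6, 10.0)
print(electron, basketball)

def min_momentum_spread(slit_width_m):
    """Narrower slit -> larger minimum momentum uncertainty."""
    return hbar / (2 * slit_width_m)

# Squeezing the slit by a factor of 1000 widens the momentum spread 1000x.
print(min_momentum_spread(1e-6) / min_momentum_spread(1e-3))
```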
Stress upsets the natural bacterial balance that keeps them healthy If this winter finds you stressed out and fighting a sinus infection, then you know something of what coral will endure in the face of climate change. They don't have sinuses, but these colorful aquatic animals do actually make mucus--"coral snot" is a thing--and the balance of different species of bacteria living in their mucus is very important, because it functions as an ad hoc immune system, keeping the coral healthy by keeping unfriendly bacteria at bay. In a study appearing in the journal PLOS ONE, researchers at The Ohio State University and their colleagues have demonstrated how two separate effects of climate change combine to destabilize different populations of coral microbes--that is, unbalance the natural coral "microbiome"--opening the door for bad bacteria to overpopulate corals' mucus and their bodies as a whole. "Just like we need good bacteria to be healthy, so do coral," said Andréa Grottoli, Professor of Earth Sciences at Ohio State. "Coral don't have immune systems like humans do, but the microbes living in and on their bodies can impart immune-like function. When that falls apart, they can become sickly." The goal of the study, she said, was to help guide conservation efforts in advance of the expected rise in ocean temperature and acidity by the end of this century, as forecast by the Intergovernmental Panel on Climate Change (IPCC). "If we want to make good decisions about which coral populations are more resilient and which ones need more help, this study suggests that we have to take their associated microbial communities into account," she added. Many questions remain about how coral immunity works. Researchers are still piecing together the complex role that microbes in and on human bodies play in human immunity, and how those microbes respond to stress. 
But this study is the first to probe how the coral microbiome and physiology respond to simultaneous stresses of temperature and acidification. Grottoli's team tested two species of coral that are extremely common around the world, Acropora millepora, or staghorn coral, and Turbinaria reniformis, or yellow scroll coral. Staghorn coral is a branching coral, while yellow scroll coral is a wavy coral resembling cabbage or lettuce leaves. Some of the colors of both species come from symbiotic algae that live inside the coral animal's cells. Many researchers have studied how stress causes coral to expel their algae and turn white, a phenomenon called bleaching. In recent years, microbes have emerged as a third component of coral ecology. "What we think of as coral are really the animal host, symbiotic algae and symbiotic microbes all living together. We no longer think of coral as a symbiosis between two organisms, but a symbiosis among three organisms, what we call a holobiont," Grottoli explained. Yellow scroll coral is much more hardy than staghorn coral when it comes to retaining its algae--that is, not bleaching--in the face of rising temperatures. But researchers suspected the yellow scroll coral would also have the edge when it came to microbes because it makes more mucus, and most microbes in the coral microbiome live in the mucus that oozes over the outside of their bodies. Grottoli stressed that coral mucus isn't a sign of sickness. Healthy corals produce mucus just like healthy humans do. "It's not like they develop a runny nose, the mucus just runs out of their tissues and protects the coral surface," Grottoli said. "Corals are awesome." To test the resilience of their respective microbiomes, researchers exposed both species of coral to a temperature rise from 26.5 degrees Celsius (almost 80 degrees Fahrenheit) to 29 degrees Celsius (a little over 84 degrees Fahrenheit) over 24 days. 
During that time, they also gradually increased the acidity in the water until it was about 80 percent more acidic. These are some of the changes to the world's oceans that the IPCC has forecast to happen within this century, depending on different climate change scenarios. Under stress, the yellow scroll coral maintained a stable microbiome. But the staghorn coral was not so lucky--it experienced a decline in microbial diversity and increases in populations of Sphingomonas and Pseudomonas bacteria, both of which are familiar human pathogens. "We have known for a while some of the details as to how high temperatures hurt some symbiotic algae inside the coral, but how multiple stressors affect all three components of the holobiont and how such effects may interact across these players is a big question for the field," said co-author Mark Warner, associate director of the Marine Bioscience Program at the University of Delaware. The more temperature-sensitive staghorn coral had a weaker microbiome, bleached in response to stress, and showed signs of overall health decline. Conversely, the temperature-hardy yellow scroll coral had the strongest microbiome, did not bleach and had the best health overall--suggesting that something about the relationships among its animal, algae and microbe components makes it especially resilient. Other co-authors at Ohio State included Michael Wilkins, assistant professor of microbiology and earth sciences; doctoral student Paula Dalcin Martins; former doctoral students Stephen Levas, now at the University of Wisconsin-Whitewater, and Verena Schoepf, now at the University of Western Australia; and former Wilkins lab manager Michael Johnston, now at the National Institutes of Health. From the University of Delaware, collaborators included Wei-Jun Cai, the Mary A. S. Lighthipe Chair Professor of Marine Science and Policy; and post-doctoral researcher Tye Pettay.
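As a rough aside (not a calculation from the study), "80 percent more acidic" can be translated into a pH shift, since pH is the negative base-10 logarithm of hydrogen-ion concentration:

```python
import math

# pH = -log10([H+]); "80% more acidic" means [H+] rises by a factor of 1.8.
def ph_change(acidity_factor):
    return -math.log10(acidity_factor)

delta = ph_change(1.8)
print(round(delta, 2))  # about -0.26 pH units

# Applied to a typical present-day surface-ocean pH of ~8.1
# (an illustrative baseline, not a figure from the paper):
print(round(8.1 + delta, 2))  # about 7.84
```

A drop of roughly a quarter of a pH unit is in line with end-of-century ocean-acidification scenarios discussed in IPCC assessments.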
Coauthor Todd Melman owns Reef Systems Coral Farm in New Albany, Ohio, where the experiment took place. Funding was provided by the National Science Foundation. Contact: Andréa Grottoli, 614-292-5782; Grottoli.email@example.com Written by Pam Frost Gorder, 614-292-9475; Gorder.firstname.lastname@example.org Pam Frost Gorder | EurekAlert! New research calculates capacity of North American forests to sequester carbon 16.07.2018 | University of California - Santa Cruz Scientists discover Earth's youngest banded iron formation in western China 12.07.2018 | University of Alberta
Scientists at University of California Davis and San Francisco State University have discovered that the combination of elevated levels of carbon dioxide and an increase in ocean water temperature has a significant impact on survival and development of the Antarctic dragonfish (Gymnodraco acuticeps). The research article was published today in the journal Conservation Physiology. The study, which was the first to investigate the response to warming and increased pCO2 (partial pressure of carbon dioxide) in a developing Antarctic fish, assessed the effects of near-future ocean warming and acidification on early embryos of the naked dragonfish, a shallow benthic spawner exclusive to the circumpolar Antarctic. As the formation of their embryos takes longer than in many species (up to ten months), this makes them particularly vulnerable to a change in chemical and physical conditions. The survival and metabolism of the dragonfish embryos were measured over time at two different temperatures and three pCO2 levels over a three-week period, which allowed the researchers to assess the potential vulnerability of developing dragonfish to future ocean scenarios. The results showed that a near-future increase in ocean temperature as well as acidification have the potential to significantly alter the physiology and development of Antarctic fish. One of the article's authors, Assistant Professor Anne Todgham, explained that "temperature will probably be the main driver of change, but increases in pCO2 will also alter embryonic physiology, with responses dependent on water temperature." Professor Todgham went on to say: "Dragonfish embryos exhibited a synergistic increase in mortality when elevated temperature was coupled with increased pCO2 over the course of the three week experiment. While we predictably found that temperature increased embryonic development, altered development due to increased pCO2 was unexpected."
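The design described above (two temperatures crossed with three pCO2 levels) is a standard factorial layout, and "synergistic" means the combined effect exceeds the sum of the individual stressor effects. A minimal sketch of both ideas follows; all treatment names and mortality numbers are made up for illustration, not data from the paper:

```python
from itertools import product

temperatures = ["ambient", "warmed"]          # two temperature treatments
pco2_levels = ["ambient", "mid", "high"]      # three pCO2 treatments

# Full factorial design: every combination of levels is tested.
treatments = list(product(temperatures, pco2_levels))
print(len(treatments))  # 6 treatment cells

# Hypothetical mortality fractions per cell (illustrative only).
mortality = {
    ("ambient", "ambient"): 0.05,   # control
    ("warmed", "ambient"): 0.15,    # temperature stress alone
    ("ambient", "high"): 0.08,      # pCO2 stress alone
    ("warmed", "high"): 0.40,       # both stressors combined
}

# Synergy: the combined effect exceeds the sum of the individual
# effects measured relative to the control.
control = mortality[("ambient", "ambient")]
temp_effect = mortality[("warmed", "ambient")] - control
co2_effect = mortality[("ambient", "high")] - control
combined_effect = mortality[("warmed", "high")] - control
print(combined_effect > temp_effect + co2_effect)  # True -> synergistic
```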
These unique findings show that single stressors alone may not be sufficient to predict the effects on early development of fish, as the negative effects of increased pCO2 may only manifest at increased temperatures. They also show that fish may differ from other marine invertebrate embryos in how they respond to pCO2. The faster development of the embryos in warmer and more acidic waters could be bad news for the dragonfish. Hatching earlier, at the start of the dark winter months when limited food resources are available, has the potential to limit growth during critical periods of development. Furthermore, impacts to survival would reduce numbers of embryos that hatch and could impact dragonfish abundance. Chloe Foster | EurekAlert!
An organic reaction is a chemical reaction involving carbon-based compounds. The main organic reactions include, but are not limited to, additions, eliminations, substitutions, and redox reactions. These reactions are usually used in conjunction either to break down organic reactants or to synthesize new organic compounds. Organic synthesis is now an area of focus, as new organic molecules are heavily depended on in the fields of pharmacology, medicine, and industry, as well as in fabrics. Since organic molecules are in general far more complex than inorganic molecules, organic synthesis has developed into one of the most important branches not only of organic chemistry, but of chemistry in general. Although organic reactions are considered a subset of all chemical reactions, they are no different from other chemical reactions in terms of physical and chemical rules: the same factors govern the stability of the reactants and products, as well as, ultimately, the outcome of the reactions. Another prominent way to categorize organic reactions, which may prove more useful in some contexts, is by the functional groups undergoing the reactions. For example, in the Fries rearrangement, the functional group in the reactant undergoing the reaction is an ester. Thus, it would be useful to an organic chemist to have not only a list of functional groups with their corresponding reactions, but also a way to prepare each functional group. © BrainMass Inc., brainmass.com
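The functional-group indexing described above can be pictured as a simple lookup table pairing each group with its reactions and a preparation route. The entries below are a tiny illustrative sample, not a complete catalogue:

```python
# A toy index from functional group to representative reactions and a
# common preparation route (illustrative entries only).
reactions_by_group = {
    "ester": {
        "reactions": ["Fries rearrangement", "hydrolysis (saponification)"],
        "preparation": "Fischer esterification of a carboxylic acid and an alcohol",
    },
    "alkene": {
        "reactions": ["electrophilic addition", "hydrogenation"],
        "preparation": "elimination (dehydration of an alcohol)",
    },
    "alkyl halide": {
        "reactions": ["nucleophilic substitution (SN1/SN2)", "elimination (E1/E2)"],
        "preparation": "free-radical halogenation of an alkane",
    },
}

def lookup(group):
    """Return the representative reactions for a functional group."""
    entry = reactions_by_group.get(group)
    if entry is None:
        return f"no entry for {group!r}"
    return entry["reactions"]

print(lookup("ester"))  # ['Fries rearrangement', 'hydrolysis (saponification)']
```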
Deoxyribonucleic acid and ribonucleic acid -- DNA and RNA -- are closely related molecules that participate in transmitting and expressing genetic information. Both consist of molecular chains containing alternating units of sugar and phosphate. Nitrogen-containing molecules, called nucleotide bases, hang off each sugar unit. The different sugar units in DNA and RNA are responsible for the differences between the two biochemicals. Ribose, the sugar of RNA, contains five carbon atoms, four of which join with one oxygen atom to form a ring; its carbons carry hydrogen atoms and hydroxyl groups (a hydroxyl group is one oxygen atom bound to one hydrogen atom). Deoxyribose is identical to ribose except that one carbon binds to a hydrogen atom instead of a hydroxyl group. This one difference means that two strands of DNA can form a double-helix structure while RNA remains a single strand. DNA's double-helix structure is very stable, giving it the ability to encode information for a long time. The cell creates RNA as needed during the process of transcription, but DNA is self-replicating. Each sugar unit in DNA and RNA binds to one of four nucleotide bases. Both DNA and RNA use the bases A, C and G. However, DNA uses the base T while RNA uses the base U instead. The sequence of bases along the strands of DNA and RNA is the genetic code that tells the cell how to make proteins. In DNA, the bases of each strand bind to the bases on the other strand, forming the double-helix structure: A's can only bind to T's, and C's can only bind to G's. The DNA helix is packaged with proteins into a compact structure called a chromosome. Roles in Transcription The cell makes protein by transcribing DNA to RNA and then translating the RNA into proteins. During transcription, a portion of the DNA molecule, called a gene, is exposed to enzymes that assemble RNA strands according to the nucleotide-base binding rules. The one difference is that DNA A bases bind to RNA U bases.
The enzyme RNA polymerase reads each DNA base in a gene and adds the complementary RNA base to the growing RNA strand. In this way, DNA’s genetic information is transmitted to RNA. The cell also uses a second type of RNA to make ribosomes, which are tiny protein-making factories. A third type of RNA helps transfer amino acids to growing protein strands. DNA plays no role in translation. RNA’s extra hydroxyl groups make it a more reactive molecule that is less stable in alkaline conditions than DNA. The tight structure of a DNA double helix makes it less vulnerable to enzyme action, but RNA is more resistant to ultraviolet rays.
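The base-pairing rules described above translate directly into code. A minimal sketch of transcribing a DNA template strand into RNA, base by base as RNA polymerase does (the input sequence is an arbitrary example):

```python
# DNA template -> RNA transcript using the pairing rules described above:
# A pairs with U (in RNA), T with A, C with G, and G with C.
DNA_TO_RNA = {"A": "U", "T": "A", "C": "G", "G": "C"}

def transcribe(template_strand: str) -> str:
    """Build the RNA complement of a DNA template strand."""
    return "".join(DNA_TO_RNA[base] for base in template_strand)

print(transcribe("TACGGT"))  # AUGCCA
```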
A tooth challenges beliefs about how ancient reptiles lived At the beginning of the age of dinosaurs, gigantic reptiles—distant relatives of modern crocodiles—ruled the earth. Some lived on land and others in water and it was thought they didn't much interact. But a tooth found by a University of Tennessee, Knoxville, researcher in the thigh of one of these ancient animals is challenging this belief. This image shows teeth from phytosaurs, a reptile from the Triassic Period, that lived about 210 million years ago in the western United States, in the hand of Virginia Tech research scientist Michelle Stocker. The gray tooth was 3-D printed from CT scans after being digitally extracted from the thigh bone of a large predatory reptile called a rauisuchid. Credit: Michelle Stocker Stephanie Drumheller, an earth and planetary sciences lecturer, and her Virginia Tech colleagues Michelle Stocker and Sterling Nesbitt examined 220-million-year-old bite marks in the thigh bones of an old reptile and found evidence that two predators at the top of their respective food chains interacted—with the smaller potentially having eaten the larger animal. The evidence? A tooth of a semi-aquatic phytosaur lodged in the thigh bone of a terrestrial rauisuchid. The tooth lay broken off and buried about two inches deep in bone and then healed over, indicating that the rauisuchid, a creature about 25 feet long and 4 feet high at the hip, survived the initial attack. "To find a phytosaur tooth in the bone of a rauisuchid is very surprising. These rauisuchids were the largest predators in their environments. You might expect them to be the top predators as well, but here we have evidence of phytosaurs, who were smaller, semi-aquatic animals, potentially targeting and eating these big carnivores," said Drumheller. To study the tooth without destroying the bone, the team partnered computed tomographic (CT) data with a 3D printer and printed copies of the tooth. 
This, along with an examination of the bite marks, revealed a story of multiple struggles. The team found tissue surrounding bite marks illustrating that the rauisuchid was attacked twice and survived. Evidence of crushing, impact and flesh-stripping but no healing showed the team that the animal later died in another attack. The tooth that was left behind revealed who was guilty of the attacks. "Finding teeth embedded directly in fossil bone is very, very rare," said Drumheller of the bone obtained from the University of California Museum of Paleontology in Berkeley. "This is the first time it's been identified among phytosaurs, and it gives us a smoking gun for interpreting this set of bite marks." The findings also suggest previous distinctions between water- and land-based food chains of this time, the Late Triassic Period, might be built upon mistaken assumptions made from fossil remains. "This research will call for us to go back and look at some of the assumptions we've had in regard to the Late Triassic ecosystems," Stocker said. "The aquatic and terrestrial distinctions made were oversimplified, and I think we've made a case that the two spheres were intimately connected." The research also calls into question the importance of size in a fight. "Both of the femora we examined came from some of the physically largest carnivorous species present at both locations. Yet they were targeted by other members of the region—specifically phytosaurs," said Drumheller. "Thus, size cannot be the only factor in determining who is at the top of the food chain." The research is published online in the German scientific journal Naturwissenschaften, The Science of Nature. Whitney Heins | EurekAlert!
Global study of world's beaches shows threat to protected areas 19.07.2018 | NASA/Goddard Space Flight Center NSF-supported researchers to present new results on hurricanes and other extreme events 19.07.2018 | National Science Foundation
Climate change is visible and occurring throughout the U.S., but the choices we make now will determine the severity of its impacts in the future, according to a Texas Tech University climate scientist who served as a lead author on a report released today by the White House. Katharine Hayhoe, a research associate professor in the Department of Geosciences, was one of 31 scientists from 13 U.S. government science agencies, major universities and research institutes that produced the study. In 2007, she was invited to serve as the lead author for the Great Plains chapter of the report, which includes Texas. “During the next decade or two, we are likely to see an increase of 2 to 3 degrees Fahrenheit across the United States,” Hayhoe said. “How much temperatures rise after that depends primarily on our emissions of heat-trapping gases during the next few decades. Under lower emissions, temperatures could increase 4 to 7.5 degrees. With higher emissions, we can expect 7 to 11 degrees, with the greatest increases in summer.” Using projections such as these, authors crafted what they call the most comprehensive, plain-language report to date on national climate change. Global Climate Change Impacts in the United States provides the most current information on how climate change is likely to impact key economic sectors and regions of the country. The report spans both the Bush and Obama Administrations. The study found that Americans already are being affected by climate change through extreme weather, drought and wildfire and details how the nation’s transportation, agriculture, health, water and energy sectors will be affected in the future. The study also found that the current trend in the emission of greenhouse gas pollution is significantly above the worst-case scenario examined in this report. Hayhoe said heat waves, drought and heavy rainfall events are all expected to become more frequent for much of the nation, including in the Great Plains. 
Warmer temperatures increase evaporation. Combined with increased risk of drought, this raises concerns about the region’s water supply, already overtaxed in many parts of the Great Plains.

“Water is gold – here in Texas and across the Great Plains,” she said. “Much of it comes from the Ogallala Aquifer, which extends from Nebraska all the way down to West Texas. But on the South Plains, we’re already taking the water out faster than it can replenish, and aquifer levels across the region have dropped by more than 150 feet since irrigation began in the 1950s. Farming and ranching are already under pressure from expanding human development and limited water supply. Climate change will exacerbate these and other existing stresses on our natural environment and our society.”

Rising temperatures likely will further stress farms and ranches, shifting the areas where certain crops are grown, and allowing pests currently confined to the southern parts of the region to expand northward. Rising temperatures also will add to the pressure on the region’s grasslands and playa lakes – unique habitats the Great Plains region offers to migrating and local birds as well as other wildlife.

The report emphasizes that the choices we make now will determine the severity of climate change impacts in the future. Earlier reductions in emissions will have a greater effect in reducing climate change than comparable reductions made later.

Main findings for the United States include:

• Heat waves will become more frequent and intense, increasing threats to human health and quality of life. Extreme heat also will affect transportation and energy systems, and crop and livestock production.

• Increased heavy downpours will lead to more flooding, waterborne diseases, negative effects on agriculture, and disruptions to energy, water and transportation systems.
• Reduced summer runoff and increasing water demands will create greater competition for water supplies in some regions, especially in the West.

• Rising water temperatures and ocean acidification threaten coral reefs and the rich ecosystems they support.

• Insect infestations and wildfires already are increasing and are projected to increase further in a warming climate.

• Local sea-level rise of more than three feet on top of storm surges will increasingly threaten homes and other coastal infrastructure. Coastal flooding will become more frequent and severe, and coastal land will be lost to the rising seas.

Hayhoe has led climate impact assessments for California, the Northeast, Chicago, and also contributed to the United Nations Intergovernmental Panel on Climate Change.

A product of the interagency U.S. Global Change Research Program and led by National Oceanic and Atmospheric Administration, the definitive 190-page report is intended to better inform members of the public and policymakers. It is available at www.globalchange.gov/usimpacts.

CONTACT: Katharine Hayhoe, associate professor, Department of Geosciences, Texas Tech University, (806) 392-1900, or email@example.com

John Davis | Newswise Science News
In tidal wetlands, small differences in ground elevation can have a large impact on hydrology, vegetation, and habitat, and are therefore important to assessments of wetland health and stability, habitat quality, flood risk, and coastal inundation.

The DGS has been a data provider for the National Ground-Water Monitoring Network (NGWMN) since 2016. NGWMN is a consortium of state and local agencies and the U.S. Geological Survey.

The Delaware Geological Survey will review recent scientific literature and assessments of sea-level change in Delaware and identify appropriate scenarios to use for planning purposes throughout the state. This project will also develop new inundation maps along Delaware's coast that correspond to the identified scenarios.

In 2015, DGS became aware of a situation east of Dover where there is potential for overpumping of the Columbia aquifer by the City of Dover’s Long Point Road wellfield (LPRW) and numerous large-capacity irrigation wells in the surrounding area (Figure 1).

Geologic maps at the DGS are created as primary deliverables of a project and as derivatives of other projects. Primary deliverables are mainly those that are the result of outside funding sources such as the AASG-USGS cooperative StateMap program. Derivative maps are those built from primary data that were collected for purposes other than geologic mapping, or geologic maps produced as secondary products of a project rather than as its primary goal.

DGS is continuing a collaboration with climate scientist Kevin Brinson (DEOS) to develop and test methods to estimate and map the annual and seasonal distribution of evapotranspiration (ET) for Sussex County, Delaware. Remotely sensed data from the Landsat 7 ETM+ and MODIS platforms will be used to estimate regional energy balance and water flux.
These estimates are calibrated by comparison to ET estimates determined by direct point measurements (Eddy Covariance and atmometer) and models driven by meteorological data such as temperature, relative humidity, wind speed, and soil moisture. The results have the potential to improve accuracy and precision of ET models and will be valuable for efforts that use water budgets for resource management, agriculture, wetland assessment, and research.
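The calibration step described above, comparing remote-sensing ET estimates against ground-based point measurements, can be sketched as a simple least-squares bias correction. The paired values below are hypothetical illustrative numbers, not DGS or DEOS data:

```python
# Hypothetical paired observations (mm/day): remote-sensing ET at pixels
# containing flux towers vs. the eddy-covariance ET measured on the ground.
remote = [2.1, 3.4, 4.0, 5.2, 1.8, 3.0]
tower = [2.4, 3.6, 4.5, 5.5, 2.0, 3.3]

n = len(remote)
mean_r = sum(remote) / n
mean_t = sum(tower) / n

# Ordinary least-squares fit: tower = slope * remote + intercept
slope = (sum((x - mean_r) * (y - mean_t) for x, y in zip(remote, tower))
         / sum((x - mean_r) ** 2 for x in remote))
intercept = mean_t - slope * mean_r

def calibrate(et_remote):
    """Bias-correct a remote-sensing ET estimate (mm/day) with the fitted line."""
    return slope * et_remote + intercept
```

In practice such fits are made per season and per land-cover class; this sketch only shows the core idea of anchoring satellite-derived estimates to direct measurements.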
Scientists from the University of Liverpool have taught computers to sift through the infinite possibilities of atoms in search of new materials. The computers use machine learning to help scientists narrow their focus when combining atoms to create something entirely new. This new research will allow scientists to input previously known materials into a machine-learning algorithm so that the computer can then predict what similar atomic pairings will produce.

Liverpool materials chemist Professor Matt Rosseinsky said: “Understanding which atoms will combine to form new materials from the vast space of possible candidates is one of the grand scientific challenges, and solving it will open up exciting scientific opportunities that could lead to important properties.”

The process is much more complex than wood-plus-fire-equals-torch. At the University of Liverpool they used the research to predict crystal growth. The scientists’ report explains the experimental discovery of two different crystal structure types by computational identification of the region of a complex inorganic phase field that contains them.

Technology like this brings us one step closer to spending the bulk of our time reviewing results instead of conducting painstaking research. Machines are learning what we want from them, and in return they are teaching us how to make sense of the building blocks of our universe.
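The general idea of learning from known materials to judge unseen atomic pairings can be sketched with a toy nearest-neighbour classifier. The features, data points, and threshold below are invented for illustration only and are not the Liverpool group's actual method:

```python
import math

# Hypothetical toy dataset: each known atomic pairing is described by two
# illustrative features (electronegativity difference, radius ratio) and a
# label saying whether a stable binary compound is known to form.
known_pairs = [
    # (dEN, r_ratio, forms_compound)
    (2.1, 0.54, 1),   # strongly ionic pairing: compound forms
    (1.8, 0.73, 1),
    (0.2, 0.95, 0),   # chemically similar atoms: no new compound
    (0.1, 0.99, 0),
    (1.5, 0.41, 1),
    (0.4, 0.88, 0),
]

def predict(features, k=3):
    """k-nearest-neighbour vote: does an unseen pairing resemble known formers?"""
    dists = sorted(
        (math.dist(features, (den, rr)), label)
        for den, rr, label in known_pairs
    )
    votes = [label for _, label in dists[:k]]
    return int(sum(votes) > k / 2)

print(predict((1.9, 0.6)))  # resembles the compound-forming region -> 1
print(predict((0.3, 0.9)))  # resembles the non-forming region -> 0
```

Real materials-discovery models use far richer structural and compositional descriptors, but the workflow is the same: featurize known materials, fit a model, then rank candidate combinations for experimental follow-up.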
Geoscientists are closing important knowledge gaps in the climate history of the Arctic Ocean

An international team of scientists led by the Alfred Wegener Institute Helmholtz Centre for Polar and Marine Research (AWI) has managed to open a new window into the climate history of the Arctic Ocean. Using unique sediment samples from the Lomonosov Ridge, the researchers found that six to ten million years ago the central Arctic was completely ice-free during summer and sea-surface temperatures reached values of 4 to 9 degrees Celsius. In spring, autumn and winter, however, the ocean was covered by sea ice of variable extent, the scientists explain in the current issue of the journal Nature Communications. These new findings from the Arctic region provide new benchmarks for groundtruthing global climate reconstructions and modelling. The researchers had recovered these unique sediment samples during a Polarstern expedition in the summer of 2014.

"The Arctic sea ice is a very critical and sensitive component in the global climate system. It is therefore important to better understand the processes controlling present and past changes in sea ice. In this context, one of our expedition’s aims was to recover long sediment cores from the central Arctic that can be used to reconstruct the history of the ocean's sea ice cover throughout the past 50 million years. Until recently, only a very few cores representing such old sediments were available, and, thus, our knowledge of the Arctic climate and sea ice cover several million years ago is still very limited," Prof. Dr. Rüdiger Stein, AWI geologist, expedition leader and lead author of the study, explains.

The scientists found an ideal place for recovering the sediment cores on the western slope of the Lomonosov Ridge, a large undersea mountain range in the central Arctic.
"This slope must have experienced gigantic recurring landslides in the past, which resulted in the exhumation of more than 500-metre-thick ancient sediment and rock formations. We were also surprised about the wide-spread occurrence of these slide scars, which extend over a length of more than 300 kilometres, almost from the North Pole to the southern end of the ridge on the Siberian side," Rüdiger Stein explains.

Sediment core emerges as a unique climate archive

In a two-day coring operation, he and his team took 18 sediment cores from this narrow area on Lomonosov Ridge on board the research vessel Polarstern. Although the recovered sediment cores were only four to eight metres long, one of them turned out to be precisely the kind of climate archive that the scientists had long been looking for. "With the help of certain microfossils, so-called dinoflagellates, we were able to unambiguously establish that the lower part of this core consists of approximately six to eight million-year-old sediments, thereby tracing its geological history back to the late Miocene. With the help of so-called ‘climate indicators or proxies’, this gave us the unique opportunity to reconstruct the climate conditions in the central Arctic Ocean for a time period for which only very vague and contradictory information was available," says Rüdiger Stein.

Some scientists were of the opinion that the central Arctic Ocean was already covered with dense sea ice all year round six to ten million years ago – roughly to the same extent as today. The new research findings contradict this assumption. "Our data clearly indicate that six to ten million years ago, the North Pole and the entire central Arctic Ocean must in fact have been ice-free in the summer," says Rüdiger Stein.
Biomarkers preserved in the sea floor allow insight into the climate's past

This statement is based on studies of organic compounds (so-called biomarkers) that were produced by certain organisms that lived in the Arctic Ocean at that time and that have been preserved in the sediment deposits. The researchers were able to extract two such marker groups from the sediments: "The first group of biomarkers is derived from carbonaceous algae that live in surface water, i.e. they need open water and, being plants, depend on light. Since in the central Arctic Ocean sunlight is only available during the spring and summer months and it is pitch-dark at all other times, the data derived from these carbonaceous algae provide us with information about the surface water conditions during the summer period," says Rüdiger Stein.

Furthermore, these carbonaceous algae produce different biomarker compounds depending on the water temperature. "These molecules allowed us to estimate that the surface water temperature of the Arctic Ocean was approximately 4 to 9 degrees Celsius in the late Miocene. Because these values are well above zero, this is a clear indication that ice-free conditions existed in the summer," says the scientist.

However, as the second group of biomarkers shows, the Arctic Ocean was not ice-free all year round. It is formed by specific diatoms that live in the Arctic sea ice. Rüdiger Stein: "By combining our data records on surface water temperature and on sea ice, we are now able to prove for the first time that six to ten million years ago, the central Arctic Ocean was ice-free in the summer. In the spring and the preceding winter, on the other hand, the ocean was covered by sea ice. The seasonal ice cover around the North Pole must have been similar to that in the Arctic marginal seas today."
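As an illustration of how such a temperature proxy works: one widely used algal biomarker calibration, the alkenone unsaturation index (assumed here for illustration; the press release does not name the specific index used), relates an index value measured in the sediment linearly to sea-surface temperature:

```python
def sst_from_uk37(uk37, a=0.034, b=0.039):
    """Invert a linear alkenone calibration, UK'37 = a * T + b, to get SST in deg C.

    The default coefficients follow a widely cited culture-based calibration
    (Prahl & Wakeham-type); regional calibrations differ slightly, so the
    numbers here are illustrative rather than those used in the study.
    """
    return (uk37 - b) / a

# Index values in the range that would correspond to the reported
# ~4 to 9 deg C late Miocene summer sea-surface temperatures:
low = sst_from_uk37(0.175)
high = sst_from_uk37(0.345)
```

The point is simply that a measured molecular ratio maps onto a temperature estimate through an empirically fitted calibration line.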
New climate data help to improve climate models

These new findings of the Arctic Ocean climate history, reconstructed from sediment data, are further corroborated by climate simulations, as was shown by the AWI modellers who participated in this study. This only applies, however, if we assume a relatively high carbon dioxide content in the atmosphere of 450 ppm. If the climate models were run using a significantly lower carbon dioxide content of about 280 ppm, as some studies postulate for the late Miocene, then an ice-free Arctic cannot be simulated. Whether the carbon dioxide content in the Miocene was indeed relatively high, or whether the sensitivity of the model is too weak to simulate the magnitude of high-latitude warming in a warmer-than-modern climate, is currently subject to further international research. One of the overarching goals here is to improve the predictive capacity of climate models.

Rüdiger Stein: "Once our climate models are capable of reliably reproducing surface-water temperature and sea ice cover of earlier periods, we will also be able to further improve the climate models for a better prediction of future climate change and sea ice conditions in the central Arctic Ocean, a major challenge for all of us for the coming years."

Further scientific drilling planned on the Lomonosov Ridge

Despite the outstanding results and the accompanying euphoria, the participating scientists agree that this was merely the first step and that other important steps must follow. "While our new sediment cores give us an undreamt-of initial insight into the early climate history of the Arctic, these climate records are still very incomplete. In order to fully unravel the great mystery of Arctic climate history over the past 20 to 60 million years, we need much longer, continuous sediment sequences, which can only be obtained by drilling.
An Arctic drilling expedition (which is still a major scientific and technical challenge for the marine geosciences) is now planned for 2018 in our study area on the Lomonosov Ridge, and it will be carried out as part of the international drilling programme IODP (International Ocean Discovery Program). The preliminary investigations carried out by our Polarstern expedition have played a significant role in selecting the precise drilling locations," explains Rüdiger Stein, who will be one of the expedition leaders of the IODP campaign in 2018.

Notes for Editors:

Original article in Nature Communications: Ruediger Stein, Kirsten Fahl, Michael Schreck, Gregor Knorr, Frank Niessen, Matthias Forwick, Catalina Gebhardt, Laura Jensen, Michael Kaminski, Achim Kopf, Jens Matthiessen, Wilfried Jokat, and Gerrit Lohmann: Evidence for ice-free summers in the late Miocene central Arctic Ocean, Nature Communications 7: 11148, doi:10.1038/ncomms11148.

Please find printable images at: http://www.awi.de/nc/en/about-us/service/press.html

Your scientific contact person at the Alfred Wegener Institute is Professor Rüdiger Stein (e-mail: Ruediger.Stein(at)awi.de). Your contact person in the Dept. of Communications and Media Relations is Sina Löschke (tel: ++49 (0)471 4831-2008; e-mail: medien(at)awi.de).

The Alfred Wegener Institute Helmholtz Centre for Polar and Marine Research (AWI) conducts research in the Arctic, Antarctic and oceans of the high and mid-latitudes. It coordinates polar research in Germany and provides major infrastructure to the international scientific community, such as the research icebreaker Polarstern and stations in the Arctic and Antarctica. The Alfred Wegener Institute is one of the 18 research centres of the Helmholtz Association, the largest scientific organisation in Germany.
Ralf Röchert | idw - Informationsdienst Wissenschaft
In geology, saltation (from Latin saltus, "leap") is a specific type of particle transport by fluids such as wind or water. It occurs when loose materials are removed from a bed and carried by the fluid, before being transported back to the surface. Examples include pebble transport by rivers, sand drift over desert surfaces, soil blowing over fields, and snow drift over smooth surfaces such as those in the Arctic or Canadian Prairies.

At low fluid velocities, loose material rolls downstream, staying in contact with the surface. This is called creep or reptation. Here the forces exerted by the fluid on the particle are only enough to roll the particle around the point of contact with the surface.

Once the wind speed reaches a certain critical value, termed the impact or fluid threshold, the drag and lift forces exerted by the fluid are sufficient to lift some particles from the surface. These particles are accelerated by the fluid, and pulled downward by gravity, causing them to travel in roughly ballistic trajectories. If a particle has obtained sufficient speed from the acceleration by the fluid, it can eject, or splash, other particles in saltation, which propagates the process. Depending on the surface, the particle could also disintegrate on impact, or eject much finer sediment from the surface. In air, this process of saltation bombardment creates most of the dust in dust storms. In rivers, this process repeats continually, gradually eroding away the river bed, but also transporting in fresh material from upstream. The speed at which the flow can move particles by saltation is given by the Bagnold formula.

Suspension generally affects small particles ('small' means ~70 micrometres or less for particles in air). For these particles, vertical drag forces due to turbulent fluctuations in the fluid are similar in magnitude to the weight of the particle. These smaller particles are carried by the fluid in suspension, and advected downstream.
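The Bagnold formula mentioned above can be sketched as follows. It gives the wind-driven saltation mass flux as proportional to the cube of the friction velocity; the empirical constant C and the 250-micrometre reference grain diameter follow the commonly quoted form of the formula, and the input values below are illustrative:

```python
import math

def bagnold_flux(u_star, d, C=1.8, D=250e-6, rho=1.225, g=9.81):
    """Mass flux of wind-blown saltating sand (kg per metre width per second).

    q = C * sqrt(d / D) * (rho / g) * u_star**3

    u_star : friction (shear) velocity of the wind, m/s
    d      : typical grain diameter, m
    C      : empirical constant (~1.5 for nearly uniform sand, up to ~2.8)
    D      : reference grain diameter, 250 micrometres
    rho    : air density, kg/m^3
    g      : gravitational acceleration, m/s^2
    """
    return C * math.sqrt(d / D) * (rho / g) * u_star ** 3

q_moderate = bagnold_flux(u_star=0.4, d=250e-6)  # typical dune sand
q_strong = bagnold_flux(u_star=0.8, d=250e-6)    # doubling u_star gives 8x the flux
```

The cubic dependence on friction velocity is the key behaviour: a modest strengthening of the wind produces a disproportionately large increase in transported sand.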
The smaller the particle, the less important the downward pull of gravity, and the longer the particle is likely to stay in suspension.

A recent study finds that saltating sand particles induce a static electric field by friction. Saltating sand acquires a negative charge relative to the ground, which in turn loosens more sand particles, which then begin saltating. This process has been found to double the number of particles predicted by previous theory. This is significant in meteorology because it is primarily the saltation of sand particles which dislodges smaller dust particles into the atmosphere. Dust particles and other aerosols such as soot affect the amount of sunlight received by the atmosphere and earth, and are nuclei for condensation of water vapour. Saltation layers can also form in avalanches.

See also:
- Aeolian landform
- Aeolian processes
- Bagnold formula
- Saltation (biology)
- Saltatory conduction
- The Physics of Blown Sand and Desert Dunes
- Dune sand saltation video, Kansas State University
- Close up of dune sand saltation, Kansas State University
- The Bibliography of Aeolian Research

References:
- Bagnold, Ralph (1941). The Physics of Wind-Blown Sand and Desert Dunes. New York: Methuen. ISBN 0486439313.[page needed]
- Kok, Jasper; Parteli, Eric; Michaels, Timothy I; Karam, Diana Bou (2012). "The physics of wind-blown sand and dust". Reports on Progress in Physics. 75 (10): 106901. arXiv: . Bibcode:2012RPPh...75j6901K. doi:10.1088/0034-4885/75/10/106901. PMID 22982806.
- Rice, M. A.; Willetts, B. B.; McEwan, I. K. (1995). "An experimental study of multiple grain-size ejecta produced by collisions of saltating grains with a flat bed". Sedimentology. 42 (4): 695–706. Bibcode:1995Sedim..42..695R. doi:10.1111/j.1365-3091.1995.tb00401.x.
- Bagnold, Ralph (1941). The Physics of Wind-Blown Sand and Desert Dunes. New York: Methuen. ISBN 0486439313.
- Shao, Yaping, ed. (2008). Physics and Modelling of Wind Erosion. Heidelberg: Springer. ISBN 9781402088957.[page needed]
- Electric Sand Findings, University of Michigan, Jan. 6, 2008
Toxicity, the persistence of nanomaterials in the environment, their efficacy as biosensors and, for that matter, the accuracy of experiments to measure these factors are all known to be affected by agglomeration and cluster size. Recent work* at the National Institute of Standards and Technology (NIST) offers a way to measure accurately both the distribution of cluster sizes in a sample and the characteristic light absorption for each size. The latter is important for the application of nanoparticles in biosensors.

[Image caption] Clusters of roughly 30-nanometer gold nanoparticles imaged by transmission electron microscopy. (Color added for clarity.) Credit: Keene, FDA

A good example of the potential application of the work, says NIST biomedical engineer Justin Zook, is in the development of nanoparticle biosensors for ultrasensitive pregnancy tests. Gold nanoparticles can be coated with antibodies to a hormone** produced by an embryo shortly after conception. Multiple gold nanoparticles can bind to each hormone, forming clusters that have a different color from unclustered gold nanoparticles. But only certain size clusters are optimal for this measurement, so knowing how light absorbance changes with cluster size makes it easier to design the biosensors to result in just the right sized clusters.

The NIST team first prepared samples of gold nanoparticles—a nanomaterial widely used in biology—in a standard cell culture solution, using their previously developed technique for creating samples with a controlled distribution of sizes***. The particles are allowed to agglomerate in gradually growing clusters, and the clumping process is "turned off" after varying lengths of time by adding a stabilizing agent that prevents further agglomeration. They then used a technique called analytical ultracentrifugation (AUC) to simultaneously sort the clusters by size and measure their light absorption.
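The size sorting that AUC exploits can be illustrated with the idealized Stokes settling velocity in a centrifugal field, which grows with the square of particle diameter, so larger clusters outrun smaller ones. This is a simplified sketch (real fractal clusters deviate from solid spheres, and the rotor parameters here are illustrative, not those used in the NIST experiment):

```python
def sedimentation_velocity(d, omega, r, rho_p=19300.0, rho_f=1000.0, mu=1.0e-3):
    """Idealized Stokes settling velocity (m/s) of a sphere in a centrifugal field.

    v = d**2 * (rho_p - rho_f) * omega**2 * r / (18 * mu)

    d     : particle or cluster diameter, m
    omega : rotor angular speed, rad/s
    r     : radial distance from the rotation axis, m
    rho_p : particle density (bulk gold, ~19300 kg/m^3)
    rho_f : fluid density (water)
    mu    : fluid viscosity (water, Pa*s)
    """
    return d ** 2 * (rho_p - rho_f) * omega ** 2 * r / (18.0 * mu)

# A cluster with twice the effective diameter of a single 30 nm particle
# settles four times faster, which is what lets AUC separate cluster sizes.
v_single = sedimentation_velocity(30e-9, omega=5000.0, r=0.07)
v_double = sedimentation_velocity(60e-9, omega=5000.0, r=0.07)
```

Scanning absorbance along the sample cell while this separation proceeds is what links each measured spectrum to a specific cluster size.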
The centrifuge causes the nanoparticle clusters to separate by size, the smaller, lighter clusters moving more slowly than the larger ones. While this is happening, the sample containers are repeatedly scanned with light and the amount of light passing through the sample for each color or frequency is recorded. The larger the cluster, the more light is absorbed at lower frequencies. Measuring the absorption by frequency across the sample containers allows the researchers both to watch the gradual separation of cluster sizes and to correlate absorbed frequencies with specific cluster sizes.

Most previous measurements of absorption spectra for solutions of nanoparticles were able only to measure the bulk spectra—the absorption of all the different cluster sizes mixed together. AUC makes it possible to measure the quantity and distribution of each nanoparticle cluster without being confounded by other components in complex biological mixtures, such as proteins. The technique previously had been used only to make these measurements for single nanoparticles in solution. The NIST researchers are the first to show that the procedure also works for nanoparticle clusters.

* J.M. Zook, V. Rastogi, R.I. MacCuspie, A.M. Keene and J. Fagan. Measuring agglomerate size distribution and dependence of localized surface plasmon resonance absorbance on gold nanoparticle agglomerate size using analytical ultracentrifugation. ACS Nano, Articles ASAP (As Soon As Publishable). Publication Date (Web): Sept. 3, 2011. DOI: 10.1021/nn202645b.

Michael E. Newman | EurekAlert!
Stars are huge gas spheres, hundreds of thousands or millions of times more massive than the Earth. A star such as the Sun can go on shining steadily for thousands of millions of years. This is shown by studies of the prehistory of the Earth, which indicate that the energy radiated by the Sun has not changed by much during the last four thousand million years. The equilibrium of a star must remain stable for such periods.

Keywords: radiation pressure, Fermi momentum, helium nucleus, stellar structure, mass absorption coefficient
January 12 2016 Astronomy Newsletter

Here's the latest article from the Astronomy site at BellaOnline.com.

Four Astronomy Non-events of 2015
In 2015 we learned a lot about the Solar System and beyond. And the splendid sky events included a solar eclipse and two lunar eclipses. Yet, as ever, people on social media who delight in disaster were declaring doom. Should we be apprehensive? Let's see.

On January 19:
(1) Johann Bode was born in 1747. He produced two major star atlases, was the director of the Berlin Observatory for four decades, and was a prolific writer. His name lives on in a relationship known as Bode's Law. You can find out more here: http://www.bellaonline.com/articles/art42694.asp/
(2) Jacobus Kapteyn was born in 1851. He was a Dutch astronomer who made extensive studies of the Milky Way.

On January 14, 2005 ESA's Huygens probe, having been released by the Cassini spacecraft, landed on Titan. For last year's 10th anniversary, there was a new visualization of Huygens data. It shows close-ups and multiple angles for a simulated probe's-eye view. (Credit: ESA–C. Carreau/Schröder, Karkoschka et al) https://youtu.be/9L471ct7YDo

*Tributes to David Bowie (1947-2016)*
Astronaut Chris Hadfield sang a version of Bowie's "Space Oddity" on the International Space Station. Bowie was delighted, posting on Facebook that it was "possibly the most poignant version of the song ever created". You can see the video here: https://www.youtube.com/watch?v=KaOC9danxNo
Yesterday Hadfield tweeted his tribute to Bowie: "Ashes to ashes, dust to stardust. Your brilliance inspired us all. Goodbye Starman."
Rosetta Mission's tweet: https://twitter.com/ESA_Rosetta/status/686541648084492288 "RIP David Bowie fellow space traveller"

Please visit http://astronomy.bellaonline.com/Site.asp for even more great content about Astronomy.
I hope to hear from you sometime soon, either in the forum http://forums.bellaonline.com/ubbthreads.php/forums/323/1/Astronomy or in response to this email message. I welcome your feedback! Do pass this message along to family and friends who might also be interested. Remember it's free and without obligation.

I wish you clear skies.
Mona Evans, Astronomy Editor
One of hundreds of sites at BellaOnline.com
The Eastern Pacific's Hurricane Norbert resembled a pinwheel in an image from NASA's Terra satellite as bands of thunderstorms spiraled into the center. NASA's Global Precipitation Measurement or GPM mission has helped forecasters see that Norbert has lost some of its organization early on September 4. The MODIS instrument or Moderate Resolution Imaging Spectroradiometer aboard NASA's Terra satellite captured a visible picture of Tropical Storm Norbert on Sept. 4 at 2:15 p.m. EDT when it resembled a pinwheel. The western bands of Norbert were moving over Socorro Island, located several hundred miles west of Mexico's west coast. An eye was not apparent in the image, although Norbert was strengthening into a hurricane. The image was created by the MODIS Rapid Response Team at NASA's Goddard Space Flight Center in Greenbelt, Maryland. Less than six hours later at 8 p.m. EDT, the National Hurricane Center noted that Norbert became a hurricane with maximum sustained winds near 75 mph (120 kph). Overnight and into the early morning hours of September 4, maximum sustained winds increased to 80 mph (130 kph). On Thursday, September 4, 2014, the National Hurricane Center (NHC) continued the Tropical Storm Warning from La Paz to Cabo San Lazaro, Mexico. A tropical storm watch is also in effect north of Cabo San Lazaro to Puerto San Andresito and north of La Paz to San Evaristo. At 8 a.m. EDT (5 a.m. PDT), Norbert's maximum sustained winds remain near 80 mph (130 kph) and some slow strengthening is expected during the next 24 hours. Those hurricane-force winds only extend up to 25 miles (35 km) from the center, and tropical storm force winds extend out 105 miles (165 km), which is why the Baja is under a tropical storm warning. Norbert's center was located near latitude 20.6 north and longitude 110.0 west. That's just 160 miles (255 km) south of the southern tip of Baja California. 
Norbert was moving toward the northwest near 6 mph (9 kph), and movement in that direction is expected to continue over the next couple of days, taking Norbert along the coast. On the NHC's forecast track, the center of the hurricane is expected to approach the southern tip of the Baja California Peninsula today and move nearly parallel to the Pacific coast of the peninsula tonight and Friday, September 5.

The NHC uses data from multiple satellites, including NASA's new GPM satellite. The NHC discussion on Norbert at 5 a.m. EDT today, September 4, said "Recent microwave images, including a NASA GPM overpass at 0516 UTC (1:16 a.m. EDT), indicate that Norbert has lost some organization during the past few hours due to easterly vertical wind shear. The low-level center is in the northeastern part of the central convection with a mid-level eye displaced to the southwest of the low-level center."

Norbert is forecast to track parallel to the coast of Baja California for the next couple of days.

Rob Gutro | EurekAlert!
Research news | Open Access
Speciation induced by a bacterial symbiont?
© BioMed Central Ltd 2001. Published: 08 February 2001

The cytoplasmic symbiotic bacteria Wolbachia could induce host speciation in insects. Wolbachia are symbiotic bacteria that live in the cytoplasm of an estimated 15-20% of all insect species, including wasps of the genus Nasonia. When two different species of Nasonia mate, hybrid offspring are suppressed. The presence of the bacteria causes an incompatibility between the sperm and egg of the two Nasonia species, resulting in the loss of the sperm's chromosomes upon fertilization. Seth Bordenstein and colleagues at the University of Rochester, New York, report in the 8 February Nature that Nasonia species treated with antibiotics produced large numbers of hybrid offspring (Nature 2001, 409:707-710). Furthermore, the hybrids were viable and fertile. Thus, the incompatibility caused by these bacteria is the principal mechanism for the reproductive isolation between Nasonia species, implicating Wolbachia in the early stages of speciation in this genus of wasps. Given that Wolbachia also infect arachnids, isopods and nematodes, their role in promoting speciation could be quite common.
The tendency of an atom in a molecule to attract the shared pair of electrons towards itself is known as electronegativity. It is a dimensionless property because it is only a tendency: it indicates the net result of the tendencies of atoms of different elements to attract the bond-forming electron pairs. Electronegativity is measured on several scales, the most commonly used of which was designed by Linus Pauling. On this scale fluorine is the most electronegative element, with a value of 4.0, and caesium is the least electronegative element, with a value of 0.7.

The electronegativity of an element depends on the following factors:
- Size of the atom: a greater atomic size results in a lower electronegativity, because electrons far away from the nucleus experience a smaller force of attraction.
- Nuclear charge: a greater nuclear charge results in a greater electronegativity, because an increased nuclear charge attracts the bonding electrons more strongly.

Trends in electronegativity
As we move across a period from left to right, the nuclear charge increases and the atomic size decreases; therefore the value of electronegativity increases across a period in the modern periodic table. As we move down a group, the atomic number and the nuclear charge also increase, but the effect of the increased nuclear charge is overcome by the addition of one shell; hence the value of electronegativity decreases down a group. For example, in the first group the value decreases as we move from lithium down to francium. As a general observation, metals show a lower value of electronegativity than non-metals: metals are electropositive and non-metals are electronegative in nature. The elements in period two differ in properties from the other elements of their respective groups due to their small size and higher value of electronegativity.
The elements in the second period also show a resemblance to the elements of the next group in period three, owing to the small difference in their electronegativities. This leads to the formation of a diagonal relationship.

This article has briefly described the concept of electronegativity.
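The two trends above can be checked numerically. A short sketch, assuming the widely tabulated (rounded) Pauling values; the only numbers taken from this article are its stated endpoints, F = 4.0 and Cs = 0.7:

```python
# Rounded Pauling electronegativities for a few elements (assumed standard
# reference values, not taken from this article except F and Cs).
PAULING = {
    # period 2, left to right
    "Li": 1.0, "Be": 1.5, "B": 2.0, "C": 2.5, "N": 3.0, "O": 3.5, "F": 4.0,
    # group 1, top to bottom (Li repeated from above)
    "Na": 0.9, "K": 0.8, "Rb": 0.8, "Cs": 0.7,
}

period2 = ["Li", "Be", "B", "C", "N", "O", "F"]
group1 = ["Li", "Na", "K", "Rb", "Cs"]

# Across a period (left to right) electronegativity increases...
assert all(PAULING[a] < PAULING[b] for a, b in zip(period2, period2[1:]))
# ...and down a group it decreases (or stays level at this rounding,
# e.g. K and Rb both round to 0.8).
assert all(PAULING[a] >= PAULING[b] for a, b in zip(group1, group1[1:]))

print(max(PAULING, key=PAULING.get), min(PAULING, key=PAULING.get))  # F Cs
```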
Nuclear Physics: Radiation, Radioactivity & its Applications

Nuclear Energy
The nucleus of an atom contains:
- Protons – positively charged
- Neutrons – no charge
- Atomic mass number – denoted by the letter A, this number represents the total number of protons + neutrons in the nucleus, telling you which isotope of the element you have.
- Atomic number – denoted by the letter Z, this number represents the number of protons in the nucleus, telling you which element you have.
The atomic symbol for a given isotope of an element is written with A and Z attached to the element symbol. A prime example is an alpha particle, or helium nucleus.

Nuclear Reactions
Two types of nuclear reactions produce vast amounts of energy according to Einstein's famous equation E = mc²:
- Fission – the splitting of a nucleus into smaller parts
- Fusion – the joining of two small nuclei to produce one larger nucleus
The mass defect is the amount of mass that is converted to energy during fission or fusion. It is calculated as the difference between the known mass of each of the atom's parts and the actual mass of the atom. The amount of energy that this mass is converted into is called the binding energy.

Sample Problem: Calculate the mass defect and energy released in the creation of carbon-13.
Solution:
Expected mass:
  Protons: 6 × (1.007276 u) = 6.043656 u
  Neutrons: 7 × (1.008665 u) = 7.060655 u
  Total: 13.104311 u
Known mass: 13.003355 u
Mass defect: 0.100956 u
Energy released = (931.5 MeV/u)(0.100956 u) = 94.04 MeV

Radioactivity
Three types of radioactivity:
- Alpha (α) – the nucleus of a helium atom. Can be stopped by a sheet of paper; harmful only if ingested.
- Beta (β) – emission of an electron or positron. Can be stopped by a thin sheet of metal such as aluminum; harmful to living tissue.
- Gamma (γ) – emission of a high-energy photon. Cannot be completely stopped, only attenuated by dense material; very harmful to all living tissue.

Half-Life
The half-life of a radioactive material is the amount of time required for half of the sample to decay into another element or isotope. Remaining amounts are calculated according to the equation a = a₀(½)^x, where:
- a = amount of material left at any time
- a₀ = amount of material you begin with
- x = the number of half-lives that have passed since you began counting
This type of decay is exponential: the amount remaining falls off along a decaying exponential curve.

Sample Problem: Carbon-14, a radioactive isotope of carbon, has a half-life of 5730 years. If a 20 gram sample of carbon-14 is allowed to decay for 10,000 years, how much remains at the end of this period?

Solution: a = a₀(½)^x, with a₀ = 20 grams and x = 10,000 yr / 5730 yr per half-life = 1.75, so a = 20 grams × (½)^1.75 = 5.95 grams.

Detection of Radiation
Counters:
- Geiger counter – radiation ionizes a gas, producing a voltage pulse that makes the counter "click".
- Scintillation counter – uses a solid, liquid, or gas scintillator, a material which is excited by radiation to emit light. The light is captured and amplified by a photomultiplier (PM) tube, which turns it into an electric signal.
- Semiconductor detector – uses a p-n junction diode which produces a short electric pulse when irradiated.

Trackers:
- Photographic emulsion – a particle passing through the emulsion ionizes atoms in its path.
- Cloud chamber – a gas is cooled to a temperature slightly below its normal condensation temperature, so it condenses on any ionized molecule present; this tracks the particle.
- Bubble chamber – a liquid is kept close to its boiling temperature and "bubbles" form around any ionizing particle; the bubbles are left in the wake of the particle and photographed.

Applications of Nuclear Processes
Energy can be released in a nuclear reaction by one of two processes:
- Fission – the splitting of a nucleus into smaller nuclei
- Fusion – the joining of two smaller nuclei into a larger nucleus

Fission
Usually caused by neutron bombardment of the nucleus, causing the nucleus to split. Mass is converted into energy. All current nuclear reactor technology uses fission. The chain reaction is kept under control with control rods made of a material which absorbs neutrons (a moderator, by contrast, slows the neutrons down).

Fusion
Fusion reactions take lighter nuclei, often an isotope of hydrogen called deuterium, and fuse them together to make a heavier nucleus, often helium. This must occur at high energy and is very difficult to produce under laboratory conditions. No workable fusion reactor has yet been produced on Earth. The sun and stars all produce their energy by nuclear fusion.

Measurement of Radiation: Dosimetry
Since radiation can harm the body, it is important to quantify the amount of radiation received, or the dose. The study of this is called dosimetry, an important part of an emerging field known as health physics. Dosimetry is most often concerned with the number of rads or millirads of radiation received.
A rad is defined as the amount of radiation which deposits 0.01 J of energy per kilogram of absorbing material.

Things to Know
- Atomic number, atomic mass number, atomic symbols, atomic equations
- Mass defect, binding energy
- Types of radiation: alpha, beta, gamma
- Detection of radiation
- Nuclear reactions: fission, fusion
- Dosimetry
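Both sample problems above can be reproduced in a few lines. A minimal sketch using only the constants given on the slides (931.5 MeV/u, the nucleon masses, and the 5730-year half-life of carbon-14):

```python
# Constants as given on the slides.
MEV_PER_U = 931.5      # energy equivalent of one atomic mass unit, MeV/u
M_PROTON = 1.007276    # u
M_NEUTRON = 1.008665   # u

def mass_defect(protons, neutrons, actual_mass_u):
    """Expected mass of the separate nucleons minus the measured atomic mass."""
    return protons * M_PROTON + neutrons * M_NEUTRON - actual_mass_u

def binding_energy_mev(protons, neutrons, actual_mass_u):
    """Binding energy released, via E = mc^2 expressed as 931.5 MeV per u."""
    return mass_defect(protons, neutrons, actual_mass_u) * MEV_PER_U

def remaining(a0, elapsed, half_life):
    """Radioactive decay: a = a0 * (1/2) ** (elapsed / half_life)."""
    return a0 * 0.5 ** (elapsed / half_life)

# Carbon-13: 6 protons, 7 neutrons, measured mass 13.003355 u
print(round(binding_energy_mev(6, 7, 13.003355), 2))  # 94.04 MeV, as on the slide

# Carbon-14: 20 g decaying for 10,000 years with a 5730-year half-life
print(round(remaining(20.0, 10_000, 5730), 2))  # ~5.96 g (slide rounds x to 1.75, giving 5.95)
```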
Timeline of Meteorology

The timeline of meteorology contains events of scientific and technological advancement in the atmospheric sciences. The most notable advancements in observational meteorology, weather forecasting, climatology, atmospheric chemistry, and atmospheric physics are listed chronologically. Some historical weather events are included that mark time periods where advancements were made, or that even sparked policy change.
- 3000 BC - Meteorology in India can be traced back to around 3000 BC, with writings such as the Upanishads containing discussions about the processes of cloud formation and rain and the seasonal cycles caused by the movement of the earth round the sun.
- 600 BC - Thales may qualify as the first Greek meteorologist. He reputedly issues the first seasonal crop forecast.
- 400 BC - There is some evidence that Democritus predicted changes in the weather, and that he used this ability to convince people that he could predict other future events.
- 400 BC - Hippocrates writes a treatise called Airs, Waters and Places, the earliest known work to include a discussion of weather. More generally, he wrote about common diseases that occur in particular locations, seasons, winds and air.
- 350 BC - The Greek philosopher Aristotle writes Meteorology, a work which represents the sum of knowledge of the time about earth sciences, including weather and climate. It is the first known work that attempts to treat a broad range of meteorological topics. For the first time, precipitation and the clouds from which precipitation falls are called meteors, from the Greek word meteoros, meaning 'high in the sky'. From that word comes the modern term meteorology, the study of clouds and weather.
- Although the term meteorology is used today to describe a subdiscipline of the atmospheric sciences, Aristotle's work is more general. Meteorologica is based on intuition and simple observation, but not on what is now considered the scientific method. In his own words:
- ...all the affections we may call common to air and water, and the kinds and parts of the earth and the affections of its parts.
- The treatise De Mundo (attributed to Pseudo-Aristotle) notes:
- Cloud is a vaporous mass, concentrated and producing water. Rain is produced from the compression of a closely condensed cloud, varying according to the pressure exerted on the cloud; when the pressure is slight it scatters gentle drops; when it is great it produces a more violent fall, and we call this a shower, being heavier than ordinary rain, and forming continuous masses of water falling over earth. Snow is produced by the breaking up of condensed clouds, the cleavage taking place before the change into water; it is the process of cleavage which causes its resemblance to foam and its intense whiteness, while the cause of its coldness is the congelation of the moisture in it before it is dispersed or rarefied. When snow is violent and falls heavily we call it a blizzard. Hail is produced when snow becomes densified and acquires impetus for a swifter fall from its close mass; the weight becomes greater and the fall more violent in proportion to the size of the broken fragments of cloud. Such then are the phenomena which occur as the result of moist exhalation.
- One of the most impressive achievements in Meteorology is his description of what is now known as the hydrologic cycle:
- Now the sun, moving as it does, sets up processes of change and becoming and decay, and by its agency the finest and sweetest water is every day carried up and is dissolved into vapour and rises to the upper region, where it is condensed again by the cold and so returns to the earth.
- Several years after Aristotle's book, his pupil Theophrastus puts together a book on weather forecasting called The Book of Signs. Various indicators such as solar and lunar halos formed by high clouds are presented as ways to forecast the weather. The combined works of Aristotle and Theophrastus have such authority they become the main influence in the study of clouds, weather and weather forecasting for nearly 2000 years.
- 250 BC - Archimedes studies the concepts of buoyancy and the hydrostatic principle. Positive buoyancy is necessary for the formation of convective clouds (cumulus, cumulus congestus and cumulonimbus).
- 25 AD - Pomponius Mela, a geographer for the Roman empire, formalizes the climatic zone system.
- c. 80 AD - In his Lunheng (Critical Essays), the Han Dynasty Chinese philosopher Wang Chong (27-97 AD) dispels the Chinese myth of rain coming from the heavens, and states that rain is evaporated from water on the earth into the air and forms clouds, stating that clouds condense into rain and also form dew, and says when the clothes of people in high mountains are moistened, this is because of the air-suspended rain water. However, Wang Chong supports his theory by quoting a similar one of Gongyang Gao's, the latter's commentary on the Spring and Autumn Annals, the Gongyang Zhuan, compiled in the 2nd century BC, showing that the Chinese conception of rain evaporating and rising to form clouds goes back much farther than Wang Chong. Wang Chong wrote:
- As to this coming of rain from the mountains, some hold that the clouds carry the rain with them, dispersing as it is precipitated (and they are right). Clouds and rain are really the same thing. Water evaporating upwards becomes clouds, which condense into rain, or still further into dew.
- 500 AD - In around 500 AD, the Indian astronomer, mathematician, and astrologer Varāhamihira published his work, the Brihat-Samhita, which provides clear evidence that a deep knowledge of atmospheric processes existed in the Indian region.
- 7th century - The poet Kalidasa, in his epic Meghaduta, mentions the date of onset of the south-west monsoon over central India and traces the path of the monsoon clouds.
- 7th century - St. Isidore of Seville, in his work De Rerum Natura, writes about astronomy, cosmology and meteorology. In the chapter dedicated to meteorology, he discusses thunder, clouds, rainbows and wind.
- 9th century - Al-Kindi (Alkindus), an Arab naturalist, writes a treatise on meteorology entitled Risala fi l-Illa al-Failali l-Madd wa l-Fazr (Treatise on the Efficient Cause of the Flow and Ebb), in which he presents an argument on tides which "depends on the changes which take place in bodies owing to the rise and fall of temperature."
- 9th century - Al-Dinawari, a Kurdish naturalist, writes the Kitab al-Nabat (Book of Plants), in which he deals with the application of meteorology to agriculture during the Muslim Agricultural Revolution. He describes the meteorological character of the sky, the planets and constellations, the Sun and Moon, the lunar phases indicating seasons and rain, the anwa (heavenly bodies of rain), and atmospheric phenomena such as winds, thunder, lightning, snow, floods, valleys, rivers, lakes, wells and other sources of water.
- 10th century - Ibn Wahshiyya's Nabatean Agriculture discusses the weather forecasting of atmospheric changes and signs from the planetary astral alterations; signs of rain based on observation of the lunar phases, nature of thunder and lightning, direction of sunrise, behaviour of certain plants and animals, and weather forecasts based on the movement of winds; pollenized air and winds; and formation of winds and vapours.
- 1021 - Ibn al-Haytham (Alhazen) writes on the atmospheric refraction of light, the cause of morning and evening twilight. He endeavored by use of hyperbola and geometric optics to chart and formulate basic laws on atmospheric refraction. He provides the first correct definition of the twilight, discusses atmospheric refraction, shows that the twilight is due to atmospheric refraction and only begins when the Sun is 19 degrees below the horizon, and uses a complex geometric demonstration to measure the height of the Earth's atmosphere as 52,000 passuum (49 miles), which is very close to the modern measurement of 50 miles.
- 1020s - Ibn al-Haytham publishes his Risala fi l-Daw' (Treatise on Light) as a supplement to his Book of Optics. He discusses the meteorology of the rainbow, the density of the atmosphere, and various celestial phenomena, including the eclipse, twilight and moonlight.
- 1027 - Avicenna publishes The Book of Healing, in which Part 2, Section 5, contains his essay on mineralogy and meteorology in six chapters: formation of mountains; the advantages of mountains in the formation of clouds; sources of water; origin of earthquakes; formation of minerals; and the diversity of earth's terrain. He also describes the structure of a meteor, and his theory on the formation of metals combined Jābir ibn Hayyān's sulfur-mercury theory from Islamic alchemy (although he was critical of alchemy) with the mineralogical theories of Aristotle and Theophrastus. His scientific methodology of field observation was also original in the Earth sciences.
- Late 11th century - Abu 'Abd Allah Muhammad ibn Ma'udh, who lived in Al-Andalus, wrote a work on optics later translated into Latin as Liber de crepisculis, which was mistakenly attributed to Alhazen.
This was a short work containing an estimation of the angle of depression of the sun at the beginning of the morning twilight and at the end of the evening twilight, and an attempt to calculate on the basis of this and other data the height of the atmospheric moisture responsible for the refraction of the sun's rays. Through his experiments, he obtained the accurate value of 18°, which comes close to the modern value.
- 1088 - In his Dream Pool Essays, the Chinese scientist Shen Kuo wrote vivid descriptions of tornadoes, that rainbows were formed by the shadow of the sun in rain, occurring when the sun would shine upon it, and the curious common phenomena of the effect of lightning that, when striking a house, would merely scorch the walls a bit but completely melt to liquid all metal objects inside.
- 1121 - Al-Khazini, a Muslim scientist of Byzantine Greek descent, publishes The Book of the Balance of Wisdom, the first study on the hydrostatic balance.
- 13th century - St. Albert the Great is the first to propose that each drop of falling rain had the form of a small sphere, and that this form meant that the rainbow was produced by light interacting with each raindrop.
- 1267 - Roger Bacon was the first to calculate the angular size of the rainbow. He stated that the rainbow summit cannot appear higher than 42 degrees above the horizon.
- 1337 - William Merle, rector of Driby, starts recording his weather diary, the oldest existing in print. The endeavour ended 1344.
- Late 13th century - Theoderic of Freiburg and Kamāl al-Dīn al-Fārisī give the first accurate explanations of the primary rainbow, simultaneously but independently. Theoderic also gives the explanation for the secondary rainbow.
- 1441 - King Sejong's son, Prince Munjong, invented the first standardized rain gauge. These were sent throughout the Joseon Dynasty of Korea as an official tool to assess land taxes based upon a farmer's potential harvest.
- - Nicolas Cryfts (Nicholas of Cusa) describes the first hair hygrometer to measure humidity. The design was drawn by Leonardo da Vinci, referencing Cryfts's design, in da Vinci's Codex Atlanticus. - 1488 - Johannes Lichtenberger publishes the first version of his Prognosticatio, linking weather forecasting with astrology. The paradigm was only challenged centuries later. - 1494 - During his second voyage, Christopher Columbus experiences a tropical cyclone in the Atlantic Ocean, which leads to the first written European account of a hurricane. - 1510 - Leonhard Reynmann, astronomer of Nuremberg, publishes "Wetterbüchlein: Von warer erkanntnus des wetters", a collection of weather lore. - 1607 - Galileo Galilei constructs a thermoscope. Not only does this device measure temperature, it represents a paradigm shift: up to this point, heat and cold were believed to be qualities of Aristotle's elements (fire, water, air, and earth). Note: there is some controversy about who actually built the first thermoscope, with evidence that the device was independently built at several different times. This is the era of the first recorded meteorological observations; as there was no standard of measurement, they were of little use until the work of Daniel Gabriel Fahrenheit and Anders Celsius in the 18th century. - 1648 - Blaise Pascal rediscovers that atmospheric pressure decreases with height, and deduces that there is a vacuum above the atmosphere. - 1654 - Ferdinando II de' Medici sponsors the first weather-observing network, consisting of meteorological stations in Florence, Cutigliano, Vallombrosa, Bologna, Parma, Milan, Innsbruck, Osnabrück, Paris and Warsaw. The collected data were sent to the Accademia del Cimento in Florence at regular intervals. - 1662 - Sir Christopher Wren invents the mechanical, self-emptying, tipping-bucket rain gauge. - 1667 - Robert Hooke builds another type of anemometer, called a pressure-plate anemometer.
- 1686 - Edmund Halley presents a systematic study of the trade winds and monsoons and identifies solar heating as the cause of atmospheric motions. - - Edmund Halley establishes the relationship between barometric pressure and height above sea level. [Figure: Global circulation as described by Hadley.] - - The Royal Society begins twice-daily observations, compiled by Samuel Horsley, testing for the influence of winds and of the moon on barometer readings. - - The first hair hygrometer is demonstrated; the inventor is Horace-Bénédict de Saussure. - 1800 - The Voltaic pile, the first modern electric battery, is invented by Alessandro Volta; it leads to later inventions such as the telegraph. - 1802-1803 - Luke Howard writes On the Modification of Clouds, in which he assigns cloud types Latin names. Howard's system establishes three physical categories or forms based on appearance and process of formation: cirriform (mainly detached and wispy), cumuliform or convective (mostly detached and heaped, rolled, or rippled), and non-convective stratiform (mainly continuous layers in sheets). These are cross-classified into lower and upper levels, or étages. Cumuliform clouds forming in the lower level are given the genus name cumulus, from the Latin word for heap, while low stratiform clouds are given the genus name stratus, from the Latin word for a flattened or spread-out sheet. Cirriform clouds are identified as always upper-level and given the genus name cirrus, from the Latin for hair. From this genus name the prefix cirro- is derived and attached to the names of upper-level cumulus and stratus, yielding the names cirrocumulus and cirrostratus. In addition to these individual cloud types, Howard adds two names to designate cloud systems consisting of more than one form joined together or located in very close proximity. Cumulostratus describes large cumulus clouds blended with stratiform layers in the lower or upper levels.
The term nimbus, taken from the Latin word for rain cloud, is given to complex systems of cirriform, cumuliform, and stratiform clouds with sufficient vertical development to produce significant precipitation, and it comes to be identified as a distinct nimbiform physical category. [Figure: Classification of major types, 1803.] - - John Herapath develops some ideas in the kinetic theory of gases, but mistakenly associates temperature with molecular momentum rather than kinetic energy; his work receives little attention other than from Joule. - 1822 - Joseph Fourier formally introduces the use of dimensions for physical quantities in his Théorie analytique de la chaleur. - 1824 - Sadi Carnot analyzes the efficiency of steam engines using caloric theory; he develops the notion of a reversible process and, in postulating that no such thing exists in nature, lays the foundation for the second law of thermodynamics. - 1827 - Robert Brown discovers the Brownian motion of pollen and dye particles in water. - 1832 - An electromagnetic telegraph is created by Baron Schilling. - 1834 - Émile Clapeyron popularises Carnot's work through a graphical and analytic formulation. - 1835 - Gaspard-Gustave Coriolis publishes theoretical discussions of machines with revolving parts and their efficiency, for example the efficiency of waterwheels. At the end of the 19th century, meteorologists recognized that the way the Earth's rotation is taken into account in meteorology is analogous to what Coriolis discussed: an example of the Coriolis effect. - 1836 - An American scientist, Dr. David Alter, invents the first known American electric telegraph, in Elderton, Pennsylvania, one year before the much more popular Morse telegraph. - 1837 - Samuel Morse independently develops an electrical telegraph, an alternative design capable of transmitting over long distances using poor-quality wire. His assistant, Alfred Vail, develops the Morse code signaling alphabet with Morse.
The first electric telegram using this device was sent by Morse on May 24, 1844 from the U.S. Capitol in Washington, D.C. to the B&O Railroad "outer depot" in Baltimore, carrying the message: - What hath God wrought - 1839 - The first commercial electrical telegraph is constructed by Sir William Fothergill Cooke and enters use on the Great Western Railway. Cooke and Wheatstone had patented it in May 1837 as an alarm system. - 1840 - Elias Loomis becomes the first person known to attempt to devise a theory of frontal zones. The idea of fronts does not catch on until expanded upon by the Norwegians in the years following World War I. - - German meteorologist Ludwig Kaemtz adds stratocumulus to Howard's canon as a mostly detached low-étage genus of limited convection. It is defined as having cumuliform and stratiform characteristics integrated into a single layer (in contrast to cumulostratus, which is deemed to be composite in nature and can be structured into more than one layer). This eventually leads to the formal recognition of a stratocumuliform physical category that includes rolled and rippled clouds, classified separately from the more freely convective heaped cumuliform clouds. - 1843 - John James Waterston fully expounds the kinetic theory of gases, but is ridiculed and ignored. - - James Prescott Joule experimentally finds the mechanical equivalent of heat. - - The Manchester Examiner newspaper organises the first weather reports collected by electrical means. - - William John Macquorn Rankine calculates the correct relationship between saturated vapour pressure and temperature using his hypothesis of molecular vortices. - - Rudolf Clausius gives the first clear joint statement of the first and second laws of thermodynamics, abandoning the caloric theory but preserving Carnot's principle. - 1852 - Joule and Thomson demonstrate that a rapidly expanding gas cools, later named the Joule-Thomson effect.
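Howard's cloud scheme, described above, is essentially a cross-classification on two axes: physical form and level (étage). The sketch below illustrates that lookup structure in modern terms; the "low"/"high" labels are a simplification of Howard's lower and upper étages, and the table lists only the genus names given in the surrounding text, not a complete cloud atlas.

```python
# Genus names from Howard's scheme, keyed by (form, level).
# Simplified illustration; contents follow the surrounding text only.
HOWARD_GENERA = {
    ("cumuliform", "low"): "cumulus",
    ("stratiform", "low"): "stratus",
    ("cirriform", "high"): "cirrus",
    ("cumuliform", "high"): "cirrocumulus",  # cirro- prefix for upper level
    ("stratiform", "high"): "cirrostratus",
}

def genus(form, level):
    """Look up the genus for a given (form, level) pair."""
    return HOWARD_GENERA.get((form, level), "unclassified")

print(genus("cumuliform", "high"))  # cirrocumulus
```

Later additions described in this timeline (stratocumulus, nimbostratus, the alto- prefix for a middle étage) extend exactly this two-axis table.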
- 1853 - The first International Meteorological Conference is held in Brussels at the initiative of Matthew Fontaine Maury, U.S. Navy, recommending standard observing times, methods of observation and logging formats for weather reports from ships at sea. - 1854 - The French astronomer Leverrier shows that a storm in the Black Sea could have been followed across Europe, and would have been predictable if the telegraph had been used. A service of storm forecasts is established a year later by the Paris Observatory. - - Rankine introduces his thermodynamic function, later identified as entropy. - Mid 1850s - Emilien Renou, director of the Parc Saint-Maur and Montsouris observatories, begins work on an elaboration of Howard's classifications that would lead to the introduction, during the 1870s, of a newly defined middle étage. Clouds in this altitude range are given the prefix alto-, derived from the Latin word altum, pertaining to height above the low-level clouds. This results in the genus name altocumulus for mid-level cumuliform and stratocumuliform types, and altostratus for stratiform types in the same altitude range. - 1856 - William Ferrel publishes his essay on the winds and the currents of the oceans. - 1859 - James Clerk Maxwell discovers the distribution law of molecular velocities. - 1860 - Robert FitzRoy uses the new telegraph system to gather daily observations from across England and produces the first synoptic charts. He also coins the term "weather forecast", and his are the first daily weather forecasts ever published. - - After their establishment in 1849, 500 U.S. telegraph stations are now making weather observations and submitting them to the Smithsonian Institution. The observations are later interrupted by the American Civil War. - 1865 - Josef Loschmidt applies Maxwell's theory to estimate the number-density of molecules in gases, given observed gas viscosities. - - Manila Observatory founded in the Philippines.
- - The United States Army Signal Corps, forerunner of the National Weather Service, issues its first hurricane warning. [Figure: Synoptic chart from 1874.] - 1875 - The India Meteorological Department is established, after a tropical cyclone struck Calcutta in 1864 and the monsoon failures of 1866 and 1871. - 1876 - Josiah Willard Gibbs publishes the first of two papers (the second appears in 1878) which discuss phase equilibria, statistical ensembles, free energy as the driving force behind chemical reactions, and chemical thermodynamics in general. - 1880 - Philip Weilbach, secretary and librarian at the Art Academy in Copenhagen, proposes, and has accepted by the permanent committee of the International Meteorological Organization (IMO), a forerunner of the present-day World Meteorological Organization (WMO), the designation of a new free-convective vertical or multi-étage genus type, cumulonimbus (heaped rain cloud). It would be distinct from cumulus and nimbus, and identifiable by its often very complex structure (frequently including a cirriform top and what are now recognized as multiple accessory clouds) and its ability to produce thunder. With this addition, a canon of ten tropospheric cloud genera is established that comes to be officially and universally accepted. Howard's cumulostratus is not included as a distinct type, having effectively been reclassified into its component cumuliform and stratiform genus types already included in the new canon. - 1881 - The Finnish Meteorological Central Office is formed from part of the Magnetic Observatory of Helsinki University. - 1890 - The US Weather Bureau is created as a civilian operation under the U.S. Department of Agriculture. - - Otto Jesse reveals the discovery and identification of the first clouds known to form above the troposphere. He proposes the name noctilucent, which is Latin for night-shining.
Because of the extremely high altitudes of these clouds, in what is now known to be the mesosphere, they can become illuminated by the sun's rays when the sky is nearly dark after sunset and before sunrise. - 1892 - William Henry Dines invents another kind of anemometer, the pressure-tube (Dines) anemometer. His device measures the difference between the pressure arising from wind blowing into a tube and that of wind blowing across the tube. - - The first mention of the term "El Niño" to refer to climate occurs when Captain Camilo Carrilo tells the Geographical Society congress in Lima that Peruvian sailors named the warm northerly current "El Niño" because it was most noticeable around Christmas. - - Svante Arrhenius proposes carbon dioxide as a key factor in explaining the ice ages. - - H.H. Clayton proposes formalizing the division of clouds by their physical structures into cirriform, stratiform, "flocciform" (stratocumuliform) and cumuliform. With the later addition of cumulonimbiform, the idea eventually finds favor as an aid in the analysis of satellite cloud images. - 1898 - The US Weather Bureau establishes a hurricane warning network at Kingston, Jamaica. - - The Marconi Company issues the first routine weather forecast by means of radio to ships at sea. Weather reports from ships start in 1905. - 1903 - Max Margules publishes "Über die Energie der Stürme", an essay on the atmosphere as a three-dimensional thermodynamical machine. - 1904 - Vilhelm Bjerknes presents the vision that forecasting the weather is feasible based on mathematical methods. - 1905 - The Australian Bureau of Meteorology is established by a Meteorology Act to unify existing state meteorological services. - 1919 - The Norwegian cyclone model is introduced for the first time in the meteorological literature, marking a revolution in the way the atmosphere is conceived and immediately leading to improved forecasts.
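The pressure-tube principle behind the 1892 Dines anemometer above is the same one used in a pitot tube: by Bernoulli's relation, the dynamic pressure difference Δp between a tube facing the wind and one sheltered from it satisfies Δp = ½ρv², so v = √(2Δp/ρ). The snippet below is a modern illustration of that relation; the sea-level air density is a standard assumed value, not a figure from the original instrument.

```python
import math

def wind_speed(delta_p_pa, air_density=1.225):
    """Wind speed (m/s) from the dynamic pressure difference (Pa)
    between a wind-facing tube and a static tube: v = sqrt(2*dp/rho).
    air_density defaults to a standard sea-level value (kg/m^3)."""
    return math.sqrt(2.0 * delta_p_pa / air_density)

print(round(wind_speed(100.0), 1))  # ~12.8 m/s for a 100 Pa difference
```

In practice the instrument is calibrated rather than used with the ideal formula directly, but the square-root dependence of speed on pressure difference is the core of the design.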
- - Sakuhei Fujiwhara is the first to note that hurricanes move with the larger-scale flow, and later publishes a paper on the Fujiwhara effect in 1921. - 1920 - Milutin Milanković proposes that long-term climatic cycles may be due to changes in the eccentricity of the Earth's orbit and changes in the Earth's obliquity. - 1922 - Lewis Fry Richardson organises the first numerical weather prediction experiment. - 1923 - The oscillation effects of ENSO are first (erroneously) described by Sir Gilbert Thomas Walker, from whom the Walker circulation takes its name; it is now recognized as an important aspect of the Pacific ENSO phenomenon. - 1924 - Gilbert Walker first coins the term "Southern Oscillation". - 1930, January 30 - Pavel Molchanov invents and launches the first radiosonde. Named "271120", it is released at 13:44 Moscow Time in Pavlovsk, USSR from the Main Geophysical Observatory, reaches a height of 7.8 kilometers measuring the temperature there (-40.7 °C), and sends the first aerological message to the Leningrad Weather Bureau and Moscow Central Forecast Institute. - 1932 - A further modification of Luke Howard's cloud classification system comes when an IMC commission for the study of clouds puts forward a refined and more restricted definition of the genus nimbus, which is effectively reclassified as a stratiform cloud type. It is renamed nimbostratus (flattened or spread-out rain cloud) and published with the new name in the 1932 edition of the International Atlas of Clouds and of States of the Sky. This leaves cumulonimbus as the only nimbiform type, as indicated by its root name. - 1933 - Victor Schauberger publishes his theories on the carbon cycle and its relationship to the weather in Our Senseless Toil. - 1935 - The IMO decides on the 30-year normal period (1900-1930) to describe the climate. - 1937 - The U.S. Army Air Forces Weather Service is established (redesignated in 1946 as the AWS-Air Weather Service).
- 1938 - Guy Stewart Callendar is the first to propose global warming from carbon dioxide emissions. - 1939 - Rossby waves are first identified in the atmosphere by Carl-Gustaf Arvid Rossby, who explains their motion. Rossby waves are a subset of inertial waves. - 1941 - A pulsed radar network is implemented in England during World War II. During the war, operators start noticing echoes from weather elements such as rain and snow. - 1943 - Ten years after flying into the Washington Hoover Airport mainly on instruments during the August 1933 Chesapeake-Potomac hurricane, J. B. Duckworth flies his airplane into a Gulf hurricane off the coast of Texas, proving to the military and meteorological community the utility of weather reconnaissance. - 1944 - The Great Atlantic Hurricane is caught on radar near the Mid-Atlantic coast, the first such picture noted from the United States. - 1947 - The Soviet Union launches its first long-range ballistic rocket on October 18, based on the German A4 (V-2) rocket. The photographs demonstrate the immense potential of observing weather from space. - 1948 - The first correct tornado prediction, by Robert C. Miller and E. J. Fawbush, for a tornado in Oklahoma. - - Erik Palmén publishes his finding that hurricanes require surface water temperatures of at least 26°C (80°F) in order to form. - - Hurricanes begin to be named alphabetically with the radio alphabet. - - The World Meteorological Organization (WMO) replaces the IMO under the auspices of the United Nations. - - A United States Navy rocket captures a picture of an inland tropical depression near the Texas/Mexico border, which leads to a surprise flood event in New Mexico. This convinces the government to set up a weather satellite program. - - The National Severe Storms Project (NSSP) and National Hurricane Research Project (NHRP) are established. The Miami office of the United States Weather Bureau is designated the main hurricane warning center for the Atlantic Basin.
[Figure: The first television image of Earth from space, from the TIROS-1 weather satellite.] - 1959 - The first weather satellite, Vanguard 2, is launched on February 17. It was designed to measure cloud cover, but a poor axis of rotation kept it from collecting a notable amount of useful data. - 1960 - The first successful weather satellite, TIROS-1 (Television Infrared Observation Satellite), is launched on April 1 from Cape Canaveral, Florida by the National Aeronautics and Space Administration (NASA), with the participation of the US Army Signal Research and Development Lab, RCA, the US Weather Bureau, and the US Naval Photographic Center. During its 78-day mission, it relays thousands of pictures showing the structure of large-scale cloud regimes, and proves that satellites can provide useful surveillance of global weather conditions from space. TIROS paves the way for the Nimbus program, whose technology and findings are the heritage of most of the Earth-observing satellites NASA and NOAA have launched since then. - 1961 - Edward Lorenz accidentally discovers chaos theory while working on numerical weather prediction. - 1962 - Keith Browning and Frank Ludlam publish the first detailed study of a supercell storm (over Wokingham, UK). Project STORMFURY begins its 10-year project of seeding hurricanes with silver iodide, attempting to weaken the cyclones. - 1968 - A hurricane database for Atlantic hurricanes, named HURDAT, is created for NASA by Charlie Newmann and John Hope. - 1969 - The Saffir-Simpson Hurricane Scale is created, used to describe hurricane strength on a category range of 1 to 5. It is popularized by the media during Hurricane Gloria of 1985. - - Jacob Bjerknes describes ENSO by suggesting that an anomalously warm spot in the eastern Pacific can weaken the east-west temperature difference, causing a weakening of the Walker circulation and the trade wind flows that push warm water to the west. - 1970s - Weather radars become more standardized and are organized into networks.
The number of scanned angles is increased to get a three-dimensional view of precipitation, which allows studies of thunderstorms. Experiments with the Doppler effect begin. - 1970 - NOAA, the National Oceanic and Atmospheric Administration, is established. The Weather Bureau is renamed the National Weather Service. - 1971 - Ted Fujita introduces the Fujita scale for rating tornadoes. - 1974 - The AMeDAS network, developed by the Japan Meteorological Agency and used for gathering regional weather data and verifying forecast performance, begins operation on November 1. The system consists of about 1,300 stations with automatic observation equipment; more than 1,100 of them are unmanned, located at an average interval of 17 km throughout Japan. - 1975 - The first Geostationary Operational Environmental Satellite, GOES, is launched into orbit; its role is to aid hurricane tracking. Also this year, Vern Dvorak develops a scheme to estimate tropical cyclone intensity from satellite imagery. - - The first use of a general circulation model to study the effects of carbon dioxide doubling, by Syukuro Manabe and Richard Wetherald at Princeton University. - 1976 - The United Kingdom Department of Industry publishes a modification of the international cloud classification system adapted for satellite cloud observations. Co-sponsored by NASA, it shows a division of clouds into stratiform, cirriform, stratocumuliform, cumuliform, and cumulonimbiform. The last of these constitutes a renaming of the earlier nimbiform type, although the earlier name and its original meaning, pertaining to all rain clouds, can still be found in some classifications. Major types include the ten tropospheric genera and two additional major types above the troposphere; the cumulus genus includes three variants as defined by vertical size. - 1980s onwards - Networks of weather radars are further expanded in the developed world.
Doppler weather radar gradually becomes more common, adding velocity information. - 1982 - The first Synoptic Flow experiment is flown around Hurricane Debby to help define the large-scale atmospheric winds that steer the storm. - 1988 - The WSR-88D type weather radar is implemented in the United States: weather surveillance radar that uses several modes to detect severe weather conditions. - 1992 - Computers are first used in the United States to draw surface analyses. - 1997 - The Pacific Decadal Oscillation is discovered by a team studying salmon production patterns at the University of Washington. - 1998 - Improving technology and software finally allow for the digital underlaying of satellite imagery, radar imagery, model data, and surface observations, improving the quality of United States surface analyses. - - CAMEX3, a NASA experiment run in conjunction with NOAA's Hurricane Field Program, collects detailed data sets on Hurricanes Bonnie, Danielle, and Georges. - 1999 - Hurricane Floyd spreads alarm in several coastal states and causes a massive evacuation of coastal zones from northern Florida to the Carolinas. It comes ashore in North Carolina and results in nearly 80 dead and $4.5 billion in damages, mostly due to extensive flooding. - 2001 - The National Weather Service begins to produce a Unified Surface Analysis, ending duplication of effort at the Tropical Prediction Center, Ocean Prediction Center, and Hydrometeorological Prediction Center, as well as the National Weather Service offices in Anchorage, AK and Honolulu, HI. - 2003 - NOAA hurricane experts issue the first experimental Eastern Pacific Hurricane Outlook. - 2004 - A record number of hurricanes strike Florida in one year: Charley, Frances, Ivan, and Jeanne. - 2005 - A record 27 named storms occur in the Atlantic. The National Hurricane Center runs out of names from its standard list and uses the Greek alphabet for the first time.
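Lorenz's 1961 accident, noted above, revealed the extreme sensitivity of his simplified convection equations to initial conditions. The sketch below integrates the standard Lorenz-63 system with a plain forward-Euler step and shows two nearly identical starting points drifting far apart; the parameters are the classic values from his 1963 paper, while the step size and integration horizon are arbitrary illustrative choices.

```python
def lorenz_step(state, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """One forward-Euler step of the Lorenz-63 system."""
    x, y, z = state
    return (x + dt * sigma * (y - x),
            y + dt * (x * (rho - z) - y),
            z + dt * (x * y - beta * z))

a = (1.0, 1.0, 1.0)
b = (1.0 + 1e-8, 1.0, 1.0)    # perturbed by one part in 10^8
for _ in range(2000):          # integrate to t = 20
    a, b = lorenz_step(a), lorenz_step(b)

separation = max(abs(p - q) for p, q in zip(a, b))
print(separation)              # many orders of magnitude larger than 1e-8
```

This growth of tiny initial errors is exactly why Lorenz's rounded restart values produced a completely different forecast, and it sets a fundamental limit on the horizon of numerical weather prediction.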
- 2006 - Weather radar displays are improved by distinguishing, for the first time, common precipitation types such as freezing rain, mixed rain and snow, and snow. - 2007 - The Fujita scale is replaced with the Enhanced Fujita Scale for National Weather Service tornado assessments. - 2010s - Weather radar advances dramatically, with more detailed display options.
Wandering catalyst activates carbon atoms to form carbon-carbon bonds
Published online 23 June 2016
Inspired by a reaction that has become a mainstay of polymer synthesis, Keio University researchers have discovered a novel way to construct small organic molecules. Their method uses a 'chain-walking' catalyst to activate otherwise unreactive parts of simple molecules, catalyzing the formation of new carbon-carbon bonds [1]. The conventional approach to forming new carbon-carbon bonds -- which turn simple starting molecules into complex structures, such as therapeutic drugs -- is to first install a chemical 'handle'. For example, a chlorine atom might be used to pre-activate a particular carbon atom toward bond formation. Adding and manipulating these handles can make the synthesis of complex molecules slow and inefficient, so there is much interest in finding ways to directly install new bonds at unreactive carbon atoms. Chain walking could be just such a reaction, realized Takuya Kochi from the Department of Chemistry at Keio University. Chain-walking chemistry was pioneered in the 1990s, when polymer chemists discovered a palladium catalyst that would attach to the end of a chain of carbon atoms, then 'walk' along the polymer chain before adding the next building block to the chain. "It is not a conventional way to form a carbon-carbon bond," says Kochi. Polymer chemists use bulky substituents around the palladium atom to slow a competing reaction, alkene exchange, in which the palladium drops off the growing polymer chain. "We felt that if we just removed the bulky substituent, we could switch the rates," says Kochi, favoring alkene exchange over polymerization -- the desired pathway when making small molecules rather than polymers. The researchers tested their slimmed-down catalyst on a variety of starting molecules, each of which contained two parallel 'arms' consisting of carbon atoms.
They showed the catalyst would attach to the end of one arm, then walk from atom to atom until it was opposite a carbon-carbon double bond on the other arm. It would join the two arms of the molecule at that point -- activating the otherwise unreactive carbon it had walked to -- to form the new carbon-carbon single bond. Using their reaction, the team has been able to synthesize a structure called prostane, the underlying carbon framework of a family of natural products. They are now investigating the reaction to identify which molecules are particularly suited to chain walking. "The chemistry is still in its initial phase," says Kochi.
[1] Hamasaki, T., Aoyama, Y., Kawasaki, J., Kakiuchi, F. and Kochi, T. Chain walking as a strategy for carbon-carbon bond formation at unreactive sites in organic synthesis: Catalytic cycloisomerization of various 1,n-dienes. Journal of the American Chemical Society 137, 16163 (2015).
A team of researchers, including Rensselaer professor Morgan Schaller, has used mathematical modeling to show that continental erosion over the last 40 million years has contributed to the success of diatoms, a group of tiny marine algae that plays a key role in the global carbon cycle. The research was published today in the Proceedings of the National Academy of Sciences. Diatoms consume 70 million tons of carbon from the world's oceans daily, producing organic matter, a portion of which sinks and is buried in deep ocean sediments. Diatoms account for over half of organic carbon burial in marine sediments. In a mechanism known as the "oceanic biological pump," the diatoms absorb and bury carbon, then atmospheric carbon dioxide diffuses into the upper ocean to compensate for that loss of carbon, reducing the concentration of carbon dioxide in the atmosphere. "What we really have here is a double whammy: The chemical breakdown of rocks on land efficiently consumes carbon dioxide from the atmosphere, and those minerals are delivered to the ocean basins by rivers where, in this case, they fueled the massive expansion of diatoms," said Schaller, an assistant professor of earth and environmental sciences. "Diatoms are photosynthetic, so they also consume atmospheric carbon dioxide. The combination of both of these effects may help explain the drastic decrease in atmospheric carbon dioxide over the last 35 million years that has plunged us into the current condition where we have glacial ice cover at both of the poles." Diatoms appeared in the Mesozoic about 200 million years ago as descendants of the red algal lineage. However, it was not until the last 40 million years that this group of marine microalgae rose to dominate marine primary productivity. Unlike other microalgae, diatoms require silicic acid to form tiny cases of amorphous silica (glass) called frustules, which are a means of defense against predators.
Therefore, understanding the sources of silicic acid in the ocean is essential to understanding the evolutionary success of diatoms, and this is where the Earth sciences come into play. Silicate rocks such as granites and basalts comprise the majority of Earth's crust, and their chemical decomposition represents a major source of silicic acid to the world oceans. Continental erosion depends on a complex interaction of physical, chemical, and biological forces that ultimately combine to enhance the dissolution of minerals that make up the rocks. The elevation of mountain ranges such as the Himalayas over the last 40 million years favored the fracture and dissolution of continental silicate rocks facilitating the expansion of diatoms in marine ecosystems. Previous work has associated the evolutionary expansion of diatoms with a superior competitive ability for silicic acid relative to other plankton that use silica, such as radiolarians, which evolved by reducing the weight of their silica skeleton. But in their work, the researchers used a mathematical model in which diatoms and radiolarians compete for silicic acid to show that the observed reduction in the weight of radiolarian tests is insufficient to explain the rise of diatoms. Using the lithium isotope record of seawater as a proxy of silicate rock weathering and erosion, they calculated changes in the input flux of silicic acid to the oceans. Their results indicate that the long-term massive erosion of continental silicates was critical to the subsequent success of diatoms in marine ecosystems over the last 40 million years and suggest an increase in the strength and efficiency of the oceanic biological pump over this period. The research team was led by Pedro Cermeño, and included Sergio M. Vallina, both of the Instituto de Ciencias del Mar in Spain, as well as Schaller, Paul G. Falkowski of Rutgers University, and Òscar E. Romero, of the University of Bremen in Germany. Mary Martialay | EurekAlert! 
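The competition mechanism described here, two silica users drawing on a shared silicic acid supply with the winner set by its break-even resource level, can be sketched with a toy chemostat-style model. Everything below (the Monod uptake form, every parameter value, and the `simulate` function) is illustrative only, not taken from Cermeño et al.:

```python
# Hypothetical sketch of resource competition for silicic acid (S)
# between diatoms (D) and radiolarians (R), with Monod uptake and
# forward-Euler integration. All parameters are illustrative.

def simulate(flux, steps=200_000, dt=0.005):
    S, D, R = 1.0, 0.1, 0.1
    v_d, k_d = 1.0, 1.0   # diatoms: faster maximum uptake
    v_r, k_r = 0.6, 1.0   # radiolarians: slower uptake
    m = 0.1               # shared mortality/loss rate
    for _ in range(steps):
        u_d = v_d * S / (k_d + S)   # per-capita uptake rates
        u_r = v_r * S / (k_r + S)
        dS = flux - u_d * D - u_r * R
        dD = (u_d - m) * D
        dR = (u_r - m) * R
        S = max(S + dS * dt, 0.0)
        D += dD * dt
        R += dR * dt
    return S, D, R

S, D, R = simulate(flux=0.5)
print(D > R)  # the group with the lower break-even resource level dominates
```

With these numbers the diatoms' break-even silicic acid level (m*k/(v-m) = 0.11) lies below the radiolarians' (0.20), so a sustained input flux drives the resource down to a level only the diatoms can tolerate, echoing the paper's argument that the silicic acid supply from erosion shaped the outcome.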
1) The equilibrium constant Kp for the reaction 2SO2(g) + O2(g) ⇌ 2SO3(g) is 5.60 x 10^4 at 350 degrees C. The initial pressure of SO2 is 0.350 atm and the initial pressure of O2 is 0.762 atm at 350 degrees C. When the mixture equilibrates, is the total pressure less than or greater than the sum of the initial pressures (1.112 atm)?
2) You are working as a lab assistant and are tasked to prepare an acetic acid - sodium acetate buffer solution with a pH of 4.00 ± 0.02. What molar ratio of CH3COOH to CH3COONa should be used?
3) The equilibrium constant Kc for the reaction H2(g) + Br2(g) ⇌ 2HBr(g) is 2.18 x 10^6 at 730 degrees C. Starting with 3.20 moles of HBr in a 12.0-L reaction vessel, calculate the concentrations of H2, Br2, and HBr at equilibrium.
4) Assuming equal concentrations of conjugate base and acid, which one of the following mixtures is suitable for making a buffer solution with an optimum pH of 4.6 - 4.8? a. CH3COONa / CH3COOH (Ka = 1.8 x 10^-5) b. NH3 / NH4Cl (Ka(NH4+) = 5.6 x 10^-10) c. NaOCl / HOCl (Ka = 3.2 x 10^-8) d. NaNO2 / HNO2 (Ka = 4.5 x 10^-4) e. NaCl / HCl
5) You have 500.0 mL of a buffer solution containing 0.20 M acetic acid (CH3COOH) and 0.30 M sodium acetate (CH3COONa). What will the pH of this solution be after the addition of 20.0 mL of 1.00 M NaOH solution? Ka = 1.8 x 10^-5
6) 50.00 mL of 0.10 M HNO2 (nitrous acid) was titrated with 0.10 M KOH solution. After 25.00 mL of KOH solution was added, what was the pH in the titration flask? (Given Ka = 4.5 x 10^-4)
7) The solubility product for CrF3 is Ksp = 6.6 x 10^-11. What is the molar solubility of CrF3?
8) The Ksp for Ag3PO4 is 1.8 x 10^-18. Determine the Ag+ ion concentration in a saturated solution of Ag3PO4.
9) Will a precipitate of MgF2 form when 300 mL of 1.1 x 10^-3 M MgCl2 solution are added to 500 mL of 1.2 x 10^-3 M NaF? Ksp (MgF2) = 6.9 x 10^-9
© BrainMass Inc.
This solution explains: 1) How to determine the concentrations of reactants and products in an equilibrium reaction. 2) How to prepare a buffer solution with a specific pH. 3) How to calculate a molar solubility. 4) How to calculate ion concentration in a saturated solution.
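Several of these problems reduce to standard formulas: Henderson-Hasselbalch for the buffer questions and Ksp stoichiometry for solubility. A quick numerical sketch of problems 2, 5 and 7, using only the constants given above:

```python
import math

log10 = math.log10
pKa = -log10(1.8e-5)            # acetic acid, pKa ≈ 4.74

# Problem 2: ratio CH3COOH : CH3COONa for pH 4.00.
# Henderson-Hasselbalch: pH = pKa + log([A-]/[HA]) => [HA]/[A-] = 10^(pKa - pH)
ratio = 10 ** (pKa - 4.00)
print(f"acid : base ratio ≈ {ratio:.1f} : 1")    # ≈ 5.6 : 1

# Problem 5: 0.500 L buffer holds 0.10 mol CH3COOH and 0.15 mol CH3COO-.
# Adding 0.020 mol NaOH converts that much acid into conjugate base.
acid = 0.500 * 0.20 - 0.020
base = 0.500 * 0.30 + 0.020
pH = pKa + log10(base / acid)
print(f"pH ≈ {pH:.2f}")                          # ≈ 5.07

# Problem 7: CrF3 -> Cr3+ + 3 F-, so Ksp = s * (3s)^3 = 27 s^4
s = (6.6e-11 / 27) ** 0.25
print(f"molar solubility ≈ {s:.2e} M")           # ≈ 1.25e-3 M
```

Note the volume cancels in the Henderson-Hasselbalch ratio, which is why problem 5 can be worked directly in moles.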
Genetic Variation and the Origin of Lippia Populations in Australia
Lippia (Phyla canescens: Verbenaceae) is a serious weed of wetlands, riparian zones and floodplains, particularly in eastern Australia, where many Ramsar wetlands are threatened by hydrological changes precipitated by soil-accreting lippia mats.
Three distinct P. canescens genotypes have been identified: two from the native range, and one that has so far only been found in France (presumably from an as yet unidentified American population). The two genotypes in South America are geographically isolated (SE coastal and NW inland Argentina). French and Australian populations are represented by numerous genotypes, suggesting multiple introductions (or considerable crossing/domestication prior to introduction).
Glasshouse experiments revealed that P. canescens does not set seed without pollinators (is not autonomous). Experiments are underway to study more aspects of the reproductive biology of the species. On the other hand, P. nodiflora is autonomous and produces seeds without the need for a pollinator.
Related contract: Ecology, Spread & Impact of Lippia
This page was last updated on 06/07/2018
A Change in the Weather Rainmaking efforts during the Vietnam War prompted an international ban. After three years of dodging Senate inquiries, the Department of Defense, on March 20, 1974, presented a subcommittee of the Senate Committee on Foreign Relations with a briefing on the extensive rainmaking operations in Southeast Asia. The briefing was classified “Top Secret,” but the hearing report was made public on May 19, 1974. In this way the American public officially learned for the first time that the United States had used a new and developing technology in an attempt to slow movement of North Vietnamese troops and supplies along the Ho Chi Minh Trail network. From March of 1967 until July of 1972 the Air Force had rained canisters of silver iodide into clouds, and these in turn were supposed to initiate an increase in rainfall. So began geoscientist Gordon J. MacDonald’s 1975 essay for TR on weather modification. Although the revelation of the Vietnam War program was the occasion for his article, his concerns were more general. MacDonald, who died in 2002, was then a professor at Dartmouth and had served on the President’s Science Advisory Committee under Lyndon B. Johnson. Throughout his career, he was interested in the way the planet changes as a result of both natural processes and human interference. After World War II, it became clear that industrial activity was changing the world’s climate. If humans were inadvertently creating climate change, it followed that they might be able to reverse those effects by modifying the local weather (see “The Geoengineering Gambit”). Weather modification moved from the realm of magic to an applied science in July of 1946 when Vincent Schaefer, then at the General Electric Laboratories, discovered that dry ice could bring about nucleation of super-cooled water into ice. 
These laboratory studies were extended by Irving Langmuir and Bernard Vonnegut, who discovered that silver iodide as well as dry ice acted as an effective agent in bringing about the transformation of super-cooled water into ice. The laboratory work was soon supplemented by field observations. In November of 1946, Schaefer flew into a cloud over Pittsfield, Massachusetts, at an altitude of 40,000 feet and a temperature of -20 °C. After dispatching several pounds of dry ice into the clouds, Schaefer observed draperies of snow falling below the clouds. Governments took notice. At his inauguration, President John F. Kennedy pledged not only to “explore the stars” but also to “conquer the deserts”; he would later direct his science advisor, Jerome Wiesner, to pursue weather modification for humanitarian aims. But the scientists Wiesner consulted were uncertain how feasible weather modification would be and cautioned that international coöperation would be needed to ensure that the technology would not be used in war. The lure proved great, however, and rainmaking was used as a tactical weapon during the Vietnam War, albeit to minimal effect. To some, rainmaking may seem relatively innocuous as compared with bombing or napalm. In some sense this is a correct view, but in the broader view the implications for future political stability are immense. We are developing a far more detailed understanding of the earth and its surroundings. … It may be possible in the future to trigger earthquakes with devastating results from a great distance, or to bring about major climatic changes by triggering the instabilities inherent in the Antarctic icecap. All of these possibilities seem today to be far-fetched. But our history has shown us that if a technology develops, it will be used, unless international agreements can be secured. MacDonald didn’t have to wait long. 
As the issue that contained his essay went to press, a proposal from the United States and the Soviet Union to ban the hostile use of environmental modification was put before the United Nations. Three years later, the Convention on the Prohibition of Military or Any Other Hostile Use of Environmental Modification Techniques entered into force. As the search for solutions to global warming grows more desperate, perhaps a new international consensus will be required.
A Big Step Towards More Efficient Photosynthesis
News Sep 20, 2014
Plants, algae and some bacteria capture light energy from the sun and transform it into chemical energy by the process named photosynthesis. Blue-green algae (cyanobacteria) have a more efficient mechanism for carrying out photosynthesis than plants. For a long time now, it has been suggested that if plants could carry out photosynthesis with a similar mechanism to that of the blue-green algae, plant productivity and hence crop yields could improve. Rothamsted Research scientists, strategically funded by the BBSRC and in collaboration with colleagues at Cornell University funded by the U.S. National Science Foundation, have used genetic engineering to demonstrate for the first time that flowering plants can carry out photosynthesis utilizing a faster bacterial Rubisco enzyme rather than their own slower Rubisco enzyme. These findings represent a milestone toward the goal of improving the photosynthetic rate in crop plants. The study is published in Nature. In order to engineer the bacterial genes to work properly in plants, postdoctoral fellow Dr. Myat Lin at Cornell used recombinant DNA methods to connect the bacterial DNA to plant DNA sequences so that several bacterial proteins could be produced simultaneously in chloroplasts and successfully assemble into a functional enzyme. Dr. Lin commented, "In order for this project to succeed, it was essential to carefully engineer the cyanobacterial genes so that they would be expressed at sufficient levels to support photosynthesis." Dr. Alessandro Occhialini, a Rothamsted Research scientist, applied sophisticated microscope techniques to observe the exact position of the enzyme within the tobacco plant chloroplasts. Moreover, he tested the in vitro enzymatic activity of cyanobacterial Rubisco extracted from tobacco leaves.
"I was thrilled to see that these tobacco lines were photosynthetically competent and that the faster cyanobacterial enzyme was active in plant tissue. These engineered plants represent a very important step towards the improvement of plant photosynthetic performance." Professor Maureen Hanson, lead scientist at Cornell University, said: "The plants we have developed in this study are extremely valuable for further enhancing photosynthesis by surrounding the cyanobacterial Rubisco with a microcompartment called the carboxysome. Our next step is to add the proteins required to form the carboxysome in the chloroplast, as we described earlier this year in the Plant Journal." Professor Martin Parry, lead scientist at Rothamsted Research, said: "We are truly excited about the findings of this study. Wheat yields in the UK in recent years have reached a plateau. In order to increase wheat yields in a sustainable manner in the future, we are looking at a variety of approaches that include changes within the plant as well as in terms of the surrounding environment of the plant. The present study has been undertaken in a model plant species and it represents a major milestone. Now we have acquired important knowledge and we can start taking further steps towards our goal of turbo-charging photosynthesis in major crops like wheat." Professor Jackie Hunter, BBSRC Chief Executive, commented: "Photosynthesis is the basis for almost all life on Earth, yet it has the potential to use the sun's energy so much more efficiently. There is a great opportunity for improvement and this study and other research is working towards realizing a potential that could benefit us in many diverse ways, from producing more food to fuels, materials, useful chemicals and much more."
Dr Kent Chapman, Programme Director at National Science Foundation (NSF) commented: "This US/UK team of plant biologists has replaced the key carbon-dioxide-fixing gene in photosynthesis that is present in tobacco plants with a more efficient version from a cyanobacterium. This novel achievement marks a major step toward enhancing the process of photosynthesis in crop plants for improved growth and overall yields, and is a great example of the value of international collaboration."
We often think of ‘land-based’ climate solutions or nature as being something far removed from urban life. Cities are powerhouses of global GDP, and are also responsible for at least 70% of greenhouse gas emissions. But could they also be the unseen heroes of natural climate solutions?
What’s the beef? Brazil’s climate emissions
If the Brazilian cattle sector were a country, it would rank 16th in a league table of countries with the highest greenhouse gas (GHG) emissions.
UN says trees provide critical solution to climate change, as world continues dramatic loss in 2017
Despite international efforts to reduce deforestation, the world’s tropical forests lost 39 million acres of trees in 2017, an area roughly the size of Bangladesh, according to new data from the environmental monitoring group Global Forest Watch.
Forests provide a critical short-term solution to climate change
There is a “catastrophic climate gap” between the commitments that countries have made under the Paris Climate Agreement and the emissions reductions required to avoid the worst consequences of global warming, according to UN Environment’s Emissions Gap Report 2017. A new project seeks to kickstart a revival for the world’s largest rainforest by planting new trees – tens of millions of them. The project, announced in September in Brazil, aims to restore 73 million trees in the Brazilian Amazon by 2023.
Climate and forest restoration in the Lower Mississippi Valley
Hardwood forests are returning to the Lower Mississippi Valley region, thanks to a range of private and public actors, and as the trees grow they will sequester millions of tons of carbon from the atmosphere.
Australia’s indigenous communities light a fire on climate change
Indigenous groups in Australia are mitigating carbon emissions from their land by setting fire to it.
Reduced-impact logging in Indonesia and Malaysia
Reduced-impact logging, a set of logging practices that focuses on removing only high-quality timber while minimizing impact to the ecosystem, can reduce some carbon emissions and minimize damage to soil and water quality while also maintaining an active industry that resists forest conversion and supports communities.
With the deadline fast approaching towards 2020, the date by which hundreds of companies have committed to reduce tropical deforestation associated with the sourcing of commodities such as palm oil, soy, beef, and paper and pulp, many admit that there is a gap between knowing what must be done and actually doing it.
An Open Goal: Why Forests and Nature Need to be at the Center of the Sustainable Development Agenda
The United Nations High-level Political Forum on Sustainable Development kicked off on Monday at the UN headquarters in New York. This year’s theme focuses on the “Transformation towards sustainable and resilient societies” and the SDGs in review are of relevance to migration. WWF authors explore why nature is central to it all.
- Gravity on the Moon is about 1/6th that on the Earth. A pole-vaulter 2 metres tall can clear a 5-metre bar on the Earth. How high could he clear on the Moon?
- Look at the calculus behind the simple act of a car going over a step.
- Can you work out the natural time scale for the universe?
- PhysNRICH is the area of the StemNRICH site devoted to the mathematics underlying the study of physics.
- How high will a ball taking a million seconds to fall travel?
- Work in groups to try to create the best approximations to these physical quantities.
- See how the motion of the simple pendulum is not-so-simple after all.
- A look at the fluid mechanics questions that are raised by the Stonehenge 'bluestones'.
- Find out some of the mathematics behind neural networks.
- Look at the units in the expression for the energy levels of the electrons in a hydrogen atom according to the Bohr model.
- Explore the Lorentz force law for charges moving in different ways.
- Problems which make you think about the kinetic ideas underlying the ideal gas laws.
- This is the area of the advanced stemNRICH site devoted to the core applied mathematics underlying the sciences.
- engNRICH is the area of the stemNRICH Advanced site devoted to the mathematics underlying the study of engineering.
- How does the half-life of a drug affect the build up of medication in the body over time?
- Which line graph, equations and physical processes go together?
- Explore the rates of growth of the sorts of simple polynomials often used in mathematical modelling.
- How fast would you have to throw a ball upwards so that it would never land?
- Show that even a very powerful spaceship would eventually run out of overtaking power.
- Follow in the steps of Newton and find the path that the earth follows around the sun.
- Explore displacement/time and velocity/time graphs with this mouse motion sensor.
- An article demonstrating mathematically how various physical modelling assumptions affect the solution to the seemingly simple problem of the projectile.
- A simplified account of special relativity and the twins paradox.
- A ball whooshes down a slide and hits another ball which flies off the slide horizontally as a projectile. How far does it go?
- Where will the spaceman go when he falls through these strange planetary systems?
- Investigate why the Lennard-Jones potential gives a good approximate explanation for the behaviour of atoms at close ranges.
- Have you got the Mach knack? Discover the mathematics behind exceeding the sound barrier.
- Read all about electromagnetism in our interactive article.
- Many physical constants are only known to a certain accuracy. Explore the numerical error bounds in the mass of water and its constituents.
- What is an AC voltage? How much power does an AC power source supply?
- A look at a fluid mechanics technique called the Steady Flow Momentum Equation.
- Investigate some of the issues raised by Geiger and Marsden's famous scattering experiment in which they fired alpha particles at a sheet of gold.
- Investigate the effects of the half-lives of the isotopes of cobalt on the mass of a mystery lump of the element.
- Can you match up the entries from this table of units?
- Get some practice using big and small numbers in chemistry.
- This is the technology section of stemNRICH - Core.
- Derive an equation which describes satellite dynamics.
- A look at different crystal lattice structures, and how they relate to structural properties.
- Find out how to model a battery mathematically.
- Some explanations of basic terms and some phenomena discovered by ancient astronomers.
- An introduction to a useful tool to check the validity of an equation.
- Explore how changing the axes for a plot of an equation can lead to different shaped graphs emerging.
- When a mixture of gases burns, will the volume change?
- Ever wondered what it would be like to vaporise a diamond? Find out inside...
- Explore the energy of this incredibly energetic particle which struck Earth on October 15th 1991.
- Think about the physics of a motorbike riding upside down.
- Can you suggest a curve to fit some experimental data? Can you work out where the data might have come from?
- Things are roughened up and friction is now added to the approximate simple pendulum.
- Find out why water is one of the most amazing compounds in the universe and why it is essential for life.
<urn:uuid:9c56cb30-3fe8-4cd7-8e59-23d2f8f65ef3>
3.03125
918
Content Listing
Science & Tech.
47.091019
95,608,292
While mapping algal blooms from space is now well-established, mapping undesirable algal blooms in eutrophicated coastal waters raises the further challenge of detecting individual phytoplankton species. In this paper, an algorithm is developed and tested for detecting Phaeocystis globosa blooms in the Southern North Sea. For this purpose, we first measured the light absorption properties of two phytoplankton groups, P. globosa and diatoms, in laboratory-controlled experiments. The main spectral difference between the two groups was observed at 467 nm, due to absorption by the pigment chlorophyll c3, present only in P. globosa, suggesting that the absorption at 467 nm can be used to detect this alga in the field. A Phaeocystis-detection algorithm is proposed to retrieve chlorophyll c3 using either total absorption or water-leaving reflectance field data. Application of this algorithm to absorption and reflectance data from Phaeocystis-dominated natural communities shows positive results. Comparison with pigment concentrations and cell counts suggests that the algorithm can flag the presence of P. globosa and provide quantitative information above a chlorophyll c3 threshold of 0.3 mg m(-3), equivalent to a P. globosa cell density of 3 x 10(6) cells L(-1). Finally, the possibility of extrapolating this information to remote sensing reflectance data in these turbid waters is evaluated.
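The flagging step described in the abstract can be sketched in a few lines. Only the threshold pairing (0.3 mg m⁻³ of chlorophyll c3, equivalent to 3 × 10⁶ cells L⁻¹) comes from the text above; the function names and the linear scaling between pigment concentration and cell density are illustrative assumptions, not the paper's retrieval algorithm:

```python
# Minimal sketch of the Phaeocystis-flagging step. Only the threshold
# pairing (0.3 mg/m^3 ~ 3e6 cells/L) comes from the abstract; the
# function names and linear scaling are assumptions for illustration.

CHL_C3_THRESHOLD = 0.3      # mg m^-3, chlorophyll c3 detection threshold
CELLS_AT_THRESHOLD = 3e6    # cells L^-1 at the threshold concentration

def flag_phaeocystis(chl_c3):
    """Return True when chlorophyll c3 exceeds the detection threshold."""
    return chl_c3 >= CHL_C3_THRESHOLD

def estimate_cell_density(chl_c3):
    """Crude linear estimate of P. globosa cell density (an assumption)."""
    return CELLS_AT_THRESHOLD * (chl_c3 / CHL_C3_THRESHOLD)

print(flag_phaeocystis(0.45))       # above threshold, so flagged
print(estimate_cell_density(0.45))  # ~4.5e6 cells/L under the linear assumption
```

In practice the paper retrieves chlorophyll c3 from absorption or reflectance spectra first; this sketch covers only the final thresholding decision.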
<urn:uuid:511481d6-f19f-4e14-a613-e7f857abf1ab>
3.03125
320
Academic Writing
Science & Tech.
33.054948
95,608,311
Unchanging values are called constants, and are represented in Ruby by a variable name beginning with a capital letter: Pi = 3.141592. If you enter the preceding line and then try to change the value of Pi, you'll get a warning: Pi = 3.141592 Pi = 500. Constants are objects whose values are not meant to change. For example, PI in Ruby's Math module is a constant. Constants in Ruby begin with a capital letter; class names are also constants. You can obtain a list of all defined constants using the constants method. Ruby provides the const_get and const_set methods to get and set the value of named constants specified as symbols; symbol identifiers are preceded by a colon, such as :RUBY_VERSION. Despite the warning, Ruby's constants may be assigned new values: RUBY_VERSION = "1.8.7" RUBY_VERSION = "2.5.6". The reassignment of the RUBY_VERSION constant produces an "already initialized constant" warning but not an error! You can even reassign constants declared in Ruby's standard class library. For example, here I reassign the value of PI. Although this displays a warning, the assignment succeeds nonetheless: puts Math::PI #=> 3.141592653589793 Math::PI = 100 #=> warning: already initialized constant PI puts Math::PI #=> 100
<urn:uuid:7a8ac953-a7ee-4778-bbb1-6ba8124eb68e>
3.734375
328
Knowledge Article
Software Dev.
67.072264
95,608,322
This work, carried out in the Dept. of Geodynamics of the University of Granada (Universidad de Granada, [http://www.ugr.es]) and the Andalusian Institute of Earth Sciences, has given rise, for the time being, to four publications in prestigious international scientific journals. "Intense earthquakes can be registered in the area, reaching magnitude 7 on the Richter scale, and its tectonic setting is similar to that of other densely populated regions of the planet, like the San Andreas Fault (California, United States), the Caribbean arc or Japan," Bohoyo points out. Working on the ground is quite difficult. "The campaign is planned in detail in Granada, but once there the climatic conditions and the enormous ice blocks limit the possibilities, leaving wide unexplored sectors," the young scientist says. Expeditions take place on board the Hespérides, ceded to the CSIC (Higher Council of Scientific Research) and crewed by staff of the Spanish Ministry of Defence. "The main objective is to study the fragmentation and evolution of the tectonic blocks," explains Professor Jesús Galindo Zaldívar, one of those in charge of the work together with Andrés Maldonado López. With data of different scientific natures, researchers from Granada have obtained a map of the morphology and tectonic evolution of the area; thanks to it, it has been possible to determine the age of expansion of the Scotia Sea oceanic basins. Galindo adds that "it is a continental crust area that formed a barrier between the Atlantic and the Pacific, between South America and the Antarctic Peninsula, thirty million years ago". Circumpolar Antarctic current The opening of this big barrier connecting the two oceans gave rise to a circumpolar current that, at the present day, decisively influences the planet's climatology. It is the most important current in Antarctica, and it connects with others, isolating the Antarctic continent. In fact, its existence is one of the keys to the extremely cold Antarctic climate and one of the most important international scientific research lines at present. The scientific dimension of this group at the University of Granada is very relevant. Last week (the last week of August), Galindo and Bohoyo attended the biennial meeting of SCAR (the Scientific Committee on Antarctic Research) in Bremen (Germany). This international scientific committee brings together a total of 28 countries involved in the scientific study of Antarctica. Before the work of the researchers from Granada, there were hardly any systematized data about the Scotia Arc beyond British expeditions in the seventies. As a matter of fact, Fernando Bohoyo travels to the British Antarctic Centre, one of the international reference centres on the subject, at the end of this week, to go deeper into data on the external part of the tectonic arc and the northern part of the Scotia Sea. In December, they will go back to Antarctica to collect more data and continue with this promising scientific study. Antonio Marín Ruiz | alfa
<urn:uuid:c516877a-16e0-4b65-8930-4ba9d1dbbd8d>
2.90625
1,301
Content Listing
Science & Tech.
37.818113
95,608,326
First of all, a quick primer on the nomenclature of Saturn's rings. The rings are labeled alphabetically in order of discovery, although the A, B, and C rings were all discovered basically at the same time and the decision to name them working in towards the planet was pretty much arbitrary. Technically the F ring is too thin to be shown here; it's only about 30–500 km wide, which means it's about 40–400 times thinner than shown here. The relative brightnesses of the rings are also only approximate; the G ring (and even the D ring) are fainter than shown here, and aren't visible to the naked eye. They were only discovered with photography from various interplanetary probes after 1979 (as was the F ring). The F ring is the outermost of the "discrete" rings; beyond it, the rings are diffuse and may have moons orbiting embedded within them. The astute among you might have noticed that there is a distinct lack of an E ring in the above image. Don't worry, we'll come back to that. Anyway, let's see how these rings stack up against the average Earth-Moon distance: With an average separation distance between them of about 358,000 km, we can see that the Earth and the Moon nicely frame Saturn and its main rings there. It also gives a good idea of the size of Saturn relative to Earth. But what about that E ring I glossed over a paragraph ago? Turns out the E ring is outside the G ring and extremely large, but like the G ring it's also extremely faint and diffuse. Anyway, here's the E ring in all its glory (I've left the Earth, Moon, and the line between them in place): Yeah, the E ring's pretty wide (and again, it's so diffuse that it's not visible to the naked eye). Its outer edge is just within the orbit of Saturn's largest moon, Titan. As you can see (or maybe not), the E ring's diameter is around twice as large as the average Earth-Moon distance. But believe it or not, that's not all of Saturn's rings!
There are a few more ringlets between the G and E ring that are too thin to show here, but there's another ring outside the E ring that's even larger and even more diffuse. This ring was only discovered in October 2009, and is known as the Phoebe ring after Saturn's unusual moon Phoebe which orbits just outside of it in a retrograde orbit. Here it is, with the rest of the ring system for comparison: Yep, that little disc in the center is the E ring we just saw in the last picture—with the inner ring system and Saturn within that. This ring is really large. In fact, unlike the other rings which have a maximum thickness on the order of tens to maybe hundreds of meters, the Phoebe ring has a thickness around forty times greater than the radius of Saturn itself. In other words, this ring is thicker than the entire diameter of the E ring. So there you have it! Saturn and its fascinating ring system, and how it compares to the distance between the Earth and the Moon. Hope you found it as interesting as I did putting these images together. A hui hou!
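The size comparison in the post can be checked with a few lines of arithmetic. The E ring extent used below (very roughly 3 to 8 Saturn radii from the planet's centre) is a rough literature value and my assumption, not a figure from the post; only the ~358,000 km Earth-Moon separation is taken from the text above:

```python
# Rough comparison of the E ring's diameter with the Earth-Moon distance.
# The ring extent is an approximate literature value (an assumption here);
# the Earth-Moon separation is the figure quoted in the post.

EARTH_MOON_KM = 358_000    # average separation used in the post
SATURN_RADIUS_KM = 60_268

# The E ring spans very roughly 3 to 8 Saturn radii from Saturn's centre.
e_ring_outer_km = 8 * SATURN_RADIUS_KM
e_ring_diameter_km = 2 * e_ring_outer_km

ratio = e_ring_diameter_km / EARTH_MOON_KM
print(f"E ring diameter: ~{e_ring_diameter_km:,} km")
print(f"That is ~{ratio:.1f}x the Earth-Moon distance")
```

With these rough inputs the ratio comes out somewhat above the "around twice" in the text, which is the sort of spread you'd expect given how diffuse the E ring's outer edge is.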
<urn:uuid:a45b525e-e717-4bcd-bbf0-17f2a0b3e2ce>
3.90625
686
Personal Blog
Science & Tech.
61.642093
95,608,351
Having grown up in coastal Newfoundland, Daniel Hoyles has seen, heard and felt first hand the power of the ocean. Now the ambitious Memorial University bachelor of commerce alumnus and chief operating officer of Grey Island Energy is developing a technology to harness the ocean's immense wave energy to power everything from a house along the coast to a massive offshore oil rig. Daniel describes wave power as another form of solar energy. As air heats up in the blazing sun, it becomes less dense, lighter and rises higher, only to be swiftly replaced by colder, denser air. This rapid rush of new air is how wind is formed and wind drives the ocean's powerful waves. In the case of wave power, water is the medium through which kinetic energy, the dynamic energy of motion, passes.
<urn:uuid:928320d8-1687-42f7-9ec3-f30591c72186>
2.5625
164
Truncated
Science & Tech.
39.935952
95,608,363
Boomer has been examining the tools scientists and managers use to predict how much sediment runs into the Chesapeake Bay, and by her account, they are way off the mark. The study, co-authored by SERC ecological modeler Donald Weller and ecologist Thomas Jordan, appears in the January/February issue of the Journal of Environmental Quality. Sediment running into the bay reduces light, suffocates underwater organisms and is a significant source of phosphorus, a nutrient that essentially fertilizes the water, promoting algal blooms and many other problems in the bay. "Cities and counties are under increasing pressure to meet total maximum daily loads set by state and federal agencies and to understand where sediments come from," she said. "So we tested the tools most widely used now to predict sediment delivery." Her work has led to a new tactic. "We're moving away from focusing on upland erosion and looking more at what happens near streams and in streams during events with high levels of stream sediments." The new study compared actual measurements of sediments in more than 100 streams in the Chesapeake watershed with predictions from several of the most up-to-date models. All the models failed completely to identify streams with high sediment levels. "There was no correlation at all between the model predictions and the measurements," said Boomer. The study is among the first to directly compare predictions of the widely used models with actual observations of sediments in a large number of streams. The problem, she said, is that the most widely used models all begin with the same tool, the Universal Soil Loss Equation. The USLE estimates erosion from five factors: topography, soil erodibility, annual average rainfall amount and intensity, land cover, and land management practices. Boomer emphasized that the USLE was developed to help farmers limit topsoil loss from individual fields rather than to predict sediment delivery from complex watersheds to streams.
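The USLE itself is simply a product of the five factors just listed. A minimal sketch (the variable names follow the equation's conventional notation; the sample values are illustrative, not from the study):

```python
def usle_soil_loss(R, K, LS, C, P):
    """Universal Soil Loss Equation: A = R * K * LS * C * P.

    R  - rainfall-runoff erosivity
    K  - soil erodibility
    LS - topographic (slope length and steepness) factor
    C  - land-cover/management factor
    P  - support-practice factor
    Returns the average annual soil loss A in the units implied by R and K.
    """
    return R * K * LS * C * P

# Illustrative values only, not taken from the study:
print(usle_soil_loss(R=100, K=0.3, LS=1.2, C=0.2, P=0.5))  # roughly 3.6
```

The article's point is visible in the structure: every factor describes long-term average upland conditions, so nothing in the product captures the storm events and stream-bank erosion that dominate actual sediment delivery.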
As often applied, the USLE gives an average annual erosion rate for the whole watershed draining into a stream. But not all of the eroded soil makes it into the water, so the estimates do not translate directly into sediment delivery rates. To account for the discrepancy, different models incorporate a wide variety of adjustments. According to Boomer, the adjusted models still do not work, partly because erosion rate is not the best information to start with. During the study, Boomer and colleagues Weller and Jordan compared erosion rates and sediment yields estimated from regional application of the USLE, the automated Revised USLE, and five widely used sediment delivery ratio algorithms to measured annual average sediment delivery in 78 catchments of the Chesapeake Bay watershed. "We did the same comparisons for an independent set of 23 watersheds monitored by the U.S. Geological Survey," Boomer said. Sediment delivery predictions, which were highly correlated with USLE erosion predictions, exceeded observed sediment yields by more than 100 percent. The RUSLE2 erosion estimates also were highly correlated with the USLE predictions, indicating that the method of implementing the USLE model did not greatly change the results. "Sediment delivery is largely associated with specific rain events and stream bank erosion," she said. "So, USLE-based models that emphasize long-term annual average erosion from uplands provide limited information to land managers." With a new focus on what is happening in and near the streams themselves, Boomer and her colleagues hope to develop more reliable tools to predict sediment running into Chesapeake Bay—tools that can be used in other lakes and estuaries as well. Kimbra Cutlip | EurekAlert!
<urn:uuid:65e117bd-868c-466b-9fbf-a4971bca9e61>
3.5
1,365
Content Listing
Science & Tech.
33.2716
95,608,378
In this section, you will learn how to handle events in Java AWT. Events are an integral part of the Java platform. You can see the concepts related to event handling through the example, and use the methods through which you can implement an event-driven application. For an event to be handled, objects register themselves as listeners; nothing happens in response to an event if no listener is registered. No matter how many listeners there are, each and every listener is capable of processing the event. For example, a SimpleButtonEvent applet creates a Button instance and registers itself as the listener for the button's action events. ActionListener can be implemented by any class, including an Applet. One point to remember here is that all the listeners are always notified. Moreover, you can call the AWTEvent.consume() method whenever you don't want an event to be processed further, and a listener can check whether an event has been consumed with the isConsumed() method. Once an event has been consumed, the system stops processing it further. Consumption only works for InputEvent and its subclasses. For example, if you don't want any keyboard input from the user, you can call consume() on the KeyEvent. The step-by-step procedure of event handling is as follows. Most event types have a corresponding Listener interface, as events subclass the AWTEvent class. However, PaintEvent and InputEvent don't have Listener interfaces, because paint handling is done by overriding the paint() method rather than through a listener. A low-level event represents a low-level input or window operation. Examples of low-level events are mouse movement, window opening, and a key press. For example, typing the letter 'A' on the keyboard generates three events: one for pressing, one for releasing, and one for typing.
The different types of low-level events, and the operations that generate each event, are shown below:

- FocusEvent: getting/losing focus.
- MouseEvent: entering, exiting, clicking, dragging, moving, pressing, or releasing.
- ContainerEvent: adding/removing a component.
- KeyEvent: releasing, pressing, or typing a key.
- WindowEvent: opening, deactivating, closing, iconifying, deiconifying, really closed.
- ComponentEvent: moving, resizing, hiding, showing.

Semantic events represent interaction with a GUI component, such as changing the text of a text field or selecting a button. The different events generated by different components are shown below:

- ItemEvent: state changed.
- ActionEvent: do the command.
- TextEvent: text changed.
- AdjustmentEvent: value adjusted.

If a component is an event source for something, then the same holds for its subclasses. The different event sources are represented by the following table. Every listener interface has at least one event type, and contains a method for each kind of event the event class incorporates. For example, as discussed earlier, KeyListener has three methods, one for each type of event that KeyEvent has: keyTyped(), keyPressed(), and keyReleased(). The listener interfaces and their methods are as follows:

- ContainerListener: componentAdded(ContainerEvent e)
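The register/notify/consume flow described above is language-independent. Here is a minimal Python analogy of the pattern (the class and method names below are invented for illustration and are not part of the AWT API):

```python
class Event:
    """Toy event with AWT-style consume()/isConsumed() semantics."""
    def __init__(self, name):
        self.name = name
        self._consumed = False

    def consume(self):
        self._consumed = True

    def is_consumed(self):
        return self._consumed


class EventSource:
    """Objects register themselves as listeners; every listener is notified."""
    def __init__(self):
        self._listeners = []

    def add_listener(self, listener):
        self._listeners.append(listener)

    def fire(self, event):
        for listener in self._listeners:
            listener(event)          # all listeners are always notified
        if not event.is_consumed():  # consuming suppresses default handling
            print(f"default processing of {event.name}")


source = EventSource()
source.add_listener(lambda e: print(f"listener saw {e.name}"))
source.add_listener(lambda e: e.consume())  # second listener consumes the event
source.fire(Event("keyPressed"))
```

Note that, as in AWT, consuming the event does not stop the other listeners from being notified; it only suppresses the default processing that would otherwise follow.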
<urn:uuid:28deff66-3f09-42c3-8ddc-fe189976c775>
3.953125
747
Documentation
Software Dev.
38.268798
95,608,381
Exploring the mystery of how enzymes work via simulations WASHINGTON, D.C., May 10, 2016 – Enzymes play a crucial role in most biological processes–controlling energy transduction, as well as the transcription and translation of genetic information and signaling. They possess a remarkable capacity to accelerate biochemical reactions by many orders of magnitude compared to their uncatalyzed counterparts. So there is broad scientific interest in understanding the origin of the catalytic power of enzymes on a molecular level. While many hypotheses have been put forward using both experimental and computational approaches, they must be examined critically. In The Journal of Chemical Physics, from AIP Publishing, a group of University of Southern California (USC) researchers present a critical review of the dynamical concept–time-dependent coupling between protein conformational motions and chemical reactions–that explores all reasonable definitions of what does and does not qualify as a dynamical effect. The group's work centers on multiscale computer simulations–for which Arieh Warshel, currently a distinguished professor of chemistry at USC, was awarded the 2013 Nobel Prize in chemistry, along with Michael Levitt and Martin Karplus–for exploring the complex actions of enzymes. "Our study reviews various proposals governing the catalytic efficiency of enzymes, which requires constructing free energy surfaces associated with the chemical reactions, as well as catalytic landscapes comprising conformational and chemical coordinates," explained Ram Prasad Bora, co-author and a postdoctoral assistant working with Warshel. "It's far from trivial to construct these surfaces using reliable theoretical approaches." 
For the group's studies, they tend to construct catalytic free energy surfaces by combining an empirical valence bond (EVB) approach, developed by Warshel and colleagues in the early 1980s to determine reaction free energies of enzymatic reactions, with corresponding uncatalyzed solution reaction surfaces constructed via computational "ab initio quantum chemical calculations." But the "quality of EVB free energy surfaces are improved further–to a desired quantum chemical level–by using a paradynamics approach," Bora added. Comparing the free energy surfaces–and the various factors contributing to these surfaces–for both enzymatic and solution reactions enabled the group to identify the factors contributing to enzyme catalysis. "In our studies, it's electrostatic free energy," said Bora. "But, to specifically address dynamical and kinetic isotope effects, we used an autocorrelation function of the energy gap, catalytic landscapes–constructed using a renormalization approach–and a quantum classical patch (QCP) approach to determine that dynamics don't contribute to the catalytic power of enzymes." The key significance of the group's work? "The catalytic activity of enzymes is fundamental to life, so gaining a solid understanding of the factors controlling and contributing to their activity at the molecular level is crucial," said Bora. With high-quality scientific data now available for the most interesting and complex biological molecules, "it's essential to re-examine various catalytic proposals–especially electrostatic- and dynamic-based proposals," he continued. "Our review focuses on clarifying the role of logical definitions of different catalytic proposals, as well as on the need for a clear formulation in terms of the assumed potential surface and reaction coordinates." The group determined, in previous efforts, that electrostatic preorganization actually accounts for the observed catalytic effects of enzymes. 
Their current work focuses on exploring the alternative dynamical proposal. In terms of applications, the group's enzyme modeling offers "major medical and fundamental value," said Warshel. "For example, it can provide big improvements in enzyme design and advances in drug resistance. Most significant to me, personally, is that it's a solution to the 100-year-old puzzle of how enzymes really do and don't work." The scientific field of "computational enzyme designing," in particular, stands to benefit from the group's work. "The idea is to tailor desired enzymes for specific purposes–to be used as catalysts for several chemical and biochemical processes at industrial scales–using computational approaches to create artificial enzymes, which can be tested and further improved in laboratories later," explained Bora. While the concept of designing enzymes computationally from scratch was initially considered intriguing, the field is evolving slowly. "This is largely because the current enzyme designing protocols don't take the actual enzyme catalytic factors into account," Bora continued. "Our work presents a very detailed discussion of these factors, and clarifies that it isn't useful to try to optimize dynamical effects in enzyme design. Focusing on other factors will enable devising future designing protocols on a scientific basis–which will increase the success rate of computationally designed enzymes." What's next for the group? "More efforts in enzyme design and drug resistance, as well as studies of complex molecular motors and signal transduction," said Warshel. The group's critical analysis work will help further the exploration of the catalytic factors controlling enzyme catalysis. "Our clear vision about the enzyme catalytic factors means we can now devise effective artificial enzyme design protocols–purely on a scientific basis–that may fill the gap between rational design and laboratory discovery. 
This can provide insights into another fundamental issue in biology: What controls the evolution of proteins and enzymes?" noted Bora. "Successful design of enzymes may also help in the control of metabolic pathways." The article, "Perspective: Defining and quantifying the role of dynamics in enzyme catalysis," is authored by Arieh Warshel and Ram Prasad Bora. It will appear in the Journal of Chemical Physics May 10, 2016 (DOI: 10.1063/1.4947037). After that date, it can be accessed at http://scitation.aip.org/content/aip/journal/jcp/144/18/10.1063/1.4947037. ABOUT THE JOURNAL The Journal of Chemical Physics publishes concise and definitive reports of significant research in the methods and applications of chemical physics. See http://jcp.aip.org. AIP Media Line
<urn:uuid:b21911ef-33b7-4d3f-80e2-b065500eaadd>
2.875
1,261
News Article
Science & Tech.
20.020228
95,608,414
Hubble Space Telescope image of NGC 1275

Observation data (J2000 epoch):
- Right ascension: 03h 19m 48.1s
- Declination: +41° 30′ 42″
- Radial velocity: 5264 ± 11 km/s
- Distance: 222 million light-years
- Group or cluster: Perseus Cluster
- Apparent magnitude (V): 12.6
- Apparent size (V): 2′.2 × 1′.7
- Other designations: Perseus A, PGC 12429, UGC 2669, QSO B0316+413, Caldwell 24, 3C 84

NGC 1275 (also known as Perseus A or Caldwell 24) is a type 1.5 Seyfert galaxy located around 237 million light-years away in the direction of the constellation Perseus. NGC 1275 corresponds to the radio galaxy Perseus A and is situated near the center of the large Perseus Cluster of galaxies. NGC 1275 consists of two galaxies: a central type-cD galaxy in the Perseus Cluster, and a so-called "high velocity system" (HVS) which lies in front of it. The HVS is moving at 3000 km/s towards the dominant system, and is believed to be merging with the Perseus Cluster. The HVS is not affecting the cD galaxy, as it lies at least 200 thousand light-years from it. However, tidal interactions are disrupting it, and ram-pressure stripping produced by its interaction with the intracluster medium of Perseus is removing its gas as well as producing large amounts of star formation within it. The central cluster galaxy contains a massive network of spectral-line-emitting filaments, which apparently are being dragged out by rising bubbles of relativistic plasma generated by the central active galactic nucleus. Long gaseous filaments made up of threads of gas stretch out beyond the galaxy, into the multimillion-degree, X-ray-emitting gas that fills the cluster. The amount of gas contained in a typical thread is approximately one million times the mass of the Sun. The threads are only 200 light-years wide, are often very straight, and extend for up to 20,000 light-years. The existence of the filaments poses a problem.
As they are much cooler than the surrounding intergalactic cloud, it is unclear how they have existed for such a long time, or why they have not warmed, dissipated or collapsed to form stars. One possibility is that weak magnetic fields (about one-ten-thousandth the strength of Earth's field) exert enough force on the ions within the threads to keep them together. NGC 1275 contains 13 billion solar masses of molecular hydrogen that seems to be infalling from Perseus' intracluster medium in a cooling flow, both feeding its active nucleus and fueling significant amounts of star formation. The presence of an active nucleus demonstrates that a supermassive black hole is present in NGC 1275's center. The black hole is surrounded by a rotating disk of molecular gas. High-resolution observations of the rotation of this disk obtained using adaptive optics at the Gemini North telescope indicate a central mass of approximately 800 million solar masses, including both the mass of the black hole and of the inner core of the gas disk.
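The ~800-million-solar-mass central mass comes from modelling the disk's rotation. As a minimal sketch (not the published analysis), equating gravity with the centripetal force for circular motion gives the Keplerian enclosed-mass estimate M ≈ v²r/G; the speed and radius below are illustrative placeholders, not the measured values.

```python
# Keplerian enclosed-mass estimate M ~ v^2 * r / G.
# The velocity and radius are ASSUMED illustrative numbers; the article
# only quotes the resulting mass scale (~8e8 solar masses).
G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30   # solar mass, kg
PC = 3.086e16      # metres per parsec

def keplerian_mass(v_kms: float, r_pc: float) -> float:
    """Mass (in solar masses) enclosed within radius r for circular speed v."""
    v = v_kms * 1e3   # km/s -> m/s
    r = r_pc * PC     # pc -> m
    return v * v * r / G / M_SUN

# e.g. a disk rotating at ~250 km/s some 50 pc from the centre
print(f"{keplerian_mass(250.0, 50.0):.1e}")  # ~7e8 solar masses, same order as the quoted figure
```

Because the estimate scales as v²r, modest uncertainties in the measured rotation speed translate into sizeable uncertainties in the inferred mass, which is why the quoted figure includes both the black hole and the inner gas disk.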
"Nuclear fusion" is the melting of light nuclei into heavier ones, a process that according to the laws of physics releases enormous amounts of energy. For the past 50 years many scientists have sought ways of harnessing this fusion reaction under controlled reactor conditions as a safe, clean and practically inexhaustible source of energy. Siegbert Kuhn and his team at the Institute of Theoretical Physics at Innsbruck University are making a major contribution to these efforts and positioning Austrian nuclear fusion research at the forefront of international activities in this field by carrying out particle simulation studies of divertor plasmas sponsored by the Austrian Science Fund (FWF) and in cooperation with international research groups. In order to obtain an adequate number of nuclear fusion reactions for practical energy production, the particles involved must be made to collide with sufficient frequency and sufficient energy. In principle, this can be most readily achieved in an extremely hot hydrogen gas (approx. 100 million degrees) at appropriate density. At these temperatures the gas is fully "ionised", meaning that the gas molecules, which are electrically neutral under normal conditions, are split into positively charged nuclei ("ions") and negatively charged "electrons". "Such a gas is called a `plasma` and the plasma state is commonly referred to as the `fourth state of matter`", Kuhn goes on explaining that plasma is the stuff that stars are made of: "Only imagine it: 99.99 % of all matter in the universe is in the plasma state!". Hot plasma is confined in a ring-shaped vessel (torus) by a magnetic field of suitable structure. The most promising configuration to date is termed "tokamak". 
The next ambitious aim of international fusion research is the construction of the "International Thermonuclear Experimental Reactor" (ITER), which will be the first reactor to work with a plasma largely heated by the fusion reaction itself and which will come very close to the concept of a future commercial fusion reactor in terms of plasma physics. In a tokamak a distinction is made between the hot "core plasma", in which the energy-producing nuclear fusion reactions take place, and the cooler "edge plasma", through which the high-energy plasma particles diffusing from the core plasma are passed to the baffle plates of the divertor. "Since there are strict technical limits to the amounts of energy to which divertor plates can be subjected, questions relating to the contact between the plasma and the divertor wall count among the most important scientific and technical challenges of modern fusion research," explains Kuhn. In this project he has obtained important results toward a better understanding of the divertor plasma. Existing models and simulation programmes, for example, have been greatly improved, and the strong influence of secondary and fast electrons on the edge layer has been clearly demonstrated and quantified. Kuhn: "We were also able to make a major contribution to understanding the formation and effects of fast particles which occur during the heating of the tokamak plasma through wave injection and which can seriously damage the divertor plates. In a next step, our results can be directly used for modelling and optimising existing and planned tokamaks." Monika Scheifinger | alphagalileo
Though Colder Than Earth, Saturn's Moon Titan Is Tropical In Nature "You have all these things that are analogous to Earth. At the same time, it's foreign and unfamiliar," said Ray Pierrehumbert, the Louis Block Professor in Geophysical Sciences at Chicago. Titan, one of Saturn's 60 moons, is the only moon in the solar system large enough to support an atmosphere. Pierrehumbert and Jonathan Mitchell, who recently completed his Ph.D. in Astronomy & Astrophysics at Chicago, have been comparing observations of Titan collected by the Cassini space probe and the Hubble Space Telescope with their own computer simulations of the moon's atmosphere. Their study of the dynamics behind Titan's methane clouds has appeared in the Proceedings of the National Academy of Sciences. Their continuing research on Titan's climate focuses on the moon's deserts. "One of the things that attracts me about Titan is that it has a lot of the same circulation features as Earth, but done with completely different substances that work at different temperatures," Pierrehumbert said. On Earth, for example, water forms a liquid and is relatively active as a vapor in the atmosphere. But on Titan, water is a rock. "It's not more volatile on Titan than sand is on Earth." Methane (natural gas) assumes an Earthlike role of water on Titan. It exists in enough abundance to condense into rain and form puddles on the surface within the range of temperatures that occur on Titan. "The ironic thing on Titan is that although it's much colder than Earth, it actually acts like a super-hot Earth rather than a snowball Earth, because at Titan temperatures, methane is more volatile than water vapor is at Earth temperatures," Pierrehumbert said. Pierrehumbert and Mitchell even go so far as to call Titan's climate tropical, even though it sounds odd for a moon that orbits Saturn more than nine times farther from the sun than Earth.
Along with the behavior of methane, Titan's slow rotation rate also contributes to its tropical nature. Earth's tropical weather systems extend only to plus or minus 30 degrees of latitude from the equator. But on Titan, which rotates only once every 16 days, "the tropical weather system extends to the entire planet," Pierrehumbert said. Titan's tropical nature means that scientists can observe the behavior of its clouds using theories they've relied upon to understand Earth's tropics, Mitchell noted. Titan's atmosphere produces an updraft where surface winds converge. This updraft lifts evaporated methane up to cooler temperatures and lower pressures, where much of it condenses and forms clouds. "This is a well-known feature on Earth called an ITCZ, the inter-tropical convergence zone," Mitchell said. Earth's oceans help confine the ITCZ to the lowest latitudes. But in some scenarios for oceanless Titan, the ITCZ in Mitchell's computer simulations wanders in latitude almost from one pole to the other. Titan's clouds should also follow the ITCZ. Titan's orange atmospheric haze complicates efforts to observe the moon's clouds. "This haze shrouds the entire surface," Mitchell said. "It pretty much blocks all visible light from reaching us from the surface or from the lower atmosphere." Nevertheless, infrared observations via two narrow frequency bands have recently revealed that clouds are currently confined to the moon's southern hemisphere, which is just now emerging from its summer season. "There should be a very large seasonality in these cloud features," Mitchell said. "Cassini and other instruments might be able to tell us about that in the next seven to 10 years or so, as the seasons progress." Mitchell and Pierrehumbert's next paper will describe how oscillations in Titan's atmospheric circulation dry out the moon's midsection.
Over the course of a year, Mitchell explained, "this oscillation in the atmosphere tends to transport moisture, or evaporated methane, out of the low latitudes and then deposit it at mid and high latitude in the form of rainfall. This is interesting, because recent Cassini observations of the surface suggest that the low latitudes are very dry." Cassini images show dunes of ice or tar covering these low-latitude regions that correspond to the tropics on Earth. When ultraviolet light from the sun interacts with methane high in Titan's atmosphere, it creates byproducts such as ethane and hydrogen. These byproducts become linked to chains of hydrocarbon molecules that create Titan's orange haze. When these molecules coalesce into large particles, they settle out as a tar-like rain. "Titan is like a big petrochemical plant," Pierrehumbert said. "Although this is all happening at a much lower temperature than in a petroleum refinery, the basic processes going on there are very closely allied to what people do when they make fuel." Article based on information provided by: University of Chicago, Chicago, Illinois U.S.A. Adapted and published by: Mooshee.com Originally released on: October 03
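The link between Titan's 16-day rotation and its planet-wide tropics can be made concrete with a trivial comparison of angular velocities; the heuristic that slower rotation (weaker Coriolis turning) lets tropical circulations reach higher latitudes is our gloss on the article, not the authors' model.

```python
# Compare planetary rotation rates, using the article's figure of one
# Titan rotation every 16 days. Slower rotation means a weaker Coriolis
# effect, which is the qualitative reason tropical circulations can span
# the whole moon (a heuristic, not the authors' simulation).
import math

DAY_S = 86_400.0  # seconds per Earth day

def rotation_rate(period_days: float) -> float:
    """Angular velocity Omega = 2*pi / period, in rad/s."""
    return 2.0 * math.pi / (period_days * DAY_S)

omega_earth = rotation_rate(1.0)
omega_titan = rotation_rate(16.0)
print(f"Titan rotates ~{omega_earth / omega_titan:.0f}x slower than Earth")
```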
A study by researchers at Tennessee Tech University, Purdue University, the University of Colorado and the University of Georgia, Pacific Northwest National Laboratory and Hellenic Center for Marine Research concluded that artificial reservoirs can modify precipitation patterns. The study, published in Geophysical Research Letters, marks the first time researchers have documented large dams having a clear, strong influence on the climate around artificial reservoirs, an influence markedly different from the climate around natural lakes and wetlands. The results should spur consideration of more robust management of dams and set the stage for further research on the regions and climates to focus on, says Faisal Hossain, Tennessee Tech University civil engineering professor. "This research shows you the smoking gun," said Hossain. "Logically and physically we knew it was possible that having a large body of water and spreading it around would change the local climate. Now, our results give us a better idea of which dams are most likely to gradually change local climate and what that means for managing those reservoirs as time passes." With Hossain and TTU doctoral student Ahmed Mohamed Degu leading the study, the research team looked at 30 years of climate data based on a technique commonly known as reanalysis in the scientific community. Reanalysis aims to recreate the gold standard record of weather conditions everywhere in a domain by using as much information in hindsight as possible. The data used spanned from 1979-2009 and was collected 24/7 over North America. Roger Pielke Sr. of the University of Colorado's Cooperative Institute for Research in Environmental Sciences says the work was a breakthrough study in scope and mission.
"This is a critically important, much needed study with multiple authors and institutions using diverse data sets in order to obtain information on how dams and their surroundings affect the region's climate rather than a local snapshot that may not be representative for larger areas," said Pielke. The study reports that large dams influence local climate most in the Mediterranean and semi-arid climates such as ones in California and in the Southwestern United States. So how does a large dam and its reservoir alter the climate? If the dam's reservoir is large enough or if the water is spread around by uses such as extensive irrigation or recreational activities, then the expanded distribution of water creates an altered climate because it allows the water to evaporate more easily. "Think of your typical backyard swimming pool," said Hossain. "If you pumped all the water out of your swimming pool and spread it onto your lawn, it wouldn't take long for all that water to evaporate." A change in water available for evaporation can change humidity, energy and surface temperature and affect the climate around a reservoir. Under the right circumstances, all of these play an important role in changing rainfall. "We now know we need to do better building and managing dams and reservoirs in those arid and Mediterranean regions where water is really scarce," said Hossain. Hossain says the report reflects a changing mindset in this area of research. Pielke says this framework, known as a vulnerability framework, is more inclusive and promotes more effective decisions. "The change in mindset is to identify the vulnerabilities from a bottom-up resource-based perspective," said Pielke. Hossain agrees that this perspective changes the way civil engineers think in the classroom and on the job. "Our profession generally has never looked at climate and what we do to it once we build large structures like dams, even cities, parks, ports, etc.," said Hossain.
"That work is missing at the interface of our profession. We now need to adapt, be more climate cognizant and broaden our horizons. Many of our dams in the U.S. are 50 years old and we need answers for the future," he said. "Now we have a better idea about how the local climate and rainfall may change than we did 50 years ago, although more work is needed to pinpoint exact causes at each dam location," said Hossain. Nevertheless, we now can consider different scenarios and do a life cycle assessment before even building a dam. "This is like saying we can now forecast what a dam may do to itself as it ages before even building it; then we build it according to a specification that the profession is prepared for," he concluded. The work was mainly supported by TTU's Office of Research and the Center for the Management, Utilization and Protection of Water Resources. Faisal Hossain, email@example.com Karen Lykins | Newswise Science News
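Hossain's swimming-pool analogy reduces to the fact that open-water evaporation scales with the exposed surface area, not the volume of water. A toy calculation with hypothetical numbers (the pool area, lawn area, and evaporation rate are all assumed purely for illustration):

```python
# Toy illustration of the pool analogy: the same volume of water spread
# over ten times the area evaporates ten times as fast, all else equal.
# ALL numbers below are hypothetical, not from the study.
POOL_AREA_M2 = 30.0     # assumed backyard pool surface
LAWN_AREA_M2 = 300.0    # same water spread over a lawn
EVAP_RATE_MM_DAY = 5.0  # assumed evaporation per unit area

def daily_evaporation_litres(area_m2: float, rate_mm_day: float) -> float:
    """Litres lost per day from an open surface (1 mm over 1 m^2 = 1 litre)."""
    return area_m2 * rate_mm_day

pool = daily_evaporation_litres(POOL_AREA_M2, EVAP_RATE_MM_DAY)
lawn = daily_evaporation_litres(LAWN_AREA_M2, EVAP_RATE_MM_DAY)
print(f"{lawn / pool:.0f}x faster when spread out")  # 10x
```

The same area scaling is why irrigation drawn from a reservoir, which spreads water thinly over fields, can alter local humidity far more than the reservoir surface alone.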
This information, published in the Genetics Society of America’s new open-access journal, G3: Genes | Genomes | Genetics (http://www.g3journal.org), lays the foundation for future understanding of mutation and disease, as studies of yeasts often identify key genes and mechanisms of disease. “We hope to learn to read the language of DNA and tell when mutations or differences will cause disease and when they will be advantageous,” said Chris Todd Hittinger, senior author of the work from the Department of Biochemistry and Molecular Genetics at the University of Colorado School of Medicine in Aurora, Colorado. “Providing a complete catalog of diversity among this group of species will allow us to quickly test which changes are responsible for which functions in the laboratory with a level of precision and efficiency not possible in other organisms.” Using massively parallel next-generation DNA sequencing, the researchers determined the genome sequences, doubling the number of genes available for comparison, and identifying which genes changed in which species. They did this by segmenting each organism’s DNA into small pieces, and then computationally “reassembled” the pieces and compared them to the genome of S. cerevisiae (the species used to make beer, bread, wine, etc.) to identify similarities and differences. The researchers also genetically engineered several of the strains to make them amenable for experimentation. Results from this study will allow researchers to compare the genetics, molecular biology, and ecology of these species. Because yeast genomes and lifestyles are relatively simple, determining how diversity is encoded in their DNA is much easier than with more complex organisms, such as humans. 
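The "segment, reassemble, compare" workflow the researchers describe is, at heart, sequence assembly. A toy greedy overlap assembler, far simpler than any real next-generation sequencing pipeline, shows the core idea of merging reads by their longest suffix/prefix overlap:

```python
# Toy illustration (NOT the authors' pipeline): greedily assemble short
# reads by repeatedly merging the pair with the longest overlap between
# one read's suffix and another's prefix.
def overlap(a: str, b: str) -> int:
    """Length of the longest suffix of a that is a prefix of b."""
    for k in range(min(len(a), len(b)), 0, -1):
        if a.endswith(b[:k]):
            return k
    return 0

def greedy_assemble(reads: list[str]) -> str:
    """Merge reads pairwise, best overlap first, until one contig remains."""
    reads = reads[:]
    while len(reads) > 1:
        best = (0, 0, 1)  # (overlap length, index of left read, index of right read)
        for i, a in enumerate(reads):
            for j, b in enumerate(reads):
                if i != j and overlap(a, b) > best[0]:
                    best = (overlap(a, b), i, j)
        k, i, j = best
        merged = reads[i] + reads[j][k:]
        reads = [r for idx, r in enumerate(reads) if idx not in (i, j)]
        reads.append(merged)
    return reads[0]

reads = ["ATGGCGT", "GCGTACG", "TACGGAT"]
print(greedy_assemble(reads))  # ATGGCGTACGGAT
```

Real assemblers use k-mer graphs and quality scores rather than this quadratic greedy search, but the resulting contigs are compared against a reference genome (here, S. cerevisiae) in essentially the way the article describes.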
"The experimental resources described in this paper extend the value of yeasts for understanding biological processes," said Brenda Andrews, Editor-in-Chief of G3: Genes | Genomes | Genetics, "and if they help us make better pizza crust and beer along the way, all the better." The Genetics Society of America established G3: Genes | Genomes | Genetics (http://www.g3journal.org) to meet the need for rapid review and publication of high-quality foundational research and experimental resources in genetics and genomics – an outlet unrestricted by subjective editorial criteria of perceived significance or predicted breadth of interest. Papers published in G3 describe useful, well-executed and lucidly interpreted genetic studies of all kinds. This new peer-reviewed, peer-edited, and fully open access journal meets crucial needs that are not met by current journals in this field. Founded in 1931, the Genetics Society of America is the professional membership organization for geneticists and science educators. Its nearly 5,000 members work to advance knowledge in the basic mechanisms of inheritance, from the molecular to the population level. Tracey DePellegrin Connelly | Newswise Science News
Ongoing since 1977

Development of Streams Following Glacial Recession

What are the major factors affecting colonization of new streams in Glacier Bay?

Did You Know? Ecological succession on land typically involves winners and losers—as succession progresses, species disappear and are replaced by others. In stream succession, by contrast, tolerance seems to be the model—most early colonizers remain, joined by newer arrivals.

Sandy Milner's study of ecological succession in streams is not only the longest-running research project in the park, but also the longest continuous study of primary succession in streams anywhere. It all started in 1974, when Sandy was pursuing a Master's degree in hydrobiology at Chelsea College, University of London, and read a chapter called "A region reborn" in a Time/Life book about Alaska. Sandy was instantly captivated. He realized that the rapid recession of glaciers in Glacier Bay presented a unique opportunity to study the ecological development of streams. Three years later, his dream to organize an expedition of Chelsea College students to Glacier Bay became a reality; thirty-six years later, Sandy is still returning to Glacier Bay every summer. Ecological succession is a key concept within ecology. This study has made a major contribution to our understanding of the main variables driving ecological succession in riverine environments.
Over the years this study has investigated various aspects of stream development using methods that include: - Collecting invertebrates and other fauna through various netting and trapping methods and by picking them from stones - Measuring variables such as water temperature, pH, color, and nitrogen and phosphorous content - Counting spawning salmon and capturing juvenile salmonids using minnow traps - Collecting invertebrates prior to, during and subsequent to peak redd digging - Collecting vegetation, invertebrates and juvenile fish to determine if marine-derived nutrients from salmon carcasses are being incorporated into the food chain - Examining nutrient retention by releasing leaves and salmon carcasses into the stream - Preparing detailed maps and cross-sections to understand how geomorphological and habitat complexity increases over time Interesting findings from this study include: - Low water temperature is the major factor inhibiting colonization of young streams. - Small winged insects called midges, carried by the wind, are the earliest colonizers. Even in the cold, harsh conditions of very young glacial streams, large numbers of their worm-like larvae can be seen clinging to rocks in the streambed. - Dolly Varden and “stray” spawning pink salmon may colonize new streams within 10 years. Large populations of pink salmon can develop in a few generations. - In young streams that lack pools, tributary streams or woody debris, salmon carcasses wash away, their nutrients lost. - Redd digging by female salmon disrupts ecological succession and allows the persistence of “fugitive” species that are good dispersers but poor competitors. - The presence of an up-stream lake is important in mitigating flooding, increasing water temperature and stabilizing the stream. However, many lakes become detached from the stream as watersheds develop. 
- The presence of woody debris in the stream is also very important in retaining salmon carcasses, creating habitat and increasing stream productivity. - The highest diversity of geomorphological habitat occurs in mid-aged streams (125-175 years). - Aside from the very earliest colonizers, most species persist as succession progresses, supporting the model of succession known as tolerance. The study has resulted in not only Sandy’s PhD dissertation, but six others (Flory, Monaghan, Phillips, McDermott, Veal, Klaar) and three more currently in the works (Malone, Clitherow and Sonderland). Twenty-seven papers have been published in scientific journals. Sandy is the only researcher to have presented at all four Glacier Bay science symposia.
Feldspar, any of a group of aluminosilicate minerals that contain calcium, sodium, or potassium. Feldspars make up more than half of Earth’s crust, and professional literature about them constitutes a large percentage of the literature of mineralogy. Other constituents of traditional ceramics are silica and feldspar. Silica is a major ingredient in refractories and whitewares. It is usually added as quartz sand, sandstone, or flint pebbles. The role of silica is that of a filler, used to impart “green” (that is,…

Of the more than 3,000 known mineral species, less than 0.1 percent make up the bulk of Earth’s crust and mantle. These and an additional score of minerals serve as the basis for naming most of the rocks exposed on Earth’s surface. Most of the less common rocks can be named by similarly identifying the additional half dozen minerals whose names are given in regular type in the table. Essentially all rocks can be named as professional geologists name them if, in addition, the presence of the minerals whose names are in italics is known.

Common rock-forming minerals (partial list):
- plagioclase feldspars
- clay minerals
- amphiboles
- epidotes

Each of the common rock-forming minerals can be identified on the basis of its chemical composition and its crystal structure (i.e., the arrangement of its constituent atoms and ions). The nonopaque minerals can also be identified by their optical properties. Fairly expensive equipment and sophisticated procedures, however, are required for such determinations. Therefore, it is fortunate that macroscopic examination, along with one or more tests, are sufficient to identify these minerals as they occur in most rocks. The following descriptions include basic chemical and structural data and the properties used in macroscopically based identifications. Optical data, which are not included in these descriptions, are available in mineralogy books. 
Two important rock-forming materials that are not minerals are major components of a few rocks. These are glass and macerals. Glass forms when magma (molten rock material) is quenched—i.e., cooled so rapidly that the constituent atoms do not have time to arrange themselves into the regular arrays characteristic of minerals. Natural glass is the major constituent of a few volcanic rocks—e.g., obsidian. Macerals are macerated bits of organic matter, primarily plant materials; one or more of the macerals are the chief original constituents of all the diverse coals and several other organic-rich rocks such as oil shales. In the classification of igneous rocks of the International Union of Geological Sciences (IUGS), the feldspars are treated as two groups: the alkali feldspars and the plagioclase feldspars. The alkali feldspars include orthoclase, microcline, sanidine, anorthoclase, and the two-phase intermixtures called perthite. The plagioclase feldspars include members of the albite-anorthite solid-solution series. Strictly speaking, however, albite is an alkali feldspar as well as a plagioclase feldspar.
This PhysicsCentral article, by Clifford Will of Washington University, discusses the importance of relativity for GPS. Both special and general relativity affect the accuracy of GPS, and without correcting for relativistic effects GPS wouldn't work. Will describes how the GPS system works, how the relativistic effects enter, and how the corrections are made. The article is for a general audience. American Physical Society. Einstein's Relativity and Everyday Life. College Park: American Physical Society, August 29, 2011. http://www.physicscentral.com/explore/writers/will.cfm (accessed 15 July 2018).
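The size of the relativistic corrections the article describes can be estimated from textbook formulas. The sketch below is a back-of-the-envelope estimate, not taken from the article: the constants and the GPS orbital radius are approximate round values, and the small effect of the ground clock's own rotation is ignored.

```python
import math

# Physical constants (SI), approximate values.
G = 6.674e-11        # gravitational constant
M = 5.972e24         # mass of Earth, kg
c = 2.998e8          # speed of light, m/s
R_earth = 6.371e6    # mean Earth radius, m
r_gps = 2.6571e7     # GPS orbital radius, m (~20,200 km altitude)

seconds_per_day = 86400.0

# Orbital speed of a GPS satellite in a circular orbit: v = sqrt(GM/r).
v = math.sqrt(G * M / r_gps)

# Special relativity: the moving satellite clock runs slow,
# fractionally by about v^2 / (2 c^2).
sr_loss = 0.5 * (v / c) ** 2 * seconds_per_day

# General relativity: a clock higher in Earth's gravity well runs fast,
# fractionally by the gravitational-potential difference over c^2.
gr_gain = (G * M / c**2) * (1.0 / R_earth - 1.0 / r_gps) * seconds_per_day

net_microseconds = (gr_gain - sr_loss) * 1e6
print(round(net_microseconds, 1))  # ~38.5 microseconds gained per day
```

Uncorrected, an offset of tens of microseconds per day would translate into position errors of kilometers per day, which is why the GPS clocks are adjusted for both effects.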
An enormous spectrum of light streams from the sun. We're most familiar with the conventional visible white light we see with our eyes from Earth, but that's just a fraction of what our closest star emits. NASA regularly watches the sun in numerous wavelengths because different wavelengths provide information about different temperatures and processes in space. Looking at all the wavelengths together helps to provide a complete picture of what's occurring on the sun over 92 million miles away – but no one has been able to focus on high energy X-rays from the sun until recently. In early December 2014, the Focusing Optics X-ray Solar Imager, or FOXSI, mission will launch aboard a sounding rocket for a 15-minute flight with very sensitive hard X-ray optics to observe the sun. This is FOXSI’s second flight – now with new and improved optics and detectors. FOXSI launched previously in November 2012. The mission is led by Säm Krucker of the University of California in Berkeley. Due to launch from White Sands Missile Range in New Mexico, on Dec. 9, 2014, FOXSI will be able to collect six minutes worth of data during the 15-minute flight. Sounding rockets provide a short trip for a relatively low price – yet allow scientists to gather robust data on various things, such as X-ray emission, which cannot be seen from the ground as they are blocked by Earth's atmosphere. "Hard X-rays are a signature of particles accelerating on the sun," said Steven Christe, the project scientist for FOXSI at NASA's Goddard Space Flight Center in Greenbelt, Maryland. "The sun accelerates particles when it releases magnetic energy. The biggest events like solar flares release giant bursts of energy and send particles flying, sometimes directed towards the Earth. But the sun is actually releasing energy all the time and that process is not well-understood." 
Scientists want to understand these energy releases both because they contribute to immense explosions on the sun that can send particles and energy toward Earth, but also because that energy helps heat up the sun's atmosphere to temperatures of millions of degrees -- 1,000 times hotter than the surface of the sun itself. Observing many wavelengths of light allows us to probe different temperatures within the sun’s atmosphere. Looking for hard X-rays is not only one of the best ways to measure the highest temperatures, up to tens of millions of degrees, but it also helps track accelerated particles. The sensitivity of the FOXSI instrument means the team can investigate very faint events on the sun, including tiny energy releases commonly known as nanoflares. Nanoflares are thought to occur constantly, but are so small that we can’t see them with current telescopes. Spotting hard X-rays with FOXSI would be a confirmation that these small flares do exist. Moreover, it would suggest that nanoflares behave in a similar fashion as larger flares, accelerating particles in much the same way that big flares do. "It's not necessarily true that these small flares accelerate particles. Perhaps they are just small heating events and the physics is different," said Christe. "That's one of the things we're trying to figure out." Viewing such faint events requires extra sensitive optics. FOXSI carries something called grazing-incidence optics -- built by NASA's Marshall Space Flight Center in Huntsville, Alabama -- that are unlike any previous ones launched into space for solar observations. Techniques to collect and observe high energy X-rays streaming from the sun have been hampered by the fact that these wavelengths cannot be focused with conventional lenses the way visible light can be. When X-rays encounter most materials, including a standard glass lens, they usually pass right through or are absorbed. 
Such lenses can't, therefore, be used to adjust the X-ray's path and focus the incoming beams. So X-ray telescopes have previously relied on imaging techniques that don’t use focusing. This is effective when looking at a single bright event on the sun, such as the large burst of X-rays from a solar flare, but it doesn't work as well when searching for many faint events simultaneously. The FOXSI instrument makes use of mirrors that can successfully cause X-rays to reflect -- as long as the mirrors are nearly parallel to the incoming X-rays. Several of these mirrors in combination help collect the X-ray light before funneling it to the detector. This focusing makes faint events appear brighter and crisper. The FOXSI launch is scheduled for Dec. 9 between 2 and 3 pm EST. The shutter door on the optics system opens up after the payload reaches an altitude of 90 miles, one minute after launch. FOXSI then begins six minutes of observing the sun. After the observations, the door on the optics system closes. The rocket deploys a parachute and the instruments float down to the ground in the hopes of being used again. The FOXSI mission made it through this process successfully once before, when it flew in 2012. On its first flight, the telescope successfully viewed a flare in progress. On this second flight, the team has updated some of the optics to be more sensitive and has removed insulation blankets that blocked some of the X-rays during the last flight. They also upgraded some of the detectors with new detectors built by the Japanese Aerospace Exploration Agency using a new detector material. Last time they used silicon and this time they are using cadmium telluride. Such refurbishing illustrates a key value of sounding rockets: Making adjustments to the instruments on relatively low-cost flights has great benefit for future missions. By testing FOXSI on a sounding rocket, it can be perfected to use on a larger satellite with even larger, more sensitive optics. 
In addition to developing technology, these low-cost missions help train students and young scientists. “Sounding rockets are a great way for students to be heavily involved in every aspect of a space mission, from electronics testing to observational planning,” said Lindsay Glesener, FOXSI’s project manager at the University of California in Berkeley, who was also a graduate student during FOXSI’s first flight. “Development on low-cost missions is the way that scientists, engineers, and even the telescopes get prepared to work on an eventual satellite mission.” FOXSI is a collaboration between the United States and the Japanese Aerospace Exploration Agency. FOXSI is supported through NASA’s Sounding Rocket Program at the Goddard Space Flight Center’s Wallops Flight Facility in Virginia. NASA’s Heliophysics Division manages the sounding rocket program. Karen C. Fox NASA's Goddard Space Flight Center, Greenbelt, Maryland Susan Hendrix | EurekAlert!
Ethereum is a programmable blockchain. Rather than give users a set of pre-defined operations (e.g. bitcoin transactions), Ethereum allows users to create their own operations. In this way, it serves as a platform for many different types of decentralized blockchain applications, including but not limited to cryptocurrencies. geth is the official golang implementation of the Ethereum protocol.

Source Files
- _link (142 Bytes, about 1 year ago)
- _service (57 Bytes, about 1 year ago)
- geth.changes (146 Bytes, over 2 years ago)
- geth.spec (1.75 KB, about 1 year ago)
After starting out in our primate ancestors 65 million years ago, one type of repetitive DNA called an Alu retrotransposon now takes up 10 percent of our genome, with about one million copies. Roughly every 20th newborn baby has a new Alu retrotransposon somewhere in its DNA, scientists have estimated. "I think of them as molecular machines that can copy themselves and move around the genome," says Scott Devine, PhD, assistant professor of biochemistry at Emory University School of Medicine. "These elements pose a major threat to our genetic information, because they can damage genes when they jump into them, leading to altered traits or diseases such as cancers." As mutations gradually blur the features of older Alu elements, some become unable to make copies of themselves. To identify the Alu retrotransposons that are still capable of moving around, Devine and graduate student E. Andrew Bennett, who is first author, divided them into families and tested a representative of each family in the laboratory. The results are published online and are scheduled to appear in the December issue of the journal Genome Research. Laboratories at Emory, the University of Michigan and the Max Planck Institute for Developmental Biology contributed to the study. "We wanted to see what dictates whether an Alu element will be mobile," Devine says. "That way we could predict which Alu copies are more likely to damage our genetic information. This information will become very useful as we enter the age of personalized genomics, allowing us to make predictions about the future health of individuals." Alu elements get their name because they usually include the recognition site for the enzyme Alu I (AGCT), a common laboratory tool for cutting DNA into pieces. Geneticists have already identified over 40 Alu elements that interrupt genes and cause human diseases, including neurofibromatosis, hemophilia and breast cancer, Devine says. 
Bennett and Devine tested Alu elements by putting each of 89 family representatives on a small circle of DNA next to a gene that allows human cells to resist a poisonous drug. They then introduced the DNA circles into cells in culture dishes. If the Alu element could jump, carrying the drug-resistance gene onto the cells' chromosomes, the cells survived the drug. The authors conclude that around 10,000 Alu elements are still capable of jumping around, with 37,000 having at least a low level of activity. The youngest ones were all capable of moving around, and the oldest ones were all inactive. "These results mean that Alu is by far the most abundant class of jumping genes and poses the greatest transposon-mediated threat to our genomes," Devine says. The term retrotransposons comes from how they replicate: first, the DNA is transcribed (copied) into RNA, and the RNA is reverse-transcribed into DNA again. Depending on the type of cell, if an Alu element is located near genes that have been shut off, the Alu element is less likely to get transcribed. That means the number of Alu elements that do move around is probably slightly lower. The team has constructed a database of Alu elements to compile additional information about each family. Devine says an enzyme that is part of the normal machinery of the cell transcribes Alu elements, but they actually depend on another type of repetitive element, called L1, to make the enzyme that can reverse-transcribe them. Scientists think Alu elements "hijack" part of the cell during the copying process. In the cell, Alu RNA is thought to resemble another type of RNA that guides protein production. The team's tests indicate that Alu elements that can best mimic that RNA, called the signal recognition particle, are more likely to be active. "Alus are really parasites of a parasite," Devine says. "They've cleverly taken advantage of another element's machinery to survive." Holly Korschun | EurekAlert! 
Picture Structured Document Tag. When the object is serialized out as xml, its qualified name is w:picture.

Assembly: DocumentFormat.OpenXml (in DocumentFormat.OpenXml.dll)

'Declaration
Public Class SdtContentPicture _
    Inherits EmptyType

'Usage
Dim instance As SdtContentPicture

public class SdtContentPicture : EmptyType

[ISO/IEC 29500-1 1st Edition] picture (Picture Structured Document Tag)

This element specifies that the parent structured document tag shall be a picture when displayed in the document. This setting specifies that the behavior for this structured document tag shall be as follows:

- The contents shall always be restricted to a single picture using the DrawingML (§20.1) syntax

As well, the structured document tag shall satisfy the following constraints or the document shall be considered non-conformant:

- The contents shall only be a single picture using the DrawingML (§20.1) syntax
- The contents shall not contain more than a single paragraph or table cell and shall not contain a table row or table

[Example: Consider the following structured document tag:

<w:sdt>
  <w:sdtPr>
    …
    <w:picture/>
  </w:sdtPr>
  …
</w:sdt>

The picture element in this structured document tag's properties specifies that the type of structured document tag is a picture. end example]

[Note: The W3C XML Schema definition of this element’s content model (CT_Empty) is located in §A.1. end note] © ISO/IEC29500: 2008.

Any public static (Shared in Visual Basic) members of this type are thread safe. Any instance members are not guaranteed to be thread safe.
What Does Half-Life Mean?

What are half-lives? And what do they have to do with measuring the age of the solar system and predicting the effects of a morning cup of coffee? Keep on reading to find out!

If you drink two cups of coffee at 8 a.m., how much caffeine will be left in your body that night at 8 p.m.? Certainly after 12 hours it can’t be that much, right? Or could it be? Maybe even enough to mess with your sleep? I’m not going to spoil the answer, but let’s just say that after learning about the concept of a "half-life" today, you might be a little surprised. So, what is a half-life? What’s the math behind it? And what does it have to do with the amount of caffeine left in your body at the end of the day … and even with calculating the ages of archeological artifacts and the entire solar system? Let’s find out.

What Is a Half-Life?

Some types of atoms do a really weird thing—they spontaneously decay into other types of atoms. A bit more precisely, some unstable isotopes of certain atoms (meaning certain versions of certain atoms that have certain, shall we say, non-standard numbers of neutrons in their nuclei) will spontaneously turn into different elements and in so doing release other particles and light along the way. The spontaneous nature of this decay means that it’s impossible to predict exactly when any individual atom in a huge pile of atoms will decay. But even if we don’t know when any one particular atom will decay, we do know the overall average rate at which atoms in the pile will decay. In other words, in a big group of the same type of radioactive atoms, there is a certain amount of time—called the half-life—that will pass before more-or-less half of the atoms will have decayed. So if there are 1 billion atoms to start with, there will on average be 500 million atoms left after a time equal to their half-life.

What Is Exponential Decay? 
This type of decay—in which an average of half the members of a population disappear in a half-life of time … and then another half disappear in the next half-life … and then half of whatever is remaining disappear in the next half-life … and so on forever—is known as exponential decay. We can represent this type of decay in terms of a fairly simple formula:

N_final = N_initial • (1/2)^(t / t_half)

In other words, if we start with a population of atoms—or, as we’ll see in a moment, anything else that decays exponentially—that we’ll call N_initial (that’s the initial number of those atoms), and we then multiply it by 1/2 raised to the power t / t_half, then we can calculate the number of atoms we end up with. But what are t and t_half? Well, t is simply the amount of time that has passed since we counted the N_initial atoms we began with, and t_half is the half-life for the decay of those atoms.
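The formula is simple enough to try on the opening coffee question. The sketch below assumes roughly 100 mg of caffeine per cup and a caffeine half-life of about 6 hours; published estimates vary (commonly quoted as 4 to 6 hours in adults), so treat the numbers as illustrative:

```python
def remaining(initial, elapsed, half_life):
    """Exponential decay: N_final = N_initial * (1/2) ** (t / t_half)."""
    return initial * 0.5 ** (elapsed / half_life)

# Two cups of coffee at 8 a.m. -- assume ~100 mg of caffeine each.
caffeine_8am = 200.0

# By 8 p.m., 12 hours (two assumed 6-hour half-lives) have passed.
caffeine_8pm = remaining(caffeine_8am, elapsed=12, half_life=6)
print(round(caffeine_8pm))  # 50
```

So two half-lives leave a quarter of the original dose: around 50 mg of caffeine, roughly half a cup's worth, still circulating at bedtime.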
This tutorial explains how to find the method that generates a segmentation fault error in a large program. A segmentation fault is a common condition that causes programs to crash; crashes are often associated with a file named core. Segmentation faults occur when a program tries to read or write an illegal memory location, or tries to access a memory location for which it has not been granted permission.

The main causes of segmentation fault errors are:

Accessing elements out of the bounds of an array – an array is declared with a fixed size, and the program refers to an element with an index less than the lower bound or greater than the upper bound of the array. Make sure that iterators stay within the array's bounds.

Accessing an uninitialized pointer – the program accesses values through a pointer that points to an invalid location; that is, the pointer variable is either not initialized or not pointing to a valid address before the values are accessed. Make sure all pointers are initialized to a valid area of memory.

Incorrect use of the "address of" operator and the "dereferencing" operator – the "address of" operator (&) returns the memory address of a variable. If you forget to use "&" with a variable in a scanf call, the program may crash: scanf requires the addresses of the variables into which it reads values from the console. Make sure you use the & and * operators correctly in the program.

It is always difficult to find the exact location of a segmentation fault in a large program, so debugging the program is the reliable way to identify the method that generates the error.
Steps to debug a C or C++ program in Ubuntu

To install gdb on Ubuntu:
sudo apt-get install libc6-dbg gdb valgrind

To compile a C / C++ program with debug symbols:
gcc -g testfile.cc -o testfile

To run the debugger:
gdb ./testfile

To run the program inside gdb:
run

To print the stack trace after the crash:
bt

The stack trace shows the name of the method where the segmentation fault occurred, so you can check that method against the causes of the error listed above.

What is a Stack Smashing Error?

As we know, the memory structure used in many programming languages to store state – variable values, for instance – is known as the "stack." Program control flow is also managed by the stack. A specific amount of space is allocated to the stack; if data exceeds the allocated space, the additional data can overwrite other data stored on the stack and cause problems for other variables and program control flow. That is the cause of a stack smashing error. Debugging can be used to track down the variable that overflows the stack space.

This video tutorial shows how to run the debugger to track segmentation fault and stack smashing errors. Here is the TestSegFault.cpp_1.zip of the program file in the tutorial.
A transverse wave is a moving wave that consists of oscillations occurring perpendicular (right angled) to the direction of energy transfer (or the propagation of the wave). If a transverse wave is moving in the positive x-direction, its oscillations are in up and down directions that lie in the y–z plane. Light is an example of a transverse wave, while sound is a longitudinal wave. A ripple in a pond and a wave on a string are easily visualized as transverse waves. Transverse waves are waves that are oscillating perpendicularly (at a right angle) to the direction of propagation. If you anchor one end of a ribbon or string and hold the other end in your hand, you can create transverse waves by moving your hand up and down. Notice though, that you can also launch waves by moving your hand side-to-side. This is an important point. There are two independent directions in which wave motion can occur. In this case, these motions are the y and x directions mentioned above, while the wave propagates away in the z direction. The other type of wave is the longitudinal wave, which oscillates in the direction of its propagation. Continuing with the string example, if you move your hand in a clockwise circle, you will launch waves in the form of a left-handed helix as they propagate away. Similarly, if you move your hand in a counter-clockwise circle, a right-handed helix will form. These phenomena of simultaneous motion in two directions go beyond the kinds of waves we observe on the surface of water - in that a wave on a string can be two-dimensional. Two-dimensional transverse waves exhibit a phenomenon called polarization. A wave produced by moving your hand in a straight line, up and down for instance, is a linearly polarized wave, a special case. A wave produced by moving your hand in a circle or an ellipse is a circularly or elliptically polarized wave, two other special cases. 
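The two independent transverse components can be written down directly. The small sketch below (illustrative, not from the article) shows that two equal-amplitude components 90 degrees out of phase trace a circle of constant radius, i.e. circular polarization, while in-phase components stay on a straight line, i.e. linear polarization:

```python
import math

def transverse_displacement(t, amp_y, amp_z, phase_z):
    """Displacement of a point on a string, for a wave travelling along x:
    two independent transverse components oscillating in the y-z plane."""
    y = amp_y * math.cos(t)
    z = amp_z * math.cos(t + phase_z)
    return y, z

# Circular polarization: equal amplitudes, components 90 degrees out of
# phase -- the displacement traces a circle of constant radius 1.
for t in (0.0, 0.7, 1.9, 3.1):
    y, z = transverse_displacement(t, 1.0, 1.0, math.pi / 2)
    print(round(math.hypot(y, z), 6))  # 1.0 each time

# Linear polarization: components in phase -- the displacement moves
# back and forth along a fixed line through the origin.
y, z = transverse_displacement(0.4, 2.0, 3.0, 0.0)
print(round(y * 3.0 - z * 2.0, 6))  # 0.0 (point lies on the line 3y = 2z)
```

The same decomposition applies to light: the two transverse components of the electric field, with their relative amplitude and phase, determine the polarization state.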
Electromagnetic waves behave in this same way: they are also two-dimensional transverse waves. In optics, you can think of a ray of light as an idealized narrow beam of electromagnetic radiation. Rays are used to model the propagation of light through an optical system, by dividing the real light field up into discrete rays that can be computationally propagated through the system by the techniques of ray tracing. A light ray is a line or curve that is perpendicular to the light's wavefronts (and is therefore collinear with the wave vector). Light rays bend at the interface between two dissimilar media and may be curved in a medium in which the refractive index changes. Geometric optics describes how rays propagate through an optical system; ray theory does not, however, describe phenomena such as interference and diffraction, which require wave theory (involving the phase of the wave). The two-dimensional nature of transverse waves should not be confused with the two components of an electromagnetic wave, the electric and magnetic field components. Each of these fields, the electric and the magnetic, exhibits two-dimensional transverse wave behavior, just like the waves on a string.
- Longitudinal wave
- Luminiferous aether – the postulated medium for light waves; accepting that light was a transverse wave prompted a search for evidence of this physical medium
- Shear wave splitting
- Sinusoidal plane-wave solutions of the electromagnetic wave equation
- Transverse mode
Industrialization has been the hallmark of human progress. However, industry has become one of the biggest sources of environmental pollution. Industrial pollution is pollution that can be directly linked with industry, in contrast to other pollution sources. This form of pollution is one of the leading causes of pollution worldwide. Industries release a host of toxic gases into the atmosphere, and gallons of liquid waste into the seas and rivers. Some of the effluents percolate down and reach the ground water, polluting it to the extent that people can't use it for drinking or cooking. Besides adding to air pollution, the innumerable vehicles running on the roads add to noise pollution, which has led to an increase in stress, anxiety and problems related to hearing. First, let's talk about the origin of industrial pollution. "Since human beings started burning wood to stay warm, they have been releasing pollution into the environment. Not until the 18th century, though, when the Industrial revolution began, did humans begin to have a significant effect on Earth's environment" (Broderick). According to Broderick, the steam-powered factories needed an endless supply of burning wood to run. Therefore, coal and oil became the predominant sources of energy as industry spread across the world. However, the negative byproducts of burning coal and oil became obvious and alarming. These forms of pollution include radioactive waste, greenhouse gases, heavy metals and medical waste. One of the most harmful forms of industrial pollution is carbon dioxide gas released through the burning of coal and oil. Its increasing presence in the Earth's atmosphere is a direct cause of global warming. Today, many developed nations recognize the enormous harm done to the environment and to human beings by the release of excessive carbon dioxide.
They have found many ways to reduce carbon dioxide emissions, such as using filters on smokestacks to catch harmful substances and clean fumes before they reach the air, and burning natural gas instead of oil and coal. However, despite the efforts of developed countries, the lax industrial regulations of developing countries such as China and India have led to a continued increase in emissions. Broderick warned that possibly disastrous ecological consequences may occur within the next 100 years if carbon dioxide levels are not curbed. Urban industrial smog is another form of air pollution. Industrial furnaces, refineries, smelters, chemical plants and paper mills are the major contributors to smog. Large quantities of smog are emitted into the atmosphere from smokestacks with inadequate pollution controls. Another harmful form of industrial pollution is water pollution, caused by the dumping of industrial waste into waterways or by improper containment of waste, which leaks into groundwater and waterways. Industrial activities are a significant and growing cause of poor water quality. Industrial work involves the use of many different chemicals that can run off into water and pollute it. Metals and solvents from industrial work can pollute rivers and lakes. The result is poisoned aquatic life. Subsequently, birds, humans and other animals may be poisoned if they eat contaminated fish. According to Broderick, one of the most infamous examples is Minamata disease, a neurological disorder that occurred when residents of Minamata, Japan, ate fish containing large amounts of mercury obtained from a nearby chemical factory. Since the 1950s, more than 1,700 individuals have died as a direct result of mercury poisoning. In addition, the innumerable vehicles running on the roads not only emit a host of waste gases, but also cause noise pollution.
This form of pollution has not received as much attention as other types, such as air pollution or water pollution. However, noise pollution adversely affects the lives of millions of people. ...
Cited:
- "Air Pollution in Developing Countries." Web. January 5, 2013.
- Bose, Debopriya. "How Does Human Activity Affect the Environment?" February 28, 2012. Web. January 5, 2013.
- Faridi, Rashid. "Acid Rain: Causes, Effects and Solutions." May 10, 2008. Web. January 5, 2013.
- "Noise Pollution." July 19, 2011. Web. January 5, 2013. http://www.epa.gov/air/noise.html
- Smith, S.E. "What Is Industrial Pollution?" March 8, 2012. Web. January 5, 2013.
- Broderick, T. "What Are the Causes of Industrial Pollution?" January 23, 2012. Web. January 5, 2013.
Interactions: electromagnetic, weak, gravity
Mass: < 1×10−18 eV/c2
Electric charge: < 1×10−35 e
The photon is a type of elementary particle, the quantum of the electromagnetic field including electromagnetic radiation such as light, and the force carrier for the electromagnetic force (even when static via virtual particles). The photon has zero rest mass and always moves at the speed of light within a vacuum. Like all elementary particles, photons are currently best explained by quantum mechanics and exhibit wave–particle duality, showing properties of both waves and particles. For example, a single photon may be refracted by a lens and exhibit wave interference with itself, and it can behave as a particle with definite and finite measurable position or momentum, though not both at the same time. The photon's wave and quantum qualities are two observable aspects of a single phenomenon; they cannot be described by any mechanical model, and a representation of this dual property of light that assumes certain points on the wavefront to be the seat of the energy is not possible. The quanta in a light wave are not spatially localized. The modern concept of the photon was developed gradually by Albert Einstein in the early 20th century to explain experimental observations that did not fit the classical wave model of light. The benefit of the photon model was that it accounted for the frequency dependence of light's energy, and explained the ability of matter and electromagnetic radiation to be in thermal equilibrium. The photon model accounted for anomalous observations, including the properties of black-body radiation, that others (notably Max Planck) had tried to explain using semiclassical models. In those models, light was described by Maxwell's equations, but material objects emitted and absorbed light in quantized amounts (i.e., they change energy only by certain discrete amounts).
Although these semiclassical models contributed to the development of quantum mechanics, many further experiments, beginning with the phenomenon of Compton scattering of single photons by electrons, validated Einstein's hypothesis that light itself is quantized. In 1926, the optical physicist Frithiof Wolfers and the chemist Gilbert N. Lewis coined the name "photon" for these particles. After Arthur H. Compton won the Nobel Prize in 1927 for his scattering studies, most scientists accepted that light quanta have an independent existence, and the term "photon" was accepted. In the Standard Model of particle physics, photons and other elementary particles are described as a necessary consequence of physical laws having a certain symmetry at every point in spacetime. The intrinsic properties of particles, such as charge, mass, and spin, are determined by this gauge symmetry. The photon concept has led to momentous advances in experimental and theoretical physics, including lasers, Bose–Einstein condensation, quantum field theory, and the probabilistic interpretation of quantum mechanics. It has been applied to photochemistry, high-resolution microscopy, and measurements of molecular distances. Recently, photons have been studied as elements of quantum computers, and for applications in optical imaging and optical communication such as quantum cryptography.
The word quanta (singular quantum, Latin for how much) was used before 1900 to mean particles or amounts of different quantities, including electricity. In 1900, the German physicist Max Planck was studying black-body radiation: he suggested that the experimental observations would be explained if the energy carried by electromagnetic waves could only be released in "packets" of energy. In his 1901 article in Annalen der Physik he called these packets "energy elements". In 1905, Albert Einstein published a paper in which he proposed that many light-related phenomena—including black-body radiation and the photoelectric effect—would be better explained by modelling electromagnetic waves as consisting of spatially localised, discrete wave-packets. He called such a wave-packet the light quantum (German: das Lichtquant).[Note 1] The name photon derives from the Greek word for light, φῶς (transliterated phôs). Arthur Compton used photon in 1928, referring to Gilbert N. Lewis. The same name was used earlier, by the American physicist and psychologist Leonard T. Troland, who coined the word in 1916, in 1921 by the Irish physicist John Joly, in 1924 by the French physiologist René Wurmser (1890-1993) and in 1926 by the French physicist Frithiof Wolfers (1891-1971).
The name was suggested initially as a unit related to the illumination of the eye and the resulting sensation of light and was used later in a physiological context. Although Wolfers's and Lewis's theories were contradicted by many experiments and never accepted, the new name was adopted very soon by most physicists after Compton used it.[Note 2] In physics, a photon is usually denoted by the symbol γ (the Greek letter gamma). This symbol for the photon probably derives from gamma rays, which were discovered in 1900 by Paul Villard, named by Ernest Rutherford in 1903, and shown to be a form of electromagnetic radiation in 1914 by Rutherford and Edward Andrade. In chemistry and optical engineering, photons are usually symbolized by hν, which is the photon energy, where h is the Planck constant and the Greek letter ν (nu) is the photon's frequency. Much less commonly, the photon can be symbolized by hf, where its frequency is denoted by f. A photon is massless,[Note 3] has no electric charge, and is a stable particle. A photon has two possible polarization states. In the momentum representation of the photon, which is preferred in quantum field theory, a photon is described by its wave vector, which determines its wavelength λ and its direction of propagation. A photon's wave vector may not be zero and can be represented either as a spatial 3-vector or as a (relativistic) four-vector; in the latter case it belongs to the light cone. Different signs of the four-vector denote different circular polarizations, but in the 3-vector representation one should account for the polarization state separately; it actually is a spin quantum number. In both cases the space of possible wave vectors is three-dimensional. The photon is the gauge boson for electromagnetism, and therefore all other quantum numbers of the photon (such as lepton number, baryon number, and flavour quantum numbers) are zero.
Also, the photon does not obey the Pauli exclusion principle. Photons are emitted in many natural processes. For example, when a charge is accelerated it emits synchrotron radiation. During a molecular, atomic or nuclear transition to a lower energy level, photons of various energy will be emitted, ranging from radio waves to gamma rays. Photons can also be emitted when a particle and its corresponding antiparticle are annihilated (for example, electron–positron annihilation). In empty space, the photon moves at c (the speed of light) and its energy and momentum are related by E = pc, where p is the magnitude of the momentum vector p. This derives from the following relativistic relation, with m = 0:

E² = p²c² + m²c⁴.

Since p points in the direction of the photon's propagation, the magnitude of the momentum is

p = E/c = hν/c = h/λ.

The photon also carries a quantity called spin angular momentum that does not depend on its frequency. The magnitude of its spin is √2 ħ, and the component measured along its direction of motion, its helicity, must be ±ħ. These two possible helicities, called right-handed and left-handed, correspond to the two possible circular polarization states of the photon. To illustrate the significance of these formulae, the annihilation of a particle with its antiparticle in free space must result in the creation of at least two photons for the following reason. In the center of momentum frame, the colliding antiparticles have no net momentum, whereas a single photon always has momentum (since, as we have seen, it is determined by the photon's frequency or wavelength, which cannot be zero). Hence, conservation of momentum (or equivalently, translational invariance) requires that at least two photons are created, with zero net momentum.
(However, if the system interacts with another particle or field, the annihilation can produce one photon: when a positron annihilates with a bound atomic electron, only one photon may be emitted, because the nuclear Coulomb field breaks translational symmetry.) The energy of the two photons, or, equivalently, their frequency, may be determined from conservation of four-momentum. Seen another way, the photon can be considered as its own antiparticle. The reverse process, pair production, is the dominant mechanism by which high-energy photons such as gamma rays lose energy while passing through matter. That process is the reverse of "annihilation to one photon" allowed in the electric field of an atomic nucleus. The classical formulae for the energy and momentum of electromagnetic radiation can be re-expressed in terms of photon events. For example, the pressure of electromagnetic radiation on an object derives from the transfer of photon momentum per unit time and unit area to that object, since pressure is force per unit area and force is the change in momentum per unit time. Each photon carries two distinct and independent forms of angular momentum of light. The spin angular momentum of light of a particular photon is always either +ħ or −ħ. The light orbital angular momentum of a particular photon can be any integer multiple Nħ of ħ, including zero.
Experimental checks on photon mass
Current commonly accepted physical theories imply or assume the photon to be strictly massless. If the photon is not a strictly massless particle, it would not move at the exact speed of light, c, in vacuum. Its speed would be lower and depend on its frequency. Relativity would be unaffected by this; the so-called speed of light, c, would then not be the actual speed at which light moves, but a constant of nature which is the upper bound on speed that any object could theoretically attain in space-time.
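The kinematic relations in play here (E = pc and p = h/λ for a massless photon, and the two-photon annihilation argument) can be checked with a short calculation. A minimal sketch with CODATA constants; the 532 nm wavelength is an arbitrary illustrative choice:

```python
h = 6.62607015e-34      # Planck constant, J*s
c = 2.99792458e8        # speed of light in vacuum, m/s
eV = 1.602176634e-19    # J per electronvolt
m_e = 9.1093837015e-31  # electron mass, kg

# Photon kinematics: E = h*nu = h*c/lambda and, since m = 0, p = E/c = h/lambda.
wavelength = 532e-9          # m, an arbitrary green-laser photon
E = h * c / wavelength       # photon energy, J (about 2.33 eV)
p = E / c                    # momentum magnitude, kg*m/s
assert abs(p * wavelength - h) < 1e-45   # consistent with p = h/lambda

# Electron-positron annihilation at rest: zero net momentum forces two
# back-to-back photons, each carrying half the pair's rest energy.
E_photon = (2 * m_e * c**2) / 2
print(E_photon / eV / 1e3)   # about 511 keV per photon
```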
Thus, it would still be the speed of space-time ripples (gravitational waves and gravitons), but it would not be the speed of photons. If a photon did have non-zero mass, there would be other effects as well. Coulomb's law would be modified and the electromagnetic field would have an extra physical degree of freedom. These effects yield more sensitive experimental probes of the photon mass than the frequency dependence of the speed of light. If Coulomb's law is not exactly valid, then that would allow an electric field to exist within a hollow conductor when it is subjected to an external electric field. This allows one to test Coulomb's law to very high precision. A null result of such an experiment has set a limit of m ≲ 10−14 eV/c2. Sharper upper limits on the photon mass have been obtained in experiments designed to detect effects caused by the galactic vector potential. Although the galactic vector potential is very large because the galactic magnetic field exists on very great length scales, only the magnetic field would be observable if the photon is massless. In the case that the photon has mass, the mass term would affect the galactic plasma. The fact that no such effects are seen implies an upper bound on the photon mass of m < 3×10−27 eV/c2. The galactic vector potential can also be probed directly by measuring the torque exerted on a magnetized ring. Such methods were used to obtain the sharper upper limit of 10−18 eV/c2 (the equivalent of 1.07×10−27 atomic mass units) given by the Particle Data Group. These sharp limits from the non-observation of the effects caused by the galactic vector potential have been shown to be model dependent. If the photon mass is generated via the Higgs mechanism then the upper limit of m ≲ 10−14 eV/c2 from the test of Coulomb's law is valid. In most theories up to the eighteenth century, light was pictured as being made up of particles.
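To see why these mass bounds leave any frequency dependence of the speed of light unobservable, note that a photon of mass m and energy E would travel at v = c·√(1 − (mc²/E)²). A rough sketch (the 2.33 eV optical photon is an arbitrary choice; the first-order expansion of 1 − v/c avoids floating-point underflow):

```python
c = 2.99792458e8   # m/s
m_c2 = 1e-18       # eV: hypothetical photon rest energy, at the quoted bound
E = 2.33           # eV: an arbitrary optical photon

ratio = m_c2 / E                      # mc^2 / E, about 4.3e-19
fractional_slowdown = 0.5 * ratio**2  # first-order expansion of 1 - v/c
print(fractional_slowdown)            # ~9e-38: far below any measurable level
```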
Since particle models cannot easily account for the refraction, diffraction and birefringence of light, wave theories of light were proposed by René Descartes (1637), Robert Hooke (1665), and Christiaan Huygens (1678); however, particle models remained dominant, chiefly due to the influence of Isaac Newton. In the early nineteenth century, Thomas Young and Augustin-Jean Fresnel clearly demonstrated the interference and diffraction of light, and by 1850 wave models were generally accepted. In 1865, James Clerk Maxwell's prediction that light was an electromagnetic wave—which was confirmed experimentally in 1888 by Heinrich Hertz's detection of radio waves—seemed to be the final blow to particle models of light. The Maxwell wave theory, however, does not account for all properties of light. The Maxwell theory predicts that the energy of a light wave depends only on its intensity, not on its frequency; nevertheless, several independent types of experiments show that the energy imparted by light to atoms depends only on the light's frequency, not on its intensity. For example, some chemical reactions are provoked only by light of frequency higher than a certain threshold; light of frequency lower than the threshold, no matter how intense, does not initiate the reaction. Similarly, electrons can be ejected from a metal plate by shining light of sufficiently high frequency on it (the photoelectric effect); the energy of the ejected electron is related only to the light's frequency, not to its intensity.[Note 4] At the same time, investigations of blackbody radiation carried out over four decades (1860–1900) by various researchers culminated in Max Planck's hypothesis that the energy of any system that absorbs or emits electromagnetic radiation of frequency ν is an integer multiple of an energy quantum E = hν.
As shown by Albert Einstein, some form of energy quantization must be assumed to account for the thermal equilibrium observed between matter and electromagnetic radiation; for this explanation of the photoelectric effect, Einstein received the 1921 Nobel Prize in physics. Since the Maxwell theory of light allows for all possible energies of electromagnetic radiation, most physicists assumed initially that the energy quantization resulted from some unknown constraint on the matter that absorbs or emits the radiation. In 1905, Einstein was the first to propose that energy quantization was a property of electromagnetic radiation itself. Although he accepted the validity of Maxwell's theory, Einstein pointed out that many anomalous experiments could be explained if the energy of a Maxwellian light wave were localized into point-like quanta that move independently of one another, even if the wave itself is spread continuously over space. In 1909 and 1916, Einstein showed that, if Planck's law of black-body radiation is accepted, the energy quanta must also carry momentum p = h/λ, making them full-fledged particles. This photon momentum was observed experimentally by Arthur Compton, for which he received the Nobel Prize in 1927. The pivotal question was then: how to unify Maxwell's wave theory of light with its experimentally observed particle nature? The answer to this question occupied Albert Einstein for the rest of his life, and was solved in quantum electrodynamics and its successor, the Standard Model (see Second quantization and The photon as a gauge boson, below).
Einstein's light quantum
Unlike Planck, Einstein entertained the possibility that there might be actual physical quanta of light—what we now call photons. He noticed that a light quantum with energy proportional to its frequency would explain a number of troubling puzzles and paradoxes, including an unpublished law by Stokes, the ultraviolet catastrophe, and the photoelectric effect.
Stokes's law said simply that the frequency of fluorescent light cannot be greater than the frequency of the light (usually ultraviolet) inducing it. Einstein eliminated the ultraviolet catastrophe by imagining a gas of photons behaving like a gas of electrons that he had previously considered. He was advised by a colleague to be careful how he wrote up this paper, in order to not challenge Planck, a powerful figure in physics, too directly, and indeed the warning was justified, as Planck never forgave him for writing it. Einstein's 1905 predictions were verified experimentally in several ways in the first two decades of the 20th century, as recounted in Robert Millikan's Nobel lecture. However, before Compton's experiment showed that photons carried momentum proportional to their wave number (1922), most physicists were reluctant to believe that electromagnetic radiation itself might be particulate. (See, for example, the Nobel lectures of Wien, Planck and Millikan.) Instead, there was a widespread belief that energy quantization resulted from some unknown constraint on the matter that absorbed or emitted radiation. Attitudes changed over time. In part, the change can be traced to experiments such as Compton scattering, where it was much more difficult not to ascribe quantization to light itself to explain the observed results. Even after Compton's experiment, Niels Bohr, Hendrik Kramers and John Slater made one last attempt to preserve the Maxwellian continuous electromagnetic field model of light, the so-called BKS model. To account for the data then available, two drastic hypotheses had to be made: - Energy and momentum are conserved only on the average in interactions between matter and radiation, but not in elementary processes such as absorption and emission. This allows one to reconcile the discontinuously changing energy of the atom (the jump between energy states) with the continuous release of energy as radiation. - Causality is abandoned. 
For example, spontaneous emissions are merely emissions stimulated by a "virtual" electromagnetic field. However, refined Compton experiments showed that energy–momentum is conserved extraordinarily well in elementary processes, and also that the jolting of the electron and the generation of a new photon in Compton scattering obey causality to within 10 ps. Accordingly, Bohr and his co-workers gave their model "as honorable a funeral as possible". Nevertheless, the failures of the BKS model inspired Werner Heisenberg in his development of matrix mechanics. A few physicists persisted in developing semiclassical models in which electromagnetic radiation is not quantized, but matter appears to obey the laws of quantum mechanics. Although the evidence from chemical and physical experiments for the existence of photons was overwhelming by the 1970s, this evidence could not be considered as absolutely definitive, since it relied on the interaction of light with matter, and a sufficiently complete theory of matter could in principle account for the evidence. Nevertheless, all semiclassical theories were refuted definitively in the 1970s and 1980s by photon-correlation experiments.[Note 5] Hence, Einstein's hypothesis that quantization is a property of light itself is considered to be proven.
Wave–particle duality and uncertainty principles
Photons, like all quantum objects, exhibit wave-like and particle-like properties. Their dual wave–particle nature can be difficult to visualize. The photon displays clearly wave-like phenomena such as diffraction and interference on the length scale of its wavelength. For example, a single photon passing through a double-slit experiment exhibits interference phenomena, but only if no measurement was made at the slit; it lands on the screen with a probability distribution given by its interference pattern determined by Maxwell's equations.
However, experiments confirm that the photon is not a short pulse of electromagnetic radiation; it does not spread out as it propagates, nor does it divide when it encounters a beam splitter. Rather, the photon seems to be a point-like particle since it is absorbed or emitted as a whole by arbitrarily small systems, systems much smaller than its wavelength, such as an atomic nucleus (≈10−15 m across) or even the point-like electron. Nevertheless, the photon is not a point-like particle whose trajectory is shaped probabilistically by the electromagnetic field, as conceived by Einstein and others; that hypothesis was also refuted by the photon-correlation experiments cited above. According to our present understanding, the electromagnetic field itself is produced by photons, which in turn result from a local gauge symmetry and the laws of quantum field theory (see the Second quantization and Gauge boson sections below). A key element of quantum mechanics is Heisenberg's uncertainty principle, which forbids the simultaneous measurement of the position and momentum of a particle along the same direction. Remarkably, the uncertainty principle for charged, material particles requires the quantization of light into photons, and even the frequency dependence of the photon's energy and momentum. An elegant illustration of the uncertainty principle is Heisenberg's thought experiment for locating an electron with an ideal microscope. The position of the electron can be determined to within the resolving power of the microscope, which is given by a formula from classical optics,

Δx ≈ λ / sin θ,

where θ is the aperture angle of the microscope and λ is the wavelength of the light used to observe the electron. Thus, the position uncertainty can be made arbitrarily small by reducing the wavelength λ. Even if the momentum of the electron is initially known, the light impinging on the electron will give it a momentum "kick" of some unknown amount, rendering the momentum of the electron uncertain.
If light were not quantized into photons, the uncertainty could be made arbitrarily small by reducing the light's intensity. In that case, since the wavelength and intensity of light can be varied independently, one could simultaneously determine the position and momentum to arbitrarily high accuracy, violating the uncertainty principle. By contrast, Einstein's formula for photon momentum preserves the uncertainty principle; since the photon is scattered anywhere within the aperture, the uncertainty of the momentum transferred equals

Δp ≈ (h/λ) sin θ,

giving the product Δx Δp ≈ h, which is Heisenberg's uncertainty principle. Thus, the entire world is quantized; both matter and fields must obey a consistent set of quantum laws, if either one is to be quantized. The analogous uncertainty principle for photons forbids the simultaneous measurement of the number N of photons (see Fock state and the Second quantization section below) in an electromagnetic wave and the phase φ of that wave: ΔN Δφ > 1. Both photons and electrons create analogous interference patterns when passed through a double-slit experiment. For photons, this corresponds to the interference of a Maxwell light wave whereas, for material particles (electrons), this corresponds to the interference of the Schrödinger wave equation. Although this similarity might suggest that Maxwell's equations describing the photon's electromagnetic wave are simply Schrödinger's equation for photons, most physicists do not agree. For one thing, they are mathematically different; most obviously, Schrödinger's one equation for the electron solves for a complex field, whereas Maxwell's four equations solve for real fields. More generally, the normal concept of a Schrödinger probability wave function cannot be applied to photons. As photons are massless, they cannot be localized without being destroyed; technically, photons cannot have a position eigenstate |r⟩, and, thus, the normal Heisenberg uncertainty principle does not pertain to photons.
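The wavelength independence of the microscope argument is easy to verify numerically: the position resolution Δx ≈ λ/sin θ and the momentum kick Δp ≈ (h/λ)·sin θ always multiply to about h, whatever λ is chosen. A small sketch (aperture angle and wavelengths are arbitrary illustrative values):

```python
import math

h = 6.62607015e-34        # Planck constant, J*s
theta = math.radians(30)  # arbitrary aperture angle

for lam in (1e-10, 5e-7, 1e-2):       # X-ray, visible, microwave (m)
    dx = lam / math.sin(theta)        # position resolution
    dp = (h / lam) * math.sin(theta)  # photon momentum kick
    assert abs(dx * dp - h) < 1e-45   # product is ~h, independent of lam
```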
A few substitute wave functions have been suggested for the photon, but they have not come into general use. Instead, physicists generally accept the second-quantized theory of photons described below, quantum electrodynamics, in which photons are quantized excitations of electromagnetic modes. Another interpretation, which avoids duality, is the De Broglie–Bohm theory, known also as the pilot-wave model. In that theory, the photon is both wave and particle. "This idea seems to me so natural and simple, to resolve the wave-particle dilemma in such a clear and ordinary way, that it is a great mystery to me that it was so generally ignored" (J. S. Bell).
Bose–Einstein model of a photon gas
In 1924, Satyendra Nath Bose derived Planck's law of black-body radiation without using any electromagnetism, but rather by using a modification of coarse-grained counting of phase space. Einstein showed that this modification is equivalent to assuming that photons are rigorously identical and that it implied a "mysterious non-local interaction", now understood as the requirement for a symmetric quantum mechanical state. This work led to the concept of coherent states and the development of the laser. In the same papers, Einstein extended Bose's formalism to material particles (bosons) and predicted that they would condense into their lowest quantum state at low enough temperatures; this Bose–Einstein condensation was observed experimentally in 1995. It was later used by Lene Hau to slow, and then completely stop, light in 1999 and 2001. The modern view on this is that photons are, by virtue of their integer spin, bosons (as opposed to fermions with half-integer spin). By the spin-statistics theorem, all bosons obey Bose–Einstein statistics (whereas all fermions obey Fermi–Dirac statistics).
Stimulated and spontaneous emission

In 1916, Albert Einstein showed that Planck's radiation law could be derived from a semi-classical, statistical treatment of photons and atoms, which implies a link between the rates at which atoms emit and absorb photons. The condition follows from the assumption that the emission and absorption of radiation by the atoms are independent of each other, and that thermal equilibrium is preserved by way of the radiation's interaction with the atoms. Consider a cavity in thermal equilibrium with all parts of itself and filled with electromagnetic radiation, and suppose that the atoms can emit and absorb that radiation. Thermal equilibrium requires that the energy density ρ(ν) of photons with frequency ν (which is proportional to their number density) is, on average, constant in time; hence, the rate at which photons of any particular frequency are emitted must equal the rate at which they are absorbed. Einstein began by postulating simple proportionality relations for the different reaction rates involved. In his model, the rate R_ji for a system to absorb a photon of frequency ν and transition from a lower energy E_j to a higher energy E_i is proportional to the number N_j of atoms with energy E_j and to the energy density ρ(ν) of ambient photons of that frequency: R_ji = N_j B_ji ρ(ν), where B_ji is the rate constant for absorption. For the reverse process, there are two possibilities: spontaneous emission of a photon, or the emission of a photon initiated by the interaction of the atom with a passing photon and the return of the atom to the lower-energy state. Following Einstein's approach, the corresponding rate R_ij for the emission of photons of frequency ν and transition from a higher energy E_i to a lower energy E_j is R_ij = N_i A_ij + N_i B_ij ρ(ν), where A_ij is the rate constant for emitting a photon spontaneously, and B_ij is the rate constant for emissions in response to ambient photons (induced or stimulated emission).
In thermodynamic equilibrium, the number of atoms in state i and those in state j must, on average, be constant; hence, the rates R_ji and R_ij must be equal. Also, by arguments analogous to the derivation of Boltzmann statistics, the ratio of N_i and N_j is N_i/N_j = (g_i/g_j) exp((E_j − E_i)/kT), where g_i and g_j are the degeneracies of the states i and j, respectively, E_i and E_j their energies, k the Boltzmann constant and T the system's temperature. From this, it is readily derived that g_i B_ij = g_j B_ji and A_ij = (8πhν³/c³) B_ij. The A_ij and B_ij are collectively known as the Einstein coefficients. Einstein could not fully justify his rate equations, but claimed that it should be possible to calculate the coefficients A_ij, B_ij and B_ji once physicists had obtained "mechanics and electrodynamics modified to accommodate the quantum hypothesis". In fact, in 1926, Paul Dirac derived the B_ij rate constants by using a semiclassical approach, and, in 1927, succeeded in deriving all the rate constants from first principles within the framework of quantum theory. Dirac's work was the foundation of quantum electrodynamics, i.e., the quantization of the electromagnetic field itself. Dirac's approach is also called second quantization or quantum field theory; earlier quantum mechanical treatments treated only material particles as quantum mechanical, not the electromagnetic field. Einstein was troubled by the fact that his theory seemed incomplete, since it did not determine the direction of a spontaneously emitted photon. A probabilistic nature of light-particle motion was first considered by Newton in his treatment of birefringence and, more generally, of the splitting of light beams at interfaces into a transmitted beam and a reflected beam. Newton hypothesized that hidden variables in the light particle determined which of the two paths a single photon would take. Similarly, Einstein hoped for a more complete theory that would leave nothing to chance, beginning his separation from quantum mechanics.
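The detailed-balance step behind the Einstein relations can be carried out explicitly; the following is a standard textbook reconstruction, consistent with the rate equations above:

```latex
% Equilibrium: the absorption rate equals the total emission rate
N_j B_{ji}\,\rho(\nu) = N_i A_{ij} + N_i B_{ij}\,\rho(\nu)
% Solve for the spectral energy density:
\rho(\nu) = \frac{A_{ij}/B_{ij}}{\dfrac{N_j B_{ji}}{N_i B_{ij}} - 1}
% Insert the Boltzmann ratio N_i/N_j = (g_i/g_j)\,e^{-h\nu/kT}
% and demand agreement with Planck's law
% \rho(\nu) = \dfrac{8\pi h\nu^3}{c^3}\,\dfrac{1}{e^{h\nu/kT}-1}.
% Matching the two expressions term by term gives
g_i B_{ij} = g_j B_{ji},
\qquad
A_{ij} = \frac{8\pi h\nu^3}{c^3}\,B_{ij}
```

Note that the A/B ratio is fixed by thermodynamics alone, before any quantum field theory; Dirac's 1927 calculation later reproduced the same constants from first principles.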
Ironically, Max Born's probabilistic interpretation of the wave function was inspired by Einstein's later work searching for a more complete theory.

Second quantization and high energy photon interactions

In 1910, Peter Debye derived Planck's law of black-body radiation from a relatively simple assumption. He correctly decomposed the electromagnetic field in a cavity into its Fourier modes, and assumed that the energy in any mode was an integer multiple of hν, where ν is the frequency of the electromagnetic mode. Planck's law of black-body radiation follows immediately as a geometric sum. However, Debye's approach failed to give the correct formula for the energy fluctuations of black-body radiation, which were derived by Einstein in 1909. In 1925, Born, Heisenberg and Jordan reinterpreted Debye's concept in a key way. As may be shown classically, the Fourier modes of the electromagnetic field—a complete set of electromagnetic plane waves indexed by their wave vector k and polarization state—are equivalent to a set of uncoupled simple harmonic oscillators. Treated quantum mechanically, the energy levels of such oscillators are known to be E = (n + 1/2)hν, where ν is the oscillator frequency. The key new step was to identify an electromagnetic mode with energy E = nhν as a state with n photons, each of energy hν. This approach gives the correct energy fluctuation formula. Dirac took this one step further. He treated the interaction between a charge and an electromagnetic field as a small perturbation that induces transitions in the photon states, changing the numbers of photons in the modes, while conserving energy and momentum overall. Dirac was able to derive Einstein's A_ij and B_ij coefficients from first principles, and showed that the Bose–Einstein statistics of photons is a natural consequence of quantizing the electromagnetic field correctly (Bose's reasoning went in the opposite direction; he derived Planck's law of black-body radiation by assuming B–E statistics).
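The "geometric sum" step in Debye's derivation can be checked numerically. A minimal sketch, assuming only E_n = nhν with Boltzmann weights; the function names and the dimensionless variable x = hν/kT are ours, not the article's:

```python
import math

def mean_mode_energy(h_nu_over_kT: float, n_max: int = 2000) -> float:
    """Mean energy of one cavity mode, in units of h*nu, computed
    directly from Debye's assumption E_n = n*h*nu with Boltzmann
    weights exp(-n*h*nu/kT)."""
    x = h_nu_over_kT
    weights = [math.exp(-n * x) for n in range(n_max)]
    energies = [n * w for n, w in enumerate(weights)]
    return sum(energies) / sum(weights)

def planck_mean(h_nu_over_kT: float) -> float:
    """Closed form of the same geometric sum: 1/(e^x - 1),
    in units of h*nu. This is the Planck mean occupation."""
    return 1.0 / math.expm1(h_nu_over_kT)

# The direct Boltzmann-weighted sum and the closed form agree,
# which is exactly the step "Planck's law follows as a geometric sum".
direct = mean_mode_energy(1.0)
closed = planck_mean(1.0)
```

Multiplying this mean occupation by hν and by the mode density 8πν²/c³ recovers the full Planck spectral energy density used in the Einstein-coefficient argument above.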
In Dirac's time, it was not yet known that all bosons, including photons, must obey Bose–Einstein statistics. Dirac's second-order perturbation theory can involve virtual photons, transient intermediate states of the electromagnetic field; the static electric and magnetic interactions are mediated by such virtual photons. In such quantum field theories, the probability amplitude of observable events is calculated by summing over all possible intermediate steps, even ones that are unphysical; hence, virtual photons are not constrained to satisfy E = pc, and may have extra polarization states; depending on the gauge used, virtual photons may have three or four polarization states, instead of the two states of real photons. Although these transient virtual photons can never be observed, they contribute measurably to the probabilities of observable events. Indeed, such second-order and higher-order perturbation calculations can give apparently infinite contributions to the sum. Such unphysical results are corrected for using the technique of renormalization. Other virtual particles may contribute to the summation as well; for example, two photons may interact indirectly through virtual electron–positron pairs. In fact, such photon–photon scattering (see two-photon physics), as well as electron–photon scattering, is intended to be one of the operating modes of the planned particle accelerator, the International Linear Collider. In modern physics notation, the quantum state of the electromagnetic field is written as a Fock state, a tensor product of the states for each electromagnetic mode: |n_k0⟩ ⊗ |n_k1⟩ ⊗ … ⊗ |n_kn⟩ ⊗ …, where |n_ki⟩ represents the state in which n_ki photons are in the mode k_i. In this notation, the creation of a new photon in mode k_i (e.g., emitted from an atomic transition) is written as |n_ki⟩ → |n_ki + 1⟩. This notation merely expresses the concept of Born, Heisenberg and Jordan described above, and does not add any physics.

The hadronic properties of the photon

Measurements of the interaction between energetic photons and hadrons show that the interaction is much more intense than expected by the interaction of merely photons with the hadron's electric charge.
Furthermore, the interaction of energetic photons with protons is similar to the interaction of photons with neutrons in spite of the fact that the electric charge structures of protons and neutrons are substantially different. A theory called Vector Meson Dominance (VMD) was developed to explain this effect. According to VMD, the photon is a superposition of the pure electromagnetic photon, which interacts only with electric charges, and vector mesons. However, if experimentally probed at very short distances, the intrinsic structure of the photon is recognized as a flux of quark and gluon components, quasi-free according to asymptotic freedom in QCD and described by the photon structure function. A comprehensive comparison of data with theoretical predictions was presented in a review in 2000.

The photon as a gauge boson

The electromagnetic field can be understood as a gauge field, i.e., as a field that results from requiring that a gauge symmetry holds independently at every position in spacetime. For the electromagnetic field, this gauge symmetry is the Abelian U(1) symmetry of complex numbers of absolute value 1, which reflects the ability to vary the phase of a complex field without affecting observables or real-valued functions made from it, such as the energy or the Lagrangian. The quanta of an Abelian gauge field must be massless, uncharged bosons, as long as the symmetry is not broken; hence, the photon is predicted to be massless, and to have zero electric charge and integer spin. The particular form of the electromagnetic interaction specifies that the photon must have spin ±1; thus, its helicity must be ±ħ. These two spin components correspond to the classical concepts of right-handed and left-handed circularly polarized light. However, the transient virtual photons of quantum electrodynamics may also adopt unphysical polarization states.
In the prevailing Standard Model of physics, the photon is one of four gauge bosons in the electroweak interaction; the other three are denoted W+, W− and Z0 and are responsible for the weak interaction. Unlike the photon, these gauge bosons have mass, owing to a mechanism that breaks their SU(2) gauge symmetry. The unification of the photon with W and Z gauge bosons in the electroweak interaction was accomplished by Sheldon Glashow, Abdus Salam and Steven Weinberg, for which they were awarded the 1979 Nobel Prize in Physics. Physicists continue to hypothesize grand unified theories that connect these four gauge bosons with the eight gluon gauge bosons of quantum chromodynamics; however, key predictions of these theories, such as proton decay, have not been observed experimentally.

Contributions to the mass of a system

The energy of a system that emits a photon is decreased by the energy E of the photon as measured in the rest frame of the emitting system, which may result in a reduction in mass in the amount E/c². Similarly, the mass of a system that absorbs a photon is increased by a corresponding amount. As an application, the energy balance of nuclear reactions involving photons is commonly written in terms of the masses of the nuclei involved, and terms of the form E/c² for the gamma photons (and for other relevant energies, such as the recoil energy of nuclei). This concept is applied in key predictions of quantum electrodynamics (QED, see above). In that theory, the mass of electrons (or, more generally, leptons) is modified by including the mass contributions of virtual photons, in a technique known as renormalization. Such "radiative corrections" contribute to a number of predictions of QED, such as the magnetic dipole moment of leptons, the Lamb shift, and the hyperfine structure of bound lepton pairs, such as muonium and positronium.
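The E/c² bookkeeping is easy to illustrate numerically. The constants below are the standard defined values; the scenario (a nucleus emitting a 1 MeV gamma photon) is our own worked example, not taken from the article:

```python
# Mass equivalent of a photon's energy: Delta m = E / c^2.
C = 299_792_458.0          # speed of light, m/s (exact by definition)
EV = 1.602_176_634e-19     # joules per electron-volt (exact by definition)

def mass_deficit_kg(photon_energy_ev: float) -> float:
    """Mass lost by a system that emits a photon of the given energy,
    as measured in the system's rest frame."""
    return photon_energy_ev * EV / C**2

# A nucleus emitting a 1 MeV gamma photon loses about 1.8e-30 kg,
# roughly twice the electron rest mass (9.1e-31 kg):
dm = mass_deficit_kg(1.0e6)
```

The same function applied with a negative sign gives the mass gained on absorption, which is how the photon terms enter nuclear energy balances.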
Since photons contribute to the stress–energy tensor, they exert a gravitational attraction on other objects, according to the theory of general relativity. Conversely, photons are themselves affected by gravity; their normally straight trajectories may be bent by warped spacetime, as in gravitational lensing, and their frequencies may be lowered by moving to a higher gravitational potential, as in the Pound–Rebka experiment. However, these effects are not specific to photons; exactly the same effects would be predicted for classical electromagnetic waves.

Photons in matter

Light that travels through transparent matter does so at a lower speed than c, the speed of light in a vacuum. For example, photons engage in so many collisions on the way from the core of the sun that radiant energy can take about a million years to reach the surface; however, once in open space, a photon takes only 8.3 minutes to reach Earth. The factor by which the speed is decreased is called the refractive index of the material. In a classical wave picture, the slowing can be explained by the light inducing electric polarization in the matter, the polarized matter radiating new light, and that new light interfering with the original light wave to form a delayed wave. In a particle picture, the slowing can instead be described as a blending of the photon with quantum excitations of the matter to produce quasi-particles known as polaritons (other quasi-particles are phonons and excitons); such a polariton has a nonzero effective mass, which means that it cannot travel at c. Light of different frequencies may travel through matter at different speeds; this is called dispersion (not to be confused with scattering). In some cases, it can result in extremely slow speeds of light in matter. The effects of photon interactions with other quasi-particles may be observed directly in Raman scattering and Brillouin scattering.
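The refractive-index slowing is a one-line computation. A minimal sketch; the glass value n ≈ 1.5 and the 1 cm thickness are typical illustrative figures, not from the article:

```python
C = 299_792_458.0  # speed of light in vacuum, m/s

def speed_in_medium(n: float) -> float:
    """Phase velocity of light in a medium with refractive index n."""
    return C / n

def extra_delay_s(n: float, thickness_m: float) -> float:
    """Extra travel time through the medium, compared with the same
    distance in vacuum: d/(c/n) - d/c = d(n - 1)/c."""
    return thickness_m / speed_in_medium(n) - thickness_m / C

# Through 1 cm of glass (n = 1.5), light is delayed by about 17 ps:
delay = extra_delay_s(1.5, 0.01)
```

Because n depends on frequency (dispersion), the same computation with a frequency-dependent n gives different delays for different colors, which is the origin of prism separation.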
Photons can also be absorbed by nuclei, atoms or molecules, provoking transitions between their energy levels. A classic example is the molecular transition of retinal (C20H28O), which is responsible for vision, as discovered in 1958 by Nobel laureate biochemist George Wald and co-workers. The absorption provokes a cis-trans isomerization that, in combination with other such transitions, is transduced into nerve impulses. The absorption of photons can even break chemical bonds, as in the photodissociation of chlorine; this is the subject of photochemistry. Photons have many applications in technology. These examples are chosen to illustrate applications of photons per se, rather than general optical devices such as lenses, etc. that could operate under a classical theory of light. The laser is an extremely important application and is discussed above under stimulated emission. Individual photons can be detected by several methods. The classic photomultiplier tube exploits the photoelectric effect: a photon of sufficient energy strikes a metal plate and knocks free an electron, initiating an ever-amplifying avalanche of electrons. Semiconductor charge-coupled device chips use a similar effect: an incident photon generates a charge on a microscopic capacitor that can be detected. Other detectors such as Geiger counters use the ability of photons to ionize gas molecules contained in the device, causing a detectable change of conductivity of the gas. Planck's energy formula E = hν is often used by engineers and chemists in design, both to compute the change in energy resulting from a photon absorption and to determine the frequency of the light emitted from a given photon emission. For example, the emission spectrum of a gas-discharge lamp can be altered by filling it with (mixtures of) gases with different electronic energy level configurations. Under some conditions, an energy transition can be excited by "two" photons that individually would be insufficient.
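As a sketch of how Planck's formula is used in such design calculations (the 500 nm wavelength is our own example):

```python
H = 6.626_070_15e-34    # Planck constant, J*s (exact by definition)
C = 299_792_458.0       # speed of light, m/s (exact by definition)
EV = 1.602_176_634e-19  # joules per electron-volt (exact by definition)

def photon_energy_ev(wavelength_m: float) -> float:
    """Energy E = h*nu = h*c/lambda of a single photon, in eV."""
    return H * C / wavelength_m / EV

# Green light at 500 nm carries about 2.48 eV per photon, so a
# transition of that energy emits at that wavelength and vice versa:
e_green = photon_energy_ev(500e-9)
```

Run in reverse (λ = hc/E), the same relation converts a known level spacing into the emitted wavelength, which is how discharge-lamp spectra are matched to gas fillings.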
This allows for higher resolution microscopy, because the sample absorbs energy only in the region where two beams of different colors overlap significantly, which can be made much smaller than the excitation volume of a single beam (see two-photon excitation microscopy). Moreover, these photons cause less damage to the sample, since they are of lower energy. In some cases, two energy transitions can be coupled so that, as one system absorbs a photon, another nearby system "steals" its energy and re-emits a photon of a different frequency. This is the basis of fluorescence resonance energy transfer, a technique that is used in molecular biology to study the interaction of suitable proteins. Several different kinds of hardware random number generators involve the detection of single photons. In one example, for each bit in the random sequence that is to be produced, a photon is sent to a beam-splitter. In such a situation, there are two possible outcomes of equal probability. The actual outcome is used to determine whether the next bit in the sequence is "0" or "1". Much research has been devoted to applications of photons in the field of quantum optics. Photons seem well-suited to be elements of an extremely fast quantum computer, and the quantum entanglement of photons is a focus of research. Nonlinear optical processes are another active research area, with topics such as two-photon absorption, self-phase modulation, modulational instability and optical parametric oscillators. However, such processes generally do not require the assumption of photons per se; they may often be modeled by treating atoms as nonlinear oscillators. The nonlinear process of spontaneous parametric down conversion is often used to produce single-photon states.
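The beam-splitter bit-generation scheme can be sketched in a few lines. In the hardware, each bit comes from a genuinely quantum detection event; in this illustrative simulation, that quantum coin-flip is stood in for by a pseudo-random draw, so the sketch shows only the protocol, not the source of randomness:

```python
import random

def beamsplitter_bit(rng: random.Random) -> int:
    """One photon hits a 50/50 beam splitter; which detector clicks
    (transmitted vs. reflected) decides the bit. Here the quantum
    outcome is modeled by a pseudo-random draw."""
    return 1 if rng.random() < 0.5 else 0

def random_bits(n: int, seed: int = 0) -> list:
    """Produce n bits, one detection event per bit."""
    rng = random.Random(seed)
    return [beamsplitter_bit(rng) for _ in range(n)]

# Each outcome has probability 1/2, so roughly half of a long run is 1s:
bits = random_bits(1000)
```

Real devices add calibration and bias-correction stages on top of this raw bit stream, since detectors and splitters are never perfectly balanced.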
Finally, photons are essential in some aspects of optical communication, especially for quantum cryptography.[Note 6]
- Advanced Photon Source at Argonne National Laboratory
- Ballistic photon
- Dirac equation
- Doppler effect
- Electromagnetic radiation
- EPR paradox
- High energy X-ray imaging technology
- Luminiferous aether
- Photon counting
- Photon energy
- Photon epoch
- Photon polarization
- Photonic molecule
- Single-photon source
- Static forces and virtual-particle exchange
- Two-photon physics
- Although the 1967 Elsevier translation of Planck's Nobel Lecture interprets Planck's Lichtquant as "photon", the more literal 1922 translation by Hans Thacher Clarke and Ludwik Silberstein (Planck, Max (1922). The Origin and Development of the Quantum Theory. Clarendon Press.) uses "light-quantum". No evidence is known that Planck himself used the term "photon" by 1926.
- Isaac Asimov credits Arthur Compton with defining quanta of energy as photons in 1923. Asimov, Isaac (1 April 1983). The Neutrino: Ghost Particle of the Atom. Garden City (NY): Avon Books. ISBN 978-0-380-00483-6. and Asimov, Isaac (1 January 1971). The Universe: From Flat Earth to Quasar. New York (NY): Walker. ISBN 0-8027-0316-X. LCCN 66022515.
- The mass of the photon is believed to be exactly zero. Some sources also refer to the relativistic mass, which is just the energy scaled to units of mass. For a photon with wavelength λ or energy E, this is h/λc or E/c². This usage for the term "mass" is no longer common in scientific literature. Further info: What is the mass of a photon? http://math.ucr.edu/home/baez/physics/ParticleAndNuclear/photon_mass.html
- The phrase "no matter how intense" refers to intensities below approximately 10¹³ W/cm², at which point perturbation theory begins to break down.
In contrast, in the intense regime, which for visible light is above approximately 10¹⁴ W/cm², the classical wave description correctly predicts the energy acquired by electrons, called ponderomotive energy. (See also: Boreham et al. (1996). "Photon density and the correspondence principle of electromagnetic interaction".) By comparison, sunlight is only about 0.1 W/cm². - These experiments produce results that cannot be explained by any classical theory of light, since they involve anticorrelations that result from the quantum measurement process. In 1974, the first such experiment was carried out by Clauser, who reported a violation of a classical Cauchy–Schwarz inequality. In 1977, Kimble et al. demonstrated an analogous anti-bunching effect of photons interacting with a beam splitter; this approach was simplified and sources of error eliminated in the photon-anticorrelation experiment of Grangier et al. (1986). This work is reviewed and simplified further in Thorn et al. (2004). (These references are listed below under Additional references.) - Introductory-level material on the various sub-fields of quantum optics can be found in Fox, M. (2006). Quantum Optics: An Introduction. Oxford University Press. ISBN 0-19-856673-5. - Amsler, C. (Particle Data Group) (2008). "Review of Particle Physics: Gauge and Higgs bosons" (PDF). Physics Letters B. 667: 1. Bibcode:2008PhLB..667....1A. doi:10.1016/j.physletb.2008.07.018. - Joos, George (1951). Theoretical Physics. London and Glasgow: Blackie and Son Limited. p. 679. - Kimble, H.J.; Dagenais, M.; Mandel, L. (1977). "Photon Anti-bunching in Resonance Fluorescence". Physical Review Letters. 39 (11): 691–695. Bibcode:1977PhRvL..39..691K. doi:10.1103/PhysRevLett.39.691. - Grangier, P.; Roger, G.; Aspect, A. (1986). "Experimental Evidence for a Photon Anticorrelation Effect on a Beam Splitter: A New Light on Single-Photon Interferences". Europhysics Letters. 1 (4): 173–179. Bibcode:1986EL......1..173G.
doi:10.1209/0295-5075/1/4/004. - Compton, Arthur H. (12 Dec 1927). "X-rays as a branch of optics" (PDF-1.4). Nobel Lecture. - "Arthur H. Compton - Nobel Lecture: X-rays as a Branch of Optics". Nobelprize.org. Nobel Media AB 2014. Web. 4 Mar 2017. <http://www.nobelprize.org/nobel_prizes/physics/laureates/1927/compton-lecture.html> - Kragh, Helge (1 January 2014). "Photon: New light on an old name". arXiv: [physics.hist-ph]. - "Arthur H. Compton - Facts". Nobelprize.org. Nobel Media AB 2014. Web. 4 Mar 2017. <http://www.nobelprize.org/nobel_prizes/physics/laureates/1927/compton-facts.html> - Planck, M. (1901). "Über das Gesetz der Energieverteilung im Normalspectrum". Annalen der Physik (in German). 4 (3): 553–563. Bibcode:1901AnP...309..553P. doi:10.1002/andp.19013090310. English translation - Einstein, A. (1905). "Über einen die Erzeugung und Verwandlung des Lichtes betreffenden heuristischen Gesichtspunkt" (PDF). Annalen der Physik (in German). 17 (6): 132–148. Bibcode:1905AnP...322..132E. doi:10.1002/andp.19053220607.. An English translation is available from Wikisource. - "Discordances entre l'expérience et la théorie électromagnétique du rayonnement." In Électrons et Photons. Rapports et Discussions de Cinquième Conseil de Physique, edited by Institut International de Physique Solvay. Paris: Gauthier-Villars, pp. 55-85. - Villard, P. (1900). "Sur la réflexion et la réfraction des rayons cathodiques et des rayons déviables du radium". Comptes Rendus des Séances de l'Académie des Sciences (in French). 130: 1010–1012. - Villard, P. (1900). "Sur le rayonnement du radium". Comptes Rendus des Séances de l'Académie des Sciences (in French). 130: 1178–1179. - Rutherford, E.; Andrade, E.N.C. (1914). "The Wavelength of the Soft Gamma Rays from Radium B". Philosophical Magazine. 27 (161): 854–868. doi:10.1080/14786440508635156. - Andrew Liddle (27 April 2015). An Introduction to Modern Cosmology. John Wiley & Sons. p. 16. ISBN 978-1-118-69025-3. 
- Kobychev, V.V.; Popov, S.B. (2005). "Constraints on the photon charge from observations of extragalactic sources". Astronomy Letters. 31 (3): 147–151. arXiv: . Bibcode:2005AstL...31..147K. doi:10.1134/1.1883345. - Matthew D. Schwartz (2014). Quantum Field Theory and the Standard Model. Cambridge University Press. p. 66. ISBN 978-1-107-03473-0. - Role as gauge boson and polarization: section 5.1 in Aitchison, I.J.R.; Hey, A.J.G. (1993). Gauge Theories in Particle Physics. IOP Publishing. ISBN 0-85274-328-9. - See p. 31 in Amsler, C.; et al. (2008). "Review of Particle Physics" (PDF). Physics Letters B. 667: 1–1340. Bibcode:2008PhLB..667....1A. doi:10.1016/j.physletb.2008.07.018. - Halliday, David; Resnick, Robert; Walker, Jerl (2005), Fundamentals of Physics (7th ed.), USA: John Wiley and Sons, Inc., ISBN 0-471-23231-9 - See section 1.6 in Alonso & Finn 1968 - Davison E. Soper, Electromagnetic radiation is made of photons, Institute of Theoretical Science, University of Oregon - This property was experimentally verified by Raman and Bhagavantam in 1931: Raman, C.V.; Bhagavantam, S. (1931). "Experimental proof of the spin of the photon" (PDF). Indian Journal of Physics. 6 (3244): 353. Bibcode:1932Natur.129...22R. doi:10.1038/129022a0. - Burgess, C.; Moore, G. (2007). The Standard Model. A Primer. Cambridge University Press. ISBN 0-521-86036-9. - Griffiths, David J. (2008), Introduction to Elementary Particles (2nd revised ed.), WILEY-VCH, ISBN 978-3-527-40601-2 - Alonso & Finn 1968, Section 9.3 - E.g., Appendix XXXII in Born, Max; Blin-Stoyle, Roger John; Radcliffe, J. M. (1 June 1989). Atomic Physics. Courier Corporation. ISBN 978-0-486-65984-8. - Alan E. Willner. "Twisted Light Could Dramatically Boost Data Rates: Orbital angular momentum could take optical and radio communication to new heights". 2016. - Mermin, David (February 1984). "Relativity without light". American Journal of Physics. 52 (2): 119–124. Bibcode:1984AmJPh..52..119M.
doi:10.1119/1.13917. - Plimpton, S.; Lawton, W. (1936). "A Very Accurate Test of Coulomb's Law of Force Between Charges". Physical Review. 50 (11): 1066. Bibcode:1936PhRv...50.1066P. doi:10.1103/PhysRev.50.1066. - Williams, E.; Faller, J.; Hill, H. (1971). "New Experimental Test of Coulomb's Law: A Laboratory Upper Limit on the Photon Rest Mass". Physical Review Letters. 26 (12): 721. Bibcode:1971PhRvL..26..721W. doi:10.1103/PhysRevLett.26.721. - Chibisov, G V (1976). "Astrophysical upper limits on the photon rest mass". Soviet Physics Uspekhi. 19 (7): 624. Bibcode:1976SvPhU..19..624C. doi:10.1070/PU1976v019n07ABEH005277. - Lakes, Roderic (1998). "Experimental Limits on the Photon Mass and Cosmic Magnetic Vector Potential". Physical Review Letters. 80 (9): 1826. Bibcode:1998PhRvL..80.1826L. doi:10.1103/PhysRevLett.80.1826. - Amsler, C; Doser, M; Antonelli, M; Asner, D; Babu, K; Baer, H; Band, H; Barnett, R; et al. (2008). "Review of Particle Physics⁎" (PDF). Physics Letters B. 667: 1. Bibcode:2008PhLB..667....1A. doi:10.1016/j.physletb.2008.07.018. Summary Table - Adelberger, Eric; Dvali, Gia; Gruzinov, Andrei (2007). "Photon-Mass Bound Destroyed by Vortices". Physical Review Letters. 98 (1): 010402. arXiv: . Bibcode:2007PhRvL..98a0402A. doi:10.1103/PhysRevLett.98.010402. PMID 17358459. preprint - Wilczek, Frank (2010). The Lightness of Being: Mass, Ether, and the Unification of Forces. Basic Books. p. 212. ISBN 978-0-465-01895-6. - Descartes, R. (1637). Discours de la méthode (Discourse on Method) (in French). Imprimerie de Ian Maire. ISBN 0-268-00870-1. - Hooke, R. (1667). Micrographia: or some physiological descriptions of minute bodies made by magnifying glasses with observations and inquiries thereupon ... London (UK): Royal Society of London. ISBN 0-486-49564-7. - Huygens, C. (1678). Traité de la lumière (in French).. An English translation is available from Project Gutenberg - Newton, I. (1952) . Opticks (4th ed.). Dover (NY): Dover Publications. 
Book II, Part III, Propositions XII–XX; Queries 25–29. ISBN 0-486-60205-2. - Buchwald, J.Z. (1989). The Rise of the Wave Theory of Light: Optical Theory and Experiment in the Early Nineteenth Century. University of Chicago Press. ISBN 0-226-07886-8. OCLC 18069573. - Maxwell, J.C. (1865). "A Dynamical Theory of the Electromagnetic Field". Philosophical Transactions of the Royal Society. 155: 459–512. Bibcode:1865RSPT..155..459C. doi:10.1098/rstl.1865.0008. This article followed a presentation by Maxwell on 8 December 1864 to the Royal Society. - Hertz, H. (1888). "Über Strahlen elektrischer Kraft". Sitzungsberichte der Preussischen Akademie der Wissenschaften (Berlin) (in German). 1888: 1297–1307. - Frequency-dependence of luminescence: p. 276f., photoelectric effect: section 1.4 in Alonso & Finn 1968 - Wien, W. (1911). "Wilhelm Wien Nobel Lecture". nobelprize.org. - Planck, M. (1920). "Max Planck's Nobel Lecture". nobelprize.org. - Einstein, A. (1909). "Über die Entwicklung unserer Anschauungen über das Wesen und die Konstitution der Strahlung" (PDF). Physikalische Zeitschrift (in German). 10: 817–825. An English translation is available from Wikisource. - Presentation speech by Svante Arrhenius for the 1921 Nobel Prize in Physics, December 10, 1922. Online text from [nobelprize.org], The Nobel Foundation 2008. Access date 2008-12-05. - Einstein, A. (1916). "Zur Quantentheorie der Strahlung". Mitteilungen der Physikalischen Gesellschaft zu Zürich. 16: 47. Also Physikalische Zeitschrift, 18, 121–128 (1917). (in German) - Compton, A. (1923). "A Quantum Theory of the Scattering of X-rays by Light Elements". Physical Review. 21 (5): 483–502. Bibcode:1923PhRv...21..483C. doi:10.1103/PhysRev.21.483. - Pais, A. (1982). Subtle is the Lord: The Science and the Life of Albert Einstein. Oxford University Press. ISBN 0-19-853907-X. - Einstein and the Quantum: The Quest of the Valiant Swabian, A. Douglas Stone, Princeton University Press, 2013. - Millikan, R.A. (1924). "Robert A.
The cloud may be the clue needed in solving a puzzle that has confounded scientists who so far have seen little evidence of a veil of ethane clouds and surface liquids originally thought extensive enough to cover the entire surface of Titan with a 300-meter-deep ocean. Before the Cassini-Huygens mission began visiting Titan in 2004, "We expected to see lots of ethane -- vast ethane clouds at all latitudes and extensive seas on the surface of Saturn's giant moon Titan," University of Arizona planetary scientist Caitlin Griffith said. That's because solar ultraviolet light irreversibly breaks down methane in Titan's mostly nitrogen atmosphere. Ethane is by far the most plentiful byproduct when methane breaks down. If methane has been a constituent of the atmosphere throughout Titan's 4.5-billion-year lifetime -- and there was no reason to suspect it had not -- the large moon would be awash with seas of ethane, scientists theorized. NASA's Cassini spacecraft radar found lakes in Titan's north arctic latitudes on a flyby last July 22. However, "We now know that Titan's surface is largely devoid of lakes and oceans," Griffith said. She is a member of the UA-based Cassini VIMS team, headed by Professor Robert Brown of UA's Lunar and Planetary Lab. The missing ethane is all the more mysterious because Cassini images suggest that other less abundant solid precipitates from the photochemical reactions in Titan's atmosphere have formed dunes and covered craters on its surface, Griffith said. VIMS made the first detection of Titan's vast polar ethane cloud when it probed Titan's high northern latitudes on Cassini flybys in December 2004, August 2005, and September 2005. VIMS detected the cirrus cloud as a bright band at altitudes between 30 km and 60 km at the edge of Titan's arctic circle, between 51 degrees and 69 degrees north latitude.
VIMS saw only part of the cloud because most of the northern polar region is in winter's shadow and won't be fully illuminated until 2010, Griffith noted. "Our observations imply that surface deposits of ethane should be found specifically at the poles, rather than globally distributed across Titan's disk as previously assumed," Griffith said. "That may partially explain the lack of liquid ethane oceans and clouds at Titan's middle and lower latitudes." "We think that ethane is raining or, if temperatures are cool enough, snowing on the north pole right now. When the seasons switch, we expect ethane to condense at the south pole during its winter," Griffith said. If polar conditions are as cool as predictions say, ethane could accumulate as polar ice. Ethane dissolves in methane, which scientists predict is raining from the atmosphere at the north pole during its cool winter. "During the polar winters, we expect the lowlands to cradle methane lakes that are rich with ethane," Griffith noted. "Perhaps these are the lakes recently imaged by Cassini." If ethane was produced at today's rate over Titan's entire lifetime, a total of two kilometers of ethane would have precipitated over the poles. But that seems unlikely, Griffith said. Scientists have no direct evidence for polar caps of ethane ice. Titan's north pole is in winter darkness, and Cassini cameras have yet to see it in reflected light. Cassini cameras have imaged Titan's south pole. "The morphology seen in those images doesn't suggest a two kilometer polar ice cap, but the images do show flow features," Griffith said. "We're going to start making more polar passes in the upcoming months," she added. "By the end of next year Cassini will have recorded the first polar temperature profile of Titan, which will tell us how cold conditions are at the pole." Griffith is first author on the article, "Evidence for a Polar Ethane Cloud on Titan," published in the current (Sept.15) issue of Science. 
Paulo Pinteado and VIMS team leader Robert Brown of the UA and researchers from France, the Jet Propulsion Laboratory in Pasadena, Calif., the U.S. Geological Survey, Cornell University, NASA Ames Research Center, Portugal and Germany are co-authors. Griffith, Pinteado and Robert Kursinski of UA collaborated earlier in studies of the thousand-mile-long methane clouds that band Titan at southern latitudes. They concluded from analyzing VIMS images that these highly localized, convective clouds, which are composed of methane, result from summer heating much as thunderstorms form on Earth. The VIMS instrument is an imaging spectrometer that produces a special data set called an image cube. It takes an image of an object in many colors simultaneously. An ordinary video camera takes images in three primary colors (red, green, and blue) and combines them to produce images as seen by the human eye. The VIMS instrument takes images in 352 separate wavelengths, or colors, spanning a realm of colors far beyond those visible to humans. All materials reflect light in a unique way. So molecules of any element or compound can be identified by the wavelengths they reflect or absorb, their "signature" spectra. Caitlin Griffith | EurekAlert!
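The image-cube data structure described above can be sketched as a three-dimensional array: two spatial axes plus a spectral axis of 352 wavelength channels, so that every pixel carries a full spectrum. This is an illustrative sketch only; the 64x64 spatial size and the random values are placeholder assumptions, not VIMS software or real measurements:

```python
import numpy as np

# Illustrative "image cube": two spatial axes (y, x) and one spectral axis
# with 352 wavelength channels, as described for VIMS. The 64x64 spatial
# size and the random data are placeholder assumptions.
n_y, n_x, n_bands = 64, 64, 352
cube = np.random.default_rng(0).random((n_y, n_x, n_bands))

# The spectrum of one pixel is a slice along the spectral axis; comparing
# it against a known "signature" spectrum is how a material is identified.
pixel_spectrum = cube[10, 20, :]   # 352 reflectance values for one pixel

# A single-wavelength image is a slice across the spatial axes instead.
band_image = cube[:, :, 100]       # one 64x64 image at one wavelength

print(pixel_spectrum.shape, band_image.shape)
```

Slicing the same array along different axes yields either a per-pixel spectrum or a monochromatic image, which is the essential property of an image cube.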
Experiments by physicists in Konstanz prove Mermin-Wagner fluctuations

Now, 50 years later, a group of physicists from Konstanz headed by Dr Peter Keim was able to prove the Mermin-Wagner theorem by experiments and computer simulations - at the same time as two international working groups from Japan and the USA. The research results were published in the 21 February 2017 edition of the Proceedings of the National Academy of Sciences (PNAS) scientific journal.

Microscopic image of lattice vibrations in a two-dimensional crystal consisting of a monolayer of approx. 6,500 colloids. Deviations of particle positions from ideal lattice sites can be observed. If these deviations grow (logarithmically) with the system size beyond all limits, they are due to Mermin-Wagner fluctuations. In a three-dimensional crystal, particle distances are fixed and deviations are limited, irrespective of the size of the crystal. Credit: University of Konstanz

Based on a model system of colloids, Peter Keim was able to prove that in low-dimensional systems slow but steadily growing fluctuations occur in the distance between particles: the positions deviate from perfect lattice sites, and distances frequently increase or decrease. Crystal formation over long ranges is therefore not possible in low-dimensional materials. "Often the Mermin-Wagner theorem has been interpreted to mean that no crystals at all exist in two-dimensional systems. This is wrong: in fact long-wave density fluctuations grow logarithmically in two-dimensional systems and only destroy the order over long ranges," explains Peter Keim. In small systems of only a few hundred particles, crystal formation can indeed occur. But the larger the systems, the more the irregularities in particle position grow, ultimately preventing crystal formation over long ranges.
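The logarithmic growth described above can be illustrated with a short numerical sketch. The prefactor and lattice constant below are arbitrary placeholders, not values measured in the Konstanz experiment; the point is only the functional form, in which every tenfold increase in system size adds the same fixed increment to the fluctuations:

```python
import math

# Sketch of the Mermin-Wagner prediction in two dimensions: the mean-square
# deviation of particles from their ideal lattice sites grows like ln(L)
# with system size L. The prefactor and lattice spacing 'a' are arbitrary
# illustrative values, not experimental numbers.
def msd_2d(L, a=1.0, prefactor=0.1):
    """Mean-square displacement ~ prefactor * ln(L / a) for system size L."""
    return prefactor * math.log(L / a)

for L in (10, 100, 1_000, 10_000):
    print(f"L = {L:>6}: <u^2> ~ {msd_2d(L):.3f}")
```

Each step multiplies the system size by ten yet adds only a constant increment, the slowest possible monotonic growth; still, the deviations are unbounded, so long-range crystalline order cannot survive in two dimensions.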
Peter Keim was also able to measure the growth rate of these fluctuations: he observed the predicted logarithmic growth, the slowest possible form of a monotonic increase. "However, the perturbation of the order not only has a structural impact, but also leaves traces in the dynamics of the particles," continues Keim. The Mermin-Wagner theorem is one of the standard topics of interest in statistical physics and recently became a subject of discussion again in the context of the Nobel Prize in Physics: Michael Kosterlitz, one of the 2016 Nobel laureates, described in a commentary how he and David Thouless were motivated to investigate so-called topological phase transitions in low-dimensional materials: it was the contradiction between the Mermin-Wagner theorem, which prohibits the existence of perfect low-dimensional crystals, on the one hand, and the first computer simulations that nevertheless indicated crystallization in two dimensions on the other. The proof from Peter Keim and his research team has now resolved this apparent contradiction: over short scales crystal formation is indeed possible, but impossible over long ranges. The Konstanz-based project analyses data from four generations of doctoral theses. The Mermin-Wagner fluctuations were successfully proven by investigating the dynamics in unordered, amorphous (that means glassy) two-dimensional solids, just as in the work from Japan and the USA which appeared almost at the same time, while the existence of Mermin-Wagner fluctuations in two-dimensional crystals still has not been proven directly. The Konstanz research was sponsored by the German Research Foundation (DFG) and the Young Scholar Fund of the University of Konstanz.
Original publication: Proceedings of the National Academy of Sciences (PNAS) 114, 1861 (2017). Comments highlighting the original publication: Proc. Natl. Acad. Sci. 114, 2440 (2017); Nature Physics 13, 205 (2017). Julia Wandt | EurekAlert!
posted by Angel Locsin

Using Le Chatelier's principle, predict the direction of the net reaction in each of the following systems as a result of decreasing the volume of the chamber for the reaction mixture. Which way does the direction shift?

a.) N2(g) + O2(g) = 2NO(g)

sabrina: Please help

I disagree with the answer for (a) by Sabrina. I agree with (b) and (c). The answer for (a) is neither: decreasing the volume means increasing the pressure, so the reaction will shift to the side with fewer moles of gas. There are two moles of gas on the left and two moles of gas on the right; therefore, no shift.
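The mole-counting rule used in the answer above can be written as a tiny sketch. The function name and return labels are my own illustrative choices, not from the original post:

```python
# Sketch of the rule from the answer above: compressing the chamber
# (decreasing volume, increasing pressure) shifts the equilibrium toward
# the side with fewer moles of gas; equal gas moles means no shift.
# Function name and labels are illustrative, not from the forum post.
def shift_on_compression(reactant_gas_moles, product_gas_moles):
    if reactant_gas_moles == product_gas_moles:
        return "no shift"
    if product_gas_moles < reactant_gas_moles:
        return "toward products"
    return "toward reactants"

# a.) N2(g) + O2(g) = 2NO(g): 2 moles of gas on each side
print(shift_on_compression(2, 2))  # no shift
```

Only gaseous species count toward the totals; solids and liquids are ignored when applying this rule.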
Facts Summary: The Barbados Yellow Warbler (Dendroica petechia petechia) is a species of concern belonging to the species group "birds" and found in the following area(s): West Indies (Barbados). This species is also known by the following name(s): Barbados Yellow Wood Warbler. Barbados Yellow Warbler Facts Last Updated: January 1, 2006 To Cite This Page: Glenn, C. R. 2006. "Earth's Endangered Creatures - Barbados Yellow Warbler Facts" (Online). Accessed 7/22/2018 at http://earthsendangered.com/profile.asp?sp=499&ID=5. Twelve Incredibly Odd Endangered Creatures 1. Solenodon The solenodon is a mammal found primarily in Cuba and Hispaniola. The species was thought to be extinct until scientists found a few still alive in 2003. Solenodons prefer to come out only at night. They eat primarily insects, and they are one of the few mammal species that are venomous, delivering a very powerful toxin. Symptoms of a solenodon bite are very similar to those of a snake bite, including swelling and severe pain lasting several days.
Studies Reveal Diverse Molecular Mechanisms Underlying Evolution

News | Sep 08, 2014

The study helps explain the genetic basis for the incredible diversity among cichlid fishes and provides new information about vertebrate evolution. The genomic information from the study will help answer questions about human biology and disease. “Our study reveals a spectrum of methods that nature uses to allow organisms to adapt to different environments,” said co-senior author Kerstin Lindblad-Toh, scientific director of vertebrate genome biology at the Broad Institute of Harvard and MIT, a biomedical and genomic research center. “These mechanisms are likely also at work in humans and other vertebrates, and by focusing on the remarkably diverse cichlid fishes, we were able to study this process on a broad scale for the first time.” The new study was published in the September 3 advance online edition of the journal Nature. The work was a collaboration between the Broad Institute of MIT and Harvard, the Georgia Institute of Technology, and the Eawag Swiss Federal Institute for Aquatic Sciences, in addition to more than 70 scientists from the international cichlid research community. African cichlid fishes are some of the most diverse organisms on the planet, with over 2,000 known species. Some lakes are home to hundreds of distinct species that evolved from a common ancestral species in the Nile River. Like Darwin’s finches, the cichlids are a dramatic example of adaptive radiation, the process by which multiple species radiate from an ancestral species through adaptation. In the new study, the researchers sequenced the genomes and transcriptomes – the protein-coding RNA – from ten tissues of five distinct lineages of African cichlids.
The sequenced species include the Nile tilapia, representing the ancestral lineage, and four East African species: a species that inhabits a river near Lake Tanganyika; a species from Lake Tanganyika colonized 10-20 million years ago; a cichlid species from Lake Malawi colonized 5 million years ago; and species from Lake Victoria where the fish radiated only 15,000 to 100,000 years ago. The researchers found a number of genomic changes at play in the adaptive radiation. Compared to the ancestral lineage, the East African cichlid genomes possess an excess of gene duplications, alterations in regulatory elements in the genome, accelerated evolution of protein-coding elements in genes for pigmentation, and other distinct features that affect gene expression. “It’s not one big change in the genome of this fish, but lots of different molecular mechanisms used to achieve this amazing adaptation and speciation,” said Federica Di Palma, co-senior author of the Nature study and director of science in vertebrate and health genomics at The Genome Analysis Center in the UK. Some changes in the genome appear to have accumulated before the species left the rivers to colonize lakes and radiated into hundreds of species. This suggests that the cichlids were once in a period of reduced constraint. During this time, the fishes accumulated diversity through genetic mutations, and the relaxed constraint – in which all individuals thrived, not just the fittest – allowed genetic variation to accumulate. As the fish later inhabited new environmental niches within the lakes, new species could form quickly through selection. In this way, a reservoir of mutations – and resultant phenotypes – represented a genomic toolkit that allowed quick adaptation. More work remains to fully dissect the mutations that cause each of the varying phenotypes in cichlid fish, which could help explain how similar forms or traits evolved in parallel in different lakes. 
"By learning how natural populations, such as fishes, adapt and evolve under selective pressures, we can learn how these pressures affect humans in terms of health and disease,” Di Palma said. Todd Streelman, professor in the School of Biology at Georgia Tech and a co-author of the study, studies Lake Malawi cichlid species to address biological questions that are difficult to study in traditional model organisms. "These fishes provide a great way to identify the genes that control traits in natural populations," Streelman said. “Now that we understand the genome sequences of some of these species, it’s a lot easier to interpret all of the new genetic and genomic data we collect in the lab.” His lab studies natural mechanisms of lifelong tooth replacement and the genomics of complex social behavior using closely-related Malawi cichlids. The new genome sequence of the Lake Malawi cichlid will allow Streelman’s lab to investigate which genes are turned on or off during these processes. Streelman's research group cultures roughly 25 different Malawi cichlid species in aquatic facilities at Georgia Tech, through research funded by the National Institute of Dental and Craniofacial Research (NIDCR) and the National Institute of General Medical Sciences (NIGMS). This work was funded in part by the National Human Genome Research Institute (NHGRI), the Swiss National Science Foundation, the German Science Foundation, Biomedical Research Council of A*STAR, Singapore, the European Research Council, US National Institute of Dental and Craniofacial Research (NIDCR), and the Wellcome Trust. 
Related articles:
- Analytical Tool Predicts Disease-Causing Genes: Predicting genes that can cause disease due to the production of truncated or altered proteins that take on a new or different function, rather than those that lose their function, is now possible thanks to an international team of researchers that has developed a new analytical tool to effectively and efficiently predict such candidate genes.
- Single Gene Change in Gut Bacteria Alters Host Metabolism: Scientists have found that deleting a single gene in a particular strain of gut bacteria causes changes in metabolism and reduced weight gain in mice. The research provides an important step towards understanding how the microbiome – the bacteria that live in our body – affects metabolism.
- Gotta Sample 'Em All! Underwater Pokéball Captures Ocean Life: A new device developed by Wyss Institute researchers safely traps delicate sea creatures inside a folding polyhedral enclosure and lets them go without harm using a novel, origami-inspired design. The ultimate aim is to allow the sea creatures to be (gently) analyzed in high detail.
- International Conference on Neurooncology and Neurosurgery, Sep 17 - Sep 18, 2018
With tornado season fast approaching or already underway in vulnerable states throughout the U.S., new supercomputer simulations are giving meteorologists unprecedented insight into the structure of monstrous thunderstorms and tornadoes. One such recent simulation recreates a tornado-producing supercell thunderstorm that left a path of destruction over the Central Great Plains in 2011.

The person behind that simulation is Leigh Orf, a scientist with the Cooperative Institute for Meteorological Satellite Studies (CIMSS) at the University of Wisconsin-Madison. He leads a group of researchers who use computer models to unveil the moving parts inside tornadoes and the supercells that produce them. The team has developed expertise creating in-depth visualizations of supercells and discerning how they form and ultimately spawn tornadoes. The work is particularly relevant because the U.S. leads the global tornado count with more than 1,200 touchdowns annually, according to the National Oceanic and Atmospheric Administration.

In May 2011, several tornadoes touched down over the Oklahoma landscape in a short, four-day assemblage of storms. One after the other, supercells spawned funnel clouds that caused significant property damage and loss of life. On May 24, one tornado in particular - the "El Reno" - registered as an EF-5, the strongest tornado category on the Enhanced Fujita scale. It remained on the ground for nearly two hours and left a path of destruction 63 miles long.

Orf's most recent simulation recreates the El Reno tornado, revealing in high resolution the numerous "mini-tornadoes" that form at the onset of the main tornado. As the funnel cloud develops, they begin to merge, adding strength to the tornado and intensifying wind speeds. Eventually, new structures form, including what Orf refers to as the streamwise vorticity current (SVC). "The SVC is made up of rain-cooled air that is sucked into the updraft that drives the whole system," says Orf.
"It's believed that this is a crucial part in maintaining the unusually strong storm, but interestingly, the SVC never makes contact with the tornado. Rather, it flows up and around it."

Using real-world observational data, the research team was able to recreate the weather conditions present at the time of the storm and witness the steps leading up to the creation of the tornado. The archived data, taken from a short-term operational model forecast, was in the form of an atmospheric sounding, a vertical profile of temperature, air pressure, wind speed and moisture. When combined in the right way, these parameters can create the conditions suitable for tornado formation, known as tornadogenesis.

According to Orf, producing a tornado requires a couple of "non-negotiable" parts, including abundant moisture, instability and wind shear in the atmosphere, and a trigger that moves the air upwards, like a temperature or moisture difference. However, the mere existence of these parts in combination does not mean that a tornado is inevitable. "In nature, it's not uncommon for storms to have what we understand to be all the right ingredients for tornadogenesis and then nothing happens," says Orf. "Storm chasers who track tornadoes are familiar with nature's unpredictability, and our models have shown to behave similarly."

Orf explains that unlike a typical computer program, where code is written to deliver consistent results, modeling on this level of complexity has inherent variability, and in some ways he finds it encouraging, since the real atmosphere exhibits this variability, too. Successful modeling can be limited by the quality of the input data and the processing power of computers. To achieve greater levels of accuracy in the models, retrieving data on the atmospheric conditions immediately prior to tornado formation is ideal, but it remains a difficult and potentially dangerous task.
With the complexity of these storms, there can be subtle (and currently unknown) factors in the atmosphere that influence whether or not a supercell forms a tornado. Digitally resolving a tornado simulation to a point where the details are fine enough to yield valuable information requires immense processing power. Fortunately, Orf had earned access to a high-performance supercomputer specifically designed to handle complex computing needs: the Blue Waters Supercomputer at the National Center for Supercomputing Applications at the University of Illinois at Urbana-Champaign.

In total, their EF-5 simulation took more than three days of run time. In contrast, it would take decades for a conventional desktop computer to complete this type of processing.

Looking ahead, Orf is working on the next phase of this research and continues to share the group's findings with scientists and meteorologists across the country. In January 2017, the group's research was featured on the cover of the Bulletin of the American Meteorological Society. "We've completed the EF-5 simulation, but we don't plan to stop there," says Orf. "We are going to keep refining the model and continue to analyze the results to better understand these dangerous and powerful systems."

Orf's work was supported by CIMSS/SSEC, the College of Science and Technology at Central Michigan University and the National Science Foundation (NSF). The research is part of the Blue Waters sustained-petascale computing project, funded by the NSF. Orf's collaborators on the simulation include: Robert Wilhelmson, University of Illinois Department of Atmospheric Science; Bruce Lee of High Impact Weather Research & Consulting, LLC; and Catherine Finley of St. Louis University. Lee and Finley are members of TWISTEX, the team that included Tim Samaras, who died in the May 31, 2013, El Reno supercell.

Leigh Orf | EurekAlert!
Hawaii volcano's sulfur dioxide emissions surge

PAHOA, Hawaii (AP) - Scientists say sulfur dioxide emissions from Hawaii's Kilauea volcano have more than doubled since the current eruption began. The increase could boost volcanic smog known as vog, but trade winds are currently carrying most of the gas offshore.

U.S. Geological Survey scientist Wendy Stovall says the volcano is belching 15,000 tons of sulfur dioxide each day from ground vents that have formed since May 3. She says volumes of the gas spiked when the vents began gushing more lava and rivers of molten rock started streaming toward the ocean over the weekend.

Before the Leilani Estates eruption, the volcano's summit had been releasing an average of 3,000 to 6,000 tons of sulfur dioxide each day. Another crater had been releasing 200 to 300 tons per day but is no longer emitting sulfur dioxide.
Encrusting algae are one of the major occupiers of space on hard marine substrata and are thought to influence the patterns of distribution and abundance of other organisms in intertidal areas of rocky seashores. However, little is known about their ecology and the mechanisms which may affect their distribution and abundance in space and time. Pseudolithoderma sp., a brown encrusting alga common on intertidal rocky shores in northeastern New Zealand, occurs in discrete patches on sheets of the honeycomb barnacle Chamaesipho columna. Patches of the alga change their size and shape over a wide range of spatial and temporal scales. To identify potential mechanisms that may influence the life history of this alga, patterns of colonisation and persistence of patches of Pseudolithoderma were monitored for 1 year by measuring the colonisation of spores of Pseudolithoderma on settlement plates in relation to existing patches of the alga, and measuring the amount of lateral expansion and contraction of established patches. Colonisation of Pseudolithoderma occurred at a variable rate through the year, but was consistently greater on plates placed on patches of Pseudolithoderma than those placed 1 m away from patches, and was rare on plates placed 10 m away from established patches. Established patches of Pseudolithoderma had a much faster rate of lateral expansion (m² per year) than those previously measured for other species of crustose algae, and declined in area unpredictably. The rapidity and lack of seasonality in the changes of the patches is hypothesised to be due to a variable age structure of the patches of Pseudolithoderma, along with localised interactions with small grazers and the underlying barnacles on the rocky shore. This suggests that processes operating at a very localised scale may be equally or even more important in determining the demography of fleshy encrusting algae, such as Pseudolithoderma, than processes operating at larger scales.
A crash course tutorial on atomic orbitals, quantum numbers and electron configuration, with practice problems explained.
A brief explanation of 13.6 eV and hydrogen energy levels.
Show that the speed of the electron in the nth Bohr orbit in hydrogen is given by v_n = k_e e^2/(nħ).
http://www.physicsgalaxy.com Learn complete Physics Video Lectures on Atomic Structure Of Current for IIT JEE by Ashish Arora.
Class 10+2, Chapter 8A, Question 7: radius, velocity and total energy in the nth orbit (English).
Matter Waves and De Broglie's Hypothesis: Mr. Causey explains De Broglie's hypothesis, matter waves and Schrodinger's ...
Energy of Revolving Electron (Hindi/Urdu). Chemistry Crash Course #115. Download Notes: ...
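The Bohr-orbit speed quoted in the listing above, v_n = k_e e²/(nħ), can be checked numerically. This is a quick sketch (not from any of the videos), using standard CODATA-style constants:

```python
# Numerical check of the Bohr-orbit speed v_n = k_e * e**2 / (n * hbar)
# for hydrogen, with standard physical constants.
K_E = 8.9875517923e9    # Coulomb constant k_e, N m^2 / C^2
E = 1.602176634e-19     # elementary charge e, C
HBAR = 1.054571817e-34  # reduced Planck constant, J s

def bohr_speed(n):
    """Electron speed (m/s) in the nth Bohr orbit of hydrogen."""
    return K_E * E ** 2 / (n * HBAR)

# n = 1 gives roughly 2.19e6 m/s, i.e. about c/137 (the fine-structure constant)
```

The n = 1 result is the usual sanity check: the ground-state electron moves at about 1/137 of the speed of light, which is also why relativistic corrections to hydrogen are small.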
Our Universe: Creation, Galaxies, Stars and Celestial Objects

BIG BANG THEORY: the universe began 13.8-15 billion years ago with one huge exploding atom that released all the energy and matter that exists today. The explosion was so enormous that all objects in the universe are still moving outward today from the initial blast!

1 light year (which is a measure of distance, NOT time) is the distance light travels in a year. Light covers about 9.5x10^15 m in one year, traveling at 3x10^8 m/s!

A white dwarf will eventually stop nuclear fusion and become a black dwarf, a "dead star." Gravity condenses the star to shrink (where protostars are formed). A star can exist anywhere from 1 million to 30 billion years (depending on size)!

The event shown is 70 million light years away and occurred millions of years ago! Both images are computer animations. Black dwarfs do not give off any light to be seen.

Size: a little larger than New York City (5-10 miles); a neutron star is extremely dense and small. As the star decreases in size, the pressure increases so immensely that the temperature increases dramatically. As temperature increases, so does brightness. A neutron star is almost 1.5 million times brighter than our Sun!

These are computer animations; true black holes cannot be seen because light cannot reflect off them to create a shape. Descriptions of black holes are based on equations in the theory of general relativity developed by Albert Einstein in 1916. This is an actual black hole in the center of our galaxy. The black hole cannot be seen, but we can see its gravitational pull "eating" everything around it.

Computer-animated asteroid impact (Every 76 years - 2062). Asteroids and other objects come close to our planet EVERY DAY! Most of the time we never notice them, but with improving technologies we are detecting more of them, and detecting them earlier. In fact, on Wednesday, Sept. 8th, 2010, NASA telescopes spotted 2 asteroids (both around 30 feet in diameter) that came very close to Earth. One actually passed between the orbit of Earth and the Moon. Neither would have been large enough to cause large-scale damage. The majority of these asteroids would burn up while entering our atmosphere.

This blue ring is the Oort cloud, nearly a light year away (to put it in perspective). Sedna is the furthest known object to orbit our sun. The IAU (International Astronomical Union) has yet to define it as a planet/dwarf planet/asteroid.
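The light-year figure on the slides can be checked with one line of arithmetic, a quick sketch (not from the slideshow itself): distance is speed multiplied by time.

```python
# Distance light covers in one Julian year (365.25 days) at c = 299,792,458 m/s.
C = 299_792_458                        # speed of light, m/s
SECONDS_PER_YEAR = 365.25 * 24 * 3600  # Julian year in seconds
LIGHT_YEAR_M = C * SECONDS_PER_YEAR    # roughly 9.46e15 m, matching the ~9.5e15 m on the slide
```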
The noble gases (historically also the inert gases) make up a group of chemical elements with similar properties; under standard conditions, they are all odorless, colorless, monatomic gases with very low chemical reactivity. The six noble gases that occur naturally are helium (He), neon (Ne), argon (Ar), krypton (Kr), xenon (Xe), and the radioactive radon (Rn). Oganesson (Og) is variously predicted to be a noble gas as well or to break the trend due to relativistic effects; its chemistry has not yet been investigated. For the first six periods of the periodic table, the noble gases are exactly the members of group 18. Noble gases are typically highly unreactive except when under particular extreme conditions. The inertness of noble gases makes them very suitable in applications where reactions are not wanted. For example, argon is used in incandescent light bulbs to prevent the hot tungsten filament from oxidizing; also, helium is used in breathing gas by deep-sea divers to prevent oxygen, nitrogen and carbon dioxide (hypercapnia) toxicity. The properties of the noble gases can be well explained by modern theories of atomic structure: their outer shell of valence electrons is considered to be "full", giving them little tendency to participate in chemical reactions, and it has been possible to prepare only a few hundred noble gas compounds. The melting and boiling points for a given noble gas are close together, differing by less than 10 °C (18 °F); that is, they are liquids over only a small temperature range. Neon, argon, krypton, and xenon are obtained from air in an air separation unit using the methods of liquefaction of gases and fractional distillation. Helium is sourced from natural gas fields which have high concentrations of helium in the natural gas, using cryogenic gas separation techniques, and radon is usually isolated from the radioactive decay of dissolved radium, thorium, or uranium compounds (since those compounds give off alpha particles). 
Noble gases have several important applications in industries such as lighting, welding, and space exploration. A helium-oxygen breathing gas is often used by deep-sea divers at depths of seawater over 55 m (180 ft) to keep the diver from experiencing oxygen toxemia, the lethal effect of high-pressure oxygen, nitrogen narcosis, the distracting narcotic effect of the nitrogen in air beyond this partial-pressure threshold, and carbon dioxide poisoning (hypercapnia), the panic-inducing effect of excessive carbon dioxide in the bloodstream. After the risks caused by the flammability of hydrogen became apparent, it was replaced with helium in blimps and balloons. Noble gas is translated from the German noun Edelgas, first used in 1898 by Hugo Erdmann to indicate their extremely low level of reactivity. The name makes an analogy to the term "noble metals", which also have low reactivity. The noble gases have also been referred to as inert gases, but this label is deprecated as many noble gas compounds are now known. Rare gases is another term that was used, but this is also inaccurate because argon forms a fairly considerable part (0.94% by volume, 1.3% by mass) of the Earth's atmosphere due to decay of radioactive potassium-40. Pierre Janssen and Joseph Norman Lockyer discovered a new element on August 18, 1868 while looking at the chromosphere of the Sun, and named it helium after the Greek word for the Sun, ἥλιος (hḗlios). No chemical analysis was possible at the time, but helium was later found to be a noble gas. Before them, in 1784, the English chemist and physicist Henry Cavendish had discovered that air contains a small proportion of a substance less reactive than nitrogen. A century later, in 1895, Lord Rayleigh discovered that samples of nitrogen from the air were of a different density than nitrogen resulting from chemical reactions. 
Along with Scottish scientist William Ramsay at University College, London, Lord Rayleigh theorized that the nitrogen extracted from air was mixed with another gas, leading to an experiment that successfully isolated a new element, argon, from the Greek word ἀργός (argós, "idle" or "lazy"). With this discovery, they realized an entire class of gases was missing from the periodic table. During his search for argon, Ramsay also managed to isolate helium for the first time while heating cleveite, a mineral. In 1902, having accepted the evidence for the elements helium and argon, Dmitri Mendeleev included these noble gases as group 0 in his arrangement of the elements, which would later become the periodic table. Ramsay continued his search for these gases using the method of fractional distillation to separate liquid air into several components. In 1898, he discovered the elements krypton, neon, and xenon, and named them after the Greek words κρυπτός (kryptós, "hidden"), νέος (néos, "new"), and ξένος (ksénos, "stranger"), respectively. Radon was first identified in 1898 by Friedrich Ernst Dorn, and was named radium emanation, but was not considered a noble gas until 1904 when its characteristics were found to be similar to those of other noble gases. Rayleigh and Ramsay received the 1904 Nobel Prizes in Physics and in Chemistry, respectively, for their discovery of the noble gases; in the words of J. E. Cederblom, then president of the Royal Swedish Academy of Sciences, "the discovery of an entirely new group of elements, of which no single representative had been known with any certainty, is something utterly unique in the history of chemistry, being intrinsically an advance in science of peculiar significance". The discovery of the noble gases aided in the development of a general understanding of atomic structure. 
In 1895, French chemist Henri Moissan attempted to form a reaction between fluorine, the most electronegative element, and argon, one of the noble gases, but failed. Scientists were unable to prepare compounds of argon until the end of the 20th century, but these attempts helped to develop new theories of atomic structure. Learning from these experiments, Danish physicist Niels Bohr proposed in 1913 that the electrons in atoms are arranged in shells surrounding the nucleus, and that for all noble gases except helium the outermost shell always contains eight electrons. In 1916, Gilbert N. Lewis formulated the octet rule, which concluded an octet of electrons in the outer shell was the most stable arrangement for any atom; this arrangement caused them to be unreactive with other elements since they did not require any more electrons to complete their outer shell. In 1962, Neil Bartlett discovered the first chemical compound of a noble gas, xenon hexafluoroplatinate. Compounds of other noble gases were discovered soon after: in 1962 for radon, radon difluoride (RnF 2), which was identified by radiotracer techniques and in 1963 for krypton, krypton difluoride (KrF 2). The first stable compound of argon was reported in 2000 when argon fluorohydride (HArF) was formed at a temperature of 40 K (−233.2 °C; −387.7 °F). In December 1998, scientists at the Joint Institute for Nuclear Research working in Dubna, Russia bombarded plutonium (Pu) with calcium (Ca) to produce a single atom of element 114, flerovium (Fl). Preliminary chemistry experiments have indicated this element may be the first superheavy element to show abnormal noble-gas-like properties, even though it is a member of group 14 on the periodic table. In October 2006, scientists from the Joint Institute for Nuclear Research and Lawrence Livermore National Laboratory successfully created synthetically oganesson (Og), the seventh element in group 18, by bombarding californium (Cf) with calcium (Ca). 
Physical and atomic properties

| Property | He | Ne | Ar | Kr | Xe | Rn |
| Boiling point (K) | 4.4 | 27.3 | 87.4 | 121.5 | 166.6 | 211.5 |
| Melting point (K) | 0.95 (at 25 bar) | | | | | |
| Enthalpy of vaporization (kJ/mol) | 0.08 | 1.74 | 6.52 | 9.05 | 12.65 | 18.1 |
| Solubility in water at 20 °C (cm3/kg) | 8.61 | 10.5 | 33.6 | 59.4 | 108.1 | 230 |
| Atomic radius (calculated) (pm) | 31 | 38 | 71 | 88 | 108 | 120 |
| Ionization energy (kJ/mol) | 2372 | 2080 | 1520 | 1351 | 1170 | 1037 |

The noble gases have weak interatomic force, and consequently have very low melting and boiling points. They are all monatomic gases under standard conditions, including the elements with larger atomic masses than many normally solid elements. Helium has several unique qualities when compared with other elements: its boiling and melting points are lower than those of any other known substance; it is the only element known to exhibit superfluidity; it is the only element that cannot be solidified by cooling under standard conditions: a pressure of 25 standard atmospheres (2,500 kPa; 370 psi) must be applied at a temperature of 0.95 K (−272.200 °C; −457.960 °F) to convert it to a solid. The noble gases up to xenon have multiple stable isotopes. Radon has no stable isotopes; its longest-lived isotope, 222Rn, has a half-life of 3.8 days and decays to form helium and polonium, which ultimately decays to lead.

Melting and boiling points generally increase going down the group. The noble gas atoms, like atoms in most groups, increase steadily in atomic radius from one period to the next due to the increasing number of electrons. The size of the atom is related to several properties. For example, the ionization potential decreases with an increasing radius because the valence electrons in the larger noble gases are farther away from the nucleus and are therefore not held as tightly together by the atom.
Noble gases have the largest ionization potential among the elements of each period, which reflects the stability of their electron configuration and is related to their relative lack of chemical reactivity. Some of the heavier noble gases, however, have ionization potentials small enough to be comparable to those of other elements and molecules. It was the insight that xenon has an ionization potential similar to that of the oxygen molecule that led Bartlett to attempt oxidizing xenon using platinum hexafluoride, an oxidizing agent known to be strong enough to react with oxygen. Noble gases cannot accept an electron to form stable anions; that is, they have a negative electron affinity. The macroscopic physical properties of the noble gases are dominated by the weak van der Waals forces between the atoms. The attractive force increases with the size of the atom as a result of the increase in polarizability and the decrease in ionization potential. This results in systematic group trends: as one goes down group 18, the atomic radius, and with it the interatomic forces, increases, resulting in an increasing melting point, boiling point, enthalpy of vaporization, and solubility. The increase in density is due to the increase in atomic mass. The noble gases are nearly ideal gases under standard conditions, but their deviations from the ideal gas law provided important clues for the study of intermolecular interactions. The Lennard-Jones potential, often used to model intermolecular interactions, was deduced in 1924 by John Lennard-Jones from experimental data on argon before the development of quantum mechanics provided the tools for understanding intermolecular forces from first principles. The theoretical analysis of these interactions became tractable because the noble gases are monatomic and the atoms spherical, which means that the interaction between the atoms is independent of direction, or isotropic. 
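The Lennard-Jones potential mentioned above can be sketched numerically. The argon parameters below (ε ≈ 1.65×10⁻²¹ J, i.e. ε/k_B ≈ 120 K, and σ ≈ 3.4 Å) are common textbook values, not taken from this article:

```python
# Sketch of the Lennard-Jones pair potential V(r) = 4*eps*((sigma/r)**12 - (sigma/r)**6),
# the classic model of van der Waals attraction plus short-range repulsion
# between two noble gas atoms. SI units (joules, meters).
def lennard_jones(r, epsilon=1.65e-21, sigma=3.4e-10):
    """Pair potential energy (J) between two argon-like atoms at separation r (m)."""
    sr6 = (sigma / r) ** 6
    return 4.0 * epsilon * (sr6 ** 2 - sr6)

# The potential crosses zero at r = sigma and reaches its minimum of -epsilon
# at r = 2**(1/6) * sigma, the equilibrium separation of the pair.
r_min = 2 ** (1.0 / 6.0) * 3.4e-10
```

Because the interaction is isotropic (the atoms are spherical), a single function of the scalar separation r is enough, which is exactly why the noble gases made the theoretical analysis tractable.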
The noble gases are colorless, odorless, tasteless, and nonflammable under standard conditions. They were once labeled group 0 in the periodic table because it was believed they had a valence of zero, meaning their atoms cannot combine with those of other elements to form compounds. However, it was later discovered some do indeed form compounds, causing this label to fall into disuse. Like other groups, the members of this family show patterns in their electron configurations, especially the outermost shells, resulting in trends in chemical behavior:

| Z | Element | No. of electrons/shell |
| 18 | argon | 2, 8, 8 |
| 36 | krypton | 2, 8, 18, 8 |
| 54 | xenon | 2, 8, 18, 18, 8 |
| 86 | radon | 2, 8, 18, 32, 18, 8 |

The noble gases have full valence electron shells. Valence electrons are the outermost electrons of an atom and are normally the only electrons that participate in chemical bonding. Atoms with full valence electron shells are extremely stable and therefore do not tend to form chemical bonds and have little tendency to gain or lose electrons. However, heavier noble gases such as radon are held less firmly together by electromagnetic force than lighter noble gases such as helium, making it easier to remove outer electrons from heavy noble gases.

As a result of a full shell, the noble gases can be used in conjunction with the electron configuration notation to form the noble gas notation. To do this, the nearest noble gas that precedes the element in question is written first, and then the electron configuration is continued from that point forward. For example, the electron notation of phosphorus is 1s2 2s2 2p6 3s2 3p3, while the noble gas notation is [Ne] 3s2 3p3. This more compact notation makes it easier to identify elements, and is shorter than writing out the full notation of atomic orbitals.

The noble gases show extremely low chemical reactivity; consequently, only a few hundred noble gas compounds have been formed.
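The bookkeeping behind noble-gas (core) notation described above can be sketched in a few lines. `noble_gas_core` is a hypothetical helper for illustration, not an established API; it only finds the core, leaving the remaining subshells to be written out by hand:

```python
# Find the nearest preceding noble gas for a given atomic number, the first
# step in writing noble-gas (core) notation such as [Ne] 3s2 3p3 for phosphorus.
NOBLE_GASES = {2: "He", 10: "Ne", 18: "Ar", 36: "Kr", 54: "Xe", 86: "Rn"}

def noble_gas_core(z):
    """Return (core symbol, electrons outside the core) for atomic number z."""
    core_z = max((n for n in NOBLE_GASES if n < z), default=0)
    return (NOBLE_GASES.get(core_z, ""), z - core_z)

# Phosphorus (Z = 15): core is Ne, leaving 5 electrons to write as 3s2 3p3
```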
Neutral compounds in which helium and neon are involved in chemical bonds have not been formed (although there is some theoretical evidence for a few helium compounds), while xenon, krypton, and argon have shown only minor reactivity. The reactivity follows the order Ne < He < Ar < Kr < Xe < Rn.

In 1933, Linus Pauling predicted that the heavier noble gases could form compounds with fluorine and oxygen. He predicted the existence of krypton hexafluoride (KrF6) and xenon hexafluoride (XeF6), speculated that XeF8 might exist as an unstable compound, and suggested that xenic acid could form perxenate salts. These predictions were shown to be generally accurate, except that XeF8 is now thought to be both thermodynamically and kinetically unstable.

Xenon compounds are the most numerous of the noble gas compounds that have been formed. Most of them have the xenon atom in the oxidation state of +2, +4, +6, or +8 bonded to highly electronegative atoms such as fluorine or oxygen, as in xenon difluoride (XeF2), xenon tetrafluoride (XeF4), xenon hexafluoride (XeF6), xenon tetroxide (XeO4), and sodium perxenate (Na4XeO6). Xenon reacts with fluorine to form numerous xenon fluorides according to the following equations:
- Xe + F2 → XeF2
- Xe + 2F2 → XeF4
- Xe + 3F2 → XeF6

Some of these compounds have found use in chemical synthesis as oxidizing agents; XeF2, in particular, is commercially available and can be used as a fluorinating agent. As of 2007, about five hundred compounds of xenon bonded to other elements have been identified, including organoxenon compounds (containing xenon bonded to carbon), and xenon bonded to nitrogen, chlorine, gold, mercury, and xenon itself. Compounds of xenon bound to boron, hydrogen, bromine, iodine, beryllium, sulphur, titanium, copper, and silver have also been observed but only at low temperatures in noble gas matrices, or in supersonic noble gas jets.
In theory, radon is more reactive than xenon, and therefore should form chemical bonds more easily than xenon does. However, due to the high radioactivity and short half-life of radon isotopes, only a few fluorides and oxides of radon have been formed in practice. Krypton is less reactive than xenon, but several compounds have been reported with krypton in the oxidation state of +2. Krypton difluoride is the most notable and easily characterized. Under extreme conditions, krypton reacts with fluorine to form KrF2 according to the following equation:
- Kr + F2 → KrF2
Krypton atoms chemically bound to other nonmetals (hydrogen, chlorine, carbon) as well as some late transition metals (copper, silver, gold) have also been observed, but only at low temperatures in noble gas matrices or in supersonic noble gas jets. Similar conditions were used to obtain the first few compounds of argon in 2000, such as argon fluorohydride (HArF), and some bound to the late transition metals copper, silver, and gold. As of 2007, no stable neutral molecules involving covalently bound helium or neon are known. The noble gases—including helium—can form stable molecular ions in the gas phase. The simplest is the helium hydride molecular ion, HeH+, discovered in 1925. Because it is composed of the two most abundant elements in the universe, hydrogen and helium, it is believed to occur naturally in the interstellar medium, although it has not been detected yet. In addition to these ions, there are many known neutral excimers of the noble gases. These are compounds such as ArF and KrF that are stable only when in an excited electronic state; some of them find application in excimer lasers. In addition to the compounds where a noble gas atom is involved in a covalent bond, noble gases also form non-covalent compounds. The clathrates, first described in 1949, consist of a noble gas atom trapped within cavities of crystal lattices of certain organic and inorganic substances.
The essential condition for their formation is that the guest (noble gas) atoms must be of appropriate size to fit in the cavities of the host crystal lattice. For instance, argon, krypton, and xenon form clathrates with hydroquinone, but helium and neon do not because they are too small or insufficiently polarizable to be retained. Neon, argon, krypton, and xenon also form clathrate hydrates, where the noble gas is trapped in ice. Noble gases can form endohedral fullerene compounds, in which the noble gas atom is trapped inside a fullerene molecule. In 1993, it was discovered that when C60, a spherical molecule consisting of 60 carbon atoms, is exposed to noble gases at high pressure, complexes such as He@C60 can be formed (the @ notation indicates He is contained inside C60 but not covalently bound to it). As of 2008, endohedral complexes with helium, neon, argon, krypton, and xenon have been obtained. These compounds have found use in the study of the structure and reactivity of fullerenes by means of the nuclear magnetic resonance of the noble gas atom. Noble gas compounds such as xenon difluoride (XeF2) are considered to be hypervalent because they violate the octet rule. Bonding in such compounds can be explained using a three-center four-electron bond model. This model, first proposed in 1951, considers bonding of three collinear atoms. For example, bonding in XeF2 is described by a set of three molecular orbitals (MOs) derived from p-orbitals on each atom. Bonding results from the combination of a filled p-orbital from Xe with one half-filled p-orbital from each F atom, resulting in a filled bonding orbital, a filled non-bonding orbital, and an empty antibonding orbital. The highest occupied molecular orbital is localized on the two terminal atoms. This represents a localization of charge which is facilitated by the high electronegativity of fluorine. The chemistries of the heavier noble gases, krypton and xenon, are well established.
The chemistry of the lighter ones, argon and helium, is still at an early stage, while a neon compound is yet to be identified.

Occurrence and production

The abundances of the noble gases in the universe decrease as their atomic numbers increase. Helium is the most common element in the universe after hydrogen, with a mass fraction of about 24%. Most of the helium in the universe was formed during Big Bang nucleosynthesis, but the amount of helium is steadily increasing due to the fusion of hydrogen in stellar nucleosynthesis (and, to a very slight degree, the alpha decay of heavy elements). Abundances on Earth follow different trends; for example, helium is only the third most abundant noble gas in the atmosphere. The reason is that there is no primordial helium in the atmosphere; due to the small mass of the atom, helium cannot be retained by the Earth's gravitational field. Helium on Earth comes from the alpha decay of heavy elements such as uranium and thorium found in the Earth's crust, and tends to accumulate in natural gas deposits. The abundance of argon, on the other hand, is increased as a result of the beta decay of potassium-40, also found in the Earth's crust, to form argon-40, which is the most abundant isotope of argon on Earth despite being relatively rare in the Solar System. This process is the basis for the potassium-argon dating method. Xenon has an unexpectedly low abundance in the atmosphere, in what has been called the missing xenon problem; one theory is that the missing xenon may be trapped in minerals inside the Earth's crust. After the discovery of xenon dioxide, research showed that Xe can substitute for Si in quartz. Radon is formed in the lithosphere by the alpha decay of radium. It can seep into buildings through cracks in their foundation and accumulate in areas that are not well ventilated.
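The radioactive-decay processes mentioned above, radon's decay and potassium-argon dating, both reduce to simple exponential arithmetic. The half-lives and the Ar-40 branching fraction below are assumed standard values, not figures stated in the text:

```python
import math

# Assumed standard constants (not from the text above).
RN222_HALF_LIFE_DAYS = 3.82   # radon-222 half-life
K40_HALF_LIFE_GYR = 1.248     # potassium-40 half-life
AR40_BRANCH = 0.107           # fraction of K-40 decays that yield Ar-40

def fraction_remaining(days, half_life=RN222_HALF_LIFE_DAYS):
    """Fraction of a radon sample left after `days` of decay."""
    return 0.5 ** (days / half_life)

def k_ar_age_gyr(ar40_over_k40):
    """Sketch of the K-Ar age equation from a measured Ar-40/K-40 ratio."""
    lam = math.log(2) / K40_HALF_LIFE_GYR
    return math.log(1 + ar40_over_k40 / AR40_BRANCH) / lam

print(round(fraction_remaining(38.2), 6))  # ten half-lives: ~0.000977 remains
print(round(k_ar_age_gyr(0.107), 3))       # ratio equal to the branch: one half-life, 1.248
```

The short radon half-life is why only trace amounts persist indoors once the source is sealed, while potassium-40's gigayear half-life is what makes it useful for dating rocks.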
Due to its high radioactivity, radon presents a significant health hazard; it is implicated in an estimated 21,000 lung cancer deaths per year in the United States alone.

| Abundance | He | Ne | Ar | Kr | Xe | Rn |
|---|---|---|---|---|---|---|
| Solar System (for each atom of silicon) | 2343 | 2.148 | 0.1025 | 5.515 × 10−5 | 5.391 × 10−6 | – |
| Earth's atmosphere (volume fraction in ppm) | 5.20 | 18.20 | 9340.00 | 1.10 | 0.09 | (0.06–18) × 10−19 |
| Igneous rock (mass fraction in ppm) | 3 × 10−3 | 7 × 10−5 | 4 × 10−2 | – | – | 1.7 × 10−10 |

| Gas | 2004 price (USD/m3) |
|---|---|
| Helium (industrial grade) | 4.20–4.90 |
| Helium (laboratory grade) | 22.30–44.90 |

Neon, argon, krypton, and xenon are obtained from air using the methods of liquefaction of gases, to convert the elements to a liquid state, and fractional distillation, to separate mixtures into component parts. Helium is typically produced by separating it from natural gas, and radon is isolated from the radioactive decay of radium compounds. The prices of the noble gases are influenced by their natural abundance, with argon being the cheapest and xenon the most expensive. As an example, the table above lists the 2004 prices in the United States for laboratory quantities of each gas. Noble gases have very low boiling and melting points, which makes them useful as cryogenic refrigerants. In particular, liquid helium, which boils at 4.2 K (−268.95 °C; −452.11 °F), is used for superconducting magnets, such as those needed in magnetic resonance imaging and nuclear magnetic resonance. Liquid neon, although it does not reach temperatures as low as liquid helium, also finds use in cryogenics because it has over 40 times more refrigerating capacity than liquid helium and over three times more than liquid hydrogen. Helium is used as a component of breathing gases to replace nitrogen, due to its low solubility in fluids, especially in lipids. Gases such as nitrogen are absorbed by the blood and body tissues under pressure, as in scuba diving, and this causes an anesthetic effect known as nitrogen narcosis.
Due to its reduced solubility, little helium is taken into cell membranes; when helium is used to replace part of the breathing mixture, as in trimix or heliox, a decrease in the narcotic effect of the gas at depth is obtained. Helium's reduced solubility offers further advantages for the condition known as decompression sickness, or the bends. The reduced amount of dissolved gas in the body means that fewer gas bubbles form during the decrease in pressure of the ascent. Another noble gas, argon, is considered the best option for use as a drysuit inflation gas for scuba diving. Helium is also used as a filling gas in nuclear fuel rods for nuclear reactors. In many applications, the noble gases are used to provide an inert atmosphere. Argon is used in the synthesis of air-sensitive compounds that are also sensitive to nitrogen. Solid argon is also used for the study of very unstable compounds, such as reactive intermediates, by trapping them in an inert matrix at very low temperatures. Helium is used as the carrier medium in gas chromatography, as a filler gas for thermometers, and in devices for measuring radiation, such as the Geiger counter and the bubble chamber. Helium and argon are both commonly used to shield welding arcs and the surrounding base metal from the atmosphere during welding and cutting, as well as in other metallurgical processes and in the production of silicon for the semiconductor industry. Noble gases are commonly used in lighting because of their lack of chemical reactivity. Argon, mixed with nitrogen, is used as a filler gas for incandescent light bulbs. Krypton is used in high-performance light bulbs, which have higher color temperatures and greater efficiency, because it reduces the rate of evaporation of the filament more than argon does; halogen lamps, in particular, use krypton mixed with small amounts of compounds of iodine or bromine.
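The cryogenic figures quoted earlier, liquid helium boiling at 4.2 K (−268.95 °C; −452.11 °F), can be cross-checked with the usual unit conversions:

```python
# Standard Kelvin/Celsius/Fahrenheit conversions, used to verify the quoted
# boiling point of liquid helium.
def kelvin_to_celsius(k):
    return k - 273.15

def celsius_to_fahrenheit(c):
    return c * 9 / 5 + 32

bp_c = kelvin_to_celsius(4.2)        # boiling point of liquid helium in Celsius
bp_f = celsius_to_fahrenheit(bp_c)   # and in Fahrenheit
print(round(bp_c, 2), round(bp_f, 2))  # -268.95 -452.11
```

Both values agree with the parenthetical figures in the text.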
The noble gases glow in distinctive colors when used inside gas-discharge lamps, such as "neon lights". These lights are named after neon but often contain other gases and phosphors, which add various hues to the orange-red color of neon. Xenon is commonly used in xenon arc lamps which, due to their nearly continuous spectrum that resembles daylight, find application in film projectors and as automobile headlamps. The noble gases are used in excimer lasers, which are based on short-lived electronically excited molecules known as excimers. The excimers used for lasers may be noble gas dimers such as Ar2, Kr2 or Xe2, or more commonly, the noble gas is combined with a halogen in excimers such as ArF, KrF, XeF, or XeCl. These lasers produce ultraviolet light which, due to its short wavelength (193 nm for ArF and 248 nm for KrF), allows for high-precision imaging. Excimer lasers have many industrial, medical, and scientific applications. They are used for microlithography and microfabrication, which are essential for integrated circuit manufacture, and for laser surgery, including laser angioplasty and eye surgery. Some noble gases have direct application in medicine. Helium is sometimes used to improve the ease of breathing of asthma sufferers. Xenon is used as an anesthetic because of its high solubility in lipids, which makes it more potent than the usual nitrous oxide, and because it is readily eliminated from the body, resulting in faster recovery. Xenon finds application in medical imaging of the lungs through hyperpolarized MRI. Radon, which is highly radioactive and is only available in minute amounts, is used in radiotherapy. The color of gas discharge emission depends on several factors, including the following:
- discharge parameters (local value of current density and electric field, temperature, etc.
);
- gas purity (even a small fraction of certain gases can affect color);
- material of the discharge tube envelope (thick household glass, for example, suppresses the UV and blue components).

See also:
- Noble gas (data page), for extended tables of physical properties
- Noble metal, for metals that are resistant to corrosion or oxidation
- Inert gas, for any gas that is not reactive under normal circumstances
- Industrial gas
- Noble gas configuration
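The excimer-laser wavelengths quoted above (193 nm for ArF, 248 nm for KrF) correspond to photon energies given by E = hc/λ. The constants below are assumed standard CODATA values, not figures from the text:

```python
# Photon energy E = h*c / wavelength, expressed in electronvolts.
H = 6.62607015e-34    # Planck constant, J*s (assumed standard value)
C = 2.99792458e8      # speed of light, m/s
EV = 1.602176634e-19  # joules per electronvolt

def photon_energy_ev(wavelength_nm):
    return H * C / (wavelength_nm * 1e-9) / EV

print(round(photon_energy_ev(193), 2))  # ArF: ~6.42 eV
print(round(photon_energy_ev(248), 2))  # KrF: ~5.0 eV
```

Energies of 5-6.4 eV per photon exceed many chemical bond strengths, which is part of why these ultraviolet lasers cut and pattern material so precisely.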
At some point as children, many people learn to identify one or two star patterns: the Big Dipper, Scorpius, Orion's Belt. As adults, they see those same shapes again, the star patterns seeming distant and eternal. So it's easy to forget that the seemingly immutable shapes in the sky are, in fact, always changing. Stars haven't always been in the positions they occupy now, nor will they stay in those positions forever. At a recent hackathon at the American Museum of Natural History, one team made an app to remind us of that fact. Using Space Time, users can travel forward and back in time (using a little DeLorean slider) to see how the stars, and their corresponding constellations, transform throughout the years. Here's hacker Robby Kraft demonstrating how the app works: Space Time reveals that in the 1800s B.C., when the Babylonians were first developing the star charts that the Greeks later adopted and passed down to us, the stars were in slightly different places, and that when anatomically modern humans arose in the form of Homo sapiens 200,000 years ago, the stars were in vastly different places. Should we humans manage not to destroy ourselves in the coming 200,000 years, our descendants will look up into the sky and see not a scorpion and a bear, but a totally different arrangement of stars. The constellations we see today (which already, if we're honest, don't actually look like bears or scorpions or any of those things) will be even harder to identify. Jana Grcevich, a post-doctoral researcher in astrophysics at the museum who worked with the team to develop the app, says that the projections have some limitations. "Each star has a complicated orbit within our galaxy that we're approximating as a straight line through space from the perspective of a viewer on earth," Grcevich told me in an email.
“Many stars don't orbit nicely around the center of galaxy in an ellipse, some have crazy spirograph orbits that take them out of the plane and alter how far they are from the center of the galaxy.” And the app shows the stars moving on a single plane, but in reality they’re actually moving in three dimensions. And, of course, the way people see stars when they simply look up isn’t quite the same as what astronomers see when they look through their telescopes. All that said, Grcevich still thinks the app is showing interesting and useful data. “Despite all my caveats, you can definitely get a sense for how the constellations will change shape over time with the app,” she said, “and I think ancient Egyptians or even the first modern humans could recognize the sky we plot, if you taught them to use a computer.” The team hopes to release their app to the iTunes store soon.
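The straight-line approximation Grcevich describes can be sketched in a few lines of code: each star's sky position is simply advanced by its proper motion. The function, units, and the example star (roughly Sirius-like values) are invented for illustration, and `pm_ra` is assumed to already include the cos(dec) factor:

```python
import math

# Linear proper-motion projection: ignores 3-D motion and orbit curvature,
# exactly the simplification the app makes.
def project_position(ra_deg, dec_deg, pm_ra_mas_yr, pm_dec_mas_yr, years):
    """Advance a star's (RA, Dec) in degrees by proper motion in mas/yr."""
    mas_to_deg = 1.0 / 3.6e6
    ra = ra_deg + pm_ra_mas_yr * years * mas_to_deg / math.cos(math.radians(dec_deg))
    dec = dec_deg + pm_dec_mas_yr * years * mas_to_deg
    return ra, dec

# A fast-moving star drifts about a third of a degree in declination per millennium.
ra, dec = project_position(101.287, -16.716, -546.0, -1223.0, 1000)
print(round(ra, 3), round(dec, 3))
```

Over a few thousand years the drift is small, which is why the Babylonian sky looked only slightly different; over 200,000 years the same arithmetic accumulates into entirely new constellations.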
Changes in Biodiversity: Lower Organisms, Vegetation and Flora

Biodiversity is a framework concept, referring to the variety of life on Earth, and in this sense the concept is neither measurable nor quantifiable. However, specific features of biodiversity, e.g. the species richness of taxonomic groups, can be quantified. Circa 2% of the total number of identified species on Earth, roughly 35,000 species of plants and animals, live in the Delta, comprising roughly 25,000 species of animals (among them 18,000 insects) and more than 10,000 plants, among them ca. 1,400 higher plants (seed plants or Spermatophyta). For a few hundred species the Delta has (great) international significance, and this holds in particular for waterfowl (Chapter 19) (www.mnp.nl/natuurcompendium). In this chapter, data will be given on the historic changes in the biodiversity of aquatic and terrestrial organisms, mainly plankton, aquatic macro-invertebrates, aquatic macrophytes and the terrestrial vegetation of higher plants. The changes in the vegetation of the Biesbosch wetland are highlighted in a case study, because this national park is one of the few areas in the Delta where the flood-plain vegetation is allowed to flourish unrestrained, hence mimicking a semi-natural succession of flood-plain vegetation developing under almost non-tidal conditions. Some notes on the human use of rushes, reeds and willow trees will close this chapter.

Keywords: aquatic macrophyte, flood plain, river flood plain, sand flat, freshwater tidal marsh
Global fossil fuel emissions must be reduced by as much as 20% more than earlier estimates to achieve the Paris Agreement targets, because of natural greenhouse gas emissions from wetlands and permafrost, new research has found. The additional reductions are equivalent to 5-6 years of carbon emissions from human activities at current rates, according to a new paper led by the UK's Centre for Ecology & Hydrology. The 2015 Paris Climate Agreement aims to keep "the global average temperature increase to well below 2 °C above pre-industrial levels and to pursue efforts to limit the temperature increase to 1.5 °C above pre-industrial levels." The research, published in the journal Nature Geoscience today (July 9, 2018), uses a novel type of climate model in which a specified temperature target is used to calculate the compatible fossil fuel emissions. The model simulations estimate the natural wetland and permafrost response to climate change, including their greenhouse gas emissions, and the implications for human fossil-fuel emissions. Natural wetlands are very wet areas whose soils emit methane, which is also a greenhouse gas. The methane emissions are larger in warmer soils, so they will increase in a warmer climate. Permafrost areas are those that are permanently frozen. Under a warming climate, permafrost areas begin to thaw, and as a result the soils begin to emit carbon dioxide, and in some cases methane, into the atmosphere. The greenhouse gas emissions from natural wetlands and permafrost increase as global temperature rises, which in turn adds further to global warming, creating a "positive feedback" loop. The results show that these "positive feedback" processes are disproportionately more important for the emission reductions needed to achieve the 1.5 °C target than for the 2 °C target.
This is because the scientists involved in the study modelled the impact of the additional processes for the period 2015-2100, over which the processes are broadly comparable for the two temperature targets. However, because the emissions budgets for the 1.5 °C target are half of what is allowed for the 2 °C target, the proportional impact of natural wetlands and permafrost thaw is much larger. Lead author Dr Edward Comyn-Platt, a biogeochemist at the UK Centre for Ecology & Hydrology, said: "Greenhouse gas emissions from natural wetlands and permafrost areas are sensitive to climate change, primarily via changes in soil temperature. "Changes in these emissions will alter the amount of greenhouse gases in the atmosphere and must be considered when estimating the human emissions compatible with the Paris Climate Agreement." Co-author Dr Sarah Chadburn, of the University of Leeds, said: "We found that permafrost and methane emissions become increasingly important as we consider lower global warming targets. "These feedbacks could make it much harder to achieve the target, and our results reinforce the urgency of reducing fossil fuel burning." Co-author Prof Chris Huntingford, of the Centre for Ecology & Hydrology, said: "We were surprised at how large these permafrost and wetland feedbacks can be for the low warming target of just 1.5 °C." The other institutions involved in the research were the University of Exeter, the Met Office Hadley Centre, Exeter, the University of Reading and the Joint Centre for Hydrometeorological Research, Wallingford.
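The budget arithmetic behind that disproportionate effect can be illustrated with toy numbers. All values below are hypothetical, chosen only to show the proportional relationship the article describes (the 1.5 °C budget being roughly half the 2 °C budget):

```python
# Toy illustration: a fixed natural-emission offset consumes a proportionally
# larger share of a smaller carbon budget. Numbers are hypothetical.
def share_of_budget(natural_emissions, budget):
    return natural_emissions / budget

budget_2c = 1000.0   # hypothetical remaining budget for 2 degrees C, GtCO2
budget_15c = 500.0   # roughly half, as the article notes
feedback = 100.0     # hypothetical wetland/permafrost contribution, GtCO2

print(share_of_budget(feedback, budget_2c))   # 0.1
print(share_of_budget(feedback, budget_15c))  # 0.2
```

The same natural contribution eats twice the fraction of the tighter budget, which is why the feedbacks matter most for the 1.5 °C goal.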
Sky at Night: Find out what to see in the night sky this month. What meteor showers are active now? Are there any comets to see? When is the next planetary conjunction? Is there a solar or lunar eclipse soon? Summer can be a wonderful time for stargazing and, despite the light evenings, there's much to be seen in the night sky at this time of year. Spring deep-sky objects in March and April include the Leo Triplet (M65, M66 and NGC 3628), a fine sight in a 6- or 8-inch telescope, and there are several open and globular clusters worth observing as well. Each year the Earth moves through a number of meteoroid streams, producing meteor showers at roughly the same time each year. This handy chart shows you when the most active showers occur throughout the year, along with their Zenithal Hourly Rate (ZHR). The annual Draconid meteor shower occurs each year around October 8th, when Earth passes through a minefield of dusty debris from Comet Giacobini-Zinner. Earth is about to enter a stream of debris from Comet Thatcher, the source of the annual Lyrid meteor shower. Expect between 5 and 20 meteors per hour around the peak dates. The Geminid meteor shower (the Geminids) is usually the strongest meteor shower of the year, and meteor enthusiasts are certain to circle December 13th and 14th on their calendars.
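The ZHR quoted for a shower is an idealized ceiling: the rate a single observer would count with the radiant directly overhead under a perfectly dark sky. A rough sketch of the commonly used correction, scaling by radiant altitude and sky limiting magnitude with an assumed population index r, shows why real counts are usually lower:

```python
import math

# Rough estimate of the hourly meteor rate an observer might actually see,
# given a shower's Zenithal Hourly Rate (ZHR). Scales by the radiant's
# altitude and the sky's limiting magnitude; r is the shower's population
# index (brightness distribution), typically around 2-3. Illustrative only.

def expected_hourly_rate(zhr, radiant_altitude_deg, limiting_mag=6.5, r=2.5):
    altitude_factor = math.sin(math.radians(radiant_altitude_deg))
    sky_factor = r ** (6.5 - limiting_mag)   # equals 1 under a perfect sky
    return zhr * max(altitude_factor, 0.0) / sky_factor

# Geminids (ZHR ~120) with the radiant 30 degrees up under a perfect sky:
print(round(expected_hourly_rate(120, 30)))  # ~60
```

Even a strong shower drops to half its ZHR when the radiant sits only 30° above the horizon, and light-polluted skies (lower limiting magnitude) cut it further.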
Blackbirds, it turns out, aren’t actually all that black. Their feathers absorb most of the visible light that hits them, but still reflect between 3 and 5 percent of it. For really black plumage, you need to travel to Papua New Guinea and track down the birds-of-paradise. Although these birds are best known for their gaudy, kaleidoscopic colors, some species also have profoundly black feathers. The feathers ruthlessly swallow light and, with it, all hints of edge or contour. They make body parts seem less like parts of an actual animal and more like gaping voids in reality. They’re blacker than black. None more black. A typical bird feather has a central shaft called a rachis. Thin branches, or barbs, sprout from the rachis, and even thinner branches—barbules—sprout from the barbs. The whole arrangement is flat, with the rachis, barbs, and barbules all lying on the same plane. The super-black feathers of birds-of-paradise, meanwhile, look very different. Their barbules, instead of lying flat, curve upward. And instead of being smooth cylinders, they are studded in minuscule spikes. “It’s hard to describe,” says McCoy. “It’s like a little bottlebrush or a piece of coral.” These unique structures excel at capturing light. When light hits a normal feather, it finds a series of horizontal surfaces, and can easily bounce off. But when light hits a super-black feather, it finds a tangled mess of mostly vertical surfaces. Instead of being reflected away, it bounces repeatedly between the barbules and their spikes. With each bounce, a little more of it gets absorbed. Light loses itself within the feathers. McCoy and her colleagues, including Teresa Feo from the National Museum of Natural History, showed that this light-trapping nanotechnology can absorb up to 99.95 percent of incoming light. That’s between 10 and 100 times better than the feathers of most other black birds, like crows or blackbirds. 
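The geometry matters because absorption compounds over bounces. A toy calculation, using an assumed (made-up) per-bounce absorption rather than any measured feather property, shows how a structure that forces many bounces ends up more than 99.9 percent black even though each individual bounce is unremarkable:

```python
# Toy model of light trapped between barbules: if each bounce absorbs a
# fraction `a` of the remaining light, n bounces leave (1 - a)**n of it.
# The 30% per-bounce absorption is an invented illustrative value.

def remaining_light(absorb_per_bounce, bounces):
    return (1 - absorb_per_bounce) ** bounces

# A flat surface: one bounce at 30% absorption reflects 70% of the light.
# A spiky, vertical structure forcing ~20 bounces absorbs almost everything.
flat = 1 - remaining_light(0.30, 1)
spiky = 1 - remaining_light(0.30, 20)
print(f"absorbed: flat {flat:.2%}, spiky {spiky:.4%}")
```

The exponential decay is the whole trick: a mediocre absorber applied twenty times leaves less than a tenth of a percent of the light.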
It’s also only just short of the blackest materials that humans have designed. Vantablack, an eerily black substance produced by the British company Surrey Nanosystems, can absorb 99.965 percent of incoming light. It consists of a forest of vertical carbon nanotubes that are “grown” at more than 750 degrees Fahrenheit. The birds-of-paradise mass-produce similar forests, using only biological materials, at body temperature. Vantablack is genuinely amazing: It’s so good at absorbing light that if you move a laser onto it, the red dot disappears. But McCoy has created a similar demonstration with her super-black feathers. In the image below, you can see two feathers, both of which have been sprinkled with gold dust. The left one is from the lesser melampitta—a bird of average blackness—and it looks as golden as its surroundings. The right one comes from a paradise riflebird—one of the 42 species of bird-of-paradise. Yes, it is covered in gold dust. And yes, it still looks black. The gold settles within the grooves of the microscopic forest, and all of its glitter is lost. This opens up several other questions, says Rafael Maia from Columbia University, who studies the evolution of bird colors. “Is this something unique to birds-of-paradise, or have other species evolved similar optical solutions?” he says. “If they have, do they use the same type of feather modifications?” Many animals and plants use microscopic structures to produce exceptionally vivid colors with metallic sheens; this is called iridescence. Comparably fewer species use microscopic structures for the opposite purpose: to absorb colors entirely. These include a few butterflies and the Gaboon viper. The viper—whose fangs, at two inches, are the longest of any snake—likely uses its super-black scales for camouflage, breaking up its outline so that the rest of its body better blends into the leaf litter of a rainforest.
The birds-of-paradise, meanwhile, probably use their unfeasibly black blacks for the same thing that seems to motivate everything about them: sex. “These likely evolved as an optical illusion, to make adjacent colors seem even brighter than they are,” says McCoy. “Animal eyes and brains are wired to control for the amount of ambient light. That’s why an apple looks red whether it is in the sun or the shade, even though the wavelength hitting our eyes is quite different in those scenarios. A super-black frame inhibits this ability, so nearby colors look like they are very bright—even glowing.” The male birds use this illusion to great effect. The magnificent riflebird—that’s its adjective, not mine—splays out his super-black wings and flicks his head between them, showing off his electric blue throat. The superb bird-of-paradise—again, that is literally its name—spreads a cape of super-black feathers to highlight the electric blue patches on his cheeks and chest. He ends up looking like a spectral, wide-mouthed face. The six-plumed bird-of-paradise erects a super-black tutu and shimmies about to show off his kaleidoscopic throat bib. The illusions work best when viewed straight on. From that angle, the little barbules and spikes are pointing straight at you, and they become better at trapping light. When viewed from the side, the super-blacks lose some of their blackness. That’s why the dancing males take such care to face the objects of their attention, bouncing around so their audience never gets a side view. Super-black surfaces have plenty of uses for humans, too. They could camouflage military vehicles, help solar panels collect more light, or stop stray light from entering telescopes, improving the ability to spot faint stars. Vantablack can already do all of the above, but McCoy thinks the structure in super-black feathers might still be useful to engineers. “If these could be really cheaply 3-D printed, that would be amazing,” she says. 
How many valence electrons? How many valence electrons does it take to neutralize a proton's charge? I am studying acids and bases and it seems that it takes 2, which does not make sense to me. Thanks
- Valence electrons are the electrons in the outermost shell, and each carries a 1- charge (-1 means the same thing but is written 1-). All of the other electrons in the inner energy levels also have a 1- charge; every electron has a 1- charge and every proton a 1+ charge. Lithium (Li, #3) has 1 valence electron and three electrons in total (charge 3-). It also has three protons (charge 3+), and 3- + 3+ = 0, making it neutral. So the answer to your question is 1: one electron neutralizes one proton's charge.
- An atom consists of positively charged protons, electrically neutral neutrons and negatively charged electrons. At the centre of the atom, neutrons and protons stay together to form the atom's core, or nucleus. Electrons revolve around the core in three-dimensional orbits, or shells. Each of these orbits needs a certain number of electrons to be stable: the inner orbit closest to the core must contain 2 electrons, the second orbit must contain 8, and each subsequent orbit, for atoms with more than 10 protons and electrons, also requires a pre-defined number of electrons. But apart from inert gases such as helium, neon and argon, the outermost orbit of most atoms is missing one or more of the electrons needed for stability.
- I don't know if you understand enough about the physical structure of atoms for this answer. All atoms have electrons, one for each proton in the nucleus (you understand that the electrons orbit the nucleus, right?). Well, the presently accepted model has the electrons occurring in orbital clouds around the nucleus, but at specific distances from it (called "shells"), depending on how many there are for that atom. The valence electrons are the ones in the outermost shell, and they determine the electrical charge (if any) of the atom in question and its bonding ability (i.e. how it will react or connect to other atoms). That's as non-technical as I can say it without getting all physics-y on you.
- In chemistry, valence electrons are the electrons contained in the outermost, or valence, electron shell of an atom. Valence electrons are important in determining how an element reacts chemically with other elements: the fewer valence electrons an atom holds, the less stable it becomes and the more likely it is to react. The reverse is also true: the more complete the valence shell is, the more inert an atom is and the less likely it is to react chemically, because it takes a larger transfer of energy (photons) to remove an electron from, or add one to, a fuller shell. Valence electrons, like electrons in inner shells, can absorb or release energy in the form of photons. This gain or loss of energy can make an electron jump to another shell or even break free from the atom and its valence shell. When an electron absorbs one or more photons, it moves to a more outer shell depending on how much energy it has gained (an "excited state"); when it releases photons, it moves to a more inner shell.
A helium (He) atom model displays two valence electrons located in its outermost energy level. Helium is a member of the noble gases and contains two protons, two neutrons, and two electrons. The number of valence electrons of an element is determined by the periodic table group (vertical column) in which the element is categorized. With the exception of groups 3-12 (transition metals), the digit in the unit's place of the group number identifies how many valence electrons the elements in that column contain:
- Group 1 (I) (alkali metals): 1
- Group 2 (II) (alkaline earth metals): 2
- Groups 3-12 (transition metals): 1 or 2*
- Group 13 (III) (boron group): 3
- Group 14 (IV) (carbon group): 4
- Group 15 (V) (nitrogen group): 5
- Group 16 (VI) (chalcogens): 6
- Group 17 (VII) (halogens): 7
- Group 18 (VIII or 0) (noble gases): 8**
* The count of valence electrons is generally not useful for transition metals.
** Except for helium, which has only two valence electrons.
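The group rule above is mechanical enough to restate as a small lookup. This sketch only encodes the table as given, treats helium as the noted exception, and deliberately refuses to answer for transition metals, where the simple count is not meaningful:

```python
# Valence electrons by main-group number, per the group rule above.
# Groups 3-12 (transition metals) are excluded because the simple count
# is generally not useful for them; helium is the group-18 exception.

VALENCE_BY_GROUP = {1: 1, 2: 2, 13: 3, 14: 4, 15: 5, 16: 6, 17: 7, 18: 8}

def valence_electrons(group, element=None):
    if element == "He":
        return 2                     # helium's only shell is full with 2
    if 3 <= group <= 12:
        raise ValueError("valence count not well defined for transition metals")
    return VALENCE_BY_GROUP[group]

print(valence_electrons(1))          # lithium's group: 1 valence electron
print(valence_electrons(17))         # halogens: 7
print(valence_electrons(18, "He"))   # helium: 2, not 8
```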
Infrared (IR) spectra of molecules adsorbed on a silver electrode surface have been investigated by using the Kretschmann attenuated-total-reflection (ATR) method, where a thin metal film evaporated on an ATR prism was used as the electrode. The sensitivity of this technique is ca. 50 times higher than that of reflection-absorption spectroscopy (RAS) technique due to a surface-enhanced IR absorption phenomenon associated with surface roughness of the evaporated metal film. The advantage of the ATR spectroscopy is discussed in comparison with RAS. The mechanism of the surface-enhanced IR absorption phenomenon is also discussed theoretically. © 1993.
The method is so efficient that, for the first time, a thousand genes can be studied in parallel in ten thousand single human cells. Applications lie in fields of basic research and medical diagnostics. The new method shows that the activity of genes, and the spatial organization of the resulting transcript molecules, vary strongly between single cells. Whenever cells activate a gene, they produce gene-specific transcript molecules, which make the function of the gene available to the cell. The measurement of gene activity is a routine procedure in medical diagnostics, especially in cancer medicine. Today’s technologies determine the activity of genes by measuring the amount of transcript molecules. However, these technologies can measure neither the amount of transcript molecules of one thousand genes in ten thousand single cells, nor the spatial organization of transcript molecules within a single cell. The fully automated procedure, developed by biologists of the University of Zurich under the supervision of Prof. Lucas Pelkmans, allows, for the first time, a parallel measurement of the amount and spatial organization of single transcript molecules in ten thousand single cells. The results, which were recently published in the scientific journal Nature Methods, provide completely novel insights into the variability of gene activity in single cells.
Robots, a fluorescence microscope and a supercomputer
The method developed by Pelkmans’ PhD students Nico Battich and Thomas Stoeger is based on the combination of robots, an automated fluorescence microscope and a supercomputer. “When genes become active, specific transcript molecules are produced. We can stain them with the help of a robot”, explains Stoeger. Subsequently, fluorescence microscope images of brightly glowing transcript molecules are generated. These images were analyzed with the supercomputer Brutus of ETH Zurich.
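The counting step at the heart of such a pipeline is conceptually simple: threshold the fluorescence image and count connected bright regions, each taken as one transcript spot. The sketch below is a minimal stand-in for that idea on a tiny synthetic image; the actual spot detection in the published pipeline is far more sophisticated.

```python
# Minimal sketch of spot counting in image-based transcriptomics:
# threshold a fluorescence image, then count connected bright regions.
# Illustrative only; real pipelines use careful spot detection.

def count_spots(image, threshold):
    h, w = len(image), len(image[0])
    bright = [[px > threshold for px in row] for row in image]
    seen = [[False] * w for _ in range(h)]
    spots = 0
    for y in range(h):
        for x in range(w):
            if bright[y][x] and not seen[y][x]:
                spots += 1                     # new connected region found
                stack = [(y, x)]               # flood-fill to mark it seen
                while stack:
                    cy, cx = stack.pop()
                    if 0 <= cy < h and 0 <= cx < w and bright[cy][cx] and not seen[cy][cx]:
                        seen[cy][cx] = True
                        stack += [(cy + 1, cx), (cy - 1, cx), (cy, cx + 1), (cy, cx - 1)]
    return spots

# Tiny synthetic "cell image" with two fluorescent spots (values 0-255):
img = [[0, 0, 0, 0, 0],
       [0, 200, 210, 0, 0],
       [0, 0, 0, 0, 180],
       [0, 0, 0, 0, 190]]
print(count_spots(img, 100))  # 2
```

Run per cell and per gene across thousands of images, counts like this give the amount of transcripts, and the spot coordinates give their spatial organization.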
With this method, one thousand human genes can be studied in ten thousand single cells. According to Pelkmans, the advantages of this method are the high number of single cells and the possibility to study, for the first time, the spatial organization of the transcript molecules of many genes.
New insights into the spatial organization of transcript molecules
The analysis of the new data shows that individual cells differ in the activity of their genes. While the scientists had suspected a high variability in the amount of transcript molecules, they were surprised to discover a strong variability in the spatial organization of transcript molecules within single cells and between multiple single cells. The transcript molecules adopted distinctive patterns. “We realized that genes with a similar function also have a similar variability in the transcript patterns,” explains Battich. “This similarity exceeds the variability in the amount of transcript molecules, and allows us to predict the function of individual genes.” The scientists suspect that transcript patterns are a countermeasure against the variability in the amount of transcript molecules; such patterns would thus be responsible for the robustness of processes within a cell. The importance of these new insights was summarized by Pelkmans: “Our method will be of importance to basic research and the understanding of cancer tumors because it allows us to map the activity of genes within single tumor cells.” Nico Battich, Thomas Stoeger, Lucas Pelkmans. Image-based transcriptomics in thousands of single human cells at single-molecule resolution. Nature Methods. DOI: 10.1038/nmeth.2657 This study was supported by SystemsX.ch and the University Research Priority Program in Functional Genomics. Contact: Prof. Dr.
Lucas Pelkmans
Beat Müller | Universität Zürich
Vinegar flies should normally try to avoid their sick conspecifics to prevent becoming infected themselves. Nevertheless, as researchers from the Max Planck Institute for Chemical Ecology and Cornell University recently found out, they are irresistibly attracted to the smell given off by sick flies. A dramatic increase in the production of the sex pheromones responsible for the attractive odor of the infected flies is caused by pathogens: the deadly germs use this perfidious strategy to infect healthy flies and spread even further (Nature Communications, August 16, 2017). Markus Knaden and Bill Hansson, and their colleagues at the Department of Evolutionary Neuroethology, study ecologically relevant odors in the natural environment of insects, especially vinegar flies. In this new study they focused on a deadly smell: the odor of conspecifics carrying a lethal bacterial infection.
Figure: Mating experiments with Drosophila melanogaster. Enhanced sexual attractiveness of sick flies does not lead to reproductive success; increased pheromone production benefits only the pathogens.
“We had originally hoped to find a dedicated neuronal circuit in the flies specialized to detect and avoid sickness odors. Instead we observed that healthy flies were especially attracted to the smell of infected ones. When we realized that flies cannot avoid becoming infected, because sick flies produce particularly high amounts of pheromones, we were surprised but found that even more interesting,” says Markus Knaden, one of the leaders of the study. State-of-the-art analytical methods enabled the researchers to identify and quantify the odors of single flies. Vinegar flies suffering from a bacterial infection, and their feces, emitted dramatically increased amounts of the typical odors that attract other flies.
The hypothesis that last-minute pheromone emission by sick insects would enhance their reproductive success turned out to be wrong, as mating assays demonstrated that sick flies were barely able to copulate. Insect immunologist Nicolas Buchon from Cornell University and his team, who were also involved in the study, noticed that the increase in pheromone production matched the up-regulation of certain immune responses in the flies. Ian Keesey, the first author of the study, and his colleagues in Jena therefore tested mutant flies which lacked the ability to produce these responses, and found that these flies emitted far fewer pheromones when they became infected, in comparison to sick wild-type flies. Further analysis of the insects’ metabolism convinced the researchers that ongoing bacterial growth, and the subsequent damage caused by the pathogens, is necessary to induce increases in pheromone production. The scientists observed similar results when they conducted experiments with other fly species: seven other Drosophila species, as well as the yellow fever mosquito Aedes aegypti, dramatically changed their olfactory profiles after infection with the pathogen. Manipulation of social communication in insects by pathogenic bacteria thus seems to be a more general phenomenon in nature than previously thought. Markus Knaden hopes that the new insights can one day contribute to useful applications: “A well-established method to combat insect-transmitted diseases and to control agricultural pest insects is the use of pheromone traps. By infecting insects with bacteria we could generally increase their pheromone emission. This could enable us to identify novel pheromones in species that have not been investigated so far.” [AO/KG/EW] Keesey, I. W., Koerte, S., Khallaf, M. A., Retzke, T., Guillou, A., Grosse-Wilde, E., Buchon, N., Knaden, M., Hansson, B. S. (2017). Pathogenic bacteria enhance dispersal through alteration of Drosophila social communication.
Nature Communications
Prof. Dr. Bill S. Hansson, Max Planck Institute for Chemical Ecology, Hans-Knöll-Str. 8, 07743 Jena, +49 3641 57-1401, E-Mail email@example.com
Dr. Markus Knaden, Max Planck Institute for Chemical Ecology, Hans-Knöll-Str. 8, 07743 Jena, +49 3641 57-1421, E-Mail firstname.lastname@example.org
Contact and Media Requests: Angela Overmeyer M.A., Max Planck Institute for Chemical Ecology, Hans-Knöll-Str. 8, 07743 Jena, +49 3641 57-2110, E-Mail email@example.com
Download high-resolution images via http://www.ice.mpg.de/ext/downloads2017.html
http://www.ice.mpg.de/ext/index.php?id=evolutionary-neuroethology&L=0 Department of Evolutionary Neuroethology
Angela Overmeyer | Max-Planck-Institut für chemische Ökologie
Publishing their latest research in Current Biology, the Newcastle University team have discovered that mantis 3D vision works differently from all previously known forms of biological 3D vision. 3D or stereo vision helps us work out the distances to the things we see. Each of our eyes sees a slightly different view of the world. Our brains merge these two views to create a single image, while using the differences between the two views to work out how far away things are. But humans are not the only animals that have stereo vision. Other animals include monkeys, cats, horses, owls and toads, but the only insect known to have stereo vision is the praying mantis. A team at the Institute of Neuroscience at Newcastle University funded by the Leverhulme Trust have been investigating whether praying mantis 3D vision works in the same way as humans’. To investigate this they created special insect 3D glasses which were temporarily glued on with beeswax. In their insect 3D cinema, they could show the mantis a movie of tasty prey, apparently hovering right in front of the mantis. The illusion is so good the mantises try to catch it. The scientists could now show the mantises not only simple movies of bugs, but the complex dot-patterns used to investigate human 3D vision. This enabled them to compare human and insect 3D vision for the first time. Humans are incredibly good at seeing 3D in still images. We do this by matching up the details of the picture seen in each eye. But mantises only attack moving prey so their 3D doesn’t need to work in still images. The team found mantises don’t bother about the details of the picture but just look for places where the picture is changing. This makes mantis 3D vision very robust. Even if the scientists made the two eyes’ images completely different, mantises can still match up the places where things are changing. They did so even when humans couldn’t. 
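The difference from detail-matching stereo can be made concrete with a toy example: give each eye a completely different texture, mark only where each eye's image changes between two moments, and compute disparity from those change locations. This is purely illustrative, not the stimuli or model used in the study.

```python
# Toy version of change-based stereo: ignore what the picture looks like
# and match only *where it changes* between two moments in time. The
# disparity between change locations in the two eyes then indicates depth.

def change_map(frame_a, frame_b, threshold=10):
    return [abs(a - b) > threshold for a, b in zip(frame_a, frame_b)]

def disparity_from_change(left_t0, left_t1, right_t0, right_t1):
    left_change = change_map(left_t0, left_t1)
    right_change = change_map(right_t0, right_t1)
    # Location of change in each eye (first changed pixel, for simplicity):
    lx = left_change.index(True)
    rx = right_change.index(True)
    return lx - rx

# The two eyes see completely different textures, but the same moving target:
left_t0  = [5, 5, 5, 90, 5, 5];  left_t1  = [5, 5, 5, 5, 5, 5]    # change at x=3
right_t0 = [40, 7, 61, 2, 9, 3]; right_t1 = [40, 50, 61, 2, 9, 3]  # change at x=1
print(disparity_from_change(left_t0, left_t1, right_t0, right_t1))  # 2
```

Because only the change locations are compared, the match survives even when the two eyes' images share no detail at all, which mirrors the experiment in which mantises, but not humans, still perceived depth.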
It's a bug's life
“This is a completely new form of 3D vision as it is based on change over time instead of static images,” said behavioural ecologist Dr Vivek Nityananda at Newcastle University. “In mantises it is probably designed to answer the question ‘is there prey at the right distance for me to catch?’” As part of the wider research, a Newcastle University engineering student developed an electronic mantis arm which mimics the distinct striking action of the insect. Fellow team member from the School of Engineering, Dr Ghaith Tarawneh, adds: “Many robots use stereo vision to help them navigate, but this is usually based on complex human stereo. Since insect brains are so tiny, their form of stereo vision can’t require much computer processing. This means it could find useful applications in low-power autonomous robots.” Reference: A novel form of stereo vision in the praying mantis. Vivek Nityananda, Ghaith Tarawneh, Sid Henriksen, Diana Umeton, Adam Simmons, Jenny C. A. Read. Current Biology. DOI: 10.1016/j.cub.2018.01.012
A group of researchers from the University of Michigan wondered how ethanol-based fuels would spread in the event of a large aquatic spill. They found that ethanol-based liquids mix actively with water, very different from how pure gasoline interacts with water and potentially more dangerous to aquatic life. The scientists will present their results, which could impact the response guidelines for ethanol fuel-based spills, at the American Physical Society’s (APS) Division of Fluid Dynamics (DFD) meeting, held Nov. 18 – 20, in San Diego, Calif. “Ethanol/gasoline blends are often presented as more environmentally benign than pure gasoline, but there is, in fact, little scientific research into the effects these blends could have on the health of surface waters,” says Avery Demond, an associate professor and director of the Environmental and Water Resources Engineering program at the University of Michigan, and one of the researchers who is working on the project. Some reports written for the State of California include methods for calculating the spread of ethanol into water based on a passive diffusion/dispersion process, notes Demond, but the method was not based on strong scientific evidence of how the two fluids interact. The Michigan researchers were motivated to fill some of the knowledge gaps. They experimented by filling a tank with water, covering the water with a plate, and pouring ethanol mixtures on top. The plate was then pulled away and the researchers recorded videos of the two fluids as they began to mix. The videos showed flow patterns called convection cells forming at the interface of the ethanol mixture and water. The mixing of the two fluids produced heat that changed the density and viscosity of the fluid, giving rise to circulation currents. In contrast, pure gasoline is essentially insoluble in water and primarily remains on the surface where it vaporizes into the air. 
“The mixing behavior [of ethanol-based fuel mixtures and water], from my perspective, is very unusual,” says Demond. “I’ve never seen anything quite like it and it certainly is not passive the way that modeling guidelines suggest.” Aline Cotel, also an associate professor at the University of Michigan and another member of the research team, will present videos of the unusual mixing patterns at the conference.

As a next step, the researchers would like to study how different ethanol mixtures vaporize, helping them to determine how much of a spill would end up mixed into the water and how much would volatilize into the air. Although ethanol is biodegradable, in high concentrations it can be toxic to fish and other aquatic life. The ethanol in ethanol/gasoline blends might also transport some of the carcinogenic components of gasoline into the water during the mixing process.

“We can’t make statements about the environmental impact of ethanol before we’ve more fully investigated its potential effects on surface water quality in the event of a spill,” note the researchers. Ultimately, they hope their work will help answer outstanding questions about how ethanol mixes with water, giving scientists and policy makers a firmer grasp of the potential risks of ethanol-based biofuels.

Presentation: “Characterization of Mixing Between Water and Biofuels,” 9:31 a.m. on Tuesday, Nov. 20, in room 23A. Abstract: http://meeting.aps.org/Meeting/DFD12/Event/178765
This release was prepared by the American Institute of Physics (AIP) on behalf of the American Physical Society’s (APS) Division of Fluid Dynamics (DFD). Charles Blue | Newswise Science News
Release of Genetically Modified Microorganisms in Natural Environments: Scientific and Ethical Problems

Genetically engineered organisms are likely to be the cornerstone of the commercial application of biotechnology in the coming decades. Fundamental understanding in molecular biology has grown exponentially over the last 20 years, to the point that foreign DNA can be inserted into the genome of most organisms: it is now possible to introduce new and useful genes into organisms, or to inactivate or modify genes within organisms, thereby removing certain traits or disarming pathogens. A wealth of potential applications has been revealed in fields including agriculture, medicine and chemistry, and several new pharmaceutical products are now on the market, allowing revolutionary new approaches to therapy and prophylaxis. In addition, herbicide-resistant and insecticide-producing plants have been prepared and shown to be effective in controlled trials. However, most of these applications are not yet widely used, since there is an increased awareness of the need to assess the possible consequences of the controlled release of genetically modified organisms into the environment. Moreover, the new possibility of increasing the insect resistance of plants or improving their growth rate by the deliberate release of genetically modified microorganisms has caused concern among some groups and individuals.

Keywords: natural transformation; Bacillus anthracis; soil microcosm; plasmid transfer; conjugative transfer