Columns: id (int64), url (string), text (string), source (string), categories (string, 160 classes), token_count (int64)
24,609,468
https://en.wikipedia.org/wiki/MOA-2008-BLG-310L
MOA-2008-BLG-310L is a 23rd magnitude star located at least 20,000 light years away in the constellation Scorpius. The star has a mass of 0.67 solar masses, which implies that it is probably a late K-type star. Planetary system In 2009, during a microlensing event, a planet was found orbiting this star at a distance of 1.25 AU, with a mass 0.23 times that of Jupiter. See also MOA-2007-BLG-192L OGLE-2005-BLG-390L List of extrasolar planets References G-type main-sequence stars Scorpius Planetary systems with one confirmed planet Gravitational lensing
MOA-2008-BLG-310L
Astronomy
144
44,062
https://en.wikipedia.org/wiki/X-ray%20astronomy
X-ray astronomy is an observational branch of astronomy which deals with the observation and detection of X-rays from astronomical objects. X-radiation is absorbed by the Earth's atmosphere, so instruments to detect X-rays must be taken to high altitude by balloons, sounding rockets, and satellites. X-ray astronomy uses a type of space telescope that can detect X-ray radiation which standard optical telescopes, such as the Mauna Kea Observatories, cannot. X-ray emission is expected from astronomical objects that contain extremely hot gases at temperatures from about a million kelvin (K) to hundreds of millions of kelvin (MK). Moreover, the maintenance of the E-layer of ionized gas high in the Earth's thermosphere also suggested a strong extraterrestrial source of X-rays. Although theory predicted that the Sun and the stars would be prominent X-ray sources, there was no way to verify this because Earth's atmosphere blocks most extraterrestrial X-rays. It was not until ways of sending instrument packages to high altitudes were developed that these X-ray sources could be studied. The existence of solar X-rays was confirmed in the mid-twentieth century by V-2s converted to sounding rockets, and the detection of extraterrestrial X-rays has been the primary or secondary mission of multiple satellites since 1958. The first cosmic (beyond the Solar System) X-ray source was discovered by a sounding rocket in 1962. Called Scorpius X-1 (Sco X-1), as it was the first X-ray source found in the constellation Scorpius, its X-ray emission is 10,000 times greater than its visual emission, whereas that of the Sun is about a million times less. In addition, its energy output in X-rays is 100,000 times greater than the total emission of the Sun in all wavelengths. Many thousands of X-ray sources have since been discovered. In addition, the intergalactic space in galaxy clusters is filled with a hot, but very dilute gas at a temperature between 100 and 1000 megakelvins (MK). The total amount of hot gas is five to ten times the total mass in the visible galaxies. History of X-ray astronomy In 1927, E.O. Hulburt of the US Naval Research Laboratory and associates Gregory Breit and Merle A. Tuve of the Carnegie Institution of Washington explored the possibility of equipping Robert H. Goddard's rockets to explore the upper atmosphere. "Two years later, he proposed an experimental program in which a rocket might be instrumented to explore the upper atmosphere, including detection of ultraviolet radiation and X-rays at high altitudes". In the late 1930s, the presence of a very hot, tenuous gas surrounding the Sun was inferred indirectly from optical coronal lines of highly ionized species; the Sun has since been known to be surrounded by a hot, tenuous corona. In the mid-1940s, radio observations revealed a radio corona around the Sun. The search for X-ray sources from above the Earth's atmosphere began on August 5, 1948, at 12:07 GMT, when a US Army (formerly German) V-2 rocket was launched from White Sands Proving Grounds as part of Project Hermes. The first solar X-rays were recorded by T. Burnight. Through the 1960s, 70s, 80s, and 90s, the sensitivity of detectors increased greatly. In addition, the ability to focus X-rays has developed enormously, allowing the production of high-quality images of many fascinating celestial objects. 
Sounding rocket flights The first sounding rocket flights for X-ray research were accomplished at the White Sands Missile Range in New Mexico with a V-2 rocket on January 28, 1949. A detector was placed in the nose cone section and the rocket was launched in a suborbital flight to an altitude just above the atmosphere. X-rays from the Sun were detected by the U.S. Naval Research Laboratory Blossom experiment on board. An Aerobee 150 rocket launched on June 19, 1962 (UTC) detected the first X-rays emitted from a source outside our solar system (Scorpius X-1). It is now known that such X-ray sources as Sco X-1 are compact stars, such as neutron stars or black holes. Material falling into a black hole may emit X-rays, but the black hole itself does not. The energy source for the X-ray emission is gravity. Infalling gas and dust is heated by the strong gravitational fields of these and other celestial objects. Based on discoveries in this new field of X-ray astronomy, starting with Scorpius X-1, Riccardo Giacconi received the Nobel Prize in Physics in 2002. The largest drawback to rocket flights is their very short duration (just a few minutes above the atmosphere before the rocket falls back to Earth) and their limited field of view. A rocket launched from the United States will not be able to see sources in the southern sky; a rocket launched from Australia will not be able to see sources in the northern sky. X-ray Quantum Calorimeter (XQC) project In astronomy, the interstellar medium (or ISM) is the gas and cosmic dust that pervade interstellar space: the matter that exists between the star systems within a galaxy. It fills interstellar space and blends smoothly into the surrounding intergalactic medium. The interstellar medium consists of an extremely dilute (by terrestrial standards) mixture of ions, atoms, molecules, larger dust grains, cosmic rays, and (galactic) magnetic fields. The energy that occupies the same volume, in the form of electromagnetic radiation, is the interstellar radiation field. Of interest is the hot ionized medium (HIM), consisting of coronal cloud ejections from star surfaces at 10⁶–10⁷ K, which emits X-rays. The ISM is turbulent and full of structure on all spatial scales. Stars are born deep inside large complexes of molecular clouds, typically a few parsecs in size. During their lives and deaths, stars interact physically with the ISM. Stellar winds from young clusters of stars (often with giant or supergiant HII regions surrounding them) and shock waves created by supernovae inject enormous amounts of energy into their surroundings, which leads to hypersonic turbulence. The resultant structures are stellar wind bubbles and superbubbles of hot gas. The Sun is currently traveling through the Local Interstellar Cloud, a denser region in the low-density Local Bubble. To measure the spectrum of the diffuse X-ray emission from the interstellar medium over the energy range 0.07 to 1 keV, NASA launched a Black Brant 9 from White Sands Missile Range, New Mexico on May 1, 2008. The Principal Investigator for the mission was Dr. Dan McCammon of the University of Wisconsin–Madison. Balloons Balloon flights can carry instruments to altitudes of up to 40 km above sea level, where they are above as much as 99.997% of the Earth's atmosphere. Unlike a rocket, where data are collected during a brief few minutes, balloons are able to stay aloft for much longer. However, even at such altitudes, much of the X-ray spectrum is still absorbed. 
X-rays with energies less than 35 keV (5,600 aJ) cannot reach balloons. On July 21, 1964, the Crab Nebula supernova remnant was discovered to be a hard X-ray (15–60 keV) source by a scintillation counter flown on a balloon launched from Palestine, Texas, United States. This was likely the first balloon-based detection of X-rays from a discrete cosmic X-ray source. High-energy focusing telescope The high-energy focusing telescope (HEFT) is a balloon-borne experiment to image astrophysical sources in the hard X-ray (20–100 keV) band. Its maiden flight took place in May 2005 from Fort Sumner, New Mexico, USA. The angular resolution of HEFT is c. 1.5'. Rather than using a grazing-angle X-ray telescope, HEFT makes use of novel tungsten-silicon multilayer coatings to extend the reflectivity of nested grazing-incidence mirrors beyond 10 keV. HEFT has an energy resolution of 1.0 keV full width at half maximum at 60 keV. HEFT was launched for a 25-hour balloon flight in May 2005. The instrument performed within specification and observed Tau X-1, the Crab Nebula. High-resolution gamma-ray and hard X-ray spectrometer (HIREGS) A balloon-borne experiment called the High-resolution gamma-ray and hard X-ray spectrometer (HIREGS) observed X-ray and gamma-ray emissions from the Sun and other astronomical objects. It was launched from McMurdo Station, Antarctica, in December 1991 and 1992. Steady winds carried the balloon on a circumpolar flight lasting about two weeks each time. Rockoons The rockoon, a blend of rocket and balloon, was a solid fuel rocket that, rather than being immediately lit while on the ground, was first carried into the upper atmosphere by a gas-filled balloon. Then, once separated from the balloon at its maximum height, the rocket was automatically ignited. This achieved a higher altitude, since the rocket did not have to move through the lower, thicker air layers that would have required much more chemical fuel. The original concept of "rockoons" was developed by Cmdr. Lee Lewis, Cmdr. G. Halvorson, S. F. Singer, and James A. Van Allen during the Aerobee rocket firing cruise of the USS Norton Sound on March 1, 1949. From July 17 to July 27, 1956, the Naval Research Laboratory (NRL) launched eight Deacon rockoons from shipboard for solar ultraviolet and X-ray observations at ~30° N, ~121.6° W, southwest of San Clemente Island, with an apogee of 120 km. X-ray telescopes and mirrors Satellites are needed because X-rays are absorbed by the Earth's atmosphere; instruments carried to high altitude by balloons and sounding rockets can observe only briefly, so sustained observations require orbiting observatories. X-ray telescopes (XRTs) have varying directionality or imaging ability based on glancing angle reflection rather than refraction or large deviation reflection. This limits them to much narrower fields of view than visible or UV telescopes. The mirrors can be made of ceramic or metal foil. The first X-ray telescope in astronomy was used to observe the Sun. On April 19, 1960, the very first X-ray image of the Sun was taken using a pinhole camera on an Aerobee-Hi rocket; the first X-ray picture of the Sun taken with a grazing incidence telescope followed in 1963, from a rocket-borne instrument. The utilization of X-ray mirrors for extrasolar X-ray astronomy simultaneously requires the ability to determine the arrival location of an X-ray photon in two dimensions and a reasonable detection efficiency. 
X-ray astronomy detectors X-ray astronomy detectors have been designed and configured primarily for energy and occasionally for wavelength detection using a variety of techniques usually limited to the technology of the time. X-ray detectors collect individual X-rays (photons of X-ray electromagnetic radiation) and count the number of photons collected (intensity), the energy (0.12 to 120 keV) of the photons collected, the wavelength (c. 0.008–8 nm), or how fast the photons are detected (counts per hour), to tell us about the object that is emitting them. Astrophysical sources of X-rays Several types of astrophysical objects emit, fluoresce, or reflect X-rays, from galaxy clusters, through black holes in active galactic nuclei (AGN), to galactic objects such as supernova remnants, stars, and binary stars containing a white dwarf (cataclysmic variable stars and super soft X-ray sources), neutron star, or black hole (X-ray binaries). Some Solar System bodies emit X-rays, the most notable being the Moon, although most of the X-ray brightness of the Moon arises from reflected solar X-rays. A combination of many unresolved X-ray sources is thought to produce the observed X-ray background. The X-ray continuum can arise from bremsstrahlung, black-body radiation, synchrotron radiation, or what is called inverse Compton scattering of lower-energy photons by relativistic electrons, knock-on collisions of fast protons with atomic electrons, and atomic recombination, with or without additional electron transitions. An intermediate-mass X-ray binary (IMXB) is a binary star system where one of the components is a neutron star or a black hole and the other component is an intermediate-mass star. Hercules X-1 is composed of a neutron star accreting matter from a normal star (HZ Herculis), probably due to Roche lobe overflow. Hercules X-1 is the prototype for the massive X-ray binaries, although it falls on the borderline between high- and low-mass X-ray binaries. In July 2020, astronomers reported the observation of a "hard tidal disruption event candidate" associated with ASASSN-20hx, located near the nucleus of galaxy NGC 6297, and noted that the observation represented one of the "very few tidal disruption events with hard powerlaw X-ray spectra". Celestial X-ray sources The celestial sphere has been divided into 88 constellations. The International Astronomical Union (IAU) constellations are areas of the sky. Each of these contains remarkable X-ray sources. Some of them have been identified from astrophysical modeling to be galaxies or black holes at the centers of galaxies. Some are pulsars. As with sources already successfully modeled by X-ray astrophysics, striving to understand the generation of X-rays by the apparent source helps to understand the Sun, the universe as a whole, and how these affect us on Earth. Constellations are an astronomical device for handling observation and precision independent of current physical theory or interpretation. Astronomy has been around for a long time, while physical theory changes with time. With respect to celestial X-ray sources, X-ray astrophysics tends to focus on the physical reason for X-ray brightness, whereas X-ray astronomy tends to focus on their classification, order of discovery, variability, resolvability, and their relationship with nearby sources in other constellations. 
Within the constellations Orion and Eridanus, and stretching across them, is a soft X-ray "hot spot" known as the Orion-Eridanus Superbubble, the Eridanus Soft X-ray Enhancement, or simply the Eridanus Bubble, a 25° area of interlocking arcs of Hα-emitting filaments. Soft X-rays are emitted by hot gas (T ~ 2–3 MK) in the interior of the superbubble. This bright object forms the background for the "shadow" of a filament of gas and dust. The filament is shown by the overlaid contours, which represent 100 micrometre emission from dust at a temperature of about 30 K as measured by IRAS. Here the filament absorbs soft X-rays between 100 and 300 eV, indicating that the hot gas is located behind the filament. This filament may be part of a shell of neutral gas that surrounds the hot bubble. Its interior is energized by ultraviolet (UV) light and stellar winds from hot stars in the Orion OB1 association. These stars energize a superbubble about 1,200 light-years across which is observed in the visual (Hα) and X-ray portions of the spectrum. Explorational X-ray astronomy Usually observational astronomy is considered to occur on Earth's surface (or beneath it in neutrino astronomy). The idea of limiting observation to Earth includes orbiting the Earth. As soon as the observer leaves the cozy confines of Earth, the observer becomes a deep space explorer. Except for Explorer 1 and Explorer 3 and the earlier satellites in the series, usually if a probe is going to be a deep space explorer it leaves the Earth or an orbit around the Earth. For a satellite or space probe to qualify as a deep space X-ray astronomer/explorer or "astronobot"/explorer, all it needs is to carry an XRT or X-ray detector aboard and to leave Earth orbit. Ulysses was launched on October 6, 1990, and reached Jupiter for its "gravitational slingshot" in February 1992. It passed the south solar pole in June 1994 and crossed the ecliptic equator in February 1995. The solar X-ray and cosmic gamma-ray burst experiment (GRB) had 3 main objectives: studying and monitoring solar flares, detecting and localizing cosmic gamma-ray bursts, and in-situ detection of Jovian aurorae. Ulysses was the first satellite carrying a gamma burst detector which went outside the orbit of Mars. The hard X-ray detectors operated in the range 15–150 keV. The detectors consisted of 23-mm thick × 51-mm diameter CsI(Tl) crystals mounted via plastic light tubes to photomultipliers. The hard detector changed its operating mode depending on (1) measured count rate, (2) ground command, or (3) change in spacecraft telemetry mode. The trigger level was generally set for 8-sigma above background, and the sensitivity was 10⁻⁶ erg/cm² (1 nJ/m²). When a burst trigger was recorded, the instrument switched to recording high-resolution data into a 32-kbit memory for a slow telemetry read out. Burst data consisted of either 16 s of 8-ms resolution count rates or 64 s of 32-ms count rates from the sum of the 2 detectors. There were also 16-channel energy spectra from the sum of the 2 detectors (taken either in 1, 2, 4, 16, or 32 second integrations). During 'wait' mode, the data were taken either in 0.25 or 0.5 s integrations and 4 energy channels (with the shortest integration time being 8 s). Again, the outputs of the 2 detectors were summed. The Ulysses soft X-ray detectors consisted of 2.5-mm thick × 0.5 cm² area Si surface barrier detectors. A 100 mg/cm² beryllium foil front window rejected the low energy X-rays and defined a conical FOV of 75° (half-angle). 
These detectors were passively cooled and operated in the temperature range −35 to −55 °C. This detector had 6 energy channels, covering the range 5–20 keV. Theoretical X-ray astronomy Theoretical X-ray astronomy is a branch of theoretical astronomy that deals with the theoretical astrophysics and theoretical astrochemistry of X-ray generation, emission, and detection as applied to astronomical objects. Like theoretical astrophysics, theoretical X-ray astronomy uses a wide variety of tools, which include analytical models to approximate the behavior of a possible X-ray source and computational numerical simulations to approximate the observational data. Once potential observational consequences are available, they can be compared with experimental observations. Observers can look for data that refutes a model or helps in choosing between several alternate or conflicting models. Theorists also try to generate or modify models to take into account new data. In the case of an inconsistency, the general tendency is to try to make minimal modifications to the model to fit the data. In some cases, a large amount of inconsistent data over time may lead to total abandonment of a model. Most of the topics in astrophysics, astrochemistry, astrometry, and other fields that are branches of astronomy studied by theoreticians involve X-rays and X-ray sources. Many of the beginnings for a theory can be found in an Earth-based laboratory where an X-ray source is built and studied. Dynamos Dynamo theory describes the process through which a rotating, convecting, and electrically conducting fluid acts to maintain a magnetic field. This theory is used to explain the presence of anomalously long-lived magnetic fields in astrophysical bodies. If some of the stellar magnetic fields are really induced by dynamos, then field strength might be associated with rotation rate. Astronomical models From the observed X-ray spectrum, combined with spectral emission results for other wavelength ranges, an astronomical model addressing the likely source of X-ray emission can be constructed. For example, with Scorpius X-1 the X-ray spectrum steeply drops off as X-ray energy increases up to 20 keV, which is likely for a thermal-plasma mechanism. In addition, there is no radio emission, and the visible continuum is roughly what would be expected from a hot plasma fitting the observed X-ray flux. The plasma could be a coronal cloud of a central object or a transient plasma, where the energy source is unknown, but could be related to the idea of a close binary. In the Crab Nebula X-ray spectrum there are three features that differ greatly from Scorpius X-1: its spectrum is much harder, its source diameter is measured in light-years (ly), not astronomical units (AU), and its radio and optical synchrotron emission are strong. Its overall X-ray luminosity rivals the optical emission and could be that of a nonthermal plasma. However, the Crab Nebula appears as an X-ray source that is a central freely expanding ball of dilute plasma, where the energy content is 100 times the total energy content of the large visible and radio portion, obtained from the unknown source. The "Dividing Line" as giant stars evolve to become red giants also coincides with the Wind and Coronal Dividing Lines. 
To explain the drop in X-ray emission across these dividing lines, a number of models have been proposed: low transition region densities, leading to low emission in coronae; high-density wind extinction of coronal emission; only cool coronal loops becoming stable; changes in the magnetic field structure to an open topology, leading to a decrease of magnetically confined plasma; or changes in the magnetic dynamo character, leading to the disappearance of stellar fields, leaving only small-scale, turbulence-generated fields among red giants. Analytical X-ray astronomy High-mass X-ray binaries (HMXBs) are composed of OB supergiant companion stars and compact objects, usually neutron stars (NS) or black holes (BH). Supergiant X-ray binaries (SGXBs) are HMXBs in which the compact objects orbit massive companions with orbital periods of a few days (3–15 d), and in circular (or slightly eccentric) orbits. SGXBs typically show the hard X-ray spectra of accreting pulsars, and most show strong absorption as obscured HMXBs. X-ray luminosity (Lx) increases up to 10³⁶ erg·s⁻¹ (10²⁹ watts). The mechanism triggering the different temporal behavior observed between the classical SGXBs and the recently discovered supergiant fast X-ray transients (SFXTs) is still debated. Stellar X-ray astronomy The first detection of stellar X-rays occurred on April 5, 1974, with the detection of X-rays from Capella. A rocket flight on that date briefly calibrated its attitude control system when a star sensor pointed the payload axis at Capella (α Aur). During this period, X-rays in the range 0.2–1.6 keV were detected by an X-ray reflector system co-aligned with the star sensor. The X-ray luminosity of Lx = 10³¹ erg·s⁻¹ (10²⁴ W) is four orders of magnitude above the Sun's X-ray luminosity. Stellar coronae Coronal stars, or stars within a coronal cloud, are ubiquitous among the stars in the cool half of the Hertzsprung-Russell diagram. Experiments with instruments aboard Skylab and Copernicus have been used to search for soft X-ray emission in the energy range ~0.14–0.284 keV from stellar coronae. The experiments aboard ANS succeeded in finding X-ray signals from Capella and Sirius (α CMa). X-ray emission from an enhanced solar-like corona was proposed for the first time. The high temperature of Capella's corona, as obtained from the first coronal X-ray spectrum of Capella using HEAO 1, required magnetic confinement unless it was a free-flowing coronal wind. In 1977, Proxima Centauri was discovered to be emitting high-energy radiation in the XUV. In 1978, α Cen was identified as a low-activity coronal source. With the operation of the Einstein observatory, X-ray emission was recognized as a characteristic feature common to a wide range of stars covering essentially the whole Hertzsprung-Russell diagram. The Einstein initial survey led to significant insights: X-ray sources abound among all types of stars, across the Hertzsprung-Russell diagram and across most stages of evolution; the X-ray luminosities and their distribution along the main sequence were not in agreement with the long-favored acoustic heating theories, but were now interpreted as the effect of magnetic coronal heating; and stars that are otherwise similar reveal large differences in their X-ray output if their rotation periods are different. To fit the medium-resolution spectrum of UX Arietis, subsolar abundances were required. 
Stellar X-ray astronomy is contributing toward a deeper understanding of magnetic fields in magnetohydrodynamic dynamos, the release of energy in tenuous astrophysical plasmas through various plasma-physical processes, and the interactions of high-energy radiation with the stellar environment. Current wisdom has it that the most massive coronal main-sequence stars are late-A or early-F stars, a conjecture that is supported both by observation and by theory. Young, low-mass stars Newly formed stars are known as pre-main-sequence stars during the stage of stellar evolution before they reach the main sequence. Stars in this stage (ages <10 million years) produce X-rays in their stellar coronae. However, their X-ray emission is 10³ to 10⁵ times stronger than for main-sequence stars of similar masses. X-ray emission for pre-main-sequence stars was discovered by the Einstein Observatory. This X-ray emission is primarily produced by magnetic reconnection flares in the stellar coronae, with many small flares contributing to the "quiescent" X-ray emission from these stars. Pre-main-sequence stars have large convection zones, which in turn drive strong dynamos, producing strong surface magnetic fields. This leads to the high X-ray emission from these stars, which lie in the saturated X-ray regime, unlike main-sequence stars that show rotational modulation of X-ray emission. Other sources of X-ray emission include accretion hotspots and collimated outflows. X-ray emission as an indicator of stellar youth is important for studies of star-forming regions. Most star-forming regions in the Milky Way Galaxy are projected on Galactic-Plane fields with numerous unrelated field stars. It is often impossible to distinguish members of a young stellar cluster from field-star contaminants using optical and infrared images alone. X-ray emission can easily penetrate moderate absorption from molecular clouds, and can be used to identify candidate cluster members. Unstable winds Given the lack of a significant outer convection zone, theory predicts the absence of a magnetic dynamo in earlier A stars. In early stars of spectral type O and B, shocks developing in unstable winds are the likely source of X-rays. Coolest M dwarfs Beyond spectral type M5, the classical αω dynamo can no longer operate as the internal structure of dwarf stars changes significantly: they become fully convective. As a distributed (or α²) dynamo may become relevant, both the magnetic flux on the surface and the topology of the magnetic fields in the corona should systematically change across this transition, perhaps resulting in some discontinuities in the X-ray characteristics around spectral class dM5. However, observations do not seem to support this picture: the long-time lowest-mass X-ray detection, VB 8 (M7e V), has shown steady emission at levels of X-ray luminosity (LX) ≈ 10²⁶ erg·s⁻¹ (10¹⁹ W) and flares up to an order of magnitude higher. Comparison with other late M dwarfs shows a rather continuous trend. Strong X-ray emission from Herbig Ae/Be stars Herbig Ae/Be stars are pre-main-sequence stars. As to their X-ray emission properties, some are reminiscent of hot stars, others point to coronal activity as in cool stars, in particular the presence of flares and very high temperatures. 
The nature of these strong emissions has remained controversial, with models including unstable stellar winds, colliding winds, magnetic coronae, disk coronae, wind-fed magnetospheres, accretion shocks, the operation of a shear dynamo, and the presence of unknown late-type companions. K giants The FK Com stars are giants of spectral type K with an unusually rapid rotation and signs of extreme activity. Their X-ray coronae are among the most luminous (LX ≥ 10³² erg·s⁻¹, or 10²⁵ W) and the hottest known, with dominant temperatures up to 40 MK. The current popular hypothesis for their origin involves a merger of a close binary system, in which the orbital angular momentum of the companion is transferred to the primary. Pollux is the brightest star in the constellation Gemini, despite its Beta designation, and the 17th brightest in the sky. Pollux is a giant orange K star that makes an interesting color contrast with its white "twin", Castor. Evidence has been found for a hot, outer, magnetically supported corona around Pollux, and the star is known to be an X-ray emitter. Eta Carinae New X-ray observations by the Chandra X-ray Observatory show three distinct structures: an outer, horseshoe-shaped ring about 2 light years in diameter, a hot inner core about 3 light-months in diameter, and a hot central source less than 1 light-month in diameter which may contain the superstar that drives the whole show. The outer ring provides evidence of another large explosion that occurred over 1,000 years ago. These three structures around Eta Carinae are thought to represent shock waves produced by matter rushing away from the superstar at supersonic speeds. The temperature of the shock-heated gas ranges from 60 MK in the central regions to 3 MK on the horseshoe-shaped outer structure. "The Chandra image contains some puzzles for existing ideas of how a star can produce such hot and intense X-rays," says Prof. Kris Davidson of the University of Minnesota. Davidson is principal investigator for the Eta Carinae observations by the Hubble Space Telescope. "In the most popular theory, X-rays are made by colliding gas streams from two stars so close together that they'd look like a point source to us. But what happens to gas streams that escape to farther distances? The extended hot stuff in the middle of the new image gives demanding new conditions for any theory to meet." Amateur X-ray astronomy Collectively, amateur astronomers observe a variety of celestial objects and phenomena, sometimes with equipment that they build themselves. The United States Air Force Academy (USAFA) is the home of the US's only undergraduate satellite program, and has developed and continues to develop the FalconLaunch sounding rockets. In addition to any direct amateur efforts to put X-ray astronomy payloads into space, there are opportunities that allow student-developed experimental payloads to be put on board commercial sounding rockets as a free-of-charge ride. There are major limitations to amateurs observing and reporting experiments in X-ray astronomy: the cost of building an amateur rocket or balloon to place a detector high enough, and the cost of appropriate parts to build a suitable X-ray detector. Major questions in X-ray astronomy As X-ray astronomy uses a major spectral probe to peer into the source, it is a valuable tool in efforts to understand many puzzles. 
Stellar magnetic fields Magnetic fields are ubiquitous among stars, yet we do not understand precisely why, nor have we fully understood the bewildering variety of plasma physical mechanisms that act in stellar environments. Some stars, for example, seem to have fossil magnetic fields left over from their period of formation, while others seem to generate the field anew frequently. Extrasolar X-ray source astrometry With the initial detection of an extrasolar X-ray source, the first question usually asked is "What is the source?" An extensive search is often made in other wavelengths, such as visible or radio, for possible coincident objects. Many of the verified X-ray locations still do not have readily discernible sources. X-ray astrometry becomes a serious concern that results in ever greater demands for finer angular resolution and spectral radiance. There are inherent difficulties in making X-ray/optical, X-ray/radio, and X-ray/X-ray identifications based solely on positional coincidence, especially with handicaps in making identifications, such as the large uncertainties in positional determinations made from balloons and rockets, poor source separation in the crowded region toward the galactic center, source variability, and the multiplicity of source nomenclature. X-ray source counterparts to stars can be identified by calculating the angular separation between source centroids and the position of the star. The maximum allowable separation is a compromise between a larger value to identify as many real matches as possible and a smaller value to minimize the probability of spurious matches. "An adopted matching criterion of 40" finds nearly all possible X-ray source matches while keeping the probability of any spurious matches in the sample to 3%." Solar X-ray astronomy All of the detected X-ray sources at, around, or near the Sun appear to be associated with processes in the corona, which is its outer atmosphere. Coronal heating problem In the area of solar X-ray astronomy, there is the coronal heating problem. The photosphere of the Sun has an effective temperature of 5,570 K, yet its corona has an average temperature of 1–2 × 10⁶ K; the hottest regions reach 8–20 × 10⁶ K. The high temperature of the corona shows that it is heated by something other than direct heat conduction from the photosphere. It is thought that the energy necessary to heat the corona is provided by turbulent motion in the convection zone below the photosphere, and two main mechanisms have been proposed to explain coronal heating. The first is wave heating, in which sound, gravitational, or magnetohydrodynamic waves are produced by turbulence in the convection zone. These waves travel upward and dissipate in the corona, depositing their energy in the ambient gas in the form of heat. The other is magnetic heating, in which magnetic energy is continuously built up by photospheric motion and released through magnetic reconnection in the form of large solar flares and myriad similar but smaller events called nanoflares. Currently, it is unclear whether waves are an efficient heating mechanism. All waves except Alfvén waves have been found to dissipate or refract before reaching the corona. In addition, Alfvén waves do not easily dissipate in the corona. Current research focus has therefore shifted towards flare heating mechanisms. 
Coronal mass ejection A coronal mass ejection (CME) is an ejected plasma consisting primarily of electrons and protons (in addition to small quantities of heavier elements such as helium, oxygen, and iron), plus the entraining coronal closed magnetic field regions. Evolution of these closed magnetic structures in response to various photospheric motions over different time scales (convection, differential rotation, meridional circulation) somehow leads to the CME. Small-scale energetic signatures such as plasma heating (observed as compact soft X-ray brightening) may be indicative of impending CMEs. The soft X-ray sigmoid (an S-shaped intensity of soft X-rays) is an observational manifestation of the connection between coronal structure and CME production. "Relating the sigmoids at X-ray (and other) wavelengths to magnetic structures and current systems in the solar atmosphere is the key to understanding their relationship to CMEs." The first detection of a coronal mass ejection as such was made on December 1, 1971, by R. Tousey of the US Naval Research Laboratory using OSO 7. Earlier observations of coronal transients, or even phenomena observed visually during solar eclipses, are now understood as essentially the same thing. The largest geomagnetic perturbation, resulting presumably from a "prehistoric" CME, coincided with the first-observed solar flare, in 1859. The flare was observed visually by Richard Christopher Carrington, and the geomagnetic storm was observed with the recording magnetograph at Kew Gardens. The same instrument recorded a crotchet, an instantaneous perturbation of the Earth's ionosphere by ionizing soft X-rays. This could not easily be understood at the time because it predated the discovery of X-rays (by Röntgen) and the recognition of the ionosphere (by Kennelly and Heaviside). Exotic X-ray sources A microquasar is a smaller cousin of a quasar: a radio-emitting X-ray binary, with an often resolvable pair of radio jets. LSI+61°303 is a periodic, radio-emitting binary system that is also the gamma-ray source CG135+01. Observations are revealing a growing number of recurrent X-ray transients, characterized by short outbursts with very fast rise times (tens of minutes) and typical durations of a few hours, that are associated with OB supergiants and hence define a new class of massive X-ray binaries: Supergiant Fast X-ray Transients (SFXTs). Observations made by Chandra indicate the presence of loops and rings in the hot X-ray emitting gas that surrounds Messier 87. A magnetar is a type of neutron star with an extremely powerful magnetic field, the decay of which powers the emission of copious amounts of high-energy electromagnetic radiation, particularly X-rays and gamma rays. X-ray dark stars During the solar cycle, the Sun is at times almost X-ray dark, almost an X-ray variable. Betelgeuse, on the other hand, appears to be always X-ray dark. Hardly any X-rays are emitted by red giants. There is a rather abrupt onset of X-ray emission around spectral type A7-F0, with a large range of luminosities developing across spectral class F. Altair is spectral type A7V and Vega is A0V. Altair's total X-ray luminosity is at least an order of magnitude larger than the X-ray luminosity for Vega. 
The outer convection zone of early F stars is expected to be very shallow and absent in A-type dwarfs, yet the acoustic flux from the interior reaches a maximum for late-A and early-F stars, provoking investigations of magnetic activity in A-type stars along three principal lines. Chemically peculiar stars of spectral type Bp or Ap are appreciable magnetic radio sources; most Bp/Ap stars remain undetected, and of those reported early on as producing X-rays, only a few can be identified as probably single stars. X-ray dark planets and comets X-ray observations offer the possibility to detect (X-ray dark) planets as they eclipse part of the corona of their parent star while in transit. "Such methods are particularly promising for low-mass stars as a Jupiter-like planet could eclipse a rather significant coronal area." As X-ray detectors have become more sensitive, they have observed that some planets and other normally X-ray non-luminescent celestial objects under certain conditions emit, fluoresce, or reflect X-rays. Comet Lulin NASA's Swift Gamma-Ray Burst Mission satellite was monitoring Comet Lulin as it closed to within 63 Gm of Earth. For the first time, astronomers could see simultaneous UV and X-ray images of a comet. "The solar wind—a fast-moving stream of particles from the sun—interacts with the comet's broader cloud of atoms. This causes the solar wind to light up with X-rays, and that's what Swift's XRT sees", said Stefan Immler, of the Goddard Space Flight Center. This interaction, called charge exchange, results in X-rays from most comets when they pass within about three times Earth's distance from the Sun. Because Lulin is so active, its atomic cloud is especially dense. As a result, the X-ray-emitting region extends far sunward of the comet. See also Balloons for X-ray astronomy Crab (unit) Gamma-ray astronomy History of X-ray astronomy IRAS 13224-3809 List of X-ray space telescopes Solar X-ray astronomy Stellar X-ray astronomy Ultraviolet astronomy X-ray telescope References Sources The content of this article was adapted and expanded from http://imagine.gsfc.nasa.gov/ (Public Domain) External links How Many Known X-Ray (and Other) Sources Are There? Is My Favorite Object an X-ray, Gamma-Ray, or EUV Source? X-ray all-sky survey on WIKISKY Audio – Cain/Gay (2009) Astronomy Cast – X-Ray Astronomy Space plasmas Astronomical imaging Astronomical X-ray sources Observational astronomy Astronomical sub-disciplines
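A note on the units quoted in the detector section above: X-ray bands are given both as photon energies (0.12 to 120 keV) and as wavelengths (c. 0.008–8 nm). The two are views of the same quantity, related by the Planck relation E = hc/λ, with hc ≈ 1.2398 keV·nm. Below is a minimal conversion sketch; the band edges come from the text above and the constant is standard, so the only assumption is the rounding shown:

```python
# Convert between X-ray photon energy (keV) and wavelength (nm) via E = h*c/lambda.
HC_KEV_NM = 1.23984  # h*c expressed in keV*nm (standard physical constant)

def energy_to_wavelength_nm(energy_kev: float) -> float:
    return HC_KEV_NM / energy_kev

def wavelength_to_energy_kev(wavelength_nm: float) -> float:
    return HC_KEV_NM / wavelength_nm

# Band edges quoted for X-ray detectors in the text above:
for e_kev in (0.12, 120.0):
    print(f"{e_kev:7.2f} keV  <->  {energy_to_wavelength_nm(e_kev):8.4f} nm")
# 0.12 keV corresponds to ~10.3 nm and 120 keV to ~0.0103 nm, which brackets
# the quoted ~0.008-8 nm wavelength range to within rounding.
```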
X-ray astronomy
Physics,Astronomy
8,725
31,141,546
https://en.wikipedia.org/wiki/Institution%20of%20Engineers%2C%20Bangladesh
The Institution of Engineers, Bangladesh, commonly referred to as IEB, is the national professional organisation of engineers in Bangladesh. It is registered under the Societies Registration Act (1860) of the country. Within the country, it has 18 centers and 31 sub-centers. It has 10 overseas chapters in different countries of the world, namely: Australia, Kuwait, Malaysia, Oman, Qatar, Saudi Arabia, Singapore, Thailand, the United Arab Emirates and the United States. It formed the Board of Accreditation for Engineering and Technical Education (BAETE), which accredits engineering degree programmes in Bangladesh. History The institution was founded in 1948, after the end of British colonial rule in the Indian subcontinent, as the Institution of Engineers, Pakistan, with its headquarters at Dhaka. After the Bangladesh Liberation War of 1971, it was renamed the Institution of Engineers, Bangladesh. Around 1947, a number of senior engineers took the initiative to establish a professional forum of engineers. Soon after, the institution's foundation stone was laid on 7 May 1948, at Ramna in Dhaka city. Divisions IEB has eight divisions within it. They are: Agriculture Engineering Division Chemical Engineering Division Civil Engineering Division Computer Engineering Division (established in 2011) Electrical Engineering Division Electronics Engineering Division Mechanical Engineering Division Textile Engineering Division Organs IEB has four independent organs within it. They are: Board of Accreditation for Engineering and Technical Education Bangladesh Professional Engineers Registration Board Engineering Staff College Bangladesh Occupational Safety Board of Bangladesh Membership Every professional engineer of Bangladesh is invited to join the Institution of Engineers, Bangladesh (IEB). As of 2024, it has over 60,000 members. Board of Accreditation for Engineering and Technical Education (BAETE) The Board of Accreditation for Engineering and Technical Education (BAETE) is a non-governmental body that provides accreditation for engineering programs within the jurisdiction of Bangladesh. It operates as an independent and autonomous agency of the Institution of Engineers, Bangladesh (IEB). BAETE represents the Institution of Engineers, Bangladesh, a full signatory of the Washington Accord, an international accreditation agreement for undergraduate professional engineering academic degrees. Under the leadership of Prof Dr Engr A F M Saiful Amin as the Chairman of the Board, BAETE achieved full signatory recognition of the Washington Accord on 12 June 2024, with effect from 2023, through a unanimous decision of all 23 signatories of the accord. BAETE is also a full member of the Network of Accreditation Bodies for Engineering Education in Asia (NABEEA). BAETE also represents IEB in The Federation of Engineering Institutions of Asia and the Pacific (FEIAP). References External links Official Website Institution of Engineers, Bangladesh Engineering societies Professional associations based in Bangladesh Science and technology in Bangladesh Organizations established in 1948 Accreditation organizations Professional titles and certifications Engineering education Bangladeshi engineers Regulators of Bangladesh Engineering societies based in Bangladesh
Institution of Engineers, Bangladesh
Engineering
562
2,903,681
https://en.wikipedia.org/wiki/Omega%20Bo%C3%B6tis
Omega Boötis, its name Latinized from ω Boötis, is a solitary, orange-hued star in the northern constellation of Boötes. It is a dim star but visible to the naked eye, with an apparent visual magnitude of +4.82. Based upon an annual parallax shift of as seen from the Earth, it is located about 382 light years from the Sun. The star is drifting further away with a radial velocity of +12.5 km/s. This star is three billion years old, with a stellar classification of K4 III, matching an evolved K-type giant star that has consumed the supply of hydrogen at its core. It has an estimated 1.65 times the mass of the Sun and has expanded to 39 times the Sun's radius. The star is radiating 340 times the Sun's luminosity from its photosphere at an effective temperature of about 3,994 K. References External links K-type giants Bootis, Omega Boötes Durchmusterung objects Bootis, 41 133124 073568 5600
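As a quick plausibility check on the figures above, the quoted radius (39 solar radii) and effective temperature (about 3,994 K) imply a luminosity close to the quoted 340 solar luminosities via the Stefan–Boltzmann relation L = 4πR²σT⁴. A minimal sketch, working in solar units so only the IAU nominal solar effective temperature enters as an extra constant; the small residual difference is consistent with rounding in the quoted values:

```python
# Check L = 4*pi*R^2*sigma*T^4 in solar units: L/Lsun = (R/Rsun)^2 * (T/Tsun)^4.
T_SUN_K = 5772.0      # IAU nominal solar effective temperature

r_over_rsun = 39.0    # radius quoted above
t_eff_k = 3994.0      # effective temperature quoted above

l_over_lsun = r_over_rsun**2 * (t_eff_k / T_SUN_K) ** 4
print(f"L/Lsun ~= {l_over_lsun:.0f}")  # ~349, close to the quoted 340
```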
Omega Boötis
Astronomy
222
6,845,737
https://en.wikipedia.org/wiki/Disintegration%20theorem
In mathematics, the disintegration theorem is a result in measure theory and probability theory. It rigorously defines the idea of a non-trivial "restriction" of a measure to a measure zero subset of the measure space in question. It is related to the existence of conditional probability measures. In a sense, "disintegration" is the opposite process to the construction of a product measure. Motivation Consider the unit square $S = [0,1] \times [0,1]$ in the Euclidean plane $\mathbb{R}^2$. Consider the probability measure $\mu$ defined on $S$ by the restriction of two-dimensional Lebesgue measure $\lambda^2$ to $S$. That is, the probability of an event $E \subseteq S$ is simply the area of $E$. We assume $E$ is a measurable subset of $S$. Consider a one-dimensional subset of $S$ such as the line segment $L_x = \{x\} \times [0,1]$. $L_x$ has $\mu$-measure zero; every subset of $L_x$ is a $\mu$-null set; since the Lebesgue measure space is a complete measure space, $E \subseteq L_x \implies \mu(E) = 0$. While true, this is somewhat unsatisfying. It would be nice to say that $\mu$ "restricted to" $L_x$ is the one-dimensional Lebesgue measure $\lambda^1$, rather than the zero measure. The probability of a "two-dimensional" event $E$ could then be obtained as an integral of the one-dimensional probabilities of the vertical "slices" $E \cap L_x$: more formally, if $\mu_x$ denotes one-dimensional Lebesgue measure on $L_x$, then $\mu(E) = \int_0^1 \mu_x(E \cap L_x) \, \mathrm{d}x$ for any "nice" $E \subseteq S$. The disintegration theorem makes this argument rigorous in the context of measures on metric spaces. Statement of the theorem (Hereafter, $\mathcal{P}(X)$ will denote the collection of Borel probability measures on a topological space $X$.) The assumptions of the theorem are as follows: Let $Y$ and $X$ be two Radon spaces (i.e. a topological space such that every Borel probability measure on it is inner regular, e.g. separably metrizable spaces; in particular, every probability measure on it is outright a Radon measure). Let $\mu \in \mathcal{P}(Y)$. Let $\pi : Y \to X$ be a Borel-measurable function. Here one should think of $\pi$ as a function to "disintegrate" $Y$, in the sense of partitioning $Y$ into $\{\pi^{-1}(x) \mid x \in X\}$. For example, for the motivating example above, one can define $\pi : [0,1] \times [0,1] \to [0,1]$, $\pi(a,b) = a$, which gives that $\pi^{-1}(a) = \{a\} \times [0,1]$, a slice we want to capture. Let $\nu \in \mathcal{P}(X)$ be the pushforward measure $\nu = \pi_*(\mu) = \mu \circ \pi^{-1}$. This measure provides the distribution of $x = \pi(y)$ (which corresponds to the events $\pi^{-1}(x)$). The conclusion of the theorem: There exists a $\nu$-almost everywhere uniquely determined family of probability measures $\{\mu_x\}_{x \in X} \subseteq \mathcal{P}(Y)$, which provides a "disintegration" of $\mu$ into $\{\mu_x\}_{x \in X}$, such that: the function $x \mapsto \mu_x$ is Borel measurable, in the sense that $x \mapsto \mu_x(B)$ is a Borel-measurable function for each Borel-measurable set $B \subseteq Y$; $\mu_x$ "lives on" the fiber $\pi^{-1}(x)$: for $\nu$-almost all $x \in X$, $\mu_x(Y \setminus \pi^{-1}(x)) = 0$, and so $\mu_x(E) = \mu_x(E \cap \pi^{-1}(x))$; for every Borel-measurable function $f : Y \to [0, \infty]$, $\int_Y f(y) \, \mathrm{d}\mu(y) = \int_X \int_{\pi^{-1}(x)} f(y) \, \mathrm{d}\mu_x(y) \, \mathrm{d}\nu(x)$. In particular, for any event $E \subseteq Y$, taking $f$ to be the indicator function of $E$, $\mu(E) = \int_X \mu_x(E) \, \mathrm{d}\nu(x)$. Applications Product spaces The original example was a special case of the problem of product spaces, to which the disintegration theorem applies. When $Y$ is written as a Cartesian product $Y = X_1 \times X_2$ and $\pi_i : Y \to X_i$ is the natural projection, then each fibre $\pi_1^{-1}(x_1)$ can be canonically identified with $X_2$ and there exists a Borel family of probability measures $\{\mu_{x_1}\}_{x_1 \in X_1}$ in $\mathcal{P}(X_2)$ (which is $(\pi_1)_*(\mu)$-almost everywhere uniquely determined) such that $\mu = \int_{X_1} \mu_{x_1} \, \mathrm{d}(\pi_1)_*(\mu)(x_1)$, which is in particular $\int_{X_1 \times X_2} f(x_1, x_2) \, \mu(\mathrm{d}(x_1, x_2)) = \int_{X_1} \left( \int_{X_2} f(x_1, x_2) \, \mu_{x_1}(\mathrm{d}x_2) \right) \mathrm{d}(\pi_1)_*(\mu)(x_1)$ and $\mu(A \times B) = \int_A \mu_{x_1}(B) \, \mathrm{d}(\pi_1)_*(\mu)(x_1)$. The relation to conditional expectation is given by the identities $\mathbb{E}(f \mid \pi_1)(x_1) = \int_{X_2} f(x_1, x_2) \, \mu_{x_1}(\mathrm{d}x_2)$ and $\mu(A \times B \mid \pi_1)(x_1) = \mathbf{1}_A(x_1) \, \mu_{x_1}(B)$. Vector calculus The disintegration theorem can also be seen as justifying the use of a "restricted" measure in vector calculus. For instance, in Stokes' theorem as applied to a vector field flowing through a compact surface $\Sigma \subset \mathbb{R}^3$, it is implicit that the "correct" measure on $\Sigma$ is the disintegration of three-dimensional Lebesgue measure $\lambda^3$ on $\Sigma$, and that the disintegration of this measure on $\partial\Sigma$ is the same as the disintegration of $\lambda^3$ on $\partial\Sigma$. 
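Returning to the motivating example, the identity $\mu(E) = \int_0^1 \mu_x(E \cap L_x) \, \mathrm{d}x$ can be checked numerically for a concrete "nice" event. The sketch below uses the region under a parabola, $E = \{(x,y) \in S : y \le x^2\}$, an arbitrary illustrative choice whose area is exactly 1/3; each vertical slice $E \cap L_x$ has one-dimensional measure $x^2$:

```python
import numpy as np

# E = {(x, y) in [0,1]^2 : y <= x^2}; the slice E ∩ L_x = {x} × [0, x^2]
# has one-dimensional Lebesgue measure x^2, so mu(E) should equal ∫_0^1 x^2 dx = 1/3.
xs = np.linspace(0.0, 1.0, 100_001)
slice_lengths = xs**2

# Trapezoidal rule for the integral of the slice measures over x.
dx = xs[1] - xs[0]
mu_E = ((slice_lengths[:-1] + slice_lengths[1:]) / 2.0).sum() * dx
print(f"integral of slice measures: {mu_E:.6f}  (exact area of E: {1/3:.6f})")
```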
Conditional distributions The disintegration theorem can be applied to give a rigorous treatment of conditional probability distributions in statistics, while avoiding purely abstract formulations of conditional probability. The theorem is related to the Borel–Kolmogorov paradox, for example. See also Regular conditional probability References Theorems in measure theory Probability theorems
Disintegration theorem
Mathematics
809
66,664,550
https://en.wikipedia.org/wiki/Gnu%20code
In quantum information, the gnu code refers to a particular family of quantum error correcting codes, with the special property of being invariant under permutations of the qubits. Given integers $g$ (the gap), $n$ (the occupancy), and $m$ (the length of the code), the two codewords are $|0_{\mathrm{gnu}}\rangle = \sum_{0 \le \ell \le n,\ \ell\ \text{even}} \sqrt{\binom{n}{\ell}/2^{n-1}} \, |D^m_{g\ell}\rangle$ and $|1_{\mathrm{gnu}}\rangle = \sum_{0 \le \ell \le n,\ \ell\ \text{odd}} \sqrt{\binom{n}{\ell}/2^{n-1}} \, |D^m_{g\ell}\rangle$, where $|D^m_k\rangle$ are the Dicke states consisting of a uniform superposition of all weight-$k$ words on $m$ qubits, e.g. $|D^4_2\rangle = \tfrac{1}{\sqrt{6}}(|0011\rangle + |0101\rangle + |0110\rangle + |1001\rangle + |1010\rangle + |1100\rangle)$. The real parameter $u \ge 1$ scales the density of the code. The length $m = gnu$, hence the name of the code. For odd $g$ and $n$, the gnu code is capable of correcting erasure errors, or deletion errors. References Quantum information science Fault-tolerant computer systems
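A small numerical sketch of this construction, under the conventions reconstructed above: it builds the Dicke states $|D^m_k\rangle$ as vectors in the $2^m$-dimensional qubit space and forms the two codewords as the even- and odd-$\ell$ superpositions with amplitudes $\sqrt{\binom{n}{\ell}/2^{n-1}}$. The parameter values ($g = n = 3$, $u = 1$) are arbitrary illustrative choices:

```python
import numpy as np
from itertools import combinations
from math import comb, sqrt

def dicke_state(m: int, k: int) -> np.ndarray:
    """Uniform superposition |D^m_k> of all weight-k basis words on m qubits."""
    vec = np.zeros(2**m)
    for ones in combinations(range(m), k):
        vec[sum(1 << bit for bit in ones)] = 1.0  # basis index of the bitstring
    return vec / np.linalg.norm(vec)

def gnu_codewords(g: int, n: int, u: float = 1.0):
    """Even/odd-l superpositions of |D^m_{g*l}> with amplitudes sqrt(C(n,l)/2^(n-1))."""
    m = round(g * n * u)  # code length m = g*n*u
    zero, one = np.zeros(2**m), np.zeros(2**m)
    for l in range(n + 1):
        amplitude = sqrt(comb(n, l) / 2 ** (n - 1))
        if l % 2 == 0:
            zero += amplitude * dicke_state(m, g * l)
        else:
            one += amplitude * dicke_state(m, g * l)
    return zero, one

zero, one = gnu_codewords(g=3, n=3)  # m = 9 qubits, 512-dimensional vectors
# Both codewords are normalized and mutually orthogonal (disjoint Dicke weights).
print(np.linalg.norm(zero), np.linalg.norm(one), zero @ one)  # ~1.0, ~1.0, 0.0
```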
Gnu code
Technology,Engineering
149
58,622,865
https://en.wikipedia.org/wiki/Aspergillus%20aureolatus
Aspergillus aureolatus is a species of fungus in the genus Aspergillus. It is from the Nidulantes section. The species was first described in 1964. It was isolated from air in Belgrade, Serbia. Growth and morphology A. aureolatus has been cultivated on Czapek yeast extract agar (CYA) plates and Malt Extract Agar Oxoid® (MEAOX) plates. References aureolatus Fungi described in 1964 Fungus species
Aspergillus aureolatus
Biology
118
36,558,191
https://en.wikipedia.org/wiki/List%20of%20endophytes
Endophytes are micro-organisms living within the tissue of a plant as endosymbionts, without causing symptoms of disease. Some of them are mutualistic symbionts with beneficial effects on their host, such as improved growth or resistance against disease or environmental stress, and are being used as microbial inoculants. However, pathogens and saprophytes may also be endophytic at some point of their life cycle. Endophytes are distinct from mycorrhizal fungi or rhizosphere microbes in that they live entirely within the plant. Most endophytes known are bacteria or fungi, although there are also some endophytic algae and oomycetes. This list contains genera with endophytic species (but which may also have non-endophytic species). Species are only listed in notable cases. Where specific variants or cultivars of a species are endophytic, this is detailed on the taxon's page. The host range is "wide" when it does not include only a specific lineage of plants; in that case, the lineage is given. Bacteria See also Rhizobia for the nitrogen-fixing bacteria on roots of legumes (Fabaceae). Fungi Algae and oomycetes See also List of symbiotic organisms List of symbiotic relationships Plant pathology Plant use of endophytic fungi in defense Endophytes
List of endophytes
Biology
311
65,428,574
https://en.wikipedia.org/wiki/BD-11%204672
BD−11 4672 is a single star with a pair of orbiting exoplanets in the southern constellation of Scutum, the shield. The designation BD−11 4672 comes from the Bonner Durchmusterung star catalogue, which was published during the nineteenth century in Germany. With an apparent visual magnitude of 9.99, the star is much too faint to be viewed with the naked eye. It is located at a distance of 89 light years from the Sun, as determined from parallax, but is drifting closer with a radial velocity of −87.5 km/s. It was recognised as a high proper motion star by German astronomer Max Wolf in 1924 and is traversing the celestial sphere at an angular rate of . The spectrum of BD−11 4672 matches a K-type main-sequence star, an orange dwarf, with a stellar classification of K7 V. Its age is not well constrained, but is probably older than the Sun. It is a metal-poor star, showing an iron abundance that is 35% of solar. No significant flare activity has been detected. The star shows evidence of a Sun-like magnetic activity cycle with a period of 7–10 years. It has 65% of the mass and 64% of the radius of the Sun. The star is radiating 16% of the luminosity of the Sun from its photosphere at an effective temperature of 4,550 K. Planetary system In 2010, a team of astronomers working with the High Accuracy Radial Velocity Planet Searcher (HARPS) performed a radial-velocity analysis, which led to the suspicion of a gas giant exoplanet in orbit around BD−11 4672. The existence of this exoplanet was confirmed in 2014. In 2020, a second exoplanet was detected on an interior and much more eccentric orbit near the inner edge of the star's habitable zone. See also list of exoplanets discovered in 2014 (BD-11 4672b) list of exoplanets discovered in 2020 (BD-11 4672c) References K-type main-sequence stars Planetary systems with two confirmed planets Scutum (constellation) Durchmusterung objects 90979 J18332885-1138097 TIC objects
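For a sense of the scale of the radial-velocity signal such planets produce, the semi-amplitude of the stellar wobble for a single planet is $K = (2\pi G/P)^{1/3} \, m_p \sin i \, / \, [(M_\star + m_p)^{2/3} \sqrt{1 - e^2}]$. The sketch below evaluates this for a star of 0.65 solar masses and a hypothetical roughly Jupiter-mass planet on a 1000-day circular, edge-on orbit; these planet parameters are illustrative placeholders, not the published values for BD−11 4672 b or c:

```python
from math import pi, sqrt

G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30   # solar mass, kg
M_JUP = 1.898e27   # Jupiter mass, kg
DAY = 86400.0      # seconds per day

def rv_semi_amplitude(m_star, m_planet, period_s, ecc=0.0, sin_i=1.0):
    """Radial-velocity semi-amplitude K (m/s) induced by a single planet."""
    return ((2 * pi * G / period_s) ** (1 / 3)
            * m_planet * sin_i
            / (m_star + m_planet) ** (2 / 3)
            / sqrt(1 - ecc**2))

# Hypothetical planet: ~1 Jupiter mass, 1000-day circular orbit, edge-on.
K = rv_semi_amplitude(0.65 * M_SUN, 1.0 * M_JUP, 1000 * DAY)
print(f"K ~= {K:.1f} m/s")  # ~27 m/s, well within reach of HARPS-class precision
```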
BD-11 4672
Astronomy
479
50,518,856
https://en.wikipedia.org/wiki/TorrentLocker
TorrentLocker is a ransomware trojan targeting Microsoft Windows. It was first observed in February 2014, with at least five of its major releases made available by December 2014. The malware encrypts the victim's files in a similar manner to CryptoLocker, implementing the symmetric block cipher AES, where the key is encrypted with an asymmetric cipher. TorrentLocker scans the system for programs and files, and conceals the contents through AES encryption, leaving ransom instructions to the victim on what has to be done and how to pay the decryption ransom. The operator demands from the victim an amount that usually starts around , to be paid within 3 days. The victim is told to pay the amount in Bitcoins, and is sent a unique Bitcoin address that differs for each infected user. See also CryptoLocker Command and control (malware) Cyber spying Identity theft Malvertising Phishing Targeted threat References Malware
TorrentLocker
Technology
194
42,857,925
https://en.wikipedia.org/wiki/Vicianose
Vicianose is a disaccharide composed of L-arabinose and D-glucose. Vicianin is a cyanogenic glycoside containing vicianose. The enzyme vicianin beta-glucosidase uses (R)-vicianin and water to produce mandelonitrile and vicianose. The fruits of Viburnum dentatum appear blue. One of the major pigments is cyanidin 3-vicianoside, but the total mixture is very complex. References Disaccharides
Vicianose
Chemistry
106
4,160,992
https://en.wikipedia.org/wiki/Invariant-based%20programming
Invariant-based programming is a programming methodology where specifications and invariants are written before the actual program statements. Writing down the invariants during the programming process has a number of advantages: it requires the programmer to make their intentions about the program behavior explicit before actually implementing it, and invariants can be evaluated dynamically during execution to catch common programming errors. Furthermore, if strong enough, invariants can be used to prove the correctness of the program based on the formal semantics of program statements. A combined programming and specification language, connected to a powerful formal proof system, will generally be required for full verification of non-trivial programs. In this case a high degree of automation of proofs is also possible. In most existing programming languages the main organizing structures are control flow blocks such as for loops, while loops and if statements. Such languages may not be ideal for invariants-first programming, since they force the programmer to make decisions about control flow before writing the invariants. Furthermore, most programming languages do not have good support for writing specifications and invariants, since they lack quantifier operators and one typically cannot express higher-order properties. The idea of developing the program together with its proof originated from E.W. Dijkstra. Actually writing invariants before program statements has been considered in a number of different forms by M.H. van Emden, J.C. Reynolds, and R.-J. Back. See also Eiffel (programming language) References Formal methods Programming paradigms
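As a concrete illustration of the style in a conventional language, the sketch below states a loop invariant before the loop and checks it dynamically with assertions, so the loop body is written to maintain it. This is a minimal sketch of the methodology's runtime-checking advantage; full verification would discharge the same conditions in a proof system instead:

```python
def running_sum(xs: list[int]) -> int:
    """Sum the list, developed invariant-first.

    Invariant: after processing the first i elements, total == sum(xs[:i]).
    Postcondition (invariant + exit condition): total == sum(xs).
    """
    total = 0
    i = 0
    assert total == sum(xs[:i])        # invariant established (i == 0)
    while i < len(xs):
        total += xs[i]
        i += 1
        assert total == sum(xs[:i])    # invariant maintained by the loop body
    assert i == len(xs) and total == sum(xs)  # postcondition follows
    return total

print(running_sum([3, 1, 4, 1, 5]))  # 14
```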
Invariant-based programming
Engineering
303
40,796,656
https://en.wikipedia.org/wiki/Arisugacin%20A
Arisugacin A is an orally active acetylcholinesterase inhibitor.

References

Lactones
Acetylcholinesterase inhibitors
Methoxy compounds
Arisugacin A
Chemistry
34
76,492,308
https://en.wikipedia.org/wiki/History%20of%20radiation%20protection
The history of radiation protection begins at the turn of the 19th and 20th centuries with the realization that ionizing radiation from natural and artificial sources can have harmful effects on living organisms. As a result, the study of radiation damage also became a part of this history. While radioactive materials and X-rays were once handled carelessly, increasing awareness of the dangers of radiation in the 20th century led to the implementation of various preventive measures worldwide, resulting in the establishment of radiation protection regulations. Radiologists were among the first victims, even as they played a crucial role in advancing radiology; radiation damage caused many of them to suffer amputations or die of cancer. The use of radioactive substances in everyday life was once fashionable, but over time the health effects became known, and investigations into their causes led to increased awareness of protective measures. The dropping of atomic bombs during World War II brought about a drastic change in attitudes towards radiation. The effects of natural cosmic radiation, of radioactive substances such as radon and radium found in the environment, and the potential health hazards of non-ionizing radiation are now well recognized. Protective measures have been developed and implemented worldwide, monitoring devices have been created, and radiation protection laws and regulations have been enacted. In the 21st century, regulations are becoming even stricter: the permissible limits for ionizing radiation intensity are consistently being revised downward, and the concept of radiation protection now also covers the handling of non-ionizing radiation.

In the Federal Republic of Germany, radiation protection regulations are developed and issued by the Federal Ministry for the Environment, Nature Conservation, Nuclear Safety and Consumer Protection (BMUV), with the Federal Office for Radiation Protection involved in the technical work. In Switzerland, the Radiation Protection Division of the Federal Office of Public Health is responsible, and in Austria, the Ministry of Climate Action and Energy.

X-rays

Early radiation consequences
The discovery of X-rays by Wilhelm Conrad Röntgen (1845-1923) in 1895 led to extensive experimentation by scientists, physicians, and inventors. The first X-ray machines produced radiation spectra that were very unfavorable for imaging and delivered extremely high skin doses. In February 1896, John Daniel and William Lofland Dudley (1859–1914) of Vanderbilt University conducted an experiment in which Dudley's head was X-rayed, resulting in hair loss. Herbert D. Hawks, a graduate of Columbia University, suffered severe burns on his hands and chest during demonstration experiments with X-rays. Burns and hair loss were reported in scientific journals. Nikola Tesla (1856–1943) was one of the first researchers to explicitly warn of the potential dangers of X-rays, in the Electrical Review of May 5, 1897, after initially claiming them to be completely harmless; he himself suffered massive radiation damage in his experiments. Nevertheless, some doctors at the time still claimed that X-rays had no effect on humans at all. Until the 1940s, X-ray machines were operated without any protective safeguards. Röntgen himself was spared the fate of many other early X-ray users by a habit.
He always carried unexposed photographic plates in his pockets and found that they fogged if he remained in the same room during an exposure, so he regularly left the room when he took X-rays. The use of X-rays for diagnostic purposes in dentistry was made possible by the pioneering work of C. Edmund Kells (1856-1928), a New Orleans dentist who demonstrated them to dentists in Asheville, North Carolina, in July 1896. Kells committed suicide after suffering from radiation-induced cancer for many years. His fingers had been amputated one at a time, later his entire hand, followed by his forearm and then his entire arm. Otto Walkhoff (1860-1934), one of the most important German dentists in history, took X-rays of himself in 1896 and is considered a pioneer in dental radiology. He described the required exposure time of 25 minutes as an "ordeal". Braunschweig's medical community later commissioned him to set up and supervise a central X-ray facility. In 1898, the year radium was discovered, he also tested the use of radium in medicine in a self-experiment using 0.2 grams of radium bromide. Walkhoff observed that cancerous mice exposed to radium radiation died significantly later than a control group of untreated mice; he thus initiated the development of radiation research for the treatment of tumors. The Armenian-American radiologist Mihran Krikor Kassabian (1870-1910), vice president of the American Roentgen Ray Society (ARRS), was concerned about the irritating effects of X-rays. In a publication, he mentioned his increasing problems with his hands. Although Kassabian recognized X-rays as the cause, he avoided making this connection explicit so as not to hinder the progress of radiology. In 1902, he suffered a severe radiation burn on his hand; six years later the hand became necrotic, and two fingers of his left hand were amputated. Kassabian kept a diary and photographed his hands as the tissue damage progressed. He died of cancer in 1910. Many of the early X-ray and radioactivity researchers went down in history as "martyrs for science." In her article The Miracle and the Martyrs, Sarah Zobel of the University of Vermont tells of a 1920 banquet held to honor many of the pioneers of X-rays. Chicken was served for dinner: "Shortly after the meal was served, it could be seen that some of the participants were unable to enjoy the meal. After years of working with X-rays, many of the participants had lost fingers or hands due to radiation exposure and were unable to cut the meat themselves". The first American to die from radiation exposure was Clarence Madison Dally (1865-1904), an assistant to Thomas Alva Edison (1847-1931). Edison began studying X-rays almost immediately after Röntgen's discovery and delegated the task to Dally. Over time, Dally underwent more than 100 skin operations due to radiation damage; eventually, both of his arms had to be amputated. His death led Edison to abandon all further X-ray research in 1904. Another pioneer was the Austrian Gustav Kaiser (1871-1954), who in 1896 succeeded in photographing a double toe with an exposure time of 1½-2 hours. Given the limited knowledge at the time, he too suffered severe radiation damage to his hands, losing several fingers and his right metacarpal. His work was the basis for, among other things, the construction of lead rubber aprons. Heinrich Albers-Schönberg (1865-1921), the world's first professor of radiology, recommended gonadal protection for testicles and ovaries in 1903.
He was one of the first to protect germ cells not only from acute radiation damage but also from small doses of radiation that could accumulate over time and cause late damage. Albers-Schönberg died at the age of 56 from radiation damage, as did Guido Holzknecht and Elizabeth Fleischman. Since April 4, 1936, a radiology memorial in the garden of Hamburg's St. Georg Hospital has commemorated the 359 victims from 23 countries who were among the first medical users of X-rays.

Initial warnings
In 1896, the engineer Wolfram Fuchs, based on his experience with numerous X-ray examinations, recommended keeping the exposure time as short as possible, staying away from the tube, and covering the skin with Vaseline. In 1897, Chicago doctors William Fuchs and Otto Schmidt became the first users to have to pay compensation to a patient for radiation damage. In 1901, dentist William Herbert Rollins (1852-1929) called for lead-glass goggles to be worn when working with X-rays, for the X-ray tube to be encased in lead, and for all areas of the body to be covered with lead aprons. He published over 200 articles on the potential dangers of X-rays, but his suggestions were long ignored. A year later, Rollins wrote in despair that his warnings about the dangers of X-rays were being heeded by neither the industry nor his colleagues. By this time, Rollins had demonstrated that X-rays could kill laboratory animals and induce miscarriages in guinea pigs. Rollins' achievements were not recognized until later; he has since gone down in the history of radiology as the "father of radiation protection". He became a member of the Radiological Society of North America and its first treasurer. Radiation protection continued to develop with the invention of new measuring devices such as the chromoradiometer by Guido Holzknecht (1872-1931) in 1902, the radiometer by Raymond Sabouraud (1864-1938) and Henri Noiré (1878–1937) in 1904/05, and the quantimeter by Robert Kienböck (1873-1951) in 1905, which made it possible to determine maximum doses at which there was a high probability that no skin changes would occur. Radium was also addressed by the British Roentgen Society, which published its first memorandum on radium protection in 1921.

Unnecessary applications

Pedoscope
Beginning in the 1920s, pedoscopes were installed in many shoe stores in North America and Europe, more than 10,000 in the U.S. alone, following the invention of Jacob Lowe, a Boston physicist. They were X-ray machines used to check the fit of shoes and to promote sales, especially to children, who were particularly fascinated by the sight of their foot bones. X-rays were often taken several times a day to evaluate the fit of different shoes. Most remained in shoe stores until the early 1970s. The energy dose absorbed by the customer was up to 116 rads, or 1.16 grays. In the 1950s, when medical knowledge of the health risks was already available, pedoscopes came with warnings that shoe buyers should not be scanned more than three times a day and twelve times a year. By the early 1950s, several professional organizations had issued warnings against the continued use of shoe-fitting fluoroscopes, including the American Conference of Governmental Industrial Hygienists, the American College of Surgeons, the New York Academy of Medicine, and the American College of Radiology. At the same time, the District of Columbia enacted regulations requiring that shoe-fitting fluoroscopes be operated only by a licensed physical therapist.
A few years later, the state of Massachusetts passed regulations stating that these machines could only be operated by a licensed physician. In 1957, the use of shoe-fitting fluoroscopes was banned by court order in Pennsylvania. By 1960, these measures and pressure from insurance companies had led to the disappearance of the shoe-fitting fluoroscope, at least in the United States. In Switzerland, 1,500 shoe-fitting fluoroscopes were in use; a decree of the Federal Department of Home Affairs of October 7, 1963 required 850 of them to be inspected by the Swiss Electrotechnical Association. The last one was decommissioned in 1990. In Germany, the machines were not banned until 1976. The fluoroscope emitted uncontrolled X-rays, continuously exposing children, parents, and sales staff. The all-wood cabinet of the machine did not prevent the X-rays from passing through, resulting in particularly high cumulative radiation levels for the cashier when the pedoscope was placed near the cash register; the machines were plainly not designed with adequate safety measures. Given the well-established long-term effects of X-rays, including genetic damage and carcinogenicity, the worldwide use of pedoscopes over several decades may well have contributed to health effects, although a pedoscope can rarely be proven to be the sole cause in any individual case. A direct link has been discussed, for example, for basal cell carcinoma of the foot, and in 1950 a case was published in which a shoe model had to have a leg amputated as a result.

Radiotherapy
In 1896, the Viennese dermatologist Leopold Freund (1868-1943) used X-rays to treat patients for the first time, successfully irradiating the hairy nevus of a young girl. In 1897, Hermann Gocht (1869–1931) published on the treatment of trigeminal neuralgia with X-rays, and Alexei Petrovich Sokolov (1854-1928) wrote about radiotherapy for arthritis in the oldest radiology journal, Advances in the Field of X-rays (RöFo). In 1922, X-rays were recommended as safe for many diseases and for diagnostic purposes; radiation protection was limited to recommending doses that would not cause erythema (reddening of the skin). X-rays were promoted, for example, as an alternative to tonsillectomy, and it was boasted that in 80% of diphtheria carriers, Corynebacterium diphtheriae was no longer detectable within two to four days. In the 1930s, Günther von Pannewitz (1900–1966), a radiologist from Freiburg, Germany, perfected what he called X-ray stimulation radiation for degenerative diseases: low-dose radiation reduces the inflammatory response of tissues. Until about 1960, children with diseases such as ankylosing spondylitis or favus (head fungus) were irradiated, which was effective but led to increased cancer rates among these patients decades later. In 1926, the American pathologist James Ewing (1866-1943) was the first to observe bone changes as a result of radiotherapy, which he described as radiation osteitis (now osteoradionecrosis). In 1983, Robert E.
Marx stated that osteoradionecrosis is a radiation-induced aseptic bone necrosis. The acute and chronic inflammatory processes of osteoradionecrosis are prevented by the administration of steroidal anti-inflammatory drugs; in addition, the administration of pentoxifylline and antioxidant treatments, such as superoxide dismutase and tocopherol (vitamin E), is recommended.

Radiation protection during X-ray examinations

Preliminary observation
Sonography (ultrasound diagnostics) is a versatile and widely used imaging modality in medical diagnostics, and ultrasound is also used in therapy. It uses mechanical waves, however, rather than ionizing or non-ionizing electromagnetic radiation. Patient safety is ensured if the recommended limits for avoiding cavitation and overheating are observed; see also safety aspects of sonography. Devices that use alternating magnetic fields in the radiofrequency range, such as magnetic resonance imaging (MRI), likewise do not use ionizing radiation. MRI was developed as an imaging technique in 1973 by Paul Christian Lauterbur (1929-2007) with significant contributions from Sir Peter Mansfield (1933-2017). In the MRI field, jewelry or piercings can become very hot, and a strong tensile force is exerted on them, which in the worst case can tear them out; to avoid pain and injury, jewelry containing ferromagnetic metals should be removed beforehand. Pacemakers, defibrillator systems, and large tattoos containing metallic color pigments in the examination area may heat up, cause second-degree burns, or lead to malfunction of the implants. Photoacoustic tomography (PAT) is a hybrid imaging modality that utilizes the photoacoustic effect without the use of ionizing radiation. It works contact-free, with very short laser pulses generating ultrasound in the tissue under examination: the local absorption of the light leads to sudden local heating and the resulting thermal expansion, which produces broadband acoustic waves. The original distribution of absorbed energy can be reconstructed by measuring the outgoing ultrasound waves with appropriate ultrasound transducers.

Radiation exposure detection
In order to better assess radiation protection, the number of X-ray examinations, including the dose, has been recorded annually in Germany since 2007, although the Federal Statistical Office does not have complete data for conventional X-ray examinations. In 2014, the total number of X-ray examinations in Germany was estimated at about 135 million, of which about 55 million were dental. The average effective dose from X-ray examinations per inhabitant in Germany in 2014 was about 1.55 mSv (about 1.7 X-ray examinations per inhabitant per year). Dental X-rays make up 41% of examinations but account for only 0.4% of the collective effective dose. In Germany, Section 28 of the X-ray Ordinance (RöV) required from 2002 that the attending physician keep an X-ray pass available and offer it to the patient; the pass records the patient's X-ray examinations to avoid unnecessary repeat examinations and to allow comparison with previous images. With the entry into force of the new Radiation Protection Ordinance on December 31, 2018, this obligation no longer applies. In Austria and Switzerland, X-ray passes have so far been voluntary. In principle, there must always be both a justifiable indication for the use of X-rays and the informed consent of the patient.
In the context of medical treatment, informed consent refers to the patient's agreement to all types of interventions and other medical measures.

Radiation reduction
Over the years, there have been increasing efforts to reduce the radiation exposure of therapists and patients.

Radiation protective clothing
Following Rollins' finding in 1920 that lead aprons protected against X-rays, aprons with a lead thickness of 0.5 mm were introduced. Because of their weight, lead-free and lead-reduced aprons were subsequently developed. In 2005, it was recognized that in some cases their protection was significantly inferior to that of lead aprons: the lead-free aprons contain tin, antimony, and barium, which produce intense secondary radiation (X-ray fluorescence) when irradiated. In Germany, the Radiology Standards Committee took up the issue and introduced a German standard (DIN 6857-1) in 2009; the international standard IEC 61331-3:2014 was finally published in 2014. Protective aprons that comply with neither DIN 6857-1 of 2009 nor the newer IEC 61331-3 of 2014 may result in higher exposures. There are two lead-equivalence classes, 0.25 mm and 0.35 mm, and the manufacturer must specify the area weight in kg/m2 at which the protective effect of a pure lead apron of 0.25 or 0.35 mm Pb is achieved. The protective effect of an apron must be appropriate to the energy range used: up to 110 kV for low-energy aprons and up to 150 kV for high-energy aprons. If necessary, lead glass panels must also be used, with front panels having a lead equivalent of 0.5-1.0 mm, depending on the application, and side shields a lead equivalent of 0.5-0.75 mm. Outside the useful beam, radiation exposure is caused primarily by radiation scattered from the tissue being imaged. During examinations of the head and torso, this scattered radiation spreads throughout the body and is difficult to shield with protective clothing. Fears that a lead apron will prevent radiation from leaving the body are unfounded, however, because lead absorbs radiation rather than scattering it back. When preparing an orthopantomogram (OPG) for a dental overview radiograph, it is sometimes recommended not to wear a lead apron, as it does little to shield scattered radiation from the jaw area but may hinder the rotation of the imaging device; according to the 2018 X-ray regulation, however, wearing a lead apron for an OPG is still mandatory.

X-ray intensifying screens
In the same year X-rays were discovered, Mihajlo Idvorski Pupin (1858-1935) invented the method of placing a sheet of paper coated with fluorescent substances on the photographic plate, drastically reducing the exposure time and thus the radiation exposure: about 95% of the film blackening came from the fluorescent light of the screen and only the remaining 5% directly from the X-rays. Thomas Alva Edison identified blue-emitting calcium tungstate (CaWO4) as a suitable phosphor, and it quickly became the standard for X-ray intensifying screens. In the 1970s, calcium tungstate was replaced by even better and finer-grained screens with rare-earth-based phosphors (terbium-activated lanthanum oxybromide, gadolinium oxysulfide). Intensifying screens did not become widespread in dental radiography because of the loss of image quality. The combination with high-sensitivity films further reduced radiation exposure.
Anti-scatter grid
An anti-scatter grid is a device in X-ray technology that is placed in front of the image receptor (screen, detector, or film) and reduces the incidence of scattered radiation on it. The first anti-scatter grid was developed in 1913 by Gustav Peter Bucky (1880-1963); the US radiologist Hollis Elmer Potter (1880-1964) improved it in 1917 by adding a moving mechanism. The radiation dose must be increased when a grid is used, which is why anti-scatter grids should not be used on children. In digital radiography, a grid may be omitted under certain conditions to reduce radiation exposure to the patient.

Radiation protection splint
Radiation protection measures may also be necessary against the scattered radiation that arises during tumor irradiation of the head and neck at metal parts of the dentition (dental fillings, bridges, etc.). Since the 1990s, soft-tissue retractors known as radiation protection splints have been used to prevent or reduce mucositis, an inflammation of the mucous membranes that is the most significant acute side effect of such radiation. The splint is a spacer that keeps the mucosa away from the teeth and thus, following the inverse-square law, reduces the scattered dose reaching the mucosa (doubling the distance from the scattering metal reduces the dose rate to roughly a quarter). Mucositis, which is extremely painful, is one of the most significant detriments to a patient's quality of life and often limits radiation therapy, thereby reducing the chances of curing the tumor. The splint reduces the oral mucosal reactions that typically occur in the second and final thirds of a radiation series.

Panoramic X-ray machine
Hisatugu Numata of Japan developed the first panoramic radiograph in 1933/34. This was followed by the development of intraoral panoramic X-ray units, in which the X-ray tube is placed intraorally (inside the mouth) and the X-ray film extraorally (outside the mouth): Horst Beger from Dresden in 1943 and the Swiss dentist Walter Ott in 1946 worked on the Panoramix (Koch & Sterzel), Status X (Siemens) and Oralix (Philips) units. Intraoral panoramic devices were discontinued at the end of the 1980s because the radiation exposure from the intraoral tube, in direct contact with the tongue and oral mucosa, was too high.

Digital X-ray
Eastman Kodak filed the first patent for digital radiography in 1973. The first commercial CR (computed radiography) solution was offered by Fujifilm in Japan in 1983 under the device name CR-101; its X-ray imaging plates record the shadow image of the X-rays. The first commercial digital X-ray system for use in dentistry was introduced in 1986 by Trophy Radiology (France) under the name Radiovisiography. Digital X-ray systems help reduce radiation exposure: instead of film, the machines contain a detector that converts the incident X-ray photons either into visible light (scintillator) or directly into electrical signals.

Computed tomography
In 1972, the first commercial CT scanner for clinical use went into operation at Atkinson Morley Hospital in London. Its inventor was the English engineer Godfrey Newbold Hounsfield (1919-2004), who shared the 1979 Nobel Prize in Medicine with Allan McLeod Cormack (1924-1998) for his pioneering work in the field of computed tomography. The first steps toward dose reduction were taken in 1989, in the era of single-slice spiral CT.
The introduction of multi-slice spiral computed tomography in 1998 and its continuous development made it possible to reduce the dose by means of dose modulation. The tube current is adjusted to the body region, for example reduced for images of the lungs compared with the abdomen, and is also modulated during each rotation: because the human body has an approximately oval cross-section, the intensity is lowered when the beam passes from the front or back and raised when it passes from the side. This dose control also depends on the body mass index. For example, the use of dose modulation in the head and neck region reduces the total exposure and the organ doses to the thyroid and eye lens by up to 50% without significantly compromising diagnostic image quality. The Computed Tomography Dose Index (CTDI), first defined by the Food and Drug Administration (FDA) in 1981, is used to quantify radiation exposure during a CT scan; its unit of measurement is the mGy (milligray). Multiplying the CTDI by the length of the examination volume yields the dose-length product (DLP), which quantifies the total radiation exposure of the patient during a CT scan (a short worked example is sketched below).

Structural protective measures
An X-ray room must be shielded on all sides with 1 mm lead-equivalent shielding; calcium silicate or solid brick masonry is recommended. A steel jamb should be used, not only because of the weight of the heavy shielding door but also for shielding; wooden frames must be shielded separately. The shielding door must be covered with a 1 mm thick lead foil, and a lead glass window must be installed as a visual connection; a keyhole is to be avoided. All installations (sanitary or electrical) that interrupt the radiation protection must be leaded. Depending on the application, nuclear medicine requires even more extensive protective measures, up to and including concrete walls several meters thick. In addition, since December 31, 2018, when the latest amendments to Section 14 (1) No. 2b of the Radiation Protection Act came into force, an expert in medical physics for X-ray diagnostics and therapy must be consulted for the optimization and quality assurance of the application and for advice on radiation protection issues.

Certificate of competence
Each facility operating an X-ray unit must have sufficient personnel with appropriate expertise. The person responsible for radiation protection, or one or more radiation safety officers, must have appropriate qualifications, which must be regularly updated. X-ray examinations may be technically performed by any other staff member of a medical or dental practice if they are under the direct supervision and responsibility of the person responsible and have knowledge of radiation protection. Such knowledge has been required since the amendment of the X-ray Ordinance in 1987; medical and dental assistants (then called medical assistants or dental assistants) received this additional training in 1990. The regulations for the specialty of radiology were tightened by the Radiation Protection Act, which came into force on October 1, 2017. The handling of radioactive substances and ionizing radiation (where not covered by the X-ray Ordinance) is regulated by the Radiation Protection Ordinance (StrlSchV); Section 30 StrlSchV defines the required expertise and knowledge in radiation protection.
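Returning to the CT dose quantities defined above, here is a minimal worked sketch in Python. The CTDI value, scan length, and the conversion factor of 0.014 mSv/(mGy·cm) for an adult chest are illustrative assumptions, not figures from this article.

```python
# Sketch: DLP = CTDI_vol * scan length; effective dose estimated via k-factor.

def dose_length_product(ctdi_vol_mgy: float, scan_length_cm: float) -> float:
    """Dose-length product in mGy*cm."""
    return ctdi_vol_mgy * scan_length_cm

def effective_dose_msv(dlp_mgy_cm: float, k_msv_per_mgy_cm: float) -> float:
    """Effective dose estimate E = k * DLP, with a region-specific k-factor."""
    return k_msv_per_mgy_cm * dlp_mgy_cm

dlp = dose_length_product(ctdi_vol_mgy=10.0, scan_length_cm=30.0)   # 300 mGy*cm
print(effective_dose_msv(dlp, k_msv_per_mgy_cm=0.014))              # ~4.2 mSv
```

The k-factor approach is a rough population-level estimate; it does not replace patient-specific dosimetry.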
Radiation protection associations
The Association of German Radiation Protection Physicians (VDSÄ) grew out of a working group of radiation protection physicians of the German Red Cross in the late 1950s and was founded in 1964. It was dedicated to promoting radiation protection and representing medical, dental, and veterinary radiation protection concerns to the public and the health care system. In 2017, it was merged into the Professional Association for Radiation Protection. The Austrian Association for Radiation Protection (ÖVS), founded in 1966, pursues the same goals, as does the Association for Medical Radiation Protection in Austria. The Professional Association for Radiation Protection for Germany and Switzerland is networked worldwide.

Radiation protection in radiotherapy
In radiotherapy, radiation protection involves more than structural safeguards and the protection of the therapist. The benefit/risk assessment must weigh the therapeutic goal of treating the patient's cancer against the safety of all involved; above all, treatment planning must ensure that radiation is delivered only where it is needed. Linear accelerators, available since about 1970, replaced cobalt and caesium emitters in routine therapy because of their superior technical characteristics and risk profile. Unlike X-ray and telecurie systems, linear accelerators require the presence of a medical physicist responsible for technical quality control. Radiation necrosis, the death of cells in an organism caused by ionizing radiation, is a serious complication of radiosurgical treatment that becomes clinically apparent months or years after irradiation; its incidence has fallen significantly since the early days of radiation therapy. Modern radiation techniques spare healthy tissue as far as possible while still irradiating enough of the area around the tumor to prevent recurrence. Nevertheless, patients undergoing radiotherapy inevitably face a certain level of radiation risk.

Radiation protection and radiation damage in veterinary medicine
The literature on radiation injury to animals is limited. Diagnostic radiation has been shown to cause local burns in animals, typically resulting from prolonged exposure of body parts or sparks from old X-ray tubes; other types of radiation injury are not documented. The frequency of injury among veterinary staff and veterinarians is significantly lower than in human medicine. In veterinary medicine, fewer images are taken than in human medicine, particularly fewer CT scans. However, because animals are often restrained manually to avoid anesthesia, at least one person is usually present in the control area, resulting in significantly higher radiation exposure than for human medical staff. Since the 1970s, dosimeters have been used to measure the radiation exposure of veterinary personnel. Feline hyperthyroidism (overactive thyroid) is a common disease in older cats, and radioiodine therapy is considered by many authors to be the treatment of choice.
Following the administration of radioactive iodine, cats are kept in an isolation pen. The cat's radioactivity is measured to determine the time of discharge, typically 14 days after the start of therapy. The therapy requires significant radiation protection measures and is currently offered at only two veterinary facilities in Germany (as of 2010). After the start of treatment, cats must be kept indoors for four weeks, and contact with pregnant women and children under the age of 16 must be avoided because of the residual radioactivity. Just like a medical practice, any veterinary practice operating an X-ray machine must have sufficient staff with the appropriate expertise, as required by Section 18 of the X-ray Ordinance 2002. The corresponding training for paraveterinary workers (then called veterinary nurses) took place in 1990. In 2017, Europe's first clinic for horses with cancer opened in Linsengericht (Hesse). Radiation therapy there is administered in a treatment room eight meters wide, on a specially designed table that can bear heavy weights; the surrounding area is protected from radiation by three-meter-thick walls. Mobile equipment is used to irradiate tumors in small animals at various locations.

Radioactive substances

Radon
Radon is a naturally occurring radioactive noble gas discovered in 1900 by Friedrich Ernst Dorn (1848-1916) and is considered carcinogenic. Radon is found mainly in areas with high levels of uranium and thorium in the soil, chiefly areas with large granitic rock deposits. According to studies by the World Health Organization, the incidence of lung cancer increases significantly at radon levels of 100-200 Bq per cubic meter of indoor air; the likelihood of developing lung cancer rises by about 10% with each additional 100 Bq/m3 of indoor air. Elevated radon levels have been measured in numerous areas in Germany (particularly southern Germany), Austria, and Switzerland.

Germany
The Federal Office for Radiation Protection has developed a radon map of Germany. The EU Directive 2013/59/Euratom (Radiation Protection Basic Standards Directive) introduced reference levels and the possibility for workers to have their workplace tested for radon exposure. In Germany, it was implemented in the Radiation Protection Act (Chapter 2, Sections 124-132 StrlSchG) and the amended Radiation Protection Ordinance (Part 4, Chapter 1, Sections 153-158 StrlSchV). The new radon protection regulations for workplaces and new residential buildings have been binding since January 2019. Areas with extensive radon contamination and radon precautionary areas have been determined by the environment ministries of the federal states (as of June 15, 2021).

Austria
The highest radon concentrations in Austria were measured in 1991 in the municipality of Umhausen in Tyrol. Umhausen has about 2,300 inhabitants and is located in the Ötztal valley. Some of the houses there were built on a bedrock of granite gneiss; from this porous subsoil, the radon present in the rock seeped freely into the unsealed cellars, which were contaminated with up to 60,000 becquerels of radon per cubic meter of air. Radon levels in the apartments in Umhausen have been systematically monitored since 1992. Since then, extensive radon mitigation measures have been implemented in the buildings: new construction, sealing of cellar floors, forced ventilation of cellars, or relocation.
Queries in the Austrian Health Information System (ÖGIS) have shown that the incidence of new cases of lung cancer there has declined sharply since then. The Austrian National Radon Project (ÖNRAP) has studied radon exposure throughout the country, and Austria likewise has a Radiation Protection Act as its legal basis; indoor limits were set in 2008. In Austria, the Radon Protection Ordinance in its version of September 10, 2021 is currently in force, which also defines the radon protection areas and radon precautionary areas.

Switzerland
The aim of the Radon Action Plan 2012-2020 in Switzerland was to incorporate the new international recommendations into the Swiss strategy for protection against radon and thus reduce the number of lung cancer cases attributable to radon in buildings. On 1 January 2018, the limit value of 1,000 Bq/m3 was replaced by a reference value of 300 becquerels per cubic meter (Bq/m3) for the radon gas concentration averaged over a year in "rooms in which people regularly spend several hours a day". Subsequently, on May 11, 2020, the Federal Office of Public Health (FOPH) issued the Radon Action Plan 2021-2030. The provisions on radon protection are primarily laid down in the Radiation Protection Ordinance (RPO).

Radiation sickness among miners
In 1879, Walther Hesse (1846-1911) and Friedrich Hugo Härting published the study "Lung Cancer, the Miners' Disease in the Schneeberg Mines". Hesse, a pathologist, was shocked by the poor health and young age of the miners. This particular form of bronchial carcinoma was named Schneeberg disease because it occurred among miners in the Schneeberg mines in the Saxon Ore Mountains. When Hesse's report was published, radioactive radiation and the existence of radon were unknown. It was not until 1898 that Marie Curie-Skłodowska (1867-1934) and her husband Pierre Curie (1859-1906) discovered radium and coined the concept of radioactivity. Beginning in the fall of 1898, Marie Curie suffered from inflammation of the fingertips, the first known symptoms of radiation sickness. In the Jáchymov mines, where silver and non-ferrous metals had been mined from the 16th to the 19th century, uranium ore was mined in abundance in the 20th century. It was only during the Second World War that restrictions were imposed on ore mining in the Schneeberg and Jáchymov mines. After World War II, uranium mining was accelerated for the Soviet atomic bomb project and the emerging Soviet nuclear industry, using forced labor: initially German prisoners of war and displaced persons, and, after the Communist takeover of February 1948, political prisoners of the regime in Czechoslovakia as well as conscripted civilian workers. Several "Czechoslovak gulags" were established in the area to house these workers. In all, about 100,000 political prisoners and more than 250,000 forced laborers passed through the camps; about half of them probably did not survive the mining work. Uranium mining ceased in 1964. The number of further victims who died as a result of the radiation can only be estimated. Radon-bearing springs discovered during mining in the early 20th century established a spa industry that is still important today, as well as the town's status as the oldest radium brine spa in the world.
Wismut AG
The approximately 200,000 uranium miners employed by Wismut AG in the former Soviet occupation zone of East Germany were exposed to very high levels of radiation, particularly between 1946 and 1955, but also in later years. This exposure was caused by the inhalation of radon and its radioactive decay products, which were deposited to a considerable extent in the inhaled dust. Radiation exposure was expressed in the historical unit of the working level month (WLM), introduced in the 1950s specifically for occupational safety in U.S. uranium mines to record the exposure resulting from radon and its decay products in the air breathed; one WLM corresponds to exposure at a concentration of one working level for 170 working hours. Approximately 9,000 workers at Wismut AG have been diagnosed with lung cancer.

Radium
Until the 1930s, radium compounds were considered not only relatively harmless but even beneficial to health; they were advertised as medicines for a variety of ailments or used in products that glowed in the dark, and processing took place without any safeguards. Until the 1960s, radioactivity was often handled naively and carelessly. From 1940 to 1945, the Berlin-based Auergesellschaft, founded by Carl Auer von Welsbach (1858-1929, Osram), produced a radioactive toothpaste called Doramad that contained thorium-X and was sold internationally. It was advertised with the statement: "Its radioactive radiation strengthens the defenses of the teeth and gums. The cells are charged with new life energy and the destructive effect of bacteria is inhibited." This gave the claim of radiant white teeth a double meaning. By 1930, there were also bath additives and eczema ointments under the brand name "Thorium-X". Radium was also added to toothpastes such as Kolynos. After World War I, radioactivity became a symbol of modern achievement and was considered "chic": radioactive substances were added to mineral water, condoms, and cosmetic powders, and even chocolate laced with radium was sold. The toy manufacturer Märklin in the Swabian town of Göppingen tested the sale of an X-ray machine for children, and at upper-class parties people "photographed" each other's bones for fun. A system called Trycho for epilation (hair removal) of the face and body was franchised in the USA; as a result, thousands of women suffered skin burns, ulcers, and tumors. It was not until the atomic bombings of Hiroshima and Nagasaki that the public became aware of the dangers of ionizing radiation and these products were banned. A radium industry had developed, using radium in creams, beverages, chocolates, toothpastes, and soaps, and it took a relatively long time for radium and its decay product radon to be recognized as the cause of the observed effects. Radithor, a radioactive agent consisting of triple-distilled water in which the radium isotopes 226Ra and 228Ra were dissolved so that it had an activity of at least one microcurie, was marketed in the United States. It was not until 1932, when the prominent American athlete Eben Byers, who by his own account had taken about 1,400 vials of Radithor as medicine on the recommendation of his physician, fell seriously ill with cancer, lost many of his teeth, and died shortly thereafter in great agony, that strong doubts were raised about the healing powers of Radithor and radium water.

Radium cures
In 1908, the use of radioactive water for therapeutic purposes boomed.
The discovery of springs in Oberschlema and Bad Brambach paved the way for the establishment of radium spas, which relied on the supposed healing properties of radium. During the cures, people bathed in radium water, took drinking cures with radium water, and inhaled radon in emanatoriums. The baths were visited by tens of thousands of people every year, hoping for hormesis. To this day, therapeutic applications are carried out in spas and healing tunnels, using the natural release of radon from the ground. According to the German Spa Association, the activity in the water must be at least 666 Bq/liter; the requirement for inhalation treatments is at least 37,000 Bq/m3 of air. This form of therapy is not scientifically accepted, and the potential risk of the radiation exposure is criticized. The equivalent dose of a radon cure in Germany is given by the individual health resorts as about one to two millisieverts, depending on the location. In 2010, doctors in Erlangen, applying the LNT (linear no-threshold) model, concluded that five percent of all lung cancer deaths in Germany are caused by radon. There are radon baths in Bad Gastein, Bad Hofgastein and Bad Zell in Austria, in Niška Banja in Serbia, in the radon revitalization bath in Menzenschwand and in Bad Brambach, Bad Münster am Stein-Ebernburg, Bad Schlema, Bad Steben, Bad Schmiedeberg and Sibyllenbad in Germany, in Jáchymov in the Czech Republic, in Hévíz in Hungary, in Świeradów-Zdrój (Bad Flinsberg) in Poland, in Naretschen and Kostenez in Bulgaria, and on the island of Ischia in Italy. There are radon tunnels in Bad Kreuznach and Bad Gastein.

Illuminated dials
The dangers of radium were recognized in the early 1920s and first described in 1924 by the New York dentist and oral surgeon Theodor Blum (1883-1962), who published an article on the clinical picture of the so-called radium jaw. He was particularly aware of the use of radium in the watch industry, where it was used for luminous dials, and observed the disease in female patients who, as dial painters, had come into contact with luminous paint similar in composition to Radiomir, a luminous material invented in 1914 consisting of a mixture of zinc sulfide and radium bromide. As they painted, the workers used their lips to bring the tip of the phosphor-laden brush into the desired pointed shape, and in this way the radioactive radium entered their bodies. In the U.S. and Canada alone, about 4,000 workers were affected over the years; in retrospect, these factory workers became known as the Radium Girls. They also played with the paint, painting their fingernails, teeth, and faces, which made them glow at night to the surprise of their companions. After Harrison Stanford Martland (1883-1954), chief medical examiner in Essex County, detected the radioactive noble gas radon (a decay product of radium) in the breath of the Radium Girls, he turned to Charles Norris (1867-1935) and Alexander Oscar Gettler (1883-1968). In 1928, Gettler was able to detect a high concentration of radium in the bones of Amelia Maggia, one of the young women, even five years after her death. In 1931, a method was developed for determining radium dosage using a film dosimeter: a standard preparation is irradiated through a hardwood cube onto an X-ray film, which is thereby blackened. For a long time, the cube minute was an important unit of radium dosage, calibrated by ionometric measurements. The radiologists Hermann Georg Holthusen (1886-1971) and Anna Hamann (1894-1969) found a calibration value of 0.045 r/min in 1932/1935.
In this calibration, the film receives a gamma-ray dose of 0.045 r per minute through the wooden cube from a 13.33 mg preparation. In 1933, the physicist Robley D. Evans (1907-1995) made the first measurements of radon and radium in the excretions of female workers. On this basis, the National Bureau of Standards, the predecessor of the National Institute of Standards and Technology (NIST), set the limit for radium at 0.1 microcuries (about 3.7 kilobecquerels) in 1941. A Radium Action Plan 2015-2019 aims to solve the problem of radiological contamination in Switzerland, mainly in the Jura Mountains, caused by the use of radium luminous paint in the watch industry until the 1960s. In France, a line of cosmetics called Tho-Radia, containing both thorium and radium, was created in 1932 and lasted until the 1960s.

Other terrestrial radiation
Terrestrial radiation is the ubiquitous radiation on Earth caused by radionuclides in the ground that were formed billions of years ago by stellar nucleosynthesis and have not yet decayed because of their long half-lives. It is produced by natural radionuclides occurring in the Earth's soil, rocks, hydrosphere, and atmosphere. Natural radionuclides can be divided into cosmogenic and primordial nuclides; cosmogenic nuclides do not contribute significantly to the terrestrial ambient radiation at the Earth's surface. The sources of terrestrial radiation are the natural radioactive nuclides found in the uppermost layers of the Earth, in water, and in the air. These include in particular thorium-232 (half-life 14 billion years), uranium-238 (half-life 4.4 billion years), uranium-235 (half-life 0.7 billion years), and potassium-40 (half-life 1.3 billion years).

Mining and extraction of fuels
According to the World Nuclear Association, coal from all deposits contains traces of various radioactive substances, particularly radon, uranium, and thorium. These substances are released during coal mining, especially from surface mines, through power plant emissions, or via power plant ash, and contribute to terrestrial radiation exposure through their exposure pathways. In December 2009, it was revealed that oil and gas production generates millions of tons of radioactive waste each year, including 226radium and 210polonium, much of which is improperly disposed of without detection. The specific activity of the waste ranges from 0.1 to 15,000 becquerels per gram. In Germany, according to the Radiation Protection Ordinance of 2001, the material is subject to monitoring from one becquerel per gram and would have to be disposed of separately. The implementation of this regulation was left to the industry, which disposed of the waste carelessly and improperly for decades.

Building material
Every building material contains traces of natural radioactive substances, especially 238uranium, 232thorium and their decay products, and 40potassium. Solidified and effusive rocks such as granite, tuff, and pumice have higher levels of radioactivity; sand, gravel, limestone, and natural gypsum (calcium sulfate dihydrate) have low levels. The European Union's Activity Concentration Index (ACI), developed in 1999, can be used to assess radiation exposure from building materials. It replaced the Leningrad summation formula, in use since 1971 in Leningrad (St. Petersburg), for determining how much radiation exposure from building materials is permissible for humans.
The ACI is calculated from the sum of the weighted activity concentrations of 40potassium, 226radium, and 232thorium, the weighting taking into account the relative harmfulness to humans; in the EU formulation, ACI = C(226Ra)/300 + C(232Th)/200 + C(40K)/3000, with the activity concentrations C in Bq/kg. According to official recommendations, building materials with a European ACI value greater than 1 should not be used in large quantities.

Glazes
Uranium pigments are used to color ceramic tiles with uranium glazes (red, yellow, brown), with 2 mg of uranium per cm2 allowed. Between 1900 and 1943, large quantities of uranium-containing ceramics were produced in the United States, as well as in Germany and Austria. It is estimated that between 1924 and 1943, 50-150 tons of uranium (V,VI) oxide were used annually in the U.S. to produce uranium-containing glazes. In 1943, the U.S. government imposed a ban on the civilian use of uranium-containing substances, which remained in effect until 1958. Beginning in 1958, the U.S. government, and from 1969 the United States Atomic Energy Commission, sold depleted uranium in the form of uranium(VI) fluoride for civilian use. In Germany, uranium-glazed ceramics were produced by the Rosenthal porcelain factory and were commercially available until the early 1980s. Because of possible abrasion, uranium-glazed ceramics should only be kept as collector's items and not put to everyday use.

ODL measurement network
The Federal Office for Radiation Protection's monitoring network measures natural radiation exposure as the local dose rate (ODL), expressed in microsieverts per hour (μSv/h). In Germany, the natural ODL ranges from approximately 0.05 to 0.18 μSv/h, depending on local conditions. The ODL monitoring network has been operational since 1973 and currently comprises 1,800 fixed, automatically operating measuring points. Its primary function is to provide early warning by rapidly detecting increased radiation from radioactive substances in the air over Germany. Since 2008, spectroscopic probes have also been used to determine the contribution of artificial radionuclides in addition to the local dose rate. Alongside the ODL network of the Federal Office for Radiation Protection, there are further federal monitoring networks at the Federal Maritime and Hydrographic Agency and the Federal Institute of Hydrology, which measure gamma radiation in water, while the German Meteorological Service measures air activity with aerosol samplers. To monitor nuclear facilities, the relevant federal states operate their own ODL monitoring networks. The data from these networks are automatically fed into the Integrated Measurement and Information System (IMIS), where they are used to analyze the current situation. Many countries operate their own ODL monitoring networks to protect the public; in Europe, these data are collected and published on the EURDEP platform of the European Atomic Energy Community. The European monitoring networks are based on Articles 35 and 37 of the Euratom Treaty.

Radionuclides in medicine
Nuclear medicine is the use of unsealed radionuclides for diagnostic and therapeutic purposes (radionuclide therapy). It also includes the use of other radioactive substances and nuclear physics techniques for functional and localization diagnostics. In 1923, George de Hevesy (1885-1966), then living as a lodger, suspected that leftovers of a pudding he had not finished were being served to him again the following week, so he mixed a small amount of a radioactive isotope into the leftovers.
When the dish was served to him again a week later, he was able to detect radioactivity in a sample of it; when he showed this to his landlady, she immediately gave him notice. The method he used made him the father of nuclear medicine: it became known as the tracer method and is still used today in nuclear medicine diagnostics. A small amount of a radioactive substance is administered, and its distribution in the organism and its path through the human body can then be tracked externally, providing information about various metabolic functions of the body. The continuous development of radionuclides has improved radiation protection; for example, the mercury compounds 203Hg-chlormerodrin and 197Hg-chlormerodrin were abandoned in the 1960s as substances were developed that allowed a higher photon yield with less radiation exposure. Beta emitters such as 131I and 90Y are used in radionuclide therapy. In nuclear medicine diagnostics, the beta-plus emitters 18F, 11C, 13N, and 15O are used as radioactive markers for tracers in positron emission tomography (PET), and radiopharmaceuticals (isotope-labeled drugs) are being developed on an ongoing basis. Radiopharmaceutical residues, such as used application syringes and contaminated waste water from the patients' toilets, showers, and washbasins, are collected in tanks and stored until they can be safely discharged into the sewer system; the storage time depends on the half-life and ranges from a few weeks to a few months, depending on the radionuclide. Since 2001, under the Radiation Protection Ordinance, the specific radioactivity in the waste containers has been recorded in release measuring stations and the release time calculated automatically, which requires measurements of the sample activity in Bq/g and of the surface contamination in Bq/cm2. Rules of conduct for patients after their discharge from the clinic are also prescribed. To protect personnel, there are syringe filling systems, borehole (well counter) measurement stations for nuclide-specific measurement of low-activity, small-volume individual samples, a lift system into the measurement chamber to reduce radiation exposure when handling highly active samples, probe measurement stations, and ILP (isolated limb perfusion) measurement stations that monitor activity with one or more detectors during surgery and report leakage to the surgical oncologist.

Radioiodine therapy
Radioiodine therapy (RIT) is a nuclear medicine procedure used to treat thyroid hyperfunction, Graves' disease, thyroid enlargement, and certain forms of thyroid cancer. The radioactive iodine isotope used is 131iodine, a predominantly beta-emitting nuclide with a half-life of eight days that, in the human body, is taken up only by thyroid cells. In 1942, Saul Hertz (1905-1950) of the Massachusetts General Hospital and the physicist Arthur Roberts published their report on the first radioiodine therapy (1941) for Graves' disease, at that time still predominantly using the 130iodine isotope with a half-life of 12.4 hours. At the same time, Joseph Gilbert Hamilton (1907-1957) and John Hundale Lawrence (1904-1991) performed the first therapy with 131iodine, the isotope still used today. Radioiodine therapy is subject to special legal regulations in many countries and in Germany may only be performed on an inpatient basis. There are approximately 120 treatment centers in Germany (as of 2014), performing approximately 50,000 treatments per year. In Germany, the minimum inpatient stay is 48 hours; discharge depends on the residual activity remaining in the body, for which a simple physical-decay estimate is sketched below.
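The following minimal sketch in Python estimates the stay length from physical decay alone, using the eight-day half-life of 131I given above. The administered activity is a hypothetical example, and real discharge occurs sooner, because biological excretion removes iodine in addition to physical decay.

```python
# Sketch: time for 131-I activity to fall to a discharge limit (physical decay only).
import math

HALF_LIFE_DAYS = 8.02  # physical half-life of iodine-131

def days_until(limit_mbq: float, administered_mbq: float) -> float:
    """Days for activity to decay from administered_mbq down to limit_mbq."""
    return HALF_LIFE_DAYS * math.log2(administered_mbq / limit_mbq)

# Hypothetical 2000 MBq administration against the ~250 MBq German limit:
print(f"{days_until(250, 2000):.1f} days")  # ~24 days as an upper bound
```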
In 1999, the limit for residual activity was raised. The dose rate may not exceed 3.5 μSv per hour at a distance of 2 meters from the patient, which means that a radiation exposure of 1 mSv may not be exceeded within one year at a distance of 2 meters; this corresponds to a residual activity of about 250 MBq. Similar regulations exist in Austria. In Switzerland, a maximum radiation exposure of 1 mSv per year applies, with a maximum of 5 mSv per year for the patient's relatives. After discharge following radioiodine therapy, a maximum dose rate of 5 μSv per hour at a distance of 1 meter is permitted, corresponding to a residual activity of approximately 150 MBq. In the event of early discharge, the supervisory authority must be notified for dose rates up to 17.5 μSv/h; above 17.5 μSv/h, permission must be obtained. If the patient is transferred to another ward, the responsible radiation protection officer must ensure that appropriate radiation protection measures are taken there, for example that a temporary control area is set up.

Scintigraphy
Scintigraphy is a nuclear medicine procedure in which small amounts of radioactive substances are administered to the patient for diagnostic purposes. Examples include bone scintigraphy, thyroid scintigraphy, octreotide scintigraphy, and, as a further development of the technique, single photon emission computed tomography (SPECT). In myocardial scintigraphy, used to diagnose the blood flow and function of the heart muscle (myocardium), tracers such as 201Tl thallium(I) chloride, technetium compounds (99mTc tracers, 99mtechnetium tetrofosmin), and PET tracers (with administered activities of 1,100 MBq for 15O-water, 555 MBq for 13N-ammonia, or 1,850 MBq for 82Rb rubidium chloride) are used. An examination with 74 MBq of 201thallium chloride causes a radiation exposure of about 16 mSv (effective dose equivalent); an examination with 740 MBq of 99mtechnetium-MIBI about 7 mSv. Metastable 99mTc is by far the most important nuclide used as a tracer in scintigraphy because of its short half-life, the 140 keV gamma radiation it emits, and its ability to bind to many active biomolecules. Most of the tracer is excreted after the examination; the remaining 99mTc decays rapidly, with a half-life of 6 hours, to 99Tc, which has a long half-life of 212,000 years and, because of the relatively weak beta radiation released during its decay, contributes only a small amount of additional radiation exposure over the remaining lifetime. In the United States alone, approximately seven million individual doses of 99mTc are administered each year for diagnostic purposes. To reduce radiation exposure, the American Society of Nuclear Cardiology (ASNC) issued dosage recommendations in 2010. The effective dose is 2.4 mSv for 13N-ammonia, 2.5 mSv for 15O-water, 7 mSv for 18F-fluorodeoxyglucose, and 13.5 mSv for 82Rb-rubidium chloride; compliance with these recommendations is expected to reduce the average radiation exposure to about 9 mSv. The Ordinance on Radioactive Drugs or Drugs Treated with Ionizing Radiation regulates the approval procedures for the marketability of radioactive drugs.

Brachytherapy
Brachytherapy involves placing a sealed radioactive source inside or next to the area requiring treatment, for example in prostate cancer. Afterloading brachytherapy is often combined with teletherapy, external radiation delivered from a greater distance than in brachytherapy.
Brachytherapy
Brachytherapy places a sealed radioactive source inside or near the body to treat cancer, such as prostate cancer. Afterloading brachytherapy is often combined with teletherapy, in which external radiation is delivered from a greater distance than in brachytherapy. Brachytherapy is not classified as a nuclear medicine procedure, although, like nuclear medicine, it uses the radiation emitted by radionuclides. After initial interest in brachytherapy in the early 20th century, its use declined in the mid-20th century because of the radiation exposure to physicians from the manual handling of the radiation sources. It was not until the development of remote-controlled afterloading systems and the use of new radiation sources in the 1950s and 1960s that the risk of unnecessary radiation exposure to physicians and patients was reduced. In the afterloading procedure, an empty, tubular applicator is inserted into the target volume (e.g., the uterus) before the actual therapy and, after its position has been checked, loaded with a radioactive preparation. The preparation is located at the tip of a steel wire that is advanced and retracted step by step under computer control. After the pre-calculated time, the source is withdrawn into a safe and the applicator is removed. The procedure is used for breast cancer, bronchial carcinoma and carcinoma of the floor of the mouth, among others. Beta emitters such as 90Sr or 106Ru, as well as 192Ir, are used. As a precaution, patients undergoing permanent brachytherapy are advised not to hold small children immediately after treatment and not to be in the vicinity of pregnant women, since low-dose radioactive sources (seeds) remain in the body after permanent brachytherapy. This is to protect the particularly radiation-sensitive tissues of a fetus or infant.

Thorium as a drug and X-ray contrast agent
Radioactive thorium was used in the 1950s and 60s to treat tuberculosis and other benign diseases, including in children, with serious consequences (see Peteosthor). A stabilized suspension of colloidal thorium(IV) oxide, co-developed by António Egas Moniz (1874-1954), was used from 1929 under the trade name Thorotrast as an X-ray contrast agent for angiography in several million patients worldwide until it was banned in the mid-1950s. It accumulates in the reticulohistiocytic system and, through the locally increased radiation exposure, can lead to cancers, in particular cholangiocarcinoma and angiosarcoma of the liver, two rare liver cancers. Carcinomas of the paranasal sinuses have also been described following administration of Thorotrast. The disease typically appears 30-35 years after exposure. The biological half-life of Thorotrast is approximately 400 years. The largest study in this area was conducted in Germany in 2004 and showed a particularly high mortality rate among patients exposed in this way: the median life expectancy over a seventy-year observation period was 14 years shorter than in the comparison group.

Nuclear weapons and nuclear energy

Radiation effects of the atomic bomb attacks and consequences for radiation protection
After the U.S. atomic bombs were dropped on Hiroshima and Nagasaki on August 6 and 9, 1945, around 130,000 people - in addition to the roughly 100,000 immediate victims - died from the effects of radiation by the end of 1945. Some experienced the so-called walking ghost phase, an acute radiation sickness following a lethal whole-body equivalent dose of 6 to 20 sievert. The phase describes the period of apparent recovery of a patient between the onset of the first massive symptoms and the inevitable death. In the years that followed, additional deaths from radiation-induced diseases occurred.
In Japan, the radiation-damaged survivors are called hibakusha and are conservatively estimated to number about 100,000. In 1946, the Atomic Bomb Casualty Commission (ABCC) was established by the National Research Council of the National Academy of Sciences by order of U.S. President Harry S. Truman to study the long-term effects of radiation on survivors of the atomic bombings. In 1975, the ABCC was replaced by the Radiation Effects Research Foundation (RERF). Organizations such as the United Nations Scientific Committee on the Effects of Atomic Radiation (UNSCEAR), founded in 1955, and the National Academy of Sciences' Advisory Committee on the Biological Effects of Ionizing Radiation (BEIR Committee), founded in 1972, analyze the effects of radiation exposure on humans on the basis of atomic bomb victims who have been examined and, in some cases, medically monitored for decades. They determine the course of the mortality rate as a function of the age of the radiation victims in comparison with the spontaneous rate, as well as the dose dependency of the number of additional deaths. To date, 26 UNSCEAR reports have been published and are available online, most recently in 2017 on the effects of the Fukushima nuclear accident.

By 1949, Americans felt increasingly threatened by the possibility of nuclear war with the Soviet Union and sought ways to survive a nuclear attack. The U.S. Federal Civil Defense Administration (USFCDA) was created by the government to educate the public on how to prepare for such an attack. In 1951, with the help of this agency, a children's educational film called Duck and Cover was produced in the U.S., in which a turtle demonstrates how to protect oneself from the immediate effects of an atomic bomb explosion by using a coat, tablecloths, or even a newspaper. Recognizing that existing medical capacity would not be sufficient in an emergency, dentists were called upon either to assist physicians in an emergency or, if necessary, to provide assistance themselves. To mobilize the profession with the help of a prominent representative, the dentist Russell Welford Bunting (1881-1962), dean of the University of Michigan Dental School, was recruited in July 1951 as a dental consultant to the USFCDA.

The American physicist Karl Ziegler Morgan (1907-1999) was one of the founders of radiation health physics. In later life, after a long career with the Manhattan Project and Oak Ridge National Laboratory (ORNL), he became a critic of nuclear power and nuclear weapons production. Morgan was Director of Health Physics at ORNL from the late 1940s until his retirement in 1972. In 1955, he became the first president of the Health Physics Society and served as editor of the journal Health Physics from 1955 to 1977.

Nuclear fallout shelters are designed to protect their occupants from radioactive fallout for an extended period. Because of the nature of nuclear warfare, such shelters must be completely self-sufficient for long periods: owing to the radioactive contamination of the surrounding area, a facility of this kind must be able to sustain its occupants for several weeks. In 1959, top-secret construction began in Germany on a government bunker in the Ahr valley. In June 1964, 144 test persons survived for six days in a civilian nuclear bunker. The bunker in Dortmund had been built during the Second World War and had been converted at great expense in the early 1960s into a building designed to withstand nuclear weapons. However, it would be impossible to build bunkers for millions of German citizens.
The Swiss Army built about 7800 nuclear fallout shelters in 1964. In the United States in particular, but also in Europe, citizens built private fallout shelters in their front yards on their own initiative. Such construction was largely kept secret because the owners feared that third parties might take possession of the bunker in the event of a crisis.

Fallout and contamination
On July 16, 1945, the first atomic bomb test took place near the town of Alamogordo (New Mexico, USA). As a result of the atmospheric nuclear weapons tests carried out by the United States, the Soviet Union, France, Great Britain, and China, the Earth's atmosphere became increasingly contaminated with fission products from these tests from the 1950s onwards. The radioactive fallout settled on the earth's surface, was taken up by plants and, via animal feed, entered food of animal origin. Ultimately, the fission products reached the human body and could be detected in bones and teeth, for example as strontium-90. The radioactivity in the field was measured with a gamma scope, as shown at the air raid equipment exhibition in Bad Godesberg in 1954. Around 180 tests were carried out in 1962 alone. The extent of the radioactive contamination of food sparked worldwide protests in the early 1960s.

During World War II and the Cold War, the Hanford Site produced plutonium for U.S. nuclear weapons for more than 50 years. The plutonium for the first plutonium bomb, Fat Man, also came from there. Hanford is considered the most radioactively contaminated site in the Western Hemisphere. A total of 110,000 tons of nuclear fuel was produced there. In 1948, a radioactive cloud escaped from the plant; the release of 131I alone amounted to 5500 curies. Most of the reactors at Hanford were shut down in the 1960s, but no disposal or decontamination was carried out. After preliminary work, the world's largest decontamination operation began at Hanford in 2001 to safely dispose of the radioactive and toxic waste. In 2006, some 11,000 workers were still cleaning up contaminated buildings and soil to reduce radiation levels at the site to acceptable levels. This work is expected to continue until 2052. It is estimated that more than four million liters of radioactive liquid have leaked from storage tanks.
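Activity figures from this period, such as the Hanford release just mentioned, are usually quoted in the historical unit curie (Ci), while modern regulations use the SI unit becquerel (Bq). The conversion factor is exact by definition, as this small Python sketch shows:

    def curie_to_becquerel(ci):
        # 1 Ci = 3.7e10 Bq (an exact, defined constant)
        return ci * 3.7e10

    # The 5500 Ci of 131I released at Hanford in 1948:
    print(f"{curie_to_becquerel(5500):.3e} Bq")  # about 2.035e14 Bq, i.e. ~204 TBq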
It was only after the two superpowers agreed on a Partial Test Ban Treaty in 1963, which allowed only underground nuclear weapons testing, that the level of radioactivity in food began to decline. Shields Warren (1896-1980), one of the authors of a report on the effects of the atomic bombs dropped on Japan, was criticized for downplaying the effects of residual radiation in Hiroshima and Nagasaki, but later warned of the dangers of fallout. Fallout refers to the spread of radioactivity as a function of the prevailing weather situation; a model experiment on this was conducted in 2008.

The International Campaign to Abolish Nuclear Weapons (ICAN) is an international alliance of non-governmental organizations committed to the elimination of all nuclear weapons through a binding international treaty - a Nuclear Weapons Convention. ICAN was founded in 2007 by IPPNW (International Physicians for the Prevention of Nuclear War) and other organizations at the Nuclear Non-Proliferation Treaty Conference in Vienna and launched in twelve countries. Today, 468 organizations in 101 countries are involved in the campaign (as of 2017). ICAN was awarded the 2017 Nobel Peace Prize.

Radioprotectors
A radioprotector is a pharmacon that, when administered, selectively protects healthy cells from the toxic effects of ionizing radiation. The first work on radioprotectors began as part of the Manhattan Project, the military research project to develop and build an atomic bomb. Iodine absorbed by the body is almost completely stored in the thyroid gland and has a biological half-life of about 120 days. If the iodine is radioactive (131I), it can irradiate and damage the thyroid gland in high doses during this time. Because the thyroid gland can only absorb a limited amount of iodine, the uptake of radioactive iodine can be blocked by the prophylactic administration of non-radioactive iodine (iodine blockade). Potassium iodide in tablet form (colloquially known as "iodine tablets") reduces the uptake of radioactive iodine into the thyroid by a factor of 90 or more, thus acting as a radioprotector. All other radiation damage remains unaffected by taking iodine tablets. In Germany, the Potassium Iodide Ordinance (KIV) was enacted in 2003 to ensure "the supply of the population with potassium iodide-containing medicines in the event of radiological incidents". Potassium iodide is usually stored in communities near nuclear facilities for distribution to the population in the event of a disaster. People over the age of 45 should not take iodine tablets, because for them the risk of side effects is higher than the risk of developing thyroid cancer. In Switzerland, as a precautionary measure, tablets have been distributed every five years since 2004 to the population living within 20 km of nuclear power plants (within 50 km from 2014). In Austria, large stocks of iodine tablets have been kept in pharmacies, kindergartens, schools, the army and the federal reserve since 2002.

Thanks to the protective function of radioprotectors, the radiation dose used to treat malignant tumors (cancer) can be increased, thereby increasing the effectiveness of the therapy. There are also radiosensitizers, which increase the sensitivity of malignant tumor cells to ionizing radiation. As early as 1921, the German radiologist Hermann Holthusen (1886-1971) described how oxygen increases the sensitivity of cells.

Nuclear accidents and catastrophes
Founded in 1957 as a sub-organization of the Organization for Economic Cooperation and Development (OECD), the Nuclear Energy Agency (NEA) pools the scientific and financial resources of the participating countries' nuclear research programs. It operates various databases and also manages the International Reporting System for Operating Experience (IRS, or IAEA/NEA Incident Reporting System) of the International Atomic Energy Agency (IAEA). The IAEA records and investigates radiation accidents that have occurred worldwide in connection with nuclear medicine procedures and the disposal of related materials. The International Nuclear and Radiological Event Scale (INES) is a scale for safety-related events, in particular nuclear incidents and accidents in nuclear facilities. It was developed by an international group of experts and officially adopted in 1990 by the IAEA and the Nuclear Energy Agency of the OECD. The purpose of the scale is to inform the public quickly about the safety significance of an event by means of a comprehensible classification. At the end of a radiation source's useful life, the proper disposal of the remaining, still highly active material is of paramount importance.
Improper disposal of the radionuclide cobalt-60, used in cobalt guns for radiotherapy, has led to serious radiation accidents, such as the Ciudad Juárez (Mexico) radiological accident in 1983/84, the Goiânia (Brazil) accident in 1987, the Samut Prakan (Thailand) nuclear accident in 2000, and the Mayapuri (India) accident in 2010.

Eleven Therac-25 linear accelerators were built by the Canadian company Atomic Energy of Canada Limited (AECL) between 1982 and 1985 and installed in clinics in the United States and Canada. Software errors and a lack of quality assurance led to serious malfunctions that killed three patients and seriously injured three others between June 1985 and 1987 before appropriate countermeasures were taken. The radiation exposure in the six cases was subsequently estimated at between 40 and 200 gray; a normal treatment corresponds to a dose of less than 2 gray. Around 1990, about one hundred cobalt guns were still in use in Germany. Electron linear accelerators have since taken their place; the last German cobalt gun was decommissioned in 2000.

The Fukushima nuclear accident in 2011 reinforced the need for proper safety management and for deriving safety indicators for the frequency of errors and incorrect actions by personnel, i.e., the human factor. The Nuclear Safety Commission of Japan was a body of scientists that advised the Japanese government on nuclear safety issues. The commission was established in 1978 but, after the Fukushima nuclear disaster, was dissolved on September 19, 2012, and replaced by the Nuclear Regulation Authority (Genshiryoku Kisei Iinkai), an independent agency (gaikyoku, "external office") of the Japanese Ministry of the Environment that regulates and monitors the safety of Japan's nuclear power plants and related facilities.

As a result of the Chernobyl nuclear disaster in 1986, the IAEA coined the term "safety culture" for the first time in 1991 to draw attention to the importance of human and organizational issues for the safe operation of nuclear power plants. After this nuclear disaster, the sand in children's playgrounds in Germany was removed and replaced with uncontaminated sand to protect children, who were most vulnerable to the radioactivity. Some families temporarily left Germany to escape the fallout. Infant mortality increased significantly, by 5%, in 1987, the year after Chernobyl; in total, 316 more newborns died that year than statistically expected. In Germany, the caesium-137 inventories from the Chernobyl disaster in soil and food decrease by 2-3% each year; nevertheless, the contamination of game and mushrooms was still comparatively high in 2015, especially in Bavaria, and there are several cases of game meat, especially wild boar, exceeding the limits. Controls, however, are insufficient.

Ocean dumping of radioactive waste
Between 1969 and 1982, conditioned low- and intermediate-level radioactive waste was disposed of in the Atlantic Ocean at a depth of about 4,000 meters under the supervision of the Nuclear Energy Agency (NEA) of the OECD, in accordance with the provisions of the European Convention on the Prevention of Marine Pollution by Dumping of Waste of All Kinds (London Dumping Convention of June 11, 1974). This was carried out jointly by several European countries. Since 1993, international treaties have prohibited the dumping of radioactive waste in the oceans.
For decades, this dumping of nuclear waste went largely unnoticed by the public until Greenpeace denounced it in the 1980s.

Repository for heat-generating radioactive waste
Since the commissioning of the first commercial nuclear power plants (USA 1956, Germany 1962), various final storage concepts for radioactive materials have been proposed, of which only storage in deep geological formations appeared to be safe and feasible within a reasonable period of time and was pursued further. Because of the high activity of the short-lived fission products, spent fuel is initially handled only under water and stored for several years in a decay pool. The water serves for cooling and also shields much of the emitted radiation. This is followed either by reprocessing or by decades of interim storage. Waste from reprocessing must likewise be stored temporarily until its heat output has decreased enough to allow final disposal. Casks are special containers for the storage and transport of highly radioactive materials. Their maximum permissible dose rate is 0.35 mSv/h, of which a maximum of 0.25 mSv/h may be due to neutron radiation. The safety of these transport containers has been discussed every three years since 1980 at the International Symposium on the Packaging and Transportation of Radioactive Materials (PATRAM).

Following various experiments, such as the Gorleben exploratory mine and the Asse mine, a working group on the selection procedure for repository sites (AkEnd) developed recommendations for a new site selection procedure between 1999 and 2002. In Germany, the Site Selection Act was passed in 2013 and the Act on the Further Development of the Site Search on March 23, 2017. A suitable site is to be sought throughout Germany and identified by 2031. In principle, crystalline rock (granite), salt or clay formations can be considered for a repository. There will be no "ideal" site; the "best possible" site is sought. Mining areas and regions where volcanoes have been active or where there is a risk of earthquakes are excluded. Internationally, experts advocate storage in rock formations several hundred meters below the earth's surface: a repository mine is built, the waste is emplaced there, and the mine is then permanently sealed. Geological and technical barriers surrounding the waste are designed to keep it safe for thousands of years. For example, 300 meters of rock will separate the repository from the earth's surface, and the waste will be surrounded by a 100-meter-thick layer of granite, salt or clay. The first waste is not expected to be emplaced until 2050. The Federal Office for the Safety of Nuclear Waste Management (BfE) took up its activities on September 1, 2014. Its remit includes tasks relating to nuclear safety, the safety of nuclear waste management, the site selection procedure including research activities in these areas and, later on, further tasks in the licensing and supervision of repositories. In the USA, Yucca Mountain was initially selected as the final storage site, but this project was temporarily halted in February 2009. Yucca Mountain was the starting point for investigations into atomic semiotics.

Atomic semiotics
The operation of nuclear power plants and other nuclear facilities produces radioactive materials that can have lethal health effects for thousands of years.
However, no institution is capable of maintaining the necessary knowledge of these dangers over such periods, or of ensuring that warnings about the dangers of nuclear waste in repositories will still be understood in the distant future. Only a few years after their disposal, even appropriately labeled capsules of the radionuclide cobalt-60 went unnoticed; improper disposal led to the capsules being opened, with fatal consequences. The time spans involved exceed all previous human standards: cuneiform writing, for instance, which is only about 5000 years old (roughly 150 human generations), can be deciphered only by experts and only after long research. In 1981, research into the development of atomic semiotics began in the USA; in the German-speaking world, Roland Posner (1942-2020) of the Center for Semiotics at Technische Universität Berlin worked on the subject in 1982/83. In the USA, the time horizon for such warning signs was initially set at 10,000 years; later, as in Germany, it was set at one million years, which corresponds to about 30,000 (human) generations. To date, no satisfactory solution to this problem has been found.

Radiation protection during flights

High-altitude radiation
In 1912, Victor Franz Hess (1883-1964) discovered (secondary) cosmic rays in the Earth's atmosphere using balloon flights. For this discovery he received the Nobel Prize in Physics in 1936. He was also one of the "martyrs" of early radiation research and had to undergo a thumb amputation and larynx surgery because of radium burns. In the United States and the Soviet Union, balloon flights to altitudes of about 30 km, followed by parachute jumps from the stratosphere, were conducted before 1960 to study human exposure to cosmic radiation in space. The American Manhigh and Excelsior projects with Joseph Kittinger (1928-2022) became particularly well known, but the Soviet parachutist Yevgeny Andreyev (1926-2000) also set new records.

High-energy radiation from space is much stronger at high altitudes than at sea level, so the radiation exposure of flight crews and air travelers is correspondingly increased. The International Commission on Radiological Protection (ICRP) has issued recommendations for dose limits, which were incorporated into European law in 1996 and into the German Radiation Protection Ordinance in 2001. Radiation exposure is particularly high when flying in the polar regions or over the polar route. The average annual effective dose for aviation personnel was 1.9 mSv in 2015 and 2.0 mSv in 2016. The highest annual personal dose was 5.7 mSv in 2015 and 6.0 mSv in 2016. The collective dose for 2015 was about 76 person-Sv. Flight personnel are thus among the occupational groups in Germany with the highest radiation exposure, in terms of both collective dose and average annual dose. This group effectively also includes frequent flyers, with Thomas Stuker holding the "record" - also in terms of radiation exposure - by reaching the 10 million mile mark in the United Airlines MileagePlus program on 5,900 flights between 1982 and the summer of 2011; in 2017, he passed the 18 million mile mark. The program EPCARD (European Program Package for the Calculation of Aviation Route Doses), developed at the University of Siegen and the Helmholtz Zentrum München, can be used to calculate the dose from all components of natural penetrating cosmic radiation on any flight route and flight profile - also online.
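For a rough feeling for the numbers involved, a route dose can be approximated as flight time multiplied by a mean dose rate; EPCARD itself models the full radiation field from the route, altitude profile and solar activity. In the Python sketch below, the rate of 5 μSv/h is merely an assumed round figure of the order commonly quoted for high-latitude cruise altitudes, not an EPCARD result.

    def route_dose_usv(block_hours, mean_rate_usv_per_h=5.0):
        # Crude approximation: dose = time at cruise x assumed mean dose rate.
        # Real codes such as EPCARD derive the rate from route, altitude
        # and solar activity instead of using a constant.
        return block_hours * mean_rate_usv_per_h

    # An assumed 9-hour polar-route flight:
    print(f"{route_dose_usv(9.0):.0f} microsievert")  # ~45 µSv

A few dozen such flights per year would thus reach the low millisievert range, consistent with the crew doses quoted above.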
Radiation protection in space
From the earliest crewed space flights to the first moon landing and the construction of the International Space Station (ISS), radiation protection has been a major concern. Spacesuits used for extravehicular activities are coated on the outside with aluminum, which largely protects against cosmic radiation. The largest international research project to determine the effective dose and the effective dose equivalent was the Matroshka experiment in 2010, named after the Russian matryoshka dolls because it used a human-sized phantom that can be cut into slices. As part of Matroshka, an anthropomorphic phantom was exposed on the outside of the space station for the first time to simulate an astronaut performing an extravehicular activity (spacewalk) and to determine the resulting radiation exposure. Microelectronics on satellites must also be protected from radiation. Japanese scientists from the Japan Aerospace Exploration Agency (JAXA) have discovered a huge cave on the moon with their Kaguya lunar probe, which could offer astronauts protection from dangerous radiation during future lunar landings, especially during the planned stopover of a Mars mission.

As part of a human mission to Mars, astronauts must be protected from cosmic radiation. During Curiosity's mission to Mars, the Radiation Assessment Detector (RAD) was used to measure radiation exposure. The measured exposure of 1.8 millisieverts per day was mainly due to the constant presence of high-energy galactic particle radiation; radiation from the sun, by contrast, accounted for only about three to five percent of the radiation levels measured during Curiosity's flight to Mars. On the way, the RAD instrument detected a total of five major radiation events caused by solar flares. In one concept for protecting the astronauts, a plasma bubble would surround the spacecraft as an energy shield, its magnetic field shielding the crew from cosmic radiation. This would eliminate the need for conventional radiation shields, which are several centimeters thick and correspondingly heavy. In the Space Radiation Superconducting Shield (SR2S) project, which was completed in December 2015, magnesium diboride was found to be a suitable material for generating such a protective field.

Development of metrological principles of radiation protection

Dosimeter
Dosimeters are instruments used to measure the radiation dose - as absorbed dose or dose equivalent - and are an important cornerstone of radiation protection.

Film dosimeter
At the October 1907 meeting of the American Roentgen Ray Society, Rome Vernon Wagner, an X-ray tube manufacturer, reported that he had begun carrying a photographic plate in his pocket and developing it every evening in order to determine how much radiation he had been exposed to. This was the forerunner of the film dosimeter. His efforts came too late: he had already developed cancer, and he died six months after the conference. In the 1920s, the physical chemist John Eggert (1891-1973) played a key role in the introduction of film dosimetry for routine personal monitoring. Since then, it has been successively improved and, in particular, the evaluation technique has been automated since the 1960s. In the same period, Hermann Joseph Muller (1890-1967) discovered mutations as genetic consequences of X-rays, for which he was awarded the Nobel Prize in 1946, and the roentgen (R) was introduced as a unit for the quantitative measurement of radiation exposure.
A film dosimeter is divided into multiple segments, each containing a light- and radiation-sensitive film surrounded by copper and lead layers of varying thickness. Depending on how deeply the radiation penetrates, each segment is blackened to a different degree. The radiation effect accumulates over the monitoring period, and the radiation dose can then be determined from the blackening. Guidelines for the evaluation exist; those for Germany were published in 1994 and last updated on December 8, 2003.

Particle and quantum detectors
With the invention of the Geiger counter in 1913, which became the Geiger-Müller counter in 1928 - named after the physicists Hans Geiger (1882-1945) and Walther Müller (1905-1979) - individual particles or quanta of ionizing radiation could be detected and measured. Detectors developed later, such as proportional counters and scintillation counters, which not only "count" but also measure energy and distinguish between types of radiation, also became important for radiation protection. Scintillation measurement is one of the oldest methods of detecting ionizing radiation or X-rays: originally, a zinc sulfide screen was held in the path of the beam and the scintillation events were either counted as flashes or, in X-ray diagnostics, viewed as an image. A scintillation counter known as a spinthariscope was developed in 1903 by William Crookes (1832-1919) and used by Ernest Rutherford (1871-1937) to study the scattering of alpha particles by atomic nuclei.

Thermoluminescence dosimeter
Lithium fluoride was proposed for solid-state dosimetry with thermoluminescent dosimeters as early as 1950 by Farrington Daniels (1889-1972), Charles A. Boyd and Donald F. Saunders (1924-2013) in the USA. The intensity of the thermoluminescent light is proportional to the amount of radiation previously absorbed. This type of dosimetry has been used since 1953 in the treatment of cancer patients and wherever people are occupationally exposed to radiation. The thermoluminescence dosimeter was followed by OSL dosimetry, which is based not on heat but on optically stimulated luminescence and was developed by Zenobia Jacobs and Richard Roberts at the University of Wollongong (Australia). The detector releases the stored energy as light; the light output, measured with photomultipliers, is then a measure of the dose.

Whole body counter
Since 2003, whole-body counters have been used in radiation protection to monitor the intake (incorporation) of radionuclides by people who handle unsealed gamma-emitting radioactive materials and who may become contaminated through food, inhalation of dusts and gases, or open wounds (alpha and beta emitters are not measurable in this way).

Test specimen
Constancy testing is the verification of reference values as part of quality assurance in X-ray diagnostics, nuclear medicine diagnostics, and radiotherapy. National regulations specify which parameters are to be tested, which limits apply, which test methods are to be used, and which test specimens are to be employed. In Germany, the Radiation Protection in Medicine Directive and the relevant DIN 6855 standard require regular (in some cases daily) constancy testing in nuclear medicine. Test sources are used to check the response of probe measuring stations as well as of in vivo and in vitro measuring stations.
Before the tests begin, the background count rate and the setting of the energy window must be checked every working day, and the settings and the yield must be checked with reproducible geometry at least once a week using a suitable test source, e.g. 137Cs (DIN 6855-1). The reference values for the constancy test are determined during the acceptance test. Compact test specimens for medical X-ray imaging did not appear until 1982; before then, the patient himself often served as the object for producing X-ray test images. Prototypes of such an X-ray phantom with integrated structures were developed by Thomas Bronder at the Physikalisch-Technische Bundesanstalt.

A water phantom is a Plexiglas container filled with distilled water that is used as a substitute for living tissue in testing the electron linear accelerators used in radiation therapy. According to regulatory requirements, water phantom measurements must be performed approximately every three months to ensure that the radiation dose delivered by the treatment system is consistent with the treatment plan. The Alderson-Rando phantom, invented by Samuel W. Alderson (1914-2005), became the standard X-ray phantom. It was followed by the Alderson Radio Therapy (ART) phantom, which he patented in 1967. The ART phantom is cut horizontally into 2.5 cm thick slices. Each slice has holes sealed with bone-equivalent, soft-tissue-equivalent, or lung-equivalent pins that can be replaced by thermoluminescent dosimeters. Alderson is also known as the inventor of the crash test dummy.

Dose reconstruction with ESR spectroscopy of deciduous teeth
As a result of accidents or of the improper use and disposal of radiation sources, a significant number of people are exposed to varying degrees of radiation. Radioactivity and local dose measurements are not sufficient to fully assess the effects of such radiation. To determine an individual radiation dose retrospectively, measurements are made on teeth, i.e. on biological, endogenous material. Tooth enamel is particularly suitable for the detection of ionizing radiation because of its high mineral content (hydroxyapatite), as has been known since 1968 thanks to the work of John M. Brady, Norman O. Aarestad and Harold M. Swartz. The measurements are performed on deciduous teeth, preferably molars, using electron paramagnetic resonance spectroscopy (EPR, also ESR). The concentration of radicals generated by ionizing radiation is measured in the mineral part of the tooth; because these radicals are highly stable, the method can be used for the dosimetry of exposures that occurred long ago.

Dose reconstruction using biological dosimetry
Since about 1988, biological dosimetry has made it possible, alongside physical dosimetry, to reconstruct an individual's dose of ionizing radiation. This is especially important for unforeseen and accidental exposures that occur without physical dose monitoring. Biological markers, particularly cytogenetic markers in blood lymphocytes, are used for this purpose. Techniques for detecting radiation damage include the analysis of dicentric chromosomes after acute radiation exposure. Dicentric chromosomes result from the faulty repair of chromosome breaks in two chromosomes, so that the damaged chromosome has two centromeres instead of the single centromere of an undamaged chromosome. Symmetric translocations, detected by fluorescence in situ hybridization (FISH), are used after chronic or long-term exposure to radiation.
The micronucleus test and the premature chromosome condensation (PCC) test are available to measure acute exposure.

Measured variables and units
In principle, reducing the exposure of the human organism to ionizing radiation to zero is not possible, and perhaps not even sensible. The human organism has been exposed to natural radioactivity since its beginnings, and this radioactivity also triggers mutations (changes in the genetic material), which are a driver of the development of life on earth. The mutation-inducing effect of high-energy radiation was first demonstrated in 1927 by Hermann Joseph Muller (1890-1967).

Three years after its establishment in 1958, the United Nations Scientific Committee on the Effects of Atomic Radiation adopted the Linear No-Threshold (LNT) model - a linear dose-effect relationship without a threshold - largely at the instigation of the Soviet Union. The dose-response relationship measured at high doses was extrapolated linearly down to low doses. According to this model there is no threshold, since even the smallest amounts of ionizing radiation trigger some biological effect. The LNT model ignores not only possible radiation hormesis, but also the known ability of cells to repair genetic damage and the ability of the organism to remove damaged cells. Between 1963 and 1969, John W. Gofman (1918-2007) and Arthur R. Tamplin of the University of California, Berkeley, conducted research for the United States Atomic Energy Commission (USAEC, 1946-1974) on the relationship between radiation doses and cancer incidence. Their findings sparked a fierce controversy in the United States beginning in 1969. Starting in 1970, Ernest J. Sternglass, a radiologist at the University of Pittsburgh, published several studies describing the effect of radiation from nuclear tests and from the vicinity of nuclear power plants on infant mortality. In 1971, the USAEC reduced the maximum allowable radiation dose by a factor of 100. Nuclear technology was subsequently based on the principle of "As Low As Reasonably Achievable" (ALARA). This was a coherent principle as long as it was assumed that there is no threshold and that all doses are additive. In the meantime, a transition to "As High As Reasonably Safe" (AHARS) is increasingly being discussed. For the question of evacuation after accidents, a transition to AHARS appears absolutely necessary: in both the Chernobyl and the Fukushima case, hasty, poorly organized and poorly communicated evacuations caused psychological and physical harm to those affected - including documented deaths in the case of Fukushima. By some estimates, this harm is greater than what would have been expected had the evacuations not taken place. Researchers such as Geraldine Thomas therefore question such evacuations in principle and call for a transition to shelter-in-place wherever possible.

Absorbed dose and dose equivalent
The British physicist and radiologist Louis Harold Gray (1905-1965), the founder of radiobiology, introduced the unit rad (acronym for radiation absorbed dose) in the 1930s; in 1978 the unit was renamed the gray (Gy) in his honor. One gray is a mass-specific quantity and corresponds to an energy of one joule absorbed per kilogram of matter. Acute whole-body exposures in excess of four gray are usually fatal to humans. The different types of radiation ionize to different degrees; ionization is any process in which one or more electrons are removed from an atom or molecule, leaving the atom or molecule behind as a positively charged ion (cation).
Each type of radiation is therefore assigned a dimensionless weighting factor that expresses its biological effectiveness. For X-rays, gamma and beta radiation the factor is one; alpha radiation is assigned a factor of twenty; and for neutron radiation it lies between five and twenty, depending on the energy. Multiplying the absorbed dose in gray by the weighting factor gives the equivalent dose, expressed in sievert (Sv). The unit is named after the Swedish physician and physicist Rolf Maximilian Sievert (1896-1966). Sievert was the founder of radiation protection research and developed the Sievert chamber for measuring the intensity of X-rays in 1929. He founded the International Commission on Radiation Units and Measurements (ICRU) and later became chairman of the International Commission on Radiological Protection (ICRP). The ICRU and ICRP specify differently defined weighting factors: a quality factor for environmental measurements and a radiation weighting factor for body-related dose equivalent data. For the body, the relevant dose quantity is the organ equivalent dose (formerly "organ dose"), the dose equivalent averaged over an organ. Multiplied by organ-specific tissue weighting factors and summed over all organs, it yields the effective dose, which represents a dose balance for the whole body. For environmental measurements, the relevant quantity is the ambient dose equivalent or local dose; its increase per unit time is called the local dose rate. Even at very low effective doses, stochastic effects (genetic damage and cancer risk) are expected; at effective doses above 0.1 Sv, deterministic effects also occur (tissue damage, up to radiation sickness at very high doses). Correspondingly high radiation doses are now given only in units of gray. Natural radiation exposure in Germany, with an annual average effective dose of about 0.002 Sv, is well below this range.
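The chain from absorbed dose to effective dose described above can be written out in a few lines. The following Python sketch uses the radiation weighting factors named in the text and, purely for illustration, a small subset of the ICRP tissue weighting factors; a real calculation would sum over all organs with the complete, current factor tables.

    # Radiation weighting factors as named in the text (neutrons omitted,
    # since their factor is energy-dependent).
    RADIATION_WEIGHT = {"photon": 1.0, "beta": 1.0, "alpha": 20.0}

    # Illustrative subset of tissue weighting factors (ICRP 103 values);
    # a complete effective-dose calculation sums over all listed tissues.
    TISSUE_WEIGHT = {"lung": 0.12, "thyroid": 0.04, "skin": 0.01}

    def equivalent_dose_sv(absorbed_dose_gy, radiation):
        # H_T = w_R * D_T: absorbed dose (Gy) times radiation weighting factor.
        return RADIATION_WEIGHT[radiation] * absorbed_dose_gy

    def effective_dose_sv(organ_equivalent_doses_sv):
        # E = sum over organs T of w_T * H_T.
        return sum(TISSUE_WEIGHT[organ] * h
                   for organ, h in organ_equivalent_doses_sv.items())

    # Example: 1 mGy of alpha radiation absorbed in the lung.
    h_lung = equivalent_dose_sv(0.001, "alpha")   # 0.02 Sv organ equivalent dose
    print(effective_dose_sv({"lung": h_lung}))    # 0.0024 Sv effective dose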
Tolerance dose
In 1931, the U.S. Advisory Committee on X-Ray and Radium Protection (ACXRP, now the National Council on Radiation Protection and Measurements, NCRP), founded in 1929, published the results of a study on the so-called tolerance dose, on which a scientifically based radiation protection guideline was founded. Exposure limits were gradually lowered; in 1936 the tolerance dose was 0.1 R per day. The unit "R" (röntgen) from the CGS system of units has been obsolete since the end of 1985; since then, the SI unit of ion dose has been the coulomb per kilogram.

Relative biological effectiveness
After World War II, with the advent of nuclear energy and its associated dangers, the concept of a tolerance dose was replaced by that of a maximum permissible dose, and the concept of relative biological effectiveness was introduced. The limit was set in 1956 by the National Council on Radiation Protection and Measurements (NCRP) and the International Commission on Radiological Protection (ICRP) at 5 rem (50 mSv) per year for radiation workers and 0.5 rem per year for the general population. The unit rem, as a physical measure of radiation dose (from the English "roentgen equivalent in man"), was replaced by the unit sievert (Sv) in 1978. Prior to 1991, the equivalent dose was used both as a measure of dose and as the term for the body dose that determines the course and survival of radiation sickness; ICRP Publication 60 then introduced the radiation weighting factor. For examples of equivalent doses as body doses, see the overviews "Order of magnitude of the dose equivalent" and "Order of magnitude of the local dose rate".

Banana equivalent dose
The origin of the concept of the banana equivalent dose (BED) as a benchmark is unknown. In 1995, Gary Mansfield of the Lawrence Livermore National Laboratory found the banana equivalent dose to be very useful in explaining radiation risks to the public; it is not a formally used dose quantity. The banana equivalent dose is the dose of ionizing radiation to which a person is exposed by eating one banana. Bananas contain potassium, and natural potassium contains 0.0117% of the radioactive isotope 40K (potassium-40); it has a specific activity of 30,346 becquerels per kilogram, or about 30 becquerels per gram. The radiation dose from eating one banana is about 0.1 μSv. This reference dose is set to "1", making the banana equivalent dose a "unit of measurement" against which other radiation exposures can be compared. For example, the average daily total radiation exposure of a person is 100 banana equivalent doses. At 0.17 mSv per year, almost 10 percent of natural radiation exposure in Germany (an average of 2.1 mSv per year) is caused by the body's own (vital) potassium. The banana equivalent dose does not take into account the fact that no radioactive nuclide accumulates in the body through the consumption of potassium-containing foods: the potassium content of the body is in homeostasis and is kept constant.
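The arithmetic behind the banana equivalent dose is simple enough to write down directly. In the Python sketch below, the per-banana dose of 0.1 μSv is the reference value quoted above; the comparison doses are round numbers taken from the figures in this section.

    BANANA_DOSE_USV = 0.1  # reference dose per banana, in microsievert

    def in_banana_equivalents(dose_usv):
        # Express an arbitrary dose as a multiple of the per-banana dose.
        return dose_usv / BANANA_DOSE_USV

    # Average daily total exposure (~10 µSv) and the ~2.1 mSv average
    # annual natural exposure in Germany, expressed in BED:
    print(in_banana_equivalents(10))     # 100 BED per day
    print(in_banana_equivalents(2100))   # 21000 BED per year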
Disregard of radiation protection

Unethical radiation experiments
The Trinity test was the first nuclear weapon explosion, conducted as part of the US Manhattan Project. There were no warnings to residents about the fallout, nor information about shelters or possible evacuations. It was followed in 1946 by the tests in the Marshall Islands (Operation Crossroads), as recounted by the chemist Harold Carpenter Hodge (1904-1990), toxicologist of the Manhattan Project, in his 1947 lecture as president of the International Association for Dental Research. Hodge's reputation was severely damaged by historian Eileen Welsome's 1999 Pulitzer Prize-winning book The Plutonium Files: America's Secret Medical Experiments in the Cold War. She documents horrific human experiments, in which Hodge was involved, whose subjects did not know that they were being used as "guinea pigs" to test the safety limits of uranium and plutonium. The experiments on these unwitting subjects were continued by the United States Atomic Energy Commission (AEC) into the 1970s. The abuse of radiation continues to this day.

During the Cold War, ethically indefensible radiation experiments were conducted in the United States on uninformed human subjects to determine the detailed effects of radiation on human health. Between 1945 and 1947, 18 people were injected with plutonium by Manhattan Project doctors. In Nashville, pregnant women were given radioactive mixtures. In Cincinnati, about 200 patients were irradiated over a 15-year period. In Chicago, 102 people received injections of strontium and caesium solutions. In Massachusetts, 57 children with developmental disorders were fed oatmeal laced with radioactive markers. These radiation experiments were not stopped until 1993, under President Bill Clinton, and the injustice committed was never atoned for. For years, uranium hexafluoride caused radiation damage at a DuPont Company plant and among local residents; at times, the plant even deliberately released uranium hexafluoride in its heated, gaseous state into the surrounding area in order to study the effects of the radioactive and chemically aggressive gas.

Stasi border controls
Between 1978 and 1989, vehicles were screened with 137Cs gamma sources at 17 border crossings between the German Democratic Republic and the Federal Republic of Germany. Under the Transit Agreement, vehicles could only be searched if there was reasonable suspicion. For this reason, the Ministry for State Security (Stasi) installed and operated a secret radioactive screening technology, codenamed "Technik V", which was used as a rule to screen all transit travelers in order to detect "deserters from the Republic". Ordinary GDR customs officers were unaware of the secret screening technology and were subject to strict "entry regulations" designed to "protect" them as far as possible from radiation exposure. Lieutenant General Heinz Fiedler (1929-1993), the highest-ranking border officer of the MfS, was responsible for all radiation controls. On February 17, 1995, the German Radiation Protection Commission published a statement in which it said: "Even if we assume that individual persons stopped more frequently in the radiation field and that a fluoroscopy lasting up to three minutes increases the annual radiation exposure by one to a few mSv, this does not result in a dose that is harmful to health". By contrast, the designer of this type of border control calculated 15 nSv per crossing, while Lorenz of the former State Office for Radiation Protection and Nuclear Safety of the GDR arrived at a dose estimate of 1000 nSv, which was corrected to 50 nSv a few weeks later.

Radar systems
Radar equipment is used at airports, in airplanes, at missile sites, on tanks, and on ships. The radar technology commonly used in the 20th century produced X-rays as a technically unavoidable by-product in the high-voltage electronics of the equipment. In the 1960s and 1970s, German soldiers and technicians were largely unaware of the dangers, as were those in the GDR's National People's Army. The problem had been known internationally since the 1950s, and to the German Armed Forces since at least 1958, yet no radiation protection measures, such as the wearing of lead aprons, were taken. Until about the mid-1980s, radiation shielding was inadequate, especially around pulsed switching tubes. Particularly affected were maintenance technicians (radar mechanics), who were exposed to the X-ray generating components for hours without any protection; the permissible annual limit could be exceeded after just 3 minutes. Warning notices and protective measures were not introduced in the German Navy until 1976, and generally not until the early 1980s. As late as the 1990s, the German Armed Forces denied any connection between radar equipment and cancer or genetic damage. The number of victims amounted to several thousand. The connection was later acknowledged by the German Armed Forces, and in many cases a supplementary pension was paid. In 2012, a foundation was set up to provide unbureaucratic compensation for the victims.

Radiation protection crimes

National Socialism
The harmful effects of X-rays were well known during the National Socialist era: the function of the gonads (ovaries or testicles) can be destroyed by ionizing radiation, leading to infertility. In July 1942, Heinrich Himmler (1900-1945) decided to have forced sterilization experiments conducted at the Auschwitz-Birkenau concentration camp; they were carried out by Horst Schumann (1906-1983), previously a doctor in Aktion T4.
Each test victim had to stand between two X-ray machines, which were arranged so that there was just enough space for a person between them. Opposite the X-ray machines was a booth with lead walls and a small window, from which Schumann could direct X-rays at the victims' sexual organs without endangering himself. Human radiation castration experiments were also conducted in concentration camps under the direction of Viktor Brack (1904-1948). Under the "Law for the Prevention of Hereditarily Diseased Offspring", people were also subjected to radiation castration during interrogations, often without their knowledge. Approximately 150 radiologists from hospitals throughout Germany participated in the forced castration of approximately 7,200 people using X-rays or radium.

Polonium murder
On November 23, 2006, Alexander Alexanderovich Litvinenko (1962-2006) was murdered, under circumstances that remain unexplained, by means of radiation sickness caused by polonium. Polonium poisoning was also briefly suspected in the case of Yasser Arafat (1929-2004), who died in 2004.

Radiation offenses
Under German criminal law, the misuse of ionizing radiation is a radiation offense: the use of ionizing radiation to harm persons or property is punishable. Since 1998, the relevant provisions have been contained in the Criminal Code (previously § 311a StGB, old version); they go back to § 41 AtG, old version. In the Austrian Criminal Code, the relevant offenses are defined in the seventh section, "Criminal acts dangerous to the public" and "Criminal acts against the environment". In Switzerland, endangerment by nuclear energy, radioactive substances or ionizing radiation is punishable under Art. 326 of the Swiss Criminal Code, and disregard of safety regulations under Chapter 9 of the Nuclear Energy Act of 21 March 2003.

Radiation protection for less energetic types of radiation
Originally, the term radiation protection referred only to ionizing radiation. Today it also covers non-ionizing radiation, which falls within the remit of the Federal Office for Radiation Protection (Germany), the Radiation Protection Division of the Federal Office of Public Health (Switzerland) and the Ministry of Climate Action and Energy (Austria). An international survey project collected, evaluated and compared data on the legal situation regarding electric, magnetic and electromagnetic fields (EMF) and optical radiation in all European countries (47 countries plus Germany) and in major non-European countries (China, India, Australia, Japan, Canada, New Zealand and the USA). The results varied widely and in some cases deviated from the recommendations of the International Commission on Non-Ionizing Radiation Protection (ICNIRP).

UV light
For many centuries, the Inuit have used snow goggles with narrow slits, carved from seal bones or reindeer antlers, to protect against snow blindness (photokeratitis). In the 1960s, Australia - particularly Queensland - launched the first awareness campaign on the dangers of ultraviolet (UV) radiation in the spirit of primary prevention. In the 1980s, many countries in Europe and overseas initiated similar UV protection campaigns. UV radiation has a thermal effect on the skin and eyes and can lead to skin cancer (malignant melanoma) and to eye inflammation or cataracts. To protect the skin from harmful UV radiation - for example in photodermatosis, acne aestivalis, actinic keratosis or urticaria solaris - normal clothing, special UV protective clothing (protection factor 40-50) and sunscreen with a high sun protection factor can be used.
The Australian/New Zealand standard (AS/NZS 4399) of 1996 measures new textile materials in an unstretched, dry state, both for the manufacture of protective clothing worn while bathing, especially by children, and for shading textiles (sunshades, awnings). The UV Standard 801, by contrast, assumes the maximum radiation intensity of the solar spectrum in Melbourne, Australia, on January 1 (the height of the Australian summer), the most sensitive skin type of the wearer, and the garment under wearing conditions. Since the solar spectrum in the northern hemisphere differs from that in Australia, the measurement method of the European standard EN 13758-1 is based on the solar spectrum of Albuquerque (New Mexico, USA), which corresponds approximately to that of southern Europe. The eyes can be protected with UV-filtering sunglasses or with special goggles that also shield the sides against snow blindness. A defensive reaction of the skin itself is the formation of a light callus, the skin's own sun protection, which corresponds to a protection factor of about 5; at the same time, the production of brown skin pigment (melanin) in the corresponding cells (melanocytes) is stimulated.

A solar control film is usually a film made of polyethylene terephthalate (PET) that is applied to windows to reduce the light and heat of solar radiation; such films also filter UV-A and UV-B radiation. Polyethylene terephthalate goes back to a 1941 invention by the two Englishmen John Rex Whinfield (1902-1966) and James Tennant Dickson.

The fact that UV-B radiation (Dorno radiation, after Carl Dorno (1865-1942)) is a proven carcinogen but is also required for the body's own synthesis of vitamin D3 (cholecalciferol) leads to internationally conflicting recommendations on health-promoting UV exposure. In 2014, drawing on the scientific evidence of recent decades, 20 scientific authorities, professional societies and associations from the fields of radiation protection, health, risk assessment, medicine and nutrition published a recommendation on "UV exposure for the formation of the body's own vitamin D" - the first interdisciplinary recommendation on this topic worldwide.

Using a solarium for the first time at a young age (under 35) almost doubles the risk of developing malignant melanoma. In Germany, the use of tanning beds by minors has been prohibited by law since March 2010. Since August 1, 2012, sunbeds may not exceed a maximum irradiance of 0.3 watts per square meter of skin and must be labeled accordingly. This irradiance limit corresponds to the highest UV irradiance that can be measured on Earth at 12 noon under a cloudless sky at the equator. For medical applications, the minimum erythema dose (MED) is determined: the MED is defined as the lowest radiation dose that produces a just visible erythema, read 24 hours after the test irradiation. The test is performed with the type of lamp intended for the therapy by applying a so-called light staircase to skin that is not normally exposed to light (for example, on the buttocks).

Sun lamp
Richard Küch (1860-1915) succeeded in 1890 in melting quartz glass - the basis for UV radiation sources - for the first time, and founded the Heraeus Quarzschmelze. In 1904 he developed the first quartz lamp (sun lamp) for generating UV radiation, laying the foundation for this form of light therapy. Despite the problems of dosage, doctors made increasing use of quartz lamps in the early 20th century.
Internists and dermatologists were among the most eager testers. After the successful treatment of skin tuberculosis, internal medicine moved on to treating tuberculous pleurisy, glandular tuberculosis and intestinal tuberculosis. Doctors also tested the effect of quartz lamps on other infectious diseases such as syphilis, on metabolic diseases, cardiovascular diseases, nerve pain such as sciatica, and nervous disorders such as neurasthenia and hysteria. In dermatology, fungal diseases, ulcers and wounds, psoriasis, acne, freckles and hair loss were treated with quartz lamps; in gynecology, abdominal disorders. Rejuvenation specialists used artificial high-altitude sunlight to stimulate gonadal activity and treated infertility, impotentia generandi (the inability to procreate) and lack of sexual desire by irradiating the genitals. For this purpose, Philipp Keller (1891-1973) developed an erythema dosimeter with which he measured the amount of radiation not in Finsen units (UV radiation with a wavelength λ of 296.7 nm and an irradiance E of 10^-5 W/m2) but in "Höhensonne" units (HSE). Around 1930 it was the only such instrument in use, but it never gained wide acceptance in medical circles.

The treatment of acne with ultraviolet radiation remains controversial. Although UV radiation can have an antibacterial effect, it can also induce proliferative hyperkeratosis, which can lead to the formation of comedones ("blackheads"); phototoxic effects may also occur. Moreover, it is carcinogenic and accelerates skin aging. UV therapy is therefore increasingly being abandoned in favor of photodynamic therapy.

Laser
The ruby laser, developed in 1960 by Theodore Maiman (1927-2007) on the basis of the ruby maser, was the first laser. The dangers of lasers, especially to the eyes and the skin, were recognized soon afterwards, a consequence of the beam's shallow penetration depth and concentrated energy deposition. Lasers now have numerous applications in technology and research as well as in everyday life, from simple laser pointers to distance measuring devices, cutting and welding tools, the playback of optical storage media such as CDs, DVDs and Blu-ray discs, communications, and laser scalpels and other devices using laser light in everyday medical practice. The German Radiation Protection Commission requires that laser applications on human skin be performed only by a specially trained physician. Lasers are also used for show effects in discotheques and at events. Because of the properties of their radiation and their sometimes extremely concentrated electromagnetic power, lasers can cause biological damage; they must therefore be labeled with standardized warnings according to their laser class. The classification follows the DIN standard EN 60825-1, which distinguishes ranges of wavelength and exposure time that lead to characteristic injuries, together with injury thresholds for power and energy density.
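Since the classification just described hinges on power and energy density, a toy calculation shows why even milliwatt beams matter for the eye: the irradiance follows from beam power divided by spot area, and the eye additionally focuses a collimated beam onto a tiny retinal spot. The 1 mW power, 7 mm pupil and 25 µm focal spot in this Python sketch are assumed illustrative values, not limits from EN 60825-1.

    import math

    def irradiance_w_per_m2(power_w, beam_diameter_m):
        # Irradiance E = P / A for a circular beam cross-section.
        area = math.pi * (beam_diameter_m / 2) ** 2
        return power_w / area

    # Assumed example: a 1 mW beam entering a fully dilated 7 mm pupil...
    print(f"{irradiance_w_per_m2(1e-3, 7e-3):.0f} W/m^2")          # ~26 W/m^2 at the cornea
    # ...which the eye then focuses to a spot of tens of micrometers:
    print(f"{irradiance_w_per_m2(1e-3, 25e-6):.2e} W/m^2")         # ~2e6 W/m^2 on the retina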
The goal of soft laser treatment is to achieve biostimulation with low energy densities.
The Commission on Radiological Protection strongly recommends that the possession and purchase of class 3B and 4 laser pointers be regulated by law in order to prevent misuse, a response to the increase in dangerous dazzle attacks with high-power laser pointers. The victims include not only pilots but also truck and car drivers, train drivers, soccer players, referees, and even spectators at soccer games. Such glare can lead to serious accidents and, in the case of pilots and truck drivers, to occupational disability due to eye damage. The first accident prevention regulation was published on April 1, 1988 as BGV B2, followed on January 1, 1997 by DGUV Regulation 11 of the German Social Accident Insurance. Between January and mid-September 2010, the German Federal Aviation Office registered 229 dazzle attacks on helicopters and airplanes of German airlines nationwide. On October 18, 2017, the perpetrator of a dazzle attack on a federal police helicopter was sentenced to one year and six months in prison without probation.
Electromagnetic radiation exposure
Electrosmog colloquially refers to the exposure of humans and the environment to electric, magnetic and electromagnetic fields, some of which are believed to have undesirable biological effects. Electromagnetic environmental compatibility refers to the effects of such fields on living organisms, some of whom regard themselves as electrosensitive. Fears of such effects have existed since the beginning of the technology's use in the mid-19th century. In 1890, for example, officials of the Royal General Directorate in Bavaria were forbidden to attend the opening ceremony of Germany's first alternating current power plant, the Reichenhall Electricity Works, or to enter the machine room. With the establishment of the first radio telegraphy and its telegraph stations, the U.S. newspaper The Atlanta Constitution reported in April 1911 on the potential dangers of radio telegraph waves, which, in addition to "tooth loss", were said to cause hair loss and make people "crazy" over time; full-body protection was recommended as a preventive measure. During the second half of the 20th century, other sources of electromagnetic fields became the focus of health concerns, such as power lines, photovoltaic systems, microwave ovens, computer and television screens, security devices, radar equipment, and more recently cordless telephones (DECT), cell phones and their base stations, energy-saving lamps, and Bluetooth connections. Electrified railroad lines, tram overhead lines and subway tracks are also strong sources of electrosmog. In 1996, the World Health Organization (WHO) launched the EMF (ElectroMagnetic Fields) Project to bring together current knowledge and the available resources of key international and national organizations and scientific institutions on electromagnetic fields. The German Federal Office for Radiation Protection (BfS) published a corresponding precautionary recommendation in 2006. Since 2016, the EMF Guideline 2016 of EUROPAEM (European Academy For Environmental Medicine) on the prevention, diagnosis and treatment of EMF-related complaints and diseases has applied.
Microwaves
A microwave oven, invented in 1950 by the U.S. researcher Percy Spencer (1894-1970), is used to heat food quickly by means of microwave radiation at a frequency of 2.45 gigahertz. In an intact microwave oven, leakage radiation is relatively low thanks to the shielding of the cooking chamber.
An emission limit of five milliwatts per square centimeter (equivalent to 50 watts per square meter), measured at a distance of five centimeters from the surface of the appliance, is specified for this leakage (radiation density or power flux density). Children should not stand directly in front of or next to the appliance while food is being prepared, and the Federal Office for Radiation Protection also lists pregnant women as particularly at risk.
In microwave therapy, electromagnetic waves are generated for heat treatment. The penetration depth and energy distribution vary with the frequency used (short waves, ultra-short waves, microwaves). To achieve greater penetration, pulsed microwaves are used, each pulse delivering a high energy to the tissue, while the pause between pulses ensures that no burns occur. Metal implants and pacemakers are contraindications.
Cell phones
The discussion about possible health risks from mobile phone radiation remains controversial; no conclusive findings are available to date. The German Federal Office for Radiation Protection recommends, among other things, mobile phones with a low SAR (specific absorption rate) and the use of headsets or hands-free devices to keep the phone away from the head. There is some discussion that mobile phone radiation may increase the incidence of acoustic neuroma, a benign tumor arising from the vestibulocochlear nerve; exposure should therefore be kept low as a precaution. In everyday life, a mobile phone transmits at maximum power only in exceptional cases: as soon as it is in a cell where maximum power is not needed, the base station instructs it to reduce its transmission power. Electrosmog or cell phone radiation filters built into cell phones are supposed to protect against radiation, but their effect is doubtful from the point of view of electromagnetic environmental compatibility, because a filter that attenuates the signal forces the phone to increase its transmission power disproportionately in order to maintain the necessary link. The same applies to use in a car without an external antenna, where the radiation can only escape through the windows, and in areas with poor network coverage. Since 2004, radio network repeaters have been developed for mobile phone networks (GSM, UMTS, Tetrapol) that can amplify the signal of a mobile phone cell inside shielded buildings; this reduces the SAR value of the phone during calls. The SAR value of a WLAN router is only about a tenth of that of a cell phone, and it drops by a further 80% at a distance of just one meter. A router can also be set to switch off when not in use, for example at night.
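The benefit of distance follows from the inverse-square law for far-field radiation: power density falls with the square of the distance from the antenna. A minimal Python sketch, assuming an idealized isotropic transmitter and a rough transmit power of 0.1 W (an assumed illustrative value, not measured device data):

import math

def power_density_w_per_m2(tx_power_w, distance_m):
    # Free-space power density S = P / (4 * pi * r^2) of an isotropic radiator
    return tx_power_w / (4.0 * math.pi * distance_m ** 2)

for r in (0.05, 0.5, 1.0):
    print(f"{r:4.2f} m: {power_density_w_per_m2(0.1, r):.4f} W/m^2")
# 0.05 m: 3.1831 W/m^2, 0.50 m: 0.0318 W/m^2, 1.00 m: 0.0080 W/m^2 -
# doubling the distance cuts the power density to a quarter.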
Electric fields
High-voltage power lines
Until now, electrical energy has been transported from the power plant to the consumer almost exclusively via high-voltage lines carrying alternating current at a frequency of 50 hertz. As part of the energy transition, high-voltage direct current (HVDC) transmission systems are also planned in Germany. Since the amendment of the 26th Federal Immission Control Ordinance (BImSchV) in 2013, emissions from HVDC systems have also been regulated by law. The limit is set to prevent interference with electronic implants caused by static magnetic fields; no limit has been set for static electric fields.
Domestic electrical installation
Mains disconnect switches (demand switches) are available to reduce the electric fields and, when current flows, the magnetic fields produced by residential electrical installations. In flush-mounted installations, only a small part of the electric field escapes from the wall anyway. A mains disconnect switch automatically disconnects the relevant line as long as no electrical load is switched on; as soon as a load is switched on, the mains voltage is restored. Such switches were introduced in 1973 and have been continuously improved over the decades; in 1990, for example, it became possible to also disconnect the PEN conductor (formerly known as the neutral conductor). They can be installed in several circuits, preferably those supplying bedrooms, but they only switch off when no continuously operating loads - air conditioners, fans, humidifiers, electric alarm clocks, night lights, standby devices, alarm systems, chargers and the like - are connected. Instead of the mains voltage, a low monitoring voltage (2-12 volts) is applied to the line, which is used to detect when a load is switched on. Rooms can also be shielded with copper wallpaper or special wall paints containing metal, applying the principle of the Faraday cage.
Body scanner
Since about 2005, body scanners have been used, primarily at airports, for security screening of passengers. Passive scanners detect the natural radiation emitted by a person's body and use it to locate objects worn or concealed on the body. Active systems additionally irradiate the person and analyze the backscatter to improve detection. A distinction is made between body scanners that use ionizing radiation (usually X-rays) and those that use non-ionizing radiation (terahertz radiation). The integrated components operating in the lower terahertz range emit less than 1 mW (0 dBm), so no health effects are expected; there are, however, conflicting studies from 2009 on whether terahertz radiation can cause detectable genetic damage. In the U.S., backscatter X-ray scanners make up the majority of the devices used. Scientists fear that a future increase in cancer cases could pose a greater threat to the life and limb of passengers than terrorism itself. It is not apparent to the passenger whether the body scanners at a particular checkpoint use only terahertz radiation or also X-rays. According to the Federal Office for Radiation Protection, the few available results from investigations in the frequency range of active whole-body scanners that work with millimeter-wave or terahertz radiation do not yet allow a conclusive assessment from a radiation protection perspective (as of May 24, 2017).
In the vicinity of such equipment, where employees or other third parties may be present, the limit of the permissible annual dose for an individual member of the public of one millisievert (1 mSv, including pregnant women and children) is not exceeded even in the case of permanent presence. For X-ray scanners for hand luggage, it is not necessary to set up a radiation protection area under Section 19 of the X-ray Ordinance (RöV), as the radiation exposure of a passenger during a hand-luggage check does not exceed 0.2 microsievert (μSv) even under unfavorable assumptions. For this reason, employees involved in baggage screening are not considered occupationally exposed to radiation within the meaning of Section 31 of the X-ray Ordinance and therefore do not have to wear a dosimeter.
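A quick arithmetic check puts these figures in proportion: at 0.2 μSv per check, thousands of checks a year would be needed to reach the 1 mSv annual limit for a member of the public. A short Python sketch using only the values quoted above:

dose_per_check_usv = 0.2     # upper-bound exposure per hand-luggage check
annual_limit_usv = 1000.0    # 1 mSv public annual limit = 1000 microsievert
print(annual_limit_usv / dose_per_check_usv)  # -> 5000.0 checks per year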
Radiation protection for electromedical treatment procedures
Electromagnetic alternating fields have been used in medicine since 1764, mainly for heating and for increasing blood circulation (diathermy, short-wave therapy) to improve wound and bone healing. The relevant radiation protection is regulated by the Medical Devices Act together with the Medical Devices Operator Ordinance. The Medical Devices Ordinance came into force in Germany on January 14, 1985 and divided the medical devices known at that time into groups according to their degree of risk to the patient; it regulated the handling of medical devices until January 1, 2002, when it was replaced by the Medical Devices Act.
When ionizing radiation is used in medicine, the benefit must outweigh the potential risk of tissue damage (justifiable indication), which is why radiation protection is of great importance. Once an application is deemed suitable, its design should be optimized according to the ALARA principle (As Low As Reasonably Achievable). Since 1996, the European ALARA Network (EAN), founded by the European Commission, has worked on the further implementation of the ALARA principle in radiation protection.
Infrared radiation
Infrared radiation, discovered around 1800 by the German-British astronomer, engineer and musician Friedrich Wilhelm Herschel (1738-1822), primarily produces heat. If the increase in body temperature and the duration of exposure exceed critical limits, heat damage and even heat stroke can result. Because the available data are still unsatisfactory and in part contradictory, clear radiation protection recommendations for infrared radiation cannot yet be given. However, the findings on the acceleration of skin aging by infrared radiation are sufficient to describe the use of infrared radiation against wrinkles as counterproductive. In 2011, the Institute for Occupational Safety and Health of the German Social Accident Insurance (IFA) established exposure limit values to protect the skin from burns caused by thermal radiation. The IFA recommends that, in addition to the limit specified in EU Directive 2006/25/EC for protecting the skin from burns at exposure times of up to 10 seconds, a limit for exposure times between 10 and 1000 seconds be applied, and that all radiation components in the wavelength range from 380 to 20000 nm be considered when comparing with the limit values.
Radiation protection regulations
First radiation protection regulations
A leaflet published by the German Radiological Society (DRG) in 1913 was the first systematic approach to radiation protection; the physicist and co-founder of the society, Bernhard Walter (1861-1950), was one of the pioneers of radiation protection. The International Commission on Radiological Protection (ICRP) and the International Commission on Radiation Units and Measurements (ICRU) were established at the Second International Congress of Radiology in Stockholm in 1928. In the same year, the first international radiation protection recommendations were adopted, and each country represented was asked to develop a coordinated radiation control program. The United States representative, Lauriston Taylor of the US National Bureau of Standards (NBS), formed the Advisory Committee on X-Ray and Radium Protection, later renamed the National Committee on Radiation Protection and Measurements (NCRP). The NCRP received a Congressional charter in 1964 and continues to develop guidelines to protect individuals and the public from excessive radiation. Numerous other radiation protection organizations were established in the years that followed.
Radiation protection monitoring
People in professions such as pilots, nuclear medicine physicians and nuclear power plant workers are regularly exposed to ionizing radiation. In Germany, over 400,000 workers undergo occupational radiation monitoring to safeguard against the harmful effects of radiation, and approximately 70,000 individuals across various industries hold a radiation pass (distinct from an X-ray pass - see below). Anyone who may receive an annual effective dose of more than 1 millisievert at work is required to undergo radiation protection monitoring; for comparison, the average effective dose from natural radiation in Germany is 2.1 millisieverts per year. Radiation dose is measured with dosimeters, and the occupational dose limit is 20 millisieverts per year.
Monitoring also applies to buildings, plant components and (radioactive) substances. These can be released from the scope of the Radiation Protection Ordinance by a special administrative act, the clearance procedure. For clearance it must be ensured that the resulting radiation exposure of an individual member of the public does not exceed 10 μSv per calendar year and that the resulting collective dose does not exceed 1 person-sievert per year.
Radiation protection register
Since December 31, 2018, all occupationally exposed persons and holders of radiation passes require a radiation protection register number (SSR number, SSRN), a unique personal identification number. The SSR number facilitates and improves the allocation and reconciliation of individual dose values from occupational radiation exposure in the radiation protection register; it replaces the former radiation pass number and is used to monitor dose limits. Companies are obliged to deploy their employees in such a way that the radiation dose they receive does not exceed the limit of 20 millisieverts per calendar year. In Germany, about 440,000 people were classified as occupationally exposed to radiation in 2016. In the case of remediation and other measures to prevent and reduce exposure at radioactively contaminated sites, the person who carries out the measures, or has them carried out by workers under his supervision, must assess the body dose of the workers before the measures begin.
Applications for SSR numbers had to be submitted to the Federal Office for Radiation Protection (BfS) by March 31, 2019 for all employees already under surveillance; the application and the transmission of the necessary data must be ensured by the radiation protection officer or the otherwise responsible person. The SSR numbers are then available for further use in routine communication with monitoring stations and radiation pass authorities. The SSR number is derived from the social security number and personal data using non-reversible encryption, and the data are transmitted online. Approximately 420,000 persons are monitored for radiation protection in Germany (as of 2019). Emergency responders (including volunteers) who are not occupationally exposed persons within the meaning of the Radiation Protection Act also require an SSR number retrospectively, i.e. after an operation in which they were exposed to radiation above the limits specified in the Radiation Protection Ordinance, since all relevant exposures must be recorded in the radiation protection register.
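The bookkeeping behind this monitoring reduces to summing dosimeter readings and comparing the total with the two thresholds named above (the 1 mSv monitoring trigger and the 20 mSv occupational limit). A minimal Python sketch with invented readings for illustration:

ANNUAL_LIMIT_MSV = 20.0       # occupational limit per calendar year
MONITORING_TRIGGER_MSV = 1.0  # above this, monitoring is mandatory

def assess_annual_dose(monthly_doses_msv):
    total = sum(monthly_doses_msv)
    if total > ANNUAL_LIMIT_MSV:
        return f"{total:.1f} mSv: occupational annual limit exceeded"
    if total > MONITORING_TRIGGER_MSV:
        return f"{total:.1f} mSv: subject to radiation protection monitoring"
    return f"{total:.1f} mSv: below the monitoring threshold"

print(assess_annual_dose([0.1] * 6))   # 0.6 mSv: below the monitoring threshold
print(assess_annual_dose([1.8] * 12))  # 21.6 mSv: occupational annual limit exceeded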
Radiation protection areas
Radiation protection areas are spatial areas in which people can receive specified body doses during their stay or in which a specified local dose rate is exceeded. They are defined in § 36 of the Radiation Protection Ordinance and in §§ 19 and 20 of the X-Ray Ordinance. Depending on the hazard, the Radiation Protection Ordinance divides radiation protection areas into restricted areas (local dose rate ≥ 3 mSv/hour), control areas (effective dose > 6 mSv/year) and monitoring areas (effective dose > 1 mSv/year).
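This three-tier scheme is a straightforward threshold classification, as the following Python sketch shows (thresholds taken from the ordinance values quoted above; the function itself is only an illustration):

def classify_area(local_dose_rate_msv_per_h, annual_effective_dose_msv):
    if local_dose_rate_msv_per_h >= 3.0:   # >= 3 mSv/hour local dose rate
        return "restricted area"
    if annual_effective_dose_msv > 6.0:    # > 6 mSv effective dose per year
        return "control area"
    if annual_effective_dose_msv > 1.0:    # > 1 mSv effective dose per year
        return "monitoring area"
    return "not a radiation protection area"

print(classify_area(0.0005, 2.4))  # -> monitoring area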
Radiological emergency response projects
Early warning systems
Germany, Austria and Switzerland, among many other countries, operate early warning systems to protect the population. The local dose rate measurement network (ODL measurement network) is a radioactivity measurement system operated by the German Federal Office for Radiation Protection that determines the local dose rate at each measurement site. In Austria, the Radiation Early Warning System, a measurement and reporting system established in the late 1970s, provides early detection of elevated levels of ionizing radiation in the country so that the necessary measures can be taken; the readings are automatically sent to the central office at the ministry, where they can be accessed by the relevant departments, such as the Federal Warning Center or the warning centers of the federal states. NADAM (Network for Automatic Dose Alerting and Measurement) is the gamma radiation monitoring network of the Swiss National Emergency Operations Center; it is complemented by the MADUK stations (Monitoring Network for Automatic Dose Rate Monitoring in the Environment of Nuclear Power Plants) of the Swiss Federal Nuclear Safety Inspectorate (ENSI).
Project NERIS-TP
From 2011 to 2014, the NERIS-TP project aimed to discuss the lessons learned from the European EURANOS project on nuclear emergency response with all relevant stakeholders.
Project PREPARE
The European PREPARE project aims to fill gaps in nuclear and radiological emergency preparedness identified after the Fukushima accident: it reviews emergency response concepts for long-lasting releases, addresses questions of measurement methods and food safety in the case of transboundary contamination, and closes gaps in decision support systems (source term reconstruction, improved dispersion modeling, consideration of aquatic dispersion pathways in European river systems).
Project IMIS
Environmental radioactivity has been monitored in Germany since the 1950s. Until 1986, this was done by various authorities that did not coordinate with each other. Following the confusion during the Chernobyl reactor disaster in April 1986, measurement activities were pooled in the IMIS (Integrated Measurement and Information System) project, an environmental information system for monitoring radioactivity in Germany. Previously, the measuring equipment had been attached to the civil defense warning offices under the name WADIS ("Warning Service Information System").
Project CONCERT
The aim of the CONCERT project (European Joint Programme for the Integration of Radiation Protection Research) is to establish, by 2018, a joint European program for radiation protection research, based on the current strategic research agendas of the European research platforms MELODI (radiation effects and radiation risks), ALLIANCE (radioecology), NERIS (nuclear and radiological emergency response), EURADOS (radiation dosimetry) and EURAMED (medical radiation protection).
Project REWARD
The REWARD project (Real-time wide area radiation surveillance system) was established to address the threats of nuclear terrorism, missing radioactive sources, radioactive contamination and nuclear accidents. The consortium developed a mobile system for real-time, wide-area radiation monitoring based on the integration of new miniaturized solid-state sensors: a cadmium zinc telluride (CdZnTe) detector for gamma radiation and a high-efficiency neutron detector based on novel silicon technologies. The gamma and neutron detectors are integrated into a single monitoring device called a tag. The sensor unit includes a wireless communication interface to transmit data remotely to a monitoring base station, and a GPS system is used to determine the tag's position.
Task force for all types of nuclear emergencies
The Nuclear Emergency Support Team (NEST) is a US program of the National Nuclear Security Administration (NNSA) of the United States Department of Energy for all types of nuclear emergencies; it also serves as a counter-terrorism unit that responds to incidents involving radioactive materials or nuclear weapons in US possession abroad. It was founded in 1974/75 under US President Gerald Ford and renamed the Nuclear Emergency Support Team in 2002. In 1988, a secret agreement of 1976 between the USA and the Federal Republic of Germany became known that provides for the deployment of NEST in the Federal Republic. In Germany, a similar unit, the Central Federal Support Group for Serious Cases of Nuclear-Specific Emergency Response (ZUB), has existed since 2003.
Legal basis
As early as 1905, the Frenchman Viktor Hennecart called for special legislation to regulate the use of X-rays. In England, Sidney Russ (1879-1963) suggested to the British Roentgen Society in 1915 that it develop its own set of safety standards, which it did in July 1921 with the formation of the British X-Ray and Radium Protection Committee. In the United States, the American X-Ray Society developed its own guidelines in 1922. In the German Reich, a special committee of the German X-Ray Society under Franz Maximilian Groedel (1881-1951), Hans Liniger (1863-1933) and Heinz Lossen (1893-1967) formulated the first guidelines after the First World War. In 1953, the employers' liability insurance associations issued the accident prevention regulation "Use of X-rays in medical facilities" on the legal basis of § 848a of the Reich Insurance Code. In the GDR, Occupational Safety and Health Regulation (ASAO) 950 was in effect from 1954 to 1971, when it was replaced by ASAO 980 on April 1, 1971.
EURATOM
The European Atomic Energy Community (EURATOM) was founded on March 25, 1957 by the Treaty of Rome between France, Italy, the Benelux countries and the Federal Republic of Germany, and remains almost unchanged to this day. Chapter 3 of the Euratom Treaty regulates measures to protect the health of the population.
Article 35 requires facilities for the continuous monitoring of soil, air and water for radioactivity. As a result, monitoring networks have been set up in all member states, and the collected data are sent to the EU's central database (EURDEP, European Radiological Data Exchange Platform). The platform is part of the EU's ECURIE system for the exchange of information in the event of radiological emergencies and became operational in 1995; Switzerland also participates in this information system.
Legal basis in Germany
In Germany, the first X-ray regulation (RGBl. I p. 88) was issued in 1941 and originally applied to non-medical companies. The first medical regulations were issued in October 1953 by the Main Association of Industrial Employers' Liability Insurance Associations as accident prevention regulations under the Reich Insurance Code. Basic standards for radiation protection were introduced by directives of the European Atomic Energy Community (EURATOM) on February 2, 1959. The Atomic Energy Act of December 23, 1959 is the national legal basis for all radiation protection legislation in the Federal Republic of Germany (West), implemented by the Radiation Protection Ordinance of June 24, 1960 (covering only radioactive substances), the Radiation Protection Ordinance of July 18, 1964 (for the medical sector) and the X-ray Ordinance of March 1, 1973. The purpose of radiation protection is formulated in § 1: life, health and property are to be protected from the dangers of nuclear energy and the harmful effects of ionizing radiation, and damage caused by nuclear energy or ionizing radiation is to be compensated.
The Radiation Protection Ordinance sets dose limits for the general population and for occupationally exposed persons. In general, any use of ionizing radiation must be justified, and radiation exposure must be kept as low as possible even below the limit values. To this end, physicians, dentists and veterinarians, for example, must prove every five years - under Section 18a (2) of the X-ray Ordinance in the version of April 30, 2003 - that their specialist knowledge of radiation protection has been updated, completing a full-day course with a final examination. Under the technical expertise guideline (R3) to the X-ray Ordinance, specialist radiation protection knowledge is also required of persons who work with baggage screening equipment, industrial measuring equipment and interference emitters. Since 2019, the regulatory areas of the previous X-ray and radiation protection ordinances have been merged into the amended Radiation Protection Ordinance.
The Radiation Protection Commission (SSK) was founded in 1974 as an advisory body to the Federal Ministry of the Interior. It emerged from Commission IV "Radiation Protection" of the German Atomic Energy Commission, which had been founded on January 26, 1956. After the Chernobyl nuclear disaster of 1986, the Federal Ministry for the Environment, Nature Conservation and Nuclear Safety was established in the Federal Republic of Germany, primarily in response to the perceived lack of coordination in the political handling of the disaster and its aftermath. On December 11, 1986, the German Bundestag passed the Precautionary Radiation Protection Act (StrVG) to protect the population, to monitor radioactivity in the environment, and to minimize human radiation exposure and radioactive contamination of the environment in the event of radioactive accidents or incidents.
Among many other measures, contaminated food was withdrawn from the market on a large scale, parents were strongly advised not to let their children play in sandboxes, and some of the contaminated sand was replaced. In 1989, the Federal Office for Radiation Protection was established within the portfolio of the Ministry of the Environment.
The last revision of the X-Ray Ordinance was issued on January 8, 1987. As part of a comprehensive modernization of German radiation protection law, largely based on Directive 2013/59/Euratom, the provisions of the X-Ray Ordinance have since been incorporated into the revised Radiation Protection Ordinance. On April 30, 2003, a new precautionary radiation protection law was promulgated to implement two EU directives on the health protection of persons against the dangers of ionizing radiation during medical exposure. The protection of workers from optical radiation (infrared radiation (IR), visible light (VIS) and ultraviolet radiation (UV)), which belongs to the non-ionizing radiation, is regulated by the Ordinance on the Protection of Workers from Artificial Optical Radiation of July 19, 2010, based on EU Directive 2006/25/EC of April 27, 2006. On March 1, 2010, the Act on the Protection of Humans from Non-Ionizing Radiation (NiSG, BGBl. I p. 2433) came into force, under which the use of sunbeds by minors has been prohibited since August 4, 2009. A new Radiation Protection Act came into force in Germany on October 1, 2017.
In Germany, a radiation protection officer directs and supervises activities to ensure radiation protection when handling radioactive materials or ionizing radiation; the officer's duties are described in the Radiation Protection Ordinance and the X-Ray Ordinance. The officer is appointed by the radiation protection supervisor, who bears the responsibility for ensuring that all radiation protection regulations are observed.
X-ray passport
Introduced in 2002, the X-ray pass is a document in which the examining physician or dentist enters information about X-ray examinations performed on the patient, with the main aim of avoiding unnecessary repeat examinations. Under the new Radiation Protection Ordinance (StrlSchV), practices and clinics are no longer obliged to offer their patients X-ray passes or to enter examinations in them. The new ordinance came into force on December 31, 2018, together with the Radiation Protection Act (StrlSchG) passed in 2017, replacing the previous Radiation Protection Ordinance and the X-ray Ordinance. The Federal Office for Radiation Protection (BfS) nevertheless continues to advise patients to keep records of their own radiodiagnostic examinations and provides a downloadable document on its website that can be used for personal documentation.
Legal basis in Switzerland
In Switzerland, institutionalized radiation protection began in 1955 with the issuance of guidelines for protection against ionizing radiation in medicine, laboratories, industry and manufacturing plants, although these were only recommendations. The legal basis was created by a new constitutional article (Art. 24), under which the federal government issues regulations on protection against the dangers of ionizing radiation; a corresponding federal law entered into force on July 1, 1960. The first Swiss ordinance on radiation protection entered into force on May 1, 1963.
On October 7, 1963, the Federal Department of Home Affairs (EDI) issued the following decrees to supplement the ordinance:
on radiation protection in medical X-ray equipment
on radiation protection in shoe X-ray machines (of which about 850 were in operation in 1963; the last was not decommissioned until 1990)
on the radioactivity of luminous dials
Another 40 regulations followed. The monitoring of such facilities took many years owing to a lack of personnel. From 1963, dosimeters were to be used for personal protection, but this met with great resistance. It was not until 1989 that an updated radiation protection law was passed, accompanied by radiation protection training for the persons concerned.
Legal basis in Austria
The legal basis for radiation protection in Austria is the Radiation Protection Act (BGBl. 277/69 as amended) of June 11, 1969. The tasks of radiation protection extend to the fields of medicine, commerce and industry, research, schools, worker protection and food. The General Radiation Protection Ordinance, Federal Law Gazette II No. 191/2006, has been in force since June 1, 2006; based on the Radiation Protection Act, it regulates the handling of radiation sources and measures for protection against ionizing radiation. The Optical Radiation Ordinance (VOPST) is an implementing ordinance to the Occupational Safety and Health Act (ASchG). On August 1, 2020, a new radiation protection law came into force that largely harmonized the radiation protection regulations for artificial radioactive substances and terrestrial natural radioactive substances; these are now enshrined in the General Radiation Protection Ordinance 2020. Companies that carry out activities involving naturally occurring radioactive substances are now subject to the licensing or notification requirements of Sections 15 to 17 of the Radiation Protection Act 2020, unless an exemption under Sections 7 or 8 of the General Radiation Protection Ordinance 2020 applies; cement production including the maintenance of clinker kilns, the production of primary iron, and tin, lead and copper smelting fall within the scope. If a company falls within the scope of the General Radiation Protection Ordinance 2020, its owner must commission an officially authorized monitoring body, whose mandate includes dose assessment for workers who may be exposed to increased radiation and, if necessary, determination of the activity concentration of residues and of radioactive substances discharged with the air or waste water.
See also
List of civilian nuclear accidents
References
External links
Laws, ordinances, guidelines, expert opinions and publications on radiation protection, timeline since 2002 of the Federal Ministry for the Environment, Nature Conservation, Nuclear Safety and Consumer Protection. Retrieved November 28, 2017.
Guidelines for quality assurance in radiology (PDF), German Medical Association, November 23, 2007. Retrieved December 4, 2017.
DIN Radiology Standards (PDF), DIN Radiology Standards Committee NAR in cooperation with the German Radiological Society, June 2015. Retrieved December 4, 2017.
Radiation protection in veterinary medicine - Guideline to the Radiation Protection Ordinance (StrlSchV) and the X-ray Ordinance (RöV) (PDF), September 25, 2014, Federal Ministry for the Environment, Nature Conservation, Building and Nuclear Safety, Division Medical-Biological Affairs of Radiation Protection, Ref. RS II 4 - 11432/7. Retrieved November 28, 2017.
Radioactivity and radiation protection (PDF; 6.6 MB), Federal Office of Public Health (Switzerland), July 2007. Retrieved November 25, 2017.
Overview of international radiation protection associations and organizations, Austrian Association for Radiation Protection. Retrieved December 3, 2017.
Human Radiation Experiments, DOE Openness. Retrieved January 10, 2018.
Department of Energy OpenNet Resources. Retrieved January 10, 2018.
Igor Gusev, Angelina Guskova, Fred A. Mettler: Medical Management of Radiation Accidents, Second Edition. CRC Press, 2001, p. 299 ff.
Bibliography
Ashley W. Oughterson, Shields Warren: Medical Effects of the Atomic Bomb in Japan. Volume VIII.8 of the National Nuclear Energy Series on the Manhattan Project, McGraw-Hill Book Company, 1956.
Carl Voegtlin, Harold C. Hodge: Pharmacology and Toxicology of Uranium Compounds. Volume VI.1, Part I and Part II (with a Section on the Pharmacology and Toxicology of Fluorine and Hydrogen Fluoride) of the National Nuclear Energy Series on the Manhattan Project, McGraw-Hill Book Company, 1949.
Henry DeWolf Smyth (written at the request of Maj. Gen. L. R. Groves): Atomic Energy for Military Purposes. The official report on the development of the atomic bomb under the auspices of the United States Government, 1940–1945. Princeton University Press, 1946.
James E. Grindler (Argonne National Laboratory): The Radiochemistry of Uranium. Nuclear Science Series, National Research Council, NAS-NS 3050, undated.
Radiation protection
History of medicine
History of dentistry
History of physics
Radiology
Radiobiology
Nuclear safety and security
History of radiation protection
Chemistry,Biology
31,205
18,053,246
https://en.wikipedia.org/wiki/History%20of%20condoms
The history of condoms goes back at least several centuries, and perhaps beyond. For most of their history, condoms have been used both as a method of birth control and as a protective measure against sexually transmitted infections such as syphilis, gonorrhea, chlamydia, hepatitis B and, more recently, HIV/AIDS. Condoms have been made from a variety of materials; prior to the 19th century, chemically treated linen and animal tissue (intestine or bladder) are the best documented varieties. Rubber condoms gained popularity in the mid-19th century, and in the early 20th century major advances were made in manufacturing techniques. Prior to the introduction of the combined oral contraceptive pill, condoms were the most popular birth control method in the Western world. In the second half of the 20th century, the low cost of condoms contributed to their importance in family planning programs throughout the developing world. Condoms have also become increasingly important in efforts to fight the AIDS pandemic.
Antiquity
Whether condoms were used in ancient civilizations is debated by archaeologists and historians. Societies in the ancient civilizations of Egypt, Greece, and Rome preferred small families and are known to have practiced a variety of birth control methods. However, these societies viewed birth control as a woman's responsibility, and the only well-documented contraception methods were female-controlled devices (both possibly effective, such as pessaries, and ineffective, such as amulets). The writings of these societies contain subtle references to male-controlled contraceptive methods that might have been condoms, but most historians interpret them as referring to coitus interruptus or anal intercourse.
The loincloths worn by Egyptian and Greek laborers were very sparse, sometimes consisting of little more than a covering for the glans of the penis. Records of these types of loincloths being worn by men in higher classes have led some historians to speculate that they were worn during intercourse; others, however, are doubtful of such interpretations. Historians may also cite one legend of Minos, related by Antoninus Liberalis around 150 AD, as suggestive of condom use in ancient societies. This legend describes a curse that caused Minos' semen to contain serpents and scorpions; to protect his sexual partner from these animals, Minos used a goat's bladder as a female condom.
Contraceptives fell out of use in Europe after the decline of the Western Roman Empire in the 5th century; the use of contraceptive pessaries, for example, is not documented again until the 15th century. If condoms were used during the Roman Empire, knowledge of them may have been lost during its decline. In the writings of Muslims and Jews during the Middle Ages, there are some references to attempts at male-controlled contraception, including suggestions to cover the penis in tar or soak it in onion juice. Some of these writings might describe condom use, but they are "oblique", "veiled", and "vague".
1500s to the 1800s
Renaissance
Prior to the 15th century, some use of glans condoms (devices covering only the head of the penis) is recorded in Asia. Glans condoms seem to have been used for birth control, and to have been known only by members of the upper classes. In China, glans condoms may have been made of oiled silk paper, or of lamb intestines. In Japan, condoms called Kabuto-gata (甲形) were made of tortoise shell or animal horn.
In England, there is evidence that condoms made of animal organs were available by the time of Henry VIII (the mid-1500s).
The first well-documented outbreak of what is now known as syphilis occurred in 1494 among French troops, and the disease then swept across Europe. As Jared Diamond describes it, "when syphilis was first definitely recorded in Europe in 1495, its pustules often covered the body from the head to the knees, caused flesh to fall from people's faces, and led to death within a few months." (The disease is less frequently fatal today.) By 1505, the disease had spread to Asia, and within a few decades had "decimated large areas of China".
In 16th-century Italy, Gabriele Falloppio authored the earliest uncontested description of condom use. De Morbo Gallico ("The French Disease", referring to syphilis) was published in 1564, two years after Falloppio's death. In this tract, he recommended use of a device he claimed to have invented: linen sheaths soaked in a chemical solution and allowed to dry before use. The cloths he described were sized to cover the glans of the penis and were held on with a ribbon. Falloppio claimed to have performed an experimental trial of the linen sheath on 1100 men, and reported that none of them had contracted the dreaded disease.
After the publication of De Morbo Gallico, use of penis coverings to protect from disease is described in a wide variety of literature throughout Europe. The first indication that these devices were used for birth control, rather than disease prevention, is the 1605 theological publication De iustitia et iure (On Justice and Law) by the Catholic theologian Leonardus Lessius, who condemned them as immoral. The first explicit description that un petit linge (a small cloth) was used to prevent pregnancy is from 1655: a French novel and play titled L'Escole des Filles (The Philosophy of Girls). In 1666, the English Birth Rate Commission attributed a recent decline in the birth rate to the use of "condons", the first documented use of that word (or any similar spelling).
In addition to linen, condoms during the Renaissance were made out of intestines and bladder. Cleaned and prepared intestine for use in glove making had been sold commercially since at least the 13th century. Condoms made from bladder and dating to the 1640s were discovered in an English privy; it is believed they were used by soldiers of King Charles I. Dutch traders introduced condoms made from "fine leather" to Japan; unlike the horn condoms used previously, these leather condoms covered the entire penis. The oldest condoms ever excavated, found in a cesspit on the grounds of Dudley Castle, were made from animal membrane and date back to as early as 1642.
18th century
Written references to condom use became much more common during the 18th century. Not all of the attention was positive: in 1708, John Campbell unsuccessfully asked Parliament to make the devices illegal. The noted English physician Daniel Turner also condemned the condom, publishing his arguments against its use in 1717. He disliked condoms because they did not offer full protection against syphilis. He also seems to have argued that belief in the protection condoms offered encouraged men to engage in sex with unsafe partners - but then, because of the loss of sensation caused by condoms, these same men often neglected to actually use the devices. The French medical professor Jean Astruc wrote his own anti-condom treatise in 1736, citing Turner as the authority in this area.
Physicians later in the 18th century also spoke against the condom, but not on medical grounds: rather, they expressed the belief that contraception was immoral. The condom market grew rapidly nonetheless. 18th-century condoms were available in a variety of qualities and sizes, made from either linen treated with chemicals or "skin" (bladder or intestine softened by treatment with sulphur and lye). They were sold at pubs, barbershops, chemist shops, open-air markets, and at the theatre throughout Europe and Russia. The first recorded inspection of condom quality is found in the memoirs of Giacomo Casanova (which cover his life until 1774): to test for holes, he would often blow them up before use.
Couples in colonial America relied on female-controlled methods of contraception, if they used contraceptives at all. The first known documents describing American condom use were written around 1800, two to three decades after the American Revolutionary War. Also around 1800, linen condoms lost popularity in the market and their production ceased: they were more expensive and were viewed as less comfortable than skin condoms.
Up to the 19th century, condoms were generally used only by the middle and upper classes. Perhaps more importantly, condoms were unaffordable for many: for a typical prostitute, a single condom might cost several months' pay.
Expanded marketing and introduction of rubber
The early 19th century saw contraceptives promoted to the poorer classes for the first time: birth control advocates in England included Jeremy Bentham and Richard Carlile, and noted American advocates included Robert Dale Owen and Charles Knowlton. Writers on contraception tended to prefer other methods of birth control, citing both the expense of condoms and their unreliability (they were often riddled with holes, and often fell off or broke), but they discussed condoms as a good option for some, and as the only contraceptive that also protected from disease. One group of British contraceptive advocates distributed condom literature in poor neighborhoods, with instructions on how to make the devices at home; in the 1840s, similar tracts were distributed in both cities and rural areas throughout the United States.
From the 1820s through the 1870s, popular lecturers, both women and men, traveled around America teaching about physiology and sexual matters. Many of them sold birth control devices, including condoms, after their lectures. They were condemned by many moralists and medical professionals, including America's first female doctor, Elizabeth Blackwell, who accused the lecturers of spreading doctrines of "abortion and prostitution". In the 1840s, advertisements for condoms began to appear in British newspapers, and in 1861 a condom advertisement appeared in The New York Times.
The origin of the rubber vulcanization process is disputed: some maintain that it was invented by Charles Goodyear in America in 1839 and patented in 1844, while other accounts attribute it to Thomas Hancock in Britain in 1843. The first rubber condom was produced in 1855, and by the late 1850s several major rubber companies were mass-producing, among other items, rubber condoms. A main advantage of rubber condoms was their reusability, which made them a more economical choice in the long term. Compared with the 19th-century rubber condoms, however, skin condoms were initially cheaper and offered better sensitivity; for these reasons, skin condoms remained more popular than the rubber variety.
However, by the end of the 19th century "rubber" had become a euphemism for condoms in countries around the world. For many decades, rubber condoms were manufactured by wrapping strips of raw rubber around penis-shaped moulds, then dipping the wrapped moulds in a chemical solution to cure the rubber. The earliest rubber condoms covered only the glans of the penis, and a doctor had to measure each man and order the correct size. Even with the medical fittings, however, glans condoms tended to fall off during use, and rubber manufacturers quickly discovered they could sell more devices by producing full-length one-size-fits-all condoms for sale in pharmacies.
Increased popularity despite legal impediments
Distribution of condoms in the United States was limited by passage of the Comstock laws, which included a federal act banning the mailing of contraceptive information (passed in 1873) as well as state laws banning the manufacture and sale of condoms in thirty states. In Ireland, the 1889 Indecent Advertisements Act made it illegal to advertise condoms, although their manufacture and sale remained legal. Contraceptives were illegal in 19th-century Italy and Germany, but condoms were allowed for disease prevention. In Great Britain it was forbidden to sell condoms as prophylactics under the 1917 VD Act, so they were marketed as contraceptives rather than as prophylactics - the reverse of the situation in America. Despite legal obstacles, condoms continued to be readily available in both Europe and America, widely advertised under euphemisms such as "male shield" and "rubber good". In late-19th-century England, condoms were known as "a little something for the weekend"; the phrase was commonly used in barbershops, which were a key retailer of condoms in twentieth-century Britain. Only in the Republic of Ireland were condoms effectively outlawed: their sale and manufacture remained illegal there until the 1970s.
Opposition to condoms did not come only from moralists: by the late 19th century many feminists expressed distrust of the condom as a contraceptive, as its use was controlled and decided upon by men alone. They advocated instead methods controlled by women, such as diaphragms and spermicidal douches. Despite social and legal opposition, at the end of the 19th century the condom was the Western world's most popular birth control method. Two surveys conducted in New York in 1890 and 1900 found that 45% of the women surveyed were using condoms to prevent pregnancy, and a survey in Boston just prior to World War I concluded that three million condoms were sold in that city every year.
The 1870s saw the founding in England of the first major condom manufacturing company, E. Lambert and Son of Dalston. In 1882, the German immigrant Julius Schmid founded one of the largest and longest-lasting condom businesses, Julius Schmid, Inc. This New York business initially manufactured only skin condoms (in 1890 Schmid was arrested by Anthony Comstock for having almost seven hundred of the devices in his house). In 1912, a German named Julius Fromm developed a new, improved manufacturing technique for condoms: dipping glass molds into a raw rubber solution. Called cement dipping, this method required adding gasoline or benzene to the rubber to make it liquid. In America, Schmid was the first company to use the new technique, and using the new dipping method, French condom manufacturers were the first to add textures to condoms.
Fromm's was the first company to sell a branded line of condoms, Fromm's Act, which remains popular in Germany today. The Fromm company was taken over by the Nazis, and the family fled to Great Britain, where it could not compete against the powerful London Rubber Company. Schmid's condom lines, Sheiks and Ramses, were sold through the late 1990s. Youngs Rubber Company, founded by Merle Youngs in late-19th-century America, introduced the Trojan brand.
Beginning in the second half of the 19th century, American rates of sexually transmitted infections skyrocketed. Causes cited by historians include the effects of the American Civil War and the ignorance of prevention methods promoted by the Comstock laws. To fight the growing epidemic, sexual education classes were introduced to public schools for the first time, teaching about STIs and how they were transmitted. They generally taught that abstinence was the only way to avoid sexually transmitted infections. The medical community and moral watchdogs considered STIs to be punishment for sexual misbehavior, and the stigma on victims of these diseases was so great that many hospitals refused to treat people who had syphilis.
1900 to present
World War I to the 1920s
The German military was the first to promote condom use among its soldiers, beginning in the second half of the 19th century. Early-20th-century experiments by the American military concluded that providing condoms to soldiers significantly lowered rates of sexually transmitted infections. During World War I, the United States and (at the beginning of the war only) Britain were the only countries with soldiers in Europe that did not provide condoms and promote their use, although some condoms were provided experimentally by the Royal Navy. By the end of the war, the American military had diagnosed almost 400,000 cases of syphilis and gonorrhea, a historic high.
From just before 1900 to the beginning of World War I, almost all condoms used in Europe were imported from Germany, which not only exported condoms to other European countries but was also a major supplier to Australia, New Zealand, and Canada. During the war, the American companies Schmid and Youngs became the main suppliers of condoms to the European Allies. By the early 1920s, however, most of Europe's condoms were once again made in Germany.
In 1918, just before the end of the war, an American court overturned a conviction against Margaret Sanger; in this case, the judge ruled that condoms could be legally advertised and sold for the prevention of disease. A few state laws against buying and selling contraceptives remained, and advertising condoms as birth control devices stayed illegal in over thirty states, but condoms began to be publicly and legally sold to Americans for the first time in forty-five years.
Through the 1920s, catchy names and slick packaging became an increasingly important marketing technique for many consumer items, including condoms and cigarettes. Quality testing became more common, involving filling each condom with air followed by one of several methods intended to detect loss of pressure. Several American companies sold their rejects under cheaper brand names rather than discarding them; consumers were advised to perform similar tests themselves before use, although few actually did so. Worldwide, condom sales doubled in the 1920s.
Still, there were many prominent opponents of condoms. Marie Stopes objected to the use of condoms ostensibly for medical reasons.
The founder of psychoanalysis, Sigmund Freud, opposed all methods of birth control on the grounds that their failure rates were too high, and he was especially opposed to the condom because it cut down on sexual pleasure. Some feminists continued to oppose male-controlled contraceptives such as condoms, and many moralists and medical professionals opposed all methods of contraception. In 1920 the Church of England's Lambeth Conference condemned all "unnatural means of conception avoidance", and London's Bishop Arthur Winnington-Ingram complained of the number of condoms discarded in alleyways and parks, especially after weekends and holidays.
In the U.S., condom advertising was legally restricted to their use as disease preventatives. They could be openly marketed as birth control devices in Britain, but purchasing condoms there was socially awkward compared to the U.S.; they were generally requested with the euphemism "a little something for the weekend". Boots, the largest pharmacy chain in Britain, stopped selling condoms altogether in the 1920s, a policy that was not reversed until the 1960s. In post-World War I France, the government, concerned about falling birth rates, outlawed all contraceptives, including condoms; contraception was also illegal in Spain. European militaries nevertheless continued to provide condoms to their members for disease protection, even in countries where they were illegal for the general population.
Invention of spray-drying and manufacturing automation
Around 1920, Ernest Hopkinson, a patent lawyer, inventor and vice-president of the United States Rubber Company, invented a simple new technique for converting latex into rubber without a coagulant (demulsifier): water was used as the solvent, the solution was sprayed and dried with warm air, and the liquid latex could optionally be preserved with ammonia. To distinguish them from products of the earlier technologies, rubber goods made in this way eventually came to be called "latex" products.
Youngs Rubber Company was the first to manufacture a latex condom, an improved version of its Trojan brand. Latex condoms required less labor to produce than cement-dipped rubber condoms, which had to be smoothed by rubbing and trimming. Because the process used water to suspend the rubber instead of gasoline and benzene, it eliminated the fire hazard previously associated with all condom factories. Latex condoms also performed better for the consumer: they were stronger and thinner than rubber condoms, and had a shelf life of five years (compared to three months for rubber). Europe's first latex condoms were exports from Youngs Rubber Company in 1929. In 1932, the London Rubber Company, which had previously served as a wholesaler for German-manufactured condoms, became Europe's first manufacturer of latex condoms with its Durex brand. The Durex plant was designed and installed by Lucian Landau, a Polish rubber technology student living in London.
Until the twenties, all condoms were individually hand-dipped by semiskilled workers. Throughout the 1920s, advances were made in automating the condom assembly line. Fred Killian patented the first fully automated line in 1930 and installed it in his manufacturing plant in Akron, Ohio; Killian charged $20,000 for his conveyor system. Automated lines dramatically lowered the price of condoms. Major condom manufacturers bought or leased conveyor systems, and small manufacturers were driven out of business.
The skin condom, now significantly more expensive than the latex variety, became restricted to a niche high-end market. In Britain, the London Rubber Company's fully automated plant was designed in-house by Lucian Landau, and the first lines were installed from 1950 onward. Great Depression In 1927, senior medical officers in the American military began promoting condom distribution and educational programs to members of the army and navy. By 1931, condoms were standard issue to all members of the U.S. military. This coincided with a steep decline in U.S. military cases of sexually transmitted infection. The U.S. military was not the only large organization that changed its moral stance on condoms: in 1930, the Anglican Church's Lambeth Conference sanctioned the use of birth control by married couples. In 1931, the Federal Council of Churches in the U.S. issued a similar statement. The Roman Catholic Church responded by issuing the encyclical Casti connubii affirming its opposition to all contraceptives, a stance it has never reversed. Semen analysis was first performed in the 1930s. Samples were typically collected by masturbation, another action opposed by the Catholic Church. In 1930s Spain, the first use of collection condoms was documented; holes put in the condom allowed the user to collect a sample without violating the prohibitions on contraception and masturbation. In 1932, Margaret Sanger arranged for a shipment of diaphragms to be mailed from Japan to a sympathetic doctor in New York City. When U.S. customs confiscated the package as illegal contraceptive devices, Sanger helped file a lawsuit. In 1936, a federal appeals court ruled in United States v. One Package of Japanese Pessaries that the federal government could not interfere with doctors providing contraception to their patients. In 1938, over three hundred birth control clinics opened in America, supplying reproductive care (including condoms) to poor women all over the country. Programs led by U.S. Surgeon General Thomas Parran included heavy promotion of condoms. These programs are credited with a steep drop in the U.S. STI rate by 1940. Two of the few places where condoms became more restricted during this period were Fascist Italy and Nazi Germany. Because of government concern about low birth rates, contraceptives were made illegal in Italy in the late 1920s. Although limited and highly controlled sales as disease preventatives were still allowed, there was a brisk black market trade in condoms as birth control. In Germany, laws passed in 1933 mandated that condoms could only be sold in plain brown wrappers, and only at pharmacies. Despite these restrictions, when World War II began, Germans were using 72 million condoms every year. The elimination of moral and legal barriers and the introduction of condom programs by the U.S. government helped condom sales. However, these factors alone are not considered to explain the Great Depression's booming condom industry. In the U.S. alone, more than 1.5 million condoms were used every day during the Depression, at a cost of over $33 million per year (not adjusted for inflation). One historian explains these statistics this way: "Condoms were cheaper than children." During the Depression, condom lines by Schmid gained in popularity: that company still used the cement-dipping method of manufacture. Unlike the latex variety, these condoms could be safely used with oil-based lubricants. 
And while less comfortable, older-style rubber condoms could be reused and so were more economical, a valued feature in hard times. More attention was brought to quality issues in the 1930s. In 1935, a biochemist tested 2000 condoms by filling each one with air and then water: he found that 60% of them leaked. The condom industry estimated that only 25% of condoms were tested for quality before packaging. The media attention led the U.S. Food and Drug Administration to classify condoms as a drug in 1937 and mandate that every condom be tested before packaging. Youngs Rubber Company was the first to institute quality testing of every condom they made, installing automatic testing equipment designed by Arthur Youngs (the owner's brother) in 1938. The Federal Food, Drug, and Cosmetic Act authorized the FDA to seize defective products; the first month the Act took effect in 1940, the FDA seized 864,000 condoms. While these actions improved the quality of condoms in the United States, American condom manufacturers continued to export their rejects for sale in foreign markets. World War II to 1980 During World War II condoms were not only distributed to male U.S. military members, but enlisted men were also subject to significant contraception propaganda in the form of films, posters, and lectures. A number of slogans were coined by the military, with one film exhorting "Don't forget — put it on before you put it in." African-American soldiers, who served in segregated units, were exposed to less of the condom promotion programs, had lower rates of condom usage, and much higher rates of STIs. America's female military units, the WACs and WAACs, were still engaged with abstinence programs. European and Asian militaries on both sides of the conflict also provided condoms to their troops throughout the war, even Germany which outlawed all civilian use of condoms in 1941. Despite the rubber shortages that occurred during this period, condom manufacturing was never restricted. In part because condoms were readily available, soldiers found a number of non-sexual uses for the devices, many of which continue to be utilized to this day. Post-war American troops in Germany continued to receive condoms and materials promoting their use. Nevertheless, rates of STIs in this population began to rise, reaching the highest levels since World War I. One explanation is that the success of newer penicillin treatments led soldiers to take syphilis and gonorrhea much less seriously. A similar casual attitude toward STIs appeared in the general American population; one historian states that condoms "were almost obsolete as prophylaxis by 1960". By 1947, the U.S. military was again promoting abstinence as the only method of disease control for its members, a policy that continued through the Vietnam War. But condom sales continued to grow. From 1955 to 1965, 42% of Americans of reproductive age relied on condoms for birth control. In Britain from 1950 to 1960, 60% of married couples used condoms. For the more economical-minded, cement-dipped condoms continued to be available long after the war. In 1957, Durex introduced the world's first lubricated condom. Beginning in the 1960s, the Japanese used more condoms per capita than any other nation in the world. The birth control pill became the world's most popular method of birth control in the years after its 1960 debut, but condoms remained a strong second. 
A survey of British women between 1966 and 1970 found that the condom was the most popular birth control method with single women. New manufacturers appeared in the Soviet Union, which had never restricted condom sales. The U.S. Agency for International Development pushed condom use in developing countries to help solve the "world population crisis": by 1970 hundreds of millions of condoms were being used each year in India alone. In the 1960s and 1970s quality regulations tightened, and legal barriers to condom use were removed. In 1965, the U.S. Supreme Court case Griswold v. Connecticut struck down one of the remaining Comstock laws, the bans on contraception in Connecticut and Massachusetts. France repealed its anti-birth control laws in 1967. Similar laws in Italy were declared unconstitutional in 1971. Captain Beate Uhse in Germany founded a birth control business, and fought a series of legal battles to continue her sales. In Ireland, legal condom sales (only to people over 18, and only in clinics and pharmacies) were allowed for the first time in 1978. (All restrictions on Irish condom sales were lifted in 1993.) Advertising was one area that continued to have legal restrictions. In the late 1950s, the American National Association of Broadcasters banned condom advertisements from national television. This policy remained in place until 1979, when the U.S. Justice Department had it overturned in court. In the U.S., advertisements for condoms were mostly limited to men's magazines such as Penthouse. The first television ad, on the California station KNTV, aired in 1975: it was quickly pulled after it attracted national attention. And in over 30 states, advertising condoms as birth control devices was still illegal. After the discovery of AIDS The first New York Times story on acquired immunodeficiency syndrome (AIDS) was published on July 3, 1981. In 1982, it was first suggested that the disease was sexually transmitted. In response to these findings, and to fight the spread of AIDS, the U.S. Surgeon General Dr. C. Everett Koop supported condom promotion programs. However, President Ronald Reagan preferred an approach of concentrating only on abstinence programs. Some opponents of condom programs stated that AIDS was a disease of homosexuals and illicit drug users, who were just getting what they deserved. In 1990, North Carolina senator Jesse Helms argued that the best way to fight AIDS would be to enforce state sodomy laws. Nevertheless, major advertising campaigns were placed in print media, promoting condoms as a way to protect against AIDS. Youngs Rubber mailed educational pamphlets to American households, although the postal service forced them to go to court to do so, citing a section of Title 39 that "prohibits the mailing of unsolicited advertisements for contraceptives." In 1983 the U.S. Supreme Court held that the postal service's actions violated the free speech clause of the First Amendment. From 1985 through 1987, national condom promotion campaigns occurred in the U.S. and Europe. Over the 10 years of the Swiss campaign, Swiss condom use increased by 80%. The year after the British campaign began, condom sales in the UK increased by 20%. In Britain in 1988, condoms were the most popular birth control choice for married couples, for the first time since the introduction of the pill. The first condom commercial on U.S. television aired during an episode of Herman's Head on November 17, 1991. In the U.S. 
in the 1990s, condoms ranked third in popularity among married couples, and were a strong second among single women. Condoms began to be sold in a wider variety of retail outlets, including in supermarkets and in discount department stores such as Wal-Mart. In this environment of more open sales, the British euphemism of "a little something for the weekend" fell out of use. In June 1991 America's first condom store, Condomania, opened on Bleecker Street in New York City. Condomania was the first store of its kind in North America dedicated to the sale and promotion of condoms in an upbeat, upscale and fun atmosphere. Condomania was also one of the first retailers to offer condoms online when it launched its website in December 1995. Condom sales increased every year until 1994, when media attention to the AIDS pandemic began to decline. In response, manufacturers changed the tone of their advertisements from scary to humorous. New developments continue to occur in the condom market, with the first polyurethane condom (branded Avanti and produced by the manufacturer of Durex) introduced in the 1990s. Durex was also the first condom brand to have a website, launched in 1997. As of 2007, worldwide condom use was expected to continue to grow: one study predicted that developing nations would need 18.6 billion condoms in 2015. In 1987, Global Protection Corp. was founded in Boston, Massachusetts by two Tufts University students, and it became known for its innovative approach to condom marketing. It developed the only FDA-approved glow-in-the-dark condom, called the Pleasure Plus condom. In 2005 the company introduced its One Condoms product line. The One brand used sleek metal packaging, unusual condom wrappers and innovative marketing programs. In 2014, Global Protection became majority owned by Karex, which then purchased the rest of the company in 2020. In 2022, ONE Condoms and MyONE Condoms became the first to receive FDA approval specifically for anal sex. Etymology and other terms Etymological theories for the word "condom" abound. By the early 18th century, the invention and naming of the condom was attributed to an associate of England's King Charles II, and this explanation persisted for several centuries. However, the "Dr. Condom" or "Earl of Condom" described in these stories has never been proved to exist, and condoms had been used for over one hundred years before King Charles II acceded to the throne. A variety of Latin etymologies have been proposed, including condon (receptacle) and words meaning house and scabbard or case. It has also been speculated to derive from an Italian word for glove. William E. Kruck wrote an article in 1981 concluding that, "As for the word 'condom', I need state only that its origin remains completely unknown, and there ends this search for an etymology." Modern dictionaries may also list the etymology as "unknown". Other terms are also commonly used to describe condoms. In North America condoms are also commonly known as prophylactics, or rubbers. In Britain they may be called French letters. Additionally, condoms may be referred to using the manufacturer's name. The insult term scumbag was originally a slang word for condom. Major manufacturers One analyst described the size of the condom market as something that "boggles the mind". Numerous small manufacturers, nonprofit groups, and government-run manufacturing plants exist around the world. 
Within the condom market, there are several major contributors, among them both for-profit businesses and philanthropic organizations. In 1882, German immigrant Julius Schmid founded one of the largest and longest-lasting condom businesses, Julius Schmid, Inc., based in New York City. The condom lines manufactured by Schmid included Sheiks and Ramses. In 1932, the London Rubber Company (which had previously been a wholesale business importing German condoms) began to produce latex condoms, under the Durex brand. In 1963 Schmid was purchased by London Rubber. In 1987, London Rubber began acquiring other condom manufacturers, and within a few years became an important international company. In the late 1990s, London Rubber (by then London International Limited) merged all the Schmid brands into its European brand, Durex. Soon after, London International was purchased by Seton Scholl Healthcare (manufacturer of Dr. Scholl's footcare products), forming SSL International. Youngs Rubber Company, founded by Merle Youngs in late-19th-century America, introduced the Trojan line of condoms. In 1985, Youngs Rubber Company was sold to Carter-Wallace. The Trojan name switched hands yet again in 2000 when Carter-Wallace was sold to Church and Dwight. The Australian division of Dunlop Rubber began manufacturing condoms in the 1890s. In 1905, Dunlop sold its condom-making equipment to one of its employees, Eric Ansell, who founded Ansell Rubber. In 1969, Ansell was sold back to Dunlop. In 1987, English business magnate Richard Branson contracted with Ansell to help in a campaign against HIV and AIDS. Ansell agreed to manufacture the Mates brand of condom, to be sold at little or no profit in order to encourage condom use. Branson soon sold the Mates brand to Ansell, with royalty payments made annually to the charity Virgin Unite. In addition to its Mates brand, Ansell currently manufactures Lifestyles and Lifesan for the U.S. market. In 1934 the Kokusai Rubber Company was founded in Japan. It is now known as the Okamoto Rubber Manufacturing Company. In 1970 Tim Black and Philip Harvey founded Population Planning Associates (now known as Adam & Eve). Population Planning Associates was a mail-order business that marketed condoms to American college students, despite U.S. laws against sending contraceptives through the mail. Black and Harvey used the profits from their company to start a non-profit organization, Population Services International. By 1975, PSI was marketing condoms in Kenya and Bangladesh, and today operates programs in over sixty countries. Harvey left his position as PSI's director in the late 1970s, but in the late 1980s again founded a nonprofit company, DKT International. Named after D.K. Tyagi (a leader of family planning programs in India), DKT International annually sells millions of condoms at discounted rates in developing countries around the world. By selling the condoms instead of giving them away, DKT intends to make its customers invested in using the devices. One of DKT's more notable programs is its work in Ethiopia, where soldiers are required to carry a condom every time they leave base. The rate of HIV infection in the Ethiopian military, about 5%, is believed to be the lowest among African militaries. As of 2020, the Malaysian company Karex is the largest condom producer in the world. References Condoms
History of condoms
Technology
7,656
23,252,771
https://en.wikipedia.org/wiki/Completeness%20of%20atomic%20initial%20sequents
In sequent calculus, the completeness of atomic initial sequents states that initial sequents A ⊢ A (where A is an arbitrary formula) can be derived from only atomic initial sequents p ⊢ p (where p is an atomic formula). This theorem plays a role analogous to eta expansion in lambda calculus, and dual to cut elimination and beta reduction. Typically it can be established by induction on the structure of A, much more easily than cut elimination. References Gaisi Takeuti. Proof Theory. Volume 81 of Studies in Logic and the Foundations of Mathematics. North-Holland, Amsterdam, 1975. Anne Sjerp Troelstra and Helmut Schwichtenberg. Basic Proof Theory. 2nd edition, illustrated, revised. Cambridge University Press, 2000. Theorems in the foundations of mathematics Proof theory
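To illustrate the induction step for conjunction (a standard textbook-style sketch in Gentzen's LK, not a derivation quoted from the references above): the sequents A ⊢ A and B ⊢ B are derivable from atomic initial sequents by the induction hypothesis, and the initial sequent for A ∧ B then follows by one left rule on each premise and one right rule:

\[
\frac{\dfrac{A \vdash A}{A \wedge B \vdash A}\;(\wedge\mathrm{L}) \qquad \dfrac{B \vdash B}{A \wedge B \vdash B}\;(\wedge\mathrm{L})}{A \wedge B \vdash A \wedge B}\;(\wedge\mathrm{R})
\]

The other connectives and the quantifiers are handled by analogous steps, which is why the whole proof is a routine induction on the structure of A.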
Completeness of atomic initial sequents
Mathematics
161
25,819,126
https://en.wikipedia.org/wiki/HD%2024496
HD 24496 is a binary star system in the equatorial constellation of Taurus. The combined apparent visual magnitude of the pair is 6.81, which is too faint to be readily visible to the normal human eye. The system is located at a distance of 66.8 light-years from the Sun, based on parallax, and is drifting further away with a radial velocity of +19 km/s. It is traversing the celestial sphere with a proper motion of per year. The magnitude 6.9 primary star, designated component A, is a G-type main-sequence star with a stellar classification of G7V. It is around three billion years old with a low projected rotational velocity. The star has 96% of the mass of the Sun and 91% of the Sun's radius. The metallicity, what astronomers term the abundance of heavier elements, is about the same as in the Sun. The star is radiating 71% of the luminosity of the Sun from its photosphere at an effective temperature of 5,572 K. The secondary companion, component B, is a magnitude 11.1 red dwarf of class M2V that shares a common proper motion with the primary. They have an angular separation of along a position angle of 256°, which is equivalent to a physical projected separation of . Their orbital period is around 123,000 years. References G-type main-sequence stars M-type main-sequence stars Binary stars Taurus (constellation) BD+16 0527 3255 024496 018267
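The parallax-based distance quoted above reflects the standard relation d [parsec] = 1 / p [arcsec]. A minimal sketch (Python; the 66.8 light-year figure is taken from the text, and the parallax printed here is simply the value that distance implies, not a quoted catalog measurement):

LY_PER_PARSEC = 3.2616  # light-years per parsec

def implied_parallax_mas(distance_ly: float) -> float:
    """Annual parallax (milliarcseconds) implied by a distance."""
    distance_pc = distance_ly / LY_PER_PARSEC
    return 1000.0 / distance_pc  # p [arcsec] = 1 / d [pc]

distance_ly = 66.8  # distance quoted in the article
print(f"{distance_ly} ly = {distance_ly / LY_PER_PARSEC:.1f} pc")        # 20.5 pc
print(f"implied parallax = {implied_parallax_mas(distance_ly):.1f} mas")  # 48.8 mas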
HD 24496
Astronomy
319
66,826,648
https://en.wikipedia.org/wiki/Russula%20albidoflava
Russula albidoflava is a fungus in the family Russulaceae, found "in stands of Eucalyptus globulus" in Tasmania. It was first described in 2007 by Teresa Lebel and Jennifer Tonkin. References albidoflava Taxa named by Teresa Lebel Fungus species
Russula albidoflava
Biology
63
35,002,643
https://en.wikipedia.org/wiki/Stevo%20Todor%C4%8Devi%C4%87
Stevo Todorčević (born February 9, 1955) is a Yugoslavian mathematician specializing in mathematical logic and set theory. He holds a Canada Research Chair in mathematics at the University of Toronto, and a director of research position at the Centre national de la recherche scientifique in Paris. Early life and education Todorčević was born in Ubovića Brdo. As a child he moved to Banatsko Novo Selo, and went to school in Pančevo. At Belgrade University, he studied pure mathematics, attending lectures by Đuro Kurepa. He began graduate studies in 1978, and wrote his doctoral thesis in 1979 with Kurepa as his advisor. Research Todorčević's work involves mathematical logic, set theory, and their applications to pure mathematics. In his 1978 master's thesis, Todorčević constructed a model of MA + ¬wKH in a way that allowed him to make the continuum any regular cardinal, and so derived a variety of topological consequences. Here MA is an abbreviation for Martin's axiom and wKH stands for the weak Kurepa Hypothesis. In 1980, Todorčević and Abraham proved the existence of rigid Aronszajn trees and the consistency of MA + the negation of the continuum hypothesis + there exists a first countable S-space. Awards and honours Todorčević is the winner of the first prize of the Balkan Mathematical Society for 1980 and 1982, the 2012 CRM-Fields-PIMS prize in mathematical sciences, and the Shoenfield prize of the Association for Symbolic Logic for "outstanding expository writing in the field of logic" in 2013, for his book Introduction to Ramsey Spaces. He was selected by the Association for Symbolic Logic as their 2016 Gödel Lecturer. He became a corresponding member of the Serbian Academy of Sciences and Arts in 1991 and a full member of the Academy in 2009. In 2016 Todorčević became a fellow of the Royal Society of Canada. Todorčević has been described as "the greatest Serbian mathematician" since the time of Mihailo Petrović Alas. Books Todorčević is the author of several books in mathematics, including volumes co-authored with Ilijas Farah and with Spiros A. Argyros. See also Baumgartner's axiom Kechris–Pestov–Todorčević correspondence Open coloring axiom S and L spaces References Sources RSC Fellowship Citation and Detailed Appraisal: Stevo Todorcevic External links CRM Fields PIMS Prize Lecture: Prof. Stevo Todorcevic (photo album) CRM-Fields-PIMS Prize Lecture: Stevo Todorcevic (University of Toronto) Stevo Todorcevic at University of Toronto Stevo Todorcevic at Institut de mathématiques de Jussieu – Paris Rive Gauche Todorčević najcenjeniji (Todorčević most respected) Dispute over Infinity Divides Mathematicians by Natalie Wolchover, Quanta Magazine, November 26, 2013; contains some comments on choices of axioms for set theory Stevo Todorcevic at Institute for Advanced Study Prof. Todorčević Interview Living people 20th-century Serbian mathematicians 21st-century Canadian mathematicians Combinatorialists Set theorists Topologists Mathematical analysts Canada Research Chairs Members of the Serbian Academy of Sciences and Arts Fellows of the Royal Society of Canada Academic staff of the University of Toronto Academic staff of the University of Paris University of Belgrade Faculty of Mathematics alumni 1955 births Canadian people of Serbian descent Serbs of Bosnia and Herzegovina Yugoslav mathematicians
Stevo Todorčević
Mathematics
742
2,778,969
https://en.wikipedia.org/wiki/CCSID
A CCSID (coded character set identifier) is a 16-bit number that represents a particular encoding of a specific code page. For example, Unicode is a code page that has several character encoding schemes (referred to as "transformation formats")—including UTF-8, UTF-16 and UTF-32—but which may or may not actually be accompanied by a CCSID number to indicate that this encoding is being used. Difference between a code page and a CCSID The terms code page and CCSID are often used interchangeably, even though they are not synonymous. A code page may be only part of what makes up a CCSID. The following definitions from IBM help to illustrate this point: A glyph is the actual physical pattern of pixels or ink that shows up on a display or printout. A character is a concept that covers all glyphs associated with a certain symbol. For instance, a plain "F", a bold "F", an italic "F", and an underlined "F" are all different glyphs, but use the same character. The various modifiers (bold, italic, underline, color, and font) do not change the fact that these glyphs all represent the same character. A character set contains the characters necessary to allow a particular human to carry on a meaningful interaction with the computer. It does not specify how those characters are represented in a computer. This level is the first one to separate characters into various alphabets (Latin, Arabic, Hebrew, Cyrillic, and so on) or ideographic groups (e.g., Chinese, Korean). It corresponds to a "character repertoire" in the Unicode encoding model. A code page represents a particular assignment of code point values to characters. It corresponds to a "coded character set" in the Unicode encoding model. A code point for a character is the computer's internal representation of that character in a given code page. Many characters are represented by different code points in different code pages. Certain character sets can be adequately represented with single-byte code pages (which have a maximum of 256 code points, hence a maximum of 256 characters), but many require more than that. Examples include JIS X 0208 and Unicode. An encoding scheme is the byte format of a code page. It maps code point values to sequences of one or more byte values in a computer. For example, UTF-8 and UTF-16BE are two encodings of the same Unicode code page. (Varying only in how many bytes are needed to represent a particular Unicode character value, how it is contained within those bytes, and how the presence of Unicode information is indicated.) Meanwhile, in IBM's character data representation architecture (CDRA), this is typically represented with an ESID (encoding scheme identifier). EUC and ISO-2022 are other examples of encoding schemes. A coded character set identifier (CCSID) contains all of the information necessary to assign and preserve the meaning and rendering of characters through various stages of processing and interchange. This information always includes at least one code page, but may include multiple code pages of differing byte-lengths. The CCSID also has an associated encoding scheme that governs how various code points are to be handled. This mechanism allows a program to recognize bidirectional orientation, character shaping (mainly of Arabic characters), and other complex encoding information. Examples The following examples show how some CCSIDs are made up of other CCSIDs, using three variant Shift-JIS CCSIDs: 932, 942, and 5028. All three are multi-byte character sets (MBCS), but the single-byte character set (SBCS) portion of each CCSID is different. 
The double-byte character set (DBCS) portion is the same across all three CCSIDs. CCSID 5028 uses an updated version of code page 897, called CCSID 4993. CCSID 932 uses the original code page 897, which is CCSID 897. CCSID 942 uses a different SBCS from the other two CCSIDs, namely CCSID 1041. Notice also that CCSIDs 5028 and 4993 each differ by 4096 (1000 in hexadecimal) from the predecessor CCSID with the same code page identifier. This is a common way that CDRA denotes an upgraded CCSID. There are a few reasons for this complexity: Many of the CCSIDs are used in IBM databases, like IBM Db2, where a database field only supports an SBCS, DBCS or MBCS string. CCSIDs allow programs to differentiate between which one is being used. When characters are added or replaced, as with the introduction of the euro currency sign, one can tell whether stored strings do or do not support those character additions, because a different CCSID is being used. This versioning is important for the integrity of the data. It enables reuse of resources among similar CCSIDs. References External links IBM CDRA (character data representation architecture) glossary of terms IBM globalization terminology Complete description of IBM CDRA. (This includes a more detailed description of the architecture surrounding CCSIDs.) IBM's complete list of CCSIDs and other various related identifiers List of CCSIDs supported on the IBM System i computer Character encoding
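The code page/encoding scheme distinction defined above lends itself to a short demonstration with Unicode: one set of code point assignments, several byte serializations. A minimal sketch using Python 3.8+ and only the standard library (the CCSID numbers in the comments, 1208 for UTF-8 and 1200 for UTF-16, are IBM's published assignments for these schemes; everything else is generic Unicode behavior):

# One code page (Unicode), several encoding schemes: the same code
# points serialize to different byte sequences under each scheme.
text = "Aé€"  # code points U+0041, U+00E9, U+20AC

for scheme in ("utf-8", "utf-16-be", "utf-32-be"):  # UTF-8 is CCSID 1208; UTF-16 is CCSID 1200
    print(f"{scheme:9s} -> {text.encode(scheme).hex(' ')}")

# utf-8     -> 41 c3 a9 e2 82 ac
# utf-16-be -> 00 41 00 e9 20 ac
# utf-32-be -> 00 00 00 41 00 00 00 e9 00 00 20 ac

# The +4096 versioning convention described in the article:
assert 5028 - 932 == 4993 - 897 == 4096  # upgraded CCSID = predecessor + 0x1000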
CCSID
Technology
1,099
21,301,483
https://en.wikipedia.org/wiki/GroupLens%20Research
GroupLens Research is a human–computer interaction research lab in the Department of Computer Science and Engineering at the University of Minnesota, Twin Cities specializing in recommender systems and online communities. GroupLens also works with mobile and ubiquitous technologies, digital libraries, and local geographic information systems. The GroupLens lab was one of the first to study automated recommender systems with the construction of the "GroupLens" recommender, a Usenet article recommendation engine, and MovieLens, a popular movie recommendation site used to study recommendation engines, tagging systems, and user interfaces. The lab has also gained notability for its members' work studying open content communities such as Cyclopath, a geo-wiki that was used in the Twin Cities to help plan the regional cycling system. History Formation In 1992, John Riedl and Paul Resnick attended the CSCW conference together. After they heard keynote speaker Shumpei Kumon talk about his vision for an information economy, they began working on a collaborative filtering system for Usenet news. The system collected ratings from Usenet readers and used those ratings to predict how much other readers would like an article before they read it. This recommendation engine was one of the first automated collaborative filtering systems in which algorithms were used to automatically form predictions based on historical patterns of ratings. The overall system was called the "GroupLens" recommender, and the servers that collected the ratings and performed the computation were called the "Better Bit Bureau". This name was later dropped after a request from the Better Business Bureau. "GroupLens" is now used as a name both for this recommender system, and for the research lab at the University of Minnesota. A feasibility test was done between MIT and the University of Minnesota and a research paper was published including the algorithm, the system design, and the results of the feasibility study, in the CSCW conference of 1994. In 1993, Riedl and Resnick invited Joseph Konstan to join the team. Together, they decided to create a higher-performance implementation of the algorithms to support larger-scale deployments. In summer 1995 the team gathered Bradley Miller, David Maltz, Jon Herlocker, and Mark Claypool for "Hack Week" to create the new implementation, and to plan the next round of experiments. In the Spring of 1996, the first workshop on collaborative filtering was put together by Resnick and Hal Varian at the University of California, Berkeley. There, researchers from projects around the US that were studying similar systems came together to share ideas and experience. Net Perceptions In the summer of 1996, David Gardiner, a former Ph.D. student of Riedl's, introduced John Riedl to Steven Snyder. Snyder had been an early employee at Microsoft, but left Microsoft to come to Minnesota to do a Ph.D. in Psychology. He realized the commercial potential of collaborative filtering, and encouraged the team to found a company in April 1996. By June, Gardiner, Snyder, Miller, Riedl, and Konstan had incorporated their company, and by July they had their first round of funding, from Hummer Winblad Venture Partners venture capital company. Net Perceptions went on to be one of the leading companies in personalization during the Internet boom of the late 1990s, and stayed in business until 2004. 
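The prediction step of such a neighborhood-based collaborative filter can be sketched in a few lines. The following is a simplified illustration in the spirit of the correlation-weighted averaging described in the 1994 paper, not the GroupLens system's actual code, and the tiny ratings table is invented for the example:

import math

# Toy data: user -> {item: rating}. Invented for illustration only.
ratings = {
    "ann": {"alien": 5, "brazil": 4, "clue": 1},
    "bob": {"alien": 4, "brazil": 5, "clue": 2, "dune": 5},
    "cat": {"alien": 1, "brazil": 2, "clue": 5, "dune": 1},
}

def pearson(u: str, v: str) -> float:
    """Pearson correlation of two users' ratings over co-rated items."""
    common = sorted(set(ratings[u]) & set(ratings[v]))
    if len(common) < 2:
        return 0.0
    ru = [ratings[u][i] for i in common]
    rv = [ratings[v][i] for i in common]
    mu, mv = sum(ru) / len(ru), sum(rv) / len(rv)
    num = sum((a - mu) * (b - mv) for a, b in zip(ru, rv))
    den = math.sqrt(sum((a - mu) ** 2 for a in ru) *
                    sum((b - mv) ** 2 for b in rv))
    return num / den if den else 0.0

def predict(user: str, item: str) -> float:
    """Predict a rating: the user's own mean, adjusted by other users'
    deviations from their means, weighted by user-user correlation."""
    base = sum(ratings[user].values()) / len(ratings[user])
    num = den = 0.0
    for other in ratings:
        if other != user and item in ratings[other]:
            w = pearson(user, other)
            other_mean = sum(ratings[other].values()) / len(ratings[other])
            num += w * (ratings[other][item] - other_mean)
            den += abs(w)
    return base + num / den if den else base

print(round(predict("ann", "dune"), 2))  # high, since ann's tastes track bob's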
Based on their experience, Riedl and Konstan wrote a book about the lessons learned from deploying recommenders in practice. Recommender systems have since become ubiquitous in the online world, with leading vendors such as Amazon and Netflix deploying highly sophisticated recommender systems. Netflix even offered a $1 million prize for improvements in recommender technology. When the EachMovie site closed in 1997, the researchers behind it released the anonymous rating data they had collected, for other researchers to use. The GroupLens Research team, led by Brent Dahlen and Jon Herlocker, used this data set to jumpstart a new movie recommendation site called MovieLens which has been a very visible research platform, including a detailed discussion in a New Yorker article by Malcolm Gladwell, and a report in a full episode of ABC Nightline. Between 1997 and 2002 the group continued its research on collaborative filtering, which became known in the community by the more general term of recommender systems. With Joe Konstan's expertise in user interfaces, the team began exploring interface issues in recommenders, such as explanations, and meta-recommendation systems. Studying online communities In 2002, GroupLens expanded into social computing and online communities with the addition of Loren Terveen, who was known for his research of social recommender systems such as PHOAKS. In order to broaden the set of research ideas and tools they used, Riedl, Konstan, and Terveen invited colleagues in social psychology (Robert Kraut and Sara Kiesler, of the Carnegie Mellon Human Computer Interaction Institute), and economic and social analysis (Paul Resnick and Yan Chen of the University of Michigan School of Information) to collaborate. The new, larger team adopted the name CommunityLab, and looked generally at the effects of technological interventions on the performance of online communities. For instance, some of their research explored technology for enriching conversation systems, while other research explored the personal, social, and economic motivations for user ratings. In 2008 GroupLens launched Cyclopath, a computational geo-wiki for bicyclists within a city. In 2010, GroupLens won the annual ACM software system award. Riedl died in 2013. Brent Hecht joined the GroupLens faculty in 2013, focusing on geographic human-computer interaction. Lana Yarosh joined the GroupLens faculty in 2014; she works with social computing and child-computer interaction. A third professor, Haiyi Zhu, joined in 2015. Haiyi has published research on Facebook and other social networks. Stevie Chancellor, a human-centered computing and social computing researcher, joined the GroupLens faculty in 2020. Contributions The MovieLens recommender system: MovieLens is a non-commercial movie recommender system that has been running since 1997 with over 164,000 unique visitors as of 2009, who have provided over 15 million movie ratings. MovieLens ratings datasets: In the early days of recommender systems, research was slowed down by the lack of publicly available datasets. In response to requests from other researchers, GroupLens released three datasets: the MovieLens 100,000 rating dataset, the MovieLens 1 million rating dataset, and the MovieLens 10 million rating dataset. These datasets became the standard datasets for recommender research, and have been used in over 300 papers by researchers around the world. The dataset is also being used for teaching about recommender technology. 
MovieLens tagging dataset: GroupLens added tagging to MovieLens in 2006. Since then, users have provided over 85,000 applications of 14,000 unique tags to movies. The MovieLens 10 million ratings dataset also includes a 100,000 tag applications dataset for researchers to use. Information leakage from recommender datasets: a paper at an information retrieval conference analyzed the privacy risks of releasing large recommender datasets. The basic risk discovered is that an anonymized dataset might be combined with public information to identify a user. For instance, a user who has written about his preference for movies on online forums could be associated with a specific row in the MovieLens datasets. In some cases, these associations might leak information the user would prefer to keep private. Wikipedia research: A study of value and vandalism in Wikipedia, published in 2007, described the concentration of contribution across Wikipedia editors. This paper was one of the first to focus on the length of time that a contribution survives within Wikipedia as a measure of its value. The paper also investigated the effects of vandalism on Wikipedia readers, by measuring the probability that a view of a page would capture that page in a vandalized state. GroupLens has also explored ways to help editors find pages which they can effectively contribute to with the SuggestBot recommender. The group has also explored the evolution of the norms in Wikipedia that determine which articles are accepted or rejected, and the effect of changes in those norms on the Long Tail of Wikipedia articles. GroupLens has also explored the functioning of the informal peer review system within Wikipedia, finding that decisions appear to be inappropriately influenced by article ownership and that experience does not seem to change editor performance very much. GroupLens researchers have also explored visualizations of the edit history of Wikipedia articles. In 2011, GroupLens researchers completed a scientific exploration of gender imbalance among Wikipedia's editors, finding a large gap between the numbers of male and female editors. Shilling recommender systems: GroupLens has explored ways that users of recommender systems can attempt to inappropriately influence the recommendations given to other users. They call this behavior shilling, because of its relationship to the practice of hiring associates to pretend to be enthusiastic customers. They showed that some types of shilling are likely to be effective in practice. One concern about shilling is that the false predictions may change the reported opinions of later users, further corrupting the recommendations. Cyclopath: Beginning in 2008, GroupLens launched Cyclopath, a computational geo-wiki for local bicyclists. Cyclopath has since been used by hundreds of cyclists within the Twin Cities. More recently, Cyclopath has been adopted by the Twin Cities Metropolitan Council to help plan the regional cycling system. References External links GroupLens Research Homepage University of Minnesota Human–computer interaction Computer science research organizations Research organizations in the United States Recommender systems
GroupLens Research
Technology,Engineering
2,014
48,522,036
https://en.wikipedia.org/wiki/Hygrophoropsis%20ochraceolutea
Hygrophoropsis ochraceolutea is a species of fungus in the family Hygrophoropsidaceae. It was described in 1991 from collections made in Sardinia. References External links Hygrophoropsidaceae Fungi described in 1991 Fungi of Europe Fungus species
Hygrophoropsis ochraceolutea
Biology
61
13,444,838
https://en.wikipedia.org/wiki/S%C3%B6der%20Torn
South Tower (Swedish: "Söder Torn") is a high-rise building located at Fatburstrappan 18, next to Fatbursparken on Södermalm in Stockholm. The building has a height of about above the ground including the "crown" and consists of 25 floors. The Söder Torn complex contains three additional buildings, including one that abuts Medborgarplatsen. Collectively, the buildings contain 172 condominium apartments and 5 businesses. The South Tower itself has 85 apartments and one business. A garage contains parking for both cars and motorcycles. Site History and Building Description The Tower's site was previously a lake called Fatburen, which had formed due to isostatic uplift after the last glaciation. By the 1700s, the lake had become polluted due to urban expansion, and by 1860 the lake had been filled in to create a rail yard and train station. The rail yard was closed in 1980, and the neighborhood was redeveloped into a residential district during the period 1985–1995; the district features a large number of buildings in the post-modern style. A new train station was built underground in close proximity (300 meters) to the South Tower. The Tower was originally designed by Danish architect Henning Larsen to have 40 floors. However, Larsen left the project in protest after Stockholm's city planning office forced the removal of 16 floors from the building plan. The floor plan is octagonal with five apartments on each level. The tower tapers with increasing height. The facades are clad with red granite slabs. In the centrally located stairwell there are two elevators and a spiral staircase. The 23rd and 24th floors have three multi-story apartments, and the top floor is a common party room with glass walls and a panoramic view of the city. Built by construction company JM and finance company SBC, it opened in 1997. Residential Qualities The top floor of the building is a glass-enclosed party room and terrace with panoramic views of Stockholm. There is an indoor swimming pool and sauna on a lower level. A fountain sculpture at the Tower's base, La Fontaine aux quatre Nanas by French-American artist Niki de Saint Phalle, attracts many viewers due to its styling and location adjacent to a heavily used path between Stockholm South Station and the plaza Medborgarplatsen. Criticism The development at Fatburen district was poorly received by some architectural critics, with one review specifically highlighting the South Tower as "a monument to post-modernism as a playhouse for urban development." See also Bofills båge Medborgarplatsen Södermalm Stockholm South Station References Skyscrapers in Stockholm Residential skyscrapers Residential buildings in Sweden Postmodern architecture
Söder Torn
Engineering
557
24,466,345
https://en.wikipedia.org/wiki/Gymnopilus%20microsporus
Gymnopilus microsporus is a species of mushroom in the family Hymenogastraceae. It was given its current name by mycologist Rolf Singer in 1951. See also List of Gymnopilus species References External links Gymnopilus microsporus at Index Fungorum microsporus Fungi of North America Fungus species
Gymnopilus microsporus
Biology
72
25,771,106
https://en.wikipedia.org/wiki/Soci%C3%A9t%C3%A9%20astronomique%20de%20France
The Société astronomique de France (SAF), the French astronomical society, is a non-profit association in the public interest organized under French law (Association loi de 1901). Founded by astronomer Camille Flammarion in 1887, its purpose is to promote the development and practice of astronomy. History SAF was established by Camille Flammarion and a group of 11 persons on 28 January 1887 in Flammarion's apartment at 16 rue Cassini, 75014 Paris, close to the Paris Observatory. Open to all, SAF includes both professional and amateur astronomers as members, from France and abroad. Its objective was defined at the time of its establishment as: "A Society is founded with the aim to bring together people involved practically or theoretically in Astronomy, or who are interested in the development of this Science and the extension of its influence for the illumination of minds. Its efforts shall support the increase and extension of this Science, as well as to facilitating ways and means for those who wish to undertake astronomical studies. All friends of the Science and Progress are invited for its composition and development." On 4 April 1887, the headquarters was established at the Hôtel des Sociétés Savantes, 28 rue Serpente, in the 6th arrondissement of Paris. The society built an observatory on the top floor of the building for its members' use, which operated from 1890 to 1968 (the Observatory of the rue Serpente). On 17 October 1966, the headquarters moved to the Maison de la Chimie at 28 rue Saint-Dominique, Paris 75007. Since 1974, the headquarters has been located at 3, rue Beethoven, Paris 75016. To date, the Society has had 49 presidents, including many illustrious persons in astronomy and related fields. Activities and services Monthly magazine L'Astronomie and the periodical Observations et Travaux. Specialized commissions for Astronautics, Astrophilately, Comets, Cosmology, Double stars, History, Instruments, Meteors, meteorites and impacts, Planetary observations, Planetology, Radio Astronomy, Sundials, the Sun, Techniques for amateur astronomy, and Youth. Monthly conferences, lectures, initiation courses in astronomy, and regular meetings of the commissions. The monthly conferences are convened at the Conservatoire national des arts et métiers (CNAM). Rencontres AstroCiel, an annual astronomical gathering every August at which astronomy enthusiasts come together for two weeks of nighttime observations in Valdrôme (Drôme department) in southeastern France, at 1,300 meters altitude. An extensive library that includes both historical and modern works, available for research and consultation to members and non-members. Three astronomical observatories that are open to the public: the Astronomy Tower of the Sorbonne in the 5th arrondissement of Paris, the Camille Flammarion Observatory in Juvisy-sur-Orge, and the Bélesta Observatory, located in Bélesta-en-Lauragais in the Haute-Garonne department. An optics workshop for members, located in the Astronomy Tower of the Sorbonne. Awards The society has offered the following awards over the years to its members and to notable personalities in the field of astronomy in France and abroad. Not all awards are given every year, and some have been discontinued. Prix Jules Janssen. Recognition of astronomical work in general, or services rendered to Astronomy, by a professional. Prize established by Jules Janssen. Annual prize awarded 1896–present. Prix des Dames. Recognition of services rendered to the Society of any kind. 
Prize established at the initiative of Sylvie Camille Flammarion and a group of women members of SAF. Annual prize awarded 1896–present. Prix Maurice Ballot. Recognition of authors of works of the Society's observatory. Biannual prize established by a donation of Maurice Ballot, SAF Librarian. Awarded when merited. Given 1921–present. Prix Georges Bidault de l'Isle. Encouragement of young people who show a special talent for astronomy or meteorology. Individuals are chosen from participants at courses and conferences, collaboration at the Observatory, or through communications in the bulletin during the preceding year. Prior to 1956, this award was known as the Prix de l'Observatoire de la Guette. Annual prize awarded 1925– Prix Henry Rey. Recognition of an important work in astronomy. A silver medal is awarded annually. Established by funds bequeathed by Henry Rey of Marseille. Annual prize awarded 1926–present. Prix Gabrielle et Camille Flammarion. Recognition of an important discovery and marked progress in astronomy or in a sister science, to aid an independent researcher, or to assist a young researcher to begin work in astronomy. Given odd-numbered years, alternating with the Prix Dorothea Klumpke-Isaac Roberts. Prize awarded 1930–present. Prix Dorothea Klumpke - Isaac Roberts. Encouragement of the study of the wide and diffuse nebulae of William Herschel, the obscure objects of Barnard, or the cosmic clouds of R.P. Hagen. Biannual prize established by a donation of Dorothea Klumpke Roberts in honor of her late husband Isaac Roberts. Prize awarded 1931– Prix Marcel Moye. Recognition of a young member of the Society for his or her observations. Individuals must be 25 years of age or less. Annual prize awarded 1946–. Prix Marius Jacquemetton. Recognition of a work or research by a member of the Society, a student, or a young astronomer. Annual prize awarded 1947–present. Prix Viennet - Damien. Recognition of a beautiful piece of optics or for some work in this branch of astronomy. Given in alternate years with the Prix Dorothea Klumpke-Isaac Roberts. Prize awarded 1949– Prix Julien Saget. Recognition of an amateur for his or her remarkable astronomical photography. Annual prize awarded 1969–present. Prix Edmond Girard. Encouragement for a beginning vocation in astronomy or scientific exploration of the sky above the Observatoire de Juvisy. Annual prize awarded 1974–. Prix Camus - Waitz. Named in honor of Jacques Camus and Michel Waitz. Awarded – present. Prix Marguerite Clerc. The condition of attribution of this prize is left to the discretion of the SAF Council. Prix International d'Astronautique. Recognition of a study of interplanetary travel/astronautics. Prize established by Robert Esnault-Pelterie and André-Louis Hirsch. Prior to 1936, it was known as the Prix Rep-Hirsch. Given when merited. Prize awarded 1928–1939. Médaille des Anciens Présidents. Awarded when merited. Médaille Commémorative. Annual prize awarded 1901– Médaille du Soixantenaire. Recognition of members who achieve 60 continuous years of membership. Awarded when merited. Plaquette du Centenaire de Camille Flammarion. Recognition of eminent service to the Society. Annual prize awarded 1956–. The Parisian engraver Alphée Dubois (1831–1905) created several medals for the Société Astronomique de France, including the Medal of the Society "la Nuit étoilée" (1887), the Medal of the Prix des Dames (1896), the Medal of the Prix Janssen (1896), and the Society's Commemorative Medal. 
Presidents 1887–1889: Camille Flammarion, SAF founder, astronomer, author 1889–1891: Hervé Faye, astronomer 1892–1893: Anatole Bouquet de la Grye, hydrographic engineer, geographer, astronomer 1893–1895: Félix Tisserand, astronomer 1895–1897: Jules Janssen, astronomer 1897–1899: Alfred Cornu, physicist 1899–1901: Octave Callandreau, physicist 1901–1903: Henri Poincaré, mathematician, theoretical physicist, engineer, philosopher of science 1903–1904: Gabriel Lippmann, physicist, inventor 1905–1907: Chrétien Édouard Caspari, astronomer, hydrographic engineer 1907–1909: Henri-Alexandre Deslandres, astronomer 1909–1911: Benjamin Baillaud, astronomer 1911–1913: Pierre Puiseux, astronomer 1913–1919: Aymar de la Baume Pluvinel, astronomer 1919–1921: Paul Émile Appell, mathematician 1921–1923: Roland Bonaparte, French prince, President of the Société de Géographie 1923–1925: Charles Lallemand, geophysicist 1925–1927: Gustave-Auguste Ferrié, radio pioneer, army general 1927–1929: Eugène Fichot, hydrographer 1929–1931: Georges Perrier, army general, President of the Société de Géographie 1931–1933: Charles Fabry, physicist 1933–1935: Ernest Esclangon, astronomer, mathematician 1935–1937: Jules Baillaud, astronomer 1937–1939: Charles Maurain, geophysicist 1939–1945: Fernand Baldet, astronomer 1945–1947: Bernard Lyot, astronomer 1947–1949: André-Louis Danjon, astronomer 1949–1951: Lucien d'Azambuja, astronomer 1951–1953: Jean Cabannes, physicist 1953–1955: Pierre Chevenard, mining engineer 1955–1957: André Couder, astronomer, optical engineer 1957–1958: Albert Pérard, physicist, meteorologist 1958–1960: Jean Coulomb, geophysicist, mathematician 1960–1962: André Lallemand, astronomer 1962–1964: André-Louis Danjon, astronomer 1964–1966: Pierre Tardi, astronomer 1966–1970: Jean Rösch, astronomer 1970–1973: Jean Kovalevsky, astronomer 1973–1976: Jean-Claude Pecker, astronomer 1976–1979: Bruno Morando, astronomer 1979–1981: Audouin Dollfus, astronomer 1981–1984: Jacques Boulon, astronomer 1984–1987: Paul Simon, astronomer 1987–1993: Philippe de la Cotardière, writer, science journalist 1993–1997: Jean-Claude Ribes, radioastronomer 1997–2001: Roger Ferlet, astrophysicist 2001–2005: Patrick Guibert, engineer 2005–2014: Philippe Morel, medical doctor 2014–2021: Patrick Baradeau, historian, publisher 2021–present: Sylvain Bouley, planetary scientist Asteroid (4162) SAF French astronomer André Patry of the Observatoire de Nice named Asteroid (4162) SAF in the society's honor after he discovered the body on 24 November 1940. See also List of astronomical societies Societat Catalana de Gnomònica References External links Société astronomique de France official website L'Astronomie official website Web sites of SAF commissions Cosmology Double stars Instruments Planetary observations Sundials 1887 establishments in France Astronomy organizations Scientific organizations based in France Scientific organizations established in 1887 Astronomy in France
Société astronomique de France
Astronomy
2,193
27,377,195
https://en.wikipedia.org/wiki/11%20Lacertae
11 Lacertae is a star in the northern constellation of Lacerta. It is visible to the naked eye as a faint orange-hued star with an apparent visual magnitude of 4.46. It lies at a distance of about 350 light years and has an absolute magnitude of −0.54. The object is moving closer to the Earth with a heliocentric radial velocity of −10.9 km/s. This is an evolved giant star with a stellar classification of K2.5 III. It is a red clump giant, meaning it is fusing helium in its core after passing through the red giant branch. The star is 3.2 billion years old with 1.38 times the mass of the Sun and has expanded to 39 times the Sun's radius. It is radiating 280 times the Sun's luminosity from its enlarged photosphere at an effective temperature of 4,352 K. References K-type giants Horizontal-branch stars Lacerta BD+43 4266 Lacertae, 11 214868 111944 8632
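The quoted absolute magnitude can be checked against the distance modulus, M = m − 5·log10(d / 10 pc). A quick sketch (Python; both inputs are the rounded values quoted above, so the result agrees with the quoted −0.54 only to within a couple of tenths of a magnitude):

import math

m = 4.46             # apparent visual magnitude, from the article
d_pc = 350 / 3.2616  # "about 350 light years" converted to parsecs

M = m - 5 * math.log10(d_pc / 10)
print(round(M, 2))   # -0.69 with these rounded inputs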
11 Lacertae
Astronomy
217
9,500,722
https://en.wikipedia.org/wiki/Monilinia%20fructicola
Monilinia fructicola is a species of fungus in the order Helotiales. A plant pathogen, it is the causal agent of brown rot of stone fruits. Stone fruit (summer fruit) Stone fruits such as apricots and peaches originated in China and spread along old trade routes 3,000–4,000 years ago. Nectarines are more recent (at least 2,000 years). Cherries and European plums originated in Europe, although the Japanese plum originated in China. Trees exposed to cold in autumn and early spring can develop cankers under the bark of the trunk or branches. Cankers are usually associated with the production of amber-coloured gum that contains bacteria and oozes on to the outer bark. Unfortunately, there are few control methods for fungal spores apart from copper sprays. Symptoms Brown rot causes blossom blight, twig blight, twig canker, and fruit rot. Brown rot is caused by a fungus that produces spores, and can be a major problem during particularly wet seasons. Prolonged wet weather during bloom may result in extensive blossom infection. The length of wet periods required for blossom infection depends upon the temperature. Fruit trees are most at risk of infection in humid, wet conditions. Young green fruit can be infected just before autumn, but the infection often remains inactive until near maturity of the fruit. Brown rot can spread after harvest. Mature fruit can decay in only 2 days under warm conditions. Blossom Blight: Infected blossoms wilt, shrivel and become covered with greyish mould. Petals may appear light brown or water-soaked. Blighted blossoms do not produce fruit. Dead blossoms may stick to spurs and twigs until harvest, providing a source of spores for the fruit rot phase. Twig Blight and Canker: On peaches and apricots the infection may spread to twigs, causing brownish, oval cankers that may girdle and kill twigs. Fruit rot Fruit rot appears as small, circular brown spots that increase rapidly in size, causing the entire fruit to rot. Greyish spores appear in tufts on rotted areas. Infected fruit eventually turn into shrivelled, black mummies that may drop or remain attached to the tree through the winter. Brown rot can be serious on injured fruit such as cherries split by rain. Life cycle Overwintering: The fungus overwinters in mummified fruit on the ground or in the tree and in twig cankers. Spring Infection: Two types of spores are produced in spring that can infect blossoms. Conidia are produced on cankers and fruit mummies on the tree. Apothecia (small mushroom-like structures) form on mummies lying on the ground. The apothecia discharge ascospores during the bloom period, but do not contribute to fruit infection later in the season. Secondary Infection: Spores produced on blighted blossoms provide a source of infection for ripening fruit. Infected fruit become covered with greyish spores which spread by wind and rain to healthy fruit. Insects may also contribute to the spread of brown rot spores. Plant defenses A plant's first line of defense against infection is the physical barrier of the plant's "skin", the epidermis of the primary plant body and the periderm of the secondary plant body. This first defense system, however, is not impenetrable. Viruses, bacteria, and the spores and hyphae of fungi can still enter the plant through injuries or through the natural openings in the epidermis, such as stomata. Once a pathogen invades, the plant mounts a chemical attack as a second line of defense that destroys the pathogens and prevents their spread from the site of infection. 
This second defense system is enhanced by the plant's inherited ability to recognize certain pathogens. Elicitors: Oligosaccharins, derived from cellulose fragments released by cell wall damage, are one of the major classes of elicitors. Elicitors stimulate the production of antimicrobial compounds called phytoalexins. Infections also activate genes that produce PR proteins (pathogenesis-related proteins). Some of these proteins are antimicrobial, attacking molecules in the cell wall of a bacterium. Others may function as signals that spread “news” of the infection to nearby cells. Infection also stimulates the cross-linking of molecules in the cell wall and the deposition of lignin, responses that set up a local barricade that slows spread of the pathogen to other parts of the plant. Control Orchard sanitation, including removing fruit mummies and pruning any cankered or dead twigs, will reduce inoculum levels and improve the effectiveness of fungicide sprays. Treatment is primarily chemical, using fungicidal sprays to control the spread of the fungus. Spraying occurs during all phases: blossom, green fruit, and mature fruit. Stone fruit trees' only natural defenses are their “skin” and chemical responses to attack by the fungus, but these defenses are limited, so spraying and orchard sanitation are the best ways to control the spread of the fungus. References Fungi described in 1883 Fungal plant pathogens and diseases Stone fruit tree diseases Sclerotiniaceae Fungus species
Monilinia fructicola
Biology
1,055
62,571,409
https://en.wikipedia.org/wiki/Dihydrocaffeic%20acid
Dihydrocaffeic acid (DHCA; systematic name 3-(3,4-dihydroxyphenyl)propionic acid) is a phytochemical found in grapes and other plants. DHCA is known to lower IL-6 production through downregulation of DNMT1 expression and inhibition of DNA methylation of the IL-6 gene in mice. DHCA in combination with malvidin-3′-O-glucoside (Mal-gluc) is effective in promoting resilience against stress by modulating brain synaptic plasticity and peripheral inflammation. DHCA/Mal-gluc also significantly lowered depression-like phenotypes in mice that had increased peripheral inflammation caused by transplantation of hematopoietic progenitor cells from other, more stress-susceptible mice. References Phenylpropanoids Phenolic acids
Dihydrocaffeic acid
Chemistry
187
2,995,992
https://en.wikipedia.org/wiki/Duoplasmatron
The duoplasmatron is an ion source in which a cathode filament emits electrons into a vacuum chamber. A gas such as argon is introduced in very small quantities into the chamber, where it becomes charged or ionized through interactions with the free electrons from the cathode, forming a plasma. The plasma is then accelerated through a series of at least two highly charged grids, and becomes an ion beam, moving at a fairly high speed from the aperture of the device. History The duoplasmatron was first developed in 1956 by Manfred von Ardenne to provide a powerful source of gas ions. Other contributors such as Demirkanov, Frohlich and Kistemaker continued development between 1959 and 1965. Throughout the 1960s, many researchers continued to investigate the device, discovering negative ion extraction and multiply charged ion production. There are two types of plasmatron, the uniplasmatron and the duoplasmatron; the prefix refers to the degree of constriction of the discharge. Operation The standard duoplasmatron consists of three main components: the hot cathode, the intermediate electrode, and the anode. The intermediate electrode's main job is to produce the discharge, which is confined to a small region near the anode by a short magnetic field between the intermediate electrode and the anode. The duoplasmatron has two different plasmas: the cathode plasma, close to the cathode, and the anode plasma, close to the anode. The cathode works by injecting a beam of electrons with a suitable amount of energy. This injection ionizes the gas molecules, typically argon, in the anode region and increases the potential near the anode. The ions that are repelled, however, combine with the ions that have enough energy to pass the deceleration region, and this combination of ions fills the expansion cup with directed ions and electrons. The best operational mode for the duoplasmatron is considered to be when the cathode is adjusted to an emission at which the intermediate electrode and cathode potentials are approximately equal. Applications The duoplasmatron is a type of ion source. Ion sources are necessary to form ions for mass spectrometers and other types of instruments. In comparison to Penning ionization sources, the duoplasmatron offers advantages such as lower cost, easier handling, and a longer lifetime. However, it has lower beam intensity, which can be a significant disadvantage. References Further reading Brown, I.G., "The Physics and Technology of Ion Sources", Wiley-VCH (2004), p. 110 Dass, Chhabil (24 August 2006). Fundamentals of Contemporary Mass Spectrometry. John Wiley & Sons, Inc. . External links Duoplasmatron-Animation Ion source Plasma technology and applications
Duoplasmatron
Physics
604
21,346,408
https://en.wikipedia.org/wiki/Materials%20Science%20and%20Engineering%20R%3A%20Reports
Materials Science and Engineering R: Reports is a monthly peer-reviewed scientific journal. It is the review section of Materials Science and Engineering and is published by Elsevier. It was established in 1993, when the journal Materials Science Reports was split into Materials Science and Engineering C and Materials Science and Engineering R: Reports. According to the Journal Citation Reports, Materials Science and Engineering R: Reports has a 2020 impact factor of 36.214, ranking it 3rd out of 160 in the category Physics, Applied. References External links Physics review journals Materials science journals Elsevier academic journals Academic journals established in 1993 English-language journals Monthly journals
Materials Science and Engineering R: Reports
Materials_science,Engineering
125
593,231
https://en.wikipedia.org/wiki/Trunk%20%28botany%29
In botany, the trunk (or bole) is the stem and main wooden axis of a tree, which is an important feature in tree identification, and which often differs markedly from the bottom of the trunk to the top, depending on the species. The trunk is the most important part of the tree for timber production. Occurrence Trunks occur both in "true" woody plants and non-woody plants such as palms and other monocots, though the internal physiology is different in each case. In all plants, trunks thicken over time due to the formation of secondary growth (or, in monocots, pseudo-secondary growth). Trunks can be vulnerable to damage, including sunburn. Vocabulary Trunks cut down for lumber are generally called logs; if cut to a specific length, they are called bolts. The term "log" is used informally in English for any felled trunk with its roots detached. A stump is the part of a trunk remaining in the ground after the tree has been felled, or the earth end of an uprooted tree that retains its unearthed roots. Structure of the trunk The trunk consists of five main parts: the outer bark, inner bark (phloem), cambium, sapwood (live xylem), and heartwood (dead xylem). From the outside of the tree working in: the first layer is the outer bark, the protective outermost layer of the trunk. Under this is the inner bark, which is called the phloem. The phloem is how the tree transports nutrients between the roots and the shoots. The next layer is the cambium, a very thin layer of undifferentiated cells that divide to replenish the phloem cells on the outside and the xylem cells on the inside. The cambium contains the growth meristem of the trunk. Directly inside the cambium is the sapwood, or live xylem cells. These cells transport water through the tree. The xylem also stores starch inside the tree. At the center of the tree is the heartwood, made up of dead xylem cells that have been filled with resins and minerals; these keep other organisms from infecting and growing in the center of the tree. See also Log (disambiguation) Tree measurement Tree volume measurement References External links Plant morphology
Trunk (botany)
Biology
523
49,252,335
https://en.wikipedia.org/wiki/Cryomyces%20antarcticus
Cryomyces antarcticus is a fungus of uncertain placement in the class Dothideomycetes, division Ascomycota. Found in Antarctica, it was described as new to science in 2005. It has been found to be able to survive the harsh outer space environment and cosmic radiation. A proposed mechanistic contributor to the unique resilience observed in C. antarcticus is the presence of its thick and highly melanized cell walls. This melanin may act to protect DNA from damage while C. antarcticus is exposed to conditions that are unsuitable for typical DNA repair systems to function. References External links Enigmatic Dothideomycetes taxa Fungi described in 2005 Fungi of Antarctica Fungus species
Cryomyces antarcticus
Astronomy,Biology
144
23,614,279
https://en.wikipedia.org/wiki/Olation
In inorganic chemistry, olation is the process by which metal ions form polymeric oxides in aqueous solution. The phenomenon is important for understanding the relationship between metal aquo complexes and metal oxides, which are represented by many minerals. At low pH, many metal ions exist in aqueous solution as aquo complexes with the formula [M(H2O)6]3+. As the pH increases, one O–H bond ionizes to give the hydroxide complex, the conjugate base of the parent hexaaqua complex: [M(H2O)6]3+ ⇌ [M(H2O)5(OH)]2+ + H+ The hydroxo complex is poised to undergo olation, which is initiated by displacement of one water by the hydroxide ligand on another complex: [M(H2O)5(OH)]2+ + [M(H2O)6]3+ → [(H2O)5M(μ-OH)M(H2O)5]5+ + H2O In this product, the hydroxide ligand bridges between the two metals; this bridge is denoted with the symbol μ. In the resulting 5+ ion, the remaining water and hydroxo ligands are highly acidic, and the ionization and condensation processes can continue at still higher pH. The conversion of a hydroxo bridge into an oxo bridge is a process called oxolation: LnM(μ-OH)MLn → LnM(μ-O)MLn + H+, where L = ligand Ultimately, repeated olation and oxolation lead to metal oxides, e.g.: 2 [M(H2O)6]3+ → M2O3 + 6 H+ + 9 H2O Olation and oxolation are responsible for the formation of many natural and synthetic materials. Such materials are usually insoluble polymers, but some, the polyoxometallates, are discrete and molecular. Olation and leather tanning One application where olation is important is leather tanning using chromium(III) sulfate. This salt dissolves to give the hexaaquachromium(III) cation, [Cr(H2O)6]3+, and sulfate anions. [Cr(H2O)6]3+ acts as an acid according to the reaction: [Cr(H2O)6]3+ ⇌ [Cr(H2O)5OH]2+ + H+; Keq ≈ 10−4 M Thus, higher pH favors [Cr(H2O)5OH]2+. This hydroxy complex can undergo olation: [Cr(H2O)6]3+ + [Cr(H2O)5OH]2+ → [(Cr(H2O)5)2(μ-OH)]5+ + H2O 2 [Cr(H2O)5OH]2+ → [(Cr(H2O)4)2(μ-OH)2]4+ + 2 H2O The "diol" (second reaction) is favored and is accelerated by heat and high pH. The balance of these two factors, the temperature and pH of the solution, along with the concentration of chromium(III), influences the continued polymerization of [(Cr(H2O)4)2(μ-OH)2]4+. The hydroxo-bridged chromium(III) dimer is susceptible to oxolation: [(Cr(H2O)4)2(μ-OH)2]4+ → [(Cr(H2O)4)2(μ-O)2]2+ + 2 H+ Products of oxolation are less susceptible to acidic cleavage than the hydroxo bridge. The resulting clusters are active in crosslinking the protein in tanning, which essentially involves the cross-linking of the collagen subunits. The actual chemistry of [Cr(H2O)6]3+ is more complex in the tanning bath than in water due to the presence of a variety of ligands. Some ligands include the sulfate anion, the collagen's carboxyl groups, amine groups from the side chains of the amino acids, as well as "masking agents." Masking agents are carboxylic acids, such as acetic acid, used to suppress formation of polychromium(III) chains. Masking agents allow the tanner to further increase the pH to increase collagen's reactivity without inhibiting the penetration of the chromium(III) complexes. The crosslinks formed by the polychromium species are approximately 17 Å long. References
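The pH dependence quoted above (Keq ≈ 10−4 M for the first ionization of [Cr(H2O)6]3+) can be made concrete with a short calculation. The following sketch is illustrative only: the constant is the one stated in the text, the function name is hypothetical, and further hydrolysis and olation steps are deliberately ignored.

```python
# Sketch: fraction of Cr(III) present as [Cr(H2O)5OH]2+ versus pH, using
# only the single equilibrium quoted in the text:
#   [Cr(H2O)6]3+  <=>  [Cr(H2O)5OH]2+ + H+,   Keq ~ 1e-4 M
K_EQ = 1e-4  # equilibrium constant from the text, mol/L

def hydrolysed_fraction(pH: float) -> float:
    """Fraction [Cr(OH)] / ([Cr] + [Cr(OH)]) at a given pH (illustrative)."""
    h = 10.0 ** (-pH)     # [H+] in mol/L
    ratio = K_EQ / h      # [Cr(H2O)5OH 2+] / [Cr(H2O)6 3+] by mass action
    return ratio / (1.0 + ratio)

for pH in (2.0, 3.0, 4.0, 5.0):
    print(f"pH {pH}: {100 * hydrolysed_fraction(pH):.0f}% hydrolysed")
# pH 2 -> ~1%, pH 4 -> 50%, pH 5 -> ~91%: raising the pH strongly favours
# the hydroxo complex that initiates olation, as the text states.
```

This is why tanners raise the bath pH (with masking agents to keep the chromium penetrating): a shift of one pH unit changes the speciation by an order of magnitude.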
Olation
Chemistry
857
22,949,908
https://en.wikipedia.org/wiki/Isolator%20%28microwave%29
An isolator is a two-port device that transmits microwave or radio frequency power in one direction only. The non-reciprocity observed in these devices usually comes from the interaction between the propagating wave and the material, which can differ with respect to the direction of propagation. It is used to shield equipment on its input side from the effects of conditions on its output side; for example, to prevent a microwave source being detuned by a mismatched load. Non-reciprocity An isolator is a non-reciprocal device, with a non-symmetric scattering matrix. An ideal isolator transmits all the power entering port 1 to port 2, while absorbing all the power entering port 2, so that to within a phase factor its S-matrix is $S = \begin{pmatrix} 0 & 0 \\ 1 & 0 \end{pmatrix}$ To achieve non-reciprocity, an isolator must necessarily incorporate a non-reciprocal material. At microwave frequencies, this material is usually a ferrite which is biased by a static magnetic field, but it can be a self-biased material. The ferrite is positioned within the isolator such that the microwave signal presents it with a rotating magnetic field, with the rotation axis aligned with the direction of the static bias field. The behaviour of the ferrite depends on the sense of rotation with respect to the bias field, and hence is different for microwave signals travelling in opposite directions. Depending on the exact operating conditions, the signal travelling in one direction may be phase-shifted, displaced from the ferrite, or absorbed. Types The most common types of ferrite-based isolators are classified into four categories: terminated circulators, Faraday rotation isolators, field-displacement isolators, and resonance isolators. In all these kinds of devices, the observed non-reciprocity arises from the wave-material interaction, which depends on the direction of propagation. Resonance absorption In this type the ferrite absorbs energy from the microwave signal travelling in one direction. A suitable rotating magnetic field is found in the dominant TE10 mode of rectangular waveguide. The rotating field exists away from the centre-line of the broad wall, over the full height of the guide. However, to allow heat from the absorbed power to be conducted away, the ferrite does not usually extend from one broad-wall to the other, but is limited to a shallow strip on each face. For a given bias field, resonance absorption occurs over a fairly narrow frequency band, but since in practice the bias field is not perfectly uniform throughout the ferrite, the isolator functions over a somewhat wider band. Field displacement This type is superficially very similar to a resonance absorption isolator, but the magnetic biasing differs, and the energy from the backward travelling signal is absorbed in a resistive film or card on one face of the ferrite block rather than within the ferrite itself. The bias field is weaker than that necessary to cause resonance at the operating frequency, but is instead designed to give the ferrite near-zero permeability for one sense of rotation of the microwave signal field. The bias polarity is such that this special condition arises for the forward signal; the backward signal sees the ferrite as an ordinary dielectric material (with little permeability, as the ferrite is already saturated by the bias field). Consequently, for the electromagnetic field of the forward signal, the ferrite has very low characteristic wave impedance, and the field tends to be excluded from the ferrite.
This results in a null of the electric field of the forward signal on the surface of the ferrite where the resistive film is placed. Conversely, for the backward signal, the electric field is strong over this surface, and so its energy is dissipated in driving current through the film. In rectangular waveguide the ferrite block will typically occupy the full height from one broad-wall to the other, with the resistive film on the side facing the centre-line of the guide. Terminated circulator A circulator is a non-reciprocal three- or four-port device, in which power entering any port is transmitted to the next port in rotation (only). So to within a phase factor, the scattering matrix for a three-port circulator is $S = \begin{pmatrix} 0 & 0 & 1 \\ 1 & 0 & 0 \\ 0 & 1 & 0 \end{pmatrix}$ A two-port isolator is obtained simply by terminating one of the three ports with a matched load, which absorbs all the power entering it. The biased ferrite is part of the circulator and causes a differential phase-shift for signals travelling in different directions. The bias field is lower than that needed for resonance absorption, and so this type of isolator does not require such a heavy permanent magnet. Because the power is absorbed in an external load, cooling is less of a problem than with a resonance absorption isolator. Faraday rotation isolator A final physical principle used to design isolators is Faraday rotation. When a linearly polarized wave propagates through a ferrite having a magnetization aligned with the direction of propagation of the wave, the polarization plane rotates along the propagation axis. This rotation may be used to create microwave devices such as isolators, circulators, and gyrators. In rectangular waveguide topology, it also requires circular waveguide sections that come out of the device plane. See also Power dividers and directional couplers Circulator Gyrator References External links Circulators and Isolators Telecommunications engineering Microwave technology
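As a numerical cross-check of the two scattering matrices above, the short sketch below (Python with NumPy; the matrices are the standard ideal forms, and the variable names are illustrative) verifies the non-reciprocity condition S ≠ Sᵀ and shows that match-terminating port 3 of an ideal circulator leaves exactly the two-port isolator response.

```python
import numpy as np

# Ideal two-port isolator, to within a phase factor: S21 = 1 (forward
# transmission), S12 = 0 (reverse transmission blocked).
S_iso = np.array([[0, 0],
                  [1, 0]], dtype=complex)

# Ideal three-port circulator: power entering port n exits port n+1.
S_circ = np.array([[0, 0, 1],
                   [1, 0, 0],
                   [0, 1, 0]], dtype=complex)

# A reciprocal network has a symmetric S-matrix; neither of these does.
print("isolator reciprocal?  ", np.allclose(S_iso, S_iso.T))    # False
print("circulator reciprocal?", np.allclose(S_circ, S_circ.T))  # False

# Terminating port 3 with a matched load (reflection coefficient 0) means
# any wave routed to port 3 is absorbed, so the reduced two-port network
# is just the upper-left 2x2 block -- the isolator matrix.
print("terminated circulator == isolator?",
      np.allclose(S_circ[:2, :2], S_iso))                       # True
```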
Isolator (microwave)
Engineering
1,133
476,960
https://en.wikipedia.org/wiki/Paramylon
Paramylon is a carbohydrate similar to starch. The chloroplasts found in Euglena contain chlorophyll, which aids in the synthesis of carbohydrates to be stored as starch granules and paramylon. Overview Paramylon is made in the pyrenoids of Euglena. The euglenoids have chlorophylls a and b, and they store their photosynthate in an unusual form called paramylon starch, a β-1,3 polymer of glucose. The paramylon is stored in rod-like bodies throughout the cytoplasm, called paramylon bodies, which are often visible as colorless or white particles in light microscopy. Their shape is often characteristic of the Euglena species that produces them. Paramylon is also reportedly made in granules by Pavlovophyceae haptophytes. Paramylon was named and first described in detail by Johann Gottlieb in 1850, based on Gottlieb's scientific exchange with Ludwig Karl Schmarda. See also Phytoglycogen References External links Polydextrose Resources Polysaccharides
Paramylon
Chemistry
250
1,921,333
https://en.wikipedia.org/wiki/KARMEN
KARMEN (KArlsruhe Rutherford Medium Energy Neutrino experiment) was a detector associated with the ISIS synchrotron at the Rutherford Appleton Laboratory. Neutrinos for study were supplied via the decay of pions produced when a proton beam strikes a target. It operated from 1990 until March 2001, observing the appearance and disappearance of electron neutrinos. KARMEN searched for neutrino oscillations, with implications for the existence of sterile neutrinos. Results Limits were set on neutrino oscillation parameters. The KARMEN results disagreed with the LSND experiment and were followed up by MiniBooNE. References External links KARMEN: Official project homepage, including a list of papers discussing the time anomaly and its possible interpretations. Accelerator neutrino experiments Nuclear research institutes in the United Kingdom Research institutes in Oxfordshire Science and Technology Facilities Council Vale of White Horse
KARMEN
Physics
185
55,205,800
https://en.wikipedia.org/wiki/Pirenperone
Pirenperone (, , ; developmental code names R-47456, R-50656) is a serotonin receptor antagonist described as an antipsychotic and tranquilizer which was never marketed. It is a relatively selective antagonist of the serotonin 5-HT2 receptors and has been used in scientific research to study the serotonin system. In the 1980s, the drug was found to block the effects of lysergic acid diethylamide (LSD) in animals, and, along with ketanserin, led to the elucidation of the 5-HT2A receptor as the biological mediator of the effects of serotonergic psychedelics. See also Altanserin Ketanserin Setoperone Risperidone Ocaperidone References 5-HT2 antagonists Abandoned drugs Antipsychotics Anxiolytics 4-Fluorophenyl compounds Ketones Piperidines Pyridopyrimidines
Pirenperone
Chemistry
210
25,662,069
https://en.wikipedia.org/wiki/Structure%20%28journal%29
Structure is a monthly peer-reviewed scientific journal established in September 1993 by Wayne Hendrickson, Carl-Ivar Brändén, and Alan R. Fersht. It focuses on structural biology, studies of macromolecular structure, and related issues. In early 1999, the journal merged with Folding & Design and the name changed to Structure with Folding & Design. In 2001, the journal reverted to Structure. The journal is published by Cell Press. Christopher D. Lima and Andrej Sali served as editors-in-chief from 2003 to October 2021; the journal is now edited by an in-house team at Cell Press, with Karin Kühnel as editor-in-chief. External links Biochemistry journals Cell Press academic journals Academic journals established in 1993 Monthly journals English-language journals
Structure (journal)
Chemistry
162
71,645,045
https://en.wikipedia.org/wiki/Joseph%20Melia
Joseph Melia is a philosopher working in the areas of philosophy of mathematics, modal logic and possible worlds. He has made important contributions to the debate over the Quine–Putnam indispensability argument, where he argues for a "weaseling" approach to mathematical nominalism. He has also argued against modalism and the modal realism of David Lewis. References Philosophers of mathematics 20th-century British philosophers Year of birth missing (living people) Living people 21st-century British philosophers
Joseph Melia
Mathematics
101
72,667,533
https://en.wikipedia.org/wiki/Human%20chimera
A human chimera is a human with a subset of cells whose genotype is distinct from that of the body's other cells; that is, a human exhibiting genetic chimerism. In contrast, an individual in which each cell contains genetic material from both a human and an animal is called a human–animal hybrid, while an organism that contains a mixture of human and non-human cells would be a human-animal chimera. Mechanisms Some consider mosaicism to be a form of chimerism, while others consider them to be distinct. Mosaicism involves a mutation of the genetic material in a cell, giving rise to a subset of cells that are different from the rest. Natural chimerism results from the fusion of more than one fertilized zygote in the early stages of prenatal development. It is much rarer than mosaicism. In artificial chimerism, an individual has one cell lineage that was inherited genetically at the time of the formation of the human embryo and another that was introduced through a procedure such as organ transplantation or blood transfusion. Specific types of transplants that can induce this condition include bone marrow transplants and organ transplants, as the recipient's body essentially works to permanently incorporate the new blood stem cells. Examples Natural chimerism Natural chimerism has been documented in humans in several instances. The Dutch sprinter Foekje Dillema was expelled from the 1950 national team after she refused a mandatory sex test in July 1950; later investigations revealed a Y-chromosome in her body cells, and the analysis showed that she was probably a 46,XX/46,XY mosaic female. In 1953, a human chimera was reported in the British Medical Journal. A woman was found to have blood containing two different blood types. Apparently this resulted from her twin brother's cells living in her body. A 1996 study found that such blood group chimerism is not rare. In 2002, an article in the New England Journal of Medicine described a woman, later identified as Karen Keegan, in whom tetragametic chimerism was unexpectedly identified after she underwent preparations for a kidney transplant. Those preparations required the patient and her immediate family to undergo histocompatibility testing, the results of which suggested that she was not the biological mother of two of her three children. In 2002, Lydia Fairchild was denied public assistance in Washington state when DNA evidence appeared to show that she was not the mother of her children. A lawyer for the prosecution heard of the case of Karen Keegan in New England and suggested the possibility to the defense, who were able to show that Fairchild, too, was a chimera with two sets of DNA, and that one of those sets could have produced the children. In 2009, singer Taylor Muhl's large torso birthmark was diagnosed as resulting from chimerism. Non-intentional chimerism related to treatments Several cases of chimera phenomena have been reported in bone marrow recipients. In 2019, the blood and seminal fluid of a man in Reno, Nevada (who had undergone a vasectomy) exhibited only the genetic content of his bone marrow donor. Swabs from his lips, cheek and tongue showed mixed DNA content. The DNA content of semen from an assault case in 2004 matched that of a man who had been in prison at the time of the assault, but who had been a bone marrow donor for his brother, who was later determined to have committed the crime. In 2008, a man was killed in a traffic accident that occurred in Seoul, South Korea.
A DNA analysis to identify him revealed that his blood, along with some of his organs, appeared to show that he was female. It was later determined that he had received a bone marrow transplant from his daughter. Another instance of treatment-related human chimerism was published in 1998, in which a male human had some partially developed female organs due to chimerism. He had been conceived by in-vitro fertilization. Human-animal chimeras Human-animal chimeras include humans having undergone non-human to human xenotransplantation, which is the transplantation of living cells, tissues or organs from one species to another. Patient-derived xenografts are created by xenotransplantation of human tumor cells into immunocompromised mice, a research technique frequently used in pre-clinical oncology research. The first stable human-animal chimeras were created by Shanghai Second Medical University scientists in 2003, by fusing human cells with rabbit eggs. In 2017, a human-pig chimera was reported to have been created; the chimera was reported to be 0.001% human cells, with the balance being pig. The embryo thus consisted mostly of pig cells and some human cells. Scientists stated that they hope to use this technology to address the shortage of donor organs. In 2021, a human-monkey chimera was created as a joint project between the Salk Institute in the US and Kunming University in China and published in the journal Cell. This involved injecting human stem cells into monkey embryos. The embryos were only allowed to grow for a few days, but the study demonstrated that some of these embryos still had human stem cells surviving at the end of the experiments. Because humans are more closely related to monkeys than to other animals, the chimeric embryos have a greater chance of surviving for longer periods so that organs can develop. The project has opened up possibilities for organ transplantation as well as ethical concerns, particularly concerning human brain development in primates. Chimera identification Non-artificial chimerism has traditionally been considered rare due to the low number of reported cases in the medical literature. However, this may simply be because people are often unaware of the condition. There are usually no signs or symptoms of chimerism other than a few physical ones, such as hyper-pigmentation, hypo-pigmentation, Blaschko's lines, body asymmetry or heterochromia iridum (possessing two different colored eyes). However, these signs do not necessarily mean an individual is a chimera and should only be seen as possible symptoms. Forensic investigation or curiosity over an unexpected maternity/paternity DNA test result usually leads to the accidental discovery of this condition. A DNA test, which usually consists of either a swift cheek swab or a blood test, reveals the once-unknown second genome, thereby identifying that individual as a chimera. Chimerism and intersex The concept of a "human hermaphrodite" resulting from chimerism is largely a misconception. Most intersex individuals are not chimeras, and most human chimeras are not observed to have intersex traits.
Theoretically, if a gynandromorphic human chimera were to have fully functioning male and female gonad tissue, such an individual could self-fertilize; this hypothesis is backed by the fact that hermaphroditic animal species commonly reproduce in this way, and it has been observed in a rabbit. However, no such case of functional self-fertilization has ever been documented in humans; and it is non-existent or extremely rare in mammals, especially in humans. While humans are known to have sex characteristics that diverge from typical males or typical females, these individuals fall under the social umbrella of intersex conditions and traits, and some consider the term "hermaphrodite" to be a slur when applied to them. Legislation The Human Chimera Prohibition Act On 11 July 2005, a bill known as The Human Chimera Prohibition Act was introduced into the United States Congress by Senator Samuel Brownback; however, it died in Congress sometime in the next year. The bill was introduced based on findings that science had progressed to the point where human and nonhuman species could be merged to create new forms of life. Because of this, ethical issues might arise as the line blurred between humans and other animals, and according to the bill with this blurring of lines would come a show of disrespect for human dignity. The final claim brought up in The Human Chimera Prohibition Act was that there was an increasing amount of zoonotic diseases, and that the creation of human-animal chimeras might allow these diseases to reach humans. On 22 August 2016, another bill, The Human-Animal Chimera Prohibition Act of 2016, was introduced to the United States House of Representatives by Christopher H. Smith. It identified a human-animal chimera as: a human embryo into which a nonhuman cell or cells (or the component parts thereof) had been introduced to render the embryo's membership in the species Homo sapiens uncertain; a chimera human/animal embryo produced by fertilizing a human egg with nonhuman sperm; a chimera human/animal embryo produced by fertilizing a nonhuman egg with human sperm; an embryo produced by introducing a nonhuman nucleus into a human egg; an embryo produced by introducing a human nucleus into a nonhuman egg; an embryo containing at least haploid sets of chromosomes from both a human and a nonhuman life form; a nonhuman life form engineered such that human gametes developed within the body of a nonhuman life form; or a nonhuman life form engineered such that it contained a human brain or a brain derived wholly or predominantly from human neural tissues. The bill would have prohibited the attempts to create a human-animal chimera, the transfer or attempt to transfer a human embryo into a nonhuman womb, the transfer or attempt to transfer a nonhuman embryo into a human womb, and the transport or receipt of an animal chimera for any purpose. Proposed penalties for violations of this bill included fines and/or imprisonment of up to 10 years. The bill was referred to the Subcommittee on Crime, Terrorism, Homeland Security, and Investigations on October 11, 2016, but died there. Patenting In the U.S., efforts into creating a chimeric entity appeared to be legal when the topic first came up. Developmental biologist Stuart Newman, a professor at New York Medical College in Valhalla, N.Y., applied for a patent on a human-animal chimera in 1997 as a challenge to the U.S. Patent and Trademark Office and the U.S. 
Congress, motivated by his moral and scientific opposition to the notion that living things can be patented at all. Prior legal precedent had established that genetically engineered entities, in general, could be patented, even if they were based on beings occurring in nature. After a seven-year process, Newman's patent finally received a flat rejection. The legal process had created a paper trail of arguments, giving Newman what he claimed was a victory. The Washington Post ran an article on the controversy that stated that it had raised "profound questions about the differences—and similarities—between humans and other animals, and the limits of treating animals as property." References Reproduction Intersex healthcare Genetic anomalies Twin
Human chimera
Biology
2,299
5,119,115
https://en.wikipedia.org/wiki/3%20Centauri
3 Centauri is a triple star system in the southern constellation of Centaurus, located approximately 300 light years from the Sun. It is visible to the naked eye as a faint, blue-white hued star with a combined apparent visual magnitude of 4.32. As of 2017, the two visible components had an angular separation of along a position angle of 106°. The system has the Bayer designation k Centauri; 3 Centauri is the Flamsteed designation. It was a suspected eclipsing binary with the variable star designation V983 Centauri; however, the AAVSO website lists it as non-variable, formerly suspected to be variable. The brighter member, designated component A, is a magnitude 4.52 chemically peculiar star of the helium-weak (CP4) variety, and has a stellar classification of B5 III-IVp. The spectrum of the star displays overabundances of elements such as nitrogen, phosphorus, manganese, iron, and nickel, while carbon, oxygen, magnesium, aluminium, sulfur, and chlorine appear underabundant relative to the Sun. Weak emission line features are also visible. The magnitude 5.97 secondary, component B, is a single-lined spectroscopic binary system with an orbital period of 17.4 days and an eccentricity of 0.21. The pair have an angular separation of . The visible component is a B-type main-sequence star with a class of B8 V. References B-type main-sequence stars B-type giants Triple star systems Centaurus Centauri, k CD-32 9676 Centauri, 3 120709/10 067669 5210/1 Helium-weak stars
3 Centauri
Astronomy
359
37,761,115
https://en.wikipedia.org/wiki/Ferndale%20Refinery
The Ferndale Refinery is an oil refinery near Ferndale, Washington, United States, that is owned by Phillips 66. It is located in the Cherry Point Industrial Zone west of Ferndale and had a capacity of 101,000 barrels per day in 2015, 64th largest in the nation. The Ferndale Refinery produces predominantly transportation fuels consumed in local markets and also includes secondary processing facilities such as a fluid catalytic cracker, an alkylation unit, hydrotreating units, and a naphtha reformer. The plant follows a 10-5-3-2 crack spread, meaning that for ten barrels of crude feedstock, the refinery produces five barrels of gasoline, three barrels of distillate, and two barrels of fuel oil. About north of Seattle, the Ferndale Refinery was the first of five refineries in Washington. Built by General Petroleum Corp in 1954, its original capacity was rated at 35,000 barrels per stream day. General Petroleum was a subsidiary of Socony (Standard Oil Company of New York) and was integrated into Mobil Chemical Co when the company formed in 1960. BP took control of the refinery in 1988 when its wholly owned subsidiary Sohio received the plant from Mobil Oil in exchange for $152.5 million and crude oil inventories. In 1993, Tosco Corp, a California-based downstream and marketing corporation, bought the refinery from BP. The deal included BP's retail stations and marketing assets across Washington and Oregon. Phillips Petroleum Company purchased Tosco for $7 billion in February 2001 and assumed control of the refinery thereafter. With the deal's close, Phillips became the second largest refiner in the U.S. and obtained refineries on both coasts. Even after the Tosco purchase, Phillips sought further expansion. Phillips and Conoco announced a merger in November 2001, forming ConocoPhillips, the new controlling entity of the Ferndale Refinery. This new supermajor boasted the nation's largest downstream system (as of 2001). In 2012 ConocoPhillips spun off its downstream and midstream assets as a new independent energy company, Phillips 66, which still operates the Ferndale Refinery. ConocoPhillips became the second company to abandon the vertically integrated model, following Marathon Oil Corporation's decision to spin off its downstream assets in 2011. The Ferndale refinery receives a portion of its crude oil from the Amazon River Basin of South America, a concern of many environmentalists. In 2015, it was refining 989 barrels per day of oil from the Amazon. See also Petroleum refining in Washington state Cherry Point Refinery Shell Anacortes Refinery References Phillips 66 Oil refineries in Washington Energy infrastructure in Washington (state) 1954 establishments in Washington (state) Buildings and structures in Whatcom County, Washington Energy infrastructure completed in 1954
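The 10-5-3-2 yield pattern quoted above lends itself to a quick arithmetic illustration. In the sketch below (Python), only the 10-5-3-2 ratios come from the text; all prices are made-up placeholders, so the resulting spread is purely illustrative.

```python
# Sketch of a 10-5-3-2 crack-spread calculation: every 10 barrels of crude
# yield 5 bbl gasoline, 3 bbl distillate and 2 bbl fuel oil (per the text).
CRUDE_BBL = 10
YIELDS = {"gasoline": 5, "distillate": 3, "fuel_oil": 2}

prices = {  # hypothetical $/bbl, for illustration only
    "crude": 80.0, "gasoline": 100.0, "distillate": 95.0, "fuel_oil": 60.0,
}

revenue = sum(bbl * prices[product] for product, bbl in YIELDS.items())
cost = CRUDE_BBL * prices["crude"]
print(f"crack spread: ${(revenue - cost) / CRUDE_BBL:.2f} per barrel")
# revenue = 5*100 + 3*95 + 2*60 = 905; cost = 800 -> $10.50 per barrel.
```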
Ferndale Refinery
Chemistry
574
73,655,972
https://en.wikipedia.org/wiki/Hwang%20affair
The Hwang affair, or Hwang scandal, or Hwanggate, is a case of scientific misconduct and ethical issues surrounding a South Korean biologist, Hwang Woo-suk, who claimed to have created the first human embryonic stem cells by cloning in 2004. Hwang and his research team at Seoul National University reported in the journal Science that they had successfully developed a somatic cell nuclear transfer method with which they made the stem cells. In 2005, they published again in Science the successful cloning of 11 person-specific stem cell lines using 185 human eggs. The research was hailed as "a ground-breaking paper" in science. Hwang was elevated as "the pride of Korea", "national hero" [of Korea], and a "supreme scientist", to international praise and fame. Recognitions and honours immediately followed, including South Korea's Presidential Award in Science and Technology, and Time magazine listing him among the "People Who Mattered 2004" and the most influential people "The 2004 Time 100". Suspicion and controversy arose in late 2005, when Hwang's collaborator, Gerald Schatten at the University of Pittsburgh, learned of the real source of the oocytes (egg cells) used in the 2004 study. The eggs, reportedly from several voluntary donors, were in fact from two of Hwang's own researchers, a fact which Hwang denied. The ethical issues prompted Schatten to immediately break his ties with Hwang. In December 2005, a whistleblower informed Science that the same data had been reused. As the journal investigated, far more extensive data fabrication was revealed. Seoul National University immediately investigated the research and found that both the 2004 and 2005 papers contained fabricated results. Hwang was compelled to resign from the university, and publicly confessed in January 2006 that the research papers were based on fabricated data. Science immediately retracted the two papers. In 2009, the Seoul Central District Court convicted Hwang of embezzlement and bioethical violations, sentencing him to two years' imprisonment. The incident was recorded as the scandal that "shook the world of science," and became "one of the most widely reported and universally disappointing cases of scientific fraud in history". Background Hwang Woo-suk was a professor of veterinary biotechnology at Seoul National University who specialised in stem cell research. In 1993, he devised an in vitro fertilisation method with which he achieved the first assisted reproduction in cows. He rose to public notice in 1999 when he announced that he had successfully cloned a dairy cow, named Yeongrong-i, and a few months later, a Korean cow, Jin-i (also reported as Yin-i). The following year, he announced preparations for cloning an endangered Siberian tiger. It was a failed attempt, but his popularity with the Korean public nonetheless escalated from wide media coverage. In 2002, he claimed creation of a genetically modified pig that could be used for human organ transplants. In 2003, he announced the successful cloning of a BSE (bovine spongiform encephalopathy)-resistant cow. However, sceptics raised concerns over the absence of research papers for any of these claims. 2004 human cell cloning In 2004, Hwang announced the first complete cloning of a human embryo. The research, published in the 12 March 2004 issue of Science, was reported as "Evidence of a pluripotent human embryonic stem cell line derived from a cloned blastocyst."
For its potential medical value in replacing diseased and damaged cells, several scientists had previously tried to clone the human embryo, but in vain. Hwang's team had developed an improved method of somatic cell nuclear transfer with which they could transfer the nuclei of somatic (non-reproductive) cells into egg cells which had had their nuclei removed. They used human egg cells and cumulus cells, which are found in ovaries near the developing eggs and are known to be a good source of nuclei for transfer. After emptying an egg of its nucleus, they transferred the nucleus of a cumulus cell into it. The new egg cell divided normally and grew into a blastocyst, an early embryo characterised by a hollow ball of cells. They isolated the inner cell mass, which gives rise to the embryo proper, discarding the outer trophoblast cells that are destined to become the placenta. When the inner cell mass cells were cultured, they could divide and form different tissues, indicating that they were viable stem cells. The report concluded: "This study shows the feasibility of generating human ES [embryonic stem] cells from a somatic cell isolated from a living person." It was the first instance of cloning of adult human cells and human embryonic stem cells. Hwang publicly reported the research at the annual meeting of the American Association for the Advancement of Science (AAAS) in Seattle on 16 February 2004. He specified that they had used 242 eggs from 16 unpaid volunteers, creating about 100 cells from which 30 embryos were developed. Since the embryos had adult DNA, the resulting stem cells were clones of the adult somatic cells. From the embryos, the stem cells were collected and grafted into mice, in which they could grow into various tissues including muscle, bone, cartilage and connective tissues. The method ensured that immune rejection would be avoided, so that it could be used for the treatment of genetic disorders, as Hwang explained: "Our approach opens the door for the use of these specially developed cells in transplantation medicine." 2005 human cell cloning Hwang's team reported another successful cloning of human cells in the 17 June 2005 issue of Science, in this case embryonic stem cells derived from skin cells. Their study claimed the creation of 11 different stem cell lines that exactly matched the DNA of people with a variety of diseases. The experiment used 185 eggs from 18 donors. The report explicitly stated: "Patients voluntarily donated oocytes and somatic cells for therapeutic cloning research and relevant applications but not for reproductive cloning ... no financial reimbursement in any form was paid." Initial receptions The 2004 report When the 2004 research was announced, it was received with praise and admiration. Donald Kennedy, editor-in-chief of Science, remarked: "the generation of stem cells by somatic cell nuclear transfer methods involving the same individuals may hold promise for advances in transplantation technology that could help people affected by many devastating conditions." Michael S. Gazzaniga, a neuroscientist and bioethicist at Dartmouth College who had supported therapeutic cloning, called it "a major advance in biomedical cloning". American scientists used the news to criticise the US government's weak support for, and prohibitive attitude towards, stem cell research. As Helen Pearson reported in Nature, the cloning accomplishment turned Asians into "scientific tigers".
Time reported that as a consequence of the achievement, "a medical and ethical door that had remained mostly closed was kicked wide open." Hwang and his colleague Shin Yong Moon were listed by Time at number 84 in its list of most influential people, "The 2004 Time 100", in April 2004. The critical issue was bioethics, as the method ultimately wasted many human embryos and could be used to create full human clones, as John T. Durkin argued in Science: "the developmental events leading from fertilized ovum, to blastula, to embryo, to fetus, to fully formed adult constitute a continuum." Hwang claimed that the purpose was for medical applications only, and said in Seattle, "Reproductive cloning is strictly prohibited [in South Korea]." At the time, South Korea was developing its "Bioethics and Biosafety Act", to be enforced in 2005. The regulations proscribed human reproductive cloning and experimental fusion of human and animal embryos; even therapeutic cloning for diseases would require authorised approval. Based on this situation, Sang-yong Song of Hanyang University criticised Hwang for not waiting for the forthcoming regulations and for social consensus in Korea. Howard H. Kendler, a psychologist at the University of California, took an unbiased viewpoint, commenting: "Although individuals will differ in their opinions, a democracy can decide whether the benefits of embryonic stem cell research outweigh any disadvantages. Science can assist in making this decision, but cannot dictate it." Circle of influences Hwang loved public attention and cultivated a network of bureaucrats. To name his second cloned cow, he solicited President Kim Dae-jung, who named it after a celebrated Korean gisaeng, "Hwang Jin-i." When he announced the cloning of a BSE (bovine spongiform encephalopathy)-resistant cow in 2003, President Roh Moo-hyun visited his laboratory and was shown a dog cured of its injury using stem cell transfer, at which the president applauded, "this is not a science; this is a magic." From that point, Hwang received escalating research funding from the government, peaking in 2005 at around US$30 million. That year, the Korean Ministry of Science and Technology officially honoured him as "Supreme Scientist", the first such title in Korea; the title carried US$15 million. The government set up the World Stem Cell Hub at Seoul National University Hospital on 19 October 2005, created and directed by Hwang. On the day of its opening, people registered their willingness to receive stem cell therapy. Scientific flaws In the 2004 report, Hwang's team prudently remarked that "we cannot completely exclude the possibility that the cells had a parthenogenetic origin." Reference to parthenogenesis, the development of embryos from egg cells without fertilisation, was relevant because it had been documented that stem cells are capable of such transformation. In 1984, an experiment demonstrated that genetic mixtures (chimeras) of parthenogenetic mouse stem cells and normal one-cell-stage embryos could develop into full embryos. Researchers at Advanced Cell Technology (ACT) in Worcester, Massachusetts, further showed in 2002 that primate (in this case crab-eating macaque, Macaca fascicularis) stem cells grew into the blastocyst stage. The ACT subsequently announced that they had created human parthenogenetic cells, although the cells could not reach the blastocyst stage.
In 2003, Gerald Schatten of the University of Pittsburgh and his team reported a failed attempt at stem cell cloning in rhesus monkeys; the cell divisions were always erratic and produced abnormal chromosomes. Schatten declared: "This reinforces the fact that the charlatans who claim to have cloned humans have never understood enough cell or developmental biology." Documentary In June 2023, Netflix released the documentary film King of Clones, which covered Hwang Woo-suk and this affair. See also He Jiankui affair References 2005 controversies Scientific misconduct incidents Stem cell research 2005 in biotechnology Cloning
Hwang affair
Chemistry,Engineering,Biology
2,246
74,762,326
https://en.wikipedia.org/wiki/Thermoascus%20thermophilus
Thermoascus thermophilus is a species of fungus in the genus Thermoascus in the order of Eurotiales. References Thermoascaceae Fungi described in 1912 Fungus species
Thermoascus thermophilus
Biology
45
21,882,157
https://en.wikipedia.org/wiki/Philanthropy%20Journal
The Philanthropy Journal is an online magazine that delivers news, resources and opinion on matters relating to non-profit organizations. The journal offers a free website and email newsletters, nonprofit job postings, professional-development webinars, and resource listings. In addition to regular news coverage, it publishes information on trends, research, and resources relevant to nonprofits. History The Philanthropy Journal was started in 2010. It was a program of the Institute for Nonprofits at North Carolina State University in Raleigh, North Carolina from 2012 to 2020. The program coordinator for the journal is Sandra Cyr. Until 30 June 2012 the journal was edited by Todd Cohen, a former news reporter and business editor for The News & Observer, who in 1991 began writing a weekly philanthropy column for that newspaper. The magazine relaunched in 2022. References External links Magazines established in 2010 Magazines published in North Carolina Mass media in Raleigh, North Carolina Online magazines published in the United States Philanthropy Weekly magazines published in the United States
Philanthropy Journal
Biology
198
45,087,691
https://en.wikipedia.org/wiki/Egyptian%20units%20of%20measurement
A number of units of measurement were used in Egypt to measure length, mass, area, capacity, etc. In Egypt, the metric system was made optional in 1873 and has been compulsory in government use since 1891. Ancient Egyptian units of measurement Units at the end of the 19th century A number of units were used in Egypt. Units and their interrelations were very variable in the national system. Since 1891 their metric equivalences have been defined. Length A number of units were used to measure length. One derah baladi was equal to 0.58 m and one kassabah was equal to 3.55 m, according to the metric equivalences defined in 1891. Some other units according to the metric equivalences defined in 1891 are given below: 1 kirat = dirra 1 abdat = dirra 1 kadam = dirra 1 pic = 1 dirra 1 gasab = 4 dirra 1 mil hachmi = 1000 dirra 1 farsakh = 3000 dirra There were six kinds of derah (a.k.a. dirra) as follows: 1 Nile pic = 0.2545 m, 1 native pic (derah baladi) = 0.5682 m, 1 Constantinople pic (derah Istambuli) = 0.6691 m, 1 cloth pic (derah hendazeh) = 0.6479 m, 1 builder's pic (derah meimari) = 0.7500 m, 1 itinerary pic (a.k.a. road-measure pic) = 0.7389 m. Road measures One itinerary derah was equal to 0.7389 m. Some other units according to the metric equivalences defined in 1891 are given below: 1 cassaba = 5 derah 1 bââh = derah 1 mili = 500 cassabas = 1.148 mile (1.847 km) 1 farsakh (league) = 3 mili 1 = 4 farsakh 1 safar yome (of which make 1° of the meridian = 60 mili) = 2 . Mass A number of units were used to measure mass. One oke was equal to 1.248 kg, according to the metric equivalences defined in 1891. Some other units according to the metric equivalences defined in 1891 are given below: 1 kirat = oke 1 dirhem = oke 1 = oke 1 = 0.03 oke 1 rotoli = 0.36 oke 1 kantar = 36 oke 1 helm = 200 oke One harsela, used for weighing silk, is one oke. Area A number of units were used to measure area. One feddan was equal to 4200.8 m2, according to the metric equivalences defined in 1891. Some other units according to the metric equivalences defined in 1891 are given below: 1 = feddan 1 = feddan 1 = 1 feddan. Squares of the derah and the cassaba (3.55 m) were also used to measure land. Capacity Two main systems, liquid and dry, were used in Egypt. Liquid measure One was equal to 70.4467 quarts ( litres). Dry measure A number of units were used to measure capacity. One keddah was equal to 2.0625 L, according to the metric equivalences defined in 1891. Some other units according to the metric equivalences defined in 1891 are given below: 1 kirat = keddah 1 khanoubah = keddah 1 toumnah = keddah 1 = keddah 1 nisf keddah = keddah 1 malouah = 2 keddah 1 rob (roubouh) = 4 keddah 1 keila = 8 keddah 1 ardeb = 96 keddah 1 daribah = 768 keddah Before 1891, according to the report of the United States Commissioners to the Paris Exposition of 1867, 1 ardeb was equal to 2.603 bushels (91.72 L). Other authorities give the ardeb = 5.1648 bushels. One ardeb of Alexandria was equal to 7.6907 bushels. References Culture of Egypt Egypt
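Because the 1891 metric equivalences above are fixed multipliers, they can be collected into a small converter. The sketch below (Python) uses only values stated in the text; the table keys and function name are illustrative, and units whose fractional values are missing from the text are omitted.

```python
# Sketch: converting 19th-century Egyptian units to SI, using only the
# 1891 metric equivalences quoted in the text.
TO_SI = {
    "derah baladi": 0.5682,       # length, metres (the native pic)
    "nile pic":     0.2545,       # length, metres
    "kassabah":     3.55,         # length, metres
    "oke":          1.248,        # mass, kilograms
    "kantar":       36 * 1.248,   # 1 kantar = 36 oke -> 44.928 kg
    "feddan":       4200.8,       # area, square metres
    "keddah":       2.0625,       # dry capacity, litres
    "ardeb":        96 * 2.0625,  # 1 ardeb = 96 keddah -> 198 L
}

def to_si(value: float, unit: str) -> float:
    """Convert a quantity in one of the tabulated Egyptian units to SI."""
    return value * TO_SI[unit]

print(f"5 kassabah = {to_si(5, 'kassabah'):.2f} m")   # 17.75 m
print(f"1 kantar   = {to_si(1, 'kantar'):.3f} kg")    # 44.928 kg
print(f"1 ardeb    = {to_si(1, 'ardeb'):.1f} L")      # 198.0 L
```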
Egyptian units of measurement
Mathematics
852
40,142
https://en.wikipedia.org/wiki/Botulism
Botulism is a rare and potentially fatal illness caused by botulinum toxin, which is produced by the bacterium Clostridium botulinum. The disease begins with weakness, blurred vision, feeling tired, and trouble speaking. This may then be followed by weakness of the arms, chest muscles, and legs. Vomiting, swelling of the abdomen, and diarrhea may also occur. The disease does not usually affect consciousness or cause a fever. Botulism can occur in several ways. The bacterial spores which cause it are common in both soil and water and are very resistant. They produce the botulinum toxin when exposed to low oxygen levels and certain temperatures. Foodborne botulism happens when food containing the toxin is eaten. Infant botulism instead happens when the bacterium develops in the intestines and releases the toxin. This typically only occurs in children less than one year old, as protective mechanisms against development of the bacterium develop after that age. Wound botulism is found most often among those who inject street drugs. In this situation, spores enter a wound, and in the absence of oxygen, release the toxin. The disease is not passed directly between people. Its diagnosis is confirmed by finding the toxin or bacteria in the person in question. Prevention is primarily by proper food preparation. The toxin, though not the spores, is destroyed by heating it to more than 85 °C (185 °F) for longer than five minutes. The clostridial spores can be destroyed in an autoclave with moist heat (121 °C/250 °F for at least 15 minutes) or dry heat (160 °C for 2 hours) or by irradiation. The spores of group I strains are inactivated by heating at 121 °C (250 °F) for 3 minutes during commercial canning. Spores of group II strains are less heat-resistant, and they are often damaged by 90 °C (194 °F) for 10 minutes, 85 °C for 52 minutes, or 80 °C for 270 minutes; however, these treatments may not be sufficient in some foods. Honey can contain the organism, and for this reason, honey should not be fed to children under 12 months. Treatment is with an antitoxin. In those who lose their ability to breathe on their own, mechanical ventilation may be necessary for months. Antibiotics may be used for wound botulism. Death occurs in 5 to 10% of people. Botulism also affects many other animals. The word is from Latin botulus, meaning 'sausage'. Signs and symptoms The muscle weakness of botulism characteristically starts in the muscles supplied by the cranial nerves—a group of twelve nerves that control eye movements, the facial muscles and the muscles controlling chewing and swallowing. Double vision, drooping of both eyelids, loss of facial expression and swallowing problems may therefore occur. In addition to affecting the voluntary muscles, it can also cause disruptions in the autonomic nervous system. This is experienced as a dry mouth and throat (due to decreased production of saliva), postural hypotension (decreased blood pressure on standing, with resultant lightheadedness and risk of blackouts), and eventually constipation (due to decreased forward movement of intestinal contents). Some of the toxins (B and E) also precipitate nausea, vomiting, and difficulty with talking. The weakness then spreads to the arms (starting in the shoulders and proceeding to the forearms) and legs (again from the thighs down to the feet). Severe botulism leads to reduced movement of the muscles of respiration, and hence problems with gas exchange.
This may be experienced as dyspnea (difficulty breathing), but when severe can lead to respiratory failure, due to the buildup of unexhaled carbon dioxide and its resultant depressant effect on the brain. This may lead to respiratory compromise and death if untreated. Clinicians frequently think of the symptoms of botulism in terms of a classic triad: bulbar palsy and descending paralysis, lack of fever, and clear senses and mental status ("clear sensorium"). Infant botulism Infant botulism (also referred to as floppy baby syndrome) was first recognized in 1976, and is the most common form of botulism in the United States. Infants are susceptible to infant botulism in the first year of life, with more than 90% of cases occurring in infants younger than six months. Infant botulism results from the ingestion of C. botulinum spores and subsequent colonization of the small intestine. The infant gut may be colonized when the composition of the intestinal microflora (normal flora) is insufficient to competitively inhibit the growth of C. botulinum and levels of bile acids (which normally inhibit clostridial growth) are lower than later in life. The growth of the spores releases botulinum toxin, which is then absorbed into the bloodstream and taken throughout the body, causing paralysis by blocking the release of acetylcholine at the neuromuscular junction. Typical symptoms of infant botulism include constipation, lethargy, weakness, difficulty feeding, and an altered cry, often progressing to a complete descending flaccid paralysis. Although constipation is usually the first symptom of infant botulism, it is commonly overlooked. Honey is a known dietary reservoir of C. botulinum spores and has been linked to infant botulism. For this reason, honey is not recommended for infants less than one year of age. Most cases of infant botulism, however, are thought to be caused by acquiring the spores from the natural environment. Clostridium botulinum is a ubiquitous soil-dwelling bacterium. Many infant botulism patients have been demonstrated to live near a construction site or an area of soil disturbance. Infant botulism has been reported in 49 of 50 US states (all save for Rhode Island), and cases have been recognized in 26 countries on five continents. Complications Infant botulism typically has no long-term side effects. Botulism can result in death due to respiratory failure. However, in the past 50 years, the proportion of patients with botulism who die has fallen from about 50% to 7% due to improved supportive care. A patient with severe botulism may require mechanical ventilation (breathing support through a ventilator) as well as intensive medical and nursing care, sometimes for several months. The person may require rehabilitation therapy after leaving the hospital. Cause Clostridium botulinum is an anaerobic, Gram-positive, spore-forming rod. Botulinum toxin is one of the most powerful known toxins: about one microgram is lethal to humans when inhaled. It acts by blocking nerve function (neuromuscular blockade) through inhibition of the excitatory neurotransmitter acetylcholine's release from the presynaptic membrane of neuromuscular junctions in the somatic nervous system. This causes paralysis. Advanced botulism can cause respiratory failure by paralysing the muscles of the chest; this can progress to respiratory arrest. Furthermore, acetylcholine release from the presynaptic membranes of muscarinic nerve synapses is blocked. 
This can lead to a variety of autonomic signs and symptoms described above. In all cases, illness is caused by the botulinum toxin which the bacterium C. botulinum produces in anaerobic conditions and not by the bacterium itself. The pattern of damage occurs because the toxin affects nerves that fire (depolarize) at a higher frequency first. Mechanisms of entry into the human body for botulinum toxin are described below. Colonization of the gut The most common form in Western countries is infant botulism. This occurs in infants who are colonized with the bacterium in the small intestine during the early stages of their lives. The bacterium then produces the toxin, which is absorbed into the bloodstream. The consumption of honey during the first year of life has been identified as a risk factor for infant botulism; it is a factor in a fifth of all cases. The adult form of infant botulism is termed adult intestinal toxemia, and is exceedingly rare. Food Toxin that is produced by the bacterium in containers of food that have been improperly preserved is the most common cause of food-borne botulism. Fish that has been pickled without the salinity or acidity of brine that contains acetic acid and high sodium levels, as well as smoked fish stored at too high a temperature, presents a risk, as does improperly canned food. Food-borne botulism results from contaminated food in which C. botulinum spores have been allowed to germinate in low-oxygen conditions. This typically occurs in improperly prepared home-canned food substances and fermented dishes without adequate salt or acidity. Given that multiple people often consume food from the same source, it is common for more than a single person to be affected simultaneously. Symptoms usually appear 12–36 hours after eating, but can also appear within 6 hours to 10 days. No withdrawal periods have been established for cows affected by botulism. Injecting lactating cows with various doses of botulinum toxin type C has not resulted in detectable botulinum neurotoxin in the milk produced. Using mouse bioassays and immunostick ELISA tests, botulinum toxin was detected in whole blood and serum but not in milk samples, suggesting that botulinum type C toxin does not enter milk in detectable concentrations. Cooking and pasteurization denature botulinum toxin but do not necessarily eliminate spores. Botulinum spores or toxins can find their way into the dairy production chain from the environment. Despite the low risk of milk and meat contamination, the protocol for fatal bovine botulism cases appears to be incineration of carcasses and withholding any potentially contaminated milk from human consumption. It is also advised that raw milk from affected cows should not be consumed by humans or fed to calves. There have been several reports of botulism from pruno, an illicit wine made of food scraps in prison. In a Mississippi prison in 2016, prisoners illegally brewed alcohol that led to 31 cases of botulism. A research study of these cases found that the symptoms of mild botulism matched the symptoms of severe botulism, though the outcomes and progression of the disease were different. Wound Wound botulism results from the contamination of a wound with the bacteria, which then secrete the toxin into the bloodstream. This has become more common in intravenous drug users since the 1990s, especially people using black tar heroin and those injecting heroin into the skin rather than the veins. 
Wound botulism can also come from a minor wound that is not properly cleaned out; the skin grows over the wound, thus trapping the spore in an anaerobic environment and allowing botulism to develop. One example was a person who cut their ankle while using a weed eater; as the wound healed over, it trapped a blade of grass and a speck of soil under the skin, which led to severe botulism requiring hospitalization and months of rehabilitation. Wound botulism accounts for 29% of cases. Inhalation Isolated cases of botulism have been described after inhalation by laboratory workers. Injection (iatrogenic botulism) Symptoms of botulism may occur away from the injection site of botulinum toxin. This may include loss of strength, blurred vision, change of voice, or trouble breathing, which can result in death. Onset can be hours to weeks after an injection. This generally only occurs with inappropriate strengths of botulinum toxin for cosmetic use or due to the larger doses used to treat movement disorders. However, there are cases where an off-label use of botulinum toxin resulted in severe botulism and death. Following a 2008 review, the FDA added these concerns as a boxed warning. An international grassroots effort led by NeverTox provides education and emotional support to people experiencing iatrogenic botulism poisoning (IBP), serving 39,000 people who are suffering from adverse events from botulinum toxin injections through a Facebook group. Lawsuits about botulism against pharmaceutical companies Prior to the boxed warning labels that included a disclaimer that botulinum toxin injections could cause botulism, there was a series of lawsuits against the pharmaceutical firms that manufactured injectable botulinum toxin. A Hollywood producer's wife brought a lawsuit after experiencing debilitating adverse events from migraine treatment. A lawsuit on behalf of a 3-year-old boy who was permanently disabled by a botulinum toxin injection was settled during the trial. The family of a 7-year-old boy treated with botulinum toxin injections for leg spasms sued after the boy almost died. Several families of people who died after treatments with botulinum toxin injections brought lawsuits. One plaintiff, a physician who was diagnosed with botulism by thirteen neurologists at the NIH, prevailed and was awarded compensation of $18 million. Deposition video from that lawsuit quotes a pharmaceutical executive stating that "Botox doesn't cause botulism." Mechanism The toxin is the protein botulinum toxin produced under anaerobic conditions (where there is no oxygen) by the bacterium Clostridium botulinum. Clostridium botulinum is a large anaerobic Gram-positive bacillus that forms subterminal endospores. There are eight serological varieties of the bacterium, denoted by the letters A to H. The toxin from all of these acts in the same way and produces similar symptoms: the motor nerve endings are prevented from releasing acetylcholine, causing flaccid paralysis and symptoms of blurred vision, ptosis, nausea, vomiting, diarrhea or constipation, cramps, and respiratory difficulty. Botulinum toxin is broken into eight neurotoxins (labeled as types A, B, C [C1, C2], D, E, F, and G), which are antigenically and serologically distinct but structurally similar. Human botulism is caused mainly by types A, B, E, and (rarely) F. Types C and D cause toxicity only in other animals. 
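The serotype facts above, together with the SNARE substrates noted later in this section, can be summarized in a small lookup table. This Python sketch is illustrative only (the field names are arbitrary, and type G is marked False simply because it is not listed among the main human causes above):

SEROTYPE_INFO = {
    #        main cause of human botulism?  SNARE substrate(s) cleaved
    "A": {"human": True,  "targets": ["SNAP-25"]},
    "B": {"human": True,  "targets": ["synaptobrevin"]},
    "C": {"human": False, "targets": ["SNAP-25", "syntaxin"]},  # animals only
    "D": {"human": False, "targets": ["synaptobrevin"]},        # animals only
    "E": {"human": True,  "targets": ["SNAP-25"]},
    "F": {"human": True,  "targets": ["synaptobrevin"]},        # rare in humans
    "G": {"human": False, "targets": ["synaptobrevin"]},
}

# List the serotypes mainly responsible for human botulism: A, B, E, F.
print([t for t, info in SEROTYPE_INFO.items() if info["human"]])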
In October 2013, scientists released news of the discovery of type H, the first new botulism neurotoxin found in forty years. However, further studies showed type H to be a chimeric toxin composed of parts of types F and A (FA). Some types produce a characteristic putrefactive smell and digest meat (types A and some of B and F); these are said to be proteolytic; type E and some types of B, C, D and F are nonproteolytic and can go undetected because there is no strong odor associated with them. When the bacteria are under stress, they develop spores, which are inert. Their natural habitats are in the soil, in the silt that comprises the bottom sediment of streams, lakes, and coastal waters and ocean, while some types are natural inhabitants of the intestinal tracts of mammals (e.g., horses, cattle, humans), and are present in their excreta. The spores can survive in their inert form for many years. Toxin is produced by the bacteria when environmental conditions are favourable for the spores to replicate and grow, but the gene that encodes the toxin protein is actually carried by a virus or phage that infects the bacteria. Little is known about the natural factors that control phage infection and replication within the bacteria. The spores require warm temperatures, a protein source, an anaerobic environment, and moisture in order to become active and produce toxin. In the wild, decomposing vegetation and invertebrates combined with warm temperatures can provide ideal conditions for the botulism bacteria to activate and produce toxin that may affect feeding birds and other animals. Spores are not killed by boiling, but botulism is uncommon because special, rarely obtained conditions are necessary for botulinum toxin production from C. botulinum spores, including an anaerobic, low-salt, low-acid, low-sugar environment at ambient temperatures. Botulinum toxin inhibits the release within the nervous system of acetylcholine, a neurotransmitter responsible for communication between motor neurons and muscle cells. All forms of botulism lead to paralysis that typically starts with the muscles of the face and then spreads towards the limbs. In severe forms, botulism leads to paralysis of the breathing muscles and causes respiratory failure. In light of this life-threatening complication, all suspected cases of botulism are treated as medical emergencies, and public health officials are usually involved to identify the source and take steps to prevent further cases from occurring. Botulinum toxin types A and E specifically cleave SNAP-25, whereas serotypes B, D, F, and G cleave synaptobrevin. Serotype C cleaves both SNAP-25 and syntaxin. This causes blockade of the release of the neurotransmitter acetylcholine, ultimately leading to paralysis. Diagnosis For botulism in babies, diagnosis should be based on signs and symptoms. Confirmation of the diagnosis is made by testing of a stool or enema specimen with the mouse bioassay. In people whose history and physical examination suggest botulism, these clues are often not enough to allow a diagnosis. Other diseases such as Guillain–Barré syndrome, stroke, and myasthenia gravis can appear similar to botulism, and special tests may be needed to exclude these other conditions. These tests may include a brain scan, cerebrospinal fluid examination, nerve conduction test (electromyography, or EMG), and an edrophonium chloride (Tensilon) test for myasthenia gravis. 
A definite diagnosis can be made if botulinum toxin is identified in the food, stomach or intestinal contents, vomit or feces. The toxin is occasionally found in the blood in peracute cases. Botulinum toxin can be detected by a variety of techniques, including enzyme-linked immunosorbent assays (ELISAs), electrochemiluminescent (ECL) tests and mouse inoculation or feeding trials. The toxins can be typed with neutralization tests in mice. In toxicoinfectious botulism, the organism can be cultured from tissues. On egg yolk medium, toxin-producing colonies usually display surface iridescence that extends beyond the colony. Prevention Although the vegetative form of the bacteria is destroyed by boiling, the spore itself is not killed by the temperatures reached with normal sea-level-pressure boiling, leaving it free to grow and again produce the toxin when conditions are right. A recommended prevention measure for infant botulism is to avoid giving honey to infants less than 12 months of age, as botulinum spores are often present. In older children and adults the normal intestinal bacteria suppress development of C. botulinum. While commercially canned goods are required to undergo a "botulinum cook" in a pressure cooker at for 3 minutes, and thus rarely cause botulism, there have been notable exceptions. Two were the 1978 Alaskan salmon outbreak and the 2007 Castleberry's Food Company outbreak. Foodborne botulism is the rarest form, accounting for only around 15% of cases (US) and has more frequently resulted from home-canned foods with low acid content, such as carrot juice, asparagus, green beans, beets, and corn. However, outbreaks of botulism have resulted from more unusual sources. In July 2002, fourteen Alaskans ate muktuk (whale meat) from a beached whale, and eight of them developed symptoms of botulism, two of them requiring mechanical ventilation. Other, much rarer sources of infection (about every decade in the US) include garlic or herbs stored covered in oil without acidification, chili peppers, improperly handled baked potatoes wrapped in aluminum foil, tomatoes, and home-canned or fermented fish. When canning or preserving food at home, attention should be paid to hygiene, pressure, temperature, refrigeration and storage. When making home preserves, only acidic fruit such as apples, pears, stone fruits and berries should be used. Tropical fruit and tomatoes are low in acidity and must have some acidity added before they are canned. Low-acid foods have pH values higher than 4.6. They include red meats, seafood, poultry, milk, and all fresh vegetables except for most tomatoes. Most mixtures of low-acid and acid foods also have pH values above 4.6 unless their recipes include enough lemon juice, citric acid, or vinegar to make them acidic. Acid foods have a pH of 4.6 or lower. They include fruits, pickles, sauerkraut, jams, jellies, marmalades, and fruit butters. Although tomatoes usually are considered an acid food, some are now known to have pH values slightly above 4.6. Figs also have pH values slightly above 4.6. Therefore, if they are to be canned as acid foods, these products must be acidified to a pH of 4.6 or lower with lemon juice or citric acid. Properly acidified tomatoes and figs are acid foods and can be safely processed in a boiling-water canner. Oils infused with fresh garlic or herbs should be acidified and refrigerated. Potatoes which have been baked while wrapped in aluminum foil should be kept hot until served or refrigerated. 
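The acid/low-acid rule described above reduces to a single threshold test. A minimal Python sketch follows; only the 4.6 threshold comes from this section, and the example pH values are illustrative assumptions, not authoritative figures:

ACID_THRESHOLD_PH = 4.6  # boundary between acid and low-acid foods

def needs_acidification(ph):
    # Low-acid foods (pH > 4.6) must be acidified with lemon juice or
    # citric acid before boiling-water canning, or pressure-processed.
    return ph > ACID_THRESHOLD_PH

# Illustrative values only: some tomatoes and figs sit slightly above 4.6.
for food, ph in [("sauerkraut", 3.4), ("tomato", 4.7), ("green beans", 5.6)]:
    print(food, "acidify or pressure-can" if needs_acidification(ph) else "acid food")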
Because botulinum toxin is destroyed by high temperatures, home-canned foods are best boiled for 10 minutes before eating. Metal cans containing food in which bacteria are growing may bulge outwards due to gas production from bacterial growth, or the food inside may be foamy or have a bad odor; cans with any of these signs should be discarded. Any container of food which has been heat-treated and then assumed to be airtight which shows signs of not being so, e.g., metal cans with pinprick holes from rust or mechanical damage, should be discarded. Contamination of a canned food solely with C. botulinum may not cause any visual defects to the container, such as bulging. Only assurance of sufficient thermal processing during production, and absence of a route for subsequent contamination, should be used as indicators of food safety. The addition of nitrites and nitrates to processed meats such as ham, bacon, and sausages reduces growth and toxin production of C. botulinum. Vaccine Vaccines are under development, but they have side effects. As of 2017, work to develop a better vaccine was being carried out, but the US FDA had not approved any vaccine against botulism. Treatment Botulism is generally treated with botulism antitoxin and supportive care. Supportive care for botulism includes monitoring of respiratory function. Respiratory failure due to paralysis may require mechanical ventilation for 2 to 8 weeks, plus intensive medical and nursing care. After this time, paralysis generally improves as new neuromuscular connections are formed. In some abdominal cases, physicians may try to remove contaminated food still in the digestive tract by inducing vomiting or using enemas. Wounds should be treated, usually surgically, to remove the source of the toxin-producing bacteria. Antitoxin Botulinum antitoxin consists of antibodies that neutralize botulinum toxin in the circulatory system by passive immunization. This prevents additional toxin from binding to the neuromuscular junction, but does not reverse any already inflicted paralysis. In adults, a trivalent antitoxin containing antibodies raised against botulinum toxin types A, B, and E is used most commonly; however, a heptavalent botulism antitoxin has also been developed and was approved by the U.S. FDA in 2013. In infants, horse-derived antitoxin is sometimes avoided for fear of infants developing serum sickness or lasting hypersensitivity to horse-derived proteins. To avoid this, a human-derived antitoxin was developed and approved by the U.S. FDA in 2003 for the treatment of infant botulism. This human-derived antitoxin has been shown to be both safe and effective for the treatment of infant botulism. However, the danger of equine-derived antitoxin to infants has not been clearly established, and one study showed the equine-derived antitoxin to be both safe and effective for the treatment of infant botulism. Trivalent (A,B,E) botulinum antitoxin is derived from equine sources utilizing whole antibodies (Fab and Fc portions). In the United States, this antitoxin is available from the local health department via the CDC. The second antitoxin, heptavalent (A,B,C,D,E,F,G) botulinum antitoxin, is derived from "despeciated" equine IgG antibodies which have had the Fc portion cleaved off, leaving the F(ab')2 portions. This less immunogenic antitoxin is effective against all known strains of botulism where not contraindicated. 
Prognosis The paralysis caused by botulism can persist for two to eight weeks, during which supportive care and ventilation may be necessary to keep the patient alive. Botulism can be fatal in five to ten percent of people who are affected. However, if left untreated, botulism is fatal in 40 to 50 percent of cases. Infant botulism typically has no long-term side effects but can be complicated by treatment-associated adverse events. The case fatality rate is less than two percent for hospitalized babies. Epidemiology Globally, botulism is fairly rare, with approximately 1,000 identified cases yearly. United States In the United States an average of 145 cases are reported each year. Of these, roughly 65% are infant botulism, 20% are wound botulism, and 15% are foodborne. Infant botulism is predominantly sporadic and not associated with epidemics, but great geographic variability exists. From 1974 to 1996, for example, 47% of all infant botulism cases reported in the U.S. occurred in California. Between 1990 and 2000, the Centers for Disease Control and Prevention reported 263 individual foodborne cases from 160 botulism events in the United States with a case-fatality rate of 4%. Thirty-nine percent (103 cases and 58 events) occurred in Alaska, all of which were attributable to traditional Alaskan aboriginal foods. In the lower 49 states, home-canned food was implicated in 70 events (~69%) with canned asparagus being the most frequent cause. Two restaurant-associated outbreaks affected 25 people. The median number of cases per year was 23 (range 17–43), the median number of events per year was 14 (range 9–24). The highest incidence rates occurred in Alaska, Idaho, Washington, and Oregon. All other states had an incidence rate of 1 case per ten million people or less. The number of cases of food borne and infant botulism has changed little in recent years, but wound botulism has increased because of the use of black tar heroin, especially in California. All data regarding botulism antitoxin releases and laboratory confirmation of cases in the US are recorded annually by the Centers for Disease Control and Prevention and published on their website. On 2 July 1971, the U.S. Food and Drug Administration (FDA) released a public warning after learning that a New York man had died and his wife had become seriously ill due to botulism after eating a can of Bon Vivant vichyssoise soup. Between 31 March and 6 April 1977, 59 individuals developed type B botulism. All who fell ill had eaten at the same Mexican restaurant in Pontiac, Michigan, and had consumed a hot sauce made with improperly home-canned jalapeño peppers, either by adding it to their food, or by eating nachos that had been prepared with the hot sauce. The full clinical spectrum (mild symptomatology with neurologic findings through life-threatening ventilatory paralysis) of type B botulism was documented. In April 1994, the largest outbreak of botulism in the United States since 1978 occurred in El Paso, Texas. Thirty people were affected; 4 required mechanical ventilation. All ate food from a Greek restaurant. The attack rate among people who ate a potato-based dip was 86% (19/22) compared with 6% (11/176) among people who did not eat the dip (relative risk [RR] = 13.8; 95% confidence interval [CI], 7.6–25.1). The attack rate among people who ate an eggplant-based dip was 67% (6/9) compared with 13% (24/189) among people who did not (RR = 5.2; 95% CI, 2.9–9.5). Botulism toxin type A was detected in patients and in both dips. 
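As an arithmetic check, the relative risks and confidence intervals just quoted for the El Paso outbreak can be reproduced from the raw counts with the standard log-based method. A sketch (the helper function is illustrative, not from any named library):

import math

def relative_risk(a, n1, b, n2, z=1.96):
    # Relative risk of exposed (a/n1) vs unexposed (b/n2) attack rates,
    # with a 95% confidence interval computed on the log scale.
    rr = (a / n1) / (b / n2)
    se = math.sqrt(1/a - 1/n1 + 1/b - 1/n2)
    return rr, math.exp(math.log(rr) - z*se), math.exp(math.log(rr) + z*se)

print(relative_risk(19, 22, 11, 176))  # potato dip: ~13.8 (95% CI ~7.6-25.1)
print(relative_risk(6, 9, 24, 189))    # eggplant dip: ~5.2 (95% CI ~2.9-9.5)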
Toxin formation resulted from holding aluminum foil-wrapped baked potatoes at room temperature, apparently for several days, before they were used in the dips. Food handlers should be informed of the potential hazards caused by holding foil-wrapped potatoes at ambient temperatures after cooking. In 2002, fourteen Alaskans ate muktuk (whale blubber) from a beached whale, resulting in eight of them developing botulism, with two of the affected requiring mechanical ventilation. Beginning in late June 2007, 8 people contracted botulism poisoning by eating canned food products produced by Castleberry's Food Company in its Augusta, Georgia plant. It was later identified that the Castleberry's plant had serious production problems on a specific line of retorts that had under-processed the cans of food. These issues included broken cooking alarms, leaking water valves and inaccurate temperature devices, all the result of poor management of the company. All of the victims were hospitalized and placed on mechanical ventilation. The Castleberry's Food Company outbreak was the first instance of botulism in commercial canned foods in the United States in over 30 years. One person died, 21 cases were confirmed, and 10 more were suspected in Lancaster, Ohio when a botulism outbreak occurred after a church potluck in April 2015. The suspected source was a salad made from home-canned potatoes. A botulism outbreak occurred in Northern California in May 2017 after 10 people consumed nacho cheese dip served at a gas station in Sacramento County. One man died as a result of the outbreak. United Kingdom The largest recorded outbreak of foodborne botulism in the United Kingdom occurred in June 1989. A total of 27 patients were affected; one patient died. Twenty-five of the patients had eaten one brand of hazelnut yogurt in the week before the onset of symptoms. Control measures included the cessation of all yogurt production by the implicated producer, the withdrawal of the firm's yogurts from sale, the recall of cans of the hazelnut conserve, and advice to the general public to avoid the consumption of all hazelnut yogurts. China From 1958 to 1983 there were 986 outbreaks of botulism in China involving 4,377 people with 548 deaths. Qapqal disease After the Chinese Communist Revolution in 1949, a mysterious plague (named Qapqal disease) was noticed to be affecting several Sibe villages in Qapqal Xibe Autonomous County. It was endemic with distinctive epidemic patterns, yet the underlying cause remained unknown for a long period of time. It caused a number of deaths and forced some people to leave the place. In 1958, a team of experts were sent to the area by the Ministry of Health to investigate the cases. The epidemic survey conducted proved that the disease was primarily type A botulism, with several cases of type B. The team also discovered that the source of the botulinum was local fermented grain and beans, as well as a raw meat food called mi song hu hu. They promoted the improvement of fermentation techniques among local residents, and thus eliminated the disease. Canada From 1985 to 2005 there were outbreaks causing 91 confirmed cases of foodborne botulism in Canada, 85% of which were in Inuit communities, especially Nunavik, as well as First Nations of the coast of British Columbia, following consumption of traditionally prepared marine mammal and fish products. Ukraine In 2017, there were 70 cases of botulism with 8 deaths in Ukraine. The previous year there were 115 cases with 12 deaths. 
Most cases were the result of dried fish, a common local drinking snack. Vietnam In 2020, several cases of botulism were reported in Vietnam. All of them were related to a product containing contaminated vegetarian pâté. Some patients were put on life support. Other susceptible species Botulism can occur in many vertebrates and invertebrates. Botulism has been reported in such species as rats, mice, chicken, frogs, toads, goldfish, aplysia, squid, crayfish, drosophila and leeches. Death from botulism is common in waterfowl; an estimated 10,000 to 100,000 birds die of botulism annually. The disease is commonly called "limberneck". In some large outbreaks, a million or more birds may die. Ducks appear to be affected most often. An enzootic form of duck botulism in the Western US and Canada is known as "western duck sickness". Botulism also affects commercially raised poultry. In chickens, the mortality rate varies from a few birds to 40% of the flock. Botulism seems to be relatively uncommon in domestic mammals; however, in some parts of the world, epidemics with up to 65% mortality are seen in cattle. The prognosis is poor in large animals that are recumbent. In cattle, the symptoms may include drooling, restlessness, incoordination, urine retention, dysphagia, and sternal recumbency. Laterally recumbent animals are usually very close to death. In sheep, the symptoms may include drooling, a serous nasal discharge, stiffness, and incoordination. Abdominal respiration may be observed and the tail may switch on the side. As the disease progresses, the limbs may become paralyzed and death may occur. Phosphorus-deficient cattle, especially in southern Africa, are inclined to ingest bones and carrion containing clostridial toxins and consequently develop lame sickness or lamsiekte. The clinical signs in horses are similar to cattle. The muscle paralysis is progressive; it usually begins at the hindquarters and gradually moves to the front limbs, neck, and head. Death generally occurs 24 to 72 hours after initial symptoms and results from respiratory paralysis. Some foals are found dead without other clinical signs. Clostridium botulinum type C toxin has been incriminated as the cause of grass sickness, a condition in horses which occurs in rainy and hot summers in Northern Europe. The main symptom is pharynx paralysis. Domestic dogs may develop systemic toxemia after consuming C. botulinum type C exotoxin or spores within bird carcasses or other infected meat but are generally resistant to the more severe effects of C. botulinum type C. Symptoms include flaccid muscle paralysis, which can lead to death due to cardiac and respiratory arrest. Pigs are relatively resistant to botulism. Reported symptoms include anorexia, refusal to drink, vomiting, pupillary dilation, and muscle paralysis. In poultry and wild birds, flaccid paralysis is usually seen in the legs, wings, neck and eyelids. Broiler chickens with the toxicoinfectious form may also have diarrhea with excess urates. Prevention in non-human species One of the main routes of exposure for botulism is through the consumption of food contaminated with C. botulinum. Food-borne botulism can be prevented in domestic animals through careful inspection of the feed, purchasing high quality feed from reliable sources, and ensuring proper storage. Poultry litter and animal carcasses are places in which C. 
botulinum spores are able to germinate, so it is advised to avoid spreading poultry litter or any carcass-containing materials on fields producing feed materials, due to their potential for supporting C. botulinum growth. Additionally, water sources should be checked for dead or dying animals, and fields should be checked for animal remains prior to mowing for hay or silage. Correcting any dietary deficiencies can also prevent animals from consuming contaminated materials such as bones or carcasses. Raw materials used for silage or feed mixed on site should be checked for any sign of mold or rotten appearance. Acidification of animal feed can reduce, but will not eliminate, the risk of toxin formation, especially in carcasses that remain whole. Vaccines in animals Vaccines have been developed for use in animals to prevent botulism. The availability and approval of these vaccines varies depending on the location, with places experiencing more cases generally having more vaccines available and more routine vaccination. A variety of vaccines have been developed for the prevention of botulism in livestock. Most initial vaccinations require multiple doses at intervals of 2–6 weeks; however, some newer vaccines require only one shot. This depends mainly on the type of vaccine and the manufacturer's recommendations. All vaccines require annual boosters to maintain immunity. Many of these vaccines can be used on multiple species including cattle, sheep, and goats, with some labeled for use in horses and mules, as well as separate vaccines for mink. Additionally, vaccination during an outbreak is as beneficial as therapeutic treatment in cattle, and this method is also used in horses and pheasants. The use of region-specific toxoids to immunize animals has been shown to be effective. Immunizing cattle with toxoid types C and D is a useful vaccination method in South Africa and Australia. Toxoid has also been shown to be an appropriate method of immunizing minks and pheasants. In endemic areas, for example Kentucky, vaccination with type B toxoid appears to be effective. Use in biological warfare and terrorism United States Based on CIA research at Fort Detrick on biological warfare, anthrax and botulism were widely regarded as the two most effective options. During the 1950s, a highly lethal strain was discovered during the biological warfare program. The CIA continued to hold 5 grams of Clostridium botulinum, even after Nixon's ban on biological warfare in 1969. During the Gulf War, when the United States was concerned with a potential biowarfare attack, the efforts around botulism turned to prevention. However, the only way to make antitoxin in America until the 1990s was by drawing antibodies from a single horse named First Flight, raising much concern from Pentagon health officials. Iraq Iraq has historically possessed many types of germs, including botulism. The American Type Culture Collection sold 5 variants of botulinum to the University of Baghdad in May 1986. 1991 CIA reports also show Iraqis filled shells, warheads, and bombs with biological agents like botulinum (though none have been deployed). The Iraqi air force used the code name "tea" to refer to botulinum, and it was also referred to as bioweapon "A." Japan A Japanese cult called Aum Shinrikyo created laboratories that produced biological weapons, specifically botulinum, anthrax, and Q fever. From 1990 to 1995, the cult staged numerous unsuccessful bioterrorism attacks on civilians. 
They sprayed botulinum toxin from a truck in downtown Tokyo and at Narita Airport, but there are no reported cases of botulism as a result. See also List of foodborne illness outbreaks Botulinum toxin References Further reading External links WHO fact sheet on botulism Botulism in the United States, 1889–1996. Handbook for Epidemiologists, Clinicians and Laboratory Technicians. Centers for Disease Control and Prevention. National Center for Infectious Diseases, Division of Bacterial and Mycotic Diseases 1998. NHS choices CDC Botulism: Control Measures Overview for Clinicians University of California, Santa Cruz Environmental toxicology – Botulism CDC Botulism FAQ Biological agents Conditions diagnosed by stool test Foodborne illnesses Myoneural junction and neuromuscular diseases Poultry diseases
Botulism
Biology,Environmental_science
8,401
67,106,529
https://en.wikipedia.org/wiki/The%20Zuckerberg%20Institute%20for%20Water%20Research
Zuckerberg Institute for Water Research (ZIWR) is one of three research institutes constituting the Jacob Blaustein Institutes for Desert Research, a faculty of Ben-Gurion University of the Negev (BGU). The ZIWR is located on BGU's Sede Boqer Campus in Midreshet Ben-Gurion in Israel's Negev Desert, and hosts researchers who focus on developing new technologies to provide drinking water and water for agricultural and industrial use and to promote the sustainable use of water resources. The ZIWR encompasses the Department of Environmental Hydrology and Microbiology and the Department of Desalination and Water Treatment. History The Zuckerberg Institute for Water Research was founded in 2002 and was named for Roy J. Zuckerberg, Senior Director of the Goldman Sachs Group and a philanthropist based in New York City. The ZIWR is one of three institutes currently constituting the Jacob Blaustein Institutes for Desert Research, which were originally established in 1974. In 2016, the estate of Dr. Howard and Lottie Marcus made a donation of $400 million to Ben-Gurion University, believed to be the largest gift ever to a university in Israel, with a portion of it going to the Zuckerberg Institute for Water Research for research into water resources and desalination technologies. Academic programs The Institute runs two departments: the Department of Environmental Hydrology and Microbiology, and the Department of Desalination and Water Treatment. It also offers an MSc degree in Hydrology and Water Quality, in collaboration with the Albert Katz International School for Desert Studies, which is located at BGU's Sede Boqer Campus. Department of Environmental Hydrology and Microbiology The Department of Environmental Hydrology and Microbiology hosts researchers who specialize in hydrology, hydrogeology, chemistry, and microbiology. Some of their particular research areas include flow and transport processes, remediation of contaminated water, and biological treatment of wastewater. Department of Desalination and Water Treatment The Department of Desalination and Water Treatment employs researchers who focus on various aspects of desalination and water treatment processes, including the improvement and development of membranes for reverse osmosis, forward osmosis, and nanofiltration processes; processes to eliminate toxic materials from industrial effluents and polluted groundwater; and brine concentrate management. MSc in Hydrology and Water Quality This master's degree program, offered through the Albert Katz International School for Desert Studies, aims to introduce students to research in water sciences with the goal of improving human life in drylands and the development of policies for the sustainable use of water resources. The program offers the following tracks of study: 1. Water Resources, 2. Desalination and Water Treatment, and 3. Microbiology and Water Quality. Research Researchers from the ZIWR were involved in studies related to the COVID-19 pandemic. In the first, a study led by a team of researchers from the ZIWR and published in Nature Sustainability found that coronaviruses can persist in wastewater for several days, possibly leading to the spread of these viruses to humans. In another study, ZIWR researchers, in cooperation with scientists from Rice University in Houston, Texas, developed a laser-induced graphene technology that can filter airborne COVID-19 particles. 
References Ben-Gurion University of the Negev Research institutes in Israel Water desalination Hydrology organizations Water treatment Research institutes established in 2002
The Zuckerberg Institute for Water Research
Chemistry,Engineering,Environmental_science
710
77,707,545
https://en.wikipedia.org/wiki/List%20of%20regions%20of%20Kazakhstan%20by%20life%20expectancy
Bureau of National Statistics The tables below include data from Kazakhstan's Bureau of National Statistics, which has consistently published data on life expectancy by year, region, gender, and settlement type (urban and rural). Life expectancy in Kazakhstan increased in each year from 2005 to 2019, during which life expectancy for males went from 60.30 years to 68.82 years and life expectancy for females went from 71.77 years to 77.30 years. By 2023, life expectancy in Kazakhstan had reached 75.09 years. Latest data (2023) Past life expectancy (2019) Past data (1999–2023) Global Data Lab (2019–2022) Data source: Global Data Lab See also List of Asian countries by life expectancy Demographics of Kazakhstan References Demographics of Kazakhstan Health in Kazakhstan Kazakhstan, life expectancy Kazakhstan Regions of Kazakhstan Health-related lists Kazakhstan geography-related lists
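The trend figures quoted above imply a roughly steady annual gain, which a short sketch makes explicit (the inputs are the figures from this article; the calculation itself is illustrative):

# Average annual gain in life expectancy, 2005-2019, from the figures above.
years = 2019 - 2005
print(round((68.82 - 60.30) / years, 2))  # males: ~0.61 years per calendar year
print(round((77.30 - 71.77) / years, 2))  # females: ~0.40 years per calendar year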
List of regions of Kazakhstan by life expectancy
Biology
189
818,401
https://en.wikipedia.org/wiki/International%20Association%20of%20Machinists%20and%20Aerospace%20Workers
The International Association of Machinists and Aerospace Workers (IAM) is an AFL–CIO/CLC trade union. Origin On May 5, 1888, Thomas W. Talbot, a railroad machinist in Atlanta, Georgia, founded the Order of United Machinists and Mechanical Engineers. Talbot and 18 others had been members of the Knights of Labor. Talbot believed that a union needed to be formed for railroad machinists that would resist wage cuts. He wanted to provide insurance against unemployment, illness, and accidents but also wanted railroad machinists to be recognized for their craft skill. Unlike the Knights of Labor, who accepted everyone, Talbot's union accepted only white US citizens, preferably native-born. The union excluded blacks, women, and non-citizens, and had secret passwords. Despite the secrecy, the order spread beyond Georgia, thanks in part to "boomers", men who traveled the railway lines for work. These boomers established local lodges in new areas. Within one year there were 40 lodges, and by 1891, there were 189. On May 6, 1889, the Machinists held their first major convention in Atlanta. Talbot was elected the Grand Master Machinist (later known as the international president), and William L. Dawley was elected as Grand Secretary (now known as General Secretary-Treasurer). The organization's name was changed to the National Association of Machinists (NAM) and a constitution was drawn up. The NAM began publishing the 16-page Machinists Monthly Journal. Also in 1889, Frank French designed an emblem for the union. The emblem consisted of a flywheel, a friction joint caliper, and a machinist's square with the initials of the organization. According to French, the flywheel represented the ongoing power of the union once it started, and the caliper signified an extended invitation to all persons of civilized countries. The square signified that IAM was square and honest. In 1890 and 1891, NAM reached Canada, making Canadians the first international members. Locals were also formed in Mexico. To reflect this, in 1891 the name was changed from National Association of Machinists to International Association of Machinists (IAM) at a conference in Pittsburgh, Pennsylvania. In 1892, IAM signed a contract with the Atchison, Topeka and Santa Fe Railway, establishing the first organized shop at a railroad in the United States. Because IAM had a color bar, the American Federation of Labor (AFL) did not accept IAM right away. After IAM finally did join the AFL, AFL President Samuel Gompers urged IAM to drop its whites-only rule. But IAM maintained racial segregation, arguing that it needed to retain Southern members. Talbot wanted the union to be a fraternity of white men born in the United States who possessed good moral character. His sentiments were echoed by other AFL member unions, whose locals routinely discriminated against black workers through racial exclusion policies, which the AFL rarely commented on. 1920s–1940s World War I and wartime production drove membership in the Machinists to 300,000 in 1918, making it the country's largest union. Just five years later, membership dropped to 80,000. Amid the Great Depression, membership declined further, to 50,000—some 23,000 of whom were unemployed. In 1935, the machinists started to organize in the airline industry. In 1936, the Boeing Company in Seattle, Washington, signed the industry's first labor agreement. By 1938, the IAM had negotiated the first union agreement in air transportation with Eastern Air Lines. 
In 1944, IAM union members established an education department to publish a supplemental journal. Initially published weekly by The Machinist, the IAM newspaper, the journal's production was eventually reduced to twice a year, then voted out of existence in 1956. It was replaced with a quarterly magazine entitled The IAMW Journal. Break with AFL In 1945, IAM disaffiliated from the AFL, which had failed to settle a jurisdictional dispute between IAM and the United Brotherhood of Carpenters and Joiners of America as well as the Amalgamated Association of Street and Electric Railway and Motor Coach Employees of America. In 1947, Congress passed the Taft–Hartley Act, officially known as the Labor-Management Relations Act, which placed restrictions on union activities. The act also contained provisions that made closed shops illegal and outlawed secondary boycotts. Its provision allowing states to pass right-to-work laws, which let them restrict union-security arrangements such as the union shop, was especially controversial. The machinists worked with AFL unions in an unsuccessful effort to repeal the act, and the limitations it imposed on union political activity led to the creation of the Machinists' Non-Partisan Political League. In 1948, Lodge 751 went on strike against the Boeing Company in Seattle, Washington. The machinists preserved longstanding seniority rules that the company wanted to abolish and achieved a 10 percent hourly raise. IAM also competed for members with the United Auto Workers of America in the automotive industry and with the United Aerospace Workers for aircraft workers in that industry. In 1949, IAM signed no-raiding agreements with both unions. Those agreements became the model for other unions when the AFL and the CIO merged in 1955. Recent history The 1950s was a period of rapid growth for IAM. Wartime production of jet engines had led IAM to expand into the aircraft industry, and membership nearly doubled, from 501,000 in 1949 to 903,000 in 1958, in large part due to the burgeoning airline industry. This growth also reflected steps IAM took to begin to move away from its racist past. In 1955, under the leadership of President Al Hayes, IAM became more of an industrial union; it began to shift from railroad work to metal fabrication, and aerospace workers were attracted to join IAM. The trade union produced a first-of-its-kind radio show, Boomer Jones, to tell its history in a modern way. As a result of the influx of members from the airlines and the new American space program, delegates voted at the 1964 convention to change the name to the International Association of Machinists and Aerospace Workers. IAMAW later struck five major airlines: Eastern, National, Northwest, Trans World, and United Airlines. 35,400 IAMAW members in 231 cities grounded the airlines for 43 days, finally winning 5 percent raises in three successive years. In 1982, due to individual and corporate bankruptcies, IAM membership dropped to 820,211 members from a high of 927,000 in 1973. Also in 1982, the IAM initiated a boycott against Brown & Sharpe, a machine, precision measuring and cutting tool manufacturer headquartered in Rhode Island. 
The boycott was called after the firm refused to bargain in good faith (withdrawing previously negotiated clauses in the contract) and forced the union into a strike, during which police sprayed pepper gas on some 800 picketers at the company's North Kingstown plant in early 1982. Three weeks later, a machinist narrowly escaped serious injury when a shot fired into the picket line hit his belt buckle. The National Labor Relations Board later charged Brown & Sharpe with regressive bargaining and with entering into negotiations with the express purpose of not reaching an agreement with the union. It was not until 1998, nearly seventeen years after the strike began, that the Rhode Island Supreme Court ended the legal battle, ultimately siding with Brown & Sharpe in its plea that it had not illegally forced the strike. By this point, both Brown & Sharpe and its erstwhile work force were retreating from manufacturing in Rhode Island. From 1981 to 1990 the union owned and operated an Indy Car racing team, Machinists Union Racing. In 1991, the union absorbed the Pattern Makers' League of North America. The Transportation Communications International Union (TCU) merged with the IAM after a TCU member vote in July 2005. On September 7, 2008, the union began a 57-day strike against Boeing over issues with outsourcing, job security, pay and benefits. The union continues to organize workers at new companies today. In December 2013, the union's attempt to represent workers at an Amazon.com fulfillment center in Middletown, Delaware, failed. In 2020, the union began a strike at Bath Iron Works, a major shipyard in Bath, Maine, over disagreements regarding a new labor contract with the company. The strike, occurring during the COVID-19 pandemic, was described by the IAM President as "the largest strike in the United States of America right now." The strike ended after two months, with new labor contract agreements viewed as favorable to the union members. On September 12, 2024, IAM District 751 voted to strike against Boeing over a proposed contract's pay and benefits, with 94.6% of members voting to reject the contract and 96% in favor of a strike. The union leadership had reached a tentative agreement with Boeing prior to the vote and endorsed the contract. Composition According to IAM's Department of Labor records, since 2005, when membership classifications were first reported, the union's membership has been in a general slow decline, including "dues paying", "retired", and "exempt" members. Despite this, "life" members were reported to have had a 22 percent increase during this period, and "unemployed" members momentarily increased to a peak in 2009, before also declining. Members classified as "on strike" have varied considerably throughout, although remaining less than 1 percent of the total membership. IAM contracts also cover some non-members, known as agency fee payers, which since 2005 have grown to just over 1 percent of the size of the union's membership. As of 2013, this accounts for about 145,000 "retirees" (25 percent), 52,000 "life" members (9 percent), 26,000 "exempt" members (5 percent), and 14,000 "unemployed" members (2 percent), plus about 7,000 non-members paying agency fees, compared to about 333,000 "dues paying" members (58 percent). Affiliates National Federation of Federal Employees Transportation Communications International Union International Presidents 1888–1890: Thomas W. Talbot 1890–1892: James J. 
Creamer 1892–1893: John O'Day 1893–1911: James O'Connell 1911–1926: William Hugh Johnston 1926–1939: Arthur O. Wharton 1939–1949: Harvey W. Brown 1949–1965: Al J. Hayes 1965–1969: P. L. Siemiller 1969–1977: Floyd E. Smith 1977–1989: William W. Winpisinger 1989–1997: George Kourpias 1997–2016: R. Thomas Buffenbarger 2016–present: Robert Martinez Jr. Legal issues The association was sued in 2010 after a disputed election of local union officers. A lawsuit was brought by the officers against the union claiming libel, defamation, and invasion of privacy. While the court ruled for the defendant, the case exposed thousands of grievances filed against the union, dubious handling of the election, and other misconduct, including using union computer equipment for viewing pornography, missing promotional items purchased with union funds, and significant mishandling of union funds. In 1980, the union sued the Federal Election Commission over issues surrounding political action committee contributions. This case eventually went to the Supreme Court. In 2022, a Boeing worker sued the union over unlawful dues deductions. The settlement required IAM to return illegally seized dues. The association sued furniture maker IKEA over unsafe workplace issues in 2010. See also League of Independent Workers of the San Joaquin Valley International Woodworkers of America References Archives Preliminary Guide to the International Association of Machinists Hope Lodge 79 Records. 1932–1941. 25 items. International Association of Machinists and Aerospace Workers, Aerospace Industrial District Lodge 751 Publications. 1939–2008. Jackie Boschok Papers. 1979–2013. 16.32 cubic feet (22 boxes), 2 oversize folders. George E. Rennar Papers. 1933–1972. 37.43 cubic feet. Matthew C. Bates Papers. 1988–2002. 0.48 cubic feet (1 box and 1 oversize folder). External links Aerospace Union International Association of Machinists and Aerospace Workers, Canada IAMAW Collection. Historical materials related to IAM held by Georgia State University, Special Collections, Southern Labor Archives. Online guide retrieved April 27, 2005. AFL-CIO affiliates Canadian Labour Congress affiliates International Metalworkers' Federation International Transport Workers' Federation International Federation of Building and Wood Workers Trade unions established in 1888 Aerospace Engineering trade unions 1888 establishments in Georgia (U.S. state) Organizations based in Maryland
International Association of Machinists and Aerospace Workers
Physics
2,662
36,667,067
https://en.wikipedia.org/wiki/John%20Fewster
John Fewster (1738 – 3 April 1824) was a surgeon and apothecary in Thornbury, Gloucestershire. Fewster, a friend and professional colleague of Edward Jenner, played an important role in the discovery of the smallpox vaccine. In 1768, Fewster realized that prior infection with cowpox rendered a person immune to smallpox. Fewster was educated at Bristol Grammar School before a seven-year apprenticeship at the Bristol Infirmary. Development of the smallpox vaccine In 1768, Fewster noted that two brothers (named Creed) had both been variolated (purposefully infected with smallpox) but that one did not react at all to variolation. On questioning, this subject had never had smallpox, but had previously contracted cowpox. This prompted Fewster to wonder whether cowpox might protect against smallpox, a notion of which he was previously unaware. He is reported to have discussed this possibility over a Convivio-Medical Society dinner at the Ship Inn in Alveston. He also encouraged others to take up the inquiry. Amongst those at the meeting was Edward Jenner, a young medical apprentice at the time. Fewster followed up this observation, but only to a limited extent and not in writing. In 1796, Fewster was called to visit a local boy who was ill with early smallpox and was asked by John Player, the boy's uncle, whether he would consider inoculating the boy with cowpox to save him from smallpox. According to Player, Fewster replied that he had already thought of this but had decided against it as, in his view, variolation was very successful and an alternative seemed unnecessary. Nonetheless, Player reports, Fewster went on to inoculate three children in Thornbury with cowpox during spring 1796. These vaccinations took place at around the same time as Jenner's first vaccination attempts. Fewster never made any claim to have discovered vaccination. References See also The Fewster Family - Apothecaries and Surgeons of Thornbury Edward Jenner Smallpox vaccine 1738 births 1824 deaths English surgeons English apothecaries People from Thornbury, Gloucestershire Vaccinologists British immunologists 18th-century English medical doctors 18th-century English people Smallpox vaccines People educated at Bristol Grammar School
John Fewster
Biology
462
20,257,727
https://en.wikipedia.org/wiki/Muddy%20flood
A muddy flood is produced by an accumulation of run-off over agricultural land. Sediments are picked up by the run-off and carried as suspended matter or bed-load. Muddy floods are typically a hill-slope process and should not be confused with mudflows produced by mass movements. Muddy floods can damage road infrastructure, deposit blankets of mud, clog sewers, and damage private property. They have been referred to as 'muddy floods' since the 1980s. A similar designation appeared in French ('inondations boueuses') during the same period. Muddy flood generation Muddy runoff is generated on agricultural land when the soil surface is exposed or sparsely covered by vegetation. A large quantity of run-off, usually generated by heavy storms, is needed to start such a flood. Muddy flood occurrence Muddy floods have been observed across the entire European loess belt. Affected areas include Normandy and Picardy (France), central Belgium, and southern Limburg (the Netherlands). Muddy floods have also been observed in Slovakia and Poland. Temporal evolution An increase in muddy flood frequency has been observed during the last twenty years (e.g. in central Belgium). This increase in frequency may be due to a number of factors, including: a change in agricultural practices that leaves fields bare of crops in the autumn and winter; a shift to crops that are more sensitive to soil erosion; land consolidation (enlargement of fields, removal of landscape buffer elements such as hedges); construction of new houses upstream of cropland, increasing run-off volumes and intensity; and an increased frequency of heavy rainfall. Control measures Preventive measures consist in limiting runoff generation and sediment production at the source. Alternative farming practices (e.g. reduced tillage) that increase runoff infiltration and limit erosion may assist. Curative measures generally consist in installing retention ponds at the boundary between cropland and inhabited areas. An alternative is to apply other measures that can be referred to as intermediate measures. Grass buffer strips along or within fields, grassed waterways (in the thalwegs of dry valleys), and earthen dams are good examples of this type of measure. They act as a buffer within the landscape, retaining runoff temporarily and trapping sediments. Implementation of these measures is best coordinated at the catchment scale. See also November 2010 European Windstorms References Flood
Muddy flood
Environmental_science
472
37,966,080
https://en.wikipedia.org/wiki/HD%20160529
HD 160529 (V905 Scorpii) is a luminous blue variable (LBV) star located in the constellation of Scorpius. With an apparent magnitude of around +6.8, it cannot be seen with the naked eye except under very favourable conditions, but it is easy to see with binoculars or amateur telescopes. Physical characteristics V905 Sco has a peculiar variable spectral type with emission lines and P Cygni profiles. At visual maximum it is similar to an A9 star and at minimum close to B8. The distance has been estimated at 2.5 kiloparsecs (8,200 light years) based on the assumption of an absolute magnitude of −8.9. However, this distance is uncertain, and values between 1.9 kiloparsecs and 3.5 kiloparsecs have been proposed. Working with a distance of 2.5 kiloparsecs, the radius is larger in outburst than when quiescent. The temperature also varies, from 8,000 K in outburst to 12,000 K when quiescent. With these parameters, the apparent visual magnitude varies by 0.5 while the bolometric luminosity remains constant. Estimates of the surface gravity lead to estimates of the current mass and the probable initial mass. This suggests that V905 Sco is a former red supergiant star. References A-type hypergiants Luminous blue variables Scorpius 160529 Scorpii, V905 086624 Durchmusterung objects
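The quoted distance and absolute magnitude are linked through the standard distance modulus, with an extinction term. Since no extinction value is given in the article, the A_V figure below is a hypothetical value chosen to reproduce the 2.5 kpc estimate; a minimal sketch:

```python
def distance_pc(m, M, A_V=0.0):
    """Distance in parsecs from the distance modulus m - M = 5*log10(d/10pc) + A_V."""
    return 10 ** ((m - M - A_V) / 5 + 1)

# With m = +6.8 and M = -8.9 and zero extinction, the naive distance is ~13.8 kpc:
print(distance_pc(6.8, -8.9))           # ~13,800 pc, ignoring extinction
# Reproducing the quoted 2.5 kpc requires roughly A_V ~ 3.7 mag of visual
# extinction, a hypothetical figure not stated in the article:
print(distance_pc(6.8, -8.9, A_V=3.7))  # ~2,500 pc
```

An extinction of a few magnitudes would be plausible for a sightline near the Galactic plane, which is consistent with the large spread of published distances.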
HD 160529
Astronomy
315
988,751
https://en.wikipedia.org/wiki/Personal%20knowledge%20management
Personal knowledge management (PKM) is a process of collecting information that a person uses to gather, classify, store, search, retrieve and share knowledge in their daily activities and the way in which these processes support work activities. It is a response to the idea that knowledge workers need to be responsible for their own growth and learning. It is a bottom-up approach to knowledge management (KM). History and background Although as early as 1998 Davenport wrote on the importance to worker productivity of understanding individual knowledge processes, the term personal knowledge management appears to be relatively new; its origin has been traced to a working paper. PKM integrates personal information management (PIM), focused on individual skills, with knowledge management (KM), in addition to input from a variety of disciplines such as cognitive psychology, management and philosophy. From an organizational perspective, understanding of the field has developed in light of expanding knowledge about human cognitive capabilities and the permeability of organizational boundaries. From a metacognitive perspective, it compares various modalities within human cognition as to their competence and efficacy. It is an under-researched area. More recently, research has been conducted to help understand "the potential role of Web 2.0 technologies for harnessing and managing personal knowledge". The Great Resignation has expanded the category of knowledge workers and is predicted to increase demand for personal knowledge management in the future. Models One early treatment identified information retrieval, assessment and evaluation, organization, analysis, presentation, security, and collaboration as essential to PKM. Wright's model involves four interrelated domains: analytical, information, social, and learning. The analytical domain involves competencies such as interpretation, envisioning, application, creation, and contextualization. The information dimension comprises the sourcing, assessment, organization, aggregation, and communication of information. The social dimension involves finding and collaborating with people, the development of both close networks and extended networks, and dialogue. The learning dimension entails expanding pattern recognition and sensemaking capabilities, reflection, development of new knowledge, improvement of skills, and extension to others. This model stresses the importance of both bonding and bridging networks. In Nonaka and Takeuchi's SECI model of knowledge dimensions (see under knowledge management), knowledge can be tacit or explicit, with the interaction of the two resulting in new knowledge. Smedley has developed a PKM model based on Nonaka and colleagues' model in which an expert provides direction while a community of practice provides support for personal knowledge creation. Trust is central to knowledge sharing in this model. Nonaka has returned to his earlier work in an attempt to further develop his ideas about knowledge creation. Personal knowledge management can also be viewed along two main dimensions, personal knowledge and personal management. Zhang has developed a model of PKM in relation to organizational knowledge management (OKM) that considers two axes of knowledge properties and management perspectives, either organizational or personal.
These aspects of organizational and personal knowledge are interconnected through the OAPI process (organizationalize, aggregate, personalize, and individualize), whereby organizational knowledge is personalized and individualized, and personal knowledge is aggregated and operationalized as organizational knowledge. Criticism It is not clear whether PKM is anything more than a new wrapper around personal information management (PIM). William Jones argued that only personal information as a tangible resource can be managed, whereas personal knowledge cannot. Dave Snowden has asserted that most individuals cannot manage their knowledge in the traditional sense of "managing" and has advocated thinking in terms of sensemaking rather than PKM. Knowledge is not solely an individual product—it emerges through connections, dialog, and social interaction (see Sociology of knowledge). However, in Wright's model, PKM involves the application to problem-solving of analytical, information, social, and learning dimensions, which are interrelated, and so is inherently social. An aim of PKM is "helping individuals to be more effective in personal, organizational and social environments", often through the use of technology such as networking software. It has been argued, however, that equating PKM with technology has limited the value and utility of the concept. In 2012, Mohamed Chatti introduced the personal knowledge network (PKN) model to KM as an alternative perspective on PKM, based on the concepts of a personal knowledge network and knowledge ecology. Skills Skills associated with personal knowledge management include: Collaboration skills. Coordination, synchronization, experimentation, cooperation and design. Communication skills. Perception, intuition, expression, visualization and interpretation. Creative skills. Imagination, pattern recognition, appreciation, innovation, inference. Understanding of complex adaptive systems. Information literacy. Understanding what information is important and how to find unknown information. Manage learning. Manage how and when the individual learns. Networking with others. Knowing what your network of people knows. Knowing who might have additional knowledge and resources to help you. Organizational skills. Personal librarianship. Personal categorization and taxonomies. Reflection. Continuous improvement on how the individual operates.
Researching, canvassing, paying attention, interviewing and observational "cultural anthropology" skills Tools Some organizations are introducing PKM "systems" with some or all of four components: Content management: taxonomy processes and desktop search tools that enable employees to subscribe to, find, organize and publish information that resides on their desktops Just-in-time canvassing: templates and e-mail canvassing lists that enable people to identify and connect with the appropriate experts and expertise quickly and effectively Knowledge harvesting: software tools that automatically collect appropriate knowledge residing on subject matter experts' hard drives Personal productivity improvement: knowledge fairs and 101 training sessions to help each employee make more effective personal use of the knowledge, learning, and technology resources available in the context of their work PKM has also been linked to these tools: Email, calendars, task managers Knowledge logs (k-logs) Social bookmarking and enterprise bookmarking Virtual assistants Wikis, including personal wikis and semantic wikis Web annotations Other useful tools include stories and narrative inquiry, decision support systems, various kinds of node–link diagrams (such as argument maps, mind maps, concept maps), and similar information visualization techniques. Individuals use these tools to capture ideas, expertise, experience, opinions or thoughts, and this "voicing" encourages cognitive diversity and promotes free exchange away from a centralized, policed knowledge repository. The goal is to facilitate knowledge sharing and personal content management. The most widely used software packages with PKM functions are: Logseq, which is FLOSS Notion (productivity software) Obsidian (software) Roam Research TiddlyWiki See also Adaptive hypermedia Card file Commonplace book Drakon-chart Memex Semantic desktop User modeling References Knowledge management Information systems
Personal knowledge management
Technology
1,387
2,994
https://en.wikipedia.org/wiki/Anemometer
In meteorology, an anemometer () is a device that measures wind speed and direction. It is a common instrument used in weather stations. The earliest known description of an anemometer was by Italian architect and author Leon Battista Alberti (1404–1472) in 1450. History The anemometer has changed little since its development in the 15th century. Alberti is said to have invented it around 1450. In the ensuing centuries, numerous others, including Robert Hooke (1635–1703), developed their own versions, with some mistakenly credited as its inventor. In 1846, Thomas Romney Robinson (1792–1882) improved the design by using four hemispherical cups and mechanical wheels. In 1926, Canadian meteorologist John Patterson (1872–1956) developed a three-cup anemometer, which was improved by Brevoort and Joiner in 1935. In 1991, Derek Weston added the ability to measure wind direction. In 1994, Andreas Pflitsch developed the sonic anemometer. Velocity anemometers Cup anemometers A simple type of anemometer was invented in 1845 by Rev. Dr. John Thomas Romney Robinson of Armagh Observatory. It consisted of four hemispherical cups on horizontal arms mounted on a vertical shaft. The air flow past the cups in any horizontal direction turned the shaft at a rate roughly proportional to the wind's speed. Therefore, counting the shaft's revolutions over a set time interval produced a value proportional to the average wind speed for a wide range of speeds. This type of instrument is also called a rotational anemometer. Four cup With a four-cup anemometer, the wind always has the hollow of one cup presented to it, and is blowing on the back of the opposing cup. Since a hollow hemisphere has a drag coefficient of 0.38 on the spherical side and 1.42 on the hollow side, more force is generated on the cup that presents its hollow side to the wind. Because of this asymmetrical force, torque is generated on the anemometer's axis, causing it to spin. Theoretically, the anemometer's speed of rotation should be proportional to the wind speed, because the force produced on an object is proportional to the speed of the gas or fluid flowing past it. However, in practice, other factors influence the rotational speed, including turbulence produced by the apparatus, increasing drag in opposition to the torque produced by the cups and support arms, and friction on the mount point. When Robinson first designed his anemometer, he asserted that the cups moved one-third of the speed of the wind, unaffected by cup size or arm length. This was apparently confirmed by some early independent experiments, but it was incorrect. Instead, the ratio of the speed of the wind and that of the cups, the anemometer factor, depends on the dimensions of the cups and arms, and can have a value between two and a little over three. Once the error was discovered, all previous experiments involving anemometers had to be repeated. Three cup The three-cup anemometer developed by Canadian John Patterson in 1926, and subsequent cup improvements by Brevoort & Joiner of the United States in 1935, led to a cupwheel design with a nearly linear response and an error of less than 3% up to its maximum rated speed. Patterson found that each cup produced maximum torque when it was at 45° to the wind flow. The three-cup anemometer also had a more constant torque and responded more quickly to gusts than the four-cup anemometer. Three cup wind direction The three-cup anemometer was further modified by Australian Dr. Derek Weston in 1991 to also measure wind direction.
He added a tag to one cup, causing the cupwheel speed to increase and decrease as the tag moved alternately with and against the wind. Wind direction is calculated from these cyclical changes in speed, while wind speed is determined from the average cupwheel speed. Three-cup anemometers are currently the industry standard for wind resource assessment studies and practice. Vane anemometers One of the other forms of mechanical velocity anemometer is the vane anemometer. It may be described as a windmill or a propeller anemometer. Unlike the Robinson anemometer, whose axis of rotation is vertical, the vane anemometer must have its axis parallel to the direction of the wind and is therefore horizontal. Furthermore, since the wind varies in direction and the axis has to follow its changes, a wind vane or some other contrivance to fulfill the same purpose must be employed. A vane anemometer thus combines a propeller and a tail on the same axis to obtain accurate and precise wind speed and direction measurements from the same instrument. The speed of the fan is measured by a revolution counter and converted to a windspeed by an electronic chip. Hence, volumetric flow rate may be calculated if the cross-sectional area is known. In cases where the direction of the air motion is always the same, as in ventilating shafts of mines and buildings, wind vanes known as air meters are employed, and give satisfactory results. Hot-wire anemometers Hot-wire anemometers use a fine wire (on the order of several micrometres) electrically heated to some temperature above the ambient. Air flowing past the wire cools the wire. As the electrical resistance of most metals is dependent upon the temperature of the metal (tungsten is a popular choice for hot wires), a relationship can be obtained between the resistance of the wire and the speed of the air. In most cases, they cannot be used to measure the direction of the airflow, unless coupled with a wind vane. Several ways of implementing this exist, and hot-wire devices can be further classified as CCA (constant current anemometer), CVA (constant voltage anemometer) and CTA (constant-temperature anemometer). The voltage output from these anemometers is thus the result of some sort of circuit within the device trying to maintain the specific variable (current, voltage or temperature) constant, following Ohm's law. Additionally, PWM (pulse-width modulation) anemometers are also used, wherein the velocity is inferred from the duration of a repeating pulse of current that brings the wire up to a specified resistance and then stops until a threshold "floor" is reached, at which time the pulse is sent again. Hot-wire anemometers, while extremely delicate, have extremely high frequency response and fine spatial resolution compared to other measurement methods, and as such are almost universally employed for the detailed study of turbulent flows, or any flow in which rapid velocity fluctuations are of interest. An industrial version of the fine-wire anemometer is the thermal flow meter, which follows the same concept, but uses two pins or strings to monitor the variation in temperature. The strings contain fine wires, but encasing the wires makes them much more durable and capable of accurately measuring air, gas, and emissions flow in pipes, ducts, and stacks. Industrial applications often contain dirt that will damage the classic hot-wire anemometer.
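The resistance–velocity relationship is usually captured by an empirical calibration rather than derived from first principles; the classic form is King's law, E² = A + B·Uⁿ, relating bridge voltage E to flow speed U. A minimal sketch follows; the coefficient values are illustrative placeholders, which in practice would come from calibrating against a known flow:

```python
# King's law: E^2 = A + B * U**n, with A, B, n found by calibration.
# The numbers here are placeholders for illustration, not real calibration data.
A, B, n = 1.30, 0.85, 0.45

def velocity_from_voltage(E):
    """Invert King's law to estimate flow speed U from bridge voltage E."""
    return ((E * E - A) / B) ** (1.0 / n)

print(velocity_from_voltage(2.0))  # ~13 m/s with these placeholder coefficients
```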
Laser Doppler anemometers In laser Doppler velocimetry, laser Doppler anemometers use a beam of light from a laser that is divided into two beams, with one propagated out of the anemometer. Particulates (or deliberately introduced seed material) flowing along with air molecules near where the beam exits reflect, or backscatter, the light back into a detector, where it is measured relative to the original laser beam. Moving particles produce a Doppler shift in the backscattered laser light, which is used to calculate the speed of the particles, and therefore of the air around the anemometer. Ultrasonic anemometers Ultrasonic anemometers, first developed in the 1950s, use ultrasonic sound waves to measure wind velocity. They measure wind speed based on the time of flight of sonic pulses between pairs of transducers. The time that a sonic pulse takes to travel from one transducer to its pair is inversely proportional to the speed of sound in air plus the wind velocity in the same direction: t = L / (c + v), where t is the time of flight, L is the distance between transducers, c is the speed of sound in air and v is the wind velocity. In other words, the faster the wind is blowing, the faster the sound pulse travels. To correct for the speed of sound in air (which varies according to temperature, pressure and humidity), sound pulses are sent in both directions and the wind velocity is calculated using the forward and reverse times of flight: v = (L / 2)(1 / t_f − 1 / t_r), where t_f is the forward time of flight and t_r the reverse; the corresponding sum, c = (L / 2)(1 / t_f + 1 / t_r), recovers the speed of sound. Because ultrasonic anemometers have no moving parts, they need little maintenance and can be used in harsh environments. They operate over a wide range of wind speeds. They can measure rapid changes in wind speed and direction, taking many measurements each second, and so are useful in measuring turbulent air flow patterns. Their main disadvantage is the distortion of the air flow by the structure supporting the transducers, which requires a correction based upon wind tunnel measurements to minimize the effect. Rain drops or ice on the transducers can also cause inaccuracies. Since the speed of sound varies with temperature, and is virtually stable with pressure change, ultrasonic anemometers are also used as thermometers. Measurements from pairs of transducers can be combined to yield a measurement of velocity in 1-, 2-, or 3-dimensional flow. Two-dimensional (wind speed and wind direction) sonic anemometers are used in applications such as weather stations, ship navigation, aviation, weather buoys and wind turbines. Monitoring wind turbines usually requires a refresh rate of wind speed measurements of 3 Hz, easily achieved by sonic anemometers. Three-dimensional sonic anemometers are widely used to measure gas emissions and ecosystem fluxes using the eddy covariance method when used with fast-response infrared gas analyzers or laser-based analyzers. Acoustic resonance anemometers Acoustic resonance anemometers are a more recent variant of sonic anemometer. The technology was invented by Savvas Kapartis and patented in 1999. Whereas conventional sonic anemometers rely on time-of-flight measurement, acoustic resonance sensors use resonating acoustic (ultrasonic) waves within a small purpose-built cavity in order to perform their measurement. Built into the cavity is an array of ultrasonic transducers, which are used to create the separate standing-wave patterns at ultrasonic frequencies. As wind passes through the cavity, a change in the wave's property occurs (phase shift).
By measuring the amount of phase shift in the received signals by each transducer, and then by mathematically processing the data, the sensor is able to provide an accurate horizontal measurement of wind speed and direction. Because acoustic resonance technology enables measurement within a small cavity, the sensors tend to be smaller than other ultrasonic sensors. The small size of acoustic resonance anemometers makes them physically strong and easy to heat, and therefore resistant to icing. This combination of features means that they achieve high levels of data availability and are well suited to wind turbine control and to other uses that require small robust sensors such as battlefield meteorology. One issue with this sensor type is measurement accuracy when compared to a calibrated mechanical sensor. For many end uses, this weakness is compensated for by the sensor's longevity and the fact that it does not require recalibration once installed. Pressure anemometers The first designs of anemometers that measure the pressure were divided into plate and tube classes. Plate anemometers These are the first modern anemometers. They consist of a flat plate suspended from the top so that the wind deflects the plate. In 1450, the Italian architect Leon Battista Alberti invented the first such mechanical anemometer; in 1663 it was re-invented by Robert Hooke. Later versions of this form consisted of a flat plate, either square or circular, which is kept normal to the wind by a wind vane. The pressure of the wind on its face is balanced by a spring. The compression of the spring determines the actual force which the wind is exerting on the plate, and this is either read off on a suitable gauge, or on a recorder. Instruments of this kind do not respond to light winds, are inaccurate for high wind readings, and are slow at responding to variable winds. Plate anemometers have been used to trigger high wind alarms on bridges. Tube anemometers James Lind's anemometer of 1775 consisted of a vertically mounted glass U tube containing a liquid manometer (pressure gauge), with one end bent out in a horizontal direction to face the wind flow and the other vertical end capped. Though Lind's was not the first, it was the most practical and best-known anemometer of this type. If the wind blows into the mouth of a tube, it causes an increase of pressure on one side of the manometer. The wind over the open end of a vertical tube causes little change in pressure on the other side of the manometer. The resulting elevation difference in the two legs of the U tube is an indication of the wind speed. However, an accurate measurement requires that the wind speed be directly into the open end of the tube; small departures from the true direction of the wind cause large variations in the reading. The successful metal pressure tube anemometer of William Henry Dines in 1892 utilized the same pressure difference between the open mouth of a straight tube facing the wind and a ring of small holes in a vertical tube which is closed at the upper end. Both are mounted at the same height. The pressure differences on which the action depends are very small, and special means are required to register them. The recorder consists of a float in a sealed chamber partially filled with water. The pipe from the straight tube is connected to the top of the sealed chamber and the pipe from the small tubes is directed into the bottom inside the float.
Since the pressure difference determines the vertical position of the float, this is a measure of the wind speed. The great advantage of the tube anemometer lies in the fact that the exposed part can be mounted on a high pole, and requires no oiling or attention for years; and the registering part can be placed in any convenient position. Two connecting tubes are required. It might appear at first sight as though one connection would serve, but the differences in pressure on which these instruments depend are so minute that the pressure of the air in the room where the recording part is placed has to be considered. Thus, if the instrument depends on the pressure or suction effect alone, and this pressure or suction is measured against the air pressure in an ordinary room in which the doors and windows are carefully closed and a newspaper is then burnt up the chimney, an effect may be produced equal to a wind of 10 mi/h (16 km/h); and the opening of a window in rough weather, or the opening of a door, may entirely alter the registration. While the Dines anemometer had an error of only 1% at its rated speed, it did not respond very well to low winds due to the poor response of the flat plate vane required to turn the head into the wind. In 1918 an aerodynamic vane with eight times the torque of the flat plate overcame this problem. Pitot tube static anemometers Modern tube anemometers use the same principle as in the Dines anemometer, but using a different design. The implementation uses a pitot-static tube, which is a pitot tube with two ports, pitot and static, that is normally used in measuring the airspeed of aircraft. The pitot port measures the total pressure at the open mouth of a tube with pointed head facing the wind, and the static port measures the static pressure from small holes along the side of that tube; the difference between the two is the dynamic pressure. The pitot tube is connected to a tail so that it always makes the tube's head face the wind. Additionally, the tube is heated to prevent rime ice formation on the tube. There are two lines from the tube down to the devices to measure the difference in pressure of the two lines. The measurement devices can be manometers, pressure transducers, or analog chart recorders. Ping-pong ball anemometers A common anemometer for basic use is constructed from a ping-pong ball attached to a string. When the wind blows horizontally, it presses on and moves the ball; because ping-pong balls are very lightweight, they move easily in light winds. Measuring the angle between the string-ball apparatus and the vertical gives an estimate of the wind speed. This type of anemometer is mostly used for middle-school level instruction, which most students make on their own, but a similar device was also flown on the Phoenix Mars Lander. Effect of density on measurements In the tube anemometer the dynamic pressure is actually being measured, although the scale is usually graduated as a velocity scale. If the actual air density differs from the calibration value, due to differing temperature, elevation or barometric pressure, a correction is required to obtain the actual wind speed. Approximately 1.5% (1.6% above 6,000 feet) should be added to the velocity recorded by a tube anemometer for each 1000 ft (5% for each kilometer) above sea level. Effect of icing At airports, it is essential to have accurate wind data under all conditions, including freezing precipitation. Anemometry is also required in monitoring and controlling the operation of wind turbines, which in cold environments are prone to in-cloud icing.
Icing alters the aerodynamics of an anemometer and may entirely block it from operating. Therefore, anemometers used in these applications must be internally heated. Both cup anemometers and sonic anemometers are presently available in heated versions. Instrument location In order for wind speeds to be comparable from location to location, the effect of the terrain needs to be considered, especially in regard to height. Other considerations are the presence of trees, and both natural canyons and artificial canyons (urban buildings). The standard anemometer height in open rural terrain is 10 meters. See also Air flow meter Anemoi, for the ancient origin of the name of this technology Anemoscope, ancient device for measuring or predicting wind direction or weather Automated airport weather station Night of the Big Wind Particle image velocimetry Savonius wind turbine Wind power forecasting Wind run Windsock, a simple high-visibility indicator of approximate wind speed and direction Notes References Meteorological Instruments, W.E. Knowles Middleton and Athelstan F. Spilhaus, Third Edition revised, University of Toronto Press, Toronto, 1953 Invention of the Meteorological Instruments, W. E. Knowles Middleton, The Johns Hopkins Press, Baltimore, 1969 External links Description of the development and the construction of an ultrasonic anemometer Animation Showing Sonic Principle of Operation (Time of Flight Theory) – Gill Instruments Collection of historical anemometer Principle of Operation: Acoustic Resonance measurement – FT Technologies Thermopedia, "Anemometers (laser doppler)" Thermopedia, "Anemometers (pulsed thermal)" Thermopedia, "Anemometers (vane)" The Rotorvane Anemometer. Measuring both wind speed and direction using a tagged three-cup sensor Italian inventions Measuring instruments Meteorological instrumentation and equipment Navigational equipment Wind power 15th-century inventions
Anemometer
Technology,Engineering
4,043
77,841,764
https://en.wikipedia.org/wiki/BMSS%20Medal
The BMSS Medal is awarded by the British Mass Spectrometry Society to individual members of the Society who have worked in the United Kingdom and have made sustained contributions to the promotion and advancement of mass spectrometry, primarily within the UK. Details The medal is awarded only occasionally, with no more than one awarded in any year. Recipients of this honour receive a gold-plated medal as well as an award certificate. Recipients Edward Houghton Anthony Mallet John J. Monaghan Frank S Pullen Gareth Brenton Alison Ashcroft John G. Langley Michael Morris See also List of chemistry awards References External links Landmarks in the last 50 years of British Mass Spectrometry Academic awards Mass spectrometry awards British science and technology awards
BMSS Medal
Physics
161
95,465
https://en.wikipedia.org/wiki/Stirling%20number
In mathematics, Stirling numbers arise in a variety of analytic and combinatorial problems. They are named after James Stirling, who introduced them in a purely algebraic setting in his book Methodus differentialis (1730). They were rediscovered and given a combinatorial meaning by Masanobu Saka in 1782. Two different sets of numbers bear this name: the Stirling numbers of the first kind and the Stirling numbers of the second kind. Additionally, Lah numbers are sometimes referred to as Stirling numbers of the third kind. Each kind is detailed in its respective article, this one serving as a description of relations between them. A common property of all three kinds is that they describe coefficients relating three different sequences of polynomials that frequently arise in combinatorics. Moreover, all three can be defined as the number of partitions of n elements into k non-empty subsets, where each subset is endowed with a certain kind of order (no order, cyclical, or linear). Notation Several different notations for Stirling numbers are in use. Ordinary (signed) Stirling numbers of the first kind are commonly denoted s(n, k). Unsigned Stirling numbers of the first kind, which count the number of permutations of n elements with k disjoint cycles, are denoted c(n, k) = |s(n, k)|, also written with a bracket notation analogous to binomial coefficients. Stirling numbers of the second kind, which count the number of ways to partition a set of n elements into k nonempty subsets, are denoted S(n, k), also written with a brace notation. Abramowitz and Stegun use an uppercase S and a blackletter 𝔖, respectively, for the first and second kinds of Stirling number. The notation of brackets and braces, in analogy to binomial coefficients, was introduced in 1935 by Jovan Karamata and promoted later by Donald Knuth, though the bracket notation conflicts with a common notation for Gaussian coefficients. Other, infrequent notations are also encountered. The mathematical motivation for this type of notation, as well as additional Stirling number formulae, may be found on the page for Stirling numbers and exponential generating functions. Expansions of falling and rising factorials Stirling numbers express coefficients in expansions of falling and rising factorials (also known as the Pochhammer symbol) as polynomials. That is, the falling factorial, defined as (x)_n = x(x − 1)⋯(x − n + 1), is a polynomial in x of degree n whose expansion is (x)_n = Σ_{k=0}^{n} s(n, k) x^k, with (signed) Stirling numbers of the first kind as coefficients. Note that (x)_0 = 1 by convention, because it is an empty product. Underline and overline notations for the falling and rising factorials (an x with an underlined or overlined exponent n) are also often used. (Confusingly, the Pochhammer symbol that many use for falling factorials is used in special functions for rising factorials.) Similarly, the rising factorial, defined as x^(n) = x(x + 1)⋯(x + n − 1), is a polynomial in x of degree n whose expansion is x^(n) = Σ_{k=0}^{n} c(n, k) x^k, with unsigned Stirling numbers of the first kind as coefficients. One of these expansions can be derived from the other by observing that x^(n) = (−1)^n (−x)_n. Stirling numbers of the second kind express the reverse relations: x^n = Σ_{k=0}^{n} S(n, k) (x)_k and x^n = Σ_{k=0}^{n} (−1)^(n−k) S(n, k) x^(k). As change of basis coefficients Considering the set of polynomials in the (indeterminate) variable x as a vector space, each of the three sequences is a basis. That is, every polynomial in x can be written as a sum p(x) = Σ_k a_k (x)_k for some unique coefficients a_k (similarly for the other two bases). The above relations then express the change of basis between them, as summarized in a commutative diagram relating the three bases; the coefficients for the two bottom changes in that diagram are described by the Lah numbers below.
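These expansions are easy to verify numerically. A minimal Python sketch, using the standard two-term recurrences for both kinds (well-known identities, though not quoted in this article), which also anticipates the inverse-matrix relations discussed below:

```python
def stirling2(n, k):
    """Stirling numbers of the second kind: S(n,k) = k*S(n-1,k) + S(n-1,k-1)."""
    if n == 0 and k == 0:
        return 1
    if n == 0 or k == 0:
        return 0
    return k * stirling2(n - 1, k) + stirling2(n - 1, k - 1)

def stirling1_unsigned(n, k):
    """Unsigned first kind: c(n,k) = (n-1)*c(n-1,k) + c(n-1,k-1)."""
    if n == 0 and k == 0:
        return 1
    if n == 0 or k == 0:
        return 0
    return (n - 1) * stirling1_unsigned(n - 1, k) + stirling1_unsigned(n - 1, k - 1)

def falling(x, n):
    """Falling factorial (x)_n = x(x-1)...(x-n+1)."""
    result = 1
    for i in range(n):
        result *= x - i
    return result

# x^n = sum_k S(n, k) (x)_k : standard basis expressed in falling factorials.
n, x = 5, 7
assert x ** n == sum(stirling2(n, k) * falling(x, k) for k in range(n + 1))

# Signed first kind s(n, k) = (-1)^(n-k) c(n, k); the two kinds form inverse
# lower-triangular matrices: sum_j s(n, j) S(j, k) = delta(n, k).
N = 6
for a in range(N):
    for b in range(N):
        total = sum((-1) ** (a - j) * stirling1_unsigned(a, j) * stirling2(j, b)
                    for j in range(N))
        assert total == (1 if a == b else 0)
print("expansion and inverse-matrix identities verified")
```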
Since coefficients in any basis are unique, one can define Stirling numbers this way, as the coefficients expressing polynomials of one basis in terms of another, that is, the unique numbers relating x^n with falling and rising factorials as above. Falling factorials define, up to scaling, the same polynomials as binomial coefficients: binom(x, k) = (x)_k / k!. The changes between the standard basis and the basis of binomial coefficients are thus described by similar formulas, with factors of k! inserted. Example Expressing a polynomial in the basis of falling factorials is useful for calculating sums of the polynomial evaluated at consecutive integers. Indeed, the sum of falling factorials with fixed k can be expressed as another falling factorial (for k ≠ −1): Σ_{0 ≤ i < n} (i)_k = (n)_{k+1} / (k + 1). This can be proved by induction. For example, the sum of fourth powers of integers up to n (this time with n included) is Σ_{i=0}^{n} i^4 = Σ_k S(4, k) (n + 1)_{k+1} / (k + 1). Here the Stirling numbers can be computed from their definition as the number of partitions of 4 elements into k non-empty unlabeled subsets. In contrast, the corresponding sum in the standard basis is given by Faulhaber's formula, which in general is more complicated. As inverse matrices The Stirling numbers of the first and second kinds can be considered inverses of one another: Σ_j s(n, j) S(j, k) = δ_{nk} and Σ_j S(n, j) s(j, k) = δ_{nk}, where δ_{nk} is the Kronecker delta. These two relationships may be understood to be matrix inverse relationships. That is, let s be the lower triangular matrix of Stirling numbers of the first kind, whose matrix elements are s_{nk} = s(n, k). The inverse of this matrix is S, the lower triangular matrix of Stirling numbers of the second kind, whose entries are S_{nk} = S(n, k). Symbolically, this is written s^(−1) = S. Although s and S are infinite, so calculating a product entry involves an infinite sum, the matrix multiplications work because these matrices are lower triangular, so only a finite number of terms in the sum are nonzero. Lah numbers The Lah numbers L(n, k) are sometimes called Stirling numbers of the third kind. By convention, L(0, 0) = 1, and L(n, k) = 0 if n < k or if k = 0 < n. These numbers are coefficients expressing falling factorials in terms of rising factorials and vice versa: x^(n) = Σ_k L(n, k) (x)_k and (x)_n = Σ_k (−1)^(n−k) L(n, k) x^(k). As above, this means they express the change of basis between the bases (x)_n and x^(n), completing the diagram. In particular, one formula is the inverse of the other, thus: Σ_j (−1)^(n−j) L(n, j) L(j, k) = δ_{nk}. Similarly, composing the change of basis from x^(n) to the standard basis with the change of basis from the standard basis to (x)_n gives the change of basis directly from x^(n) to (x)_n, and similarly for other compositions. In terms of matrices, if L denotes the matrix with entries L_{nk} = L(n, k) and L⁻ denotes the matrix with entries (−1)^(n−k) L(n, k), then one is the inverse of the other: L⁻ = L^(−1). Composing the matrix of unsigned Stirling numbers of the first kind with the matrix of Stirling numbers of the second kind gives the Lah numbers: L(n, k) = Σ_j c(n, j) S(j, k). Enumeratively, S(n, k), c(n, k) and L(n, k) can be defined as the number of partitions of n elements into k non-empty unlabeled subsets, where each subset is endowed with no order, a cyclic order, or a linear order, respectively. In particular, this implies the inequalities S(n, k) ≤ c(n, k) ≤ L(n, k). Inversion relations and the Stirling transform For any pair of sequences, {f_n} and {g_n}, related by a finite sum Stirling number formula f(n) = Σ_{k=0}^{n} S(n, k) g(k) for all integers n ≥ 0, we have a corresponding inversion formula for g(n) given by g(n) = Σ_{k=0}^{n} s(n, k) f(k). The lower indices could be any integer between 0 and n.
These inversion relations between the two sequences translate into functional equations between the sequence exponential generating functions given by the Stirling (generating function) transform as F(z) = G(e^z − 1) and G(z) = F(log(1 + z)). For D = d/dx, the differential operators x^n D^n and (xD)^n are related by the following formulas for all integers n ≥ 0: (xD)^n = Σ_k S(n, k) x^k D^k and x^n D^n = Σ_k s(n, k) (xD)^k. Another pair of "inversion" relations involving the Stirling numbers relate the forward differences and the ordinary derivatives of a function, f(x), which is analytic for all x, by the formulas (1/k!) d^k f/dx^k = Σ_{n≥k} (s(n, k)/n!) Δ^n f(x) and (1/k!) Δ^k f(x) = Σ_{n≥k} (S(n, k)/n!) d^n f/dx^n. Similar properties See the specific articles for details. Symmetric formulae Abramowitz and Stegun give symmetric formulae that relate the Stirling numbers of the first and second kind. Stirling numbers with negative integral values The Stirling numbers can be extended to negative integral values, but not all authors do so in the same way. Regardless of the approach taken, it is worth noting that Stirling numbers of first and second kind are connected by the relations c(n, k) = S(−k, −n) and S(n, k) = c(−k, −n) when n and k are nonnegative integers. Donald Knuth defined the more general Stirling numbers by extending a recurrence relation to all integers. In this approach, c(n, k) and S(n, k) are zero if n is negative and k is nonnegative, or if n is nonnegative and k is negative, and so we have, for any integers n and k, c(n, k) = S(−k, −n). On the other hand, for positive integers n and k, David Branson defined c(−n, k) and S(−n, k) (but not c(n, −k) or S(n, −k)). In this approach, one has an extension of the recurrence relation of the Stirling numbers of the first kind to negative n, leading to a table of values of c(n, k) for negative integral n. In this case one of the resulting quantities is a Bell number, and so one may define the negative Bell numbers by analogy. See also Bell polynomials Catalan number Cycles and fixed points Pochhammer symbol Polynomial sequence Touchard polynomials Stirling permutation Citations References Further reading Permutations Q-analogs Factorial and binomial topics Integer sequences
Stirling number
Mathematics
1,679
38,551,936
https://en.wikipedia.org/wiki/Film%20awards%20seasons
Film awards season is the annual period between November and February in the United States during which a majority of significant film award events take place. In October, ballots are sent to voters to collect nominations for the first award ceremonies, usually the Governors Awards or the independent Gotham Awards, which open awards season in November. The season usually culminates in the Academy Awards in late February or early March (the latter in Winter Olympics years). In 2021, the season ended with the delayed Academy Awards ceremony on April 25, 2021, due to the COVID-19 pandemic, with many other ceremonies and film festivals moving their dates in turn. Though they only cover film scores and soundtrack albums/songs within their honors, several music awards, including the American Music Awards and Grammy Awards, are also presented during the same film awards season to provide television networks additional event programming. List of film awards ceremonies October Location Managers Guild Awards Saturn Awards November Gotham Awards Governors Awards People's Choice Awards Hollywood Film Awards December National Board of Review New York Film Critics Circle Los Angeles Film Critics Association January National Society of Film Critics Critics' Choice Movie Awards Golden Globe Awards Producers Guild of America Awards Screen Actors Guild Awards February and March Annie Awards Directors Guild of America Awards Satellite Awards British Academy Film Awards Writers Guild of America Awards Independent Spirit Awards Golden Raspberry Awards USC Scripter Awards Academy Awards Nickelodeon Kids' Choice Awards List of film awards seasons 2006–07 film awards season 2007–08 film awards season 2008–09 film awards season 2009–10 film awards season 2010–11 film awards season 2011–12 film awards season 2012–13 film awards season 2013–14 film awards season 2014–15 film awards season 2015–16 film awards season 2016–17 film awards season 2017–18 film awards season 2018–19 film awards season 2019–20 film awards season 2020–21 film awards season 2021–22 film awards season 2022–23 film awards season 2023–24 film awards season 2024–25 film awards season American film awards Seasons
Film awards seasons
Physics
413
43,018,469
https://en.wikipedia.org/wiki/Pine%20Grove%20Furnace%20Site
The Pine Grove Furnace Site is a historic colonial industrial site in rural Sussex County, Delaware. Pine Grove was one of the first blast furnaces to be set up in what is now southern Delaware (along with Deep Creek). The endeavor was begun by Thomas Lightfoot and Abraham Mitchel, who apparently had the furnace built by late 1765. The exact fate of the works is unclear; it is last mentioned in the documentary record in 1773, and its operations may have been curtailed by the outbreak of the American Revolutionary War a few years later. The site, which included a dam, was located a short distance above the confluence of Deep Creek with the Nanticoke River. The site was listed on the National Register of Historic Places in 1978. See also National Register of Historic Places listings in Sussex County, Delaware References Archaeological sites on the National Register of Historic Places in Delaware Sussex County, Delaware National Register of Historic Places in Sussex County, Delaware Furnaces
Pine Grove Furnace Site
Engineering
189
1,488,320
https://en.wikipedia.org/wiki/No-communication%20theorem
In physics, the no-communication theorem (also referred to as the no-signaling principle) is a no-go theorem in quantum information theory. It asserts that during the measurement of an entangled quantum state, it is impossible for one observer to transmit information to another observer, regardless of their spatial separation. This conclusion preserves the principle of causality in quantum mechanics and ensures that information transfer does not violate special relativity by exceeding the speed of light. The theorem is significant because quantum entanglement creates correlations between distant events that might initially appear to enable faster-than-light communication. The no-communication theorem establishes conditions under which such transmission is impossible, thus resolving paradoxes like the Einstein-Podolsky-Rosen (EPR) paradox and addressing the violations of local realism observed in Bell's theorem. Specifically, it demonstrates that the failure of local realism does not imply the existence of "spooky action at a distance," a phrase originally coined by Einstein. Informal overview The no-communication theorem states that, within the context of quantum mechanics, it is not possible to transmit classical bits of information by means of carefully prepared mixed or pure states, whether entangled or not. The theorem gives only a sufficient condition: it states that if the Kraus matrices commute then there can be no communication through the quantum entangled states, and this is applicable to all communication. From the perspective of relativity and quantum field theory, faster-than-light or "instantaneous" communication is also disallowed. Being only a sufficient condition, the theorem leaves open the possibility that there are other reasons communication is not allowed. The basic premise entering into the theorem is that a quantum-mechanical system is prepared in an initial state with some entangled states, and that this initial state is describable as a mixed or pure state in a Hilbert space H. After a certain amount of time, the system is divided into two parts, each of which contains some non-entangled states and half of the quantum entangled states, and the two parts become spatially distinct, A and B, sent to two distinct observers, Alice and Bob, who are free to perform quantum mechanical measurements on their portion of the total system (viz., A and B). The question is: is there any action that Alice can perform on A that would be detectable by Bob making an observation of B? The theorem replies 'no'. An important assumption going into the theorem is that neither Alice nor Bob is allowed, in any way, to affect the preparation of the initial state. If Alice were allowed to take part in the preparation of the initial state, it would be trivially easy for her to encode a message into it; thus neither Alice nor Bob participates in the preparation of the initial state. The theorem does not require that the initial state be somehow 'random' or 'balanced' or 'uniform': indeed, a third party preparing the initial state could easily encode messages in it, received by Alice and Bob. Simply, the theorem states that, given some initial state, prepared in some way, there is no action that Alice can take that would be detectable by Bob. The proof proceeds by defining how the total Hilbert space H can be split into two parts, HA and HB, describing the subspaces accessible to Alice and Bob. The total state of the system is described by a density matrix σ.
The goal of the theorem is to prove that Bob cannot in any way distinguish the pre-measurement state σ from the post-measurement state P(σ). This is accomplished mathematically by comparing the trace of σ and the trace of P(σ), with the trace being taken over the subspace HA. Since the trace is only over a subspace, it is technically called a partial trace. Key to this step is that the (partial) trace adequately summarizes the system from Bob's point of view. That is, everything that Bob has access to, or could ever have access to, measure, or detect, is completely described by a partial trace over HA of the system σ. The fact that this trace never changes as Alice performs her measurements is the conclusion of the proof of the no-communication theorem. Formulation The proof of the theorem is commonly illustrated for the setup of Bell tests in which two observers Alice and Bob perform local observations on a common bipartite system, and uses the statistical machinery of quantum mechanics, namely density states and quantum operations. Alice and Bob perform measurements on system S whose underlying Hilbert space is H = HA ⊗ HB. It is also assumed that everything is finite-dimensional to avoid convergence issues. The state of the composite system is given by a density operator on H. Any density operator σ on H is a sum of the form σ = Σ_i T_i ⊗ S_i, where T_i and S_i are operators on HA and HB respectively. For the following, it is not required to assume that T_i and S_i are state projection operators: i.e. they need not necessarily be non-negative, nor have a trace of one. That is, σ can have a definition somewhat broader than that of a density matrix; the theorem still holds. Note that the theorem holds trivially for separable states. If the shared state σ is separable, it is clear that any local operation by Alice will leave Bob's system intact. Thus the point of the theorem is that no communication can be achieved via a shared entangled state. Alice performs a local measurement on her subsystem. In general, this is described by a quantum operation on the system state of the following kind: P(σ) = Σ_k (V_k ⊗ I_HB) σ (V_k* ⊗ I_HB), where the V_k are called Kraus matrices and satisfy Σ_k V_k* V_k = I_HA. The identity factor I_HB in the expression means that Alice's measurement apparatus does not interact with Bob's subsystem. Supposing the combined system is prepared in state σ and assuming, for purposes of argument, a non-relativistic situation, immediately (with no time delay) after Alice performs her measurement, the relative state of Bob's system is given by the partial trace of the overall state with respect to Alice's system. In symbols, the relative state of Bob's system after Alice's operation is tr_HA(P(σ)), where tr_HA is the partial trace mapping with respect to Alice's system. One can directly calculate this state: tr_HA(P(σ)) = Σ_i tr(Σ_k V_k T_i V_k*) S_i = Σ_i tr(T_i Σ_k V_k* V_k) S_i = Σ_i tr(T_i) S_i = tr_HA(σ), using the cyclic property of the trace and the completeness relation for the Kraus matrices. From this it is argued that, statistically, Bob cannot tell the difference between what Alice did and a random measurement (or whether she did anything at all). Some comments The no-communication theorem implies the no-cloning theorem, which states that quantum states cannot be (perfectly) copied. That is, cloning is a sufficient condition for the communication of classical information to occur. To see this, suppose that quantum states could be cloned. Assume parts of a maximally entangled Bell state are distributed to Alice and Bob. Alice could send bits to Bob in the following way: If Alice wishes to transmit a "0", she measures the spin of her electron in the z direction, collapsing Bob's state to either |z+⟩ or |z−⟩. To transmit "1", Alice does nothing to her qubit.
Bob creates many copies of his electron's state, and measures the spin of each copy in the z direction. Bob will know that Alice has transmitted a "0" if all his measurements produce the same result; otherwise, his measurements will have outcomes |z+⟩ or |z−⟩ with equal probability. This would allow Alice and Bob to communicate classical bits between each other (possibly across space-like separations, violating causality). The version of the no-communication theorem discussed in this article assumes that the quantum system shared by Alice and Bob is a composite system, i.e. that its underlying Hilbert space is a tensor product whose first factor describes the part of the system that Alice can interact with and whose second factor describes the part of the system that Bob can interact with. In quantum field theory, this assumption can be replaced by the assumption that Alice and Bob are spacelike separated. This alternate version of the no-communication theorem shows that faster-than-light communication cannot be achieved using processes which obey the rules of quantum field theory. History In 1978, Philippe H. Eberhard's paper, Bell's Theorem and the Different Concepts of Locality, rigorously demonstrated the impossibility of faster-than-light communication through quantum systems. Eberhard introduced several mathematical concepts of locality and showed how quantum mechanics contradicts most of them while preserving causality. Further, in 1988, the paper Quantum Field Theory Cannot Provide Faster-Than-Light Communication by Eberhard and Ronald R. Ross analyzed how relativistic quantum field theory inherently forbids faster-than-light communication. This work elaborates on how misinterpretations of quantum field properties had led to claims of superluminal communication and pinpoints the mathematical principles that prevent it. With regard to communication, a quantum channel can always be used to transfer classical information by means of shared quantum states. In 2008 Matthew Hastings proved a counterexample where the minimum output entropy is not additive for all quantum channels. Therefore, by an equivalence result due to Peter Shor, the Holevo capacity is not additive in general but can be super-additive, and in consequence there may be some quantum channels over which more classical information can be transferred than additivity arguments alone would suggest. Typically, overall communication happens at the same time via quantum and non-quantum channels, and in general time ordering and causality cannot be violated. On 24 August 2015, a team led by physicist Ronald Hanson from Delft University of Technology in the Netherlands uploaded their latest paper to the preprint website arXiv, reporting the first Bell experiment that simultaneously addressed both the detection loophole and the communication loophole. The research team used a clever technique known as "entanglement swapping," which combines the benefits of photons and matter particles. The final measurements showed correlations between the two electrons that exceeded the Bell limit, once again supporting the standard view of quantum mechanics and rejecting Einstein's hidden variable theory. Furthermore, since electrons are easily detectable, the detection loophole is no longer an issue, and the large distance between the two electrons also eliminates the communication loophole.
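The partial-trace identity at the heart of the formulation above, tr_HA(P(σ)) = tr_HA(σ), can be checked numerically. A minimal sketch with NumPy; the dimensions, the random-Kraus construction, and all variable names are illustrative choices, not part of the theorem's statement:

```python
import numpy as np

rng = np.random.default_rng(0)
dA, dB = 2, 2  # qubit subsystems for Alice and Bob

def random_density_matrix(d):
    """Random mixed state: G G* normalized to unit trace."""
    g = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
    rho = g @ g.conj().T
    return rho / np.trace(rho)

def random_kraus_set(d, n=3):
    """Random Kraus operators with sum_k V_k* V_k = I, built from an isometry."""
    g = rng.normal(size=(n * d, d)) + 1j * rng.normal(size=(n * d, d))
    q, _ = np.linalg.qr(g)  # q has orthonormal columns, so q* q = I
    return [q[k * d:(k + 1) * d, :] for k in range(n)]

def partial_trace_A(rho):
    """Trace out Alice's subsystem from a (dA*dB, dA*dB) density matrix."""
    r = rho.reshape(dA, dB, dA, dB)
    return np.einsum('ijik->jk', r)  # sum over Alice's index

sigma = random_density_matrix(dA * dB)  # shared (generally entangled) state
I_B = np.eye(dB)

# Alice's local operation: P(sigma) = sum_k (V_k x I) sigma (V_k* x I)
post = sum(np.kron(V, I_B) @ sigma @ np.kron(V, I_B).conj().T
           for V in random_kraus_set(dA))

# Bob's reduced state is unchanged by Alice's local operation:
assert np.allclose(partial_trace_A(sigma), partial_trace_A(post))
print("tr_A(P(sigma)) == tr_A(sigma): verified")
```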
See also No-broadcast theorem No-cloning theorem No-deleting theorem No-hiding theorem No-teleportation theorem References Quantum measurement Quantum information science Theorems in quantum mechanics Statistical mechanics theorems No-go theorems
No-communication theorem
Physics,Mathematics
2,097
50,975,757
https://en.wikipedia.org/wiki/Double-setpoint%20control
A double-setpoint control is quite similar to bang–bang control. It is an element of a feedback loop and is therefore analysed by application of control theory. It has two setpoints at which it switches abruptly, usually involving a hysteresis. It may be used in a heating-and-cooling situation or for two-speed control. The theory was examined by J. Gregory Vermeychuk. The "double-bang-bang" was implemented in the Luft Instruments Model 77 Controller, used in a variety of laboratory (titration) and plant applications. It is shown in the Harvard University Science Museum. Optimal control
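A minimal sketch of the heating/cooling switching logic such a controller implements; the setpoint values, hysteresis width, and state names below are illustrative assumptions, not taken from the Model 77:

```python
def double_setpoint_controller(temp, state, low_sp=18.0, high_sp=24.0, hyst=0.5):
    """Three-state heat/cool controller with two setpoints and hysteresis.

    Returns 'heat', 'cool', or 'off'. The hysteresis band keeps the
    controller from chattering when the temperature sits near a setpoint.
    """
    if temp < low_sp - hyst:
        return 'heat'
    if temp > high_sp + hyst:
        return 'cool'
    # Inside the deadband: release the actuator only once the temperature
    # has crossed back past the setpoint by the hysteresis margin.
    if state == 'heat' and temp < low_sp + hyst:
        return 'heat'
    if state == 'cool' and temp > high_sp - hyst:
        return 'cool'
    return 'off'

# Example: sweep a temperature profile and watch the abrupt switching.
state = 'off'
for t in [17.0, 17.4, 18.3, 18.6, 21.0, 24.4, 24.8, 24.2, 23.3]:
    state = double_setpoint_controller(t, state)
    print(t, state)
```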
Double-setpoint control
Mathematics
124
69,250,898
https://en.wikipedia.org/wiki/Mecachrome%20V634%20engine
The Mecachrome V634 engine (also known as Mecachrome Formula 2 V6) is a 3.4-litre, turbocharged or naturally-aspirated, V6 racing engine, designed, developed and produced by Mecachrome, and is used in the FIA Formula 2 Championship, FIA Formula 3 Championship, and the World Endurance Championship. Formula 3 engine First generation (second-generation overall) The series will continue using the 3.4-litre V6 naturally-aspirated direct-injected engines supplied by Mecachrome until at least the 2021 season, as the FIA Formula 3 Championship has not been interested in a turbocharged engine. The horsepower was scaled down from the earlier GP3 specification. Mecachrome V634 F3 V6 engines were crated and shipped to all FIA Formula 3 Championship teams on a serial-number basis as determined by the FIA to ensure equality and fairness in distribution. Fuel and lubricants components All Formula 3 cars currently use ordinary unleaded racing gasoline as fuel (similar to commercial unleaded street gasoline), which has been the de facto standard in third-tier single-seater formula racing since the introduction of the GP3 Series in 2010. Current Elf LMS 102 RON unleaded gasoline resembles ordinary unleaded gasoline but produces better mileage while being more environmentally friendly and safer than other fuels. Since 2019, Elf exclusively continues providing the LMS 102 RON unleaded fuel and also Elf HTX 840 0W-40 lubricants for all FIA Formula 3 Championship cars. Formula 2 engine The V634 Turbo engine is a turbocharged, direct-injection, four-stroke Otto-cycle V6 producing 620 hp, developed and built by Mecachrome and maintained by Teos Engineering. The engine was unveiled in 2017 along with the new Dallara F2 2018 chassis. Dutch turbocharger company Van Der Lee Turbo Systems currently supplies the turbochargers for all FIA Formula 2 Championship engines. The valve train is a dual overhead camshaft configuration with four valves per cylinder. The crankshaft is made of alloy steel, with five main bearing caps. The pistons are forged aluminium alloy, while the connecting rods are machined alloy steel. The electronic engine management system is supplied by Magneti Marelli, firing a CDI ignition system. The engine lubrication is a dry-sump type, cooled by a single water pump. The all-new engine fuel delivery system is gasoline direct injection instead of traditional electronic indirect injection. The power output of the all-new FIA Formula 2 engine was increased over its predecessor. Mecachrome will continue providing new FIA Formula 2 engines from the 2018 season and beyond. The Mecachrome V634 Turbo engine is rev-limited to 8,750 rpm, and its quoted weight includes the turbocharger. The ignition of the Mecachrome V634 Turbo engine is digital inductive. The fuel-mass flow of the second-generation FIA Formula 2 Championship engine is restricted. The Mecachrome V634 Turbo 3.4-litre single-turbocharged direct-injected V6 is an evolution of the GP3 engine and is the sole engine supplied for the FIA Formula 2 Championship. With the addition of a single turbo, the engine underwent rigorous dyno testing ahead of its racing debut. Mecachrome V634 Turbo engines sell for up to €67,000 per unit through leasing and rebuilding. The current second-generation FIA Formula 2 engine allocation is limited to one per season, with the engine rebuilt between uses. Mid-season engine changes, including during race weekends, are restricted and may result in a grid penalty for the session.
Turbocharger Turbochargers were introduced from the start of the 2018 season. The configuration is single-turbocharged. Dutch turbocharger company Van Der Lee Turbo Systems currently supplies the turbochargers for all FIA Formula 2 Championship all-new engines, using the MT134-50120 model. The turbocharger spin limit is 130,000 rpm, but in practice it does not exceed 125,000 rpm owing to the lower turbo boost pressure used. Fuel and lubricants components All Formula 2 cars currently use ordinary unleaded racing gasoline as fuel (similar to commercial unleaded street gasoline), which has been the de facto standard in second-tier single-seater formula racing since the introduction of the GP2 Series in 2005. Current Elf LMS 102 RON unleaded gasoline resembles ordinary unleaded gasoline but produces better mileage while being environmentally friendly and safer than leaded fuels. Since the 2005 GP2 Series season, Elf has exclusively provided the LMS 102 RON unleaded fuel and also Elf HTX 840 0W-40 lubricants for all FIA Formula 2 Championship cars, owing to Mecachrome's long-term technical partnership with Elf. Ginetta LMP1 engine For the Ginetta G60-LT-P1 racecar competing in the LMP1 category, an updated engine was developed, now with direct injection as opposed to the original port-injection F2 engine, and with a larger turbocharger variant. It was originally designed to have an output close to 800 hp, but power was significantly reduced for its first Le Mans 24 Hours outing to improve reliability. Alpine LMDh engine For the Alpine A424 LMDh racecar, a new version of the V634 DI engine was developed by Mecachrome with support from Alpine's Viry-Châtillon team. The new endurance-spec engine produces 675 hp and has a redline of 9,000 rpm. Applications Dallara GP3/16 Dallara F3 2019 Dallara F2 2018 Dallara F2 2024 Ginetta G60-LT-P1 Alpine A424 References External links Mecachrome official website in French language Mecachrome official website in English language FIA Formula 2 official website FIA Formula 3 official website Engines by model FIA Formula 2 Championship V6 engines Gasoline engines by model
Mecachrome V634 engine
Technology
1,215
3,648,011
https://en.wikipedia.org/wiki/Great%20room
A great room is a room inside a house that combines the roles of several more traditional rooms, such as the family room, living room, and study, into one space. Great rooms typically have raised ceilings and are usually placed at or near the center of the home. Great rooms have been common in American homes since the early 1990s. Description The concept of a great room hearkens back to the romanticized ideal of great halls and great chambers in medieval castles and mansions, which contained one large central room where everything happened. Developers of mid-range suburban homes in America tried to solve the problem of the "dead" living room and the split between the living and family rooms by "returning" to the idea of the great room. The general concept is one relatively central room, the crossroads of the house, to be used for all of the family functions traditionally split between living and family rooms. The dominant feature of the great room is the raised ceiling, higher than other parts of the house, typically two stories with arching ceilings, often referred to in real estate jargon as "cathedral ceilings". Different great rooms will combine different functions. Some may incorporate a reading area, thus bringing the traditional study function into the scheme of the room, while others may forgo this particular function. Some great room designs incorporate the functions of the traditional dining room as well. In the most general sense, great rooms are typically found on the lower level of American multi-story homes built in the second half of the 20th century. In many houses the great room will also adjoin the kitchen, often separated just by a counter instead of a full wall. History The modern great room concept traces back to the "multipurpose room" in modernist homes built by Joseph Eichler in California in the 1950s and 1960s. Developers started building high-end houses with great rooms in the 1970s and 1980s, at first simply adding vaulted entryways to ranch-style houses. An example of this is the house in the television series The Brady Bunch. Great rooms became a nearly ubiquitous feature of suburban homes constructed in America in the 1990s and 2000s. Great rooms were initially popular with homeowners. According to builders asked by the Wall Street Journal, this was because homeowners wanted a way to show off wealth in a growing economy; The New York Times thus called the great room "the McMansion's signature space". By the mid-2000s, however, the Wall Street Journal reported that home buyers were not as enthusiastic about great rooms. Common complaints included the cost to heat and cool them, that they were difficult to clean and paint due to height and irregular angles, and that they simply were wasted space. About 15 years after great rooms became widely popular in the early 1990s, developers across America were getting fewer requests for houses with great rooms. Buyers instead preferred houses with more floor space and more rooms. Owners of existing homes with great rooms were sometimes opting to add new rooms or lofts in the great room's ceiling space, which can cost as much as 50% less than adding a conventional addition to the house. According to the United States Census Bureau, from 2005 to 2007 expenditures on the interior restructuring of homes rose by about 40%, but spending on new-room additions fell by 57%, which a Census Bureau statistician said can be explained in part by the retrofitting of great rooms.
However, much of the motivation for working within the existing footprint of the home may be due to state and municipal regulations that can trigger higher taxes, additional permit hurdles, or septic system redesigns when a building's footprint increases. In 2007, Money listed great rooms as a fad whose time had passed. The magazine reported that a typical great room costs $150 to $350 per square foot, whereas conventional rooms cost $125 to $250 per square foot, and concluded that the supposed benefits of a great room (unifying family activities into one room) did not justify its cost and maintenance difficulties. References Rooms
Great room
Engineering
809
21,237,933
https://en.wikipedia.org/wiki/Yoon%20Nung-min
Yoon Nung-min (윤능민 尹能民, November 21, 1927 – April 1, 2009) was a South Korean chemist, known for his research in organic chemistry, specializing in metal hydrides. He received his B.A. in chemistry at Seoul National University in 1951 and went on to complete his Ph.D. at Purdue University under Herbert Charles Brown. He was a postdoc at Purdue, then a researcher for the Ministry of National Defence. He then became an associate professor at the Catholic University of Korea. He later took up a full professorship at Sogang University, a position he would hold until his retirement. He served as the president of the Korean Chemical Society in 1989, and was elected a member of the National Academy of Sciences of the Republic of Korea in 2005. He was Professor Emeritus at Sogang University until his death in 2009. He was a prolific researcher; he published 110 papers and developed reagents which became widely used in both organic and inorganic chemistry. He also discovered new methods of generating free radicals and found new applications for them. He surprised the Korean chemistry community by publishing a substantial portion of his research as the sole author shortly before his retirement. He was also active as an educator, supervising 14 doctoral students and 56 master's students. He was awarded the Order of Civil Merit (Mogryeon Medal) in 1983, the Korea National Academy of Sciences Award in 1990 and the Korea Science Prize (Science) in 1993. Selected publications An Excellent Nickel Boride Catalyst for the Cis-Selective Semihydrogenation of Acetylenes (J. Choi and N. M. Yoon), Tetrahedron Lett. Vol.37 p. 1057 (1996) A New Coupling Reaction of Alkyl Iodides with Electron Deficient Alkenes Using Nickel Boride(cat.)-Borohydride Exchange Resin in Methanol (T. B. Sim, J. Choi, M. J. Joung and N. M. Yoon), J. Org. Chem. Vol.62 p. 2357 (1997) Sodium Diethyldialkynylaluminate: A New Chemoselective Alkynylating Agent (J. H. Ahn, M. J. Joung, and N. M. Yoon), J. Org. Chem. Vol.60 p. 6173 (1995) Synthesis of Disulfides by Copper Catalyzed Disproportionation of Thiols (J. Choi and N. M. Yoon), J. Org. Chem. Vol.60 p. 3266 (1995) Monoisopinocampheylborane: A New Chiral Hydroborating Agent for Relatively Hindered (Trisubstituted) Olefins (H. C. Brown and N. M. Yoon), J. Am. Chem. Soc. Vol.99 p. 5514 (1977) Diisopinocampheylborane of High Optical Purity. Asymmetric Synthesis via Hydroboration with Essentially Complete Asymmetric Induction (H. C. Brown and N. M. Yoon), Israel J. Chem. Vol.15 p. 12 (1976–1977) Lithium Trimethylethynylaluminate: A New Chemoselective Ethynylating Agent (M. J. Joung, J. H. Ahn and N. M. Yoon), J. Org. Chem. Vol.61 p. 4472 (1996) The Rapid Reaction of Carboxylic Acids with Borane-Tetrahydrofuran. A Remarkably Convenient Procedure for the Selective Conversion of Carboxylic Acids to the Corresponding Alcohols in the Presence of Other Functional Groups (N. M. Yoon, C. S. Park, H. C. Brown, S. Krishnamurthy, and T. P. Stocky), J. Org. Chem. Vol.38 p. 2786 (1973) References External links Department of Chemistry, Sogang University National Academy of Sciences, Korea 1927 births 2009 deaths South Korean organic chemists Mogryeon Medals of the Order of Civil Merit (Korea) Seoul National University alumni Academic staff of Sogang University Purdue University alumni Members of the National Academy of Sciences of the Republic of Korea
Yoon Nung-min
Chemistry
894
56,064,706
https://en.wikipedia.org/wiki/Methyl%20dimethyldithiocarbamate
Methyl dimethyldithiocarbamate is the organosulfur compound with the formula (CH3)2NC(S)SCH3. It is one of the simplest dithiocarbamate esters. It is a white volatile solid that is poorly soluble in water but soluble in many organic solvents. It was once used as a pesticide. Methyl dimethyldithiocarbamate can be prepared by methylation of salts of dimethyldithiocarbamate: (CH3)2NCS2Na + (CH3O)2SO2 → (CH3)2NC(S)SCH3 + Na[CH3OSO3] It can also be prepared by the reaction of tetramethylthiuram disulfide with a methyl Grignard reagent: [(CH3)2NC(S)S]2 + CH3MgBr → (CH3)2NC(S)SCH3 + (CH3)2NCS2MgBr References Dithiocarbamates
Methyl dimethyldithiocarbamate
Chemistry
219
395,877
https://en.wikipedia.org/wiki/Histamine
Histamine is an organic nitrogenous compound involved in local immune responses, as well as in regulating physiological functions in the gut and acting as a neurotransmitter for the brain, spinal cord, and uterus. Discovered in 1910, histamine has been considered a local hormone (autocoid) because it is produced without involvement of the classic endocrine glands; however, in recent years, histamine has been recognized as a central neurotransmitter. Histamine is involved in the inflammatory response and has a central role as a mediator of itching. As part of an immune response to foreign pathogens, histamine is produced by basophils and by mast cells found in nearby connective tissues. Histamine increases the permeability of the capillaries to white blood cells and some proteins, to allow them to engage pathogens in the infected tissues. It consists of an imidazole ring attached to an ethylamine chain; under physiological conditions, the amino group of the side-chain is protonated. Properties Histamine base, obtained as a mineral oil mull, melts at 83–84 °C. Its hydrochloride and phosphate salts form white hygroscopic crystals and are easily dissolved in water or ethanol, but not in ether. In aqueous solution, the imidazole ring of histamine exists in two tautomeric forms, identified by which of the two nitrogen atoms is protonated. The nitrogen farther away from the side chain is the 'tele' nitrogen and is denoted by a lowercase tau sign, and the nitrogen closer to the side chain is the 'pros' nitrogen and is denoted by the pi sign. The tele tautomer, Nτ-H-histamine, is preferred in solution over the pros tautomer, Nπ-H-histamine. Histamine has two basic centres, namely the aliphatic amino group and whichever nitrogen atom of the imidazole ring does not already have a proton. Under physiological conditions, the aliphatic amino group (having a pKa around 9.4) will be protonated, whereas the second nitrogen of the imidazole ring (pKa ≈ 5.8) will not be protonated. Thus, histamine is normally protonated to a singly charged cation. Since human blood is slightly basic (with a normal pH range of 7.35 to 7.45), the predominant form of histamine in human blood is singly protonated at the aliphatic nitrogen (a worked calculation appears below). Histamine is a monoamine neurotransmitter. Synthesis and metabolism Histamine is derived from the decarboxylation of the amino acid histidine, a reaction catalyzed by the enzyme L-histidine decarboxylase. It is a hydrophilic vasoactive amine. Once formed, histamine is either stored or rapidly inactivated by its primary degradative enzymes, histamine-N-methyltransferase or diamine oxidase. In the central nervous system, histamine released into the synapses is primarily broken down by histamine-N-methyltransferase, while in other tissues both enzymes may play a role. Several other enzymes, including MAO-B and ALDH2, further process the immediate metabolites of histamine for excretion or recycling. Bacteria are also capable of producing histamine using histidine decarboxylase enzymes unrelated to those found in animals. A non-infectious form of foodborne disease, scombroid poisoning, is due to histamine production by bacteria in spoiled food, particularly fish. Fermented foods and beverages naturally contain small quantities of histamine due to a similar conversion performed by fermenting bacteria or yeasts. Sake contains histamine in the 20–40 mg/L range; wines contain it in the 2–10 mg/L range.
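The protonation argument above can be made quantitative with the Henderson–Hasselbalch relation. The Python sketch below is illustrative only: it uses the approximate pKa values quoted above (about 9.4 for the aliphatic amine, about 5.8 for the imidazole ring) and a blood pH of 7.4, and the function name is ours, not from any source.

```python
# Fraction of each basic centre of histamine in its protonated form,
# via the Henderson-Hasselbalch relation. pKa values are the approximate
# figures quoted in the text above; pH 7.4 is typical of human blood.
def fraction_protonated(pka: float, ph: float) -> float:
    """Fraction of a basic group present as its conjugate acid at a given pH."""
    return 1.0 / (1.0 + 10.0 ** (ph - pka))

for name, pka in [("aliphatic amino group", 9.4), ("imidazole ring", 5.8)]:
    f = fraction_protonated(pka, 7.4)
    print(f"{name}: {100 * f:.1f}% protonated at pH 7.4")
# -> roughly 99% for the amine and about 2.5% for the ring, consistent
#    with histamine circulating mainly as a singly charged cation.
```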
Storage and release Most histamine in the body is generated in granules in mast cells and in white blood cells (leukocytes) called basophils. Mast cells are especially numerous at sites of potential injury – the nose, mouth, and feet, internal body surfaces, and blood vessels. Non-mast cell histamine is found in several tissues, including the hypothalamus region of the brain, where it functions as a neurotransmitter. Another important site of histamine storage and release is the enterochromaffin-like (ECL) cell of the stomach. The most important pathophysiologic mechanism of mast cell and basophil histamine release is immunologic. These cells, if sensitized by IgE antibodies attached to their membranes, degranulate when exposed to the appropriate antigen. Certain amines and alkaloids, including such drugs as morphine and curare alkaloids, can displace histamine in granules and cause its release. Antibiotics like polymyxin have also been found to stimulate histamine release. Histamine release occurs when allergens bind to mast-cell-bound IgE antibodies. Reduction of IgE overproduction may lower the likelihood of allergens finding sufficient free IgE to trigger a mast-cell release of histamine. Degradation Histamine is released by mast cells as an immune response and is later degraded primarily by two enzymes: diamine oxidase (DAO), encoded by the AOC1 gene, and histamine-N-methyltransferase (HNMT), encoded by the HNMT gene. The presence of single nucleotide polymorphisms (SNPs) in these genes is associated with a wide variety of disorders, from ulcerative colitis to autism spectrum disorder (ASD). Histamine degradation is crucial to the prevention of allergic reactions to otherwise harmless substances. DAO is typically expressed in epithelial cells at the tip of the villus of the small intestine mucosa. Reduced DAO activity is associated with gastrointestinal disorders and widespread food intolerances. This is due to an increase in histamine absorption through enterocytes, which increases histamine concentration in the bloodstream. One study found that lower serum DAO levels were more common among migraine patients with gluten sensitivity. Low DAO activity can have more severe consequences, as mutations in the ABP1 alleles of the AOC1 gene have been associated with ulcerative colitis. Heterozygous or homozygous recessive genotypes at the rs2052129, rs2268999, rs10156191 and rs1049742 alleles increase the risk of reduced DAO activity. People with genotypes for reduced DAO activity can avoid foods high in histamine, such as alcohol, fermented foods, and aged foods, to attenuate allergic reactions. Additionally, they should be aware of whether any probiotics they are taking contain histamine-producing strains and consult with their doctor to receive proper support. HNMT is expressed in the central nervous system, where deficiencies have been shown to lead to aggressive behavior and abnormal sleep-wake cycles in mice. Since brain histamine as a neurotransmitter regulates a number of neurophysiological functions, emphasis has been placed on the development of drugs to target histamine regulation. Yoshikawa et al. explore how the C314T, A939G, G179A, and T632C polymorphisms all impact HNMT enzymatic activity and the pathogenesis of various neurological disorders. These mutations can have either a positive or negative impact. Some patients with ADHD have been shown to exhibit exacerbated symptoms in response to food additives and preservatives, due in part to histamine release.
In a double-blind placebo-controlled crossover trial, children with ADHD who responded with aggravated symptoms after consuming a challenge beverage were more likely to have HNMT polymorphisms at T939C and Thr105Ile. Histamine's role in neuroinflammation and cognition has made it a target of study for many neurological disorders, including autism spectrum disorder (ASD). De novo deletions in the HNMT gene have also been associated with ASD. Mast cells serve an important immunological role by defending the body from antigens and maintaining homeostasis in the gut microbiome. They act as an alarm to trigger inflammatory responses by the immune system. Their presence in the digestive system enables them to serve as an early barrier to pathogens entering the body. People who suffer from widespread sensitivities and allergic reactions may have mast cell activation syndrome (MCAS), in which excessive amounts of histamine are released from mast cells and cannot be properly degraded. The abnormal release of histamine can be caused either by dysfunctional internal signals from defective mast cells or by the development of clonal mast cell populations through mutations occurring in the tyrosine kinase Kit. In such cases, the body may not be able to produce sufficient degradative enzymes to properly eliminate the excess histamine. Because MCAS presents with such a broad range of symptoms, it is difficult to diagnose and can be mislabeled as a variety of diseases, including irritable bowel syndrome and fibromyalgia. Histamine is often explored as a potential cause of diseases related to hyper-responsiveness of the immune system. In patients with asthma, abnormal histamine receptor activation in the lungs is associated with bronchospasm, airway obstruction, and production of excess mucus. Mutations affecting histamine degradation are more common in patients with a combination of asthma and allergen hypersensitivity than in those with asthma alone. The HNMT-464 TT and HNMT-1639 TT polymorphisms are significantly more common among children with allergic asthma; the latter is overrepresented in African-American children. Mechanism of action In humans, histamine exerts its effects primarily by binding to G protein-coupled histamine receptors, designated H1 through H4. Histamine is also believed to activate ligand-gated chloride channels in the brain and intestinal epithelium. Roles in the body Although histamine is small compared to other biological molecules (containing only 17 atoms), it plays an important role in the body. It is known to be involved in 23 different physiological functions. Histamine's involvement in so many physiological functions reflects chemical properties that make it versatile in binding: it is Coulombic (able to carry a charge) and conformationally flexible, which allows it to interact and bind readily. Vasodilation and fall in blood pressure It has been known for more than one hundred years that an intravenous injection of histamine causes a fall in blood pressure. The underlying mechanism concerns both vascular hyperpermeability and vasodilation. Histamine binding to endothelial cells causes them to contract, thus increasing vascular leak. It also stimulates synthesis and release of various vascular smooth muscle cell relaxants, such as nitric oxide, endothelium-derived hyperpolarizing factors and other compounds, resulting in blood vessel dilation. These two mechanisms play a key role in the pathophysiology of anaphylaxis.
Effects on nasal mucous membrane Increased vascular permeability causes fluid to escape from capillaries into the tissues, which leads to the classic symptoms of an allergic reaction: a runny nose and watery eyes. Allergens can bind to IgE-loaded mast cells in the nasal cavity's mucous membranes. This can lead to three clinical responses: sneezing due to histamine-associated sensory neural stimulation; hyper-secretion from glandular tissue; and nasal congestion due to vascular engorgement associated with vasodilation and increased capillary permeability. Sleep-wake regulation Histamine is a neurotransmitter that is released from histaminergic neurons which project out of the mammalian hypothalamus. The cell bodies of these neurons are located in a portion of the posterior hypothalamus known as the tuberomammillary nucleus (TMN). The histamine neurons in this region comprise the brain's histamine system, which projects widely throughout the brain and includes axonal projections to the cortex, medial forebrain bundle, other hypothalamic nuclei, medial septum, the nucleus of the diagonal band, ventral tegmental area, amygdala, striatum, substantia nigra, hippocampus, thalamus and elsewhere. The histamine neurons in the TMN are involved in regulating the sleep-wake cycle and promote arousal when activated. The neural firing rate of histamine neurons in the TMN is strongly positively correlated with an individual's state of arousal. These neurons fire rapidly during periods of wakefulness, fire more slowly during periods of relaxation/tiredness, and stop firing altogether during REM and NREM (non-REM) sleep. First-generation H1 antihistamines (i.e., antagonists of histamine receptor H1) are capable of crossing the blood–brain barrier and produce drowsiness by antagonizing histamine H1 receptors in the tuberomammillary nucleus. The newer class of second-generation H1 antihistamines do not readily permeate the blood–brain barrier and thus are less likely to cause sedation, although individual reactions, concomitant medications and dosage may increase the likelihood of a sedating effect. In contrast, histamine H3 receptor antagonists increase wakefulness. Similar to the sedative effect of first-generation H1 antihistamines, an inability to maintain vigilance can occur from the inhibition of histamine biosynthesis or the loss (i.e., degeneration or destruction) of histamine-releasing neurons in the TMN. Gastric acid release Enterochromaffin-like cells in the stomach release histamine, stimulating parietal cells via H2 receptors. This triggers the uptake of carbon dioxide and water from the blood, which are converted to carbonic acid by carbonic anhydrase. Within the parietal cell, the acid dissociates into hydrogen and bicarbonate ions. Bicarbonate returns to the bloodstream, while hydrogen is pumped into the stomach lumen. Histamine release ceases as stomach pH decreases. Antagonist molecules, such as ranitidine or famotidine, block the H2 receptor and prevent histamine from binding, causing decreased hydrogen ion secretion. Protective effects While histamine has stimulatory effects upon neurons, it also has suppressive ones that protect against susceptibility to convulsion, drug sensitization, denervation supersensitivity, ischemic lesions and stress. It has also been suggested that histamine controls the mechanisms by which memories and learning are forgotten.
Erection and sexual function Loss of libido and erectile dysfunction can occur during treatment with histamine H2 receptor antagonists such as cimetidine and ranitidine, as well as with risperidone. The injection of histamine into the corpus cavernosum in males with psychogenic impotence produces full or partial erections in 74% of them. It has been suggested that H2 antagonists may cause sexual dysfunction by reducing the functional binding of testosterone to its androgen receptors. Schizophrenia Metabolites of histamine are increased in the cerebrospinal fluid of people with schizophrenia, while the efficiency of H1 receptor binding sites is decreased. Many atypical antipsychotic medications have the effect of increasing histamine production, because histamine levels seem to be imbalanced in people with that disorder. Multiple sclerosis Histamine therapy for treatment of multiple sclerosis is currently being studied. The different H receptors are known to have different effects on the treatment of this disease. The H1 and H4 receptors, in one study, have been shown to be counterproductive in the treatment of MS. The H1 and H4 receptors are thought to increase permeability in the blood-brain barrier, thus increasing infiltration of unwanted cells in the central nervous system. This can cause inflammation and worsening of MS symptoms. The H2 and H3 receptors are thought to be helpful when treating MS patients. Histamine has been shown to help with T-cell differentiation. This is important because in MS, the body's immune system attacks its own myelin sheaths on nerve cells (which causes loss of signaling function and eventual nerve degeneration). By helping T cells to differentiate, histamine may make them less likely to attack the body's own cells and more likely to attack invaders. Disorders As an integral part of the immune system, histamine may be involved in immune system disorders and allergies. Mastocytosis is a rare disease in which there is a proliferation of mast cells that produce excess histamine. Histamine intolerance is a presumed set of adverse reactions (such as flushing, itching, and rhinitis) to ingested histamine in food. The mainstream theory accepts that there may exist adverse reactions to ingested histamine, but does not recognize histamine intolerance as a separate condition that can be diagnosed. The role of histamine in health and disease is an area of ongoing research. For example, histamine is being researched for a potential link with migraine episodes, in which there is a noted elevation in the plasma concentrations of both histamine and calcitonin gene-related peptide (CGRP). These two substances are potent vasodilators and have been demonstrated to mutually stimulate each other's release within the trigeminovascular system, a mechanism that could potentially instigate the onset of migraines. In patients with a deficiency in histamine degradation due to variants in the AOC1 gene, which encodes the diamine oxidase enzyme, a diet high in histamine has been observed to trigger migraines. This suggests a potential functional relationship between exogenous histamine and CGRP that could be instrumental in understanding the genesis of diet-induced migraines. The role of histamine, particularly in relation to CGRP, is therefore a promising area of research for elucidating the mechanisms underlying migraine development and aggravation, especially in the context of dietary triggers and genetic predispositions related to histamine metabolism.
Measurement Histamine is a biogenic amine involved in many physiological functions, including the immune response, gastric acid secretion, and neuromodulation. However, its rapid metabolism makes it challenging to measure histamine levels directly in plasma. Because the values of histamine metabolites remain elevated for a much longer period than histamine itself, measuring the metabolites, particularly 1-methyl-4-imidazoleacetic acid, in a 24-hour urine sample provides an efficient alternative to direct histamine measurement. Commercial laboratories provide a 24-hour urine sample test for 1-methyl-4-imidazoleacetic acid, the metabolite of histamine. This test is a valuable tool in assessing the metabolism of histamine in the body, as direct measurement of histamine in the serum has low diagnostic value due to the specificities of histamine metabolism. The urine test involves collecting all urine produced in a 24-hour period, which is then analyzed for the presence of 1-methyl-4-imidazoleacetic acid. This comprehensive approach ensures a more accurate reflection of histamine metabolism over an extended period; as such, the urine test offered by commercial labs is currently the most reliable method to determine the rate of histamine metabolism, which may help health care practitioners assess an individual's health status and, for example, diagnose interstitial cystitis. History The properties of histamine, then called β-imidazolylethylamine, were first described in 1910 by the British scientists Henry H. Dale and P.P. Laidlaw. By 1913 the name histamine was in use, formed from the combining forms histo- + amine, yielding "tissue amine". "H substance" or "substance H" are occasionally used in medical literature for histamine or a hypothetical histamine-like diffusible substance released in allergic reactions of skin and in the responses of tissue to inflammation. See also Anaphylaxis Diamine oxidase Histamine N-methyltransferase Hay fever (allergic rhinitis) Histamine intolerance Histamine receptor antagonist Scombroid food poisoning Photic sneeze reflex References External links Histamine MS Spectrum Histamine bound to proteins in the PDB Biogenic amines Amines Imidazoles Immune system Vasodilators Immunostimulants Neurotransmitters TAAR1 agonists Carbonic anhydrase activators
Histamine
Chemistry,Biology
4,511
1,534,136
https://en.wikipedia.org/wiki/NGC%204395
NGC 4395 is a nearby low surface brightness spiral galaxy located about 14 million light-years (or 4.3 Mpc) from Earth in the constellation Canes Venatici. The nucleus of NGC 4395 is active, and the galaxy is classified as a Type I Seyfert known for its very low-mass supermassive black hole. Physical characteristics NGC 4395 has a halo that is about 8 arcminutes in diameter. It has several patches of greater brightness running northwest to southeast. The one furthest southeast is the brightest. Three of the patches have their own NGC numbers: 4401, 4400, and 4399, running east to west. The galaxy is highly unusual among Seyfert galaxies because it does not have a bulge, and it is considered to be a dwarf galaxy. Observational history NGC 4395 was imaged and classified as a "spiral nebula" in a 1920 paper by astronomer Francis G. Pease. Now, it is known to be a galaxy distinct from the Milky Way (see Great Debate). Along with several other nearby galaxies, resolved stars in NGC 4395 were used by Allan Sandage and Gustav Andreas Tammann in their 1974 paper to measure the expansion rate of the Universe. More recently, NGC 4395 was discovered to contain a very low-luminosity active galactic nucleus. Since then, its nucleus has been the subject of several academic papers and attempts to measure the mass of its central black hole. Nucleus NGC 4395 is one of the least luminous and nearest Seyfert galaxies known. The nucleus of NGC 4395 is notable for containing one of the smallest supermassive black holes with a well-measured mass. The central black hole has a mass of "only" 300,000 solar masses. However, a recent study found a black hole mass of just 10,000 solar masses. The low-mass black hole in NGC 4395 would make it a so-called "intermediate-mass black hole". The black hole may have a truncated disk. References External links NGC4395 Unbarred spiral galaxies M94 Group 4395 07542 40596 Astronomical objects discovered in 1786 Canes Venatici Seyfert galaxies Magellanic spiral galaxies
NGC 4395
Astronomy
458
42,875,577
https://en.wikipedia.org/wiki/Direct%20Democracy%20%28Poland%29
Direct Democracy () was a Polish political party founded in 2012. Its goal was to change the Polish political system by moving it closer to the political concept of direct democracy. The party's creation was inspired by the 2012 protests against ACTA. References Direct Democracy Party in Poland, Eurostart e-polityków, rp.pl, Wiktor Ferfecki, 15-01-2014 External links Homepage 2012 establishments in Poland Direct democracy parties E-democracy Political parties established in 2012 Political parties in Poland Populist parties
Direct Democracy (Poland)
Technology
111
3,832,380
https://en.wikipedia.org/wiki/Cookie%20jar%20accounting
Cookie jar accounting or cookie jar reserves is an accounting practice in which a company sets aside large reserves in an economically successful year and draws them down to offset losses in less successful years. Through this practice, companies can mislead investors into believing that their losses are smaller than they actually are (a simplified numerical sketch appears after the examples below). An example of a cookie jar reserve is a liability created when a company records an expense that is not directly linked to a specific accounting period—the expense may fall in one period or another. Companies may record such a discretionary expense when profits are high because they can afford to take the hit to income. When profits are low, the company reduces the liability (the reserve) rather than recording an expense in the lean year. The United States Securities and Exchange Commission (SEC) does not permit cookie jar accounting by public companies because it can mislead investors regarding a company's financial performance. Historic examples Several companies have been caught using cookie jar accounting. Companies, along with individual accountants, have faced legal action from the Securities and Exchange Commission. Limited examples of lawsuits: In October 1999, Microsoft was investigated for cookie jar accounting by the Securities and Exchange Commission for its alleged misconduct in recognizing revenue. At the center was the question of whether it was using financial reserves to shore up its financial earnings. Software companies are required to recognize revenue under rules set by the American Institute of Certified Public Accountants, which set a "persuasive evidence" threshold based on delivery of the product or mutual agreement on fees. In 2002, WorldCom Inc., the carrier of about half of all Internet traffic, used cookie jar accounting methods by drawing on reserves to boost its earnings. This case was the largest among those that misused reserves in this way. More specifically, WorldCom increased provisions for projected expenses and used these to increase the earnings amount. Once WorldCom filed for the largest bankruptcy case in the U.S., it revealed that the company had hidden $3.9 billion in expenses since 2001. As a result, WorldCom's CEO, Bernard Ebbers, and CFO, Scott Sullivan, were arrested, with fraud charges filed by the SEC in 2002. In 2004, Fannie Mae, a company created to promote home ownership by purchasing mortgages from the banks that issue them in order to decrease interest rates, was caught violating accounting regulations that govern the recording of loans. The SEC ordered Fannie to restate its earnings over the prior four years to resolve this cookie jar accounting incident. The company expressed concern about how this change would negatively impact it, yet still agreed to increase its capital reserves to $9.4 billion. In 2004, Bristol-Myers Squibb, a New York-based pharmaceutical company, was sued on August 4, 2004, by the Securities and Exchange Commission partly for using cookie jar accounting. The company used cookie jar accounting to give the perception of higher earnings and lower liabilities, and it concealed the company's practice of selling inventory to wholesalers ahead of demand, which led to higher reported revenue. The company's misconduct resulted in a US$100 million civil penalty, a US$50 million payment towards a fund for shareholders, and the creation of an independent adviser position responsible for the oversight of accounting practices and accurate financial reporting used within the company.
In July 2009, a former chief accounting officer at Beazer Homes USA, Inc. was charged by the Securities and Exchange Commission with using cookie jar accounting, among other charges. The chief accounting officer was accused of using the cookie jar accounting method to hide over US$60 million by inflating land inventory accounts and net income in declining years and inflating expense accounts in good years. During 2006 and the beginning of 2007, his misleading accounting practices led to an overstatement of income and understatement of losses totaling US$47 million. The employee was sentenced to ten years in prison and three years of supervised release. In July 2010, Dell was accused of fraudulently reporting its financial earnings to give the impression that it was exceeding analyst earnings projections. Dell had used undisclosed payments from Intel to smooth out volatility from poor earnings. Dell paid a $100 million penalty for its misconduct. None of the managers involved faced penalties severe enough to reassure investors that the company would not manipulate figures in the future. References Accounting systems Accounting scandals Corporate crime Financial crimes
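As a rough illustration of the mechanism described at the top of this article, the Python sketch below shows how a reserve built in good years and released in lean years flattens reported earnings. All figures, including the $80 "target", are invented for illustration; no real company's numbers are implied.

```python
# Toy model of a "cookie jar" reserve smoothing reported earnings.
true_earnings = [120, 40, 130, 30]  # actual pre-reserve results by year (invented)
target = 80                         # the earnings level management wants to report

reserve = 0
reported = []
for e in true_earnings:
    if e > target:                  # good year: book an extra "expense" into the jar
        stash = e - target
        reserve += stash
        reported.append(e - stash)
    else:                           # lean year: release the reserve to cover the gap
        release = min(target - e, reserve)
        reserve -= release
        reported.append(e + release)

print(true_earnings)  # volatile:  [120, 40, 130, 30]
print(reported)       # smoothed:  [80, 80, 80, 80] -- the pattern regulators look for
```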
Cookie jar accounting
Technology
872
68,597,317
https://en.wikipedia.org/wiki/Republic%20Stamping%20and%20Enameling
Republic Stamping and Enameling was an enamelware manufacturing company in Canton, Ohio. It operated from 1907 until 1952, when it was purchased by Ecko Products of Chicago. The company was founded by a group of investors led by Henry C. Milligan (1853–1940), an inventor who held several patents for improvements in enamelware manufacturing. Founder Milligan, a New Jersey native whose father was secretary-treasurer of the Central Railroad of New Jersey, had worked in a variety of metalworking factories. He founded the company after working at the Carnahan Stamping & Enameling Co. in Canton, which he had helped found. Milligan was granted a patent in 1884 for one-coat granite ware that would serve as a basis for some of the processes at the company. He also obtained a patent in 1904 for enameling steelware that provided a smoother finish. Milligan was involved in many other civic and business interests in Northeast Ohio, including serving for many years as a director of National City Bank of Cleveland. Milligan also chaired a Congressionally-appointed committee that studied the enamelware industry globally after World War I and recommended steep tariffs to protect U.S. plants such as his own. History and growth The Republic Enameling Plant initially employed 300 people at a new facility along a rail line at Harrison Ave. S.W. and Navarre Road. The company enjoyed robust sales shortly after opening, and several real estate developments were launched in the area to house workers, many of whom were European immigrants or had relocated from rural areas. In 1915, Republic Stamping purchased the General Stamping Co. of Canton for $1 million and was able to increase production to 160,000 pieces of enamelware daily. Republic operated the acquisition as a separate plant on the east side of Canton until closing it two years later and consolidating all production at its main location. In 1918, the company completed the addition of 184,000 square feet of floor space in two new facilities, including a conveyor belt system that eliminated miles of manual hauling by workers daily. By that year, Republic Stamping employed 1,000 people and was devoted to World War I government contracts for hospital and mess utensils. By the 1920s – boosted by tariffs that blunted competition from European manufacturers – the company was shipping pots, pans and other affordable kitchen utensils via rail throughout the country, including under the Old English Gray Ware brand. Republic continued to refine its production processes, replacing coal with oil to power the facility and improving the grounds around the plant to make them more attractive for workers on lunch breaks. Heavy activity at the Republic Stamping plant also led to frequent blockages of Harrison Ave. S.W. as trains waited for loading at the factory. This led to the eventual construction of an underpass at that location. The company was headquartered on the seventh floor of the new First National Bank Building in downtown Canton, which opened in 1924. From the beginning, Republic Stamping employed many women, especially in the dipping department, where pans were coated in liquid enamel, a task considered best suited to female manual dexterity. The company was unusual in that it gave employees a Christmas bonus of life insurance equal to their annual salary. Like many other Canton-area employers, Republic Stamping also hosted an annual summer picnic for employees at Myers Lake.
Labor relations were generally congenial, although there was a brief wildcat strike in 1933 and a walkout in 1941. Republic workers were represented by the Council of Fabricated Metal, Dairy, Gasoline Utensil & Enamel Workers local of the American Federation of Labor until the United Steelworkers of America became their representative after the company was sold. The Second World War and decline During the Second World War, Republic converted to war production and produced canteens, powder cartridges and other items. In 1944, Republic received the Army-Navy "E" award for excellence in the production of war materials. Throughout its history, the company had expanded into many product lines, ranging from Christmas tree holders to aluminum boxes used to haul bulk products at grocery stores. But with consumer tastes and new materials capturing the houseware market, Ecko Products Inc. bought out Republic Stamping's owners and the 400,000-square-foot facility in May 1952. A few months later Ecko transferred the manufacturing of Ovenex tinware to the Canton plant and dropped enamel production. The plant closed in 1959 with the transfer of housewares production to a sister facility in Massillon. The Canton plant would reopen in 1961. Ecko ceased operations at the Republic site in 1986. The building still stands. Photo album Life in the Republic Stamping plant during the mid-1940s is captured in an extensive photo album created by employee Charles Doyne Reese. The 1,400 photos show workers doing their daily tasks plus special events like Christmas parties and the annual summer picnic at Myers Lake. The album is now part of the William McKinley Presidential Library and Museum. References 1907 establishments in Ohio 1952 disestablishments in Ohio Companies based in Canton, Ohio Manufacturing companies established in 1907 Manufacturing companies disestablished in 1952 Photographic collections Vitreous enamel
Republic Stamping and Enameling
Chemistry
1,039
5,570,754
https://en.wikipedia.org/wiki/Terricolous%20lichen
A terricolous lichen is a lichen that grows on the soil as a substrate. An example is some members of the genus Peltigera. References Lichenology
Terricolous lichen
Biology
38
22,843,059
https://en.wikipedia.org/wiki/Slater%20integrals
In mathematics and mathematical physics, Slater integrals are certain integrals of products of three spherical harmonics. They occur naturally when applying an orthonormal basis of functions on the unit sphere that transform in a particular way under rotations in three dimensions. Such integrals are particularly useful when computing properties of atoms which have natural spherical symmetry. These integrals are defined below along with some of their mathematical properties. Formulation In connection with the quantum theory of atomic structure, John C. Slater defined the integral of three spherical harmonics as a coefficient $c^k(\ell, m; \ell', m')$: $c^k(\ell, m; \ell', m') = \sqrt{\frac{4\pi}{2k+1}} \int Y_{\ell m}^*(\Omega)\, Y_{k,\,m-m'}(\Omega)\, Y_{\ell' m'}(\Omega)\, d\Omega.$ These coefficients are essentially the product of two Wigner 3jm symbols. These integrals are useful and necessary when doing atomic calculations of the Hartree–Fock variety, where matrix elements of the Coulomb operator and exchange operator are needed. For an explicit formula, one can use Gaunt's formula for associated Legendre polynomials. Note that the product of two spherical harmonics can be written in terms of these coefficients. By expanding such a product over a spherical harmonic basis with the same order one may then multiply by $Y_{k,\,m-m'}^*$ and integrate, using the conjugate property and being careful with phases and normalisations: $Y_{\ell m}(\Omega)\, Y_{\ell' m'}^*(\Omega) = \sum_{k} \sqrt{\frac{2k+1}{4\pi}}\; c^k(\ell, m; \ell', m')\, Y_{k,\,m-m'}(\Omega).$ Hence $c^k(\ell, m; \ell', m') = (-1)^{m} \sqrt{(2\ell+1)(2\ell'+1)} \begin{pmatrix} \ell & k & \ell' \\ 0 & 0 & 0 \end{pmatrix} \begin{pmatrix} \ell & k & \ell' \\ -m & m-m' & m' \end{pmatrix}.$ These coefficients obey a number of identities, which follow from the symmetries of the Wigner 3jm symbols. References Atomic physics Quantum chemistry Rotational symmetry
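For readers who want to evaluate such integrals numerically, SymPy exposes the integral of three spherical harmonics as the Gaunt coefficient. The sketch below uses SymPy's own normalization conventions; relating its output to a particular $c^k$ normalization requires the prefactor given above, and the example quantum numbers are arbitrary.

```python
# Integral of a product of three spherical harmonics via SymPy's
# Gaunt coefficient. Quantum numbers below are chosen only to illustrate
# the selection rules; SymPy's normalization, not Slater's c^k, is returned.
from sympy.physics.wigner import gaunt

# Integral of Y_{1,0} * Y_{1,0} * Y_{2,0} over the unit sphere: nonzero,
# since l1 + l2 + l3 = 4 is even and (1, 1, 2) satisfies the triangle rule.
val = gaunt(1, 1, 2, 0, 0, 0)
print(val)          # exact symbolic value
print(val.evalf())  # numerical value

# The integral vanishes unless m1 + m2 + m3 = 0 and the l's satisfy the
# triangle condition with even sum:
print(gaunt(1, 1, 1, 0, 0, 0))  # -> 0, because l1 + l2 + l3 = 3 is odd
```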
Slater integrals
Physics,Chemistry
253
61,594,772
https://en.wikipedia.org/wiki/Bovine%20gammaherpesvirus%206
Bovine gammaherpesvirus 6 (BoHV-6) is a species of virus in the genus Macavirus, subfamily Gammaherpesvirinae, family Herpesviridae, and order Herpesvirales. References Gammaherpesvirinae
Bovine gammaherpesvirus 6
Biology
54
1,830,354
https://en.wikipedia.org/wiki/Heat-affected%20zone
In fusion welding, the heat-affected zone (HAZ) is the area of base material, either a metal or a thermoplastic, which is not melted but has had its microstructure and properties altered by welding or heat-intensive cutting operations. The heat from the welding process and subsequent re-cooling causes this change from the weld interface to the termination of the sensitizing temperature in the base metal. The extent and magnitude of property change depends primarily on the base material, the weld filler metal, and the amount and concentration of heat input by the welding process. The thermal diffusivity of the base material plays a large role—if the diffusivity is high, the material cooling rate is high and the HAZ is relatively small. Alternatively, a low diffusivity leads to slower cooling and a larger HAZ. The amount of heat input during the welding process also plays an important role: processes like oxyfuel welding use high heat input and increase the size of the HAZ. Processes like laser beam welding and electron beam welding give a highly concentrated, limited amount of heat, resulting in a small HAZ. Arc welding falls between these two extremes, with the individual processes varying somewhat in heat input. To calculate the heat input for arc welding procedures, the following formula is used: $Q = \frac{V \times I \times 60}{1000 \times S} \times \mathit{Efficiency}$ where $Q$ = heat input (kJ/mm), $V$ = voltage (V), $I$ = current (A), and $S$ = welding speed (mm/min). The efficiency is dependent on the welding process used, with gas tungsten arc welding having a value of 0.6, shielded metal arc welding and gas metal arc welding having a value of 0.8, and submerged arc welding 1.0. References Weman, Klas (2003). Welding Processes Handbook. New York: CRC Press LLC. Welding
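A minimal Python sketch of the heat-input formula above. The welding parameters in the example are invented for illustration; only the 0.6/0.8/1.0 efficiency factors come from the text.

```python
# Arc-welding heat input: Q [kJ/mm] = efficiency * (V * I * 60) / (1000 * S).
# V * I gives watts (J/s); * 60 converts to J/min; / S (mm/min) gives J/mm;
# / 1000 converts to kJ/mm.

def heat_input_kj_per_mm(voltage_v: float, current_a: float,
                         speed_mm_per_min: float, efficiency: float) -> float:
    """Heat input of an arc-welding pass in kJ/mm."""
    return efficiency * (voltage_v * current_a * 60.0) / (1000.0 * speed_mm_per_min)

# Example (invented parameters): gas tungsten arc welding, efficiency 0.6,
# at 12 V and 150 A, travelling at 100 mm/min.
q = heat_input_kj_per_mm(12.0, 150.0, 100.0, 0.6)
print(f"heat input = {q:.2f} kJ/mm")  # -> 0.65 kJ/mm
```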
Heat-affected zone
Engineering
382
71,079,573
https://en.wikipedia.org/wiki/Markov%20chain%20tree%20theorem
In the mathematical theory of Markov chains, the Markov chain tree theorem is an expression for the stationary distribution of a Markov chain with finitely many states. It sums up terms for the rooted spanning trees of the Markov chain, with a positive combination for each tree. The Markov chain tree theorem is closely related to Kirchhoff's theorem on counting the spanning trees of a graph, from which it can be derived. It was first stated by , for certain Markov chains arising in thermodynamics, and proved in full generality by , motivated by an application in limited-memory estimation of the probability of a biased coin. A finite Markov chain consists of a finite set of states, and a transition probability $p_{i,j}$ for changing from state $i$ to state $j$, such that for each state the outgoing transition probabilities sum to one. From an initial choice of state (which turns out to be irrelevant to this problem), each successive state is chosen at random according to the transition probabilities from the previous state. A Markov chain is said to be irreducible when every state can reach every other state through some sequence of transitions, and aperiodic if, for every state, the possible numbers of steps in sequences that start and end in that state have greatest common divisor one. An irreducible and aperiodic Markov chain necessarily has a stationary distribution, a probability distribution on its states that describes the probability of being on a given state after many steps, regardless of the initial choice of state. The Markov chain tree theorem considers spanning trees for the states of the Markov chain, defined to be trees, directed toward a designated root, in which all directed edges are valid transitions of the given Markov chain. If a transition from state $i$ to state $j$ has transition probability $p_{i,j}$, then a tree $T$ with edge set $E(T)$ is defined to have weight equal to the product of its transition probabilities: $w(T) = \prod_{(i,j) \in E(T)} p_{i,j}.$ Let $\mathcal{T}_i$ denote the set of all spanning trees having state $i$ at their root. Then, according to the Markov chain tree theorem, the stationary probability $\pi_i$ for state $i$ is proportional to the sum of the weights of the trees rooted at $i$. That is, $\pi_i = \frac{1}{Z} \sum_{T \in \mathcal{T}_i} w(T),$ where the normalizing constant $Z$ is the sum of $w(T)$ over all spanning trees. References Markov processes Spanning tree
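The theorem is easy to check numerically for a small chain by brute-force enumeration of rooted spanning trees. The Python sketch below uses a made-up 3-state transition matrix and compares the tree-weight formula against the stationary distribution obtained by power iteration; it is illustrative only, and the enumeration is exponential in the number of states.

```python
# Numerical check of the Markov chain tree theorem on a small chain.
from itertools import product

P = [[0.5, 0.3, 0.2],   # made-up irreducible, aperiodic transition matrix
     [0.1, 0.6, 0.3],
     [0.4, 0.2, 0.4]]
n = len(P)

def tree_weight_sum(root):
    """Sum of w(T) over all spanning trees directed toward `root`."""
    others = [i for i in range(n) if i != root]
    total = 0.0
    # Assign each non-root state a parent, then keep only acyclic assignments.
    for parents in product(range(n), repeat=len(others)):
        parent = dict(zip(others, parents))
        if any(parent[i] == i for i in others):
            continue
        ok = True
        for i in others:            # every state must walk to the root
            seen, j = set(), i
            while j != root:
                if j in seen:       # found a cycle not containing the root
                    ok = False
                    break
                seen.add(j)
                j = parent[j]
            if not ok:
                break
        if ok:
            w = 1.0
            for i in others:        # product of edge probabilities p_{i, parent(i)}
                w *= P[i][parent[i]]
            total += w
    return total

weights = [tree_weight_sum(r) for r in range(n)]
pi_tree = [w / sum(weights) for w in weights]

# Stationary distribution by power iteration, for comparison.
pi = [1.0 / n] * n
for _ in range(10000):
    pi = [sum(pi[i] * P[i][j] for i in range(n)) for j in range(n)]

print(pi_tree)
print(pi)  # the two vectors should agree to many decimal places
```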
Markov chain tree theorem
Mathematics
456
36,635,165
https://en.wikipedia.org/wiki/Physical%20acoustics
Physical acoustics is the area of acoustics and physics that studies interactions of acoustic waves with a gaseous, liquid or solid medium on macro- and micro-levels. This relates to the interaction of sound with thermal waves in crystals (phonons), with light (photons), with electrons in metals and semiconductors (acousto-electric phenomena), with magnetic excitations in ferromagnetic crystals (magnons), etc. Some recently developed experimental techniques include photo-acoustics, acoustic microscopy and acoustic emission. A long-standing interest is in acoustic and ultrasonic wave propagation and scattering in inhomogeneous materials, including composite materials and biological tissues. There are two main classes of problems studied in physical acoustics. The first one concerns understanding how the physical properties of a medium (solid, liquid, or gas) influence the propagation of acoustic waves in this medium in order to use this knowledge for practical purposes. The second important class of problems studied in physical acoustics is to obtain the relevant information about a medium under consideration by measuring the properties of acoustic waves propagating through this medium. See also Acoustic attenuation Acoustic levitation Acoustic streaming Acousto-electric effect Acousto-optics Elastic waves Interdigital transducer Longitudinal wave Love wave Nonlinear acoustics Picosecond ultrasonics Sonoluminescence Rayleigh wave Shear wave Sound absorption Sound velocity Thermoacoustics Acoustic radiation force References External links Physical Acoustics Group, The Institute of Physics (IOP) and The Institute of Acoustics (IOA) Anglo-French Physical Acoustics Conferences NCPA - National Center for Physical Acoustics, The University of Mississippi Acoustics
Physical acoustics
Physics
347
46,946,727
https://en.wikipedia.org/wiki/Pralmorelin
Pralmorelin (INN) (brand name GHRP Kaken 100; former developmental code names KP-102, GPA-748, WAY-GPA-748), also known as pralmorelin hydrochloride (JAN) and pralmorelin dihydrochloride (USAN), as well as, notably, growth hormone-releasing peptide 2 (GHRP-2), is a growth hormone secretagogue (GHS) used as a diagnostic agent that is marketed by Kaken Pharmaceutical in Japan in a single-dose formulation for the assessment of growth hormone deficiency (GHD). Pralmorelin is an orally-active, synthetic peptide drug, specifically, an analogue of met-enkephalin, with the amino acid sequence D-Ala-D-(β-naphthyl)-Ala-Trp-D-Phe-Lys-NH2. It acts as a ghrelin/growth hormone secretagogue receptor (GHSR) agonist, and was the first of this class of drugs to be introduced clinically. Acute administration of the drug markedly increases the levels of plasma growth hormone (GH) and reliably induces sensations of hunger and increases food intake in humans. Pralmorelin was also under investigation for the treatment of GHD and short stature (pituitary dwarfism), and made it to phase II clinical trials for these indications, but was ultimately never marketed for them. This may be because the ability of pralmorelin to increase plasma GH levels is significantly lower in people with GHD relative to healthy individuals. See also List of growth hormone secretagogues References Abandoned drugs Ghrelin receptor agonists Growth hormone secretagogues Peptides World Anti-Doping Agency prohibited substances
Pralmorelin
Chemistry
376
1,455,560
https://en.wikipedia.org/wiki/Anheung%20Proving%20Ground
Anheung Proving Ground is a launch site for sounding rockets in South Korea. It has been in use since 1993. External links https://web.archive.org/web/20050416040800/http://www.astronautix.com/sites/anhueng.htm Military installations Anheung Spaceports
Anheung Proving Ground
Astronomy
74
18,932,641
https://en.wikipedia.org/wiki/Motion%20Picture%20Association
The Motion Picture Association (MPA) is an American trade association representing the five major film studios of the United States, the mini-major Amazon MGM Studios, as well as the video streaming services Netflix and Amazon Prime Video. Founded in 1922 as the Motion Picture Producers and Distributors of America (MPPDA) and known as the Motion Picture Association of America (MPAA) from 1945 until September 2019, its original goal was to ensure the viability of the American film industry. In addition, the MPA established guidelines for film content which resulted in the creation of the Motion Picture Production Code in 1930. This code, also known as the Hays Code, was replaced by a voluntary film rating system in 1968, which is managed by the Classification and Rating Administration (CARA). The MPA has advocated for the motion picture and television industry, with the goals of promoting effective copyright protection and expanding market access, and has worked to curb copyright infringement, including attempts to limit the sharing of copyrighted works via peer-to-peer file sharing networks and by streaming from pirate sites. Former United States Ambassador to France Charles Rivkin is the chairman and CEO. History Foundation and early history: 1922–1929 The MPA was founded as the Motion Picture Producers and Distributors of America (MPPDA) in 1922 as a trade association of member motion picture companies. At its founding, MPPDA member companies produced approximately 70 to 80 percent of the films made in the United States. Former Postmaster General Will H. Hays was named the association's first president. The main focus of the MPPDA in its early years was on producing a strong public relations campaign to ensure that Hollywood remained financially stable and able to attract investment from Wall Street, while simultaneously ensuring that American films had a "clean moral tone". The MPPDA also instituted a code of conduct for Hollywood's actors in an attempt to govern their behavior offscreen. Finally, the code sought to protect American film interests abroad by encouraging film studios to avoid racist portrayals of foreigners. From the early days of the association, Hays spoke out against public censorship, and the MPPDA worked to raise support from the general public for the film industry's efforts against such censorship. Large portions of the public opposed censorship, but also decried the lack of morals in movies. The organization also formed a trust to block out independent producers and enforce the monopolistic studio system. At the time of the MPPDA's founding, there was no national censorship, but some state and municipal laws required movies to be censored, a process usually overseen by a local censorship board. As such, in certain locations in the U.S., films were often edited to comply with local laws regarding the onscreen portrayal of violence and sexuality, among other topics. This resulted in negative publicity for the studios and decreasing numbers of theatergoers, who were uninterested in films that were sometimes so severely edited that they were incoherent. In 1929, more than 50 percent of American moviegoers lived in a location overseen by such a board. In 1924, Hays instituted "The Formula", a loose set of guidelines for filmmakers, in an effort to get the movie industry to self-regulate the issues that the censorship boards had been created to address. "The Formula" requested that studios send synopses of films being considered to the MPPDA for review.
This effort largely failed, however, as studios were under no obligation to send their scripts to Hays's office, nor to follow his recommendations. In 1927, Hays oversaw the creation of a code of "Don'ts and Be Carefuls" for the industry. This list outlined the issues that movies could encounter in different localities. Hays also created a Studio Relations Department (SRD) with staff available to the studios for script reviews and advice regarding potential problems. Again, despite Hays' efforts, studios largely ignored the "Don'ts and Be Carefuls", and by the end of 1929, the MPPDA received only about 20 percent of Hollywood scripts prior to production, and the number of regional and local censorship boards continued to increase. Production Code: 1930–1934 In 1930, the MPPDA introduced the Motion Picture Production Code, commonly called the Hays Code. The Code consisted of moral guidelines regarding what was acceptable to include in films. Unlike the "Don'ts and Be Carefuls", which the studios had ignored, the Production Code was endorsed by studio executives. The Code incorporated many of the "Don'ts and Be Carefuls" as specific examples of what could not be portrayed. Among other rules, the code prohibited inclusion of "scenes of passion" unless they were essential to a film's plot; "pointed profanity" in either word or action; "sex perversion"; justification or explicit coverage of adultery; sympathetic treatment of crime or criminals; dancing with "indecent" moves; and white slavery. Because studio executives had been involved in the decision to adopt the code, MPPDA-member studios were more willing to submit scripts for consideration. However, the growing economic impacts of the Great Depression of the early 1930s increased pressure on studios to make films that would draw the largest possible audiences, even if it meant taking their chances with local censorship boards by disobeying the Code. In 1933 and 1934, the Catholic Legion of Decency, along with a number of Protestant and women's groups, launched plans to boycott films that they deemed immoral. In order to avert boycotts which might further harm the profitability of the film industry, the MPPDA created a new department, the Production Code Administration (PCA), with Joseph Breen as its head. Unlike previous attempts at self-censorship, PCA decisions were binding—no film could be exhibited in an American theater without a stamp of approval from the PCA, and any producer attempting to do so faced a fine of $25,000. After ten years of unsuccessful voluntary codes and expanding local censorship boards, the studios approved and agreed to enforce the code, and the nationwide "Production Code" was enforced starting on July 1, 1934. War years: 1934–1945 In the years that immediately followed the adoption of the Code, Breen often sent films back to Hollywood for additional edits, and in some cases, simply refused to issue PCA approval for a film to be shown. At the same time, Hays promoted the industry's new focus on wholesome films and continued promoting American films abroad. For nearly three years, studios complied with the Code. By 1938, however, as the threat of war in Europe loomed, movie producers began to worry about the possibility of decreased profits abroad. This led to a decreased investment in following the strictures of the code, and occasional refusals to comply with PCA demands.
That same year, responding to trends in European films in the run-up to the war, Hays spoke out against using movies as a vehicle for propaganda. In 1945, after nearly 24 years as president, Hays stepped down from his position at the MPPDA, although he continued to act as an advisor for the Association for the next five years. Johnston era: 1945–1963 In 1945 the MPPDA hired Eric Johnston, four-time president of the United States Chamber of Commerce, to replace Hays. During his first year as president, Johnston rebranded the Motion Picture Producers and Distributors of America as the Motion Picture Association of America (MPAA). He also created the Motion Picture Export Association (MPEA) to promote American films abroad by opposing production company monopolies in other countries. In 1947 the MPEA voted to discontinue film shipments to Britain after the British government imposed an import tax on American films. Johnston negotiated with the British government to end the tax in 1948, and film shipments resumed. In 1956, Johnston oversaw the first major revision of the Production Code since it was created in 1930. This revision allowed the treatment of some subjects which had previously been forbidden, including abortion and the use of narcotics, so long as they were "within the limits of good taste". At the same time, the revisions added a number of new restrictions to the code, including outlawing the depiction of blasphemy and mercy killings in films. Johnston was well liked by studio executives, and his political connections helped him function as an effective liaison between Hollywood and Washington. In 1963, while still serving as president of the MPAA, Johnston died of a stroke. For three years, the MPAA operated without a president while studio executives searched for a replacement. Valenti era: 1966–2004 The MPAA appointed Jack Valenti, a former aide to President Lyndon Johnson, as its president in 1966. In 1968, Valenti replaced the Production Code with a system of voluntary film ratings, in order to limit censorship of Hollywood films and provide parents with information about the appropriateness of films for children. In addition to concerns about protecting children, Valenti stated in his autobiography that he sought to ensure that American filmmakers could produce the films they wanted, without the censorship that existed under the Production Code that had been in effect since 1934. In 1975, Valenti established the Film Security Office, an anti-piracy division at the MPAA, which sought to recover unauthorized recordings of films to prevent duplication. Valenti continued to fight piracy into the 1980s, asking Congress to require that VCRs be equipped with chips to prevent illegal reproduction of video cassettes, and in the 1990s supported law enforcement efforts to stop bootleg distribution of video tapes. Valenti also oversaw a major change in the ratings system that he had helped create—the removal of the "X" rating, which had come to be closely associated with pornography. It was replaced with a new rating, "NC-17", in 1990. In 1994, the Motion Picture Export Association of America changed its name to the Motion Picture Association to more accurately reflect the global nature of audiovisual entertainment in the international marketplace. In 2001, Valenti established the Digital Strategy Department at the MPAA to specifically address issues surrounding digital film distribution and piracy. 
Modern era: 2004–present After serving as president of the MPAA for 38 years, Valenti announced that he would step down in 2004. In September of that year, he was replaced by former Secretary of Agriculture Dan Glickman. During his tenure, Glickman focused on tax issues, content protection efforts, and increasing U.S. studios' access to international markets. He led lobbying efforts that resulted in $400 million in federal tax incentives for the film industry, and also supported a law which created federal oversight of anti-piracy efforts. Glickman stepped down in 2010. After a search which lasted over a year, the MPAA hired former U.S. Senator Chris Dodd to replace Glickman in March 2011. In his role as president, Dodd focused on content protection, trade, and improving Hollywood's image. He traveled to China in 2011 in an effort to encourage the Chinese government to both crack down on piracy and further open its film market. A settlement of a long-argued World Trade Organization complaint, coupled with Dodd's efforts, contributed to the United States' agreement with China in 2012 to open China's film market to more Hollywood films and to increase U.S. studios' share of box-office revenues in China. In addition to this agreement with China, the U.S. signed more than 20 memos of understanding with foreign governments regarding the enforcement of intellectual property rights during Dodd's tenure at the MPAA. In 2011, the MPAA supported the passage of the Stop Online Piracy Act (SOPA) and PROTECT IP Act (PIPA). After the two bills were shelved in early 2012, Dodd indicated that Hollywood might cut off campaign contributions to politicians who failed to support anti-piracy efforts in the future. In 2012, the MPAA launched the Diversity and Multicultural Outreach program, as part of an effort to increase diversity in the television and film industry both through employment and representation on screen. Since its inception, the Diversity and Multicultural Outreach group has conducted outreach and partnered with more than 20 multicultural groups and national civil rights organizations in sponsoring film screenings, festivals, and other diversity-themed events. Throughout his tenure at the MPAA, Dodd also highlighted the need for film studios to embrace technology as a means of distributing content. In June 2017, the MPAA supported the launch of the Alliance for Creativity and Entertainment (ACE), a coalition of entertainment companies, including the six major studios, Netflix, and Amazon, that would draw on the MPAA's resources in an effort to reduce online piracy through research and legal efforts. Former U.S. diplomat and Assistant Secretary of State for Economic and Business Affairs Charles Rivkin succeeded Chris Dodd as CEO on September 5, 2017, and as chairman effective December 6, 2017. On January 25, 2019, film streaming service Netflix announced that it had joined the MPAA in an effort to identify itself among the major studios. In September 2019, the association updated its branding to reflect the global nature of the film, television, and streaming industry, officially changing its name to the Motion Picture Association (MPA), a name which it has used internationally since 1994. An updated logo also went into effect at this time. In September 2024, it was announced that Amazon MGM Studios would join the MPA, becoming the seventh member of the trade group. 
Film rating system In 1968, the MPAA established the Code and Rating Administration, or CARA (later renamed the Classification and Rating Administration), which began issuing ratings for films exhibited and distributed commercially in the United States to help parents determine what films are appropriate for their children. Since the rating system was first introduced in November 1968, it has gone through several changes, including the addition of a PG-13 rating. The ratings system is completely voluntary, and ratings have no legal standing. Instead, the American film industry enforces the MPAA film ratings after they have been assigned, with many theaters refusing to exhibit non-rated films. For example, it is against the American film industry's policy to admit unaccompanied children to an R-rated film. An unrated film is often denoted by "NR", for example in newspapers, although this is not a formal MPAA rating. In 2006, the film This Film Is Not Yet Rated alleged that the MPAA gave preferential treatment to member studios during the process of assigning ratings, as well as criticizing the rating process for its lack of transparency. In response, the MPAA posted its ratings rules, policies, and procedures, as well as its appeals process, online. According to the MPA, the ratings are made by an independent group of parents. According to a 2015 study commissioned by CARA, ninety-three percent of parents in the U.S. find the rating system to be a helpful tool. The ratings currently used by the MPA's voluntary system are G (General Audiences), PG (Parental Guidance Suggested), PG-13 (Parents Strongly Cautioned), R (Restricted), and NC-17 (No One 17 and Under Admitted). Members The original MPAA members were the "Big Eight" film studios: Paramount Pictures, Fox Film, Loews, Universal Pictures, and United Artists were followed by Warner Bros. in 1923, Columbia Pictures in 1924, Metro-Goldwyn-Mayer (formed by the merger of Loews, Metro Pictures, Goldwyn Pictures, and Louis B. Mayer Productions), and RKO Pictures in 1928. Then came the 1935 merger of Fox Film and 20th Century Pictures into 20th Century Fox. United Artists briefly resigned from the organization in 1956 over a ratings dispute, although it rejoined later in the decade. By 1966, Allied Artists Pictures had joined the original members. In the following decade, new members joining the MPAA included Avco Embassy in 1975 and Walt Disney Studios in 1979. The next year, Filmways became an MPAA member, but was later replaced in 1986, along with Avco Embassy, when the De Laurentiis Entertainment Group and Orion Pictures joined the MPAA roster. As of 1995, the MPAA members were MGM—which included United Artists after their 1981 merger, Paramount, Sony Pictures—which included Columbia and TriStar Pictures after their 1989 acquisition, 20th Century Fox, Universal, Disney, and Warner Bros. Turner Entertainment joined the MPAA in 1995, but was purchased in 1996 by Time Warner. The number of members dropped to six in 2005, when MGM lost its membership following a Sony-led buyout. The MPAA's member companies remained intact until Disney's 2019 acquisition of 21st Century Fox, which included 20th Century Fox. Netflix was approved as a new member in January 2019, making it the first non-studio and the first streaming service to be part of the organization. The addition of Netflix also helped to maintain the number of members after the acquisition of 20th Century Fox by Disney. The MPA aims to recruit additional members. 
In September 2024, it was announced that Amazon MGM Studios and Prime Video would join the MPA as its seventh member starting October 1, the second non-studio to do so after Netflix in 2019; this would also mark a return to the MPA for MGM, currently a division of Amazon MGM Studios, after it lost membership in 2005 following a buyout led by Sony. Content protection efforts The MPA's concerted efforts at fighting copyright infringement began in 1975 with the establishment of the Film Security Office, which sought to recover unauthorized recordings of films in order to prevent duplication. The MPA has continued to pursue a number of initiatives to combat illegal distribution of films and TV shows, especially in response to new technologies. In the 1980s, it spoke out against VCRs and the threat that the MPA believed they represented to the movie industry, with MPAA president Jack Valenti drawing a parallel between the threat of the VCR and that of the Boston Strangler. In 1986, the MPAA asked Congress to pass a law that would require VCRs to come equipped with a chip to prevent them from making copies. Legal efforts at stopping homemade copies of broadcast television largely ended, however, when the United States Supreme Court ruled that such copying constituted fair use. The MPA continued to support law enforcement efforts to stop bootleg production and distribution of videotapes and laserdiscs into the 1990s, and in 2000 took successful legal action against individuals posting DVD decryption software on the Internet in Universal City Studios, Inc. v. Reimerdes. Following the release of RealDVD—an application that enabled users to make copies of DVDs—RealNetworks sued the DVD Copy Control Association and the major studios in 2008 over the legality of the software, accusing them of violating the Sherman Antitrust Act. The judgment found there were no grounds for the antitrust claim and dismissed the suit. The court later found that the RealNetworks product violated the Digital Millennium Copyright Act (DMCA). The MPA has continued to support law enforcement efforts to prevent illegal distribution of copyrighted materials online. The MPA and its British counterpart, the Federation Against Copyright Theft (FACT), also funded the training of Lucky and Flo, a pair of Labrador Retrievers, to detect polycarbonates used in the manufacturing of DVDs. The MPA strives to protect the creative rights of large corporate filmmakers, and it has produced anti-piracy slogans such as "Who Makes Movies?" and "You can click, but you can't hide". Online file sharing In the early 2000s, the MPAA began focusing its efforts to curb copyright infringement specifically on peer-to-peer file sharing, initially using a combination of educational campaigns and cease and desist letters to discourage such activity. In the first six months of 2002, the MPAA sent more than 18,000 such letters to internet service providers to forward to users engaged in copyright infringement. In late 2004, the MPAA changed course and filed lawsuits in a concerted effort to address copyright infringement on a number of large online file-sharing services, including BitTorrent and eDonkey. The following year, the MPAA expanded its legal actions to include lawsuits against individuals who downloaded and distributed copyrighted material via peer-to-peer networks. The MPAA also played a role in encouraging the Swedish government to conduct a raid of the Pirate Bay file-sharing website in May 2006. 
Swedish officials have acknowledged that part of the motivation for the raid was the threat of sanctions from the World Trade Organization, along with a letter from the MPAA. In 2013, the Center for Copyright Information unveiled the Copyright Alert System, a system established through an agreement between the MPAA, the Recording Industry Association of America, and five of the US's largest internet service providers. The system used a third-party service to identify content being distributed illegally. Users were then informed that their accounts were being used for possible copyright infringement and were provided with information about ways to get authorized content online. Users who received multiple notices of infringement faced "mitigation measures", such as temporary slowing of their Internet service, but the system did not include termination of subscriber accounts. Subscribers facing such action had a right to appeal to the American Arbitration Association. In January 2017, the Copyright Alert System was discontinued. While no official reason was given, the MPAA's general counsel stated that the system had not been equipped to stop repeat infringers. On December 24, 2014, the Sony Pictures hack revealed that, following a lawsuit in which the MPAA won a multimillion-dollar judgment against Hotfile, a file hosting website, the MPAA colluded with Hotfile to misrepresent the settlement so that the case would serve as a deterrent. The settlement was widely reported to be $80 million; however, Hotfile paid the studios only $4 million and agreed to have the $80 million figure recorded as the judgment and the website shut down. In a case resolved in 2015, the MPAA and others supported the United States International Trade Commission (ITC)'s decision to consider electronic transmissions to the U.S. as "articles" so that it could prevent the importation of digital files of counterfeit goods. While the case being considered by the ITC involved dental appliances, the ITC could have also used such authority to bar the importation of pirated movies and TV shows from rogue foreign websites that traffic in infringing content. The Federal Circuit Court of Appeals took up the matter, and ultimately ruled against the ITC. In 2016, the MPAA reported Putlocker as one of the "top 5 rogue cyberlocker services" to the Office of the United States Trade Representative as a major piracy threat; the website was then blocked in the United Kingdom. In 2019, the MPA released an overview of piracy markets to the US Government. Added to the list were the Chinese hosting service Baidu and the Russian gambling firm 1xBet. Criticism and controversies Publicity campaigns The MPAA has also produced publicity campaigns to discourage piracy. The Who Makes Movies? advertising campaign in 2003 highlighted workers in the film industry describing how piracy affected them. The video spots ran as trailers before films, and as television advertisements. In 2004, the MPAA began using the slogan "You can click, but you can't hide". This slogan appeared in messages that replaced file-sharing websites after they had been shut down through MPAA legal action. It also appeared in posters and videos distributed to video stores by the MPAA. Also in 2004, the MPAA partnered with the Federation Against Copyright Theft and the Intellectual Property Office of Singapore to release "You Wouldn't Steal A Car", a trailer shown before films in theaters that equated piracy with car theft. 
The trailer was later placed at the beginning of many DVDs, in many cases as an unskippable clip (one that cannot be skipped or fast-forwarded), which triggered criticism and a number of parodies. In 2005, the MPAA commissioned a study to examine the effects of file sharing on film industry profitability. The study concluded that the industry lost $6.1 billion per year to piracy, and that up to 44 percent of domestic losses were due to file sharing by college students. In 2008, the MPAA revised the percentage of loss due to college students down to 15 percent, citing human error in the initial calculations of this figure. Beyond the percentage of the loss that was attributable to college students, however, no other errors were found in the study. In 2015, theaters began airing the MPAA's "I Make Movies" series, an ad campaign intended to combat piracy by highlighting the stories of behind-the-scenes employees in the film and television industry. The series pointed audiences to the MPAA's "WhereToWatch" website (later dubbed "The Credits"), which highlights the behind-the-scenes creativity involved in filmmaking. Accusations of copyright infringement The MPAA itself has been accused of copyright infringement on multiple occasions. In 2007, the creator of a blogging platform called Forest Blog accused the MPAA of violating the license for the platform, which required that users link back to the Forest Blog website. The MPAA had used the platform for its own blog, but without linking back to the Forest Blog website. The MPAA subsequently took the blog offline, and explained that the software had been used on a test basis and the blog had never been publicized. Also in 2007, the MPAA released a software toolkit for universities to help identify cases of file sharing on campus. The software used parts of the Ubuntu Linux distribution, released under the General Public License, which stipulates that the source code of any projects using the distribution be made available to third parties. The source code for the MPAA's toolkit, however, was not made available. When the MPAA was made aware of the violation, the software toolkit was removed from their website. In 2006, the MPAA admitted having made illegal copies of This Film Is Not Yet Rated (a documentary exploring the MPAA itself and the history of its rating system) — an act which Ars Technica described as hypocrisy and which Roger Ebert called "rich irony". The MPAA subsequently claimed that it had the legal right to copy the film despite this being counter to the filmmaker's explicit request, because the documentary's exploration of the MPAA's ratings board was potentially a violation of the board members' privacy. International activities Around the world, the MPA works with local law enforcement to combat piracy. The MPA's offices around the world are: Motion Picture Association – Canada MPA EMEA (Europe, Middle East and Africa), which has anti-piracy programs in 17 European countries MPA Asia and Pacific, which has anti-piracy programs in 14 Asian countries MPA Latin America, which has anti-piracy programs in two Latin-American countries See also Australian Classification Board British Board of Film Classification DeCSS: decryption program for DVD video discs using Content Scramble System Eirin Entertainment Software Rating Board Will H. 
Hays National Association of Theatre Owners Operation Red Card Pre-Code Pre-Code Hollywood United States Motion Picture Production Code of 1930 References External links MPPDA Digital Archives (1922–1939) Motion Picture Association of America. Production Code Administration records, Margaret Herrick Library, Academy of Motion Picture Arts and Sciences MPPDA - MPAA - The Motion Picture Production Code film numbers to 52000—Includes a downloadable Excel worksheet The Production Code of the Motion Picture Industry (1930-1967) 1922 establishments in the United States Arts and media trade groups Arts organizations based in Washington, D.C. Computer law Entertainment rating organizations Film censorship in the United States Trade associations based in the United States Organizations established in 1922 Film organizations in the United States
Motion Picture Association
Technology
5,645
36,891,495
https://en.wikipedia.org/wiki/Privately%20owned%20public%20space
Privately owned public space (POPS), or alternatively, privately owned public open spaces (POPOS), are terms used to describe a type of public space that, although privately owned, is legally required to be open to the public under a city's zoning ordinance or other land-use law. The acronym POPOS is preferentially used over POPS on the West Coast of the US. Both terms can refer to either a single space or multiple spaces. These spaces are usually the product of a deal between cities and private real estate developers in which cities grant valuable zoning concessions and developers in return provide privately owned public spaces in or near their buildings. Privately owned public spaces commonly include plazas, arcades, small parks, and atriums. Many cities worldwide, including Auckland, New York City, San Francisco, Dublin, Seattle, Seoul, and Toronto, have privately owned public spaces. History The term privately owned public space was popularized by Harvard professor Jerold S. Kayden through his 2000 book Privately Owned Public Space: The New York City Experience, written in collaboration with the New York City Department of City Planning and the Municipal Art Society of New York. It attributed this type of public space to a "legal innovation" introduced in 1961 in New York City, an "incentive zoning" mechanism offering developers the right to build 10 square feet of bonus rentable or sellable floor area in return for one square foot of plaza, and three square feet of bonus floor area in return for one square foot of arcade. Under this mechanism, for example, a developer providing a 1,000-square-foot public plaza could earn 10,000 square feet of bonus floor area. Between 1961 and 2000, 503 privately owned public spaces, scattered almost entirely across the downtown, midtown, and Upper East and West Sides of New York City's borough of Manhattan, were constructed at 320 buildings. The book cited the quantitative success of the program's public space production but reported that 41 percent of these spaces were of "marginal" quality, and that roughly 50 percent of buildings had one or more spaces apparently out of compliance with applicable legal requirements, resulting in privatization. While privately owned public space as a term of art refers specifically to private property required to be usable by the public under zoning or similar regulatory arrangements, the phrase in its broadest sense can refer to places, like shopping malls and hotel lobbies, that are privately owned and open to the public, even if they are not legally required to be open to the public. Opinions In 2017, The Guardian published a study of the phenomenon in London, but faced a lack of response from both the landowners and the local authorities it questioned on the subject. In reaction to the report, the Mayor of London, Sadiq Khan, promised to publish new guidelines on how these spaces are governed. See also Fourth and Madison Building List of privately owned public spaces in London List of privately owned public spaces in New York City List of privately owned public spaces in San Francisco Private protected area Privatization References External links List of privately owned public open spaces in San Francisco by the city's Planning department Privately owned public space in New York City Information on the website of the Municipal Art Society Urban planning Land use
Privately owned public space
Engineering
611
2,342,502
https://en.wikipedia.org/wiki/Water%20detector
A water detector is an electronic device that is designed to detect the presence of water for purposes such as providing an alert in time to allow the prevention of water damage. A common design is a small cable or device that lies flat on a floor and relies on the electrical conductivity of water to decrease the resistance across two contacts. When enough water bridges the contacts, the device sounds an audible alarm and provides onward signaling; a minimal code sketch of this detection logic appears below. These devices are useful in a normally occupied area near any infrastructure that has the potential to leak water, such as HVAC, water pipes, drain pipes, vending machines, dehumidifiers, or water tanks. Water leak detection Water leak detection is an expression more commonly used for larger, integrated systems installed in modern buildings or those containing valuable artifacts, materials or other critical assets where early notification of a potentially damaging leak would be beneficial. In particular, water leak detection has become a necessity in data centers, trading floors, banks, archives and other mission-critical infrastructure. The water leak detection industry is small and specialized, with only a few manufacturers operating worldwide. The original application was in the void created by "computer room" floors in the days of large mainframe computer systems. These use a modular, raised floor based around a structural "floor tile", usually 600 mm square and supported at the corners by pedestals. The void created gave easy access and routing for the mass of power, networking and other interconnecting cables associated with larger computer systems (processors, drives, routers, etc.). Mainframe computers also generated large amounts of heat, so the void under the floor could also be used as a plenum to distribute and diffuse chilled air around the computer room. The void was therefore likely to have chilled water pipes running through it, along with the drains for condensates associated with refrigeration plant. In addition, designers found the floor void a very convenient place to route other wet services feeding bathrooms, radiators and other facilities. A leak occurring within a floor void would therefore go unnoticed until the hydrostatic head of pressure forced the water through to the floors below, where its dripping through the ceiling would be noticed, or, more disconcertingly, until the water penetrated the joints and connectors of the power or network cabling and caused system failure through short circuits. Current digital water leak detection systems can locate multiple water leaks to within 1 meter resolution over a complex network of cables running several kilometers. This functionality reduces the downtime and potential damage caused by the inaccurate reporting that was common with older analogue-based systems. Water leak detection systems can be integrated with Building Management Systems using protocols such as Modbus. Using SNMP, leak detection systems can inform the IT staff in charge of monitoring data centers and server rooms. Integrated multi-zone systems The computer room therefore became the early application for systems which would alert the operator to a leaking pipe in sufficient time for remedial action to be taken to prevent a disaster. As computer rooms could be quite large, simple "point of use" detectors were not really appropriate, although point sensors do have value where simple, single-point detection is required in, say, basements and sumps. 
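The contact-resistance principle described above lends itself to a short illustration. The following is a minimal sketch only, not a vendor implementation: the read_sense_resistance() stub, the zone name, and the resistance thresholds are assumptions invented for the example, and a real detector would signal onward through a relay, a Modbus register, or an SNMP trap rather than printing to a console.

import time

# Illustrative values only; real thresholds depend on the sensing cable,
# the contact spacing, and the conductivity of the water involved.
DRY_RESISTANCE_OHMS = 10_000_000   # near open circuit while the contacts are dry
ALARM_THRESHOLD_OHMS = 50_000      # assumed: bridging water drops resistance well below this

def read_sense_resistance():
    """Stub for hardware access: return the resistance in ohms measured
    across the two sensing contacts (e.g. via an ADC and a known pull-up
    resistor). Here it simply reports a dry reading."""
    return DRY_RESISTANCE_OHMS

def signal_alarm(zone, resistance_ohms):
    """Stub for onward signaling. A real detector might close a relay,
    write a Modbus register, or send an SNMP trap to the building
    management system; this sketch just prints."""
    print(f"LEAK ALERT: zone={zone} resistance={resistance_ohms:.0f} ohms")

def poll_loop(zone="floor-void-1", interval_s=5.0):
    """Poll the sensing contacts and raise an alarm when conductive
    water lowers the measured resistance below the threshold."""
    while True:
        resistance = read_sense_resistance()
        if resistance < ALARM_THRESHOLD_OHMS:
            signal_alarm(zone, resistance)
        time.sleep(interval_s)

if __name__ == "__main__":
    poll_loop()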
Most modern leak detection systems developed around the use of a water-sensitive cable which can be laid in long lengths and complex patterns: around the base of the floor; around the perimeter of rooms; as a "barrier" over which water has to flow; or following, tracing or attached directly to lines of water pipes. General application The mainframe computer room has largely been replaced with the data centre, but the application has remained, with almost universal use of "computer-room" style raised floors in nearly all new commercial and office construction. To warrant the installation of leak detection, the operator has to perceive the risk in the particular circumstances, but most mechanical and electrical design engineers will take a view of the risk of damage from a leak in terms of its effect on the client's own operations, services and assets and, often as important, those of adjoining neighbours and those on the floors below. The installation of leak detection systems is therefore becoming more commonplace in most new commercial office construction schemes, along with the more obvious targets of museums, galleries and archives. Leak detection systems must be unobtrusive, effective and robust enough to withstand getting dirty and the moderate physical abuse of other works being carried out under the same floor. Zoned systems have a reputation for being safe, reliable and not prone to the types of false alarms that affect systems using cumulative resistance techniques. References Measuring instruments Detectors
Water detector
Technology,Engineering
917
32,762,244
https://en.wikipedia.org/wiki/Amine%20fluoride
Amine fluorides are dental drugs. History Amine fluorides were developed in the 1950s by GABA in collaboration with the Institute of Dentistry of the University of Zurich (Switzerland). In 1954, Wainwright first showed in his study the high permeability of tooth enamel to organic molecules such as urea. This finding led him to ask whether it might be possible to enrich the enamel with fluoride by using organic molecules, chemically bonded to fluoride, as carriers. In 1957, Mühlemann, Schmid and König published the results of their in vitro studies, in which they demonstrated that some amine fluoride compounds were markedly superior to inorganic fluorides in reducing the solubility of the enamel. In the same year, Irwin, Leaver and Walsh published the results of their in vitro experiments, which demonstrated that monoamine-aliphatic compounds offered the enamel protection against acid decalcification. In 1967, Mühlemann demonstrated the superiority of organic fluoride compared to inorganic fluoride in preventing dental decay. He observed that amine fluoride had a pronounced affinity for enamel, raising the quantity of fluoride in the enamel and also having an anti-enzyme effect on the microbial activity of dental plaque. His conclusions were the following: amine fluorides produce the most powerful enrichment of the enamel in fluoride, even at low concentrations; their caries-preventive action is due to the fluoride on one hand and to the anti-enzyme effect of the organic fraction on the other; and they also arrest the formation of dental plaque, as a result of their surface-active (tensioactive) properties. In this way amine fluorides were born in the GABA S.A. laboratory in Basel. The commercial products, which contain amine fluoride or combinations of amine fluoride with stannous (tin) fluoride in their formula, are available in different forms: gels, fluids, dentifrices, and mouth rinses. Structure The unique position of the amine fluorides is based on their special molecular structure: the fluoride ion is bound to an organic fatty-acid amine fragment. This is not the case for inorganic fluorides such as sodium fluoride and sodium monofluorophosphate. Amine fluorides combine a hydrophobic molecular moiety, the non-polar tail, with a hydrophilic component, the polar amine head. For this reason, they act like surfactants, reducing the surface tension of saliva and forming a homogeneous film on all oral surfaces. Due to their surface activity, the amine fluorides are rapidly dispersed in the oral cavity and wet all surfaces. In contrast, in the case of inorganic fluorides the counter ion (e.g. sodium) has no transport function; the fluoride is statistically distributed in the oral cavity. Amine fluoride covers the tooth surfaces with a homogeneous molecular layer. This continuous film prevents rapid rinsing off by the saliva. The amine fluorides are thus available as an active agent for a longer period. Amine fluorides have a slightly acidic pH. For this reason, fluoride ions can combine rapidly with the calcium in dental enamel to form calcium fluoride (Ca2+ + 2 F− → CaF2). This acts as a fluoride depot over a longer period: under cariogenic conditions fluoride ions are released, stimulating the remineralisation of dental enamel and thus counteracting acid attacks. See also Silver diammine fluoride Olaflur References Dental drugs Oral hygiene Amines Fluorides
Amine fluoride
Chemistry
749
73,249,949
https://en.wikipedia.org/wiki/Selected%20timeline%20related%20to%20orphan%20wells%20in%20Alberta
Selected timeline related to orphan wells in Alberta, Canada is a list of events relevant to orphan wells in Alberta, Canada. Orphan wells are inactive oil or gas well sites that have no solvent owner that can be held legally or financially accountable for the decommissioning and reclamation obligations to ensure public safety and to address environmental liabilities. 1910s 1910s The province's oldest inactive well has been dormant and unreclaimed since June 30, 1918. 1920s 1920s Some of the legacy sites were in operation in the 1920s or earlier, and have no known operator and no "financial security to cover the cleanup costs." 1940s 1940s Alberta's oil industry experienced a big boom. 1950s Canada's oil production in 1946 was only of oil per day. By 1956, Alberta was producing per day. 1950s The boom in Alberta's oil industry continued. 1960s 1960s One of Alberta's most significant "busts" in the boom and bust cycle. 1970s 1970s Alberta's oil industry experienced a big boom. 1980s 1980s One of Alberta's most significant "busts", which lasted about a decade. 1990s Alberta's oil industry experienced a big boom. 1991 June 12 In Panamericana v. Northern Badger Oil & Gas Ltd., the Court of Appeal of Alberta ruled that "abandonment of oil and gas wells is part of the general law of Alberta enacted to protect the environment and for the health and safety of all citizens." 1999 There were about 40,000 inactive wells in Alberta; by about 2008, there were 60,000, and by 2018, there were 89,217. 2000s January The Alberta Energy and Utilities Board (EUB) established the Public Safety and Sour Gas (PSSG) independent committee to "review and assess the province's regulatory regime as it related to health and safety". Sour gas contains significant amounts of hydrogen sulfide (H2S), which is toxic to animals. There were more than "6,000 sour gas wells, approximately 250 sour gas processing plants and over 18,000 km of operating sour gas pipelines" in the province. The PSSG spent a year consulting with communities that were impacted by sour gas extraction, including 16 Aboriginal communities, with the aim of improving regulation of sour gas to reduce its negative effects on public health and safety. 2001 to 2004 Encana and other energy companies "carpet-bombed", or drilled, "thousands of shallow wells" in Alberta's Wheatland County, in communities such as Redland and Rosebud, in "coal-bed methane and sand formations." In Ernst v. EnCana Corporation, an unsuccessful $33-million gross negligence lawsuit filed in 2007 against Encana, the Alberta government, and the provincial regulator, the Energy Resources Conservation Board, the litigant said that "well water was contaminated with explosive volumes of methane and other chemicals". While the lawsuit failed, it was a catalyst for the taxpayer-funded $10-million project to pipe water into the communities where the wells had been drilled, as their groundwater was so heavily contaminated. 2002 The industry-led Orphan Well Association (OWA) was established. It is an independent, non-profit organization. 2002 Perpetual Energy Inc. was created as a spin-out of Paramount Resources, owned by Clayton Riddell. Riddell served as Perpetual's Chairman until his death in 2018. Riddell owned 41.7% of Perpetual Energy Inc. and his daughter, Susan Riddell Rose, who is Perpetual's CEO, owned 4.8%. 2006 More than 33% of the province's "annual natural gas production was sour gas (1.6 trillion cubic feet)". 
2010s 2010s Global oil prices decreased in the 2010s. 2012 The OWA had only 14 classified orphan wells; in 2013 there were 74; in 2014 there were 162; in 2015 there were 705. During the last bust period in the cycle, from 2012 to 2017, the number of orphan wells increased from 74 in 2012 to 3,200. There were fewer than 300 wells designated as orphan wells on the Orphan Well Inventory. In 2012, Lexin had acquired most of its assets from Compton Petroleum Corp, a public company since 1996 and once one of the largest intermediate oil and gas producers in Calgary. In 2006, Compton's capital spending was earmarked at $526 million. During a failed attempt at selling the company in 2008, Compton's shares fell from $12.80 a share to $2.60 within months. In 2012, when Compton was acquired by the then New York-based MFC Industrial Ltd., Compton's capital spending was earmarked at just $14 million to $16 million. MFC Industrial at that time had a "record of spinning value out of troubled assets". Michael Smith, then Chairman of MFC, became director of the Mazeppa gas plant, a Lexin subsidiary. In 2014, there were 162 orphan wells. From 2014 to 2016, global oil prices experienced the "largest...declines in modern history". This led to the longest decline in oil prices since the 1980s. 2015 AER representatives reported to a meeting of Oklahoma's Interstate Oil and Gas Compact Commission on surface casing vent flow (SCVF) and gas migration (GM) that, by the end of 2015, a total of 600,000 wells had been drilled in Alberta, Saskatchewan, and Manitoba. January 28 The appendix, "Methane from Leaking Abandoned Wells: Health and Safety Issues", for the 2016 AER report was completed by Monique G. Dubé. There were 705 wells in the OWA's inventory. In 2015, there were about 800 oil and gas licensees in the province who contributed annually to the OWA levy. This included the 17 major energy companies as well as smaller "mom and pop companies". The average cost of reclamation/remediation (R/R) site services in 2015 was $180,000 per site, and costs ranged from $20,000 to $1 million. This work provides employment during downturns in the oil industry. When Redwater Energy declared bankruptcy in 2015 and said it could not clean up over a thousand orphan wells, it raised environmental concerns about what the CBC reported as a "tsunami of orphaned oil and gas wells", growing from 200 in 2014 to "3,127 wells that need to be plugged or abandoned, and a further 1,553 sites that have been abandoned but still need to be reclaimed". On January 31, 2019, in the case of Redwater Energy, the Supreme Court of Canada ruled 5–2, overturning "two lower court decisions that said bankruptcy law has paramountcy over provincial environmental responsibilities". The Supreme Court of Canada "allowed an appeal brought by the AER and the OWA from the decision of the Alberta Court of Appeal in Orphan Well Association v Grant Thornton Limited (Redwater)". The case "has been one of the most closely watched by the Canadian oil and gas industry in decades". Redwater lawyers said that it was not possible for the company to comply with both the federal and provincial legislation with regard to the Bankruptcy and Insolvency Act (BIA). The January 31 ruling means that "bankruptcy is not a licence to ignore environmental regulations, and there is no inherent conflict between federal bankruptcy laws and provincial environmental regulations." Two Alberta courts had earlier found in favour of the oil and gas company Redwater Energy's receiver against the AER. 
Redwater had filed for bankruptcy as a result of the "oil price collapse", and the AER wanted to recover funds from any sales Redwater made to "help pay to clean up after Redwater's inactive wells as required by provincial regulations". By going into bankruptcy, Redwater avoided paying for its asset retirement obligations (AROs). In 2015, nine different contractors working on projects at Mazeppa filed a lawsuit against the Lexin subsidiary. Smith disputed the contractors' claim. By 2015, MFC Industrial Ltd. had changed its name to MFC Bancorp Ltd. and shifted its focus to merchant banking, with its listing as a trading company in Vancouver, BC, and its commercial headquarters in Austria. As a result, by 2015, the southern Alberta oil and gas properties' ownership had become "complex and opaque." In October 2015, the new ownership group stopped annual surface lease compensation payments to hundreds of landowners in southern Alberta where the company had "Lexin wells". Landowners who wished to complain were directed to the company's office in Hong Kong. The landowners were encouraged to recover the unpaid compensation from the Surface Rights Board instead of the ownership group. According to financial records cited in the Calgary Herald, in late 2015, MFC Bancorp had sold a controlling interest in these southern Alberta assets to an "undisclosed buyer"—then listed as having zero net value. The Calgary Herald reported that, as of 2017, company statements and court records showed that the Austrian-headquartered MFC Bancorp, as Lexin company trustee, was still the "controlling registered shareholder." 2016 It is alleged in a court filing that in 2016 Susan Riddell Rose "engineered the sale of a subsidiary called Perpetual Energy Operating Corp. (PEOC), later renamed Sequoia", to Chinese investors. In May 2016, the Court of Queen's Bench of Alberta (ABQB), in 2016 ABQB 278, "confirmed that the federal Bankruptcy and Insolvency Act supersedes the provincial requirements that companies must clean up wells." "Bankrupt companies can avoid their liabilities and leave them as a public obligation." In November 2016, Mazeppa gas plant contractors won their lawsuit against two Lexin subsidiaries—LR Processing Partnership and LR Processing Ltd.—forcing them into receivership; the company appealed. AER received complaints in 2016 from a number of concerned employees about Lexin's cost-saving neglect of required maintenance, resulting in operational safety issues. When Lexin Resources was shut down, the reasons cited included concerns about the environment and safety issues. Following the AER's decision, the industry-led OWA took on the responsibility of ensuring that Lexin's 1,200 wells were "shuttered and safe" while awaiting the outcome, which could mean new buyers of wells or Lexin's compliance with the rules and its resumption of control of its assets. November 26 Monique G. Dubé presented AER with the 33-page preliminary analysis, "D79 Abandoned Well Methane Toxicity Assessment", on behalf of AER's Environmental Science Branch and Closure and Liability Branch, which included a map view of AER's abandoned wells. The AER assessment identified about 1,500 abandoned wells in urban centers. This included wells with various license statuses, including Abandoned, Reclamation Certified, and Reclamation Exempt. In rural areas, the assessment identified about 170,000 abandoned wells. 
The Court of Appeal of Alberta subsequently upheld the May 2016 ABQB decision, ruling that environmental obligations were less of a priority than the debts owed.[16]: 8 The Redwater case is an example of how the conflict between those two jurisdictions comes into play: the Oil and Gas Conservation Act (OGCA) is the provincial statute dealing with licensing, producing, and managing aspects of all oil and gas assets in Alberta, while the federal Bankruptcy and Insolvency Act (BIA) governs insolvency. Prior to 2017, the energy industry paid $15 million a year into the Orphan Fund Levy. It doubled to $30 million in 2017. The Calgary, Alberta-headquartered Well Integrity And Abandonment Society (WIA) was founded by Jay Williams, who also founded the Canadian Society for Gas Migration (CSGM). The CSGM focused on mitigating gas migration and surface casing vent flow issues not only in Alberta but throughout Canada as a whole. Both organizations are composed of oil and gas professionals, and their mandates focus on wellbore integrity, including "well abandonment, well integrity, gas migration, ventflow, well construction, cementing, well logging, and plugs". In 2017, the OWA listed "3,127 wells that need[ed] to be plugged or abandoned, and a further 1,553 sites that have been abandoned but still need[ed] to be reclaimed". Between 1955 and 2017, approximately 580,000 wells were drilled in Canada, according to a Natural Resources Canada (NRC) report on wellbore integrity in the oil and gas industry in Canada. Of these, 400,000 were in Alberta, and the NRC anticipated that hundreds of thousands more would be drilled. The OWA reported that, with a total expenditure of $30 million in 2017, only 232 inactive orphan wells were plugged or sealed. The remediation and reclamation on these well sites still needed to be completed before the wells could be certified as reclaimed. The New Democratic Party (NDP) provincial government began consulting with the energy industry in 2017 to "introduce new rules that might limit a multi-billion-dollar public liability for reclaiming about 80,000 inactive wells around Alberta." A C.D. Howe Institute report estimated that the social cost of orphan wells, including those incurred by financially insolvent firms, could be more than $8.6 billion. In 2017, the Government of Canada provided Alberta with a one-time grant of $30 million for "activities associated with decommissioning and reclamation". In that year, the provincial government used the federal funds to "cover the interest on a $235 million repayable loan", which the oil and gas industry would repay over the following nine years, to support the OWA's efforts. 2017 About 50% of the newly orphaned wells were the result of the 2017 transfer of 1,400 MFC/Lexin wells to the OWA. Nearly 30 companies regulated by AER were insolvent. 
September There were 3,000 designated orphan wells on the Orphan Well Inventory and 450,000 wells registered in Alberta, with about 155,000 "no longer producing but not yet fully remediated". A long-form Calgary Herald article provided details on the "complex and opaque" ownership of the company by the MFC group of companies, of which Lexin is a subsidiary. The rare February 2017 AER enforcement action, the largest suspension AER ever made, sparked questions over the problem of orphan wells. When AER suspended over 1,600 licenses of Lexin Resources Ltd., the limited liability company was forced into receivership, and the OWA assumed temporary custody of these wells. AER had begun to receive concerns submitted by company employees in early 2016, which it then forwarded to Occupational Health and Safety, including concerns about spillage on the plant floor of a substance that converts gas into sulphur at MFC Resource Partnership's Mazeppa Gas Plant. AER vice-president Mark Taylor said that their goal was to ensure public safety and to contain both environmental and financial risks. The company had said that it would not be able to maintain its sour gas wells as of mid-February. An April 5, 2017 article in the Financial Post reported that the AER was "suing insolvent" Lexin Resources Ltd., a "relatively small natural gas producer in southern Alberta", to "recover money it is allegedly owed." The AER claimed that, "It is not open for a licensee, when times get tough, to transfer the burdens associated with holding AER licenses to the AER and/or the OWA." Lexin Resources Ltd owned "1,514 well licenses — many in partnership with 51 other energy companies." Once abandoned, the wells would fall under the management of the industry-led OWA, doubling the number of wells in its inventory. Grant Thornton was appointed as Lexin's receiver. According to the Post, fifty-one companies, including Canadian Natural Resources Ltd., ExxonMobil Canada, and Husky Energy, which own some of Lexin Resources Ltd.'s assets, may share the responsibility for Lexin's AROs. The case was one of the most closely watched by the Canadian oil and gas industry and by the insolvency industry. On April 25, 2017, the Court of Appeal of Alberta (ABCA) dismissed the AER and OWA's appeal in a landmark decision, affirming the May 2016 decision of the Court of Queen's Bench of Alberta in favour of Redwater Energy Corporation's receiver, Grant Thornton Limited, in Redwater's bankruptcy proceedings. The ABCA found that Grant Thornton Limited was "entitled to disclaim Redwater's non-producing oil wells and sell its producing ones". The federal government provided a grant of $30 million for decommissioning and reclamation projects. The provincial government used the federal funds to advance a $235-million loan to the OWA, to be paid back by industry over a ten-year period. The $30 million covered the interest. When there is a decrease in the price of oil, the number of insolvent energy companies increases. When there is no licensed owner responsible for maintenance of the oil and gas wells, facilities and pipelines, these orphaned wells become the responsibility of the industry-led Orphan Well Association. In c. 2012, there were fewer than 300 orphan wells. By September 2017, there were 3,000. 
In 2017, there were 450,000 wells registered in Alberta, with about 155,000 "no longer producing but not yet fully remediated". In February 2017, in response to concerns about public safety and environmental and financial risks, the Alberta Energy Regulator cancelled the licenses for 1,600 wells controlled and owned by the parent MFC ownership group through a small limited liability subsidiary, Lexin Resources Ltd. In their 2017 annual report, the OWA anticipated an increase in orphan properties as nearly 30 companies regulated by AER were insolvent. The number of wells which had been plugged but not remediated was 1,062. About 50% of the newly orphaned wells were the result of the 2017 transfer of 1,400 MFC/Lexin wells to the OWA. 2018 As of 2018, 37.8% of all inactive wells (89,217 in total) had been inactive for up to 5 years; 29.8% had been inactive for 5 to 10 years; 16% from 10 to 15 years; 8.2% from 15 to 20 years; 3.9% from 20 to 25 years; and 4.5% had been inactive for over 25 years. According to a 2018 article in the Financial Post, "farmers, ranchers and their lawyers" with these wells on their property were concerned that an "additional 93,805 inactive wells could become orphaned given Alberta's economic outlook." Based on the OWA's 2018 data, the cost of well abandonment and reclamation of its then-current inventory of orphan wells was expected to be around $611 million. However, this estimate of $611 million did not include potential orphan wells; in this context, potential candidates include wells owned by financially insolvent and nearly insolvent firms. AER falsely reported to the public that the oil industry's "accumulated environmental liability" was about $58.65 billion. It was later revealed that the actual amount of unfunded cleanup liabilities would amount to $260-billion based on "internal AER calculations" in "a hypothetical worst-case scenario". Tailings pond clean-up represented the "largest but unknown portion of this AER estimate". In response, then AER CEO Jim Ellis apologized for failing to report "that cleaning up after the province's oil and gas industry would cost $260 billion" and announced his retirement as CEO. The cost of abandonment and remediation per well can be estimated from the OWA's annual report; those costs are estimated to be $61,000 and $20,000 per well, respectively. In 2018, the OWA listed 3,700 orphan wells in its inventory. As a result of the bankruptcy of Sequoia Resources, its liabilities, including 4,000 wells, pipelines and other facilities, became the OWA's responsibility. February Sequoia Resources Ltd, an oil and gas firm that had purchased "licences for 2,300 wells", notified AER that it was ceasing operations "imminently" and was unable to maintain "almost 200 facilities and nearly 700 pipeline segments". Sequoia Resources Ltd, which had defaulted on its "municipal tax payments", could not reclaim all of its properties. Sequoia had acquired 3,200 shallow, depleted gas wells along with productive wells. After a sharp decline in gas prices, Sequoia could not meet its obligations. AER was criticized for allowing similar acquisitions in which buyers do not have adequate financing. On February 15, 2018, the Supreme Court of Canada held a hearing to determine who gets paid first when an oil company becomes insolvent: creditors, or abandoned well cleanup. 
ATB Financial, the primary lender for the bankrupt junior oil and gas company Redwater Energy, and the CAPP argued in favour of keeping the status quo, whereby creditors have priority, citing the federal Bankruptcy and Insolvency Act. ATB said that other jurisdictions had regulatory policies that required oil companies to pay for the cost of future remediation before they were granted a license to drill. A CAPP lawyer said that if AER did require oil companies to pay for the cost of remediation before they drill, it "would effectively sterilize" a "vast amount [of] capital" that could otherwise be spent "in the public interest" by "exploring for, developing and producing energy." March Sequoia Resources Ltd then filed for bankruptcy protection "without decommissioning and cleaning up 4,000 wells, pipelines and other facilities", as required of all oil companies. August 7 PricewaterhouseCoopers, the trustee for the Chinese investors who purchased Sequoia Resources Ltd in 2016, launched a lawsuit against Perpetual Energy Inc. in an "unprecedented bid to void" the 2016 sale of Perpetual Energy Inc.'s subsidiary, Perpetual Energy Operating Corp. (PEOC), now known as Sequoia Resources Ltd, to Chinese investors. In August, Perpetual Energy Inc. had a market capitalization of about $40-million. An article in The Globe and Mail said that this appears to be the "first attempt by a bankruptcy trustee in Alberta to have a previous oil and gas transaction unwound." It could "introduce major new risks to the [oil and gas] industry's ability to buy and sell assets and could also deliver a severe blow to Perpetual." The lawsuit alleges that Perpetual and its CEO Susan Riddell Rose "knew the deal would sink the buyer". Perpetual says that "the claim is without merit". October 29 The Shanghai Sinooil Energy Corp and its subsidiary Shanghai Energy Corp made a statement of claim against 12 people, including former Shanghai Energy CEO Wentao Yang and COO Kevin Richmond, in a Calgary court. They were accused of falsifying documents and diverting money to Sequoia Resources Ltd. Jones wrote that Sequoia Resources Ltd, "with links to China's ruling Communist Party", "was set up to acquire aging gas assets with high abandonment liabilities, starting with a package of wells purchased from Perpetual Energy in 2016". According to its annual report, from its creation in 2002 until the fiscal year 2017, which ended in March 2018, the OWA "decommissioned approximately 1,400 orphan wells, with more than 800 of the sites reclaimed." 
In the fiscal year 2018, the OWA decommissioned 501 abandoned wells, with "382 wells downhole decommissioned (abandoned) and waiting on cut and cap of wellhead only". According to the OWA, by 2019, most of the orphan wells in their inventory are "considered low risk and therefore do not require immediate closure". 2019 There were "still more than 15,000 wells drilled before 1964 that have not been remediated." Of the 440,000 wells drilled in the province, approximately 22,000 were leaking. Trident Exploration's receivership in May 2019 resulted in 3,650 wells that no longer had a solvent owner, and the loss of 94 jobs; the company left more than 4,000 unreclaimed wells that had been actively producing the equivalent of 10,000 barrels of oil a day. As part of Alberta's Area-Based Closure program (ABC), which represented 70% of the province's remediation activity, the oil and gas industry spent approximately $340 million on clean-up. On January 31, 2019, the Supreme Court of Canada ruled 5–2 in Orphan Well Association v. Grant Thornton Ltd. (the Redwater case) that, in the case of a bankruptcy, a company's first priority is to fulfil its environmental obligations, not as a debt but as a duty to "citizens and communities." The ruling overturned "two lower court decisions that said bankruptcy law has paramountcy over provincial environmental responsibilities". As of 2019, there were about 3,406 orphan wells on the OWA inventory. 2020s 2020 The Parliamentary Budget Officer (PBO) report said that, as of 2020, there were 10,000 orphaned and abandoned wells in Alberta. Of these, about 7,400 were abandoned wells that had not yet been designated as orphans by the AER, but did not have a solvent owner. When they are added to the OWA's existing inventory, the total will triple its current number. May 1 The OWA's inventory listed "2963 orphan wells for abandonment, 297 orphan facilities for decommissioning, 3781 orphan pipeline segments for abandonment, 3116 orphan sites for reclamation, and 939 orphan reclaimed sites." There were about 97,000 inactive wells that were not properly closed and another 71,000 abandoned wells requiring clean-up, according to a University of Calgary School of Public Policy article. The federal government provided a grant of $1.2 billion through the COVID-19 Economic Response Plan announced in 2020. Using the federal grant, in 2020, the province funded the Alberta Site Rehabilitation Program (ASRP) with $1 million in provincial loans. The oil and gas industry spent almost the same amount on clean-up ($363 million) as it did in 2019, in spite of the federal grant. As of 2020, there were 97,920 wells that were "licensed as temporarily suspended" in Alberta. 2021 There were about 475,000 oil wells in Canada alone that will eventually need to be cleaned up and the well sites reclaimed. Of all the inactive wells in Alberta, 29% (27,532 wells) had been suspended for more than a decade without being either "abandoned" or reactivated, as of March 25, 2021. The January 2022 Parliamentary Budget Officer (PBO) report on the cost of cleaning Canada's orphan oil and gas wells estimated that it would cost $361 million just to clean traditional orphan wells nationally, which does not include the cost of oil sands operations.
PBO said that there was a gap of $178 million between the AER/OWA's security deposit of $237 million in October 2021 and the total cost of clean-up of $415 million. More than 50% of Alberta's wells are not producing oil or gas, yet they have not been cleaned up. The OWA spent $161.5 million in the fiscal year 2021/2022 on decommissioning wells, pipelines, and facilities. In 2021/22, 42% of this total went towards well decommissioning, 30% towards site reclamation, 13% to facilities decommissioning, and 5% to pipeline decommissioning. The annual orphan fund levy for the fiscal year 2021/2022 was set at $65 million. In a 2021 journal article in Environmental Science and Technology, McGill University scientists said that CAPP has no records of oil and gas wells prior to 1995, even though Canada's oil and gas industry began in the 1850s. McGill scientists working with various sources estimated that there were more than 370,000 abandoned oil and gas wells (AOG) in Canada. Agencies in the provinces and territories have not included over 60,000 of these AOG wells in their databases. There were 17 major companies in Canada's oil and gas markets. These included Suncor Energy, Calgary-headquartered Imperial Oil, Canadian Natural Resources, Cenovus Energy, Husky Energy (a subsidiary of Cenovus since January 2021), TC Energy, Chevron Canada (a subsidiary of San Ramon, California-based Chevron Corporation), Hong Kong-headquartered CNOOC International, the Spanish company Repsol's Canadian subsidiary Repsol Oil & Gas Canada, Shell Canada (a subsidiary of Anglo-Dutch Royal Dutch Shell), MEG Energy (a junior), Athabasca Oil Corporation (a Calgary-based Canadian company in partnership with PetroChina), ConocoPhillips Canada (a subsidiary of Texas-based ConocoPhillips), Syncrude Canada (a joint venture whose partners include Suncor Energy, Imperial Oil, Sinopec, and CNOOC Limited), Calgary-headquartered Enbridge (a multinational energy transportation company), and Pembina Pipeline. Since at least 2010, Athabasca Oil Sands Corporation, Canadian Natural Resources, Cenovus Energy, Dover Operating Corporation, Esso Imperial Oil, Husky Energy, Suncor Energy, Syncrude, and Total have funded research into environmental concerns, including groundwater. An Alberta Liabilities Disclosure Project report, based on AER data obtained under freedom-of-information legislation, estimated that Alberta had 300,000 unreclaimed wells. One of the report's authors, Regan Boychuk, said that, without including unreclaimed pipelines and pumping stations, the estimated cost of unreclaimed wells alone is approximately $40 billion to $70 billion. Between 2005 and 2021, the oil and gas sector's share of Canada's overall emissions rose from 32% to 38%. During the same period, Alberta observed a notable increase of 20.2 million tonnes in emissions, while all other provinces except Manitoba exhibited a decrease in emissions, according to The Tyee. Manitoba recorded an increase of 1 million tonnes during this timeframe. Under the policies related to methane emissions and coal-fired electricity introduced during the premiership of Rachel Notley, there were some declines in emissions in Alberta. 2022 The oil and gas sector provided 22% of the Government of Alberta's total estimated revenue for the fiscal year 2021/22. Since 2012, the Alberta government has received $66 billion from the sector.
AER reported that, as of July 2022, there were about 170,000 abandoned wells in the province that are the responsibility of the licensees for all abandonment and reclamation costs. This represents 37% of all the wells in Alberta. According to a 2022 article in BOE Report ("Canada's source for oil and gas news, activity and information" since its establishment in 2013), by 2032 there will be an estimated 258,000 wells designated to be abandoned in Alberta alone. By 2022, of all the wells in Alberta, only 35% were active, according to the January 2022 PBO report on the cost of cleaning Canada's orphan oil and gas wells. By 2025, the forecast is $1.1 billion in clean-up costs for orphan wells. According to AER, as of December 2022, of the 463,000 oil and gas wells in Alberta, 33.7% or 156,031 were active and 28% or 129,640 were reclaimed. There were 172,236 wells that were either abandoned or inactive: 19% or 88,433 were abandoned and 18.1% or 83,803 were inactive. With the Russian invasion of Ukraine, the period between April 2020, with its low prices of oil, and the highs of March 2022 represented the "largest 23-month increase in energy prices since the 1973 oil price" shock. The price of Brent crude oil averaged $116/bbl, representing an increase of 55% in March 2022 compared with December 2021. March 4 The price of Western Canadian Select (WCS) soared to over US$100 per barrel. 2023 According to the February 1, 2023 OWA Orphan Inventory, there were 3,114 orphan sites designated by AER and assigned to the OWA that required decommissioning. This number also includes orphaned pipelines and orphan facilities, including the Mazeppa Gas Plant pumping station. There are thousands of oil and gas wells in municipalities and on landowners' properties that require plugging or reclamation and have no solvent owner, but have not yet transitioned to orphan status. They represent environmental and public safety liabilities, but are not designated as orphaned by AER and are not being addressed. Liabilities and taxes for these wells become the responsibility of municipalities and landowners, depending on where the wells are located. According to Rural Municipalities of Alberta (RMA) president Paul McLauchlin, by 2023 the oil and gas industry owed $245 million in unpaid property taxes to towns and villages across Alberta. March 23 Alberta auditor general Doug Wylie published a report critical of the United Conservative Party's (UCP) neglect of orphan wells and other oil patch liabilities in the province. The report said that even though the number of inactive wells has increased every year since 2000 (except for the year that the federal government provided the $1.2 billion in funding), operators still have no timelines for site remediation. Two major issues have not been dealt with: so-called "legacy sites" and "inadequate security collected". Current AER liability management processes to mitigate risks "associated with closure of oil and gas infrastructure" are not "well-designed" and are not effective.
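The per-well cost figures cited above lend themselves to a rough estimator. The following is a minimal Python sketch, assuming the OWA's average abandonment ($61,000) and reclamation ($20,000) costs apply uniformly across the inventory; actual closure costs vary widely with well depth, age and location, so this is illustrative only.

```python
# Back-of-the-envelope liability estimate built from the per-well averages
# reported in the OWA annual reports discussed above. Purely illustrative.

ABANDONMENT_COST_PER_WELL = 61_000  # dollars, OWA average
RECLAMATION_COST_PER_WELL = 20_000  # dollars, OWA average

def estimated_liability(well_count: int) -> int:
    """Total estimated closure cost for a given number of orphan wells."""
    return well_count * (ABANDONMENT_COST_PER_WELL + RECLAMATION_COST_PER_WELL)

# The 3,114 orphan sites on the February 2023 OWA inventory:
print(f"${estimated_liability(3_114):,}")  # -> $252,234,000

# The PBO's October 2021 funding gap: total clean-up cost minus deposits held.
print(f"${415_000_000 - 237_000_000:,}")   # -> $178,000,000
```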
Notes External links Definitions of Orphan, Inactive, Abandoned, Remediation, and Reclamation Citations References On abandoned wells; this includes the definitions for Orphan, Inactive, Abandoned wells, Remediation, and Reclamation. Environmental Protection and Enhancement Act, RSA 2000, c E-12, s 2(i) [EPEA]. This is in the purposes section of the Act and is therefore directional in nature. SCC Redwater Decision; this includes a complete list of companies and their wells for decommissioning. Athabasca oil sands Abandoned buildings and structures Oil wells Petroleum technology Environmental issues in Alberta History of the petroleum industry in Alberta
Selected timeline related to orphan wells in Alberta
Chemistry,Engineering
7,566
23,484,571
https://en.wikipedia.org/wiki/Dimethisterone
Dimethisterone, formerly sold under the brand names Lutagan and Secrosteron among others, is a progestin medication which was used in birth control pills and in the treatment of gynecological disorders but is now no longer available. It was used both alone and in combination with an estrogen. It is taken by mouth. Side effects of dimethisterone are similar to those of other progestins. When used in combination with high doses of an estrogen, an increased risk of endometrial cancer can occur. Dimethisterone is a progestin, or a synthetic progestogen, and hence is an agonist of the progesterone receptor, the biological target of progestogens like progesterone. It has some antimineralocorticoid activity and no other important hormonal activity. Dimethisterone was first described and introduced for medical use in 1959. It started being used in birth control pills in 1965. However, due to its low potency and consequent inability to prevent the increased risk of endometrial cancer with estrogens, dimethisterone was soon discontinued for such purposes. Medical uses Dimethisterone was used alone in the treatment of gynecological disorders and in combination with ethinylestradiol in birth control pills. Side effects Side effects of dimethisterone are similar to those of other progestins. Pharmacology Pharmacodynamics Dimethisterone was derived from modification of ethisterone via introduction of methyl groups at the C6α and C21 positions. Relative to ethisterone, it is 12 times as potent orally as a progestogen in animals (Clauberg test), and, unlike ethisterone, is a pure progestogen with no androgenic (or estrogenic) activity in animals even at very high doses (although some weak antimineralocorticoid activity was observed at high doses in animals). However, in spite of its improved potency over ethisterone, it is a weak progestogen relative to most other progestins, in fact one of the weakest known. Chemistry Dimethisterone, also known as 6α,21-dimethylethisterone or as 6α,21-dimethyl-17α-ethynyltestosterone, as well as 17α-ethynyl-6α,21-dimethylandrost-4-en-17β-ol-3-one or as 6α,21-dimethyl-17β-hydroxy-17α-pregn-4-en-20-yn-3-one, is a synthetic androstane steroid and a derivative of testosterone. Synthesis Chemical syntheses of dimethisterone have been published. History Dimethisterone was developed by the British pharmaceutical company British Drug Houses (which subsequently merged with Merck KGaA) and was first reported in the medical literature in 1959, with introduction for medical use under the brand name Secrosteron following in the same year. It was introduced in the United States as an oral contraceptive in combination with high doses of ethinylestradiol under the brand name Oracon (25 mg dimethisterone, 100 μg ethinylestradiol) in 1965. Because it contained a weak progestogen in combination with a large dose of a potent estrogen, this preparation was eventually found to be associated with a substantially increased risk of endometrial cancer in women, and is now no longer marketed. The improved potency of dimethisterone due to 6α-methylation reportedly served as the basis for the synthesis of medroxyprogesterone acetate. Whereas hydroxyprogesterone acetate (the 6α-demethylated analogue of medroxyprogesterone acetate) is around twice as potent as ethisterone orally, medroxyprogesterone acetate shows 10 to 25 times the potency of ethisterone. Society and culture Generic names Dimethisterone is the generic name of the drug.
Brand names Dimethisterone was marketed alone under the brand names Lutagan and Secrosteron and in combination with ethinylestradiol under the brand names Oracon, Ovin, Secrodyl, Secrovin, and Tova. References Abandoned drugs Androstanes Antimineralocorticoids Drugs developed by Merck Enones Progestogens Alkyne derivatives
Dimethisterone
Chemistry
973
33,537
https://en.wikipedia.org/wiki/Wing
A wing is a type of fin that produces both lift and drag while moving through air. Wings are defined by two shape characteristics, an airfoil section and a planform. Wing efficiency is expressed as lift-to-drag ratio, which compares the benefit of lift with the air resistance of a given wing shape as it flies. Aerodynamics is the study of wing performance in air. Equivalent foils that move through water are found on hydrofoil power vessels and foiling sailboats that lift out of the water at speed, and on submarines that use diving planes to point the boat upwards or downwards while running submerged. Hydrodynamics is the study of foil performance in water. Etymology and usage The word "wing", from the Old Norse vængr, for many centuries referred mainly to the foremost limbs of birds (in addition to the architectural aisle). But in recent centuries the word's meaning has extended to include lift-producing appendages of insects, bats, pterosaurs, boomerangs, some sail boats and aircraft, or the airfoil on a race car. Aerodynamics The design and analysis of the wings of aircraft is one of the principal applications of the science of aerodynamics, which is a branch of fluid mechanics. The properties of the airflow around any moving object can be found by solving the Navier-Stokes equations of fluid dynamics. Except for simple geometries, these equations are difficult to solve. Simpler explanations can be given. For a wing to produce "lift", it must be oriented at a suitable angle of attack relative to the flow of air past the wing. When this occurs, the wing deflects the airflow downwards, "turning" the air as it passes the wing. Since the wing exerts a force on the air to change its direction, the air must exert a force on the wing, equal in size but opposite in direction. This force arises from different air pressures that exist on the upper and lower surfaces of the wing. Lower-than-ambient air pressure is generated on the top surface of the wing, with a higher-than-ambient pressure on the bottom of the wing. (See: airfoil) These air pressure differences can either be measured using a pressure-measuring device or calculated from the airspeed using physical principles, including Bernoulli's principle, which relates changes in air speed to changes in air pressure. The lower air pressure on the top of the wing generates a smaller downward force on the top of the wing than the upward force generated by the higher air pressure on the bottom of the wing. This gives an upward force on the wing. This force is called the lift generated by the wing. The different velocities of the air passing by the wing, the air pressure differences, the change in direction of the airflow, and the lift on the wing are different ways of describing how lift is produced, so it is possible to calculate lift from any one of the other three. For example, the lift can be calculated from the pressure differences, or from different velocities of the air above and below the wing, or from the total momentum change of the deflected air. Fluid dynamics offers other approaches to solving these problems, all of which produce the same answer if correctly calculated. Given a particular wing and its velocity through the air, debates over which mathematical approach is the most convenient to use can be mistaken by those not familiar with the study of aerodynamics as differences of opinion about the basic principles of flight.
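In practice, the lift just described is usually quantified with the standard lift equation, L = ½ ρ v² S C_L. The snippet below is a minimal Python sketch of that relation; the lift coefficient used is an assumed illustrative value, since real values depend on the airfoil section and angle of attack.

```python
# Minimal illustration of the lift equation L = 0.5 * rho * v^2 * S * C_L.
# The lift coefficient C_L below is an assumed illustrative value; in practice
# it depends on the airfoil section and the angle of attack.

def lift_force(rho: float, v: float, S: float, C_L: float) -> float:
    """Lift in newtons from air density (kg/m^3), airspeed (m/s),
    wing area (m^2) and the dimensionless lift coefficient."""
    return 0.5 * rho * v**2 * S * C_L

# Sea-level air, 60 m/s airspeed, 16 m^2 wing, assumed C_L of 0.5:
print(lift_force(rho=1.225, v=60.0, S=16.0, C_L=0.5))  # -> 17640.0 N
```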
Cross-sectional shape Wings with an asymmetrical cross-section are the norm in subsonic flight. Wings with a symmetrical cross-section can also generate lift by using a positive angle of attack to deflect air downward. Symmetrical airfoils have higher stalling speeds than cambered airfoils of the same wing area but are used in aerobatic aircraft as they provide the same flight characteristics whether the aircraft is upright or inverted. Another example comes from sailboats, where the sail is a thin sheet. For flight speeds near the speed of sound (transonic flight), specific asymmetrical airfoil sections are used to minimize the very pronounced increase in drag associated with airflow near the speed of sound. These airfoils, called supercritical airfoils, are flat on top and curved on the bottom. Design features Aircraft wings may feature some of the following: A rounded leading edge cross-section A sharp trailing edge cross-section Leading-edge devices such as slats, slots, or extensions Trailing-edge devices such as flaps or flaperons (combination of flaps and ailerons) Winglets to keep wingtip vortices from increasing drag and decreasing lift Dihedral, or a positive wing angle to the horizontal, increases spiral stability around the roll axis, whereas anhedral, or a negative wing angle to the horizontal, decreases spiral stability. Aircraft wings may have various devices, such as flaps or slats, that the pilot uses to modify the shape and surface area of the wing to change its operating characteristics in flight. Ailerons (usually near the wingtips) to roll the aircraft Spoilers on the upper surface to increase drag for descent and to reduce lift for more weight on wheels during braking Vortex generators to help prevent flow separation in transonic flow Wing fences to keep flow attached to the wing by stopping boundary layer separation from spreading in the roll direction Folding wings allow more aircraft storage in the confined space of the hangar deck of an aircraft carrier Variable-sweep wings or "swing wings" that allow outstretched wings during low-speed flight (e.g., take-off, landing and loitering) and swept-back wings for high-speed flight (including supersonic flight), such as in the F-111 Aardvark, the F-14 Tomcat, the Panavia Tornado, the MiG-23, the MiG-27, the Tu-160 and the B-1B Lancer. Applications Besides fixed-wing aircraft, applications for wing shapes include: Hang gliders, which use wings ranging from fully flexible (paragliders, gliding parachutes), flexible (framed sail wings), to rigid Kites, which use a variety of lifting surfaces Flying model airplanes Helicopters, which use a rotating wing with a variable pitch angle to provide directional forces Propellers, whose blades generate lift for propulsion The NASA Space Shuttle, which used its wings only to glide during its descent to a runway; these types of aircraft are called spaceplanes Some racing cars, especially Formula One cars, which use upside-down wings (or airfoils) to provide greater traction at high speeds Sailboats, which use sails as vertical wings with variable fullness and direction to move across water Flexible wings In 1948, Francis Rogallo invented the fully limp flexible wing. Domina Jalbert invented flexible un-sparred ram-air airfoiled thick wings. In nature Wings have evolved multiple times: in pterosaurs (see Pterosaur), insects, birds (see Bird wing), mammals (see Bats), fish, reptiles and plants. Wings of pterosaurs, birds, bats, and reptiles all evolved from existing limbs; insect wings, however, evolved as a completely separate structure.
Wings facilitated increased locomotion, dispersal, and diversification. Various species of penguins and other flighted or flightless water birds such as auks, cormorants, guillemots, shearwaters, eider and scoter ducks and diving petrels are efficient underwater swimmers, and use their wings to propel themselves through water. See also Flight Natural world: Bird flight Flight feather Flying and gliding animals Insect flight List of soaring birds Samara (winged seeds of trees) Aviation: Aircraft Blade solidity FanWing and Flettner airplane (experimental wing types) Flight dynamics (fixed-wing aircraft) Kite types Ornithopter – Flapping-wing aircraft (research prototypes, simple toys and models) Otto Lilienthal Wing configuration Wingsuit Sailing: Sails Forces on sails Wingsail References External links How Wings Work - Holger Babinsky Physics Education 2003 How Airplanes Fly: A Physical Description of Lift Demystifying the Science of Flight – Audio segment on NPR's Talk of the Nation Science Friday NASA's explanations and simulations Flight of the StyroHawk wing See How It Flies Aerodynamics Aerospace engineering Aircraft wing components Bird anatomy Bird flight Insect anatomy Mammal anatomy
Wing
Chemistry,Engineering
1,725
9,478,630
https://en.wikipedia.org/wiki/Integral%20element
In commutative algebra, an element b of a commutative ring B is said to be integral over a subring A of B if b is a root of some monic polynomial over A. If A, B are fields, then the notions of "integral over" and of an "integral extension" are precisely "algebraic over" and "algebraic extensions" in field theory (since the root of any polynomial is the root of a monic polynomial). The case of greatest interest in number theory is that of complex numbers integral over Z (e.g., √2 or 1 + i); in this context, the integral elements are usually called algebraic integers. The algebraic integers in a finite extension field k of the rationals Q form a subring of k, called the ring of integers of k, a central object of study in algebraic number theory. In this article, the term ring will be understood to mean commutative ring with a multiplicative identity. Definition Let B be a ring and let A be a subring of B. An element b of B is said to be integral over A if for some n ≥ 1 there exist a_0, a_1, ..., a_{n−1} in A such that b^n + a_{n−1}b^{n−1} + ... + a_1b + a_0 = 0. The set of elements of B that are integral over A is called the integral closure of A in B. The integral closure of any subring A in B is, itself, a subring of B and contains A. If every element of B is integral over A, then we say that B is integral over A, or equivalently B is an integral extension of A.
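As a worked instance of this definition (an illustrative check added here; the examples are standard facts, not drawn from the surrounding text):

```latex
% The golden ratio \varphi = (1+\sqrt{5})/2 is integral over \mathbb{Z}:
% it is a root of the monic polynomial
\[
  x^2 - x - 1 .
\]
% By contrast, 1/2 is not integral over \mathbb{Z}. A relation
\[
  \left(\tfrac12\right)^n + a_{n-1}\left(\tfrac12\right)^{n-1}
    + \cdots + a_0 = 0 , \qquad a_i \in \mathbb{Z},
\]
% multiplied through by 2^n would give
\[
  1 + 2a_{n-1} + 4a_{n-2} + \cdots + 2^n a_0 = 0 ,
\]
% which is impossible: the left-hand side is odd. The same argument (the
% rational root theorem) shows that the integral closure of \mathbb{Z} in
% \mathbb{Q} is \mathbb{Z} itself, the first example in the section below.
```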
Examples Integral closure in algebraic number theory There are many examples of integral closure which can be found in algebraic number theory, since it is fundamental for defining the ring of integers for an algebraic field extension. Integral closure of integers in rationals Integers are the only elements of Q that are integral over Z. In other words, Z is the integral closure of Z in Q. Quadratic extensions The Gaussian integers are the complex numbers of the form a + b i with a, b ∈ Z, and are integral over Z. They are then the integral closure of Z in Q(i); this ring is typically denoted Z[i]. The integral closure of Z in Q(√5) is the ring Z[(1 + √5)/2]. This example and the previous one are examples of quadratic integers. The integral closure of a quadratic extension can be found by constructing the minimal polynomial of an arbitrary element and finding a number-theoretic criterion for the polynomial to have integral coefficients. This analysis can be found in the quadratic extensions article. Roots of unity Let ζ be a root of unity. Then the integral closure of Z in the cyclotomic field Q(ζ) is Z[ζ]. This can be found by using the minimal polynomial and using Eisenstein's criterion. Ring of algebraic integers The integral closure of Z in the field of complex numbers C, or in the algebraic closure of Q, is called the ring of algebraic integers. Other The roots of unity, nilpotent elements and idempotent elements in any ring are integral over Z. Integral closure in algebraic geometry In geometry, integral closure is closely related to normalization and normal schemes. It is the first step in resolution of singularities since it gives a process for resolving singularities of codimension 1. For example, the integral closure of C[x, y, z]/(xy) is the ring C[x, z] × C[y, z]: geometrically, the first ring corresponds to the xz-plane unioned with the yz-plane, two components which have a codimension 1 singularity along the z-axis where they intersect. Let a finite group G act on a ring A. Then A is integral over A^G, the set of elements fixed by G; see Ring of invariants. Let R be a ring and u a unit in a ring containing R. Then u^−1 is integral over R if and only if u^−1 ∈ R[u]. The integral closure of the homogeneous coordinate ring of a normal projective variety X is the ring of sections ⊕_{n≥0} H^0(X, O_X(n)). Integrality in algebra If k̄ is an algebraic closure of a field k, then k̄[x_1, ..., x_n] is integral over k[x_1, ..., x_n]. The integral closure of C[[x]] in a finite extension of C((x)) is of the form C[[x^{1/n}]] (cf. Puiseux series). Equivalent definitions Let B be a ring, and let A be a subring of B. Given an element b in B, the following conditions are equivalent: (i) b is integral over A; (ii) the subring A[b] of B generated by A and b is a finitely generated A-module; (iii) there exists a subring C of B containing A[b] and which is a finitely generated A-module; (iv) there exists a faithful A[b]-module M such that M is finitely generated as an A-module. The usual proof of this uses the following variant of the Cayley–Hamilton theorem on determinants: Theorem Let u be an endomorphism of an A-module M generated by n elements and I an ideal of A such that u(M) ⊆ IM. Then there is a relation u^n + a_1u^{n−1} + ... + a_{n−1}u + a_n = 0 with a_i ∈ I^i. This theorem (with I = A and u multiplication by b) gives (iv) ⇒ (i) and the rest is easy. Coincidentally, Nakayama's lemma is also an immediate consequence of this theorem. Elementary properties Integral closure forms a ring It follows from the above four equivalent statements that the set of elements of B that are integral over A forms a subring of B containing A. (Proof: If x, y are elements of B that are integral over A, then x + y, x − y and xy are integral over A since they stabilize A[x][y], which is a finitely generated module over A and is annihilated only by zero.) This ring is called the integral closure of A in B. Transitivity of integrality Another consequence of the above equivalence is that "integrality" is transitive, in the following sense. Let C be a ring containing B and c ∈ C. If c is integral over B and B is integral over A, then c is integral over A. In particular, if C is itself integral over B and B is integral over A, then C is also integral over A. Integrally closed in fraction field If A happens to be the integral closure of A in B, then A is said to be integrally closed in B. If B is the total ring of fractions of A (e.g., the field of fractions when A is an integral domain), then one sometimes drops the qualification "in B" and simply says "integral closure of A" and "A is integrally closed." For example, the ring of integers O_K is integrally closed in the field K. Transitivity of integral closure with integrally closed domains Let A be an integral domain with the field of fractions K and A' the integral closure of A in an algebraic field extension L of K. Then the field of fractions of A' is L. In particular, A' is an integrally closed domain. Transitivity in algebraic number theory This situation is applicable in algebraic number theory when relating the ring of integers and a field extension. In particular, given a field extension L/K, the integral closure of O_K in L is the ring of integers O_L. Remarks Note that transitivity of integrality above implies that if B is integral over A, then B is a union (equivalently an inductive limit) of subrings that are finitely generated A-modules. If A is noetherian, transitivity of integrality can be weakened to the statement: there exists a finitely generated A-submodule of B that contains A[b]. Relation with finiteness conditions Finally, the assumption that A be a subring of B can be modified a bit. If f: A → B is a ring homomorphism, then one says f is integral if B is integral over f(A). In the same way one says f is finite (B a finitely generated A-module) or of finite type (B a finitely generated A-algebra). In this viewpoint, one has that f is finite if and only if f is integral and of finite type.
Or more explicitly, B is a finitely generated A-module if and only if B is generated as an A-algebra by a finite number of elements integral over A. Integral extensions Cohen–Seidenberg theorems An integral extension A ⊆ B has the going-up property, the lying over property, and the incomparability property (Cohen–Seidenberg theorems). Explicitly, given a chain of prime ideals p_1 ⊂ ... ⊂ p_n in A there exists a chain q_1 ⊂ ... ⊂ q_n in B with q_i ∩ A = p_i (going-up and lying over), and two distinct prime ideals with inclusion relation cannot contract to the same prime ideal (incomparability). In particular, the Krull dimensions of A and B are the same. Furthermore, if A is an integrally closed domain, then the going-down holds (see below). In general, the going-up implies the lying-over. Thus, in the below, we simply say the "going-up" to mean "going-up" and "lying-over". When A, B are domains such that B is integral over A, A is a field if and only if B is a field. As a corollary, one has: given a prime ideal q of B, q is a maximal ideal of B if and only if q ∩ A is a maximal ideal of A. Another corollary: if L/K is an algebraic extension, then any subring of L containing K is a field. Applications Let B be a ring that is integral over a subring A and k an algebraically closed field. If f: A → k is a homomorphism, then f extends to a homomorphism B → k. This follows from the going-up. Geometric interpretation of going-up Let f: A → B be an integral extension of rings. Then the induced map f*: Spec B → Spec A is a closed map; in fact, f*(V(I)) = V(f^−1(I)) for any ideal I, and f* is surjective if f is injective. This is a geometric interpretation of the going-up. Geometric interpretation of integral extensions Let B be a ring and A a subring that is a noetherian integrally closed domain (i.e., Spec A is a normal scheme). If B is integral over A, then Spec B → Spec A is submersive; i.e., the topology of Spec A is the quotient topology. The proof uses the notion of constructible sets. (See also: Torsor (algebraic geometry).) Integrality, base-change, universally-closed, and geometry If B is integral over A, then B ⊗_A R is integral over R for any A-algebra R. In particular, Spec(B ⊗_A R) → Spec R is closed; i.e., the integral extension induces a "universally closed" map. This leads to a geometric characterization of integral extension. Namely, let B be a ring with only finitely many minimal prime ideals (e.g., integral domain or noetherian ring). Then B is integral over a (subring) A if and only if Spec(B ⊗_A R) → Spec R is closed for any A-algebra R. In particular, every proper map is universally closed. Galois actions on integral extensions of integrally closed domains Proposition. Let A be an integrally closed domain with the field of fractions K, L a finite normal extension of K, B the integral closure of A in L. Then the group G = Gal(L/K) acts transitively on each fiber of Spec B → Spec A. Proof. Suppose p_2 ≠ σ(p_1) for any σ in G. Then, by prime avoidance, there is an element x in p_2 such that σ(x) ∉ p_1 for any σ. G fixes the element y = ∏_σ σ(x) and thus y is purely inseparable over K. Then some power y^e belongs to K; since A is integrally closed we have y^e ∈ A. Thus, we found y^e is in p_2 ∩ A but not in p_1 ∩ A; i.e., p_1 ∩ A ≠ p_2 ∩ A. Application to algebraic number theory The Galois group Gal(L/K) then acts on all of the prime ideals of O_L lying over a fixed prime ideal p of O_K. That is, if p is a prime ideal of O_K, then there is a Galois action on the set of prime ideals of O_L lying over p. This is called the Splitting of prime ideals in Galois extensions. Remarks The same idea in the proof shows that if L/K is a purely inseparable extension (need not be normal), then Spec B → Spec A is bijective. Let A, K, etc. as before but assume L is only a finite field extension of K. Then (i) Spec B → Spec A has finite fibers. (ii) the going-down holds between A and B: given p_1 ⊂ ... ⊂ p_n and a prime q_n of B lying over p_n, there exists a chain q_1 ⊂ ... ⊂ q_n with q_i contracting to p_i.
Indeed, in both statements, by enlarging L, we can assume L is a normal extension. Then (i) is immediate. As for (ii), by the going-up, we can find a chain q_1' ⊂ ... ⊂ q_n' that contracts to p_1 ⊂ ... ⊂ p_n. By transitivity, there is σ such that σ(q_n') = q_n, and then σ(q_1') ⊂ ... ⊂ σ(q_n') = q_n are the desired chain. Integral closure Let A ⊂ B be rings and A' the integral closure of A in B. (See above for the definition.) Integral closures behave nicely under various constructions. Specifically, for a multiplicatively closed subset S of A, the localization S^−1A' is the integral closure of S^−1A in S^−1B, and A'[t] is the integral closure of A[t] in B[t]. If A_i are subrings of rings B_i (i = 1, ..., n), then the integral closure of ∏ A_i in ∏ B_i is ∏ A_i', where A_i' are the integral closures of A_i in B_i. The integral closure of a local ring A in, say, B, need not be local. (If this is the case, the ring is called unibranch.) This is the case for example when A is Henselian and B is a field extension of the field of fractions of A. If A is a subring of a field K, then the integral closure of A in K is the intersection of all valuation rings of K containing A. Let A be a Z^n-graded subring of a Z^n-graded ring B. Then the integral closure of A in B is a Z^n-graded subring of B. There is also a concept of the integral closure of an ideal. The integral closure of an ideal I ⊂ A, usually denoted by Ī, is the set of all elements r ∈ A such that there exists a monic polynomial x^n + c_1x^{n−1} + ... + c_{n−1}x + c_n with c_i ∈ I^i with r as a root. The radical of an ideal is integrally closed. For noetherian rings, there are alternate definitions as well: r ∈ Ī if there exists a c ∈ A not contained in any minimal prime, such that cr^n ∈ I^n for all n ≥ 1; r ∈ Ī if, in the normalized blow-up of I, the pull-back of r is contained in the inverse image of I. The blow-up of an ideal is an operation of schemes which replaces the given ideal with a principal ideal. The normalization of a scheme is simply the scheme corresponding to the integral closure of all of its rings. The notion of integral closure of an ideal is used in some proofs of the going-down theorem. Conductor Let B be a ring and A a subring of B such that B is integral over A. Then the annihilator of the A-module B/A is called the conductor of A in B. Because the notion has its origin in algebraic number theory, the conductor is denoted by 𝔣 = 𝔣(B/A). Explicitly, 𝔣 consists of elements a in A such that aB ⊆ A. (cf. idealizer in abstract algebra.) It is the largest ideal of A that is also an ideal of B. If S is a multiplicatively closed subset of A, then S^−1𝔣(B/A) ⊆ 𝔣(S^−1B/S^−1A). If B is a subring of the total ring of fractions of A, then we may identify 𝔣(B/A) = Hom_A(B, A). Example: Let k be a field and let A = k[t^2, t^3] ⊂ B = k[t] (i.e., A is the coordinate ring of the affine curve y^2 = x^3). B is the integral closure of A in k(t). The conductor of A in B is the ideal (t^2, t^3)A. More generally, the conductor of A = k[[t^a, t^b]], a, b relatively prime, is (t^c, t^{c+1}, ...)A with c = (a − 1)(b − 1). Suppose B is the integral closure of an integral domain A in the field of fractions of A such that the A-module B/A is finitely generated. Then the conductor 𝔣 of A is an ideal defining the support of B/A; thus, A coincides with B in the complement of V(𝔣) in Spec A. In particular, the set Spec A ∖ V(𝔣), the complement of V(𝔣), is an open set. Finiteness of integral closure An important but difficult question is on the finiteness of the integral closure of a finitely generated algebra. There are several known results. The integral closure of a Dedekind domain in a finite extension of the field of fractions is a Dedekind domain; in particular, a noetherian ring. This is a consequence of the Krull–Akizuki theorem.
In general, the integral closure of a noetherian domain of dimension at most 2 is noetherian; Nagata gave an example of a dimension 3 noetherian domain whose integral closure is not noetherian. A nicer statement is this: the integral closure of a noetherian domain is a Krull domain (Mori–Nagata theorem). Nagata also gave an example of a dimension 1 noetherian local domain such that the integral closure is not finite over that domain. Let A be a noetherian integrally closed domain with field of fractions K. If L/K is a finite separable extension, then the integral closure of A in L is a finitely generated A-module. This is easy and standard (uses the fact that the trace defines a non-degenerate bilinear form). Let A be a finitely generated algebra over a field k that is an integral domain with field of fractions K. If L is a finite extension of K, then the integral closure A' of A in L is a finitely generated A-module and is also a finitely generated k-algebra. The result is due to Noether and can be shown using the Noether normalization lemma as follows. It is clear that it is enough to show the assertion when L/K is either separable or purely inseparable. The separable case is noted above, so assume L/K is purely inseparable. By the normalization lemma, A is integral over the polynomial ring S = k[x_1, ..., x_d]. Since L/K is a finite purely inseparable extension, there is a power q of a prime number such that every element of L is a q-th root of an element in K. Let k' be a finite extension of k containing all q-th roots of coefficients of finitely many rational functions that generate L. Then we have: L ⊆ k'(x_1^{1/q}, ..., x_d^{1/q}). The ring on the right is the field of fractions of k'[x_1^{1/q}, ..., x_d^{1/q}], which is the integral closure of S there; thus, it contains A'. Hence, A' is finite over S; a fortiori, over A. The result remains true if we replace k by Z. The integral closure of a complete local noetherian domain A in a finite extension of the field of fractions of A is finite over A. More precisely, for a local noetherian ring A, we have the following chains of implications: (i) A complete ⇒ A is a Nagata ring; (ii) A is a Nagata domain ⇒ A analytically unramified ⇒ the integral closure of the completion Â is finite over Â ⇒ the integral closure of A is finite over A. Noether's normalization lemma Noether's normalisation lemma is a theorem in commutative algebra. Given a field K and a finitely generated K-algebra A, the theorem says it is possible to find elements y1, y2, ..., ym in A that are algebraically independent over K such that A is finite (and hence integral) over B = K[y1,..., ym]. Thus the extension K ⊂ A can be written as a composite K ⊂ B ⊂ A where K ⊂ B is a purely transcendental extension and B ⊂ A is finite. Integral morphisms In algebraic geometry, a morphism of schemes is integral if it is affine and if for some (equivalently, every) affine open cover of Y, every induced map is of the form Spec(A) → Spec(B), where A is an integral B-algebra. The class of integral morphisms is more general than the class of finite morphisms because there are integral extensions that are not finite, such as, in many cases, the algebraic closure of a field over the field. Absolute integral closure Let A be an integral domain and L (some) algebraic closure of the field of fractions of A. Then the integral closure of A in L is called the absolute integral closure of A. It is unique up to a non-canonical isomorphism. The ring of all algebraic integers is an example (and thus the absolute integral closure is typically not noetherian).
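As a concrete instance of the normalization lemma stated above (an illustrative sketch added here, not an example from the original text):

```latex
% Let k be a field and A = k[x,y]/(y^2 - x^3), the coordinate ring of the
% cuspidal cubic. Take B = k[x] inside A. The class of y satisfies the
% monic relation
\[
  y^2 - x^3 = 0 \qquad (\text{monic in } y \text{ over } k[x]),
\]
% so A = B + By is a finitely generated B-module, while x is algebraically
% independent over k. Hence A is finite, and in particular integral, over
% the polynomial ring B = k[x], exactly as the lemma guarantees (here m = 1).
```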
See also Normal scheme Noether normalization lemma Algebraic integer Splitting of prime ideals in Galois extensions Torsor (algebraic geometry) Notes References H. Matsumura, Commutative Ring Theory. Translated from the Japanese by M. Reid. Second edition. Cambridge Studies in Advanced Mathematics, 8. M. Reid, Undergraduate Commutative Algebra, London Mathematical Society, 29, Cambridge University Press, 1995. Further reading Irena Swanson, Integral closures of ideals and rings Do DG-algebras have any sensible notion of integral closure? Commutative algebra Ring theory Algebraic structures
Integral element
Mathematics
4,168
14,815,779
https://en.wikipedia.org/wiki/HIST1H2AB
Histone H2A type 1-B/E is a protein that in humans is encoded by the HIST1H2AB gene. Histones are basic nuclear proteins that are responsible for the nucleosome structure of the chromosomal fiber in eukaryotes. This structure consists of approximately 146 bp of DNA wrapped around a histone octamer composed of pairs of each of the four core histones (H2A, H2B, H3, and H4). The chromatin fiber is further compacted through the interaction of a linker histone, H1, with the DNA between the nucleosomes to form higher order chromatin structures. This gene is intronless and encodes a member of the histone H2A family. Transcripts from this gene lack polyA tails; instead, they contain a palindromic termination element. This gene is found in the large histone gene cluster on chromosome 6p22-p21.3. References Further reading
HIST1H2AB
Chemistry
215
27,119,811
https://en.wikipedia.org/wiki/Xanthosine
Xanthosine is a nucleoside derived from xanthine and ribose. It is the biosynthetic precursor to 7-methylxanthosine by the action of 7-methylxanthosine synthase. 7-Methylxanthosine in turn is the precursor to theobromine (active alkaloid in chocolate), which in turn is the precursor to caffeine, the alkaloid in coffee and tea. See also Xanthosine monophosphate Xanthosine diphosphate Xanthosine triphosphate References Nucleosides Xanthines
Xanthosine
Chemistry
129
47,772,789
https://en.wikipedia.org/wiki/Biotin%20hydrazide
Biotin hydrazide is a biotinyl derivative that can be used as a probe for the determination of protein carbonylation. It readily forms Schiff bases with carbonyl groups. References Reagents for biochemistry Hydrazides
Biotin hydrazide
Chemistry,Biology
49
24,202,598
https://en.wikipedia.org/wiki/C11H14O4
The molecular formula C11H14O4 (molar mass: 210.22 g/mol, exact mass: 210.0892 u) may refer to: Dimethyl carbate Sinapyl alcohol Molecular formulas
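The quoted masses can be reproduced from standard atomic data; the following minimal Python sketch uses rounded standard atomic weights and monoisotopic masses (the slight difference from the quoted molar mass is rounding).

```python
# Recompute the quoted masses of C11H14O4 from standard atomic data.

formula = {"C": 11, "H": 14, "O": 4}

average_mass = {"C": 12.011, "H": 1.008, "O": 15.999}      # g/mol
monoisotopic = {"C": 12.0, "H": 1.007825, "O": 15.994915}  # u (12C, 1H, 16O)

molar = sum(n * average_mass[el] for el, n in formula.items())
exact = sum(n * monoisotopic[el] for el, n in formula.items())

print(f"molar mass ~ {molar:.2f} g/mol")  # -> 210.23 (article rounds to 210.22)
print(f"exact mass ~ {exact:.4f} u")      # -> 210.0892, matching the article
```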
C11H14O4
Physics,Chemistry
63
53,290,169
https://en.wikipedia.org/wiki/CFexpress
CFexpress is a standard for removable media cards proposed by the CompactFlash Association (CFA). The standard uses the NVM Express protocol over a PCIe 3.0 interface with 1 to 4 lanes, where each lane can provide 1 GB/s of data. There are multiple form factors that feature different PCIe lane counts. One of the goals is to unify the ecosystem of removable storage by being compatible with standards already widely adopted, such as PCIe and NVMe. There is already a wide range of controllers, software and devices that use these standards, accelerating adoption. History On 7 September 2016, the CompactFlash Association announced CFexpress. The specification would be based on the PCI Express interface and NVM Express protocol. On 18 April 2017 the CompactFlash Association published the CFexpress 1.0 specification. Version 1.0 uses the XQD form factor (38.5 mm × 29.8 mm × 3.8 mm) with two PCIe 3.0 lanes for speeds up to 2 GB/s. NVMe 1.2 is used for low-latency, low-overhead and highly parallel access. On 13 June 2017, Delkin introduced the first CFexpress cards based on the CFexpress 1.0 specification. In February 2018, they released benchmarks, with sample units introduced in the second quarter of 2018, and production scheduled for the third quarter. The CFexpress 2.0 standard was announced on 28 February 2019. It features two new card formats ("type A", one lane, more compact, and "type C", four lanes, bigger and thicker, up to 4 GB/s), with the existing cards designated as "type B". The NVM Express protocol was upgraded to 1.3. CFexpress 2.0 type B Gen4 cards are effectively already available in the form of the Seagate Storage Expansion Card for the Xbox Series X and Xbox Series S, though these are mechanically incompatible with most slots because of their lengthened housing. The CFexpress 4.0 standard was announced on 28 August 2023. CFexpress 4.0 supports up to four PCIe 4.0 lanes, at 2 GB/s per lane. The NVM Express protocol was upgraded to 1.4c. Comparison Form factors CFexpress supports the following card sizes; for each, the oldest CFexpress version that includes the form factor is given. Type A (since CFexpress 2.0): 20 mm × 28 mm × 2.8 mm, one PCIe lane. Type B (since CFexpress 1.0): 38.5 mm × 29.8 mm × 3.8 mm, two PCIe lanes. Type C (since CFexpress 2.0): 54 mm × 74 mm × 4.8 mm, four PCIe lanes. The larger form factors have more electrical contacts, allowing more PCIe lanes to be used. Form factor B has the same size and contacts as an XQD card, allowing a single card slot to accept both XQD and CFexpress-B cards. Compatible devices A variety of memory card readers and memory cards have been released. Cards Delkin On 13 June 2017, Delkin introduced the first CFexpress cards, which were based on the CFexpress 1.0 specification. The cards have a XQD form factor and use two PCIe 3.0 lanes. They come in 32 GB, 64 GB, 128 GB and 256 GB capacities. More details on Delkin's CFexpress cards were revealed in February 2018. The cards should be able to be read from and written to at up to 1.6 GB/s and up to 1.0 GB/s respectively, as benchmarked with CrystalDiskMark 5.2.1. Sample units were to be available in Q2 2018 and production was scheduled for Q3 2018. Delkin's 512 GB Power CFexpress Type B card was reviewed along with several others in the early fall of 2020. Camnostic.com rated it the recommended buy due to generally doing well in its tests, but also because it was the cheaper of the alternatives. The article mentioned a firmware upgrade to address compatibility with the Canon EOS R5 camera in late September 2020. ProGrade Digital ProGrade Digital announced it would begin production and sale of CFexpress cards in 2018 with the Type-B form factor (the same as XQD).
The 1 TB CFexpress card that ProGrade Digital showed at the Spring NAB show in 2018 demonstrated 1,400 Mbyte/sec read speed and over 700 Mbyte/sec burst write speed. This demonstration was performed using a Thunderbolt 3 CFexpress/XQD reader on a MacBook Pro computer. Apacer On 11 December 2018, Apacer announced its first CFexpress card, the PV130-CFX. Wise Advanced On 7 April 2019, Wise Advanced announced it was producing CFexpress cards with 512 GB, 256 GB, and 128 GB capacities, as well as a CFexpress Card Reader, all using CFexpress Type B. Readers CFexpress Type A Sony MRW-G2 CFexpress Type B BLACKJET TX-1CXQ Sony MRW-G1. Compatible with both XQD and CFexpress Type B cards. Delkin CF Express Reader (DDREADER-54) SanDisk Professional PRO-READER CFexpress Angelbird CFexpress Card Reader MK2 | Type B Lexar Professional CFexpress Type B USB 3.1 Reader Lexar Professional CFexpress Type B USB 3.2 Gen 2×2 Reader Parts On 2 October 2017, Rego Electronics announced CFexpress host connectors and card kits, parts that manufacturers can use for their CFexpress devices and cards. Client devices As of October 2017, there were no CFexpress client devices released. However, in late October 2017 a Lexar employee stated to Nikon Rumors: "CFExpress is essentially the next revision of XQD, and there should be full backward compatibility with XQD, and that getting D4/D5/500/D850’s to work with CFE cards should be a simple software patch." On 23 August 2018, Nikon announced their new mirrorless cameras, the Z6 and Z7. At launch they only supported XQD cards, but a later firmware update enabled support for CFexpress. On 13 February 2019, Nikon further confirmed that CFexpress support via a firmware update would also be coming to the D5, D850 and D500. On 16 December 2019, Nikon released firmware version 2.20 for the Z6 and Z7, adding support for CFexpress. In December 2020, Nikon released firmware version 1.20 for the Nikon D850 DSLR that added support for CFexpress Type B in the camera's XQD slot. On 28 August 2018, Phase One announced the XF IQ4 camera system (three bodies). Like the Nikon cameras, future support for CFexpress was added in a later firmware update. On 24 October 2019, Canon announced the development of the EOS-1D X Mark III with dual CFexpress slots. The camera was officially released on 6 January 2020, with availability set for February. On 12 February 2020, Nikon announced the Nikon D6, which uses dual CFexpress slots. On 20 April 2020, Canon announced that the EOS R5, a hybrid mirrorless camera, will support CFexpress and SD UHS-II. On 28 July 2020, Sony announced the α7S III, a mirrorless camera that will support dual CFexpress Type A and SD cards. On 26 January 2021, Sony announced the α1, a mirrorless camera that will support dual CFexpress Type A and SD cards. On 23 February 2021, Sony announced the FX3, a mirrorless camera that will support dual CFexpress Type A and SD cards. On 14 September 2021, Canon announced the EOS R3, a mirrorless camera which has one CFexpress Type B slot and one SD format slot. On 10 November 2020, Microsoft launched the Xbox Series X and Series S with a slot for semi-proprietary Expansion Cards based on a CFexpress Type B form factor. These cards only support PCIe Gen4. On 21 October 2021, Sony announced the α7 IV, a mirrorless camera that will support single CFexpress Type A and SD cards. On 28 October 2021, Nikon announced the Nikon Z 9 flagship mirrorless camera, which uses dual CFexpress Type B slots.
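The interplay of lane count and PCIe generation sets the theoretical throughput ceiling for each form factor. The following is a minimal Python sketch assuming roughly 1 GB/s per PCIe 3.0 lane and 2 GB/s per PCIe 4.0 lane, as described in the History section; real cards fall short of these ceilings.

```python
# Approximate theoretical throughput ceilings for CFexpress form factors,
# using the per-lane figures quoted in the standard descriptions above
# (~1 GB/s per PCIe 3.0 lane, ~2 GB/s per PCIe 4.0 lane). Actual card
# performance is lower than these limits.

LANES = {"Type A": 1, "Type B": 2, "Type C": 4}
GBPS_PER_LANE = {"PCIe 3.0": 1.0, "PCIe 4.0": 2.0}  # GB/s per lane

for form, lanes in LANES.items():
    for gen, per_lane in GBPS_PER_LANE.items():
        print(f"{form} on {gen}: up to {lanes * per_lane:.0f} GB/s")

# Type B on PCIe 3.0 -> 2 GB/s (the CFexpress 1.0/2.0 figure quoted above)
# Type C on PCIe 4.0 -> 8 GB/s (the CFexpress 4.0 ceiling)
```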
See also CompactFlash Memory cards PCI Express NVM Express XQD card SD Express References Computer-related introductions in 2016 Solid-state computer storage media Computer standards
CFexpress
Technology
1,742
35,013,582
https://en.wikipedia.org/wiki/Admission%20control
Admission control is a validation process in communication systems in which a check is performed before a connection is established, to see if current resources are sufficient for the proposed connection. Applications For some applications, dedicated resources (such as a wavelength across an optical network) may be needed, in which case admission control has to verify the availability of such resources before a request can be admitted. For more elastic applications, a total volume of resources may be needed prior to some deadline in order to satisfy a new request, in which case admission control needs to verify availability of resources at that time and perform scheduling to guarantee satisfaction of an admitted request. Admission control systems Asynchronous Transfer Mode Audio Video Bridging using Stream Reservation Protocol Call admission control IEEE 1394 Integrated services on IP networks Public switched telephone network References External links Papers about Admission Control in DiffServ systems on Google Scholar Deadline-aware Admission Control for Large Inter-Datacenter Transfers Internet Standards Networking standards
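The admission test described above can be sketched in a few lines of code. This is a minimal Python illustration, not a real protocol implementation; the resource names and capacities are hypothetical.

```python
# Minimal sketch of an admission-control check: a new connection is admitted
# only if the resources it requests fit within the remaining capacity.
# Resource names and capacities below are hypothetical illustrations.

from dataclasses import dataclass, field

@dataclass
class AdmissionController:
    capacity: dict                       # e.g. {"bandwidth_mbps": 1000}
    allocated: dict = field(default_factory=dict)

    def admit(self, request: dict) -> bool:
        """Admit the request only if every requested resource still fits."""
        for res, amount in request.items():
            used = self.allocated.get(res, 0)
            if used + amount > self.capacity.get(res, 0):
                return False             # reject: insufficient resources
        for res, amount in request.items():
            self.allocated[res] = self.allocated.get(res, 0) + amount
        return True                      # accept and reserve the resources

ac = AdmissionController(capacity={"bandwidth_mbps": 1000})
print(ac.admit({"bandwidth_mbps": 600}))  # True  (600/1000 now reserved)
print(ac.admit({"bandwidth_mbps": 600}))  # False (would exceed capacity)
```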
Admission control
Technology,Engineering
189
5,465,213
https://en.wikipedia.org/wiki/Coefficient%20diagram%20method
In control theory, the coefficient diagram method (CDM) is an algebraic approach applied to a polynomial loop in the parameter space. A special diagram called a "coefficient diagram" is used as the vehicle to carry the necessary information and as the criterion of good design. The performance of the closed-loop system is monitored by the coefficient diagram. The most considerable advantages of CDM can be listed as follows: The design procedure is easily understandable, systematic and useful. The coefficients of the CDM controller polynomials can therefore be determined more easily than those of a PID or other types of controller, which makes it possible even for a new designer to realize a controller for any kind of system. There are explicit relations between the performance parameters specified before the design and the coefficients of the controller polynomials, as described in the literature. For this reason, the designer can easily realize many control systems having different performance properties for a given control problem, with a wide range of freedom. The development of different tuning methods is required for time delay processes of different properties in PID control, but it is sufficient to use the single design procedure in the CDM technique. This is an outstanding advantage. It is particularly hard to design robust controllers realizing the desired performance properties for unstable, integrating and oscillatory processes having poles near the imaginary axis. It has been reported that successful designs can be achieved even in these cases by using CDM. It is theoretically proven that CDM design is equivalent to LQ design with proper state augmentation. Thus, CDM can be considered an "improved LQG", because the order of the controller is smaller and weight selection rules are also given. It is usually required that the controller for a given plant be designed under some practical limitations: the controller is desired to be of minimum degree, minimum phase (if possible) and stable, and it must have enough bandwidth and respect power rating limitations. If the controller is designed without considering these limitations, the robustness property will be very poor, even though the stability and time response requirements are met. A CDM controller designed while considering all these problems is of the lowest degree, has a convenient bandwidth and yields a unit step time response without overshoot. These properties guarantee robustness, sufficient damping of disturbance effects, and low cost. Although the main principles of CDM have been known since the 1950s, the first systematic method was proposed by Shunji Manabe. He developed a new method that easily builds a target characteristic polynomial to meet the desired time response. CDM is an algebraic approach combining classical and modern control theories and uses polynomial representation in the mathematical expressions. The advantages of the classical and modern control techniques are integrated with the basic principles of this method, which is derived by making use of previous experience and knowledge of controller design. Thus, an efficient and fertile control method has appeared as a tool with which control systems can be designed without needing much experience and without confronting many problems. Many control systems have been designed successfully using CDM. It is very easy to design a controller under the conditions of stability, time domain performance and robustness.
The close relations between these conditions and the coefficients of the characteristic polynomial can be simply determined. This means that CDM is effective not only for control system design but also for controller parameter tuning. See also Polynomials References External links Coefficient Diagram Method. Polynomials Control theory
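In Manabe's formulation, the coefficient diagram is commonly read through the stability indices γ_i = a_i² / (a_{i−1} a_{i+1}) and the equivalent time constant τ = a_1/a_0 of the closed-loop characteristic polynomial, with the standard recommended targets γ_1 = 2.5 and γ_i = 2 for i ≥ 2. The following is a minimal Python sketch of that computation; the example polynomial is an illustrative construction chosen to hit those targets, not one from the literature.

```python
# Stability indices and equivalent time constant used to read a coefficient
# diagram, following Manabe's standard form. Example polynomial is illustrative.

def stability_indices(a):
    """gamma_i = a_i^2 / (a_{i-1} * a_{i+1}) for i = 1 .. n-1, where
    a = [a_0, a_1, ..., a_n] are characteristic-polynomial coefficients
    ordered from the constant term upward."""
    return [a[i] ** 2 / (a[i - 1] * a[i + 1]) for i in range(1, len(a) - 1)]

def equivalent_time_constant(a):
    return a[1] / a[0]

# Illustrative 3rd-order polynomial a_3 s^3 + a_2 s^2 + a_1 s + a_0:
a = [1.0, 2.5, 2.5, 1.25]
print(stability_indices(a))          # -> [2.5, 2.0], Manabe's recommended targets
print(equivalent_time_constant(a))   # -> 2.5
```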
Coefficient diagram method
Mathematics
684
2,567,092
https://en.wikipedia.org/wiki/Oosterscheldekering
The Oosterscheldekering (English: Eastern Scheldt storm surge barrier), between the islands Schouwen-Duiveland and Noord-Beveland, is the largest of the Delta Works, a series of dams and storm surge barriers designed to protect the Netherlands from flooding from the North Sea. The construction of the Delta Works was a response to the widespread damage and loss of life in the North Sea flood of 1953. Surge barrier The second longest dam in the Delta Works, after the 10.5-kilometre-long Oesterdam, the nine-kilometre-long Oosterscheldekering (kering meaning barrier) was initially designed, and partly built, as a closed dam, but after public protests huge sluice-gate-type doors were installed in the remaining four kilometres. These doors are normally open, but can be closed under adverse weather conditions. In this way, the saltwater marine life behind the dam is preserved and fishing can continue, while the land behind the dam is safe from the water. Notable figures involved in the design of the Oosterscheldekering included Jan Agema, and from 1976 the design of the project was led by Frank Spaargaren. On 4 October 1986, Queen Beatrix officially opened the dam for use by saying the well-known words: "De stormvloedkering is gesloten. De Deltawerken zijn voltooid. Zeeland is veilig." (The flood barrier is closed. The Delta Works are completed. Zeeland is safe.) At the artificial island Neeltje-Jans, at one end of the barrier, a plaque is installed with the words: "Hier gaan over het tij, de maan, de wind en wij" ("Here the tide is ruled by the moon, the wind and us"). Construction The Oosterscheldekering was the most difficult to build and most expensive part of the Delta Works. Work on the dam took more than a decade. It was constructed by a consortium of contractors comprising Ballast Nedam, Boskalis Westminster, Baggermaatschappij Breejenhout, Hollandse Aanneming Maatschappij, Hollandse Beton Maatschappij, Van Oord-Utrecht, Stevin Baggeren, Stevin Beton en Waterbouw, Adriaan Volker Baggermaatschappij, Adriaan Volker Beton en Waterbouw and Aannemerscombinatie Zinkwerken. Construction started in April 1976 and was completed in June 1986. The road over the dam was ready for use in November 1987. The road was opened by the former queen, Princess Juliana, on 5 November 1987, exactly 457 years after the St. Felix's Day flood of 1530, which had washed away a large chunk of Zeeland, upstream of the new barrier's position. To facilitate the building, an artificial island, Neeltje-Jans, was created in the middle of the estuary. When the construction was finished, the island was rebuilt to be used as an education centre for visitors and as a base for maintenance works. The dam is based on 65 concrete pillars with 62 steel doors, each 42 metres wide. The parts were constructed in a dry dock. The area was flooded and a small fleet of special construction ships lifted the pillars and placed them in their final positions. Each pillar is between 35 and 38.75 metres high and weighs 18,000 tonnes. The dam is designed to last more than 200 years. The Oosterscheldekering is sometimes referred to as the eighth Wonder of the World, and it has been declared one of the Seven Wonders of the Modern World by the American Society of Civil Engineers. Construction fleet Four ships were custom designed and built for this project: Mytilus, a ship equipped with various ground working tools, such as needles to make the seabed denser and more stable.
Cardium, a ship to transport and lay a special foil carpet on the seabed for the pillars to rest on. Ostrea, a ship capable of lifting a concrete pillar from the dry dock and placing it accurately on the foil on the seabed. The ship is 85 metres long and has a portal 50 metres high. The ship can lift only 10,000 tonnes, but as a large part of each pillar is underwater, it is not necessary for the ship to lift the full 18,000 tonnes. This ship is considered the flagship of the construction fleet, mainly because of its larger size and power in comparison to the other ships. Macoma, a ship that works closely with the Ostrea, cleaning the foil and assisting in placing the pillars accurately in their final position. The ships are named after various types of shellfish. Operation The dam is manually operated, but if human control fails, an electronic security system acts as a backup. A Dutch law regulates the conditions under which the dam is allowed to close: the water level must be at least three metres above regular sea level before the doors can be completely shut. Each sluice gate is closed once a month for testing, and emergency procedures are tested on pre-scheduled dates. Once a test is passed, the shutters are quickly opened again to minimise the effect on tidal movements and the local marine ecosystem. It takes approximately one hour to close a door. The cost of operation is €17 million per year. The full dam has been closed twenty-eight times since 1986, due to water levels exceeding, or being predicted to exceed, the three-metre threshold. The last closure was on 31 January 2022, because of Storm Corrie. Tidal power generation In 2015, five Tocardo T2 tidal turbines were installed on the barrier, mounted on a 50 m long frame supported by the road bridge, which could rotate to lift all of the turbines out of the water simultaneously. The turbines were installed in the eighth sluice channel from the southern end of the barrier, and started generating electricity for the Dutch grid in 2016. The installation was reported to have cost around US$12.4 million, and was the largest tidal power project in the Netherlands. Each turbine was 5.26 m in diameter (87 m² swept area) and rated at 250 kW, for a total power of 1.25 MW. The project was decommissioned in 2023, after eight years of operation. See also Flood control in the Netherlands Delta Works Rijkswaterstaat Jan Agema Frank Spaargaren References External links Satellite view from Google Maps DeltaWorks.org – about the Oosterscheldekering; includes text, photos, video and a virtual tour. Buildings and structures in Schouwen-Duiveland Dams completed in 1986 Dams in Zeeland Delta Works Flood barriers Noord-Beveland Tourist attractions in Zeeland
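To make the closure rule above concrete, here is a minimal Python sketch of the decision logic. The three-metre threshold comes from the article; the function name, and the idea of testing a forecast level alongside the measured one, are illustrative assumptions and not the barrier's actual control software.

# Sketch of the closure rule described above; the threshold is from the
# article, everything else (names, use of a forecast value) is an assumption.
CLOSURE_THRESHOLD_M = 3.0  # metres above regular sea level

def barrier_should_close(measured_level_m, forecast_level_m):
    # The doors may be fully shut when the water level exceeds, or is
    # predicted to exceed, three metres above regular sea level.
    return max(measured_level_m, forecast_level_m) >= CLOSURE_THRESHOLD_M

print(barrier_should_close(2.4, 3.2))  # True: the forecast exceeds 3 m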
Oosterscheldekering
Physics
1,410
25,486,644
https://en.wikipedia.org/wiki/Shields%20parameter
The Shields parameter, also called the Shields criterion or Shields number, is a nondimensional number used to calculate the initiation of motion of sediment in a fluid flow. It is a nondimensionalization of a shear stress, and is typically denoted θ or τ_*. The parameter was developed by Albert F. Shields and was later named after him. The Shields parameter is the main parameter of the Shields formula. The Shields parameter is given by: θ = τ / ((ρ_s − ρ) g D), where: τ is a dimensional shear stress; ρ_s is the density of the sediment; ρ is the density of the fluid; g is the acceleration due to gravity; D is a characteristic particle diameter of the sediment. The critical shear stress and the corresponding critical Shields number (τ_c and θ_c) describe the conditions under which the sediment starts moving. Note that the shear stress is a property of the current, while the critical shear stress is a property of the sediment. Physical meaning By multiplying the top and bottom of the Shields parameter by D², so that θ = τD² / ((ρ_s − ρ) g D³), one can see that it is proportional to the ratio of the fluid force on the particle (proportional to τD²) to the submerged weight of the particle (proportional to (ρ_s − ρ) g D³). References External links Sedimentology Dimensionless numbers of physics Fluid dynamics
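As a worked example of the formula above, here is a minimal Python sketch that evaluates the Shields parameter. The function mirrors the definition directly; the numbers used (a quartz-like sediment density of 2650 kg/m³, water at 1000 kg/m³, a 0.5 mm grain and a bed shear stress of 0.5 Pa) are illustrative assumptions rather than values from the article.

# Direct evaluation of theta = tau / ((rho_s - rho) * g * D), SI units.
def shields_parameter(tau, rho_s, rho, d, g=9.81):
    return tau / ((rho_s - rho) * g * d)

# Illustrative (assumed) values: a quartz-like sand grain in water.
theta = shields_parameter(tau=0.5,       # bed shear stress [Pa]
                          rho_s=2650.0,  # sediment density [kg/m^3]
                          rho=1000.0,    # fluid density [kg/m^3]
                          d=0.0005)      # grain diameter [m]
print(round(theta, 3))  # about 0.062

Comparing the computed value against the critical Shields number read from the Shields diagram (commonly of order 0.03 to 0.06, depending on the grain Reynolds number) indicates whether grains of that size would begin to move under the given shear stress.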
Shields parameter
Chemistry,Engineering
225
50,447,273
https://en.wikipedia.org/wiki/NATS%20Messaging
NATS is an open-source messaging system (sometimes called message-oriented middleware). The NATS server is written in the Go programming language. Client libraries to interface with the server are available for dozens of major programming languages. The core design principles of NATS are performance, scalability, and ease of use. The acronym NATS stands for Neural Autonomic Transport System. Synadia develops and provides support for NATS. NATS was originally developed by Derek Collison as the messaging control plane for Cloud Foundry and was written in Ruby; it was later ported to Go. The source code is released under the Apache 2.0 License. NATS consists of: the NATS server, the core publish-subscribe server for NATS; client libraries for a variety of programming languages; and a connector framework, a pluggable Java-based framework to connect NATS and other services. NATS is a CNCF project with Kubernetes and Prometheus integration. The NATS server is often referred to as either 'Core NATS' or NATS with 'JetStream'. 'Core NATS' is the set of core NATS functionalities and qualities of service. 'JetStream' is the (optionally enabled) built-in persistence layer that adds streaming, queues, at-least-once and exactly-once delivery guarantees, historical data replay, decoupled flow control and key/value store functionality to Core NATS. JetStream replaced the older STAN (NATS Streaming) approach. Example Below is a sample connection transcript from a telnet connection to the demo.nats.io site: Trying 107.170.221.32... Connected to demo.nats.io. Escape character is '^]'. INFO {"server_id":"NCURTNCYNYCY7RQPLPHPPE3G5RZW7VZDVPPXI7ORSEFP3FLJOD4LZSSL","server_name":"us-south-nats-demo","version":"2.10.25","proto":1,"git_commit":"006039e","go":"go1.23.5","host":"0.0.0.0","port":4222,"headers":true,"tls_available":true,"max_payload":1048576,"jetstream":true,"client_id":57215,"client_ip":"136.62.97.251","nonce":"VF3sSuRNaKVe_pU","xkey":"XBUHUXVNAY7KEZ6NIZGM5M7BIG7SEX3JY3Y2GE6H4YNUEWCS24YSM5RQ"} See also Prometheus nats exporter References Free software programmed in Go Message-oriented middleware Cloud infrastructure Free software for cloud computing Service-oriented architecture-related products Enterprise application integration Cross-platform free software Software using the Apache license
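To show the core publish-subscribe model in code, here is a minimal sketch using nats-py, the Python client maintained by the NATS project, against the public demo server shown in the transcript above. The subject name and payload are illustrative assumptions, and exact client APIs can differ between versions, so treat this as a sketch rather than a definitive example.

# Minimal publish-subscribe sketch, assuming the nats-py client
# (installable as "nats-py"); subject and payload are illustrative.
import asyncio
import nats

async def main():
    nc = await nats.connect("nats://demo.nats.io:4222")

    async def handler(msg):
        # Invoked once per message delivered on the subscription.
        print(f"received on {msg.subject}: {msg.data.decode()}")

    await nc.subscribe("greet.demo", cb=handler)
    await nc.publish("greet.demo", b"hello NATS")
    await nc.flush()          # ensure the publish has reached the server
    await asyncio.sleep(0.1)  # give the callback a moment to fire
    await nc.drain()          # unsubscribe and close cleanly

asyncio.run(main())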
NATS Messaging
Technology
672
33,457,394
https://en.wikipedia.org/wiki/New%20Enterprise%20Associates
New Enterprise Associates (NEA) is an American venture capital firm. NEA invests across stages ranging from seed through growth, in an array of industry sectors. With over $25 billion in committed capital, NEA is one of the world's largest venture capital firms. History NEA was founded in 1977 by C. Richard (Dick) Kramlich, Chuck Newhall and Frank Bonsal. Kramlich had worked with noted venture capitalist Arthur Rock beginning in 1969, and Bonsal had been an investment banker at Alex. Brown & Sons, where he focused on initial public offerings (IPOs) for startup companies. Newhall had previously managed an investment fund for T. Rowe Price in the 1970s. The firm was founded with offices on both the East Coast and the West Coast. Among the firm's first investments was 3Com, which NEA backed along with Mayfield Fund and Jack Melchor in 1981. The first NEA investment fund had only $16 million of capital. The firm's second fund raised $45 million, and the third fund collected $125 million of commitments from investors in 1984. The firm continued to grow steadily throughout the 1980s and early 1990s, raising $900 million from 1987 through 1996 across NEA's next four funds. Beginning with NEA-8 in 1998, the firm greatly increased the size of its investment funds. NEA's tenth fund had $2.3 billion of investor commitments in 2000. After raising a more modest $1.1 billion in 2004 for the firm's eleventh fund, NEA raised $2.3 billion and $2.5 billion for its next two funds, respectively. In 2010, NEA launched its thirteenth investment fund with $2.5 billion of investor capital, the largest since the financial crisis of 2007–08. In 2012, NEA closed its fourteenth investment fund with $2.6 billion of investor capital. In April 2015, NEA closed its fifteenth investment fund with $3.1 billion in investor capital, at the time the largest venture capital fund ever raised. In June 2017, NEA closed its sixteenth investment fund with $3.3 billion in investor capital, again the largest venture capital fund raised to date. Operations The firm operates primarily in New York and California, and has offices in Baltimore, Bangalore, Beijing, Boston, Menlo Park, California, Mumbai, New York City, San Francisco and Shanghai. Since its founding, NEA has invested in nearly 1,000 companies and realized over 650 liquidity events (with over 250 portfolio company IPOs and over 300 portfolio company acquisitions). In 2018, Jeff Immelt, former CEO of General Electric, joined the firm as a venture partner. Investments NEA has 370 portfolio companies. Some of the firm's investments include: Aerohive Networks, Alimera Sciences, Amicus Therapeutics, Antenna Software, Appian, AppSheet, Arris International, Automation Anywhere, Bloom Energy, Box, Inc., Bright Health, Built Robotics, ByteDance, Champions Oncology, Clarifai, Cleo, Clio, Cloudflare, Clovis Oncology, Cohere, Conviva, CrowdMed, Coursera, Dandelion Energy, Databricks, Drop, Eargo, Edmodo, Enigma, FiscalNote, Formlabs, FTX, Gen.G, Genies, Inc., Goji Electronics, GoodLeap, Goop, HackerOne, Houzz, IFTTT, Illumitex, Illusive Networks, Instabase, Konux, Lexicon Pharmaceuticals, Luminary, Luxtera, MasterClass, Moda Operandi, MongoDB, Pager, Patreon, Philo, Plaid, Raise.com, Regulus Therapeutics, Robinhood Markets, Rock Health, Smartcar, Splashtop, Tamara Mellon, The Yes, ThirdLove, Tintri, Uniphore, Upstart, Upwork, Virtru, Wheels Up, Zoomdata.
See also Flagship Pioneering, a life sciences venture capital firm References External links New Enterprise Associates (company website) Venture capital firms of the United States Life sciences industry Financial services companies based in California Companies based in Menlo Park, California Financial services companies established in 1977 1977 establishments in California
New Enterprise Associates
Biology
880
7,857,636
https://en.wikipedia.org/wiki/Newick%20format
In mathematics and phylogenetics, Newick tree format (or Newick notation or New Hampshire tree format) is a way of representing graph-theoretical trees with edge lengths using parentheses and commas. It was adopted by James Archie, William H. E. Day, Joseph Felsenstein, Wayne Maddison, Christopher Meacham, F. James Rohlf, and David Swofford at two meetings in 1986, the second of which was at Newick's restaurant in Dover, New Hampshire, US. The adopted format is a generalization of the format developed by Meacham in 1984 for the first tree-drawing programs in Felsenstein's PHYLIP package. Examples A single tree can be represented in Newick format in several ways:
((,)); no nodes are named
(A,B,(C,D)); leaf nodes are named
(A,B,(C,D)E)F; all nodes are named
(:0.1,:0.2,(:0.3,:0.4):0.5); all but root node have a distance to parent
(:0.1,:0.2,(:0.3,:0.4):0.5):0.0; all have a distance to parent
(A:0.1,B:0.2,(C:0.3,D:0.4):0.5); distances and leaf names (popular)
(A:0.1,B:0.2,(C:0.3,D:0.4)E:0.5)F; distances and all names
((B:0.2,(C:0.3,D:0.4)E:0.5)F:0.1)A; a tree rooted on a leaf node (rare)
Newick format is typically used by tools like PHYLIP and is a minimal definition for a phylogenetic tree. Rooted, unrooted, and binary trees When an unrooted tree is represented in Newick notation, an arbitrary node is chosen as its root. Whether rooted or unrooted, a tree's representation is typically rooted on an internal node, and it is rare (but legal) to root a tree on a leaf node. A rooted binary tree that is rooted on an internal node has exactly two immediate descendant nodes for each internal node. An unrooted binary tree that is rooted on an arbitrary internal node has exactly three immediate descendant nodes for the root node, and each other internal node has exactly two immediate descendant nodes. A binary tree rooted from a leaf has at most one immediate descendant node for the root node, and each internal node has exactly two immediate descendant nodes. Grammar A grammar for parsing the Newick format follows. The grammar nodes:
Tree: the full input Newick format for a single tree
Subtree: an internal node (and its descendants) or a leaf node
Leaf: a node with no descendants
Internal: a node and its one or more descendants
BranchSet: a set of one or more Branches
Branch: a tree edge and its descendant subtree
Name: the name of a node
Length: the length of a tree edge
The grammar rules (note: "|" separates alternatives):
Tree → Subtree ";"
Subtree → Leaf | Internal
Leaf → Name
Internal → "(" BranchSet ")" Name
BranchSet → Branch | Branch "," BranchSet
Branch → Subtree Length
Name → empty | string
Length → empty | ":" number
Whitespace (spaces, tabs, carriage returns, and linefeeds) within number is prohibited. Whitespace within string is often prohibited. Whitespace elsewhere is ignored. Sometimes the Name string must be of a specified fixed length; otherwise the punctuation characters from the grammar (semicolon, parentheses, comma, and colon) are prohibited. The Tree → Subtree ";" production is instead the Tree → Branch ";" production in those cases where having the entire tree descend from nowhere is permitted; this captures the replaced production as well, because Length can be empty. Note that when a tree having more than one leaf is rooted from one of its leaves, a representation that is rarely seen in practice, the root leaf is characterized as an Internal node by the above grammar.
Generally, a root node labeled as Internal should be construed as actually internal if and only if it has at least two Branches in its BranchSet. One can make a grammar that formalizes this distinction by replacing the above Tree production rule with
Tree → RootLeaf ";" | RootInternal ";"
RootLeaf → Name | "(" Branch ")" Name
RootInternal → "(" Branch "," BranchSet ")" Name
The first RootLeaf production is for a tree with exactly one leaf. The second RootLeaf production is for rooting a tree from one of its two or more leaves. Notes An unquoted name may not contain blanks, parentheses, square brackets, single quotes, colons, semicolons, or commas. Underscore characters in unquoted names are converted to blanks. A name may also be quoted by enclosing it in single quotes; single quotes in the original string are represented as two consecutive single quote characters. Whitespace may appear anywhere except within an unquoted name or a number. Newlines may appear anywhere except within a name or a number. Comments are enclosed in square brackets. They can appear anywhere newlines are allowed. Comments whose content starts with & are generally computer-generated and carry additional data. Some dialects allow nested comments. Dialects New Hampshire X format The New Hampshire X (NHX) format is an extension to Newick that adds key-value data (gene duplication, etc.) to Newick nodes. This is done by putting the additional data in brackets [&&NHX:key=value:...] in the node labels. The brackets are used because they represent comments in the Nexus file format, so any parser not understanding this additional information will ignore it. Extended Newick While the standard Newick notation is limited to phylogenetic trees, Extended Newick (Perl Bio::PhyloNetwork) can be used to encode explicit phylogenetic networks. In a phylogenetic network, which is a generalization of a phylogenetic tree, a node represents either a divergence event (cladogenesis) or a reticulation event such as hybridization, introgression, horizontal (lateral) gene transfer or recombination. Nodes that represent a reticulation event are duplicated, annotated by introducing the # symbol into the Newick format, and numbered consecutively (using integer values starting with 1). For example, if leaf Y is the product of hybridisation (x) between the lineages leading to C and D in the tree above, one can express this situation by defining two trees in standard Newick notation
(A,B,((C,Y)c,D)e)f; and (A,B,(C,(Y,D)d)e)f; standard Newick, all nodes are named (internal nodes lowercase, leaves upper case)
or in extended Newick notation
(A,B,((C,(Y)x#H1)c,(x#H1,D)d)e)f; extended Newick, all nodes are named; 1 is the integer identifying the hybrid node x
The node x#H1 here is a hybrid node; it will be joined by the program into a single node when drawn, as in the picture drawn by Dendroscope for this example. The production rules above are modified by the following for labelling hybrid nodes (in general, nodes representing reticulation events):
Leaf → Name Hybrid
Hybrid → empty | "#" Type integer -- the #i part is an obligatory identifier for a hybrid node
Type → empty | string -- type of reticulation, e.g., H = hybridisation, LGT = lateral gene transfer, R = recombination
In the visualization of LGT events, for a given reticulate node, one incoming edge is usually drawn as an "acceptor" edge and all other incoming edges are drawn as "transfer" edges. Some programs (e.g.
Dendroscope and SplitsTree) allow exactly one copy of the reticulate node to be labeled with ## to indicate that it corresponds to the acceptor edge. Extended Newick is backward-compatible: a hybrid node would simply be interpreted as a few strangely named nodes by legacy parsers. Rich Newick format The Rich Newick format, also known as the Rice Newick format, is a further extension of Extended Newick. It adds support for unrooted phylogenies: an unrooted tree is simply written as usual (i.e., an arbitrary root is picked at a binary branch point) and [&U] is prefixed to the string; [&R], on the other hand, can be used to force a rooted tree. It also adds support for bootstrap values and probabilities, written as additional fields after the length; fields can be left empty as long as the colons are present. This may be backward-incompatible. Ad hoc extensions Some other programs, like NWX, use comments to encode additional information in an ad hoc manner; for example, MrBayes and BEAST attach additional information, such as probability, length in years, and standard deviation for values, to the nodes. Visualization Many tools have been published to visualize Newick tree data. Specific examples include the ETE toolkit ("Environment for Tree Exploration") and T-REX. Phylogenetic software packages such as SplitsTree and the tree viewer Dendroscope, as well as the online tree-viewing tool IcyTree, can handle standard and extended Newick notation, while the phylogenetic network software PhyloNet makes use of both the Extended Newick and Rich Newick formats. See also phyloXML T-REX (Webserver) allows handling phylogenetic trees and networks in the Newick format. Smart Game Format is an application of Newick format and is widely used for recording board games. References External links Miyamoto and Goodman's Phylogram of Eutherian Mammals An example of a large phylogram with its Newick format representation. Phylogenetic tree (newick) viewer (By Huerta-Cepas et al. 2016) Trees (data structures) Graph description languages Phylogenetics
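As an illustration of how directly the grammar above maps onto code, the following is a minimal recursive-descent parser in Python. It is a sketch covering only the plain dialect (unquoted names and optional ":length" suffixes, with no quoting or comments), and the dictionary-based node representation is an illustrative choice, not part of any Newick specification.

def parse_newick(s):
    # Minimal recursive-descent parser for the plain Newick dialect only:
    # unquoted names, optional ":length" suffixes, no quoting, no comments.
    pos = 0

    def parse_subtree():
        # Subtree -> Leaf | Internal, per the grammar in the text above.
        nonlocal pos
        children = []
        if s[pos] == "(":  # Internal -> "(" BranchSet ")" Name
            pos += 1
            children.append(parse_subtree())
            while s[pos] == ",":  # BranchSet -> Branch | Branch "," BranchSet
                pos += 1
                children.append(parse_subtree())
            assert s[pos] == ")", "unbalanced parentheses"
            pos += 1
        name = ""
        while pos < len(s) and s[pos] not in "():,;":  # Name -> empty | string
            name += s[pos]
            pos += 1
        length = None
        if pos < len(s) and s[pos] == ":":  # Length -> empty | ":" number
            pos += 1
            digits = ""
            while pos < len(s) and s[pos] not in "():,;":
                digits += s[pos]
                pos += 1
            length = float(digits)
        return {"name": name, "length": length, "children": children}

    tree = parse_subtree()
    assert pos < len(s) and s[pos] == ";", "a Newick tree must end with ';'"
    return tree

# The popular form from the examples above: root F with children A, B and E.
print(parse_newick("(A:0.1,B:0.2,(C:0.3,D:0.4)E:0.5)F;"))

Running it on the popular form from the examples yields a nested structure whose root F has three children: the leaves A and B, and the internal node E containing C and D.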
Newick format
Mathematics,Biology
2,176
31,196,751
https://en.wikipedia.org/wiki/Epidemiology%20of%20binge%20drinking
Binge drinking is the practice of consuming excessive amounts of alcohol in a short period of time. Due to the idiosyncrasies of the human body, the exact amount of alcohol that would constitute binge drinking differs among individuals. The definitions of binge drinking are also nuanced across cultures and population subgroups. For example, many studies use gender-specific measures of binge drinking (such as 5+ drinks for men and 4+ drinks for women). The epidemiology of binge drinking likewise differs across cultures and population subgroups. Asia Singapore According to the National Health Survey 2020 conducted by the Health Promotion Board Singapore, binge drinking is defined as consumption of five or more alcoholic drinks over a short period of time. The survey results showed that the frequency of binge drinking was 15.6% in males, 11.9 percentage points higher than in females (3.7%). The largest proportion of males and females who binge drink falls within the 18–29 age group. In 2007, Asia Pacific Breweries Singapore (APBS) spearheaded Get Your Sexy Back (GYSB), Singapore's first youth-for-youth initiative to promote responsible and moderate drinking among young adults. The program seeks to widen awareness and educate individuals about responsible drinking behavior by raising the social currency of moderation. It engages youths in events and activities close to their lifestyles, focusing on four major platforms (music, fashion, sports and friends) to spread the message of responsible drinking. Europe The drinking age in most European countries is either 16 or 18, though in many countries national or regional regulations ban the consumption and/or the sale of alcoholic drinks stronger than beer or wine to those less than 18 years of age. Licensees may sometimes choose to provide beverages such as diluted wine or beer mixed with lemonade (shandy or lager top) with a meal to encourage responsible consumption of alcohol. Binge drinking is generally perceived to be most prevalent in the Vodka Belt (most of Northern and some of Eastern Europe) and least common in the southern part of the continent, in Italy, France, Portugal and the Mediterranean (the Wine Belt). Using a "5-drink, 30-day" definition (5 standard drinks in a row during the last 30 days), Denmark leads European binge drinking, with 60% of 15–16-year-olds reporting this behavior (and 61% reporting intoxication). However, there currently appears to be at least some convergence of drinking patterns and styles between the northern and southern countries, with the south beginning to drink more like the north rather than the other way around. Malta A notable exception to the lower rates of binge drinking in Southern Europe is the Mediterranean island of Malta, which has adopted the British culture of binge drinking, and where teenagers, often in their early teens, are able to buy alcohol and drink it in the streets of the main club district, Paceville, due to a lack of police enforcement of the legal drinking age of 17. Statistics show that alcohol consumption in Malta exceeds that in the UK (although binge drinking is slightly lower and intoxication significantly lower), and that Malta ranks 5th in the world for common binge drinking. Maltese 15–16-year-olds report binge drinking at a rate of 50%, using the 5-drink, 30-day definition, but only 20% report intoxication in the past 30 days. Spain Since the mid-1990s the botellón has been growing in popularity among young people.
Botellón, which literally means "big bottle" in Spanish, is a drinking party or gathering that involves consuming alcohol, usually spirits (often mixed with soft drinks), in a public or semi-public place (beaches, parks, streets, etc.). This can be considered a form of binge drinking, since most people who attend consume 3 to 5 drinks in less than five hours. Among 15–16-year-olds, 23% report having been intoxicated in the past 30 days. Russia Binge drinking in Russia ("zapoy" ("запой") in Russian) often takes the form of two or more days of continuous drunkenness; sometimes it can last up to a week. One study found that among men aged 25–54, about 10% had at least one episode of zapoy in the past year, which can be taken as a sign of a drinking problem. Almost half of deaths among working-age men in Russia are attributable to alcohol abuse, reducing Russia's male life expectancy significantly. Vodka is the preferred alcoholic beverage, and Russia is notably considered part of the Vodka Belt. Using a 5-drink, past-30-days definition, 38% of Russian 15–16-year-olds have binged and 27% became intoxicated, a percentage that is on par with other European countries, and even lower than some. United Kingdom In the UK, parallels have been drawn between binge drinking and the Gin Craze of 18th-century England. Some areas of the media spend a great deal of time reporting on what they see as a social ill that is becoming more prevalent as time passes. In 2003, the cost of binge drinking was estimated at £20 billion a year. In response, the government has introduced measures to deter disorderly behaviour and sales of alcohol to people under 18, with special provisions in place during the holiday season. In January 2005, it was reported that one million admissions to UK emergency departments each year are alcohol-related; in many cities, Friday and Saturday nights are by far the busiest periods for ambulance services. The culture of drinking in the UK is markedly different from that of some other European nations. In mainland Europe, alcohol tends to be consumed more slowly over the course of an evening, often accompanied by a restaurant meal. In Scandinavia, occasional bouts of heavy drinking are the norm. In the UK (as well as Ireland), by contrast, alcohol is commonly consumed in rapid binges, leading to more regular instances of severe intoxication. In this way the British combine Northern European volumes of consumption with a frequency of consumption resembling that of Southern Europe. This "drinking urgency" may have been encouraged by the traditional pre-midnight pub closing hours in the UK, whereas bars in continental Europe would typically remain open through the night. These closing hours may have stemmed from the Defence of the Realm Act 1914, emergency legislation dating back to the First World War that regulated pub opening times with the intention of getting workers out of the pub and into the munitions factories; it was consequently criticized for being draconian and denying the working classes their pleasures. This is one of the reasons for introducing the Licensing Act 2003, which came into effect in England and Wales in 2005 and allows 24-hour licensing (although not all bars have taken advantage of the change). Some observers, however, believed it would exacerbate the problem. As of 2008, results had been mixed and inconsistent across the country.
Among young people (under 25), binge drinking (and drinking in general) in England appears to have declined since the late 1990s, according to the National Health Service. While being drunk (outside of a student context) in mainland Europe is widely viewed as socially unacceptable, in the UK the reverse is true in many social circles. Particularly amongst young adults, there is often a certain degree of peer pressure to get drunk during a night out. This culture is increasingly viewed by politicians and the media as a serious problem that ought to be tackled, partly for health reasons, but mostly due to its association with violence and anti-social behaviour. Using a 5-drink, 30-day definition, British 15–16-year-olds binge drink at a rate of 54%, the fourth highest in Europe, and 46% report intoxication in the past 30 days. The British TV channel Granada produces a program called Booze Britain, which documents the binge drinking culture by following groups of young adults. As a reaction to the binge drinking epidemic in Britain, several charities have been created to raise awareness of the dangers of binge drinking and promote responsible drinking, notably Alcohol Concern and Drinkaware. In May 2018 the Scottish Government implemented a minimum unit price of 50 pence per unit of alcohol sold in Scotland, with the aim of improving health outcomes by reducing the availability of alcoholic beverages that are both low in cost and high in alcohol content. The Americas Canada Canadian binge drinking rates are comparable to those of the United States, and most resemble the rates in geographically similar American states bordering Canada. For example, in 2000–01, 29% of 15- to 19-year-olds (35% male, 22% female) and 37% of 20- to 24-year-olds (47% male, 17.9% female) reported having 5 or more drinks on one occasion, 12 or more times a year. In university, binge drinking is especially common during the first week of orientation, commonly known as "frosh week". The first known study comparing the drinking patterns of Canadian and American college students under age 25 (in 1998 and 1999, respectively) found that although Canadian students were more likely to drink, American students drank more heavily overall. "Heavy alcohol use" was defined as usually having 5/4 drinks or more on the days the person drinks, in the past 30 days (American) or 2–3 months (Canadian). Among past-year drinkers, 41% of American and 35% of Canadian students reported participating in this behavior; among the total sample, the figures were 33% and 30%, respectively. Differences included the lack of a gender gap in Canada compared with America, as well as some age-related differences. Canadians exceeded Americans in reported heavy alcohol use until age 19 (especially among the 1% of students under 18), at which point Americans overtook and then began to exceed Canadians, especially among 21- and 22-year-olds. After age 23, there was no longer much of a difference. In Canada, the legal drinking age is 18 or 19, depending on the province. A relatively popular drinking game in Canadian skateboarding and heavy metal culture is "wizard sticks", in which drinkers tape a stack of their empty beer cans to the can from which they are currently drinking; the name comes from the fact that when the stack gets tall enough, it resembles a wizard's staff.
United States Despite the legal drinking age of 21, binge drinking remains very prevalent among American high school and college students. Using the popular 5/4 definition of "binge drinking", one study found that, in 1999, 44% of American college students (51% male, 40% female) engaged in the practice at least once in the past two weeks. One can also look at the prevalence of "extreme drinking". A more recent study of US first-semester college freshmen in 2003 found that, while 41% of males and 34% of females "binged" (using the 5/4 threshold) at least once in the past two weeks, 20% of males and 8% of females drank 10/8 or more drinks (double the 5/4 threshold) at least once in the same period, and 8% of males and 2% of females drank at least 15/12 drinks (triple the threshold). A main concern on college campuses is how the negative consequences of binge drinking affect students. A study by the Harvard School of Public Health reported that students who engage in binge drinking experience numerous problems, such as missing class, engaging in unplanned or unsafe sexual activity, being victims of sexual assault, unintentional injuries, and physical ailments. In 2008 the U.S. Surgeon General estimated that around 5,000 Americans under 21 die each year from alcohol-related injuries involving underage drinking. Rates of binge drinking in women have been increasing; high-risk drinking puts these women at increased risk of the negative long-term effects of alcohol consumption. The population of people who binge drink mainly comprises young adults aged 18–29, although the practice is by no means rare among older adults. For example, in 2007 (using a 5-drinks-per-occasion definition for both genders), 42% of 18- to 25-year-olds "binged" at least once a month, while 20% of 16–17-year-olds and 19% of those over age 35 did so. The peak age is 21. Prevalence varies widely by region, with the highest rates in the North Central states. Binge drinking is more common in men than in women. The annual "Monitoring the Future" survey found that, in 2007, 10% of 8th graders, 22% of 10th graders, and 26% of 12th graders reported having had five or more drinks at least once in the past two weeks. The same survey found that alcohol was considered somewhat easier to obtain than cigarettes by 8th and 10th graders, even though the minimum age to purchase alcohol is 21 in all 50 states, while for cigarettes it is 18. American Indians and First Nations Binge drinking is a common pattern among Native Americans in both Canada and the United States. Anastasia M. Shkilnyk, who conducted an observational study of the Asubpeeschoseewagong First Nation of Northwestern Ontario in the late 1970s, when the community was demoralized by Ontario Minamata disease, observed that heavy Native American drinkers may not be physiologically dependent on alcohol but abuse it by engaging in binge drinking, a practice associated with child neglect, violence, and impoverishment. After binges during which entire families and their friends drink until they are unconscious and their funds are exhausted, they go about their business without drinking.
Oceania Australia In 2004–2005, statistics from the National Health Survey showed that among the general population over 18, 88% of males and 60% of females had engaged in binge drinking at least once in the past year, with 12% and 4%, respectively, doing so at least once a week. Among 18- to 24-year-olds, 49% of males and 21% of females did so at least once a week. At the time, the definition of "binge drinking" corresponded to 7 or more standard Australian drinks per occasion for males and 5 or more for females, roughly equivalent to (but slightly less than) the American 5/4 standard drinks definition. In March 2008, the Australian government earmarked A$53 million for a campaign against binge drinking, citing two studies done in the previous eight years which showed that binge drinking in Australia was at what Prime Minister Kevin Rudd called "epidemic levels". On June 15 of that year, the Australian Medical Association released new guidelines defining binge drinking as four standard Australian drinks a night. The last survey of drinking habits by the Australian Bureau of Statistics found an increase in drinking outside the home: in 1999, 34 percent of spending on alcoholic drinks took place on licensed premises, and by 2004 this figure had risen to 38 percent. The figure was expected to fall in 2008 because of stricter licensing laws, smoking bans in pubs and the extra premium people pay for buying alcohol in a bar. New Zealand Concerns over binge drinking by teenagers led to a review of liquor advertising being announced by the New Zealand government in January 2006. The review considered regulation of sport sponsorship by liquor companies, which was commonplace at the time. The drinking age in New Zealand was previously 20, and was lowered to 18 in 1999. In direct conjunction with the lowering of the age, police strictly enforced the on-license (bar, restaurant) rules against underage drinking, but enforced the off-license (liquor stores, supermarkets) rules less so. As a result, young people aged 15–17 found it significantly harder to get into (or be served at) bars and restaurants than it had been under the poorly enforced (though higher) drinking age of 20. This asymmetric enforcement led to a period in which many of New Zealand's youth got strangers to purchase high-alcohol-content beverages (e.g. cheap vodka or rum) for them at liquor stores. A propensity to consume an entire bottle of spirits developed, leading to an immediate increase in the number of youths under 18 being admitted to A&E departments. The price of alcohol at supermarkets and liquor stores had also gone down, and the number of outlets had mushroomed. Alcohol remains cheap, and sweet, spirit-based ready-to-drink beverages (similar to alcopops) remain popular among young people. An example of this binge drinking mentality, often seen amongst university students, is the popularity of drinking games such as Edward Wineyhands and Scrumpy Hands, similar to the American drinking game Edward Fortyhands. A recent study showed that 37% of undergraduates had binged at least once in the past week. The New Zealand health service classifies binge drinking as any occasion on which a person consumes five or more standard drinks in a sitting. References Alcohol abuse Drinking culture Binge Drinking
Epidemiology of binge drinking
Environmental_science
3,540
7,324,284
https://en.wikipedia.org/wiki/Indexed%20language
Indexed languages are a class of formal languages discovered by Alfred Aho; they are described by indexed grammars and can be recognized by nested stack automata. Indexed languages are a proper subset of context-sensitive languages. They qualify as an abstract family of languages (furthermore a full AFL) and hence satisfy many closure properties. However, they are not closed under intersection or complement. The class of indexed languages has practical importance in natural language processing as a computationally affordable generalization of context-free languages, since indexed grammars can describe many of the nonlocal constraints occurring in natural languages. Gerald Gazdar (1988) and Vijay-Shanker (1987) introduced a mildly context-sensitive grammar class now known as linear indexed grammars (LIG). Linear indexed grammars have additional restrictions relative to IG. LIGs are weakly equivalent (generate the same language class) to tree adjoining grammars. Examples The following languages are indexed, but are not context-free:
{a^n b^n c^n d^n | n ≥ 1}
{a^n b^m c^n d^m | m,n ≥ 0}
These two languages are also indexed, but are not even mildly context sensitive under Gazdar's characterization:
{a^(2^n) | n ≥ 0}
{w w w | w ∈ {a,b}*}
On the other hand, the following language is not indexed:
{(a b^n)^n | n ≥ 0}
Properties Hopcroft and Ullman tend to consider indexed languages as a "natural" class, since they are generated by several formalisms, such as:
Aho's indexed grammars
Aho's one-way nested stack automata
Fischer's macro grammars
Greibach's automata with stacks of stacks
Maibaum's algebraic characterization
Hayashi generalized the pumping lemma to indexed grammars. Conversely, Gilman gives a "shrinking lemma" for indexed languages. See also Chomsky hierarchy References External links "NLP in Prolog" chapter on indexed grammars and languages Formal languages
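To make the formalism concrete, here is one textbook-style indexed grammar for {a^n b^n c^n | n ≥ 1}, a language that is indexed but not context-free. It is a sketch following Aho's conventions (a nonterminal carries a stack of index symbols; a production of the form A → B f pushes f, and A f → α pops it), not a grammar taken from the article:
S → T g
T → T f
T → A B C
A f → a A    B f → b B    C f → c C
A g → a      B g → b      C g → c
A derivation first builds an index stack f^(n-1) g on T, and the step T → A B C copies that stack to A, B and C; each of the three nonterminals must then pop the same n symbols, emitting one letter per pop, which forces the three blocks to have equal length.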
Indexed language
Mathematics
347