Dataset columns: id (int64, 580 to 79M), url (string, 31–175 characters), text (string, 9–245k characters), source (string, 1–109 characters), categories (string, 160 classes), token_count (int64, 3 to 51.8k).
9,036,441
https://en.wikipedia.org/wiki/Anatoly%20Vershik
Anatoly Moiseevich Vershik (28 December 1933 – 14 February 2024) was a Soviet and Russian mathematician. He is most famous for his joint work with Sergei V. Kerov on representations of infinite symmetric groups and applications to longest increasing subsequences. Biography Vershik studied at Leningrad State University, receiving his doctoral degree in 1974; his advisor was Vladimir Rokhlin. Vershik worked at the St. Petersburg Department of the Steklov Institute of Mathematics and at Saint Petersburg State University. From 1998 to 2008, he was the president of the St. Petersburg Mathematical Society. In 2012, Vershik became a fellow of the American Mathematical Society. In 2015, he was elected a member of Academia Europaea. His doctoral students include Alexander Barvinok, Dmitri Burago, Anna Erschler, Sergey Fomin, Vadim Kaimanovich, Sergei Kerov, Alexander N. Livshits, Andrei Lodkin, Nikolai Mnev, and Natalia Tsilevich. Anatoly Vershik died on 14 February 2024, at the age of 90. See also Bratteli–Vershik diagram References Bibliography Vladimir Arnold, Mikhail Sh. Birman, Israel Gelfand, et al., "Anatolii Moiseevich Vershik (on the occasion of his sixtieth birthday)", Russian Math. Surveys 49:3 (1994), 207–221. Anatoly Vershik, "Admission to the mathematics faculty in Russia in the 1970s and 1980s", Mathematical Intelligencer, vol. 16, no. 4 (1994), 4–5. External links Vershik's personal home page at St. Petersburg Department of the Steklov Mathematical Institute Vershik's CV 1933 births 2024 deaths Soviet mathematicians 20th-century Russian mathematicians 21st-century Russian mathematicians Mathematicians from Saint Petersburg Academic staff of Saint Petersburg State University Combinatorialists Fellows of the American Mathematical Society Members of Academia Europaea Humboldt Research Award recipients
Anatoly Vershik
Mathematics
422
77,650,496
https://en.wikipedia.org/wiki/RDS03-94
RDS03-94, or RDS3-094, is an atypical dopamine reuptake inhibitor that was derived from the wakefulness-promoting agent modafinil. It has substantially higher affinity and potency in terms of dopamine transporter (DAT) inhibition than modafinil (Ki = 39.4 nM vs. 8,160 nM) whilst retaining the atypical DAT blocker profile of modafinil. However, RDS03-94 also has high affinity for the sigma σ1 receptor (Ki = 2.19 nM). RDS03-94 shows some reversal of tetrabenazine-induced motivational deficits in animals and hence may have the capacity to produce pro-motivational effects. However, it appears to be less effective than certain other related agents, like JJC8-088. RDS03-94 is under development for the treatment of psychostimulant use disorder. The drug was first described in the scientific literature in 2020. See also List of modafinil analogues and derivatives References 4-Fluorophenyl compounds Enantiopure drugs Experimental drugs Piperazines Pro-motivational agents Secondary alcohols Sigma receptor ligands Stimulants Thioethers Modafinil analogues
RDS03-94
Chemistry
279
75,864,308
https://en.wikipedia.org/wiki/Habitable%20zone%20for%20complex%20life
A Habitable Zone for Complex Life (HZCL) is a range of distances from a star suitable for complex aerobic life. Different types of limitations preventing complex life give rise to different zones. Conventional habitable zones are based on compatibility with liquid water. Most zones begin at some distance from the host star and end at a greater distance. A planet must orbit inside the boundaries of such a zone. When multiple zonal constraints apply, the zones must overlap for the planet to support complex life. The requirements for bacterial life produce much larger zones than those for complex life, which requires a very narrow zone. Exoplanets The first confirmed exoplanets were discovered in 1992: several planets orbiting the pulsar PSR B1257+12. Since then, the list of exoplanets has grown into the thousands. Most known exoplanets are hot Jupiters, which orbit very close to their star. Many exoplanets are super-Earths, which could be either gas dwarfs or large rocky planets, like Kepler-442b, with a mass about 2.36 times Earth's. Star Unstable stars include young and old stars, and very large or very small stars. Unstable stars have varying luminosity, which changes the size of the habitable zones. Unstable stars also produce extreme flares and coronal mass ejections, which can strip away a planet's atmosphere, a loss that cannot be replaced. Habitable zones for life therefore require a very stable star like the Sun, with luminosity variations of about ±0.1%. Finding a stable star like the Sun is the search for a solar twin; some solar analogs have been found. Proper stellar metallicity, size, mass, age, color, and temperature are also very important for low luminosity variations. The Sun is unusual in being metal-rich for its age and type, a G2V star. The Sun is currently in its most stable stage and has the correct metallicity to make it very stable. Dwarf stars (red dwarf/orange dwarf/brown dwarf/subdwarf) are not only unstable, but also emit low energy, so the habitable zone is very close to the star and planets become tidally locked on the timescales needed for the development of life. Giant stars (subgiant/giant star/red giant/red supergiant) are unstable and emit high energy, so the habitable zone is very far from the star. Multiple-star systems are very common but are not suitable for complex life, as planetary orbits would be unstable due to multiple sources of gravity and stellar radiation. Liquid water is nonetheless possible in multiple-star systems. Named habitable zones A conventional habitable zone is defined by liquid water. Habitable zone (HZ), also called the circumstellar habitable zone: the range of orbits around a star that would allow liquid water to remain, at least for a given period of time, on at least a small part of the planet's surface. Thus, within the HZ, water (H2O) can stay between its freezing and boiling temperatures. This zone is a temperature zone, set by the star's radiation and the distance from the star. In the Solar System, the planet Mars is just at the outer boundary of the habitable zone. The planet Venus is at the inner edge of the habitable zone, but due to its thick atmosphere it has no water. The HZ includes planets with elliptical orbits; such planets may move into and out of the HZ. When a planet leaves the HZ, its water would freeze to ice beyond the outer edge, or turn to steam inside the inner edge. The HZ could be defined as the region where bacteria, a form of life, could possibly survive for a short period of time. 
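To illustrate how such a temperature-defined zone scales with the brightness of the host star, here is a minimal Python sketch. It uses the common approximation that the habitable-zone distance scales with the square root of stellar luminosity; the flux boundaries chosen here are illustrative round numbers, not values taken from this article.

import math

def habitable_zone_au(luminosity_solar,
                      inner_flux=1.1,    # illustrative stellar flux at the inner edge (Earth = 1.0)
                      outer_flux=0.35):  # illustrative stellar flux at the outer edge
    """Rough inner/outer habitable-zone distances in AU for a star of the
    given luminosity (in solar units). Distance scales as sqrt(L / S),
    where S is the stellar flux received relative to Earth."""
    inner_au = math.sqrt(luminosity_solar / inner_flux)
    outer_au = math.sqrt(luminosity_solar / outer_flux)
    return inner_au, outer_au

# Example: a Sun-like star (L = 1) versus a dim red dwarf (L = 0.02)
for lum in (1.0, 0.02):
    inner, outer = habitable_zone_au(lum)
    print(f"L = {lum:>5} L_sun -> HZ roughly {inner:.2f} to {outer:.2f} AU")

Under these assumed flux limits, the zone around a dim red dwarf sits much closer to the star than Earth's orbit, which is why such planets tend to become tidally locked, as discussed above.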
The HZ is also sometimes called the "Goldilocks" zone. Optimistic habitable zone (OHZ): a zone where liquid surface water could have existed on a planet at some time in its past. This zone would be larger than the HZ. Mars is an example of a planet in the OHZ: it is just beyond the HZ today, but had liquid water for a short time span before the Mars carbonate catastrophe, some 4 billion years ago. Continuously habitable zone (CHZ): a zone where liquid water persists on the surface of a planet over a very long span of time. This requires a near-circular planetary orbit and a stable star. The zone may be much smaller than the habitable zone. Conservative habitable zone: a zone where liquid surface water remains on a planet over a long time span, as on Earth. This might also need a greenhouse effect provided by gases such as CO2 and water vapor to maintain the correct temperature. Rayleigh scattering would also be needed. Named habitable zones for complex life Over time and with more research, astronomers, cosmologists and astrobiologists have discovered more parameters needed for life. Each parameter could have a corresponding zone. Some of the named zones include: Ultraviolet habitable zone: a zone where the ultraviolet (UV) radiation from a star is neither too weak nor too strong for life to exist. Life needs the correct amount of ultraviolet for synthesis of biochemicals. The extent of the zone depends on the amount of ultraviolet radiation from the star, the range of UV wavelengths, the age of the star, and the atmosphere of the planet. In humans, UV is used to produce vitamin D. Extreme ultraviolet (EUV) can cause atmospheric loss. Photosynthetic habitable zone: a zone where both long-term liquid water and oxygenic photosynthesis can occur. Tropospheric habitable zone, or ozone habitable zone: a zone where the planet would have the correct amount of ozone needed for life. Inhaling too much ozone causes inflammation and irritation, whereas too little tropospheric ozone would produce biochemical smog. On Earth, tropospheric ozone is part of the ground-level ozone protection. Tropospheric ozone is formed by the interaction of ultraviolet light with hydrocarbons and nitrogen oxides. Planet rotation rate habitable zone: the zone where a planet's rotation rate is best for life. If rotation is too slow, the day/night temperature difference is too great. The rotation rate also changes the planet's reflectivity and thus its temperature. A fast rotation rate increases wind speed on the planet. The rotation rate affects the planet's clouds and their reflectivity. Slowing the rotation rate changes cloud distributions, cloud altitudes, and cloud opacities. These changes in the clouds change the temperature of the planet. A high rotation rate can also cause continuous, very fast winds on the surface. Planet rotation axis tilt habitable zone, or obliquity habitable zone: the region where a stable axial tilt for a planet's rotation is maintained. Earth's axis is tilted 23.5°; this gives seasons, providing snow and ice that can melt to provide water runoff in the summer. Obliquity has a major impact on a planet's temperature, and thus on its habitable zone. Tidal habitable zone: planets too close to the star become tidally locked. The mass of the star and the distance from the star set the tidal habitable zone. A tidally locked planet has one side always facing the star; this side would be very hot, while the side facing away from the star would be well below freezing. 
A planet too close to the star will also experience tidal heating from the star. Tidal heating can vary the planet's orbital eccentricity. Too far from the star, and the planet will not receive enough stellar heat. Astrosphere habitable zone: the zone in which a planet's astrosphere will be strong enough to protect the planet from the stellar wind and cosmic rays. The astrosphere must be long-lasting to protect the planet. Mars lost its water and most of its atmosphere after losing its magnetic field and after the Mars carbonate catastrophe event. A star's wind, like the Sun's solar wind, is a plasma of charged particles, including electrons, protons and alpha particles. The stellar wind is different for each star. Earth's magnetic field is very large and has protected Earth since its formation. Atmosphere electric field habitable zone: the place in which the ambipolar electric field is correct for the planet's electric field to help ions overcome gravity. The planet's ionosphere must be correct to protect against the loss of the atmosphere. This is in addition to a strong magnetic field to protect against the stellar wind stripping away the atmosphere and water into outer space. Orbital eccentricity habitable zone: the zone in which planets maintain a nearly circular orbit, since eccentric orbits carry planets into and out of the habitable zones. In the Solar System, the grand tack hypothesis proposes an explanation for the unique placement of the gas giants, the Solar System's belts, and the planets' near-circular orbits. Coupled planet–moon magnetosphere habitable zone: the zone in which a planet's moon and the planet's core together produce a magnetosphere (magnetic field) strong enough to protect against the stellar wind stripping away the planet's atmosphere and water into outer space. Mars, by contrast, had a magnetic field for only a short time. Earth's Moon had a large magnetosphere for several hundred million years after its formation, as proposed in a 2020 study by Saied Mighani. The Moon's magnetosphere would have given added protection to Earth's atmosphere, as the early Sun was not as stable as it is today. In 2020, James Green modeled the coupled planet-moon-magnetosphere habitable zone. The modeling showed a coupled planet–moon magnetosphere that would give a planet protection from the stellar wind in the early Solar System. In the case of Earth, the Moon was closer to Earth early in the formation of the Solar System, giving added protection. This protection was needed then, as the Sun was less stable. Pressure-dependent habitable zone: the zone in which planets may have the correct atmospheric pressure to have liquid surface water. With a low atmospheric pressure, the temperature at which water boils is much lower, and at pressures below that of the triple point, liquid water cannot exist. The average surface pressure on Mars today is close to that of the triple point of water; thus, liquid water cannot exist there. Planets with high-pressure atmospheres may have liquid surface water, but life forms would have difficulty with respiratory systems in high-pressure atmospheres. Galactic habitable zone (GHZ): the GHZ, also called the Galactic Goldilocks zone, is the place in a galaxy in which the heavy elements needed for a rocky planet and life are present, but also a place where strong cosmic rays will not kill life and strip the atmosphere off the planet. The term Goldilocks zone is used because it is a fine balance between the two requirements (heavy elements versus strong cosmic rays). 
The galactic habitable zone is the place where a planet will have the parameters needed to support life. Not all galaxies are able to support life. In many galaxies, life-killing events such as gamma-ray bursts can occur. About 90% of galaxies have gamma-ray bursts that are too long and too frequent, and thus no life. Cosmic rays pose a threat to life. Galaxies with many stars too close together, or without any dust protection, are also not hospitable to life. Irregular galaxies and other small galaxies do not have enough heavy elements. Elliptical galaxies are full of lethal radiation and lack heavy elements. Large spiral galaxies, like the Milky Way, have the heavy elements needed for life in their centers and out to about half the distance from the central bar. Not all large spiral galaxies are the same: spiral galaxies with too much active star formation can kill the galaxy and its life, while too little star formation will cause the spiral arms to collapse. Not all spiral galaxies have the correct galactic ram pressure stripping parameters; too much ram pressure can deplete the galaxy of gas and thus end star formation. The Milky Way is a barred spiral galaxy; the bar is important to star formation and to the metallicity of the galaxy's stars and planets. A barred spiral galaxy must have stable arms with just the right star formation. Bars occur in about 65% of spiral galaxies, but most have too much star formation. Peculiar galaxies lack stable spiral arms, while irregular galaxies contain too many new stars and lack heavy elements. Unbarred spiral galaxies do not have the correct star formation and metallicity for a galactic Goldilocks zone. For long-term life on a planet, the spiral arms must be stable for a long period of time, as in the Milky Way. The spiral arms must not be too close to each other, or there will be too much ultraviolet radiation. If a planetary system moves into or across a spiral arm, the orbits of its planets could change from gravitational disturbances. Movement across a spiral arm would also cause deadly asteroid impacts and high radiation. The planet must be in the correct place in the spiral galaxy: near the galactic center, radiation and gravitational forces are too great for life, whereas the outskirts of a spiral galaxy are metal-poor. The Sun is 28,000 light-years from the central bar, in the galactic Goldilocks zone. At this distance, the Sun revolves in the galaxy at the same rate as the spiral-arm rotation, thus minimizing arm crossings. Supergalactic habitable zone: a place in a supercluster of galaxies that can provide for habitability of planets. The supergalactic habitable zone takes into account events that can end habitability not only in one galaxy but in all nearby galaxies, such as galaxy mergers, active galactic nuclei, starburst galaxies, supermassive black holes and merging black holes, all of which output intense radiation. The supergalactic habitable zone also takes into account the abundance of various chemical elements in the galaxy, as not all galaxies, or regions within them, have all the elements needed for life. Habitable zone for complex life (HZCL): the place where all the habitable zones for life overlap for a long period of time, as in the Solar System. The list of habitable zones for complex life has grown longer with increasing understanding of the Universe, galaxies, and the Solar System. Complex life is normally defined as eukaryotic life forms, including all animals, plants, fungi, and many unicellular organisms. Simple life forms are normally defined as prokaryotes. 
Other orbital-distance related factors Some factors that depend on planetary distance and may limit complex aerobic life have not been given zone names. These include: Milankovitch cycle The Milankovitch cycles and ice ages have been key in shaping Earth. Life on Earth today uses water melting from the last ice age. Ice ages cannot be too long or too cold if life is to survive. The Milankovitch cycles also have an impact on the planet's obliquity. Life Life on Earth is carbon-based. However, some theories suggest that life could be based on other chemistries. Other proposed bases have included silicon, boron, arsenic, ammonia, methane and others. As more research has been done on life on Earth, it has been found that only carbon's organic molecules have the complexity and stability to form life. Carbon's properties allow for the complex covalent bonding needed for organic chemistry. Carbon molecules are lightweight and relatively small in size. Carbon's ability to bond to oxygen, hydrogen, nitrogen, phosphorus, and sulfur (together called CHNOPS) is key to life. Gallery See also Exoplanet orbital and physical parameters Habitability of natural satellites – liquid water on a moon Habitability of yellow dwarf systems – liquid water in yellow dwarf star systems Habitability of red dwarf systems – liquid water in red dwarf star systems Planetary habitability in the Solar System – liquid water in our Solar System Habitability of binary star systems – liquid water around binary stars Habitability of F-type main-sequence star systems – liquid water on planets orbiting F-type stars Superhabitable planet – a hypothetical exoplanet References Planetary habitability Astronomical hypotheses Extraterrestrial life Extraterrestrial water
Habitable zone for complex life
Astronomy,Biology
3,307
5,966,396
https://en.wikipedia.org/wiki/Transpose%20graph
In the mathematical and algorithmic study of graph theory, the converse, transpose or reverse of a directed graph G is another directed graph on the same set of vertices with all of the edges reversed compared to the orientation of the corresponding edges in G. That is, if G contains an edge (u, v) then the converse/transpose/reverse of G contains an edge (v, u) and vice versa. Notation The name converse arises because the reversal of arrows corresponds to taking the converse of an implication in logic. The name transpose is because the adjacency matrix of the transpose directed graph is the transpose of the adjacency matrix of the original directed graph. There is no general agreement on preferred terminology. The converse is denoted symbolically as G^T, G^R, or other notations, depending on which terminology is used and which book or article is the source for the notation. Applications Although there is little difference mathematically between a graph and its transpose, the difference may be larger in computer science, depending on how a given graph is represented. For instance, for the web graph, it is easy to determine the outgoing links of a vertex, but hard to determine the incoming links, while in the reversal of this graph the opposite is true. In graph algorithms, therefore, it may sometimes be useful to construct an explicit representation of the reversal of a graph, in order to put the graph into a form which is more suitable for the operations being performed on it. An example of this is Kosaraju's algorithm for strongly connected components, which applies depth-first search twice, once to the given graph and a second time to its reversal. Related concepts A skew-symmetric graph is a graph that is isomorphic to its own transpose graph, via a special kind of isomorphism that pairs up all of the vertices. The converse relation of a binary relation is the relation that reverses the ordering of each pair of related objects. If the relation is interpreted as a directed graph, this is the same thing as the transpose of the graph. In particular, the dual order of a partial order can be interpreted in this way as the transposition of a transitively-closed directed acyclic graph. See also References Graph operations Directed graphs
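As a minimal illustration (not tied to any particular published implementation), the following Python sketch builds the transpose of a directed graph stored as an adjacency list. This is exactly the explicit reversal that algorithms such as Kosaraju's construct before running the second depth-first search.

def transpose(graph):
    """Return the transpose (reverse) of a directed graph.

    `graph` maps each vertex to an iterable of its successors; the result
    maps each vertex to the list of its predecessors, i.e. every edge
    (u, v) becomes (v, u)."""
    reversed_graph = {u: [] for u in graph}
    for u, successors in graph.items():
        for v in successors:
            reversed_graph.setdefault(v, []).append(u)
    return reversed_graph

# Example: edges a->b, a->c, b->c become b->a, c->a, c->b.
g = {"a": ["b", "c"], "b": ["c"], "c": []}
print(transpose(g))   # {'a': [], 'b': ['a'], 'c': ['a', 'b']}

The construction visits every edge once, so it runs in time linear in the size of the graph, matching the adjacency-matrix view in which transposition simply swaps rows and columns.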
Transpose graph
Mathematics
446
2,787
https://en.wikipedia.org/wiki/Astrobiology
Astrobiology (also xenology or exobiology) is a scientific field within the life and environmental sciences that studies the origins, early evolution, distribution, and future of life in the universe by investigating its deterministic conditions and contingent events. As a discipline, astrobiology is founded on the premise that life may exist beyond Earth. Research in astrobiology comprises three main areas: the study of habitable environments in the Solar System and beyond, the search for planetary biosignatures of past or present extraterrestrial life, and the study of the origin and early evolution of life on Earth. The field of astrobiology has its origins in the 20th century with the advent of space exploration and the discovery of exoplanets. Early astrobiology research focused on the search for extraterrestrial life and the study of the potential for life to exist on other planets. In the 1960s and 1970s, NASA began its astrobiology pursuits within the Viking program, which was the first US mission to land on Mars and search for signs of life. This mission, along with other early space exploration missions, laid the foundation for the development of astrobiology as a discipline. Regarding habitable environments, astrobiology investigates potential locations beyond Earth that could support life, such as Mars, Europa, and exoplanets, through research into the extremophiles populating austere environments on Earth, like volcanic and deep-sea environments. Research within this topic is conducted utilising the methodology of the geosciences, especially geobiology, for astrobiological applications. The search for biosignatures involves the identification of signs of past or present life in the form of organic compounds, isotopic ratios, or microbial fossils. Research within this topic is conducted utilising the methodology of planetary and environmental science, especially atmospheric science, for astrobiological applications, and is often conducted through remote sensing and in situ missions. Astrobiology also concerns the study of the origin and early evolution of life on Earth to try to understand the conditions that are necessary for life to form on other planets. This research seeks to understand how life emerged from non-living matter and how it evolved to become the diverse array of organisms we see today. Research within this topic is conducted utilising the methodology of the paleosciences, especially paleobiology, for astrobiological applications. Astrobiology is a rapidly developing field with a strong interdisciplinary aspect that holds many challenges and opportunities for scientists. Astrobiology programs and research centres are present in many universities and research institutions around the world, and space agencies like NASA and ESA have dedicated departments and programs for astrobiology research. Overview The term astrobiology was first proposed by the Russian astronomer Gavriil Tikhov in 1953. It is etymologically derived from the Greek ἄστρον (astron), "star"; βίος (bios), "life"; and -λογία, -logia, "study". A close synonym is exobiology, from the Greek ἔξω, "external"; βίος, "life"; and -λογία, -logia, "study", coined by American molecular biologist Joshua Lederberg; exobiology is considered to have a narrow scope limited to the search for life external to Earth. 
Another associated term is xenobiology, from the Greek ξένος, "foreign"; βίος, "life"; and -λογία, "study", coined by American science fiction writer Robert Heinlein in his work The Star Beast; xenobiology is now used in a more specialised sense, referring to 'biology based on foreign chemistry', whether of extraterrestrial or terrestrial (typically synthetic) origin. While the potential for extraterrestrial life, especially intelligent life, has been explored throughout human history within philosophy and narrative, the question is a verifiable hypothesis and thus a valid line of scientific inquiry; planetary scientist David Grinspoon calls it a field of natural philosophy, grounding speculation on the unknown in known scientific theory. The modern field of astrobiology can be traced back to the 1950s and 1960s with the advent of space exploration, when scientists began to seriously consider the possibility of life on other planets. In 1957, the Soviet Union launched Sputnik 1, the first artificial satellite, which marked the beginning of the Space Age. This event led to an increase in the study of the potential for life on other planets, as scientists began to consider the possibilities opened up by the new technology of space exploration. In 1959, NASA funded its first exobiology project, and in 1960, NASA founded the Exobiology Program, now one of four main elements of NASA's current Astrobiology Program. In 1971, NASA funded Project Cyclops, part of the search for extraterrestrial intelligence, to search radio frequencies of the electromagnetic spectrum for interstellar communications transmitted by extraterrestrial life outside the Solar System. In the 1960s and 1970s, NASA established the Viking program, which was the first US mission to land on Mars and search for metabolic signs of present life; the results were inconclusive. In the 1980s and 1990s, the field began to expand and diversify as new discoveries and technologies emerged. The discovery of microbial life in extreme environments on Earth, such as deep-sea hydrothermal vents, helped to clarify the feasibility of potential life existing in harsh conditions. The development of new techniques for the detection of biosignatures, such as the use of stable isotopes, also played a significant role in the evolution of the field. The contemporary landscape of astrobiology emerged in the early 21st century, focused on utilising Earth and environmental science for applications within comparable space environments. Missions included the ESA's Beagle 2, which failed minutes after landing on Mars, NASA's Phoenix lander, which probed the environment for past and present planetary habitability of microbial life on Mars and researched the history of water, and NASA's Curiosity rover, currently probing the environment for past and present planetary habitability of microbial life on Mars. Theoretical foundations Planetary habitability Astrobiological research makes a number of simplifying assumptions when studying the necessary components for planetary habitability. Carbon and Organic Compounds: Carbon is the fourth most abundant element in the universe, and the energy required to make or break a bond is at just the appropriate level for building molecules which are not only stable, but also reactive. The fact that carbon atoms bond readily to other carbon atoms allows for the building of extremely long and complex molecules. 
As such, astrobiological research presumes that the vast majority of life forms in the Milky Way galaxy are based on carbon chemistries, as are all life forms on Earth. However, theoretical astrobiology entertains the potential for other organic molecular bases for life, thus astrobiological research often focuses on identifying environments that have the potential to support life based on the presence of organic compounds. Liquid water: Liquid water is a common molecule that provides an excellent environment for the formation of complicated carbon-based molecules, and is generally considered necessary for life as we know it to exist. Thus, astrobiological research presumes that extraterrestrial life similarly depends upon access to liquid water, and often focuses on identifying environments that have the potential to support liquid water. Some researchers posit environments of water-ammonia mixtures as possible solvents for hypothetical types of biochemistry. Environmental stability: Because organisms adaptively evolve to the conditions of the environments in which they reside, environmental stability is considered necessary for life to exist. This presupposes the necessity of stable temperature, pressure, and radiation levels; as a result, astrobiological research focuses on planets orbiting Sun-like stars and red dwarf stars. This is because very large stars have relatively short lifetimes, meaning that life might not have time to emerge on planets orbiting them; very small stars provide so little heat and warmth that only planets in very close orbits around them would not be frozen solid, and in such close orbits these planets would be tidally locked to the star; whereas the long lifetimes of red dwarfs could allow the development of habitable environments on planets with thick atmospheres. This is significant as red dwarfs are extremely common. (See also: Habitability of red dwarf systems). Energy source: It is assumed that any life elsewhere in the universe would also require an energy source. Previously, it was assumed that this would necessarily be from a Sun-like star; however, with developments in extremophile research, contemporary astrobiological research often focuses on identifying environments that have the potential to support life based on the availability of an energy source, such as the presence of volcanic activity on a planet or moon that could provide a source of heat and energy. It is important to note that these assumptions are based on our current understanding of life on Earth and the conditions under which it can exist. As our understanding of life and the potential for it to exist in different environments evolves, these assumptions may change. Methods Studying terrestrial extremophiles Astrobiological research concerning the study of habitable environments in our solar system and beyond utilises methods within the geosciences. Research within this branch primarily concerns the geobiology of organisms that can survive in extreme environments on Earth, such as in volcanic or deep-sea environments, to understand the limits of life, and the conditions under which life might be able to survive on other planets. This includes, but is not limited to: Deep-sea extremophiles: Researchers are studying organisms that live in the extreme environments of deep-sea hydrothermal vents and cold seeps. These organisms survive in the absence of sunlight, and some are able to survive in high temperatures and pressures, and use chemical energy instead of sunlight to produce food. 
Desert extremophiles: Researchers are studying organisms that can survive in extremely dry, high-temperature conditions, such as in deserts. Microbes in extreme environments: Researchers are investigating the diversity and activity of microorganisms in environments such as deep mines, subsurface soil, cold glaciers and polar ice, and high-altitude environments. Researching Earth's present environment Research also concerns the long-term survival of life on Earth, and the possibilities and hazards of life on other planets, including: Biodiversity and ecosystem resilience: Scientists are studying how the diversity of life and the interactions between different species contribute to the resilience of ecosystems and their ability to recover from disturbances. Climate change and extinction: Researchers are investigating the impacts of climate change on different species and ecosystems, and how they may lead to extinction or adaptation. This includes the evolution of Earth's climate and geology, and their potential impact on the habitability of the planet in the future, especially for humans. Human impact on the biosphere: Scientists are studying the ways in which human activities, such as deforestation, pollution, and the introduction of invasive species, are affecting the biosphere and the long-term survival of life on Earth. Long-term preservation of life: Researchers are exploring ways to preserve samples of life on Earth for long periods of time, such as cryopreservation and genomic preservation, in the event of a catastrophic event that could wipe out most of life on Earth. Finding biosignatures on other worlds Emerging astrobiological research concerning the search for planetary biosignatures of past or present extraterrestrial life utilises methodologies within the planetary sciences. These include: The study of microbial life in the subsurface of Mars: Scientists are using data from Mars rover missions to study the composition of the subsurface of Mars, searching for biosignatures of past or present microbial life. The study of liquid bodies on icy moons: Discoveries of surface and subsurface bodies of liquid on moons such as Europa, Titan and Enceladus showed possible habitability zones, making them viable targets for the search for extraterrestrial life. Missions like Europa Clipper and Dragonfly are planned to search for biosignatures within these environments. The study of the atmospheres of planets: Scientists are studying the potential for life to exist in the atmospheres of planets, with a focus on the study of the physical and chemical conditions necessary for such life to exist, namely the detection of organic molecules and biosignature gases; for example, the study of the possibility of life in the atmospheres of exoplanets that orbit red dwarfs and the study of the potential for microbial life in the upper atmosphere of Venus. Telescopes and remote sensing of exoplanets: The discovery of thousands of exoplanets has opened up new opportunities for the search for biosignatures. Scientists are using telescopes such as the James Webb Space Telescope and the Transiting Exoplanet Survey Satellite to search for biosignatures on exoplanets. They are also developing new techniques for the detection of biosignatures, such as the use of remote sensing to search for biosignatures in the atmospheres of exoplanets. 
Talking to extraterrestrials SETI and CETI: Scientists search for signals from intelligent extraterrestrial civilizations using radio and optical telescopes within the discipline of extraterrestrial intelligence communications (CETI). CETI focuses on composing and deciphering messages that could theoretically be understood by another technological civilization. Communication attempts by humans have included broadcasting mathematical languages, pictorial systems such as the Arecibo message, and computational approaches to detecting and deciphering 'natural' language communication. While some high-profile scientists, such as Carl Sagan, have advocated the transmission of messages, theoretical physicist Stephen Hawking warned against it, suggesting that aliens may raid Earth for its resources. Investigating the early Earth Emerging astrobiological research concerning the study of the origin and early evolution of life on Earth utilises methodologies within the palaeosciences. These include: The study of the early atmosphere: Researchers are investigating the role of the early atmosphere in providing the right conditions for the emergence of life, such as the presence of gases that could have helped to stabilise the climate and the formation of organic molecules. The study of the early magnetic field: Researchers are investigating the role of the early magnetic field in protecting the Earth from harmful radiation and helping to stabilise the climate. This research has immense astrobiological implications where the subjects of current astrobiological research like Mars lack such a field. The study of prebiotic chemistry: Scientists are studying the chemical reactions that could have occurred on the early Earth that led to the formation of the building blocks of life- amino acids, nucleotides, and lipids- and how these molecules could have formed spontaneously under early Earth conditions. The study of impact events: Scientists are investigating the potential role of impact events- especially meteorites- in the delivery of water and organic molecules to early Earth. The study of the primordial soup: Researchers are investigating the conditions and ingredients that were present on the early Earth that could have led to the formation of the first living organisms, such as the presence of water and organic molecules, and how these ingredients could have led to the formation of the first living organisms. This includes the role of water in the formation of the first cells and in catalysing chemical reactions. The study of the role of minerals: Scientists are investigating the role of minerals like clay in catalysing the formation of organic molecules, thus playing a role in the emergence of life on Earth. The study of the role of energy and electricity: Scientists are investigating the potential sources of energy and electricity that could have been available on the early Earth, and their role in the formation of organic molecules, thus the emergence of life. The study of the early oceans: Scientists are investigating the composition and chemistry of the early oceans and how it may have played a role in the emergence of life, such as the presence of dissolved minerals that could have helped to catalyse the formation of organic molecules. The study of hydrothermal vents: Scientists are investigating the potential role of hydrothermal vents in the origin of life, as these environments may have provided the energy and chemical building blocks needed for its emergence. 
The study of plate tectonics: Scientists are investigating the role of plate tectonics in creating a diverse range of environments on the early Earth. The study of the early biosphere: Researchers are investigating the diversity and activity of microorganisms on the early Earth, and how these organisms may have played a role in the emergence of life. The study of microbial fossils: Scientists are investigating the presence of microbial fossils in ancient rocks, which can provide clues about the early evolution of life on Earth and the emergence of the first organisms. Research The systematic search for possible life outside Earth is a valid multidisciplinary scientific endeavor. However, hypotheses and predictions as to its existence and origin vary widely, and at present, the development of hypotheses firmly grounded on science may be considered astrobiology's most concrete practical application. It has been proposed that viruses are likely to be encountered on other life-bearing planets, and may be present even if there are no biological cells. Research outcomes To date, no evidence of extraterrestrial life has been identified. The Allan Hills 84001 meteorite, which was recovered in Antarctica in 1984 and originated from Mars, is thought by David McKay, as well as a few other scientists, to contain microfossils of extraterrestrial origin; this interpretation is controversial. Yamato 000593, the second largest meteorite from Mars, was found on Earth in 2000. At a microscopic level, spheres are found in the meteorite that are rich in carbon compared to surrounding areas that lack such spheres. The carbon-rich spheres may have been formed by biotic activity, according to some NASA scientists. On 5 March 2011, Richard B. Hoover, a scientist with the Marshall Space Flight Center, speculated on the finding of alleged microfossils similar to cyanobacteria in CI1 carbonaceous meteorites in the fringe Journal of Cosmology, a story widely reported on by mainstream media. However, NASA formally distanced itself from Hoover's claim. According to American astrophysicist Neil deGrasse Tyson: "At the moment, life on Earth is the only known life in the universe, but there are compelling arguments to suggest we are not alone." Elements of astrobiology Astronomy Most astronomy-related astrobiology research falls into the category of extrasolar planet (exoplanet) detection, the hypothesis being that if life arose on Earth, then it could also arise on other planets with similar characteristics. To that end, a number of instruments designed to detect Earth-sized exoplanets have been considered, most notably NASA's Terrestrial Planet Finder (TPF) and ESA's Darwin programs, both of which have been cancelled. NASA launched the Kepler mission in March 2009, and the French Space Agency launched the COROT space mission in 2006. There are also several less ambitious ground-based efforts underway. The goal of these missions is not only to detect Earth-sized planets but also to directly detect light from the planet so that it may be studied spectroscopically. By examining planetary spectra, it would be possible to determine the basic composition of an extrasolar planet's atmosphere and/or surface. Given this knowledge, it may be possible to assess the likelihood of life being found on that planet. A NASA research group, the Virtual Planet Laboratory, is using computer modeling to generate a wide variety of virtual planets to see what they would look like if viewed by TPF or Darwin. 
It is hoped that once these missions come online, their spectra can be cross-checked with these virtual planetary spectra for features that might indicate the presence of life. An estimate for the number of planets with intelligent communicative extraterrestrial life can be gleaned from the Drake equation, essentially an equation expressing the probability of intelligent life as the product of factors such as the fraction of planets that might be habitable and the fraction of planets on which life might arise: N = R* × fp × ne × fl × fi × fc × L, where: N = The number of communicative civilizations R* = The rate of formation of suitable stars (stars such as the Sun) fp = The fraction of those stars with planets (current evidence indicates that planetary systems may be common for stars like the Sun) ne = The number of Earth-sized worlds per planetary system fl = The fraction of those Earth-sized planets where life actually develops fi = The fraction of life sites where intelligence develops fc = The fraction of communicative planets (those on which electromagnetic communications technology develops) L = The "lifetime" of communicating civilizations However, whilst the rationale behind the equation is sound, it is unlikely that the equation will be constrained to reasonable limits of error any time soon. The problem with the formula is that it is not used to generate or support hypotheses because it contains factors that can never be verified. The first term, R*, the number of suitable stars, is generally constrained within a few orders of magnitude. The second and third terms, fp, stars with planets, and ne, planets with habitable conditions, are being evaluated for the star's neighborhood. Drake originally formulated the equation merely as an agenda for discussion at the Green Bank conference, but some applications of the formula have been taken literally and related to simplistic or pseudoscientific arguments. Another associated topic is the Fermi paradox, which suggests that if intelligent life is common in the universe, then there should be obvious signs of it. Another active research area in astrobiology is planetary system formation. It has been suggested that the peculiarities of the Solar System (for example, the presence of Jupiter as a protective shield) may have greatly increased the probability of intelligent life arising on Earth. Biology Biology cannot state that a process or phenomenon, by being mathematically possible, must necessarily exist on an extraterrestrial body. Biologists specify what is speculative and what is not. The discovery of extremophiles, organisms able to survive in extreme environments, became a core research element for astrobiologists, as they are important for understanding four areas in the limits of life in a planetary context: the potential for panspermia, forward contamination due to human exploration ventures, planetary colonization by humans, and the exploration of extinct and extant extraterrestrial life. Until the 1970s, life was thought to be entirely dependent on energy from the Sun. Plants on Earth's surface capture energy from sunlight to photosynthesize sugars from carbon dioxide and water, releasing oxygen in the process that is then consumed by oxygen-respiring organisms, passing their energy up the food chain. Even life in the ocean depths, where sunlight cannot reach, was thought to obtain its nourishment either from consuming organic detritus rained down from the surface waters or from eating animals that did. 
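Because the Drake equation is a plain product of the factors listed above, it is straightforward to evaluate numerically. The Python sketch below is only an illustration of that arithmetic; the input values are hypothetical placeholders chosen for the example, not estimates endorsed by this article, and every factor remains highly uncertain.

def drake_equation(r_star, f_p, n_e, f_l, f_i, f_c, lifetime):
    """Number of communicative civilizations N as the product of the
    Drake factors: N = R* * fp * ne * fl * fi * fc * L."""
    return r_star * f_p * n_e * f_l * f_i * f_c * lifetime

# Hypothetical illustrative values only.
n = drake_equation(r_star=1.0,    # suitable stars formed per year
                   f_p=0.5,       # fraction of those stars with planets
                   n_e=2.0,       # Earth-sized worlds per planetary system
                   f_l=0.1,       # fraction of those worlds where life develops
                   f_i=0.01,      # fraction of life sites developing intelligence
                   f_c=0.1,       # fraction that develop detectable communication
                   lifetime=10_000.0)  # years a civilization remains communicative
print(f"Estimated communicative civilizations: {n:.2f}")

With these placeholder inputs the product works out to 1.0, which illustrates the point made above: the answer is driven entirely by factors that cannot currently be verified.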
The world's ability to support life was thought to depend on its access to sunlight. However, in 1977, during an exploratory dive to the Galapagos Rift in the deep-sea exploration submersible Alvin, scientists discovered colonies of giant tube worms, clams, crustaceans, mussels, and other assorted creatures clustered around undersea volcanic features known as black smokers. These creatures thrive despite having no access to sunlight, and it was soon discovered that they form an entirely independent ecosystem. Although most of these multicellular lifeforms need dissolved oxygen (produced by oxygenic photosynthesis) for their aerobic cellular respiration and thus are not completely independent from sunlight by themselves, the basis for their food chain is a form of bacterium that derives its energy from oxidization of reactive chemicals, such as hydrogen or hydrogen sulfide, that bubble up from the Earth's interior. Other lifeforms entirely decoupled from the energy from sunlight are green sulfur bacteria which are capturing geothermal light for anoxygenic photosynthesis or bacteria running chemolithoautotrophy based on the radioactive decay of uranium. This chemosynthesis revolutionized the study of biology and astrobiology by revealing that life need not be sunlight-dependent; it only requires water and an energy gradient in order to exist. Biologists have found extremophiles that thrive in ice, boiling water, acid, alkali, the water core of nuclear reactors, salt crystals, toxic waste and in a range of other extreme habitats that were previously thought to be inhospitable for life. This opened up a new avenue in astrobiology by massively expanding the number of possible extraterrestrial habitats. Characterization of these organisms, their environments and their evolutionary pathways, is considered a crucial component to understanding how life might evolve elsewhere in the universe. For example, some organisms able to withstand exposure to the vacuum and radiation of outer space include the lichen fungi Rhizocarpon geographicum and Rusavskia elegans, the bacterium Bacillus safensis, Deinococcus radiodurans, Bacillus subtilis, yeast Saccharomyces cerevisiae, seeds from Arabidopsis thaliana ('mouse-ear cress'), as well as the invertebrate animal Tardigrade. While tardigrades are not considered true extremophiles, they are considered extremotolerant microorganisms that have contributed to the field of astrobiology. Their extreme radiation tolerance and presence of DNA protection proteins may provide answers as to whether life can survive away from the protection of the Earth's atmosphere. Jupiter's moon, Europa, and Saturn's moon, Enceladus, are now considered the most likely locations for extant extraterrestrial life in the Solar System due to their subsurface water oceans where radiogenic and tidal heating enables liquid water to exist. The origin of life, known as abiogenesis, distinct from the evolution of life, is another ongoing field of research. Oparin and Haldane postulated that the conditions on the early Earth were conducive to the formation of organic compounds from inorganic elements and thus to the formation of many of the chemicals common to all forms of life we see today. The study of this process, known as prebiotic chemistry, has made some progress, but it is still unclear whether or not life could have formed in such a manner on Earth. 
The alternative hypothesis of panspermia is that the first elements of life may have formed on another planet with even more favorable conditions (or even in interstellar space, asteroids, etc.) and then have been carried over to Earth. The cosmic dust permeating the universe contains complex organic compounds ("amorphous organic solids with a mixed aromatic-aliphatic structure") that could be created naturally, and rapidly, by stars. Further, a scientist suggested that these compounds may have been related to the development of life on Earth and said that, "If this is the case, life on Earth may have had an easier time getting started as these organics can serve as basic ingredients for life." More than 20% of the carbon in the universe may be associated with polycyclic aromatic hydrocarbons (PAHs), possible starting materials for the formation of life. PAHs seem to have been formed shortly after the Big Bang, are widespread throughout the universe, and are associated with new stars and exoplanets. PAHs are subjected to interstellar medium conditions and are transformed through hydrogenation, oxygenation and hydroxylation, to more complex organics—"a step along the path toward amino acids and nucleotides, the raw materials of proteins and DNA, respectively". In October 2020, astronomers proposed the idea of detecting life on distant planets by studying the shadows of trees at certain times of the day to find patterns that could be detected through observation of exoplanets. Philosophy David Grinspoon called astrobiology a field of natural philosophy. Astrobiology intersects with philosophy by raising questions about the nature and existence of life beyond Earth. Philosophical implications include the definition of life itself, issues in the philosophy of mind and cognitive science in case intelligent life is found, epistemological questions about the nature of proof, ethical considerations of space exploration, along with the broader impact of discovering extraterrestrial life on human thought and society. Dunér has emphasized philosophy of astrobiology as an ongoing existential exercise in individual and collective self-understanding, whose major task is constructing and debating concepts such as the concept of life. Key issues, for Dunér, are questions of resource money and monetary planning, epistemological questions regarding astrobiological knowledge, linguistics issues about interstellar communication, cognitive issues such as the definition of intelligence, along with the possibility of interplanetary contamination. Persson also emphasized key philosophical questions in astrobiology. They include ethical justification of resources, the question of life in general, the epistemological issues and knowledge about being alone in the universe, ethics towards extraterrestrial life, the question of politics and governing uninhabited worlds, along with questions of ecology. For von Hegner, the question of astrobiology and the possibility of astrophilosophy differs. For him, the discipline needs to bifurcate into astrobiology and astrophilosophy since discussions made possible by astrobiology, but which have been astrophilosophical in nature, have existed as long as there have been discussions about extraterrestrial life. Astrobiology is a self-corrective interaction among observation, hypothesis, experiment, and theory, pertaining to the exploration of all natural phenomena. 
Astrophilosophy consists of methods of dialectic analysis and logical argumentation, pertaining to the clarification of the nature of reality. Šekrst argues that astrobiology requires the affirmation of astrophilosophy, but not as a separate cognate to astrobiology. The stance of conceptual speciesism, according to Šekrst, permeates astrobiology, since the very name astrobiology tries to talk about not just biology, but about life in a general way, which includes terrestrial life as a subset. This leads us to either redefine philosophy, or consider the need for astrophilosophy as a more general discipline, of which philosophy is just a subset that deals with questions such as the nature of the human mind and other anthropocentric questions. Most of the philosophy of astrobiology deals with two main questions: the question of life and the ethics of space exploration. Kolb specifically emphasizes the question of viruses, for which the question of whether they are alive or not depends on definitions of life that include self-replication. Schneider tried to define exo-life, but concluded that we often start with our own prejudices and that defining extraterrestrial life seems futile using human concepts. For Dick, astrobiology relies on the metaphysical assumption that there is extraterrestrial life, which reaffirms questions in the philosophy of cosmology, such as fine-tuning or the anthropic principle. Rare Earth hypothesis The Rare Earth hypothesis postulates that multicellular life forms found on Earth may actually be more of a rarity than scientists assume. According to this hypothesis, life on Earth (and, even more so, multicellular life) is possible because of a conjunction of the right circumstances (galaxy and location within it, planetary system, star, orbit, planetary size, atmosphere, etc.); and the chance for all those circumstances to repeat elsewhere may be rare. It provides a possible answer to the Fermi paradox, which suggests, "If extraterrestrial aliens are common, why aren't they obvious?" It is apparently in opposition to the principle of mediocrity, assumed by famed astronomers Frank Drake, Carl Sagan, and others. The principle of mediocrity suggests that life on Earth is not exceptional, and it is more than likely to be found on innumerable other worlds. Missions Research into the environmental limits of life and the workings of extreme ecosystems is ongoing, enabling researchers to better predict what planetary environments might be most likely to harbor life. Missions such as the Phoenix lander, Mars Science Laboratory, ExoMars, the Mars 2020 rover to Mars, and the Cassini probe to Saturn's moons aim to further explore the possibilities of life on other planets in the Solar System. Viking program The two Viking landers each carried four types of biological experiments to the surface of Mars in the late 1970s. These were the only Mars landers to carry out experiments looking specifically for metabolism by current microbial life on Mars. The landers used a robotic arm to collect soil samples into sealed test containers on the craft. The two landers were identical, so the same tests were carried out at two places on Mars' surface: Viking 1 near the equator and Viking 2 further north. The result was inconclusive, and is still disputed by some scientists. Norman Horowitz was the chief of the Jet Propulsion Laboratory bioscience section for the Mariner and Viking missions from 1965 to 1976. 
Horowitz considered that the great versatility of the carbon atom makes it the element most likely to provide solutions, even exotic solutions, to the problems of survival of life on other planets. However, he also considered that the conditions found on Mars were incompatible with carbon-based life. Beagle 2 Beagle 2 was an unsuccessful British Mars lander that formed part of the European Space Agency's 2003 Mars Express mission. Its primary purpose was to search for signs of life on Mars, past or present. Although it landed safely, it was unable to correctly deploy its solar panels and telecom antenna. EXPOSE EXPOSE is a multi-user facility mounted in 2008 outside the International Space Station dedicated to astrobiology. EXPOSE was developed by the European Space Agency (ESA) for long-term spaceflights that allow exposure of organic chemicals and biological samples to outer space in low Earth orbit. Mars Science Laboratory The Mars Science Laboratory (MSL) mission landed the Curiosity rover that is currently in operation on Mars. It was launched on 26 November 2011, and landed at Gale Crater on 6 August 2012. Mission objectives are to help assess Mars' habitability and, in doing so, determine whether Mars is or has ever been able to support life, collect data for a future human mission, study Martian geology and climate, and further assess the role that water, an essential ingredient for life as we know it, played in forming minerals on Mars. Tanpopo The Tanpopo mission is an orbital astrobiology experiment investigating the potential interplanetary transfer of life, organic compounds, and possible terrestrial particles in low Earth orbit. The purpose is to assess the panspermia hypothesis and the possibility of natural interplanetary transport of microbial life as well as prebiotic organic compounds. Early mission results show evidence that some clumps of microorganisms can survive for at least one year in space. This may support the idea that clumps of microorganisms greater than 0.5 millimeters across could be one way for life to spread from planet to planet. ExoMars rover ExoMars is a robotic mission to Mars to search for possible biosignatures of Martian life, past or present. This astrobiological mission was under development by the European Space Agency (ESA) in partnership with the Russian Federal Space Agency (Roscosmos); it was planned for a 2022 launch; however, technical and funding issues and the Russian invasion of Ukraine have forced ESA to repeatedly delay the rover's delivery to 2028. Mars 2020 Mars 2020 successfully landed its rover Perseverance in Jezero Crater on 18 February 2021. It will investigate environments on Mars relevant to astrobiology, investigate its surface geological processes and history, including the assessment of its past habitability and potential for preservation of biosignatures and biomolecules within accessible geological materials. The Science Definition Team proposed that the rover collect and package at least 31 samples of rock cores and soil for a later mission to bring back for more definitive analysis in laboratories on Earth. The rover could make measurements and technology demonstrations to help designers of a human expedition understand any hazards posed by Martian dust and demonstrate how to collect carbon dioxide (CO2), which could be a resource for making molecular oxygen (O2) and rocket fuel. 
Europa Clipper Europa Clipper is a mission launched by NASA on 14 October 2024 that will conduct detailed reconnaissance of Jupiter's moon Europa beginning in 2030, and will investigate whether its internal ocean could harbor conditions suitable for life. It will also aid in the selection of future landing sites. Dragonfly Dragonfly is a NASA mission scheduled to land on Titan in 2036 to assess its microbial habitability and study its prebiotic chemistry. Dragonfly is a rotorcraft lander that will perform controlled flights between multiple locations on the surface, which allows sampling of diverse regions and geological contexts. Proposed concepts Icebreaker Life Icebreaker Life is a lander mission that was proposed for NASA's Discovery Program for the 2021 launch opportunity, but it was not selected for development. It would have had a stationary lander that would be a near copy of the successful 2008 Phoenix lander, and it would have carried an upgraded astrobiology scientific payload, including a 1-meter-long core drill to sample ice-cemented ground in the northern plains to conduct a search for organic molecules and evidence of current or past life on Mars. One of the key goals of the Icebreaker Life mission was to test the hypothesis that the ice-rich ground in the polar regions has significant concentrations of organics due to protection by the ice from oxidants and radiation. Journey to Enceladus and Titan Journey to Enceladus and Titan (JET) is an astrobiology mission concept to assess the habitability potential of Saturn's moons Enceladus and Titan by means of an orbiter. Enceladus Life Finder Enceladus Life Finder (ELF) is a proposed astrobiology mission concept for a space probe intended to assess the habitability of the internal aquatic ocean of Enceladus, Saturn's sixth-largest moon. Life Investigation For Enceladus Life Investigation For Enceladus (LIFE) is a proposed astrobiology sample-return mission concept. The spacecraft would enter Saturn orbit and make multiple flybys through Enceladus' icy plumes to collect icy plume particles and volatiles and return them to Earth in a capsule. The spacecraft may sample Enceladus' plumes, the E ring of Saturn, and the upper atmosphere of Titan. Oceanus Oceanus is an orbiter proposed in 2017 for the New Frontiers mission No. 4. It would travel to Saturn's moon Titan to assess its habitability. Oceanus' objectives are to reveal Titan's organic chemistry, geology, gravity, and topography, collect 3D reconnaissance data, catalog the organics, and determine where they may interact with liquid water. Explorer of Enceladus and Titan Explorer of Enceladus and Titan (E2T) is an orbiter mission concept that would investigate the evolution and habitability of the Saturnian satellites Enceladus and Titan. The mission concept was proposed in 2017 by the European Space Agency. See also The Living Cosmos Citations General references The International Journal of Astrobiology, published by Cambridge University Press, is the forum for practitioners in this interdisciplinary field. Astrobiology, published by Mary Ann Liebert, Inc., is a peer-reviewed journal that explores the origin, evolution, distribution, and destiny of life in the universe. Loeb, Avi (2021). Extraterrestrial: The First Sign of Intelligent Life Beyond Earth. Houghton Mifflin Harcourt. Further reading D. Goldsmith, T. Owen, The Search For Life in the Universe, Addison-Wesley Publishing Company, 2001 (3rd edition). Andy Weir's 2021 novel, Project Hail Mary, centers on astrobiology. 
External links Astrobiology.nasa.gov UK Centre for Astrobiology Spanish Centro de Astrobiología Astrobiology Research at The Library of Congress Astrobiology Survey – An introductory course on astrobiology Summary - Search For Life Beyond Earth (NASA; 25 June 2021) Origin of life Astronomical sub-disciplines Branches of biology Speculative evolution
Astrobiology
Astronomy,Biology
8,184
78,380,033
https://en.wikipedia.org/wiki/3C%20309.1
3C 309.1 is a quasar located in the constellation of Ursa Minor. It has a redshift (z) of 0.90 and was first identified as an astronomical radio source in the Third Cambridge Catalogue of Radio Sources in 1966. This object contains a compact steep spectrum (CSS) source, and is classified as one of the brightest and largest of its kind. Description 3C 309.1 has a triple radio structure. It has a self-absorbed radio core extended along a position angle of 162° ± 2°. On both sides of the core, there are two relatively extended outer radio lobes having a defined position angle of 90°. At sub-arcsecond resolution, the structure is made up of several components. Three of them are aligned east–west while the others are located along the path of extended emission in a southern direction, clearly detected by two X-ray images. In two of the brightest components, there is polarized emission. However, when viewed at 5 GHz with milliarcsecond (mas) resolution, a bright core is found instead, straddled by two other weaker components with a separation of 8.7 kiloparsecs. Sub-milliarcsecond imaging shows the core to be compact with a more extended component located 20 mas to the south. The jet of 3C 309.1 is one-sided. It is found to be flaring away from the nucleus with a sharp change in brightness, likely caused by Kelvin-Helmholtz instabilities in the confined fluid flow and by pressure exerted by the confining medium. In Very Long Baseline Interferometry radio imaging, the jet is shown to extend from the core southwards to a distance of 260 parsecs (60 mas). Eastwards, it bends by 90° before fading rapidly. Furthermore, the jet is strongly polarized. The host galaxy of 3C 309.1 is a flat elliptical galaxy according to Hubble Space Telescope imaging. It has a major axis oriented along a position angle of 130°. Extensive emission-line gas is also seen surrounding the object at high pressure, with a massive cooling rate exceeding 1000 M☉ yr−1, implying its host galaxy might have been formed within a Hubble time. References External links 3C 309.1 on SIMBAD 3C 309.1 on NASA/IPAC Database Quasars Ursa Minor Active galaxies Astronomical objects discovered in 1966
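For readers who want to reproduce the link between the angular and physical scales quoted above, the following sketch shows how a milliarcsecond-scale angle at the quasar's redshift maps onto a projected length. It is an illustration only: it assumes the Python astropy library and the Planck 2018 cosmology, whereas the 260-parsec and 8.7-kiloparsec figures above were derived with the cosmological parameters adopted by the original papers, so the printed numbers differ somewhat.

# Convert an angular size at z = 0.90 into a projected physical size.
# Assumes astropy and the Planck 2018 cosmology (not the cosmology used in the
# papers quoted above, so the result differs from the 260 pc figure).
from astropy.cosmology import Planck18
import astropy.units as u

z = 0.90                     # redshift of 3C 309.1
jet_angle = 60 * u.mas       # VLBI jet extent quoted above

scale = Planck18.kpc_proper_per_arcmin(z)       # proper kpc per arcminute at this redshift
jet_length = jet_angle.to(u.arcmin) * scale     # projected physical length

print(f"Scale at z={z}: {scale:.1f}")
print(f"60 mas corresponds to {jet_length.to(u.pc):.0f}")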
3C 309.1
Astronomy
510
37,785,074
https://en.wikipedia.org/wiki/HD%2027631%20b
HD 27631 b is an extrasolar planet more massive than Jupiter. It orbits 3.25 AU from its parent star, HD 27631, taking about six years to complete one revolution. Its orbit is moderately eccentric, with an eccentricity of about 0.12. In 2023, the inclination and true mass of HD 27631 b were determined via astrometry. References External links Exoplanets discovered in 2011 Giant planets Horologium (constellation) Exoplanets detected by radial velocity Exoplanets detected by astrometry
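As a rough consistency check on the figures above (not taken from the article's sources), Kepler's third law in solar-system units links the 3.25 AU semi-major axis and the roughly six-year period to the mass of the host star; the planet's own mass is assumed to be negligible.

# Kepler's third law in solar units: P^2 [yr] = a^3 [AU] / M [solar masses].
a_au = 3.25      # semi-major axis quoted above, in astronomical units
p_yr = 6.0       # orbital period quoted above, in years

m_star = a_au**3 / p_yr**2       # implied host-star mass in solar masses
print(f"Implied stellar mass: {m_star:.2f} solar masses")   # roughly 0.95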
HD 27631 b
Astronomy
107
56,234,193
https://en.wikipedia.org/wiki/Ulocladium%20botrytis
Ulocladium botrytis is an anamorphic filamentous fungus belonging to the phylum Ascomycota. Commonly found in soil and damp indoor environments, U.botrytis is a hyphomycetous mould found in many regions of the world. It is also occasionally misidentified as a species of the genera Alternaria or Pithomyces due to morphological similarities. Ulocladium botrytis is rarely pathogenic to humans but is associated with human allergic responses and is used in allergy tests. Ulocladium botrytis has been implicated in some cases of human fungal nail infection. The fungus was first discovered in 1851 by German mycologist Carl Gottlieb Traugott Preuss. History and taxonomy The genus Ulocladium was first discovered in 1851 by German mycologist, Preuss, in a small batch of his specimens. An abundant hyphomycetous growth of Ulocladium was found on a thin sliver of wood and was drawn and labeled by Preuss as Ulocladium botrytis in his manuscript. This sample was later acquired by the Botanisches Museum in Berlin. At the time, the name of the genus and the species type was published as a nomen nudum due to insufficient description. Furthermore, certain taxa of Ulocladium greatly resemble Alternaria species, resulting in occasional misidentifications. During the late 1900s, a mycologist named Curran described Alternaria maritima as a species new to Ireland. However, Curran's new claim was questioned when another mycologist, Kohlmeyer, initiated a movement to verify the classification of this fungus. After much study, it was found that Alternaria maritima was in fact Ulocladium botrytis. Although Ulocladium is now a genus of its own, it was once included in the genus Alternaria. Several recent DNA-based phylogenetic studies have presented convincing data which places Ulocladium species within the genus Alternaria; however, Ulocladium species do not produce certain compounds and metabolites produced by Alternaria species. Some modern sources believe that Ulocladium botrytis should be considered conspecific with Ulocladium atrum. Growth and morphology Ulocladium botrytis is a hyphomycetous mould that favors growth in damp indoor environments. Although it mainly uses nitrogen, other nutrient sources have been tested to determine that U. botrytis growth rate is dependent on the type of media provided. Ulocladium botrytis colonies are commonly velvety in texture and grow in an assortment of colors ranging from dark blackish brown to black. The hyphae are 3-4 μm in diameter and yellow to golden brown in colour with a smooth or slightly rough texture. Conidiophores are short and either erect and ascending, or contorted into various shapes. In addition, they are often bifurcated near the apex at sharp angles. Ulocladium botrytis conidiophores are typically light golden brown in color and smooth, with a length of up to 100 μm and a thickness of around 3-5 μm. The conidia themselves are typically ellipsoidal or obovoid in shape; spheroidal conidia are uncommon in this species. They are golden brown in color and frequently have a minute hilum and a warty, verrucose exterior ornamentation. Ulocladium botrytis conidia typically have three transverse septa and longitudinal septum, but these septa rarely overlap to form a cross. This species never forms conidial chains and the conidia never have a beak. Physiology Ulocladium botrytis is an anamorphic fungus, thus it undergoes asexual reproduction. Although it is an asexual fungus, U. 
botrytis possesses the mating type locus, which consists of two dissimilar DNA sequences termed MAT1-1-1 and MAT1-2-1. These U. botrytis MAT genes are essential for controlling colony size and asexual traits such as conidial size and number in U.botrytis. The U. botrytis MAT genes have lost the ability to regulate sexual reproduction in U. botrytis; however, they have the ability to partially induce sexual reproduction in Cochliobolus heterostrophus, a heterothallic species, upon heterologous complementation. Ulocladium botrytis has cellulolytic ability and contains a cellulose-degrading enzyme complex that can degrade recalcitrant plant litter under alkaline conditions, a trait that is uncommon in other cellulolytic systems. This fungus' ability to hydrolyze cellulose in the solid form is best at a pH of 6.0, as this pH allows maximal growth of U. botrytis under alkaline conditions. In contrast, its ability to hydrolyze liquid cellulose under alkaline conditions is best at a pH of 8.0. Additionally, a new tyrosine kinase (p56tck) inhibitor called ulocladol, with the molecular formula C16H14O7, was found in ethyl acetate extract from U. botrytis. Ulocladium botrytis also synthesizes extracellular keratinases and can grow in the presence of keratin. Moreover, this fungus can produce carboxymethyl cellulase and protease on Eichhornia crassipes wastes. As a fungus, Ulocladium botrytis produces a diverse collection of chemical compounds and metabolites. It produces mixtures of volatile organic compounds that include terpenes, alcohols, ketones, and nitrogen-containing compounds. Furthermore, U. botrytis aids in decreasing aldehyde levels. Dodecane and 9,10,12,13-tetrahydroxyheneicosanoic acid were also found as metabolites of U. botrytis. Another U. botrytis metabolite is 1-hydroxy-6-methyl-8-(hydroxymethyl)xanthone, which has antimicrobial effects indicating its identification as an antifungal metabolite. Importantly, a major protein allergen of Alternaria alternata, termed Alt a 1, and an allergen homologous to it is expressed in the excretory-secretory materials of U. botrytis. Habitat and ecology The distribution of Ulocladium botrytis is fairly broad, wherein it has been found worldwide in areas of Europe, North America, Egypt, India, Pakistan, and Kuwait. It is often isolated from soil, where it is a common contaminant; however, U. botrytis also grows on rotten wood, paper, and other textiles or on dead herbaceous plants. It also heavily favors growth in damp indoor environments. This fungus has been found growing on deciduous alder trees (Alnus) which belong to the birch family Betulaceae. Trees in this family include the American green alder and the mountain alder. U. botrytis can also be found growing on the evergreen coniferous tree genus Pseudotsuga of the family Pinaceae; different trees include the Douglas fir and the big-cone spruce. In addition, this fungus can grow on the flowering plant genus Sphaeralcea of the mallow family Malvaceae; plants include the desert hollyhock and the prairie mallow. A previously conducted study also isolated a unique strain of Ulocladium botrytis, strain number 193A4, from the marine sponge Callyspongia vaginalis. Another independent study found seed-borne Ulocladium botrytis from pearl millet (Pennisetum typhoides). Relationships with other organisms coexisting in the same ecosystem has served to be beneficial for some organisms and this applies to U. botrytis. 
Ulocladium botrytis is capable of surviving in xerophilic ecosystems and alkaline-calcareous soils, both extreme habitats, when associating with the tree species Scutia buxifolia. The U. botrytis strain associated with this environment is called LPSC 813 and has great cellulolytic ability. Ulocladium botrytis has potential, albeit limited, to be used as a biocontrol agent against the parasitic herbaceous plant genus Orobanche, which affects the yield of certain crops like tomatoes. Ulocladium botrytis is also capable of in vitro antagonism of root-disease pathogens such as Heterobasidion annosum, Phellinus weirii, and Armillaria ostoyae. Apart from U. botrytis, other Ulocladium species such as U. atrum and U. oudemansii also present biocontrol potential. Impact on human health Ulocladium botrytis is currently regarded as a source of home allergen sensitization and is used in skin-prick tests that test for mould allergens and work-related allergens. This is due to the production and detection of Alt a 1, the major allergen produced by Alternaria alternata, in U. botrytis. In addition, U. botrytis also releases another allergen, homologous to Alt a 1, that possesses the capacity to cause allergic responses in humans. The allergic symptoms caused by U. botrytis are compatible with rhinitis and asthma; however, U. botrytis was also found in patients with allergic fungal sinusitis. Importantly, Ulocladium botrytis is rarely pathogenic to humans but has been found to be associated with cases of onychomycosis, a fungal infection of the nail. References Ulocladium Cereal diseases Fungal plant pathogens and diseases Fungi described in 1851 Fungus species
Ulocladium botrytis
Biology
2,083
1,123,761
https://en.wikipedia.org/wiki/Slit%20lamp
In ophthalmology and optometry, a slit lamp is an instrument consisting of a high-intensity light source that can be focused to shine a thin sheet of light into the eye. It is used in conjunction with a biomicroscope. The lamp facilitates an examination of the anterior segment and posterior segment of the human eye, which includes the eyelid, sclera, conjunctiva, iris, natural crystalline lens, and cornea. The binocular slit-lamp examination provides a stereoscopic magnified view of the eye structures in detail, enabling anatomical diagnoses to be made for a variety of eye conditions. A second, hand-held lens is used to examine the retina. History Two conflicting trends emerged in the development of the slit lamp. One trend originated from clinical research and aimed to apply the increasingly complex and advanced technology of the time. The second trend originated from ophthalmologic practice and aimed at technical perfection and a restriction to useful methods. The first man credited with developments in this field was Hermann von Helmholtz (1850) when he invented the ophthalmoscope. In ophthalmology and optometry, the instrument is called a "slit lamp", although it is more correctly called a "slit lamp instrument". Today's instrument is a combination of two separate developments, the corneal microscope and the slit lamp itself. The first concept of a slit lamp dates back to 1911 credited to Allvar Gullstrand and his "large reflection-free ophthalmoscope." The instrument was manufactured by Zeiss and consisted of a special illuminator connected to a small stand base through a vertical adjustable column. The base was able to move freely on a glass plate. The illuminator employed a Nernst glower which was later converted into a slit through a simple optical system. However, the instrument never received much attention and the term "slit lamp" did not appear in any literature again until 1914. It was not until 1919 that several improvements were made to the Gullstrand slit lamp made by Vogt Henker. First, a mechanical connection was made between lamp and ophthalmoscopic lens. This illumination unit was mounted to the table column with a double articulated arm. The binocular microscope was supported on a small stand and could be moved freely across the tabletop. Later, a cross slide stage was used for this purpose. Vogt introduced Koehler illumination, and the reddish Nernst glower was replaced with the brighter and whiter incandescent lamp. Special mention should be paid to the experiments that followed Henker's improvements in 1919. On his improvements the Nitra lamp was replaced with a carbon arc lamp with a liquid filter. At this time the great importance of color temperature and the luminance of the light source for slit lamp examinations were recognized and the basis created for examinations in red-free light. In the year 1926, the slit lamp instrument was redesigned. The vertical arrangement of the projector made it easy to handle. For the first time, the axis through the patient's eye was fixed along a common swiveling axis, although the instrument still lacked a coordinate cross-slide stage for instrument adjustment. The importance of focal illumination had not yet been fully recognized. In 1927, stereo cameras were developed and added to the slit lamp to further its use and application. In 1930, Rudolf Theil further developed the slit lamp, encouraged by Hans Goldmann. 
Horizontal and vertical co-ordinate adjustments were performed with three control elements on the cross-slide stage. The common swivel axis for microscope and illumination system was connected to the cross-slide stage, which allowed it to be brought to any part of the eye to be examined. A further improvement was made in 1938. A control lever or joystick was used for the first time to allow for horizontal movement. Following World War II the slit lamp was improved again. With this particular improvement, the slit projector could be swiveled continuously across the front of the microscope. This was improved again in 1950, when a company named Littmann redesigned the slit lamp. They adopted the joystick control from the Goldmann instrument and the illumination path present in the Comberg instrument. Additionally, Littmann added the stereo telescope system with a common objective magnification changer. In 1965, the Model 100/16 Slit Lamp was produced based on the slit lamp by Littmann. This was soon followed by the Model 125/16 Slit Lamp in 1972. The only difference between the two models was their operating distances of 100 mm and 125 mm, respectively. With the introduction of the photo slit lamp further advancements were possible. In 1976, the Model 110 Slit Lamp and the 210/211 Photo Slit Lamps introduced an innovation: each was constructed from standard modules, allowing for a wide range of different configurations. At the same time, halogen lamps replaced the older illumination systems, making them brighter and essentially of daylight quality. From 1994 onwards, new slit lamps were introduced which took advantage of new technologies. The last major development, in 1996, included new slit lamp optics. See also "From Lateral Illumination to Slit Lamp - An Outline of Medical History". General procedure While a patient is seated in the examination chair, they rest their chin and forehead on a support area to steady the head. Using the biomicroscope, the ophthalmologist or optometrist then proceeds to examine the patient's eye. A fine strip of paper, stained with fluorescein, a fluorescent dye, may be touched to the side of the eye; this stains the tear film on the surface of the eye to aid examination. The dye is naturally rinsed out of the eye by tears. A subsequent test may involve placing drops in the eye in order to dilate the pupils. The drops take about 15 to 20 minutes to work, after which the examination is repeated, allowing the back of the eye to be examined. Patients will experience some light sensitivity for a few hours after this exam, and the dilating drops may also cause increased pressure in the eye, leading to nausea and pain. Patients who experience serious symptoms are advised to seek medical attention immediately. Adults need no special preparation for the test; however children may need some preparation, depending on age, previous experiences, and level of trust. Illuminations Various methods of slit-lamp illumination are required to obtain the full advantage of the slit-lamp biomicroscope. There are six main types of illumination: diffuse illumination, direct focal illumination, specular reflection, transillumination or retroillumination, indirect lateral illumination or indirect proximal illumination, and sclerotic scatter. Oscillatory illumination is sometimes considered an illumination technique. Observation with an optical section or direct focal illumination is the most frequently applied method of examination with the slit lamp. 
With this method, the axes of the illuminating and viewing paths intersect in the area of the anterior eye media to be examined, for example, the individual corneal layers. Diffuse illumination If the media, especially that of the cornea, are opaque, optical section images are often impossible depending on severity. In these cases, diffuse illumination may be used to advantage. For this, the slit is opened very wide and a diffuse, attenuated survey illumination is produced by inserting a ground glass screen or diffuser in the illuminating path. "Wide beam" illumination is the only type that has the light source set wide open. Its main purpose is to illuminate as much of the eye and its adnexa as possible at once for general observation. Direct focal illumination Observation with an optical section or direct focal illumination is the most frequently applied method. It is achieved by directing a full-height, hairline to medium width, medium-bright beam obliquely into the eye and focusing it on the cornea so that a quadrilateral block of light illuminates the transparent media of the eye. The viewing arm and illuminating arm are kept parfocal. This type of illumination is useful for depth localization. Direct focal illumination is used for grading cells and flare in the anterior chamber by shortening the height of the beam to 2–1 mm. Specular reflection Specular reflection, or reflected illumination, is just like the patches of reflection seen on a sunlit lake's surface. To achieve specular reflection, the examiner directs a medium to narrow beam of light (it must be thicker than an optical section) toward the eye from the temporal side. The angle of illumination should be wide (50°-60°) relative to the examiner's axis of observation (which should be slightly nasal to the patient's visual axis). A bright zone of specular reflection will be evident on the temporal, midperipheral corneal epithelium. It is used to see the endothelial outline of the cornea. Transillumination or retroillumination In certain cases, illumination by optical section does not yield sufficient information or is impossible. This is the case, for example, when larger, extensive zones or spaces of the ocular media are opaque. In such cases the scattered light, which is normally not very bright, is absorbed. A similar situation arises when areas behind the crystalline lens are to be observed. In this case the observation beam must pass a number of interfaces that may reflect and attenuate the light. Indirect illumination With this method, light enters the eye through a narrow to medium slit (2 to 4 mm) to one side of the area to be examined. The axes of the illuminating and viewing paths do not intersect at the point of image focus; to achieve this, the illuminating prism is decentered by rotating it about its vertical axis off the normal position. In this way, reflected, indirect light illuminates the area of the anterior chamber or cornea to be examined. The observed corneal area then lies between the incident light section through the cornea and the irradiated area of the iris. Observation is thus against a comparatively dark background. Sclerotic scatter or scattering sclero-corneal illumination With this type of illumination, a wide light beam is directed onto the limbal region of the cornea at an extremely low angle of incidence and with a laterally de-centered illuminating prism. Adjustment must allow the light beam to transmit through the corneal parenchymal layers according to the principle of total reflection, allowing the interface with the cornea to be brightly illuminated. 
The magnification should be selected so that the entire cornea can be seen at a glance. Special techniques Fundus observation and gonioscopy with the slit lamp Fundus observation is generally performed via ophthalmoscopy, where the observer (fundus camera or observing eye) is focused to infinity, which brings the subject's fundus into focus due to the refractive power of the subject's optical media. In contrast, the microscope in slit lamp biomicroscopy is focused to the anterior segments of the eye, such that direct observation of the fundus is impossible due to the subject's refractive power. However, with the use of auxiliary optics, the fundus can be brought within the focusing range of the microscope. These optics usually take the form of a lens placed on or near the subject's cornea, and they range in optical properties and practical application. The Watzke–Allen test is a test used in the diagnosis of a full-thickness macular hole and also to assess retinal function after surgical closure of the hole, with the help of the slit lamp. Light filters Most slit lamps have five light filter options: unfiltered; heat absorption, for increased patient comfort; grey filter; red-free, for better visualisation of the nerve fibre layer, haemorrhages, and blood vessels; and cobalt blue, used after staining with fluorescein dye for seeing corneal ulcers, contact lens fitting, and Seidel's test. Cobalt blue light Slit lamps produce light of the wavelength 450 to 500 nm, known as "cobalt blue". This light is specifically useful for looking for problems in the eye once it has been stained with fluorescein. Types There are two distinct slit lamp types based on the location of their illumination system: Zeiss type In the Zeiss type slit lamp, the illumination is located below the microscope. This type of slit lamp is named after the manufacturing company Carl Zeiss. Haag Streit type In the Haag Streit type slit lamp, the illumination is located above the microscope. This type of slit lamp is named after the manufacturing company Haag Streit. Interpretation The slit lamp exam may detect many diseases of the eye, including: Cataract Conjunctivitis Corneal injury such as corneal ulcer or corneal swelling Diabetic retinopathy Fuchs' dystrophy Keratoconus (Fleischer ring) Macular degeneration Retinal detachment Retinal vessel occlusion Retinitis pigmentosa Sjögren's syndrome Toxoplasmosis Uveitis Wilson's disease (Kayser–Fleischer ring) A sign that may be seen in slit lamp examination is a "flare", which is when the slit-lamp beam is visible in the anterior chamber. This occurs when there is breakdown of the blood-aqueous barrier with resultant exudation of protein. References Further reading Vivino MA, Chintalagiri S, Trus B, Datiles M., "Development of a Scheimpflug slit lamp camera system for quantitative densitometric analysis", Computer Systems Laboratory, National Eye Institute, National Institutes of Health, Bethesda, MD. Eye (Lond). 1993;7 (Pt 6):791–798. "Slit-Lamp Gonioscopy." Postgraduate Medical Journal 39.451 (1963): 310. Jobe, Frederick W. Slit Lamp. United States BAUSCH & LOMB, assignee. Patent "2235319" March 1941. Nikon, Slit Lamp CS-1 Microscope, accessed February 6, 2011. Ledford, Janice K. and Sanders, Valerie N. "The slit lamp primer", 2nd ed., SLACK Incorporated, 2006. Schwartz, Gary S., "The eye exam: a complete guide", pp. 109–128 Slit Lamp Biomicroscopy, SLACK Incorporated, 2006. 
Koppenhöfer, Eilhard, "From Lateral Illumination to Slit Lamp – An Outline of Medical History", online published 2012 Ophthalmic equipment Types of lamp Optical devices Optical instruments
Slit lamp
Materials_science,Engineering
2,977
49,359,079
https://en.wikipedia.org/wiki/Arrhenia%20eburnea
Arrhenia eburnea is a species of agaric fungus in the family Hygrophoraceae. Found in Spain, it was described as new to science in 2003. It has white to ivory-colored fruit bodies with decurrent gills, and a smooth stipe. Its spores are smooth, hyaline, ellipsoid to somewhat cylindrical, and measure 9–11.5 by 4.5–6.5 μm. The specific epithet eburnea, derived from the Latin eburneus, refers to the yellowish-white hues of the fruit bodies. References External links Fungi described in 2003 Fungi of Europe Hygrophoraceae Fungus species
Arrhenia eburnea
Biology
141
25,336,602
https://en.wikipedia.org/wiki/Networked%20music%20performance
A networked music performance or network musical performance is a real-time interaction over a computer network that enables musicians in different locations to perform as if they were in the same room. These interactions can include performances, rehearsals, improvisation or jamming sessions, and situations for learning such as master classes. Participants may be connected by "high fidelity multichannel audio and video links" as well as MIDI data connections and specialized collaborative software tools. While not intended to be a replacement for traditional live stage performance, networked music performance supports musical interaction when co-presence is not possible and allows for novel forms of music expression. Remote audience members and possibly a conductor may also participate. History One of the earliest examples of a networked music performance experiment was the 1951 piece “Imaginary Landscape No. 4 for Twelve Radios” by composer John Cage. The piece “used radio transistors as a musical instrument. The transistors were interconnected thus influencing each other.” In the late 1970s, as personal computers were becoming more available and affordable, groups like the League of Automatic Music Composers began to experiment with linking multiple computers, electronic instruments, and analog circuitry to create novel forms of music. The 1990s saw several important experiments in networked performance. In 1993, The University of Southern California Information Sciences Institute began experimenting with networked music performance over the Internet. The Hub (band), which was formed by original members of The League of Automatic Composers, experimented in 1997 with sending MIDI data over Ethernet to distributed locations. However, “it was more difficult than imagined to debug all of the software problems on each of the different machines with different operating systems and CPU speeds in different cities”. In 1998, there was a three-way audio-only performance between musicians in Warsaw, Helsinki, and Oslo dubbed “Mélange à trois”. The early distributed performances all faced problems such as network delay, issues synchronizing signals, echo, and troubles with the acquisition and rendering of non-immersive audio and video. The development of high-speed internet over provisioned backbones, such as Internet2, made high quality audio links possible beginning in the early 2000s. One of the first research groups to take advantage of the improved network performance was the SoundWIRE group at Stanford University's CCRMA. That was soon followed by projects such as the Distributed Immersive Performance experiments, SoundJack, and DIAMOUSES. Awareness in musical performance Workspace awareness in a face-to-face situation is gathered through consequential communication, feedthrough, and intentional communication. A traditional music performance setting is an example of very tightly-coupled, synergistic collaboration in which participants have a high level of workspace awareness. “Each player must not only be conscious of his or her own part, but also of the parts of other musicians. The other musicians' gestures, facial expressions and bodily movements, as well as the sounds emitted by their instruments [are] clues to meanings and intentions of others”. Research has indicated that musicians are also very sensitive to the acoustic response of the environment in which they are performing. 
Ideally a networked music performance system would facilitate the high level of awareness that performers experience in a traditional performance setting. Technical issues in networked music performance Bandwidth demand, latency sensitivity, and a strict requirement for audio stream synchronization are the factors that make networked music performance a challenging application. These factors are described in more detail below. Bandwidth High definition audio streaming, which is used to make a networked music performance as realistic as possible, is considered to be one of the most bandwidth-demanding uses of today's networks. Latency One of the major issues with networked music performance is that latency is introduced into the audio as it is processed by a participant's local system and sent across the network. For interaction in a networked music performance to feel natural, the latency generally must be kept below 30 milliseconds, the bound of human perception. If there is too much delay in the system, it will make performance very difficult since musicians adjust their playing to coordinate the performance based on the sounds they hear created by other players. However, the characteristics of the piece being played, the musicians, and the types of instruments used ultimately define the tolerance. Synchronization cues may be used in a network music performance system that is designed for long latency situations. Audio stream synchronization Both end systems and networks must synchronize multiple audio streams from separate locations to form a consistent presentation of the music. This is a challenging problem for today's systems. Objectives of a networked music performance system The objectives of a networked music performance can be summarized as: It should allow musicians and possibly audience members and/or a conductor to collaborate from remote locations It should create a realistic immersive virtual space for synchronous, interactive performance It should support workspace awareness that allows participants to be aware of the actions of others in the virtual workspace and facilitate all forms of communication Current research SoundWIRE at CCRMA, Stanford University The SoundWIRE research group explores several research areas in the use of networks for music performance including: multi-channel audio streaming, physical models and virtual acoustics, the sonification of network performance, psychoacoustics, and networked music performance practice. The group has developed a software system, JackTrip, that supports multi-channel, high quality, uncompressed streaming audio for networked music performance over the internet. The Sonic Arts Research Centre The Sonic Arts Research Centre (SARC) at Queen's University Belfast has been a major player in carrying out network performances since 2006 and has been active in the use of networks as both collaborative and performance tools. The network team at SARC is led by Prof Pedro Rebelo and Dr Franziska Schroeder with varying set-ups of performers, instruments and compositional strategies. A group of artists and researchers has emerged around this field of distributed creativity at SARC and this has helped create a broader knowledge base and focus for activities. As a result, since 2007 SARC has a dedicated team of staff and students with knowledge and experience of network performance, which SARC refers to as "distributed creativity". 
Regular performances, workshops and collaborations with institutions such as SoundWire, CCRMA Stanford University, and RPI, led by composer and performer Pauline Oliveros, as well as with the University of São Paulo, have helped strengthen this emerging community of researchers and practitioners. The field is related to research on distributed creativity. Distributed Immersive Performance (DIP) experiments The Distributed Immersive Performance project is based at the Integrated Media Systems Center at the University of Southern California. Their experiments explore the challenges of creating a seamless environment for remote, synchronous collaboration. The experiments use 3D audio with correct spatial sound localization as well as HD or DV video projected onto wide screen displays to create an immersive virtual space. There are interaction sites set up at various locations on the University of Southern California campus and at several partner locations such as the New World Symphony in Miami Beach, Florida. DIAMOUSES The DIAMOUSES project is coordinated by the Music Informatics Lab at the Technological Education Institution of Crete in Greece. It supports a wide range of networked music performance scenarios with a customizable platform that handles the broadcasting and synchronization of audio and video signals across a network. Wireless Music Studio (WeMUST) The A3Lab team at Polytechnic University of the Marches conducts research on the use of the wireless medium for uncompressed audio networking in the NMP context. A mix of open source software, ARM platforms and dedicated wireless equipment has been documented, especially for outdoor use, where buildings of historical importance or difficult environments (e.g. sea) can be explored for the performance. A premiere of the system has been conducted with musicians playing a Stockhausen composition on different boats off the coast of Ancona, Italy. The project also aims at shifting music computing from laptops to embedded devices. See also Internet band CSCW CELT and Opus codecs designed for these applications Computer music Jamulus RTP-MIDI Comparison of Remote Music Performance Software SoundJack References External links Network Music Bibliography at Mendeley Computer networking
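The bandwidth and latency constraints discussed earlier can be made concrete with a short back-of-the-envelope calculation. The sketch below is illustrative only: the sample rate, bit depth, channel count, and buffer size are assumed values rather than figures taken from any of the systems named above.

# Rough figures for one uncompressed audio stream and for the delay that
# audio buffering alone adds to the ~30 ms interaction budget.
def stream_bandwidth_mbps(sample_rate_hz, bit_depth, channels):
    """Raw (uncompressed) PCM throughput in megabits per second."""
    return sample_rate_hz * bit_depth * channels / 1e6

def buffering_delay_ms(buffer_frames, sample_rate_hz):
    """One-way delay contributed by a single audio buffer, in milliseconds."""
    return 1000.0 * buffer_frames / sample_rate_hz

print(stream_bandwidth_mbps(48_000, 24, 2))   # ~2.3 Mbit/s for 48 kHz, 24-bit stereo
print(buffering_delay_ms(128, 48_000))        # ~2.7 ms per 128-frame buffer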
Networked music performance
Technology,Engineering
1,678
258,729
https://en.wikipedia.org/wiki/Rate%20of%20return%20pricing
Rate of return pricing or target-return pricing is a method by which a company sets the price of its product based on its desired return on that product. The concept of rate of return pricing is very similar to return on investment, but in this circumstance the company can manipulate its prices to achieve the desired goal. This method is used primarily by companies that either have a lot of capital or have a monopoly on the market, and when an investor requests a specific return on their investment. In a competitive market, rate of return pricing can be a poor strategy, as it focuses on the final profit margin and does not account for supply and demand factors. If a competitor is able to set a lower price, it could decrease demand for the product, resulting in lower sales than forecasted and failure to reach the desired profit margin. Formula The formula is: Target-return pricing = unit cost + [(desired return on investment * invested capital) / expected unit sales] Use Rate of return pricing enables firms to better assess the profitability of a product or service. It enables the cost of invested capital to be accounted for when setting the price per unit and can be used to forecast the end monetary return of an exercise. It also helps the company reach certain profit goals while maintaining liquidity. Additionally, if market conditions are stable, forecasts for returns will be extremely accurate, as a certain target is being used in pricing and achievement of that target depends solely on sales. References Pricing Financial ratios
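The formula above translates directly into a short calculation. The sketch below simply implements it; the cost, return, capital, and sales figures are invented for illustration.

# Target-return price = unit cost + (desired ROI * invested capital) / expected unit sales
def target_return_price(unit_cost, desired_return, invested_capital, expected_unit_sales):
    """Price per unit that recovers the unit cost plus the desired return on capital."""
    return unit_cost + (desired_return * invested_capital) / expected_unit_sales

# Example: $16 unit cost, 20% desired return on $1,000,000 of capital, 50,000 units expected
price = target_return_price(16.0, 0.20, 1_000_000, 50_000)
print(f"Target-return price: ${price:.2f}")    # $20.00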
Rate of return pricing
Mathematics
293
168,239
https://en.wikipedia.org/wiki/Lifting%20body
A lifting body is a fixed-wing aircraft or spacecraft configuration in which the body itself produces lift. In contrast to a flying wing, which is a wing with minimal or no conventional fuselage, a lifting body can be thought of as a fuselage with little or no conventional wing. Whereas a flying wing seeks to maximize cruise efficiency at subsonic speeds by eliminating non-lifting surfaces, lifting bodies generally minimize the drag and structure of a wing for subsonic, supersonic and hypersonic flight, or spacecraft re-entry. All of these flight regimes pose challenges for proper flight safety. Lifting bodies were a major area of research in the 1960s and 70s as a means to build a small and lightweight crewed spacecraft. The US built a number of lifting body rocket planes to test the concept, as well as several rocket-launched re-entry vehicles that were tested over the Pacific. Interest waned as the US Air Force lost interest in the crewed mission, and major development ended during the Space Shuttle design process when it became clear that the highly shaped fuselages made it difficult to fit fuel tankage. Advanced spaceplane concepts in the 1990s and 2000s did use lifting-body designs. Examples include the HL-20 Personnel Launch System (1990) and the Prometheus spaceplane (2010). The Dream Chaser lifting-body spaceplane, an extension of HL-20 technology, was proposed as one of three vehicles to potentially carry US crew to and from the International Space Station, but eventually was selected as a resupply vehicle instead. In 2015 the ESA Intermediate eXperimental Vehicle performed the first ever successful reentry of a lifting body spacecraft. History The lifting body had been imagined by 1917, in which year an aircraft with something like a delta wing plan form with a thick included fuselage was described in a patent by Roy Scroggs. However at low airspeeds the lifting body is inefficient and did not enter mainstream airplane design. Aerospace-related lifting body research arose from the idea of spacecraft re-entering the Earth's atmosphere and landing much like a regular airplane. Following atmospheric re-entry, the capsule spacecraft from the Mercury, Gemini, and Apollo series had very little control over where they landed. A steerable spacecraft with wings could significantly extend its landing envelope. However, the vehicle's wings would have to be designed to withstand the dynamic and thermal stresses of both re-entry and hypersonic flight. One proposal eliminated wings altogether: design the fuselage body to produce lift by itself. NASA's refinements of the lifting body concept began in 1962 with R. Dale Reed of NASA's Armstrong Flight Research Center. The first full-size model to come out of Reed's program was the NASA M2-F1, an unpowered craft made of wood. Initial tests were performed by towing the M2-F1 along a dry lakebed at Edwards Air Force Base California, behind a modified Pontiac Catalina. Later the craft was towed behind a C-47 and released. Since the M2-F1 was a glider, a small rocket motor was added in order to extend the landing envelope. The M2-F1 was soon nicknamed the "Flying Bathtub". In 1963, NASA began programs with heavier rocket-powered lifting-body vehicles to be air launched from under the starboard wing of a NB-52B, a derivative of the B-52 jet bomber. The first flights started in 1966. Of the Dryden lifting bodies, all but the unpowered NASA M2-F1 used an XLR11 rocket engine as was used on the Bell X-1. 
A follow-on design designated the Northrop HL-10 was developed at NASA Langley Research Center. Air flow separation caused the crash of the Northrop M2-F2 lifting body. The HL-10 attempted to solve part of this problem by angling the port and starboard vertical stabilizers outward and enlarging the center one. Starting 1965 the Russian lifting-body Mikoyan-Gurevich MiG-105 or EPOS (Russian acronym for Experimental Passenger Orbital Aircraft) was developed and several test flights made. Work ended in 1978 when the efforts shifted to the Buran program, while work on another small-scale spacecraft partly continued in the Bor program. The IXV is a European Space Agency lifting body experimental re-entry vehicle intended to validate European reusable launchers which could be evaluated in the frame of the FLPP program. The IXV made its first flight in February 2015, launched by a Vega rocket. Orbital Sciences proposed a commercial lifting-body spaceplane in 2010. The Prometheus is more fully described below. Aerospace applications Lifting bodies pose complex control, structural, and internal configuration issues. Lifting bodies were eventually rejected in favor of a delta wing design for the Space Shuttle. Data acquired in flight test using high-speed landing approaches at very steep descent angles and high sink rates was used for modeling Shuttle flight and landing profiles. In planning for atmospheric re-entry, the landing site is selected in advance. For reusable reentry vehicles, typically a primary site is preferred that is closest to the launch site in order to reduce costs and improve launch turnaround time. However, weather near the landing site is a major factor in flight safety. In some seasons, weather at landing sites can change quickly relative to the time necessary to initiate and execute re-entry and safe landing. Due to weather, it is possible the vehicle may have to execute a landing at an alternate site. Furthermore, most airports do not have runways of sufficient length to support the approach landing speed and roll distance required by spacecraft. Few airports exist in the world that can support or be modified to support this type of requirement. Therefore, alternate landing sites are very widely spaced across the U.S. and around the world. The Shuttle's delta wing design was driven by these issues. These requirements were further exacerbated by requirements that extended the Shuttle's flight landing envelope. Nonetheless, the lifting body concept has been implemented in a number of other aerospace programs, the previously mentioned NASA X-38, Lockheed Martin X-33, BAC's Multi Unit Space Transport And Recovery Device, Europe's EADS Phoenix, and the joint Russian-European Kliper spacecraft. Of the three basic design shapes usually analyzed for such programs (capsule, lifting body, aircraft) the lifting body may offer the best trade-off in terms of maneuverability and thermodynamics while meeting its customers' mission requirements. Current systems The Dream Chaser is a suborbital and orbital vertical-takeoff, horizontal-landing (VTHL) lifting-body spaceplane being developed by Sierra Nevada Corporation (SNC). The Dream Chaser design is planned to eventually carry up to seven people to and from low Earth orbit, and the spaceplane is currently planned to be used for delivering cargo to the International Space Station under the Commercial Resupply Services program. The vehicle will launch vertically on a Vulcan Centaur and land horizontally on conventional runways. 
Body lift Some aircraft with wings also employ bodies that generate lift. Some of the early 1930s high-wing monoplane designs of the Bellanca Aircraft Company, such as the Bellanca Aircruiser, had vaguely airfoil-shaped fuselages capable of generating some lift, with even the wing struts on some versions given widened fairings to give them some lift-generating capability. The Gee Bee R-1 Super Sportster racing plane of the 1930s, likewise, from more modern aerodynamic studies, has been shown to have had considerable ability to generate lift with its fuselage design, important for the R-1's intended racing role, while in highly banked pylon turns while racing. Vincent Burnelli developed several aircraft between the 1920s and 1950 that used fuselage lift. Like the earlier Bellanca monoplanes, the Short SC.7 Skyvan produces a substantial amount of lift from its fuselage shape, almost as much as the 35% each of the wings produces. Fighters like the F-15 Eagle also produce substantial lift from the wide fuselage between the wings. Because the F-15 Eagle's wide fuselage is so efficient at lift, an F-15 is able to land successfully with only one wing, albeit under nearly full power, with thrust contributing significantly to lift. In the summer of 1983, an Israeli F-15 staged a mock dogfight with Skyhawks for training purposes, near Nahal Tzin in the Negev desert. During the exercise, one of the Skyhawks miscalculated and collided forcefully with the F-15's wing root. The F-15's pilot was aware that the wing had been seriously damaged, but decided to try and land in a nearby airbase, not knowing the extent of his wing damage. It was only after he had landed, when he climbed out of the cockpit and looked backward, that the pilot realized what had happened: the wing had been completely torn off the plane, and he had landed the plane with only one wing attached. A few months later, the damaged F-15 had been given a new wing, and returned to operational duty in the squadron. The engineers at McDonnell Douglas had a hard time believing the story of the one-winged landing: as far as their planning models were concerned, this was an impossibility. In 2010, Orbital Sciences proposed the Prometheus "blended lifting-body" spaceplane vehicle, about one-quarter the size of the Space Shuttle, as a commercial option for carrying astronauts to low Earth orbit under the commercial crew program. The Vertical Takeoff, Horizontal Landing (VTHL) vehicle was to have been launched on a human-rated Atlas V rocket but would land on a runway. The initial design was to have carried a crew of 4, but it could carry up to 6, or a combination of crew and cargo. In addition to Orbital Sciences, the consortium behind the proposal included Northrop Grumman, which would have built the spaceplane, and the United Launch Alliance, which would have provided the launch vehicle. Failing to be selected for a CCDev phase 2 award by NASA, Orbital announced in April 2011 that they would likely wind down their efforts to develop a commercial crew vehicle. Design principles of lifting bodies are used also in the construction of hybrid airships. Armstrong Flight Research Center The US government developed a variety of proof-of-concept and flight-test vehicle lifting body designs from the early 1960s through the mid-1970s at Armstrong Flight Research Center. These included: M2-F1 M2-F2 M2-F3 HL-10 X-24A X-24B Pilots and flights Wood, Haise and Engle each made a single car-towed flight of the M2-F1. 
Popular culture Lifting bodies have appeared in some science fiction works, including the movie Marooned, and as John Crichton's spacecraft Farscape-1 in the TV series Farscape. The Discovery Channel TV series conjectured using lifting bodies to deliver a probe to a distant earth-like planet in the animated Alien Planet. Gerry Anderson's 1969 Doppelgänger used a VTOL lifting body lander / ascender to visit an Earth-like planet, only to crash in both attempts. His series UFO featured a lifting body craft visually similar to the M2-F2 for orbital operations ("The Man Who Came Back"). In the Buzz Aldrin's Race Into Space computer game, a modified X-24A becomes an alternative lunar capable spacecraft that the player can choose over the Gemini or Apollo capsule. The 1970s television program The Six Million Dollar Man used footage of a lifting body aircraft, culled from actual NASA exercises, in the show's title sequence. The scenes included an HL-10's separation from its carrier plane—a modified B-52—and an M2-F2 piloted by Bruce Peterson, crashing and tumbling violently along the Edwards dry lakebed runway. The cause of the crash was attributed to the onset of Dutch roll stemming from control instability as induced by flow separation. The episode "The Deadly Replay" (season 2 episode 8 aired 9/22/1974) features the HL-10 as a prop of the story. See also Martin X-23 PRIME BOR-4 Kliper Lockheed Star Clipper Lockheed Martin X-33 HL-20 Personnel Launch System Dream Chaser (spacecraft) Space Rider (spacecraft) Prometheus (spacecraft) Facetmobile Blended wing body Flying wing MUSTARD 1953 Horton "Wingless" http://aerospacelegacyfoundation.com/aviation-history-flying-wings/ Arup S-2 1932, Snyder "Arup" (blurs the boundary between "flying wing" and lifting body) Burnelli RB-1 References References Other sources McPhee, John (1973), The Deltoid Pumpkin Seed; . (Story of the Aereon, a combination aerodyne/aerostat, a.k.a. hybrid airship.) External links Lifting Bodies Fact Sheet (NASA) NASA Tech Paper 3101: Numerical Analysis and Simulation of an Assured Crew Return Vehicle Flow Field (The math of airflow over a lifting body) NASA Photo Collections from Dryden Flight Research Center HL-10 M2-F1 M2-F2 M2-F3 X-24A and X24B Short M2-F1 history Some history of lifting body flight Wingless Flight: The Lifting Body Story. NASA History Series SP-4220 1997 PDF Aircraft configurations Wing configurations
Lifting body
Engineering
2,746
14,606,380
https://en.wikipedia.org/wiki/Humphrey%E2%80%93Parkes%20terminology
Humphrey–Parkes terminology is a system of nomenclature for the plumage of birds. Before the Humphrey–Parkes system, plumages were named according to the belief that a certain plumage was breeding plumage and others were not. However, as this system did not always work correctly, the new Humphrey–Parkes system came into use to rectify this error. This terminology is named after P. S. Humphrey and K. C. Parkes. Under the Humphrey–Parkes nomenclature, the main adult plumage, especially when it is produced by a complete molt, is called basic plumage. In most birds, the non-breeding plumage, which is worn longer than the breeding plumage, is known as the basic plumage. In birds that molt only once a year, the regular and only plumage is known as basic plumage. In some birds, a partial molt occurs before the bird breeds. The plumage produced by this molt is known as the alternate plumage and is generally what was previously known as a bird's breeding plumage. If a bird produces a third plumage in addition to the basic and alternate plumages, it is known as supplemental plumage. This plumage is most frequently found in ptarmigans. The unique plumage of a juvenile bird is known as juvenal (or less precisely, juvenile) plumage. When the bird is molting, the molt is known as a prejuvenal, prebasic, prealternate, or presupplemental molt, depending on which plumage type follows the molt. For birds that do not completely molt into full adult plumage the first time, a numbering system is used to signify which plumage it is in. For example, the first time a bird enters basic plumage, the plumage is known as first basic plumage; the second time, second basic plumage. The numbers are dropped after a bird achieves its full adult plumage. References Birds
Humphrey–Parkes terminology
Biology
379
901,062
https://en.wikipedia.org/wiki/Large%20Helical%20Device
The Large Helical Device (LHD) is a fusion research device located in Toki, Gifu, Japan. It is operated by the National Institute for Fusion Science, and is the world's second-largest superconducting stellarator, after Wendelstein 7-X. The LHD employs a heliotron magnetic field originally developed in Japan. The objective of the project is to conduct fusion plasma confinement research in a steady state in order to elucidate possible solutions to physics and engineering problems in helical plasma reactors. The LHD uses neutral beam injection, ion cyclotron radio frequency (ICRF), and electron cyclotron resonance heating (ECRH) to heat the plasma, much like conventional tokamaks. The helical divertor heat and particle exhaust system uses the large helical coils to produce a diverting field. This configuration allows for the modification of the stochastic layer size, which is positioned between the confined plasma volume and the field lines that terminate on the divertor plate. Boundary plasma research at LHD focuses on the capability of the helical divertor as an exhaust system for heliotrons and stellarators. History The design was finalized in 1987, construction started in 1990, and plasma operations began in 1998. Neutral beam injection of 3 MW was used in 1999. In 2005 it maintained a plasma for 3,900 seconds. In 2006 a new helium cooler was added. Using the new cooler, a total of 10 long-term operations had been achieved by 2018, reaching a maximum current of 11.833 kA. To aid public acceptance, an exhaust system was designed to catch and filter the radioactive tritium the fusion process produces. See also Fusion reactor National Institutes of Natural Sciences, Japan References External links Large Helical Device website Super Dense Core plasmas in LHD, Harris, 2008 (16 slides) Fusion power Stellarators Plasma physics facilities Toki, Gifu
Large Helical Device
Physics,Chemistry
405
58,887,918
https://en.wikipedia.org/wiki/Expensive%20tissue%20hypothesis
The expensive tissue hypothesis (ETH) relates brain and gut size in evolution (specifically in human evolution). It suggests that in order for an organism to evolve a large brain without a significant increase in basal metabolic rate (as seen in humans), the organism must use less energy on other expensive tissues; the paper introducing the ETH suggests that in humans, this was achieved by eating an easy-to-digest diet and evolving a smaller, less energy-intensive gut. The ETH has inspired many research projects to test its validity in primates and other organisms. The human brain stands out among the mammals because of its relative size compared to the rest of the body. The brain of Homo sapiens is about three times larger than that of its closest living relative, the chimpanzee. For a primate of its body size, the relative size of the brain and that of the digestive tract is rather unexpected; the digestive tract is smaller than expected for a primate of a human body size. In 1995, two scientists proposed an attempt to solve this phenomenon of human evolution using the Expensive Tissue Hypothesis. Original paper The original paper introducing the ETH was written by Leslie Aiello and P. E. Wheeler. Availability to new data on basal metabolic rate (BMR) and brain size has shown that energetics is an issue in the maintenance of a relatively large brain, like the human brain. In mammals, brain size is positively correlated with the BMR. In the paper, Aiello and Wheeler sought to explain how humans managed to provide enough energy to their large and metabolically expensive brains while still maintaining a BMR comparable to other primates with smaller brains. They found that humans’ smaller relative gut size almost completely compensated for the metabolic cost of the larger brain. They went on to postulate that a larger brain would allow for more complex foraging behavior, which would result in a higher quality diet, which would then allow the gut to shrink further, freeing up more energy for the brain. This research also presented a case for studying the evolution of organs in a more interconnected manner, rather than in isolation. Further research Anthropologists have been able to observe a dramatic contrast in relative brain size between humans and our great ape ancestors. Studies have shown that brain size differences underlie major differences in cognitive performance. Brain tissue is energetically expensive, requiring a great amount of energy compared to several other somatic tissues during rest. To understand how the body is able to provide the brain with the right amount of energy to function properly, scientists consider the cost side of the equation and focus on how brain and other expensive tissues such as the gut or the testes may trade off. Another possibility is that there may not be any trading off, rather there are other ways that humans are keeping the brain nourished. The academic debate around the ETH is still active, and has inspired a number of similar tests, all attempting to verify or disprove the hypothesis in another species or group of species by looking at encephalization (a ratio between brain size and body size), gut size, and/or diet quality. Primates, being the closest living relatives to humans, are a natural starting point for testing the hypothesis, and as such are examined by many of these tests. 
One such study supported the expensive tissue hypothesis and found a positive correlation between diet quality and brain size (as would be expected by the original paper), but it did note that there were exceptions among the species tested. A broader study including primates and other mammals disputed the ETH, finding that there is no negative correlation between brain and gut sizes; it did, however, support the idea of energy trade-offs in evolution as it found a negative correlation between encephalization and adipose deposits. Studies have also been done in species less similar to humans, such as anurans and fish. The study of anurans found that among the 30 species tested, there was a significant negative correlation between gut size and brain size, as Aiello and Wheeler found in humans and primates in their original research. One study of fish used Peters' elephantnose fish (Gnathonemus petersii), a species of carnivorous fish, which has a uniquely large brain, about three times the size expected for a fish of its body size. The research found that these fish also had significantly smaller guts than other similar carnivorous fish. These further studies enrich the debate over the ETH. A 2018 study by Huang, Yu, and Liao investigated the possible effects of gut microbiota in the expensive tissue hypothesis among vertebrates. Researchers have investigated various symbiotic gut bacteria as well as other microorganisms that have coevolved in the digestive tracts of humans and other animals. These microbiotas have evolved to form mutually beneficial relationships with their hosts, and play important roles in immune function, nutrition, and physiology. Any disruption in the gut can lead to gastrointestinal dysfunction like obesity, for example. Several studies have also shown that the diversity and composition of gut microbiota vary topographically and temporarily. This is because specific bacteria have been linked to the host's food intake as well as the use of nutrition and energy metabolism. Any changes or modifications of the microbial landscape in the gut can lead to several complex and dynamic interactions throughout life. Additionally, the choice of the host is strongly associated with the diversification and complexity of the microbiota; for instance, the study illustrated that a diet high in fat increases the number of bacteria belonging to the phylum Bacteroidetes and decreases the number belonging to the Firmicutes in children's guts, and also theorized that diet quality is related to gut size. The same study also found that gut size has also seen coevolution alongside brain size, partly because the brain and the gut are both among the most energetically costly organs in vertebrate bodies. Based on the expensive tissue hypothesis, the higher energy expenditure of vertebrates with larger brains is balanced by a corresponding decrease in the energy consumed by other energetically costly organs, e.g. the gut. Some evidence also suggests that vertebrates with large brains have evolved to balance out the energetic expenditure by trading off with gut size. For example, researchers have found a negative correlation between brain size and gut size in guppies as well as the Omei wood frog. Gut microbiota respond to diet quality in a way that influences the metabolism of the host. For instance, improving energy yield in the host by increasing the efficiency of certain metabolic pathways is one of the main processes that drives the trade-off between brain size and gut size. 
This process is also consistent with the ETH because brain size increases when energy input is at a high level, due to the consumption of high-energy diets and the overall increase in constant energy input. However, after several investigations, the study did not find strong evidence to support the notion that brain size is negatively correlated with gut microbiota in vertebrates. A similar study by Tsuboi et al. showed clear evidence that brain size is correlated with gut size after controlling for the effects of shared ancestry and ecological confounding variables. The study found that the evolution of a larger brain is closely related to an increase in reproductive investment in egg size and parental care. It concluded that the energy cost of encephalization might have played a role in the evolution of brain size in both endothermic and ectothermic vertebrates. For example, the study found that ectothermic vertebrates such as the elephantnose fish Gnathonemus petersii have a large brain and a smaller intestine and stomach, which suggests that energy constraints on brain size are found in highly encephalized tropical species. Additionally, the study found that the evolution of brain size is associated with an increase in egg size and can lead to an extended period of parental care, and that the energetic constraints of encephalization are also applicable to homeothermic vertebrates. Despite this evidence, however, most of the study was done on live-bearing and egg-bearing species within the Chondrichthyes, and cannot necessarily be generalized across all homeothermic and ectothermic vertebrates. Further studies did show a positive correlation between brain mass residuals and BMR residuals in mammals, but the relationship is only significant in primates. When considering the expensive tissue hypothesis, one also needs to consider how energy trade-off hypotheses affect the rest of the body. Animals might escape energetic constraints by reducing the size of other expensive tissues or by reducing energy allocation to expensive processes such as locomotion or reproduction. References Hypotheses Evolutionary biology
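Most of the comparative tests summarized above boil down to the same statistical recipe: regress log organ mass on log body mass, treat the residuals as measures of relative brain and gut size, and ask whether the two sets of residuals are negatively correlated. A minimal sketch of that recipe is given below; the input file and column names are assumptions for illustration, and a real analysis would also apply a phylogenetic correction rather than the ordinary regression used here.

```python
# Hypothetical expensive-tissue style trade-off test:
# correlate residuals of log brain mass and log gut mass on log body mass.
import numpy as np
import pandas as pd
from scipy import stats

# Assumed input: one row per species with columns body_g, brain_g, gut_g.
df = pd.read_csv("species_organ_masses.csv")

def residuals(y, x):
    """Residuals of an ordinary least-squares fit of log10(y) on log10(x)."""
    slope, intercept, *_ = stats.linregress(np.log10(x), np.log10(y))
    return np.log10(y) - (slope * np.log10(x) + intercept)

brain_res = residuals(df["brain_g"], df["body_g"])
gut_res = residuals(df["gut_g"], df["body_g"])

r, p = stats.pearsonr(brain_res, gut_res)
print(f"brain vs gut residual correlation: r = {r:.2f}, p = {p:.3g}")
# A significantly negative r would be consistent with the ETH;
# a non-significant or positive r would count against it.
```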
Expensive tissue hypothesis
Biology
1,806
59,354,087
https://en.wikipedia.org/wiki/Trigonometric%20Rosen%E2%80%93Morse%20potential
The trigonometric Rosen–Morse potential, named after the physicists Nathan Rosen and Philip M. Morse, is among the exactly solvable quantum mechanical potentials. Definition In dimensionless units and modulo additive constants, it is defined as where is a relative distance, is an angle rescaling parameter, and is so far a matching length parameter. Another parametrization of same potential is which is the trigonometric version of a one-dimensional hyperbolic potential introduced in molecular physics by Nathan Rosen and Philip M. Morse and given by, a parallelism that explains the potential's name. The most prominent application concerns the parametrization, with non-negative integer, and is due to Schrödinger who intended to formulate the hydrogen atom problem on Albert Einstein's closed universe, , the direct product of a time line with a three-dimensional closed space of positive constant curvature, the hypersphere , and introduced it on this geometry in his celebrated equation as the counterpart to the Coulomb potential, a mathematical problem briefly highlighted below. The hypersphere is a surface in a four-dimensional Euclidean space, , and is defined as, where , , , and are the Cartesian coordinates of a vector in , and is termed to as hyper-radius. Correspondingly, Laplace operator in is given by, In now switching to polar coordinates, one finds the Laplace operator expressed as Here, stands for the squared angular momentum operator in four dimensions, while is the standard three-dimensional squared angular momentum operator. Considering now the hyper-spherical radius as a constant, one encounters the Laplace-Beltrami operator on as With that the free wave equation on takes the form The solutions, , to this equation are the so-called four-dimensional hyper-spherical harmonics defined as where are the Gegenbauer polynomials. Changing in () variables as one observes that the function satisfies the one-dimensional Schrödinger equation with the potential according to The one-dimensional potential in the latter equation, in coinciding with the Rosen–Morse potential in () for and , clearly reveals that for integer values, the first term of this potential takes its origin from the centrifugal barrier on . Stated differently, the equation (), and its version () describe inertial (free) quantum motion of a rigid rotator in the four-dimensional Euclidean space, , such as the H Atom, the positronium, etc. whose "ends" trace the large "circles" (i.e. spheres) on . Now the question arises whether the second term in () could also be related in some way to the geometry. To the amount the cotangent function solves the Laplace–Beltrami equation on , it represents a fundamental solution on , a reason for which Schrödinger considered it as the counterpart to the Coulomb potential in flat space, by itself a fundamental solution to the Laplacian. Due to this analogy, the cotangent function is frequently termed to as "curved Coulomb" potential. Such an interpretation ascribes the cotangent potential to a single charge source, and here lies a severe problem. Namely, while open spaces, as is , support single charges, in closed spaces single charge can not be defined in a consistent way. Closed spaces are necessarily and inevitably charge neutral meaning that the minimal fundamental degrees of freedom allowed on them are charge dipoles (see Fig. 1). 
For this reason, the wave equation which transforms upon the variable change, , into the familiar one-dimensional Schrödinger equation with the trigonometric Rosen–Morse potential, in reality describes quantum motion of a charge dipole perturbed by the field due to another charge dipole, and not the motion of a single charge within the field produced by another charge. Stated differently, the two equations () and () do not describe strictly speaking a Hydrogen Atom on , but rather quantum motion on of a light dipole perturbed by the dipole potential of another very heavy dipole, like the H Atom, so that the reduced mass, , would be of the order of the electron mass and could be neglected in comparison with the energy. In order to understand this decisive issue, one needs to focus attention to the necessity of ensuring validity on of both the Gauss law and the superposition principle for the sake of being capable to formulate electrostatic there. With the cotangent function in () as a single-source potential, such can not be achieved. Rather, it is necessary to prove that the cotangent function represents a dipole potential. Such a proof has been delivered in. To understand the line of arguing of it is necessary to go back to the expression for the Laplace operator in () and before considering the hyper-radius as a constant, factorize this space into a time line and . For this purpose, a "time" variable is introduced via the logarithm of the radius. Introducing this variable change in () amounts to the following Laplacian, The parameter is known as "conformal time", and the whole procedure is referred to as "radial quantization". Charge-static is now built up in setting =const in () and calculating the harmonic function to the remaining piece, the so-called conformal Laplacian, , on , which is read off from () as where we have chosen , equivalently, . Then the correct equation to be employed in the calculation of the fundamental solution is . This Green function to has been calculated for example in. Its values at the respective South and North poles, in turn denoted by , and , are reported as and From them one can now construct the dipole potential for a fundamental charge placed, say, on the North pole, and a fundamental charge of opposite sign, , placed on the antipodal South pole of . The associated potentials, and , are then constructed through multiplication of the respective Green function values by the relevant charges as In now assuming validity of the superposition principle, one encounters a Charge Dipole (CD) potential to emerge at a point on according to The electric field to this dipole is obtained in the standard way through differentiation as and coincides with the precise expression prescribed by the Gauss theorem on , as explained in. Notice that stands for dimension-less charges. In terms of dimensional charges, , related to via the potential perceived by another charge , is For example, in the case of electrostatic, the fundamental charge is taken the electron charge, , in which case the special notation of is introduced for the so-called fundamental coupling constant of electrodynamics. In effect, one finds In Fig. 2 we display the dipole potential in (). 
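To make the shape of the potential just described explicit, a minimal sketch of the charge-dipole ("curved Coulomb") potential on the hypersphere, written for a dimensionless charge q and polar angle χ measured from the North pole, is given below; the overall normalization is an assumed convention, and only the cotangent dependence matters for the argument in the text.

```latex
% Sketch of the charge-dipole potential on S^3 and the field obtained from it.
% The prefactors are assumptions; the essential content is the \cot\chi dependence.
U_{\mathrm{CD}}(\chi) \;\propto\; \frac{q}{R}\,\cot\chi,
\qquad
\mathcal{E}_{\mathrm{CD}}(\chi) \;=\; -\frac{1}{R}\,\frac{\mathrm{d}U_{\mathrm{CD}}}{\mathrm{d}\chi}
\;\propto\; \frac{q}{R^{2}\sin^{2}\chi}.
```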
With that, the one-dimensional Schrödinger equation that describes on the quantum motion of an electric charge dipole perturbed by the trigonometric Rosen–Morse potential, produced by another electric charge dipole, takes the form of Because of the relationship, , with being the node number of the wave function, one could change labeling of the wave functions, , to the more familiar in the literature, . In eqs. ()-() one recognizes the one-dimensional wave equation with the trigonometric Rosen–Morse potential in () for and . In this way, the cotangent term of the trigonometric Rosen–Morse potential could be derived from the Gauss law on in combination with the superposition principle, and could be interpreted as a dipole potential generated by a system consisting of two opposite fundamental charges. The centrifugal term of this potential has been generated by the kinetic energy operator on . In this manner, the complete trigonometric Rosen–Morse potential could be derived from first principles. Back to Schrödinger's work, the hyper-radius for the H Atom has turned out to be very big indeed, and of the order of . This is by eight orders of magnitudes larger than the H Atom size. The result has been concluded from fitting magnetic dipole elements to hydrogen hyper-fine structure effects (see } and reference therein). The aforementioned radius is sufficiently large to allow approximating the hyper-sphere locally by plane space in which case the existence of single charge still could be justified. In cases in which the hyper spherical radius becomes comparable to the size of the system, the charge neutrality takes over. Such an example will be presented in section 6 below. Before closing this section, it is in order to bring the exact solutions to the equations ()-(), given by where stand for the Romanovski polynomials. Application to Coulomb fluids Coulomb fluids consist of dipolar particles and are modelled by means of direct numerical simulations. It is commonly used to choose cubic cells with periodic boundary conditions in conjunction with Ewald summation techniques. In a more efficient alternative method pursued by, one employs as a simulation cell the hyper spherical surface in (). As already mentioned above, the basic object on is the electric charge dipole, termed to as "bi-charge" in fluid dynamics, which can be visualized classically as a rigid "dumbbell" (rigid rotator) of two antipodal charges of opposite signs, and . The potential of a bi-charge is calculated by solving on the Poisson equation, Here, is the angular coordinate of a charge placed at angular position , read off from the North pole, while stands for the anti-podal to angular coordinate of the position, at which the charge of opposite signs is placed in the Southern hemisphere. The solution found, equals the potential in (), modulo conventions regarding the charge signs and units. It provides an alternative proof to that delivered by the equations ()-() of the fact that the cotangent function on has to be associated with the potential generated by a charge dipole. In contrast, the potentials in the above equations (), and (), have been interpreted in as due to so called single "pseudo-charge" sources, where a "pseudo-charge" is understood as the association of a point charge with a uniform neutralizing background of a total charge, . The pseudo-charge potential, , solves . Therefore, the bi-charge potential is the difference between the potentials of two antipodal pseudo-charges of opposite signs. 
Application to color confinement and the physics of quarks The confining nature of the cotangent potential in () finds an application in a phenomenon known from the physics of strong interaction which refers to the non-observability of free quarks, the constituents of the hadrons. Quarks are considered to possess three fundamental internal degree of freedom, conditionally termed to as "colors", red , blue , and green , while anti-quarks carry the corresponding anti-colors, anti-red , anti-blue , or anti-green , meaning that the non-observability of free quarks is equivalent to the non-observability of free color-charges, and thereby to the "color neutrality" of the hadrons. Quark "colors" are the fundamental degrees of freedom of the Quantum Chromodynamics (QCD), the gauge theory of strong interaction. In contrast to the Quantum Electrodynamics, the gauge theory of the electromagnetic interactions, QCD is a non-Abelian theory which roughly means that the "color" charges, denoted by , are not constants, but depend on the values, , of the transferred momentum, giving rise to the so-called, running of the strong coupling constant, , in which case the Gauss law becomes more involved. However, at low momentum transfer, near the so-called infrared regime, the momentum dependence of the color charge significantly weakens, and in starting approaching a constant value, drives the Gauss law back to the standard form known from Abelian theories. For this reason, under the condition of color charge constancy, one can attempt to model the color neutrality of hadrons in parallel to the neutrality of Coulomb fluids, namely, by considering quantum color motions on closed surfaces. In particular for the case of the hyper-sphere , it has been shown in, that a potential, there denoted by , and obtained from the one in () through the replacement, i.e. the potential where is the number of colors, is the adequate one for the description of the spectra of the light mesons with masses up to . Especially, the hydrogen like degeneracies have been well captured. This because the potential, in being a harmonic function to the Laplacian on , has same symmetry as the Laplacian by itself, a symmetry that is defined by the isometry group of , i.e. by , the maximal compact group of the conformal group . For this reason, the potential in (), as part of , accounts not only for color confinement, but also for conformal symmetry in the infrared regime of QCD. Within such a picture, a meson is constituted by a quark -anti-quark color dipole in quantum motion on an geometry, and gets perturbed by the dipole potential in (), generated by and other color dipole, such as a gluon -anti-gluon , as visualized in Fig. 3. The geometry could be viewed as the unique closed space-like geodesic of a four-dimensional hyperboloid of one sheet, , foliating outside of the causal Minkowski light-cone the space-like region, assumed to have one more spatial dimension, this in accord with the so-called de Sitter Special Relativity, . Indeed, potentials, in being instantaneous and not allowing for time orderings, represent virtual, i.e. acausal processes and as such can be generated in one-dimensional wave equations upon proper transformations of virtual quantum motions on surfaces located outside the causal region marked by the Light Cone. Such surfaces can be viewed as geodesics of the surfaces foliating the space like region. Quantum motions on open geodesics can give rise to barriers describing resonances transmitted through them. 
An illustrative example for the application of the color confining dipole potential in () to meson spectroscopy is given in Fig. 4. It should be pointed out that the potentials in the above equations () and () have been alternatively derived in, from Wilson loops with cusps, predicting their magnitude as , and in accord with (). The potential in () has furthermore been used in in the Dirac equation on , and has been shown to predict realistic electromagnetic nucleon form-factors and related constants such as mean square electric-charge and magnetic-dipole radii, proton and nucleon magnetic dipole moments and their ratio, etc. The property of the trigonometric Rosen-Morse potential, be it in the parametrization with in eq. (32) which is of interest to electrodynamics, or in the parametrization of interest to QCD from the previous section, qualifies it to studies of phase transitions in systems with electromagnetic or strong interactions on hyperspherical "boxes" of finite volumes . The virtue of such studies lies in the possibility to express the temperature, , as the inverse, , to the radius of the hypersphere. For this purpose, knowledge on the partition function (statistical mechanics), here denoted by , of the potential under consideration is needed. In the following we evaluate for the case of the Schrödinger equation on with linear energy (here in units of MeV), where is the reduced mass of the two-body system under consideration. The partition function (statistical mechanics) for this energy spectrum is defined in the standard way as, Here, the thermodynamic beta is defined as with standing for the Boltzmann constant. In evaluating it is useful to recall that with the increase of the second term on the right hand side in () becomes negligible compared to the term proportional , a behavior which becomes even more pronounced for the choices, , and . In both cases is much smaller compared to the corresponding dimensionless factor, , multiplying . For this reason the partition function under investigation might be well approximated by, Along same lines, the partition function for the parametrization corresponding to the Hydrogen atom on has been calculated in, where a more sophisticated approximation has been employed. When transcribed to the current notations and units, the partition function in presents itself as, The infinite integral has first been treated by means of partial integration giving, Then the argument of the exponential under the sign of the integral has been cast as, thus reaching the following intermediate result, As a next step the differential has been represented as an algebraic manipulation which allows to express the partition function in () in terms of the function of complex argument according to, where is an arbitrary path on the complex plane starting in zero and ending in . For more details and physical interpretations, see. See also Romanovski polynomials Pöschl–Teller potential References Quantum mechanical potentials Mathematical physics
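Because the closed-form expressions in this article are easiest to appreciate alongside numbers, a quick numerical cross-check of the bound-state spectrum is easy to set up. The sketch below assumes the common dimensionless form of the potential, v(χ) = ℓ(ℓ+1)/sin²χ − 2b cot χ on 0 < χ < π, and diagonalizes the finite-difference Hamiltonian −d²/dχ² + v(χ) with Dirichlet boundary conditions; the parameter values ℓ = 1, b = 5 are arbitrary illustrations rather than values taken from the text.

```python
# Numerical sketch: lowest eigenvalues of the dimensionless trigonometric
# Rosen-Morse Hamiltonian H = -d^2/dchi^2 + l(l+1)/sin^2(chi) - 2b*cot(chi)
# on (0, pi), via a second-order finite-difference discretization.
import numpy as np
from scipy.linalg import eigh_tridiagonal

l, b = 1, 5.0                      # illustrative parameter values
N = 2000                           # number of interior grid points
chi = np.linspace(0.0, np.pi, N + 2)[1:-1]
h = chi[1] - chi[0]

v = l * (l + 1) / np.sin(chi) ** 2 - 2.0 * b / np.tan(chi)

main = 2.0 / h**2 + v              # diagonal of the tridiagonal Hamiltonian
off = -np.ones(N - 1) / h**2       # off-diagonal entries (kinetic term)

eigvals = eigh_tridiagonal(main, off, eigvals_only=True)
print("four lowest eigenvalues:", np.round(eigvals[:4], 4))
```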
Trigonometric Rosen–Morse potential
Physics,Mathematics
3,514
185,829
https://en.wikipedia.org/wiki/Scream%20Tracker
Scream Tracker is a tracker (an integrated multi-track step sequencer and sampler in the form of a software application). It was created by Psi (Sami Tammilehto), one of the founders of the Finnish demogroup Future Crew, and was written in C and assembly language. The first version (1.0) had monophonic 4-bit output via the PC speaker, as well as 8-bit output via Covox's Speech Thing (a digital-to-analog converter using the parallel port) or a Sound Blaster 1.x card. The first popular version of Scream Tracker, version 2.2, was published in 1990. Versions prior to 3.0 created STM (Scream Tracker Module) files, while versions 3.0 and above used the S3M (ScreamTracker 3 Module) format. As of version 3.0, Scream Tracker supports up to 99 8-bit samples, 32 channels, 100 patterns, and 256 order positions. It can also handle up to 9 FM-synthesis channels on sound cards using the popular OPL2/3/4 chipsets, and, unusually, can play PCM samples and FM instruments at the same time. The 32 channel slots are assigned from a fixed set of channels labelled R1..8, L1..8 and A1..9, which gives an effective total of only 25 distinct channels. 16-position free panning is available using the S8x command, but only on the Gravis Ultrasound. Use of the A channels requires the presence of an AdLib-compatible card, either by itself or alongside another sound card. The last version of Scream Tracker was 3.21, released in 1994, placing it in competition with FastTracker 2. It was a precursor of the PC tracking scene, and its interface inspired newer trackers such as Impulse Tracker. Various other trackers (such as Impulse Tracker and OpenMPT) adopted Scream Tracker's S3M format. See also MilkyTracker List of audio trackers References Audio trackers Demoscene software DOS software 1990 software Assembly language software Software developed in Finland
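The S3M container mentioned above is a simple binary format: commonly published descriptions of it place a 28-byte song title at the start of the file, three little-endian 16-bit counters (orders, instruments, patterns) at offset 0x20, and the ASCII signature "SCRM" at offset 0x2C. A minimal sniffing sketch based on those community-documented offsets (which are not taken from this article and should be treated as assumptions) might look like:

```python
# Minimal S3M sniffing sketch; offsets follow commonly published descriptions
# of the ScreamTracker 3 module format.
import struct
import sys

def read_s3m_header(path):
    with open(path, "rb") as f:
        header = f.read(0x30)
    if len(header) < 0x30 or header[0x2C:0x30] != b"SCRM":
        raise ValueError("not an S3M (ScreamTracker 3 Module) file")
    title = header[:28].split(b"\x00", 1)[0].decode("ascii", errors="replace")
    ord_num, ins_num, pat_num = struct.unpack_from("<HHH", header, 0x20)
    return {"title": title, "orders": ord_num, "instruments": ins_num, "patterns": pat_num}

if __name__ == "__main__":
    print(read_s3m_header(sys.argv[1]))
```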
Scream Tracker
Technology
436
37,879,508
https://en.wikipedia.org/wiki/Alexander%20Winchell
Alexander Winchell (December 31, 1824, in North East, New York – February 19, 1891, in Ann Arbor, Michigan) was an American geologist who contributed to this field mainly as an educator and a popular lecturer and writer. His views on evolution aroused controversy among his contemporaries; today the racism of these views is more cause for comment. Biography Education Winchell graduated from the Wesleyan University of Middletown, Connecticut, in 1847. Early career He then taught at Pennington Male Seminary of New Jersey, Amenia Seminary of New York (where he had previously been a student), an academy in Newbern, Alabama, and the Mesopotamia Female Seminary of Eutaw, the last of which was founded by him. He became president of the Masonic University at Selma, Alabama, in 1853. He was elected as a member of the American Philosophical Society in 1865. Michigan In 1854 Winchell entered the service of the University of Michigan as professor of physics and civil engineering. Eventually he became professor of geology and paleontology at Michigan. In 1859, Winchell was appointed as State Geologist of Michigan for the newly formed second geological survey of the state. He held the post until 1863 when the state did not appropriate funding to continue the survey. The survey was resumed in 1869, and Winchell was reappointed in April. Owing to conflicting opinions between Winchell and his superiors, he resigned in 1871. He stayed at Michigan until 1872. Cotton-growing venture In 1863 Winchell took up a lease on a cotton plantation near Vicksburg, Mississippi, under a plan devised by Gen. Lorenzo Thomas to lease plantations along the Mississippi River to loyal men from the North, who would hire black laborers on terms prescribed by the army. Winchell organized the Ann Arbor Cotton Company and sold stock to the university's president, whereupon he received a leave of absence to engage in cotton planting. General Thomas set wages at a low level ($7 per month for men, $5 for women, minus the cost of medical attention and clothing). Even then, many lessees defrauded the freedmen of their earnings. In the winter of 1863–64, the Treasury Department briefly assumed control of the Mississippi Valley labor system, mandated a substantial increase in black wages, and contemplated leasing the plantations directly to the freedmen. Winchell complained that the Treasury's regulations were "framed in the exclusive interest of the negro and in the non-recognition of the moral sense and patriotism of the white man." The venture brought him only problems, and after he returned to Michigan in 1864, his brother Martin, who was managing the plantation, was killed by guerrillas. Syracuse University In 1872, he was appointed chancellor of Syracuse University. The depression of 1873 affected both his personal finances and those of Syracuse, and these troubles led him to resign this position in 1874. Late career and controversy In 1875, he worked as a professor of geology and zoology at Vanderbilt University. There, his views on evolution, as expressed in his book Adamites and Preadamites: or, A Popular Discussion (1878), were not acceptable to the University administration because they diverged from Biblical teaching. Today the views on the "inferiority of the Negro" (quote from his 1878 book) would probably have been the focus of controversy. In any case, he was obligated to resign in 1878. In 1888, he co-founded the Geological Society of America in Ithaca, New York along with John J. Stevenson, Charles H. Hitchcock, John R. 
Procter and Edward Orton. He served as the 3rd president of GSA in 1891. He then returned to the University of Michigan, where he was professor of geology and paleontology. His work in geology was not so significant as his teaching and popular lectures and writing in this field. He was much concerned with reconciling science and religion. He was an advocate of theistic evolution. Legacy The fish Clear chub Hybopsis winchelli (Girard, 1856) is named after him. Works Sketches of Creation (1870) The Doctrine of Evolution (1874) The Geology of the Stars (1874) Reconciliation of Science and Religion (1877) Adamites and Preadamites (1878) World-Life or Comparative Geology (1883) Preadamites Or, a Demonstration of the Existence of Men Before Adam (1888) Sparks from a Geologist's Hammer Geological Excursions Geological Studies Proof of Negro inferiority References External links 1824 births 1891 deaths 19th-century American geologists University of Michigan faculty Wesleyan University alumni American non-fiction writers 19th-century American writers Presidents of Syracuse University People from North East, New York Theistic evolutionists Scientists from New York (state) Presidents of the Geological Society of America
Alexander Winchell
Biology
969
14,773,744
https://en.wikipedia.org/wiki/Protein%20AATF
Protein AATF, also known as apoptosis-antagonizing transcription factor is a protein that in humans is encoded by the AATF gene. Function The protein encoded by this gene was identified on the basis of its interaction with MAP3K12/DLK, a protein kinase known to be involved in the induction of cell apoptosis. This gene product contains a leucine zipper, which is a characteristic motif of transcription factors, and was shown to exhibit strong transactivation activity when fused to Gal4 DNA binding domain. Overexpression of this gene interfered with MAP3K12 induced apoptosis. Interactions Protein AATF has been shown to interact with: PAWR, POLR2J, Retinoblastoma protein, and Transcription factor Sp1. References Further reading External links Transcription factors
Protein AATF
Chemistry,Biology
168
50,876,594
https://en.wikipedia.org/wiki/Surface%20Science%20%28journal%29
Surface Science is a monthly peer-reviewed scientific journal published by Elsevier that covers the physics and chemistry of surfaces and interfaces. It was established in 1964. The journal encompasses Surface Science Letters, which was published separately until 1993. The scope of the journal includes nanotechnology, catalysis, and soft matter and features both experimental and computational studies. Extended reviews are published in its companion journal, Surface Science Reports. According to the Journal Citation Reports, the journal has a 2020 impact factor of 1.942. References External links Physics journals Materials science journals Academic journals established in 1964 Elsevier academic journals Monthly journals English-language journals
Surface Science (journal)
Materials_science,Engineering
128
77,547,459
https://en.wikipedia.org/wiki/Tearing%20mode
A tearing mode is a type of disruption seen in tokamaks, in which ideal MHD instabilities grow on timescales of the order of 10−1 microseconds. Types of tearing mode Rayleigh–Taylor instability Magnetic reconnection Ballooning instability Resistive ballooning mode Universal instability Kelvin–Helmholtz instability Disruption instability Edge-localized mode Transport barrier mode References Plasma instabilities Stability theory
Tearing mode
Physics,Mathematics
76
9,033,403
https://en.wikipedia.org/wiki/Technetium%20%2899mTc%29%20fanolesomab
{{DISPLAYTITLE:Technetium (99mTc) fanolesomab}} Technetium (99mTc) fanolesomab (trade name NeutroSpec, manufactured by Palatin Technologies) is a mouse monoclonal antibody formerly used to aid in the diagnosis of appendicitis. It is labeled with a radioisotope, technetium-99m (99mTc). History and use NeutroSpec was approved by the U.S. Food and Drug Administration (FDA) in June 2004 for imaging of patients with symptoms of appendicitis. It consisted of an intact murine (mouse) IgM monoclonal antibody against human CD15, labeled with technetium-99m so as to be visible on a gamma camera image. Since anti-CD15 antibodies bind selectively to white blood cells such as neutrophils, it could be used to localize the site of an infection. Deaths and associated recall The FDA received reports from Palatin of 2 deaths and 15 life-threatening adverse events in patients who had received NeutroSpec. These events occurred within minutes of administration of NeutroSpec and included shortness of breath, low blood pressure, and cardiopulmonary arrest. Affected patients required resuscitation with intravenous fluids, blood pressure support, and oxygen. Most, but not all, of the patients who experienced these events had existing cardiac and/or pulmonary conditions that may have placed them at higher risk for these adverse events. A review of all post-marketing reports showed an additional 46 patients who experienced adverse events that were similar but less severe. All of the reactions occurred immediately after NeutroSpec was administered. Marketing of the product was suspended in December 2005. References Further reading External links Technetium (99mTc) Fanolesomab from Micromedex Monoclonal antibodies Withdrawn drugs Technetium compounds Technetium-99m Antibody-drug conjugates Radiopharmaceuticals
Technetium (99mTc) fanolesomab
Chemistry,Biology
422
289,691
https://en.wikipedia.org/wiki/Tahini
Tahini () (, ) or rashi () is a Middle-Eastern condiment made from ground sesame. Its more commonly eaten variety comes from hulled sesame, but unhulled seeds can also be used for preparing it. The latter variety has been described as slightly bitter, but more nutritious. It is served by itself (as a dip) or as a major ingredient in hummus, baba ghanoush, and halva. Tahini is used in the cuisines of the Levant and Eastern Mediterranean, the South Caucasus, the Balkans, South Asia, Central Asia, and amongst Ashkenazi Jews as well as parts of Russia and North Africa. Sesame paste (though not called tahini) is also used in some East Asian cuisines. Etymology Tahini is of Semitic origin and comes from a colloquial Levantine Arabic pronunciation of (), or more accurately (), whence also English tahina and Hebrew ṭḥina . It is derived from the root , which as a verb means "to grind", and also produces the word , "flour" in some dialects. The word tahini had appeared in English by the late 1930s. History The oldest mention of sesame is in a cuneiform document written 4,000 years ago that describes the custom of serving the gods sesame wine. The historian Herodotus writes about the cultivation of sesame 3,500 years ago in the region of the Tigris and Euphrates in Mesopotamia. It was mainly used as a source of oil. Tahini is mentioned as an ingredient of hummus kasa, a recipe transcribed in an anonymous 13th-century Arabic cookbook, Kitab Wasf al-Atima al-Mutada. Sesame paste is an ingredient in some Chinese and Japanese dishes; Sichuan cuisine uses it in some recipes for dandan noodles. Sesame paste is also used in Indian cuisine. In North America, sesame tahini, along with other raw nut butters, was available by 1940 in health food stores. Preparation and storage Tahini is made from sesame seeds that are soaked in water and then crushed to separate the bran from the kernels. The crushed seeds are soaked in salt water, causing the bran to sink. The floating kernels are skimmed off the surface, toasted, and ground to produce an oily paste. It can also be prepared with untoasted seeds and called "raw tahini". Because of tahini's high oil content, some manufacturers recommend refrigeration to prevent spoilage. Others do not recommend refrigeration, as it makes the product more viscous and more difficult to serve. Culinary uses Tahini-based sauces are common in Middle Eastern restaurants as a side dish or as a garnish, usually including lemon juice, salt, and garlic, and thinned with water. Hummus is made of cooked, mashed chickpeas typically blended with tahini, lemon juice and salt. Tahini sauce is also a popular topping for meat and vegetables in Middle Eastern cuisine. A sweet spread, ḥalawa ṭaḥīniyya ( "sweet tahini"), is a type of halva sweet. It sometimes has mashed or sliced pistachio pieces sprinkled inside or on top. It is usually spread on bread and eaten as a quick snack. For sweets Tahini is also used in sweet dishes like cakes, cookies, halva, and ice cream. By region Armenia In Armenia, tahini can be used as a sauce to put on lahmajoun. China In Chinese cuisine, sesame paste ( zhīmájiàng) is used as a condiment in many dishes. Chinese sesame paste differs from the Middle Eastern tahini in that the sesame is roasted; the paste is much darker, and has far less astringency. Often, white sesame paste is used in salty dishes, while black sesame paste is used in desserts (not to be confused with black sesame soup, which is made in a different manner from sesame paste). 
Sesame paste is a primary condiment in the hot dry noodles of Hubei cuisine and ma jiang mian (sesame paste noodles) of Northeastern Chinese cuisine and Taiwanese cuisine. Sesame paste is also used as a bread or mantou spread, and may be paired with or baked into bing (Chinese flatbread). Sesame paste is used as a seasoning, condiment and dip in cold dishes (such as liangfen) and hot pot. Cyprus In Cyprus, tahini, locally pronounced as tashi, is used as a dip for bread and sometimes in pitta souvlaki rather than tzatziki, which is customary in Greece; it is also used to make "tahinopitta" (tahini pie). Greece In Greece, tahini () is used as a spread on bread either alone or topped with honey or jam. Jars of tahini ready-mixed with honey or cocoa are available in the breakfast food aisles of Greek supermarkets. Iran In Iran, tahini is called ardeh () in Persian. It is used to make ḥalvardeh (), a kind of halva made of tahini, sugar, egg whites, and other ingredients. It is also eaten during breakfast, usually with an accompanying sweet substance, such as grape syrup, date syrup, honey, or jam. Ardeh and halvardeh are among the souvenirs of the Iranian cities of Yazd and Ardakan. Iraq In Iraq, tahini is known as rashi (راشي), and is mixed with date syrup (rub) to make a sweet dessert usually eaten with bread. Israel In Israel, tahini ( ṭḥina) is a staple foodstuff. It is served as a dip with flat bread or pita, a topping for many foods such as falafel, sabich, Jerusalem mixed grill and shawarma, and as an ingredient in various spreads. It is also used as a sauce for meat and fish, and in sweet desserts like halva, halva ice cream and tahini cookies. It is also served baked in the oven with kufta made of lamb or beef with spices and herbs, or with a whole fish in the coastal areas and the Sea of Galilee. Levant In the Levant, tahini (Levantine Arabic: ṭḥine) is a staple food and is used in various spreads and culinary preparations. It is the main ingredient of the Ṭaraṭor (sauce) which is used with falafel and shawarma. It is also used as a sauce for meat and fish. It is an ingredient in a seafood dish called ṣiyadiyeh. Palestine In the Gaza Strip, a rust-colored variety known as "red tahina" is served in addition to ordinary tahina. It is achieved by a different and lengthier process of roasting the sesame seeds, and has a more intense taste. Red tahina is used in sumagiyya (lamb with chard and sumac) and salads native to the falaḥeen from the surrounding villages, as well as southern Gaza. In the West Bank city of Nablus, tahina is mixed with qizḥa paste to make "black tahina", used in baking. Turkey In Turkey, tahini () is mixed with pekmez to make tahin-pekmez, which is often served as a breakfast item or after meals as a sweet dip for breads. Nutrition In a 100-gram reference amount, tahini provides 592 calories from its composition as 53% fat, 22% carbohydrates, 17% protein, and 3% water (table). It is a rich source of thiamine (138% of the Daily Value, DV), phosphorus (113% DV), zinc (49% DV), niacin (38% DV), iron (34% DV), magnesium (27% DV), and folate (25% DV) (table). Tahini is a moderate source of calcium, other B vitamins, and potassium (table). 
See also List of Middle Eastern dishes List of dips List of sesame seed dishes List of spreads References Sesame dishes Nut and seed butters Food ingredients Food paste Spreads (food) Arab cuisine Armenian cuisine Cypriot cuisine Greek cuisine Iranian cuisine Iraqi cuisine Israeli cuisine Jordanian cuisine Lebanese cuisine Levantine cuisine Palestinian cuisine Syrian cuisine Turkish cuisine White sauces Middle Eastern cuisine Jewish cuisine Balkan cuisine
Tahini
Technology
1,733
26,290,265
https://en.wikipedia.org/wiki/Binet%20equation
The Binet equation, derived by Jacques Philippe Marie Binet, provides the form of a central force given the shape of the orbital motion in plane polar coordinates. The equation can also be used to derive the shape of the orbit for a given force law, but this usually involves the solution to a second order nonlinear, ordinary differential equation. A unique solution is impossible in the case of circular motion about the center of force. Equation The shape of an orbit is often conveniently described in terms of relative distance as a function of angle . For the Binet equation, the orbital shape is instead more concisely described by the reciprocal as a function of . Define the specific angular momentum as where is the angular momentum and is the mass. The Binet equation, derived in the next section, gives the force in terms of the function : Derivation Newton's Second Law for a purely central force is The conservation of angular momentum requires that Derivatives of with respect to time may be rewritten as derivatives of with respect to angle: Combining all of the above, we arrive at The general solution is where is the initial coordinate of the particle. Examples Kepler problem Classical The traditional Kepler problem of calculating the orbit of an inverse square law may be read off from the Binet equation as the solution to the differential equation If the angle is measured from the periapsis, then the general solution for the orbit expressed in (reciprocal) polar coordinates is The above polar equation describes conic sections, with the semi-latus rectum (equal to ) and the orbital eccentricity. Relativistic The relativistic equation derived for Schwarzschild coordinates is where is the speed of light and is the Schwarzschild radius. And for Reissner–Nordström metric we will obtain where is the electric charge and is the vacuum permittivity. Inverse Kepler problem Consider the inverse Kepler problem. What kind of force law produces a noncircular elliptical orbit (or more generally a noncircular conic section) around a focus of the ellipse? Differentiating twice the above polar equation for an ellipse gives The force law is therefore which is the anticipated inverse square law. Matching the orbital to physical values like or reproduces Newton's law of universal gravitation or Coulomb's law, respectively. The effective force for Schwarzschild coordinates is where the second term is an inverse-quartic force corresponding to quadrupole effects such as the angular shift of periapsis (It can be also obtained via retarded potentials). In the parameterized post-Newtonian formalism we will obtain where for the general relativity and in the classical case. Cotes spirals An inverse cube force law has the form The shapes of the orbits of an inverse cube law are known as Cotes spirals. The Binet equation shows that the orbits must be solutions to the equation The differential equation has three kinds of solutions, in analogy to the different conic sections of the Kepler problem. When , the solution is the epispiral, including the pathological case of a straight line when . When , the solution is the hyperbolic spiral. When the solution is Poinsot's spiral. Off-axis circular motion Although the Binet equation fails to give a unique force law for circular motion about the center of force, the equation can provide a force law when the circle's center and the center of force do not coincide. Consider for example a circular orbit that passes directly through the center of force. 
A (reciprocal) polar equation for such a circular orbit of diameter \(D\) is \(u(\theta) = \tfrac{1}{D}\sec\theta\). Differentiating twice and making use of the Pythagorean identity gives \(\tfrac{d^2u}{d\theta^2} + u = 2D^2u^3\). The force law is thus \(F = -2mh^2D^2u^5 = -\tfrac{2mh^2D^2}{r^5}\), an inverse fifth-power law. Note that solving the general inverse problem, i.e. constructing the orbits of an attractive force law \(F(r)\), is a considerably more difficult problem because it is equivalent to solving \(\tfrac{d^2u}{d\theta^2} + u = -\tfrac{1}{mh^2u^2}F\!\left(\tfrac{1}{u}\right)\), which is a second order nonlinear differential equation. See also Classical central-force problem General relativity Two-body problem in general relativity Bertrand's theorem References Classical mechanics Eponymous laws of physics
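As a sanity check on the inverse Kepler calculation, the differentiation can be done symbolically: for the conic u(θ) = (1 + ε cos θ)/ℓ, the Binet combination u″ + u collapses to the constant 1/ℓ, so the recovered force is proportional to u², i.e. an inverse-square law. The short sketch below carries this out; the symbol names are illustrative only.

```python
# Symbolic check of the inverse Kepler problem via the Binet equation:
# for u(theta) = (1 + e*cos(theta))/l, F = -m*h**2*u**2*(u'' + u) should be an inverse-square law.
import sympy as sp

theta, l, e, m, h = sp.symbols("theta l e m h", positive=True)
u = (1 + e * sp.cos(theta)) / l

binet_combination = sp.simplify(sp.diff(u, theta, 2) + u)
print(binet_combination)   # -> 1/l, independent of theta

F = sp.simplify(-m * h**2 * u**2 * binet_combination)
print(F)                   # -> -h**2*m*(e*cos(theta) + 1)**2/l**3, proportional to -u**2, i.e. -1/r**2
```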
Binet equation
Physics
820
2,055,144
https://en.wikipedia.org/wiki/Upgrader
An upgrader is a facility that upgrades bitumen (extra heavy oil) into synthetic crude oil. Upgrader plants are typically located close to oil sands production, for example, the Athabasca oil sands in Alberta, Canada or the Orinoco tar sands in Venezuela. Processes Upgrading means using fractional distillation and/or chemical treatment to convert bitumen so it can be handled by oil refineries. At a minimum, this means reducing its viscosity so that it can be pumped through pipelines (bitumen is 1000x more viscous than light crude oil). However this process often also includes separating out heavy fractions and reducing sulfur, nitrogen and metals like nickel and vanadium. Upgrading may involve multiple processes: Vacuum distillation to separate lighter fractions, leaving behind a residue with molecular weights over 400. De-asphalting the vacuum distillation residue to remove the highest molecular weight alicyclic compounds, which precipitate as black/brown asphaltenes when the mixture is dissolved in C3–C7 alkanes, leaving "de-asphalted oil" (DAO) in solution. A mixture of propane and butane will remove metallic compounds that would interfere with hydrotreating. Cracking to break long chain molecules in the DAO into shorter ones. Hydrotreating may also be employed to remove sulfur and reduce the level of nitrogen. Research into using biotechnology to perform some of these processes at lower temperatures and cost is ongoing. See also Canadian Centre for Energy Information History of the petroleum industry in Canada (oil sands and heavy oil) Scotford Upgrader Syncrude Suncor Visbreaker References Further reading University of Alberta Tutorial on Upgrading of Oilsands Bitumen Bituminous sands Petroleum production Petroleum technology Industrial buildings and structures Chemical plants
Upgrader
Chemistry,Engineering
367
36,697,260
https://en.wikipedia.org/wiki/Local%20area%20emergency
A local area emergency (SAME code: LAE) is an advisory issued by local authorities through the Emergency Alert System (EAS) in the United States to notify the public of an event that does not pose a significant threat to public safety and/or property by itself, but could escalate, contribute to other more serious events, or disrupt critical public safety services. Instructions, other than public protective actions, may be provided. Examples include: a disruption in water, electric or natural gas service, road closures due to excessive snowfall, or a potential terrorist threat where the public is asked to remain alert. Example URGENT - IMMEDIATE BROADCAST REQUESTED LOCAL AREA EMERGENCY TENNESSEE EMERGENCY MANAGEMENT AGENCY NASHVILLE TN RELAYED BY NATIONAL WEATHER SERVICE HUNTSVILLE AL 1259 PM CDT SAT SEP 21 2013 ...LOCAL AREA EMERGENCY FOR LINCOLN AND MOORE COUNTIES... THE FOLLOWING MESSAGE IS TRANSMITTED AT THE REQUEST OF THE TENNESSEE EMERGENCY MANAGEMENT AGENCY NASHVILLE TN. DUCK RIVER ELECTRIC CUSTOMERS WILL BE WITHOUT POWER FOR ABOUT 4 HOURS BEGINNING 12 AM ON SUNDAY SEPTEMBER 22ND. THE OUTAGE WILL AFFECT ALL OF MOORE COUNTY AND PART OF LINCOLN COUNTY AROUND BOONEVILLE. THE OUTAGE WILL ALLOW DUCK RIVER AND TVA TO PERFORM SYSTEM MAINTENANCE. IF YOU RELY ON MEDICAL EQUIPMENT DEPENDENT ON ELECTRICITY, HAVE A PLAN TO MAINTAIN USE OF YOUR EQUIPMENT DURING THE OUTAGE. CHARGE YOUR CELL PHONES. CORDLESS PHONES WILL NOT WORK DURING THE POWER OUTAGE. UNPLUG COMPUTER, TELEVISIONS, AND SENSITIVE ELECTRONICS. NOTIFY YOUR HOME SECURITY COMPANY. KNOW HOW TO MANUALLY OPERATE GARAGE DOORS AND ELECTRIC GATES. MINIMIZE OPENING REFRIGERATOR AND FREEZER DOORS DURING THE OUTAGE. IF YOU HAVE A GENERATOR, MAKE SURE IT HAS BEEN INSTALLED PROPERLY. CHECK TO MAKE SURE ALL HEAT PRODUCING APPLIANCES SUCH AS STOVES, TOASTER OVENS, AND IRONS ARE TURNED OFF. IF ELECTRICITY IS REQUIRED TO RUN YOUR WATER OR TO REFILL YOUR TOILET FOR FLUSHING, HAVE A RESERVE OF WATER ON HAND PRIOR TO THE PLANNED POWER OUTAGE. NEVER USE A GAS RANGE, INDOOR COOKER, CHARCOAL OR GAS BARBEQUE FOR HEATING. CALL 911 IF YOU HAVE AN EMERGENCY. References http://www.srh.noaa.gov/bro/?n=mapcolors http://www.srh.noaa.gov/lub/?n=nonweathercemdescriptions Emergency Alert System National Weather Service Warning systems
Local area emergency
Technology,Engineering
481
804,098
https://en.wikipedia.org/wiki/Multi-Use%20Radio%20Service
In the United States, the Multi-Use Radio Service (MURS) is a licensed by rule (i.e. under part 95, subpart J, of title 47, Code of Federal Regulations) two-way radio service similar to the Citizens band (CB). Established by the U.S. Federal Communications Commission in the fall of 2000, MURS created a radio service allowing for licensed by rule (Part 95) operation in a narrow selection of the VHF band, with a power limit of 2 watts. The FCC formally defines MURS as "a private, two-way, short-distance voice or data communications service for personal or business activities of the general public." MURS stations may not be connected to the public telephone network, may not be used for store and forward operations, and radio repeaters are not permitted. In 2009, Industry Canada (IC) established a five-year transition plan, which would have permitted the use of MURS in Canada starting June 2014. In August 2014 IC announced a deferral of MURS introduction, as "the Department does not feel that the introduction of MURS devices in Canada is warranted at this time, and has decided to defer the introduction of MURS devices in Canada until a clearer indication of actual need is provided by Canadian MURS advocates and/or stakeholders ..." Eligibility No licenses are required or issued for MURS within the United States. Any person is authorized to use the MURS frequencies given that it: Is not a foreign government or a representative of a foreign government. Uses the transmitter in accordance with 47 CFR. 95.1309. Operates in accordance with the rules contained in Sections 95.1301-95.1309. Operates only legal, type-accepted MURS equipment. Frequencies MURS comprises the following five frequencies: Channels 1–3 must use "narrowband" frequency modulation (2.5 kHz deviation; 11.25 kHz bandwidth). Channels 4 and 5 may use either "wideband" FM (5 kHz deviation; 20 kHz bandwidth) or "narrowband" FM. All five channels may use amplitude modulation with a bandwidth up to 8 kHz. MURS falls under part 95 and was not mandated for narrow-banding, such as those of Part 90 in the public service bands by January 2013. Because previous business band licensees who have maintained their active license remain grandfathered with their existing operating privileges, it is possible to find repeaters or other operations not authorized by Part 95 taking place. These are not necessarily illegal. If legal, such operations may enjoy primary status on their licensed frequency and as such are legally protected from harmful interference by MURS users. Range MURS range will vary, depending on antenna size and placement. With an external antenna, ranges of or more can be expected. Since MURS radios use frequencies in the VHF business band, they are subject to obstructions in line of sight, which includes the curvature of the Earth. The higher you can place the antennas on both transmit and receive sides (within legal limits), the further you can transmit and receive. Some antenna manufacturers claim an external antenna can increase the effective radiated power of a transmitter by a factor of 4. Authorized modes Permitted areas of operation MURS operation is authorized anywhere a CB radio station is authorized and within or over any area of the world where radio services are regulated by the FCC. 
Those areas are within the territorial limits of: The fifty United States The District of Columbia American Samoa (seven islands) Baker Island Caribbean Insular areas Commonwealth of Northern Mariana Islands Commonwealth of Puerto Rico Guam Island Howland Island Jarvis Island Johnston Atoll (Islets East, Johnston, North and Sand) Kingman Reef Midway Atoll (Islets Eastern and Sand) Navassa Island Pacific Insular areas Palmyra Atoll (more than fifty islets) United States Virgin Islands (50 islets and cays) Wake Island Aboard any vessel of the United States, with the permission of the captain, while the vessel is traveling either domestically or in international waters Restrictions Transmitter power output is limited to 2 watts. The highest point of any MURS antenna must not be more than above the ground or above the highest point of the structure to which it is mounted, whichever is higher. Transmitting on MURS frequencies is not allowed while aboard aircraft in flight. Devices that use MURS must be specially labeled and certified. Products There are a wide variety of radio products that use MURS frequencies. MURS devices include wireless base station intercoms, handheld two-way radios, wireless dog training collars, wireless public address units, customer service callboxes, wireless remote switches, and wireless callboxes with or without gate opening ability. Since MURS uses standard frequencies, most devices that use MURS are compatible with each other. Most analog two-way radios utilize a technology called CTCSS or DCS that helps block out unwanted transmissions. To make MURS two-way radios work together, they must have matching CTCSS or DCS tones. This can usually be done via basic programming which almost all MURS two-way radios support. The goTenna, a digital radio product, operates on the MURS band and pairs with smartphones to enable users to send texts and share locations on a peer-to-peer basis. goTenna is not interoperable with other MURS devices, even though they operate on the same spectrum, employing "listen-before-talk" to reduce interference in the band's five channels. Notable users According to Bill Fawcett's Spaniel Journal, Spaniel pro-handler Dan Langhans was given a set of VHF business-band radios on the frequency of 154.57 MHz which became known by the trade as "blue dot" radios. Costco Wholesale use Motorola DTR600, DLR1020, and Motorola Curve on Frequency 1 for general use among employees and Frequency 2 for communication with major sales departments. Walmart and Sam's Club use a Motorola Solutions model Motorola RDM2070D, which is exclusive to Walmart and Sam's Club. The Motorola RDM2070D is preprogrammed on MURS frequencies with most channels using CTCSS tone 21/4Z/136.5Hz. See also Business band Family Radio Service General Mobile Radio Service Public Radio Service Unlicensed Personal Communications Services References External links FCC Wireless Services: MURS Home MURS Rules Summary Bandplans Radio communications Radio regulations
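For quick reference, the five MURS channels discussed throughout this article are commonly listed as 151.820, 151.880, 151.940, 154.570 and 154.600 MHz. The small sketch below encodes those frequencies together with the bandwidth and power rules described above; the dictionary layout and helper function are purely illustrative.

```python
# MURS channel plan sketch. Frequencies are the five FCC-allocated MURS channels;
# bandwidth limits follow the narrowband/wideband rules described in the text.
MURS_CHANNELS = {
    1: {"freq_mhz": 151.820, "max_bandwidth_khz": 11.25},  # narrowband FM only
    2: {"freq_mhz": 151.880, "max_bandwidth_khz": 11.25},  # narrowband FM only
    3: {"freq_mhz": 151.940, "max_bandwidth_khz": 11.25},  # narrowband FM only
    4: {"freq_mhz": 154.570, "max_bandwidth_khz": 20.0},   # "blue dot"; wide or narrow FM
    5: {"freq_mhz": 154.600, "max_bandwidth_khz": 20.0},   # wide or narrow FM
}

MAX_TX_POWER_W = 2.0  # transmitter output limit on all MURS channels

def channel_for(freq_mhz, tolerance_khz=1.0):
    """Return the MURS channel number for a frequency, or None if it is not a MURS channel."""
    for ch, info in MURS_CHANNELS.items():
        if abs(info["freq_mhz"] - freq_mhz) * 1000.0 <= tolerance_khz:
            return ch
    return None

print(channel_for(154.570))  # -> 4
```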
Multi-Use Radio Service
Engineering
1,311
14,881,305
https://en.wikipedia.org/wiki/40S%20ribosomal%20protein%20S18
40S ribosomal protein S18 is a protein that in humans is encoded by the RPS18 gene. Ribosomes, the organelles that catalyze protein synthesis, consist of a small 40S subunit and a large 60S subunit. Together these subunits are composed of 4 RNA species and approximately 80 structurally distinct proteins. This gene encodes a ribosomal protein that is a component of the 40S subunit. The protein belongs to the S13P family of ribosomal proteins. It is located in the cytoplasm. The gene product of the E. coli ortholog (ribosomal protein S13) is involved in the binding of fMet-tRNA, and thus, in the initiation of translation. This gene is an ortholog of mouse Ke3. As is typical for genes encoding ribosomal proteins, there are multiple processed pseudogenes of this gene dispersed through the genome. References Further reading External links Ribosomal proteins
40S ribosomal protein S18
Chemistry
196
65,318,331
https://en.wikipedia.org/wiki/Ammonia%20%2813N%29
Ammonia (13N) (ammonia with radioisotope nitrogen-13) is a medication for diagnostic positron emission tomography (PET) imaging of the myocardium. References External links Radiopharmaceuticals
Ammonia (13N)
Chemistry
63
17,978,307
https://en.wikipedia.org/wiki/Delta%20E
The Delta E, or Thor-Delta E, was an American expendable launch system used for twenty-three orbital launches between 1965 and 1971. It was a member of the Delta family of rockets. The first stage was a Thor missile in the DSV-2C configuration, and the second stage was the Delta-E, which was derived from the earlier Delta-A. Three Castor-1 solid rocket boosters were clustered around the first stage. Two different solid-fuel upper stages were available; an Altair-2 was used on the baseline version, but it could be replaced with an FW-4D to increase performance. A Delta E with the FW-4D upper stage was designated Delta E1. Six flights used the Delta E configuration and seventeen used the Delta E1. Delta E rockets were launched from Cape Canaveral Air Force Station Launch Complex 17 and Vandenberg Air Force Base Space Launch Complex 2E. All 23 flights were successful. On December 16, 1965, a Delta E launched the Pioneer 6 space probe. References Delta (rocket family)
Delta E
Astronomy
221
32,167
https://en.wikipedia.org/wiki/Ubiquitin
Ubiquitin is a small (8.6 kDa) regulatory protein found in most tissues of eukaryotic organisms, i.e., it is found ubiquitously. It was discovered in 1975 by Gideon Goldstein and further characterized throughout the late 1970s and 1980s. Four genes in the human genome code for ubiquitin: UBB, UBC, UBA52 and RPS27A. The addition of ubiquitin to a substrate protein is called ubiquitylation (or ubiquitination or ubiquitinylation). Ubiquitylation affects proteins in many ways: it can mark them for degradation via the proteasome, alter their cellular location, affect their activity, and promote or prevent protein interactions. Ubiquitylation involves three main steps: activation, conjugation, and ligation, performed by ubiquitin-activating enzymes (E1s), ubiquitin-conjugating enzymes (E2s), and ubiquitin ligases (E3s), respectively. The result of this sequential cascade is to bind ubiquitin to lysine residues on the protein substrate via an isopeptide bond, cysteine residues through a thioester bond; serine, threonine, and tyrosine residues through an ester bond; or the amino group of the protein's N-terminus via a peptide bond. The protein modifications can be either a single ubiquitin protein (monoubiquitylation) or a chain of ubiquitin (polyubiquitylation). Secondary ubiquitin molecules are always linked to one of the seven lysine residues or the N-terminal methionine of the previous ubiquitin molecule. These 'linking' residues are represented by a "K" or "M" (the one-letter amino acid notation of lysine and methionine, respectively) and a number, referring to its position in the ubiquitin molecule as in K48, K29 or M1. The first ubiquitin molecule is covalently bound through its C-terminal carboxylate group to a particular lysine, cysteine, serine, threonine or N-terminus of the target protein. Polyubiquitylation occurs when the C-terminus of another ubiquitin is linked to one of the seven lysine residues or the first methionine on the previously added ubiquitin molecule, creating a chain. This process repeats several times, leading to the addition of several ubiquitins. Only polyubiquitylation on defined lysines, mostly on K48 and K29, is related to degradation by the proteasome (referred to as the "molecular kiss of death"), while other polyubiquitylations (e.g. on K63, K11, K6 and M1) and monoubiquitylations may regulate processes such as endocytic trafficking, inflammation, translation and DNA repair. The discovery that ubiquitin chains target proteins to the proteasome, which degrades and recycles proteins, was honored with the Nobel Prize in Chemistry in 2004. Identification Ubiquitin (originally, ubiquitous immunopoietic polypeptide) was first identified in 1975 as an 8.6 kDa protein expressed in all eukaryotic cells. The basic functions of ubiquitin and the components of the ubiquitylation pathway were elucidated in the early 1980s at the Technion by Aaron Ciechanover, Avram Hershko, and Irwin Rose for which the Nobel Prize in Chemistry was awarded in 2004. The ubiquitylation system was initially characterised as an ATP-dependent proteolytic system present in cellular extracts. A heat-stable polypeptide present in these extracts, ATP-dependent proteolysis factor 1 (APF-1), was found to become covalently attached to the model protein substrate lysozyme in an ATP- and Mg2+-dependent process. Multiple APF-1 molecules were linked to a single substrate molecule by an isopeptide linkage, and conjugates were found to be rapidly degraded with the release of free APF-1. 
Soon after APF-1-protein conjugation was characterised, APF-1 was identified as ubiquitin. The carboxyl group of the C-terminal glycine residue of ubiquitin (Gly76) was identified as the moiety conjugated to substrate lysine residues. The protein Ubiquitin is a small protein that exists in all eukaryotic cells. It performs its myriad functions through conjugation to a large range of target proteins. A variety of different modifications can occur. The ubiquitin protein itself consists of 76 amino acids and has a molecular mass of about 8.6 kDa. Key features include its C-terminal tail and the 7 lysine residues. It is highly conserved throughout eukaryote evolution; human and yeast ubiquitin share 96% sequence identity. Genes Ubiquitin is encoded in mammals by four different genes. UBA52 and RPS27A genes code for a single copy of ubiquitin fused to the ribosomal proteins L40 and S27a, respectively. The UBB and UBC genes code for polyubiquitin precursor proteins. Ubiquitylation Ubiquitylation (also known as ubiquitination or ubiquitinylation) is an enzymatic post-translational modification in which an ubiquitin protein is attached to a substrate protein. This process most commonly binds the last amino acid of ubiquitin (glycine 76) to a lysine residue on the substrate. An isopeptide bond is formed between the carboxyl group (COO−) of the ubiquitin's glycine and the epsilon-amino group (ε-) of the substrate's lysine. Trypsin cleavage of a ubiquitin-conjugated substrate leaves a di-glycine "remnant" that is used to identify the site of ubiquitylation. Ubiquitin can also be bound to other sites in a protein which are electron-rich nucleophiles, termed "non-canonical ubiquitylation". This was first observed with the amine group of a protein's N-terminus being used for ubiquitylation, rather than a lysine residue, in the protein MyoD and has been observed since in 22 other proteins in multiple species, including ubiquitin itself. There is also increasing evidence for nonlysine residues as ubiquitylation targets using non-amine groups, such as the sulfhydryl group on cysteine, and the hydroxyl group on threonine and serine. The end result of this process is the addition of one ubiquitin molecule (monoubiquitylation) or a chain of ubiquitin molecules (polyubiquitination) to the substrate protein. Ubiquitination requires three types of enzyme: ubiquitin-activating enzymes, ubiquitin-conjugating enzymes, and ubiquitin ligases, known as E1s, E2s, and E3s, respectively. The process consists of three main steps: Activation: Ubiquitin is activated in a two-step reaction by an E1 ubiquitin-activating enzyme, which is dependent on ATP. The initial step involves production of a ubiquitin-adenylate intermediate. The E1 binds both ATP and ubiquitin and catalyses the acyl-adenylation of the C-terminus of the ubiquitin molecule. The second step transfers ubiquitin to an active site cysteine residue, with release of AMP. This step results in a thioester linkage between the C-terminal carboxyl group of ubiquitin and the E1 cysteine sulfhydryl group. The human genome contains two genes that produce enzymes capable of activating ubiquitin: UBA1 and UBA6. Conjugation: E2 ubiquitin-conjugating enzymes catalyse the transfer of ubiquitin from E1 to the active site cysteine of the E2 via a trans(thio)esterification reaction. In order to perform this reaction, the E2 binds to both activated ubiquitin and the E1 enzyme. Humans possess 35 different E2 enzymes, whereas other eukaryotic organisms have between 16 and 35. 
They are characterised by their highly conserved structure, known as the ubiquitin-conjugating catalytic (UBC) fold. Ligation: E3 ubiquitin ligases catalyse the final step of the ubiquitylation cascade. Most commonly, they create an isopeptide bond between a lysine of the target protein and the C-terminal glycine of ubiquitin. In general, this step requires the activity of one of the hundreds of E3s. E3 enzymes function as the substrate recognition modules of the system and are capable of interaction with both E2 and substrate. Some E3 enzymes also activate the E2 enzymes. E3 enzymes possess one of two domains: the homologous to the E6-AP carboxyl terminus (HECT) domain and the really interesting new gene (RING) domain (or the closely related U-box domain). HECT domain E3s transiently bind ubiquitin in this process (an obligate thioester intermediate is formed with the active-site cysteine of the E3), whereas RING domain E3s catalyse the direct transfer from the E2 enzyme to the substrate. The anaphase-promoting complex (APC) and the SCF complex (for Skp1-Cullin-F-box protein complex) are two examples of multi-subunit E3s involved in recognition and ubiquitylation of specific target proteins for degradation by the proteasome. In the ubiquitylation cascade, E1 can bind with many E2s, which can bind with hundreds of E3s in a hierarchical way. Having levels within the cascade allows tight regulation of the ubiquitylation machinery. Other ubiquitin-like proteins (UBLs) are also modified via the E1–E2–E3 cascade, although variations in these systems do exist. E4 enzymes, or ubiquitin-chain elongation factors, are capable of adding pre-formed polyubiquitin chains to substrate proteins. For example, multiple monoubiquitylation of the tumor suppressor p53 by Mdm2 can be followed by addition of a polyubiquitin chain using p300 and CBP. Types Ubiquitylation affects cellular process by regulating the degradation of proteins (via the proteasome and lysosome), coordinating the cellular localization of proteins, activating and inactivating proteins, and modulating protein–protein interactions. These effects are mediated by different types of substrate ubiquitylation, for example the addition of a single ubiquitin molecule (monoubiquitylation) or different types of ubiquitin chains (polyubiquitylation). Monoubiquitylation Monoubiquitylation is the addition of one ubiquitin molecule to one substrate protein residue. Multi-monoubiquitylation is the addition of one ubiquitin molecule to multiple substrate residues. The monoubiquitylation of a protein can have different effects to the polyubiquitylation of the same protein. The addition of a single ubiquitin molecule is thought to be required prior to the formation of polyubiquitin chains. Monoubiquitylation affects cellular processes such as membrane trafficking, endocytosis and viral budding. Polyubiquitin chains Polyubiquitylation is the formation of a ubiquitin chain on a single lysine residue on the substrate protein. Following addition of a single ubiquitin moiety to a protein substrate, further ubiquitin molecules can be added to the first, yielding a polyubiquitin chain. These chains are made by linking the glycine residue of a ubiquitin molecule to a lysine of ubiquitin bound to a substrate. Ubiquitin has seven lysine residues and an N-terminus that serves as points of ubiquitination; they are K6, K11, K27, K29, K33, K48, K63 and M1, respectively. 
Lysine 48-linked chains were the first identified and are the best-characterised type of ubiquitin chain. K63 chains have also been well-characterised, whereas the function of other lysine chains, mixed chains, branched chains, M1-linked linear chains, and heterologous chains (mixtures of ubiquitin and other ubiquitin-like proteins) remains more unclear. Lysine 48-linked polyubiquitin chains target proteins for destruction, by a process known as proteolysis. Multi-ubiquitin chains at least four ubiquitin molecules long must be attached to a lysine residue on the condemned protein in order for it to be recognised by the 26S proteasome. This is a barrel-shape structure comprising a central proteolytic core made of four ring structures, flanked by two cylinders that selectively allow entry of ubiquitylated proteins. Once inside, the proteins are rapidly degraded into small peptides (usually 3–25 amino acid residues in length). Ubiquitin molecules are cleaved off the protein immediately prior to destruction and are recycled for further use. Although the majority of protein substrates are ubiquitylated, there are examples of non-ubiquitylated proteins targeted to the proteasome. The polyubiquitin chains are recognised by a subunit of the proteasome: S5a/Rpn10. This is achieved by a ubiquitin-interacting motif (UIM) found in a hydrophobic patch in the C-terminal region of the S5a/Rpn10 unit. Lysine 63-linked chains are not associated with proteasomal degradation of the substrate protein. Instead, they allow the coordination of other processes such as endocytic trafficking, inflammation, translation, and DNA repair. In cells, lysine 63-linked chains are bound by the ESCRT-0 complex, which prevents their binding to the proteasome. This complex contains two proteins, Hrs and STAM1, that contain a UIM, which allows it to bind to lysine 63-linked chains. Methionine 1-linked (or linear) polyubiquitin chains are another type of non-degradative ubiquitin chains. In this case, ubiquitin is linked in a head-to-tail manner, meaning that the C-terminus of the last ubiquitin molecule binds directly to the N-terminus of the next one. Although initially believed to target proteins for proteasomal degradation, linear ubiquitin later proved to be indispensable for NF-kB signaling. Currently, there is only one known E3 ubiquitin ligase generating M1-linked polyubiquitin chains - linear ubiquitin chain assembly complex (LUBAC). Less is understood about atypical (non-lysine 48-linked) ubiquitin chains but research is starting to suggest roles for these chains. There is evidence that atypical chains linked by lysine 6, 11, 27, 29 and methionine 1 can induce proteasomal degradation. Branched ubiquitin chains containing multiple linkage types can be formed. The function of these chains is unknown. Structure Differently linked chains have specific effects on the protein to which they are attached, caused by differences in the conformations of the protein chains. K29-, K33-, K63- and M1-linked chains have a fairly linear conformation; they are known as open-conformation chains. K6-, K11-, and K48-linked chains form closed conformations. The ubiquitin molecules in open-conformation chains do not interact with each other, except for the covalent isopeptide bonds linking them together. In contrast, the closed conformation chains have interfaces with interacting residues. 
Altering the chain conformations exposes and conceals different parts of the ubiquitin protein, and the different linkages are recognized by proteins that are specific for the unique topologies that are intrinsic to the linkage. Proteins can specifically bind to ubiquitin via ubiquitin-binding domains (UBDs). The distances between individual ubiquitin units in chains differ between lysine 63- and 48-linked chains. The UBDs exploit this by having small spacers between ubiquitin-interacting motifs that bind lysine 48-linked chains (compact ubiquitin chains) and larger spacers for lysine 63-linked chains. The machinery involved in recognising polyubiquitin chains can also differentiate between K63-linked chains and M1-linked chains, demonstrated by the fact that the latter can induce proteasomal degradation of the substrate. Function The ubiquitylation system functions in a wide variety of cellular processes, including: Antigen processing Apoptosis Biogenesis of organelles Cell cycle and division DNA transcription and repair Differentiation and development Immune response and inflammation Neural and muscular degeneration Maintenance of pluripotency Morphogenesis of neural networks Modulation of cell surface receptors, ion channels and the secretory pathway Response to stress and extracellular modulators Ribosome biogenesis Viral infection Membrane proteins Multi-monoubiquitylation can mark transmembrane proteins (for example, receptors) for removal from membranes (internalisation) and fulfil several signalling roles within the cell. When cell-surface transmembrane molecules are tagged with ubiquitin, the subcellular localization of the protein is altered, often targeting the protein for destruction in lysosomes. This serves as a negative feedback mechanism, because often the stimulation of receptors by ligands increases their rate of ubiquitylation and internalisation. Like monoubiquitylation, lysine 63-linked polyubiquitin chains also have a role in the trafficking of some membrane proteins. Genomic maintenance Proliferating cell nuclear antigen (PCNA) is a protein involved in DNA synthesis. Under normal physiological conditions PCNA is sumoylated (a similar post-translational modification to ubiquitylation). When DNA is damaged by ultra-violet radiation or chemicals, the SUMO molecule that is attached to a lysine residue is replaced by ubiquitin. Monoubiquitylated PCNA recruits polymerases that can carry out DNA synthesis with damaged DNA, but this is very error-prone, possibly resulting in the synthesis of mutated DNA. Lysine 63-linked polyubiquitylation of PCNA allows it to perform a less error-prone form of damage bypass known as the template switching pathway. Ubiquitylation of histone H2AX is involved in the recognition of DNA double-strand breaks. Lysine 63-linked polyubiquitin chains are formed on the H2AX histone by the E2/E3 ligase pair, Ubc13-Mms2/RNF168. This K63 chain appears to recruit RAP80, which contains a UIM, and RAP80 then helps localize BRCA1. This pathway will eventually recruit the necessary proteins for homologous recombination repair. Transcriptional regulation Histones can be ubiquitinated, usually in the form of monoubiquitylation, although polyubiquitylated forms do occur. Histone ubiquitylation alters chromatin structure and allows the access of enzymes involved in transcription. Ubiquitin on histones also acts as a binding site for proteins that either activate or inhibit transcription and can also induce further post-translational modifications of the protein. 
These effects can all modulate the transcription of genes. Deubiquitination Deubiquitinating enzymes (deubiquitinases; DUBs) oppose the role of ubiquitylation by removing ubiquitin from substrate proteins. They are cysteine proteases that cleave the amide bond between the two proteins. They are highly specific, as are the E3 ligases that attach the ubiquitin, with only a few substrates per enzyme. They can cleave both isopeptide (between ubiquitin and lysine) and peptide bonds (between ubiquitin and the N-terminus). In addition to removing ubiquitin from substrate proteins, DUBs have many other roles within the cell. Ubiquitin is either expressed as multiple copies joined in a chain (polyubiquitin) or attached to ribosomal subunits. DUBs cleave these proteins to produce active ubiquitin. They also recycle ubiquitin that has been bound to small nucleophilic molecules during the ubiquitylation process. Monoubiquitin is formed by DUBs that cleave ubiquitin from free polyubiquitin chains that have been previously removed from proteins. Ubiquitin-binding domains Ubiquitin-binding domains (UBDs) are modular protein domains that non-covalently bind to ubiquitin, these motifs control various cellular events. Detailed molecular structures are known for a number of UBDs, binding specificity determines their mechanism of action and regulation, and how it regulates cellular proteins and processes. Disease associations Pathogenesis The ubiquitin pathway has been implicated in the pathogenesis of a wide range of diseases and disorders, including: Neurodegeneration Infection and immunity Genetic disorders Cancer Neurodegeneration Ubiquitin is implicated in neurodegenerative diseases associated with proteostasis dysfunction, including Alzheimer's disease, motor neuron disease, Huntington's disease and Parkinson's disease. Transcript variants encoding different isoforms of ubiquilin-1 are found in lesions associated with Alzheimer's and Parkinson's disease. Higher levels of ubiquilin in the brain have been shown to decrease malformation of amyloid precursor protein (APP), which plays a key role in triggering Alzheimer's disease. Conversely, lower levels of ubiquilin-1 in the brain have been associated with increased malformation of APP. A frameshift mutation in ubiquitin B can result in a truncated peptide missing the C-terminal glycine. This abnormal peptide, known as UBB+1, has been shown to accumulate selectively in Alzheimer's disease and other tauopathies. Infection and immunity Ubiquitin and ubiquitin-like molecules extensively regulate immune signal transduction pathways at virtually all stages, including steady-state repression, activation during infection, and attenuation upon clearance. Without this regulation, immune activation against pathogens may be defective, resulting in chronic disease or death. Alternatively, the immune system may become hyperactivated and organs and tissues may be subjected to autoimmune damage. On the other hand, viruses must block or redirect host cell processes including immunity to effectively replicate, yet many viruses relevant to disease have informationally limited genomes. Because of its very large number of roles in the cell, manipulating the ubiquitin system represents an efficient way for such viruses to block, subvert or redirect critical host cell processes to support their own replication. The retinoic acid-inducible gene I (RIG-I) protein is a primary immune system sensor for viral and other invasive RNA in human cells. 
The RIG-I-like receptor (RLR) immune signaling pathway is one of the most extensively studied in terms of the role of ubiquitin in immune regulation. Genetic disorders Angelman syndrome is caused by a disruption of UBE3A, which encodes a ubiquitin ligase (E3) enzyme termed E6-AP. Von Hippel–Lindau syndrome involves disruption of a ubiquitin E3 ligase termed the VHL tumor suppressor, or VHL gene. Fanconi anemia: Eight of the thirteen identified genes whose disruption can cause this disease encode proteins that form a large ubiquitin ligase (E3) complex. 3-M syndrome is an autosomal-recessive growth retardation disorder associated with mutations of the Cullin7 E3 ubiquitin ligase. Diagnostic use Immunohistochemistry using antibodies to ubiquitin can identify abnormal accumulations of this protein inside cells, indicating a disease process. These protein accumulations are referred to as inclusion bodies (which is a general term for any microscopically visible collection of abnormal material in a cell). Examples include: Neurofibrillary tangles in Alzheimer's disease Lewy body in Parkinson's disease Pick bodies in Pick's disease Inclusions in motor neuron disease and Huntington's disease Mallory bodies in alcoholic liver disease Rosenthal fibers in astrocytes Link to cancer Post-translational modification of proteins is a generally used mechanism in eukaryotic cell signaling. Ubiquitylation, ubiquitin conjugation to proteins, is a crucial process for cell cycle progression and cell proliferation and development. Although ubiquitylation usually serves as a signal for protein degradation through the 26S proteasome, it could also serve for other fundamental cellular processes, in endocytosis, enzymatic activation and DNA repair. Moreover, since ubiquitylation functions to tightly regulate the cellular level of cyclins, its misregulation is expected to have severe impacts. First evidence of the importance of the ubiquitin/proteasome pathway in oncogenic processes was observed due to the high antitumor activity of proteasome inhibitors. Various studies have shown that defects or alterations in ubiquitylation processes are commonly associated with or present in human carcinoma. Malignancies could be developed through loss of function mutation directly at the tumor suppressor gene, increased activity of ubiquitylation, and/or indirect attenuation of ubiquitylation due to mutation in related proteins. Direct loss of function mutation of E3 ubiquitin ligase Renal cell carcinoma The VHL (Von Hippel–Lindau) gene encodes a component of an E3 ubiquitin ligase. VHL complex targets a member of the hypoxia-inducible transcription factor family (HIF) for degradation by interacting with the oxygen-dependent destruction domain under normoxic conditions. HIF activates downstream targets such as the vascular endothelial growth factor (VEGF), promoting angiogenesis. Mutations in VHL prevent degradation of HIF and thus lead to the formation of hypervascular lesions and renal tumors. Breast cancer The BRCA1 gene is another tumor suppressor gene in humans which encodes the BRCA1 protein that is involved in response to DNA damage. The protein contains a RING motif with E3 Ubiquitin Ligase activity. BRCA1 could form dimer with other molecules, such as BARD1 and BAP1, for its ubiquitylation activity. Mutations that affect the ligase function are often found and associated with various cancers. 
Cyclin E As processes in cell cycle progression are the most fundamental processes for cellular growth and differentiation, and are the most common to be altered in human carcinomas, it is expected for cell cycle-regulatory proteins to be under tight regulation. The level of cyclins, as the name suggests, is high only at certain a time point during the cell cycle. This is achieved by continuous control of cyclins or CDKs levels through ubiquitylation and degradation. When cyclin E is partnered with CDK2 and gets phosphorylated, an SCF-associated F-box protein Fbw7 recognizes the complex and thus targets it for degradation. Mutations in Fbw7 have been found in more than 30% of human tumors, characterizing it as a tumor suppressor protein. Increased ubiquitination activity Cervical cancer Oncogenic types of the human papillomavirus (HPV) are known to hijack cellular ubiquitin-proteasome pathway for viral infection and replication. The E6 proteins of HPV will bind to the N-terminus of the cellular E6-AP E3 ubiquitin ligase, redirecting the complex to bind p53, a well-known tumor suppressor gene whose inactivation is found in many types of cancer. Thus, p53 undergoes ubiquitylation and proteasome-mediated degradation. Meanwhile, E7, another one of the early-expressed HPV genes, will bind to Rb, also a tumor suppressor gene, mediating its degradation. The loss of p53 and Rb in cells allows limitless cell proliferation to occur. p53 regulation Gene amplification often occur in various tumor cases, including of MDM2, a gene encodes for a RING E3 Ubiquitin ligase responsible for downregulation of p53 activity. MDM2 targets p53 for ubiquitylation and proteasomal degradation thus keeping its level appropriate for normal cell condition. Overexpression of MDM2 causes loss of p53 activity and therefore allowing cells to have a limitless replicative potential. p27 Another gene that is a target of gene amplification is SKP2. SKP2 is an F-box protein with a role in substrate recognition for ubiquitylation and degradation. SKP2 targets p27Kip-1, an inhibitor of cyclin-dependent kinases (CDKs). CDKs2/4 partner with the cyclins E/D, respectively, forming a family of cell cycle regulators which control cell cycle progression through the G1 phase. Low level of p27Kip-1 protein is often found in various cancers and is due to overactivation of ubiquitin-mediated proteolysis through overexpression of SKP2. Efp Efp, or estrogen-inducible RING-finger protein, is an E3 ubiquitin ligase whose overexpression has been shown to be the major cause of estrogen-independent breast cancer. Efp's substrate is 14-3-3 protein which negatively regulates cell cycle. Evasion of ubiquitination Colorectal cancer The gene associated with colorectal cancer is the adenomatous polyposis coli (APC), which is a classic tumor suppressor gene. APC gene product targets beta-catenin for degradation via ubiquitylation at the N-terminus, thus regulating its cellular level. Most colorectal cancer cases are found with mutations in the APC gene. However, in cases where APC gene is not mutated, mutations are found in the N-terminus of beta-catenin which renders it ubiquitination-free and thus increased activity. Glioblastoma As the most aggressive cancer originated in the brain, mutations found in patients with glioblastoma are related to the deletion of a part of the extracellular domain of the epidermal growth factor receptor (EGFR). 
This deletion causes CBL E3 ligase unable to bind to the receptor for its recycling and degradation via a ubiquitin-lysosomal pathway. Thus, EGFR is constitutively active in the cell membrane and activates its downstream effectors that are involved in cell proliferation and migration. Phosphorylation-dependent ubiquitylation The interplay between ubiquitylation and phosphorylation has been an ongoing research interest since phosphorylation often serves as a marker where ubiquitylation leads to degradation. Moreover, ubiquitylation can also act to turn on/off the kinase activity of a protein. The critical role of phosphorylation is largely underscored in the activation and removal of autoinhibition in the Cbl protein. Cbl is an E3 ubiquitin ligase with a RING finger domain that interacts with its tyrosine kinase binding (TKB) domain, preventing interaction of the RING domain with an E2 ubiquitin-conjugating enzyme. This intramolecular interaction is an autoinhibition regulation that prevents its role as a negative regulator of various growth factors and tyrosine kinase signaling and T-cell activation. Phosphorylation of Y363 relieves the autoinhibition and enhances binding to E2. Mutations that render the Cbl protein dysfunctional due to the loss of its ligase/tumor suppressor function and maintenance of its positive signaling/oncogenic function have been shown to cause the development of cancer. As a drug target Screening for ubiquitin ligase substrates Deregulation of E3-substrate interactions is a key cause of many human disorders, therefore identifying E3 ligase substrates is crucial. In 2008, 'Global Protein Stability (GPS) Profiling' was developed to discover E3 ubiquitin ligase substrates. This high-throughput system made use of reporter proteins fused with thousands of potential substrates independently. By inhibition of the ligase activity (through the making of Cul1 dominant negative thus renders ubiquitination not to occur), increased reporter activity shows that the identified substrates are being accumulated. This approach added a large number of new substrates to the list of E3 ligase substrates. Possible therapeutic applications Blocking of specific substrate recognition by the E3 ligases, e.g. bortezomib. Challenge Finding a specific molecule that selectively inhibits the activity of a certain E3 ligase and/or the protein–protein interactions implicated in the disease remains as one of the important and expanding research area. Moreover, as ubiquitination is a multi-step process with various players and intermediate forms, consideration of the much complex interactions between components needs to be taken heavily into account while designing the small molecule inhibitors. Similar proteins Ubiquitin is the most-understood post-translation modifier, however, several family of ubiquitin-like proteins (UBLs) can modify cellular targets in a parallel but distinct route. Known UBLs include: small ubiquitin-like modifier (SUMO), ubiquitin cross-reactive protein (UCRP, also known as interferon-stimulated gene-15 ISG15), ubiquitin-related modifier-1 (URM1), neuronal-precursor-cell-expressed developmentally downregulated protein-8 (NEDD8, also called Rub1 in S. cerevisiae), human leukocyte antigen F-associated (FAT10), autophagy-8 (ATG8) and -12 (ATG12), Few ubiquitin-like protein (FUB1), MUB (membrane-anchored UBL), ubiquitin fold-modifier-1 (UFM1) and ubiquitin-like protein-5 (UBL5, which is but known as homologous to ubiquitin-1 [Hub1] in S. pombe). 
Although these proteins share only modest primary sequence identity with ubiquitin, they are closely related three-dimensionally. For example, SUMO shares only 18% sequence identity, but they contain the same structural fold. This fold is called "ubiquitin fold". FAT10 and UCRP contain two. This compact globular beta-grasp fold is found in ubiquitin, UBLs, and proteins that comprise a ubiquitin-like domain, e.g. the S. cerevisiae spindle pole body duplication protein, Dsk2, and NER protein, Rad23, both contain N-terminal ubiquitin domains. These related molecules have novel functions and influence diverse biological processes. There is also cross-regulation between the various conjugation pathways, since some proteins can become modified by more than one UBL, and sometimes even at the same lysine residue. For instance, SUMO modification often acts antagonistically to that of ubiquitination and serves to stabilize protein substrates. Proteins conjugated to UBLs are typically not targeted for degradation by the proteasome but rather function in diverse regulatory activities. Attachment of UBLs might, alter substrate conformation, affect the affinity for ligands or other interacting molecules, alter substrate localization, and influence protein stability. UBLs are structurally similar to ubiquitin and are processed, activated, conjugated, and released from conjugates by enzymatic steps that are similar to the corresponding mechanisms for ubiquitin. UBLs are also translated with C-terminal extensions that are processed to expose the invariant C-terminal LRGG. These modifiers have their own specific E1 (activating), E2 (conjugating) and E3 (ligating) enzymes that conjugate the UBLs to intracellular targets. These conjugates can be reversed by UBL-specific isopeptidases that have similar mechanisms to that of the deubiquitinating enzymes. Within some species, the recognition and destruction of sperm mitochondria through a mechanism involving ubiquitin is responsible for sperm mitochondria's disposal after fertilization occurs. Prokaryotic origins Ubiquitin is believed to have descended from bacterial proteins similar to ThiS () or MoaD (). These prokaryotic proteins, despite having little sequence identity (ThiS has 14% identity to ubiquitin), share the same protein fold. These proteins also share sulfur chemistry with ubiquitin. MoaD, which is involved in molybdopterin biosynthesis, interacts with MoeB, which acts like an E1 ubiquitin-activating enzyme for MoaD, strengthening the link between these prokaryotic proteins and the ubiquitin system. A similar system exists for ThiS, with its E1-like enzyme ThiF. It is also believed that the Saccharomyces cerevisiae protein Urm1, a ubiquitin-related modifier, is a "molecular fossil" that connects the evolutionary relation with the prokaryotic ubiquitin-like molecules and ubiquitin. Archaea have a functionally closer homolog of the ubiquitin modification system, where "sampylation" with SAMPs (small archaeal modifier proteins) is performed. The sampylation system only uses E1 to guide proteins to the proteosome. Proteoarchaeota, which are related to the ancestor of eukaryotes, possess all of the E1, E2, and E3 enzymes plus a regulated Rpn11 system. Unlike SAMP which are more similar to ThiS or MoaD, Proteoarchaeota ubiquitin are most similar to eukaryotic homologs. 
Prokaryotic ubiquitin-like protein (Pup) and ubiquitin bacterial (UBact) Prokaryotic ubiquitin-like protein (Pup) is a functional analog of ubiquitin which has been found in the gram-positive bacterial phylum Actinomycetota. It serves the same function (targeting proteins for degradations), although the enzymology of ubiquitylation and pupylation is different, and the two families share no homology. In contrast to the three-step reaction of ubiquitylation, pupylation requires two steps, therefore only two enzymes are involved in pupylation. In 2017, homologs of Pup were reported in five phyla of gram-negative bacteria, in seven candidate bacterial phyla and in one archaeon The sequences of the Pup homologs are very different from the sequences of Pup in gram-positive bacteria and were termed Ubiquitin bacterial (UBact), although the distinction has yet not been proven to be phylogenetically supported by a separate evolutionary origin and is without experimental evidence. The finding of the Pup/UBact-proteasome system in both gram-positive and gram-negative bacteria suggests that either the Pup/UBact-proteasome system evolved in bacteria prior to the split into gram positive and negative clades over 3000 million years ago or, that these systems were acquired by different bacterial lineages through horizontal gene transfer(s) from a third, yet unknown, organism. In support of the second possibility, two UBact loci were found in the genome of an uncultured anaerobic methanotrophic Archaeon (ANME-1;locus CBH38808.1 and locus CBH39258.1). Human proteins containing ubiquitin domain These include ubiquitin-like proteins. ANUBL1; BAG1; BAT3/BAG6; C1orf131; DDI1; DDI2; FAU; HERPUD1; HERPUD2; HOPS; IKBKB; ISG15; LOC391257; MIDN; NEDD8; OASL; PARK2; RAD23A; RAD23B; RPS27A; SACS; 8U SF3A1; SUMO1; SUMO2; SUMO3; SUMO4; TMUB1; TMUB2; UBA52; UBB; UBC; UBD; UBFD1; UBL4A; UBL4B; UBL7; UBLCP1; UBQLN1; UBQLN2; UBQLN3; UBQLN4; UBQLNL; UBTD1; UBTD2; UHRF1; UHRF2; Related proteins Ubiquitin-associated protein domain Prediction of ubiquitination Currently available prediction programs are: UbiPred is a SVM-based prediction server using 31 physicochemical properties for predicting ubiquitylation sites. UbPred is a random forest-based predictor of potential ubiquitination sites in proteins. It was trained on a combined set of 266 non-redundant experimentally verified ubiquitination sites available from our experiments and from two large-scale proteomics studies. CKSAAP_UbSite is SVM-based prediction that employs the composition of k-spaced amino acid pairs surrounding a query site (i.e. any lysine in a query sequence) as input, uses the same dataset as UbPred. See also Autophagy Autophagin Endoplasmic-reticulum-associated protein degradation JUNQ and IPOD Prokaryotic ubiquitin-like protein SUMO enzymes References External links GeneReviews/NCBI/NIH/UW entry on Angelman syndrome OMIM entries on Angelman syndrome UniProt entry for ubiquitin Notes from MIT course. Proteins Post-translational modification Protein structure
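To make the "composition of k-spaced amino acid pairs" encoding behind predictors such as CKSAAP_UbSite more concrete, the sketch below counts k-spaced residue pairs in a fixed window around a candidate lysine. It is a simplified illustration of that feature idea only, not the published CKSAAP_UbSite or UbPred code; the window length and the choice of k are arbitrary example values.

```python
from itertools import product

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"

def cksaap_features(window: str, k: int = 1) -> dict:
    """Count k-spaced amino-acid pairs in a sequence window.

    A k-spaced pair is two residues separated by exactly k intervening
    positions, so for k=1 the residues at positions i and i+2 form a pair.
    Counts are normalised by the total number of pairs in the window.
    """
    counts = {a + b: 0 for a, b in product(AMINO_ACIDS, repeat=2)}
    n_pairs = 0
    for i in range(len(window) - k - 1):
        pair = window[i] + window[i + k + 1]
        if pair in counts:  # skip positions containing non-standard residues
            counts[pair] += 1
            n_pairs += 1
    if n_pairs:
        counts = {pair: count / n_pairs for pair, count in counts.items()}
    return counts

if __name__ == "__main__":
    # The first 21 residues of ubiquitin itself, which put lysine K11 at the centre.
    window = "MQIFVKTLTGKTITLEVEPSD"
    features = cksaap_features(window, k=1)
    # A feature vector like this would then be fed to a classifier such as an SVM.
    print(sum(1 for value in features.values() if value > 0), "non-zero pair features")
```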
Ubiquitin
Chemistry
9,175
54,729,451
https://en.wikipedia.org/wiki/NGC%203941
NGC 3941 is a barred lenticular galaxy located in the constellation Ursa Major. It lies at a distance of circa 40 million light years from Earth, which, given its apparent dimensions, means that NGC 3941 is about 40,000 light years across. It was discovered by William Herschel in 1787. Characteristics The galaxy has a counterrotating extended gas disk, which was possibly formed by a galaxy merger or by accretion of gas with oppositely directed angular momentum. The disk has low velocity dispersion throughout, which suggests it has settled into equilibrium. The gas is found primarily in two rings. The inner ring appears almost circular in HI imaging, which may mean it is inclined ~20° or more with respect to the stellar disk, whereas the outer ring has approximately the same ellipticity as the stellar disk. The stellar disk is symmetric and there is evidence of weak spiral arms outside the primary bar. In the infrared Ks-band, the bar has ansae-type morphology and in the inner elliptical there is an inner disc showing two-armed spirals. Based on the fact that the spiral arms and the gas are rotating in different directions, it has been suggested that the inner ionised gas and the outer HI gas are inclined, almost perpendicular to the stellar disk. The galaxy also features a secondary bar. NGC 3941 is a type 2 Seyfert galaxy. In the centre of the galaxy lies a supermassive black hole whose mass is estimated, based on velocity dispersion, to be 141 million (10^8.14) solar masses. Supernova One supernova has been observed in NGC 3941: SN 2018pv (Type Ia, mag 15.1) was discovered by Masaki Tsuboi on 3 February 2018. It reached magnitude 12.7, making it tied with SN 2018aoz for the brightest supernova of 2018. Nearby galaxies NGC 3941 forms a small galaxy group with NGC 3930 and UGC 6955, which is part of the Ursa Major Cluster. References External links Barred lenticular galaxies Ursa Major 3941 06857 37235
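The black-hole mass quoted above is the kind of figure produced by the empirical M–σ relation between a galaxy's bulge velocity dispersion and its central black-hole mass. The snippet below only shows the general form of that calculation: the normalisation and slope (α ≈ 8.1, β ≈ 4.2, broadly in the range of published fits) and the example dispersion are illustrative assumptions, not the values actually used for NGC 3941.

```python
import math

def m_sigma_black_hole_mass(sigma_km_s: float, alpha: float = 8.1, beta: float = 4.2) -> float:
    """Estimate a black-hole mass in solar masses from the M-sigma relation.

    The relation has the form log10(M / Msun) = alpha + beta * log10(sigma / 200 km/s);
    alpha and beta here are illustrative values, not a specific published fit.
    """
    return 10 ** (alpha + beta * math.log10(sigma_km_s / 200.0))

if __name__ == "__main__":
    sigma = 150.0  # km/s, a hypothetical dispersion, not a measurement of NGC 3941
    print(f"sigma = {sigma} km/s -> M_BH ~ {m_sigma_black_hole_mass(sigma):.2e} solar masses")
```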
NGC 3941
Astronomy
426
12,721,713
https://en.wikipedia.org/wiki/Proteolix
Proteolix, Inc., was a private biotechnology company with headquarters in South San Francisco, California. Proteolix was founded in 2003 based on technology developed by co-founders Craig M. Crews (Yale University) and Raymond J. Deshaies (California Institute of Technology). Susan Molineaux and Phil Whitcome joined as co-founders. Proteolix was launched based on an $18.2 million A round comprising investments by Latterell Venture Partners, US Venture Partners, Advanced Technology Ventures, and The Vertical Group. Its lead product candidate, carfilzomib (PR-171), is a tetrapeptide epoxyketone for treating multiple myeloma, a blood cancer. The drug carfilzomib works by preventing proteasomes from breaking down other proteins. Proteolix focused primarily on the proteasome as a therapeutic target. At the time of its sale (see below), the company had two earlier-stage programs, an orally bioavailable proteasome inhibitor for oncology (PR-047), and an agent preferentially targeting the immuno form of the proteasome (PR-957), with potential utility in areas such as rheumatoid arthritis. At the time of sale, carfilzomib's route of administration was intravenous, and the company was exploring its potential utility in multiple myeloma, non-Hodgkin lymphoma (NHL) and other cancers. Proteolix was acquired by Onyx Pharmaceuticals in 2009 for $810 million (nominal value). PR-171 is sold by Onyx Pharmaceuticals as Kyprolis. Onyx renamed PR-047 to "ONX 0912" and PR-957 to "ONX 0914". Onyx Pharmaceuticals is a subsidiary of Amgen. References External links Proteolix's corporate website Biotechnology companies of the United States Companies based in South San Francisco, California Life sciences industry
Proteolix
Biology
429
1,864,122
https://en.wikipedia.org/wiki/Chocolate%20agar
Chocolate agar (CHOC) or chocolate blood agar (CBA) is a nonselective, enriched growth medium used for isolation of pathogenic bacteria. It is a variant of the blood agar plate, containing red blood cells that have been lysed by slowly heating to 80°C. Chocolate agar is used for growing fastidious respiratory bacteria, such as Haemophilus influenzae and Neisseria meningitidis. In addition, some of these bacteria, most notably H. influenzae, need growth factors such as nicotinamide adenine dinucleotide (factor V or NAD) and hemin (factor X), which are inside red blood cells; thus, a prerequisite to growth for these bacteria is the presence of red blood cell lysates. The heat also inactivates enzymes which could otherwise degrade NAD. The agar is named for its color and contains no chocolate products. Variants Chocolate agar with the addition of bacitracin becomes selective for the genus Haemophilus. Another variant of chocolate agar called Thayer–Martin agar contains an assortment of antibiotics which select for Neisseria species. Composition The composition of chocolate agar includes the following components: The exact concentrations of these ingredients may vary slightly depending on the specific formulation used in different laboratories or by different manufacturers. History Addition of heated blood to media was first documented for use by Cohen and Fitzgerald in 1910 and then by Dr. Olga Povitzky at the New York City Department of Health Bureau of Laboratories. The term "chocolate agar" comes from the brown color generated from the higher concentration of heated blood in the mixture and was given the distinctive description first by Warren Crowe in 1915. References Microbiological media
Chocolate agar
Biology
361
21,042,541
https://en.wikipedia.org/wiki/Parameter%20validation
In computer software, the term parameter validation is the automated processing, in a module, to validate the spelling or accuracy of parameters passed to that module. The term has been in common use for over 30 years. Specific best practices have been developed, for decades, to improve the handling of such parameters. Parameter validation can be used to defend against cross-site scripting attacks. See also Data validation Strong typing Error handling Sanity check Notes References "Parameter validation for software reliability", G.B. Alleman, 1978, webpage: ACM-517: paper presents a method for increasing software reliability through parameter validation. Software testing
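As a concrete illustration of the practice the article describes, the sketch below shows a module function that validates its own parameters before doing any work, rejecting values of the wrong type or outside an allowed range. The function name, parameters and limits are invented for the example and do not come from any particular library.

```python
def create_user(username: str, age: int, role: str = "viewer") -> dict:
    """Create a user record, validating every parameter before it is used."""
    allowed_roles = {"viewer", "editor", "admin"}

    # Type checks: fail fast with a clear error instead of misbehaving later.
    if not isinstance(username, str):
        raise TypeError(f"username must be a string, got {type(username).__name__}")
    if not isinstance(age, int) or isinstance(age, bool):
        raise TypeError(f"age must be an integer, got {type(age).__name__}")

    # Value checks: enforce an allow-list and sensible ranges.
    if not (1 <= len(username) <= 32) or not username.isalnum():
        raise ValueError("username must be 1-32 alphanumeric characters")
    if not (0 <= age <= 150):
        raise ValueError("age must be between 0 and 150")
    if role not in allowed_roles:
        raise ValueError(f"role must be one of {sorted(allowed_roles)}")

    return {"username": username, "age": age, "role": role}

if __name__ == "__main__":
    print(create_user("alice", 30, "editor"))          # passes validation
    try:
        create_user("<script>alert(1)</script>", 30)   # rejected: not alphanumeric
    except ValueError as err:
        print("rejected:", err)
```

Refusing anything that is not plainly valid (an allow-list), as the username check does here, is the property that also makes parameter validation useful against injection-style attacks such as cross-site scripting.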
Parameter validation
Technology,Engineering
130
1,127,278
https://en.wikipedia.org/wiki/Constantan
Constantan, also known in various contexts as Eureka, Advance, and Ferry, refers to a copper-nickel alloy commonly used for its stable electrical resistance across a wide range of temperatures. It usually consists of 55% copper and 45% nickel. Its main feature is the low thermal variation of its resistivity, which is constant over a wide range of temperatures. Other alloys with similarly low temperature coefficients are known, such as manganin (Cu [86%] / Mn [12%] / Ni [2%] ). History In 1887, Edward Weston discovered that metals can have a negative temperature coefficient of resistance, inventing what he called his "Alloy No. 2." It was produced in Germany where it was renamed "Konstantan". Constantan alloy Of all alloys used in modern strain gauges, constantan is the oldest, and still the most widely used. This situation reflects the fact that constantan has the best overall combination of properties needed for many strain gauge applications. This alloy has, for example, an adequately high strain sensitivity, or gauge factor, which is relatively insensitive to strain level and temperature. Its resistivity () is high enough to achieve suitable resistance values in even very small grids, and its temperature coefficient of resistance is fairly low. In addition, constantan is characterized by a good fatigue life and relatively high elongation capability. However, constantan tends to exhibit a continuous drift at temperatures above ; and this characteristic should be taken into account when zero stability of the strain gauge is critical over a period of hours or days. Constantan is also used for electrical resistance heating and thermocouples. A-alloy Very importantly, constantan can be processed for self-temperature compensation to match a wide range of test material coefficients of thermal expansion. A-alloy is supplied in self-temperature-compensation (S-T-C) numbers 00, 03, 05, 06, 09, 13, 15, 18, 30, 40, and 50, for use on test materials with corresponding thermal expansion coefficients, expressed in parts per million by length (or μm/m) per degrees Fahrenheit. P alloy For the measurement of very large strains, 5% (50,000 microstrain) or above, annealed constantan (P alloy) is the grid material normally selected. Constantan in this form is very ductile; and, in gauge lengths of and longer, can be strained to >20%. It should be borne in mind, however, that under high cyclic strains the P alloy will exhibit some permanent resistivity change with each cycle, and cause a corresponding zero shift in the strain gauge. Because of this characteristic and the tendency for premature grid failure with repeated straining, P alloy is not ordinarily recommended for cyclic strain applications. P alloy is available with S-T-C numbers of 08 and 40 for use on metals and plastics, respectively. Physical properties Temperature measurement Constantan is also used to form thermocouples with wires made of iron, copper, or chromel. It has an extraordinarily strong negative Seebeck coefficient above 0 degrees Celsius, leading to a good temperature sensitivity. References Bibliography External links National Pollutant Inventory - Copper and compounds fact sheet (archived 2008) Copper alloys Electric heating Nickel alloys
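The gauge factor discussed above relates strain directly to a measurable change in gauge resistance, which the short sketch below works through. The nominal resistance, gauge factor and strain are typical round numbers chosen for illustration (constantan foil gauges commonly have a gauge factor close to 2); they are not values taken from the article.

```python
def resistance_change(r_nominal_ohm: float, gauge_factor: float, strain: float) -> float:
    """Resistance change of a strain gauge: dR = gauge_factor * strain * R_nominal."""
    return gauge_factor * strain * r_nominal_ohm

if __name__ == "__main__":
    r0 = 120.0        # ohms, a common nominal gauge resistance (assumed for the example)
    gf = 2.1          # typical gauge factor for a constantan grid (assumed)
    strain = 1000e-6  # 1000 microstrain

    d_r = resistance_change(r0, gf, strain)
    print(f"dR = {d_r:.3f} ohm ({d_r / r0 * 100:.3f}% of nominal)")
    # Changes this small are why gauges are normally read out with a Wheatstone bridge.
```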
Constantan
Chemistry
676
47,725,139
https://en.wikipedia.org/wiki/NGC%20235
NGC 235 is a lenticular galaxy in the constellation of Cetus. Its companion, PGC 2570, appears in the line of sight of NGC 235 but is not physically related to it. This pair was first discovered by Francis Leavenworth in 1886. Dreyer, the compiler of the catalogue, described the galaxy as "extremely faint, small, round, brighter middle and nucleus". References External links Lenticular galaxies 0235 Cetus
NGC 235
Astronomy
93
60,773,981
https://en.wikipedia.org/wiki/HR%20858
HR 858 (also known as HD 17926 or TOI-396) is a star with a planetary system located 103 light-years from the Sun in the southern constellation of Fornax. It has a yellow-white hue and is visible to the naked eye, but it is a challenge to see with an apparent visual magnitude of 6.4. The star is drifting further away with a radial velocity of 10 km/s. It has an absolute magnitude of +3.82. This object is a slightly-evolved F-type main-sequence star with a stellar classification of F6V, which indicates it is generating energy through core hydrogen fusion. It is roughly two billion years old and is spinning with a projected rotational velocity of 8.3 km/s. The star has 1.1 times the mass of the Sun and 1.3 times the Sun's radius. It is radiating 2.3 times the luminosity of the Sun from its photosphere at an effective temperature of 6,201 K. A faint co-moving stellar companion, designated component B, at an angular separation of . This corresponds to a projected separation of . It is a red dwarf star. Planetary system In May 2019, HR 858 was announced to have at least 3 exoplanets as observed by the transit method with the Transiting Exoplanet Survey Satellite. All three are orbiting close to the host star and are close in size, each about twice the radius of the Earth. Described as super-Earths by their discovery paper, measurements of their masses suggest that in terms of composition they may be better described as sub-Neptunes. Planets 'b' and 'c' may be in a 3:5 mean-motion resonance. Further research measured the masses of the planets b and d using accurate radial velocities, giving masses of as well as planetary densities of 2.44 and 4.9 g/cm3. The system displays significant transit timing variations. The mass of planet c could not be measured using radial velocities, but it is constrained to be less than , and a not very reliable value of was measured using TTVs. Notes References External links in-the-sky.org F-type main-sequence stars Planetary systems with three confirmed planets Binary stars Fornax CD-31 1148 017926 013363 0858 M-type main-sequence stars 396
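The planetary densities quoted above follow directly from a mass and a radius, and the snippet below shows that bulk-density arithmetic. The input mass and radius are placeholder values of roughly the size discussed for these planets (about two Earth radii and a few Earth masses); they are not the measured parameters of HR 858 b or d.

```python
import math

EARTH_MASS_G = 5.972e27     # grams
EARTH_RADIUS_CM = 6.371e8   # centimetres

def bulk_density_g_cm3(mass_earth: float, radius_earth: float) -> float:
    """Bulk density in g/cm^3 for a planet given its mass and radius in Earth units."""
    mass_g = mass_earth * EARTH_MASS_G
    radius_cm = radius_earth * EARTH_RADIUS_CM
    volume_cm3 = (4.0 / 3.0) * math.pi * radius_cm ** 3
    return mass_g / volume_cm3

if __name__ == "__main__":
    # Placeholder values for illustration only.
    print(f"{bulk_density_g_cm3(mass_earth=5.0, radius_earth=2.1):.2f} g/cm^3")
```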
HR 858
Astronomy
497
491,622
https://en.wikipedia.org/wiki/Damasio%27s%20theory%20of%20consciousness
Developed in his 1999 book, "The Feeling of What Happens", Antonio Damasio's theory of consciousness proposes that consciousness arises from the interactions between the brain, the body, and the environment. According to this theory, consciousness is not a unitary experience, but rather emerges from the dynamic interplay between different brain regions and their corresponding bodily states. Damasio argues that our conscious experiences are influenced by the emotional responses that are generated by our body's interactions with the environment, and that these emotional responses play a crucial role in shaping our conscious experience. This theory emphasizes the importance of the body and its physiological processes in the emergence of consciousness. Damasio's three-layered theory is based on a hierarchy of stages, with each stage building upon the last. These stages are referred to as the Protoself, Core Consciousness, and Extended Consciousness, with the Protoself being the most basic representation of the organism. Damasio's approach to explaining the development of consciousness relies on three notions: emotion, feeling, and feeling a feeling. Emotions are a collection of unconscious neural responses that give rise to feelings. Emotions are complex reactions to stimuli that cause observable external changes in the organism. A feeling arises when the organism becomes aware of the changes it is experiencing as a result of external or internal stimuli. Antonio Damasio’s work on consciousness emphasizes four themes: 1. Holistic Approach: Damasio argues that consciousness isn’t just a brain function but involves the entire body. He suggests that the brain works in tandem with older biological systems like the endocrine and immune systems, emphasizing a holistic view of consciousness. 2. Homeostasis as Central: Damasio’s theory places homeostasis at the core of consciousness, proposing that consciousness evolved to help organisms maintain internal stability, which is crucial for survival. 3. Microbiome Influence: Damasio highlights the role of the gut microbiome in influencing brain function and emotional states, suggesting that our consciousness is affected by the microbial environment within our bodies. 4. Dual Mind Registers: He distinguishes between two mental registers: one for cognitive functions like reasoning, and another for emotions and feelings, which are tied to the body’s state. Protoself According to Damasio's theory of consciousness, the protoself is the first stage in the hierarchical process of consciousness generation. Shared by many species, the protoself is the most basic representation of the organism, and it arises from the brain's constant interaction with the body. The protoself is an unconscious process that creates a "map" of the body's physiological state, which is then used by the brain to generate conscious experience. This "map" is constantly updated as the brain receives new stimuli from the body, and it forms the foundation for the development of more complex forms of consciousness. Damasio asserts that the protoself is signified by a collection of neural patterns that are representative of the body's internal state. The function of this 'self' is to constantly detect and record, moment by moment, the internal physical changes that affect the homeostasis of the organism. Protoself does not represent a traditional sense of self; rather, it is a pre-conscious state, which provides a reference for the core self and autobiographical self to build from. 
As Damasio puts it, "Protoself is a coherent collection of neural patterns, which map moment-by-moment the state of the physical structure of the organism" (Damasio 1999). Multiple brain areas are required for the protoself to function, namely the hypothalamus, which controls the general homeostasis of the organism; the brain stem, whose nuclei map body signals; and the insular cortex, whose function is linked to emotion. These brain areas work together to keep up with the constant process of collecting neural patterns to map the current status of the body's responses to environmental changes. The protoself does not require language in order to function; moreover, it is a direct report of one's experience. In this state, emotion begins to manifest itself as second-order neural patterns located in subcortical areas of the brain. Emotion acts as a neural object, from which a physical reaction can be drawn. This reaction causes the organism to become aware of the changes that are affecting it. From this realization springs Damasio's notion of “feeling”. This occurs when the patterns contributing to emotion manifest as mental images, or brain movies. When the body is modified by these neural objects, the second layer of self emerges. This is known as core consciousness. Core consciousness Sufficiently more evolved is the second layer of Damasio's theory, Core Consciousness. This emergent process occurs when an organism becomes consciously aware of feelings associated with changes occurring to its internal bodily state; it is able to recognize that its thoughts are its own, and that they are formulated in its own perspective. It develops a momentary sense of self, as the brain continuously builds representative images, based on communications received from the Protoself. This level of consciousness is not exclusive to human beings and remains consistent and stable throughout the lifetime of the organism. The image is a result of mental patterns which are caused by an interaction with an internal or external stimulus. A relationship is established between the organism and the object it is observing, as the brain continuously creates images to represent the organism's experience of qualia. Damasio's definition of emotion is that of an unconscious reaction to any internal or external stimulus which activates neural patterns in the brain. ‘Feeling’ emerges as a still unconscious state which simply senses the changes affecting the Protoself due to the emotional state. These patterns develop into mental images, which then float into the organism's awareness. Put simply, consciousness is the feeling of knowing a feeling. When the organism becomes aware of the feeling that its bodily state (Protoself) is being affected by its experiences, or response to emotion, Core Consciousness is born. The brain continues to present a nonverbal narrative sequence of images in the mind of the organism, based on its relationship to objects. An object in this context can be anything from a person, to a melody, to a neural image. Core consciousness is concerned only with the present moment, here and now. It does not require language or memory, nor can it reflect on past experiences or project itself into the future. 
Therefore, an injury to a person's memory center can cause damage to their extended consciousness without hurting the other layers. The autobiographical self draws on memory of past experiences, which involves the use of higher thought. This autobiographical layer of self is developed gradually over time. Working memory is necessary for an extensive display of items to be recalled and referenced. Linguistic areas of the brain are activated to enhance the organism's experience; however, according to the language of thought hypothesis, language is not necessarily required. Criticism Damasio's theory of consciousness has been met with criticism for its lack of explanation regarding the generation of conscious experiences by the brain. Researchers have posited that the brain's interaction with the body alone cannot account for the complexity of conscious experience, and that additional factors must be considered. Furthermore, the theory has been criticized for its inadequate treatment of the concept of self-awareness and its lack of a clear method of measuring consciousness, which hinders empirical testing and evaluation. Formalistic Elements Theories of emotion currently fall into four main categories which follow one another in a historical series: evolutionary (ethological), physiological, neurological, and cognitive. Evolutionary theories derive from Darwin's "The Expression of the Emotions in Man and Animals". Physiological theories suggest that responses within the body are responsible for emotions. Neurological theories propose that activity within the brain leads to emotional responses. Cognitive theories argue that thoughts and other mental activity play an essential role in forming emotions. Note that no current theory of emotion falls strictly within a single category; rather, each theory uses one approach to form its core premises, from which it is then able to extend its main postulates. Damasio's tri-level view of the human mind, which posits that we share the two lowest levels with other animals, has been suggested before. For example, see Dyer (www.conscious-computation.webnode.com). Dyer's triune expansion compares to Damasio's as follows: 1. sensorimotor stage (cf. Damasio's Protoself); 2. spatiotemporal stage (Core Consciousness); 3. cognolinguistic stage (Extended Consciousness). An important feature of Damasio's theory (one that it shares with Dyer's theory) is the key role played by mental images, consciously mediating the information exchange between endocrine and cognitive processes. LeDoux and Brown have a different view of how emotion is connected to general cognition. They place emotionality on a similar level to that of other cognitive states. (In fairness, both Dyer's and Damasio's models concur on this point, i.e. that emotionality is not isolated to a particular layer within the tri-level framework.) Earlier, less sophisticated models placed the emotions strictly within the limbic circuits, where their primary role was to consciously respond to, as well as cause responses within, the hypothalamus, the interface between intentional mind states and metabolic (endocrine) body states. Emotionality is demonstrably a global mind state, just like consciousness. For example, we can be simultaneously aware of a pain (low-level) in our body and an idea (high-level) that enters our imagination (working memory).
Likewise, our (low-level) emotional reaction to a painful workplace injury (fear, threat to well-being) can coexist with our (high-level) feeling of anger and indignation at the co-worker who failed to follow safety guidelines. Substantive Process A careful reading of Damasio's work reveals that he distinguishes his theories from those of his predecessors in how the formalistic elements interact with each other in a dynamically integrated system. For example, the suggestion of a dynamic neural map ultimately posits that we are the instantaneous configuration of a neural state in the present moment, rather than the supporting biological construct. In other words, our conscious identity is the software, not the hardware, even though our unique hardware constrains how we operate as the software. Need for consciousness and qualia A common criticism stems from the fact that both knowing and feeling can be processed with equal success without conscious awareness, as machines, for instance, demonstrate; such models therefore do not explain the need for consciousness and qualia. References Behavioral neuroscience Consciousness
Damasio's theory of consciousness
Biology
2,239
21,331,420
https://en.wikipedia.org/wiki/Peace%20Boat
Peace Boat is a global non-governmental organization headquartered in Japan, established for the purpose of raising awareness and building connections internationally among groups that work for peace, human rights, environmental protection and sustainable development. "Peace Boat" may also refer to one of the ships embarking on a cruise under the Peace Boat organization. Since its founding in 1983, the Shinjuku, Tokyo-based organization has launched more than 100 voyages. These cruises, the main operation of the Peace Boat organization, are on average carried out at least three times a year. Peace Boat, described by the San Francisco Chronicle as a "floating university of sorts", offers educational opportunities aboard, with conferences related to global events. They also provide humanitarian aid at their various stops and visit local organizations. Besides the international voyages, Peace Boat carries out a number of other projects seeking justice in various international realms, such as a campaign for the abolition of land mines, the Global Article 9 Conference to Abolish War, the Global Hibakusha Forums, and others onboard and in ports. Peace Boat also acts as the Northeast Asia regional secretariat of the Global Partnership for the Prevention of Armed Conflict, and is a member of ICAN (International Campaign to Abolish Nuclear Weapons), having played a significant role in negotiations to strengthen the Treaty on the Prohibition of Nuclear Weapons; ICAN was awarded the Nobel Peace Prize on December 10, 2017. Peace Boat is an NGO in Special Consultative Status with the Economic and Social Council of the United Nations and a committed campaigner for the Sustainable Development Goals (SDGs). History In 1983, Yoshioka Tatsuya and Kiyomi Tsujimoto, then students of Waseda University, initiated Peace Boat in answer to Japanese history textbook controversies. With the assistance of like-minded students, they organized the first voyage. Peace Boat has since visited more than 270 ports with over 70,000 participants. During the first six years after it was founded, Peace Boat ran one- to two-week-long cruises to various Asian countries around Japan at the rate of one per year. Time on the boat was used to hold lectures and events with guest speakers invited from the countries to be visited. When at port, international exchange events were carried out with local NGOs and student groups. This became the foundational style on which the rest of the cruises would be based. In 1990, the 10th Peace Boat cruise marked the beginning of the circumnavigational cruise series. During the cruise, the Gulf War broke out and the ship encountered a US aircraft carrier in the Red Sea. After the success of the first round-the-world cruise, Peace Boat continued them on a regular basis. In 1991, after the fall of the Soviet Union, Peace Boat set out to the Kuril Islands on a citizens' diplomacy mission, stopping at Iturup, Kunashir, and Shikotan islands. There were homestays and tours. This was the first trip made to these islands without a visa by an NGO from Japan. Over the past 30 years, Peace Boat has organized over 100 voyages, including more than 60 around-the-world voyages, carrying over 70,000 participants to over 270 ports. The participants range from toddlers to people in their 90s, from many different countries and professions. The organization was nominated for the Nobel Peace Prize in 2008. Ships During its history, Peace Boat has chartered many different vessels.
The Topaz (31,500 GT) was a transatlantic ocean liner built in 1955 as Empress of Britain; she operated as Peace Boat between 2000 and 2008. Clipper Pacific (18,416 GT) was built in 1970 for Royal Caribbean and operated for Peace Boat briefly in 2008. However, due to numerous repeated problems with the ship, the charter was cut short, ending in Piraeus, Greece instead of ending in Japan as scheduled. Mona Lisa (28,891 GRT) was built in 1966 by a shipyard in Scotland and chartered to replace the Clipper Pacific; she completed the remainder of the voyage and operated as Peace Boat between 2008 and 2009. Oceanic (38,772 GT) was built in 1965 by an Italian shipyard and operated as Peace Boat between 2009 and 2012. Ocean Dream (32,265 GT) was built in 1981 by a Danish shipyard and operated as Peace Boat between 2012 and 2020. In July 2019, it was announced that Zenith (47,413 GT) would leave Pullmantur's fleet in early 2020 to join Peace Boat; the ship was delivered to Peace Boat in February 2020 and renamed The Zenith. Since 2020, Peace Boat has been operating the Pacific World, which replaces the Ocean Dream and The Zenith. Other projects Landmine Abolition Campaign Since 1998, Peace Boat has continually run a project called P-MAC, or Peace Boat Mine Abolition Campaign, to support organizations carrying out landmine removal in such countries as Cambodia and Afghanistan. There are approximately 110 million landmines in the ground worldwide, and even now many people continue to be injured or lose their lives. Most of these victims are not combatants but ordinary civilians. As of 2009, through a number of campaigns, Peace Boat raised money to clear 886,472 square meters of landmine-contaminated areas and open five elementary schools. Fundraising campaigns are ongoing. Peace Ball Project Since 1999, Peace Boat has donated over 12,000 soccer balls to 43 countries. The Peace Ball project delivers soccer balls and other sports equipment to disadvantaged children, and uses the power of the world's most popular sport to build bridges of communication and solidarity. GET Language Programme Launched in 1999, the onboard GET language programme allows participants to communicate more effectively with the people they meet onboard and in port. The programme focuses on oral communication, viewing languages as global tools for international and intercultural exchange, and combines onboard classroom study with exchange programmes and home-stays in selected ports of call. Global University Programme In 2000, Peace Boat established its Global University peace education programme. Seminars at sea and study/exposure tours at ports of call make up the Global University curriculum, an intensive peace and sustainability education programme focused on experiential learning. Global Partnership for the Prevention of Armed Conflict (GPPAC) In 2004, Peace Boat became the Northeast Asia regional secretariat for the Global Partnership for the Prevention of Armed Conflict (GPPAC). This is an international network of NGOs working in peacebuilding and conflict prevention. It is made up of 15 regions, each working with its own action plan to address issues specific to each region. Global Article 9 Campaign In light of the Japanese government's pressure to amend Article 9 of the Japanese Constitution, Peace Boat, together with the Japan Lawyers' International Solidarity Association (JALISA), launched the Global Article 9 Campaign to Abolish War in 2005.
The Campaign strives not only to protect Article 9 locally, but also to build an international movement supporting Article 9 as the shared property of the world, calling for a global peace that does not rely on force. Vietnam Defoliate Victim Support Campaign From 2005 to 2008 Peace Boat raised approximately $13,000 in funds which were donated to Vietnam Association of Victims of Agent Orange and subsequently used to cover a portion of construction costs for a facility for supporting victims. On the 2009 cruise, Peace Boat visited the facility with a group of Japanese atomic bomb victims, and held the first exchange program there. International Campaign to Abolish Nuclear Weapons (ICAN) The International Campaign to Abolish Nuclear Weapons (ICAN) is a coalition of NGOs in 100 countries around the world. Peace Boat is a member of the campaign's international steering group, led by Executive Committee Member Kawasaki Akira. ICAN played a significant role in advocacy leading to the adoption of a treaty to prohibit nuclear weapons at the United Nations in New York in July 2017. In October 2017, the Norwegian Nobel Committee decided to award the Nobel Peace Prize for 2017 to ICAN. The organization received the award for its work to draw attention to the catastrophic humanitarian consequences of any use of nuclear weapons and for its ground-breaking efforts to achieve a treaty-based prohibition of such weapons. The Hibakusha Project The Hibakusha Project was started by Peace Boat to highlight the inhumanity of nuclear weapons and to forge a path toward a nuclear abolition. As part of the project, Hibakusha (atomic bomb survivors from Hiroshima and Nagasaki) join Peace Boat voyages to give their testimonies to the world of their first hand experiences with nuclear weapons, and call for their abolition. In 2016, the project has taken place on ninth separate Peace Boat voyages and more than 170 Hibakusha have travelled around the world sharing their testimonies. Peace Boat Millennium Development Goals Campaign Since 2009, Peace Boat run its own Millennium Development Goals Campaign in partnership with various international organizations and NGOs to raise awareness of the MDGs and the role of civil society in achieving these goals. Peace Boat's ship displayed the United Nations Millennium Campaign logo ‘End Poverty 2015’. Peace Boat Disaster Relief Volunteer Centre The Peace Boat Disaster Relief Volunteer Centre (PBV) was established following the tremendous devastation caused by the 2011 Great East Japan earthquake and tsunami. The centre based its activities in one of the worst affected areas, Ishinomaki City in Miyagi Prefecture, and dispatched thousands of volunteers there to support local residents in carrying out emergency relief efforts. PBV carries out domestic and international emergency relief work at sites affected by natural disasters such as typhoons, floods and heavy snow. At the same time, it works toward future disaster prevention and reduction by proactively building partnerships with business and local government authorities and cultivating a network of volunteer leaders ready to act. Ecoship Peace Boat's Ecoship is a transformational programme to construct the planet's most environmentally sustainable cruise ship. 
Peace Boat organised a multi-disciplinary charrette, bringing together world experts from fields as diverse as naval architecture, renewable energy, and biophilic and biomimetic design, with the goal of defining the specifications for a ‘restorative’ vessel – where radical energy efficiency and closed material flow combine for a net positive impact on the environment. It will be a flagship for climate action. Its whole-system design and maximization of renewable energy use will enable an estimated 40% cut in emissions. Ecoship was introduced in an official press conference at COP21. The Ocean and Climate Youth Ambassador Programme A group of young leaders from states on the front line of climate change and marine degradation joined Peace Boat's 95th Global Voyage in Barcelona in September and October 2017 as part of a new programme to highlight these crucial issues and build momentum for climate action and the Bonn 2017 UN Climate Change Conference (COP23). These young people, between 19 and 26 years of age, were from the regions of the Pacific Ocean, Indian Ocean and Caribbean. The Ocean and Climate Youth Ambassador Programme was an endorsed event of COP23, in line with Fiji's vision for the conference, as recognized by the COP23 Presidency Secretariat. In June and July 2018, the second edition of the programme took place from Stockholm to New York City. The third edition took place in May and June 2019 from Valletta to New York City. References External links Environmental organizations based in Japan Climate change organizations International environmental organizations Anti-nuclear organizations Nature conservation organisations based in Asia International organizations based in Japan 1983 establishments in Japan International Campaign to Abolish Nuclear Weapons
Peace Boat
Engineering
2,296
7,944,408
https://en.wikipedia.org/wiki/Society%20of%20Naval%20Architects%20and%20Marine%20Engineers
The Society of Naval Architects and Marine Engineers (SNAME) is a global professional society that provides a forum for the advancement of the engineering profession as applied to the marine field. Although it particularly names the naval architecture and marine engineering specialties, the society includes all types of engineers and professionals amongst its members and is dedicated to advancing the art, science and practice of naval architecture and marine engineering. Mission The mission of the Society is to advance the art, science and practice of naval architecture, marine engineering, ocean engineering, and other marine-related professions through: The global exchange of knowledge and ideas relative to the marine industry Education in engineering as it relates to the marine industry Encouraging and sponsoring research and development in naval architecture, marine engineering, ocean engineering and other marine fields. History The Society of Naval Architects and Marine Engineers was organized in 1893, to advance the art, science, and practice of naval architecture, shipbuilding, and marine engineering. In its earliest days, SNAME was incorporated and nurtured by men including William H. Webb, George E. Weed, Rear Admiral George W. Melville. Other men took the helm thereafter, including Edwin A. Stevens, David W. Taylor, Vice Admiral Land, Kenneth M Davidson, Dr. Alvin C. Purdy, and Blakely Smith. Membership SNAME offers various membership grades, including student, associate, full member and fellow status. Full members generally have earned a Bachelor of Science degree in naval architecture, marine engineering or hold a degree in engineering and have experience that is associated with ship design, construction or operation. However membership is open to professionals in related industries that comes from all backgrounds and experience. Marine design is inherently a wide-ranging engineering design field and SNAME has members with wide experience ranging from electrical engineering, to weapons systems design, to racing yacht design, to deep ocean engineering, to human factors. Members can be awarded Fellow status upon review and approval of their achievements in the naval architectural or engineering profession as applied to the marine field. The society also awards the David W. Taylor Medal for "notable achievement in naval architecture and/or marine engineering." Publications SNAME publishes peer-reviewed technical papers and authoritative text books on engineering subjects within the marine field. The society also is a repository and forum for original research and analysis through its Technology and Research Committees which are staffed by volunteers with exceptional experience and knowledge in their chosen specialties. Code of Ethics The society functions under its own code of engineering ethics, which generally follows the Professional Engineers Code of Ethics. The Society also develops and supports the United States Naval Architects and Marine Engineers (NAME) Principles and Practice of Engineering Exam. References Marine engineering organizations International professional associations Engineering societies based in the United States Organizations based in Virginia
Society of Naval Architects and Marine Engineers
Engineering
548
41,980,659
https://en.wikipedia.org/wiki/Nalini%20Anantharaman
Nalini Anantharaman (born 26 February 1976) is a French mathematician who has won major prizes including the Henri Poincaré Prize in 2012. Life Nalini Florence Anantharaman was born in Paris in 1976 to two mathematicians; her father and mother are professors at the University of Orléans. She entered the École Normale Supérieure in 1994. She completed her PhD in 2000 at Université Pierre et Marie Curie (Paris 6), under the supervision of François Ledrappier. She became a full professor at the University of Paris-Sud, Orsay, in 2009, after spending the previous year at the University of California, Berkeley, as a Visiting Miller Professor. From January to June 2013 she was in Princeton at the Institute for Advanced Study. She is now a Professor at the Université de Strasbourg. Recognition In 2012 she won the Henri Poincaré Prize for mathematical physics, which she shared with Freeman Dyson, Barry Simon and fellow French mathematician Sylvia Serfaty. Anantharaman was recognized for her work in "quantum chaos, dynamical systems and Schrödinger equation, including a remarkable advance in the problem of quantum unique ergodicity". In 2011 she won the Salem Prize, which is awarded for work related to Fourier series. She also received a prize from the French Academy of Sciences in 2011. In 2015, Nalini Anantharaman was elected a member of the Academia Europaea. She was an invited plenary speaker at the 2018 International Congress of Mathematicians. In 2018, for her work related to "Quantum Chaos", Anantharaman won the Infosys Prize (in the Mathematical Sciences category), one of the highest monetary awards in India recognizing excellence in science and research. In 2020 she received the Nemmers Prize in Mathematics. Selected writings References 1976 births Living people École Normale Supérieure alumni Scientists from Paris 20th-century French mathematicians French people of Indian descent French people of Tamil descent Dynamical systems theorists French systems scientists 21st-century French mathematicians 20th-century French women mathematicians 21st-century French women mathematicians
Nalini Anantharaman
Mathematics
422
37,709,269
https://en.wikipedia.org/wiki/Creep%20and%20shrinkage%20of%20concrete
Creep and shrinkage of concrete are two physical properties of concrete. The creep of concrete, which originates from the calcium silicate hydrates (C-S-H) in the hardened Portland cement paste (which is the binder of mineral aggregates), is fundamentally different from the creep of metals and polymers. Unlike the creep of metals, it occurs at all stress levels and, within the service stress range, is linearly dependent on the stress if the pore water content is constant. Unlike the creep of polymers and metals, it exhibits multi-months aging, caused by chemical hardening due to hydration which stiffens the microstructure, and multi-year aging, caused by long-term relaxation of self-equilibrated micro-stresses in the nano-porous microstructure of the C-S-H. If concrete is fully dried, it does not creep, but it is next to impossible to dry concrete fully without severe cracking. Changes of pore water content due to drying or wetting processes cause significant volume changes of concrete in load-free specimens. They are called the shrinkage (typically causing strains between 0.0002 and 0.0005, and in low strength concretes even 0.0012) or swelling (< 0.00005 in normal concretes, < 0.00020 in high strength concretes). To separate shrinkage from creep, the compliance function , defined as the stress-produced strain (i.e., the total strain minus shrinkage) caused at time t by a unit sustained uniaxial stress applied at age , is measured as the strain difference between the loaded and load-free specimens. The multi-year creep evolves logarithmically in time (with no final asymptotic value), and over the typical structural lifetimes it may attain values 3 to 6 times larger than the initial elastic strain. When a deformation is suddenly imposed and held constant, creep causes relaxation of critically produced elastic stress. After unloading, creep recovery takes place, but it is partial, because of aging. In practice, creep during drying is inseparable from shrinkage. The rate of creep increases with the rate of change of pore humidity (i.e., relative vapor pressure in the pores). For small specimen thickness, the creep during drying greatly exceeds the sum of the drying shrinkage at no load and the creep of a loaded sealed specimen (Fig. 1 bottom). The difference, called the drying creep or Pickett effect (or stress-induced shrinkage), represents a hygro-mechanical coupling between strain and pore humidity changes. Drying shrinkage at high humidities (Fig. 1 top and middle) is caused mainly by compressive stresses in the solid microstructure which balance the increase in capillary tension and surface tension on the pore walls. At low pore humidities (<75%), shrinkage is caused by a decrease of the disjoining pressure across nano-pores less than about 3 nm thick, filled by adsorbed water. The chemical processes of Portland cement hydration lead to another type of shrinkage, called the autogeneous shrinkage, which is observed in sealed specimens, i.e., at no moisture loss. It is caused partly by chemical volume changes, but mainly by self-desiccation due to loss of water consumed by the hydration reaction. It amounts to only about 5% of the drying shrinkage in normal concretes, which self-desiccate to about 97% pore humidity. But it can equal the drying shrinkage in modern high-strength concretes with very low water-cement ratios, which may self-desiccate to as low as 75% humidity. The creep originates in the calcium silicate hydrates (C-S-H) of hardened Portland cement paste. 
It is caused by slips due to bond ruptures, with bond restorations at adjacent sites. The C-S-H is strongly hydrophilic, and has a colloidal microstructure disordered from a few nanometers up. The paste has a porosity of about 0.4 to 0.55 and an enormous specific surface area, roughly 500 m2/cm3. Its main component is the tri-calcium silicate hydrate gel (3 CaO · 2 SiO3 · 3 H2O, in short C3S2H3). The gel forms particles of colloidal dimensions, weakly bound by van der Waals forces. The physical mechanism and modeling are still being debated. The constitutive material model in the equations that follow is not the only one available but has at present the strongest theoretical foundation and fits best the full range of available test data. Stress–strain relation at constant environment In service, the stresses in structures are < 50% of concrete strength, in which case the stress–strain relation is linear, except for corrections due to microcracking when the pore humidity changes. The creep may thus be characterized by the compliance function (Fig. 2). As increases, the creep value for fixed diminishes. This phenomenon, called aging, causes that depends not only on the time lag but on both and separately. At variable stress , each stress increment applied at time produces strain history . The linearity implies the principle of superposition (introduced by Boltzmann and for the case of aging, by Volterra). This leads to the (uniaxial) stress–strain relation of linear aging viscoelasticity: Here denotes shrinkage strain augmented by thermal expansion, if any. The integral is the Stieltjes integral, which admits histories with jumps; for time intervals with no jumps, one may set to obtain the standard (Riemann) integral. When history is prescribed, then Eq.(1) represents a Volterra integral equation for . This equation is not analytically integrable for realistic forms of , although numerical integration is easy. The solution for strain imposed at any age (and for ) is called the relaxation function . To generalize Eq. (1) to a triaxial stress–strain relation, one may assume the material to be isotropic, with an approximately constant creep Poisson ratio, . This yields volumetric and deviatoric stress–strain relations similar to Eq. (1) in which is replaced by the bulk and shear compliance functions: At high stress, the creep law appears to be nonlinear (Fig. 2) but Eq. (1) remains applicable if the inelastic strain due to cracking with its time-dependent growth is included in . A viscoplastic strain needs to be added to only in the case that all the principal stresses are compressive and the smallest in magnitude is much larger in magnitude than the uniaxial compressive strength . In measurements, Young's elastic modulus depends not only on concrete age but also on the test duration because the curve of compliance versus load duration has a significant slope for all durations beginning with 0.001 s or less. Consequently, the conventional Young's elastic modulus should be obtained as , where is the test duration. The values day and days give good agreement with the standardized test of , including the growth of as a function of , and with the widely used empirical estimate . The zero-time extrapolation happens to be approximately age-independent, which makes a convenient parameter for defining . For creep at constant total water content, called the basic creep, a realistic rate form of the uniaxial compliance function (the thick curves in Fig. 
1 bottom) was derived from the solidification theory: where ; = flow viscosity, which dominates multi-decade creep; = load duration; = 1 day, , ; = volume of gel per unit volume of concrete, growing due to hydration; and = empirical constants (of dimension ). Function gives age-independent delayed elasticity of the cement gel (hardened cement paste without its capillary pores) and, by integration, . Integration of gives as a non-integrable binomial integral, and so, if the values of are sought, they must be obtained by numerical integration or by an approximation formula (a good formula exists). However, for computer structural analysis in time steps, is not needed; only the rate is needed as the input. Equations (3) and (4) are the simplest formulae satisfying three requirements: 1) Asymptotically for both short and long times , , should be a power function of time; and 2) so should the aging rate, given by ) (power functions are indicated by self-similarity conditions); and 3) (this condition is required to prevent the principle of superposition from giving non-monotonic recovery curves after unloading which are physically objectionable). Creep at variable environment At variable mass of evaporable (i.e., not chemically bound) water per unit volume of concrete, a physically realistic constitutive relation may be based on the idea of microprestress , considered to be a dimensionless measure of the stress peaks at the creep sites in the microstructure. The microprestress is produced as a reaction to chemical volume changes and to changes in the disjoining pressures acting across the hindered adsorbed water layers in nanopores (which are < 1 nm thick on the average and at most up to about ten water molecules, or 2.7 nm, in thickness), confined between the C-S-H sheets. The disjoining pressures develop first due to unequal volume changes of hydration products. Later, they relax due to creep in the C-S-H so as to maintain thermodynamic equilibrium (i.e., equality of chemical potentials of water) with water vapor in the capillary pores, and build up due to any changes of temperature or humidity in these pores. The rate of bond breakages may be assumed to be a quadratic function of the level of microprestress, which requires Eq. (4) to be generalized as A crucial property is that the microprestress is not appreciably affected by the applied load (since pore water is much more compressible than the solid skeleton and behaves like a soft spring coupled in parallel with a stiff framework). The microprestress relaxes in time and its evolution at each point of a concrete structure may be solved from the differential equation where = positive constants (the absolute value ensures that could never become negative). The microprestress can model the fact that drying and cooling, as well as wetting and heating, accelerate creep. The fact that changes of or produce new microprestress peaks and thus activate new creep sites explains the drying creep effect. A part of this effect, however, is caused by the fact that microcracking in a companion load-free specimen renders its overall shrinkage smaller than the shrinkage in an uncracked (compressed) specimen, thus increasing the difference between the two (which is what defines creep). The concept of microprestress is also needed to explain the stiffening due to aging. One physical cause of aging is that the hydration products gradually fill the pores of hardened cement paste, as reflected in function in Eq. (3). 
But hydration ceases after about one year, yet the effect of the age at loading is strong even after many years. The explanation is that the microstress peaks relax with age, which reduces the number of creep sites and thus the rate of bond breakages. At variable environment, time in Eq. (3) must be replaced by equivalent hydration time where = decreasing function of (0 if about 0.8) and . In Eq. (4), must be replaced by where = reduced time (or maturity), capturing the effect of and on creep viscosity; = function of decreasing from 1 at to 0 at ; , 5000 K. The evolution of humidity profiles ( = coordinate vector) may be approximately considered as uncoupled from the stress and deformation problem and may be solved numerically from the diffusion equation div[grad } where = self-desiccation caused by hydration (which reaches about 0.97 in normal concretes and about 0.80 in high strength concretes), = diffusivity, which decreases about 20 times as drops from 1.0 to 0.6. The free (unrestrained) shrinkage strain rate is, approximately, where = shrinkage coefficient. Since the -values at various points are incompatible, the calculation of the overall shrinkage of structures as well as test specimens is a stress analysis problem, in which creep and cracking must be taken into account. For finite element structural analysis in time steps, it is advantageous to convert the constitutive law to a rate-type form. This may be achieved by approximating with a Kelvin chain model (or the associated relaxation function with a Maxwell chain model). The history integrals such as Eq. 1 then disappear from the constitutive law, the history being characterized by the current values of the internal state variables (the partial strains or stresses of the Kelvin or Maxwell chain). Conversion to a rate-type form is also necessary for introducing the effect of variable temperature, which affects (according to the Arrhenius law) both the Kelvin chain viscosities and the rate of hydration, as captured by . The former accelerates creep if the temperature is increased, and the latter decelerates creep. Three-dimensional tensorial generalization of Eqs. (3)-(7) is required for finite element analysis of structures. Approximate cross-section response at drying Although multidimensional finite element calculations of creep and moisture diffusion are nowadays feasible, simplified one-dimensional analysis of concrete beams or girders based on the assumption of planar cross sections remaining planar still reigns in practice. Although (in box girder bridges) it involves deflection errors of the order of 30%. In that approach, one needs as input the average cross-sectional compliance function (Fig. 1 bottom, light curves) and average shrinkage function of the cross section (Fig. 1 left and middle) ( = age at start of drying). Compared to the point-wise constitutive equation, the algebraic expressions for such average characteristics are considerably more complicated and their accuracy is lower, especially if the cross section is not under centric compression. The following approximations have been derived and their coefficients optimized by fitting a large laboratory database for environmental humidities below 98%: where = effective thickness, = volume-to-surface ratio, = 1 for normal (type I) cement; = shape factor (e.g., 1.0 for a slab, 1.15 for a cylinder); and , = constant; (all times are in days). Eqs. (3) and (4) apply except that must be replaced by where and . 
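To make the averaged cross-sectional shrinkage approximation discussed above more tangible, the following minimal sketch (in Python) evaluates a tanh-type shrinkage curve numerically. It is only an illustration: the parameter values (ultimate shrinkage, humidity factor, and the coefficient linking the shrinkage halftime to the effective thickness) are assumptions chosen for plausibility rather than values given in the text; in practice they would be fitted to shrinkage and weight-loss tests or estimated from the empirical prediction formulae mentioned below.

import numpy as np

# Assumed, illustrative parameters -- not values from the text.
eps_sh_inf = 600e-6   # ultimate shrinkage strain (dimensionless)
k_h = 0.8             # humidity factor for an ambient relative humidity of about 60%
D = 150.0             # effective thickness = 2 * volume/surface ratio, in mm
k_t = 0.05            # assumed coefficient linking halftime to thickness, day/mm^2

tau_sh = k_t * D**2   # shrinkage halftime in days; grows with the square of thickness
t0 = 28.0             # age at the start of drying, days
t = t0 + np.logspace(0, 4, 41)   # observation times, days

# tanh-type time curve: behaves like sqrt(t - t0) at short times and
# approaches its final value exponentially at long times.
S = np.tanh(np.sqrt((t - t0) / tau_sh))
eps_sh = eps_sh_inf * k_h * S    # mean shrinkage strain of the cross section

for ti, ei in zip(t[::8], eps_sh[::8]):
    print(f"t - t0 = {ti - t0:9.1f} d   mean shrinkage = {ei * 1e6:6.1f} microstrain")

The quadratic growth of the assumed halftime with the effective thickness mirrors the diffusion-type scaling discussed next.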
The form of the expression for shrinkage halftime is based on the diffusion theory. Function 'tanh' in Eq. 8 is the simplest function satisfying two asymptotic conditions ensuing from the diffusion theory: 1) for short times , and 2) the final shrinkage must be approached exponentially. Generalizations for the temperature effect exist, too. Empirical formulae have been developed for predicting the parameter values in the foregoing equations on the basis of concrete strength and some parameters of the concrete mix. However, they are very crude, leading to prediction errors with the coefficients of variation of about 23% for creep and 34% for drying shrinkage. These high uncertainties can be drastically reduced by updating certain coefficients of the formulae according to short-time creep and shrinkage tests of the given concrete. For shrinkage, however, the weight loss of the drying test specimens must also be measured (or else the problem of updating is ill-conditioned). A fully rational prediction of concrete creep and shrinkage properties from its composition is a formidable problem, far from resolved satisfactorily. Engineering applications The foregoing form of functions and has been used in the design of structures of high creep sensitivity. Other forms have been introduced into the design codes and standard recommendations of engineering societies. They are simpler though less realistic, especially for multi-decade creep. Creep and shrinkage can cause a major loss of prestress. Underestimation of multi-decade creep has caused excessive deflections, often with cracking, in many of large-span prestressed segmentally erected box girder bridges (over 60 cases documented). Creep may cause excessive stress and cracking in cable-stayed or arch bridges, and roof shells. Non-uniformity of creep and shrinkage, caused by differences in the histories of pore humidity and temperature, age and concrete type in various parts of a structures may lead to cracking. So may interactions with masonry or with steel parts, as in cable-stayed bridges and composite steel-concrete girders. Differences in column shortenings are of particular concern for very tall buildings. In slender structures, creep may cause collapse due to long-time instability. The creep effects are particularly important for prestressed concrete structures (because of their slenderness and high flexibility), and are paramount in safety analysis of nuclear reactor containments and vessels. At high temperature exposure, as in fire or postulated nuclear reactor accidents, creep is very large and plays a major role. In preliminary design of structures, simplified calculations may conveniently use the dimensionless creep coefficient = . The change of structure state from time of initial loading to time can simply, though crudely, be estimated by quasi-elastic analysis in which Young's modulus is replaced by the so-called age-adjusted effective modulus . The best approach to computer creep analysis of sensitive structures is to convert the creep law to an incremental elastic stress–strain relation with an eigenstrain. Eq. (1) can be used but in that form the variations of humidity and temperature with time cannot be introduced and the need to store the entire stress history for each finite element is cumbersome. It is better to convert Eq. (1) to a set of differential equations based on the Kelvin chain rheologic model. 
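Before the rate-type (Kelvin chain) conversion described next, the simplified quasi-elastic estimate mentioned above can be illustrated with a short numerical sketch. The input values below (conventional elastic modulus, long-time compliance, and the aging coefficient of the age-adjusted effective modulus) are assumptions chosen for illustration only; a real analysis would take the compliance function from a creep model fitted to the given concrete.

# Quasi-elastic estimate of the effect of creep on deflections (illustrative sketch).
E_t0 = 30.0e3    # conventional Young's modulus at the age of loading t0, in MPa (assumed)
J_t_t0 = 1.0e-4  # compliance J(t, t0) after the service period considered, in 1/MPa (assumed)
chi = 0.8        # aging coefficient of the age-adjusted effective modulus (assumed typical value)

phi = E_t0 * J_t_t0 - 1.0          # dimensionless creep coefficient
E_eff = E_t0 / (1.0 + phi)         # plain effective modulus (no aging correction)
E_aaem = E_t0 / (1.0 + chi * phi)  # age-adjusted effective modulus

# An elastic deflection computed with E_t0 grows roughly by these factors
# when the analysis is repeated with the reduced moduli.
print(f"creep coefficient phi               = {phi:.2f}")
print(f"deflection multiplier, eff. modulus = {E_t0 / E_eff:.2f}")
print(f"deflection multiplier, AAEM         = {E_t0 / E_aaem:.2f}")

The age-adjusted value accounts, approximately, for the gradual stress redistribution caused by aging, which the plain effective modulus ignores.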
To this end, the creep properties in each sufficiently small time step may be considered as non-aging, in which case a continuous spectrum of retardation moduli of Kelvin chain may be obtained from by Widder's explicit formula for approximate Laplace transform inversion. The moduli () of the Kelvin units then follow by discretizing this spectrum. They are different for each integration point of each finite element in each time step. This way the creep analysis problem gets converted to a series of elastic structural analyses, each of which can be run on a commercial finite element program. For an example see the last reference below. See also Deformation (engineering) Selected bibliography References Bagheri, A., Jamali, A., Pourmir, M., and Zanganeh, H. (2019). "The Influence of Curing Time on Restrained Shrinkage Cracking of Concrete with Shrinkage Reducing Admixture," Advances in Civil Engineering Materials 8, no. 1: 596-610. https://doi.org/10.1520/ACEM20190100 ACI Committee 209 (1972). "Prediction of creep, shrinkage and temperature effects in concrete structures" ACI-SP27, Designing for Effects of Creep, Shrinkage and Temperature}, Detroit, pp. 51–93 (reaproved 2008) ACI Committee 209 (2008). Guide for Modeling and Calculating Shrinkage and Creep in Hardened Concrete ACI Report 209.2R-08, Farmington Hills. Brooks, J.J. (2005). "30-year creep and shrinkage of concrete." Magazine of Concrete Research, 57(9), 545–556. Paris, France. CEB-FIP Model Code 1990. Model Code for Concrete Structures. Thomas Telford Services Ltd., London, Great Britain; also published by Comité euro-international du béton (CEB), Bulletins d'Information No. 213 and 214, Lausanne, Switzerland. FIB Model Code 2011. "Fédération internationale de béton (FIB). Lausanne. Harboe, E.M., et al. (1958). "A comparison of the instantaneous and the sustained modulus of elasticity of concrete", Concr. Lab. Rep. No. C-354, Division of Engineering Laboratories, US Dept. of the Interior, Bureau of Reclamation, Denver, Colorado. Jirásek, M., and Bažant, Z.P. (2001). Inelastic analysis of structures, J. Wiley, London (chapters 27, 28). RILEM (1988a). Committee TC 69, Chapters 2 and 3 in Mathematical Modeling of Creep and Shrinkage of Concrete, Z.P. Bažant, ed., J. Wiley, Chichester and New York, 1988, 57–215. Troxell, G.E., Raphael, J.E. and Davis, R.W. (1958). "Long-time creep and shrinkage tests of plain and reinforced concrete" Proc. ASTM 58} pp. 1101–1120. Vítek, J.L. (1997). "Long-Term deflections of Large Prestressed Concrete Bridges". CEB Bulletin d'Information No. 235 – Serviceability Models – Behaviour and Modelling in Serviceability Limit States Including Repeated and Sustained Load, CEB, Lausanne, pp. 215–227 and 245–265. Wittmann, F.H. (1982). "Creep and shrinkage mechanisms." Creep and shrinkage of concrete structures, Z.P. Bažant and F.H. Wittmann, eds., J. Wiley, London 129–161. Bažant, Z.P., and Yu, Q. (2012). "Excessive long-time deflections of prestressed box girders." ASCE J. of Structural Engineering, 138 (6), 676–686, 687–696. Concrete Continuum mechanics Deformation (mechanics) Materials degradation Solid mechanics
Creep and shrinkage of concrete
Physics,Materials_science,Engineering
4,640
5,493,022
https://en.wikipedia.org/wiki/Allocation%20concealment
In a randomized experiment, allocation concealment hides the sorting of trial participants into treatment groups so that this knowledge cannot be exploited. Adequate allocation concealment serves to prevent study participants from influencing treatment allocations for subjects. Studies with poor allocation concealment (or none at all) are prone to selection bias. Some standard methods of ensuring allocation concealment include sequentially numbered, opaque, sealed envelopes (SNOSE); sequentially numbered containers; pharmacy controlled randomization; and central randomization. CONSORT guidelines recommend that allocation concealment methods be included in a study's protocol, and that the allocation concealment methods be reported in detail in their publication; however, a 2005 study determined that most clinical trials have unclear allocation concealment in their protocols, in their publications, or both. A 2008 study of 146 meta-analyses concluded that the results of randomized controlled trials with inadequate or unclear allocation concealment tended to be biased toward beneficial effects only if the trials' outcomes were subjective as opposed to objective. Allocation concealment is different from blinding. An allocation concealment method prevents influence on the randomization process, while blinding conceals the outcome of the randomization. However, allocation concealment may also be called "randomization blinding". Impact Without the use of allocation concealment, researchers may (consciously or unconsciously) place subjects expected to have good outcomes in the treatment group, and those expected to have poor outcomes in the control group. This introduces considerable bias in favor of treatment. Naming Allocation concealment has also been called randomization blinding, blinded randomization, and bias-reducing allocation among other names. The term 'allocation concealment' was first introduced by Shultz et al. The authors justified the introduction of the term: Subversion and fraud Traditionally, each patient's treatment allocation data was stored in a sealed envelopes, which was to be opened to determine treatment allocation. However, this system is prone to abuse. Reports of researchers opening envelopes prematurely or holding the envelopes up to lights to determine their contents has led some researchers to say that the use of sealed envelopes is no longer acceptable. , sealed envelopes were still in use in some clinical trials. Modern clinical trials often use centralized allocation concealment. Although considered more secure, central allocations are not completely immune from subversion. Typical and sometimes successful strategies include keeping a list of previous allocations (up to 15% of study personnel report keeping lists). See also Blinded experiment Design of experiments Randomized experiment Metascience Sealedenvelope.com—a provider of allocation concealment services References Design of experiments Research Scientific misconduct Scientific method
Allocation concealment
Technology
530
41,512,292
https://en.wikipedia.org/wiki/Electroless%20nickel-boron%20plating
Electroless nickel-boron coating (often called NiB coating) is a metal plating process that can create a layer of a nickel-boron alloy on the surface of a solid substrate, like metal or plastic. The process involves dipping the substrate in a water solution containing a nickel salt and a boron-containing reducing agent, such as an alkylamineborane or sodium borohydride. It is a type of electroless nickel plating. A similar process, which uses a hypophosphite as the reducing agent, yields a nickel-phosphorus coating instead. Unlike electroplating, electroless plating processes in general do not require passing an electric current through the bath and the substrate; the reduction of the metal cations in solution to the metal is achieved by purely chemical means, through an autocatalytic reaction. Thus electroless plating creates an even layer of metal regardless of the geometry of the surface – in contrast to electroplating, which suffers from uneven current density due to the effect of substrate shape on the electric field at its surface. Moreover, electroless plating can be applied to non-conductive surfaces. The plating bath usually also contains buffers, complexants, and other control chemicals. History Electroless nickel-boron plating developed as a variant of the similar nickel-phosphorus process, discovered accidentally by Charles Adolphe Wurtz in 1844. In 1969, Harold Edward Bellis from DuPont filed a patent for a general class of electroless plating processes using sodium borohydride, dimethylamine borane, or sodium hypophosphite, in the presence of thallium salts, thus producing a metal-thallium-boron or metal-thallium-phosphorus coating, where the metal could be either nickel or cobalt. The boron or phosphorus content was claimed to be variable from 0.1 to 12%, and that of thallium from 0.5 to 6%. The coatings were claimed to be "an intimate dispersion of hard trinickel boride (Ni3B) or nickel phosphide (Ni3P) in a soft matrix of nickel and thallium". Several versions of the process were patented by Charles Edward McComas in the following years: Bellis's formulation was modified by adding stronger chelating agents ("Generation 1"). A method for applying nickel-thallium-boron to titanium without fatiguing the titanium substrate was also developed and commercialized by Purecoat International under the brand "NiBRON". In 1986, McComas patented an improved formulation ("Generation 2") for nickel-cobalt-thallium-boron coating that further increased the stability and repeatability of the process. The patent claimed that the coating consisted of "hard, amorphous alloy nodules of high nickel content dispersed or rooted in a softer alloy of high cobalt content". In 1994 McComas developed another formulation ("Generation 3"), patented in 1998, that allowed consumed chemicals to be replenished during the plating, making it into a continuous process rather than a batch one. A further improvement, patented by McComas in 2000 ("Generation 4"), used lead tungstate as the stabilizer in place of thallium sulfate, at a pH of 10 to 14. This formulation has been commercialized by UCT Coatings under various brand names, including UltraCem and FailZero. Types The earliest variants of electroless nickel-boron plating included thallium salts in the plating bath, and actually created nickel-thallium-boron coatings ("Type 1" in the AMS classification). Eventually formulations were devised that were free from the toxic thallium ingredients, resulting in true nickel-boron ("Type 2") coatings.
Characteristics As-plated grains of amorphous nickel boron deposit in a columnar structure with the columns being perpendicular to the substrate surface and forming a nodular topography on the surface. Coating will contain 2.5–8% boron by weight. The nodular structure reduces surface-to-surface contact of two mating/sliding surfaces, thus reducing friction and improving heat dissipation. It is also claimed to reduce drag in both gas and liquid flows. Applications Actual and potential applications of electroless nickel-boron coating include saw blades, ship propellers, down-hole crude oil pumping equipment, bushings, thrust washers and paper guide plates. References Metal plating Coatings
Electroless nickel-boron plating
Chemistry
931
1,269,912
https://en.wikipedia.org/wiki/Nimodipine
Nimodipine, sold under the brand name Nimotop among others, is a calcium channel blocker used in preventing vasospasm secondary to subarachnoid hemorrhage (a form of cerebral hemorrhage). It was originally developed within the calcium channel blocker class as it was used for the treatment of high blood pressure, but is not used for this indication. It was patented in 1971 and approved for medical use in the US in 1988. It was approved for medical use in Germany in 1985. Medical use Because it has some selectivity for cerebral vasculature, nimodipine's main use is in the prevention of cerebral vasospasm and resultant ischemia, a complication of subarachnoid hemorrhage (a form of cerebral bleed), specifically from ruptured intracranial berry aneurysms irrespective of the patient's post-ictus neurological condition. Its administration begins within 4 days of a subarachnoid hemorrhage and is continued for three weeks. If blood pressure drops by over 5%, dosage is adjusted. There is still controversy regarding the use of intravenous nimodipine on a routine basis. A 2003 trial (Belfort et al.) found nimodipine was inferior to magnesium sulfate in preventing seizures in women with severe preeclampsia. Nimodipine is not regularly used to treat head injury. Several investigations have been performed evaluating its use for traumatic subarachnoid hemorrhage; a systematic review of 4 trials did not suggest any significant benefit to the patients that receive nimodipine therapy. There was one report case of nimodipine being successfully used for treatment of ultradian bipolar cycling after brain injury and, later, amygdalohippocampectomy. Dosage The regular dosage is 60 mg tablets every four hours. If the patient is unable to take tablets orally, it was previously given via intravenous infusion at a rate of 1–2 mg/hour (lower dosage if the body weight is <70 kg or blood pressure is too low), but since the withdrawal of the IV preparation, administration by nasogastric tube is an alternative. Contraindications Nimodipine is associated with low blood pressure, flushing and sweating, edema, nausea and other gastrointestinal problems, most of which are known characteristics of calcium channel blockers. It is contraindicated in unstable angina or an episode of myocardial infarction more recently than one month. While nimodipine was occasionally administered intravenously in the past, the FDA released an alert in January 2006, warning that it had received reports of the approved oral preparation being used intravenously, leading to severe complications; this was despite warnings on the box that this should not be done. Side-effects The FDA has classified the side effects into groups based on dosages levels at q4h. For the high dosage group (90 mg) less than 1% of the group experienced adverse conditions including itching, gastrointestinal hemorrhage, thrombocytopenia, neurological deterioration, vomiting, diaphoresis, congestive heart failure, hyponatremia, decreasing platelet count, disseminated intravascular coagulation, deep vein thrombosis. Pharmacokinetics Absorption After oral administration, it reaches peak plasma concentrations within one and a half hours. Patients taking enzyme-inducing anticonvulsants have lower plasma concentrations, while patients taking sodium valproate were markedly higher. Metabolism Nimodipine is metabolized in the first pass metabolism. 
The dihydropyridine ring of nimodipine is dehydrogenated in the hepatic cells of the liver, a process governed by cytochrome P450 isoform 3A (CYP3A). This can be completely inhibited, however, by troleandomycin (an antibiotic) or ketoconazole (an antifungal drug). Excretion Studies in non-human mammals using radioactive labeling have found that 40–50% of the dose is excreted via urine. The residue level in the body was never more than 1.5% in monkeys. Mode of action Nimodipine binds specifically to L-type voltage-gated calcium channels. There are numerous theories about its mechanism in preventing vasospasm, but none are conclusive. Nimodipine has additionally been found to act as an antagonist of the mineralocorticoid receptor, or as an antimineralocorticoid. Synthesis The key acetoacetate (2) for the synthesis of nimodipine (5) is obtained by alkylation of sodium acetoacetate with 2-methoxyethyl chloride. Aldol condensation of this ester with meta-nitrobenzaldehyde (1), followed by reaction of the resulting intermediate with the enamine (4), gives nimodipine. Stereochemistry Nimodipine contains a stereocenter and can exist as either of two enantiomers. The pharmaceutical drug is a racemate, an equal mixture of the (R)- and (S)-forms. References Further reading Antimineralocorticoids Calcium channel blockers Dihydropyridines Carboxylate esters Ethers 3-Nitrophenyl compounds Isopropyl esters
Nimodipine
Chemistry
1,152
25,268
https://en.wikipedia.org/wiki/Quantum%20electrodynamics
In particle physics, quantum electrodynamics (QED) is the relativistic quantum field theory of electrodynamics. In essence, it describes how light and matter interact and is the first theory where full agreement between quantum mechanics and special relativity is achieved. QED mathematically describes all phenomena involving electrically charged particles interacting by means of exchange of photons and represents the quantum counterpart of classical electromagnetism giving a complete account of matter and light interaction. In technical terms, QED can be described as a very accurate way to calculate the probability of the position and movement of particles, even those massless such as photons, and the quantity depending on position (field) of those particles, and described light and matter beyond the wave-particle duality proposed by Albert Einstein in 1905. Richard Feynman called it "the jewel of physics" for its extremely accurate predictions of quantities like the anomalous magnetic moment of the electron and the Lamb shift of the energy levels of hydrogen. It is the most precise and stringently tested theory in physics. History The first formulation of a quantum theory describing radiation and matter interaction is attributed to British scientist Paul Dirac, who during the 1920s computed the coefficient of spontaneous emission of an atom. He is credited with coining the term "quantum electrodynamics". Dirac described the quantization of the electromagnetic field as an ensemble of harmonic oscillators with the introduction of the concept of creation and annihilation operators of particles. In the following years, with contributions from Wolfgang Pauli, Eugene Wigner, Pascual Jordan, Werner Heisenberg and Enrico Fermi, physicists came to believe that, in principle, it was possible to perform any computation for any physical process involving photons and charged particles. However, further studies by Felix Bloch with Arnold Nordsieck, and Victor Weisskopf, in 1937 and 1939, revealed that such computations were reliable only at a first order of perturbation theory, a problem already pointed out by Robert Oppenheimer. At higher orders in the series infinities emerged, making such computations meaningless and casting doubt on the theory's internal consistency. This suggested that special relativity and quantum mechanics were fundamentally incompatible. Difficulties increased through the end of the 1940s. Improvements in microwave technology made it possible to take more precise measurements of the shift of the levels of a hydrogen atom, later known as the Lamb shift and magnetic moment of the electron. These experiments exposed discrepancies that the theory was unable to explain. A first indication of a possible solution was given by Bethe in 1947. He made the first non-relativistic computation of the shift of the lines of the hydrogen atom as measured by Lamb and Retherford. Despite limitations of the computation, agreement was excellent. The idea was simply to attach infinities to corrections of mass and charge that were actually fixed to a finite value by experiments. In this way, the infinities get absorbed in those constants and yield a finite result with good experimental agreement. This procedure was named renormalization. 
Based on Bethe's intuition and fundamental papers on the subject by Shin'ichirō Tomonaga, Julian Schwinger, Richard Feynman and Freeman Dyson, it was finally possible to produce fully covariant formulations that were finite at any order in a perturbation series of quantum electrodynamics. Tomonaga, Schwinger, and Feynman were jointly awarded the 1965 Nobel Prize in Physics for their work in this area. Their contributions, and Dyson's, were about covariant and gauge-invariant formulations of quantum electrodynamics that allow computations of observables at any order of perturbation theory. Feynman's mathematical technique, based on his diagrams, initially seemed unlike the field-theoretic, operator-based approach of Schwinger and Tomonaga, but Dyson later showed that the two approaches were equivalent. Renormalization, the need to attach a physical meaning at certain divergences appearing in the theory through integrals, became one of the fundamental aspects of quantum field theory and is seen as a criterion for a theory's general acceptability. Even though renormalization works well in practice, Feynman was never entirely comfortable with its mathematical validity, referring to renormalization as a "shell game" and "hocus pocus". Neither Feynman nor Dirac were happy with that way to approach the observations made in theoretical physics, above all in quantum mechanics. QED is the model and template for all subsequent quantum field theories. One such subsequent theory is quantum chromodynamics, which began in the early 1960s and attained its present form in the 1970s, developed by H. David Politzer, Sidney Coleman, David Gross and Frank Wilczek. Building on Schwinger's pioneering work, Gerald Guralnik, Dick Hagen, and Tom Kibble, Peter Higgs, Jeffrey Goldstone, and others, Sheldon Glashow, Steven Weinberg and Abdus Salam independently showed how the weak nuclear force and quantum electrodynamics could be merged into a single electroweak force. Feynman's view of quantum electrodynamics Introduction Near the end of his life, Richard Feynman gave a series of lectures on QED intended for the lay public. These lectures were transcribed and published as Feynman (1985), QED: The Strange Theory of Light and Matter, a classic non-mathematical exposition of QED from the point of view articulated below. The key components of Feynman's presentation of QED are three basic actions. A photon goes from one place and time to another place and time. An electron goes from one place and time to another place and time. An electron emits or absorbs a photon at a certain place and time. These actions are represented in the form of visual shorthand by the three basic elements of diagrams: a wavy line for the photon, a straight line for the electron and a junction of two straight lines and a wavy one for a vertex representing emission or absorption of a photon by an electron. These can all be seen in the adjacent diagram. As well as the visual shorthand for the actions, Feynman introduces another kind of shorthand for the numerical quantities called probability amplitudes. The probability is the square of the absolute value of total probability amplitude, . If a photon moves from one place and time to another place and time , the associated quantity is written in Feynman's shorthand as , and it depends on only the momentum and polarization of the photon. The similar quantity for an electron moving from to is written . 
It depends on the momentum and polarization of the electron, in addition to a constant Feynman calls n, sometimes called the "bare" mass of the electron: it is related to, but not the same as, the measured electron mass. Finally, the quantity that tells us about the probability amplitude for an electron to emit or absorb a photon Feynman calls j, and is sometimes called the "bare" charge of the electron: it is a constant, and is related to, but not the same as, the measured electron charge e. QED is based on the assumption that complex interactions of many electrons and photons can be represented by fitting together a suitable collection of the above three building blocks and then using the probability amplitudes to calculate the probability of any such complex interaction. It turns out that the basic idea of QED can be communicated while assuming that the square of the total of the probability amplitudes mentioned above (P(A to B), E(C to D) and j) acts just like our everyday probability (a simplification made in Feynman's book). Later on, this will be corrected to include specifically quantum-style mathematics, following Feynman. The basic rules of probability amplitudes that will be used are: The indistinguishability criterion in (a) is very important: it means that there is no observable feature present in the given system that in any way "reveals" which alternative is taken. In such a case, one cannot observe which alternative actually takes place without changing the experimental setup in some way (e.g. by introducing a new apparatus into the system). Whenever one is able to observe which alternative takes place, one always finds that the probability of the event is the sum of the probabilities of the alternatives. Indeed, if this were not the case, the very term "alternatives" to describe these processes would be inappropriate. What (a) says is that once the physical means for observing which alternative occurred is removed, one cannot still say that the event is occurring through "exactly one of the alternatives" in the sense of adding probabilities; one must add the amplitudes instead. Similarly, the independence criterion in (b) is very important: it only applies to processes which are not "entangled". Basic constructions Suppose we start with one electron at a certain place and time (this place and time being given the arbitrary label A) and a photon at another place and time (given the label B). A typical question from a physical standpoint is: "What is the probability of finding an electron at C (another place and a later time) and a photon at D (yet another place and time)?". The simplest process to achieve this end is for the electron to move from A to C (an elementary action) and for the photon to move from B to D (another elementary action). From a knowledge of the probability amplitudes of each of these sub-processes – E(A to C) and P(B to D) – we would expect to calculate the probability amplitude of both happening together by multiplying them, using rule b) above. This gives a simple estimated overall probability amplitude, which is squared to give an estimated probability. But there are other ways in which the result could come about. The electron might move to a place and time E, where it absorbs the photon; then move on before emitting another photon at F; then move on to C, where it is detected, while the new photon moves on to D. 
The probability of this complex process can again be calculated by knowing the probability amplitudes of each of the individual actions: three electron actions, two photon actions and two vertexes – one emission and one absorption. We would expect to find the total probability amplitude by multiplying the probability amplitudes of each of the actions, for any chosen positions of E and F. We then, using rule a) above, have to add up all these probability amplitudes for all the alternatives for E and F. (This is not elementary in practice and involves integration.) But there is another possibility, which is that the electron first moves to G, where it emits a photon, which goes on to D, while the electron moves on to H, where it absorbs the first photon, before moving on to C. Again, we can calculate the probability amplitude of these possibilities (for all points G and H). We then have a better estimation for the total probability amplitude by adding the probability amplitudes of these two possibilities to our original simple estimate. Incidentally, the name given to this process of a photon interacting with an electron in this way is Compton scattering. There is an infinite number of other intermediate "virtual" processes in which more and more photons are absorbed and/or emitted. For each of these processes, a Feynman diagram could be drawn describing it. This implies a complex computation for the resulting probability amplitudes, but provided it is the case that the more complicated the diagram, the less it contributes to the result, it is only a matter of time and effort to find as accurate an answer as one wants to the original question. This is the basic approach of QED. To calculate the probability of any interactive process between electrons and photons, it is a matter of first noting, with Feynman diagrams, all the possible ways in which the process can be constructed from the three basic elements. Each diagram involves some calculation involving definite rules to find the associated probability amplitude. That basic scaffolding remains when one moves to a quantum description, but some conceptual changes are needed. One is that whereas we might expect in our everyday life that there would be some constraints on the points to which a particle can move, that is not true in full quantum electrodynamics. There is a nonzero probability amplitude of an electron at A, or a photon at B, moving as a basic action to any other place and time in the universe. That includes places that could only be reached at speeds greater than that of light and also earlier times. (An electron moving backwards in time can be viewed as a positron moving forward in time.) Probability amplitudes Quantum mechanics introduces an important change in the way probabilities are computed. Probabilities are still represented by the usual real numbers we use for probabilities in our everyday world, but probabilities are computed as the square modulus of probability amplitudes, which are complex numbers. Feynman avoids exposing the reader to the mathematics of complex numbers by using a simple but accurate representation of them as arrows on a piece of paper or screen. (These must not be confused with the arrows of Feynman diagrams, which are simplified representations in two dimensions of a relationship between points in three dimensions of space and one of time.) The amplitude arrows are fundamental to the description of the world given by quantum theory. 
They are related to our everyday ideas of probability by the simple rule that the probability of an event is the square of the length of the corresponding amplitude arrow. So, for a given process, if two probability amplitudes, v and w, are involved, the probability of the process will be given either by or The rules as regards adding or multiplying, however, are the same as above. But where you would expect to add or multiply probabilities, instead you add or multiply probability amplitudes that now are complex numbers. Addition and multiplication are common operations in the theory of complex numbers and are given in the figures. The sum is found as follows. Let the start of the second arrow be at the end of the first. The sum is then a third arrow that goes directly from the beginning of the first to the end of the second. The product of two arrows is an arrow whose length is the product of the two lengths. The direction of the product is found by adding the angles that each of the two have been turned through relative to a reference direction: that gives the angle that the product is turned relative to the reference direction. That change, from probabilities to probability amplitudes, complicates the mathematics without changing the basic approach. But that change is still not quite enough because it fails to take into account the fact that both photons and electrons can be polarized, which is to say that their orientations in space and time have to be taken into account. Therefore, P(A to B) consists of 16 complex numbers, or probability amplitude arrows. There are also some minor changes to do with the quantity j, which may have to be rotated by a multiple of 90° for some polarizations, which is only of interest for the detailed bookkeeping. Associated with the fact that the electron can be polarized is another small necessary detail, which is connected with the fact that an electron is a fermion and obeys Fermi–Dirac statistics. The basic rule is that if we have the probability amplitude for a given complex process involving more than one electron, then when we include (as we always must) the complementary Feynman diagram in which we exchange two electron events, the resulting amplitude is the reverse – the negative – of the first. The simplest case would be two electrons starting at A and B ending at C and D. The amplitude would be calculated as the "difference", , where we would expect, from our everyday idea of probabilities, that it would be a sum. Propagators Finally, one has to compute P(A to B) and E(C to D) corresponding to the probability amplitudes for the photon and the electron respectively. These are essentially the solutions of the Dirac equation, which describe the behavior of the electron's probability amplitude and the Maxwell's equations, which describes the behavior of the photon's probability amplitude. These are called Feynman propagators. The translation to a notation commonly used in the standard literature is as follows: where a shorthand symbol such as stands for the four real numbers that give the time and position in three dimensions of the point labeled A. Mass renormalization A problem arose historically which held up progress for twenty years: although we start with the assumption of three basic "simple" actions, the rules of the game say that if we want to calculate the probability amplitude for an electron to get from A to B, we must take into account all the possible ways: all possible Feynman diagrams with those endpoints. 
Thus there will be a way in which the electron travels to C, emits a photon there and then absorbs it again at D before moving on to B. Or it could do this kind of thing twice, or more. In short, we have a fractal-like situation in which if we look closely at a line, it breaks up into a collection of "simple" lines, each of which, if looked at closely, are in turn composed of "simple" lines, and so on ad infinitum. This is a challenging situation to handle. If adding that detail only altered things slightly, then it would not have been too bad, but disaster struck when it was found that the simple correction mentioned above led to infinite probability amplitudes. In time this problem was "fixed" by the technique of renormalization. However, Feynman himself remained unhappy about it, calling it a "dippy process", and Dirac also criticized this procedure as "in mathematics one does not get rid of infinities when it does not please you". Conclusions Within the above framework physicists were then able to calculate to a high degree of accuracy some of the properties of electrons, such as the anomalous magnetic dipole moment. However, as Feynman points out, it fails to explain why particles such as the electron have the masses they do. "There is no theory that adequately explains these numbers. We use the numbers in all our theories, but we don't understand them – what they are, or where they come from. I believe that from a fundamental point of view, this is a very interesting and serious problem." Mathematical formulation QED action Mathematically, QED is an abelian gauge theory with the symmetry group U(1), defined on Minkowski space (flat spacetime). The gauge field, which mediates the interaction between the charged spin-1/2 fields, is the electromagnetic field. The QED Lagrangian for a spin-1/2 field interacting with the electromagnetic field in natural units gives rise to the action where are Dirac matrices. a bispinor field of spin-1/2 particles (e.g. electron–positron field). , called "psi-bar", is sometimes referred to as the Dirac adjoint. is the gauge covariant derivative. e is the coupling constant, equal to the electric charge of the bispinor field. is the covariant four-potential of the electromagnetic field generated by the electron itself. It is also known as a gauge field or a connection. is the external field imposed by external source. m is the mass of the electron or positron. is the electromagnetic field tensor. This is also known as the curvature of the gauge field. Expanding the covariant derivative reveals a second useful form of the Lagrangian (external field set to zero for simplicity) where is the conserved current arising from Noether's theorem. It is written Equations of motion Expanding the covariant derivative in the Lagrangian gives For simplicity, has been set to zero. Alternatively, we can absorb into a new gauge field and relabel the new field as From this Lagrangian, the equations of motion for the and fields can be obtained. Equation of motion for ψ These arise most straightforwardly by considering the Euler-Lagrange equation for . 
Since the Lagrangian contains no terms, we immediately get so the equation of motion can be written Equation of motion for Aμ Using the Euler–Lagrange equation for the field, the derivatives this time are Substituting back into () leads to which can be written in terms of the current as Now, if we impose the Lorenz gauge condition the equations reduce to which is a wave equation for the four-potential, the QED version of the classical Maxwell equations in the Lorenz gauge. (The square represents the wave operator, .) Interaction picture This theory can be straightforwardly quantized by treating bosonic and fermionic sectors as free. This permits us to build a set of asymptotic states that can be used to start computation of the probability amplitudes for different processes. In order to do so, we have to compute an evolution operator, which for a given initial state will give a final state in such a way to have This technique is also known as the S-matrix. The evolution operator is obtained in the interaction picture, where time evolution is given by the interaction Hamiltonian, which is the integral over space of the second term in the Lagrangian density given above: and so, one has where T is the time-ordering operator. This evolution operator only has meaning as a series, and what we get here is a perturbation series with the fine-structure constant as the development parameter. This series is called the Dyson series. Feynman diagrams Despite the conceptual clarity of this Feynman approach to QED, almost no early textbooks follow him in their presentation. When performing calculations, it is much easier to work with the Fourier transforms of the propagators. Experimental tests of quantum electrodynamics are typically scattering experiments. In scattering theory, particles' momenta rather than their positions are considered, and it is convenient to think of particles as being created or annihilated when they interact. Feynman diagrams then look the same, but the lines have different interpretations. The electron line represents an electron with a given energy and momentum, with a similar interpretation of the photon line. A vertex diagram represents the annihilation of one electron and the creation of another together with the absorption or creation of a photon, each having specified energies and momenta. Using Wick's theorem on the terms of the Dyson series, all the terms of the S-matrix for quantum electrodynamics can be computed through the technique of Feynman diagrams. In this case, rules for drawing are the following To these rules we must add a further one for closed loops that implies an integration on momenta , since these internal ("virtual") particles are not constrained to any specific energy–momentum, even that usually required by special relativity (see Propagator for details). The signature of the metric is . From them, computations of probability amplitudes are straightforwardly given. An example is Compton scattering, with an electron and a photon undergoing elastic scattering. Feynman diagrams are in this case and so we are able to get the corresponding amplitude at the first order of a perturbation series for the S-matrix: from which we can compute the cross section for this scattering. Nonperturbative phenomena The predictive success of quantum electrodynamics largely rests on the use of perturbation theory, expressed in Feynman diagrams. However, quantum electrodynamics also leads to predictions beyond perturbation theory. 
In the presence of very strong electric fields, it predicts that electrons and positrons will be spontaneously produced, so causing the decay of the field. This process, called the Schwinger effect, cannot be understood in terms of any finite number of Feynman diagrams and hence is described as nonperturbative. Mathematically, it can be derived by a semiclassical approximation to the path integral of quantum electrodynamics. Renormalizability Higher-order terms can be straightforwardly computed for the evolution operator, but these terms display diagrams containing the following simpler ones that, being closed loops, imply the presence of diverging integrals having no mathematical meaning. To overcome this difficulty, a technique called renormalization has been devised, producing finite results in very close agreement with experiments. A criterion for the theory being meaningful after renormalization is that the number of diverging diagrams is finite. In this case, the theory is said to be "renormalizable". The reason for this is that to get observables renormalized, one needs a finite number of constants to maintain the predictive value of the theory untouched. This is exactly the case of quantum electrodynamics displaying just three diverging diagrams. This procedure gives observables in very close agreement with experiment as seen e.g. for electron gyromagnetic ratio. Renormalizability has become an essential criterion for a quantum field theory to be considered as a viable one. All the theories describing fundamental interactions, except gravitation, whose quantum counterpart is only conjectural and presently under very active research, are renormalizable theories. Nonconvergence of series An argument by Freeman Dyson shows that the radius of convergence of the perturbation series in QED is zero. The basic argument goes as follows: if the coupling constant were negative, this would be equivalent to the Coulomb force constant being negative. This would "reverse" the electromagnetic interaction so that like charges would attract and unlike charges would repel. This would render the vacuum unstable against decay into a cluster of electrons on one side of the universe and a cluster of positrons on the other side of the universe. Because the theory is "sick" for any negative value of the coupling constant, the series does not converge but is at best an asymptotic series. From a modern perspective, we say that QED is not well defined as a quantum field theory to arbitrarily high energy. The coupling constant runs to infinity at finite energy, signalling a Landau pole. The problem is essentially that QED appears to suffer from quantum triviality issues. This is one of the motivations for embedding QED within a Grand Unified Theory. Electrodynamics in curved spacetime This theory can be extended, at least as a classical field theory, to curved spacetime. This arises similarly to the flat spacetime case, from coupling a free electromagnetic theory to a free fermion theory and including an interaction which promotes the partial derivative in the fermion theory to a gauge-covariant derivative. 
See also Abraham–Lorentz force Anomalous magnetic moment Bhabha scattering Cavity quantum electrodynamics Circuit quantum electrodynamics Compton scattering Euler–Heisenberg Lagrangian Gupta–Bleuler formalism Lamb shift Landau pole Moeller scattering Non-relativistic quantum electrodynamics Photon polarization Positronium Precision tests of QED QED vacuum QED: The Strange Theory of Light and Matter Quantization of the electromagnetic field Scalar electrodynamics Schrödinger equation Schwinger model Schwinger–Dyson equation Vacuum polarization Vertex function Wheeler–Feynman absorber theory References Further reading Books Journals External links Feynman's Nobel Prize lecture describing the evolution of QED and his role in it Feynman's New Zealand lectures on QED for non-physicists The Strange Theory of Light | Animation of Feynman pictures light by QED – Animations demonstrating QED Freeman Dyson Quantum electronics Quantum field theory
Quantum electrodynamics
Physics,Materials_science
5,772
36,764,899
https://en.wikipedia.org/wiki/Bin%20Chen
Bin Chen is a Chinese-born American materials scientist who works at the NASA Ames Research Center. She is an adjunct professor at University of California, Santa Cruz. She earned a B.S. from Nanjing University and a Ph.D. from Pennsylvania State University. Biography Bin Chen was born in China. After graduating from Nanjing University with a B.S. in Chemistry, she moved to the United States for graduate school. After briefly studying at both Boston University and the University of Illinois at Urbana-Champaign, she moved to Pennsylvania State University and earned a Ph.D. in Chemistry. She then moved to Palo Alto, where she was hired by University of California, Santa Cruz and NASA Ames Research Center. Research Chen has worked on materials synthesis and applications for both sustainable energy and ultra-sensitive detection techniques. She has conducted over $3 million worth of federal projects (NASA, DARPA, and DTRA) on materials studies and device fabrication in the past five years. Her current interest is energy harvesting and storage devices based on hybrid nanocomposite materials. The Chen group conducts multidisciplinary research projects involving electrical engineers, physicists, material scientists, chemists, and planetary scientists ranging from undergraduates to postdoctoral researchers. Bin Chen has been profiled in both the Dekker Encyclopedia of Nanoscience and Nanotechnology and Marquis Who’s Who in America. Her work has been highlighted at Technology Review by MIT (2007) and IEEE Nanotechnology Initiative recognition. Chen is a NASA TGER award recipient. Bin Chen's journal articles include Molecular Fiber Sensors Based on Surface Enhanced Raman Scattering (SERS) and Renewable Energy: Solar Fuels GRC and GRS. References External links Bin Chen's Article in the Dekker Encyclopedia of Nanoscience and Nanotechnology, Second Edition Chinese materials scientists Living people Chinese expatriates in the United States Women materials scientists and engineers Year of birth missing (living people) Boston University alumni Nanjing University alumni Pennsylvania State University alumni University of California, Santa Cruz faculty
Bin Chen
Materials_science,Technology
411
64,393,884
https://en.wikipedia.org/wiki/List%20of%20esters
In chemistry, an ester is a compound derived from an acid (organic or inorganic) in which the hydrogen atom (H) of at least one acidic hydroxyl group () of that acid is replaced by an organyl group (). Analogues derived from oxygen replaced by other chalcogens belong to the ester category as well (i.e. esters of acidic , , , and groups). According to some authors, organyl derivatives of acidic hydrogen of other acids are esters as well (e.g. amides), but not according to the IUPAC. An example of an ester formation is the substitution reaction between a carboxylic acid () and an alcohol (R'OH), forming an ester (), where R and R′ are organyl groups, or H in the case of esters of formic acid. Glycerides, which are fatty acid esters of glycerol, are important esters in biology, being one of the main classes of lipids, and making up the bulk of animal fats and vegetable oils. Esters of carboxylic acids with low molecular weight are commonly used as fragrances and found in essential oils and pheromones. Phosphoesters form the backbone of DNA molecules. Nitrate esters, such as nitroglycerin, are known for their explosive properties, while polyesters are important plastics, with monomers linked by ester moieties. Esters of carboxylic acids usually have a sweet smell and are considered high-quality solvents for a broad array of plastics, plasticizers, resins, and lacquers. They are also one of the largest classes of synthetic lubricants on the commercial market. By number of R' group carbons (R−C(=O)−O−R') 1 carbon 2 carbons 3 carbons 4 carbons 5 carbons 7 carbons 8 carbons 10 carbons By number of R group carbons (R−C(=O)−O−R') 0 carbons 1 carbon 2 carbons 3 carbons 4 carbons 5 carbons 6 carbons 7 carbons 8 carbons 9 carbons 10 carbons 16 carbons List of ester odorants Many esters of carboxylic acid have distinctive fruit-like odors, and many occur naturally in fruits and the essential oils of plants. This has also led to their common use in artificial flavorings and fragrances which aim to mimic those odors. Lactones Lactones are a specific class cyclic carboxylic esters that are formed through intramolecular esterification. References Esters Esters
List of esters
Chemistry
560
66,716,581
https://en.wikipedia.org/wiki/Lepiota%20subalba
Lepiota subalba is a species of fungus belonging to the family Agaricaceae. It is native to Europe. References subalba Fungus species
Lepiota subalba
Biology
34
56,917,808
https://en.wikipedia.org/wiki/C-C%20motif%20chemokine%20ligand%2027
C-C motif chemokine ligand 27 is a protein that in humans is encoded by the CCL27 gene. Function This gene is one of several CC cytokine genes clustered on the p-arm of chromosome 9. Cytokines are a family of secreted proteins involved in immunoregulatory and inflammatory processes. The CC cytokines are proteins characterized by two adjacent cysteines. The protein encoded by this gene is chemotactic for skin-associated memory T lymphocytes. CCL27 is associated with homing of memory T lymphocytes to the skin, and plays a role in T cell-mediated inflammation of the skin. CCL27 is expressed in numerous tissues, including gonads, thymus, placenta and skin. It elicits its chemotactic effects by binding to the chemokine receptor CCR10. The gene for CCL27 is located on human chromosome 9. Studies of a similar murine protein indicate that these protein-receptor interactions have a pivotal role in T cell-mediated skin inflammation. [provided by RefSeq, Sep 2014]. References Further reading Cytokines
C-C motif chemokine ligand 27
Chemistry
239
40,361,983
https://en.wikipedia.org/wiki/Open%20Identity%20Exchange
The Open Identity Exchange (OIX) is a non-profit organization that works to accelerate the adoption of digital identity services based on open standards. It is also technology-agnostic and operates collaboratively across both the private and public sectors. History Genesis Shortly after coming into office, the Obama administration asked the General Services Administration (GSA) how to leverage open identity technologies to help the American public interact more easily and efficiently with federal websites, such as those of the National Institutes of Health (NIH), the Social Security Administration (SSA), and the Internal Revenue Service (IRS). At the 2009 RSA Conference, the GSA sought to build a public/private partnership with the OpenID Foundation (OIDF) and the Information Card Foundation (ICF) to craft a workable identity information framework that would establish the legal and policy precedents needed to establish trust for Open ID transactions. This partnership eventually developed a trust framework model. Further meetings were held at the Internet Identity Workshop in November 2009, resulting in OIDF and ICF forming a joint steering committee. The committee's task was to study the best implementation options for the newly created framework. Foundation The US Chief Information Officer recommended the formation of a non-profit corporation, the Open Identity Exchange (OIX). In January 2010, the OIDF and ICF approved grants to fund the creation of the Open Identity Exchange. Booz Allen Hamilton, CA Technologies, Equifax, Google, PayPal, Verisign, and Verizon were all members of either OIDF or ICF, and agreed to become founding members of OIX. Trust To trust that the Identity Provider is delivering accurate data, the following should be considered: - Identity Providers must ensure that the Relying Party is legitimate (i.e., not a hacker or phisher). - While direct trust agreements between relying parties and identity providers are a common solution, they become unmanageable at the scale of the Internet. OIXnet In 2014, OIX established the OIXnet trust registry, a global authoritative registry of business, legal, and technical requirements needed to ensure market adoption and global interoperability. In 2015, the OIDF also announced plans to register all companies self-certifying conformance to OpenID Connect via the OpenID Certification Program on OIXnet. Purpose OIXnet is an official, online, and publicly accessible repository of documents and information relating to identity systems and participants, referred to as a “registry”. It functions as an official and centralised source of such documents and information, much like a government-operated recorder of deeds. Individuals and entities can register documents and information with the OIXnet registry to provide notice of their contents to the public.Members of the public seeking access to such documents or information can go to that single authoritative location to find them. The OIXnet registry is designed to provide a single, comprehensive and authoritative location where documents and information relating to a specific purpose, such as identity systems, can be safely stored to notify others of certain facts. From this location, such documents and information can be accessed by interested stakeholders seeking such information. Early participants OIXnet was launched in 2015. 
The OpenID Foundation was the first registrant, registering the initial set of organisations, including Google, ForgeRock, Microsoft, NRI, PayPal and Ping Identity, certifying conformance to OpenID Connect. Additional registrations were added to OIXnet throughout 2015 and 2016, with 10 trusted identity services currently registered. Status The OIXnet registry was in a pilot phase as of 2016, registering new and diverse trust frameworks and communities of interest. International Chapters OIX developed a chapters policy in 2015 that allows regional OIX chapters to be established. In 2016, the OIX United Kingdom Chapter was approved by OIX board and launched. Leadership The OIX board represents leaders in online identity in the internet, telecom, and data aggregation industries, concerned with both market expansion and information security. Government relations The OIX board met with Howard Schmidt in 2011 to discuss the public–private partnership envisioned in the NSTIC strategy. The UK government's Cabinet Office joined the OIX at the board level, as it began the work on its Identity Assurance Programme, which is now GOV.UK Verify. In 2015, the States of Jersey commissioned an OIX Discovery project to explore how the knowledge, expertise, and components of one of these models, the UK’s GOV.UK Verify identity assurance scheme, could be leveraged to provide a cost-effective solution to meet Jersey’s requirements. Membership The Open Identity Exchange currently has five executive members and over 50 general members. Executive Members Barclays International Airlines Group LexisNexis Mastercard NatWest Group OIX UK Europe Chapter At the beginning of 2015, the Cabinet Office requested Open Identity Exchange to begin exploring the legal, business, and pragmatic considerations of creating a self-sustaining UK ‘chapter’ of the Open Identity Exchange. Up until that point, OIX UK operated as an independent UK entity, able to administer ‘directed funding’ from member organisations. It had received a series of grants from the UK Cabinet Office, that were used for the collaboratively funded projects. An ad hoc board of advisers was formed of independent, experienced, public and private sector leaders who addressed policy considerations during this transition process. In addition to considering the role of OIX UK in the future, this board of advisers considered the private sector's needs for identity services, resulting in an ongoing OIX project. The Open Identity Exchange board of directors approved an OIX chapters policy at the end of 2015, allowing the formation of individual chapters affiliated with OIX in various local markets. In April 2016 the OIX UK Europe Chapter appointed its board of directors. White Papers The OIX White Papers deliver joint research to examine a wide range of challenges facing the open identity market and to provide possible solutions. They are written by experts in the fields of technology, particularly open identity. 
OIX OIX: An Open Market Solution for Online Identity Assurance Trust Frameworks Trust Framework Requirements and Guidelines The Personal Network: A New Trust Model and Business Model for Personal Data Federated Online Attribute Exchange Initiatives Personal Levels of Assurance (PLOA) The Three Pillars of Trust UK Identity Assurance Programme (IDAP) Overview of Legal Liability in the IDAP (In development) US National Strategy for Trusted Identities in Cyberspace (NSTIC) Comments on US NSTIC Steering Group Draft Charter and Related Governance Issues United States National Strategy for Trusted Identities in Cyberspace Identity Ecosystem Steering Committee Plenary and Governing Board Charter OIX Response to "Models for a Governance Structure for the National Strategy for Trusted Identity in Cyberspace" White Papers Published in 2016 Open Identity Exchange (OIX) White Papers focus on current issues and opportunities in emerging identity markets. OIX White Papers are intended to deliver value to the identity ecosystem and take one of two perspectives: a retrospective report on the outcome of a given project or pilot or a prospective discussion on a current issue or opportunity. OIX White Papers are authored by independent domain experts and are intended as summaries for a general business audience. Recent published whitepapers include: • Use of online activity as part of the identity verification • UK private sector needs for identity assurance • Use of digital identity in peer-to-peer economy • Shared signals proof of concept • Creating a digital identity in Jersey • Just Giving and GOV.UK Verify • Creating a pensions dashboard • Could digital identities help transform consumers attitudes and behavior towards savings? • Digital identity across borders: opening a bank account in another EU country • Generating Revenue and Subscriber Benefits: An Analysis of: The ARPU of Identity Projects OIX projects deliver joint research to examine a wide range of challenges facing the open identity market and to provide possible solutions. States of Jersey: Creating a Digital ID The hypothesis was that the UK Government identity assurance model could be adapted for Jersey with the support of certified UK IdPs and potential identity assurance hub providers, to meet the requirements of SoJ. The hypothesis also considered that this would create an attractive market opportunity in Jersey for one or more of these providers. LIGHTest Project This is a 3-year project that started in September 2016 and is partially funded from the European Union's Horizon 2020 research and innovation programme under G.A. No. 700321. The LIGHTest consortium consists of 14 partners from 9 European countries and is coordinated by Fraunhofer-Gesellschaft. The project looks to reach out beyond Europe, to build a global community. LIGHTest (Lightweight Infrastructure for Global Heterogeneous Trust management in support of an open Ecosystem of Stakeholders and Trust schemes) The objective of LIGHTest is to create a global cross-domain trust infrastructure that renders it transparent and easy for verifiers to evaluate electronic transactions. By querying different trust authorities worldwide and combining trust aspects related to identity, business, reputation etc,. it will become possible to conduct domain-specific trust decisions. This is achieved by reusing existing governance, organization, infrastructure, standards, software, community, and know-how of the existing Domain Name System, combined with new innovative building blocks. 
This approach allows an efficient global rollout of a solution that assists decision-makers in their trust decisions. By integrating mobile identities into the scheme, LIGHTest also enables domain-specific assessments on Levels of Assurance for these identities. GOV.UK Verify The UK Government's Cabinet Office joined the OIX at board level as it began the work on its Identity Assurance Programme (IDAP). Through the OIX Directed Funding programme, a considerable number of projects continue to be carried out under OIX governance, the results of which have helped with the ongoing development of GOV.UK Verify. Work continues as GDS looks at how digital identities can be used in both the public and private sector. GOV.UK Verify is built and maintained by the Government Digital Service (GDS), part of the Cabinet Office. The UK Government is committed to expanding GOV.UK Verify and helping to grow a market for identity assurance that will be able to meet user needs in relation to central government services, as well as local, health and private sector services. GOV.UK Verify uses certified companies to verify your identity to government. A certified company is a private company that works to high industry and government standards when they verify your identity. References External links OIXnet Cloud standards Password authentication Federated identity Identity management initiative Computational trust Information technology organisations based in the United Kingdom Organisations based in the City of Westminster
Open Identity Exchange
Technology,Engineering
2,141
2,650,394
https://en.wikipedia.org/wiki/Respiratory%20rate
The respiratory rate is the rate at which breathing occurs; it is set and controlled by the respiratory center of the brain. A person's respiratory rate is usually measured in breaths per minute. Measurement The respiratory rate in humans is measured by counting the number of breaths for one minute through counting how many times the chest rises. A fibre-optic breath rate sensor can be used for monitoring patients during a magnetic resonance imaging scan. Respiration rates may increase with fever, illness, or other medical conditions. Inaccuracies in respiratory measurement have been reported in the literature. One study compared respiratory rate counted using a 90-second count period, to a full minute, and found significant differences in the rates.. Another study found that rapid respiratory rates in babies, counted using a stethoscope, were 60–80% higher than those counted from beside the cot without the aid of the stethoscope. Similar results are seen with animals when they are being handled and not being handled—the invasiveness of touch apparently is enough to make significant changes in breathing. Various other methods to measure respiratory rate are commonly used, including impedance pneumography, and capnography which are commonly implemented in patient monitoring. In addition, novel techniques for automatically monitoring respiratory rate using wearable sensors are in development, such as estimation of respiratory rate from the electrocardiogram, photoplethysmogram, or accelerometry signals. Breathing rate is often interchanged with the term breathing frequency. However, this should not be considered the frequency of breathing because realistic breathing signal is composed of many frequencies. Normal range For humans, the typical respiratory rate for a healthy adult at rest is 12–15 breaths per minute. The respiratory center sets the quiet respiratory rhythm at around two seconds for an inhalation and three seconds exhalation. This gives the lower of the average rate at 12 breaths per minute. Average resting respiratory rates by age are: birth to 6 weeks: 30–40 breaths per minute 6 months: 25–40 breaths per minute 3 years: 20–30 breaths per minute 6 years: 18–25 breaths per minute 10 years: 17–23 breaths per minute Adults: 15–18 breaths per minute 50 years: 18-25 breaths per minute Elderly ≥ 65 years old: 12–28 breaths per minute. Elderly ≥ 80 years old: 10-30 breaths per minute. Minute volume Respiratory minute volume is the volume of air which is inhaled (inhaled minute volume) or exhaled (exhaled minute volume) from the lungs in one minute. Diagnostic value The value of respiratory rate as an indicator of potential respiratory dysfunction has been investigated but findings suggest it is of limited value. One study found that only 33% of people presenting to an emergency department with an oxygen saturation below 90% had an increased respiratory rate. An evaluation of respiratory rate for the differentiation of the severity of illness in babies under 6 months found it not to be very useful. Approximately half of the babies had a respiratory rate above 50 breaths per minute, thereby questioning the value of having a "cut-off" at 50 breaths per minute as the indicator of serious respiratory illness. It has also been reported that factors such as crying, sleeping, agitation and age have a significant influence on the respiratory rate. As a result of these and similar studies the value of respiratory rate as an indicator of serious illness is limited. 
Nonetheless, respiratory rate is widely used to monitor the physiology of acutely-ill hospital patients. It is measured regularly to facilitate identification of changes in physiology along with other vital signs. This practice has been widely adopted as part of early warning systems. Abnormal respiratory rates See also Subparabrachial nucleus - nucleus in the brain stem that regulates breathing rate Respiratory system Heart rate and pulse and systolic and diastolic blood pressure measurements and the level of oxygen saturation- some other vital signs- can provide related information about the heart and lungs and the great vessels, since these systems work with one another, are relatively close together in gross (macroscopic) anatomy, and are physiologically very related. References Respiratory physiology Respiratory therapy Temporal rates
Respiratory rate
Physics
830
1,606,222
https://en.wikipedia.org/wiki/Dow%20process%20%28bromine%29
The Dow process is the electrolytic method of bromine extraction from brine, and was Herbert Henry Dow's second revolutionary process for generating bromine commercially. This process was patented in 1891. In the original invention, bromide-containing brines are treated with sulfuric acid and bleaching powder to oxidize bromide to bromine, which remains dissolved in the water. The aqueous solution is dripped onto burlap, and air is blown through causing bromine to volatilize. Bromine is trapped with iron turnings to give a solution of ferric bromide. Treatment with more iron metal converted the ferric bromide to ferrous bromide via comproportionation. Where desired, free bromine may be obtained by thermal decomposition of ferrous bromide. Before Dow entered the bromine business, brine was evaporated by heating with wood scraps and then crystallized sodium chloride was removed. An oxidizing agent was added, and bromine was formed in the solution. Then bromine was distilled. This was a very complicated and costly process. References Chemical processes
Dow process (bromine)
Chemistry
236
6,684,233
https://en.wikipedia.org/wiki/Monoxide
A monoxide is any oxide containing only one atom of oxygen. A well known monoxide is carbon monoxide; see carbon monoxide poisoning. The prefix mono (Greek for "one") is used in chemical nomenclature. In proper nomenclature, the prefix is not always used in compounds with one oxygen atom. Generally, when the oxygen is bonded to a nonmetal, the prefix mono is used. However, when the oxygen atom bonds to a metal, the prefix is dropped. For instance, in the compound K2O, potassium (K) is a metal and therefore its proper name is potassium oxide, rather than potassium monoxide. Among monoxides, carbon monoxide and dihydrogen monoxide (water) are both neutral, germanium(II) oxide is distinctly acidic, and both tin(II) oxide and lead(II) oxide are amphoteric. References
Monoxide
Chemistry
182
56,902,334
https://en.wikipedia.org/wiki/Dentocorticium%20bicolor
Dentocorticium bicolor is a species of fungus in the family Polyporaceae. It was originally described by Patric Henry Brabazon Talbot in 1948 as Grandinia bicolor. The type was collected in the Pietermaritzburg district of Natal Province in South Africa, where it was found growing on dead wood. It has also been found in Australia, East Asia, North America, and South America. The fungus was transferred to genus Dentocorticium in 2018 by Karen Nakasone and Shuang-Hui He based on phylogenetic evidence. References Fungi described in 1948 Fungi of Africa Fungi of Asia Fungi of Australia Fungi of North America Fungi of South America Polyporaceae Fungus species
Dentocorticium bicolor
Biology
142
11,598,554
https://en.wikipedia.org/wiki/Ecarin%20clotting%20time
Ecarin clotting time (ECT) is a laboratory test used to monitor anticoagulation during treatment with hirudin, an anticoagulant medication which was originally isolated from leech saliva. Ecarin, the primary reagent in this assay, is derived from the venom of the saw-scaled viper, Echis carinatus. In the clinical assay, a known quantity of ecarin is added to the plasma of a patient treated with hirudin. Ecarin activates prothrombin through a specific proteolytic cleavage, which produces meizothrombin, a prothrombin-thrombin intermediate which retains the full molecular weight of prothrombin, but possesses a low level of procoagulant enzymatic activity. Crucially, this activity is inhibited by hirudin and other direct thrombin inhibitors, but not by heparin. The ECT is also unaffected by prior treatment with warfarin or the presence of phospholipid-dependent anticoagulants, such as lupus anticoagulant. Thus, the ECT is prolonged in a specific and linear fashion with increasing concentrations of hirudin. An enhancement of the ECT is the ecarin chromogenic assay (ECA) in which diluted sample is mixed with an excess of purified prothrombin and the generated meizothrombin is measured with a specific chromogenic substrate. This assay shows no interference from prothrombin or fibrinogen in the sample and is suitable for the measurement of all direct thrombin inhibitors. References Blood tests
Ecarin clotting time
Chemistry
349
56,095,189
https://en.wikipedia.org/wiki/Brazilian%20jurisdictional%20waters
Brazilian jurisdictional waters (, AJB) are the riverine and oceanic spaces over which Brazil exerts some degree of jurisdiction over activities, persons, installations and natural resources. They comprise internal waters, the territorial sea and exclusive economic zone (EEZ), to a distance of from baselines along the coast, as well as waters overlying the extended continental shelf, where Brazilian claims of jurisdiction over its overlying waters are controversial, as the water column over this stretch of seabed is part of the high seas. The continental shelf of Brazil is under a different legal regime from its overlying waters. The Brazilian Navy covers both the shelf and the waters in its less formal concept of a "Blue Amazon". The AJB's total claimed area stands at 5,669,852.41 km² (equivalent to 67% of land territory), of which 2,094,656.59 km² are above the extended shelf. These maritime zones are based on the United Nations Convention on the Law of the Sea (UNCLOS). From 1970 until it came into effect in 1994, Brazil had claimed a territorial sea as far as 200 nautical miles from the coast, instead of the present 12, but retains rights over natural resources in this area through its EEZ. Its coastline is the longest in the South Atlantic Ocean, but only three archipelagos contribute to its EEZ: Fernando de Noronha, Trindade and Martin Vaz and Saint Peter and Saint Paul. Brazil's marine ecosystem is hydrographically and topographically complex and exhibits high rates of endemism and an economic potential in biotechnology. The two prevailing ocean currents, Brazil and North Brazil, have warm, nutrient-poor waters sustaining relatively low biomasses for each species, with a correspondingly limited fishing potential. In winter, cold waters of the Falkland Current may reach as far as the 24th parallel south and cold fronts and extratropical cyclones bring rough seas. The wind, waves, tides and thermal and osmotic gradients offer untouched potentials for renewable energy generation. 26.4% of the EEZ was under protected areas in 2021, mostly around the remote archipelagos of Saint Peter and Saint Paul and Trindade and Martin Vaz. Both are only populated by researchers and military personnel, which is one of the reasons for the government's marine science programs. Most of the country's population lives near the coast and most of its international trade is conducted through the sea, but local shipbuilding and the national merchant marine have little presence in this trade. Coastal shipping answers a modest share of internal trade and mostly covers the oil and natural gas sector. There is no official measurement of the Brazilian maritime economy; 2015 estimates placed it at 2.67% of the gross domestic product directly tied to the sea, mostly in the tourism-dominated service sector. Coast guard duties in jurisdictional waters are assigned to the Navy. Definition International law Brazilian regulation on maritime spaces follows UNCLOS, a codification of international maritime law which came into force in 1994. This Convention, ratified by 168 states as of 2022, unifies centuries of rulemaking on interstate disputes over control of the seas. It organizes the sea off the coast of sovereign states in multiple zones: the territorial sea, contiguous zone, exclusive economic zone and continental shelf. Distances are measured in nautical miles from baselines along the coast. 
There are two kinds of baseline: normal lines, which follow the low-water line as plotted on official charts, or straight lines where the coast is too jagged or island-strewn. The territorial sea extends up to from baselines and grants coastal states sovereignty over the airspace, water column, seabed and subsoil. Each of these spaces is treated separately in the other maritime zones. In the contiguous zone, at 12 to 24 miles from baselines, a coastal state does not have full sovereignty, but may take measures to prevent or repress unlawful activities in its territory or territorial sea. The contiguous zone is part of the EEZ, which has a width of 188 miles, from the limit of the territorial sea until a distance of from baselines. This area gives a coastal state jurisdiction over the exploitation, conservation and management of its waters, seabed and subsoil. The high seas begin at the EEZ's outer limit. The continental shelf as defined in international law is distinct from the geological continental shelf and consists in an area of seabed and its subsoil, excluding the overlying water column, over which a coastal state has sovereign rights over its natural resources. It extends "to the outer edge of the continental margin, or to a distance of 200 nautical miles from the baselines", as defined in the UNCLOS. If a coastal state's continental margin extends beyond 200 nmi, it may propose an extended continental shelf to the Commission on the Limits of the Continental Shelf (CLCS), an international organ created by the UNCLOS. Brazilian law The term "Brazilian jurisdictional waters" (, AJB) exists in legislation since 1941 at the earliest, although it was less common than "Brazilian waters", "waters of the territorial sea" or "territorial waters". The territorial sea has had a legal definition since 1850, an exclusive fishing regime since 1938, a continental shelf since 1950, and a contiguous zone since 1966. An extended continental shelf was first proposed in 2004 and the country has yet to reach a full understanding with the CLCS to make its claims final and binding. Law no. 8,617 of January 4, 1993 defined Brazil's maritime zones according to the UNCLOS definitions, and Decree no. 1,530 of June 22, 1995 reproduced the convention's text, making it enforceable within the country. When ratifying the convention, Brazil also announced any foreign military operations within its EEZ must be notified in advance. The concept of AJB was becoming commonplace in legislation since a 1987 law on the prohibition of whaling within the AJB. Other legislative acts used expressions such as "waters under Brazilian jurisdiction", "waters under national jurisdiction" and "Brazilian jurisdictional marine waters", but the Navy took a liking to AJB. This term was used for many years without an explicit definition until the Maritime Authority's Norms for the Operation of Foreign Waters in Brazilian Jurisdictional Waters ( – NORMAM-04/2001 (Ordinance 61/, September 22, 2001): The Navy has the jurisdiction to complete and expound on the gaps in national maritime legislation, and therefore, definitions given in its norms prevail over others. Matching definitions have been given in other editions of the NORMAM and in the 2014 Navy Basic Doctrine, which further established that the AJB cannot be considered part of the high seas. Presidential decrees and the National Defense White Paper, which was ratified by Congress in 2018, have accepted the Navy's definition. 
Brazilian law regulates maritime traffic, environmental conservation, natural resource exploitation and scientific research in the AJB. Internal waters and the territorial sea are its only components which are part of Brazil's territory, where the state exerts full sovereignty. The EEZ and continental shelf merely offer sovereign rights over natural resources. Therefore, the expression "Brazilian maritime territory", which some authors used for the totality of Brazilian maritime zones, is misleading. Waters overlying the extended continental shelf Some legal scholars have criticized the Brazilian state's claim of jurisdiction over the water column overlying its extended continental shelf. Beyond 200 nmi, the water column is part of the high seas, even when its underlying seabed belongs to a state's continental shelf. At the International Journal of Marine and Coastal Law, Alexandre Pereira Silva summarized in 2020 that the concept of AJB is incompatible with the UNCLOS and infringes on the freedom of the high seas. Tiago V. Zanella, an author on maritime law, does not dismiss the concept's "enormous strategic importance" for the country, but considers that to speak of jurisdictional rights over this area is to perform an "undue appropriation of a zone that is 'open to all states'". In the hypothetical case of a foreign whaling vessel in the waters overlying the extended continental shelf, over 200 nmi from coastal baselines, Brazilian law would require the Navy to impede this vessel's illegal activities in the AJB. The vessel's owners could resort to an international court, such as the International Court of Justice or the International Tribunal for the Law of the Sea, which would rule in their favor. As a party to the UNCLOS, Brazil would have to comply with this ruling. Naval officer Alexander Neves de Assumpção, in his thesis for the Naval War College, conceded there is a risk of naval commanders being led to infringe on international treaties ratified by Brazil. Nonetheless, he contended that "the concept of AJB need not be changed", as it is already moderated in legislation by the expressions "jurisdiction, to some degree", "for the purposes of control and oversight" and "within the limits of national and international law". To oversee exploitation of the seafloor, Brazil would still have a limited jurisdiction, not to be conflated with sovereignty, over its overlying waters, even when they lie in the high seas. No country has contested Brazil's definition, and Argentina and Chile have likewise claimed jurisdictions beyond what was given in the UNCLOS. What remained to be done was to draft norms clarifying which kinds of oversight are allowed. Blue Amazon To encompass all maritime spaces under Brazilian jurisdiction, in 2004 the Navy publicized the concept of a "Blue Amazon" (), which, as mentioned by admiral Júlio Soares de Moura Neto, Commander of the Navy in 2013, is a synonym for jurisdictional waters. The formal definition is "the region which comprises the surface of the sea, waters overlying the seabed, seabed and subsoil within the atlantic expanse which projects from the coast until the outer limit of the Brazilian continental shelf". The Blue Amazon is not a legal term. It is used in the Navy's external and internal communication and by scientific, environmental and other civilian sectors. The term was coined to draw the public's attention to this area by comparison with the "Green" Amazon's vastness and abundance of natural resources. 
It is understood in multiple facets, that is, areas of interest to the state: sovereignty and national defense via political-strategic influence in the South Atlantic Ocean, economic prosperity, scientific and technological innovation and environmental conservation, with an emphasis on the first. The Navy has undertaken a national campaign to publicize the concept, seeking popular support to its maritime strategy, the expansion of maritime limits and military re-equipment. More broadly, it promotes the public's "maritime mentality", seeking to regain what its proponents see as an "oceanic destiny" neglected by public consciousness. Delimitation The Brazilian coast measures 7,491 km, the longest in the South Atlantic. Its baselines project, in the Navy's numbers, an area of 3,575,195.81 km2 within the 200 nmi strip, including a territorial sea measuring 157,975.47 km2 and a contiguous zone of 325,328.34 km2. 2,094,656.59 km2 of extended continental shelf are added to this number to reach a total of 5,669,852.41 km2. This corresponds to 67% of national territory (8.5 million km2) and 1.1 times the size of the Legal Amazon (5.2 million km2). As the AJB also include internal waters, around 60 thousand kilometers of waterways can be counted in its extent. The 5.7 million km2 total claim is reached when counting the most recent (2018) revised proposals for the extended continental shelf. Earlier proposals reached a total of 4,451,766 km2. This area has two international maritime boundaries, one with French Guiana and another with Uruguay, both of which are defined by rhumb lines (which cross the meridians at a constant angle) starting from the border: near the Oyapock River, for the former, and Chuí Lighthouse, for the latter. These limits were defined in 1972 with Uruguay and 1981 with France. Territorial sea Since the 19th century, the Brazilian territorial sea was defined as a three-mile strip along the coast. Exclusive fishing rights were at at 12 nmi from the shore in 1938. A presidential decree added a further three miles of territorial sea in 1966, in a "six miles plus six miles" regime, comparable to a contiguous zone and exclusive fishing rights zone, as far as 12 nmi from the shore. The territorial sea was once again extended in 1969 to a width of 12 nmi. In the following year, Emílio Garrastazu Médici's government (1969–1974) claimed a territorial sea as far as 200 nmi, spanning 3.2 million km2 of the ocean. All of its seabed, subsoil and airspace were to be placed under Brazilian sovereignty. At a time when the ruling military dictatorship envisioned great power status, this decision answered fishing interests and fears of foreign activity (military exercises and exploitation of recently discovered oil fields off the coast of Rio de Janeiro). Public opinion, riding a wave of patriotic fervor, responded favorably. Other Latin American countries endorsed the measure, which was not without precedent, as Argentina and Uruguay had made similar declarations. Contemporary international law defined no maximum width for the territorial sea, but in the early 1970s most states, including traditional maritime powers, recognized no jurisdiction beyond 12 nmi from the shore. Therefore, the Ministry of Foreign Affairs received letters of protest from the United States, the Soviet Union and nine other industrialized states. The Brazilian fleet, with a mere 57 ships of significant tonnage, lacked the effective capability to patrol the full extent of the claims. 
When Brazil signed the UNCLOS, it gave in to great power pressures, in the opinion of diplomat Luiz Augusto de Araújo Castro. Once the treaty was harmonized with the country's law in 1993, the Brazilian government retracted the limits of its territorial sea, from 200 to 12 nmi, but secured a 200-mile EEZ. EEZ The University of British Columbia's Sea Around Us database quantifies a Brazilian EEZ spanning 2,400,918 km2 projected from the continental shore, 468,599 km2 surrounding the Trindade and Martim Vaz Archipelago, 363,373 km2 surrounding the Fernando de Noronha Archipelago and 413,641 km2 surrounding the Saint Peter and Saint Paul Archipelago. Official Brazilian numbers are 3,539,919 km2 of total EEZ area, the world's 11th largest, with a water volume of 10 billion cubic meters. Nonetheless, this is a relatively small area compared to the length of the coast, as Brazil has few islands at major distances from the coast. The archipelagos of Trindade and Martim Vaz and Saint Peter and Saint Paul have minimal land areas, but project a quarter of the EEZ. Article 121 of UNCLOS confers an EEZ and continental shelf on islands, but denies such privileges to "rocks which cannot sustain human habitation or economic life of their own". Among Brazil's oceanic islands, only Fernando de Noronha, Trindade and Belmonte (in Saint Peter and Saint Paul) are permanently inhabited. Fernando de Noronha has the largest population, 3,167 in the 2022 census. Trindade and Saint Peter and Saint Paul have research outposts established by the Navy. Rocas Atoll has no more than an automatic lighthouse. UNCLOS recognized it within Fernando de Noronha's jurisdiction, just as the Island of Martim Vaz was included in Trindade's. However, Colombian representatives in a continental shelf dispute with Nicaragua pointed out in 2019 that Brazil claims Rocas as an island and noted that the official Brazilian map has an EEZ projecting from the atoll. Occupation of St. Peter and St. Paul Extending the EEZ is an openly declared objective of the Navy's presence in both Trindade and Saint Peter and Saint Paul. Brazilian sovereignty over Trindade was once contested by the United Kingdom in 1895–1896, and a permanent presence has been maintained since 1957, with a population of 36 military personnel in 2023. On the other hand, Saint Peter and Saint Paul was a neglected territory with no records of human inhabitation. Only after UNCLOS came into force did the Navy's command take serious measures to occupy the area. The Archipelago Program (), organized in 1996, installed a scientific station in Belmonte Islet and changed the site's toponymy from "Saint Peter and Saint Paul Rocks" to "Saint Peter and Saint Paul Archipelago". The station has room for four researchers and/or seamen over 15-day periods. Conditions for habitation are poor: researchers are only allowed after undergoing survival training, and a ship is kept on standby to aid the station, which is about 1,000 km from the coast. The islets and rocks have a maximum width of 420 meters, lack soil and drinking water and are exposed to seismic events and severe weather. In the official Brazilian understanding, a permanent presence is by itself enough to distinguish an island according to Article 121, regardless of the population's biweekly rotation and difficult survival. Starting in 1995, the Navy's Directorate of Hydrography and Navigation published nautical charts with a dotted red line in the 200 nmi radius around the rocks, indicating its potential EEZ and continental shelf. 
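As a rough cross-check of the figures quoted above — the Navy's breakdown in the Delimitation passage and the Sea Around Us EEZ components — the following short Python sketch re-derives the stated totals and shares; it is illustrative only and uses no numbers beyond those cited in the text.

```python
# Figures in km2, exactly as cited in the text.
strip_200nmi = 3_575_195.81      # area projected within the 200 nmi strip (Navy figures)
extended_shelf = 2_094_656.59    # extended continental shelf proposal
land_territory = 8_500_000       # "8.5 million km2" of national territory

total_claim = strip_200nmi + extended_shelf
print(f"Total claim: {total_claim:,.2f} km2")                           # ~5,669,852.40, matching the cited total
print(f"Share of land territory: {total_claim / land_territory:.0%}")   # ~67%

# Sea Around Us EEZ components (km2)
eez = {
    "continental shore": 2_400_918,
    "Trindade and Martim Vaz": 468_599,
    "Fernando de Noronha": 363_373,
    "Saint Peter and Saint Paul": 413_641,
}
remote = eez["Trindade and Martim Vaz"] + eez["Saint Peter and Saint Paul"]
print(f"Remote archipelagos' share of the EEZ: {remote / sum(eez.values()):.0%}")  # ~24%, the "quarter" cited above
```

Note that the Sea Around Us components sum to about 3.65 million km2, slightly above the official figure of 3,539,919 km2; the two sources simply measure the zone differently.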
The Navy presented the Interministerial Commission on Marine Resources with its case in 1999, citing the precedents of Rockall, Okinotorishima, some Hawaiian islands, Clipperton, Jan Mayen and Aves. The Ministry of Foreign Relations was favorable and noted: "although the UNCLOS is clear about the rocks which cannot sustain human habitation, it cannot be denied that there is permanent occupation in the said archipelago, though its ‘inhabitants’ depend on the continent for its sustainability". Their greatest concern was having the claim challenged by other parties to the UNCLOS. After securing approval from the president and the National Defense Council, on August 27, 2004, Brazil submitted the coordinates of the external limits of its EEZ to the UN's Bulletin of the Law of the Sea. The area around Saint Peter and Saint Paul was formally claimed for the first time, yielding rights to an area the size of Bahia, whose limits are closer to Africa than South America. Article 121 has its controversies, among them the South China Sea Arbitration, whose conclusions may contradict Brazil's interpretation of the legal status of Saint Peter and Saint Paul. The Permanent Court of Arbitration ruled that "the mere presence of a small number of persons on a feature" that "is only capable of sustaining habitation through the continued delivery of supplies from outside" does not equate to island status under Article 121. No state has objected to Brazil's claim over the archipelago. Oceanography Geomorphology The seabed and subsoil beneath the AJB mostly consist of a portion of the divergent continental margin formed out of the split between the South American and African plates. Its continental shelf (geomorphological, not legal), slope and rise are well defined but interrupted by other features fashioned by tectonic and sedimentary forces. The far northern coast is part of the North Atlantic's divergent margin, and another sector to the east is a part of a transform margin. 20.5% of the area beneath the EEZ is at depths shallower than 200 m, which may be considered part of the continental shelf. Deep-sea features cover the rest: the continental slope (13.3%), terraces (1.7%), submarine canyons (1.4%), the continental rise (40%), abyssal plains (29.6%), submarine fans (4.9%), seamounts (2.2%), guyots (1.4%), ridges (1.2%) and spreading ridges (1.4%). These percentages add up to more than 100%, as some features occupy the same spaces. Five islands or archipelagos rise out of the ocean floor: Saint Peter and Saint Paul in the Mid-Atlantic Ridge, Rocas Atoll and Fernando de Noronha in the Fernando de Noronha Ridge and Trindade and Martin Vaz in the Vitória-Trindade Ridge. Ocean currents The two main surface currents off the Brazilian coast are the Brazil and North Brazil (or Guiana) currents, both of which have warm, nutrient-poor waters with a deep thermocline (the layer in which temperature rapidly lowers with depth). They appear around the 11th parallel south, between Recife and Maceió, when the South Equatorial Current, pushed west by the trade winds, meets the South American continent and splits in two. Most of its water proceeds to the northwest towards the Caribbean, forming the North Brazil Current, and the remainder flows southwest, forming the Brazil Current. Both run parallel to the coast. The North Brazil Current achieves speeds of 1–2 m/s, pushing the Amazonas River's plume, which provides a fifth of the global fresh water discharge into the ocean, to the northwest. 
Amazonian waters may be found up to 320 km from the coast. The Brazil Current is the western arm of the South Atlantic Gyre, a counterclockwise cycle of currents between South America and Africa. It flows as far as the latitudes of 35° to 40° S, where it meets the colder waters of the Falkland Current and both turn to the east, forming the South Atlantic Current. The Gyre returns to South America through the South Equatorial Current. The Brazil Current's surface water mass is known as the Tropical Water, with an 18 °C to 28 °C temperature range and an average salinity of 35.1 to 36.2 practical salinity units. These values are comparable to those of its North Atlantic counterpart, the Gulf Stream. It is, however, slower, with a speed below 0.6 m/s. Its depth in the water column reaches 200 m on the edge of the continental shelf. In the Southern and Southeastern regions, the Brazil Current draws closer to and further from the coast throughout the year, defining a strong seasonal pattern in water temperature and salinity. In winter, the Falkland Current may reach as far north as the 24th parallel south. Its water mass, known as the Subantarctic Water, mixes with the Tropical Water to form the South Atlantic Central Water (SACW), which, as a colder and denser mass, forms a layer beneath the Tropical Water in the Brazil Current. Certain points on the coast (Cape Frio and Cape Santa Marta) are subject to upwellings of the SACW when northeasterly winds push surface waters. In the Santos Basin the Tropical Water has concentrations of 4.19 ml/L of dissolved oxygen, 0.02 μmol/L of phosphate, 1.10 μmol/L of nitrate and 2.04 μmol/L of silicate. In contrast, the SACW has 5.13 ml/L of oxygen, 0.51 μmol/L of phosphate, 6.14 μmol/L of nitrate and 5.12 μmol/L of silicate. At the 20th parallel south, the SACW extends to a depth of 660 m in the first semester. Further depths contain the Antarctic Intermediate Water (700–1,200 m), North Atlantic Deep Water (1,200–2,000 m) and Antarctic Bottom Water. Climate Brazilian jurisdictional waters have three climate patterns: northern, from Cape Orange, Amapá, to Cape Branco, Paraíba; central, from Cape Branco to Cape São Tomé, Rio de Janeiro; and southern, from Cape São Tomé to Chuí Stream. The northern climate pattern is dominated by the Intertropical Convergence Zone, a belt of clouds shaped in an east-west direction by trade winds. It moves south of the Equator from January to April, although it may rapidly change its position, and brings convective rainfall, often in the form of storms. In some years it stays further north, causing drought in Northeastern Brazil and lower temperatures in the southern tropical Atlantic. The inverse happens when it stays further south. The central region is more seasonal. Easterly and northeasterly trade winds carry moisture from the coast and become colder and stronger in winter, from June to August, due to the South Atlantic High. In this period, precipitation increases between Cape Branco and Salvador and lowers to the south. Easterly waves and, from May to October, cold fronts cause rainfall, and in the latter case, rough seas and lower temperatures. The southern region is ruled by two phenomena, the South Atlantic Convergence Zone (SACZ) and extratropical cyclones. The SACZ is a northwest-southeast axis of clouds most common in summer, south of Bahia's coast, causing multiple days of bad weather. Extratropical cyclones may occur on a weekly basis in winter and are followed by cold fronts. They come from the south of the continent in a northeasterly direction, 
causing low temperatures, rainfall and rough seas. Wind speeds may exceed 60 km/h in trajectories parallel to the coast, occasionally sinking small fishing boats. Cold air masses linger in their aftermath, some of which are dry, having crossed the Andes, while others come from the Weddell Sea and are humid, but not as cold. Brazil's oceanic islands have maritime-influenced tropical climates. Trindade has an average annual temperature of 25 °C and a dry season from January to March. Fernando de Noronha has an average annual temperature of 27 °C, with a dry season from August to February. Oceanographic and meteorological data are traditionally collected by ships, coastal stations and drifting or stationary buoys, which is labor-intensive to repeatedly monitor over large areas and time periods, but can be eased by satellites. Public investment into these activities is organized since 1995 under the Pilot Program for the Global Ocean Observing System (GOOS/Brasil), which includes the National Buoys Program and the Pilot Research Moored Array in the Tropical Atlantic (Pirata), a joint American-French-Brazilian program which contributes to climate monitoring in Northern and Northeastern Brazil. Marine life Brazil's marine ecosystem is vast and hydrologically and topographically complex, spanning a wide range of habitats and high levels of endemism. 31.8% of the coast's length can be classified into bays and estuaries, 27.6% as beaches and rocky shores, 18% as lagoons and coastal marshes, 13.6% as mangrove forests and 9% as dunes and cliffs. About 3,000 km or a third of the coast has reefs in the continental shelf: coral reefs from 0° 52' N to 19° S and rocky reefs from 20° to 28° S. At greater depths, sedimented slopes, submarine canyons, reef-forming and solitary corals, methane seeps and pockmarks, seamounts and guyots have distinct benthic communities. A 2011 literature review counted 9,103 marine species in the Brazilian coast, of which 8,878 were animals: 1,966 crustaceans, 1,833 molluscs, 1,294 vertebrates (fish), 987 annelids, 535 cnidarians, 400 sponges, 308 miscellaneous invertebrates, 254 echinoderms, 178 miscellaneous vertebrates, 133 bryozoans, 70 tunicates and 45 flatworms. In other kingdoms, two species were found among bacteria, 488 rhodophytes, 201 chlorophytes and 14 angiosperms among plants and 49 dinoflagellates and 15 foraminiferans among protists. Real numbers may be as high as 13 thousand. 66 invasive species have been accounted for. Although the number of species is high, each has a relatively small biomass. The two prevailing currents, Brazil and North Brazil, are poor in nutrient salts in the euphotic layer, where photosynthesis and biomass production take place at the lowest trophic level, and have a deep thermocline, which restrains bottom-to-surface nutrient flow. Greater levels of biomass may be found in the Falkland Current, which has a higher concentration of nutrient salts; upwelling zones such as Cape Frio; closer to shore, where shallow waters, river discharge, wind and tides allow turbulence to enrich seawater; and stretches of the Northern coast under the influence of the Amazon River's nutrient-rich fresh water. In nutrient poor-waters, picoplankton are the chief primary producers. Upwelling zones have larger species of phytoplankton and greater populations of pelagic fish. 
Pelagic communities transfer organic matter to benthic communities, of which there are two geographical groups: the northern, southeastern and southern coasts have flat bottoms of sand, mud and clay, whereas the eastern and northeastern coasts have irregular, rocky bottoms formed by calcareous algae. Seabird diversity is relatively low (around 130 species). Rocas Atoll and other areas are breeding grounds for Northern Hemisphere birds, from September to May, and southern birds from May to August. Human activity The "Blue Amazon's" human dimension is not as complex as the "Green Amazon's", as seafarers and oil rig workers are its only inhabitants. On the other hand, 26.6% of the Brazilian population, or 50 million inhabitants, lived in the coastal zone's 450 thousand km2 as of the 2010 census, a demographic density up to five times the national average. This population is concentrated in a few urban centers, leaving other areas of the coast with a low density of occupation. 13 state capitals are coastal. Many coastal resources have concurrent and competing uses and are thus a stage for social and environmental conflicts stemming from contradictions between environmental conservation, economic development and public, private, local and global interests. Interests as diverse as those of the ministries of Justice and Public Security, Defense, Foreign Affairs, Economy, Infrastructure, Agriculture and Livestock, Education, Citizenship, Technology and Innovation, Environment, Tourism and Regional Development are represented in the Interministerial Commission for Marine Resources (, CIRM). This body coordinates the state's strategic programs in the sector, such as LEPLAC, as outlined in the National Marine Resources Policy and the four-year Sectoral Plans for Marine Resources. The CIRM is coordinated by the Commander of the Navy, who is represented in the commission by an officer who also heads the Secretariat (SECIRM), a supporting organ which maintains contact with federal ministries, state governments, the scientific community and private entities. Research The Brazilian state invests in several research programs in the South Atlantic to shore up its continental shelf expansion proposals, ensure national presence in oceanic islands and understand the area's biodiversity and natural resources. National oceanography has succeeded in surveys of the continental shelf's geology and the EEZ's living resources, engineering projects and participation in international research programs, but the number of researchers and the availability of equipment and vessels are not enough for the breadth of the field. Oceanic science, technology and innovation in the country are mostly financed by public entities, with the notable exceptions of companies such as Chevron, Equinor, Shell and Vale. 65 higher education institutions offered 1,840 annual positions in courses in Marine Sciences in 2022. Both the Navy and civilian institutions operate oceanographic research vessels. A national institute of the sea, comparable in role to the Instituto Nacional de Pesquisas Espaciais, Empresa Brasileira de Pesquisa Agropecuária and Fundação Oswaldo Cruz in other areas, did not exist until the foundation of the National Oceanic Research Institute (, Inpo). Initially staffed with only 17 officials and a yearly budget of R$ 10 million, it is a small organization conceived to aggregate research data and direct strategic projects. 
According to the Intergovernmental Oceanographic Commission, out of 370 thousand papers on marine science published globally in 2010–2014, Brazilian authors were on 13 thousand. In four categories, "functions and processes of marine ecosystems", "ocean health", "blue growth" and "human health and well-being", the percentage of papers in Brazil's total scientific output is higher than the international average, and the country is deemed specialized in these areas. The category "marine data and oceanic observation" is at the global average and "oceans and climate", "oceanic technology" and "oceanic crust and marine geological risks" were below average. Economy Brazilian jurisdictional waters directly participate in the national Gross Domestic Product (GDP) in six sectors: services (particularly tourism), energy, manufacturing, defense, fishing and transport. Furthermore, the sea hosts critical communications infrastructure, submarine cables through which the Brazilian Internet receives 98% of its data. Indirect economic contributions are much greater and may be difficult to measure: for instance, coastal sites boost value in the real estate sector. Specialists deem the maritime economy to still have idle potential, particularly in the "blue GDP" or blue economy, a socially- and environmentally-minded economic frontier. A "maritime sector" does not exist in chief economic indexes such as the GDP and many activities are counted as part of other sectors, such as agriculture. There is no official and systematic methodology for its calculation. The first scientific study to account for the sector produced estimates for 2015: the maritime economy produced R$1.1 trillion or 18.93% of the national GDP and employed 19,829,439 workers. "Maritime-adjacent" sectors were responsible for 16.26% of the GDP and 17,745,279 jobs, mostly in the tertiary sector. Activities directly conducted offshore, or whose products offshore, represented 2.67% of the GDP and 2,084,160 jobs. In this estimate, the tourism-centered service sector is the chief activity in Brazil's maritime economy, rather than traditionally maritime sectors such as oil and gas, fishing and aquaculture. When properly accounted for, the sector is comparable in size to agribusiness. Comparisons with estimates in other countries may be misleading, as their methodologies are different, but the value estimated for the economy directly connected to the sea is consistent with a 2013 United States estimate of 2.2% of national GDP. In 2020 the CIRM tasked a research team with the definition of a measuring methodology so that in the future, official numbers can be published through the Institute of Geography and Statistics. Trade Coastal cities have in the sea a natural trade route between themselves and with other continents, one that is cost-effective for large volumes of cargo and long distances. Brazilian ports moved 1.151 billion tons of cargo in 2020 and employed 43,205 registered workers in 2021. The busiest ports were Santos, Paranaguá and Itaguaí, but Northern and Northeastern ports are on the rise as export terminals for Center-Western agricultural production. Maritime transport is the primary mode of Brazilian international trade, shipping 98.6% of exports by weight and 88.9% by value in 2021 and 95% of the weight and 74% of the value of imports. In contrast, waterways have a modest participation in internal trade, contrary to what the length of the coastline may suggest. 
Internal shipping answered only 15% of domestic transport demand in 2015, with 10% carried by coastal shipping and 5% on inland waterways. The sector declined after 1950, when 32.4% of domestic transport demand was provided by ships, but has grown over the 21st century. It mostly services the oil and gas sector; from 2017 to 2019, the two largest points of departure were the Campos and Santos sedimentary basins, while the two largest destinations were the Petrobras terminals in São Sebastião and Angra dos Reis. Transport between coastal hubs was historically provided almost exclusively by coastal shipping, but since the 1950s, developmental policies have prioritized land transport and the automobile industry. At present, highways are the primary mode of transport. Coastal shipping has idle potential, and the sector's representatives emphasize its predictability, multimodality and lower risks of damage, theft and environmental accidents. However, companies interested in coastal shipping face logistical difficulties in modal integration, insufficient line frequency and high costs, which are a result of the fleet's high occupancy rates. The Brazilian Merchant Marine employed 26,631 mariners and 887 ships, with a combined 5.522 million tons of deadweight tonnage, in 2023. It is well represented in internal shipping, in which it provided for 92% of container transport, 59.1% of general cargo, 24.3% of dry bulk cargo and 4.1% of wet bulk cargo in 2021. In international trade, the Brazilian flag can hardly be seen, with a few exceptions such as shipping to other Mercosul countries or oil exports. In 2008, Brazilian companies were responsible for around 10% of the international freight market, mostly by using chartered foreign vessels. In 2005, only 4% of freight fees from external trade were paid to Brazilian companies. Shipbuilding Brazil's naval industry is historically concentrated in the state of Rio de Janeiro. 26 shipyards were in operation in 2010, of which 15 were in Rio. 152 projects were under construction in 2016, mostly barges and towboats (82 units), followed by oil tankers, offshore support vessels, tugboats, oil platforms, submarines and gas tankers. The industry is labor-intensive and each direct job may indirectly create another five. Shipyards employed 21 thousand workers in 2019, a major drop from the 82 thousand in 2014, but the sector was recovering. At the peak of shipbuilding, ships flying the national flag provided 17.6% of international freight in 1974. Production began to decline in the 1980s and growth was only regained in the 21st century, driven by the oil industry's demand. National-flagged vessels could not compete after deregulation and the lifting of protectionist policies, and local shipbuilding costs remained higher than in other countries with greater labor and energy prices. Tourism Nautical tourism contributed R$12.6 billion to Brazil's 2023 GDP, counting only travel and sports in speedboats, sailboats, yachts, jet skis and similar craft. The boat sector created 150 thousand direct and indirect jobs. The cruise ship sector added a further R$5 billion and 80 thousand jobs in the 2023/2024 season. More broadly, coastal tourism also includes beaches, bathing resorts and diving, and their infrastructure: hotels, food, recreation, sporting equipment and marinas. Brazilian attractions in this sector are the vast coastline and internal waters, its climate and its scenery, with tropical and subtropical white sand beaches and, further south, coastal mountains. 
A deficit of cruise ship vacancies in the 2023/2024 season suggests an untapped potential, but insufficient docks are a major shortcoming. Mining Oil and natural gas have been among Brazil's chief interests in the sea since the 1970s, and even more so after discoveries in the pre-salt layer of coastal sedimentary basins in the 2000s. Most national production of these resources takes place beneath jurisdictional waters; production was largest in the states of São Paulo, Rio de Janeiro and Espírito Santo. Brazil was the world's 8th largest crude oil and lease condensate producer in 2023. Supply exceeds domestic demand, but the country still imports crude oil and its derivatives for lack of refining capacity. The South Atlantic's seabed and subsoil are also a new frontier for underwater mining. Coal, gas hydrates, aggregates, heavy mineral sands, phosphorites, evaporites, sulphur, cobalt-rich ferromanganese crusts, polymetallic sulfides and polymetallic nodules have been prospected in the Brazilian continental shelf. For the moment, this sector is of little relevance. As of 2019, only 11 mining titles for underwater mining were registered at the National Mining Agency, all of them for limestone and shelly limestone extraction. Renewable energy The Brazilian coast has an untapped potential in renewable energy generation from its tides, waves, winds and osmotic and thermal gradients. Total tidal and wave generation potential was estimated at 114 GW in 2022. Viable sites for tidal power generation are in the Northern region and Maranhão, while wave power is possible in the remaining coastal states. Vertical temperature gradients may be studied for ocean thermal energy conversion in oceanic islands and the middle continental shelves of Santa Catarina and Rio de Janeiro. Usable osmotic gradients may be found in major estuarine systems such as the Amazon Delta and Patos Lagoon. Offshore wind power potential was estimated in 2024 at 480 GW over fixed foundations (at maximum depths of 70 m) and 748 GW with floating wind turbines. Major urban centers such as Fortaleza, Rio de Janeiro and Porto Alegre are close to the main potential wind power areas, with the greatest potential in the South. However, initial costs would be high and a significant scale of production would only be achieved with significant investments in the transmission network, port infrastructure and manufacturing capacity. For comparison, Brazil's electrical grid had 200 GW of centralized power in 2024, primarily sourced from hydroelectric power. Fishing Brazil historically provides little more than 0.5% of global marine fishing output. The United Nations Food and Agriculture Organization recorded a national fishing production of 758 thousand tons in 2022, out of a global total of 91.029 million. Over 30% of captures take place in rivers and lakes. Brazil also produced 738 thousand tons from aquaculture out of a global total of 94.413 million. Export-focused industrial fishing developed in the country from the 1960s onward, driven by a mistaken belief that fish stocks would be endless. The vast oceanic area under national jurisdiction does not by itself make the country a fishing power, since, as already mentioned, oceanographic conditions do not produce a large biomass of fish. Fish stocks were comprehensively surveyed by the Program for Evaluation of the Sustainable Potential of Living Resources in the Exclusive Economic Zone of Brazil (, ReviZEE), a decentralized, multidisciplinary effort undertaken in 1996–2005. 
As of its conclusion, 69% of marine fish output consisted of eight families: croakers and weakfishes (Sciaenidae), sardines (Clupeidae), tunas and related fish (Scombridae), shrimp (Penaeidae), catfish (Ariidae), mullets (Mugilidae), jacks and other fish of the Carangidae family and red fishes of the Lutjanidae family. Most species targeted by coastal and continental fisheries were over-exploited and there was no prospect of increased production. Oceanic fisheries had greater potential, but even then, stocks were nearing the limits of sustainable exploitation. Even in deeper waters inaccessible to traditional fleets, stocks have limited potential. The greatest long-term potential for growth lies in aquaculture, including mariculture in the coast's many bays and bights. Fishing and aquaculture provide little more than 0.5% of national GDP, though they are relevant at the local level, creating 3.5 million direct and indirect jobs, mostly among artisanal fishermen and their families. Out of an estimated fleet of 21,732 boats in 2019, most were less than 12 meters in length and only a third were motorized. Industrial fishing is concentrated in the South and Southeast. Fish are not central to the Brazilian diet: annual per capita consumption stood at 9.5 kg in 2020, below the global average of 20 kg. Even then, production does not meet demand: in 2022, Brazil was among the world's 11 largest fish importers. Biotechnology Besides fishing, another category of living resources is biotechnology, which takes advantage of molecules and genes from marine microorganisms. In the South Atlantic, research in this area focuses on hydrolytic enzymes and bioremediation. The Brazilian state has promoted a prospecting program, Biomar, since 2005. This activity has next to no environmental impacts, but the industrialization of marine biotechnological products was still distant as of 2020. Environmental policy Brazilian marine ecosystems are under pressure from industrial fishing, navigation, port and land pollution, coastal development, mining, oil and gas extraction, invasive species and climate change. Industrial, mining, agriculture, pharmaceutical, sanitary and other residues drain into the sea from the continent. A particularly serious case was the Mariana dam disaster, which led to mining residues with a high concentration of iron, aluminum, manganese, arsenic, mercury and other metals crossing over 600 km through the Doce River until meeting the sea. Oil spills are the most visible type of pollution, of which the largest case took place on the Northeastern coast in 2019. Global ocean acidification may impede biogenic calcification in Espírito Santo and the Abrolhos Bank and dissolve existing shells and skeletons, releasing carbon dioxide. In the South Atlantic, increasing seawater surface temperatures will tend to weaken the Falkland Current, displacing the Brazil-Falkland Confluence to the south. Marine environmental management falls upon a mesh of policies, norms, programs and agencies. Enforcement is assigned to the Brazilian Institute of Environment and Renewable Natural Resources (IBAMA) and the Navy. 27.6% of the territorial sea and 26.4% of the EEZ, or 26.5% of these areas in total, were protected under 190 conservation units in 2021. Coastal areas had a further 549 units. Until 2020, when environmental protection areas were decreed in the archipelagos of Saint Peter and Saint Paul and Trindade and Martim Vaz, conservation unit coverage in the EEZ did not exceed 1.5%. 
This measure allowed Brazil to announce its full implementation of Aichi Target 11, the protection of at least 10% of coastal and marine areas, which it had committed itself to fulfill in 2010, as a party to the Convention on Biological Diversity. However, full (no-take) protection coverage stood at only 2.5%. The Ministry of the Environment had as its target to extend this number to 10% in the following 15 years. Unprotected areas that could be given priority include reefs at the edge of the Amazonic continental shelf, shallow waters of the North Brazilian Chain, the southern part of the Abrolhos Bank and, in the South, deep-water coral reefs, rhodolith beds and mobile bottom benthic communities. The most serious overlap between risk factors and biodiversity is at areas in the Southeast and southern Bahia, to a total of 83 thousand km2. This conclusion was published in the journal Diversity and Distributions in 2021, based on the distribution of 143 animal species with critically endangered, endangered or vulnerable conservation status. Its authors contend that the criteria for existing areas have been more opportunistic and political than biologica. The archipelagos of Saint Peter and Saint Paul and Trindade and Martim Vaz are remote areas and their conservation harms few economic interests, unlike the coast. Security The "Blue Amazon's" limits are imaginary lines over the sea and only physically exist insofar as they are patrolled by Brazilian ships. Jurisdictional waters are a border region and as such, must be monitored and if needed, denied access to external actors. This burden falls on the Armed Forces and particularly the Navy, in its "dual" nature, simultaneously tasked with police and war operations. There is no independent coast guard. In formal recognition of this role, the Navy Command is rewarded with some of the royalties of offshore oil extraction. The Brazilian Air Force aids the Navy's activities with its patrol aircraft. Brazil has further responsibilities of maritime search and rescue from the 7th parallel north to the 35th parallel south, as far east as the 10th meridian west. The Navy's "coast guard" dimension is embodied in its Naval Districts, to which a considerable number of patrol vessels are assigned. The Navy's commander is the Brazilian Maritime Authority and as such is responsible for implementing and overseeing laws and regulations on the sea and interior waters, if needed by seizing foreign vessels which conduct unauthorized activities in any of the BJW's maritime zones and handing them to appropriate authorities. In its warmaking dimension, the Navy is tasked with deterring foreign powers and, should war break out, to deny them use of the sea, control maritime areas and project power over land. Its priorities are the coastal strip between Santos and Vitória, the Amazon Delta, archipelagos, oceanic islands, oil rigs and naval and port installations. Naval strategic thought defines the Blue Amazon as the "vital area", the Atlantic from the 16th parallel north to Antarctica as the "primary area" and the Caribbean Sea and East Pacific Ocean as the "secondary area". In defense of its interests in the South Atlantic, Brazil has a two-pronged approach of military re-equipment and the development of closer ties with other states in the region. References and notes Notes References Sources Marine ecoregions Biota of the Atlantic Ocean Ecoregions of Brazil Environment of Brazil Borders of Brazil Economy of Brazil Brazil
Brazilian jurisdictional waters
Biology
10,090
52,505,826
https://en.wikipedia.org/wiki/Symphiles
Symphiles are insects or other organisms which live as welcome guests in the nest of a social insect (such as an ant, in myrmecophily, or a termite, in termitophily) by which they are fed and guarded. The relationship between the symphile and host may be symbiotic, inquiline or parasitic. Symphile taxa This is a selection of taxa exhibiting symphilia, not a complete list. Fibularhizoctonia Fibularhizoctonia, sometimes referred to as cuckoo fungus due to their adaptation to mimic termite eggs, employ chemical and morphological mimicry to benefit from the defense termites provide their brood. If termite workers are present to care for a brood which contains cuckoo fungus, the sclerotia, or "termite balls", are unlikely to germinate and their presence will increase the survival rate of the termite eggs. When worker termites were experimentally removed from brood that contained sclerotia, the fungus germinated by exploiting the termite eggs. This means the termitophilic relationship between termites and Fibularhizoctonia can be parasitic or mutualistic. Phengaris arion The large blue butterfly, Phengaris arion (formerly Maculinea arion), exhibits a unique parasitic relationship with a single species of red ant, Myrmica sabuleti. Psithyrus Cuckoo bumblebees, members of the subgenus Psithyrus in the genus Bombus, are obligate brood parasites; they must use colonies of true bumblebees to rear their young. A Psithyrus female will kill or subdue the host colony's queen and then use pheromones and/or physical attacks to force the host colony to feed her and raise her brood. Staphylinidae Many species of Staphylinidae (commonly known as “Rove Beetles”) have developed complex interspecies relationships with ants. Ant associations range from near free-living species which prey only on ants, to obligate inquilines of ants, which exhibit extreme morphological and chemical adaptations to the harsh environments of ant nests. Some species are fully integrated into the host colony, and are cleaned and fed by ants. Many of these, including species in tribe Clavigerini, are myrmecophagous, placating their hosts with glandular secretions while eating the brood. Staphylinidae is currently considered to be the largest family of beetles, with over 58,000 species described. As such, many myrmecophilous species are unknown. The majority of studied myrmecophilous Rove Beetles belong to the subfamily Aleocharinae, including the commonly studied genera Pella, Dinarda, Tetradonia, Ecitomorpha, Ecitophya, Atemeles, and Limechusa, and to the subfamily Pselaphinae, which includes Claviger and Adranes. There are also representatives of Scydmaenidae, which includes 117 myrmecophilous species in 20 genera. The Aleocharinae possess defensive glands on their abdomens, which are used in myrmecophilous species to prevent attacks by their host ant and in more extreme cases to integrate completely into the colony. Many Pselaphinae species have trichomes, tufts of hairs which hold placating pheromones. Pselaphines have evolved trichomes independently at least four times, most notably in all members of Clavigerini, but also in the genera Attapsenius and Songius. Ecology and behavior Due to their large number and diversity, myrmecophilous Rove Beetles exhibit an array of behaviors. Myrmecophilous interactions can be generalized into categories, and Staphylinids can be found in three of them: 
The synecthrans, or “persecuted guests,” the synoeketes, or “tolerated guests,” and the symphiles, or “true guests.” Synecthrans Synecthran insects live on the periphery of the host colony and are not accepted into the colony. Synoeketes Synoeketetic insects live in close contact with their host ants but are not integrated into the colony. These species may be further categorized as neutral, mimetic, loricate, and symphiloid synoeketes. Symphiles Symphilic insects have been fully integrated into the host’s society. Symphilic species have undergone complex morphological adaptations, many gaining the appearance of their host's species. Most have developed trichomes, which secrete appeasement pheromones. The most extreme adaptations, found in members of tribe Clavigerini, include the reduction of mouthparts for trophallaxis and the fusing of many body and antennal segments. While most symphiles use antennal contact to stimulate food giving from their host, at least one member of Clavigerini, Claviger testaceus, secretes a chemical to induce regurgitation from its host ant Lasius flavus. Symphiles typically take on many roles in the colony, raising young, feeding and grooming adults, and helping transport food and larvae. Many Staphylinids are capable of following ant pheromone trails, although they are not limited to following trails laid by their host ant. This allows symphiles of army ants to migrate with the colony. Most species are trophallactic, being fed by other members of the colony. Almost all species have also been observed feeding on the brood, making them obligate parasites. Types of mimicry Auditory mimicry Once the larvae of the large blue butterfly (Phengaris arion) is brought into a Myrmica sabuleti colony, it will mimic the sounds a queen Myrmica larva would make, increasing the chances that the host ant colony will prefer to care for it over their own larvae. The caterpillar feeds on the ant grubs and is a predacious symphile. Chemical mimicry Chemical mimicry refers to the production of one species’ chemical signals by another species. Many myrmecophilous Staphylinids have evolved chemical mimicry to deter or placate ants. For Staphylinids accepted into the host colony, chemical mimicry is used for camouflage. The majority of the chemical signals used are cuticular hydrocarbons, which are produced in the cuticle of the host ant at certain concentrations and are palpated to determine the identity of an ant. Species in close contact with their host ants are able to pick up the host’s hydrocarbons and imitate the ant’s hydrocarbon pattern, thus appearing in scent at least to be the same species as the host ant. As hydrocarbon patterns are specific to an individual colony, the rove beetles are generally restricted to one nest. The production of a new hydrocarbon pattern takes time, during which the beetle is vulnerable to detection and attack. Some species, such as Zyras comes, produce volatile pheromones as well as cuticular hydrocarbons, which may provide it more protection than contact based pheromones while traveling with its host in foraging trails. Physical adaptation The army ants that rove beetles prey on are blind, so it is important that the rove beetles feel similar to their host species. Physical adaptation to resemble ants has evolved in rove beetles on at least twelve separate occasions. References Ecology Mutualism (biology) Myrmecology Staphylinidae
Symphiles
Biology
1,579
26,186,739
https://en.wikipedia.org/wiki/Hytort%20process
The Hytort process is an above-ground shale oil extraction process developed by the Institute of Gas Technology. It is classified as a reactive fluid process, which produces shale oil by hydrogenation. The Hytort process has advantages when processing oil shales containing less hydrogen, such as the eastern United States Devonian oil shales. In this process, oil shale is processed at controlled heating rates in a high-pressure hydrogen environment, which allows a carbon conversion rate of around 80%. Hydrogen reacts with coke precursors (a chemical structure in the oil shale that is prone to form char during retorting but has not yet done so). In the case of Eastern US Devonian shales, the reaction roughly doubles the yield of oil, depending on the characteristics of the oil shale and process. In 1980, the HYCRUDE Corporation was established to commercialize the Hytort technology. The feasibility study was conducted by HYCRUDE Corporation, Phillips Petroleum Company, Bechtel Group and the Institute of Gas Technology. See also Galoter process Alberta Taciuk Process Petrosix process Kiviter process TOSCO II process Fushun process Paraho process Lurgi-Ruhrgas process Chevron STB process LLNL HRS process KENTORT II References Oil shale technology
Hytort process
Chemistry
265
22,813,490
https://en.wikipedia.org/wiki/Sindh%20Cities%20Improvement%20Program
The Sindh Cities Improvement Program (SCIP) is a program initiated by the Government of Sindh with the assistance of the Asian Development Bank (ADB), with the aim of improving municipal services and development of major urban centers of Sindh, Pakistan. The first phase of the program was implemented in selected towns in upper Sindh, known as the Sukkur cluster, and focused on improving water, sanitation and solid waste management services, thereby improving public health and quality of life. The SCIP Program Support Unit (PSU) is located in the Clifton district of Karachi. References External links Official website Government of Sindh Waste management in Pakistan Urban planning
Sindh Cities Improvement Program
Engineering
129
77,897,015
https://en.wikipedia.org/wiki/Mojtaba%20Amani
Mojtaba Amani (; born 21 March 1963) is an Iranian diplomat, who has served as Iran's ambassador to Lebanon since 2022. Prior to this, he headed Iran's interest section in Egypt from 2009 to 2014. In September 2024, during the Lebanon pager explosions, Amani was injured by an exploding pager. The New York Times reported that he lost one eye and sustained injury to the other, though the Iranian embassy in Beirut denied these claims. His first public appearance since the pager explosion took place in November 2024. He was seen with injuries to the hand, face and eyes. Education Amani holds a master's degree in International Relations from the University of Tehran. Professional experience Amani began working at the Ministry of Foreign Affairs in 1988, with his first role as the Deputy Head of the Minister's Office. Other roles Expert in the First Department of Middle East and North Africa. Deputy Representative of the Islamic Republic of Iran in Cairo. Expert in the Office for Political and International Studies. Deputy Director of the Office for Political and International Studies. Head of the Iranian Representation in Cairo. Senior Expert on Egypt Studies in the Office for Political and International Studies. References 1963 births Living people Ambassadors of Iran to Lebanon 21st-century Iranian diplomats University of Tehran alumni Explosion survivors
Mojtaba Amani
Chemistry
267
1,846,387
https://en.wikipedia.org/wiki/Password%20Safe
Password Safe is a free and open-source password manager program originally written for Microsoft Windows but supporting a wide array of operating systems, with compatible clients available for Linux, FreeBSD, Android, iOS, BlackBerry and other operating systems. History The program was initiated by Bruce Schneier at Counterpane Systems. The program is maintained on GitHub by a group of volunteers. Design After filling in the master password, the user has access to all account data entered and saved previously. The data can be organized by categories, searched, and sorted based on references which are easy for the user to remember. There are various key combinations and mouse clicks to copy parts of the stored data (password, email, username etc.), or use the autofill feature (for filling forms). The program can be set to minimize automatically after a period of idle time and clear the clipboard. It is possible to compare and synchronize (merge) two different password databases. The program can be set up to generate automatic backups. Password Safe does not support database sharing, but the single-file database can be shared by any external sharing method (for example Syncthing, Dropbox etc.). The password database is not stored online. Features Note: All uncited information in this section is sourced from the official Help file included with the application. Password management Stored passwords can be sectioned into groups and subgroups in a tree structure. Changes to entries can be tracked, including a history of previous passwords, the creation time, modification time, last access time, and expiration time of each password stored. Text notes can be entered with the password details. Import and export The password list can be exported to various file formats including TXT, XML and previous versions of Password Safe. Password Safe also supports importing these files. Password Safe supports importing TXT and CSV files which were exported from KeePass version 1.x (V1). KeePass version 2.x (V2) allows databases to be exported as a KeePass V1 database, which in turn can be imported to Password Safe. Password Safe cannot directly import an XML file exported by KeePass V1 or V2, as the fields are too different. However, the Help file provides instructions for processing an exported XML file with one of multiple XSLT files (included with Password Safe) which will produce a Password Safe-compatible XML file that can then be imported. File encryption Password Safe can encrypt any file using a key derived from a passphrase provided by the user through the command-line interface. Password generator The software features a built-in password generator that generates random passwords. The user may also designate parameters for password generation (length, character set, etc.), creating a "Named Password Policy" by which different passwords can be created. Cryptography The original Password Safe was built on Bruce Schneier's Blowfish encryption algorithm. Rony Shapiro implemented Twofish encryption along with other improvements to the 3.xx series of Password Safe. The keys are derived using an equivalent of PBKDF2 with SHA-256 and a configurable number of iterations, currently set at 2048. In a 2012 paper analysing various database formats of password storage programs for security vulnerabilities, the researchers found that the format used by Password Safe (version 3 format) was the most resistant to various cryptographic attacks. 
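To make the key-stretching and password-generation ideas above concrete, here is a minimal Python sketch. It is not the actual Password Safe V3 implementation — the real format specifies its own salting, header and integrity-check details — and the policy parameters shown are hypothetical; it only illustrates iterating SHA-256 over a passphrase a configurable number of times and generating a random password under a simple named-policy-style parameter set.

```python
import hashlib
import secrets
import string

def stretch_key(passphrase: str, salt: bytes, iterations: int = 2048) -> bytes:
    # Iterated SHA-256 key stretching in the spirit of the scheme described above
    # (simplified; the real V3 format defines the exact construction).
    digest = hashlib.sha256(passphrase.encode("utf-8") + salt).digest()
    for _ in range(iterations):
        digest = hashlib.sha256(digest).digest()
    return digest

def generate_password(length: int = 16, digits: bool = True, symbols: bool = False) -> str:
    # Random password under a simple policy; parameter names are illustrative only.
    alphabet = string.ascii_letters
    if digits:
        alphabet += string.digits
    if symbols:
        alphabet += "!@#$%^&*"
    return "".join(secrets.choice(alphabet) for _ in range(length))

salt = secrets.token_bytes(32)
print(stretch_key("correct horse battery staple", salt).hex())
print(generate_password(20, symbols=True))
```

Raising the iteration count makes each guess of the master passphrase proportionally more expensive, which is the property the 2012 format analysis cited above was probing.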
Reception Reviewers have highlighted the program's simplicity as its best feature. See also List of password managers Password manager References External links Password Safe at FileHare.com Password Safe at Schneier.com pwSafe Password Safe clone for OS X and iOS Password Safe at Softonline.net Cryptographic software Personal information manager software for Windows Linux software Java platform software Free password managers Portable software Software that uses wxWidgets 2002 software Free software programmed in C++ Freeware Free and open-source Android software
Password Safe
Mathematics
811
48,680,583
https://en.wikipedia.org/wiki/Moto%20360%20%282nd%20generation%29
The Moto 360 (2nd generation), also known as the Moto 360 (2015), is an Android Wear-based smartwatch. It was announced on September 14, 2015 at the IFA. It was discontinued by Motorola in February 2017. Design and hardware The Moto 360 (2nd generation) has a circular design, similar to the Huawei Watch and LG Watch Urbane, with 42mm diameter options. The case is stainless steel and available in several different finishes. Removable wrist bands are available in metal and Horween leather, and more readily removable than those of the previous generation. The device has an "all-day" battery which Motorola claims to last longer than that of the previous generation Moto 360. Like the previous watch, the 2nd generation Moto 360 charges wirelessly by being placed on an included cradle. It has dual microphones for voice recognition and noise rejection and a vibration motor allowing tactile feedback. An ambient light sensor optimizes screen brightness and allows gesture controls such as dimming the screen by placing one's hand over it. Bluetooth 4.0 LE is included for connectivity and wireless accessories. Like the previous generation, its ambient light sensor is located below the main display. A PPG and 9-axis accelerometer enable health and activity monitoring. It has IP67 certification for dust resistance and fresh water resistance rated at 30-minutes at 1 meter (4 feet) depth. Software As of early 2017, the Moto 360 runs Android Wear 1.5, Google's Android-based platform specifically designed for wearable devices and Android Marshmallow 6.0.1 and pairs with any phone running Android 4.3 or higher. Also compatible with iPhone iOS v9+ when paired with Android Wear app for iOS . Its software displays notifications from paired phones. It uses paired phones to enable interactive features such as Google Now cards, search, navigation, playing music, and integration with apps such as Google Fit, Evernote, and others. The last supported version of the software is Android Wear 2.0 Price The starting price was US$300. Reception Impressions of the Moto 360 were generally positive, especially in comparison to its predecessor, however the limitations of Android Wear concerned some critics. In contrasting the industrial design with the software, Dan Seifert of The Verge noted "if you buy the Moto 360 smartwatch, you’re paying more for the watch than you are for the smart". The Guardian gave the device four out of five stars, concluding that "it’s no more capable than almost any other Android Wear watch" despite having "fluid performance" and being more comfortable than the first generation. See also Moto 360 (1st generation) Wearable computer Microsoft Band Apple Watch References External links Android (operating system) devices Products introduced in 2015 Wear OS devices Smartwatches Motorola products
Moto 360 (2nd generation)
Technology
587
14,446,548
https://en.wikipedia.org/wiki/Relaxin/insulin-like%20family%20peptide%20receptor%204
Relaxin/insulin-like family peptide receptor 4, also known as RXFP4, is a human G-protein coupled receptor. Function GPR100 is a member of the rhodopsin family of G protein-coupled receptors (GPRs) (Fredriksson et al., 2003).[supplied by OMIM] See also Relaxin receptor References External links Further reading G protein-coupled receptors
Relaxin/insulin-like family peptide receptor 4
Chemistry
86
38,720,150
https://en.wikipedia.org/wiki/Google%20Play%20Music
Google Play Music was a music and podcast streaming service and an online music locker operated by Google as part of its Google Play line of services. The service was announced on May 10, 2011; after a six-month, invitation-only beta period, it was publicly launched on November 16, 2011, and shut down in December 2020. Users with standard accounts could store up to 50,000 songs from their personal libraries at no cost. A paid Google Play Music subscription allowed users to stream, on demand, any song in the Google Play Music catalog and the YouTube Music Premium catalog and, in several territories, the YouTube Premium catalog. Also, users could purchase additional tracks from the music store section of Google Play. Google Play Music mobile apps also supported offline playback of tracks stored on the device. Features Standard accounts Google Play Music offered all users storage of up to 50,000 files for free. Users could listen to songs through the service's web player and mobile apps. The service scanned the user's collection and matched the files to tracks in Google's catalog, which could then be streamed or downloaded in up to 320 kbit/s quality. Any files that were not matched were uploaded to Google's servers for streaming or re-download. Songs purchased through the Google Play Store did not count against the 50,000-song upload limit. Supported file formats for upload included MP3, AAC, WMA, FLAC, Ogg, and ALAC. Non-MP3 uploads would be converted to MP3. Files could be up to 300 MB after conversion. Songs could be downloaded on the mobile apps for offline playback, and on computers through the Music Manager app. Standard users located in the United States, Canada, and India could also listen to curated radio stations, supported by video and banner advertisements. Stations were based on "an activity, your mood, or your favorite popular music". Up to six songs per hour could be skipped when listening to curated radio. Podcasts were also available to listen to for free for standard users in the US and Canada. Premium accounts With a paid subscription to Google Play Music, users received access to on-demand streaming of 40 million songs and offline music playback on the mobile apps, with no advertisements during listening and no limit on the number of track skips. A one-time 30-day free trial for a subscription to Google Play Music was offered to new users. Paid subscribers also received access to YouTube Premium (including YouTube Music) in eligible countries. Platforms On computers, music and podcasts could be listened to from a dedicated Google Play Music section of the Google Play website. On smartphones and tablets, music could be listened to through the Google Play Music mobile app for the Android and iOS operating systems, while podcasts were only supported on Android. Up to five smartphones could be used to access the library in Google Play Music, and up to ten devices total. Listening was limited to one device at a time. Samsung Galaxy S8 In April 2017, reports surfaced that the default music player on the then-new Samsung Galaxy S8 would be Google Play Music, continuing a trend that started with the S7 in 2016. However, for the S8, Samsung partnered with Google to incorporate additional exclusive features into the app, including the ability to upload up to 100,000 tracks, an increase from the 50,000 tracks users were normally allowed to upload. Google also stated that it would develop other "special features in Google Play Music just for Samsung customers".
In June, Google Play Music on the S8 was updated to exclusively feature "New Release Radio", a daily, personalized playlist of new music releases. In July, the playlist was made available to all users, with Google noting in a press release that the exclusivity on Samsung devices was part of an "early access program" for testing and feedback purposes. History Introduction (2010–2011) Google first hinted at releasing a cloud media player during their 2010 Google I/O developer conference, when Google's then-Senior Vice President of Social Vic Gundotra showed a "Music" section of the then-called Android Market during a presentation. A music service was officially announced at the following year's I/O conference on May 10, 2011, under the name "Music Beta". Initially, it was only available by invitation to residents of the United States, and had limited functionality; the service featured a no-cost "music locker" for storage of up to 20,000 songs, but no music store was present during the beta period, as Google was not yet able to reach licensing deals with major record labels. After a six-month beta period, Google publicly launched the service in the US on November 16, 2011, as "Google Music" with its "These Go to Eleven" announcement event. The event introduced several features of the service, including a music store integrated into the then-named Android Market, music sharing via the Google+ social network, "Artist Hub" pages for musicians to self-publish music, and song purchasing reflected on T-Mobile phone bills. At launch, Google had partnerships with three major labels – Universal Music Group, EMI, and Sony Music Entertainment – along with other, smaller labels, although no agreement had been reached with Warner Music Group; in total, 13 million tracks were covered by these deals, 8 million of which were available for purchase on the launch date. To promote the launch, several artists released free songs and exclusive albums through the store; The Rolling Stones debuted the live recording Brussels Affair (Live 1973), and Pearl Jam released a live concert recorded in Toronto as 9.11.2011 Toronto, Canada. Slow growth (2012–2017) In January 2012, a feature was added to Google Music that allows users to download 320 kbit/s MP3 copies of any file in their library, with a two-download limit per track via the web, or unlimited downloads via the Music Manager app. According to a February 2012 report from CNET, Google executives were displeased with Google Music's adoption rate and revenues in its first three months. In March 2012, the company rebranded the Android Market and its digital content services as "Google Play"; the music service was renamed "Google Play Music". Google announced in October 2012 that they had signed deals with Warner Music Group that would bring "their full music catalog" to the service. At the Google I/O developer conference in May 2013, Google announced that Google Play Music would be expanded to include a paid on-demand music streaming service called "All Access", allowing users to stream any song in the Google Play catalog. It debuted immediately in the United States for $9.99 per month ($7.99 per month if the users signed up before June 30). The service allows users to combine the All Access catalog with their own library of songs. Google Play Music was one of the first four apps compatible with Google's Chromecast digital media player that launched in July 2013. 
In October 2014, a new "Listen Now" feature was introduced, providing contextual and curated recommendations and playlists. The feature was adapted from technology by Songza, which Google acquired earlier in the year. On November 12, 2014, Google subsidiary YouTube announced "Music Key", a new premium service succeeding All Access that included the Google Play Music streaming service, along with advertising-free access to streaming music videos on YouTube. Additionally, aspects of the two platforms were integrated; Google Play Music recommendations and YouTube music videos are available across both services. The service was re-launched in a revised form as YouTube Red (now YouTube Premium) on October 28, 2015, expanding its scope to offer ad-free access to all YouTube videos, as opposed to just music videos, as well as premium content produced in collaboration with notable YouTube producers and personalities. In December 2015, Google started offering a Google Play Music family plan, which allows unlimited access for up to six family members for US$14.99/month. The family plan is currently only available in Australia, Belgium, Brazil, Canada, Chile, the Czech Republic, France, Germany, Ireland, Italy, Japan, Mexico, the Netherlands, New Zealand, Norway, Russia, South Africa, Spain, Ukraine, the United Kingdom, and the United States. In April 2016, Google announced that podcasts would be coming to Google Play Music. Its first original podcast series, "City Soundtracks", was announced in March 2017, and would "feature interviews with various musicians about how their hometowns influenced their work, including the people and the moments that had an impact". In November 2016, Google introduced the Google Home smart speaker system, with built-in support for Google Play Music. Sunsetting (2018–2020) In May 2018, YouTube announced a new version of the YouTube Music service, including a web-based desktop player and redesigned mobile app, more dynamic recommendations based on various factors, and use of Google artificial intelligence technology to search songs based on lyrics and descriptions. YouTube Music was provided to Google Play Music users as part of the YouTube Premium offering. In June 2018, Google announced that YouTube Red would be replaced by YouTube Premium along with YouTube Music. As a result, users subscribed to Google Play Music in the United States, Australia, New Zealand and Mexico are now given access to YouTube Premium, which includes YouTube Music Premium. Users outside of those four countries are still required to pay the regular YouTube Premium price to access Premium features, but are given free access to YouTube Music Premium. In June 2018, Google announced plans to shut down Play Music and to offer subscribers a migration path to YouTube Music. Since May 2020, users have been able to move their music collections, personal taste preferences and playlists to YouTube Music, and their podcast history and subscriptions to Google Podcasts. In August 2020, Google announced a detailed shutdown timeline starting in late August and ending with complete data deletion in December. By late August the Music Manager no longer supported uploading or downloading music. By September, Google Play Music was no longer available in New Zealand and South Africa, and by October, music streaming started shutting down for some users internationally on the web and the app.
The music store was made unavailable in October 2020, and the service was discontinued entirely in December 2020, replaced by YouTube Music and Google Podcasts. Geographic availability Standard accounts on Google Play Music were available in 63 countries before the discontinuation of the service. The full list included: Argentina, Australia, Austria, Belarus, Belgium, Bolivia, Bosnia and Herzegovina, Brazil, Bulgaria, Canada, Chile, Colombia, Costa Rica, Croatia, Cyprus, Czech Republic, Denmark, Dominican Republic, Ecuador, El Salvador, Estonia, Finland, France, Germany, Greece, Guatemala, Honduras, Hungary, Iceland, India, Ireland, Italy, Japan, Latvia, Liechtenstein, Lithuania, Luxembourg, North Macedonia, Malta, Mexico, Netherlands, New Zealand, Nicaragua, Norway, Panama, Paraguay, Peru, Poland, Portugal, Romania, Russia, Serbia, Slovakia, Slovenia, South Africa, Spain, Sweden, Switzerland, Ukraine, United Kingdom, United States, Uruguay, and Venezuela. Premium subscriptions were available in the same countries as Standard accounts. Availability of music was introduced in the United Kingdom, France, Germany, Italy, and Spain in October 2012, Czech Republic, Finland, Hungary, Liechtenstein, Netherlands, Russia, and Switzerland in September 2013, Mexico in October 2013, Germany in December 2013, Greece, Norway, Sweden, and Slovakia in March 2014, Canada, Poland and Denmark in May 2014, Bolivia, Chile, Colombia, Costa Rica, Peru, and Ukraine in July 2014, Dominican Republic, Ecuador, Guatemala, Honduras, Nicaragua, Panama, Paraguay, El Salvador, and Venezuela in August 2014, Brazil and Uruguay in September 2014, 13 new countries in November 2014, Brazil in November 2014, Argentina in June 2015, Japan in September 2015, South Africa and Serbia in December 2015, and India in September 2016, where only purchasing of music was offered. The All Access subscription service launched in India in April 2017. Reception In 2013, Entertainment Weekly compared a number of music services and gave Google Play Music All Access a "B+" score, writing, "The addition of uploading to augment the huge streaming archive fills in some huge gaps." References External links Android (operating system) software IOS software Play Music Mobile software 2011 software Mobile software distribution platforms Products introduced in 2011 Products and services discontinued in 2020 Music streaming services Android Auto software Music
Google Play Music
Technology
2,547
51,016,079
https://en.wikipedia.org/wiki/Mount%20Elliott%20Mining%20Complex
Mount Elliott Mining Complex is a heritage-listed copper mine and smelter at Selwyn, Shire of Cloncurry, Queensland, Australia. It was designed by William Henry Corbould and built in 1908. It is also known as Mount Elliott Smelter and Selwyn. It was added to the Queensland Heritage Register on 16 September 2011. History The Mount Elliott Mining Complex is an aggregation of the remnants of copper mining and smelting operations from the early 20th century and the associated former mining township of Selwyn. The earliest copper mining at Mount Elliott was in 1906 with smelting operations commencing shortly after. Significant upgrades to the mining and smelting operations occurred under the management of W.H. Corbould during 1909–1910. Following these upgrades and increases in production, the Selwyn Township grew quickly and had 1500 residents by 1918. The Mount Elliott Company took over other companies on the Cloncurry field in the 1920s, including the Mount Cuthbert and Kuridala/Hampden smelters. Mount Elliott operations were taken over by Mount Isa Mines in 1943 to ensure the supply of copper during World War Two. The Mount Elliott Company was eventually liquidated in 1953. Mount Elliott Smelter The existence of copper in the Leichhardt River area of north western Queensland had been known since Ernest Henry discovered the Great Australia Mine in 1867 at Cloncurry. In 1899 James Elliott discovered copper on the conical hill that became Mount Elliott, but having no capital to develop the mine, he sold an interest to James Morphett, a pastoralist of Fort Constantine station near Cloncurry. Morphett, being drought stricken, in turn sold out to John Moffat of Irvinebank, the most successful mining promoter in Queensland at the time. Plentiful capital and cheap transport were prerequisites for developing the Cloncurry field, which had stagnated for forty years. Without capital it was impossible to explore and prove ore-bodies; without proof of large reserves of wealth it was futile to build a railway; and without a railway it was hazardous to invest capital in finding large reserves of ore. The mining investor or the railway builder had to break the impasse. In 1906-1907 copper averaged a ton on the London market, the highest price for thirty years, and the Cloncurry field grew. The Great Northern railway was extended west of Richmond in 1905-1906 by the Queensland Government and mines were floated on the Melbourne Stock Exchange. At Mount Elliott a prospecting shaft had been sunk and on 1 August 1906 a Cornish boiler and winding plant were installed on the site. Mount Elliott Limited was floated in Melbourne on 13 July 1906. In 1907 it was taken over by British and French interests and restructured. Combining with its competitor, Hampden Cloncurry Copper Mines Limited, Mount Elliott formed a special company to finance and construct the railway from Cloncurry to Malbon, Kuridala (then Friezeland) and Mount Elliott (later Selwyn). This new company then entered into an agreement with the Queensland Railways Department in July 1908. The Selwyn railway, which was known as the "Syndicate Railway", aroused opposition in 1908 from the trade unions and Labor movement generally, who contended that railways should be State-owned. However, the Hampden-Mount Elliott Railway Bill was passed by the Queensland Parliament and assented to on 21 April 1908; construction finished in December 1910. The railway terminated at the Mount Elliott smelter. 
By 1907 the main underlie shaft had been sunk and construction of the smelters was underway using a second-hand water-jacket blast furnace and converters. At this time, W.H. Corbould was appointed general manager of Mount Elliott Limited. The second-hand blast furnace and converters were commissioned or "blown in" in May 1909, but were problematic causing hold-ups. Corbould referred to the equipment in use as being the "worst collection of worn-out junk he had ever come across". Corbould soon convinced his directors to scrap the plant and let him design new works. Corbould was a metallurgist and geologist as well as mine/smelter manager. He foresaw a need to obtain control and thereby ensure a reliable supply of ore from a cross-section of mines in the region. He also saw a need to implement an effective strategy to manage the economies of smelting low-grade ore. Smelting operations in the region were made difficult by the technical and economic problems posed by the deterioration in the grade of ore. Corbould resolved the issue by a process of blending ores with different chemical properties, increasing the throughput capacity of the smelter and by championing the unification of smelting operations in the region. In 1912, Corbould acquired Hampden Consols Mine at Kuridala for Mount Elliott Limited, followed with the purchases of other small mines in the district. Walkers Limited of Maryborough was commissioned to manufacture a new water jacket furnace for the smelters. An air compressor and blower for the smelters were constructed in the powerhouse and an electric motor and dynamo provided power for the crane and lighting for the smelter and mine. The new smelter was blown in September 1910, a month after the first train arrived, and it ran well, producing of blister copper by the end of the year. The new smelting plant made it possible to cope with low-grade sulphide ores at Mount Elliott. The use of of low-grade sulphide ores bought from the Hampden Consols Mine in 1911 made it clear that if a supply of higher sulphur ore could be obtained and blended, performance and economy would improve. Accordingly, the company bought a number of smaller mines in the district in 1912. Corbould mined with cut and fill stoping but a young Mines Inspector condemned the system, ordered it dismantled and replaced with square set timbering. In 1911, after gradual movement in stopes on the No.3 level, the smelter was closed for two months. Nevertheless, of blister copper was produced in 1911, rising to in 1912 - the company's best year. Many of the surviving structures at the site were built at this time. Troubles for Mount Elliott started in 1913. In February, a fire at the Consols Mine closed it for months. In June, a thirteen-week strike closed the whole operation, severely depleting the workforce. The year 1913 was also bad for industrial accidents in the area, possibly due to inexperienced people replacing the strikers. Nevertheless, the company paid generous dividends that year. At the end of 1914 smelting ceased for more than a year due to shortage of ore. Although of blister copper was produced in 1913, production fell to in 1914 and the workforce dwindled to only 40 men. For the second half of 1915 and early 1916 the smelter treated ore railed south from Mount Cuthbert. At the end of July 1916 the smelting plant at Selwyn was dismantled except for the flue chambers and stacks. 
A new furnace with a capacity of per day was built, a large amount of second-hand equipment was obtained and the converters were increased in size. After the enlarged furnace was commissioned in June 1917, continuing industrial unrest retarded production which amounted to only of copper that year. The point of contention was the efficiency of the new smelter which processed twice as much ore while employing fewer men. The company decided to close down the smelter in October and reduce the size of the furnace, the largest in Australia, from . In the meantime the price of copper had almost doubled from 1916 due to wartime consumption of munitions. The new furnace commenced on 16 January 1918 and of ore were smelted yielding of blister copper which were sent to the Bowen refinery before export to Britain. Local coal and coke supply was a problem and materials were being sourced from the distant Bowen Colliery. The smelter had a good run for almost a year except for a strike in July and another in December, which caused Corbould to close down the plant until New Year. In 1919, following relaxation of wartime controls by the British Metal Corporation, the copper price plunged from about per ton at the start of the year to per ton in April, dashing the company's optimism regarding treatment of low grade ores. The smelter finally closed after two months operation and most employees were laid off. For much of the period 1919 to 1922, Corbould was in England trying to raise capital to reorganise the company's operations but he failed and resigned from the company in 1922. The Mount Elliott Company took over the assets of the other companies on the Cloncurry field in the 1920s - Mount Cuthbert in 1925 and Kuridala in 1926. Mount Isa Mines bought the Mount Elliott plant and machinery, including the three smelters, in 1943 for , enabling them to start copper production in the middle of the Second World War. The Mount Elliott Company was finally liquidated in 1953. In 1950 A.E. Powell took up the Mount Elliott Reward Claim at Selwyn and worked close to the old smelter buildings. An open cut mine commenced at Starra, south of Mount Elliott and Selwyn, in 1988 and is Australia's third largest copper producer producing copper-gold concentrates from flotation and gold bullion from carbon-in-leach processing. Profitable copper-gold ore bodies were recently proved at depth beneath the Mount Elliott smelter and old underground workings by Cyprus Gold Australia Pty Ltd. These deposits were subsequently acquired by Arimco Mining Pty Ltd for underground development which commenced in July 1993. A decline tunnel portal, ore and overburden dumps now occupy a large area of the Maggie Creek valley south-west of the smelter which was formerly the site of early miner's camps. Selwyn Township In 1907, the first hotel, run by H. Williams, was opened at the site. The township was surveyed later, around 1910, by the Queensland Mines Department. The town was to be situated north of the mine and smelter operations adjacent the railway, about distant. It took its name from the nearby Selwyn Ranges which were named, during Burke's expedition, after the Victorian Government Geologist, Alfred Richard Cecil Selwyn. The town has also been known by the name of Mount Elliott, after the nearby mines and smelter. Many of the residents either worked at the Mount Elliott Mine and Smelter or worked in the service industries which grew around the mining and smelting operations. 
Little documentation exists about the everyday life of the town's residents. Surrounding sheep and cattle stations, however, meant that meat was available cheaply and vegetables grown in the area were delivered to the township by horse and cart. Imported commodities were, however, expensive. By 1910 the town had four hotels. There was also an aerated water manufacturer, three stores, four fruiterers, a butcher, baker, saddler, garage, police, hospital, banks, post office (officially from 1906 to 1928, then unofficially until 1975) and a railway station. There was even an orchestra of ten players in 1912. The population of Selwyn rose from 1000 in 1911 to 1500 in 1918, before gradually declining. Description Mount Elliott Smelter Mount Elliott Smelter is located in Cloncurry Shire, approximately one kilometre south of the former township of Selwyn, and south of Cloncurry. The main processing infrastructure and remains are centralised on the northern side of the low hill known as Mount Elliott. Immediately north of the main processing complex is a shallow basin formed by natural rises in ground. Within this basin lay the powerhouse and boiler house machinery beds, ore tunnel, beehive kiln, the lower condenser area and railway embankments. To the east of the central complex are the remains of an assay office, furnaces, one possible and one identifiable explosives magazine, the upper condenser bank, tank stands, and the remains of a substantial residence. To the south-west, on the central low hill, are the remains of a winder engine machine-bed, a square brick stack that served the boilers of the winder complex, and the single remaining bed and footings of the primary ore crusher. Further west on low ridges are the remains of a strong room, office, stone tank stand and smithy. Selwyn Township The Selwyn Township is located at the northern end of a valley running south to Mount Elliott Smelter which is about distant. All buildings have been removed and the evidence of the township now comprises garden plots, cement surfaces and corrugated iron water tanks. The site of the Union Hotel remains identifiable and the timber stumps of the stationmaster's house, the railway formation and surviving timber sleepers are among the most visible remains. The railway embankment follows the eastern side of the valley between the town and smelter passing several miners' hut sites comprising rough stone walls, and benched surfaces with stone retaining walls. Identifiable sites in the valley include a cement surface formerly under the high-set school building, the police station site, and the original smelter site at the base of a hill on the western side of the valley. The town cemetery, about south-east of the town, contains about fifteen headstones in separate sections for Catholics and Protestants. All but one headstone are from Melrose & Fenwick of Townsville. The grave sites include three women, a returned Anzac accidentally killed in the mine, a man who died of injuries in the local hospital, and a miner from Mount Cobalt who was buried in 1925 after Selwyn was almost deserted. Heritage listing Mount Elliott Mining Complex was listed on the Queensland Heritage Register on 16 September 2011 having satisfied the following criteria. 
The Mount Elliott Mining Complex, incorporating the remnants of the Mount Elliott Mine, Smelter, a range of associated infrastructure, scattered archaeological artefacts, the abandoned town of Selwyn and its associated cemetery, has the potential to provide important information on aspects of Queensland's history particularly early copper smelter practices and technologies, the full range of activities peripheral to those base operations and, importantly, the people who lived and worked in this complex historic mining landscape. The Mount Elliott Mining Complex has sufficient archaeological integrity and diversity in its assemblage to facilitate detailed studies which would reveal the largely undocumented social and cultural aspects of the occupation and use of the mine, smelter and Selwyn Township areas. Important research questions could focus on, but are not limited to, cultural identity and ethnicity, socioeconomic status, individual and collective living conditions, and individual adaptations to the remoteness and harshness of the local environment. Archaeological investigations within the Mount Elliott Mining Complex have potential to reveal specific details about the function and use of the area that complement and augment archival records. Investigations of the remnant mining and smelting infrastructure may help answer important research questions relating to mining and smelting operations including, but not limited to: the design and operation of an early primary ore-processing plant, base metals mine and smelting operation in Queensland adaptation of work practices due to remoteness, harshness and the local environment fundamental and influential changes to copper mining and smelting practice in Queensland initiated by Mount Elliott's manager W.R. Corbould (1907-1922), especially the use of more efficient and economical production techniques compared to other similar operations The Mount Elliott Mining Complex is an important component of a broader historic mining landscape as operations at Mount Elliott helped initiate extractive and primary processing industries in the Cloncurry region and north-west Queensland generally. The remnant infrastructure, mining artefacts, and the remains of the mining township of Selwyn have potential for important comparative material to other mining sites in the region, particularly the nearby Hampden Company Smelter at Kuridala and Mount Cuthbert Township and Smelter. Archaeological investigations at the Mount Elliott Mining Complex may reveal new information that expands our understanding of such enterprises and on the everyday lives of the people living and working at such sites across Queensland. See also Mount Elliott Company Metallurgical Plant and Mill Mount Elliott mine References Burke, Heather and Gordon Grimwade (2001) Cultural Heritage Recommendations for the Mount Elliott Mine and Smelter, Northwest QLD, unpublished report to AustralAsian Resource Consultants Pty Ltd Hooper, Colin (1993) Angor to Zillmanton: Stories of North Queensland's deserted towns. Mundingburra: Colin Hooper Hore-Lacy, I. (ed) (1981) Broken Hill to Mount Isa: The Mining Odyssey of W.H. 
Corbould, Melbourne: Hyland House Kerr, Ruth (1992) Queensland Historical Mining Sites Study, Volume 4, Unpublished report to the Department of Environment and Heritage, Brisbane Knight, James (1992) Mount Elliott Mine and Smelter Site, North West Queensland: a preliminary survey and conservation recommendations, unpublished report to Cyprus Gold Australia Corporation Lennon, Jane and Howard Pearce (1996) Mining Heritage Places Study: Northern and Western Queensland, Volume 4: Mount Isa Mining District, unpublished report to the Queensland Department of Environment and Heritage Nexus Archaeology & Heritage (2009) Assessment of Historical and Industrial Archaeology Values: Mt Elliott Smelter Precinct, North-west Queensland, unpublished report to Ivanhoe Cloncurry Limited Attribution External links Queensland Heritage Register Shire of Cloncurry Industrial buildings in Queensland Articles incorporating text from the Queensland Heritage Register Smelting Copper mines in Queensland Metal companies of Australia Copper mining companies of Australia Archaeological sites in Queensland
Mount Elliott Mining Complex
Chemistry
3,617
28,013,951
https://en.wikipedia.org/wiki/Lactarius%20alnicola
Lactarius alnicola, commonly known as the golden milkcap, is a species of fungus in the family Russulaceae. The fruit bodies produced by the fungus are characterized by a sticky, vanilla-colored cap up to wide with a mixture of yellow tones arranged in faint concentric bands. The stem is up to long and has yellow-brown spots. When it is cut or injured, the mushroom oozes a white latex, which has an intensely peppery taste. The acrid taste of the fruit bodies renders them unpalatable. Two varieties have been named: var. pitkinensis, known from Colorado, and var. pungens, from Michigan. The fungus is found in the western United States and Mexico, where it grows in mycorrhizal associations with various coniferous trees species, such as spruce, pine and fir, and deciduous species such as oak and alder. It has also been collected in India. Taxonomy The species was originally described by American mycologist Alexander H. Smith in 1960, from a collection made near Warm Lake, Idaho, two years prior. The species was originally collected under alders with conifers nearby, and its specific epithet reflects the presumed association between the species—alnicola means "living with alder". Researchers subsequently discovered that the species has a relationship with conifers, not with alders, as the name implies. The mushroom is commonly known as the "golden milkcap". Lactarius alnicola is classified in subsection Scrobiculati of section Piperites in the genus Lactarius. Species in this subsection are characterized by having a milk-white to creamy or whey-like latex that soon turns yellow upon exposure to air, and which may stain freshly cut surfaces of the fruit body yellow. Further, the cap margin is bearded, strigose (covered with sharp, straight, and stiff hairs), and coarsely tomentose or woolly when young. Other species in the subsection include L. subpaludosus, L. delicatus, L. torminosus, L. payettensis, L. gossypinus, L. pubescens, L. resimus and L. scrobiculatus (the type species of the subsection). Description The cap is wide, initially convex but becoming depressed to funnel-shaped in maturity. The cap margin is initially rolled inward, then becomes uplifted as the cap expands. The cap surface is sticky to slimy, and near the margin there are matted "hairs" beneath the slimy or sticky layer. The color of the cap surface is yellow-ochre, sometimes with concentric bands of lighter and darker shades; the color becomes paler near the margin. The gills are adnate (squarely attached to the stem) to decurrent (attached to and running down the length of the stem), narrow, and crowded closely together. Forked near the stem, the gills are initially whitish before becoming pale ochraceous-buff. There are many lamellulae—small gills that do not extend completely to the stem. The stem is long and thick, nearly equal in width throughout or tapered downward, dry, hard, coarsely pitted, and whitish to cream yellowish. It is initially solid, then becomes hollow with age. The flesh is thick, hard, whitish, and slowly stains pale yellow after the mushroom has been cut open. It has no distinctive odor, while the taste is immediately acrid. The latex is sparse, white on exposure to air, and unchanging or very slowly changing color to yellow. It stains cut flesh yellow, and tastes acrid. According to mycologist David Arora, the oak-loving central and southern Californian population of this species has a more latent acrid taste. 
The spore print may range slightly in color: thin deposits are white, thick deposits are more yellow. The mushroom is considered inedible because of the intensely peppery taste. Microscopic characters The spores are 7.5–10 by 6–8.5 μm, ellipsoid, and ornamented with warts and narrow bands that form a partial reticulum. The surface prominences are up to 1 μm high, but mostly in the range 0.3–0.6 μm. The spores are hyaline (translucent) and amyloid, meaning that they will adsorb iodine when stained with Melzer's reagent. The basidia, the spore-bearing cells, are four-spored, and measure 37–48 by 8–11 μm. The cap cuticle is an ixocutis (a tissue layer on the surface of a mushroom made of a layer of gelatinous hyphae) made of encrusted hyphae that are 3–5 μm wide. Varieties In their 1979 monograph of North American Lactarius species, Hesler and Smith named two varieties of L. alnicola. Lactarius alnicola var. pitkinensis, reported under mixed aspen and conifers from Ashcroft, Colorado, is very similar to the nominate variety, but it has a white to cream-colored cap and white, unchanging latex. It has slightly smaller fruit bodies, with caps up to wide, and stems up to long; its spores are slightly larger, measuring 9–10.5 by 7.5–9 μm. Lactarius alnicola var. pungens, reported only from mixed forests in Michigan, is similar but has a tacky surface that soon dries, a dull ochraceous to ochraceous-tan cap with an ochraceous-tawny center. It has whitish flesh, with a pungent odor described as "distinct and peculiar". Similar species Novice mushroom hunters may mistake L. alnicola for the edible species Cantharellus cibarius, which has a vase-shaped fruit body with strongly decurrent gills. Other similar Lactarius species include L. zonarius, L. payettensis, L. yazooensis, L. olympianus, and L. psammicola f. glaber. L. olympianus associates with conifers and has a pale yellow-ochre, frequently zonate cap, but may be distinguished by its stem, which is usually covered with spots. L. payettensis has a roughened, not smooth, cap margin. L. yazooensis has a zonate cap and extremely acrid flesh. Its gills change color from pale vinaceous to light pinkish-brown in maturity. L. psammicola f. glaber has a pinkish-buff spore print. Mature fruit bodies of L. scrobiculatus var. montanus have been confused with L. alnicola. Its fruit bodies feature a smooth cap margin, acrid taste, white latex which slowly (over several minutes) turns yellow on exposure or stains the flesh yellow, and do not turn "clay color" when bruised. Ecology, habitat, and distribution Lactarius alnicola is an ectomycorrhizal species, and engages in a mutualistic association with certain plant species. In this association, the hyphae of the fungus permeate large volumes of soil and obtain scarce elements, especially phosphorus—which is often limiting for plant growth—which they pass on to the plant in exchange for metabolic products of the plant's photosynthesis. The ectomycorrhizae that the fungus forms in association with Picea engelmannii have been shown to contain lactifers (latex-producing cells) and pigments similar to the fruit body. Fruit bodies of the fungus grow in groups on the ground under alders and conifers, usually appearing between July and October. It is a fairly common species in the western United States and Baja California. Additional collection locations in Mexico include Veracruz, Villarreal, and Tapia. A population in central and southern California is known to associate with oak trees. 
In the Rocky Mountains it is associated with the subalpine tree species Engelmann Spruce (Picea engelmannii), while at lower elevations it is commonly found with white spruce (Picea glauca). It is also known to associate with Ponderosa Pine (Pinus ponderosa) and Douglas-fir (genus Pseudotsuga). The mushroom has also been collected from Bageshwar, in the state of Uttarakhand, India. Lactarius alnicola generally establishes symbiotic associations with alder trees (Alnus spp.) in humid, wooded environments. As a mycorrhizal fungus, it improves the intake of nutrients for the tree by promoting nitrogen absorption from the soil. See also List of Lactarius species References Cited books External links Fungi described in 1960 alnicola Fungi of India Fungi of North America Inedible fungi Taxa named by Alexander H. Smith Fungus species
Lactarius alnicola
Biology
1,854
430,790
https://en.wikipedia.org/wiki/Gauge%20boson
In particle physics, a gauge boson is a bosonic elementary particle that acts as the force carrier for elementary fermions. Elementary particles whose interactions are described by a gauge theory interact with each other by the exchange of gauge bosons, usually as virtual particles. Photons, W and Z bosons, and gluons are gauge bosons. All known gauge bosons have a spin of 1 and therefore are vector bosons. For comparison, the Higgs boson has spin zero and the hypothetical graviton has a spin of 2. Gauge bosons are different from the other kinds of bosons: first, fundamental scalar bosons (the Higgs boson); second, mesons, which are composite bosons made of quarks; third, larger composite, non-force-carrying bosons, such as certain atoms. Gauge bosons in the Standard Model The Standard Model of particle physics recognizes four kinds of gauge bosons: photons, which carry the electromagnetic interaction; W and Z bosons, which carry the weak interaction; and gluons, which carry the strong interaction. Isolated gluons do not occur because they are colour-charged and subject to colour confinement. Multiplicity of gauge bosons In a quantized gauge theory, gauge bosons are quanta of the gauge fields. Consequently, there are as many gauge bosons as there are generators of the gauge field. In quantum electrodynamics, the gauge group is U(1); in this simple case, there is only one gauge boson, the photon. In quantum chromodynamics, the more complicated group SU(3) has eight generators, corresponding to the eight gluons. The three W and Z bosons correspond (roughly) to the three generators of SU(2) in electroweak theory. Massive gauge bosons Gauge invariance requires that gauge bosons are described mathematically by field equations for massless particles. Otherwise, the mass terms generate additional non-zero terms in the Lagrangian under gauge transformations, violating gauge symmetry. Therefore, at a naïve theoretical level, all gauge bosons are required to be massless, and the forces that they describe are required to be long-ranged. The conflict between this idea and experimental evidence that the weak and strong interactions have a very short range requires further theoretical insight. According to the Standard Model, the W and Z bosons gain mass via the Higgs mechanism. In the Higgs mechanism, the four gauge bosons (of SU(2)×U(1) symmetry) of the unified electroweak interaction couple to a Higgs field. This field undergoes spontaneous symmetry breaking due to the shape of its interaction potential. As a result, the universe is permeated by a non-zero Higgs vacuum expectation value (VEV). This VEV couples to three of the electroweak gauge bosons (W+, W− and Z), giving them mass; the remaining gauge boson remains massless (the photon). This theory also predicts the existence of a scalar Higgs boson, which has been observed in experiments at the LHC. Beyond the Standard Model Grand unification theories The Georgi–Glashow model predicts additional gauge bosons named X and Y bosons. The hypothetical X and Y bosons mediate interactions between quarks and leptons, hence violating conservation of baryon number and causing proton decay. Such bosons would be even more massive than W and Z bosons due to symmetry breaking. Analysis of data collected from such sources as the Super-Kamiokande neutrino detector has yielded no evidence of X and Y bosons. Gravitons The fourth fundamental interaction, gravity, may also be carried by a boson, called the graviton.
In the absence of experimental evidence and a mathematically coherent theory of quantum gravity, it is unknown whether this would be a gauge boson or not. The role of gauge invariance in general relativity is played by a similar symmetry: diffeomorphism invariance. W′ and Z′ bosons W′ and Z′ bosons refer to hypothetical new gauge bosons (named in analogy with the Standard Model W and Z bosons). See also 1964 PRL symmetry breaking papers Boson Glueball Quantum chromodynamics Quantum electrodynamics References External links Explanation of gauge boson and gauge fields by Christopher T. Hill Bosons Particle physics
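The claim in the Massive gauge bosons section, that a mass term produces extra non-invariant contributions to the Lagrangian, can be made concrete with a short worked equation. This is a minimal sketch for the abelian (U(1), photon-like) case only; the symbols A_mu, F_mu_nu, alpha and the sign conventions are assumptions of this illustration, not notation taken from the article.

```latex
% Minimal abelian example: under a gauge transformation
%   A_\mu \to A_\mu + \partial_\mu \alpha(x),
% the field strength F_{\mu\nu} = \partial_\mu A_\nu - \partial_\nu A_\mu is unchanged,
% so the kinetic term -\tfrac{1}{4} F_{\mu\nu} F^{\mu\nu} stays gauge invariant.
% A naive mass term does not:
\begin{equation}
  \tfrac{1}{2} m^{2} A_{\mu} A^{\mu}
  \;\longrightarrow\;
  \tfrac{1}{2} m^{2} A_{\mu} A^{\mu}
  + m^{2} A^{\mu} \partial_{\mu} \alpha
  + \tfrac{1}{2} m^{2} \, \partial_{\mu} \alpha \, \partial^{\mu} \alpha .
\end{equation}
% The extra terms vanish for arbitrary \alpha(x) only if m = 0, which is why an
% unbroken gauge symmetry forces a massless gauge boson unless the mass arises
% indirectly, e.g. through the Higgs mechanism described in the article.
```

In the electroweak case the same obstruction applies to the SU(2)×U(1) gauge fields, which is why the W and Z masses come from the Higgs vacuum expectation value rather than from explicit mass terms.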
Gauge boson
Physics
916
10,019,122
https://en.wikipedia.org/wiki/Rebeccamycin
Rebeccamycin (NSC 655649) is a weak topoisomerase I inhibitor isolated from Nocardia bacteria. It is structurally similar to staurosporine, but does not show any inhibitory activity against protein kinases. It shows significant antitumor properties in vitro (IC50=480nM against mouse B16 melanoma cells and IC50=500nM against P388 leukemia cells). It is an antineoplastic antibiotic and an intercalating agent. Becatecarin (BMS-181176) is a synthetic analog of rebeccamycin. Rebeccamycin and becatecarin have been tested in phase II clinical trials for the treatment of lung cancer, liver cancer, breast cancer, lymphoma, retinoblastoma, kidney cancer, and ovarian cancer. References Further reading Experimental cancer drugs Topoisomerase inhibitors Halogen-containing alkaloids
Rebeccamycin
Chemistry
195
709,527
https://en.wikipedia.org/wiki/J-SH04
The J-SH04 was a mobile phone made by Sharp Corporation and released by J-Phone (SoftBank Mobile). It was only available in Japan, and was released in November 2000. It was Japan's second phone with a built-in, back-facing camera. It has a 110,000 pixel CMOS image sensor and a 256 color display. The phone weighs 74 g, and its dimensions are 127 × 39 × 17 mm. It was succeeded by the J-SH05 flip phone, which was released just one month later. It is also considered to be one of the first phones with polyphonic ringtones. While the J-SH04 popularized the concept of a camera phone (branded as Sha-Mail) and was the world's first fully integrated camera and telephone over a cellular mobile network, it had a number of predecessors. In December 1997, Kyocera released the VP-110, which was a PCMCIA videophone adapter with an 80,000-pixel CCD camera that swiveled 210° and attached to the DataScope DS-110 and DS-320 mobile phones. Kyocera released the first commercial mobile camera-phone in September 1999, the VP-210 Visual Phone, which had a front-facing 110,000-pixel CMOS camera enabling both video calling and sending photos over the air. The VP-210 could send its still images as mail attachments or send video at 2 frames per second over a PHS network. In contrast, the J-SH04's camera on the back of the phone was designed to take photos facing away from the user, which was a more popular way to use digital cameras at the time than video calling and selfie photos. The J-SH04 was the transformational moment for the camera phone. Samsung's SCH-V200 phone equipped with a VGA camera was released in South Korea several months before the J-SH04. The Samsung SCH-V200's camera was inside the same case as the phone and used the same battery and memory, but it was not integrated with the phone function. It could not convey an image "at a distance," which some regard as part of the definition of a "camera phone." Instead, photos taken by the SCH-V200 had to be transferred to a PC in order to be sent over a network. References External links http://k-tai.impress.co.jp/cda/article/showcase_top/3913.html The first photo sent from a phone, BBC Witness History Sharp Corporation mobile phones
J-SH04
Technology
535
205,490
https://en.wikipedia.org/wiki/Exeligmos
An exeligmos is a period of 54 years, 33 days that can be used to predict successive eclipses with similar properties and location. For a solar eclipse, after every exeligmos a solar eclipse of similar characteristics will occur in a location close to the eclipse before it. For a lunar eclipse the same part of the earth will view an eclipse that is very similar to the one that occurred one exeligmos before it (see main text for visual examples). The exeligmos is an eclipse cycle that is a triple saros, three saroses (or saroi) long, with the advantage that it has nearly an integer number of days, so the next eclipse will be visible at locations and times near the eclipse that occurred one exeligmos earlier. In contrast, after each saros an eclipse occurs about eight hours later in the day or about 120° to the west of the eclipse that occurred one saros earlier. It corresponds to 57 eclipse years: this means that if there is a solar eclipse (or lunar eclipse), then after one exeligmos a New Moon (resp. Full Moon) will take place at the same node of the orbit of the Moon, and under these circumstances another eclipse can occur. Details The Greeks had knowledge of the exeligmos by 100 BC at the latest. A Greek astronomical clock called the Antikythera mechanism used epicyclic gearing to predict the dates of consecutive exeligmoses. The exeligmos is 669 synodic months (every eclipse cycle must be an integer number of synodic months), almost exactly 726 draconic months (which ensures the new moon occurs near the same node of the Moon's orbit), and also almost exactly 717 anomalistic months (ensuring the moon is at the same point of its elliptic orbit). It also corresponds to 114 eclipse seasons. The first two factors make this a long-lasting eclipse series. The last factor is what makes all the eclipses in an exeligmos so similar. The near-integer number of anomalistic months ensures that the apparent diameter of the moon will be nearly the same with each successive eclipse. The fact that it is very nearly a whole number of days ensures each successive eclipse in the series occurs very close to the previous eclipse in the series. For each successive eclipse in an exeligmos series the longitude and latitude can change significantly because an exeligmos is over a month longer than a whole number of calendar years, and the gamma increases/decreases because an exeligmos is about three hours shorter than a whole number of draconic months. The sun's apparent diameter also changes significantly in one month, affecting the length and width of a solar eclipse. Solar exeligmos example Here is a comparison of two annular solar eclipses one exeligmos apart: Lunar exeligmos example Here is a comparison of two total lunar eclipses one exeligmos apart: Sample series of solar exeligmos Exeligmos table of solar saros 136. Each eclipse occurs at roughly the same longitude but moves about 5-15 degrees in latitude with each successive cycle. Solar exeligmos animation Here is an animation of an exeligmos series. Note the similar paths of each total eclipse, and how they fall close to the same longitude of the earth. Solar Saros Animation (for comparison) This next animation is from the entire saros series of the exeligmos above. Notice how each eclipse falls on a different side of the earth (120 degrees apart). See also Eclipse cycle Saros Full moon cycle References Eclipses Time in astronomy Ancient Greek astronomy
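As a rough check of the figures quoted above, the near-integer relationships between the exeligmos and the various lunar months can be verified with a few lines of arithmetic; the mean month lengths used below are standard approximate values assumed for this sketch, not taken from the article.

```python
# Approximate mean month lengths in days (standard values, assumed here).
SYNODIC = 29.530589       # new moon to new moon
DRACONIC = 27.212221      # node to node
ANOMALISTIC = 27.554550   # perigee to perigee

exeligmos = 669 * SYNODIC                 # ~19755.96 days, nearly a whole number of days
print(exeligmos)
print(726 * DRACONIC)                     # ~19756.07 days: "almost exactly" 726 draconic months
print(717 * ANOMALISTIC)                  # ~19756.61 days: "almost exactly" 717 anomalistic months
print(exeligmos - 54 * 365.2425)          # ~33 days beyond 54 calendar years
print(((exeligmos / 3) % 1) * 24)         # one saros ends roughly 8 hours into the day
```

The last two lines reproduce the "54 years, 33 days" figure and the roughly eight-hour shift per saros mentioned in the text.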
Exeligmos
Astronomy
750
53,089,666
https://en.wikipedia.org/wiki/Kappa%20Lupi
The Bayer designation κ Lupi (Kappa Lupi) is shared by two star systems in the constellation Lupus: κ1 Lupi (HD 134481) κ2 Lupi (HD 134482) According to Eggleton and Tokovinin (2008), the pair form a binary system with an angular separation of . References Lupi, Kappa Lupus (constellation)
Kappa Lupi
Astronomy
80
357,512
https://en.wikipedia.org/wiki/Royal%20Astronomical%20Society
The Royal Astronomical Society (RAS) is a learned society and charity that encourages and promotes the study of astronomy, solar-system science, geophysics and closely related branches of science. Its headquarters are in Burlington House, on Piccadilly in London. The society has over 4,000 members, known as fellows, most of whom are professional researchers or postgraduate students. Around a quarter of Fellows live outside the UK. The society holds monthly scientific meetings in London, and the annual National Astronomy Meeting at varying locations in the British Isles. The RAS publishes the scientific journals Monthly Notices of the Royal Astronomical Society, Geophysical Journal International and RAS Techniques and Instruments, along with the trade magazine Astronomy & Geophysics. The RAS maintains an astronomy research library, engages in public outreach and advises the UK government on astronomy education. The society recognises achievement in astronomy and geophysics by issuing annual awards and prizes, with its highest award being the Gold Medal of the Royal Astronomical Society. The RAS is the UK adhering organisation to the International Astronomical Union and a member of the UK Science Council. History The society was founded in 1820 as the Astronomical Society of London to support astronomical research. At that time, most members were 'gentleman astronomers' rather than professionals. It became the Royal Astronomical Society in 1831 on receiving a Royal Charter from William IV. In 1846 the RAS absorbed the Spitalfields Mathematical Society, which had been founded in 1717 but was suffering from a decline in membership and dwindling finances. The nineteen remaining members of the mathematical society were given free lifetime membership of the RAS; in exchange, their society's extensive library was donated to the RAS. Between 1835 and 1916 women were not allowed to become fellows, but Anne Sheepshanks, Lady Margaret Lindsay Huggins, Agnes Clerke, Annie Jump Cannon and Williamina Fleming were made honorary members. In 1886 Isis Pogson was the first woman to attempt election as a fellow of the RAS, being nominated (unsuccessfully) by her father and two other fellows. All fellows had been male up to this time and her nomination was withdrawn when lawyers claimed that under the provisions of the society's royal charter, fellows were only referred to as he and as such had to be men. A Supplemental Charter in 1915 opened up fellowship to women. On 14 January 1916, Mary Adela Blagg, Ella K Church, A Grace Cook, Irene Elizabeth Toye Warner and Fiammetta Wilson were the first five women to be elected to Fellowship. Publications One of the major activities of the RAS is publishing refereed journals. It publishes three primary research journals: Monthly Notices of the Royal Astronomical Society for topics in astronomy; Geophysical Journal International for topics in geophysics (in association with the Deutsche Geophysikalische Gesellschaft); and RAS Techniques & Instruments for research methods in those disciplines. The society also publishes a trade magazine for members, Astronomy & Geophysics. 
The history of journals published by the RAS (with abbreviations used by the Astrophysics Data System) is: Memoirs of the Royal Astronomical Society (MmRAS): 1822–1977 Monthly Notices of the Royal Astronomical Society (MNRAS): 1827–present Geophysical Supplement to Monthly Notices (MNRAS): 1922–1957 Geophysical Journal (GeoJ): 1958–1988 Geophysical Journal International (GeoJI): 1989–present (volume numbering continues from GeoJ) Quarterly Journal of the Royal Astronomical Society (QJRAS): 1960–1996 Astronomy & Geophysics (A&G): 1997–present (volume numbering continues from QJRAS) RAS Techniques & Instruments (RASTI): 2021–present Membership Fellows Full members of the RAS are styled Fellows, and may use the post-nominal letters FRAS. Fellowship is open to anyone over the age of 18 who is considered acceptable to the society. As a result of the society's foundation in a time before there were many professional astronomers, no formal qualifications are required. However, around three quarters of fellows are professional astronomers or geophysicists. Most of the other fellows are postgraduate students studying for a PhD in those fields, but there are also advanced amateur astronomers, historians of science who specialise in those disciplines, and other related professionals. The society acts as the professional body for astronomers and geophysicists in the UK and fellows may apply for the Science Council's Chartered Scientist status through the society. The fellowship passed 3,000 in 2003. Friends In 2009 an initiative was launched for those with an interest in astronomy and geophysics but without professional qualifications or specialist knowledge in the subject. Such people may join the Friends of the RAS, which offers popular talks, visits and social events. Meetings The Society organises an extensive programme of meetings: The biggest RAS meeting each year is the National Astronomy Meeting, a major conference of professional astronomers. It is held over 4–5 days each spring or early summer, usually at a university campus in the United Kingdom. Hundreds of astronomers attend each year. More frequent smaller 'highlight' meetings feature lectures about research topics in astronomy and geophysics, often given by winners of the society's awards. They are normally held in Burlington House in London on the afternoon of the second Friday of each month from October to May. The talks are intended to be accessible to a broad audience of astronomers and geophysicists, and are free for anyone to attend (not just members of the society). Formal reports of the meetings are published in The Observatory magazine. Specialist discussion meetings are held on the same day as each highlight meeting. These are aimed at professional scientists in a particular research field, and allow several speakers to present new results or reviews of scientific fields. Usually two discussion meetings on different topics (one in astronomy and one in geophysics) take place simultaneously at different locations within Burlington House, prior to the day's highlight meeting. They are free for members of the society, but charge a small entry fee for non-members. The RAS holds a regular programme of public lectures aimed at a general, non-specialist, audience. These are mostly held on Tuesdays once a month, with the same talk given twice: once at lunchtime and once in the early evening. The venues have varied, but are usually in Burlington House or another nearby location in central London. 
The lectures are free, though some popular sessions require booking in advance. The society occasionally hosts or sponsors meetings in other parts of the United Kingdom, often in collaboration with other scientific societies and universities. Library The Royal Astronomical Society has a more comprehensive collection of books and journals in astronomy and geophysics than the libraries of most universities and research institutions. The library receives some 300 current periodicals in astronomy and geophysics and contains more than 10,000 books from popular level to conference proceedings. Its collection of astronomical rare books is second only to that of the Royal Observatory in Edinburgh in the UK. The RAS library is a major resource not just for the society but also the wider community of astronomers, geophysicists, and historians. Education The society promotes astronomy to members of the general public through its outreach pages for students, teachers, the public and media researchers. The RAS has an advisory role in relation to UK public examinations, such as GCSEs and A Levels. Associated groups The RAS sponsors topical groups, many of them in interdisciplinary areas where the group is jointly sponsored by another learned society or professional body: The Astrobiology Society of Britain (with the NASA Astrobiology Institute) The Astroparticle Physics Group (with the Institute of Physics) The Astrophysical Chemistry Group (with the Royal Society of Chemistry) The British Geophysical Association (with the Geological Society of London) The Magnetosphere Ionosphere and Solar-Terrestrial group (generally known by the acronym MIST) The UK Planetary Forum The UK Solar Physics group Presidents The first person to hold the title of President of the Royal Astronomical Society was William Herschel, though he never chaired a meeting, and since then the post has been held by many distinguished astronomers. The post has generally had a term of office of two years, but some holders resigned after one year e.g. due to poor health. Francis Baily and George Airy were elected a record four times each. Baily's eight years in the role are a record (Airy served for seven). Since 1876 no one has served for more than two years in total. The current president is Mike Lockwood, who began his term in May 2024 and will serve for two years. Awards and prizes The highest award of the Royal Astronomical Society is its Gold Medal, which can be awarded for any purpose but most frequently recognises extraordinary lifetime achievement. Among the recipients best known to the general public are Albert Einstein in 1926, and Stephen Hawking in 1985. Other awards are for particular topics in astronomy or geophysics research, which include the Eddington Medal, the Herschel Medal, the Chapman Medal and the Price Medal. Beyond research, there are specific awards for school teaching (Patrick Moore Medal), public outreach (Annie Maunder Medal), instrumentation (Jackson-Gwilt Medal) and history of science (Agnes Mary Clerke Medal). Lectureships include the Harold Jeffreys Lectureship in geophysics, the George Darwin Lectureship in astronomy, and the Gerald Whitrow Lectureship in cosmology. Each year, the society grants a handful of free memberships for life (termed honorary fellowship) to prominent researchers resident outside the UK. Other activities The society occupies premises at Burlington House, London, where a library and meeting rooms are available to fellows and other interested parties. 
The society represents the interests of astronomy and geophysics to UK national and regional, and European government and related bodies, and maintains a press office, through which it keeps the media and the public at large informed of developments in these sciences. The society allocates grants to worthy causes in astronomy and geophysics, and assists in the management of the Paneth Trust. See also National Astronomy Week (NAW) List of astronomical societies List of geoscience organizations References External links The Royal Astronomical Society Scientific organizations established in 1820 Learned societies of the United Kingdom Astronomy organizations Astronomy societies Astronomy in the United Kingdom Astronomical Organisations based in London with royal patronage 1820 establishments in the United Kingdom
Royal Astronomical Society
Astronomy
2,061
3,693,855
https://en.wikipedia.org/wiki/Essure
Essure was a device for female sterilization. It is a metal coil which, when placed into each fallopian tube, induces fibrosis and blockage. Essure was designed as an alternative to tubal ligation. However, it was recalled by Bayer in 2018, and the device is no longer sold due to complications secondary to its implantation. The company has reported that several patients implanted with the Essure System for Permanent Birth Control have experienced and/or reported adverse effects, including: perforation of the uterus and/or fallopian tubes, identification of inserts in the abdominal or pelvic cavity, persistent pain, and suspected allergic or hypersensitivity reaction. Although designed to remain in place for a lifetime, it was approved based on short-term safety studies. Of the 745 women with implants in the original premarket studies, 92% were followed up at one year, and 25% for two years, for safety outcomes. A 2009 review concluded that Essure appeared safe and effective based on short-term studies, and that it was less invasive and could be cheaper than laparoscopic bilateral tubal ligation. About 750,000 women have received the device worldwide. Initial trials found about 4% of women had tubal perforation, expulsion, or misplacement of the device at the time of the procedure. Since 2013, the product has been controversial, with thousands of women reporting severe side effects leading to surgical extraction. Rates of repeat surgery in the first year were ten times greater with Essure than with tubal ligation. Campaigner Erin Brockovich has been hosting a website where women can share their stories after having the procedure. As of 2015, many adverse events, including tubal perforations, intractable pain and bleeding leading to hysterectomies, possible device-related deaths, and hundreds of unintended pregnancies had occurred, according to the US FDA adverse events database and other studies. It was developed by Conceptus Inc. and approved for use in the United States in 2002. Conceptus was acquired by Bayer AG of Germany in June 2013. In 2017, the CE marking in the European Union, and thus the commercial license for Essure, was suspended for at least three months. Authorities in France and Ukraine recalled the implants, and the manufacturer withdrew the product voluntarily in Canada, the UK, Finland, and the Netherlands. In April 2018, the FDA restricted sale and use of Essure, which resulted in a 70% decrease in sales. In July 2018 Bayer announced the halt of sales in the U.S. by the end of 2018. The device is featured in the 2018 Netflix documentary The Bleeding Edge. Use A 2015 review found the effectiveness of Essure is unclear due to the low quality of evidence. With perfect use, another review found evidence of 99.8% effectiveness based on 5 years of follow-up. The reported insertional failure rates are "failure to place 2 inserts in the first procedure (5%), initial tubal patency (3.5%), expulsion (2.2%), perforation (1.8%), or other unsatisfactory device location (0.6%)". Upon follow-up, occlusion was observed to have occurred in 96.5% of patients at 3 months, with the remainder occluded by 6 months. A 2015 study published in the BMJ concluded that Essure was as efficacious as laparoscopic sterilization at preventing pregnancy, but with a "10-fold higher risk of undergoing re-operation" when compared to patients who underwent a laparoscopic sterilization procedure. 
Follow-up For the Essure method, three months after insertion, a radiologist is supposed to perform a fluoroscopic procedure called a hysterosalpingogram, to confirm that the fallopian tubes are completely blocked and that the woman can rely on the Essure inserts for birth control. A contrast agent (dye) is injected through the cervix, and an x-ray technologist takes photos of the Essure coils to ensure no contrast leaks past the Essure. Adverse effects Serious side effects may include persistent pain, perforation of the uterus and fallopian tubes, and migration of the coils into the pelvis or abdomen. Because of the stainless steel, medical staff need to be notified before magnetic resonance imaging (MRI) can be performed. However, the inserts were found to be safe with MRI using a 3-Tesla magnet and are considered MR-conditional. Risks Procedural complications Inability to place inserts (4%) Cramping (30%) Pain (13%) Nausea/vomiting (11%) Dizziness/light headed (9%) Bleeding/spotting (7%) Vasovagal response (fainting) (1.3%) Perforation, expulsion, or other unsatisfactory location of the insert Long-term complications Sources: Abdominal pain (3.8%) Back pain (9%) Menstrual cramps, severe (2.9%) Pelvic or lower abdominal pain, severe (2.5%) Gas/bloating (1.3%) Headache (2.5%) Heavier menstrual bleeding (1.9%) Vaginal discharge or infection (1.5%) Pregnancy (0.48%) and increased risk of ectopic pregnancy Allergic reaction to the materials Rash Autoimmune disease (0.99%) Weight changes Depression Hair loss Suicide attempt (0.55%) Procedure A physician places the coils into the fallopian tubes by a catheter passed from the vagina through the cervix and uterus. This occurs successfully between 63% and 100% of the time. Once in place, the ingrowth continues over a period of three months, resulting in blockage in the fallopian tubes; the tissue barrier formed is supposed to prevent sperm from reaching an egg. During that intervening three-month period, women are advised to use an alternate contraceptive method. Unlike tubal ligation, it may not require a general anaesthetic (though it is often done under general anaesthetic). Despite this, some women have reported considerable pain during the procedure. In one 2007 prospective study, the mean time for the procedure was 6.8 minutes (range = 5–18 minutes) for a trained physician to perform. The procedure can be performed in a physician's office. The procedure is reported by the manufacturer to be permanent and not reversible. Nevertheless, several Essure reversals have been performed. Device The small, flexible inserts are made from polyester fibers, nickel-titanium, stainless steel and solder. The insert contains inner polyethylene terephthalate fibers to induce inflammation, causing a benign fibrotic ingrowth, and is held in place by a flexible stainless steel inner coil and a dynamic outer nickel titanium alloy coil. Unlike temporary methods of birth control, the Essure inserts do not contain or release hormones. The inserts do not prevent the transmission of sexually transmitted infections. Regulatory history A Facebook group called Essure Problems, which had 33,140 members (as of 04/03/2017), called the method "E-hell" and mentioned mostly pain, bleeding, bloating and other side effects from the device. Some women had coils break and perforate their internal organs, or conceived and gave birth to a child, in numbers well above what Bayer has been reporting. 
Erin Brockovich became involved in the controversy and hosts a website where women can share their stories after having the procedure. Since then, Bayer has provided two toll-free telephone numbers for patient complaints, advised that the adverse effects women are reporting are "consistent with clinical trials and consistent with what the FDA is seeing", and further insisted that it wanted to hear from any women experiencing problems with Essure. In April 2015, a group of six delegates from the Essure Problems group, including a doctor with Essure experience, spoke before 36 members of the FDA and the Congressional HELP committee regarding a citizen's petition filed with the FDA. The FDA began investigating the claims of the group's then more than 16,000 members, as well as the legalities of the approval process that Essure went through. As of 2015, one postmarketing study had not been published in the 13 years since the device was approved, and another postmarketing study also remained unpublished. FDA The product was approved by the FDA in 2002. In 2013, the product made news in North America, with women complaining of severe side effects leading to surgical extraction. According to one article, women who have gotten pregnant are naming these children e-babies. In October 2013, the FDA stated that since the product was approved in 2002 it had received 943 reports of adverse events related to Essure, mainly for pain (606 of the complaints). An additional 1,000 complaints have been sent to the FDA in a voluntary reporting system, but physicians are not obliged to report complaints. In June 2015, the FDA reported an investigation into Essure and its more than 5,000 complaints, seven reported deaths, and many additional side effects, all linked to Essure, its specific chemical composition, its improper placement and its insertion. The agency announced that its Obstetrics and Gynecology Devices Panel would conduct an evidence-based review of Essure's safety in September 2015 due to the rise in adverse event reports from only 950 reports between 2002 and October 2013, to more than 4,150, or 81 percent of the total, from October 2013 to June 2015. In February 2016, the FDA issued a "black box" label to warn the public about the harmful complications associated with the use of this device and requested that Bayer conduct a new postmarket surveillance study to follow 2,000 women for at least three years, comparing the effectiveness and safety of the device with other surgical contraceptive methods. Women and doctors were required to sign a decision checklist before Essure implantation, and to give consent to a test three months later to ensure the device was properly placed and functioning. In July 2020, Bayer published interim data from the FDA-mandated postmarket surveillance study comparing patients who received Essure to those who received a laparoscopic tubal ligation. The interim data reported the incidence of several side effects in each group. In Essure patients, chronic lower abdominal or pelvic pain occurred in 9% and abnormal bleeding in 16%, compared to 4.5% reporting pain and 10% with abnormal bleeding in the tubal ligation group. It also reported new allergic or hypersensitivity reactions in 22% of patients and no reports of new autoimmune disorders, although blinded independent verification was pending. Recruitment of patients receiving Essure into the postmarket surveillance study has ceased as the device is no longer available on the US market. 
Legal issues In August 2020, Bayer agreed to a US$1.6 billion settlement to resolve approximately 90% of the nearly 39,000 U.S. claims related to Essure. Bayer stated that the settlement did not imply any admission of wrongdoing or liability. In 2023, the device was the subject of a class action lawsuit in Australia. Over 1,000 women joined the suit, claiming that the device caused pain, suffering, and significant bleeding. The suit was dismissed in December 2024. References External links Essure Procedure by Erin Brockovich Drugs developed by Bayer Sterilization (medicine) Medical technology
Essure
Biology
2,363
34,175,231
https://en.wikipedia.org/wiki/Corosolic%20acid
Corosolic acid is a pentacyclic triterpene acid found in Lagerstroemia speciosa. It is similar in structure to ursolic acid, differing only in the fact that it has a 2-alpha-hydroxy group. References Triterpenes Organic acids
Corosolic acid
Chemistry
63
22,210
https://en.wikipedia.org/wiki/One-time%20pad
In cryptography, the one-time pad (OTP) is an encryption technique that cannot be cracked, but requires the use of a single-use pre-shared key that is larger than or equal to the size of the message being sent. In this technique, a plaintext is paired with a random secret key (also referred to as a one-time pad). Then, each bit or character of the plaintext is encrypted by combining it with the corresponding bit or character from the pad using modular addition. The resulting ciphertext will be impossible to decrypt or break if the following four conditions are met: The key must be at least as long as the plaintext. The key must be truly random. The key must never be reused in whole or in part. The key must be kept completely secret by the communicating parties. It has also been mathematically proven that any cipher with the property of perfect secrecy must use keys with effectively the same requirements as OTP keys. Digital versions of one-time pad ciphers have been used by nations for critical diplomatic and military communication, but the problems of secure key distribution make them impractical for most applications. First described by Frank Miller in 1882, the one-time pad was re-invented in 1917. On July 22, 1919, U.S. Patent 1,310,719 was issued to Gilbert Vernam for the XOR operation used for the encryption of a one-time pad. Derived from his Vernam cipher, the system was a cipher that combined a message with a key read from a punched tape. In its original form, Vernam's system was vulnerable because the key tape was a loop, which was reused whenever the loop made a full cycle. One-time use came later, when Joseph Mauborgne recognized that if the key tape were totally random, then cryptanalysis would be impossible. The "pad" part of the name comes from early implementations where the key material was distributed as a pad of paper, allowing the current top sheet to be torn off and destroyed after use. For concealment the pad was sometimes so small that a powerful magnifying glass was required to use it. The KGB used pads of such size that they could fit in the palm of a hand, or in a walnut shell. To increase security, one-time pads were sometimes printed onto sheets of highly flammable nitrocellulose, so that they could easily be burned after use. There is some ambiguity to the term "Vernam cipher" because some sources use "Vernam cipher" and "one-time pad" synonymously, while others refer to any additive stream cipher as a "Vernam cipher", including those based on a cryptographically secure pseudorandom number generator (CSPRNG). History Frank Miller in 1882 was the first to describe the one-time pad system for securing telegraphy. The next one-time pad system was electrical. In 1917, Gilbert Vernam (of AT&T Corporation) invented and later patented in 1919 () a cipher based on teleprinter technology. Each character in a message was electrically combined with a character on a punched paper tape key. Joseph Mauborgne (then a captain in the U.S. Army and later chief of the Signal Corps) recognized that the character sequence on the key tape could be completely random and that, if so, cryptanalysis would be more difficult. Together they invented the first one-time tape system. The next development was the paper pad system. Diplomats had long used codes and ciphers for confidentiality and to minimize telegraph costs. For the codes, words and phrases were converted to groups of numbers (typically 4 or 5 digits) using a dictionary-like codebook. 
For added security, secret numbers could be combined with (usually modular addition) each code group before transmission, with the secret numbers being changed periodically (this was called superencryption). In the early 1920s, three German cryptographers (Werner Kunze, Rudolf Schauffler, and Erich Langlotz), who were involved in breaking such systems, realized that they could never be broken if a separate randomly chosen additive number was used for every code group. They had duplicate paper pads printed with lines of random number groups. Each page had a serial number and eight lines. Each line had six 5-digit numbers. A page would be used as a work sheet to encode a message and then destroyed. The serial number of the page would be sent with the encoded message. The recipient would reverse the procedure and then destroy his copy of the page. The German foreign office put this system into operation by 1923. A separate notion was the use of a one-time pad of letters to encode plaintext directly as in the example below. Leo Marks describes inventing such a system for the British Special Operations Executive during World War II, though he suspected at the time that it was already known in the highly compartmentalized world of cryptography, as for instance at Bletchley Park. The final discovery was made by information theorist Claude Shannon in the 1940s who recognized and proved the theoretical significance of the one-time pad system. Shannon delivered his results in a classified report in 1945 and published them openly in 1949. At the same time, Soviet information theorist Vladimir Kotelnikov had independently proved the absolute security of the one-time pad; his results were delivered in 1941 in a report that apparently remains classified. There also exists a quantum analogue of the one time pad, which can be used to exchange quantum states along a one-way quantum channel with perfect secrecy, which is sometimes used in quantum computing. It can be shown that a shared secret of at least 2n classical bits is required to exchange an n-qubit quantum state along a one-way quantum channel (by analogue with the result that a key of n bits is required to exchange an n bit message with perfect secrecy). A scheme proposed in 2000 achieves this bound. One way to implement this quantum one-time pad is by dividing the 2n bit key into n pairs of bits. To encrypt the state, for each pair of bits i in the key, one would apply an X gate to qubit i of the state if and only if the first bit of the pair is 1, and apply a Z gate to qubit i of the state if and only if the second bit of the pair is 1. Decryption involves applying this transformation again, since X and Z are their own inverses. This can be shown to be perfectly secret in a quantum setting. Example Suppose Alice wishes to send the message hello to Bob. Assume two pads of paper containing identical random sequences of letters were somehow previously produced and securely issued to both. Alice chooses the appropriate unused page from the pad. The way to do this is normally arranged for in advance, as for instance "use the 12th sheet on 1 May", or "use the next available sheet for the next message". The material on the selected sheet is the key for this message. Each letter from the pad will be combined in a predetermined way with one letter of the message. (It is common, but not required, to assign each letter a numerical value, e.g., a is 0, b is 1, and so on.) 
In this example, the technique is to combine the key and the message using modular addition, not unlike the Vigenère cipher. The numerical values of corresponding message and key letters are added together, modulo 26. So, if key material begins with XMCKL and the message is hello, then the coding would be done as follows:

      h       e       l       l       o    message
   7 (h)   4 (e)  11 (l)  11 (l)  14 (o)   message
+ 23 (X)  12 (M)   2 (C)  10 (K)  11 (L)   key
=     30      16      13      21      25   message + key
=  4 (E)  16 (Q)  13 (N)  21 (V)  25 (Z)   (message + key) mod 26
      E       Q       N       V       Z    → ciphertext

If a number is larger than 25, then the remainder after subtraction of 26 is taken in modular arithmetic fashion. This simply means that if the computations "go past" Z, the sequence starts again at A. The ciphertext to be sent to Bob is thus EQNVZ. Bob uses the matching key page and the same process, but in reverse, to obtain the plaintext. Here the key is subtracted from the ciphertext, again using modular arithmetic:

      E       Q       N       V       Z    ciphertext
   4 (E)  16 (Q)  13 (N)  21 (V)  25 (Z)   ciphertext
− 23 (X)  12 (M)   2 (C)  10 (K)  11 (L)   key
=    −19       4      11      11      14   ciphertext − key
=  7 (h)   4 (e)  11 (l)  11 (l)  14 (o)   ciphertext − key (mod 26)
      h       e       l       l       o    → message

Similar to the above, if a number is negative, then 26 is added to make the number zero or higher. Thus Bob recovers Alice's plaintext, the message hello. Both Alice and Bob destroy the key sheet immediately after use, thus preventing reuse and an attack against the cipher. The KGB often issued its agents one-time pads printed on tiny sheets of flash paper, paper chemically converted to nitrocellulose, which burns almost instantly and leaves no ash. The classical one-time pad of espionage used actual pads of minuscule, easily concealed paper, a sharp pencil, and some mental arithmetic. The method can be implemented now as a software program, using data files as input (plaintext), output (ciphertext) and key material (the required random sequence). The exclusive or (XOR) operation is often used to combine the plaintext and the key elements, and is especially attractive on computers since it is usually a native machine instruction and is therefore very fast. It is, however, difficult to ensure that the key material is actually random, is used only once, never becomes known to the opposition, and is completely destroyed after use. The auxiliary parts of a software one-time pad implementation present real challenges: secure handling/transmission of plaintext, truly random keys, and one-time-only use of the key. Attempt at cryptanalysis To continue the example from above, suppose Eve intercepts Alice's ciphertext: EQNVZ. If Eve tried every possible key, she would find that the key XMCKL would produce the plaintext hello, but she would also find that the key TQURI would produce the plaintext later, an equally plausible message:

   4 (E)  16 (Q)  13 (N)  21 (V)  25 (Z)   ciphertext
− 19 (T)  16 (Q)  20 (U)  17 (R)   8 (I)   possible key
=    −15       0      −7       4      17   ciphertext − key
= 11 (l)   0 (a)  19 (t)   4 (e)  17 (r)   ciphertext − key (mod 26)

In fact, it is possible to "decrypt" out of the ciphertext any message whatsoever with the same number of characters, simply by using a different key, and there is no information in the ciphertext that will allow Eve to choose among the various possible readings of the ciphertext. If the key is not truly random, it is possible to use statistical analysis to determine which of the plausible keys is the "least" random and therefore more likely to be the correct one. 
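The example above can be checked with a few lines of code. The sketch below is a minimal illustration in Python, assuming the same letter-to-number convention (a = 0, b = 1, and so on); the helper names are invented for this illustration, and it omits everything a real pad needs, above all a truly random key at least as long as the message that is never reused.

```python
# Minimal sketch of the mod-26 one-time pad from the worked example above.

def to_nums(text):
    """Map letters (either case) to 0..25."""
    return [ord(c.upper()) - ord('A') for c in text]

def to_text(nums):
    return ''.join(chr(n + ord('A')) for n in nums)

def encrypt(message, key):
    # Add each key letter to the corresponding message letter, modulo 26.
    return to_text([(m + k) % 26 for m, k in zip(to_nums(message), to_nums(key))])

def decrypt(ciphertext, key):
    # Decryption subtracts the key, again modulo 26.
    return to_text([(c - k) % 26 for c, k in zip(to_nums(ciphertext), to_nums(key))])

ciphertext = encrypt("hello", "XMCKL")
print(ciphertext)                    # EQNVZ
print(decrypt(ciphertext, "XMCKL"))  # HELLO
print(decrypt(ciphertext, "TQURI"))  # LATER: a different key gives an equally plausible plaintext
```

The last line illustrates the point made above: without the key, the ciphertext is compatible with any five-letter message.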
If a key is reused, it will noticeably be the only key that produces sensible plaintexts from both ciphertexts (the chances of some random incorrect key also producing two sensible plaintexts are very slim). Perfect secrecy One-time pads are "information-theoretically secure" in that the encrypted message (i.e., the ciphertext) provides no information about the original message to a cryptanalyst (except the maximum possible length of the message). This is a very strong notion of security first developed during WWII by Claude Shannon and proved, mathematically, to be true for the one-time pad by Shannon at about the same time. His result was published in the Bell System Technical Journal in 1949. If properly used, one-time pads are secure in this sense even against adversaries with infinite computational power. Shannon proved, using information theoretic considerations, that the one-time pad has a property he termed perfect secrecy; that is, the ciphertext C gives absolutely no additional information about the plaintext. This is because (intuitively), given a truly uniformly random key that is used only once, a ciphertext can be translated into any plaintext of the same length, and all are equally likely. Thus, the a priori probability of a plaintext message M is the same as the a posteriori probability of a plaintext message M given the corresponding ciphertext. Conventional symmetric encryption algorithms use complex patterns of substitution and transpositions. For the best of these currently in use, it is not known whether there can be a cryptanalytic procedure that can efficiently reverse (or even partially reverse) these transformations without knowing the key used during encryption. Asymmetric encryption algorithms depend on mathematical problems that are thought to be difficult to solve, such as integer factorization or the discrete logarithm. However, there is no proof that these problems are hard, and a mathematical breakthrough could make existing systems vulnerable to attack. Given perfect secrecy, in contrast to conventional symmetric encryption, the one-time pad is immune even to brute-force attacks. Trying all keys simply yields all plaintexts, all equally likely to be the actual plaintext. Even with a partially known plaintext, brute-force attacks cannot be used, since an attacker is unable to gain any information about the parts of the key needed to decrypt the rest of the message. The parts of the plaintext that are known will reveal only the parts of the key corresponding to them, and they correspond on a strictly one-to-one basis; a uniformly random key's bits will be independent. Quantum cryptography and post-quantum cryptography involve studying the impact of quantum computers on information security. Quantum computers have been shown by Peter Shor and others to be much faster at solving some problems that the security of traditional asymmetric encryption algorithms depends on. The cryptographic algorithms that depend on these problems' difficulty would be rendered obsolete with a powerful enough quantum computer. One-time pads, however, would remain secure, as perfect secrecy does not depend on assumptions about the computational resources of an attacker. Problems Despite Shannon's proof of its security, the one-time pad has serious drawbacks in practice because it requires: Truly random, as opposed to pseudorandom, one-time pad values, which is a non-trivial requirement. 
Random number generation in computers is often difficult, and pseudorandom number generators are often used for their speed and usefulness for most applications. True random number generators exist, but are typically slower and more specialized. Secure generation and exchange of the one-time pad values, which must be at least as long as the message. This is important because the security of the one-time pad depends on the security of the one-time pad exchange. If an attacker is able to intercept the one-time pad value, they can decrypt messages sent using the one-time pad. Careful treatment to make sure that the one-time pad values continue to remain secret and are disposed of correctly, preventing any reuse (partially or entirely)—hence "one-time". Problems with data remanence can make it difficult to completely erase computer media. One-time pads solve few current practical problems in cryptography. High-quality ciphers are widely available and their security is not currently considered a major worry. Such ciphers are almost always easier to employ than one-time pads because the amount of key material that must be properly and securely generated, distributed and stored is far smaller. Additionally, public key cryptography overcomes the problem of key distribution. True randomness High-quality random numbers are difficult to generate. The random number generation functions in most programming language libraries are not suitable for cryptographic use. Even those generators that are suitable for normal cryptographic use, including /dev/random and many hardware random number generators, may make some use of cryptographic functions whose security has not been proven. An example of a technique for generating pure randomness is measuring radioactive emissions. In particular, one-time use is absolutely necessary. For example, if m1 and m2 represent two distinct plaintext messages and they are each encrypted by a common key k, then the respective ciphertexts are given by: c1 = m1 ⊕ k and c2 = m2 ⊕ k, where ⊕ means XOR. If an attacker were to have both ciphertexts c1 and c2, then simply taking the XOR of c1 and c2 yields the XOR of the two plaintexts, m1 ⊕ m2. (This is because taking the XOR of the common key k with itself yields a constant bitstream of zeros.) m1 ⊕ m2 is then the equivalent of a running key cipher. If both plaintexts are in a natural language (e.g., English or Russian), each stands a very high chance of being recovered by heuristic cryptanalysis, with possibly a few ambiguities. Of course, a longer message can only be broken for the portion that overlaps a shorter message, plus perhaps a little more by completing a word or phrase. The most famous exploit of this vulnerability occurred with the Venona project. Key distribution Because the pad, like all shared secrets, must be passed and kept secure, and the pad has to be at least as long as the message, there is often no point in using a one-time pad, as one can simply send the plain text instead of the pad (as both can be the same size and have to be sent securely). However, once a very long pad has been securely sent (e.g., a computer disk full of random data), it can be used for numerous future messages, until the sum of the messages' sizes equals the size of the pad. Quantum key distribution also proposes a solution to this problem, assuming fault-tolerant quantum computers. Distributing very long one-time pad keys is inconvenient and usually poses a significant security risk. 
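The key-reuse failure described above is easy to reproduce. The following is a rough Python sketch, with os.urandom standing in for a true random source (it is an operating-system CSPRNG, not true randomness, so this is a simplification): encrypting two messages with the same pad lets an eavesdropper XOR the two ciphertexts and obtain the XOR of the two plaintexts without ever seeing the pad.

```python
import os

def xor_bytes(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

m1 = b"attack at dawn"
m2 = b"attack at dusk"

# The pad must be at least as long as each message; os.urandom is only a
# stand-in for a true random source here.
pad = os.urandom(len(m1))

c1 = xor_bytes(m1, pad)
c2 = xor_bytes(m2, pad)   # WRONG: the same pad is used twice

# The eavesdropper never sees the pad, yet XORing the ciphertexts cancels it:
leak = xor_bytes(c1, c2)
assert leak == xor_bytes(m1, m2)
print(leak)  # zero bytes where the plaintexts agree, non-zero where they differ
```

With real-language plaintexts, this residue is usually enough for the kind of heuristic cryptanalysis mentioned above.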
The pad is essentially the encryption key, but unlike keys for modern ciphers, it must be extremely long and is far too difficult for humans to remember. Storage media such as thumb drives, DVD-Rs or personal digital audio players can be used to carry a very large one-time-pad from place to place in a non-suspicious way, but the need to transport the pad physically is a burden compared to the key negotiation protocols of a modern public-key cryptosystem. Such media cannot reliably be erased securely by any means short of physical destruction (e.g., incineration). A 4.7 GB DVD-R full of one-time-pad data, if shredded into particles 1 mm² in size, leaves over 4 megabits of data on each particle. In addition, the risk of compromise during transit (for example, a pickpocket swiping, copying and replacing the pad) is likely to be much greater in practice than the likelihood of compromise for a cipher such as AES. Finally, the effort needed to manage one-time pad key material scales very badly for large networks of communicants—the number of pads required goes up as the square of the number of users freely exchanging messages. For communication between only two persons, or a star network topology, this is less of a problem. The key material must be securely disposed of after use, to ensure the key material is never reused and to protect the messages sent. Because the key material must be transported from one endpoint to another, and persist until the message is sent or received, it can be more vulnerable to forensic recovery than the transient plaintext it protects (because of possible data remanence). Authentication As traditionally used, one-time pads provide no message authentication, the lack of which can pose a security threat in real-world systems. For example, an attacker who knows that the message contains "meet jane and me tomorrow at three thirty pm" can derive the corresponding codes of the pad directly from the two known elements (the encrypted text and the known plaintext). The attacker can then replace that text by any other text of exactly the same length, such as "three thirty meeting is cancelled, stay home". The attacker's knowledge of the one-time pad is limited to this byte length, which must be maintained for any other content of the message to remain valid. This is different from malleability where the plaintext is not necessarily known. Without knowing the message, the attacker can also flip bits in a message sent with a one-time pad, without the recipient being able to detect it. Because of their similarities, attacks on one-time pads are similar to attacks on stream ciphers. Standard techniques to prevent this, such as the use of a message authentication code, can be used along with a one-time pad system to prevent such attacks, as can classical methods such as variable length padding and Russian copulation, but they all lack the perfect security the OTP itself has. Universal hashing provides a way to authenticate messages up to an arbitrary security bound (i.e., for any p > 0, a large enough hash ensures that even a computationally unbounded attacker's likelihood of successful forgery is less than p), but this uses additional random data from the pad, and some of these techniques remove the possibility of implementing the system without a computer. 
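The substitution attack sketched above can be made concrete. The snippet below is an illustrative Python sketch, not a real protocol; the two message strings are the ones quoted above and happen to be the same length, which is what the attack requires. Knowing the plaintext lets an attacker recover the pad bytes for that span and splice in a different message, which the recipient decrypts with no sign of tampering.

```python
import os

def xor_bytes(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

plaintext = b"meet jane and me tomorrow at three thirty pm"
pad = os.urandom(len(plaintext))           # stand-in for a true random pad
ciphertext = xor_bytes(plaintext, pad)

# The attacker knows the plaintext but not the pad.  XORing the two recovers
# exactly the pad bytes used for this span...
recovered_pad = xor_bytes(ciphertext, plaintext)

# ...which lets the attacker re-encrypt any message of the same length.
forged = b"three thirty meeting is cancelled, stay home"
assert len(forged) == len(plaintext)
forged_ciphertext = xor_bytes(forged, recovered_pad)

# The recipient decrypts with the genuine pad and sees only the forged text.
print(xor_bytes(forged_ciphertext, pad))
```

A one-time MAC, as mentioned above, is the standard way to detect this kind of modification while keeping unconditional security.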
Common implementation errors Due to its relative simplicity of implementation, and due to its promise of perfect secrecy, the one-time-pad enjoys high popularity among students learning about cryptography, especially as it is often the first algorithm to be presented and implemented during a course. Such "first" implementations often break the requirements for information theoretical security in one or more ways: The pad is generated via some algorithm that expands one or more small values into a longer "one-time-pad". This applies equally to all algorithms, from insecure basic mathematical operations like square root decimal expansions, to complex, cryptographically secure pseudorandom number generators (CSPRNGs). None of these implementations are one-time-pads; they are stream ciphers by definition. All one-time pads must be generated by a non-algorithmic process, e.g. by a hardware random number generator. The pad is exchanged using non-information-theoretically secure methods. If the one-time-pad is encrypted with a non-information-theoretically secure algorithm for delivery, the security of the cryptosystem is only as secure as the insecure delivery mechanism. A common flawed delivery mechanism for one-time-pad is a standard hybrid cryptosystem that relies on symmetric key cryptography for pad encryption, and asymmetric cryptography for symmetric key delivery. Common secure methods for one-time pad delivery are quantum key distribution, a sneakernet or courier service, or a dead drop. The implementation does not feature an unconditionally secure authentication mechanism such as a one-time MAC. The pad is reused (exploited during the Venona project, for example). The pad is not destroyed immediately after use. Uses Applicability Despite its problems, the one-time-pad retains some practical interest. In some hypothetical espionage situations, the one-time pad might be useful because encryption and decryption can be computed by hand with only pencil and paper. Nearly all other high quality ciphers are entirely impractical without computers. In the modern world, however, computers (such as those embedded in mobile phones) are so ubiquitous that possessing a computer suitable for performing conventional encryption (for example, a phone that can run concealed cryptographic software) will usually not attract suspicion. The one-time-pad is the optimum cryptosystem with theoretically perfect secrecy. The one-time-pad is one of the most practical methods of encryption where one or both parties must do all work by hand, without the aid of a computer. This made it important in the pre-computer era, and it could conceivably still be useful in situations where possession of a computer is illegal or incriminating or where trustworthy computers are not available. One-time pads are practical in situations where two parties in a secure environment must be able to depart from one another and communicate from two separate secure environments with perfect secrecy. The one-time-pad can be used in superencryption. The algorithm most commonly associated with quantum key distribution is the one-time pad. The one-time pad is mimicked by stream ciphers. Numbers stations often send messages encrypted with a one-time pad. Quantum and post-quantum cryptography A common use of the one-time pad in quantum cryptography is in association with quantum key distribution (QKD). 
QKD is typically associated with the one-time pad because it provides a way of distributing a long shared secret key securely and efficiently (assuming the existence of practical quantum networking hardware). A QKD algorithm uses properties of quantum mechanical systems to let two parties agree on a shared, uniformly random string. Algorithms for QKD, such as BB84, are also able to determine whether an adversarial party has been attempting to intercept key material, and allow for a shared secret key to be agreed upon with relatively few messages exchanged and relatively low computational overhead. At a high level, the schemes work by taking advantage of the destructive way quantum states are measured to exchange a secret and detect tampering. In the original BB84 paper, it was proven that the one-time pad, with keys distributed via QKD, is a perfectly secure encryption scheme. However, this result depends on the QKD scheme being implemented correctly in practice. Attacks on real-world QKD systems exist. For instance, many systems do not send a single photon (or other object in the desired quantum state) per bit of the key because of practical limitations, and an attacker could intercept and measure some of the photons associated with a message, gaining information about the key (i.e. leaking information about the pad), while passing along unmeasured photons corresponding to the same bit of the key. Combining QKD with a one-time pad can also loosen the requirements for key reuse. In 1982, Bennett and Brassard showed that if a QKD protocol does not detect that an adversary was trying to intercept an exchanged key, then the key can safely be reused while preserving perfect secrecy. The one-time pad is an example of post-quantum cryptography, because perfect secrecy is a definition of security that does not depend on the computational resources of the adversary. Consequently, an adversary with a quantum computer would still not be able to gain any more information about a message encrypted with a one time pad than an adversary with just a classical computer. Historical uses One-time pads have been used in special circumstances since the early 1900s. In 1923, they were employed for diplomatic communications by the German diplomatic establishment. The Weimar Republic Diplomatic Service began using the method in about 1920. The breaking of poor Soviet cryptography by the British, with messages made public for political reasons in two instances in the 1920s (ARCOS case), appear to have caused the Soviet Union to adopt one-time pads for some purposes by around 1930. KGB spies are also known to have used pencil and paper one-time pads more recently. Examples include Colonel Rudolf Abel, who was arrested and convicted in New York City in the 1950s, and the 'Krogers' (i.e., Morris and Lona Cohen), who were arrested and convicted of espionage in the United Kingdom in the early 1960s. Both were found with physical one-time pads in their possession. A number of nations have used one-time pad systems for their sensitive traffic. Leo Marks reports that the British Special Operations Executive used one-time pads in World War II to encode traffic between its offices. One-time pads for use with its overseas agents were introduced late in the war. A few British one-time tape cipher machines include the Rockex and Noreen. The German Stasi Sprach Machine was also capable of using one time tape that East Germany, Russia, and even Cuba used to send encrypted messages to their agents. 
The World War II voice scrambler SIGSALY was also a form of one-time system. It added noise to the signal at one end and removed it at the other end. The noise was distributed to the channel ends in the form of large shellac records that were manufactured in unique pairs. There were both starting synchronization and longer-term phase drift problems that arose and had to be solved before the system could be used. The hotline between Moscow and Washington D.C., established in 1963 after the 1962 Cuban Missile Crisis, used teleprinters protected by a commercial one-time tape system. Each country prepared the keying tapes used to encode its messages and delivered them via their embassy in the other country. A unique advantage of the OTP in this case was that neither country had to reveal more sensitive encryption methods to the other. U.S. Army Special Forces used one-time pads in Vietnam. By using Morse code with one-time pads and continuous wave radio transmission (the carrier for Morse code), they achieved both secrecy and reliable communications. Starting in 1988, the African National Congress (ANC) used disk-based one-time pads as part of a secure communication system between ANC leaders outside South Africa and in-country operatives as part of Operation Vula, a successful effort to build a resistance network inside South Africa. Random numbers on the disk were erased after use. A Belgian flight attendant acted as courier to bring in the pad disks. A regular resupply of new disks was needed as they were used up fairly quickly. One problem with the system was that it could not be used for secure data storage. Later Vula added a stream cipher keyed by book codes to solve this problem. A related notion is the one-time code—a signal, used only once; e.g., "Alpha" for "mission completed", "Bravo" for "mission failed" or even "Torch" for "Allied invasion of French Northern Africa" cannot be "decrypted" in any reasonable sense of the word. Understanding the message will require additional information, often 'depth' of repetition, or some traffic analysis. However, such strategies (though often used by real operatives, and baseball coaches) are not a cryptographic one-time pad in any significant sense. NSA At least into the 1970s, the U.S. National Security Agency (NSA) produced a variety of manual one-time pads, both general purpose and specialized, with 86,000 one-time pads produced in fiscal year 1972. Special purpose pads were produced for what the NSA called "pro forma" systems, where "the basic framework, form or format of every message text is identical or nearly so; the same kind of information, message after message, is to be presented in the same order, and only specific values, like numbers, change with each message." Examples included nuclear launch messages and radio direction finding reports (COMUS). General purpose pads were produced in several formats, a simple list of random letters (DIANA) or just numbers (CALYPSO), tiny pads for covert agents (MICKEY MOUSE), and pads designed for more rapid encoding of short messages, at the cost of lower density. One example, ORION, had 50 rows of plaintext alphabets on one side and the corresponding random cipher text letters on the other side. By placing a sheet on top of a piece of carbon paper with the carbon face up, one could circle one letter in each row on one side and the corresponding letter on the other side would be circled by the carbon paper. Thus one ORION sheet could quickly encode or decode a message up to 50 characters long. 
Production of ORION pads required printing both sides in exact registration, a difficult process, so NSA switched to another pad format, MEDEA, with 25 rows of paired alphabets and random characters. (See Commons:Category:NSA one-time pads for illustrations.) The NSA also built automated systems for the "centralized headquarters of CIA and Special Forces units so that they can efficiently process the many separate one-time pad messages to and from individual pad holders in the field". During World War II and into the 1950s, the U.S. made extensive use of one-time tape systems. In addition to providing confidentiality, circuits secured by one-time tape ran continually, even when there was no traffic, thus protecting against traffic analysis. In 1955, NSA produced some 1,660,000 rolls of one time tape. Each roll was 8 inches in diameter, contained 100,000 characters, lasted 166 minutes and cost $4.55 to produce. By 1972, only 55,000 rolls were produced, as one-time tapes were replaced by rotor machines such as SIGTOT, and later by electronic devices based on shift registers. The NSA describes one-time tape systems like 5-UCO and SIGTOT as being used for intelligence traffic until the introduction of the electronic cipher based KW-26 in 1957. Exploits While one-time pads provide perfect secrecy if generated and used properly, small mistakes can lead to successful cryptanalysis: In 1944–1945, the U.S. Army's Signals Intelligence Service was able to solve a one-time pad system used by the German Foreign Office for its high-level traffic, codenamed GEE. GEE was insecure because the pads were not sufficiently random—the machine used to generate the pads produced predictable output. In 1945, the US discovered that Canberra–Moscow messages were being encrypted first using a code-book and then using a one-time pad. However, the one-time pad used was the same one used by Moscow for Washington, D.C.–Moscow messages. Combined with the fact that some of the Canberra–Moscow messages included known British government documents, this allowed some of the encrypted messages to be broken. One-time pads were employed by Soviet espionage agencies for covert communications with agents and agent controllers. Analysis has shown that these pads were generated by typists using actual typewriters. This method is not truly random, as it makes the pads more likely to contain certain convenient key sequences more frequently. This proved to be generally effective because the pads were still somewhat unpredictable because the typists were not following rules, and different typists produced different patterns of pads. Without copies of the key material used, only some defect in the generation method or reuse of keys offered much hope of cryptanalysis. Beginning in the late 1940s, US and UK intelligence agencies were able to break some of the Soviet one-time pad traffic to Moscow during WWII as a result of errors made in generating and distributing the key material. One suggestion is that Moscow Centre personnel were somewhat rushed by the presence of German troops just outside Moscow in late 1941 and early 1942, and they produced more than one copy of the same key material during that period. This decades-long effort was finally codenamed VENONA (BRIDE had been an earlier name); it produced a considerable amount of information. Even so, only a small percentage of the intercepted messages were either fully or partially decrypted (a few thousand out of several hundred thousand). The one-time tape systems used by the U.S. 
employed electromechanical mixers to combine bits from the message and the one-time tape. These mixers radiated considerable electromagnetic energy that could be picked up by an adversary at some distance from the encryption equipment. This effect, first noticed by Bell Labs during World War II, could allow interception and recovery of the plaintext of messages being transmitted, a vulnerability code-named Tempest. See also Agrippa (A Book of the Dead) Information theoretic security Numbers station One-time password Session key Steganography Tradecraft Unicity distance No-hiding theorem Notes References Further reading External links Detailed description and history of One-time Pad with examples and images on Cipher Machines and Cryptology The FreeS/WAN glossary entry with a discussion of OTP weaknesses Information-theoretically secure algorithms Stream ciphers Cryptography 1882 introductions
One-time pad
Mathematics,Engineering
7,536
5,549,818
https://en.wikipedia.org/wiki/Figure%20of%20merit
A figure of merit (FOM) is a metric that characterizes the performance of a device, system, or method, relative to its alternatives. Examples Accuracy of a rifle Audio amplifier figures of merit such as gain or efficiency Battery life of a laptop computer Calories per serving Clock rate of a CPU is often given as a figure of merit, but is of limited use in comparing between different architectures. FLOPS may be a better figure, though these too are not completely representative of the performance of a CPU. Contrast ratio of an LCD Frequency response of a speaker Fill factor of a solar cell Resolution of the image sensor in a digital camera Measure of the detection performance of a sonar system, defined as the propagation loss for which a 50% detection probability is achieved Noise figure of a radio receiver The thermoelectric figure of merit, zT, a material constant proportional to the efficiency of a thermoelectric couple made with the material The figure of merit of a digital-to-analog converter, calculated as (power dissipation)/(2^ENOB × effective bandwidth) [J/Hz] Luminous efficacy of lighting Profit of a company Residual noise remaining after compensation in an aeromagnetic survey Heat absorption and transfer quality for a solar cooker Computational benchmarks are synthetic figures of merit that summarize the speed of algorithms or computers in performing various typical tasks. References Engineering ratios
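As a small numerical illustration of one of the entries above, the sketch below (Python; the converter figures are made up purely for the example) evaluates the digital-to-analog converter figure of merit, power dissipation divided by 2^ENOB times the effective bandwidth.

```python
def dac_figure_of_merit(power_w: float, enob: float, bandwidth_hz: float) -> float:
    """Energy per effective conversion step: P / (2**ENOB * bandwidth)."""
    return power_w / (2 ** enob * bandwidth_hz)

# Hypothetical converter: 10 mW dissipation, 11.3 effective bits, 50 MHz bandwidth.
print(dac_figure_of_merit(10e-3, 11.3, 50e6))  # roughly 8e-14, lower is better
```

Lower values indicate that the converter delivers its effective resolution and bandwidth for less power.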
Figure of merit
Mathematics,Engineering
286
48,937,744
https://en.wikipedia.org/wiki/Nilestriol
Nilestriol (brand name Wei Ni An; developmental code name LY-49825), also known as nylestriol, is a synthetic estrogen which was patented in 1971 and is marketed in China. It is the 3-cyclopentyl ether of ethinylestriol, and is also known as ethinylestriol cyclopentyl ether (EE3CPE). Nilestriol is a prodrug of ethinylestriol, and is a more potent estrogen in comparison. It is described as a slowly-metabolized, long-acting estrogen and derivative of estriol. Nilestriol was assessed in combination with levonorgestrel for the potential treatment of postmenopausal osteoporosis, but this formulation ultimately was not marketed. See also List of estrogen esters § Ethers of steroidal estrogens References Cyclopentyl ethers Diols Estranes Estrogen ethers Prodrugs Synthetic estrogens Triols
Nilestriol
Chemistry
225
3,295,260
https://en.wikipedia.org/wiki/Testosterone%20propionate
Testosterone propionate, sold under the brand name Testoviron among others, is an androgen and anabolic steroid (AAS) medication which is used mainly in the treatment of low testosterone levels in men. It has also been used to treat breast cancer in women. It is given by injection into muscle, usually once every two to three days. Side effects of testosterone propionate include symptoms of masculinization like acne, increased hair growth, voice changes, and increased sexual desire. Testosterone supplementation is also known to reduce the threshold for aggressive behavior in men. The drug is a synthetic androgen and anabolic steroid and hence is an agonist of the androgen receptor (AR), the biological target of androgens like testosterone and dihydrotestosterone (DHT). It has strong androgenic effects and moderate anabolic effects, which make it useful for producing masculinization and suitable for androgen replacement therapy. Testosterone propionate is a testosterone ester and a relatively short-acting prodrug of testosterone in the body. Because of this, it is considered to be a natural and bioidentical form of testosterone. Testosterone propionate was discovered in 1936 and was introduced for medical use in 1937. It was the first testosterone ester to be marketed, and was the major form of testosterone used in medicine until about 1960. The introduction of longer-acting testosterone esters like testosterone enanthate, testosterone cypionate, and testosterone undecanoate starting in the 1950s resulted in testosterone propionate mostly being superseded. As such, it is rarely used today. In addition to its medical use, testosterone propionate is used to improve physique and performance. The drug is a controlled substance in many countries and so non-medical use is generally illicit. Medical uses Testosterone propionate is used primarily in androgen replacement therapy. It is specifically approved for the treatment of hypogonadism in men, breast cancer, low sexual desire, delayed puberty in boys, and menopausal symptoms. Available forms Testosterone propionate is usually provided as an oil solution for use by intramuscular injection. It was also previously available as a 30 mg or 50 mg aqueous suspension. Buccal tablets of testosterone propionate were previously available as well. Side effects Side effects of testosterone propionate include virilization, among others. Testosterone propionate is often a painful injection, which is attributed to its short ester chain. Pharmacology Pharmacodynamics Testosterone propionate is a prodrug of testosterone and is an androgen and anabolic–androgenic steroid (AAS). That is, it is an agonist of the androgen receptor (AR). Pharmacokinetics Testosterone propionate is administered in oil via intramuscular injection. It has a relatively short elimination half-life and mean residence time of 2 days and 4 days, respectively. As such, it has a short duration of action and must be administered two to three times per week. Intramuscular injection of testosterone propionate as an oil solution, aqueous suspension, and emulsion has been compared. Chemistry Testosterone propionate, or testosterone 17β-propanoate, is a synthetic androstane steroid and a derivative of testosterone. It is an androgen ester; specifically, it is the C17β propionate (propanoate) ester of testosterone. History Testosterone esters were synthesized for the first time in 1936, and were found to have greatly improved potency relative to testosterone. 
Among the esters synthesized, testosterone propionate was the most potent, and for this reason, was selected for further development, subsequently being marketed. Testosterone propionate was introduced in 1937 by Schering AG in Germany under the brand name Testoviron. It was the first commercially available form of testosterone, and the first testosterone ester, to be introduced. The medication was the major form of testosterone used medically before 1960. Buccal testosterone propionate tablets were introduced for medical use in the mid-to-late 1940s under the brand name Oreton Buccal Tablets. An aqueous suspension of testosterone propionate was marketed by Ciba by 1950. In the 1950s, longer-acting testosterone esters like testosterone enanthate and testosterone cypionate were introduced and superseded testosterone propionate. Although rarely used nowadays due to its short duration, testosterone propionate remains medically available. Society and culture Generic names Testosterone propionate is the generic name of the drug and its and . It has also been referred to as testosterone propanoate or as propionyltestosterone. Brand names Testosterone propionate is or has been marketed under a variety of brand names, including, among numerous others: Agrovirin Andronate Andrusol-P Anertan Masenate Neo-Hombreol Oreton Perandren Synandrol Testoviron Availability Testosterone propionate is no longer available commercially in the United States except via a compounding pharmacy. Legal status Testosterone propionate, along with other AAS, is a schedule III controlled substance in the United States under the Controlled Substances Act and a schedule IV controlled substance in Canada under the Controlled Drugs and Substances Act. References Anabolic–androgenic steroids Androstanes Ketones Propionate esters Testosterone esters
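A minimal illustrative sketch (not from the article): first-order elimination with repeated dosing, assuming a simple one-compartment model. The ~2-day elimination half-life is the figure quoted in the pharmacokinetics section above; the dose level, 3-day dosing interval, and number of doses are purely hypothetical, chosen only to show why a short half-life forces injections two to three times per week.

# Sketch under the assumptions stated above; half-life from the article, everything else illustrative.
import math

HALF_LIFE_DAYS = 2.0                      # elimination half-life quoted in the article
K_ELIM = math.log(2) / HALF_LIFE_DAYS     # first-order elimination rate constant

def remaining_fraction(t_days: float) -> float:
    """Fraction of a single dose still present t_days after injection."""
    return math.exp(-K_ELIM * t_days)

def trough_accumulation(interval_days: float, n_doses: int) -> float:
    """Relative trough level (in units of one dose) just before the next injection."""
    r = remaining_fraction(interval_days)
    # geometric series: r + r^2 + ... + r^n
    return r * (1 - r**n_doses) / (1 - r)

if __name__ == "__main__":
    print(f"{remaining_fraction(3):.2f} of a dose remains after 3 days")
    print(f"steady-state trough with a 3-day interval ≈ {trough_accumulation(3.0, 20):.2f} doses")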
Testosterone propionate
Chemistry
1,122
21,837,511
https://en.wikipedia.org/wiki/P-constrained%20group
In mathematics, a p-constrained group is a finite group resembling the centralizer of an element of prime order p in a group of Lie type over a finite field of characteristic p. They were introduced in order to extend some of Thompson's results about odd groups to groups with dihedral Sylow 2-subgroups. Definition If a group G has trivial p′-core Op′(G), then it is defined to be p-constrained if the p-core Op(G) contains its centralizer, or in other words if its generalized Fitting subgroup is a p-group. More generally, if Op′(G) is non-trivial, then G is called p-constrained if G/Op′(G) is p-constrained. All p-solvable groups are p-constrained. See also p-stable group The ZJ theorem has p-constraint as one of its conditions. References Finite groups Properties of groups
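For readability, here is the same definition restated in LaTeX (standard notation; this merely formalizes the two conditions given above, with Op′(G) the p′-core and Op(G) the p-core of the finite group G).

% Restatement of the definition above (sketch, standard notation).
\[
\text{If } O_{p'}(G) = 1:\qquad
G \text{ is } p\text{-constrained} \iff C_G\bigl(O_p(G)\bigr) \le O_p(G).
\]
\[
\text{In general:}\qquad
G \text{ is } p\text{-constrained} \iff G/O_{p'}(G) \text{ is } p\text{-constrained}.
\]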
P-constrained group
Mathematics
185
19,732,644
https://en.wikipedia.org/wiki/HD%2060532%20c
HD 60532 c is an extrasolar planet located approximately 84 light-years away in the constellation of Puppis, orbiting the star HD 60532. The planet has a true mass 7.46 times that of Jupiter, orbits at 1.58 AU, and takes 607 days to complete one revolution on an eccentric orbit. It was discovered on September 22, 2008 at La Silla Observatory using the HARPS spectrograph. On the same day, the second planet in this system, HD 60532 b, was also discovered; the two planets are in a 3:1 orbital resonance. References External links Exoplanets discovered in 2008 Giant planets Puppis Exoplanets detected by radial velocity
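A rough consistency check (illustrative only): Kepler's third law relates the 1.58 AU semi-major axis and 607-day period quoted above. The stellar mass is not given in the article; the value of about 1.44 solar masses used below is an assumed figure, so the result is only indicative.

# Sketch: Kepler's third law check with an ASSUMED stellar mass of 1.44 solar masses.
import math

A_AU = 1.58            # semi-major axis from the article
M_STAR_SOLAR = 1.44    # assumed stellar mass (solar masses), not from the article

# P^2 = a^3 / M  with P in years, a in AU, M in solar masses (planet mass neglected)
period_years = math.sqrt(A_AU**3 / M_STAR_SOLAR)
print(f"predicted period ≈ {period_years * 365.25:.0f} days (article: 607 days)")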
HD 60532 c
Astronomy
142
250,884
https://en.wikipedia.org/wiki/NGC%202346
NGC 2346 is a planetary nebula near the celestial equator in the constellation of Monoceros, less than a degree to the ESE of Delta Monocerotis. It is informally known as the Butterfly Nebula. The nebula is bright and conspicuous with a visual magnitude of 9.6, and has been extensively studied. Among its most remarkable characteristics are its unusually cool central star, which is a spectroscopic binary, and its unusual shape. The nebula is bipolar in form, with modest outflow velocities in the range of 8–11 km/s, while the center is girded by an expanding belt of molecular gas. The electron density of the nebula is on the order of 400 per cubic centimeter. The ionization of the nebula is the result of ultraviolet emission from the binary companion. The stronger infrared molecular emission comes from the belt, which is expanding at the rate of . The mass of the molecular gas in the nebula is estimated to be in the range of 0.34–, and is much greater than the mass of the ionized gas. The central star is a binary star consisting of an A-type subgiant and a subdwarf O star. The system, which has an orbital period of  days, is also variable, probably due to dust in orbit around it. The dust itself is heated by the central star and so NGC 2346 is unusually bright in the infrared part of the spectrum. When one of the two stars evolved into a red giant, it engulfed its companion, which stripped away a ring of material from the larger star's atmosphere. When the red giant's core was exposed, a fast stellar wind inflated two ‘bubbles’ from either side of the ring. Gallery References External links Planetary nebulae Spectroscopic binaries Monoceros 2346
NGC 2346
Astronomy
372
17,781,024
https://en.wikipedia.org/wiki/List%20of%20troglobites
A troglobite (or, formally, troglobiont) is an animal species, or population of a species, strictly bound to underground habitats, such as caves. These are separate from species that mainly live in above-ground habitats but are also able to live underground (eutroglophiles), and species that are only cave visitors (subtroglophiles and trogloxenes). Land-dwelling troglobites may be referred to as troglofauna, while aquatic species may be called stygofauna, although for these animals the term stygobite is preferable. Troglobites typically have evolutionary adaptations to cave life. Examples of such adaptations include slow metabolism, reduced energy consumption, better food usage efficiency, decrease or loss of eyesight (anophthalmia), and depigmentation (absence of pigment in the integument). Conversely, as opposed to lost or reduced functions, many species have evolved elongated antenna and locomotory appendages, in order to better move around and respond to environmental stimuli. These structures are also full of chemical, tactile, and humidity receptors. Troglobites commonly do not survive well outside caves and therefore cannot travel between separate cave systems. As a result, many troglobiotic species are endemic to a single cave or system of caves. Not all cave dwelling species are considered to be troglobites. An animal found in an underground environment may be a troglophile (a species living both in subterranean and in epigean habitats, e.g. bats and cave swallows) or a trogloxene (a species only occurring sporadically in a hypogean habitat and unable to establish a subterranean population). Flatworms Hausera hauseri Mollusca Bivalvia Congeria jalzici (cave clam) Congeria kusceri Congeria mualomerovici Eupera troglobia Gastropoda Foushee Cavesnail (Amnicola cora) Angustopila psammion Tumbling Creek cavesnail (Antrobia culveri) Manitou cavesnail (Antrorbis breweri) Cecilioides Phantom cave snail (Cochliopa texana) Peck's cave snail (Glyphyalinia pecki) Maitai Cave snail (Hadopyrgus ngataana) Laoennea renouardi Neritilia mimotoi Mimic cavesnail (Phreatodrobia imitata) Cave physa (Physella spelunca) Pisulina maxima Tashan cave snail (Trogloiranica tashanica) Zospeum tholussum Velvet worms White cave velvet worm (Peripatopsis alba) Speleoperipatus spelaeus Arthropoda Arachnida Acontius stercoricola Adelocosa anops – Kauaʻi cave wolf spider Anapistula (various spp.) Apneumonella (various spp.) Agraecina cristiani – Movile cave spider Anopsolobus subterraneus Aops oncodactylus – Barrow Island cave scorpion Apochthonius mysterius – Mystery cave pseudoscorpion Apochthonius typhlus – Stone County cave pseudoscorpion Baiami (various spp.) Calicina cloughensis Charinus spelaeus Chinquipellobunus madlae Chthonius (various spp.) Cicurina madla – Madla Cave meshweaver Cybaeus (various spp.) Cybaeozyga heterops Cycloctenus (various spp.) Desognanops humphreysi Dysderoides (various spp.) Gamasomorpha (various spp.) Herpyllus (various spp.) Hesperochernes occidentalis – guano pseudoscorpion Hexathele cavernicola Hickmania troglodytes Hongkongia (various spp.) Lycosa howarthi Lygromma sp. Hormurus polisorum – Christmas Island cave scorpion Maymena (various spp.) Mesostalita nocturna Mimetus strinatii Mundochthonius cavernicolus – cavernicolous pseudoscorpion Neobisium maritimum Nukuhiva adamsoni Olin platnicki Ossinissa justoi Parobisium yosemite – Yosemite cave pseudoscorpion Phanetta subterranea Phanotea (various spp.) 
Phyxelida makapanensis Porrhomma cavernicola – cavernicolous Porrhomma spider Porrhomma rosenhaueri Sinopoda scurion – Eyeless huntsman spider Spelocteniza ashmolei Spelungula cavernicola – Nelson cave spider Stalita taenaria Stiphidion (various spp.) Tartarus (various spp.) Telema (various spp.) Telemofila (various spp.) Tengella (various spp.) Texella reddelli Titanobochica magna – cave pseudoscorpion Toxopsiella (various spp.) Trichopelma Troglodiplura beirutpakbarai Troglodiplura challeni Troglodiplura harrisi Troglodiplura lowryi Troglodiplura samankunani Troglokhammouanus steineri – Xe Bang Fai cave scorpion Trogloneta (various spp.) Trogloraptor marchingtoni Troglothele Usofila (various spp.) Vietbocap lao – Nam Lot cave scorpion Wanops coecus Myriapoda Millipedes Causeyella species Chaetaspis aleyorum – Aleys' cave millipede Chersoiulus sphinx Desmoxytes Mammamia profuga Polydesmus subterraneus Sinocallipus Tetracion Tingupa pallida Titanophyllum spiliarum Trichopetalum whitei Zosteractis interminata Centipedes Cryptops speleorex Eupolybothrus cavernicolus Scolopocryptops troglocaudatus Crustacea Crayfish Others Allocrangonyx hubrichti – Hubricht's long-tailed amphipod Alpioniscus strasseri Andhracoides gebaueri– Belum cave isopod Andhracoides shabuddin– Guthikonda cave isopod Androniscus dentiger – rosy woodlouse Bactrurus brachycaudus – short-tailed groundwater amphipod Bactrurus hubrichti – sword-tail cave amphipod Bactrurus pseudomucronatus – false sword-tailed cave amphipod Barburia yanezi Caecidotea antricola – cave isopod Caecidotea dimorpha – Missouri cave isopod Caecidotea fustis – Fustis cave isopod Caecidotea salemensis – Salem cave isopod Caecidotea serrata – serrated cave isopod Caecidotea stiladactyla – slender-fingered cave isopod Caecidotea stygia – stygian cave isopod Cancrocaeca Cerberusa caeca Chaceus caecus Cyclops vernalis Diacyclops yeatmani – Yeatman's groundwater copepod Gammarus acherondytes – Illinois cave amphipod Holoped amazonicum Lirceus usdagalun – Lee County cave isopod Macromaxillocaris Munidopsis polymorpha – Blind albino cave crab Niphargus species Orcovita hickski Orcovita orchardorum Palaemonias alabamae – Alabama cave shrimp Palaemonias ganteri – Kentucky cave shrimp Phasmon typhlops Samarplax principe Spelaeorchestia koloana Speocirolana Stygiocaris Stygobromus barri – Barr's cave amphipod Stygobromus clantoni – Clanton's cave amphipod Stygobromus heteropodus – Pickle Springs amphipod Stygobromus onondagaensis – Onondaga cave amphipod Stygobromus ozarkensis – Ozark cave amphipod Stygobromus parvus – minute cave amphipod Stygobromus subtilis – subtle cave amphipod Teretamon spelaeum Troglocaris Typhlatya Typhlocaris Typhlocirolana Typhlopseudothelphusa Villalobosius lopezformenti Yucatalana Insecta Fish List of cave fish Amphibians Cave salamanders Mammals No known mammals live exclusively in caves. Most bats sleep in caves during the day and hunt at night, but they are considered troglophiles or trogloxenes. However some fossorials which spend their whole lives underground might be considered subterranean fauna, although they are not true troglofauna as they do not live in caves. Echinodermata Asterinides sp. Copidaster cavernicola – Cozumel's cave sea star Ophionereis commutabilis Porifera Eunapius subterraneus Racekiela cavernicola Annelida Erpobdella borisi Erpobdella mestrovi Haemopis caeca Marifugia cavatica See also Subterranean fauna Troglofauna In popular culture There is a zombie named the Troglobite in Plants vs. Zombies 2. 
References Cave animals Troglobites
List of troglobites
Biology
2,042
6,353,617
https://en.wikipedia.org/wiki/Induction%20sealing
Induction sealing is the process of bonding thermoplastic materials by induction heating. This involves controlled heating an electrically conducting object (usually aluminum foil) by electromagnetic induction, through heat generated in the object by eddy currents. Induction sealing is used in many types of manufacturing. In packaging, it is used for package fabrication such as forming tubes from flexible materials, attaching plastic closures to package forms, etc. Perhaps the most common use of induction sealing is cap sealing, a non-contact method of heating an inner seal to hermetically seal the top of plastic and glass containers. This sealing process takes place after the container has been filled and capped. Sealing process The closure is supplied to the bottler with an aluminum foil layer liner already inserted. Although there are various liners to choose from, a typical induction liner is multi-layered. The top layer is a paper pulp that is generally spot-glued to the cap. The next layer is wax that is used to bond a layer of aluminum foil to the pulp. The bottom layer is a polymer film laminated to the foil. After the cap or closure is applied, the container passes under an induction coil, which emits an oscillating electromagnetic field. As the container passes under the induction coil (sealing head), the conductive aluminum foil liner begins to heat as a result of the eddy currents being induced. The heat melts the wax, which is absorbed into the pulp backing and releases the foil from the cap. The polymer film also heats and flows onto the lip of the container. When cooled, the polymer creates a bond with the container resulting in a hermetically sealed product. Neither the container nor its contents are negatively affected, and the heat generated does not harm the contents. It is possible to overheat the foil and thereby cause damage to the seal layer and to any protective barriers. This could result in faulty seals, even weeks after the initial sealing process, so proper sizing of the induction sealing is vital to determine the exact system necessary to run a particular product. Sealing can be done with either a handheld unit or on a conveyor system. A more recent development (which better suits a small number of applications) allows for induction sealing to be used to apply a foil seal to a container without the need for a closure. In this case, foil is supplied pre-cut or in a reel. Where supplied in a reel, it is die cut and transferred onto the container neck. When the foil is in place, it is pressed down by the seal head, the induction cycle is activated, and the seal is bonded to the container. This process is known as direct application or sometimes "capless" induction sealing. Potential uses There are a variety of reasons companies choose to use induction sealing: Tamper evidence Leak prevention Freshness retention Protection against package pilferage Sustainability Production speed Tamper evidence With the U.S. Food and Drug Administration (FDA) regulations concerning tamper-resistant packaging, pharmaceutical packagers must find ways to comply as outlined in Sec. 450.500 Tamper-Resistant Packaging Requirements for Certain over-the-counter (OTC) Human Drug Products (CPG 7132a.17). Induction sealing systems meet or exceed these government regulations. As stated in section 6 of Packaging Systems: "...6. CONTAINER MOUTH INNER SEALS. Paper, thermal plastic, plastic film, foil, or a combination thereof, is sealed to the mouth of a container (e.g., bottle) under the cap. 
The seal must be torn or broken to open the container and remove the product. The seal cannot be removed and reapplied without leaving visible evidence of entry. Seals applied by heat induction to plastic containers appear to offer a higher degree of tamper-resistance than those that depend on an adhesive to create the bond..." Leak prevention/protection Some shipping companies require liquid chemical products to be sealed prior to shipping to prevent hazardous chemicals from spilling on other shipments. Freshness Induction sealing keeps unwanted pollutants from seeping into food products, and may assist in extending shelf life of certain products. Pilferage protection Induction-sealed containers help prevent the product from being broken into by leaving a noticeable residue on plastic containers from the liner itself. Pharmaceutical companies purchase liners that will purposely leave liner film/foil residue on bottles. Food companies that use induction seals do not want the liner residue as it could potentially interfere with the product itself upon dispensing. They, in turn, put a notice on the product that it has been induction-sealed for their protection; letting the consumer know it was sealed upon leaving the factory and they should check for an intact seal before using. Sustainability In some applications, induction sealing can be considered to contribute towards sustainability goals by allowing lower bottle weights as the pack relies on the presence of an induction foil seal for its security, rather than a mechanically strong bottle neck and closure. Induction heating analysis Some manufacturers have produced devices which can monitor the magnetic field strength present at the induction head (either directly or indirectly via such mechanisms as pick up coils), dynamically predicting the heating effect in the foil. Such devices provide quantifiable data post-weld in a production environment where uniformity – particularly in parameters such as foil peel-off strength, is important. Analysers may be portable or designed to work in conjunction with conveyor belt systems. High speed power analysis techniques (voltage and current measurement in near real time) can be used to intercept power delivery from mains to generator or generator to head in order to calculate energy delivered to the foil and the statistical profile of that process. As the thermal capacity of the foil is typically static, such information as true power, apparent power and power factor may be used to predict foil heating with good relevance to final weld parameters and in a dynamic manner. Many other derivative parameters may be calculated for each weld, yielding confidence in a production environment that is notably more difficult to achieve in conduction transfer systems, where analysis, if present is generally post-weld as relatively large thermal mass of heating and conduction elements combined impair rapid temperature change. Inductive heating with quantitative feedback such as that provided by power analysis techniques further allows for the possibility of dynamic adjustments in energy delivery profile to the target. This opens the possibility of feed-forward systems where the induction generator properties are adjusted in near real-time as the heating process proceeds, allowing for a specific heating profile track and subsequent compliance feedback – something that is not generally practical for conduction heating processes. Benefits of induction vs. 
conduction sealing Conduction sealing requires a hard metal plate to make perfect contact with the container being sealed. Conduction sealing systems delay production time because of required system warm-up time. They also have complex temperature sensors and heaters. Unlike conduction sealing systems, induction sealing systems require very little power resources, deliver instant startup time, and have a sealing head which can conform to "out of specification" containers when sealing. Induction sealing also offers advantages when sealing to glass: Using a conduction sealer to seal a simple foil structure to glass gives no tolerance or compressibility to allow for any irregularity in the glass surface finish. With an induction sealer, the contact face can be of a compressible material, ensuring a perfect bond each time. Real-world applications of induction sealing Induction sealing is broadly used to preserve the freshness and integrity of various products, such as: Sauces, jams, and condiments Dairy products, including milk and yogurt containers Beverages, such as water, coffee, tea, and juices Nutritional supplements Ready-to-eat meals, soups, and other food products Pet food and animal care products Medications and other pharmeceuticals Cosmetics, such as shampoos and conditioners Skin care products, such as lotions, creams, and serums Various chemical substances, including cleaning agents, pesticides, and fertilizers Pastes, adhesives, sealants, paints, and coatings Lubricants and greases Sporting goods supplies History 1957–1958 – Original concept and method for induction sealing is conceived and proven by Jack Palmer (a process engineer at that time for the FR Corporation – Bronx, NY) as a means of solving liquid leakage from polyethylene bottles during shipment 1960 – is awarded to Jack Palmer, in which his concept and process of induction sealing is made public Mid-1960s – Induction sealing is used worldwide 1973 – First solid-state cap sealer introduced 1982 – Chicago Tylenol murders 1983 – First transistorized air-cooled power supply for induction cap sealing 1985 – Universal coil technology debuted 1992 – Water-cooled, IGBT-based sealer introduced 1997 – Waterless cap sealers introduced by Auto-Mate Technologies - The Originator of the Patented "Waterless Cap Sealer (half the size and relatively maintenance-free) 2003 - Auto-Mate Technologies SMART-SEAL, Full-Featured "Waterless Cap Sealers" 2004 – 6 kW system introduced, Auto-Mate Technologies adds inspection capabilities to the induction sealing machine; making it a combination inspection machine for inspection of crooked caps, missing foils, checking for sealed caps, conveyor monitoring, rejecting etc. References Further reading Yam, K. L., "Encyclopedia of Packaging Technology", John Wiley & Sons, 2009, External links FDA’s regulations concerning tamper-resistant packaging http://www.enerconind.co.uk/ Induction heating Packaging machinery
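As a rough illustration of the power-analysis monitoring described in the induction heating analysis section above, the sketch below computes true power, apparent power, power factor, and delivered energy from sampled voltage and current waveforms. The waveforms, sample rate, and printed values are hypothetical and are not taken from any real cap-sealing system; this is only a sketch of the arithmetic involved, not a vendor implementation.

# Illustrative sketch only: power metrics from sampled v(t), i(t) over one weld cycle.
import math

def power_metrics(volts, amps, sample_rate_hz):
    """Return (true_power_W, apparent_power_VA, power_factor, energy_J)."""
    n = len(volts)
    true_power = sum(v * i for v, i in zip(volts, amps)) / n      # mean of p(t) = v(t)*i(t)
    v_rms = math.sqrt(sum(v * v for v in volts) / n)
    i_rms = math.sqrt(sum(i * i for i in amps) / n)
    apparent = v_rms * i_rms
    pf = true_power / apparent if apparent else 0.0
    energy = true_power * (n / sample_rate_hz)                    # average power × weld duration
    return true_power, apparent, pf, energy

if __name__ == "__main__":
    fs, f_mains, weld_s = 10_000, 50, 0.5                         # hypothetical sampling setup
    t = [k / fs for k in range(int(fs * weld_s))]
    v = [325 * math.sin(2 * math.pi * f_mains * x) for x in t]         # ~230 V RMS supply
    i = [10 * math.sin(2 * math.pi * f_mains * x - 0.3) for x in t]    # slightly lagging current
    p, s, pf, e = power_metrics(v, i, fs)
    print(f"true power {p:.0f} W, apparent {s:.0f} VA, PF {pf:.2f}, energy {e:.0f} J")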
Induction sealing
Engineering
1,935
52,348,753
https://en.wikipedia.org/wiki/Uvitonic%20acid
Uvitonic acid (6-methyl-2,4-pyridinedicarboxylic acid) is an organic compound with the formula CH3C5H2N(COOH)2. The acid is a pyridine analogue of the benzene derivative uvitic acid. Under normal conditions, the acid is a white crystalline substance. Preparation Uvitonic acid is obtained by the action of ammonia on pyruvic acid. See also Uvitic acid References Dicarboxylic acids Pyridines
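A small arithmetic sketch based on the formula given above, CH3C5H2N(COOH)2 (i.e. C8H7NO4): summing standard atomic weights gives the approximate molar mass. The atomic weights used are conventional reference values; the result is approximate.

# Molar mass from the formula quoted above (C8H7NO4), using standard atomic weights.
ATOMIC_WEIGHT = {"C": 12.011, "H": 1.008, "N": 14.007, "O": 15.999}
COMPOSITION = {"C": 8, "H": 7, "N": 1, "O": 4}   # CH3 + C5H2N + 2 × COOH

molar_mass = sum(ATOMIC_WEIGHT[el] * n for el, n in COMPOSITION.items())
print(f"molar mass of uvitonic acid ≈ {molar_mass:.2f} g/mol")   # ≈ 181.15 g/mol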
Uvitonic acid
Chemistry
111
76,679,651
https://en.wikipedia.org/wiki/Barssia%20yezomontana
Barssia yezomontana is a species of fungus from the genus Barssia. It was first described by Kobayasi in 1938. Distribution Barssia yezomontana is primarily found in northern Japan; it was originally found in Yezo (Hokkaido). References Pezizales Fungus species Fungi of Japan
Barssia yezomontana
Biology
63
42,682,329
https://en.wikipedia.org/wiki/Environmental%20Sciences%20Europe
Environmental Sciences Europe is a peer-reviewed scientific journal covering all aspects of environmental science. It was established in 1989 as Umweltwissenschaften und Schadstoff-Forschung (German for Environmental Science and Pollution Research), obtaining its current name in 2011. It is published by Springer Science+Business Media and the editor-in-chief is Henner Hollert (RWTH Aachen University). Since 2011, the journal has been open access. Abstracting and indexing The journal is abstracted and indexed in: According to the Journal Citation Reports, the journal has a 2021 impact factor of 5.481, ranking it 79th out of 279 journals in the category "Environmental Sciences". Controversies In June 2014 ESE republished the retracted paper in question in the Séralini affair, which had been originally published in Food and Chemical Toxicology in September 2012 and then retracted in November 2013. References External links Springer Science+Business Media academic journals Academic journals established in 1989 Environmental science journals English-language journals Creative Commons Attribution-licensed journals Open access journals
Environmental Sciences Europe
Environmental_science
220
1,112,432
https://en.wikipedia.org/wiki/Unordered%20pair
In mathematics, an unordered pair or pair set is a set of the form {a, b}, i.e. a set having two elements a and b with a ≠ b, where {a, b} = {b, a}. In contrast, an ordered pair (a, b) has a as its first element and b as its second element, which means (a, b) ≠ (b, a). While the two elements of an ordered pair (a, b) need not be distinct, modern authors only call {a, b} an unordered pair if a ≠ b. But for a few authors a singleton is also considered an unordered pair, although today, most would say that {a, a} is a multiset. It is typical to use the term unordered pair even in the situation where the elements a and b could be equal, as long as this equality has not yet been established. A set with precisely two elements is also called a 2-set or (rarely) a binary set. An unordered pair is a finite set; its cardinality (number of elements) is 2 or (if the two elements are not distinct) 1. In axiomatic set theory, the existence of unordered pairs is required by an axiom, the axiom of pairing. More generally, an unordered n-tuple is a set of the form {a1, a2, ..., an}. Notes References Basic concepts in set theory
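Since the article appeals to the axiom of pairing, here is its standard statement in LaTeX (a sketch in the usual first-order notation; it merely formalizes the existence claim made above).

% Standard statement of the axiom of pairing referred to above.
\[
\forall a\,\forall b\,\exists C\,\forall x\;\bigl(x \in C \iff (x = a \lor x = b)\bigr),
\qquad\text{so that } C = \{a, b\}.
\]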
Unordered pair
Mathematics
312
1,868,210
https://en.wikipedia.org/wiki/Leading%20zero
A leading zero is any 0 digit that comes before the first nonzero digit in a number string in positional notation. For example, James Bond's famous identifier, 007, has two leading zeros. Any zeroes appearing to the left of the first non-zero digit (of any integer or decimal) do not affect its value, and can be omitted (or replaced with blanks) with no loss of information. Therefore, the usual decimal notation of integers does not use leading zeros except for the zero itself, which would be denoted as an empty string otherwise. However, in decimal fractions strictly between −1 and 1, the leading zero digits between the decimal point and the first nonzero digit are necessary for conveying the magnitude of a number and cannot be omitted, while trailing zeros – zeros occurring after the decimal point and after the last nonzero digit – can be omitted without changing the meaning. Occurrence Often, leading zeros are found on non-electronic digital displays or on such electronic ones as seven-segment displays that contain fixed sets of digits. These devices include manual counters, stopwatches, odometers, and digital clocks. Leading zeros are also generated by many older computer programs when creating values to assign to new records, accounts and other files, and as such are likely to be used by utility billing systems, human resources information systems and government databases. Many digital cameras and other electronic media recording devices use leading zeros when creating and saving new files to make the names of equal length. Leading zeros are also present whenever the number of digits is fixed by the technical system (such as in a memory register), but the stored value is not large enough to result in a non-zero most significant digit. The count leading zeros operation efficiently determines the number of leading zero bits in a machine word. Leading zeros can be meaningful for various reasons: in data where, for any reason, a standard data length is required or expected, e.g. in identifiers such as James Bond's 007; in cases where the digit does not represent a value but a distinguishing character, for example in telephone numbers; in numerical codes where the meaning of digits is dependent on their position. A leading zero appears in roulette in the United States, where "00" is distinct from "0" (a wager on "0" will not win if the ball lands in "00", and vice versa). Sports where competitors are numbered follow this as well; a stock car numbered "07" would be considered distinct from one numbered "7". Benito Santiago, a Major League Baseball catcher who wore the number 09 for several years, is the only major professional sports league player to use a jersey number with a leading zero, not counting several who have worn the number 00 (he wore the extra zero to avoid complications with his catcher's pads, allowing the back strap to run between the numbers instead of over a single digit 9). Dennis Rodman had requested the number 01 when he joined the Chicago Bulls (as his usual number 10 had already been retired), but the National Basketball Association forbade it, and Rodman instead wore 91. In most countries other than the United States, numbers between 0 and 1, expressed as a decimal, include a zero before the decimal point (e.g. 0.64 or in many countries 0,64) while in the United States this zero is often omitted (.64).
Advantages Collation Leading zeros are used to make ascending order of numbers correspond with alphabetical order: e.g., 11 comes alphabetically before 2, but after 02. (See, e.g., ISO 8601.) This does not work with negative numbers, though, whether leading zeros are used or not: −23 comes alphabetically after −01, −1, and −22, although it is less than all of them. Error prevention Leading zeros also make it less likely that a careless reader will overlook the decimal point. For example, in modern pharmacy there is a widely followed convention that leading zeros before a decimal must not be omitted from any dose or dosage value in drug prescribing (e.g. 0.2 mg must be used, not .2 mg). Meanwhile, trailing zeros are forbidden (e.g. 2 mg must be used, not 2.0 mg). In both cases, the intention is to prevent misreading and the resultant misdose by one or several orders of magnitude. Fraud prevention Leading zeros can also be used to prevent fraud by filling in character positions that might normally be empty. For example, adding leading zeros to the amount of a check (or similar financial document) makes it more difficult for fraudsters to alter the amount of the check before presenting it for payment. Zero as a prefix A prefix 0 is used in C to specify string representations of octal numbers, as required by the ANSI C standard for the strtol() function (which converts strings to long integers) in the <stdlib.h> library. Many other programming languages, such as Python, Perl, Ruby, PHP, and the Unix shell bash also follow this specification for converting strings to numbers. As an example, "0020" does not represent the decimal number 20 (2×10^1 + 0×10^0), but rather the octal number 20, equal to decimal 16 (2×8^1 + 0×8^0 = 1×10^1 + 6×10^0). Decimal numbers written with leading zeros will be interpreted as octal by languages that follow this convention and will generate errors if they contain "8" or "9", since these digits do not exist in octal. This behavior can be a nuisance when working with sequences of strings with embedded, zero-padded decimal numbers (typically file names) to facilitate alphabetical sorting (see above) or when validating inputs from users who would not know that adding a leading zero triggers this base conversion. In Czechia, a zero prefix was formerly used as one of the ways to indicate the type of a house number. Conscription house numbers (čísla popisná) are used as the standard house numbers. However, for temporary and recreational structures, a special number series of registration house numbers (čísla evidenční) is used. This type is distinguished by a prefix (0, E or N), by a distinguishing text or abbreviation, or by the color of the sign. See also Trailing zero 00 (disambiguation) Leading digit References Computer data Digital electronics Numeral systems 0 (number)
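A minimal sketch of the collation point made above: zero-padded numbers keep string (alphabetical) order consistent with numeric order. The file names here are made up purely for illustration.

# Sketch: zero padding makes alphabetical sorting of file names match numeric order.
names = ["file2.txt", "file11.txt", "file1.txt"]
padded = ["file{:03d}.txt".format(int(n[4:-4])) for n in names]   # pad the embedded number

print(sorted(names))    # ['file1.txt', 'file11.txt', 'file2.txt']  -- 11 sorts before 2
print(sorted(padded))   # ['file001.txt', 'file002.txt', 'file011.txt']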
Leading zero
Mathematics,Technology,Engineering
1,352
2,114,529
https://en.wikipedia.org/wiki/Atat%C3%BCrk%20Dam
The Atatürk Dam (), originally the Karababa Dam, is the third largest dam in the world and it is a zoned rock-fill dam with a central core on the Euphrates River on the border of Adıyaman Province and Şanlıurfa Province in the Southeastern Anatolia Region of Turkey. Built both to generate electricity and to irrigate the plains in the region, it was renamed in honour of Mustafa Kemal Atatürk (1881–1938), the founder of the Turkish Republic. The construction began in 1983 and was completed in 1990. The dam and the hydroelectric power plant, which went into service after the upfilling of the reservoir was completed in 1992, are operated by the State Hydraulic Works (DSİ). The reservoir created behind the dam, called Atatürk Reservoir (), is the third largest in Turkey. The dam is situated northwest of Bozova, Şanlıurfa Province, on state road D-875 from Bozova to Adıyaman. Centerpiece of the 22 dams on the Euphrates and the Tigris, which comprise the integrated, multi-sector, Southeastern Anatolia Project (, known as GAP), it is one of the world's largest dams. The Atatürk Dam, one of the five operational dams on the Euphrates as of 2008, was preceded by Keban and Karakaya dams upstream and followed by Birecik and the Karkamış dams downstream. Two more dams on the river have been under construction. The dam embankment is and . The hydroelectric power plant (HEPP) has a total installed power capacity of 2,400 MW and generates 8,900 GW·h electricity annually. The total cost of the dam project was about . The dam was depicted on the reverse of the Turkish one-million-lira banknotes of 1995–2005 and of the 1 new lira banknote of 2005–2009. Dam The initial development project for the southeastern region of Turkey was presented in 1970. As the objectives for regional development have changed significantly and the ambitions have grown in the 1970s, the original plan underwent major modifications. The most important change in the project was abandoning the Middle Karababa Dam design, and adopting the design of the Atatürk Dam to increase the storage and power generation capacities of the dam. Dolsar Engineering and ATA Construction, two prominent Turkish companies, signed for the building of the dam. The construction of the cofferdam began in 1985 and was completed in 1987. The fill work for the main dam lasted from 1987 to 1990. The Atatürk Dam, listed in international construction publications as the world's largest construction site, was completed in a world record time of around 50 months. The rock-fill dam undergoes deformations that are regularly and systematically monitored since 1990 with different types of sensors. It is estimated that the central portion of the dam crest has settled by around since the end of the construction. Settlement of the dam crest up to has been measured since the start of the detailed geodetic monitoring in 1992. The maximum horizontal (radial) deformation measured is about . The permeation grouting work was carried out by subcontractor Solétanche Bachy and the rehabilitation work for the post-tensioning of the dam crest with ground anchors by Vorspann System Losinger International (VSL). Hydroelectric power plant The HEPP of the Atatürk Dam is the biggest of a series of 19 power plants of the GAP project. It consists of eight Francis turbine and generator groups of 300 MW each, supplied by Sulzer Escher Wyss and ABB Asea Brown Boveri respectively. 
The up to steel pressure pipes (penstocks) with a total weight of 26.600 tons were supplied and installed by the German NOELL company (today DSD NOELL). The power plant's first two power units came on line in 1992 and it became fully operational in December 1993. The HEPP can generate 8,900 GWh of electricity annually. Its capacity makes up around one third of the total capacity of the GAP project. During the periods of low demand for electricity, only one of the eight units of the HEPP is in operation while in times of high demand, all the eight units are in operation. Hence, depending upon the energy demand and the state of the interconnected system, the amount of water to be released from the HEPP might vary between 200 and 2,000 m3/s in one day. Irrigation Originating in the mountains of eastern Anatolia and flowing southwards to Syria and Iraq, the Euphrates and the Tigris are very irregular rivers, used to cause great problems each year with droughts in summer and flooding in winter. The water of the Euphrates River is regulated by means of large reservoirs of the Keban and Atatürk Dams. However, the waters released from the HEPPs of those dams also need to be regulated. The Birecik and the Karkamış Dams downstream the Atatürk Dam are constructed for the purpose of harnessing the waters released from large-scale dams and HEPPs. Nearly of arable land in the Şanlıurfa-Harran and Mardin-Ceylanpınar plains in upper Mesopotamia is being irrigated via gravity-flow with water diverted from the Atatürk Dam through the Şanlıurfa Tunnels system, which consists of two parallel tunnels, each long and in diameter. The flow rate of water through the tunnels is about , which makes one-third of the total flow of the Euphrates. The tunnels are the largest in the world, in terms of length and flow rate, built for irrigation purposes. The first tunnel was completed in 1995 and the other in 1996. The reservoir behind the dam will irrigate another 406,000 ha by pumping for a total of 882,000 ha. The Atatürk Dam and the Şanlıurfa Tunnel system are two major components of the GAP project. Irrigation started in the Harran Plain in the spring of 1995. The impact of the irrigation on the economy of the region is significant. In ninety percent of the irrigated area, cotton is planted. Irrigation expansion within the Harran plains also increased Southeastern Anatolia's cotton production from 164,000 to 400,000 metric tons in 2001, or nearly sixty percent. With almost 50% share of the country's cotton production, the region developed to the leader in Turkey. Reservoir lake The Atatürk Reservoir, extending over an area of with a water volume of 48.7 km3 (63,400 million cu yd), ranks third in size in Turkey after Lake Van and Lake Tuz. The reservoir water level touched amsl in 1994. Since then, it varies between 526 and 537 m amsl. The full reservoir level is , and the minimum operation level is amsl. Some 10 towns and 156 villages of three provinces are located around the Lake Atatürk Dam. The lake provides a fisheries and recreation site. For transportation purposes, several ferries have been operated in the reservoir. The reservoir lake is called "sea" by local people. Geostrategic importance About 90% of Euphrates' total annual flow originates in Turkey, while the remaining part is added in Syria, but nothing is contributed further downstream in Iraq. In general, the stream varies greatly in its flow from season to season and year to year. 
As an example, the annual flow at the border with Syria ranged from in 1961 to in 1963. One of the most important legal texts on the waters of the Euphrates-Tigris river system is the protocol annexed to the 1946 Treaty of Friendship and Good Neighborly Relations between Iraq and Turkey. The protocol provided the control and management of the Euphrates and the Tigris depending to a large extent on the regulations of flow in Turkish source areas. Turkey agreed to begin monitoring the two border-crossing rivers and to share related data with Iraq. In 1980, Turkey and Iraq further specified the nature of the earlier protocol by forming a joint committee on technical issues, which Syria joined later in 1982 as well. Turkey unilaterally guaranteed to allow 15.75 km3/year (500 m3/s) of water across the border to Syria without any formal agreement on the sharing of the Euphrates water. Mid-January 1990, when the first phase of the dam was completed, Turkey held back the flow of the Euphrates entirely for a month to begin filling up the reservoir. Turkey had notified Syria and Iraq by November 1989 of her decision to fill the reservoir over a period of one month explaining the technical reasons and providing a detailed program for making up for the losses. The downstream neighbors protested vehemently. At this point, the Atatürk Dam has cut the flow from the Euphrates by about a third. Syria and Iraq claim to be suffering severe water shortages due to the GAP development. Both countries allege that Turkey is intentionally withholding supplies from its downstream neighbors, turning water into a weapon. Turkey denies these claims, and insists it has always supplied its southern neighbors with the promised minimum of . It argues that Iraq and Syria in fact benefit from the regulated water by the dams as they protect all three riparian countries from seasonal droughts and floods. Two damaging earthquakes of M w 5.5 and M w 5.1 occurred in the town of Samsat near the Atatürk Reservoir in 2017 and 2018, respectively. The spatio-temporal evolution of seismicity and its source properties in relation to the temporal water-level variations and the stresses resulting from surface loading and pore-pressure diffusion shows the water-level and seismicity rate are anti-correlated in this dam, which is explained by the stabilization effect of the gravitational induced stress imposed by water loading on the local faults. The overall effective stress in the seismogenic zone increased over decades due to pore-pressure diffusion, explaining the enhanced background seismicity during recent years. See also Nissibi Euphrates Bridge References External links GAP official website Dams in Şanlıurfa Province Dams in Adıyaman Province Hydroelectric power stations in Turkey Irrigation in Turkey Southeastern Anatolia Project Dams on the Euphrates River Water politics in the Middle East Syria–Turkey relations Dams completed in 1992 Rock-filled dams Crossings of the Euphrates
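A back-of-the-envelope arithmetic check using the figures quoted earlier for the power plant (2,400 MW installed capacity, 8,900 GWh generated annually): the implied capacity factor is about 42%. The short sketch below only restates that arithmetic.

# Capacity factor implied by the installed capacity and annual generation quoted above.
installed_mw = 2400
annual_gwh = 8900

max_possible_gwh = installed_mw * 8760 / 1000        # MW × hours per year → GWh
capacity_factor = annual_gwh / max_possible_gwh
print(f"implied capacity factor ≈ {capacity_factor:.0%}")   # ≈ 42%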
Atatürk Dam
Engineering
2,118
9,321,714
https://en.wikipedia.org/wiki/Sterol%20carrier%20protein
Sterol carrier proteins (also known as nonspecific lipid transfer proteins) is a family of proteins that transfer steroids and probably also phospholipids and gangliosides between cellular membranes. These proteins are different from plant nonspecific lipid transfer proteins but structurally similar to small proteins of unknown function from Thermus thermophilus. This domain is involved in binding sterols. The human sterol carrier protein 2 (SCP2) is a basic protein that is believed to participate in the intracellular transport of cholesterol and various other lipids. Human proteins containing this domain HSD17B4; HSDL2; SCP2; STOML1; See also Steroidogenic acute regulatory protein and START domain References External links Sterol carrier proteins in SCOP SCP-2 sterol transfer family in Pfam Peripheral membrane proteins Protein domains Protein families Water-soluble transporters
Sterol carrier protein
Biology
194
70,564,555
https://en.wikipedia.org/wiki/Ultrasound-switchable%20fluorescence%20imaging
Ultrasound-switchable fluorescence (USF) imaging is a deep-tissue optical imaging technique. In the last few decades, fluorescence microscopy has been highly developed to image biological samples and live tissues. However, due to light scattering, fluorescence microscopy is limited to shallow tissues (about 1 mm). Since fluorescence offers high contrast, high sensitivity, and low cost, which are crucial for investigating deep-tissue information, developing a fluorescence imaging technique with a high depth-to-resolution ratio would be promising. Recently, ultrasound-switchable fluorescence imaging has been developed to achieve high signal-to-noise ratio (SNR) and high spatial resolution imaging without sacrificing image depth. Basic principle The theoretical model was first proposed by Yuan in 2009; he developed an ultrasound-modulated fluorescence technique based on a fluorophore-quencher-labeled microbubble system which can control the fluorescent emission inside the ultrasound focal zone to increase the spatial resolution and SNR of the imaging. In terms of the USF imaging principle, a short ultrasound pulse is applied to activate the fluorescent emission inside the ultrasound focal volume without triggering fluorescence outside of the ultrasound focal volume. Thus, the fluorophore distribution in the ultrasound focal zone can be distinguished and imaged by scanning the target. Two basic elements are required in the USF imaging technique: the first is unique USF contrast agents whose fluorescence emission can be controlled by a focused ultrasound wave; the second is a sensitive USF imaging system to detect the signal and suppress the background noise. Imaging contrast agents At present, two types of contrast agents have been developed. Fluorophore-quencher-labeled microbubble The first type is the fluorophore-quencher-labeled microbubble, which was first proposed by Yuan in 2009 and further developed by Liu et al. in 2014. The basic principle of this type of contrast agent is to change the fluorophore concentration on the microbubble surface. In 2000, Morgan et al. found that a negative ultrasound pressure wave can make the microbubble several times bigger. As a result, the distance between quencher and fluorophore on the microbubble surface becomes larger (the concentration of the fluorophore on the surface is reduced), which means the quenching efficiency is greatly decreased and the fluorophore shows high emission efficiency (ON state). Microbubbles outside the ultrasound focal zone keep the same small size during the whole process, so the quenching efficiency there is always high enough to suppress the fluorophore emission (OFF state). Fluorophore-labeled thermosensitive polymers or fluorophore-encapsulated nanoparticles (NPs) The second type of contrast agent is fluorophore-labeled thermosensitive polymers or fluorophore-encapsulated nanoparticles (NPs). The critical part of this kind of agent is the combination of a thermo-sensitive carrier and an environment-sensitive (usually polarity-sensitive) fluorophore labeled on it. When the environment temperature is under a certain threshold (Tth1), the polarity of the carrier is such that the fluorophore labeled on it shows quite low emission efficiency (OFF state). When focused ultrasound is applied, the focal zone is heated above a temperature threshold (Tth2) and the structure of the thermo-sensitive carrier changes, which changes its polarity as well; therefore, the polarity-sensitive fluorophore is switched on.
During the whole process, the fluorophore outside of the ultrasound focal zone remains switched off because the temperature there stays under Tth1. USF imaging system The purpose of the USF imaging system is to sensitively detect the USF signal and dramatically suppress the background noise. The imaging system first dramatically increases the detection sensitivity by adopting a lock-in amplifier and a cooled photomultiplier tube (PMT). Then the system uses a correlation algorithm to distinguish the USF signal from the background noise. Also, it detects only the change of the fluorescence signal caused by the ultrasound: the modulated-frequency excitation laser keeps running all the time, and the ultrasound-induced temperature rise changes the amplitude of the fluorescence signal at the modulation frequency. After interfering with a phase-locked reference signal, the lock-in amplifier reports the USF signal. The system can also reduce laser leakage by using several emission filters. Signal-to-noise ratio USF imaging can increase the SNR by differentiating signal photons from background photons. The background photons may come from autofluorescence, light scattering, imperfect contrast agents and laser leakage. To reduce autofluorescence, an NIR fluorophore can be adopted, since biological tissue components produce the least autofluorescence in the NIR region. According to Rayleigh scattering theory, the scattered intensity scales as I(r,θ) ∝ 1/λ⁴, so light with a longer wavelength scatters less and the light scattering that produces part of the background noise can be reduced. Also, by adopting ultrasound to control the fluorescent emission, the signal fluorophore can be easily differentiated from the background fluorophore. As mentioned above, the laser leakage can be minimized by emission filters. Spatial resolution When using the second type of contrast agents (fluorophore-labeled thermosensitive NPs), the spatial resolution can be further improved based on two mechanisms. Nonlinear acoustic effect Acoustic diffraction is the main obstacle to increasing the spatial resolution. By controlling the ultrasound exposure power, a nonlinear acoustic effect can occur; as a result, part of the acoustic energy at the fundamental frequency is transferred to higher harmonic frequency components in the focal volume, which can be more tightly focused. This is the major reason that the nonlinear acoustic effect can reduce the size of the ultrasound-induced temperature focus. Thermal confinement The spatial resolution of the USF technique is determined by the size of the region where the fluorophores can be switched ON. Only where the temperature is above the threshold can the fluorophores be switched on. However, due to thermal diffusion or conduction, the ultrasound-induced thermal energy needs to be confined within the focal volume by controlling the ultrasound exposure time, so the region where fluorophores can be switched ON is usually smaller than the actual focal size of the ultrasound. Applications The USF technique can be combined with a light-pulse-delay technique and a photon counting technique to achieve high-resolution imaging in a deep turbid medium. In 2016, Cheng et al. achieved high-resolution fluorescence imaging in centimeter-deep tissue phantoms with high SNR and high sensitivity; they synthesized and characterized an extremely environment-sensitive NIR fluorophore, ADP(CA)2, and a family of USF contrast agents based on this dye. In a more recent study, in 2019, Yao et al. first achieved in vivo ultrasound-switchable fluorescence imaging in mice with high resolution.
ICG-encapsulated PNIPAM nanoparticles, which are quite stable in a biological environment, were adopted as the contrast agents. Compared with CT imaging results, they found that USF imaging maintained high sensitivity and specificity in deep tissue. References Ultrasound Optical microscopy techniques Fluorescence
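A small illustration of the 1/λ⁴ Rayleigh dependence cited in the signal-to-noise discussion above. The two wavelengths are typical visible versus near-infrared values chosen for illustration only; they are not taken from the article.

# Sketch: relative Rayleigh scattering at a visible vs. a near-infrared wavelength.
visible_nm, nir_nm = 500, 800          # illustrative wavelengths, not from the article

ratio = (visible_nm / nir_nm) ** 4     # I(λ_NIR) / I(λ_visible) under I ∝ 1/λ^4
print(f"{nir_nm} nm light Rayleigh-scatters about {ratio:.2f}× as strongly as "
      f"{visible_nm} nm light (i.e. roughly {1/ratio:.1f}× less)")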
Ultrasound-switchable fluorescence imaging
Chemistry
1,505
1,285,375
https://en.wikipedia.org/wiki/Hyperbolic%20metric%20space
In mathematics, a hyperbolic metric space is a metric space satisfying certain metric relations (depending quantitatively on a nonnegative real number δ) between points. The definition, introduced by Mikhael Gromov, generalizes the metric properties of classical hyperbolic geometry and of trees. Hyperbolicity is a large-scale property, and is very useful to the study of certain infinite groups called Gromov-hyperbolic groups. Definitions In this paragraph we give various definitions of a δ-hyperbolic space. A metric space is said to be (Gromov-) hyperbolic if it is δ-hyperbolic for some δ ≥ 0. Definition using the Gromov product Let (X, d) be a metric space. The Gromov product of two points y, z ∈ X with respect to a third one x ∈ X is defined by the formula: (y, z)_x = ½ (d(x, y) + d(x, z) − d(y, z)). Gromov's definition of a hyperbolic metric space is then as follows: X is δ-hyperbolic if and only if all x, y, z, w ∈ X satisfy the four-point condition (x, z)_w ≥ min((x, y)_w, (y, z)_w) − δ. Note that if this condition is satisfied for all x, y, z and one fixed base point w0, then it is satisfied for all w with a constant 2δ. Thus the hyperbolicity condition only needs to be verified for one fixed base point; for this reason, the subscript for the base point is often dropped from the Gromov product. Definitions using triangles Up to changing δ by a constant multiple, there is an equivalent geometric definition involving triangles when the metric space is geodesic, i.e. any two points are end points of a geodesic segment (an isometric image of a compact subinterval of the reals). Note that the definition via Gromov products does not require the space to be geodesic. Let x, y, z ∈ X. A geodesic triangle with vertices x, y, z is the union of three geodesic segments [x, y], [y, z], [z, x] (where [p, q] denotes a segment with endpoints p and q). If for any point m ∈ [x, y] there is a point in [y, z] ∪ [z, x] at distance less than δ of m, and similarly for points on the other edges, then the triangle is said to be δ-slim. A definition of a δ-hyperbolic space is then a geodesic metric space all of whose geodesic triangles are δ-slim. This definition is generally credited to Eliyahu Rips. Another definition can be given using the notion of a δ-approximate center of a geodesic triangle: this is a point which is at distance at most δ of any edge of the triangle (an "approximate" version of the incenter). A space is δ-hyperbolic if every geodesic triangle has a δ-center. These two definitions of a δ-hyperbolic space using geodesic triangles are not exactly equivalent, but there exists a constant k such that a δ-hyperbolic space in the first sense is kδ-hyperbolic in the second, and vice versa. Thus the notion of a hyperbolic space is independent of the chosen definition. Examples The hyperbolic plane is hyperbolic: in fact the incircle of a geodesic triangle is the circle of largest diameter contained in the triangle and every geodesic triangle lies in the interior of an ideal triangle, all of which are isometric with incircles of diameter 2 log 3. Note that in this case the Gromov product also has a simple interpretation in terms of the incircle of a geodesic triangle. In fact the quantity (y, z)_x is just the hyperbolic distance p from x to either of the points of contact of the incircle with the adjacent sides: for from the diagram c = (a − p) + (b − p), so that p = ½ (a + b − c) = (y, z)_x. The Euclidean plane is not hyperbolic, for example because of the existence of homotheties. Two "degenerate" examples of hyperbolic spaces are spaces with bounded diameter (for example finite or compact spaces) and the real line. Metric trees and more generally real trees are the simplest interesting examples of hyperbolic spaces as they are 0-hyperbolic (i.e. all triangles are tripods).
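A short sketch, in LaTeX, of why a metric tree is 0-hyperbolic, as just claimed; this is the standard tripod argument and is not spelled out in the article itself.

% In a tree the geodesic triangle on $x, y, z$ is a tripod with branch point $c$, so the
% Gromov product equals the distance from the base point to the opposite side:
\[
  (y, z)_x \;=\; \tfrac{1}{2}\bigl(d(x,y) + d(x,z) - d(y,z)\bigr) \;=\; d(x, c) \;=\; d\bigl(x, [y, z]\bigr).
\]
% Checking the finitely many ways a fourth point $w$ can attach to the tripod then gives
\[
  (x, z)_w \;\ge\; \min\bigl((x, y)_w,\,(y, z)_w\bigr)\qquad\text{for all } w, x, y, z,
\]
% i.e. the four-point condition holds with $\delta = 0$.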
The 1-skeleton of the triangulation by Euclidean equilateral triangles is not hyperbolic (it is in fact quasi-isometric to the Euclidean plane). A triangulation of the plane has a hyperbolic 1-skeleton if every vertex has degree 7 or more. The two-dimensional grid is not hyperbolic (it is quasi-isometric to the Euclidean plane). It is the Cayley graph of the fundamental group of the torus; the Cayley graphs of the fundamental groups of a surface of higher genus is hyperbolic (it is in fact quasi-isometric to the hyperbolic plane). Hyperbolicity and curvature The hyperbolic plane (and more generally any Hadamard manifolds of sectional curvature ) is -hyperbolic. If we scale the Riemannian metric by a factor then the distances are multiplied by and thus we get a space that is -hyperbolic. Since the curvature is multiplied by we see that in this example the more (negatively) curved the space is, the lower the hyperbolicity constant. Similar examples are CAT spaces of negative curvature. While curvature is a property that is essentially local, hyperbolicity is a large-scale property which does not see local (i.e. happening in a bounded region) metric phenomena. For example, the union of an hyperbolic space with a compact space with any metric extending the original ones remains hyperbolic. Important properties Invariance under quasi-isometry One way to make precise the meaning of "large scale" is to require invariance under quasi-isometry. This is true of hyperbolicity. If a geodesic metric space is quasi-isometric to a -hyperbolic space then there exists such that is -hyperbolic. The constant depends on and on the multiplicative and additive constants for the quasi-isometry. Approximate trees in hyperbolic spaces The definition of an hyperbolic space in terms of the Gromov product can be seen as saying that the metric relations between any four points are the same as they would be in a tree, up to the additive constant . More generally the following property shows that any finite subset of an hyperbolic space looks like a finite tree. For any there is a constant such that the following holds: if are points in a -hyperbolic space there is a finite tree and an embedding such that for all and The constant can be taken to be with and this is optimal. Exponential growth of distance and isoperimetric inequalities In an hyperbolic space we have the following property: There are such that for all with , every path joining to and staying at distance at least of has length at least . Informally this means that the circumference of a "circle" of radius grows exponentially with . This is reminiscent of the isoperimetric problem in the Euclidean plane. Here is a more specific statement to this effect. Suppose that is a cell complex of dimension 2 such that its 1-skeleton is hyperbolic, and there exists such that the boundary of any 2-cell contains at most 1-cells. Then there is a constant such that for any finite subcomplex we have Here the area of a 2-complex is the number of 2-cells and the length of a 1-complex is the number of 1-cells. The statement above is a linear isoperimetric inequality; it turns out that having such an isoperimetric inequality characterises Gromov-hyperbolic spaces. Linear isoperimetric inequalities were inspired by the small cancellation conditions from combinatorial group theory. Quasiconvex subspaces A subspace of a geodesic metric space is said to be quasiconvex if there is a constant such that any geodesic in between two points of stays within distance of . 
A quasi-convex subspace of an hyperbolic space is hyperbolic. Asymptotic cones All asymptotic cones of an hyperbolic space are real trees. This property characterises hyperbolic spaces. The boundary of a hyperbolic space Generalising the construction of the ends of a simplicial tree there is a natural notion of boundary at infinity for hyperbolic spaces, which has proven very useful for analysing group actions. In this paragraph is a geodesic metric space which is hyperbolic. Definition using the Gromov product A sequence is said to converge to infinity if for some (or any) point we have that as both and go to infinity. Two sequences converging to infinity are considered equivalent when (for some or any ). The boundary of is the set of equivalence classes of sequences which converge to infinity, which is denoted . If are two points on the boundary then their Gromov product is defined to be: which is finite iff . One can then define a topology on using the functions . This topology on is metrisable and there is a distinguished family of metrics defined using the Gromov product. Definition for proper spaces using rays Let be two quasi-isometric embeddings of into ("quasi-geodesic rays"). They are considered equivalent if and only if the function is bounded on . If the space is proper then the set of all such embeddings modulo equivalence with its natural topology is homeomorphic to as defined above. A similar realisation is to fix a basepoint and consider only quasi-geodesic rays originating from this point. In case is geodesic and proper one can also restrict to genuine geodesic rays. Examples When is a simplicial regular tree the boundary is just the space of ends, which is a Cantor set. Fixing a point yields a natural distance on : two points represented by rays originating at are at distance . When is the unit disk, i.e. the Poincaré disk model for the hyperbolic plane, the hyperbolic metric on the disk is and the Gromov boundary can be identified with the unit circle. The boundary of -dimensional hyperbolic space is homeomorphic to the -dimensional sphere and the metrics are similar to the one above. Busemann functions If is proper then its boundary is homeomorphic to the space of Busemann functions on modulo translations. The action of isometries on the boundary and their classification A quasi-isometry between two hyperbolic spaces induces a homeomorphism between the boundaries. In particular the group of isometries of acts by homeomorphisms on . This action can be used to classify isometries according to their dynamical behaviour on the boundary, generalising that for trees and classical hyperbolic spaces. Let be an isometry of , then one of the following cases occur: First case: has a bounded orbit on (in case is proper this implies that has a fixed point in ). Then it is called an elliptic isometry. Second case: has exactly two fixed points on and every positive orbit accumulates only at . Then is called an hyperbolic isometry. Third case: has exactly one fixed point on the boundary and all orbits accumulate at this point. Then it is called a parabolic isometry. More examples Subsets of the theory of hyperbolic groups can be used to give more examples of hyperbolic spaces, for instance the Cayley graph of a small cancellation group. It is also known that the Cayley graphs of certain models of random groups (which is in effect a randomly-generated infinite regular graph) tend to be hyperbolic very often. 
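For reference, the formulas left implicit in the boundary examples above can be written out explicitly. The following uses one common set of conventions: curvature −1 for the Poincaré disk, and the supremum convention for the Gromov product of boundary points (the infimum convention differs from it by at most 2δ).

```latex
% Poincare disk model: the hyperbolic metric on the open unit disk
ds^2 \;=\; \frac{4\,(dx^2 + dy^2)}{\bigl(1 - (x^2 + y^2)\bigr)^2}

% Gromov product of boundary points \xi, \eta via representative sequences
(\xi,\eta)_p \;=\; \sup_{x_i \to \xi,\; y_j \to \eta}\; \liminf_{i,j \to \infty}\, (x_i, y_j)_p

% Visual metrics on \partial X: for all sufficiently small \varepsilon > 0
% there is a metric d_\varepsilon with
d_\varepsilon(\xi,\eta) \;\asymp\; e^{-\varepsilon\,(\xi,\eta)_p}

% Ends of a regular tree: if two ends are represented by rays from the base point
% sharing an initial segment of length n, a natural choice of distance is
d(\xi,\eta) \;=\; e^{-n}
```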
It can be difficult and interesting to prove that certain spaces are hyperbolic. For example, the following hyperbolicity results have led to new phenomena being discovered for the groups acting on them. The hyperbolicity of the curve complex has led to new results on the mapping class group. Similarly, the hyperbolicity of certain graphs associated to the outer automorphism group Out(Fn) has led to new results on this group. See also Negatively curved group Ideal triangle Notes References . Metric geometry Hyperbolic geometry
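The four-point condition is easy to test on finite metric spaces, which gives a concrete feel for the hyperbolicity constant. The following brute-force sketch (the function names are illustrative, not from any library) computes the smallest δ for which a given distance matrix satisfies the condition; for a tree metric it returns 0, as expected for a 0-hyperbolic space.

```python
from itertools import product

def gromov_product(d, x, y, w):
    """Gromov product (x, y)_w computed from a distance matrix d."""
    return 0.5 * (d[w][x] + d[w][y] - d[x][y])

def hyperbolicity_constant(d):
    """Smallest delta such that (x, z)_w >= min((x, y)_w, (y, z)_w) - delta
    holds for every quadruple of points (O(n^4) brute force)."""
    n = len(d)
    delta = 0.0
    for w, x, y, z in product(range(n), repeat=4):
        gap = min(gromov_product(d, x, y, w),
                  gromov_product(d, y, z, w)) - gromov_product(d, x, z, w)
        delta = max(delta, gap)
    return delta

# Path metric on a tripod (a tree with one centre and three leaves): delta = 0.
tripod = [[0, 1, 2, 2],
          [1, 0, 1, 1],
          [2, 1, 0, 2],
          [2, 1, 2, 0]]
print(hyperbolicity_constant(tripod))  # 0.0
```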
Hyperbolic metric space
Mathematics
2,411
444,000
https://en.wikipedia.org/wiki/Leyland%20cypress
The Leyland cypress, Cupressus × leylandii, × Cuprocyparis leylandii or × Cupressocyparis leylandii, often referred to simply as leylandii, is a fast-growing coniferous evergreen tree much used in horticulture, primarily for hedges and screens. Even on sites of relatively poor culture, plants have been known to grow to heights of in 16 years. Their rapid, thick growth means they are sometimes used to achieve privacy, but such use can result in disputes with neighbours whose own property becomes overshadowed. The tree is a hybrid of Monterey cypress (Cupressus macrocarpa) and Nootka cypress (Cupressus nootkatensis). It is almost always sterile, and is propagated mainly by cuttings. History In 1845, the Leighton Hall, Powys estate was purchased by the Liverpool banker Christopher Leyland. In 1847, he gave it to his nephew John Naylor (1813–1889). Naylor commissioned Edward Kemp to lay out the gardens, which included redwoods, monkey puzzle trees and two North American species of conifers in close proximity to each other – Monterey cypress and Nootka cypress. The two parent species would not likely cross in the wild, as their natural ranges are more than apart, but in 1888, the hybrid cross occurred when the female flowers or cones of Nootka cypress were fertilised by pollen from Monterey cypress. John Naylor's eldest son Christopher John (1849–1926) inherited Leighton Hall from his father in 1889. Christopher was a sea captain by trade. In 1891, he inherited the Leyland Entailed Estates established under the will of his great-great-uncle, which passed to him following the death of his uncle Thomas Leyland. On receiving the inheritance, Christopher changed his surname to Leyland, and moved to Haggerston Castle, Northumberland. He further developed the hybrid at his new home, and hence named the first clone variant 'Haggerston Grey'. His younger brother John (1856–1906) resultantly inherited Leighton Hall, and when in 1911 the reverse hybrid of the cones of the Monterey cypress were fertilised with pollen from the Nootka, that hybrid was baptised 'Leighton Green.' The hybrid has since arisen on nearly 20 separate occasions, always by open pollination, showing the two species are readily compatible and closely related. As a hybrid, although fertility of certain Leyland cypress forms were recently reported, most Leyland cypress were thought to be sterile, and nearly all the trees now seen have resulted from cuttings originating from those few plants. Over 40 forms of Leyland cypress are known, and as well as 'Haggerston Grey' and 'Leighton Green', other well-known forms include 'Stapehill', which was discovered in 1940 in a garden in Ferndown, Dorset by M. Barthelemy and 'Castlewellan', which originated from a single mutant tree in the Castlewellan estate arboretum in Northern Ireland. This form, widely propagated from the 1970s, was selected by the park director, John Keown, and was named Cupressus macrocarpa 'Keownii', 1963. Description A large, evergreen tree, Cupressus × leylandii reaches a size between 20 and 25 m high, with its leaves giving it a compact, thick and regular habit. It grows very fast with yearly increases of 1 m. The leaves, about 1 mm long and close to the twig, are presented in flaky, slightly aromatic branches. They are dark green, somewhat paler on the underside, but can have different colors, depending on the cultivar. The crown of many forms is broadly columnar with slightly overhanging branch tips. 
The branches are slightly flattened and densely populated with scaly needles. The tree bark is dark red or brown and has deep grooves. The seeds are found in cones about 2 cm in length, with eight scales and five seeds with tiny resinous vesicles. With the tree being a hybrid, its seeds are sterile. Over time, the cones shrink dry and turn gray or chocolate brown and then have a diameter of 1 cm. Taxonomic status Cupressus × leylandii is a hybrid of two other cypress species: Monterey cypress (Cupressus macrocarpa) and Nootka cypress (Cupressus nootkatensis). The taxonomic status of Nootka cypress has changed over time, and this has affected the taxonomic status of the hybrid. Nootka cypress was first regarded as belonging in the genus Cupressus, but was later placed in Chamaecyparis. It has become clear, however, that when the genus Cupressus is defined to include Chamaecyparis, it is paraphyletic unless it also includes Juniperus. In 2004, Little et al. transferred the Nootka cypress to Callitropsis. Little (2006) proposed another alternative by transferring all the North American species of Cupressus, including the Monterey cypress (C. macrocarpa), to Callitropsis. In some of these classifications, this and other hybrids of Nootka cypress become very unusual in being intergeneric hybrids, the only ones ever reported among the gymnosperms. In 2010, Mao et al. performed a more detailed molecular analysis and redefined Cupressus to exclude Chamaecyparis, but to include the Nootka cypress. It may be added that attempts to cross Nootka cypress with other Chamaecyparis species have been universally unsuccessful. The scientific name of Leyland cypress depends on the taxonomic treatment of Nootka cypress. Where Nootka cypress is considered as Cupressus nootkatensis, the hybrid is within the Cupressus genus and is therefore Cupressus × leylandii. If both Monterey and Nootka cypress are considered as species of Callitropsis, the hybrid is Callitropsis × leylandii. However where the parents are treated as being in different genera, Leyland cypress becomes an intergeneric hybrid: if Nootka cypress is within Chamaecyparis, the name of the hybrid becomes ×Cupressocyparis leylandii, and where it is treated as Xanthocyparis, the hybrid becomes ×Cuprocyparis leylandii. Two other similar hybrids have also been raised, both involving Nootka cypress with other Cupressus species: Cupressus arizonica var. glabra × Cupressus nootkatensis (Cupressus × notabilis) Cupressus lusitanica × Cupressus nootkatensis (Cupressus × ovensii) Adaptation Leyland cypress is light-demanding, but is tolerant of high levels of pollution and salt spray. A hardy, fast-growing natural hybrid, it thrives on a variety of soils, and sites are commonly planted in gardens to provide a quick boundary or shelter hedge, because of their rapid growth. Although widely used for screening, it has not been planted much for forestry purposes. In both forms of the hybrid, Leyland cypress combines the hardiness of the Nootka or Alaska cypress with the fast growth of the Monterey cypress. The tallest Leyland cypress documented is about tall and still growing. However, because their roots are relatively shallow, a large leylandii tends to topple over. The shallow root structure also means that it is poorly adapted to areas with hot summers, such as the southern half of the United States. In these areas, it is prone to develop cypress canker disease, which is caused by the fungus Seiridium cardinale. 
Canker causes extensive dieback and ultimately kills the tree. In California's Central Valley, they rarely live more than 10 years before succumbing, and not much longer in southern states like Alabama. In these areas, the canker-resistant Arizona cypress is much more successful. In northern areas where heavy snows occur, this plant is also susceptible to broken branches and uprooting in wet, heavy snow. The tree has also been introduced in Kenya on parts of Mount Kenya. The sap can cause skin irritation in susceptible individuals. Commercialization In 1925, a firm of commercial nurserymen specialising in conifers were looking for a breed that was fast-growing, and could be deployed in barren, windy and salty areas such as Cornwall. Eventually they found the six original trees developed by Leyland, and began propagating the species. In 1953, a freak tornado blew down one of the original trees at Haggerston (the other original five trees still survive), on which the research division of the Forestry Commission started developing additional hybrids. Commercial nurseries spotted the plant's potential, and for many years, it was the biggest-selling item in every garden centre in Great Britain, making up to 10% of their total sales. Uses They continue to be popular for cultivation in parks and gardens. Leyland cypress trees are commonly planted to quickly form fence or protection hedges. However, their rapid growth (up to 1 m per year), their thick shade and their large potential size (often more than 20 m high in garden conditions, and they can reach at least 35 m) make them problematic. Cultivars The cultivar 'Gold Rider' has gained the Royal Horticultural Society's Award of Garden Merit (confirmed 2017), though the original hybrid has now lost its AGM status. Other cultivars include 'Douglas Gold', 'Leighton Green', 'Drabb', 'Haggerston Grey', 'Emerald Isle', 'Ferndown', 'Golconda', 'Golden Sun', 'Gold Rider', 'Grecar', 'Green Spire', 'Grelive', Haggerston 3, Haggerston 4, Haggerston 5, Haggerston 6, 'Harlequin', 'Herculea', 'Hyde Hall', 'Irish Mint', 'Jubilee', 'Medownia', 'Michellii', 'Moncal', 'Naylor's Blue', 'New Ornament', 'Olive's Green', 'Robinson's Gold', 'Rostrevor', 'Silver Dust', 'Variegata', 'Ventose', and 'Winter Sun'. Legal aspects The plant's rapid growth and great potential height can become a serious problem. In 2005 in the United Kingdom, an estimated 17,000 households were involved in disputes over the height of garden hedges. Such disputes between neighbours have been known to deteriorate into violence and in at least one case, culminate in murder when in 2001, retired Environment Agency officer Llandis Burdon, 57, was shot dead after an alleged dispute over a leylandii hedge in Talybont-on-Usk, Powys. Part VIII of the United Kingdom's Anti-Social Behaviour Act 2003, introduced in 2005, gave a way for people in England and Wales affected by high hedges (usually, but not necessarily, of leylandii) to ask their local authority to investigate complaints about the hedges, and gave the authorities in England and Wales power to have the hedges reduced in height. In May 2008, UK resident Christine Wright won a 24-year legal battle to have her neighbour's leylandii trees cut down for blocking sunlight to her garden. Legislation with similar effect followed in Northern Ireland, Isle of Man and Scotland. 
Gallery References External links Cupressaceae Hybrid plants Trees of Wales Flora of the West Coast of the United States Garden plants of North America Drought-tolerant plants Ornamental trees Plants described in 1926 Flora without expected TNC conservation status
Leyland cypress
Biology
2,402
35,633,206
https://en.wikipedia.org/wiki/Gaussian%20moat
In number theory, the Gaussian moat problem asks whether it is possible to find an infinite sequence of distinct Gaussian prime numbers such that the difference between consecutive numbers in the sequence is bounded. More colorfully, if one imagines the Gaussian primes to be stepping stones in a sea of complex numbers, the question is whether one can walk from the origin to infinity with steps of bounded size, without getting wet. The problem was first posed in 1962 by Basil Gordon (although it has sometimes been erroneously attributed to Paul Erdős) and it remains unsolved. With the usual prime numbers, such a sequence is impossible: the prime number theorem implies that there are arbitrarily large gaps in the sequence of prime numbers, and there is also an elementary direct proof: for any n, the n − 1 consecutive numbers n! + 2, n! + 3, ..., n! + n are all composite. The problem of finding a path between two Gaussian primes that minimizes the maximum hop size is an instance of the minimax path problem, and the hop size of an optimal path is equal to the width of the widest moat between the two primes, where a moat may be defined by a partition of the primes into two subsets and its width is the distance between the closest pair that has one element in each subset. Thus, the Gaussian moat problem may be phrased in a different but equivalent form: is there a finite bound on the widths of the moats that have finitely many primes on the side of the origin? Computational searches have shown that the origin is separated from infinity by a moat of width 6. It is known that, for any positive number k, there exist Gaussian primes whose nearest neighbor is at distance k or larger. In fact, these numbers may be constrained to be on the real axis. For instance, the number 20785207 is surrounded by a moat of width 17. Thus, there definitely exist moats of arbitrarily large width, but these moats do not necessarily separate the origin from infinity. References Further reading External links Prime numbers Unsolved problems in number theory Complex numbers
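A minimal computational sketch of the objects involved: a Gaussian-primality test (a + bi with both parts nonzero is prime exactly when a² + b² is an ordinary prime; if one part is zero, the other must be, up to sign, a prime congruent to 3 mod 4), and a breadth-first search for the Gaussian primes reachable from 1 + i with hops of bounded length. The function names, the bounding box and the starting point are illustrative choices, not part of any standard library.

```python
from math import isqrt
from collections import deque

def is_prime(n):
    """Trial-division primality test, adequate for the small norms used here."""
    if n < 2:
        return False
    return all(n % p for p in range(2, isqrt(n) + 1))

def is_gaussian_prime(a, b):
    """a + bi is a Gaussian prime iff a*b != 0 and a^2 + b^2 is prime,
    or one part is 0 and the other is (up to sign) a prime = 3 (mod 4)."""
    if a and b:
        return is_prime(a * a + b * b)
    n = abs(a or b)
    return n % 4 == 3 and is_prime(n)

def reachable_primes(step, box=60):
    """Gaussian primes in [-box, box]^2 reachable from 1 + i using hops of
    length at most `step` (brute-force breadth-first search)."""
    primes = {(a, b) for a in range(-box, box + 1)
                     for b in range(-box, box + 1) if is_gaussian_prime(a, b)}
    seen, queue = {(1, 1)}, deque([(1, 1)])   # start at 1 + i
    while queue:
        a, b = queue.popleft()
        for c, d in primes - seen:
            if (a - c) ** 2 + (b - d) ** 2 <= step ** 2:
                seen.add((c, d))
                queue.append((c, d))
    return seen

print(len(reachable_primes(2)))   # size of the component reachable with hops of length <= 2
```

Calling reachable_primes(k) for increasing k returns the component of Gaussian primes reachable with hops of length at most k inside the box; moats of the kind described above show up as values of k for which this component stops growing well before the edge of the box.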
Gaussian moat
Mathematics
451
33,169,750
https://en.wikipedia.org/wiki/WISEPA%20J062309.94-045624.6
WISEPA J062309.94-045624.6 (also called WISE J0623-0456) is a brown dwarf of spectral type T8. It is the coldest brown dwarf with detected radio emission (as of October 2024). WISE J0623-0456 was discovered in 2011 with the Wide-field Infrared Survey Explorer and a spectrum taken with the NASA Infrared Telescope Facility confirmed it as a T8 dwarf. WISE J0623-0456 was identified as a radio source with the Australian SKA Pathfinder. Follow-up observations were carried out with the Australia Telescope Compact Array (ATCA) and MeerKAT. The source showed a double-peaked pulsed emission, with a period of 1.889 ± 0.018 hours in ATCA and 1.912 ± 0.005 hours in MeerKAT. The source has a radio luminosity of 10^14.8 erg s−1 Hz−1, comparable to other radio-bright ultracool dwarfs with a similar spectral type. The radio emission of WISE J0623-0456 is strongly circularly polarized and periodic. The researchers therefore think that the radio emission comes from electron cyclotron maser instability (ECMI), which is connected to aurorae in ultracool dwarfs. The researchers find that the magnetic field has a strength of at least 0.71 kG. Another work finds that the shape of the light curve can be reproduced by active field lines (AFLs). This work also found that the brown dwarf is likely seen pole-on. The rotation and magnetic axes are misaligned significantly (similar to Uranus and Neptune) and the magnetic cycle is likely longer than 6 months. M- and L-dwarfs can produce the observed radio luminosities on their own, but cooler T- and Y-dwarfs do not have the necessary corona to produce the radio emission. The alternative is that the plasma is fed to the magnetosphere from a companion, similar to the role of Io for the aurora on Jupiter. See also Other T-dwarfs with detected radio emission SIMP J013656.5+093347.3 T2.5, planetary-mass object 2MASS J10475385+2124234 T6.5 WISEPC J112254.73+255021.5 T6 WISEPA J101905.63+652954.2 T5.5+T7.0 References T-type brown dwarfs Monoceros Astronomical radio sources
WISEPA J062309.94-045624.6
Astronomy
530
3,223,960
https://en.wikipedia.org/wiki/Allais%20paradox
The Allais paradox is a choice problem designed by to show an inconsistency of actual observed choices with the predictions of expected utility theory. The Allais paradox demonstrates that individuals rarely make rational decisions consistently when required to do so immediately. The independence axiom of expected utility theory, which requires that the preferences of an individual should not change when altering two lotteries by equal proportions, was proven to be violated by the paradox. Statement of the problem The Allais paradox arises when comparing participants' choices in two different experiments, each of which consists of a choice between two gambles, A and B. The payoffs for each gamble in each experiment are as follows: Several studies involving hypothetical and small monetary payoffs, and recently involving health outcomes, have supported the assertion that when presented with a choice between 1A and 1B, most people would choose 1A. Likewise, when presented with a choice between 2A and 2B, most people would choose 2B. Allais further asserted that it was reasonable to choose 1A alone or 2B alone, as the expected average outcomes (in millions) are 1.00 for 1A gamble, 1.39 for 1B, 0.11 for 2A and 0.50 for 2B. However, that the same person (who chose 1A alone or 2B alone) would choose both 1A and 2B together is inconsistent with expected utility theory. According to expected utility theory, the person should choose either 1A and 2A or 1B and 2B. Expected payouts (in millions) are 1.11 for 1A+2A combination, 1.89 for 1B+2B combination, 1.50 for 1A+2B combination and 1.50 for 1B+2A combination. The inconsistency stems from the fact that in expected utility theory, equal outcomes (e.g. $1 million for all gambles) added to each of the two choices should have no effect on the relative desirability of one gamble over the other; equal outcomes should "cancel out". In each experiment the two gambles give the same outcome 89% of the time (starting from the top row and moving down, both 1A and 1B give an outcome of $1 million with 89% probability, and both 2A and 2B give an outcome of nothing with 89% probability). If this 89% ‘common consequence’ is disregarded, then in each experiment the choice between gambles will be the same – 11% chance of $1 million versus 10% chance of $5 million. After re-writing the payoffs, and disregarding the 89% chance of winning — equalising the outcome — then 1B is left offering a 1% chance of winning nothing and a 10% chance of winning $5 million, while 2B is also left offering a 1% chance of winning nothing and a 10% chance of winning $5 million. Hence, choice 1B and 2B can be seen as the same choice. In the same manner, 1A and 2A can also be seen as the same choice, i.e.: Allais presented his paradox as a counterexample to the independence axiom. Independence means that if an agent is indifferent between simple lotteries and , the agent is also indifferent between mixed with an arbitrary simple lottery with probability and mixed with with the same probability . Violating this principle is known as the "common consequence" problem (or "common consequence" effect). The idea of the common consequence problem is that as the prize offered by increases, and become consolation prizes, and the agent will modify preferences between the two lotteries so as to minimize risk and disappointment in case they do not win the higher prize offered by . 
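The payoff table referred to above is standard and is consistent with the expected values quoted in the text (1.00, 1.39, 0.11 and 0.50 million); writing U for a von Neumann–Morgenstern utility function, the inconsistency can be stated as follows.

```latex
% Experiment 1:  1A: \$1M with certainty
%                1B: \$1M w.p. 0.89, \$5M w.p. 0.10, nothing w.p. 0.01
% Experiment 2:  2A: \$1M w.p. 0.11, nothing w.p. 0.89
%                2B: \$5M w.p. 0.10, nothing w.p. 0.90

% Choosing 1A over 1B means
U(1\mathrm{M}) \;>\; 0.89\,U(1\mathrm{M}) + 0.10\,U(5\mathrm{M}) + 0.01\,U(0)

% while choosing 2B over 2A means
0.10\,U(5\mathrm{M}) + 0.90\,U(0) \;>\; 0.11\,U(1\mathrm{M}) + 0.89\,U(0).

% Adding 0.89 U(1M) - 0.89 U(0) to both sides of the second inequality gives
0.89\,U(1\mathrm{M}) + 0.10\,U(5\mathrm{M}) + 0.01\,U(0) \;>\; U(1\mathrm{M}),

% which contradicts the first inequality: no expected-utility maximiser can
% choose both 1A and 2B.
```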
Difficulties such as this gave rise to a number of alternatives to, and generalizations of, the theory, notably including prospect theory, developed by Daniel Kahneman and Amos Tversky, weighted utility (Chew), rank-dependent expected utility by John Quiggin, and regret theory. The point of these models was to allow a wider range of behavior than was consistent with expected utility theory. Michael Birnbaum performed experimental dissections of the paradox and showed that the results violated the theories of Quiggin, Kahneman, Tversky, and others, but could be explained by his configural weight theory that violates the property of coalescing. The main point Allais wished to make is that the independence axiom of expected utility theory may not be a valid axiom. The independence axiom states that two identical outcomes within a gamble should be treated as irrelevant to the analysis of the gamble as a whole. However, this overlooks the notion of complementarities, the fact your choice in one part of a gamble may depend on the possible outcome in the other part of the gamble. In the above choice, 1B, there is a 1% chance of getting nothing. However, this 1% chance of getting nothing also carries with it a great sense of disappointment if you were to pick that gamble and lose, knowing you could have won with 100% certainty if you had chosen 1A. This feeling of disappointment, however, is contingent on the outcome in the other portion of the gamble (i.e. the feeling of certainty). Hence, Allais argues that it is not possible to evaluate portions of gambles or choices independently of the other choices presented, as the independence axiom requires, and thus is a poor judge of our rational action (1B cannot be valued independently of 1A as the independence or sure thing principle requires of us). We don't act irrationally when choosing 1A and 2B; rather expected utility theory is not robust enough to capture such "bounded rationality" choices that in this case arise because of complementarities. Intuition behind the Allais paradox Zero effect vs certainty effect The most common explanation of the Allais paradox is that individuals prefer certainty over a risky outcome even if this defies the expected utility axiom. The certainty effect was popularised by Kahneman and Tversky (1979), and further discussed in Wakker (2010). The certainty effect highlights the appeal of a zero-variance lottery. Recent studies have indicated an alternate explanation to the certainty effect called the zero effect. The zero effect is a slight adjustment to the certainty effect that states individuals will appeal to the lottery that doesn't have the possibility of winning nothing (aversion to zero). During prior Allais style tasks that involve two experiments with four lotteries, the only lottery without a possible outcome of zero was the zero-variance lottery, making it impossible to differentiate the impact these effects have on decision making. Running two additional lotteries allowed the two effects to be distinguished and hence, their statistical significance to be tested. From the two-stage experiment, if an individual selected lottery 1A over 1B, then selected lottery 2B over 2A, they conform to the paradox and violate the expected utility axiom. The third experiment choices of participants who had already violated the expected utility theory(in the first two experiments) highlighted the underlying effect causing the Allais Paradox. 
Participants who chose 3B over 3A provided evidence of the certainty effect, while those who chose 3A over 3B showed evidence of the zero effect. Participants who chose (1A,2B,3B) only deviated from the rational choice when presented with a zero-variance lottery. Participants who chose (1A,2B,3A) deviated from the rational lottery choice to avoid the risk of winning nothing (aversion to zero). Findings of the six-lottery experiment indicated the zero effect was statistically significant with a p-value < 0.01. The certainty effect was found to be statistically insignificant and not the intuitive explanation individuals deviating from the expected utility theory. Mathematical proof of inconsistency Using the values above and a utility function U(W), where W is wealth, we can demonstrate exactly how the paradox manifests. Because the typical individual prefers 1A to 1B and 2B to 2A, we can conclude that the expected utilities of the preferred is greater than the expected utilities of the second choices, or, Experiment 1 Experiment 2 We can rewrite the latter equation (Experiment 2) as which contradicts the first bet (Experiment 1), which shows the player prefers the sure thing over the gamble. History The Allais Paradox was first introduced in 1952, where Maurice Allais presented various choice sets to an audience of economists at Colloques Internationaux du Centre National de la Recherche Scientifique, an economics conference in Paris. Similar to the choice sets above, the audience provided decisions that were inconsistent with expected utility theory. Despite this result, the audience was not convinced of the validity of Allais's finding and dismissed the paradox as a simple irregularity. Regardless, in 1953 Allais published his finding of the Allais paradox in Econometrica, an economics peer-reviewed journal. Allais’ work was yet to be considered feasible in the field of behavioural economics until the 1980s. Table 1 demonstrates the appearance of the Allais paradox in literature, collected through JSTOR. Historian, Floris Heukelom, attributes this unpopularity to four distinct reasons. Firstly, Allais's work had not been translated from French to English until 1979 when he produced Expected Utility Hypotheses and the Allais Paradox. This 700-page book consisted of five parts: Editorial Introduction The 1952 Allais Theory of Choice involving Risk, The neo-Bernoullian Position versus the 1952 Allais Theory, Contemporary Views on the neo-Bernoullian Theory and the Allais Paradox, Allais' rejoinder: theory and empirical evidence. Of these, various economists and researchers of relevant study backgrounds contributed, including economist and cofounder of the mathematical field of game theory, Oskar Morgenstern. Secondly, the field of economics in a behavioural sense was scarcely studied in the 1950s and 60s. The Von Neumann-Morgenstern utility theorem, which assumes that individuals make decisions that maximise utility, had been proven 6 years prior to the Allais paradox, in 1947. Thirdly, In 1979, Allais's work was noticed and cited by Amos Tversky and Daniel Kahneman in their paper introducing Prospect Theory, titled Prospect Theory: An Analysis of Decision under Risk. Critiquing expected utility theory and postulating that individuals perceive the prospect of a loss differently to that of a gain, Kahneman and Tversky's research credited the Allais paradox as the “best known counterexample to expected utility theory”. 
Furthermore, Kahneman and Tversky's article became one of the most cited articles in Econometrica, thus adding to the popularity of the Allais paradox. The Allais Paradox was again presented in Tversky and Kahneman's Thinking, Fast and Slow (2011), a New York Times Best Seller. Finally, Allais's prominence was further promoted when he received the Nobel Prize in Economic Sciences in 1988 for "his pioneering contributions to the theory of markets and efficient utilization of resources", thus bolstering the recognition of the paradox. Criticisms Whilst the Allais paradox is considered a counterexample to expected utility theory, Luc Wathieu, Professor of Marketing at Georgetown University, argued that the Allais paradox demonstrates the need for a modified utility function, and is not paradoxical in nature. In A Critique of the Allais Paradox (1993), Wathieu contends that the paradox "does not constitute a valid test of the independence axiom" that is required in expected utility theory. This is because the paradox involves the comparison of preferences between two separate cases, rather than the preferences in one choice set. Applications The mismatch between human behaviour and classical economics that is highlighted by the Allais paradox indicates the need for a remodelled expected utility function to account for the violation of the independence axiom. Yoshimura et al. (2013) modified the standard utility function proposed by expected utility theory, coined the “dynamic utility function”, by including a variable that is dependent on the state of an individual. The findings of this experiment suggested that the switching of preferences apparent in the Allais paradox are due to the state of the individual, which include bankruptcy and wealth. List & Haigh (2005) tests the appearance of the Allais paradox in the behaviours of professional traders through an experiment and compares the results with those of university students. By providing two lotteries similar to those used to prove the Allais paradox, the researchers concluded that those who were professional traders less frequently make choices that are inconsistent with expected utility, as opposed to students. See also Ellsberg paradox Priority heuristic St. Petersburg paradox References Further reading review Lewis, Michael. (2017). The Undoing Project: A Friendship That Changed Our Minds. New York: Norton. Behavioral economics Behavioral finance Decision-making paradoxes Paradoxes in utility theory fr:Maurice Allais#Le paradoxe d'Allais
Allais paradox
Biology
2,639
46,671,034
https://en.wikipedia.org/wiki/Free-piston%20linear%20generator
The free-piston linear generator (FPLG) uses chemical energy from fuel to drive magnets through a stator and converts this linear motion into electric energy. Because of its versatility, low weight and high efficiency, it can be used in a wide range of applications, although it is of special interest to the mobility industry as range extenders for electric vehicles. Description The free-piston engine linear generators can be divided in 3 subsystems: One (or more) reaction section with a single or two opposite pistons One (or more) linear electric generator, which is composed of a static part (the stator) and a moving part (the magnets) connected to the connection rod. One (or more) return unit to push the piston back due to the lack of a crankshaft (typically a gas spring or an opposed reaction section) The FPLG has many potential advantages compared to traditional electric generator powered by an internal combustion engine. One of the main advantages of the FPLG comes from the absence of crankshaft. It leads to a smaller and lighter generator with fewer parts. This also allows a variable compression and expansion ratios, which makes it possible to operate with different kinds of fuel. The linear generator also allows the control of the resistance force, and therefore a better control of the piston's movement and of the reaction. The total efficiency (including mechanical and generator) of free-piston linear generators can be significantly higher than conventional internal combustion engines and comparable to fuel cells. Development The first patents of free-piston linear generators date from around 1940, however in the last decades, especially after the development of rare-earth magnets and power electronics, many different research groups have been working in this field. These include: Libertine LPE, UK. West Virginia University (WVU), USA. Chalmers University of Technology, Sweden. Electric Generator, Pontus Ostenberg, USA - 1943 Free Piston Engine, Van Blarigan, Sandia National Laboratory, USA - Since 1995 Aquarius Engines, Israel. Free-Piston Engine Project, Newcastle University, UK - Since 1999 Shanghai Jiaotong University, China. Free-Piston Linear Generator, German Aerospace Center (DLR), Germany - since 2002 Free Piston Power Pack (FP3), Pempek Systems, Australia - 2003 Free Piston Energy Converter, KTH Electrical Engineering, Sweden - 2006 Linear Combustion Engine, Czech technical university - 2004 Internal Combustion Linear Generator Integrated Power System, Xu Nanjing, China - 2010 micromer ag (Switzerland) - 2012 Free-piston engine linear generator, Toyota, Japan - 2014 Although there is a variety of names and abbreviations for the technology, the terms "Free-piston linear generator" and "FPLG" particularly refer to the project at German Aerospace Center. Operation The free-piston linear generator generally consists of three subsystems: combustion chamber, linear generator and return unit (normally a gas spring), which are coupled through a connecting rod. In the combustion chamber, a mixture of fuel and air is ignited, increasing the pressure and forcing the moving parts (connection rod, linear generator and pistons) in the direction of the gas spring. The gas spring is compressed, and, while the piston is near the bottom dead center (BDC), fresh air and fuel are injected into the combustion chamber, expelling the exhaust gases. 
The gas spring pushes the moving parts assembly back to the top dead center (TDC), compressing the mixture of air and fuel that was injected and the cycle repeats. This works in a similar manner to the two-stroke engine, however it is not the only possible configuration. The linear generator can generate a force opposed to the motion, not only during expansion but also during compression. The magnitude and the force profile affect the piston movement, as well as the overall efficiency. Variations The FPLG has been conceived in many different configurations, but for most applications, particularly for the automotive industry, focus has been on two opposed pistons in the same cylinder with one combustion chamber with a gas spring at the end of each cylinder. This balances out the forces in order to reduce vibration and noise. In the simplest case, a second unit is just a mirror of the first, with no functional connection to the first. Alternatively, a single combustion chamber or gas spring can be used, allowing for a more compact design and easier synchronization between the pistons. The gas spring and combustion chamber can be placed on the ends of the connection rods, or they can share the same piston, using opposite sides in order to reduce space. The linear generator itself has also many different configurations and forms. It can be designed as round tube, a cylinder or even flat plate in order to reduce the center of gravity, and/or improve the heat dissipation. The free-piston linear generator's great versatility comes from the absence of a crankshaft, removing a great pumping loss, giving the engine a further degree of freedom. The combustion can be two-stroke engine or four-stroke engine. However, a four-stroke requires a much higher intermediate storage of energy, the rotational inertia of the crankshaft, to propel the piston through the four strokes. With the absence of a crankshaft, a gas spring would need to power the piston through the intake, compression, and exhaust strokes. Hence the reason why most of the current research focuses on the two-strokes cycle. Several variations are possible for combustion: Spark ignition (Otto) Compression ignition (Diesel) Homogeneous charge compression ignition (HCCI) The DLR research The Institute of Vehicle Concepts of the German Aerospace Center is currently developing a FPLG (or Freikolbenlineargenerator - FKLG) since 2002, and has published several papers about this subject. During the first few years of research, the theoretical background along with the 3 subsystems were developed separately. In 2013, the first entire system was built and operated successfully. The German center is currently into its 2nd version of the entire system, on which two opposed cylinders will be used in order to reduce vibration and noise, making it viable for the automotive industry. See also Free-piston engine References External links FPLG project from the DLR A history of free piston linear alternator developments Free-piston engines Internal combustion engine Engines Piston engines Engine technology
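As a rough illustration of the operation described above, the following toy simulation treats the mover as a point mass pushed outward by a constant "combustion" force whenever it sits near top dead centre, pulled back by a linearised gas spring, and braked by the linear generator modelled as viscous damping whose absorbed power is the electrical output. Every number and name in it is an invented illustration, not data from any actual FPLG design.

```python
# Toy force balance for the mover: combustion impulse near top dead centre,
# a linearised gas-spring restoring force, and generator braking modelled as
# viscous damping whose absorbed power is counted as electrical output.
m, k, c_gen, f_comb = 3.0, 4.0e4, 120.0, 2.0e3   # kg, N/m, N*s/m, N (illustrative)
dt, t_end = 1e-5, 0.5                            # time step and run time in seconds

x, v = 0.0, 0.0          # mover position (m) and velocity (m/s)
energy_out, stroke = 0.0, 0.0
for _ in range(int(t_end / dt)):
    firing = f_comb if (x < 0.005 and v >= 0.0) else 0.0   # crude combustion timing
    a = (firing - k * x - c_gen * v) / m
    v += a * dt
    x += v * dt
    energy_out += c_gen * v * v * dt    # power taken by the generator, integrated
    stroke = max(stroke, abs(x))

print(f"peak stroke ~ {stroke:.3f} m, energy to generator ~ {energy_out:.0f} J")
```

Increasing the generator coefficient c_gen in this sketch shortens the stroke and changes the energy delivered per cycle, which is the qualitative trade-off the passage above describes: the resistance force of the linear generator shapes the piston motion and the overall efficiency.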
Free-piston linear generator
Physics,Technology,Engineering
1,287