A galactic algorithm is an algorithm with record-breaking theoretical (asymptotic) performance, but which is not used due to practical constraints. Typical reasons are that the performance gains only appear for problems that are so large they never occur, or the algorithm's complexity outweighs a relatively small gain in performance. Galactic algorithms were so named by Richard Lipton and Ken Regan, [1] because they will never be used on any data sets on Earth. Even if they are never used in practice, galactic algorithms may still contribute to computer science: showing that a conjectured bound can actually be achieved could be important and often is a great reason for finding such algorithms. For example, if tomorrow there were a discovery that showed there is a factoring algorithm with a huge but provably polynomial time bound, that would change our beliefs about factoring. The algorithm might never be used, but would certainly shape future research into factoring. An example of a galactic algorithm is the fastest known way to multiply two numbers, [4] which is based on a 1729-dimensional Fourier transform. [5] It needs O(n log n) bit operations, but as the constants hidden by the big O notation are large, it is never used in practice. However, it also shows why galactic algorithms may still be useful. The authors state: "we are hopeful that with further refinements, the algorithm might become practical for numbers with merely billions or trillions of digits." [5] The AKS primality test is galactic. It is the most theoretically sound of any known algorithm that can take an arbitrary number and tell if it is prime. In particular, it is provably polynomial-time, deterministic, and unconditionally correct. All other known algorithms fall short on at least one of these criteria, but the shortcomings are minor and the calculations are much faster, so they are used instead. ECPP in practice runs much faster than AKS, but it has never been proven to be polynomial time. The Miller–Rabin test is also much faster than AKS, but produces only a probabilistic result. However, the probability of error can be driven down to arbitrarily small values (say < 10^−100), good enough for practical purposes. There is also a deterministic version of the Miller–Rabin test, which runs in polynomial time over all inputs, but its correctness depends on the generalized Riemann hypothesis (which is widely believed, but not proven). The existence of these (much) faster alternatives means AKS is not used in practice.
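The contrast with the practical competitors just described is easy to see in code. Below is a minimal, illustrative sketch of the probabilistic Miller–Rabin test; the function name and the default round count are choices made for this example, not taken from the cited sources.

```python
import random

def is_probable_prime(n: int, rounds: int = 40) -> bool:
    """Miller-Rabin probabilistic primality test.

    With 40 independent rounds a composite number slips through with
    probability at most 4**-40 (about 10**-24); roughly 170 rounds push
    the error below the 10**-100 level mentioned above.
    """
    if n < 2:
        return False
    for p in (2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37):
        if n % p == 0:
            return n == p
    # Write n - 1 as d * 2**s with d odd.
    d, s = n - 1, 0
    while d % 2 == 0:
        d //= 2
        s += 1
    for _ in range(rounds):
        a = random.randrange(2, n - 1)
        x = pow(a, d, n)            # fast modular exponentiation
        if x in (1, n - 1):
            continue
        for _ in range(s - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False            # a witnesses that n is composite
    return True                     # prime with overwhelming probability

print(is_probable_prime(2**61 - 1))   # True: a Mersenne prime
print(is_probable_prime(2**61 + 1))   # False: divisible by 3
```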
The first improvement over brute-force matrix multiplication (which needs O(n^3) multiplications) was the Strassen algorithm: a recursive algorithm that needs O(n^2.807) multiplications. This algorithm is not galactic and is used in practice. Further extensions of this, using sophisticated group theory, are the Coppersmith–Winograd algorithm and its slightly better successors, needing O(n^2.373) multiplications. These are galactic – "We nevertheless stress that such improvements are only of theoretical interest, since the huge constants involved in the complexity of fast matrix multiplication usually make these algorithms impractical." [6] Claude Shannon showed a simple but asymptotically optimal code that can reach the theoretical capacity of a communication channel. It requires assigning a random code word to every possible n-bit message, then decoding by finding the closest code word. If n is chosen large enough, this beats any existing code and can get arbitrarily close to the capacity of the channel. Unfortunately, any n big enough to beat existing codes is also completely impractical. [7] These codes, though never used, inspired decades of research into more practical algorithms that today can achieve rates arbitrarily close to channel capacity. [8] The problem of deciding whether a graph G contains H as a minor is NP-complete in general, but when H is fixed, it can be solved in polynomial time. The running time for testing whether H is a minor of G in this case is O(n^2), [9] where n is the number of vertices in G and the big O notation hides a constant that depends superexponentially on H. The constant is greater than 2↑↑(2↑↑(2↑↑(h/2))) in Knuth's up-arrow notation, where h is the number of vertices in H. [10] Even the case of h = 4 cannot be reasonably computed, as the constant is greater than 2 pentated by 4, or 2 tetrated by 65536; that is, 2↑↑↑4 = 2↑↑65536, a power tower of 65536 twos.
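To get a feel for how quickly the up-arrow bounds above outgrow anything computable, here is a small illustrative recursion for Knuth's up-arrow notation (a toy helper written for this article, not a standard library function); only the very smallest inputs can actually be evaluated.

```python
def up_arrow(a: int, n: int, b: int) -> int:
    """Compute a ↑^n b in Knuth's up-arrow notation.

    One arrow (n = 1) is ordinary exponentiation; each additional arrow
    iterates the previous operation, so values explode almost at once.
    """
    if n == 1:
        return a ** b
    if b == 0:
        return 1
    return up_arrow(a, n - 1, up_arrow(a, n, b - 1))

print(up_arrow(2, 2, 3))                  # 2↑↑3 = 2**(2**2) = 16
print(up_arrow(2, 2, 4))                  # 2↑↑4 = 2**16 = 65536
print(up_arrow(2, 2, 5).bit_length())     # 2↑↑5 = 2**65536, a 65537-bit number
# The graph-minor constant above exceeds 2↑↑↑4 = 2↑↑65536, a power tower of
# 65536 twos -- far beyond anything this function (or any computer) can evaluate.
```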
In cryptography jargon, a "break" is any attack faster in expectation than brute force – i.e., performing one trial decryption for each possible key. For many cryptographic systems, breaks are known, but are still practically infeasible with current technology. One example is the best attack known against 128-bit AES, which takes only 2^126 operations. [11] Despite being impractical, theoretical breaks can provide insight into vulnerability patterns, and sometimes lead to discovery of exploitable breaks. For several decades, the best known approximation to the traveling salesman problem in a metric space was the very simple Christofides algorithm, which produced a path at most 50% longer than the optimum. (Many other algorithms could usually do much better, but could not provably do so.) In 2020, a newer and much more complex algorithm was discovered that can beat this by 10^−34 percent. [12] Although no one will ever switch to this algorithm for its very slight worst-case improvement, it is still considered important because "this minuscule improvement breaks through both a theoretical logjam and a psychological one". [13] A single algorithm, "Hutter search", can solve any well-defined problem in an asymptotically optimal time, barring some caveats. It works by searching through all possible algorithms (by runtime), while simultaneously searching through all possible proofs (by length of proof), looking for a proof of correctness for each algorithm. Since the proof of correctness is of finite size, it "only" adds a constant and does not affect the asymptotic runtime. However, this constant is so big that the algorithm is entirely impractical. [14] [15] For example, if the shortest proof of correctness of a given algorithm is 1000 bits long, the search will examine at least 2^999 other potential proofs first. Hutter search is related to Solomonoff induction, which is a formalization of Bayesian inference. All computable theories (as implemented by programs) which perfectly describe previous observations are used to calculate the probability of the next observation, with more weight put on the shorter computable theories. Again, the search over all possible explanations makes this procedure galactic. Simulated annealing, when used with a logarithmic cooling schedule, has been proven to find the global optimum of any optimization problem. However, such a cooling schedule results in entirely impractical runtimes, and is never used. [16] Nevertheless, knowing this ideal algorithm exists has led to practical variants that are able to find very good (though not provably optimal) solutions to complex optimization problems. [17] The expected linear time MST algorithm is able to discover the minimum spanning tree of a graph in O(m + n), where m is the number of edges and n is the number of nodes of the graph. [18] However, the constant factor that is hidden by the big O notation is huge enough to make the algorithm impractical. An implementation is publicly available [19] and, given the experimentally estimated implementation constants, it would only be faster than Borůvka's algorithm for graphs in which m + n > 9·10^151. [20] For hash tables, researchers have found an algorithm that achieves the provably best-possible [21] asymptotic performance in terms of time-space tradeoff. [22] But it remains purely theoretical: "Despite the new hash table's unprecedented efficiency, no one is likely to try building it anytime soon. It's just too complicated to construct." [23] and "in practice, constants really matter. In the real world, a factor of 10 is a game ender." [23] Connectivity in undirected graphs (also known as USTCON, for Undirected Source-Target CONnectivity) is the problem of deciding if a path exists between two nodes in an undirected graph, or in other words, if they are in the same connected component. If you are allowed to use O(N) space, polynomial time solutions such as Dijkstra's algorithm have been known and used for decades. But for many years it was unknown if this could be done deterministically in O(log N) space (class L), though it was known to be possible with randomized algorithms (class RL). In 2004, a breakthrough paper by Omer Reingold showed that USTCON is in fact in L. [24] However, despite the asymptotically better space requirement, this algorithm is galactic. The constant hidden by the O(log N) is so big that in any practical case it uses far more memory than the well-known O(N) algorithms, plus it is exceedingly slow. So despite being a landmark in theory (more than 1000 citations as of 2025) it is never used in practice.
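For contrast with Reingold's O(log N)-space result, the linear-space approach actually used in practice is straightforward. The sketch below decides undirected s-t connectivity with a plain breadth-first search; the graph representation and function name are illustrative choices for this example, not taken from the cited paper.

```python
from collections import deque

def ustcon(adj: dict, s, t) -> bool:
    """Undirected s-t connectivity using O(N) extra space.

    A breadth-first search visits each vertex at most once, so the queue
    and the visited set together use memory linear in the size of the
    graph -- exponentially more than Reingold's O(log N) bound, but
    trivially fast in practice.
    """
    if s == t:
        return True
    visited = {s}
    queue = deque([s])
    while queue:
        u = queue.popleft()
        for v in adj.get(u, ()):
            if v == t:
                return True
            if v not in visited:
                visited.add(v)
                queue.append(v)
    return False

# Two triangles joined by one edge; vertex 6 is isolated.
graph = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2, 4, 5],
         4: [3, 5], 5: [3, 4], 6: []}
print(ustcon(graph, 0, 5))   # True
print(ustcon(graph, 0, 6))   # False
```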
Low-density parity-check codes, also known as LDPC or Gallager codes, are an example of an algorithm that was galactic when first developed, but became practical as computation improved. They were originally conceived by Robert G. Gallager in his doctoral dissertation [25] at the Massachusetts Institute of Technology in 1960. [26] [27] Although their performance was much better than other codes of that time, reaching the Gilbert–Varshamov bound for linear codes, the codes were largely ignored as their iterative decoding algorithm was prohibitively computationally expensive for the hardware available. [28] Renewed interest in LDPC codes emerged following the invention of the closely related turbo codes (1993), whose similarly iterative decoding algorithm outperformed other codes used at that time. LDPC codes were subsequently rediscovered in 1996. [29] They are now used in many applications.
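The iterative decoding that was too expensive for 1960s hardware is, at its core, simple to state. The toy sketch below implements a hard-decision ("bit-flipping") decoder of the kind Gallager described, using a tiny made-up set of parity checks; real LDPC codes use far larger, sparser check matrices and soft-decision message passing.

```python
def bit_flip_decode(checks, word, max_iters=50):
    """Hard-decision (bit-flipping) iterative decoding.

    Each check is a list of bit positions whose XOR must be 0. Repeatedly
    flip the bit that appears in the most unsatisfied checks until every
    check passes, or give up after max_iters iterations.
    """
    bits = list(word)
    for _ in range(max_iters):
        unsatisfied = [c for c in checks if sum(bits[i] for i in c) % 2]
        if not unsatisfied:
            return bits                    # a valid codeword was reached
        votes = {}
        for c in unsatisfied:
            for i in c:
                votes[i] = votes.get(i, 0) + 1
        worst = max(votes, key=votes.get)  # most-blamed bit
        bits[worst] ^= 1
    return None                            # decoding failed

# Toy parity checks (Hamming-style, for illustration only).
checks = [[0, 1, 2, 4], [1, 2, 3, 5], [0, 2, 3, 6]]
sent = [1, 0, 1, 1, 0, 0, 1]               # satisfies all three checks
received = [1, 0, 0, 1, 0, 0, 1]           # bit 2 flipped by channel noise
print(bit_flip_decode(checks, received))   # recovers [1, 0, 1, 1, 0, 0, 1]
```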
https://en.wikipedia.org/wiki/Galactic_algorithm
Galactic astronomy is the study of the Milky Way galaxy and all its contents. This is in contrast to extragalactic astronomy , which is the study of everything outside our galaxy, including all other galaxies. Galactic astronomy should not be confused with galaxy formation and evolution , which is the general study of galaxies , their formation, structure, components, dynamics, interactions, and the range of forms they take. The Milky Way galaxy, where the Solar System is located, is in many ways the best-studied galaxy, although important parts of it are obscured from view in visible wavelengths by regions of cosmic dust . The development of radio astronomy , infrared astronomy and submillimetre astronomy in the 20th century allowed the gas and dust of the Milky Way to be mapped for the first time. A standard set of subcategories is used by astronomical journals to split up the subject of Galactic Astronomy: [ 1 ] [ citation needed ]
https://en.wikipedia.org/wiki/Galactic_astronomy
The galactic coordinate system is a celestial coordinate system in spherical coordinates , with the Sun as its center, the primary direction aligned with the approximate center of the Milky Way Galaxy , and the fundamental plane parallel to an approximation of the galactic plane but offset to its north. It uses the right-handed convention , meaning that coordinates are positive toward the north and toward the east in the fundamental plane . [ 1 ] Longitude (symbol l ) measures the angular distance of an object eastward along the galactic equator from the Galactic Center. Analogous to terrestrial longitude , galactic longitude is usually measured in degrees (°). Latitude (symbol b ) measures the angle of an object northward of the galactic equator (or midplane) as viewed from Earth. Analogous to terrestrial latitude , galactic latitude is usually measured in degrees (°). The first galactic coordinate system was used by William Herschel in 1785. A number of different coordinate systems, each differing by a few degrees, were used until 1932, when Lund Observatory assembled a set of conversion tables that defined a standard galactic coordinate system based on a galactic north pole at RA 12 h 40 m , dec +28° (in the B1900.0 epoch convention) and a 0° longitude at the point where the galactic plane and equatorial plane intersected. [ 1 ] In 1958, the International Astronomical Union (IAU) defined the galactic coordinate system in reference to radio observations of galactic neutral hydrogen through the hydrogen line , changing the definition of the Galactic longitude by 32° and the latitude by 1.5°. [ 1 ] In the equatorial coordinate system , for equinox and equator of 1950.0 , the north galactic pole is defined at right ascension 12 h 49 m , declination +27.4°, in the constellation Coma Berenices , with a probable error of ±0.1°. [ 2 ] Longitude 0° is the great semicircle that originates from this point along the line in position angle 123° with respect to the equatorial pole . The galactic longitude increases in the same direction as right ascension. Galactic latitude is positive towards the north galactic pole, with a plane passing through the Sun and parallel to the galactic equator being 0°, whilst the poles are ±90°. [ 3 ] Based on this definition, the galactic poles and equator can be found from spherical trigonometry and can be precessed to other epochs ; see the table. The IAU recommended that during the transition period from the old, pre-1958 system to the new, the old longitude and latitude should be designated l I and b I while the new should be designated l II and b II . [ 3 ] This convention is occasionally seen. [ 4 ] Radio source Sagittarius A* , which is the best physical marker of the true Galactic Center , is located at 17 h 45 m 40.0409 s , −29° 00′ 28.118″ (J2000). [ 2 ] Rounded to the same number of digits as the table, 17 h 45.7 m , −29.01° (J2000), there is an offset of about 0.07° from the defined coordinate center, well within the 1958 error estimate of ±0.1°. Due to the Sun's position, which currently lies 56.75 ± 6.20 ly north of the midplane, and the heliocentric definition adopted by the IAU, the galactic coordinates of Sgr A* are latitude +0° 07′ 12″ south, longitude 0° 04′ 06″ . Since as defined the galactic coordinate system does not rotate with time, Sgr A* is actually decreasing in longitude at the rate of galactic rotation at the sun, Ω , approximately 5.7 milliarcseconds per year (see Oort constants ). 
An object's location expressed in the equatorial coordinate system can be transformed into the galactic coordinate system using standard spherical-trigonometry conversion formulas, in which α is right ascension, δ is declination, NGP refers to the coordinate values of the north galactic pole, and NCP to those of the north celestial pole. [5] The reverse (galactic to equatorial) conversion can be accomplished with analogous formulas; a short numerical sketch of the forward transformation is given at the end of this section. In some applications use is made of rectangular coordinates based on galactic longitude and latitude and distance. In some work regarding the distant past or future, the galactic coordinate system is taken as rotating so that the x-axis always points to the centre of the galaxy. [6] There are two major rectangular variations of galactic coordinates, commonly used for computing space velocities of galactic objects. In these systems the xyz-axes are designated UVW, but the definitions vary by author. In one system, the U axis is directed toward the Galactic Center (l = 0°), and it is a right-handed system (positive towards the east and towards the north galactic pole); in the other, the U axis is directed toward the galactic anticenter (l = 180°), and it is a left-handed system (positive towards the east and towards the north galactic pole). [7] The galactic equator runs through the following constellations: [8]
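As promised above, here is a short numerical sketch of the equatorial-to-galactic transformation, using the commonly quoted J2000 orientation of the galactic frame; the constants and the function below are illustrative and are not taken from the cited references.

```python
from math import radians, degrees, sin, cos, asin, atan2

# Commonly quoted J2000 orientation of the galactic frame (approximate values).
ALPHA_NGP = radians(192.85948)   # right ascension of the north galactic pole
DELTA_NGP = radians(27.12825)    # declination of the north galactic pole
L_NCP = radians(122.93192)       # galactic longitude of the north celestial pole

def equatorial_to_galactic(ra_deg: float, dec_deg: float):
    """Convert equatorial (alpha, delta) to galactic (l, b), all in degrees."""
    a, d = radians(ra_deg), radians(dec_deg)
    # Galactic latitude: angular distance from the galactic equator.
    b = asin(sin(DELTA_NGP) * sin(d)
             + cos(DELTA_NGP) * cos(d) * cos(a - ALPHA_NGP))
    # Galactic longitude, measured eastward from the Galactic Center direction.
    y = cos(d) * sin(a - ALPHA_NGP)
    x = cos(DELTA_NGP) * sin(d) - sin(DELTA_NGP) * cos(d) * cos(a - ALPHA_NGP)
    l = L_NCP - atan2(y, x)
    return degrees(l) % 360.0, degrees(b)

# Sagittarius A* (J2000: 17h 45m 40.04s, -29d 00' 28.1") comes out at roughly
# l = 359.94 deg, b = -0.05 deg, i.e. essentially at the Galactic Center.
print(equatorial_to_galactic(266.41683, -29.00781))
```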
https://en.wikipedia.org/wiki/Galactic_coordinate_system
A galactic orientation describes the spatial orientation of a galactic plane, where such exists for a given galaxy. For a spiral galaxy, this can be obtained from the inclination of the galactic plane to the plane of the sky, and the position angle of the major axis as viewed from Earth. The result yields a direction perpendicular to the galactic plane. [1] In the case of the Milky Way, this is given by the coordinates of the galactic pole. Galactic clusters [2] [3] are gravitationally bound large-scale structures of multiple galaxies. The evolution of these aggregates is determined by the time and manner of formation and by how their structures and constituents have been changing with time. Gamow (1952) and Weizsäcker (1951) showed that the observed rotations of galaxies are important for cosmology. They postulated that the rotation of galaxies might be a clue to the physical conditions under which these systems formed. Thus, understanding the distribution of spatial orientations of the spin vectors of galaxies is critical to understanding the origin of the angular momenta of galaxies. There are mainly three scenarios for the origin of galaxy clusters and superclusters. These models are based on different assumptions about the primordial conditions, so they predict different spin vector alignments of the galaxies. The three hypotheses are the pancake model, the hierarchy model, and the primordial vorticity theory. The three are mutually exclusive as they produce contradictory predictions. However, the predictions made by all three theories are based on the precepts of cosmology. Thus, these models can be tested using a database with appropriate methods of analysis. A galaxy is a large gravitational aggregation of stars, dust, gas, and an unknown component termed dark matter. The Milky Way Galaxy [4] is only one of the billions of galaxies in the known universe. Galaxies are classified into spirals, [5] ellipticals, irregulars, and peculiars. Sizes can range from only a few thousand stars (dwarf irregulars) to 10^13 stars in giant ellipticals. Elliptical galaxies are spherical or elliptical in appearance. Spiral galaxies range from S0, the lenticular galaxies, to Sb, which have a bar across the nucleus, to Sc galaxies, which have strong spiral arms. In total count, ellipticals amount to 13%, S0 to 22%, Sa, b, c galaxies to 61%, irregulars to 3.5%, and peculiars to 0.9%. At the center of most galaxies is a high concentration of older stars. This portion of a galaxy is called the nuclear bulge. Beyond the nuclear bulge lies a large disc containing young, hot stars, called the disk of the galaxy. There is a morphological separation: ellipticals are most common in clusters of galaxies, and typically the center of a cluster is occupied by a giant elliptical, whereas spirals are most common in the field, i.e., not in clusters. The primordial vorticity theory predicts that the spin vectors of galaxies are distributed primarily perpendicular to the cluster plane. [6] The primordial vorticity theory is called a top-down scenario; sometimes it is also called the turbulence model. In the turbulence scenario, flattened rotating proto-clusters formed first, due to cosmic vorticity in the early universe, and subsequent density and pressure fluctuations caused galaxies to form. The idea that galaxy formation is initiated by primordial turbulence has a long history. Ozernoy (1971, 1978) proposed that galaxies form from high-density regions behind the shocks produced by turbulence.
According to the primordial vorticity theory, the presence of large chaotic velocities generates turbulence, which, in turn, produces density and pressure fluctuations. Density fluctuations on the scale of clusters of galaxies could be gravitationally bound, but galactic-mass fluctuations are always unbound. Galaxies form when unbound galactic-mass eddies, expanding faster than their bound cluster background, collide with each other as the clusters start to recollapse. These collisions produce shocks and high-density proto-galaxies at the eddy interfaces. As clusters recollapse, the system of galaxies undergoes a violent collective relaxation. The pancake model was first proposed in the 1970s by Yakob B. Zel'dovich at the Institute of Applied Mathematics in Moscow. [7] The pancake model predicts that the spin vectors of galaxies tend to lie within the cluster plane. In the pancake scenario, formation of clusters took place first and was followed by their fragmentation into galaxies due to adiabatic fluctuations. According to the non-linear gravitational instability theory, the growth of small inhomogeneities leads to the formation of thin, dense, gaseous condensations that are called 'pancakes'. These condensations are compressed and heated to high temperatures by shock waves, causing them to quickly fragment into gas clouds. The later clumping of these clouds results in the formation of galaxies and their clusters. Thermal, hydrodynamic, and gravitational instabilities arise during the course of evolution, leading to the fragmentation of gaseous proto-clusters and, subsequently, to the clustering of galaxies. The pancake scheme follows three simultaneous processes: first, gas cools and new clouds of cold gas form; secondly, these clouds cluster to form galaxies; and thirdly, the forming galaxies and, to an extent, single clouds cluster together to form a cluster of galaxies. According to the hierarchy model, the directions of the spin vectors should be distributed randomly. In the hierarchy model, galaxies formed first and then obtained their angular momenta through tidal forces while they were gathering gravitationally to form a cluster. Those galaxies grow by subsequent merging of proto-galactic condensations or even by merging of already fully formed galaxies. In this scheme, one could imagine that large irregularities like galaxies grew from small imperfections in the early universe under the influence of gravity. The angular momentum is transferred to a developing proto-galaxy by the gravitational interaction of the quadrupole moment of the system with the tidal field of the matter.
https://en.wikipedia.org/wiki/Galactic_orientation
A galactic quadrant , or quadrant of the Galaxy , is one of four circular sectors in the division of the Milky Way Galaxy. In actual astronomical practice, the delineation of the galactic quadrants is based upon the galactic coordinate system , which places the Sun as the pole of the mapping system . The Sun is used instead of the Galactic Center for practical reasons since all astronomical observations (by humans ) to date have been based on Earth or within the Solar System . Quadrants are described using ordinals —for example, "1st galactic quadrant", [ 1 ] "second galactic quadrant", [ 2 ] or "third quadrant of the Galaxy". [ 3 ] Viewing from the north galactic pole with 0 degrees (°) as the ray that runs starting from the Sun and through the galactic center, the quadrants are as follows (where l is galactic longitude ): Due to the orientation of the Earth to the rest of the galaxy, the 2nd galactic quadrant is primarily only visible from the northern hemisphere while the 4th galactic quadrant is mostly only visible from the southern hemisphere . Thus, it is usually more practical for amateur stargazers to use the celestial quadrants . Nonetheless, cooperating or international astronomical organizations are not so bound by the Earth's horizon . Based on a view from Earth, one may look towards major constellations for a rough sense of where the borders of the quadrants are: [ 5 ] (Note: by drawing a line through the following, one can also approximate the galactic equator .) A long tradition of dividing the visible skies into four precedes the modern definitions of four galactic quadrants. Ancient Mesopotamian formulae spoke of "the four corners of the universe" and of "the heaven's four corners", [ 6 ] and the Biblical Book of Jeremiah echoes this phraseology: "And upon Elam will I bring the four winds from the four quarters of heaven" (Jeremiah, 49:36). Astrology too uses quadrant systems to divide up its stars of interest. The astronomy of the location of constellations sees each of the Northern and Southern celestial hemispheres divided into four quadrants. "Galactic quadrants" within Star Trek are based around a meridian that runs from the center of the Galaxy through Earth's Solar System , [ 7 ] which is not unlike the system used by astronomers. However, rather than have the perpendicular axis run through the Sun, as is done in astronomy, the Star Trek version runs the axis through the galactic center. In that sense, the Star Trek quadrant system is less geocentric as a cartographical system than the standard. Also, rather than use ordinals, Star Trek designates them by the Greek letters Alpha , Beta , Gamma , and Delta . The Canadian Galactic Plane Survey (CGPS) created a radio map of the Galaxy based on Star Trek ' s quadrants, joking that "the CGPS is primarily concerned with Cardassians , while the SGPS (Southern Galactic Plane Survey) focuses on Romulans ". [ 8 ] "Galactic quadrants" within Star Wars canon astrography map depicts a top-down view of the galactic disk, with "Quadrant A" (i.e. "north") as the side of the galactic center that Coruscant is located on. As the capital planet of the Republic and later the Empire, Coruscant is used as the reference point for galactic astronomy, set at XYZ coordinates 0-0-0. Standardized galactic time measurements are also based on Coruscant's local solar day and year. The Imperium of Man's territory in the Milky Way Galaxy in Warhammer 40,000 is divided into five zones, known as "segmentae". 
[ 9 ] Navigation in the Milky Way is also identified with cardinal directions, indicating distance from the Sol System: for example, Ultima Segmentum, the largest segmentum in the Imperium of Man, is located to the galactic east of the Sol System. The 0° "north" in Imperial maps does not correspond to the 0° in the real-world.
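Returning to the astronomical convention described at the start of this article: the standard division is into four 90° slices of galactic longitude measured eastward from the Galactic Center. The small helper below (an illustrative function written for this article, not part of any astronomical library) makes the mapping explicit.

```python
def galactic_quadrant(l_deg: float) -> str:
    """Map a galactic longitude (degrees) to its quadrant.

    Standard convention: 1st quadrant 0-90 deg, 2nd 90-180 deg,
    3rd 180-270 deg, 4th 270-360 deg, measured eastward from the
    direction of the Galactic Center as seen from the Sun.
    """
    l = l_deg % 360.0
    ordinal = ["1st", "2nd", "3rd", "4th"][int(l // 90)]
    return f"{ordinal} galactic quadrant"

print(galactic_quadrant(30))    # 1st galactic quadrant
print(galactic_quadrant(200))   # 3rd galactic quadrant
```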
https://en.wikipedia.org/wiki/Galactic_quadrant
This glossary of astronomy is a list of definitions of terms and concepts relevant to astronomy and cosmology, their sub-disciplines, and related fields. Astronomy is concerned with the study of celestial objects and phenomena that originate outside the atmosphere of Earth. The field of astronomy features an extensive vocabulary and a significant amount of jargon.
https://en.wikipedia.org/wiki/Galactocentric_distance
Galactogen is a polysaccharide of galactose that functions as energy storage in pulmonate snails and some Caenogastropoda. [1] This polysaccharide is exclusive to reproduction and is found only in the albumen gland of the female snail reproductive system and in the perivitelline fluid of eggs. Galactogen serves as an energy reserve for developing embryos and hatchlings, and is later replaced by glycogen in juveniles and adults. [2] The advantage of accumulating galactogen instead of glycogen in eggs remains unclear, [3] although some hypotheses have been proposed (see below). Galactogen has been reported in the albumen gland of pulmonate snails such as Helix pomatia, [4] Limnaea stagnalis, [5] Oxychilus cellarius, [6] Achatina fulica, [7] Aplexa nitens and Otala lactea, [8] Bulimnaea megasoma, [9] Ariolimax columbianis, [10] Ariophanta, [11] Biomphalaria glabrata, [12] and Strophochelius oblongus. [13] This polysaccharide was also identified in the Caenogastropoda Pila virens and Viviparus, [11] Pomacea canaliculata, [14] and Pomacea maculata. [15] In adult gastropods, galactogen is confined to the albumen gland, showing a large variation in content during the year and reaching its peak in the reproductive season. [2] During the reproductive season, this polysaccharide is rapidly restored in the albumen gland after being transferred to the eggs, decreasing in total amount only after repeated ovipositions. [16] [17] In Pomacea canaliculata snails, galactogen would act, together with perivitellins, as a main limiting factor of reproduction. [17] This polysaccharide has been identified in the Golgi zone of the secretory cells of the albumen gland in the form of discrete granules 200 Å in diameter. [18] [19] [20] The appearance of galactogen granules within the secretory globules suggests that this is the site of biosynthesis of the polysaccharide. [1] [20] Apart from the albumen gland, galactogen is also found as a major component of the perivitelline fluid of snail eggs, comprising the main energy source for the developing embryo. [4] [5] [14] [15] Galactogen is a polymer of galactose with species-specific structural variations. In this polysaccharide, the D-galactose residues are predominantly β(1→3)- and β(1→6)-linked; however, some species also have β(1→2) and β(1→4) linkages. [3] The galactogen of the aquatic Basommatophora (e.g. Lymnaea, Biomphalaria) is highly branched, with only 5-8% of the sugar residues in linear sections, and β(1→3) and β(1→6) bonds alternate more or less regularly. In the terrestrial Stylommatophora (e.g. Helix, Arianta, Cepaea, Achatina) up to 20% of the sugar residues are linearly β(1→3)-bound. The galactogen of Ampullarius sp. has an unusually large proportion of linearly arranged sugars, with 5% β(1→3), 26% β(1→6), and 10% β(1→2). [3] Other analyses in Helix pomatia suggested a dichotomous structure, where each galactopyranose unit bears a branch or side chain. [21] [22] The molecular weights of galactogen extracted from the eggs of Helix pomatia and Limnaea stagnalis were estimated at 4×10^6 and 2.2×10^6, respectively. [23] [24] In these snails galactogen contains only D-galactose. [25] Depending upon the origin of the galactogen, apart from D-galactose, L-galactose, L-fucose, D-glucose, L-glucose and phosphate residues may also be present; [3] for instance, the galactogen from Ampullarius sp.
contains 98% D-galactose and 2% L-fucose, [26] and the one isolated from Pomacea maculata eggs consists of 68% D-galactose and 32% D-glucose. [15] Phosphate-substituted galactose residues are found in the galactogen of individual species from various snail genera such as Biomphalaria, Helix and Cepaea. [27] Therefore, current knowledge indicates that galactogen could be considered either a homopolysaccharide of galactose or a heteropolysaccharide dominated by galactose. Galactogen is synthesized by secretory cells in the albumen gland of adult female snails and later transferred to the egg. This process is under neurohormonal control, [9] [28] notably by the brain factor galactogenin. [29] The biochemical pathways for glycogen and galactogen synthesis are closely related. Both use glucose as a common precursor, and its conversion to activated galactose is catalyzed by UDP-glucose 4-epimerase and galactose-1-P uridylyltransferase. This enables glucose to be the common precursor for both glycogenesis and galactogenesis. [30] In fact, both polysaccharides are found in the same secretory cells of the albumen gland and are subject to independent seasonal variations. [19] Glycogen accumulates in autumn as a general energy store for hibernation, whereas galactogen is synthesized during spring in preparation for egg-laying. [31] It is commonly accepted that galactogen production is restricted to embryo nutrition and therefore is mainly transferred to eggs. Little is known about the galactogen-synthesizing enzymes. A D-galactosyltransferase was described in the albumen gland of Helix pomatia. [32] This enzyme catalyzes the transfer of D-galactose to a (1→6) linkage and is dependent upon the presence of acceptor galactogen. Similarly, a β-(1→3)-galactosyltransferase activity has been detected in albumen gland extracts from Limnaea stagnalis. [33] In embryos and fasting newly hatched snails, galactogen is most likely an important donor (via galactose) of metabolic intermediates. In feeding snails, the primary diet is glucose-containing starch and cellulose. These polymers are digested and contribute glucose to the pathways of intermediary metabolism. [1] Galactogen consumption begins at the gastrula stage and continues throughout development. Between 46 and 78% of egg galactogen disappears during embryo development; the remainder is used up within the first days after hatching. [9] Only snail embryos and hatchlings are able to degrade galactogen, whereas other animals and even adult snails cannot. [9] [34] [35] β-Galactosidase may be important in the release of galactose from galactogen; however, most of the catabolic pathway of this polysaccharide is still unknown. [1] Besides being a source of energy, a few other functions have been described for galactogen in snail eggs, all of them related to embryo defense and protection. Given that carbohydrates retain water, the high amount of this polysaccharide would protect the eggs of those snails that have aerial oviposition from desiccation. [36] [37] In addition, the high viscosity that the polysaccharide may confer to the perivitelline fluid has been suggested as a potential antimicrobial defense. [37] Since galactogen is a β-linked polysaccharide, like cellulose or hemicelluloses, specific biochemical adaptations such as specific glycosidases are needed to exploit it as a nutrient. However, apart from snail embryos and hatchlings, no animal seems to be able to catabolize galactogen, including adult snails.
This fact has led researchers to consider galactogen as part of an antipredation defense system exclusive to gastropods, deterring predators by lowering the nutritional value of the eggs. [15]
https://en.wikipedia.org/wiki/Galactogen
Galactolipids are a type of glycolipid whose sugar group is galactose. They differ from glycosphingolipids in that they do not have nitrogen in their composition. [1] They are the main part of plant membrane lipids, where they substitute for phospholipids to conserve phosphate for other essential processes. These chloroplast membranes contain a high quantity of monogalactosyldiacylglycerol (MGDG) and digalactosyldiacylglycerol (DGDG). They probably also assume a direct role in photosynthesis, as they have been found in the X-ray structures of photosynthetic complexes. [2] Galactolipids are more bioavailable than free fatty acids, and have been shown to exhibit COX-mediated anti-inflammatory activity. [3] Bio-guided fractionation of spinach leaves (Spinacia oleracea) revealed that alpha-linolenic acid galactolipids (18:3, n-3) were responsible for inhibitory effects on tumor promoter-induced Epstein-Barr virus (EBV) activation. [4] Recently, it has been demonstrated that this same galactolipid, 1,2-di-O-α-linolenoyl-3-O-α-D-galactopyranosyl-sn-glycerol, [5] may be important for the anti-inflammatory activity of Dog Rose (Rosa canina), a medicinal plant with documented effect on inflammatory diseases such as arthritis. The galactosphingolipid galactocerebroside (GalC) and its sulfated derivative sulfatide are also abundantly present (together with a small group of proteins) in myelin, the membrane around the axons in the nervous system of vertebrates. [6] It is galactolipids, rather than phlorotannins, that act as herbivore deterrents in Fucus vesiculosus against the sea urchin Arbacia punctulata. [7]
https://en.wikipedia.org/wiki/Galactolipid
Galactolysis refers to the catabolism of galactose. Galactolysis is a metabolic process by which galactose is catabolized into glucose derivatives. This process takes place primarily in the liver, where galactose is converted through the Leloir pathway into derivatives that subsequently enter the glycolysis pathway to be broken down further for energy production. Galactolysis is essential for the metabolism of dietary galactose, which is commonly obtained from lactose in milk and dairy products. Defects in this pathway can lead to a rare genetic disorder called galactosemia, [1] a condition characterized by toxic accumulation of galactose. Galactose is a six-carbon sugar (hexose) commonly ingested as lactose, which is hydrolyzed by the enzyme lactase into glucose and galactose. The galactose is then absorbed into the bloodstream, where galactolysis occurs. The primary pathway of galactolysis in humans is known as the Leloir pathway. This pathway was discovered by Luis Federico Leloir, who received a Nobel Prize in Chemistry in 1970. In the first step, α-D-galactose is phosphorylated by the enzyme galactokinase (GALK1) to form α-D-galactose-1-phosphate. Next, the enzyme galactose-1-phosphate uridylyltransferase [2] (GALT) facilitates the exchange of the phosphate group from galactose-1-phosphate with the UDP group from UDP-glucose, resulting in the formation of UDP-galactose and glucose-1-phosphate. UDP-galactose is then converted into UDP-glucose by changing the orientation of the hydroxyl group on the 4th carbon through epimerization, which helps replenish the UDP-glucose used in the second step. Finally, glucose-1-phosphate is converted to glucose-6-phosphate by the enzyme phosphoglucomutase. [3] [1] [4] Disruptions in the Leloir pathway can lead to a rare inherited genetic disorder known as galactosemia. This condition is caused by deficiencies in galactokinase, galactose-1-phosphate uridylyltransferase, or UDP-galactose 4-epimerase, resulting in the toxic accumulation of galactose in the tissues. There are three types of galactosemia, or galactose deficiencies, corresponding to deficiencies of these three enzymes. Most people suffering from galactosemia must make significant lifestyle adjustments and receive medical care. Firstly, most patients have to remove milk and dairy products from their diet to prevent the accumulation of galactose. In addition, many children with type 1 galactosemia may have speech delays, problems with motor coordination, and learning disabilities. These often require interventions such as speech therapy and special education support to aid the child in their development. [1]
https://en.wikipedia.org/wiki/Galactolysis
Galactomannans are polysaccharides consisting of a mannose backbone with galactose side groups; more specifically, a (1→4)-linked beta-D-mannopyranose backbone with branchpoints from their 6-positions linked to alpha-D-galactose (i.e. 1→6-linked alpha-D-galactopyranose). In order of increasing mannose-to-galactose ratio: [1] Galactomannans are often used in food products to increase the viscosity of the water phase. Guar gum has been used to add viscosity to artificial tears, but is not as stable as carboxymethylcellulose. [2] Galactomannans are used in foods as stabilisers. Guar and locust bean gum (LBG) are commonly used in ice cream to improve texture and reduce ice cream meltdown. LBG is also used extensively in cream cheese, [3] [unreliable medical source?] fruit preparations and salad dressings. Tara gum is seeing growing acceptability as a food ingredient but is still used to a much lesser extent than guar or LBG. Guar has the highest usage in foods, largely due to its low and stable price. As galactomannan is a component of the cell wall of the mold Aspergillus [4] and is released during growth, detection of galactomannan in blood is used to diagnose invasive aspergillosis infections in humans. This is performed with monoclonal antibodies in a double-sandwich ELISA; this assay from Bio-Rad Laboratories was approved by the FDA in 2003 and is of moderate accuracy. [5] The assay is most useful in patients who have had hemopoietic cell transplants (stem cell transplants). False-positive Aspergillus galactomannan tests have been found in patients on intravenous treatment with some antibiotics or with fluids containing gluconate or citric acid, such as some transfused platelets, parenteral nutrition or PlasmaLyte. [6] [7]
https://en.wikipedia.org/wiki/Galactomannan
A galactosylceramide, or galactocerebroside, is a type of cerebroside consisting of a ceramide with a galactose residue at the 1-hydroxyl moiety. The galactose is cleaved by galactosylceramidase. Galactosylceramide is a marker for oligodendrocytes in the brain, whether or not they form myelin. [1] [2]
https://en.wikipedia.org/wiki/Galactosylceramide
Galanin-like peptide (GALP) is a neuropeptide present in humans and other mammals. It is a 60-amino acid polypeptide produced in the arcuate nucleus of the hypothalamus and the posterior pituitary gland. [1] [2] [3] It is involved in the regulation of appetite and may also have other roles, such as in inflammation, sex behavior, and stress. [4] [5] [6] [7] [8] [9] Findings additionally suggest that GALP could play a role in energy metabolism through its ability to maintain continual activation of the sympathetic nervous system (SNS) via thermogenesis, the production of heat within living organisms. [10] In addition, administration of GALP directly into the brain leads to a reduction in the secretion of thyroid-stimulating hormone (TSH), which indicates the involvement of GALP in the neuroendocrine regulation of the hypothalamic-pituitary-thyroid (HPT) axis and adds further evidence for the role of GALP in energy homeostasis. [11]
https://en.wikipedia.org/wiki/Galanin-like_peptide
The article concerns the total synthesis of galanthamine, a drug used for the treatment of mild to moderate Alzheimer's disease. [1] The natural source of galantamine is certain species of daffodil, and because these species are scarce and because the isolation of galanthamine from daffodil is expensive (a 1996 figure specifies 50,000 US dollars per kilogram; the yield from daffodil is 0.1–0.2% dry weight), alternative synthetic sources are under development by means of total synthesis. In 1962 racemic galanthamine and epi-galanthamine were prepared by organic reduction of racemic narwedine by D. H. R. Barton. Narwedine is the related enone (galanthamine being the corresponding allyl alcohol) obtained in an oxidative coupling. Chemical yield: 1.4%. In addition they isolated (−)-narwedine by chiral resolution from a mixture of racemic narwedine and 0.5 equivalents of (+)-galanthamine. In this way they were able to obtain (−)-galanthamine again by reduction. In 1976 Kametani obtained both galanthamine enantiomers by using a derivative of tartaric acid as a chiral resolving agent. In 1977 Koga obtained both enantiomers via a chiral pool synthesis starting from L-tyrosine, [2] [3] and in 1988 Carrol optimized the oxidative coupling route to 11% yield based on isovanillin. In 1989 Vlahov exploited asymmetric reduction by biocatalysis in the synthesis of several galanthamine precursors, and in 1994 Shieh & Carlson [4] obtained (−)-galanthamine by spontaneous resolution of its narwedine precursor. Racemic narwedine was treated with 0.01 equivalent of (+)-galanthamine, resulting in a 76% yield. Narwedine is a racemic conglomerate, allowing the isolation of the S,S enantiomer from the R,R enantiomer by simple crystallization. What made the process unique is that both enantiomers are in dynamic chemical equilibrium with each other through a common phenol in a Michael-type reaction brought about by triethylamine. In 1999 Jordis performed (−)-galanthamine synthesis on a multikilogram scale based on Carrol chemistry and the Shieh/Carlson chiral resolution. This would become the basis for current industrial production by Sanochemia (AT). In 2000 Fels proposed an intramolecular Heck reaction for the construction of the galanthamine backbone, and in the same year Trost & Toste obtained (−)-galanthamine in an asymmetric synthesis involving asymmetric allylic alkylation and an intramolecular Heck reaction. Improved methods were published in 2002 and 2005 (see below). In 2004 Node obtained (−)-galanthamine via a remote asymmetric induction method starting from the chiral compound D-phenylalanine. [5] Brown prepared (−)-galanthamine in 2007 starting from isovanillin. [6] Isovanillin was also used by Magnus (2009). [7] D-glucose was used by Chida (2010). [8] Syntheses of racemic galanthamine have been reported by Wang in 2006 [9] and by Saito in 2008. [10] The method outlined by Jordis in 1999 forms the basis for industrial galanthamine production. [11] This method is based on electrophilic halogenation of 3,4-dimethoxybenzaldehyde 1 (accessible from isovanillin) with bromine / acetic acid to organobromide 2, followed by regioselective demethylation with sulfuric acid to phenol 3. This compound reacts in a reductive amination (sodium borohydride) with tyramine 4 to amine 5, which is formylated with ethyl formate and formic acid in dioxane in the next step to compound 6. An oxidative phenol coupling takes place next with potassium ferricyanide and potassium carbonate in toluene to give 7.
The C8a-C14 bond is formed in the first step followed by a Michael addition of the other phenolic group to the newly formed enone group. The reaction step creates two stereocenters leading to two diastereomeric pairs of enantiomers . By the nature of the ABD skeleton the desired S,S/R,R pair is the major product formed and the other pair S,R/R,S is removed in workup. The ketone group is protected as the ketal 8 with 1,2-propylene glycol enabling the organic reduction by lithiumaluminiumhydride of both the bromine group and the formyl group. In the second phase the ketal group is removed ( hydrochloric acid ) forming racemic (S,S/R,R) narwedine 9 . Enantiopure (−)-narwedine is obtained via the dynamic chiral resolution method pioneered by Shieh/Carlson and in the final step the ketone is reduced to the alcohol with L-selectride . This final step is enantioselective producing the desired S,S,R compound because the approach of H − is restricted to the Si face as the Re face is shielded by the DB ring system. Formation of the S,S,S epimer is also avoided by keeping the reaction temperature below −15 °C. The total synthesis of galanthamine ( Trost 2005) [ 12 ] is described as follows: the sequence starts by bromination by electrophilic aromatic substitution of isovanillin 1 to bromophenol 2 , then by synthesis of the second intermediate 5 by reacting glutaraldehyde 3 in a coupled aldol reaction and Horner–Wadsworth–Emmons reaction with trimethyl phosphonoacetate 4 . The hydroxyl group is activated as a trichloroethyl carbonate leaving group to 6 . Next an enantioselective Trost AAA reaction takes place between bromophenol 2 and carbonate 6 to the allyl ether 7 . Next the aldehyde group is protected as an acetal in 8 and this step enables the organic reduction of the ester group to the alcohol 9 with DIBAH and subsequent homologation of this alcohol to a nitrile by Mitsunobu -type reaction using acetone cyanohydrine as the source of cyanide, to yield 10 followed by aldehyde deprotection to 11 . The intramolecular Heck reaction to 12 forms the dihydrofuran ring. Allylic oxidation by selenium dioxide provides allylic alcohol 13 with the correct stereochemistry. The aldehyde reacts with methylamine to the imine 14 and reduction of the imine and nitrile by DIBAL-H leading to ring-closure to the aminal 15 (not isolated) followed by acid quenching gives the hemi-aminal 16 . In the final step the hemiaminal is reduced to give Galanthamine 17 together with 6% of the epi isomer 18 . [ 13 ] A total synthesis reported by Eli Lilly and the University of Southampton in 2007 also starts from isovanillin. [ 6 ] The aldehyde group in its derivative 1 is converted to its amine by reductive amination with methylamine which is then protected as a BOC group in 2 . The remainder of the carbon framework is added with chiral propargyl alcohol 3 (introducing the 4a stereocenter and obtained by chiral synthesis of the ketone with R-Alpine borane ) in a Mitsunobu reaction to aryl ether 4 . The trimethylsilyl protective group is removed by potassium carbonate in methanol and the subsequent enyne metathesis reaction with Grubbs' catalyst gives diene 5 . A hydroboration–oxidation reaction converts 5 to alcohol 6 and an intramolecular Heck reaction affords tricycle 7 with alkene isomerization and establishment of the 8a stereocenter with correct stereochemistry based on chiral induction . The allyl alcohol group in 8 is introduced by selenoxide oxidation with an excess of the desired diastereomer . 
In the final step to galanthamine 9 the hydroxyl group is activated as the triflate and the amine group as the mesylate for intramolecular azepine ring closure via nucleophilic substitution (with 6% epimer formation).
https://en.wikipedia.org/wiki/Galantamine_total_synthesis
Galaxy Zoo is a crowdsourced astronomy project which invites people to assist in the morphological classification of large numbers of galaxies . It is an example of citizen science as it enlists the help of members of the public to help in scientific research. [ 1 ] [ 2 ] There have been 15 versions as of July 2017. [ 3 ] Galaxy Zoo is part of the Zooniverse , a group of citizen science projects. An outcome of the project is to better determine the different aspects of objects and to separate them into classifications. A key factor leading to the creation of the project was the problem of what has been referred to as data deluge , where research produces vast sets of information to the extent that research teams are not able to analyse and process much of it. [ 4 ] [ 5 ] [ 6 ] Kevin Schawinski , previously an astrophysicist at Oxford University and co-founder of Galaxy Zoo, described the problem that led to Galaxy Zoo's creation when he was set the task of classifying the morphology of more than 900,000 galaxies by eye that had been imaged by the Sloan Digital Sky Survey at the Apache Point Observatory in New Mexico , USA . "I classified 50,000 galaxies myself in a week, it was mind-numbing." [ 7 ] Chris Lintott , a co-founder of the project and a professor of astrophysics at the University of Oxford , stated: "In many parts of science, we're not constrained by what data we can get, we're constrained by what we can do with the data we have. Citizen science is a very powerful way of solving that problem." [ 6 ] The Galaxy Zoo concept was inspired by others such as Stardust@home , where the public was asked by NASA to search images obtained from a mission to a comet for interstellar dust impacts. [ 7 ] Unlike earlier internet-based citizen science projects such as SETI@home , which used spare computer processing power to analyse data (also known as distributed or volunteer computing), Stardust@home involved the active participation of human volunteers to complete the research task. [ 8 ] In August 2014, the Stardust team reported the discovery of first potential interstellar space particles after citizen scientists had looked through more than a million images. [ 9 ] In 2007, when Galaxy Zoo first started, the science team hoped that 20–30,000 people would take part in classifying the 900,000 galaxies that made up the sample . [ 7 ] It had been estimated that a perfect graduate student working 24 hours a day 7 days a week would take 3–5 years to classify all the galaxies in the sample once. [ 6 ] However, in the first Galaxy Zoo, more than 40 million classifications were made in approximately 175 days by more than 100,000 volunteers, providing an average of 38 classifications per galaxy. [ 10 ] Chris Lintott commented that: "One advantage is that you get to see parts of space that have never been seen before. These images were taken by a robotic telescope and processed automatically, so the odds are that when you log on, that first galaxy you see will be one that no human has seen before." [ 7 ] This was confirmed by Kevin Schawinski : "Most of these galaxies have been photographed by a robotic telescope, and then processed by computer. So this is the first time they will have been seen by human eyes.". [ 8 ] Galaxy Zoo recruited volunteers to help with the largest galaxy census ever carried out. 
[ 8 ] Opening the project to the general public saved the professional astronomers the task of studying all the galaxies themselves, resulting in classification of a large number of galaxies undertaken in a shorter time than what smaller research teams would be able to do, classifying 900,000 galaxies in months rather than years if done by smaller research teams. [ 8 ] Computer programs had been unable to reliably classify galaxies: several groups had attempted to develop image-analysis programs. [ 11 ] Kevin Schawinski stated: "The human brain is actually much better than a computer at these pattern recognition tasks." [ 8 ] [ 12 ] However, volunteers astonished the project's organizers by classifying the entire catalog years ahead of schedule. [ 11 ] An online forum was later set up two weeks after the initial start, partially due to a large volume of emails being sent around, to the point that it was troublesome for those receiving them to process and respond to them. This led volunteers to point out anomalies that on closer inspection have turned out to be new astronomical objects such as ' Hanny's Voorwerp ' and ' the Green Pea galaxies '. [ 11 ] "I'm incredibly impressed by what they've managed to achieve," says University of Oxford astronomer Roger Davies , former president of the Royal Astronomical Society ."They've made it possible to do things with a huge survey." [ 11 ] The Galaxy Zoo forum became a hotbed for the discussion of the SDSS images and more general science questions. Its 'global moderator', volunteer communuity manager and UK astronomy enthusiast Alice Sheppard, said of it: "I don't quite know what it is, but Galaxy Zoo does something to people. The contributions, both creative and academic, that people have made to the forum are as stunning as the sight of any spiral, and never fail to move me." [ 13 ] Author Michael Nielsen wrote in his book Reinventing Discovery : "But Galaxy Zoo can go beyond computers, because it can also apply human intelligence in the analysis, the kind of intelligence that recognizes that Hanny's Voorwerp or a Pea galaxy is out of the ordinary, and deserves further investigation. Galaxy Zoo is thus a hybrid, able to do deep analyses of large data sets that are impossible in any other way." [ 13 ] A community feeling was also created. Roger Davies stated: "The community of Galaxy Zoo gives them the opportunity to participate that they're looking for." [ 11 ] This community became known as the 'Zooites'. [ 14 ] [ 15 ] Aida Berges, a homemaker living in Puerto Rico who has classified hundreds of thousands of galaxies, stated: "Every galaxy has a story to tell. They are beautiful, mysterious, and show how amazing our universe is. It was love at first sight when I started in Galaxy Zoo ... It is a magical place, and it feels like coming home at last." [ 6 ] [ 13 ] The Galaxy Zoo Forum became a read-only archive in July 2014. After seven years online and over 650,000 posts, it continues to generate science. As of July 2017, 60 scientific papers have been published as a direct result of Galaxy Zoo and hundreds of thousands of volunteers. [ 3 ] [ 16 ] In previous studies though, it was found that data produced by volunteers was more likely to contain bias or mistakes. [ 17 ] [ 18 ] [ 19 ] However Chris Lintott says that crowdsourced results are reliable, as proven by the fact that they are being used and published in peer-reviewed science papers. [ 17 ] Indeed, other scientists have questioned crowdsourcing and crowdsourced studies. 
Steven Bamford, a Galaxy Zoo research scientist, stated: "As a professional researcher you take pride in the work that you do. And the idea that anybody off the street could come and do something better sounds threatening but also implausible." [ 17 ] David Anderson , the founder of BOINC , stated: [For many sceptical scientists] "There's this idea that they're giving up control somehow, and that their importance would be diminished". [ 20 ] The continuing goodwill of citizen scientists is also questioned. Chris Lintott stated: "Rather than letting anyone pitch for volunteers, we'd like to be a place where people can come and expect a certain level of commitment". [ 20 ] A conference was held on 10–12 July 2017 at St. Catherine's College , Oxford , to recognise the tenth anniversary of the start of Galaxy Zoo in July 2007. [ 3 ] [ 21 ] [ 22 ] Co-founder Chris Lintott stated: "What started as a small project has been completely transformed by the enthusiasm and efforts of the volunteers... It has had a real impact on our understanding of galaxy evolution." [ 3 ] Since July 2007, 125 million galaxy classifications have been made across at least 15 different projects, resulting in 60 peer-reviewed academic papers. [ 3 ] Discoveries include Hanny's Voorwerp , Green pea galaxies and, more recently, objects known as 'Yellow Balls'. [ 3 ] The conference Twitter feed, #GZ10, notes that 10 of the 60 papers have gained over 100 citations [within the Astrophysics Data System ] in 10 years. [ 16 ] Karen Masters , an astrophysicist at Portsmouth University and project scientist for Galaxy Zoo, stated: "We're genuinely asking for help with something we cannot do ourselves and the results have made a big contribution to the field." [ 3 ] As a result of Galaxy Zoo's success, the citizen science web portal Zooniverse was started, which has since hosted some 100 projects. [ 3 ] The original Galaxy Zoo consisted of 100,000 galaxies imaged by the Sloan Digital Sky Survey . With so many galaxies, it had been assumed that it would take years for visitors to the site to work through them all, but within 24 hours of launch, the website was receiving almost 70,000 classifications an hour. In the end, more than 50 million classifications were received by the project during its first year, contributed by more than 150,000 people. [ citation needed ] This was started in July 2007 and retired in 2009. [ 10 ] Galaxy Zoo 2 consisted of some 250,000 of the brightest galaxies from the Sloan Digital Sky Survey . [ 23 ] It allowed for a much more detailed classification, by shape and by the intensity or dimness of the galactic core , and with a special section for oddities like mergers or ring galaxies . [ 24 ] The sample also contained fewer optical oddities. The project collected classifications from February 2009 [ 23 ] to April 2010 [ 23 ] and closed with some 60 million classifications. [ 23 ] Galaxy Zoo Mergers studied the role of interacting galaxies. Interacting galaxies are galaxies that exhibit a gravitational influence on one another. This influence is exhibited over the course of millions or even billions of years as two or more galaxies pass nearby one another. The near passage of two massive structures can cause the galaxies to be distorted and possibly merge. Galaxy Zoo Mergers aimed to provide a set of tools that allowed users to randomly sample various sets of simulation parameters in rapid succession by showing 8 simulation outputs at a time. This started in November 2009 and was retired in June 2012.
[ 25 ] [ 26 ] Galaxy Zoo Supernovae used images from its partner, the Palomar Transient Factory , to find supernovae. The task in this Galaxy Zoo project was to help catch exploding stars – supernovae. Data for the site was provided by an automatic survey in California at the Palomar Observatory. Astronomers followed up on the best candidates at telescopes around the world. This started in August 2009 and was retired in August 2012. [ 27 ] [ 28 ] The site's third incarnation, Galaxy Zoo Hubble , drew from surveys conducted by the Hubble Space Telescope to view earlier epochs of galaxy formation. In these surveys, which involve many days of dedicated observing time, we can see light from galaxies which has taken billions of years to reach us. The idea behind Galaxy Zoo Hubble was to be able to compare galaxies then to galaxies now, giving a clearer understanding of what factors influence their growth, whether through mergers, active black holes or simply star formation. This started in April 2010 and was retired in September 2012. [ 29 ] In October 2016, a study titled "Galaxy Zoo: Morphological Classifications for 120,000 Galaxies in HST Legacy Imaging" was accepted for publication by the journal Monthly Notices of the Royal Astronomical Society . [ 30 ] The abstract begins: "We present the data release paper for the Galaxy Zoo: Hubble project. This is the third phase in a large effort to measure reliable, detailed morphologies of galaxies by using crowdsourced visual classifications of colour composite images. Images in Galaxy Zoo Hubble were selected from various publicly-released Hubble Space Telescope Legacy programs conducted with the Advanced Camera for Surveys, with filters that probe the rest-frame optical emission from galaxies out to z ≈1." [ 30 ] The present Galaxy Zoo (4) combines new imaging from the Sloan Digital Sky Survey with the most distant images yet from the Hubble Space Telescope CANDELS survey. The CANDELS survey makes use of the new Wide Field Camera 3 to take ultra-deep images of the universe. The project also includes images taken with the United Kingdom Infrared Telescope in Hawaii, for the recently completed UKIDSS project. UKIDSS is the largest, deepest survey of the sky at infrared wavelengths. [ 31 ] Kevin Schawinski explained: "The two sources of data work together perfectly: the new images from Sloan give us our most detailed view of the local universe, while the CANDELS survey from the Hubble telescope allows us to look deeper into the universe's past than ever before." [ 31 ] In October 2016, a paper was accepted for publication in MNRAS titled "Galaxy Zoo: Quantitative Visual Morphological Classifications for 48,000 galaxies from CANDELS". [ 32 ] Quoting: "We present quantified visual morphologies of approximately 48,000 galaxies observed in three Hubble Space Telescope legacy fields by the Cosmic And Near-infrared Deep Extragalactic Legacy Survey (CANDELS) and classified by participants in the Galaxy Zoo project. 90% of galaxies have z < 3 and are observed in rest-frame optical wavelengths by CANDELS. Each galaxy received an average of 40 independent classifications, which we combine into detailed morphological information on galaxy features such as clumpiness, bar instabilities, spiral structure, and merger and tidal signatures." [ 32 ] On 17 December 2013, Galaxy Zoo opened a project called Radio Galaxy Zoo. It uses radio observations from the Australia Telescope Large Area Survey and compares them to the Spitzer Space Telescope 's infrared data.
There are about 6000 images to look through. [ 33 ] The CSIRO press release states that Radio Galaxy Zoo is a new citizen science project that lets anyone become a cosmic explorer. It continues that by matching galaxy images with radio images from CSIRO's Australia Telescope, a participant can work out if a galaxy has a supermassive black hole . [ 33 ] Another project that uses data from volunteer classifications is Galaxy Zoo Quench, which studies, among other things, the interactions between galaxies and the effect these interactions have on starbursts. [ 34 ] [ 35 ] This has yet to be completed. As of July 2017, the full list of Galaxy Zoo projects (15) is: Galaxy Zoo 1, Galaxy Zoo 2, Galaxy Zoo Mergers, Galaxy Zoo Supernovae, Galaxy Zoo Hubble, Galaxy Zoo CANDELS, Radio Galaxy Zoo, Galaxy Zoo Quench, Galaxy Zoo DECALS 1, Galaxy Zoo DECALS 2 + SDSS, Illustris, UKIDSS, Galaxy Zoo Bar Lengths and two more. [ 3 ] In June 2019, citizen scientists through Galaxy Zoo reported that the usual Hubble classification , particularly concerning spiral galaxies , may not be supported, and may need updating. [ 36 ] [ 37 ] One of the original aims for Galaxy Zoo was to explore which way galaxies rotated. Cosmologist Kate Land stated: "Some people have argued that galaxies are rotating all in agreement with each other, not randomly as we'd expect. We want people to classify the galaxies according to which way they're rotating and I'll be able to go and see if there's anything bizarre going on. If there are any patterns that we're not expecting, it could really turn up some surprises." [ 7 ] In Galaxy Zoo 1, volunteers were asked to judge from the SDSS images whether the galaxies were elliptical or spiral and, if spiral, whether they were rotating in a clockwise (Z-wise) or anti-clockwise (S-wise) direction. The rotation, also called the chirality , of galaxies has been examined in several Galaxy Zoo related papers. [ 38 ] [ 39 ] [ 40 ] Among the results, a psychological bias was demonstrated. [ 38 ] Galaxy Zoo scientists wanted to determine whether clockwise- and anticlockwise-rotating spiral galaxies were evenly distributed, or whether an intrinsic property of the universe caused them to rotate one way or the other. When the science team came to analyse the results, they found an excess of anticlockwise-spinning spiral galaxies. [ 38 ] But when the team asked volunteers to classify the same images which had then been reversed, there was still an excess of anticlockwise classifications, indicating that the human brain has real difficulty discerning between something rotating clockwise and something rotating anticlockwise. [ 38 ] Having measured this effect, the team could adjust for it, and established that spirals near each other tended to rotate in the same direction. [ 38 ] Mainstream astronomical theory before Galaxy Zoo held that elliptical (or 'early type') galaxies were red in color and spiral (or 'late type') galaxies were blue in color; several papers published as a result of Galaxy Zoo have proved otherwise. [ 34 ] [ 41 ] [ 42 ] [ 43 ] A population of blue ellipticals was found. [ 41 ] These are galaxies which have changed their shape from spiral to oval, but still have young stars in them. [ 41 ] Indeed, Galaxy Zoo came about through Schawinski's search for blue elliptical galaxies: near the end of 2006, he had spent most of his waking hours trying to find these rare galaxies. [ 44 ] Blueness in galaxies means that new stars are forming. However, ellipticals are almost always red, indicating that they are full of old and dead stars.
[ 44 ] Thus, blue ellipticals are paradoxical, but give clues to star-formation in different types of galaxies. [ 44 ] Also, a population of red spirals was found. [ 42 ] These have a different evolutionary path from normal spiral galaxies, showing red spiral galaxies can stop making new stars without changing their shape. [ 42 ] Using Galaxy Zoo data for their sample, Tojeiro et al. 2013 found (pg.5): 13,959 red ellipticals, 381 blue ellipticals, 5,139 blue late-type spirals, 294 red late-type spirals, 1,144 blue early-type spirals, and 1,265 red early-type spirals. [ 43 ] Chris Lintott stated: "These red spiral galaxies had been lurking in the data and no-one had spotted them. They were staring us in the face. Now we know that a third of spirals around the edges of some clusters of galaxies are red." [ 45 ] He also stated: "These results are possible thanks to a major scientific contribution from our many volunteer armchair astronomers. No group of professionals could have classified this many galaxies alone." [ 46 ] A team using the Hubble Space Telescope has independently verified the existence of red spirals. [ 47 ] Meghan Gray stated: "Our two projects have approached the problem from very different directions. It is gratifying to see that we each provide independent pieces of the puzzle pointing to the same conclusion." [ 45 ] [ 46 ] It is thought that Red Spirals are galaxies in the process of transition from young to old. [ 48 ] They are more massive than blue spirals and are found on the outskirts of large clusters of galaxies. Chris Lintott stated: "We think what we're seeing is galaxies that have been gently strangled, so to speak, where somehow the gas supply for star formation has been cut off, but that they've been strangled so gently that the arms are still there." [ 48 ] The cause might be the Red Spiral's gentle interaction with a galaxy cluster. He further explained: "The kind of thing we're imagining [is that] as the galaxy moves into a denser environment, there's lot of gas in clusters as well as galaxies, and it's possible the gas from the galaxy just gets stripped off by the denser medium it's plowing into." [ 48 ] The properties of Galactic Dust have been examined in several Galaxy Zoo papers. [ 49 ] [ 50 ] [ 51 ] [ 52 ] The interstellar medium of spiral galaxies is filled by gas and small solid particles called dust grains. Despite constituting only a minor fraction of the galactic mass (between 0.1% and 0.01% for the Milky Way), dust grains have a major role in shaping the appearance of a galaxy. Because of their dimension (typically smaller than a few tenths of a micron ), they are very effective in absorbing and scattering the radiation emitted by stars in the ultraviolet , optical and near-infrared . [ 53 ] Although the interstellar regions are more devoid of matter than any vacuum artificially created on earth, there is matter in space. These regions have very low densities and consist mainly of gas (99%) and dust. In total, approximately 15% of the visible matter in the Milky Way is composed of interstellar gas and dust. [ 54 ] The study of dust in galaxies is interesting for many reasons. [ 55 ] For example, the dimming effects of dust need to be corrected for to estimate the total mass of a galaxy from measurements of its light. Standard candles used to measure the expansion history of the Universe also need to be corrected for dust extinction. 
A catalogue of 1,990 overlapping galaxies was published in 2013, which had been collected by volunteers on the Galaxy Zoo forum using SDSS images. The abstract states: 'Analysis of galaxies with overlapping images offers a direct way to probe the distribution of dust extinction and its effects on the background light.' [ 52 ] This catalogue was also used in a study of ultraviolet attenuation laws. [ 56 ] Some spiral galaxies have central bar-shaped structures composed of stars. These galaxies are called 'barred spirals' and have been investigated by Galaxy Zoo in several studies. [ 57 ] [ 58 ] [ 59 ] [ 60 ] It is unclear why some spiral galaxies have bars and some do not. [ 61 ] Galaxy Zoo research has shown that red spirals are about twice as likely to host bars as blue spirals. These colours are significant. Blue galaxies get their hue from the hot young stars they contain, implying that they are forming stars in large numbers. In red galaxies, this star formation has stopped, leaving behind the cooler, long-lived stars that give them their red colour. [ 61 ] Karen Masters, a scientist involved in the studies, stated: "For some time data have hinted that spirals with more old stars are more likely to have bars, but with such a large number of bar classifications we're much more confident about our results. It's not yet clear whether the bars are some side effect of an external process that turns spiral galaxies red, or if they alone can cause this transformation." [ 61 ] Spiral galaxies usually have 'bulges' at their centers. These bulges are huge, tightly packed groups of stars. However, using Galaxy Zoo volunteer classifications, it has been found that some spiral galaxies do not have bulges. [ 62 ] [ 63 ] Many galactic bulges are thought to host a supermassive black hole at their centers: however pure disk galaxies with no bulges but with growing central black holes were found. [ 62 ] That pure disk galaxies and their central black holes may be consistent with a relation derived from elliptical and bulge-dominated galaxies with very different formation histories implies the details of stellar galaxy evolution and dynamics may not be fundamental to the co-evolution of galaxies and black holes. [ 62 ] It seems that these bulgeless galaxies have formed in environments isolated from other galaxies. [ 64 ] It is hypothesised that the black hole mass may be more tightly tied to the overall gravitational potential of a galaxy and therefore its dark matter halo, rather than to the dynamical bulge component. [ 64 ] In September 2014, a paper titled: "Galaxy Zoo: CANDELS Barred Disks and Bar Fractions" was accepted for publication by the MNRAS . [ 65 ] This was the first set of results from the Hubble Space Telescope CANDELS survey that was part of Galaxy Zoo 4. The study reports "the discovery of strong barred structures in massive disk galaxies at z ≈1.5 in deep rest-frame optical images from CANDELS". [ 65 ] From within a sample of 876 disk galaxies identified by visual classification in Galaxy Zoo 4, 123 barred galaxies are examined. It is found that the bar fraction across the redshift range 0.5 < z < 2 does not significantly evolve. [ 65 ] (See also under Retired projects above.) Galaxy Zoo Mergers was a Galaxy Zoo project started in November 2009 and retired in June 2012. There have also been a number of studies on galaxy mergers, among which was a survey of ≈3000, which presented "the largest, most homogeneous catalogue of merging galaxies in the nearby universe". 
[ 66 ] [ 67 ] This catalogue was spread over two papers and was a result of volunteers selecting likely candidates from Galaxy Zoo 1 and posting them on the Galaxy Zoo forum. Other papers that have used Galaxy Zoo data have led to follow-up observations, including some taken by the Chandra X-ray Observatory . [ 50 ] [ 68 ] [ 69 ] [ 70 ]
https://en.wikipedia.org/wiki/Galaxy_Zoo
Galaxy effective radius or half-light radius ( R e {\displaystyle R_{e}} ) is the radius at which half of the total light of a galaxy is emitted. [ 1 ] [ 2 ] This assumes the galaxy has either intrinsic spherical symmetry or is at least circularly symmetric as viewed in the plane of the sky. Alternatively, a half-light contour , or isophote , may be used for spherically and circularly asymmetric objects. R e {\displaystyle R_{e}} is an important length scale in the R 4 {\displaystyle {\sqrt[{4}]{R}}} term in de Vaucouleurs's law , [ 3 ] which characterizes a specific rate at which surface brightness decreases as a function of radius: I ( R ) = I e ⋅ e − 7.67 ( R / R e 4 − 1 ) {\displaystyle I(R)=I_{e}\cdot e^{-7.67\left({\sqrt[{4}]{R/{R_{e}}}}-1\right)}} where I e {\displaystyle I_{e}} is the surface brightness at R = R e {\displaystyle R=R_{e}} . At R = 0 {\displaystyle R=0} , I ( R = 0 ) = I e ⋅ e 7.67 ≈ 2000 ⋅ I e {\displaystyle I(R=0)=I_{e}\cdot e^{7.67}\approx 2000\cdot I_{e}} . Thus, the central surface brightness is approximately 2000 ⋅ I e {\displaystyle 2000\cdot I_{e}} .
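For readers who want to experiment with the profile, the following Python sketch (not part of the cited sources) evaluates the de Vaucouleurs law for an assumed effective radius, checks that the central brightness is roughly 2000 times I_e, and verifies numerically that about half of the total light falls inside R_e; the specific parameter values are illustrative assumptions.

```python
import numpy as np

def de_vaucouleurs(R, R_e, I_e):
    """Surface brightness I(R) = I_e * exp(-7.67 * ((R / R_e)**0.25 - 1))."""
    return I_e * np.exp(-7.67 * ((R / R_e) ** 0.25 - 1.0))

R_e = 4.0   # assumed effective radius, e.g. in kpc (illustrative)
I_e = 1.0   # surface brightness at R = R_e, arbitrary units

# central brightness: e**7.67 is roughly 2100, i.e. about 2000 * I_e
print(de_vaucouleurs(1e-12, R_e, I_e))

# numerical check that ~half of the total light lies inside R_e
R = np.geomspace(1e-6 * R_e, 50.0 * R_e, 40000)
dR = np.diff(R)
Rm = 0.5 * (R[1:] + R[:-1])                           # midpoint radii
dL = 2.0 * np.pi * Rm * de_vaucouleurs(Rm, R_e, I_e) * dR
print(dL[Rm <= R_e].sum() / dL.sum())                  # close to 0.5, as expected for a half-light radius
```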
https://en.wikipedia.org/wiki/Galaxy_effective_radius
In cosmology , the study of galaxy formation and evolution is concerned with the processes that formed a heterogeneous universe from a homogeneous beginning , the formation of the first galaxies , the way galaxies change over time, and the processes that have generated the variety of structures observed in nearby galaxies. Galaxy formation is hypothesized to occur from structure formation theories, as a result of tiny quantum fluctuations in the aftermath of the Big Bang . The simplest model in general agreement with observed phenomena is the Lambda-CDM model —that is, clustering and merging allows galaxies to accumulate mass, determining both their shape and structure. Hydrodynamics simulation, which simulates both baryons and dark matter , is widely used to study galaxy formation and evolution. Because of the inability to conduct experiments in outer space, the only way to “test” theories and models of galaxy evolution is to compare them with observations. Explanations for how galaxies formed and evolved must be able to predict the observed properties and types of galaxies. Edwin Hubble created an early galaxy classification scheme, now known as the Hubble tuning-fork diagram. It partitioned galaxies into ellipticals , normal spirals , barred spirals (such as the Milky Way ), and irregulars . These galaxy types exhibit the following properties which can be explained by current galaxy evolution theories: Astronomers now believe that disk galaxies likely formed first, then evolved into elliptical galaxies through galaxy mergers. Current models also predict that the majority of mass in galaxies is made up of dark matter , a substance which is not directly observable, and might not interact through any means except gravity. This observation arises because galaxies could not have formed as they have, or rotate as they are seen to, unless they contain far more mass than can be directly observed. The earliest stage in the evolution of galaxies is their formation. When a galaxy forms, it has a disk shape and is called a spiral galaxy due to spiral-like "arm" structures located on the disk. There are different theories on how these disk-like distributions of stars develop from a cloud of matter: however, at present, none of them exactly predicts the results of observation. Olin J. Eggen , Donald Lynden-Bell , and Allan Sandage [ 2 ] in 1962, proposed a theory that disk galaxies form through a monolithic collapse of a large gas cloud. The distribution of matter in the early universe was in clumps that consisted mostly of dark matter. These clumps interacted gravitationally, putting tidal torques on each other that acted to give them some angular momentum. As the baryonic matter cooled, it dissipated some energy and contracted toward the center. With angular momentum conserved, the matter near the center speeds up its rotation. Then, like a spinning ball of pizza dough, the matter forms into a tight disk. Once the disk cools, the gas is not gravitationally stable, so it cannot remain a singular homogeneous cloud. It breaks, and these smaller clouds of gas form stars. Since the dark matter does not dissipate as it only interacts gravitationally, it remains distributed outside the disk in what is known as the dark halo . Observations show that there are stars located outside the disk, which does not quite fit the "pizza dough" model. It was first proposed by Leonard Searle and Robert Zinn [ 3 ] that galaxies form by the coalescence of smaller progenitors. 
In contrast to this coalescence picture, the earlier monolithic-collapse model is known as a top-down formation scenario; it is quite simple yet no longer widely accepted. More recent theories instead describe the clustering of dark matter halos in a bottom-up process. Instead of large gas clouds collapsing to form a galaxy in which the gas breaks up into smaller clouds, it is proposed that matter started out in these "smaller" clumps (mass on the order of globular clusters ), and then many of these clumps merged to form galaxies, [ 4 ] which then were drawn by gravitation to form galaxy clusters . This still results in disk-like distributions of baryonic matter with dark matter forming the halo for all the same reasons as in the top-down theory. Models using this sort of process predict more small galaxies than large ones, which matches observations. Astronomers do not currently know what process stops the contraction. In fact, theories of disk galaxy formation are not successful at producing the rotation speed and size of disk galaxies. It has been suggested that the radiation from bright newly formed stars, or from an active galactic nucleus , can slow the contraction of a forming disk. It has also been suggested that the dark matter halo can pull the galaxy, thus stopping disk contraction. [ 5 ] The Lambda-CDM model is a cosmological model that explains the formation of the universe after the Big Bang . It is a relatively simple model that predicts many properties observed in the universe, including the relative frequency of different galaxy types; however, it underestimates the number of thin disk galaxies in the universe. [ 6 ] The reason is that these galaxy formation models predict a large number of mergers. If a disk galaxy merges with another galaxy of comparable mass (at least 15 percent of its mass), the merger will likely destroy, or at a minimum greatly disrupt, the disk, and the resulting galaxy is not expected to be a disk galaxy (see next section). While this remains an unsolved problem for astronomers, it does not necessarily mean that the Lambda-CDM model is completely wrong, but rather that it requires further refinement to accurately reproduce the population of galaxies in the universe. Elliptical galaxies (most notably supergiant ellipticals , such as ESO 306-17 ) are among the largest galaxies known thus far. Their stars are on orbits that are randomly oriented within the galaxy (i.e. they are not rotating like disk galaxies). A distinguishing feature of elliptical galaxies is that the velocity of the stars does not necessarily contribute to a flattening of the galaxy, as it does in spiral galaxies. [ 7 ] Elliptical galaxies have central supermassive black holes , and the masses of these black holes correlate with the galaxy's mass. Elliptical galaxies have two main stages of evolution. The first is due to the supermassive black hole growing by accreting cooling gas. The second stage is marked by the black hole stabilizing by suppressing gas cooling, thus leaving the elliptical galaxy in a stable state. [ 8 ] The mass of the black hole is also correlated to a property called sigma , which is the dispersion of the velocities of stars in their orbits. This relationship, known as the M-sigma relation , was discovered in 2000. [ 9 ] Elliptical galaxies mostly lack disks, although some bulges of disk galaxies resemble elliptical galaxies. Elliptical galaxies are more likely found in crowded regions of the universe (such as galaxy clusters ). Astronomers now see elliptical galaxies as some of the most evolved systems in the universe.
It is widely accepted that the main driving force for the evolution of elliptical galaxies is mergers of smaller galaxies. Many galaxies in the universe are gravitationally bound to other galaxies, which means that they will never escape their mutual pull. If those colliding galaxies are of similar size, the resultant galaxy will appear similar to neither of the progenitors, [ 10 ] but will instead be elliptical. There are many types of galaxy mergers, which do not necessarily result in elliptical galaxies, but result in a structural change. For example, a minor merger event is thought to be occurring between the Milky Way and the Magellanic Clouds. Mergers between such large galaxies are regarded as violent, and the frictional interaction of the gas between the two galaxies can cause gravitational shock waves , which are capable of forming new stars in the new elliptical galaxy. [ 11 ] By sequencing several images of different galactic collisions, one can observe the timeline of two spiral galaxies merging into a single elliptical galaxy. [ 12 ] In the Local Group , the Milky Way and the Andromeda Galaxy are gravitationally bound, and currently approaching each other at high speed. Simulations show that the Milky Way and Andromeda are on a collision course, and are expected to collide in less than five billion years. During this collision, it is expected that the Sun and the rest of the Solar System will be ejected from its current path around the Milky Way. The remnant could be a giant elliptical galaxy. [ 13 ] One observation that must be explained by a successful theory of galaxy evolution is the existence of two different populations of galaxies on the galaxy color-magnitude diagram. Most galaxies tend to fall into two separate locations on this diagram: a "red sequence" and a "blue cloud". Red sequence galaxies are generally non-star-forming elliptical galaxies with little gas and dust, while blue cloud galaxies tend to be dusty star-forming spiral galaxies. [ 15 ] [ 16 ] As described in previous sections, galaxies tend to evolve from spiral to elliptical structure via mergers. However, the current rate of galaxy mergers does not explain how all galaxies move from the "blue cloud" to the "red sequence". It also does not explain how star formation ceases in galaxies. Theories of galaxy evolution must therefore be able to explain how star formation turns off in galaxies. This phenomenon is called galaxy "quenching". [ 17 ] Stars form out of cold gas (see also the Kennicutt–Schmidt law ), so a galaxy is quenched when it has no more cold gas. However, it is thought that quenching occurs relatively quickly (within 1 billion years), which is much shorter than the time it would take for a galaxy to simply use up its reservoir of cold gas. [ 18 ] [ 19 ] Galaxy evolution models explain this by hypothesizing other physical mechanisms that remove or shut off the supply of cold gas in a galaxy. These mechanisms can be broadly classified into two categories: (1) preventive feedback mechanisms that stop cold gas from entering a galaxy or stop it from producing stars, and (2) ejective feedback mechanisms that remove gas so that it cannot form stars. [ 20 ] One theorized preventive mechanism called “strangulation” keeps cold gas from entering the galaxy. Strangulation is likely the main mechanism for quenching star formation in nearby low-mass galaxies. [ 21 ] The exact physical explanation for strangulation is still unknown, but it may have to do with a galaxy's interactions with other galaxies. 
As a galaxy falls into a galaxy cluster, gravitational interactions with other galaxies can strangle it by preventing it from accreting more gas. [ 22 ] For galaxies with massive dark matter halos , another preventive mechanism called “virial shock heating” may also prevent gas from becoming cool enough to form stars. [ 19 ] Ejective processes, which expel cold gas from galaxies, may explain how more massive galaxies are quenched. [ 23 ] One ejective mechanism is caused by supermassive black holes found in the centers of galaxies. Simulations have shown that gas accreting onto supermassive black holes in galactic centers produces high-energy jets ; the released energy can expel enough cold gas to quench star formation. [ 24 ] Our own Milky Way and the nearby Andromeda Galaxy currently appear to be undergoing the quenching transition from star-forming blue galaxies to passive red galaxies. [ 25 ] Dark energy and dark matter account for most of the Universe's energy, so it is valid to ignore baryons when simulating large-scale structure formation (using methods such as N-body simulation ). However, since the visible components of galaxies consist of baryons, it is crucial to include baryons in the simulation to study the detailed structures of galaxies. At first, the baryon component consists of mostly hydrogen and helium gas, which later transforms into stars during the formation of structures. From observations, models used in simulations can be tested and the understanding of different stages of galaxy formation can be improved. In cosmological simulations, astrophysical gases are typically modeled as inviscid ideal gases that follow the Euler equations , which can be expressed mainly in three different ways: Lagrangian, Eulerian, or arbitrary Lagrange-Eulerian methods. Different methods give specific forms of hydrodynamical equations. [ 26 ] When using the Lagrangian approach to specify the field, it is assumed that the observer tracks a specific fluid parcel with its unique characteristics during its movement through space and time. In contrast, the Eulerian approach emphasizes particular locations in space that the fluid passes through as time progresses. To shape the population of galaxies, the hydrodynamical equations must be supplemented by a variety of astrophysical processes mainly governed by baryonic physics. Processes, such as collisional excitation, ionization, and inverse Compton scattering , can cause the internal energy of the gas to be dissipated. In the simulation, cooling processes are realized by coupling cooling functions to energy equations. Besides the primordial cooling, at high temperature, 10 5 K < T < 10 7 K {\displaystyle \ 10^{5}K<T<10^{7}K\,} , heavy elements (metals) cooling dominates. [ 27 ] When T < 10 4 K {\displaystyle \ T<10^{4}K\,} , the fine structure and molecular cooling also need to be considered to simulate the cold phase of the interstellar medium . Complex multi-phase structure, including relativistic particles and magnetic field, makes simulation of interstellar medium difficult. In particular, modeling the cold phase of the interstellar medium poses technical difficulties due to the short timescales associated with the dense gas. In the early simulations, the dense gas phase is frequently not modeled directly but rather characterized by an effective polytropic equation of state. 
[ 28 ] More recent simulations use a multimodal distribution [ 29 ] [ 30 ] to describe the gas density and temperature distributions, directly modelling the multi-phase structure. However, more detailed physical processes need to be considered in future simulations, since the structure of the interstellar medium directly affects star formation . As cold and dense gas accumulates, it undergoes gravitational collapse and eventually forms stars. To simulate this process, a portion of the gas is transformed into collisionless star particles, which represent coeval, single-metallicity stellar populations and are described by an underlying initial mass function . Observations suggest that star formation efficiency in molecular gas is almost universal, with around 1% of the gas being converted into stars per free-fall time. [ 31 ] In simulations, the gas is typically converted into star particles using a probabilistic sampling scheme based on the calculated star formation rate. Some simulations seek an alternative to the probabilistic sampling scheme and aim to better capture the clustered nature of star formation by treating star clusters as the fundamental unit of star formation. This approach permits the growth of star particles by accreting material from the surrounding medium. [ 32 ] In addition to this, modern models of galaxy formation track the evolution of these stars and the mass they return to the gas component, leading to an enrichment of the gas with metals. [ 33 ] Stars have an influence on their surrounding gas by injecting energy and momentum. This creates a feedback loop that regulates the process of star formation. To effectively control star formation, stellar feedback must generate galactic-scale outflows that expel gas from galaxies. Various methods are utilized to couple energy and momentum, particularly through supernova explosions, to the surrounding gas. These methods differ in how the energy is deposited, either thermally or kinetically. However, excessive radiative gas cooling must be avoided in the former case. Cooling is expected in dense and cold gas, but it cannot be reliably modeled in cosmological simulations due to low resolution. This leads to artificial and excessive cooling of the gas, causing the supernova feedback energy to be lost via radiation and significantly reducing its effectiveness. In the latter case, kinetic energy cannot be radiated away until it thermalizes. However, using hydrodynamically decoupled wind particles to inject momentum non-locally into the gas surrounding active star-forming regions may still be necessary to achieve large-scale galactic outflows. [ 34 ] Recent models explicitly model stellar feedback. [ 35 ] These models not only incorporate supernova feedback but also consider other feedback channels such as energy and momentum injection from stellar winds, photoionization, and radiation pressure resulting from radiation emitted by young, massive stars. [ 36 ] During the Cosmic Dawn , galaxy formation occurred in short bursts of 5 to 30 Myr due to stellar feedback. [ 37 ] Supermassive black holes are also included in simulations, numerically seeded in dark matter haloes, because they are observed in many galaxies [ 38 ] and because their mass affects the mass density distribution. Their mass accretion rate is frequently modeled by the Bondi-Hoyle model.
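A minimal Python sketch of the probabilistic star-formation sampling scheme described above: each gas cell is converted into a star particle with a probability chosen so that, on average, about 1% of its gas turns into stars per free-fall time. The cell density, gas mass, time step and random seed are illustrative assumptions, not values from any particular simulation code.

```python
import numpy as np

G = 6.674e-8                  # gravitational constant, cgs units
EFFICIENCY = 0.01             # ~1% of the gas converted into stars per free-fall time

def free_fall_time(rho):
    """Local free-fall time t_ff = sqrt(3*pi / (32*G*rho)) for gas density rho [g/cm^3]."""
    return np.sqrt(3.0 * np.pi / (32.0 * G * rho))

def form_stars(rho, dt, rng):
    """Return a boolean mask of gas cells that spawn a star particle in this step.

    Each cell forms stars at a rate SFR = EFFICIENCY * m_gas / t_ff; converting the
    whole cell with probability p = 1 - exp(-EFFICIENCY * dt / t_ff) reproduces
    approximately SFR * dt of new stellar mass on average when p is small.
    """
    p = 1.0 - np.exp(-EFFICIENCY * dt / free_fall_time(rho))
    return rng.random(rho.shape) < p

rng = np.random.default_rng(42)
rho = np.full(100_000, 1e-22)   # assumed dense-gas density of each cell, g/cm^3
dt = 1e13                       # time step in seconds (~0.3 Myr), illustrative

mask = form_stars(rho, dt, rng)
print(mask.mean())              # fraction of cells converted to star particles this step
```

Production codes apply the same idea per resolution element, usually with additional eligibility criteria such as density thresholds and temperature cuts.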
Active galactic nuclei (AGN) have an impact on the observational phenomena of supermassive black holes, and further have a regulation of black hole growth and star formation. In simulations, AGN feedback is usually classified into two modes, namely quasar and radio mode. Quasar mode feedback is linked to the radiatively efficient mode of black hole growth and is frequently incorporated through energy or momentum injection. [ 39 ] The regulation of star formation in massive galaxies is believed to be significantly influenced by radio mode feedback, which occurs due to the presence of highly collimated jets of relativistic particles. These jets are typically linked to X-ray bubbles that possess enough energy to counterbalance cooling losses. [ 40 ] The ideal magnetohydrodynamics approach is commonly utilized in cosmological simulations since it provides a good approximation for cosmological magnetic fields. The effect of magnetic fields on the dynamics of gas is generally negligible on large cosmological scales. Nevertheless, magnetic fields are a critical component of the interstellar medium since they provide pressure support against gravity [ 41 ] and affect the propagation of cosmic rays. [ 42 ] Cosmic rays play a significant role in the interstellar medium by contributing to its pressure, [ 43 ] serving as a crucial heating channel, [ 44 ] and potentially driving galactic gas outflows. [ 45 ] The propagation of cosmic rays is highly affected by magnetic fields. So in the simulation, equations describing the cosmic ray energy and flux are coupled to magnetohydrodynamics equations. [ 46 ] Radiation hydrodynamics simulations are computational methods used to study the interaction of radiation with matter. In astrophysical contexts, radiation hydrodynamics is used to study the epoch of reionization when the Universe had high redshift. There are several numerical methods used for radiation hydrodynamics simulations, including ray-tracing, Monte Carlo , and moment-based methods. Ray-tracing involves tracing the paths of individual photons through the simulation and computing their interactions with matter at each step. This method is computationally expensive but can produce very accurate results.
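As a toy illustration of the ray-tracing approach just described, the sketch below marches a single photon packet through a one-dimensional row of gas cells and attenuates it using an assumed neutral-hydrogen photoionization cross-section; the density field, cell size and packet size are invented for illustration, and real radiation-hydrodynamics codes trace many rays over multiple frequencies and species.

```python
import numpy as np

SIGMA_HI = 6.3e-18   # assumed H I photoionization cross-section at 13.6 eV, cm^2

def trace_ray(n_HI, cell_size_cm, n_photons):
    """March one ray through a 1-D row of cells, attenuating the photon packet.

    n_HI is the neutral-hydrogen number density in each cell [cm^-3]; returns the
    photons absorbed per cell and the photons that escape the grid.
    """
    absorbed = np.zeros_like(n_HI)
    for i, n in enumerate(n_HI):
        tau = n * SIGMA_HI * cell_size_cm        # optical depth of this cell
        absorbed[i] = n_photons * (1.0 - np.exp(-tau))
        n_photons -= absorbed[i]                 # remove absorbed photons from the packet
    return absorbed, n_photons

# a toy density field: mostly ionized gas with one denser neutral clump
n_HI = np.full(50, 1e-7)
n_HI[20:25] = 1e-5
absorbed, escaped = trace_ray(n_HI, cell_size_cm=3.086e21, n_photons=1e50)  # 1 kpc cells
print(escaped / 1e50)   # fraction of the packet escaping the grid
```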
https://en.wikipedia.org/wiki/Galaxy_formation_and_evolution
Galaxy morphological classification is a system used by astronomers to divide galaxies into groups based on their visual appearance. There are several schemes in use by which galaxies can be classified according to their morphologies, the most famous being the Hubble sequence , devised by Edwin Hubble and later expanded by Gérard de Vaucouleurs and Allan Sandage . However, galaxy classification and morphology are now largely done using computational methods and physical morphology. The Hubble sequence is a morphological classification scheme for galaxies invented by Edwin Hubble in 1926. [ 2 ] [ 3 ] It is often known colloquially as the “Hubble tuning-fork” because of the shape in which it is traditionally represented. Hubble's scheme divides galaxies into three broad classes based on their visual appearance (originally on photographic plates ): [ 4 ] These broad classes can be extended to enable finer distinctions of appearance and to encompass other types of galaxies, such as irregular galaxies , which have no obvious regular structure (either disk-like or ellipsoidal). [ 4 ] The Hubble sequence is often represented in the form of a two-pronged fork, with the ellipticals on the left (with the degree of ellipticity increasing from left to right) and the barred and unbarred spirals forming the two parallel prongs of the fork on the right. Lenticular galaxies are placed between the ellipticals and the spirals, at the point where the two prongs meet the “handle”. [ 9 ] To this day, the Hubble sequence is the most commonly used system for classifying galaxies, both in professional astronomical research and in amateur astronomy . [ 10 ] Nonetheless, in June 2019, citizen scientists through Galaxy Zoo reported that the usual Hubble classification , particularly concerning spiral galaxies , may not be supported, and may need updating. [ 11 ] [ 12 ] The de Vaucouleurs system for classifying galaxies is a widely used extension to the Hubble sequence , first described by Gérard de Vaucouleurs in 1959. [ 13 ] De Vaucouleurs argued that Hubble's two-dimensional classification of spiral galaxies —based on the tightness of the spiral arms and the presence or absence of a bar—did not adequately describe the full range of observed galaxy morphologies. In particular, he argued that rings and lenses are important structural components of spiral galaxies. [ 14 ] The de Vaucouleurs system retains Hubble's basic division of galaxies into ellipticals , lenticulars , spirals and irregulars . To complement Hubble's scheme, de Vaucouleurs introduced a more elaborate classification system for spiral galaxies, based on three morphological characteristics: [ 15 ] The different elements of the classification scheme are combined — in the order in which they are listed — to give the complete classification of a galaxy. For example, a weakly barred spiral galaxy with loosely wound arms and a ring is denoted SAB(r)c. Visually, the de Vaucouleurs system can be represented as a three-dimensional version of Hubble's tuning fork, with stage (spiralness) on the x -axis, family (barredness) on the y -axis, and variety (ringedness) on the z -axis. [ 17 ] De Vaucouleurs also assigned numerical values to each class of galaxy in his scheme. Values of the numerical Hubble stage T run from −6 to +10, with negative numbers corresponding to early-type galaxies (ellipticals and lenticulars) and positive numbers to late types (spirals and irregulars). 
[ 18 ] Thus, as a rough rule, lower values of T correspond to a larger fraction of the stellar mass contained in a spheroid/bulge relative to the disk. The approximate mapping between the spheroid-to-total stellar mass ratio ( M B / M T {\displaystyle M_{B}/M_{T}} ) and the Hubble stage is M B / M T = ( 10 − T ) 2 / 256 {\displaystyle M_{B}/M_{T}=(10-T)^{2}/256} , based on local galaxies. [ 19 ] Elliptical galaxies are divided into three 'stages': compact ellipticals (cE), normal ellipticals (E) and late types (E + ). Lenticulars are similarly subdivided into early (S − ), intermediate (S 0 ) and late (S + ) types. Irregular galaxies can be of the magellanic irregular type ( T = 10) or 'compact' ( T = 11). The use of numerical stages allows for more quantitative studies of galaxy morphology. The Yerkes scheme was created by American astronomer William Wilson Morgan . Together with Philip Keenan , Morgan also developed the MK system for the classification of stars through their spectra. The Yerkes scheme uses the spectra of stars in the galaxy; the shape, real and apparent; and the degree of central concentration to classify galaxies. [ 21 ] Thus, for example, the Andromeda Galaxy is classified as kS5. [ 22 ]
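A short Python sketch of the numerical-stage mapping quoted above; it simply evaluates (10 − T)²/256 for a few stages, and the chosen stage values are only illustrative.

```python
def bulge_to_total(T):
    """Approximate spheroid-to-total stellar mass ratio for numerical Hubble stage T.

    Uses the local-galaxy calibration M_B/M_T = (10 - T)**2 / 256 quoted above, so
    T = -6 (elliptical) gives 1.0 and T = +10 (magellanic irregular) gives 0.0.
    """
    return (10 - T) ** 2 / 256.0

for T in (-6, -2, 0, 3, 5, 10):   # a few illustrative stages from ellipticals to irregulars
    print(T, round(bulge_to_total(T), 3))
```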
https://en.wikipedia.org/wiki/Galaxy_morphological_classification
The rotation curve of a disc galaxy (also called a velocity curve ) is a plot of the orbital speeds of visible stars or gas in that galaxy versus their radial distance from that galaxy's centre. It is typically rendered graphically as a plot , and the data observed from each side of a spiral galaxy are generally asymmetric, so that data from each side are averaged to create the curve. A significant discrepancy exists between the experimental curves observed, and a curve derived by applying gravity theory to the matter observed in a galaxy. Theories involving dark matter are the main postulated solutions to account for the variance. [ 2 ] The rotational/orbital speeds of galaxies/stars do not follow the rules found in other orbital systems such as stars/planets and planets/moons that have most of their mass at the centre. Stars revolve around their galaxy's centre at equal or increasing speed over a large range of distances. In contrast, the orbital velocities of planets in planetary systems and moons orbiting planets decline with distance according to Kepler’s third law . This reflects the mass distributions within those systems. The mass estimations for galaxies based on the light they emit are far too low to explain the velocity observations. [ 3 ] The galaxy rotation problem is the discrepancy between observed galaxy rotation curves and the theoretical prediction, assuming a centrally dominated mass associated with the observed luminous material. When mass profiles of galaxies are calculated from the distribution of stars in spirals and mass-to-light ratios in the stellar disks, they do not match with the masses derived from the observed rotation curves and the law of gravity . A solution to this conundrum is to hypothesize the existence of dark matter and to assume its distribution from the galaxy's center out to its halo . Thus the discrepancy between the two curves can be accounted for by adding a dark matter halo surrounding the galaxy. [ 4 ] Though dark matter is by far the most accepted explanation of the rotation problem, other proposals have been offered with varying degrees of success. Of the possible alternatives , one of the most notable is modified Newtonian dynamics (MOND), which involves modifying the laws of gravity. [ 5 ] Vesto Slipher made the first measurements related to galaxy rotation curves in 1914 when observing the Andromeda galaxy. [ 6 ] Slipher observed that the stars on the left side of the galaxy's bulge were approaching at speeds of around 320 km/s, faster than those on the right, which were moving at about 280 km/s. This suggested that the galaxy's disc was rotating in such a way that it appeared to be spinning toward us. [ 7 ] [ 6 ] In 1918 Francis Pease determined the rotation speed within the central region of Andromeda. [ 6 ] His findings were represented by the formula V c = − 0.48 r − 316 {\displaystyle \displaystyle V_{c}=-0.48r-316} , where V c {\displaystyle V_{c}} is the measured circular speed (in km/s) at a distance r {\displaystyle r} from the center of Andromeda's bulge. The results indicated that the central part of the galaxy rotates at a constant angular speed. [ 6 ] In 1932, Jan Hendrik Oort became the first to report that measurements of the stars in the solar neighborhood indicated that they moved faster than expected when a mass distribution based upon visible matter was assumed, but these measurements were later determined to be essentially erroneous. 
[ 8 ] In 1939, Horace Babcock reported in his PhD thesis measurements of the rotation curve for Andromeda which suggested that the mass-to-luminosity ratio increases radially. [ 9 ] He attributed that to either the absorption of light within the galaxy or to modified dynamics in the outer portions of the spiral and not to any form of missing matter. Babcock's measurements turned out to disagree substantially with those found later, and the first measurement of an extended rotation curve in good agreement with modern data was published in 1957 by Henk van de Hulst and collaborators, who studied M31 with the Dwingeloo Radio Observatory 's newly commissioned 25-meter radio telescope . [ 10 ] A companion paper by Maarten Schmidt showed that this rotation curve could be fit by a flattened mass distribution more extensive than the light. [ 11 ] In 1959, Louise Volders used the same telescope to demonstrate that the spiral galaxy M33 also does not spin as expected according to Keplerian dynamics . [ 12 ] Reporting on NGC 3115 , Jan Oort wrote that "the distribution of mass in the system appears to bear almost no relation to that of light... one finds the ratio of mass to light in the outer parts of NGC 3115 to be about 250". [ 13 ] On page 302–303 of his journal article, he wrote that "The strongly condensed luminous system appears imbedded in a large and more or less homogeneous mass of great density" and although he went on to speculate that this mass may be either extremely faint dwarf stars or interstellar gas and dust, he had clearly detected the dark matter halo of this galaxy. The Carnegie telescope (Carnegie Double Astrograph) was intended to study this problem of Galactic rotation. [ 17 ] Oort also did work on motion inside the Milky Way , and tried to determine what are known as the Oort constants , but did not find very accurate values. With space telescopes like Hipparcos and Gaia it has been possible to study the rotation of the Milky Way much more accurately. In the late 1960s and early 1970s, Vera Rubin , an astronomer at the Department of Terrestrial Magnetism at the Carnegie Institution of Washington , worked with a new sensitive spectrograph that could measure the velocity curve of edge-on spiral galaxies to a greater degree of accuracy than had ever before been achieved. [ 18 ] Together with fellow staff-member Kent Ford , Rubin announced at a 1975 meeting of the American Astronomical Society the discovery that most stars in spiral galaxies orbit at roughly the same speed, [ 19 ] and that this implied that galaxy masses grow approximately linearly with radius well beyond the location of most of the stars (the galactic bulge ). Rubin presented her results in an influential paper in 1980. [ 20 ] These results suggested either that Newtonian gravity does not apply universally or that, conservatively, upwards of 50% of the mass of galaxies was contained in the relatively dark galactic halo. Although initially met with skepticism, Rubin's results have been confirmed over the subsequent decades. [ 21 ] If Newtonian mechanics is assumed to be correct, it would follow that most of the mass of the galaxy had to be in the galactic bulge near the center and that the stars and gas in the disk portion should orbit the center at decreasing velocities with radial distance from the galactic center (the dashed line in Fig. 1). Observations of the rotation curve of spirals, however, do not bear this out. 
Rather, the curves do not decrease in the expected inverse square root relationship but are "flat", i.e. outside of the central bulge the speed is nearly a constant (the solid line in Fig. 1). It is also observed that galaxies with a uniform distribution of luminous matter have a rotation curve that rises from the center to the edge, and most low-surface-brightness galaxies (LSB galaxies) have the same anomalous rotation curve. The rotation curves might be explained by hypothesizing the existence of a substantial amount of matter permeating the galaxy outside of the central bulge that is not emitting light in the mass-to-light ratio of the central bulge. The material responsible for the extra mass was dubbed dark matter , the existence of which was first posited in the 1930s by Jan Oort in his measurements of the Oort constants and Fritz Zwicky in his studies of the masses of galaxy clusters . While the observed galaxy rotation curves were one of the first indications that some mass in the universe may not be visible, many different lines of evidence now support the concept of cold dark matter as the dominant form of matter in the universe. Among the lines of evidence are mass-to-light ratios which are much too low without a dark matter component, the amount of hot gas detected in galactic clusters by x-ray astronomy , measurements of cluster mass with the Sunyaev–Zeldovich effect and with gravitational lensing . [ 22 ] : 368 Models of the formation of galaxies are based on their dark matter halos. [ 23 ] The existence of non-baryonic cold dark matter (CDM) is today a major feature of the Lambda-CDM model that describes the cosmology of the universe and matches high precision astrophysical observations. [ 24 ] : 25.1.1 The rotational dynamics of galaxies are well characterized by their position on the Tully–Fisher relation , which shows that for spiral galaxies the rotational velocity is uniquely related to their total luminosity. A consistent way to predict the rotational velocity of a spiral galaxy is to measure its bolometric luminosity and then read its rotation rate from its location on the Tully–Fisher diagram. Conversely, knowing the rotational velocity of a spiral galaxy gives its luminosity. Thus the magnitude of the galaxy rotation is related to the galaxy's visible mass. [ 26 ] While precise fitting of the bulge, disk, and halo density profiles is a rather complicated process, it is straightforward to model the observables of rotating galaxies through this relationship. [ 27 ] [ better source needed ] So, while state-of-the-art cosmological and galaxy formation simulations of dark matter with normal baryonic matter included can be matched to galaxy observations, there is not yet any straightforward explanation as to why the observed scaling relationship exists. [ 28 ] [ 29 ] Additionally, detailed investigations of the rotation curves of low-surface-brightness galaxies (LSB galaxies) in the 1990s [ 30 ] and of their position on the Tully–Fisher relation [ 31 ] showed that LSB galaxies had to have dark matter haloes that are more extended and less dense than those of galaxies with high surface brightness, and thus surface brightness is related to the halo properties. Such dark-matter-dominated dwarf galaxies may hold the key to solving the dwarf galaxy problem of structure formation . 
Very importantly, the analysis of the inner parts of low and high surface brightness galaxies showed that the shape of the rotation curves in the centre of dark-matter dominated systems indicates a profile different from the NFW spatial mass distribution profile. [ 32 ] [ 33 ] This so-called cuspy halo problem is a persistent problem for the standard cold dark matter theory. Simulations involving the feedback of stellar energy into the interstellar medium in order to alter the predicted dark matter distribution in the innermost regions of galaxies are frequently invoked in this context. [ 34 ] [ 35 ] In order to accommodate a flat rotation curve, a density profile for a galaxy and its environs must be different than one that is centrally concentrated. Newton's version of Kepler's Third Law implies that the spherically symmetric, radial density profile ρ ( r ) is: ρ ( r ) = v ( r ) 2 4 π G r 2 ( 1 + 2 d log ⁡ v ( r ) d log ⁡ r ) {\displaystyle \rho (r)={\frac {v(r)^{2}}{4\pi Gr^{2}}}\left(1+2~{\frac {d\log v(r)}{d\log r}}\right)} where v ( r ) is the radial orbital velocity profile and G is the gravitational constant . This profile closely matches the expectations of a singular isothermal sphere profile where if v ( r ) is approximately constant then the density ρ ∝ r −2 to some inner "core radius" where the density is then assumed constant. Observations do not comport with such a simple profile, as reported by Navarro, Frenk, and White in a seminal 1996 paper. [ 36 ] The authors then remarked that a "gently changing logarithmic slope" for a density profile function could also accommodate approximately flat rotation curves over large scales. They found the famous Navarro–Frenk–White profile , which is consistent both with N-body simulations and observations given by ρ ( r ) = ρ 0 r R s ( 1 + r R s ) 2 {\displaystyle \rho (r)={\frac {\rho _{0}}{{\frac {r}{R_{s}}}\left(1+{\frac {r}{R_{s}}}\right)^{2}}}} where the central density, ρ 0 , and the scale radius, R s , are parameters that vary from halo to halo. [ 37 ] Because the slope of the density profile diverges at the center, other alternative profiles have been proposed, for example the Einasto profile , which has exhibited better agreement with certain dark matter halo simulations. [ 38 ] [ 39 ] Observations of orbit velocities in spiral galaxies suggest a mass structure according to: v ( r ) = ( r d Φ d r ) 1 / 2 {\displaystyle v(r)=\left(r\,{\frac {d\Phi }{dr}}\right)^{1/2}} with Φ the galaxy gravitational potential . Since observations of galaxy rotation do not match the distribution expected from application of Kepler's laws, they do not match the distribution of luminous matter. [ 20 ] This implies that spiral galaxies contain large amounts of dark matter or, alternatively, the existence of exotic physics in action on galactic scales. The additional invisible component becomes progressively more conspicuous in each galaxy at outer radii and among galaxies in the less luminous ones. [ clarification needed ] A popular interpretation of these observations is that about 26% of the mass of the Universe is composed of dark matter, a hypothetical type of matter which does not emit or interact with electromagnetic radiation . Dark matter is believed to dominate the gravitational potential of galaxies and clusters of galaxies. 
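To make the link between these density profiles and rotation curves concrete, the Python sketch below evaluates the circular velocity v_c(r) = sqrt(G M(&lt;r)/r) implied by an NFW halo, using the enclosed-mass expression obtained by integrating the profile above; the halo parameters are illustrative assumptions. The resulting curve rises in the inner region and then stays roughly flat, which is how a dark matter halo of this kind reproduces the flat rotation curves discussed here.

```python
import numpy as np

G = 4.300917e-6   # gravitational constant in kpc * (km/s)^2 / M_sun

def nfw_enclosed_mass(r, rho0, r_s):
    """Mass enclosed within radius r for an NFW profile, from integrating rho(r)."""
    x = r / r_s
    return 4.0 * np.pi * rho0 * r_s**3 * (np.log(1.0 + x) - x / (1.0 + x))

def circular_velocity(r, rho0, r_s):
    """v_c(r) = sqrt(G * M(<r) / r), in km/s when r is in kpc and masses in M_sun."""
    return np.sqrt(G * nfw_enclosed_mass(r, rho0, r_s) / r)

rho0 = 1.0e7      # characteristic density in M_sun / kpc^3 (illustrative)
r_s = 20.0        # scale radius in kpc (illustrative)

for r in (2.0, 5.0, 10.0, 20.0, 50.0, 100.0):
    # the curve rises and then flattens near ~200 km/s, unlike a Keplerian decline
    print(f"r = {r:5.1f} kpc   v_c = {circular_velocity(r, rho0, r_s):6.1f} km/s")
```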
Under this theory, galaxies are baryonic condensations of stars and gas (namely hydrogen and helium) that lie at the centers of much larger haloes of dark matter, affected by a gravitational instability caused by primordial density fluctuations. Many cosmologists strive to understand the nature and the history of these ubiquitous dark haloes by investigating the properties of the galaxies they contain (i.e. their luminosities, kinematics, sizes, and morphologies). The measurement of the kinematics (positions, velocities and accelerations) of the observable stars and gas has become a tool to investigate the nature of dark matter, as to its content and distribution relative to that of the various baryonic components of those galaxies. There have been a number of attempts to solve the problem of galaxy rotation by modifying gravity without invoking dark matter. One of the most discussed is modified Newtonian dynamics (MOND), originally proposed by Mordehai Milgrom in 1983, which modifies the Newtonian force law at low accelerations to enhance the effective gravitational attraction. [ 40 ] MOND has had a considerable amount of success in predicting the rotation curves of low-surface-brightness galaxies, [ 41 ] matching the baryonic Tully–Fisher relation , [ 42 ] and the velocity dispersions of the small satellite galaxies of the Local Group . [ 43 ] Using data from the Spitzer Photometry and Accurate Rotation Curves (SPARC) database, a group has found that the radial acceleration traced by rotation curves (an effect given the name "radial acceleration relation") could be predicted just from the observed baryon distribution (that is, including stars and gas but not dark matter). [ 44 ] This so-called radial acceleration relation (RAR) might be fundamental for understanding the dynamics of galaxies. [ 45 ] The same relation provided a good fit for 2693 samples in 153 rotating galaxies, with diverse shapes, masses, sizes, and gas fractions. Brightness in the near infrared, where the more stable light from red giants dominates, was used to estimate the density contribution due to stars more consistently. The results are consistent with MOND, and place limits on alternative explanations involving dark matter alone. However, cosmological simulations within a Lambda-CDM framework that include baryonic feedback effects reproduce the same relation, without the need to invoke new dynamics (such as MOND). [ 46 ] Thus, the contribution due to dark matter itself may be fully predictable from that of the baryons, once the feedback effects due to the dissipative collapse of baryons are taken into account. Attempts to model galaxy rotation based on a general relativity metric, showing that the rotation curves for the Milky Way , NGC 3031 , NGC 3198 and NGC 7331 are consistent with the mass density distributions of the visible matter, [ 47 ] and other similar work [ 48 ] have been disputed. [ 49 ] According to a recent analysis of the data produced by the Gaia spacecraft , it would seem possible to explain at least the Milky Way 's rotation curve without requiring any dark matter if, instead of a Newtonian approximation, the entire set of equations of general relativity is adopted. [ 50 ] [ 51 ]
https://en.wikipedia.org/wiki/Galaxy_rotation_curve
Galeas per montes (galleys across mountains) is the name given to a feat of military engineering carried out between December 1438 and April 1439 by the Republic of Venice , when several Venetian ships, including galleys and frigates, were transported from the Adriatic Sea to Lake Garda . The operation required towing the ships upstream on the river Adige as far as Rovereto , then transporting the fleet by land to Torbole , on the northern shores of the lake. The second leg of the journey was the most remarkable achievement, requiring a land journey of about 20 km via Lake Loppio and the narrow Passo San Giovanni [ it ] . The Republic of Venice was at the time a power in the Mediterranean and, in the 15th century, it began an expansion phase towards the mainland of the current Lombardia and Veneto regions, either through military conquest (e.g. Padua) or through spontaneous "dedication", as in the case of Vicenza . The city of Brescia , located west of Lake Garda, allied with the Republic of Venice to escape the Duchy of Milan on November 20, 1426. [ 1 ] In 1438, the Duke of Milan Filippo Maria Visconti waged war against the Republic of Venice and, through a series of lucky victories, took control of Lombard lands up to the southern shores of Lake Garda. At the same time, the city of Brescia was under siege by the mercenary condottiero Niccolò Piccinino , on the Duke of Milan's payroll, and called on the Venetian Senate for assistance. [ 2 ] Piccinino took control of the entire southern sector of the lake, so the Venetian warlord Gattamelata ( Erasmo da Narni ) could only access the lake from its northern shores, namely Torbole or Riva . The Milanese army was also fortified in the castles of Peschiera del Garda and Desenzano , making a head-on clash too costly. To avoid this problem, the Republic of Venice decided to prepare a military plan that would allow its troops (and navy) to surprise the Visconti army by entering the lake from its northern shore. On December 1, 1438, after a very long session, the Republic's Minor Council approved a plan formulated by Blasio de Arboribus, Niccolò Carcavilla, and Niccolò Sorbolo [ dubious – discuss ] that would become the galeas per montes . The plan foresaw moving a fleet of warships by dragging it upstream along the Adige river, then beaching it, and dragging it on wooden rollers along the Loppio valley to the northern shores of Lake Garda, near Torbole . From there, the Venetian fleet would launch a surprise attack on the Milanese forces anchored at Desenzano , cutting supplies to the Visconti militia guarding Peschiera del Garda , and gaining a foothold to free Brescia and potentially threaten Milan . The fleet, which included 25 large ships, 6 galleys and 2 frigates , set sail from Venice in January 1439, entering the mouth of the Adige river near Sottomarina . The fleet went upstream as far as Verona where, since the river was drier than usual, the Venetians had to fit the ships with devices to increase their buoyancy in order to reduce their draught . The fleet was then dragged further upstream as far as the village of Marco (Rovereto) [ it ] , where it was beached. [ 3 ] The Venetians designed and built special devices for the operations, and hired hundreds of workers including diggers, carpenters, sailors, and local craftsmen. The workers flattened the road that would be used by the fleet, and used around 2000 oxen divided into groups, since the largest ships could require more than 200 oxen to be dragged.
In order to facilitate the passing of the fleet, the workers leveled natural and man-made obstacles, and built several bridges and infrastructural aids. The main road for the ships was built by laying down wooden planks, so that the massively heavy ships could be slid over the planks using wooden rollers. The fleet's passage was made easier by having the ships sail across the Lago di Loppio, reducing the length of the land passage. After the lake, the fleet was once again beached, and dragged along the steep and narrow slope from Passo San Giovanni to Torbole. As the ships would gather velocity during the downhill segment (potentially crashing against rocks), they were slowed down by tying their masts to large boulders using winches and thick ropes. To further slow the ships' descent, the Venetians unfurled the ships' sails and made use of a strong local wind, the so-called Ora del Garda [ it ] . The complex operation was completed in only 15 days, but cost the staggeringly high amount of 16,000 Ducats . It was one of the most remarkable feats of military engineering of the time, becoming famous throughout Europe. [ 4 ] The fleet's presence on the lake allowed the Venetians to resupply Brescia, though these operations were soon noticed and contested by the Milanese navy. The two navies faced each other in two battles on April 12 and September 26, 1439, both of which ended in defeat for the Venetians. The Venetians finally managed to re-capture Lake Garda and Brescia only in 1440. An instrumental step in this victory was the naval battle of April 1440, in which the Venetian fleet inflicted a major defeat on the Milanese navy in the waters off the Ponale pass. A painting by Tintoretto in the Doge's Palace 's Sala del Maggior Consiglio celebrates this victory. An Italian account of the undertaking relates: The only route still remaining to supply Brescia was that of Lake Garda, since its eastern shore was formed by Veronese territory; provisions embarked there could easily be carried to Brescia, and had Piccinino hastened to prevent this he would easily have left the road from Brescia to Verona free or poorly defended. But on the lake the Venetians had no vessels, while the enemy kept a small fleet at Peschiera and other fortified posts around it. In such difficulty the Republic had, as early as December 1438, accepted the daring project of one Blasio de Arboribus (or Nicolò Carcavilla or Caravilla) and Nicolò Sorbolo to carry a flotilla over the mountains from the Adige into the lake. It consisted of twenty-five boats and six galleys, which were taken up from the mouth of the Adige almost as far as Roveredo, but from there it was still twelve to fifteen miles to Torbole over steep and mountainous terrain. Amid those mountains, at the foot of the Monte Baldo chain, lies the lake of S. Andrea, into which the flotilla was to be brought. For this purpose up to two thousand oxen were assembled, as many as one hundred and twenty pairs being needed for each galley; great numbers of sappers, workmen and engineers cleared the gullies, built bridges and levelled the road, and so, after unspeakable efforts and toil, the little fleet was able to reach the lake of S. Andrea. It remained to cross Monte Baldo, and human industry and an iron will accomplished this too, and in a strange spectacle the vessels found themselves at last on the summit of the mountain.
From there they had to be launched down toward the lake, an operation no less difficult for the dangers of the descent; on that steep slope the boats were tied to trees and boulders, the ropes were slackened little by little by means of winches, and the vessels were lowered down those dreadful precipices. Thus, after fifteen days of travel by land, the little fleet reached Torbole without any mishap, whence it was launched into the water and fitted out. It was a marvellous undertaking that cost the Republic a good fifteen thousand ducats, but unfortunately it proved almost useless for the purpose of provisioning Brescia, since, when Piccinino arrived with his fleet, the Brescians could obtain little relief, and the Venetian commander Pietro Zeno had to withdraw to Torbole and take shelter behind a strong stockade.
https://en.wikipedia.org/wiki/Galeas_per_montes
Galena , also called lead glance , is the natural mineral form of lead(II) sulfide (PbS). It is the most important ore of lead and an important source of silver . [ 5 ] Galena is one of the most abundant and widely distributed sulfide minerals . It crystallizes in the cubic crystal system often showing octahedral forms. It is often associated with the minerals sphalerite , calcite and fluorite . As a pure specimen held in the hand, under standard temperature and pressure , galena is insoluble in water and so is almost non-toxic. Handling galena under these specific conditions (such as in a museum or as part of geology instruction) poses practically no risk; however, as lead(II) sulfide is reasonably reactive in a variety of environments, it can be highly toxic if swallowed or inhaled, particularly under prolonged or repeated exposure. [ 6 ] Galena is the main ore of lead , used since ancient times, [ 7 ] since lead can be smelted from galena in an ordinary wood fire. [ 8 ] Galena typically is found in hydrothermal veins in association with sphalerite , marcasite , chalcopyrite , cerussite , anglesite , dolomite , calcite , quartz , barite , and fluorite . It is also found in association with sphalerite in low-temperature lead- zinc deposits within limestone beds. Minor amounts are found in contact metamorphic zones, in pegmatites , and disseminated in sedimentary rock. [ 9 ] In some deposits, the galena contains up to 0.5% silver , a byproduct that far surpasses the main lead ore in revenue. [ 10 ] In these deposits significant amounts of silver occur as included silver sulfide mineral phases or as limited silver in solid solution within the galena structure. These argentiferous galenas have long been an important ore of silver. [ 7 ] [ 11 ] Silver-bearing galena is almost entirely of hydrothermal origin; galena in lead-zinc deposits contains little silver. [ 9 ] Galena deposits are found worldwide in various environments. [ 4 ] Noted deposits include those at Freiberg in Saxony ; [ 2 ] Cornwall , the Mendips in Somerset , Derbyshire , and Cumberland in England ; the Linares mines in Spain were worked from before the Roman times until the end of the 20th century; [ 12 ] the Madan and Rhodope Mountains in Bulgaria ; the Sullivan Mine of British Columbia ; Broken Hill and Mount Isa in Australia ; and the ancient mines of Sardinia . In the United States , it occurs most notably as lead-zinc ore in the Mississippi Valley type deposits of the Lead Belt in southeastern Missouri , which is the largest known deposit, [ 2 ] and in the Driftless Area of Illinois , Iowa and Wisconsin , providing the origin of the name of Galena, Illinois , a historical settlement known for the material. Galena also was a major mineral of the zinc -lead mines of the tri-state district around Joplin in southwestern Missouri and the adjoining areas of Kansas and Oklahoma . [ 2 ] Galena is also an important ore mineral in the silver mining regions of Colorado , Idaho , Utah and Montana . Of the latter, the Coeur d'Alene district of northern Idaho was most prominent. [ 2 ] Australia is the world's leading producer of lead as of 2021, most of which is extracted as galena. Argentiferous galena was accidentally discovered at Glen Osmond in 1841, and additional deposits were discovered near Broken Hill in 1876 and at Mount Isa in 1923. [ 13 ] Most galena in Australia is found in hydrothermal deposits emplaced around 1680 million years ago, which have since been heavily metamorphosed. 
[ 14 ] The largest documented crystal of galena is composite cubo-octahedra from the Great Laxey Mine , Isle of Man , measuring 25 cm × 25 cm × 25 cm (10 in × 10 in × 10 in). [ 15 ] This specimen is on permanent display in the minerals gallery of the Natural History Museum, London . Galena is the official state mineral of the U.S. states of Kansas, [ 16 ] Missouri, [ 17 ] and Wisconsin; [ 18 ] the former mining communities of Galena, Kansas , [ 19 ] [ 20 ] Galena, Illinois , [ 21 ] Galena, South Dakota and Galena, Alaska , [ 22 ] take their names from deposits of this mineral. Galena belongs to the octahedral sulfide group of minerals that have metal ions in octahedral positions, such as the iron sulfide pyrrhotite and the nickel arsenide niccolite . The galena group is named after its most common member, with other isometric members that include manganese bearing alabandite and niningerite . [ 9 ] [ 4 ] Divalent lead (Pb) cations and sulfur (S) anions form a close-packed cubic unit cell much like the mineral halite of the halide mineral group. Zinc, cadmium , iron , copper , antimony , arsenic , bismuth and selenium also occur in variable amounts in galena. Selenium substitutes for sulfur in the structure constituting a solid solution series. The lead telluride mineral altaite has the same crystal structure as galena. [ 9 ] Within the weathering or oxidation zone galena alters to anglesite (lead sulfate) or cerussite (lead carbonate). [ 9 ] Galena exposed to acid mine drainage can be oxidized to anglesite by naturally occurring bacteria and archaea , in a process similar to bioleaching . [ 23 ] One of the oldest uses of galena was to produce kohl , an eye cosmetic now regarded as toxic due to the risk of lead poisoning . [ 24 ] In Ancient Egypt , this was applied around the eyes to reduce the glare of the desert sun and to repel flies, which were a potential source of disease. [ 25 ] In pre-Columbian North America, galena was used by indigenous peoples as an ingredient in decorative paints and cosmetics, and widely traded throughout the eastern United States. [ 26 ] Traces of galena are frequently found at the Mississippian city at Kincaid Mounds in present-day Illinois. [ 27 ] The galena used at the site originated from deposits in southeastern and central Missouri and the Upper Mississippi Valley. [ 26 ] Galena is the primary ore of lead, and is often mined for its silver content. [ 7 ] It is used as a source of lead in ceramic glaze . [ 28 ] Galena is a semiconductor with a small band gap of about 0.4 eV , which found use in early wireless communication systems. It was used as the crystal in crystal radio receivers, in which it was used as a point-contact diode capable of rectifying alternating current to detect the radio signals. The galena crystal was used with a sharp wire, known as a " cat's whisker ", in contact with it. [ 29 ] In modern times, galena is primarily used to extract its constituent minerals. In addition to silver, it is the most important source of lead, for uses such as in lead-acid batteries . [ 10 ]
https://en.wikipedia.org/wiki/Galena
Galenic formulation deals with the principles of preparing and compounding medicines in order to optimize their absorption . Galenic formulation is named after Claudius Galen , a 2nd-century AD Greek physician who codified the preparation of drugs using multiple ingredients. Today, galenic formulation is part of pharmaceutical formulation . The pharmaceutical formulation of a medicine affects the pharmacokinetics , pharmacodynamics and safety profile of a drug .
https://en.wikipedia.org/wiki/Galenic_formulation
In mathematics , in the area of numerical analysis , Galerkin methods are a family of methods for converting a continuous operator problem, such as a differential equation , commonly in a weak formulation , to a discrete problem by applying linear constraints determined by finite sets of basis functions. They are named after the Soviet mathematician Boris Galerkin . Often when referring to a Galerkin method, one also gives the name along with typical assumptions and approximation methods used: Examples of Galerkin methods are: Let us introduce Galerkin's method with an abstract problem posed as a weak formulation on a Hilbert space V {\displaystyle V} , namely, Here, a ( ⋅ , ⋅ ) {\displaystyle a(\cdot ,\cdot )} is a bilinear form (the exact requirements on a ( ⋅ , ⋅ ) {\displaystyle a(\cdot ,\cdot )} will be specified later) and f {\displaystyle f} is a bounded linear functional on V {\displaystyle V} . Choose a subspace V n ⊂ V {\displaystyle V_{n}\subset V} of dimension n and solve the projected problem: We call this the Galerkin equation . Notice that the equation has remained unchanged and only the spaces have changed. Reducing the problem to a finite-dimensional vector subspace allows us to numerically compute u n {\displaystyle u_{n}} as a finite linear combination of the basis vectors in V n {\displaystyle V_{n}} . The key property of the Galerkin approach is that the error is orthogonal to the chosen subspaces. Since V n ⊂ V {\displaystyle V_{n}\subset V} , we can use v n {\displaystyle v_{n}} as a test vector in the original equation. Subtracting the two, we get the Galerkin orthogonality relation for the error, ϵ n = u − u n {\displaystyle \epsilon _{n}=u-u_{n}} which is the error between the solution of the original problem, u {\displaystyle u} , and the solution of the Galerkin equation, u n {\displaystyle u_{n}} Since the aim of Galerkin's method is the production of a linear system of equations , we build its matrix form, which can be used to compute the solution algorithmically. Let e 1 , e 2 , … , e n {\displaystyle e_{1},e_{2},\ldots ,e_{n}} be a basis for V n {\displaystyle V_{n}} . Then, it is sufficient to use these in turn for testing the Galerkin equation, i.e.: find u n ∈ V n {\displaystyle u_{n}\in V_{n}} such that We expand u n {\displaystyle u_{n}} with respect to this basis, u n = ∑ j = 1 n u j e j {\displaystyle u_{n}=\sum _{j=1}^{n}u_{j}e_{j}} and insert it into the equation above, to obtain This previous equation is actually a linear system of equations A u = f {\displaystyle Au=f} , where Due to the definition of the matrix entries, the matrix of the Galerkin equation is symmetric if and only if the bilinear form a ( ⋅ , ⋅ ) {\displaystyle a(\cdot ,\cdot )} is symmetric. Here, we will restrict ourselves to symmetric bilinear forms , that is While this is not really a restriction of Galerkin methods, the application of the standard theory becomes much simpler. Furthermore, a Petrov–Galerkin method may be required in the nonsymmetric case. The analysis of these methods proceeds in two steps. First, we will show that the Galerkin equation is a well-posed problem in the sense of Hadamard and therefore admits a unique solution. In the second step, we study the quality of approximation of the Galerkin solution u n {\displaystyle u_{n}} . The analysis will mostly rest on two properties of the bilinear form , namely By the Lax-Milgram theorem (see weak formulation ), these two conditions imply well-posedness of the original problem in weak formulation. 
All norms in the following sections will be norms for which the above inequalities hold (these norms are often called an energy norm). Since V n ⊂ V {\displaystyle V_{n}\subset V} , boundedness and ellipticity of the bilinear form apply to V n {\displaystyle V_{n}} . Therefore, the well-posedness of the Galerkin problem is actually inherited from the well-posedness of the original problem. The error u − u n {\displaystyle u-u_{n}} between the original and the Galerkin solution admits the estimate This means, that up to the constant C / c {\displaystyle C/c} , the Galerkin solution u n {\displaystyle u_{n}} is as close to the original solution u {\displaystyle u} as any other vector in V n {\displaystyle V_{n}} . In particular, it will be sufficient to study approximation by spaces V n {\displaystyle V_{n}} , completely forgetting about the equation being solved. Since the proof is very simple and the basic principle behind all Galerkin methods, we include it here: by ellipticity and boundedness of the bilinear form (inequalities) and Galerkin orthogonality (equals sign in the middle), we have for arbitrary v n ∈ V n {\displaystyle v_{n}\in V_{n}} : Dividing by c ‖ u − u n ‖ {\displaystyle c\|u-u_{n}\|} and taking the infimum over all possible v n {\displaystyle v_{n}} yields the lemma. For simplicity of presentation in the section above we have assumed that the bilinear form a ( u , v ) {\displaystyle a(u,v)} is symmetric and positive-definite, which implies that it is a scalar product and the expression ‖ u ‖ a = a ( u , u ) {\displaystyle \|u\|_{a}={\sqrt {a(u,u)}}} is actually a valid vector norm, called the energy norm . Under these assumptions one can easily prove in addition Galerkin's best approximation property in the energy norm. Using Galerkin a-orthogonality and the Cauchy–Schwarz inequality for the energy norm, we obtain Dividing by ‖ u − u n ‖ a {\displaystyle \|u-u_{n}\|_{a}} and taking the infimum over all possible v n ∈ V n {\displaystyle v_{n}\in V_{n}} proves that the Galerkin approximation u n ∈ V n {\displaystyle u_{n}\in V_{n}} is the best approximation in the energy norm within the subspace V n ⊂ V {\displaystyle V_{n}\subset V} , i.e. u n ∈ V n {\displaystyle u_{n}\in V_{n}} is nothing but the orthogonal, with respect to the scalar product a ( u , v ) {\displaystyle a(u,v)} , projection of the solution u {\displaystyle u} to the subspace V n {\displaystyle V_{n}} . I. Elishakof , M. Amato, A. Marzani, P.A. Arvan, and J.N. Reddy [ 6 ] [ 7 ] [ 8 ] [ 9 ] studied the application of the Galerkin method to stepped structures. They showed that the generalized function, namely unit-step function, Dirac’s delta function, and the doublet function are needed for obtaining accurate results. The approach is usually credited to Boris Galerkin . [ 10 ] [ 11 ] The method was explained to the Western reader by Hencky [ 12 ] and Duncan [ 13 ] [ 14 ] among others. Its convergence was studied by Mikhlin [ 15 ] and Leipholz [ 16 ] [ 17 ] [ 18 ] [ 19 ] Its coincidence with Fourier method was illustrated by Elishakoff et al. [ 20 ] [ 21 ] [ 22 ] Its equivalence to Ritz's method for conservative problems was shown by Singer. [ 23 ] Gander and Wanner [ 24 ] showed how Ritz and Galerkin methods led to the modern finite element method. One hundred years of method's development was discussed by Repin. [ 25 ] Elishakoff, Kaplunov and Kaplunov [ 26 ] show that the Galerkin’s method was not developed by Ritz, contrary to the Timoshenko’s statements.
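As a concrete illustration of the procedure described above, the following Python sketch applies the Galerkin method to the model problem -u'' = f on (0, 1) with u(0) = u(1) = 0, using sine functions as an assumed basis and f = 1 as an illustrative load (neither choice comes from the text above). The bilinear form is a(u, v) = ∫ u'v' dx; the matrix and load entries are computed by quadrature and the resulting linear system A u = f is solved for the coefficients.

    import numpy as np

    n = 10
    x = np.linspace(0.0, 1.0, 2001)            # quadrature grid (trapezoidal rule)

    def phi(k, x):  return np.sin(k * np.pi * x)       # basis function e_k
    def dphi(k, x): return k * np.pi * np.cos(k * np.pi * x)

    f = np.ones_like(x)                        # load f(x) = 1 (illustrative)

    A = np.empty((n, n))
    b = np.empty(n)
    for i in range(1, n + 1):
        b[i - 1] = np.trapz(f * phi(i, x), x)          # f_i = \int f e_i dx
        for j in range(1, n + 1):
            A[i - 1, j - 1] = np.trapz(dphi(j, x) * dphi(i, x), x)  # a(e_j, e_i)

    c = np.linalg.solve(A, b)                  # coefficients of the Galerkin solution
    u_n = sum(c[k] * phi(k + 1, x) for k in range(n))

    # For f = 1 the exact solution is u(x) = x(1 - x)/2; the error should be small.
    print(np.max(np.abs(u_n - x * (1.0 - x) / 2.0)))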
https://en.wikipedia.org/wiki/Galerkin_method
In materials science , galfenol is the general term for an alloy of iron and gallium . The name was first given to iron-gallium alloys by United States Navy researchers in 1998 when they discovered that adding gallium to iron could amplify iron's magnetostrictive effect up to tenfold. Galfenol is of interest to sonar researchers because magnetostrictor materials are used to detect sound, and amplifying the magnetostrictive effect could lead to better sensitivity of sonar detectors. [ 1 ] Galfenol is also proposed for vibrational energy harvesting, actuators for precision machine tools, active anti-vibration systems, and anti-clogging devices for sifting screens and spray nozzles. Galfenol is machinable and can be produced in sheet and wire form. [ 2 ] [ 3 ] In 2009, scientists from Virginia Polytechnic Institute and State University , and National Institute of Standards and Technology (NIST) used neutron beams to determine the structure of galfenol. They determined that the addition of gallium changes the lattice structure of the iron atoms from regular cubic cells to one in which the faces of some of the cells become slightly rectangular. The elongated cells tend to clump together in the alloy, forming localized clumps within the material. These clumps have been described by Peter Gehring of the NIST Center for Neutron Research as "something like raisins within a cake". [ 1 ] It has also been proposed that there is an intrinsic mechanism generating this enhanced magnetostriction, which has its origins in the electronic structure of the material as described by density functional theory . [ 4 ] It is understood that the addition of gallium to pure iron alters the electronic structure and atomic arrangements in the material in such a way as to enhance the material's magnetoelastic constant. [ 5 ]
https://en.wikipedia.org/wiki/Galfenol
Galilean invariance or Galilean relativity states that the laws of motion are the same in all inertial frames of reference . Galileo Galilei first described this principle in 1632 in his Dialogue Concerning the Two Chief World Systems using the example of a ship travelling at constant velocity, without rocking, on a smooth sea; any observer below the deck would not be able to tell whether the ship was moving or stationary. Specifically, the term Galilean invariance today usually refers to this principle as applied to Newtonian mechanics , that is, Newton's laws of motion hold in all frames related to one another by a Galilean transformation . In other words, all frames related to one another by such a transformation are inertial (meaning, Newton's equation of motion is valid in these frames). In this context it is sometimes called Newtonian relativity . Among the axioms from Newton's theory are: Galilean relativity can be shown as follows. Consider two inertial frames S and S' . A physical event in S will have position coordinates r = ( x , y , z ) and time t in S , and r' = ( x' , y' , z' ) and time t' in S' . By the second axiom above, one can synchronize the clock in the two frames and assume t = t' . Suppose S' is in relative uniform motion to S with velocity v . Consider a point object whose position is given by functions r' ( t ) in S' and r ( t ) in S . We see that The velocity of the particle is given by the time derivative of the position: Another differentiation gives the acceleration in the two frames: It is this simple but crucial result that implies Galilean relativity. Assuming that mass is invariant in all inertial frames, the above equation shows Newton's laws of mechanics, if valid in one frame, must hold for all frames. [ 1 ] But it is assumed to hold in absolute space, therefore Galilean relativity holds. A comparison can be made between Newtonian relativity and special relativity . Some of the assumptions and properties of Newton's theory are: In comparison, the corresponding statements from special relativity are as follows: Both theories assume the existence of inertial frames. In practice, the size of the frames in which they remain valid differ greatly, depending on gravitational tidal forces. In the appropriate context, a local Newtonian inertial frame , where Newton's theory remains a good model, extends to roughly 10 7 light years. [ clarification needed ] In special relativity, one considers Einstein's cabins , cabins that fall freely in a gravitational field. According to Einstein's thought experiment, a man in such a cabin experiences (to a good approximation) no gravity and therefore the cabin is an approximate inertial frame. However, one has to assume that the size of the cabin is sufficiently small so that the gravitational field is approximately parallel in its interior. This can greatly reduce the sizes of such approximate frames, in comparison to Newtonian frames. For example, an artificial satellite orbiting the Earth can be viewed as a cabin. However, reasonably sensitive instruments could detect "microgravity" in such a situation because the "lines of force" of the Earth's gravitational field converge. In general, the convergence of gravitational fields in the universe dictates the scale at which one might consider such (local) inertial frames. For example, a spaceship falling into a black hole or neutron star would (at a certain distance) be subjected to tidal forces strong enough to crush it in width and tear it apart in length. 
[ 2 ] In comparison, however, such forces might only be uncomfortable for the astronauts inside (compressing their joints, making it difficult to extend their limbs in any direction perpendicular to the gravity field of the star). Reducing the scale further, the forces at that distance might have almost no effects at all on a mouse. This illustrates the idea that all freely falling frames are locally inertial (acceleration and gravity-free) if the scale is chosen correctly. [ 2 ] There are two consistent Galilean transformations that may be used with electromagnetic fields in certain situations. A transformation T { ∗ , v } {\displaystyle T\{*,v\}} is not consistent if T { ∗ , v 1 + v 2 } ≠ T { ∗ , v 1 } + T { ∗ , v 2 } {\displaystyle T\{*,v_{1}+v_{2}\}\neq T\{*,v_{1}\}+T\{*,v_{2}\}} where v 1 {\displaystyle v_{1}} and v 2 {\displaystyle v_{2}} are velocities. A consistent transformation will produce the same results when transforming to a new velocity in one step or multiple steps. It is not possible to have a consistent Galilean transformation that transforms both the magnetic and electric fields. [ 3 ] : 256 There are useful consistent Galilean transformations that may be applied whenever either the magnetic field or the electric field is dominant. Magnetic field systems are those systems in which the electric field in the initial frame of reference is insignificant, but the magnetic field is strong. When the magnetic field is dominant and the relative velocity, v r {\displaystyle v^{\mathbf {r} }} , is low, then the following transformation may be useful: H ′ = H J f ′ = J f B ′ = B M ′ = M E ′ = E + v r × B {\displaystyle {\begin{aligned}\mathbf {H^{'}} &=\mathbf {H} \\\mathbf {J_{f}^{'}} &=\mathbf {J_{f}} \\\mathbf {B^{'}} &=\mathbf {B} \\\mathbf {M^{'}} &=\mathbf {M} \\\mathbf {E^{'}} &=\mathbf {E} +v^{\mathbf {r} }\times \mathbf {B} \\\end{aligned}}} where J f {\displaystyle \mathbf {J_{f}} } is free current density, M {\displaystyle \mathbf {M} } is magnetization density. The electric field is transformed under this transformation when changing frames of reference, but the magnetic field and related quantities are unchanged. [ 3 ] : 261 An example of this situation is a wire is moving in a magnetic field such as would occur in an ordinary generator or motor. The transformed electric field in the moving frame of reference could induce current in the wire. Electric field systems are those systems in which the magnetic field in the initial frame of reference is insignificant, but the electric field is strong. When the electric field is dominant and the relative velocity, v r {\displaystyle v^{r}} , is low, then the following transformation may be useful: E ′ = E D ′ = D ρ f ′ = ρ f P ′ = P H ′ = H − v r × D J f ′ = J f − ρ f v r {\displaystyle {\begin{aligned}\mathbf {E^{'}} &=\mathbf {E} \\\mathbf {D^{'}} &=\mathbf {D} \\\mathbf {\rho _{f}^{'}} &=\mathbf {\rho _{f}} \\\mathbf {P^{'}} &=\mathbf {P} \\\mathbf {H^{'}} &=\mathbf {H} -v^{\mathbf {r} }\times \mathbf {D} \\\mathbf {J_{f}^{'}} &=\mathbf {J_{f}} -\rho _{\mathbf {f} }v^{\mathbf {r} }\\\end{aligned}}} where ρ f {\displaystyle \rho _{\mathbf {f} }} is free charge density, P {\displaystyle \mathbf {P} } is polarization density. 
The magnetic field and free current density are transformed under this transformation when changing frames of reference, but the electric field and related quantities are unchanged [ 3 ] : 265 Because the distance covered while applying a force to an object depends on the inertial frame of reference, so depends the work done. Due to Newton's law of reciprocal actions there is a reaction force; it does work depending on the inertial frame of reference in an opposite way. The total work done is independent of the inertial frame of reference. Correspondingly the kinetic energy of an object, and even the change in this energy due to a change in velocity, depends on the inertial frame of reference. The total kinetic energy of an isolated system also depends on the inertial frame of reference: it is the sum of the total kinetic energy in a center-of-momentum frame and the kinetic energy the total mass would have if it were concentrated in the center of mass . Due to the conservation of momentum the latter does not change with time, so changes with time of the total kinetic energy do not depend on the inertial frame of reference. By contrast, while the momentum of an object also depends on the inertial frame of reference, its change due to a change in velocity does not.
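The kinematic argument earlier in the article, that positions in the two frames differ by v t while accelerations agree, can be checked symbolically. The short Python/SymPy sketch below is only an illustration; the trajectory components are arbitrary placeholder functions, not quantities from the text.

    import sympy as sp

    t = sp.symbols('t', real=True)
    v = sp.Matrix(sp.symbols('v_x v_y v_z', real=True))     # constant relative velocity
    # An arbitrary trajectory r(t) in frame S (placeholder component functions)
    r = sp.Matrix([sp.Function(name)(t) for name in ('x', 'y', 'z')])

    r_prime = r - v * t            # position of the same point in frame S'
    a = r.diff(t, 2)               # acceleration measured in S
    a_prime = r_prime.diff(t, 2)   # acceleration measured in S'

    print(sp.simplify(a - a_prime))  # zero vector: the accelerations agree,
                                     # so F = m a holds in both frames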
https://en.wikipedia.org/wiki/Galilean_invariance
In physics , a Galilean transformation is used to transform between the coordinates of two reference frames which differ only by constant relative motion within the constructs of Newtonian physics . These transformations together with spatial rotations and translations in space and time form the inhomogeneous Galilean group (assumed throughout below). Without the translations in space and time the group is the homogeneous Galilean group . The Galilean group is the group of motions of Galilean relativity acting on the four dimensions of space and time, forming the Galilean geometry . This is the passive transformation point of view. In special relativity the homogeneous and inhomogeneous Galilean transformations are, respectively, replaced by the Lorentz transformations and Poincaré transformations ; conversely, the group contraction in the classical limit c → ∞ of Poincaré transformations yields Galilean transformations. The equations below are only physically valid in a Newtonian framework, and not applicable to coordinate systems moving relative to each other at speeds approaching the speed of light . Galileo formulated these concepts in his description of uniform motion . [ 1 ] The topic was motivated by his description of the motion of a ball rolling down a ramp , by which he measured the numerical value for the acceleration of gravity near the surface of the Earth . Although the transformations are named for Galileo, it is the absolute time and space as conceived by Isaac Newton that provides their domain of definition. In essence, the Galilean transformations embody the intuitive notion of addition and subtraction of velocities as vectors . The notation below describes the relationship under the Galilean transformation between the coordinates ( x , y , z , t ) and ( x ′, y ′, z ′, t ′) of a single arbitrary event, as measured in two coordinate systems S and S′ , in uniform relative motion ( velocity v ) in their common x and x ′ directions, with their spatial origins coinciding at time t = t ′ = 0 : [ 2 ] [ 3 ] [ 4 ] [ 5 ] Note that the last equation holds for all Galilean transformations up to addition of a constant, and expresses the assumption of a universal time independent of the relative motion of different observers. In the language of linear algebra , this transformation is considered a shear mapping , and is described with a matrix acting on a vector. With motion parallel to the x -axis, the transformation acts on only two components: Though matrix representations are not strictly necessary for Galilean transformation, they provide the means for direct comparison to transformation methods in special relativity. The Galilean symmetries can be uniquely written as the composition of a rotation , a translation and a uniform motion of spacetime. [ 6 ] Let x represent a point in three-dimensional space, and t a point in one-dimensional time. A general point in spacetime is given by an ordered pair ( x , t ) . A uniform motion, with velocity v , is given by where v ∈ R 3 . A translation is given by where a ∈ R 3 and s ∈ R . A rotation is given by where R : R 3 → R 3 is an orthogonal transformation . [ 6 ] As a Lie group , the group of Galilean transformations has dimension 10. [ 6 ] Two Galilean transformations G ( R , v , a , s ) and G ( R' , v ′, a ′, s ′) compose to form a third Galilean transformation, The set of all Galilean transformations Gal(3) forms a group with composition as the group operation. 
The group is sometimes represented as a matrix group with spacetime events ( x , t , 1) as vectors where t is real and x ∈ R 3 is a position in space. The action is given by [ 7 ] where s is real and v , x , a ∈ R 3 and R is a rotation matrix . The composition of transformations is then accomplished through matrix multiplication . Care must be taken in the discussion whether one restricts oneself to the connected component group of the orthogonal transformations. Gal(3) has named subgroups. The identity component is denoted SGal(3) . Let m represent the transformation matrix with parameters v , R , s , a : The parameters s , v , R , a span ten dimensions. Since the transformations depend continuously on s , v , R , a , Gal(3) is a continuous group , also called a topological group. The structure of Gal(3) can be understood by reconstruction from subgroups. The semidirect product combination ( A ⋊ B {\displaystyle A\rtimes B} ) of groups is required. The Lie algebra of the Galilean group is spanned by H , P i , C i and L ij (an antisymmetric tensor ), subject to commutation relations , where H is the generator of time translations ( Hamiltonian ), P i is the generator of translations ( momentum operator ), C i is the generator of rotationless Galilean transformations (Galileian boosts), [ 8 ] and L ij stands for a generator of rotations ( angular momentum operator ). This Lie Algebra is seen to be a special classical limit of the algebra of the Poincaré group , in the limit c → ∞ . Technically, the Galilean group is a celebrated group contraction of the Poincaré group (which, in turn, is a group contraction of the de Sitter group SO(1,4) ). [ 9 ] Formally, renaming the generators of momentum and boost of the latter as in where c is the speed of light (or any unbounded function thereof), the commutation relations (structure constants) in the limit c → ∞ take on the relations of the former. Generators of time translations and rotations are identified. Also note the group invariants L mn L mn and P i P i . In matrix form, for d = 3 , one may consider the regular representation (embedded in GL(5; R ) , from which it could be derived by a single group contraction, bypassing the Poincaré group), i H = ( 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 0 0 ) , {\displaystyle iH=\left({\begin{array}{ccccc}0&0&0&0&0\\0&0&0&0&0\\0&0&0&0&0\\0&0&0&0&1\\0&0&0&0&0\\\end{array}}\right),\qquad } i a → ⋅ P → = ( 0 0 0 0 a 1 0 0 0 0 a 2 0 0 0 0 a 3 0 0 0 0 0 0 0 0 0 0 ) , {\displaystyle i{\vec {a}}\cdot {\vec {P}}=\left({\begin{array}{ccccc}0&0&0&0&a_{1}\\0&0&0&0&a_{2}\\0&0&0&0&a_{3}\\0&0&0&0&0\\0&0&0&0&0\\\end{array}}\right),\qquad } i v → ⋅ C → = ( 0 0 0 v 1 0 0 0 0 v 2 0 0 0 0 v 3 0 0 0 0 0 0 0 0 0 0 0 ) , {\displaystyle i{\vec {v}}\cdot {\vec {C}}=\left({\begin{array}{ccccc}0&0&0&v_{1}&0\\0&0&0&v_{2}&0\\0&0&0&v_{3}&0\\0&0&0&0&0\\0&0&0&0&0\\\end{array}}\right),\qquad } i θ i ϵ i j k L j k = ( 0 θ 3 − θ 2 0 0 − θ 3 0 θ 1 0 0 θ 2 − θ 1 0 0 0 0 0 0 0 0 0 0 0 0 0 ) . 
{\displaystyle i\theta _{i}\epsilon ^{ijk}L_{jk}=\left({\begin{array}{ccccc}0&\theta _{3}&-\theta _{2}&0&0\\-\theta _{3}&0&\theta _{1}&0&0\\\theta _{2}&-\theta _{1}&0&0&0\\0&0&0&0&0\\0&0&0&0&0\\\end{array}}\right)~.} The infinitesimal group element is then One may consider [ 10 ] a central extension of the Lie algebra of the Galilean group, spanned by H ′, P ′ i , C ′ i , L ′ ij and an operator M : The so-called Bargmann algebra is obtained by imposing [ C i ′ , P j ′ ] = i M δ i j {\displaystyle [C'_{i},P'_{j}]=iM\delta _{ij}} , such that M lies in the center , i.e. commutes with all other operators. In full, this algebra is given as and finally where the new parameter M {\displaystyle M} shows up. This extension and projective representations that this enables is determined by its group cohomology .
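The matrix representation on events (x, t, 1) and the composition rule for G(R, v, a, s) described above can be made concrete in a few lines of Python; the specific rotation angle, boost, and translations below are arbitrary illustrative values.

    import numpy as np

    def gal(R, v, a, s):
        # 5x5 matrix acting on events written as (x, y, z, t, 1)
        M = np.eye(5)
        M[:3, :3] = R          # rotation
        M[:3, 3] = v           # uniform motion (boost)
        M[:3, 4] = a           # spatial translation
        M[3, 4] = s            # time translation
        return M

    theta = 0.3
    Rz = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0, 0.0, 1.0]])
    g1 = gal(np.eye(3), np.array([2.0, 0.0, 0.0]), np.zeros(3), 0.0)   # boost along x
    g2 = gal(Rz, np.zeros(3), np.array([1.0, -1.0, 0.0]), 5.0)         # rotation + translation

    event = np.array([1.0, 2.0, 3.0, 4.0, 1.0])   # x = (1, 2, 3), t = 4, final 1
    print(g1 @ g2 @ event)                        # composition is matrix multiplication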
https://en.wikipedia.org/wiki/Galilean_transformation
The Galilei-covariant tensor formulation is a method for treating non-relativistic physics using the extended Galilei group as the representation group of the theory. It is constructed in the light cone of a five dimensional manifold. Takahashi et al., in 1988, began a study of Galilean symmetry , where an explicitly covariant non-relativistic field theory could be developed. The theory is constructed in the light cone of a (4,1) Minkowski space . [ 1 ] [ 2 ] [ 3 ] [ 4 ] Previously, in 1985, Duval et al. constructed a similar tensor formulation in the context of Newton–Cartan theory . [ 5 ] Some other authors also have developed a similar Galilean tensor formalism. [ 6 ] [ 7 ] The Galilei transformations are where R {\displaystyle R} stands for the three-dimensional Euclidean rotations, v {\displaystyle \mathbf {v} } is the relative velocity determining Galilean boosts, a stands for spatial translations and b , for time translations. Consider a free mass particle m {\displaystyle m} ; the mass shell relation is given by p 2 − 2 m E = 0 {\displaystyle p^{2}-2mE=0} . We can then define a 5-vector, with i = 1 , 2 , 3 {\displaystyle i=1,2,3} . Thus, we can define a scalar product of the type where is the metric of the space-time, and p ν g μ ν = p μ {\displaystyle p_{\nu }g^{\mu \nu }=p^{\mu }} . [ 3 ] A five dimensional Poincaré algebra leaves the metric g μ ν {\displaystyle g^{\mu \nu }} invariant, We can write the generators as The non-vanishing commutation relations will then be rewritten as An important Lie subalgebra is P 4 {\displaystyle P_{4}} is the generator of time translations ( Hamiltonian ), P i is the generator of spatial translations ( momentum operator ), K i {\displaystyle K_{i}} is the generator of Galilean boosts, and J i {\displaystyle J_{i}} stands for a generator of rotations ( angular momentum operator ). The generator P 5 {\displaystyle P_{5}} is a Casimir invariant and P 2 − 2 P 4 P 5 {\displaystyle P^{2}-2P_{4}P_{5}} is an additional Casimir invariant . This algebra is isomorphic to the extended Galilean Algebra in (3+1) dimensions with P 5 = − M {\displaystyle P_{5}=-M} , The central charge , interpreted as mass, and P 4 = − H {\displaystyle P_{4}=-H} . [ citation needed ] The third Casimir invariant is given by W μ 5 W μ 5 {\displaystyle W_{\mu \,5}W^{\mu }{}_{5}} , where W μ ν = ϵ μ α β ρ ν P α M β ρ {\displaystyle W_{\mu \nu }=\epsilon _{\mu \alpha \beta \rho \nu }P^{\alpha }M^{\beta \rho }} is a 5-dimensional analog of the Pauli–Lubanski pseudovector . [ 4 ] In 1985 Duval, Burdet and Kunzle showed that four-dimensional Newton–Cartan theory of gravitation can be reformulated as Kaluza–Klein reduction of five-dimensional Einstein gravity along a null-like direction. The metric used is the same as the Galilean metric but with all positive entries This lifting is considered to be useful for non-relativistic holographic models. [ 8 ] Gravitational models in this framework have been shown to precisely calculate the Mercury precession. [ 9 ]
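A common explicit choice for the five-dimensional light-cone metric is not reproduced in the extract above, so the following Python sketch should be read as an assumption: it takes the metric with an identity 3x3 spatial block and off-diagonal -1 entries linking the fourth and fifth directions, which reproduces the scalar product p·p = |p|² - 2 p₄ p₅ quoted above, and checks the stated mass-shell relation p² - 2mE = 0 for illustrative values.

    import numpy as np

    # Assumed (4,1) light-cone metric: spatial identity block plus g_45 = g_54 = -1
    g = np.zeros((5, 5))
    g[:3, :3] = np.eye(3)
    g[3, 4] = g[4, 3] = -1.0

    m = 2.0                                  # particle mass (illustrative)
    p3 = np.array([1.0, -0.5, 0.25])         # spatial momentum (illustrative)
    E = p3 @ p3 / (2.0 * m)                  # non-relativistic kinetic energy
    p5 = np.concatenate([p3, [E, m]])        # identify the 4th slot with E, the 5th with m

    print(p5 @ g @ p5)                       # 0: the mass-shell relation p^2 - 2 m E = 0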
https://en.wikipedia.org/wiki/Galilei-covariant_tensor_formulation
In fluid dynamics , the Galilei number ( Ga ), sometimes also referred to as the Galileo number, is a dimensionless number named after the Italian scientist Galileo Galilei (1564-1642). It may be regarded as proportional to gravity forces divided by viscous forces. The Galilei number is used in viscous flow and thermal expansion calculations, for example to describe fluid film flow over walls. These flows apply to condensers or chemical columns.
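An explicit formula is not given above; in terms of a characteristic length L, the gravitational acceleration g, and the kinematic viscosity ν, the Galilei number is commonly written Ga = g L³ / ν², which is the convention assumed in the short Python evaluation below (the numerical values are illustrative, roughly a millimetre-thick water film).

    g = 9.81        # m/s^2, gravitational acceleration
    L = 1.0e-3      # m, characteristic length (assumed film thickness)
    nu = 1.0e-6     # m^2/s, kinematic viscosity (roughly water at 20 C)
    Ga = g * L ** 3 / nu ** 2   # dimensionless Galilei number
    print(Ga)       # about 9.8e3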
https://en.wikipedia.org/wiki/Galilei_number
In classical mechanics and kinematics , Galileo's law of odd numbers states that the distance covered by a falling object in successive equal time intervals is linearly proportional to the odd numbers. That is, if a body falling from rest covers a certain distance during an arbitrary time interval, it will cover 3, 5, 7, etc. times that distance in the subsequent time intervals of the same length. This mathematical model is accurate if the body is not subject to any forces besides uniform gravity (for example, it is falling in a vacuum in a uniform gravitational field ). This law was established by Galileo Galilei who was the first to make quantitative studies of free fall . The graph in the figure is a plot of speed versus time. Distance covered is the area under the line. Each time interval is coloured differently. The distance covered in the second and subsequent intervals is the area of its trapezium, which can be subdivided into triangles as shown. As each triangle has the same base and height, they have the same area as the triangle in the first interval. It can be observed that every interval has two more triangles than the previous one. Since the first interval has one triangle, this leads to the odd numbers. [ 1 ] From the equation for uniform linear acceleration, the distance covered s = u t + 1 2 a t 2 {\displaystyle s=ut+{\tfrac {1}{2}}at^{2}} for initial speed u = 0 , {\displaystyle u=0,} constant acceleration a {\displaystyle a} (acceleration due to gravity without air resistance), and time elapsed t , {\displaystyle t,} it follows that the distance s {\displaystyle s} is proportional to t 2 {\displaystyle t^{2}} (in symbols, s ∝ t 2 {\displaystyle s\propto t^{2}} ), thus the distance from the starting point are consecutive squares for integer values of time elapsed. The middle figure in the diagram is a visual proof that the sum of the first n {\displaystyle n} odd numbers is n 2 . {\displaystyle n^{2}.} [ 2 ] In equations: That the pattern continues forever can also be proven algebraically: ∑ k = 1 n ( 2 k − 1 ) = 1 2 ( ∑ k = 1 n ( 2 k − 1 ) + ∑ k = 1 n ( 2 ( n − k + 1 ) − 1 ) ) = 1 2 ∑ k = 1 n ( 2 ( n + 1 ) − 1 − 1 ) = n 2 {\displaystyle {\begin{aligned}\sum _{k=1}^{n}(2\,k-1)&={\frac {1}{2}}\,\left(\sum _{k=1}^{n}(2\,k-1)+\sum _{k=1}^{n}(2\,(n-k+1)-1)\right)\\&={\frac {1}{2}}\,\sum _{k=1}^{n}(2\,(n+1)-1-1)\\&=n^{2}\end{aligned}}} To clarify this proof, since the n {\displaystyle n} th odd positive integer is m : = 2 n − 1 , {\displaystyle m\,\colon =\,2n-1,} if S : = ∑ k = 1 n ( 2 k − 1 ) = 1 + 3 + ⋯ + ( m − 2 ) + m {\displaystyle S\,\colon =\,\sum _{k=1}^{n}(2\,k-1)\,=\,1+3+\cdots +(m-2)+m} denotes the sum of the first n {\displaystyle n} odd integers then S + S = 1 + 3 + ⋯ + ( m − 2 ) + m + m + ( m − 2 ) + ⋯ + 3 + 1 = ( m + 1 ) + ( m + 1 ) + ⋯ + ( m + 1 ) + ( m + 1 ) ( n terms) = n ( m + 1 ) {\displaystyle {\begin{alignedat}{4}S+S&=\;\;1&&+\;\;3&&\;+\cdots +(m-2)&&+\;\;m\\&+\;\;m&&+(m-2)&&\;+\cdots +\;\;3&&+\;\;1\\&=\;(m+1)&&+(m+1)&&\;+\cdots +(m+1)&&+(m+1)\quad {\text{ (}}n{\text{ terms)}}\\&=\;n\,(m+1)&&&&&&&&\\\end{alignedat}}} so that S = 1 2 n ( m + 1 ) . 
{\displaystyle S={\tfrac {1}{2}}\,n\,(m+1).} Substituting n = 1 2 ( m + 1 ) {\displaystyle n={\tfrac {1}{2}}(m+1)} and m + 1 = 2 n {\displaystyle m+1=2\,n} gives, respectively, the formulas 1 + 3 + ⋯ + m = 1 4 ( m + 1 ) 2 and 1 + 3 + ⋯ + ( 2 n − 1 ) = n 2 {\displaystyle 1+3+\cdots +m\;=\;{\tfrac {1}{4}}(m+1)^{2}\quad {\text{ and }}\quad 1+3+\cdots +(2\,n-1)\;=\;n^{2}} where the first formula expresses the sum entirely in terms of the odd integer m {\displaystyle m} while the second expresses it entirely in terms of n , {\displaystyle n,} which is m {\displaystyle m} 's ordinal position in the list of odd integers 1 , 3 , 5 , … . {\displaystyle 1,3,5,\ldots .} This classical mechanics –related article is a stub . You can help Wikipedia by expanding it .
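The proportionality to the odd numbers is easy to verify numerically. In the following Python snippet the value of g and the interval length are arbitrary, since only the ratios matter; it computes the distance fallen in each of the first few equal time intervals and normalizes by the first.

    # Distance fallen from rest after time t: s(t) = (1/2) g t^2
    g = 9.81          # m/s^2 (the value does not affect the ratios)
    dt = 1.0          # length of each time interval, s
    s = lambda t: 0.5 * g * t * t

    intervals = [s((k + 1) * dt) - s(k * dt) for k in range(5)]
    ratios = [d / intervals[0] for d in intervals]
    print(ratios)     # [1.0, 3.0, 5.0, 7.0, 9.0] -- the odd numbers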
https://en.wikipedia.org/wiki/Galileo's_law_of_odd_numbers
GalileoMobile is a non-profit science education organization that brings astronomy closer to young people worldwide. The GalileoMobile emphasis is on regions with little or no access to knowledge about astronomy. Created in late 2008 on an inspiration from the 2009 International Year of Astronomy , [ 1 ] GalileoMobile is run by a group of astronomers, educators, and science communicators. The GalileoMobile team organises workshops for teachers and astronomy-related activities for students in schools and villages. To encourage follow-up activities, GalileoMobile also donates a Galileoscope or a "You are Galileo" telescope, a copy of the GalileoMobile Handbook, a UNAWE Earthball, and other educational material. In GalileoMobile's first project, the team travelled across the Andes High Plateau in Chile , Bolivia , and Peru in 2009. Since then, GalileoMobile has been to Bolivia (2012), India (2012), Uganda (2013), Bolivia and Brazil (2014), and Colombia (2014). Some team members have also carried out activities in Portugal, Nepal, the United States, the Dominican Republic , Haiti, and Guatemala . In 2015, GalileoMobile started the Constellation project. This project brings together schools from Argentina, Bolivia, Brazil, Chile, Colombia, Ecuador and Peru to create a South American network of schools . Their students were able to take part in "Space Exploration", a series of astronomical outreach activities created especially for this project. GalileoMobile continues to support independently organised astronomical outreach in the Constellation group. The IAU decided to endorse Constellation as a Major Cosmic Light programme of the International Year of Light 2015 . The Office for Astronomical Development sponsored about 30% of the project. [ 2 ]
https://en.wikipedia.org/wiki/GalileoMobile
In the Gallagher–Hollander degradation (1946) pyruvic acid is removed from a linear aliphatic carboxylic acid yielding a new acid with two carbon atoms fewer. [ 1 ] The original publication concerns the conversion of bile acid in a series of reactions: acid chloride (2) formation with thionyl chloride , diazoketone formation (3) with diazomethane , chloromethyl ketone formation (4) with hydrochloric acid , organic reduction of chlorine to methylketone (5), ketone halogenation to 6, elimination reaction with pyridine to enone 7 and finally oxidation with chromium trioxide to bisnorcholanic acid 8.
https://en.wikipedia.org/wiki/Gallagher–Hollander_degradation
In graph theory , the Gallai–Hasse–Roy–Vitaver theorem is a form of duality between the colorings of the vertices of a given undirected graph and the orientations of its edges. It states that the minimum number of colors needed to properly color any graph G {\displaystyle G} equals one plus the length of a longest path in an orientation of G {\displaystyle G} chosen to minimize this path's length. [ 1 ] The orientations for which the longest path has minimum length always include at least one acyclic orientation . [ 2 ] This theorem implies that every orientation of a graph with chromatic number k {\displaystyle k} contains a simple directed path with k {\displaystyle k} vertices; [ 3 ] this path can be constrained to begin at any vertex that can reach all other vertices of the oriented graph. [ 4 ] [ 5 ] A bipartite graph may be oriented from one side of the bipartition to the other. The longest path in this orientation has length one, with only two vertices. Conversely, if a graph is oriented without any three-vertex paths, then every vertex must either be a source (with no incoming edges) or a sink (with no outgoing edges) and the partition of the vertices into sources and sinks shows that it is bipartite. [ 6 ] In any orientation of a cycle graph of odd length, it is not possible for the edges to alternate in orientation all around the cycle, so some two consecutive edges must form a path with three vertices. [ 7 ] Correspondingly, the chromatic number of an odd cycle is three. [ 8 ] To prove that the chromatic number is greater than or equal to the minimum number of vertices in a longest path, suppose that a given graph has a coloring with k {\displaystyle k} colors, for some number k {\displaystyle k} . Then it may be acyclically oriented by numbering colors and by directing each edge from its lower-numbered endpoint to the higher-numbered endpoint. With this orientation, the numbers are strictly increasing along each directed path, so each path can include at most one vertex of each color, for a total of at most k {\displaystyle k} vertices per path. [ 3 ] To prove that the chromatic number is less than or equal to the minimum number of vertices in a longest path, suppose that a given graph has an orientation with at most k {\displaystyle k} vertices per simple directed path, for some number k {\displaystyle k} . Then the vertices of the graph may be colored with k {\displaystyle k} colors by choosing a maximal acyclic subgraph of the orientation, and then coloring each vertex by the length of the longest path in the chosen subgraph that ends at that vertex. Each edge within the subgraph is oriented from a vertex with a lower number to a vertex with a higher number, and is therefore properly colored. For each edge that is not in the subgraph, there must exist a directed path within the subgraph connecting the same two vertices in the opposite direction, for otherwise the edge could have been included in the chosen subgraph; therefore, the edge is oriented from a higher number to a lower number and is again properly colored. [ 1 ] The proof of this theorem was used as a test case for a formalization of mathematical induction by Yuri Matiyasevich . [ 9 ] The theorem also has a natural interpretation in the category of directed graphs and graph homomorphisms . A homomorphism is a map from the vertices of one graph to the vertices of another that always maps edges to edges. 
Thus, a k {\displaystyle k} -coloring of an undirected graph G {\displaystyle G} may be described by a homomorphism from G {\displaystyle G} to the complete graph K k {\displaystyle K_{k}} . If the complete graph is given an orientation, it becomes a tournament , and the orientation can be lifted back across the homomorphism to give an orientation of G {\displaystyle G} . In particular, the coloring given by the length of the longest incoming path corresponds in this way to a homomorphism to a transitive tournament (an acyclically oriented complete graph), and every coloring can be described by a homomorphism to a transitive tournament in this way. [ 2 ] Considering homomorphisms in the other direction, to G {\displaystyle G} instead of from G {\displaystyle G} , a directed graph G {\displaystyle G} has at most k {\displaystyle k} vertices in its longest path if and only if there is no homomorphism from the path graph P k + 1 {\displaystyle P_{k+1}} to G {\displaystyle G} . [ 2 ] Thus, the Gallai–Hasse–Roy–Vitaver theorem can be equivalently stated as follows: For every directed graph G {\displaystyle G} , there is a homomorphism from G {\displaystyle G} to the k {\displaystyle k} -vertex transitive tournament if and only if there is no homomorphism from the ( k + 1 ) {\displaystyle (k+1)} -vertex path to G {\displaystyle G} . [ 2 ] In the case that G {\displaystyle G} is acyclic, this can also be seen as a form of Mirsky's theorem that the longest chain in a partially ordered set equals the minimum number of antichains into which the set may be partitioned. [ 10 ] This statement can be generalized from paths to other directed graphs: for every polytree P {\displaystyle P} there is a dual directed graph D {\displaystyle D} such that, for every directed graph G {\displaystyle G} , there is a homomorphism from G {\displaystyle G} to D {\displaystyle D} if and only if there is not a homomorphism from P {\displaystyle P} to G {\displaystyle G} . [ 11 ] The Gallai–Hasse–Roy–Vitaver theorem has been repeatedly rediscovered. [ 2 ] It is named after separate publications by Tibor Gallai , [ 12 ] Maria Hasse , [ 13 ] B. Roy, [ 14 ] and L. M. Vitaver. [ 15 ] Roy credits the statement of the theorem to a conjecture in a 1958 graph theory textbook by Claude Berge . [ 14 ] It is a generalization of a much older theorem of László Rédei from 1934, that every tournament (an oriented complete graph) contains a directed Hamiltonian path . [ 16 ] [ 17 ] Rédei's theorem follows immediately from the Gallai–Hasse–Roy–Vitaver theorem applied to an undirected complete graph. [ 16 ] Instead of orienting a graph to minimize the length of its longest path, it is also natural to maximize the length of the shortest path, for a strong orientation (one in which every pair of vertices has a shortest path). Having a strong orientation requires that the given undirected graph be a bridgeless graph . For these graphs, it is always possible to find a strong orientation in which, for some pair of vertices, the shortest path length equals the length of the longest path in the given undirected graph. [ 18 ] [ 19 ]
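One direction of the theorem, turning a proper coloring into an orientation whose longest directed path is short, is simple enough to demonstrate in a few lines of Python. The graph and coloring below are assumed purely for illustration (a 5-cycle with a proper 3-coloring); each edge is oriented from the lower-numbered color to the higher, and the longest directed path is then counted in vertices.

    # A proper k-coloring yields an acyclic orientation whose longest
    # directed path has at most k vertices (here k = 3 for the 5-cycle).
    edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)]
    color = {0: 0, 1: 1, 2: 0, 3: 1, 4: 2}            # a proper 3-coloring

    # Orient each edge from the lower-colored endpoint to the higher-colored one.
    arcs = [(u, w) if color[u] < color[w] else (w, u) for (u, w) in edges]

    out = {}
    for u, w in arcs:
        out.setdefault(u, []).append(w)

    def longest_path_from(v):
        # number of vertices on the longest directed path starting at v
        return 1 + max((longest_path_from(w) for w in out.get(v, [])), default=0)

    print(max(longest_path_from(v) for v in color))   # 3, the chromatic number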
https://en.wikipedia.org/wiki/Gallai–Hasse–Roy–Vitaver_theorem
Gallic acid (also known as 3,4,5-trihydroxybenzoic acid ) is a trihydroxybenzoic acid with the formula C 6 H 2 ( OH ) 3 CO 2 H. It is classified as a phenolic acid . It is found in gallnuts , sumac , witch hazel , tea leaves, oak bark , and other plants . [ 1 ] It is a white solid, although samples are typically brown owing to partial oxidation. Salts and esters of gallic acid are termed "gallates". Its name is derived from oak galls , which were historically used to prepare tannic acid . Despite the name, gallic acid does not contain gallium . Gallic acid is easily freed from gallotannins by acidic or alkaline hydrolysis . When heated with concentrated sulfuric acid , gallic acid converts to rufigallol . Hydrolyzable tannins break down on hydrolysis to give gallic acid and glucose or ellagic acid and glucose, known as gallotannins and ellagitannins , respectively. [ 2 ] Gallic acid is formed from 3-dehydroshikimate by the action of the enzyme shikimate dehydrogenase to produce 3,5-didehydroshikimate. This latter compound aromatizes . [ 3 ] [ 4 ] Alkaline solutions of gallic acid are readily oxidized by air. The oxidation is catalyzed by the enzyme gallate dioxygenase , an enzyme found in Pseudomonas putida . Oxidative coupling of gallic acid with arsenic acid, permanganate, persulfate, or iodine yields ellagic acid , as does reaction of methyl gallate with iron(III) chloride . [ 5 ] Gallic acid forms intermolecular esters ( depsides ) such as digallic and cyclic ether-esters ( depsidones ). [ 5 ] Hydrogenation of gallic acid gives the cyclohexane derivative hexahydrogallic acid. [ 6 ] Heating gallic acid gives pyrogallol (1,2,3-trihydroxybenzene). This conversion is catalyzed by gallate decarboxylase . Many esters of gallic acid are known, both synthetic and natural. Gallate 1-beta-glucosyltransferase catalyzes the glycosylation (attachment of glucose) of gallic acid. Gallic acid is an important component of iron gall ink , the standard European writing and drawing ink from the 12th to 19th centuries, with a history extending to the Roman empire and the Dead Sea Scrolls . Pliny the Elder (23–79 AD) describes the use of gallic acid as a means of detecting an adulteration of verdigris [ 7 ] and writes that it was used to produce dyes. Galls (also known as oak apples) from oak trees were crushed and mixed with water, producing tannic acid . It could then be mixed with green vitriol ( ferrous sulfate )—obtained by allowing sulfate-saturated water from a spring or mine drainage to evaporate [ citation needed ] —and gum arabic from acacia trees; this combination of ingredients produced the ink. [ 8 ] Gallic acid was one of the substances used by Angelo Mai (1782–1854), among other early investigators of palimpsests , to clear the top layer of text off and reveal hidden manuscripts underneath. Mai was the first to employ it, but did so "with a heavy hand", often rendering manuscripts too damaged for subsequent study by other researchers. [ 9 ] Gallic acid was first studied by the Swedish chemist Carl Wilhelm Scheele in 1786. [ 10 ] In 1818, French chemist and pharmacist Henri Braconnot (1780–1855) devised a simpler method of purifying gallic acid from galls; [ 11 ] gallic acid was also studied by the French chemist Théophile-Jules Pelouze (1807–1867), [ 12 ] among others. When mixed with acetic acid , gallic acid had uses in early types of photography, like the calotype to make the silver more sensitive to light; it was also used in developing photographs. 
[ 13 ] Gallic acid is found in a number of land plants , such as the parasitic plant Cynomorium coccineum , [ 14 ] the aquatic plant Myriophyllum spicatum , and the blue-green alga Microcystis aeruginosa . [ 15 ] Gallic acid is also found in various oak species, [ 16 ] Caesalpinia mimosoides , [ 17 ] and in the stem bark of Boswellia dalzielii , [ 18 ] among others. Many foodstuffs contain various amounts of gallic acid, especially fruits (including strawberries, grapes, bananas), [ 19 ] [ 20 ] as well as teas , [ 19 ] [ 21 ] cloves, [ 22 ] and vinegars . [ 23 ] Carob fruit is a rich source of gallic acid (24–165 mg per 100 g). [ 24 ] Esters of gallic acid, also known as galloylated esters or gallates, are antioxidants useful in food preservation, with propyl gallate being the most commonly used; their use in human health is scantly supported by evidence. Reported NMR data for gallic acid in acetone-d6 are: 1 H NMR δ 7.15 (2H, s, H-3 and H-7); 13 C NMR δ 167.39 (C-1), 144.94 (C-4 and C-6), 137.77 (C-5), 120.81 (C-2), 109.14 (C-3 and C-7) (s: singlet, d: doublet, dd: doublet of doublets, m: multiplet). [ 17 ]
https://en.wikipedia.org/wiki/Gallic_acid
The Gallic acid reagent is used as a simple spot test to presumptively identify drug precursor chemicals. It is composed of a mixture of gallic acid and concentrated sulfuric acid : [ 1 ] 0.05 g of gallic acid is used for every 10 mL of sulfuric acid. [ 2 ] The same ratio of gallic acid n-propyl ester in sulfuric acid can also be used. [ 3 ] Because the mixed reagent has a short shelf life (it changes to a pale violet color), it is sometimes prepared by dissolving the gallic acid in ethanol and adding the sulfuric acid from a separate bottle at the time of testing. In this case 100 mL of ethanol is used, and one drop of sulfuric acid is added per drop of the gallic acid in ethanol solution. [ 1 ]
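As a rough illustration of the stated proportions, the sketch below scales the 0.05 g per 10 mL ratio to other batch volumes. The batch volumes and the function name are arbitrary examples introduced here for illustration; only the ratio itself comes from the published recipe.

```python
# Illustrative scaling of the gallic acid reagent recipe
# (0.05 g gallic acid per 10 mL concentrated sulfuric acid, as stated above).
# The batch volumes below are arbitrary examples, not recommended sizes.

GALLIC_ACID_G_PER_ML_H2SO4 = 0.05 / 10.0   # g gallic acid per mL of H2SO4

def gallic_acid_needed(h2so4_volume_ml: float) -> float:
    """Grams of gallic acid needed for a given volume of sulfuric acid."""
    return GALLIC_ACID_G_PER_ML_H2SO4 * h2so4_volume_ml

if __name__ == "__main__":
    for volume in (10.0, 25.0, 100.0):        # mL of concentrated H2SO4
        print(f"{volume:6.1f} mL H2SO4 -> {gallic_acid_needed(volume):.3f} g gallic acid")
```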
https://en.wikipedia.org/wiki/Gallic_acid_reagent
Gallic horse ( Equus caballus gallicus ) is a prehistoric subspecies of Equus caballus (the horse) that lived in the Upper Paleolithic . It first appeared in the Aurignacian period because of climatic changes and roamed the territory of present-day France during the Gravettian and up to the end of the Solutrean . Its fossils , dated from 40,000 to around 15,000 years BC, are close to those of Equus caballus germanicus (the Germanic horse) and may not correspond to a valid subspecies. First described by François Prat in 1968, it is around 1.40 m (4.6 ft) tall and differs from Equus caballus germanicus mainly in its dentition and slightly smaller size. There is no consensus among specialists as to the validity of the subspecies Equus caballus gallicus . Based on paleontological discoveries at numerous sites in present-day France, such as Solutré , Camiac and La Quina , François Prat postulates that Equus caballus gallicus gradually replaced Equus caballus germanicus and that the two subspecies are distinct. On the other hand, Véra Eisenmann, a CNRS and MNHN researcher, postulates that the specimens attributed to Equus caballus gallicus do not present a sufficiently distinct variation from the subspecies Equus caballus germanicus . [ 1 ] However, it is accepted that Equus caballus arcelini , a well-differentiated subspecies, has replaced the populations made up of specimens traditionally attributed to Equus caballus germanicus and Equus caballus gallicus . The discovery of this subspecies followed the examination of horse bones found at Solutré and recovered by Jean Combier. Noting differences in morphology associated with different dating (suggesting different species or subspecies among these fossils), François Prat and Combier postulated the existence of two differentiated types of horse on this site: Equus caballus gallicus and Equus caballus arcelini . [ 2 ] The name chosen refers to the territory that Equus caballus gallicus occupied, Gaul . Because it forms most of the fossils found at Solutré, Equus caballus gallicus is generally referred to by the still-common name of "Solutré horse". [ 3 ] It is considered a subspecies. As the evolutionary history of Equidae remains controversial, it is sometimes (rarely) considered a species of the genus Equus , named Equus gallicus . Not all prehistorians and paleontologists recognize the existence of this taxon . [ 4 ] Vera Eisenmann postulates that Equus caballus germanicus can show variations in size and dentition, and therefore that Equus caballus gallicus never existed. [ 4 ] According to her, Equus caballus arcelini would have succeeded Equus caballus germanicus directly 15,000 years BC, with much more visible morphological changes. [ 5 ] [ 6 ] According to a theory put forward by N. Spassov and N. Iliev in 1997, it would seem that "cut off from the parent population in northern and central Europe by climatic barriers, Equus (caballus) germanicus evolved into gallicus and then arcelini in western Europe", while horses in eastern and southeastern Europe evolved differently. [ 7 ] According to Vera Eisenmann, the transition from Equus caballus germanicus to gallicus appears to have been gradual, accompanying changes in the biotope. As horses eat more and more grasses , their dentition changes. [ 8 ] Equus caballus gallicus was first described by François Prat in 1968. Smaller than Equus caballus germanicus (1.40 m or 4.6 ft on average), it has a different morphology, with more pronounced caballins characters on its dentition . 
[ 9 ] [ 10 ] [ 11 ] It is also lighter than the latter, with broad hooves and a short, voluminous head with strong teeth, resting on a short, broad neck. Based on cave paintings and primitive horses such as the Przewalski , specialists attribute a dun or pangaré coat (light brownish-yellow, black manes and tips, discoloration of the underside). [ 3 ] Equus caballus gallicus appeared after the first half of Würm III . [ 3 ] It is inseparable from the Aurignacian and Gravettian periods. [ 2 ] It lasted until the Solutrean and Magdalenian periods. [ 12 ] Between 35,000 and 22,000 B.P. , the climate in present-day France was cold or temperate. At this time, there were vast areas of grassland, ideal for herding horses. It is then possible that a new species or subspecies better adapted to climatic constraints succeeded Equus caballus gallicus in south-western France at the end of the Würm IV, but this question remains debated. [ 3 ] Equus caballus gallicus preferred to live in "dry to compound steppe environments" with few hygrophilous plants, [ 2 ] in cold, dry climates , where grass was abundant. Gregarious, it congregated in large herds and preferred large, open areas, enabling it to move quickly in search of meadows where it could feed. It tolerated wide temperature ranges, as well as temperate climates . [ 3 ] Equus caballus gallicus is common in southwestern France, particularly in Aquitaine , Périgord and Quercy . [ 13 ] Its remains have been identified at various prehistoric sites, including Camiac (Gironde, 35,000 years BC) [ 14 ] and Nespouls ( Corrèze , 30,000 years BC). [ 2 ] [ 15 ] This subspecies generally succeeds Equus caballus germanicus , then is itself replaced by Equus caballus arcelini , associated with the Magdalenian . Solutré is the first site where bones of this subspecies have been identified. Equus caballus gallicus appeared in the region in the second half of Würm III , as a successor to Equus caballus germanicus , which had been present since Würm II . [ 3 ] [ 16 ] Horses probably often passed close to the Rocher de Solutré during their seasonal migrations , overwintering in the Rhône and Saône valleys before moving west to the plateaus when the weather warmed up. Paleolithic human groups took advantage of the passage of numerous herds to slaughter animals. [ 13 ] In 1985, Jean-Pierre Penisson summarized the numerous prehistoric horse remains found in the Ardennes region. During the Würm II period, Equus caballus gallicus settled in the Dommery region . According to the Laboratory of Quaternary Geology and Prehistory at the University of Bordeaux 1 , this horse could be the origin of today's Ardennais breed. [ 17 ] For their part, Belgian researchers note that during the same period, Equus caballus germanicus was gradually supplanted by Equus caballus gallicus , which became a highly prized game animal by the end of the Upper Paleolithic . During the Holocene , horses became rarer in the region. [ 17 ] The Ardennais (one of France's oldest horse breeds , [ 18 ] and probably Europe's oldest draft horse [ 19 ] ) has long been considered a direct descendant of the Solutré horse, [ 18 ] [ 20 ] which lived in the Saône and Meuse basins in the 50th millennium BC and settled on schistose plateaus with a harsh climate at the same time. [ 21 ] However, there is no evidence that horses from the Solutré site migrated to the Ardennes. [ 17 ] Also at the La Quina site, Equus caballus gallicus succeeded Equus caballus germanicus . 
[ 22 ] This evolution is probably linked to climatic changes. Radiocarbon dating puts it at around 43,000 years old, or 35,000 years old, [ 23 ] the differences perhaps being due to the lack of precision of this method. [ 24 ] Located in the commune of Bize-Minervois in the Aude department, this cave also shows a transition between the two subspecies, dated at around 33,000 years BC, [ 5 ] and therefore later than La Quina . [ 5 ] Most of the bones found belong to Equus caballus gallicus . [ 25 ]
https://en.wikipedia.org/wiki/Gallic_horse
The Leblanc process was an early industrial process for making soda ash ( sodium carbonate ) used throughout the 19th century, named after its inventor, Nicolas Leblanc . It involved two stages: making sodium sulfate from sodium chloride , followed by reacting the sodium sulfate with coal and calcium carbonate to make sodium carbonate. The process gradually became obsolete after the development of the Solvay process . Soda ash ( sodium carbonate ) and potash ( potassium carbonate ), collectively termed alkali , are vital chemicals in the glass , textile , soap , and paper industries. The traditional source of alkali in western Europe had been potash obtained from wood ashes. However, by the 13th century, deforestation had rendered this means of production uneconomical, and alkali had to be imported. Potash was imported from North America, Scandinavia, and Russia, where large forests still stood. Soda ash was imported from Spain and the Canary Islands, where it was produced from the ashes of glasswort plants (called barilla ashes in Spain), or imported from Syria. [ 1 ] The soda ash from glasswort plant ashes was mainly a mixture of sodium carbonate and potassium carbonate. In addition, in Egypt, naturally occurring sodium carbonate, the mineral natron , was mined from dry lakebeds. In Britain, the only local source of alkali was from kelp , which washed ashore in Scotland and Ireland. [ 2 ] [ 3 ] In 1783, King Louis XVI of France and the French Academy of Sciences offered a prize of 2400 livres for a method to produce alkali from sea salt ( sodium chloride ). In 1791, Nicolas Leblanc , physician to Louis Philip II, Duke of Orléans , patented a solution. That same year he built the first Leblanc plant for the Duke at Saint-Denis , and this began to produce 320 tons of soda per year. [ 4 ] He was denied his prize money because of the French Revolution . [ 5 ] For more recent history, see industrial history below. In the first step, sodium chloride is treated with sulfuric acid in the Mannheim process . This reaction produces sodium sulfate (called the salt cake ) and hydrogen chloride : 2 NaCl + H 2 SO 4 → Na 2 SO 4 + 2 HCl. This chemical reaction had been discovered in 1772 by the Swedish chemist Carl Wilhelm Scheele . Leblanc's contribution was the second step, in which a mixture of the salt cake and crushed limestone ( calcium carbonate ) was reduced by heating with coal . [ 6 ] This conversion entails two parts. First is the carbothermic reaction whereby the coal, a source of carbon , reduces the sulfate to sulfide : Na 2 SO 4 + 2 C → Na 2 S + 2 CO 2 . The second stage is the reaction of the resulting sodium sulfide with the calcium carbonate to produce sodium carbonate and calcium sulfide : Na 2 S + CaCO 3 → Na 2 CO 3 + CaS. This mixture is called black ash . [ citation needed ] The soda ash is extracted from the black ash with water. Evaporation of this extract yields solid sodium carbonate. This extraction process was termed lixiviation. [ citation needed ] In response to the Alkali Act , the noxious calcium sulfide was converted into calcium carbonate: CaS + CO 2 + H 2 O → CaCO 3 + H 2 S. The hydrogen sulfide can be used as a sulfur source for the lead chamber process to produce the sulfuric acid used in the first step of the Leblanc process. Likewise, by 1874 the Deacon process was invented, oxidizing the hydrochloric acid over a copper catalyst: 4 HCl + O 2 → 2 Cl 2 + 2 H 2 O. The chlorine would be sold for bleach in paper and textile manufacturing. Eventually, the chlorine sales became the purpose of the Leblanc process. The inexpensive chlorine was a contributor to the development of the chloralkali process .
[ citation needed ] The sodium chloride is initially mixed with concentrated sulfuric acid and the mixture exposed to low heat. The hydrogen chloride gas bubbles off and was discarded to atmosphere before gas absorption towers were introduced. This continues until all that is left is a fused mass. This mass still contains enough chloride to contaminate the later stages of the process. The mass is then exposed to direct flame, which evaporates nearly all of the remaining chloride. [ 7 ] [ 8 ] The coal used in the next step must be low in nitrogen to avoid the formation of cyanide . The calcium carbonate, in the form of limestone or chalk, should be low in magnesia and silica. The weight ratio of the charge is 2:2:1 of salt cake, calcium carbonate, and carbon respectively. It is fired in a reverberatory furnace at about 1000 °C. [ 9 ] Sometimes the reverberatory furnace rotated and thus was called a "revolver". [ 10 ] The black-ash product of firing must be lixiviated right away to prevent oxidation of sulfides back to sulfate. [ 9 ] In the lixiviation process, the black-ash is completely covered in water, again to prevent oxidation. To optimize the leaching of soluble material, the lixiviation is done in cascaded stages. That is, pure water is used on the black-ash that has already been through prior stages. The liquor from that stage is used to leach an earlier stage of the black-ash, and so on. [ 9 ] The final liquor is treated by blowing carbon dioxide through it. This precipitates dissolved calcium and other impurities. It also volatilizes the sulfide, which is carried off as H 2 S gas. Any residual sulfide can be subsequently precipitated by adding zinc hydroxide . The liquor is separated from the precipitate and evaporated using waste heat from the reverberatory furnace. The resulting ash is then redissolved into concentrated solution in hot water. Solids that fail to dissolve are separated. The solution is then cooled to recrystallize nearly pure sodium carbonate decahydrate. [ 9 ] Leblanc established the first Leblanc process plant in 1791 in St. Denis . However, French Revolutionaries seized the plant, along with the rest of Louis Philip's estate, in 1794, and publicized Leblanc's trade secrets . Napoleon I returned the plant to Leblanc in 1801, but lacking the funds to repair it and compete against other soda works that had been established in the meantime, Leblanc committed suicide in 1806. [ 5 ] By the early 19th century, French soda ash producers were making 10,000 - 15,000 tons annually. However, it was in Britain that the Leblanc process became most widely practiced. [ 5 ] The first British soda works using the Leblanc process was built by the Losh family of iron founders at the Losh, Wilson and Bell works in Walker on the River Tyne in 1816, but steep British tariffs on salt production hindered the economics of the Leblanc process and kept such operations on a small scale until 1824. Following the repeal of the salt tariff, the British soda industry grew dramatically. The Bonnington Chemical Works was possibly the earliest production, [ 11 ] and the chemical works established by James Muspratt in Liverpool and Flint , and by Charles Tennant near Glasgow became some of the largest in the world. Muspratt's Liverpool works enjoyed proximity and transport links to the Cheshire salt mines, the St Helens coalfields and the North Wales and Derbyshire limestone quarries. [ 12 ] By 1852, annual soda production had reached 140,000 tons in Britain and 45,000 tons in France. 
[ 5 ] By the 1870s, the British soda output of 200,000 tons annually exceeded that of all other nations in the world combined. [ citation needed ] In 1861, the Belgian chemist Ernest Solvay developed a more direct process for producing soda ash from salt and limestone through the use of ammonia . The only waste product of this Solvay process was calcium chloride , and so it was both more economical and less polluting than the Leblanc method. From the late 1870s, Solvay-based soda works on the European continent provided stiff competition in their home markets to the Leblanc-based British soda industry. Additionally the Brunner Mond Solvay plant which opened in 1874 at Winnington near Northwich provided fierce competition nationally. Leblanc producers were unable to compete with Solvay soda ash, and their soda ash production was effectively an adjunct to their still profitable production of chlorine, bleaching powder etc. (The unwanted by-products had become the profitable products). The development of electrolytic methods of chlorine production removed that source of profits as well, and there followed a decline moderated only by "gentlemen's' agreements" with Solvay producers. [ 13 ] By 1900, 90% of the world's soda production was through the Solvay method, or on the North American continent, through the mining of trona , discovered in 1938, which caused the closure of the last North American Solvay plant in 1986. The last Leblanc-based soda ash plant in the West closed in the early 1920s, [ 3 ] but when during WWII Nationalist China had to evacuate its industry to the inland rural areas, the difficulties in importing and maintaining complex equipment forced them to temporarily re-establish the Leblanc process. [ 14 ] However, the Solvay process does not work for the manufacture of potassium carbonate , because it relies on the low solubility of the corresponding bicarbonate . The Leblanc process plants were quite damaging to the local environment. The process of generating salt cake from salt and sulfuric acid released hydrochloric acid gas , and because this acid was industrially useless in the early 19th century, it was simply vented into the atmosphere. Also, an insoluble smelly solid waste was produced. For every 8 tons of soda ash, the process produced 5.5 tons of hydrogen chloride and 7 tons of calcium sulfide waste. This solid waste (known as galligu) had no economic value, and was piled in heaps and spread on fields near the soda works, where it weathered to release hydrogen sulfide , the toxic gas responsible for the odor of rotten eggs. [ citation needed ] Because of their noxious emissions, Leblanc soda works became targets of lawsuits and legislation. An 1839 suit against soda works alleged, "the gas from these manufactories is of such a deleterious nature as to blight everything within its influence, and is alike baneful to health and property. The herbage of the fields in their vicinity is scorched, the gardens neither yield fruit nor vegetables; many flourishing trees have lately become rotten naked sticks. Cattle and poultry droop and pine away. It tarnishes the furniture in our houses, and when we are exposed to it, which is of frequent occurrence, we are afflicted with coughs and pains in the head ... all of which we attribute to the Alkali works." [ 15 ] In 1863, the British Parliament passed the Alkali Act 1863 , the first of several Alkali Acts , the first modern air pollution legislation. 
This act allowed that no more than 5% of the hydrochloric acid produced by alkali plants could be vented to the atmosphere. To comply with the legislation, soda works passed the escaping hydrogen chloride gas up through a tower packed with charcoal , where it was absorbed by water flowing in the other direction. The chemical works usually dumped the resulting hydrochloric acid solution into nearby bodies of water, killing fish and other aquatic life. [ citation needed ] The Leblanc process also meant very unpleasant working conditions for the operators. It originally required careful operation and frequent operator interventions (some involving heavy manual labour) into processes giving off hot noxious chemicals. [ 16 ] Sometimes, workmen cleaning the reaction products out of the reverberatory furnace wore cloth mouth-and-nose gags to keep dust and aerosols out of the lungs. [ 17 ] [ 18 ] This improved somewhat later as processes were more heavily mechanised to improve economics and uniformity of product. [ citation needed ] By the 1880s, methods for converting the hydrochloric acid to chlorine gas for the manufacture of bleaching powder and for reclaiming the sulfur in the calcium sulfide waste had been discovered, but the Leblanc process remained more wasteful and more polluting than the Solvay process . The same is true when it is compared with the later electrolytical processes which eventually replaced it for chlorine production. [ citation needed ] There is a strong case for arguing that Leblanc process waste is the most endangered habitat in the UK, since the waste weathers down to calcium carbonate and produces a haven for plants that thrive in lime-rich soils, known as calcicoles . Only four such sites have survived the new millennium; three are protected as local nature reserves of which the largest, at Nob End near Bolton , is an SSSI and Local Nature Reserve - largely for its sparse orchid-calcicole flora, most unusual in an area with acid soils. This alkaline island contains within it an acid island, where acid boiler slag was deposited, which now shows up as a zone dominated by heather, Calluna vulgaris . [ 19 ]
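The by-product figures quoted above (5.5 tons of hydrogen chloride and 7 tons of calcium sulfide waste per 8 tons of soda ash) and the 5% venting limit of the Alkali Act can be combined in a small back-of-the-envelope calculation. The sketch below uses Britain's quoted 1852 output of 140,000 tons purely as an example figure; applying the 1863 venting limit to it is an illustration of the arithmetic, not a historical claim.

```python
# Back-of-the-envelope Leblanc by-product estimate, using the ratios quoted
# in the text: 8 t soda ash : 5.5 t HCl : 7 t CaS waste, and the Alkali Act
# limit of venting at most 5% of the HCl produced.

HCL_PER_SODA = 5.5 / 8.0    # tons of HCl per ton of soda ash
CAS_PER_SODA = 7.0 / 8.0    # tons of CaS waste per ton of soda ash
MAX_VENTED_FRACTION = 0.05  # Alkali Act 1863 limit on vented HCl

def byproducts(soda_ash_tons: float) -> dict:
    hcl = HCL_PER_SODA * soda_ash_tons
    cas = CAS_PER_SODA * soda_ash_tons
    return {
        "hcl_tons": hcl,
        "cas_waste_tons": cas,
        "max_vented_hcl_tons": MAX_VENTED_FRACTION * hcl,
    }

if __name__ == "__main__":
    # Example only: Britain's quoted 1852 output of 140,000 tons of soda ash.
    for key, value in byproducts(140_000).items():
        print(f"{key}: {value:,.0f}")
```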
https://en.wikipedia.org/wiki/Galligu
Galling is a form of wear caused by adhesion between sliding surfaces. When a material galls, some of it is pulled with the contacting surface, especially if there is a large amount of force compressing the surfaces together. [ 1 ] Galling is caused by a combination of friction and adhesion between the surfaces, followed by slipping and tearing of crystal structure beneath the surface. [ 2 ] This will generally leave some material stuck or even friction welded to the adjacent surface, whereas the galled material may appear gouged with balled-up or torn lumps of material stuck to its surface. Galling is most commonly found in metal surfaces that are in sliding contact with each other. It is especially common where there is inadequate lubrication between the surfaces. However, certain metals will generally be more prone to galling, due to the atomic structure of their crystals. For example, aluminium is a metal that will gall very easily, whereas annealed (softened) steel is slightly more resistant to galling. Steel that is fully hardened is very resistant to galling. Galling is a common problem in most applications where metals slide in contact with other metals. This can happen regardless of whether the metals are the same or different. Alloys such as brass and bronze are often chosen for bearings , bushings , and other sliding applications because of their resistance to galling, as well as other forms of mechanical abrasion . Galling is adhesive wear that is caused by the microscopic transfer of material between metallic surfaces during transverse motion (sliding). It occurs frequently whenever metal surfaces are in contact, sliding against each other, especially with poor lubrication. It often occurs in high-load, low-speed applications, although it also can occur in high-speed applications with very little load. Galling is a common problem in sheet metal forming , bearings and pistons in engines , hydraulic cylinders , air motors , and many other industrial operations. Galling is distinct from gouging or scratching in that it involves the visible transfer of material as it is adhesively pulled ( mechanically spalled ) from one surface, leaving it stuck to the other in the form of a raised lump (gall). Unlike other forms of wear, galling is usually not a gradual process but occurs quickly and spreads rapidly as the raised lumps induce more galling. It can often occur in screws and bolts, causing the threads to seize and tear free from the fastener or the hole. In extreme cases, the bolt may seize without stripping the threads, which can lead to breakage of the fastener, the tool, or both. Threaded inserts of hardened steel are often used in metals like aluminium or stainless steel that can gall easily. [ 3 ] Galling requires two properties common to most metals, cohesion through metallic-bonding attractions and plasticity (the ability to deform without breaking). The tendency of a material to gall is affected by the ductility of the material. Typically, hardened materials are more resistant to galling, whereas softer materials of the same type will gall more readily. The propensity of a material to gall is also affected by the specific arrangement of the atoms, because crystals arranged in a face-centered cubic (FCC) lattice will usually allow material-transfer to a greater degree than a body-centered cubic (BCC). 
This is because a face-centered cubic has a greater tendency to produce dislocations in the crystal lattice, which are defects that allow the lattice to shift, or "cross-slip," making the metal more prone to galling. However, if the metal has a high number of stacking faults (a difference in stacking sequence between atomic planes), it will be less apt to cross-slip at the dislocations. Therefore, a material's resistance to galling is primarily determined by its stacking-fault energy . A material with high stacking-fault energy, such as aluminium or titanium , will be far more susceptible to galling than materials with low stacking-fault energy, like copper , bronze , or gold . Conversely, materials with a hexagonal close packed (HCP) structure and a high c/a ratio, such as cobalt -based alloys , are extremely resistant to galling. [ 4 ] Galling occurs initially with material transfer from individual grains on a microscopic scale, which become stuck or even diffusion welded to the adjacent surface. This transfer can be enhanced if one or both metals form a thin layer of hard oxides with high coefficients of friction , such as those found on aluminum or stainless steel. As the lump grows, it pushes against the adjacent material, forcing them apart and concentrating most of the friction heat energy into a very small area. This, in turn, causes more adhesion and material build-up. The localized heat increases the plasticity of the galled surface, deforming the metal until the lump breaks through the surface and begins plowing up large amounts of material from the galled surface. Methods of preventing galling include the use of lubricants like grease and oil , low-friction coatings and thin-film deposits like molybdenum disulfide or titanium nitride , and increasing the surface hardness of the metals using processes such as case hardening and induction hardening . In engineering science and other technical aspects, the term galling is widespread. The influence of acceleration in the contact zone between materials has been mathematically described and correlated to the exhibited friction mechanism found in the tracks during empiric observations of the galling phenomenon. Due to problems with previous incompatible definitions and test methods, better means of measurements in coordination with a greater understanding of the involved frictional mechanisms have led to the attempt to standardize or redefine the term galling to enable a more generalized use. ASTM International has formulated and established a common definition for the technical aspect of the galling phenomenon in the ASTM G40 standard: "Galling is a form of surface damage arising between sliding solids, distinguished by microscopic, usually localized, roughening and creation of protrusions (e.g., lumps) above the original surface". [ 5 ] When two metallic surfaces are pressed against each other, the initial interaction and the mating points are the asperities , or high points, found on each surface. An asperity may penetrate the opposing surface if there is a converging contact and relative movement. The contact between the surfaces initiates friction or plastic deformation and induces pressure and energy in a small area called the contact zone. The elevation in pressure increases the energy density and heat level within the deformed area. This leads to greater adhesion between the surfaces, which initiates the material transfer, galling build-up, lump growth, and creation of protrusions above the original surface. 
If the lump (or protrusion of transferred material to one surface) grows to a height of several micrometers , it may penetrate the opposing surface oxide-layer and cause damage to the underlying material. Damage in the bulk material is a prerequisite for plastic flow found in the deformed volume surrounding the lump. The geometry and speed of the lump define how the flowing material will be transported, accelerated, and decelerated around the lump. This material flow is critical when defining the contact pressure, energy density, and developed temperature during sliding. The mathematical function describing acceleration and deceleration of flowing material is thereby defined by the geometrical constraints, deduced or given by the lump's surface contour. If the right conditions are met, such as geometric constraints of the lump, an accumulation of energy can cause a clear change in the material's contact and plastic behavior, increasing the friction force required for adhesion and further movement. In sliding friction, increased compressive stress is proportionally equal to a rise in potential energy and temperature within the contact zone. The energy accumulation during sliding can reduce energy loss from the contact zone due to a small surface area on the surface boundary, thus, low heat conductivity. Another reason is the energy continuously forced into the metals, which is a product of acceleration and pressure. In cooperation, these mechanisms allow constant energy accumulation, causing increased energy density and temperature in the contact zone during sliding. The process and contact can be compared to cold welding or friction welding because cold welding is not truly cold, and the fusing points exhibit an increase in temperature and energy density derived from applied pressure and plastic deformation in the contact zone. Galling is often found between metallic surfaces where direct contact and relative motion have occurred. Sheet metal forming, thread manufacturing, and other industrial operations may include moving parts, or contact surfaces made of stainless steel, aluminium, titanium, and other metals whose natural development of an external oxide layer through passivation increases their corrosion resistance but renders them particularly susceptible to galling. [ 6 ] In metalworking that involves cutting (primarily turning and milling), galling is often used to describe a wear phenomenon that occurs when cutting soft metal. The work material is transferred to the cutter and develops a "lump." The developed lump changes the contact behavior between the two surfaces, which usually increases adhesion, and resistance to further cutting, and, due to created vibrations, can be heard as a distinct sound. Galling often occurs with aluminium compounds and is a common cause of tool breakdown. Aluminium is a ductile metal, which means it possesses the ability for plastic flow with relative ease, presupposing a relatively consistent and significant plastic zone. High ductility and flowing material can be considered a general prerequisite for excessive material transfer and galling because frictional heating is closely linked to the structure of plastic zones around penetrating objects. Galling can occur even at relatively low loads and velocities because it is the real energy density in the system that induces a phase transition, which often leads to an increase in material transfer and higher friction. 
Generally, two major frictional systems affect adhesive wear or galling: solid surface contact and lubricated contact. In terms of prevention, they work in dissimilar ways and set different demands on the surface structure, alloys, and crystal matrix used in the materials. In solid surface contact or unlubricated conditions, the initial contact is characterized by the interaction between asperities, which exhibits two different sorts of attraction. The first is cohesive surface energy, by which the molecules of the two surfaces connect and adhere to one another, notably even if a measurable distance separates them. Direct contact and plastic deformation generate another type of attraction through the constitution of a plastic zone with flowing material where induced energy, pressure, and temperature allow bonding between the surfaces on a much larger scale than cohesive surface energy. In metallic compounds and sheet metal forming, the asperities are usually oxides, and the plastic deformation primarily consists of brittle fracture , which presupposes a very small plastic zone. The accumulation of energy and temperature is low due to the discontinuity in the fracture mechanism. However, during the initial asperity/asperity contact, wear debris or bits and pieces from the asperities adhere to the opposing surface, creating microscopic, usually localized, roughening and creation of protrusions (in effect lumps) above the original surface. The transferred wear debris and lumps penetrate the opposing oxide surface layer and cause damage to the underlying bulk material, plowing it forward. This allows continuous plastic deformation, plastic flow, and accumulation of energy and temperature. The prevention of adhesive material transfer in this regime relies on approaches such as those described above: lubricants, low-friction coatings, and increased surface hardness. Lubricated contact places other demands on the surface structure of the materials involved, and the main issue is to retain the protective lubrication thickness and avoid plastic deformation. This is important because plastic deformation raises the temperature of the oil or lubrication fluid and changes its viscosity. Any eventual material transfer or creation of protrusions above the original surface will also reduce the ability to retain a protective lubrication thickness. A proper protective lubrication thickness can be assisted or retained by an appropriate choice of lubricant and by operating conditions that avoid plastic deformation of the surfaces.
https://en.wikipedia.org/wiki/Galling
Gallium(II) selenide ( Ga Se ) is a chemical compound . It has a hexagonal layer structure, similar to that of GaS . [ 1 ] It is a photoconductor, [ 2 ] a second harmonic generation crystal in nonlinear optics , [ 3 ] and has been used as a far-infrared conversion material [ 4 ] at 14–31 THz and above. [ 5 ] It is said to have potential for optical applications [ 6 ] but the exploitation of this potential has been limited by the ability to readily grow single crystals [ 7 ] Gallium selenide crystals show great promise as a nonlinear optical material and as a photoconductor . Non-linear optical materials are used in the frequency conversion of laser light . Frequency conversion involves the shifting of the wavelength of a monochromatic source of light, usually laser light, to a higher or lower wavelength of light that cannot be produced from a conventional laser source. Several methods of frequency conversion using non-linear optical materials exist. Second harmonic generation leads to doubling of the frequency of infrared carbon dioxide lasers . In optical parametric generation, the wavelength of light is doubled. Near-infrared solid-state lasers are usually used in optical parametric generations. [ 8 ] One original problem with using gallium selenide in optics is that it is easily broken along cleavage lines and thus it can be hard to cut for practical application. It has been found, however, that doping the crystals with indium greatly enhances their structural strength and makes their application much more practical. [ 7 ] There remain, however, difficulties with crystal growth that must be overcome before gallium selenide crystals may become more widely used in optics. Single layers of gallium selenide are dynamically stable two-dimensional semiconductors, in which the valence band has an inverted Mexican-hat shape, leading to a Lifshitz transition as the hole-doping is increased. [ 9 ] The integration of gallium selenide into electronic devices has been hindered by its air sensitivity. Several approaches have been developed to encapsulate GaSe mono- and few-layers, leading to improved chemical stability and electronic mobility. [ 10 ] [ 11 ] [ 12 ] Synthesis of GaSe nanoparticles is carried out by the reaction of GaMe 3 with trioctylphosphine selenium (TOPSe) in a high temperature solution of trioctylphosphine (TOP) and trioctylphosphine oxide (TOPO). [ 13 ] A solution of 15 g TOPO and 5 mL TOP is heated to 150 °C overnight under nitrogen, removing any water that may be present in the original TOP solution. This initial TOP solution is vacuum distilled at 0.75 torr, taking the fraction from 204 °C to 235 °C. A TOPSe solution (12.5 mL TOP with 1.579 g TOPSe) is then added and the TOPO/TOP/TOPSe reaction mixture is heated to 278 °C. GaMe 3 (0.8 mL) dissolved in 7.5 mL distilled TOP is then injected. After injection, the temperature drops to 254 °C before stabilizing in the range of 266–268 °C after 10 minutes. GaSe nanoparticles then begin to form, and may be detected by a shoulder in the optical absorption spectrum in the 400–450 nm range. After this shoulder is observed, the reaction mixture is left to cool to room temperature to prevent further reaction. After synthesis and cooling, the reaction vessel is opened and extraction of the GaSe nanoparticle solution is accomplished by addition of methanol . The distribution of nanoparticles between the polar (methanol) and non-polar (TOP) phases depends on experimental conditions. 
If the mixture is very dry, nanoparticles partition into the methanol phase. If the nanoparticles are exposed to air or water, however, the particles become uncharged and become partitioned into the non-polar TOP phase. [ 13 ]
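As a rough check on the recipe above, the sketch below converts the stated precursor quantities (1.579 g of TOPSe and 0.8 mL of GaMe 3 ) into approximate mole amounts. Only the masses and volumes come from the procedure itself; the molar masses and the density assumed for trimethylgallium are reference-style values supplied here for illustration.

```python
# Approximate Ga:Se precursor mole ratio for the GaSe nanoparticle synthesis
# described above. Quantities (1.579 g TOPSe, 0.8 mL GaMe3) are from the text;
# molar masses and the GaMe3 density are assumed reference values.

M_TOPSE = 449.6        # g/mol, trioctylphosphine selenide (C24H51PSe), assumed
M_GAME3 = 114.83       # g/mol, trimethylgallium Ga(CH3)3, assumed
D_GAME3 = 1.15         # g/mL, approximate density of liquid GaMe3, assumed

mol_se = 1.579 / M_TOPSE                 # moles of Se delivered as TOPSe
mol_ga = 0.8 * D_GAME3 / M_GAME3         # moles of Ga delivered as GaMe3

print(f"Se: {mol_se * 1e3:.2f} mmol")
print(f"Ga: {mol_ga * 1e3:.2f} mmol")
print(f"Ga:Se mole ratio ~ {mol_ga / mol_se:.1f} : 1")
```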
https://en.wikipedia.org/wiki/Gallium(II)_selenide
Gallium(III) chloride is an inorganic chemical compound with the formula GaCl 3 which forms a monohydrate, GaCl 3 ·H 2 O. Solid gallium(III) chloride is a deliquescent white solid and exists as a dimer with the formula Ga 2 Cl 6 . [ 2 ] It is colourless and soluble in virtually all solvents, even alkanes, which is unusual for a metal halide. It is the main precursor to most derivatives of gallium and a reagent in organic synthesis . [ 3 ] As a Lewis acid , GaCl 3 is milder than aluminium chloride . It is also easier to reduce than aluminium chloride. The coordination chemistry of Ga(III) and Fe(III) are similar, so gallium(III) chloride has been used as a diamagnetic analogue of ferric chloride . Gallium(III) chloride can be prepared from the elements by heating gallium metal in a stream of chlorine at 200 °C and purifying the product by sublimation under vacuum. [ 4 ] [ 5 ] It can also be prepared by heating gallium oxide with thionyl chloride : [ 6 ] Ga 2 O 3 + 3 SOCl 2 → 2 GaCl 3 + 3 SO 2 . Gallium metal reacts slowly with hydrochloric acid, producing hydrogen gas. [ 7 ] Evaporation of this solution produces the monohydrate. [ 8 ] As a solid, it adopts a bitetrahedral structure with two bridging chlorides. Its structure resembles that of aluminium tribromide . In contrast, AlCl 3 and InCl 3 contain 6-coordinate metal centers. As a consequence of its molecular nature and associated low lattice energy , gallium(III) chloride has a lower melting point than the aluminium and indium trihalides. The formula of Ga 2 Cl 6 is often written as Ga 2 (μ-Cl) 2 Cl 4 . [ 1 ] In the gas phase, the dimeric (Ga 2 Cl 6 ) and trigonal planar monomeric (GaCl 3 ) forms are in a temperature-dependent equilibrium, with higher temperatures favoring the monomeric form. At 870 K, all gas-phase molecules are effectively in the monomeric form. [ 9 ] In the monohydrate, the gallium is tetrahedrally coordinated by three chlorides and one water molecule. [ 8 ] Gallium(III) chloride is a diamagnetic and deliquescent white solid that melts at 77.9 °C and boils at 201 °C without decomposition to the elements. This low melting point results from the fact that it forms discrete Ga 2 Cl 6 molecules in the solid state. Gallium(III) chloride dissolves in water with the release of heat to form a colorless solution, which, when evaporated, produces a colorless monohydrate, which melts at 44.4 °C. [ 8 ] [ 10 ] [ 11 ] Gallium is the lightest member of Group 13 to have a full d shell (gallium has the electronic configuration [ Ar ] 3 d 10 4 s 2 4 p 1 ) below the valence electrons that could take part in d -π bonding with ligands. The low oxidation state of Ga in Ga(III)Cl 3 , along with its low electronegativity and high polarisability , allows GaCl 3 to behave as a "soft acid" in terms of the HSAB theory . [ 12 ] The strengths of the bonds between gallium halides and ligands have been extensively studied. [ 13 ] With a chloride ion as ligand, the tetrahedral GaCl 4 − ion is produced; the 6-coordinate GaCl 6 3− cannot be made. Compounds like KGa 2 Cl 7 that have a chloride-bridged anion are known. [ 14 ] In a molten mixture of KCl and GaCl 3 , the following equilibrium exists: GaCl 4 − + GaCl 3 ⇌ Ga 2 Cl 7 − . When dissolved in water, gallium(III) chloride dissociates into the octahedral [Ga(H 2 O) 6 ] 3+ and Cl − ions, forming an acidic solution due to the hydrolysis of the hexaaquogallium(III) ion: [ 15 ] [Ga(H 2 O) 6 ] 3+ ⇌ [Ga(H 2 O) 5 (OH)] 2+ + H + . In basic solution, it hydrolyzes to gallium(III) hydroxide , which redissolves with the addition of more hydroxide , possibly to form Ga(OH) 4 − .
[ 15 ] Gallium(III) chloride is a Lewis acid catalyst , such as in the Friedel–Crafts reaction , which is able to substitute more common lewis acids such as ferric chloride . Gallium complexes strongly with π-donors, especially silylethynes , producing a strongly electrophilic complex. These complexes are used as an alkylating agent for aromatic hydrocarbons. [ 3 ] It is also used in carbogallation reactions of compounds with a carbon-carbon triple bond. It is also used as a catalyst in many organic reactions. [ 3 ] It is a precursor to organogallium reagents . For example, trimethylgallium , an organogallium compound used in MOCVD to produce various gallium-containing semiconductors , is produced by the reaction of gallium(III) chloride with various alkylating agents, such as dimethylzinc , trimethylaluminium , or methylmagnesium iodide . [ 16 ] [ 17 ] [ 18 ] Gallium(III) chloride is an intermediate in various gallium purification processes, where gallium(III) chloride is fractionally distilled or extracted from acid solutions. [ 7 ] 110 tons of gallium(III) chloride aqueous solution was used in the GALLEX and GNO experiments performed at Laboratori Nazionali del Gran Sasso in Italy to detect solar neutrinos . In these experiments, germanium -71 was produced by neutrino interactions with the isotope gallium-71 (which has a natural abundance of 40%), and the subsequent beta decays of germanium-71 were measured. [ 10 ]
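The statement that higher temperatures shift the gas-phase dimer/monomer balance toward GaCl 3 can be illustrated with a standard van 't Hoff calculation. The dissociation enthalpy and entropy used below are assumed, order-of-magnitude values chosen only to reproduce the qualitative trend (mostly dimer at low temperature, essentially all monomer near 870 K); they are not measured data for this system.

```python
# Illustrative van 't Hoff sketch of the gas-phase equilibrium
#     Ga2Cl6  <=>  2 GaCl3
# showing why higher temperatures favor the monomer. The dissociation
# enthalpy and entropy are assumed, illustrative values, not measured data.
import math

R = 8.314            # gas constant, J/(mol*K)
DH_DISS = 85_000.0   # J/mol, assumed dissociation enthalpy (endothermic)
DS_DISS = 140.0      # J/(mol*K), assumed dissociation entropy (1 molecule -> 2)

def equilibrium_constant(temp_k: float) -> float:
    """K for Ga2Cl6 <=> 2 GaCl3 from dG = dH - T*dS (1 bar standard pressure)."""
    d_g = DH_DISS - temp_k * DS_DISS
    return math.exp(-d_g / (R * temp_k))

def fraction_dissociated(temp_k: float, total_pressure_bar: float = 1.0) -> float:
    """Degree of dissociation of the dimer at the given total pressure.
    From K = 4*alpha^2*P / (1 - alpha^2), starting from pure dimer."""
    k = equilibrium_constant(temp_k)
    return math.sqrt(k / (k + 4.0 * total_pressure_bar))

if __name__ == "__main__":
    for temp in (400, 500, 600, 700, 800, 870):
        print(f"T = {temp:4d} K   fraction of Ga2Cl6 dissociated = "
              f"{fraction_dissociated(temp):.3f}")
```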
https://en.wikipedia.org/wiki/Gallium(III)_chloride
Gallium(III) selenide ( Ga 2 Se 3 ) is a chemical compound . It has a defect sphalerite (cubic form of ZnS) structure. [ 1 ] It is a p-type semiconductor . [ 2 ] It can be formed by direct combination of the elements. It hydrolyses slowly in water and quickly in mineral acids to form toxic hydrogen selenide gas. The reducing capabilities of the selenide ion make it vulnerable to oxidizing agents. It is therefore advised that it not come into contact with bases. [ citation needed ]
https://en.wikipedia.org/wiki/Gallium(III)_selenide
A germanium-68/gallium-68 generator is a device used to extract the positron-emitting isotope 68 Ga of gallium from a source of decaying germanium-68. The parent isotope 68 Ge has a half-life of 271 days and can be easily utilized for in-hospital production of generator produced 68 Ga. Its decay product gallium-68 (with a half-life of only 68 minutes, inconvenient for transport) is extracted and used for certain positron emission tomography nuclear medicine diagnostic procedures, where the radioisotope's relatively short half-life and emission of positrons for creation of 3-dimensional PET scans , are useful. The parent isotope germanium-68 is the longest-lived (271 days) of the radioisotopes of germanium. It has been produced by several methods. [ 1 ] In the U.S., it is primarily produced in proton accelerators: At Los Alamos National Laboratory , it may be separated out as a product of proton capture , after proton irradiation of Nb-encapsulated gallium metal. [ 2 ] At Brookhaven National Laboratories , 40 MeV proton irradiation of a gallium metal target produces germanium-68 by proton capture and double neutron knockout , from gallium-69 (the most common of two stable isotopes of gallium). This reaction is: 69 Ga(p,2n) 68 Ge. A Russian source produces germanium-68 from accelerator-produced helium ion (alpha) irradiation of zinc-66, again after knockout of two neutrons, in the nuclear reaction 66 Zn(α,2n) 68 Ge. When loaded with the parent isotope germanium-68, these generators function similarly to technetium-99m generators , in both cases using a process similar to ion chromatography . The stationary phase is either metal-free or alumina , TiO 2 or SnO 2 , onto which germanium-68 is adsorbed. The use of metal-free columns allows direct labeling of 68 Ga without prepurification, hence making production of gallium-68-radiolabeled compounds more convenient. The mobile phase is a solvent able to elute (wash out) gallium-68 (III) ( 68 Ga 3+ ) after it has been produced by electron capture decay from the immobilized (absorbed) germanium-68. Currently, such 68 Ga (III) is easily eluted with a few mL of 0.05 M, 0.1 M or 1.0 M hydrochloric acid from generators using metal-free tin dioxide [ 3 ] or titanium dioxide adsorbents, respectively, within 1 to 2 minutes. With generators of tin dioxide and titanium dioxide-based adsorbents, there once remained more than an hour of pharmaceutical preparation to attach the gallium-68 (III) as a tracer to the pharmaceutical molecules DOTATOC or DOTA-TATE , so that the total preparation time for the resulting radiopharmaceutical is typically longer than the 68 Ga isotope half-life. This fact required that these radiopharmaceuticals be made on-site in most cases, and the on-site generator is required to minimize the time losses. However, new kits such as "NETSPOT" for more rapidly preparing Ga-68 edotreotide or DOTATATE from Ga-68 (III) ions have increased the flexibility of sourcing of this radiopharmaceutical for Ga-68 endocrine receptor (octreotide) scans. With NETSPOT the preparation of the Ga-68 DOTATATE is immediate once the Ga-68 has been acquired from the generator and mixed with the reagent. [ 4 ] Gallium-67 citrate salt imaging is useful for imaging old or sterile abscesses . Gallium-68 is useful in direct tumor imaging, especially leukocyte -derived malignancies and prostate cancer metastases .
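Because the 68-minute daughter regrows from the long-lived 271-day parent, the useful Ga-68 activity recovers within a few hours of each elution. The sketch below applies the standard parent–daughter decay relation to the half-lives quoted above; the assumption that an elution strips essentially all of the Ga-68 from the column at t = 0 is an idealization made for illustration.

```python
# How usable Ga-68 activity regrows in a Ge-68/Ga-68 generator after an
# elution, using the half-lives quoted above (Ge-68: 271 days, Ga-68: 68 min).
# Minimal sketch of the standard parent-daughter (Bateman) relation, assuming
# the column holds no Ga-68 immediately after elution.
import math

T_HALF_GE68_MIN = 271 * 24 * 60     # parent half-life, in minutes
T_HALF_GA68_MIN = 68                # daughter half-life, in minutes

LAMBDA_P = math.log(2) / T_HALF_GE68_MIN
LAMBDA_D = math.log(2) / T_HALF_GA68_MIN

def ga68_activity_fraction(minutes_since_elution: float) -> float:
    """Ga-68 activity as a fraction of the current Ge-68 activity."""
    t = minutes_since_elution
    return (LAMBDA_D / (LAMBDA_D - LAMBDA_P)) * (
        math.exp(-LAMBDA_P * t) - math.exp(-LAMBDA_D * t)
    )

if __name__ == "__main__":
    for t in (30, 68, 120, 240, 480):   # minutes after the previous elution
        print(f"{t:4d} min after elution: {ga68_activity_fraction(t):.2f} "
              "of the parent activity")
```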
https://en.wikipedia.org/wiki/Gallium-68_generator
Gallium arsenide ( GaAs ) is a III-V direct band gap semiconductor with a zinc blende crystal structure. Gallium arsenide is used in the manufacture of devices such as microwave frequency integrated circuits , monolithic microwave integrated circuits , infrared light-emitting diodes , laser diodes , solar cells and optical windows. [ 6 ] GaAs is often used as a substrate material for the epitaxial growth of other III-V semiconductors, including indium gallium arsenide , aluminum gallium arsenide and others. Gallium arsenide was first synthesized and studied by Victor Goldschmidt in 1926 by passing arsenic vapors mixed with hydrogen over gallium(III) oxide at 600 °C. [ 7 ] [ 8 ] The semiconductor properties of GaAs and other III-V compounds were patented by Heinrich Welker at Siemens-Schuckert in 1951 [ 9 ] and described in a 1952 publication. [ 10 ] Commercial production of its monocrystals commenced in 1954, [ 11 ] and more studies followed in the 1950s. [ 12 ] The first infrared LEDs were made in 1962. [ 11 ] In the compound, gallium has a +3 oxidation state . Gallium arsenide single crystals can be prepared by three industrial processes. [ 6 ] Alternative methods also exist for producing films of GaAs. [ 6 ] [ 14 ] Oxidation of GaAs occurs in air, degrading performance of the semiconductor. The surface can be passivated by depositing a cubic gallium(II) sulfide layer using a tert-butyl gallium sulfide compound such as ( t BuGaS) 7 . [ 15 ] In the presence of excess arsenic, GaAs boules grow with crystallographic defects ; specifically, arsenic antisite defects (an arsenic atom at a gallium atom site within the crystal lattice). The electronic properties of these defects (interacting with others) cause the Fermi level to be pinned to near the center of the band gap, so that this GaAs crystal has very low concentration of electrons and holes. This low carrier concentration is similar to an intrinsic (perfectly undoped) crystal, but much easier to achieve in practice. These crystals are called "semi-insulating", reflecting their high resistivity of 10 7 –10 9 Ω·cm (which is quite high for a semiconductor, but still much lower than a true insulator like glass). [ 16 ] Wet etching of GaAs industrially uses an oxidizing agent such as hydrogen peroxide or bromine water, [ 17 ] and the same strategy has been described in a patent relating to processing scrap components containing GaAs where the Ga 3+ is complexed with a hydroxamic acid ("HA"). [ 18 ] This reaction produces arsenic acid . [ 19 ] GaAs can be used for various transistor types. [ 20 ] The HBT can be used in integrated injection logic (I 2 L). The earliest GaAs logic gate used Buffered FET Logic (BFL). [ 20 ] From c. 1975 to 1995 several main logic families were in use. [ 20 ] Some electronic properties of gallium arsenide are superior to those of silicon . It has a higher saturated electron velocity and higher electron mobility , allowing gallium arsenide transistors to function at frequencies in excess of 250 GHz. [ 22 ] GaAs devices are relatively insensitive to overheating, owing to their wider energy band gap, and they also tend to create less noise (disturbance in an electrical signal) in electronic circuits than silicon devices, especially at high frequencies. This is a result of higher carrier mobilities and lower resistive device parasitics. These superior properties are compelling reasons to use GaAs circuitry in mobile phones , satellite communications, microwave point-to-point links and higher frequency radar systems.
It is also used in the manufacture of Gunn diodes for the generation of microwaves . [ citation needed ] Another advantage of GaAs is that it has a direct band gap , which means that it can be used to absorb and emit light efficiently. Silicon has an indirect band gap and so is relatively poor at emitting light. [ citation needed ] As a wide direct band gap material with resulting resistance to radiation damage, GaAs is an excellent material for outer space electronics and optical windows in high power applications. [ 22 ] Because of its wide band gap, pure GaAs is highly resistive. Combined with a high dielectric constant , this property makes GaAs a very good substrate for integrated circuits and unlike Si provides natural isolation between devices and circuits. This has made it an ideal material for monolithic microwave integrated circuits (MMICs), where active and essential passive components can readily be produced on a single slice of GaAs. One of the first GaAs microprocessors was developed in the early 1980s by the RCA Corporation and was considered for the Star Wars program of the United States Department of Defense . These processors were several times faster and several orders of magnitude more radiation resistant than their silicon counterparts, but were more expensive. [ 23 ] Other GaAs processors were implemented by the supercomputer vendors Cray Computer Corporation, Convex , and Alliant in an attempt to stay ahead of the ever-improving CMOS microprocessor. Cray eventually built one GaAs-based machine in the early 1990s, the Cray-3 , but the effort was not adequately capitalized, and the company filed for bankruptcy in 1995. Complex layered structures of gallium arsenide in combination with aluminium arsenide (AlAs) or the alloy Al x Ga 1−x As can be grown using molecular-beam epitaxy (MBE) or using metalorganic vapor-phase epitaxy (MOVPE). Because GaAs and AlAs have almost the same lattice constant , the layers have very little induced strain , which allows them to be grown almost arbitrarily thick. This allows extremely high performance and high electron mobility HEMT transistors and other quantum well devices. GaAs is used for monolithic radar power amplifiers (but GaN can be less susceptible to heat damage). [ 24 ] Silicon has three major advantages over GaAs for integrated circuit manufacture. First, silicon is abundant and cheap to process in the form of silicate minerals. The economies of scale available to the silicon industry has also hindered the adoption of GaAs. [ citation needed ] In addition, a Si crystal has a very stable structure and can be grown to very large diameter boules and processed with very good yields. It is also a fairly good thermal conductor, thus enabling very dense packing of transistors that need to get rid of their heat of operation, all very desirable for design and manufacturing of very large ICs . Such good mechanical characteristics also make it a suitable material for the rapidly developing field of nanoelectronics . Naturally, a GaAs surface cannot withstand the high temperatures needed for diffusion; however a viable and actively pursued alternative as of the 1980s was ion implantation. [ 25 ] The second major advantage of Si is the existence of a native oxide ( silicon dioxide , SiO 2 ), which is used as an insulator . Silicon dioxide can be incorporated onto silicon circuits easily, and such layers are adherent to the underlying silicon. 
SiO 2 is not only a good insulator (with a band gap of 8.9 eV ), but the Si-SiO 2 interface can be easily engineered to have excellent electrical properties, most importantly low density of interface states. GaAs does not have a native oxide, does not easily support a stable adherent insulating layer, and does not possess the dielectric strength or surface passivating qualities of the Si-SiO 2 . [ 25 ] Aluminum oxide (Al 2 O 3 ) has been extensively studied as a possible gate oxide for GaAs (as well as InGaAs ). The third advantage of silicon is that it possesses a higher hole mobility compared to GaAs (500 versus 400 cm 2 V −1 s −1 ). [ 26 ] This high mobility allows the fabrication of higher-speed P-channel field-effect transistors , which are required for CMOS logic. Because they lack a fast CMOS structure, GaAs circuits must use logic styles which have much higher power consumption; this has made GaAs logic circuits unable to compete with silicon logic circuits. For manufacturing solar cells, silicon has relatively low absorptivity for sunlight, meaning about 100 micrometers of Si is needed to absorb most sunlight. Such a layer is relatively robust and easy to handle. In contrast, the absorptivity of GaAs is so high that only a few micrometers of thickness are needed to absorb all of the light. Consequently, GaAs thin films must be supported on a substrate material. [ 27 ] Silicon is a pure element, avoiding the problems of stoichiometric imbalance and thermal unmixing of GaAs. [ 28 ] Silicon has a nearly perfect lattice; impurity density is very low and allows very small structures to be built (down to 5 nm in commercial production as of 2020 [ 29 ] ). In contrast, GaAs has a very high impurity density, [ 30 ] which makes it difficult to build integrated circuits with small structures, so the 500 nm process is a common process for GaAs. [ citation needed ] Silicon has about three times the thermal conductivity of GaAs, with less risk of local overheating in high power devices. [ 24 ] Gallium arsenide (GaAs) transistors are used in the RF power amplifiers for cell phones and wireless communicating. [ 31 ] GaAs wafers are used in laser diodes , photodetectors , and radio frequency (RF) amplifiers for mobile phones and base stations. [ 32 ] GaAs transistors are also integral to monolithic microwave integrated circuits (MMICs) , utilized in satellite communication and radar systems, as well as in low-noise amplifiers (LNAs) that enhance weak signals. [ 33 ] [ 34 ] Gallium arsenide is an important semiconductor material for high-cost, high-efficiency solar cells and is used for single-crystalline thin-film solar cells and for multi-junction solar cells . [ 35 ] The first known operational use of GaAs solar cells in space was for the Venera 3 mission, launched in 1965. The GaAs solar cells, manufactured by Kvant, were chosen because of their higher performance in high temperature environments. [ 36 ] GaAs cells were then used for the Lunokhod rovers for the same reason. [ citation needed ] In 1970, the GaAs heterostructure solar cells were developed by the team led by Zhores Alferov in the USSR , [ 37 ] [ 38 ] [ 39 ] achieving much higher efficiencies. In the early 1980s, the efficiency of the best GaAs solar cells surpassed that of conventional, crystalline silicon -based solar cells. In the 1990s, GaAs solar cells took over from silicon as the cell type most commonly used for photovoltaic arrays for satellite applications. 
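The absorber-thickness contrast described above follows from the Beer–Lambert law, I(x) = I₀·exp(−αx). The short Python sketch below makes the arithmetic concrete; the absorption coefficients are illustrative order-of-magnitude values near 800 nm assumed for the example (they are not taken from this article), and because silicon absorbs far more weakly in the near-infrared, the spectrum-averaged thickness needed in practice is closer to the ~100 micrometres quoted above.

```python
import math

def depth_for_absorption(alpha_per_cm, absorbed_fraction=0.95):
    """Thickness x (micrometres) such that 1 - exp(-alpha * x) = absorbed_fraction
    (Beer-Lambert law, neglecting reflection and back-surface effects)."""
    x_cm = -math.log(1.0 - absorbed_fraction) / alpha_per_cm
    return x_cm * 1e4  # cm -> um

# Illustrative, order-of-magnitude absorption coefficients near 800 nm
# (assumed values for this sketch, not figures from the article):
alpha_si = 1e3    # crystalline Si, indirect gap  [1/cm]
alpha_gaas = 1e4  # GaAs, direct gap              [1/cm]

for name, alpha in [("Si", alpha_si), ("GaAs", alpha_gaas)]:
    print(f"{name}: ~{depth_for_absorption(alpha):.0f} um to absorb 95% of the light")
```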
Later, dual- and triple-junction solar cells based on GaAs with germanium and indium gallium phosphide layers were developed as the basis of a triple-junction solar cell, which held a record efficiency of over 32% and can operate also with light as concentrated as 2,000 suns. This kind of solar cell powered the Mars Exploration Rovers Spirit and Opportunity , which explored Mars ' surface. Also many solar cars utilize GaAs in solar arrays, as did the Hubble Telescope. [ 40 ] GaAs-based devices hold the world record for the highest-efficiency single-junction solar cell at 29.1% (as of 2019). This high efficiency is attributed to the extreme high quality GaAs epitaxial growth, surface passivation by the AlGaAs, [ 41 ] and the promotion of photon recycling by the thin film design. [ 42 ] GaAs-based photovoltaics are also responsible for the highest efficiency (as of 2022) of conversion of light to electricity, as researchers from the Fraunhofer Institute for Solar Energy Systems achieved a 68.9% efficiency when exposing a GaAs thin film photovoltaic cell to monochromatic laser light with a wavelength of 858 nanometers. [ 43 ] Today, multi-junction GaAs cells have the highest efficiencies of existing photovoltaic cells and trajectories show that this is likely to continue to be the case for the foreseeable future. [ 44 ] In 2022, Rocket Lab unveiled a solar cell with 33.3% efficiency [ 45 ] based on inverted metamorphic multi-junction (IMM) technology. In IMM, the lattice-matched (same lattice parameters) materials are grown first, followed by mismatched materials. The top cell, GaInP, is grown first and lattice matched to the GaAs substrate, followed by a layer of either GaAs or GaInAs with a minimal mismatch, and the last layer has the greatest lattice mismatch. [ 46 ] After growth, the cell is mounted to a secondary handle and the GaAs substrate is removed. A main advantage of the IMM process is that the inverted growth according to lattice mismatch allows a path to higher cell efficiency. Complex designs of Al x Ga 1−x As-GaAs devices using quantum wells can be sensitive to infrared radiation ( QWIP ). GaAs diodes can be used for the detection of X-rays. [ 47 ] Despite GaAs-based photovoltaics being the clear champions of efficiency for solar cells, they have relatively limited use in today's market. In both world electricity generation and world electricity generating capacity, solar electricity is growing faster than any other source of fuel (wind, hydro, biomass, and so on) for the last decade. [ 48 ] However, GaAs solar cells have not currently been adopted for widespread solar electricity generation. This is largely due to the cost of GaAs solar cells - in space applications, high performance is required and the corresponding high cost of the existing GaAs technologies is accepted. For example, GaAs-based photovoltaics show the best resistance to gamma radiation and high temperature fluctuations, which are of great importance for spacecraft. [ 49 ] But in comparison to other solar cells, III-V solar cells are two to three orders of magnitude more expensive than other technologies such as silicon-based solar cells. [ 50 ] The primary sources of this cost are the epitaxial growth costs and the substrate the cell is deposited on. GaAs solar cells are most commonly fabricated utilizing epitaxial growth techniques such as metal-organic chemical vapor deposition (MOCVD) and hydride vapor phase epitaxy (HVPE). 
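One way to see why the 68.9% monochromatic result mentioned above is plausible: at 858 nm the photon energy sits only slightly above the GaAs band gap, so very little energy per absorbed photon is lost to carrier thermalization. The arithmetic below is a sketch; the band-gap value used is the commonly cited room-temperature figure (about 1.42 eV), an assumption rather than a number from this article.

```python
# Photon energy at 858 nm compared with the GaAs band gap.
h_c_ev_nm = 1239.84      # h*c expressed in eV*nm
wavelength_nm = 858.0    # laser wavelength quoted above
e_gap_gaas_ev = 1.42     # assumed room-temperature GaAs band gap

e_photon = h_c_ev_nm / wavelength_nm
print(f"photon energy : {e_photon:.3f} eV")
print(f"GaAs band gap : {e_gap_gaas_ev:.2f} eV")
print(f"excess energy : {e_photon - e_gap_gaas_ev:.3f} eV per photon")
```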
A significant reduction in costs for these methods would require improvements in tool costs, throughput, material costs, and manufacturing efficiency. [ 50 ] Increasing the deposition rate could reduce costs, but this cost reduction would be limited by the fixed times in other parts of the process such as cooling and heating. [ 50 ] The substrate used to grow these solar cells is usually germanium or gallium arsenide which are notably expensive materials. One of the main pathways to reduce substrate costs is to reuse the substrate. An early method proposed to accomplish this is epitaxial lift-off (ELO), [ 51 ] but this method is time-consuming, somewhat dangerous (with its use of hydrofluoric acid ), and requires multiple post-processing steps. However, other methods have been proposed that use phosphide-based materials and hydrochloric acid to achieve ELO with surface passivation and minimal post- etching residues and allows for direct reuse of the GaAs substrate. [ 52 ] There is also preliminary evidence that spalling could be used to remove the substrate for reuse. [ 53 ] An alternative path to reduce substrate cost is to use cheaper materials, although materials for this application are not currently commercially available or developed. [ 50 ] Yet another consideration to lower GaAs solar cell costs could be concentrator photovoltaics . Concentrators use lenses or parabolic mirrors to focus light onto a solar cell, and thus a smaller (and therefore less expensive) GaAs solar cell is needed to achieve the same results. [ 54 ] Concentrator systems have the highest efficiency of existing photovoltaics. [ 55 ] So, technologies such as concentrator photovoltaics and methods in development to lower epitaxial growth and substrate costs could lead to a reduction in the cost of GaAs solar cells and forge a path for use in terrestrial applications. GaAs has been used to produce near-infrared laser diodes since 1962. [ 56 ] It is often used in alloys with other semiconductor compounds for these applications. N -type GaAs doped with silicon donor atoms (on Ga sites) and boron acceptor atoms (on As sites) responds to ionizing radiation by emitting scintillation photons. At cryogenic temperatures it is among the brightest scintillators known [ 57 ] [ 58 ] [ 59 ] and is a promising candidate for detecting rare electronic excitations from interacting dark matter, [ 60 ] due to the following six essential factors: For this purpose an optical fiber tip of an optical fiber temperature sensor is equipped with a gallium arsenide crystal. Starting at a light wavelength of 850 nm GaAs becomes optically translucent. Since the spectral position of the band gap is temperature dependent, it shifts about 0.4 nm/K. The measurement device contains a light source and a device for the spectral detection of the band gap. With the changing of the band gap, (0.4 nm/K) an algorithm calculates the temperature (all 250 ms). [ 69 ] GaAs may have applications in spintronics as it can be used instead of platinum in spin-charge converters and may be more tunable. [ 70 ] The environment, health and safety aspects of gallium arsenide sources (such as trimethylgallium and arsine ) and industrial hygiene monitoring studies of metalorganic precursors have been reported. [ 71 ] California lists gallium arsenide as a carcinogen , [ 72 ] as do IARC and ECA , [ 73 ] and it is considered a known carcinogen in animals. 
[ 74 ] [ 75 ] On the other hand, a 2013 review (funded by industry) argued against these classifications, saying that when rats or mice inhale fine GaAs powders (as in previous studies), they get cancer from the resulting lung irritation and inflammation, rather than from a primary carcinogenic effect of the GaAs itself—and that, moreover, fine GaAs powders are unlikely to be created in the production or use of GaAs. [ 73 ]
https://en.wikipedia.org/wiki/Gallium_arsenide
Gallium manganese arsenide , chemical formula (Ga,Mn)As, is a magnetic semiconductor . It is based on the world's second most commonly used semiconductor , gallium arsenide (chemical formula GaAs ), and is readily compatible with existing semiconductor technologies. Unlike other dilute magnetic semiconductors , such as the majority of those based on II-VI semiconductors , it is not paramagnetic [ 1 ] but ferromagnetic , and hence exhibits hysteretic magnetization behavior. This memory effect is of importance for the creation of persistent devices. In (Ga,Mn)As , the manganese atoms provide a magnetic moment, and each also acts as an acceptor , making it a p -type material. The presence of carriers allows the material to be used for spin-polarized currents. In contrast, many other ferromagnetic magnetic semiconductors are strongly insulating [ 2 ] [ 3 ] and so do not possess free carriers . (Ga,Mn)As is therefore a candidate material for spintronic devices, but it is likely to remain only a testbed for basic research, as its Curie temperature has only been raised to approximately 200 K. Like other magnetic semiconductors, [ 4 ] (Ga,Mn)As is formed by doping a standard semiconductor with magnetic elements. This is done using the growth technique molecular beam epitaxy , whereby crystal structures can be grown with atomic-layer precision. In (Ga,Mn)As , manganese atoms substitute onto gallium sites in the GaAs crystal and provide a magnetic moment. Because manganese has a low solubility in GaAs , incorporating a concentration high enough for ferromagnetism to be achieved proves challenging. In standard molecular beam epitaxy growth, to ensure good structural quality, the temperature to which the substrate is heated, known as the growth temperature, is normally high, typically ~600 °C. However, if a large flux of manganese is used under these conditions, the manganese is not incorporated; instead it segregates, accumulating on the surface and forming complexes with elemental arsenic atoms. [ 5 ] This problem was overcome using the technique of low-temperature molecular beam epitaxy. It was found, first in (In,Mn)As [ 6 ] and later in (Ga,Mn)As , [ 7 ] that by utilising non-equilibrium crystal growth techniques larger dopant concentrations could be successfully incorporated. At lower temperatures, around 250 °C, there is insufficient thermal energy for surface segregation to occur, but still sufficient for a good quality single-crystal alloy to form. [ 8 ] In addition to the substitutional incorporation of manganese, low-temperature molecular beam epitaxy also causes the inclusion of other impurities. The two other common impurities are interstitial manganese [ 9 ] and arsenic antisites. [ 10 ] The former is a manganese atom sitting between the other atoms in the zinc-blende lattice structure, and the latter is an arsenic atom occupying a gallium site. Both impurities act as double donors, removing the holes provided by the substitutional manganese, and as such they are known as compensating defects. Interstitial manganese also bonds antiferromagnetically to substitutional manganese, cancelling the magnetic moment. Both of these defects are detrimental to the ferromagnetic properties of (Ga,Mn)As , and so are undesired. [ 11 ] The temperature below which the transition from paramagnetism to ferromagnetism occurs is known as the Curie temperature , T C .
Theoretical predictions based on the Zener model suggest that the Curie temperature scales with the quantity of manganese, so a T C above 300 K should be possible if manganese doping levels as high as 10% can be achieved. [ 12 ] After its discovery by Ohno et al. , [ 7 ] the highest reported Curie temperatures in (Ga,Mn)As rose from 60 K to 110 K. [ 8 ] However, despite the predictions of room-temperature ferromagnetism , no improvements in T C were made for several years. As a result of this lack of progress, predictions started to be made that 110 K was a fundamental limit for (Ga,Mn)As . The self-compensating nature of the defects would limit the possible hole concentrations, preventing further gains in T C . [ 13 ] The major breakthrough came from improvements in post-growth annealing. By using annealing temperatures comparable to the growth temperature it was possible to pass the 110 K barrier. [ 14 ] [ 15 ] [ 16 ] These improvements have been attributed to the removal of the highly mobile interstitial manganese. [ 17 ] Currently, the highest reported values of T C in (Ga,Mn)As are around 173 K, [ 18 ] [ 19 ] still well below the much-sought room temperature. As a result, measurements on this material must be done at cryogenic temperatures, currently precluding any application outside of the laboratory. Naturally, considerable effort is being spent in the search for alternative magnetic semiconductors that do not share this limitation. [ 20 ] [ 21 ] [ 22 ] [ 23 ] [ 24 ] In addition, as molecular beam epitaxy techniques and equipment are refined and improved, it is hoped that greater control over growth conditions will allow further incremental advances in the Curie temperature of (Ga,Mn)As . Although room-temperature ferromagnetism has not yet been achieved, magnetic semiconductor materials such as (Ga,Mn)As have shown considerable success. Thanks to the rich interplay of physics inherent to magnetic semiconductors, a variety of novel phenomena and device structures have been demonstrated. It is therefore instructive to make a critical review of these main developments. A key result in magnetic semiconductor technology is gateable ferromagnetism , where an electric field is used to control the ferromagnetic properties. This was achieved by Ohno et al. [ 25 ] using an insulating-gate field-effect transistor with (In,Mn)As as the magnetic channel. The magnetic properties were inferred from magnetization-dependent Hall measurements of the channel. Using the gate action to either deplete or accumulate holes in the channel, it was possible to change the characteristic of the Hall response to be either that of a paramagnet or of a ferromagnet . When the temperature of the sample was close to its T C , it was possible to turn the ferromagnetism on or off by applying a gate voltage, which could change the T C by ±1 K. A similar (In,Mn)As transistor device was used to provide further examples of gateable ferromagnetism . [ 26 ] In this experiment the electric field was used to modify the coercive field at which magnetization reversal occurs. As a result of the dependence of the magnetic hysteresis on the gate bias, the electric field could be used to assist magnetization reversal or even to demagnetize the ferromagnetic material. The combining of magnetic and electronic functionality demonstrated by this experiment is one of the goals of spintronics and may be expected to have a great technological impact.
Another important spintronic functionality that has been demonstrated in magnetic semiconductors is that of spin injection . This is where the high spin polarization inherent to these magnetic materials is used to transfer spin-polarized carriers into a non-magnetic material. [ 27 ] In this example, a fully epitaxial heterostructure was used where spin-polarized holes were injected from a (Ga,Mn)As layer into an (In,Ga)As quantum well , where they combine with unpolarized electrons from an n -type substrate. A polarization of 8% was measured in the resulting electroluminescence . This is again of potential technological interest as it shows the possibility that the spin states in non-magnetic semiconductors can be manipulated without the application of a magnetic field. (Ga,Mn)As offers an excellent material to study domain wall mechanics because the domains can have a size of the order of 100 μm. [ 28 ] Several studies have been done in which lithographically defined lateral constrictions [ 29 ] or other pinning points [ 30 ] are used to manipulate domain walls . These experiments are crucial to understanding domain wall nucleation and propagation, which would be necessary for the creation of complex logic circuits based on domain wall mechanics. [ 31 ] Many properties of domain walls are still not fully understood, and one particularly outstanding issue is the magnitude and size of the resistance associated with current passing through domain walls . Both positive [ 32 ] and negative [ 33 ] values of domain wall resistance have been reported, leaving this an open area for future research. An example of a simple device that utilizes pinned domain walls is provided in reference [ 34 ] . This experiment consisted of a lithographically defined narrow island connected to the leads via a pair of nanoconstrictions. While the device operated in a diffusive regime, the constrictions would pin domain walls , resulting in a giant magnetoresistance signal. When the device operates in a tunnelling regime, another magnetoresistance effect is observed, discussed below. A further property of domain walls is that of current-induced domain wall motion. This reversal is believed to occur as a result of the spin-transfer torque exerted by a spin-polarized current. [ 35 ] It was demonstrated in reference [ 36 ] using a lateral (Ga,Mn)As device containing three regions which had been patterned to have different coercive fields, allowing the easy formation of a domain wall . The central region was designed to have the lowest coercivity so that the application of current pulses could cause the orientation of the magnetization to be switched. This experiment showed that the current required to achieve this reversal in (Ga,Mn)As was two orders of magnitude lower than that of metal systems. It has also been demonstrated that current-induced magnetization reversal can occur across a (Ga,Mn)As/GaAs/(Ga,Mn)As vertical tunnel junction. [ 37 ] Another novel spintronic effect, which was first observed in (Ga,Mn)As based tunnel devices, is tunnelling anisotropic magnetoresistance. This effect arises from the intricate dependence of the tunnelling density of states on the magnetization, and can result in magnetoresistance of several orders of magnitude. This was demonstrated first in vertical tunnelling structures [ 34 ] [ 38 ] and then later in lateral devices. [ 39 ] This has established tunnelling anisotropic magnetoresistance as a generic property of ferromagnetic tunnel structures.
Similarly, the dependence of the single electron charging energy on the magnetization has resulted in the observation of another dramatic magnetoresistance effect in a (Ga,Mn)As device, the so-called Coulomb blockade anisotropic magnetoresistance.
https://en.wikipedia.org/wiki/Gallium_manganese_arsenide
Gallium monofluoride is an inorganic compound with the formula GaF. The compound has only been observed in the gas phase. [ 1 ] It can be generated by the oxidation of gallium with either aluminum fluoride or calcium fluoride . [ 2 ] In 2011, a group of Brazilian and German researchers used the molecular absorption of fluorogallium created in a graphite furnace to show that amounts of fluorine as small as 5.2 picograms can be detected. [ 3 ] Its ionization energy is 10.64 eV. [ 4 ]
https://en.wikipedia.org/wiki/Gallium_monofluoride
Gallium nitride ( Ga N ) is a binary III / V direct bandgap semiconductor commonly used in blue light-emitting diodes since the 1990s. The compound is a very hard material that has a Wurtzite crystal structure . Its wide band gap of 3.4 eV affords it special properties for applications in optoelectronics , [ 9 ] [ 10 ] [ 11 ] high-power and high-frequency devices. For example, GaN is the substrate that makes violet (405 nm) laser diodes possible, without requiring nonlinear optical frequency doubling . Its sensitivity to ionizing radiation is low (like other group III nitrides ), making it a suitable material for solar cell arrays for satellites . Military and space applications could also benefit as devices have shown stability in high radiation environments . [ 12 ] Because GaN transistors can operate at much higher temperatures and work at much higher voltages than gallium arsenide (GaAs) transistors, they make ideal power amplifiers at microwave frequencies. In addition, GaN offers promising characteristics for THz devices. [ 13 ] Due to high power density and voltage breakdown limits GaN is also emerging as a promising candidate for 5G cellular base station applications. Since the early 2020s, GaN power transistors have come into increasing use in power supplies in electronic equipment, converting AC mains electricity to low-voltage DC . GaN is a very hard ( Knoop hardness 14.21 GPa [ 14 ] : 4 ), mechanically stable wide-bandgap semiconductor material with high heat capacity and thermal conductivity. [ 15 ] In its pure form it resists cracking and can be deposited in thin film on sapphire or silicon carbide , despite the mismatch in their lattice constants . [ 15 ] GaN can be doped with silicon (Si) or with oxygen [ 16 ] to n-type and with magnesium (Mg) to p-type . [ 17 ] [ 18 ] However, the Si and Mg atoms change the way the GaN crystals grow, introducing tensile stresses and making them brittle. [ 19 ] Gallium nitride compounds also tend to have a high dislocation density, on the order of 10 8 to 10 10 defects per square centimeter. [ 20 ] The U.S. Army Research Laboratory (ARL) provided the first measurement of the high field electron velocity in GaN in 1999. [ 21 ] Scientists at ARL experimentally obtained a peak steady-state velocity of 1.9 × 10 7 cm/s , with a transit time of 2.5 picoseconds, attained at an electric field of 225 kV/cm. With this information, the electron mobility was calculated, thus providing data for the design of GaN devices. One of the earliest syntheses of gallium nitride was at the George Herbert Jones Laboratory in 1932. [ 22 ] An early synthesis of gallium nitride was by Robert Juza and Harry Hahn in 1938. [ 23 ] GaN with a high crystalline quality can be obtained by depositing a buffer layer at low temperatures. [ 24 ] Such high-quality GaN led to the discovery of p-type GaN, [ 17 ] p–n junction blue/UV- LEDs [ 17 ] and room-temperature stimulated emission [ 25 ] (essential for laser action). [ 26 ] This has led to the commercialization of high-performance blue LEDs and long-lifetime violet laser diodes, and to the development of nitride-based devices such as UV detectors and high-speed field-effect transistors . [ citation needed ] High-brightness GaN light-emitting diodes (LEDs) completed the range of primary colors, and made possible applications such as daylight-visible full-color LED displays, white LEDs and blue laser devices. 
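A quick way to connect the quoted 3.4 eV band gap to the emission wavelengths mentioned above is the relation λ = hc/E_gap. The sketch below shows that pure GaN corresponds to near-ultraviolet emission, which is why alloying (for example with indium, as InGaN) is used to reach violet and blue wavelengths such as 405 nm.

```python
# Relation between band gap and emission wavelength for a direct-gap
# semiconductor: lambda = h*c / E_gap.
H_C_EV_NM = 1239.84  # h*c expressed in eV*nm

def emission_wavelength_nm(band_gap_ev):
    return H_C_EV_NM / band_gap_ev

print(f"GaN (3.4 eV)  : ~{emission_wavelength_nm(3.4):.0f} nm")   # near-UV
print(f"405 nm photon : ~{H_C_EV_NM / 405:.2f} eV")               # below the GaN gap, reached via InGaN
```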
The first GaN-based high-brightness LEDs used a thin film of GaN deposited via metalorganic vapour-phase epitaxy (MOVPE) on sapphire . Other substrates used are zinc oxide , with lattice constant mismatch of only 2% and silicon carbide (SiC). [ 27 ] Group III nitride semiconductors are, in general, recognized as one of the most promising semiconductor families for fabricating optical devices in the visible short-wavelength and UV region. [ citation needed ] The very high breakdown voltages , [ 28 ] high electron mobility , and high saturation velocity of GaN has made it an ideal candidate for high-power and high-temperature microwave applications, as evidenced by its high Johnson's figure of merit . Potential markets for high-power/high-frequency devices based on GaN include microwave radio-frequency power amplifiers (e.g., those used in high-speed wireless data transmission) and high-voltage switching devices for power grids. A potential mass-market application for GaN-based RF transistors is as the microwave source for microwave ovens , replacing the magnetrons currently used. The large band gap means that the performance of GaN transistors is maintained up to higher temperatures (~400 °C [ 29 ] ) than silicon transistors (~150 °C [ 29 ] ) because it lessens the effects of thermal generation of charge carriers that are inherent to any semiconductor. The first gallium nitride metal semiconductor field-effect transistors (GaN MESFET ) were experimentally demonstrated in 1993 [ 30 ] and they are being actively developed. In 2010, the first enhancement-mode GaN transistors became generally available. [ 31 ] Only n-channel transistors were available. [ 31 ] These devices were designed to replace power MOSFETs in applications where switching speed or power conversion efficiency is critical. These transistors are built by growing a thin layer of GaN on top of a standard silicon wafer, often referred to as GaN-on-Si by manufacturers. [ 32 ] This allows the FETs to maintain costs similar to silicon power MOSFETs but with the superior electrical performance of GaN, and consists of growing GaN on silicon wafers using MOCVD Epitaxy. [ 33 ] Another seemingly viable solution for realizing enhancement-mode GaN-channel HFETs is to employ a lattice-matched quaternary AlInGaN layer of acceptably low spontaneous polarization mismatch to GaN. [ 34 ] GaN power ICs monolithically integrate a GaN FET, GaN-based drive circuitry and circuit protection into a single surface-mount device. [ 35 ] [ 36 ] Integration means that the gate-drive loop has essentially zero impedance, which further improves efficiency by virtually eliminating FET turn-off losses. Academic studies into creating low-voltage GaN power ICs began at the Hong Kong University of Science and Technology (HKUST) and the first devices were demonstrated in 2015. Commercial GaN power IC production began in 2018. In 2016 the first GaN CMOS logic using PMOS and NMOS transistors was reported with gate lengths of 0.5 μm (gate widths of the PMOS and NMOS transistors were 500 μm and 50 μm, respectively). [ 37 ] GaN-based violet laser diodes are used to read Blu-ray Discs . The mixture of GaN with In ( InGaN ) or Al ( AlGaN ) with a band gap dependent on the ratio of In or Al to GaN allows the manufacture of light-emitting diodes ( LEDs ) with colors that can go from red to ultra-violet. [ 27 ] GaN transistors are suitable for high frequency, high voltage, high temperature and high-efficiency applications. 
[ 38 ] [ 39 ] GaN is efficient at transferring current, and this ultimately means that less energy is lost to heat. [ 40 ] GaN high-electron-mobility transistors (HEMT) have been offered commercially since 2006, and have found immediate use in various wireless infrastructure applications due to their high efficiency and high voltage operation. A second generation of devices with shorter gate lengths will address higher-frequency telecom and aerospace applications. [ 41 ] GaN-based metal–oxide–semiconductor field-effect transistors ( MOSFET ) and metal–semiconductor field-effect transistors ( MESFET ) also offer advantages including lower loss in high power electronics, especially in automotive and electric car applications. [ 42 ] Since 2008 these can be formed on a silicon substrate. [ 42 ] High-voltage (800 V) Schottky barrier diodes (SBDs) have also been made. [ 42 ] The higher efficiency and high power density of integrated GaN power ICs allows them to reduce the size, weight and component count of applications including mobile and laptop chargers, consumer electronics, computing equipment and electric vehicles. GaN-based electronics (not pure GaN) have the potential to drastically cut energy consumption, not only in consumer applications but even for power transmission utilities . Unlike silicon transistors that switch off due to power surges, [ clarification needed ] GaN transistors are typically depletion mode devices (i.e. on / resistive when the gate-source voltage is zero). Several methods have been proposed to reach normally-off (or E-mode) operation, which is necessary for use in power electronics: [ 43 ] [ 44 ] GaN technology is also utilized in military electronics such as active electronically scanned array radars. [ 45 ] Thales Group introduced the Ground Master 400 radar in 2010 utilizing GaN technology. In 2021 Thales put in operation more than 50,000 GaN Transmitters on radar systems. [ 46 ] The U.S. Army funded Lockheed Martin to incorporate GaN active-device technology into the AN/TPQ-53 radar system to replace two medium-range radar systems, the AN/TPQ-36 and the AN/TPQ-37 . [ 47 ] [ 48 ] The AN/TPQ-53 radar system was designed to detect, classify, track, and locate enemy indirect fire systems, as well as unmanned aerial systems. [ 49 ] The AN/TPQ-53 radar system provided enhanced performance, greater mobility, increased reliability and supportability, lower life-cycle cost, and reduced crew size compared to the AN/TPQ-36 and the AN/TPQ-37 systems. [ 47 ] Lockheed Martin fielded other tactical operational radars with GaN technology in 2018, including TPS-77 Multi Role Radar System deployed to Latvia and Romania . [ 50 ] In 2019, Lockheed Martin's partner ELTA Systems Limited , developed a GaN-based ELM-2084 Multi Mission Radar that was able to detect and track air craft and ballistic targets, while providing fire control guidance for missile interception or air defense artillery. On April 8, 2020, Saab flight tested its new GaN designed AESA X-band radar in a JAS-39 Gripen fighter. [ 51 ] Saab already offers products with GaN based radars, like the Giraffe radar , Erieye , GlobalEye , and Arexis EW. [ 52 ] [ 53 ] [ 54 ] [ 55 ] Saab also delivers major subsystems, assemblies and software for the AN/TPS-80 (G/ATOR) [ 56 ] India's Defence Research and Development Organisation is developing Virupaakhsha radar for Sukhoi Su-30MKI based on GaN technology. The radar is a further development of Uttam AESA Radar for use on HAL Tejas which employs GaAs technology. 
[ 57 ] [ 58 ] [ 59 ] GaN nanotubes and nanowires are proposed for applications in nanoscale electronics , optoelectronics and biochemical-sensing applications. [ 60 ] [ 61 ] When doped with a suitable transition metal such as manganese , GaN is a promising spintronics material ( magnetic semiconductors ). [ 27 ] GaN crystals can be grown from a molten Na/Ga melt held under 100 atmospheres of pressure of N 2 at 750 °C. As Ga will not react with N 2 below 1000 °C, the powder must be made from something more reactive, usually in one of the following ways: Gallium nitride can also be synthesized by injecting ammonia gas into molten gallium at 900–980 °C at normal atmospheric pressure. [ 64 ] Blue, white and ultraviolet LEDs are grown on industrial scale by metalorganic vapour-phase epitaxy (MOVPE) . [ 65 ] [ 66 ] The precursors are ammonia with either trimethylgallium or triethylgallium , the carrier gas being nitrogen or hydrogen . Growth temperature ranges between 800 and 1100 °C . Introduction of trimethylaluminium and/or trimethylindium is necessary for growing quantum wells and other kinds of heterostructures . Commercially, GaN crystals can be grown using molecular beam epitaxy or MBE. This process can be further modified to reduce dislocation densities. First, an ion beam is applied to the growth surface in order to create nanoscale roughness. Then, the surface is polished. This process takes place in a vacuum. Polishing methods typically employ a liquid electrolyte and UV irradiation to enable mechanical removal of a thin oxide layer from the wafer. More recent methods have been developed that utilize solid-state polymer electrolytes that are solvent-free and require no radiation before polishing. [ 67 ] GaN dust is an irritant to skin, eyes and lungs. The environment, health and safety aspects of gallium nitride sources (such as trimethylgallium and ammonia ) and industrial hygiene monitoring studies of MOVPE sources have been reported in a 2004 review. [ 68 ] Bulk GaN is non-toxic and biocompatible . [ 69 ] Therefore, it may be used in the electrodes and electronics of implants in living organisms.
https://en.wikipedia.org/wiki/Gallium_nitride
Gallium palladide (GaPd or PdGa) [ 2 ] is an intermetallic combination of gallium and palladium . It has the iron monosilicide crystal structure. [ 3 ] The compound has been suggested as an improved catalyst for hydrogenation reactions . [ 4 ] [ 5 ] In principle, gallium palladide can be a more selective catalyst since unlike substituted compounds, the palladium atoms are spaced out in a regular crystal structure rather than randomly. [ 6 ] [ 7 ] This inorganic compound –related article is a stub . You can help Wikipedia by expanding it .
https://en.wikipedia.org/wiki/Gallium_palladide
Gallus (the cockerel ) was a constellation introduced in 1612 (or 1613) by Petrus Plancius . It was in the northern part of what is now Puppis . It was not adopted in the atlases of Johannes Hevelius , John Flamsteed and Johann Bode and fell into disuse. This constellation -related article is a stub . You can help Wikipedia by expanding it .
https://en.wikipedia.org/wiki/Gallus_(constellation)
The Galoter process (also known as TSK , UTT , or SHC ; its newest modifications are called Enefit and Petroter ) is a shale oil extraction technology for the production of shale oil , a type of synthetic crude oil. In this process, the oil shale is decomposed into shale oil, oil shale gas , and spent residue . Decomposition is caused by mixing raw oil shale with hot oil shale ash generated by the combustion of carbonaceous residue ( semi-coke ) in the spent residue. The process was developed in the 1950s, and it is used commercially for shale oil production in Estonia . There are projects for further development of this technology and expansion of its usage, e.g., in Jordan and the USA. Research on the solid heat carrier process for pyrolysis of lignite , peat , and oil shale started in 1944 at the G. M. Krzhizhanovsky Power Engineering Institute of the Academy of Sciences of the USSR . [ 1 ] At the laboratory scale, the Galoter process was invented and developed in 1945–1946. [ 2 ] The process was named Galoter after the research team leader, Israel Galynker, whose name was combined with the word "thermal". [ 1 ] [ 3 ] [ 4 ] Further research continued in Estonia. A pilot unit with a capacity of 2.5 tonnes of oil shale per day was built in Tallinn in 1947. [ 1 ] [ 4 ] The first Galoter-type commercial scale pilot retorts were built at Kiviõli , Estonia , in 1953 and 1963 (closed in 1963 and 1981, respectively), with capacities of 200 and 500 tonnes of oil shale per day, respectively. [ 2 ] [ 4 ] [ 5 ] [ 6 ] The Narva Oil Plant , annexed to the Eesti Power Plant and operating two Galoter-type 3000 tonnes per day retorts, was commissioned in Estonia in 1980. [ 5 ] [ 6 ] [ 7 ] These retorts were designed by AtomEnergoProject and developed in cooperation with the Krzhizhanovsky Institute. [ 1 ] [ 5 ] Started as a pilot plant, the process of converting it to a commercial-scale plant took about 20 years. During this period, the company has modernized more than 70% of the equipment compared to the initial design. [ 2 ] In 1978, a 12.5-tonnes pilot plant was built in Verkhne-Sinevidnoy, Ukraine . It was used for testing Lviv–Volinsk lignite, and Carpathian, Kashpir (Russia), and Rotem (Israel) oil shales. In 1996–1997, a test unit was assembled in Tver . [ 1 ] In 2008, Estonian energy company Eesti Energia , an operator of Galoter retorts at the Narva Oil Plant, established a joint venture with the Finnish technology company Outotec called Enefit Outotec Technology to develop and commercialize a modified Galoter process–the Enefit process–which combines the current process with circulating fluidized bed technologies. [ 8 ] In 2013, Enefit Outotec Technology opened an Enefit testing plant in Frankfurt . [ 9 ] [ 10 ] In 2012, Eesti Energia opened a new generation Galoter-type plant in Narva using Enefit 280 technology. [ 11 ] In 2009–2015, VKG Oil , a subsidiary of Viru Keemia Grupp , opened in Kohtla-Järve, Estonia, three modified Galoter-type oil plants called Petroter. [ 12 ] [ 13 ] [ 14 ] The Galoter process is an above-ground oil-shale retorting technology classified as a hot recycled solids technology. [ 15 ] The process uses a horizontal cylindrical rotating kiln -type retort, which is slightly declined. [ 16 ] It has similarities with the TOSCO II process . [ 17 ] [ 18 ] Before retorting , the oil shale is crushed into fine particles with a size of less than 25 millimetres (1.0 in) in diameter. 
The crushed oil shale is dried in the fluidized bed drier (aerofountain drier) by contact with hot gases. [ 5 ] [ 19 ] After drying and pre-heating to 135 °C (275 °F), oil shale particles are separated from the gases by cyclonic separation . The oil shale is transported to the mixer chamber, where it is mixed with hot ash at 800 °C (1,470 °F) produced by combustion of spent oil shale in a separate furnace. [ 20 ] The ratio of oil shale ash to raw oil shale is 2.8–3:1. [ 5 ] The mixture is then moved to the hermetic rotating kiln. As heat transfers from the hot ash to the raw oil shale particles, pyrolysis (chemical decomposition) begins under oxygen-deficient conditions. [ 20 ] The temperature of pyrolysis is kept at 520 °C (970 °F). [ 17 ] The oil vapors and gases produced are cleaned of solids by cyclones and moved to the condensation system ( rectification column ), where shale oil condenses and oil shale gas is separated in gaseous form. [ 5 ] [ 16 ] Spent shale (semi-coke) is then transported to the separate furnace for combustion to produce hot ash. A portion of the hot ash is separated from the furnace gas by cyclones and recycled to the rotary kiln for pyrolysis. [ 20 ] The remaining ash is removed from the combustion gas by further cyclones, cooled with water, and removed for disposal. [ 5 ] The cleaned hot gas returns to the oil shale dryer. The Galoter process has high thermal and technological efficiency and a high oil recovery ratio. [ 7 ] [ 16 ] Oil yield reaches 85–90% of the Fischer assay value, and retort gas yield is about 48 cubic metres per tonne. [ 16 ] Oil quality is considered good, but the equipment is sophisticated and capacity is relatively low. [ 7 ] This process creates less pollution than internal combustion technologies, as it uses less water, but it still generates carbon dioxide as well as carbon disulfide and calcium sulfide . [ 21 ] The Enefit process is a modification of the Galoter process being developed by Enefit Outotec Technology. [ 22 ] In this process, the Galoter technology is combined with proven circulating fluidized bed (CFB) combustion technology used in coal-fired power plants and mineral processing. Oil shale particles and hot oil shale ash are mixed in a rotary drum as in the classical Galoter process. The primary modification is the replacement of the Galoter semi-coke furnace with a CFB furnace. The Enefit process also incorporates a fluid bed ash cooler and a waste heat boiler, commonly used in coal-fired boilers, to convert waste heat to steam for power generation. Compared to the traditional Galoter process, the Enefit process allows complete combustion of carbonaceous residue, improved energy efficiency through maximum utilization of waste heat, and less water use for quenching. According to its promoters, the Enefit process has a shorter retorting time compared to the classical Galoter process and therefore a greater throughput. Avoidance of moving parts in the retorting zones increases their durability. [ 23 ] Two Galoter retorts built in 1980 are used for oil production by the Narva Oil Plant , a subsidiary of the Estonian energy company Eesti Energia. [ 24 ] Both retorts process 125 tonnes per hour of oil shale. [ 25 ] The annual shale oil production is 135,000 tonnes and oil shale gas production is 40 million cubic metres (1.4 billion cubic feet) per annum. [ 2 ]
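A back-of-the-envelope sketch of the mass flows implied by the figures above (125 tonnes of shale per hour per retort, a hot-ash-to-shale ratio of 2.8–3:1, and roughly 48 m³ of retort gas per tonne); this is simple arithmetic, not a process model.

```python
# Rough mass-flow arithmetic for one Galoter retort, using figures quoted above.
feed_t_per_h = 125.0                    # raw oil shale feed per retort
ash_ratio_low, ash_ratio_high = 2.8, 3.0  # hot-ash-to-raw-shale ratio
gas_yield_m3_per_t = 48.0               # retort gas yield per tonne of shale

ash_low = feed_t_per_h * ash_ratio_low
ash_high = feed_t_per_h * ash_ratio_high
gas_m3_per_h = feed_t_per_h * gas_yield_m3_per_t

print(f"hot ash recycled to the kiln: {ash_low:.0f}-{ash_high:.0f} t/h")
print(f"retort gas produced:          {gas_m3_per_h:.0f} m^3/h")
```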
Since 2012, the Narva Oil Plant also uses a new plant employing Enefit 280 technology, with a processing capacity of 2.26 million tonnes of oil shale per year, producing 290,000 tonnes of shale oil and 75 million cubic metres (2.6 billion cubic feet) of oil shale gas. [ 11 ] In addition, Eesti Energia planned to begin construction of similar Enefit plants in Jordan [ 26 ] and in the USA. [ 27 ] Enefit Outotec Technology has analyzed the suitability of Enefit technology for the Tarfaya oil shale deposit in Morocco , developed by San Leon Energy . [ 8 ] VKG Oil operates three modified Galoter-type oil plants, called Petroter, in Kohtla-Järve, Estonia. [ 14 ] The basic engineering of these retorts was done by Atomenergoproject of Saint Petersburg , and the basic engineering of the condensation and distillation plant was done by Rintekno of Finland. [ 28 ] The plant has a processing capacity of 1.1 million tonnes of oil shale per year and produces 100,000 tonnes of shale oil, 30 million cubic metres (1.1 billion cubic feet) of oil shale gas, and 150 GWh of steam per year. [ 29 ] The Saudi Arabian International Corporation for Oil Shale Investment planned to utilize the Galoter (UTT-3000) process to build a 30,000 barrels per day (4,800 m³/d) shale oil plant in Jordan. [ 30 ] [ 31 ] Uzbekneftegaz planned to build eight UTT-3000 plants in Uzbekistan . [ 32 ] [ 33 ] However, in December 2015 Uzbekneftegaz announced a postponement of the project. [ 34 ]
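Dividing the stated annual outputs by the stated feed capacities gives rough per-tonne yields for the two plant types; actual yields depend on shale quality and operating conditions.

```python
# Per-tonne product yields implied by the plant capacities quoted above
# (simple division of the stated annual figures).
plants = {
    "Enefit 280": {"shale_t": 2.26e6, "oil_t": 2.9e5, "gas_m3": 75e6},
    "Petroter":   {"shale_t": 1.1e6,  "oil_t": 1.0e5, "gas_m3": 30e6},
}

for name, p in plants.items():
    oil_kg_per_t = 1000.0 * p["oil_t"] / p["shale_t"]
    gas_m3_per_t = p["gas_m3"] / p["shale_t"]
    print(f"{name}: ~{oil_kg_per_t:.0f} kg shale oil and "
          f"~{gas_m3_per_t:.0f} m^3 gas per tonne of oil shale")
```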
https://en.wikipedia.org/wiki/Galoter_process
A galvanic anode , or sacrificial anode , is the main component of a galvanic cathodic protection system used to protect buried or submerged metal structures from corrosion . They are made from a metal alloy with a more "active" voltage (more negative reduction potential / more positive oxidation potential ) than the metal of the structure. The difference in potential between the two metals means that the galvanic anode corrodes, in effect being "sacrificed" in order to protect the structure. In brief, corrosion is a chemical reaction occurring by an electrochemical mechanism (a redox reaction ). [ 1 ] During corrosion of iron or steel there are two reactions, oxidation (equation 1 ), where electrons leave the metal (and the metal dissolves, i.e. actual loss of metal results) and reduction, where the electrons are used to convert oxygen and water to hydroxide ions (equation 2 ): [ 2 ] In most environments, the hydroxide ions and ferrous ions combine to form ferrous hydroxide , which eventually becomes the familiar brown rust: [ 3 ] As corrosion takes place, oxidation and reduction reactions occur and electrochemical cells are formed on the surface of the metal so that some areas will become anodic (oxidation) and some cathodic (reduction). Electrons flow from the anodic areas into the electrolyte as the metal corrodes. Conversely, as electrons flow from the electrolyte to the cathodic areas, the rate of corrosion is reduced. [ 4 ] (The flow of electrons is in the opposite direction of the flow of electric current .) As the metal continues to corrode, the local potentials on the surface of the metal will change and the anodic and cathodic areas will change and move. As a result, in ferrous metals, a general covering of rust is formed over the whole surface, which will eventually consume all the metal. This is rather a simplified view of the corrosion process, because it can occur in several different forms. [ 5 ] Prevention of corrosion by cathodic protection (CP) works by introducing another metal (the galvanic anode) with a much more anodic surface, so that all the current will flow from the introduced anode and the metal to be protected becomes cathodic in comparison to the anode. This effectively stops the oxidation reactions on the metal surface by transferring them to the galvanic anode, which will be sacrificed in favour of the structure under protection. [ 6 ] More simply put, this takes advantage of the relatively low stability of magnesium, aluminum or zinc metals; they dissolve instead of iron because their bonding is weaker compared to iron, which is bonded strongly via its partially filled d-orbitals. For this protection to work there must be an electron pathway between the anode and the metal to be protected (e.g., a wire or direct contact) and an ion pathway between both the oxidizing agent (e.g., oxygen and water or moist soil) and the anode, and the oxidizing agent and the metal to be protected, thus forming a closed circuit; therefore simply bolting a piece of active metal such as zinc to a less active metal, such as mild steel, in air (a poor ionic conductor) will not furnish any protection. There are three main metals used as galvanic anodes: magnesium , aluminum and zinc . They are all available as blocks, rods, plates or extruded ribbon. Each material has advantages and disadvantages. Magnesium has the most negative electropotential of the three (see galvanic series ) and is more suitable for areas where the electrolyte (soil or water) resistivity is higher. 
This is usually on-shore pipelines and other buried structures, although it is also used on boats in fresh water and in water heaters. In some cases, the negative potential of magnesium can be a disadvantage: if the potential of the protected metal becomes too negative, reduction of water or solvated protons may evolve hydrogen atoms on the cathode surface, for instance according to leading to hydrogen embrittlement or to disbonding of the coating. [ 7 ] [ 8 ] Where this is a concern, zinc anodes may be used. An aluminum-zinc-tin alloy called KA90 is commonly used in marine and water heater applications. [ 9 ] Zinc and aluminium are generally used in salt water, where the resistivity is generally lower and magnesium dissolves relatively quickly by reaction with water under hydrogen evolution (self-corrosion). Typical uses are for the hulls of ships and boats, offshore pipelines and production platforms, in salt-water-cooled marine engines, on small boat propellers and rudders, and for the internal surface of storage tanks. Zinc is considered a reliable material, but is not suitable for use at higher temperatures, as it tends to passivate (the oxide layer formed shields from further oxidation); if this happens, current may cease to flow and the anode stops working. [ 10 ] Zinc has a relatively low driving voltage, which means in higher-resistivity soils or water it may not be able to provide sufficient current. However, in some circumstances — where there is a risk of hydrogen embrittlement , for example — this lower voltage is advantageous, as overprotection is avoided. [ 11 ] Aluminium anodes have several advantages, such as a lighter weight, and much higher capacity than zinc. However, their electrochemical behavior is not considered as reliable as zinc, and greater care must be taken in how they are used. Aluminium anodes will passivate where chloride concentration is below 1,446 parts per million . [ 12 ] One disadvantage of aluminium is that if it strikes a rusty surface, a large thermite spark may be generated, so its use is restricted in tanks where there may be explosive atmospheres and there is a risk of the anode falling. [ 8 ] Since the operation of a galvanic anode relies on the difference in electropotential between the anode and the cathode, practically any metal can be used to protect some other, providing there is a sufficient difference in potential. For example, iron anodes can be used to protect copper. [ 13 ] The design of a galvanic anode CP system should consider many factors, including the type of structure, the resistivity of the electrolyte (soil or water) it will operate in, the type of coating and the service life. The primary calculation is how much anode material will be required to protect the structure for the required time. Too little material may provide protection for a while, but need to be replaced regularly. Too much material would provide protection at an unnecessary cost. The mass in kg is given by equation ( 5 ). [ 14 ] The amount of current required corresponds directly to the surface area of the metal exposed to the soil or water, so the application of a coating drastically reduces the mass of anode material required. The better the coating, the less anode material is needed. Once the required mass of material is known, the particular type of anode is chosen. 
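The article's equation (5) for the required anode mass is not reproduced above; the sketch below uses a commonly applied form of the sizing relation from cathodic-protection design practice, with illustrative numbers (the current demand, design life, anode capacities and utilization factor are all assumptions, not values from this article).

```python
# A commonly used form of the anode mass sizing relation (a sketch under
# stated assumptions, not the article's own equation (5)):
#   mass [kg] = (mean current demand [A] * design life [h])
#               / (utilization factor * anode capacity [A*h/kg])
def anode_mass_kg(current_a, life_years, capacity_ah_per_kg, utilization=0.85):
    hours = life_years * 8760.0
    return current_a * hours / (utilization * capacity_ah_per_kg)

# Example: 2 A mean current demand for 20 years, comparing aluminium anodes
# (assumed capacity ~2000 A*h/kg) with zinc (assumed ~780 A*h/kg).
print(f"Al: {anode_mass_kg(2.0, 20, 2000):.0f} kg")
print(f"Zn: {anode_mass_kg(2.0, 20, 780):.0f} kg")
```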
Differently shaped anodes will have a different resistance to earth, which governs how much current can be produced, so the resistance of the anode is calculated to ensure that sufficient current will be available. If the resistance of the anode is too high, either a differently shaped or sized anode is chosen, or a greater quantity of anodes must be used. [ 14 ] The arrangement of the anodes is then planned so as to provide an even distribution of current over the whole structure. For example, if a particular design shows that a pipeline 10 kilometres (6.2 mi) long needs 10 anodes, then approximately one anode per kilometre would be more effective than putting all 10 anodes at one end or in the centre. As the anode materials used are generally more costly than iron, using this method to protect ferrous metal structures may not appear to be particularly cost-effective. However, consideration should also be given to the costs that would be incurred to repair a corroded hull or to replace a steel pipeline or tank whose structural integrity has been compromised by corrosion. Nevertheless, there is a limit to the cost-effectiveness of a galvanic system: on larger structures, such as long pipelines, so many anodes may be needed that it would be more cost-effective to install impressed current cathodic protection . The basic method is to produce sacrificial anodes through a casting process, and two casting methods can be distinguished. [ 15 ] The high-pressure die-casting process for sacrificial anodes is widespread and is a fully automated machine process. In order for the manufacturing process to run reliably and repeatably, a modification of the processed sacrificial anode alloy is required. Alternatively, the gravity casting process is used for the production of sacrificial anodes. This process is performed manually or is partially automated. The alloy does not have to be adapted to the manufacturing process, but is designed for optimum corrosion protection.
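For the resistance-to-earth check described at the start of this section, one standard relation for a single rod-shaped anode is Dwight's formula. The sketch below applies it with assumed soil resistivity, anode dimensions and driving voltage; these inputs are illustrative only and are not taken from the article.

```python
import math

# Dwight's formula for the resistance to earth of a single vertical rod:
#   R = rho / (2*pi*L) * (ln(8*L/d) - 1)
def rod_resistance_ohm(rho_ohm_m, length_m, diameter_m):
    return rho_ohm_m / (2 * math.pi * length_m) * (math.log(8 * length_m / diameter_m) - 1)

rho = 30.0              # assumed soil resistivity [ohm*m]
L, d = 1.5, 0.05        # assumed anode length and diameter [m]
driving_voltage = 0.5   # assumed net driving voltage between anode and structure [V]

r = rod_resistance_ohm(rho, L, d)
print(f"resistance to earth: {r:.1f} ohm")
print(f"current output:      {driving_voltage / r * 1000:.0f} mA")
```

If the resulting current is too small for the structure's demand, the designer would pick a larger or differently shaped anode, or use several in parallel, as described above.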
https://en.wikipedia.org/wiki/Galvanic_anode
A galvanic cell or voltaic cell , named after the scientists Luigi Galvani and Alessandro Volta , respectively, is an electrochemical cell in which an electric current is generated from spontaneous oxidation–reduction reactions. An example of a galvanic cell consists of two different metals, each immersed in separate beakers containing their respective metal ions in solution that are connected by a salt bridge or separated by a porous membrane. [ 1 ] Volta was the inventor of the voltaic pile , the first electrical battery . Common usage of the word battery has evolved to include a single Galvanic cell, but the first batteries had many Galvanic cells. [ 2 ] In 1780, Luigi Galvani discovered that when two different metals (e.g., copper and zinc) are in contact and then both are touched at the same time to two different parts of a muscle of a frog leg, to close the circuit, the frog's leg contracts. [ 3 ] He called this " animal electricity ". The frog's leg, as well as being a detector of electrical current, was also the electrolyte (to use the language of modern chemistry). A year after Galvani published his work (1790), Alessandro Volta showed that the frog was not necessary, using instead a force-based detector and brine-soaked paper (as electrolyte). (Earlier Volta had established the law of capacitance C = ⁠ Q / V ⁠ with force-based detectors). In 1799 Volta invented the voltaic pile, which is a stack of galvanic cells each consisting of a metal disk, an electrolyte layer, and a disk of a different metal. He built it entirely out of non-biological material to challenge Galvani's (and the later experimenter Leopoldo Nobili )'s animal electricity theory in favor of his own metal-metal contact electricity theory. [ 4 ] Carlo Matteucci in his turn constructed a battery entirely out of biological material in answer to Volta. [ 5 ] Volta's contact electricity view characterized each electrode with a number that we would now call the work function of the electrode. This view ignored the chemical reactions at the electrode-electrolyte interfaces, which include H 2 formation on the more noble metal in Volta's pile. Although Volta did not understand the operation of the battery or the galvanic cell, these discoveries paved the way for electrical batteries; Volta's cell was named an IEEE Milestone in 1999. [ 6 ] Some forty years later, Faraday (see Faraday's laws of electrolysis ) showed that the galvanic cell—now often called a voltaic cell—was chemical in nature. Faraday introduced new terminology to the language of chemistry: electrode ( cathode and anode ), electrolyte , and ion ( cation and anion ). Thus Galvani incorrectly thought the source of electricity (or source of electromotive force (emf), or seat of emf) was in the animal, Volta incorrectly thought it was in the physical properties of the isolated electrodes, but Faraday correctly identified the source of emf as the chemical reactions at the two electrode-electrolyte interfaces. The authoritative work on the intellectual history of the voltaic cell remains that by Ostwald. [ 7 ] It was suggested by Wilhelm König in 1940 that the object known as the Baghdad battery might represent galvanic cell technology from ancient Parthia . Replicas filled with citric acid or grape juice have been shown to produce a voltage. However, it is far from certain that this was its purpose—other scholars have pointed out that it is very similar to vessels known to have been used for storing parchment scrolls. 
[ 8 ] Galvanic cells are extensions of spontaneous redox reactions, designed so that the energy produced by the reaction can be harnessed. [ 1 ] For example, when one immerses a strip of zinc metal (Zn) in an aqueous solution of copper sulfate (CuSO 4 ), dark-colored solid deposits will collect on the surface of the zinc metal and the blue color characteristic of the Cu 2+ ion disappears from the solution. The deposits on the surface of the zinc metal consist of copper metal, and the solution now contains zinc ions. This reaction is represented by Zn (s) + Cu 2+ (aq) → Zn 2+ (aq) + Cu (s). In this redox reaction, Zn is oxidized to Zn 2+ and Cu 2+ is reduced to Cu. When electrons are transferred directly from Zn to Cu 2+ , the enthalpy of reaction is lost to the surroundings as heat. However, the same reaction can be carried out in a galvanic cell, allowing some of the chemical energy released to be converted into electrical energy. In its simplest form, a half-cell consists of a solid metal (called an electrode ) that is submerged in a solution; the solution contains cations (+) of the electrode metal and anions (−) to balance the charge of the cations. [ 9 ] The full cell consists of two half-cells, usually connected by a semi-permeable membrane or by a salt bridge that prevents the ions of the more noble metal from plating out at the other electrode. [ 9 ] A specific example is the Daniell cell (see figure), with a zinc (Zn) half-cell containing a solution of ZnSO 4 (zinc sulfate) and a copper (Cu) half-cell containing a solution of CuSO 4 (copper sulfate). A salt bridge is used here to complete the electric circuit. If an external electrical conductor connects the copper and zinc electrodes, zinc from the zinc electrode dissolves into the solution as Zn 2+ ions (oxidation), releasing electrons that enter the external conductor. To compensate for the increased zinc ion concentration, zinc ions (cations) leave the zinc half-cell and sulfate ions (anions) enter it via the salt bridge. In the copper half-cell, the copper ions plate onto the copper electrode (reduction), taking up electrons that leave the external conductor. Since the Cu 2+ ions (cations) plate onto the copper electrode, the latter is called the cathode . Correspondingly the zinc electrode is the anode . The electrochemical reaction is again Zn + Cu 2+ → Zn 2+ + Cu, the same reaction as given in the previous example. In addition, electrons flow through the external conductor, which is the primary application of the galvanic cell. As discussed under cell voltage , the electromotive force of the cell is the difference of the half-cell potentials, a measure of the relative ease of dissolution of the two electrodes into the electrolyte. The emf depends on both the electrodes and on the electrolyte, an indication that the emf is chemical in nature. A half-cell contains a metal in two oxidation states . Inside an isolated half-cell, there is an oxidation-reduction (redox) reaction that is in chemical equilibrium , a condition written symbolically as follows (here, "M" represents a metal cation, an atom that has a charge imbalance due to the loss of n electrons): M n+ + n e − ⇌ M. A galvanic cell consists of two half-cells, such that the electrode of one half-cell is composed of metal A, and the electrode of the other half-cell is composed of metal B; the redox reactions for the two separate half-cells are thus: A n+ + n e − ⇌ A and B m+ + m e − ⇌ B. The overall balanced reaction is: m A + n B m+ → m A n+ + n B. In other words, the metal atoms of one half-cell are oxidized while the metal cations of the other half-cell are reduced.
By separating the metals in two half-cells, their reaction can be controlled in a way that forces transfer of electrons through the external circuit where they can do useful work . In one half-cell, dissolved metal B cations combine with the free electrons that are available at the interface between the solution and the metal B electrode; these cations are thereby neutralized, causing them to precipitate from solution as deposits on the metal B electrode, a process known as plating . This reduction reaction causes the free electrons throughout the metal B electrode, the wire, and the metal A electrode to be pulled into the metal B electrode. Consequently, electrons are wrested away from some of the atoms of the metal A electrode, as though the metal B cations were reacting directly with them; those metal A atoms become cations that dissolve into the surrounding solution. As this reaction continues, the half-cell with the metal A electrode develops a positively charged solution (because the metal A cations dissolve into it), while the other half-cell develops a negatively charged solution (because the metal B cations precipitate out of it, leaving behind the anions); unabated, this imbalance in charge would stop the reaction. The solutions of the half-cells are connected by a salt bridge or a porous plate that allows ions to pass from one solution to the other, which balances the charges of the solutions and allows the reaction to continue. By definition, the anode is the electrode at which oxidation (loss of electrons) takes place, and the cathode is the electrode at which reduction (gain of electrons) takes place; in the cell just described, the metal A electrode is the anode and the metal B electrode is the cathode. By their nature, galvanic cells produce direct current . The Weston cell has an anode composed of cadmium mercury amalgam , and a cathode composed of pure mercury. The electrolyte is a (saturated) solution of cadmium sulfate . The depolarizer is a paste of mercurous sulfate. When the electrolyte solution is saturated, the voltage of the cell is very reproducible; hence, in 1911, it was adopted as an international standard for voltage. For instance, a typical 12 V lead–acid battery has six galvanic cells connected in series, with the anodes composed of lead and cathodes composed of lead dioxide, both immersed in sulfuric acid . Large central office battery rooms – in a telephone exchange to provide power for subscribers' land-line telephones, for instance – may have many cells, connected both in series and parallel: individual cells are connected in series as a battery of cells with some standard voltage ( c. 40 V ), and banks of such serial batteries are themselves connected in parallel to provide adequate amperage to supply a typical peak demand for telephone connections. The voltage ( electromotive force E o ) produced by a galvanic cell can be estimated from the standard Gibbs free energy change in the electrochemical reaction according to:

$E_{\mathrm{cell}}^{\circ} = -\dfrac{\Delta_{r}G^{\circ}}{\nu_{e}F}$

where ν e is the number of electrons transferred in the balanced half reactions, and F is Faraday's constant . However, it can be determined more conveniently by the use of a standard potential table for the two half cells involved. The first step is to identify the two metals and their ions reacting in the cell. Then one looks up the standard electrode potential , E o , in volts , for each of the two half reactions . The standard potential of the cell is equal to the more positive E o value minus the more negative E o value. For example, in the figure above the solutions are CuSO 4 and ZnSO 4 .
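As a rough illustration of the two routes just described (the Gibbs free energy relation and the table-lookup rule for combining half-cell potentials), the following Python sketch computes the standard potential of the Daniell cell and the corresponding ΔrG°. The dictionary and function names are illustrative choices, not part of the article; the potentials are the commonly tabulated values for the Cu2+/Cu and Zn2+/Zn couples, used here only as example inputs.

```python
# Minimal sketch: E°cell from tabulated standard reduction potentials,
# then the standard Gibbs free energy change via ΔrG° = −νe·F·E°cell.

F = 96485.0  # Faraday constant, C per mole of electrons

# Standard reduction potentials (V vs. SHE); illustrative example values
standard_potentials = {
    "Cu2+/Cu": +0.34,
    "Zn2+/Zn": -0.76,
}

def cell_potential(cathode_couple: str, anode_couple: str) -> float:
    """E°cell = E°(more positive couple) − E°(more negative couple)."""
    return standard_potentials[cathode_couple] - standard_potentials[anode_couple]

def gibbs_energy(e_cell: float, n_electrons: int) -> float:
    """ΔrG° in joules per mole of reaction, from ΔrG° = −νe·F·E°cell."""
    return -n_electrons * F * e_cell

e_cell = cell_potential("Cu2+/Cu", "Zn2+/Zn")   # +1.10 V for the Daniell cell
dG = gibbs_energy(e_cell, n_electrons=2)        # ≈ −212 kJ/mol (spontaneous)
print(f"E°cell = {e_cell:.2f} V, ΔrG° = {dG/1000:.0f} kJ/mol")
```

A negative ΔrG° confirms that the reaction runs spontaneously in the direction written, which is why the cell can deliver current.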
Each solution has a corresponding metal strip in it, and a salt bridge or porous disk connecting the two solutions and allowing SO 4 2− ions to flow freely between the copper and zinc solutions. To calculate the standard potential one looks up copper and zinc's half reactions and finds:

Cu 2+ + 2 e − ⇌ Cu   E o = +0.34 V
Zn 2+ + 2 e − ⇌ Zn   E o = −0.76 V

Thus the overall reaction is:

Zn + Cu 2+ → Zn 2+ + Cu

The standard potential for the reaction is then +0.34 V − (−0.76 V) = +1.10 V . The polarity of the cell is determined as follows. Zinc metal is more strongly reducing than copper metal because the standard (reduction) potential for zinc is more negative than that of copper. Thus, zinc metal will lose electrons to copper ions and develop a positive electrical charge. The equilibrium constant , K , for the cell is given by:

$\ln K = \dfrac{\nu_{e} F\, E_{\mathrm{cell}}^{\circ}}{R T}$

where R is the gas constant and T is the absolute temperature. For the Daniell cell K ≈ 1.5 × 10 37 . Thus, at equilibrium, a few electrons are transferred, enough to cause the electrodes to be charged. [ 11 ] (ch. 7, "Equilibrium electrochemistry") Actual half-cell potentials must be calculated by using the Nernst equation as the solutes are unlikely to be in their standard states:

$E_{\mathrm{half\text{-}cell}} = E^{\circ} - \dfrac{R T}{\nu_{e} F}\,\ln Q$

where Q is the reaction quotient . When the charges of the ions in the reaction are equal, this simplifies to:

$E_{\mathrm{half\text{-}cell}} = E^{\circ} - 2.303\,\dfrac{R T}{\nu_{e} F}\,\log_{10}\{\mathrm{M}^{n+}\}$

where { M n+ } is the activity of the metal ion in solution. In practice, concentration in mol/L is used in place of activity. The metal electrode is in its standard state so by definition has unit activity. The potential of the whole cell is obtained as the difference between the potentials for the two half-cells, so it depends on the concentrations of both dissolved metal ions. If the concentrations are the same the Nernst equation is not needed, and

$E_{\mathrm{cell}} = E_{\mathrm{cell}}^{\circ}$

under the conditions assumed here. The value of 2.303 R / F is 1.9845 × 10 −4 V/K , so at T = 25 °C (298.15 K) the half-cell potential will change by only 0.05918 V / ν e if the concentration of a metal ion is increased or decreased by a factor of 10 .

$E_{\mathrm{half\text{-}cell}} = E^{\circ} - \dfrac{0.05918\ \mathrm{V}}{\nu_{e}}\,\log_{10}\{\mathrm{M}^{n+}\}$

These calculations are based on the assumption that all chemical reactions are in equilibrium. When a current flows in the circuit, equilibrium conditions are not achieved and the cell voltage will usually be reduced by various mechanisms, such as the development of overpotentials . [ 11 ] (§ 25.12 "Working galvanic cells") Also, since chemical reactions occur when the cell is producing power, the electrolyte concentrations change and the cell voltage is reduced. A consequence of the temperature dependency of standard potentials is that the voltage produced by a galvanic cell is also temperature dependent. Galvanic corrosion is the electrochemical erosion of metals.
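The following Python sketch works through the two calculations above for the Daniell cell, under the stated assumptions (25 °C, activities approximated by concentrations). To avoid sign ambiguities it applies the Nernst equation to the overall cell reaction, with Q = {Zn 2+}/{Cu 2+}; the function name and the chosen concentrations are illustrative, not taken from the article.

```python
# Minimal sketch: equilibrium constant from ln K = νe·F·E°cell/(R·T), and the
# full-cell Nernst correction when the two ion activities differ.
import math

R = 8.314462      # gas constant, J/(mol·K)
F = 96485.0       # Faraday constant, C/mol of electrons
T = 298.15        # K (25 °C)
NU_E = 2          # electrons transferred in Zn + Cu2+ -> Zn2+ + Cu
E_STD = 1.10      # V, standard Daniell cell potential

# Equilibrium constant: ln K = νe F E° / (R T)
ln_K = NU_E * F * E_STD / (R * T)
print(f"K ≈ 10^{ln_K / math.log(10):.0f}")       # ≈ 10^37, as quoted in the text

def daniell_cell_potential(a_zn: float, a_cu: float) -> float:
    """Nernst equation for the overall reaction, Q = {Zn2+}/{Cu2+};
    activities approximated by concentrations in mol/L."""
    return E_STD - (R * T / (NU_E * F)) * math.log(a_zn / a_cu)

print(daniell_cell_potential(1.0, 1.0))    # 1.10 V: standard conditions
print(daniell_cell_potential(1.0, 0.1))    # ≈ 1.07 V: a tenfold change in one
                                           # concentration shifts the potential
                                           # by about 0.05918/νe ≈ 0.03 V
```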
Corrosion occurs when two dissimilar metals are in contact with each other in the presence of an electrolyte , such as salt water. This forms a galvanic cell, with hydrogen gas forming on the more noble (less active) metal. The resulting electrochemical potential then develops an electric current that electrolytically dissolves the less noble material. A concentration cell can be formed if the same metal is exposed to two different concentrations of electrolyte.
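For the concentration cell mentioned at the end of the paragraph above, the standard potentials of the two identical electrodes cancel and only the Nernst term remains. A small hedged sketch, with made-up example concentrations, is given below; the electrode sitting in the more dilute solution acts as the anode.

```python
# Minimal sketch: EMF of a concentration cell (same metal, two different ion
# concentrations), E = (R·T / (n·F)) · ln(c_concentrated / c_dilute).
import math

R, F, T = 8.314462, 96485.0, 298.15

def concentration_cell_emf(c_dilute: float, c_concentrated: float, n: int) -> float:
    """EMF in volts; the electrode in the dilute solution is the anode."""
    return (R * T / (n * F)) * math.log(c_concentrated / c_dilute)

# Example: copper electrodes in 0.01 M and 1.0 M Cu2+ solutions (n = 2)
print(f"{concentration_cell_emf(0.01, 1.0, n=2):.3f} V")   # ≈ 0.059 V
```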
https://en.wikipedia.org/wiki/Galvanic_cell
Galvanic corrosion (also called bimetallic corrosion or dissimilar metal corrosion ) is an electrochemical process in which one metal corrodes preferentially when it is in electrical contact with another, different metal, when both in the presence of an electrolyte . A similar galvanic reaction is exploited in single-use battery cells to generate a useful electrical voltage to power portable devices. This phenomenon is named after Italian physician Luigi Galvani (1737–1798). A similar type of corrosion caused by the presence of an external electric current is called electrolytic corrosion . Dissimilar metals and alloys have different electrode potentials , and when two or more come into contact in an electrolyte, one metal (that is more reactive ) acts as anode and the other (that is less reactive ) as cathode . The electropotential difference between the reactions at the two electrodes is the driving force for an accelerated attack on the anode metal, which dissolves into the electrolyte. This leads to the metal at the anode corroding more quickly than it otherwise would and corrosion at the cathode being inhibited. The presence of an electrolyte and an electrical conducting path between the metals is essential for galvanic corrosion to occur. The electrolyte provides a means for ion migration whereby ions move to prevent charge build-up that would otherwise stop the reaction. If the electrolyte contains only metal ions that are not easily reduced (such as Na + , Ca 2+ , K + , Mg 2+ , or Zn 2+ ), the cathode reaction is the reduction of dissolved H + to H 2 or O 2 to OH − . [ 1 ] [ 2 ] [ 3 ] [ 4 ] In some cases, this type of reaction is intentionally encouraged. For example, low-cost household batteries typically contain carbon-zinc cells . As part of a closed circuit (the electron pathway), the zinc within the cell will corrode preferentially (the ion pathway) as an essential part of the battery producing electricity. Another example is the cathodic protection of buried or submerged structures as well as hot water storage tanks . In this case, sacrificial anodes work as part of a galvanic couple, promoting corrosion of the anode, while protecting the cathode metal. In other cases, such as mixed metals in piping (for example, copper, cast iron and other cast metals), galvanic corrosion will contribute to accelerated corrosion of parts of the system. Corrosion inhibitors such as sodium nitrite or sodium molybdate can be injected into these systems to reduce the galvanic potential. However, the application of these corrosion inhibitors must be monitored closely. If the application of corrosion inhibitors increases the conductivity of the water within the system, the galvanic corrosion potential can be greatly increased. Acidity or alkalinity ( pH ) is also a major consideration with regard to closed loop bimetallic circulating systems. Should the pH and corrosion inhibition doses be incorrect, galvanic corrosion will be accelerated. In most HVAC systems, the use of sacrificial anodes and cathodes is not an option, as they would need to be applied within the plumbing of the system and, over time, would corrode and release particles that could cause potential mechanical damage to circulating pumps, heat exchangers, etc. [ 5 ] A common example of galvanic corrosion occurs in galvanized iron , a sheet of iron or steel covered with a zinc coating. Even when the protective zinc coating is broken, the underlying steel is not attacked. Instead, the zinc is corroded because it is less "noble". 
Only after it has been consumed can rusting of the base metal occur. By contrast, with a conventional tin can , the opposite of a protective effect occurs: because the tin is more noble than the underlying steel, when the tin coating is broken, the steel beneath is immediately attacked preferentially. A spectacular example of galvanic corrosion occurred in the Statue of Liberty when regular maintenance checks in the 1980s revealed that corrosion had taken place between the outer copper skin and the wrought iron support structure. Although the problem had been anticipated when the structure was built by Gustave Eiffel to Frédéric Bartholdi 's design in the 1880s, the insulation layer of shellac between the two metals had failed over time and resulted in rusting of the iron supports. An extensive renovation was carried out with replacement of the original insulation with PTFE . The structure was far from unsafe owing to the large number of unaffected connections, but it was regarded as a precautionary measure to preserve a national symbol of the United States. [ 6 ] In 1681, Samuel Pepys (then serving as Admiralty Secretary) agreed to the removal of lead sheathing from English Royal Navy vessels to prevent the mysterious disintegration of their rudder-irons and bolt-heads, though he confessed himself baffled as to the reason the lead caused the corrosion. [ 7 ] [ 8 ] The problem recurred when vessels were sheathed in copper to reduce marine weed accumulation and protect against shipworm . In an experiment, the Royal Navy in 1761 had tried fitting the hull of the frigate HMS Alarm with 12-ounce copper plating. Upon her return from a voyage to the West Indies, it was found that although the copper remained in fine condition and had indeed deterred shipworm, it had also become detached from the wooden hull in many places because the iron nails used during its installation "were found dissolved into a kind of rusty Paste". [ 9 ] To the surprise of the inspection teams, however, some of the iron nails were virtually undamaged. Closer inspection revealed that water-resistant brown paper trapped under the nail head had inadvertently protected some of the nails: "Where this covering was perfect, the Iron was preserved from Injury". The copper sheathing had been delivered to the dockyard wrapped in the paper which was not always removed before the sheets were nailed to the hull. The conclusion therefore reported to the Admiralty in 1763 was that iron should not be allowed direct contact with copper in sea water. [ 10 ] [ 11 ] Serious galvanic corrosion has been reported on the latest US Navy attack littoral combat vessel the USS Independence caused by steel water jet propulsion systems attached to an aluminium hull. Without electrical isolation between the steel and aluminium, the aluminium hull acts as an anode to the stainless steel, resulting in aggressive galvanic corrosion. [ 12 ] The unexpected fall in 2011 of a heavy light fixture from the ceiling of the Big Dig vehicular tunnel in Boston revealed that corrosion had weakened its support. Improper use of aluminium in contact with stainless steel had caused rapid corrosion in the presence of salt water. [ 13 ] The electrochemical potential difference between stainless steel and aluminium is in the range of 0.5 to 1.0 V, depending on the exact alloys involved, and can cause considerable corrosion within months under unfavorable conditions. Thousands of failing lights would have to be replaced, at an estimated cost of $54 million. 
[ 14 ] A " lasagna cell" is accidentally produced when salty moist food such as lasagna or sauerkraut is stored in a steel baking pan and is covered with aluminium foil. After a few hours the foil develops small holes where it touches the lasagna, and the food surface becomes covered with small spots composed of corroded aluminium. [ 15 ] In this example, the salty food (lasagna) is the electrolyte, the aluminium foil is the anode, and the steel pan is the cathode. If the aluminium foil touches the electrolyte only in small areas, the galvanic corrosion is concentrated, and corrosion can occur fairly rapidly. If the aluminium foil was not used with a dissimilar metal container, the reaction was probably a chemical one. It is possible for heavy concentrations of salt, vinegar or some other acidic compounds to cause the foil to disintegrate. The product of either of these reactions is an aluminium salt . It does not harm the food, but any deposit may impart an undesired flavor and color. [ 16 ] The common technique of cleaning silverware by immersion in a hot electrolytic bath with a piece of aluminium is an example of galvanic corrosion. Aluminium foil is preferred because of its much greater surface area than that of ingots, although if the foil has a "non-stick" face, this must be removed with steel wool first. The electrolytic bath is usually composed of water and sodium bicarbonate , i.e., household baking soda. Silver darkens and corrodes in the presence of airborne sulfur molecules, and the copper in sterling silver corrodes under a variety of conditions. These layers of corrosion can be largely removed through the electrochemical reduction of silver sulfide molecules: the presence of aluminium (which is less noble than either silver or copper) in the bath of sodium bicarbonate strips the sulfur atoms off the silver sulfide and transfers them onto and thereby corrodes the piece of aluminium (a much more reactive metal), leaving elemental silver behind. No silver is lost in the process. [ 17 ] There are several ways of reducing and preventing this form of corrosion: All metals can be classified into a galvanic series representing the electrical potential they develop in a given electrolyte against a standard reference electrode. The relative position of two metals on such a series gives a good indication of which metal is more likely to corrode more quickly. However, other factors such as water aeration and flow rate can influence the rate of the process markedly. The compatibility of two different metals may be predicted by consideration of their anodic index. This parameter is a measure of the electrochemical voltage that will be developed between the metal and gold. To find the relative voltage of a pair of metals it is only required to subtract their anodic indices. [ 18 ] To reduce galvanic corrosion for metals stored in normal environments such as storage in warehouses or non-temperature and humidity controlled environments, there should not be more than 0.25 V difference in the anodic index of the two metals in contact. For controlled environments in which temperature and humidity are controlled, 0.50 V can be tolerated. For harsh environments such as outdoors, high humidity, and salty environments, there should be not more than 0.15 V difference in the anodic index. For example: gold and silver have a difference of 0.15 V, therefore the two metals will not experience significant corrosion even in a harsh environment. 
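As a sketch of the anodic-index rule just described, the snippet below checks whether a pair of metals stays within the quoted limits (0.15 V for harsh, 0.25 V for normal, 0.50 V for controlled environments). The index values in the table are typical published figures given only as example data, and the function name is an arbitrary choice for illustration.

```python
# Minimal sketch of the anodic-index compatibility check described above.

ANODIC_INDEX = {      # V, magnitude of potential developed against gold (example data)
    "gold": 0.00,
    "silver": 0.15,
    "stainless steel (passive)": 0.50,
    "tin": 0.65,
    "steel": 0.85,
    "aluminium": 0.90,
    "zinc": 1.25,
}

THRESHOLDS = {"harsh": 0.15, "normal": 0.25, "controlled": 0.50}

def compatible(metal_a: str, metal_b: str, environment: str) -> bool:
    """True if the anodic-index difference is within the environment's limit."""
    diff = abs(ANODIC_INDEX[metal_a] - ANODIC_INDEX[metal_b])
    return diff <= THRESHOLDS[environment]

print(compatible("gold", "silver", "harsh"))                          # True  (0.15 V apart)
print(compatible("aluminium", "steel", "normal"))                     # True  (0.05 V apart)
print(compatible("aluminium", "stainless steel (passive)", "harsh"))  # False (0.40 V apart)
```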
[ 19 ] [ page needed ] When design considerations require that dissimilar metals come in contact, the difference in anodic index is often managed by finishes and plating. The finishing and plating selected allow the dissimilar materials to be in contact, while protecting the more base materials from corrosion by the more noble. [ 19 ] [ page needed ] It will always be the metal with the most negative anodic index which will ultimately suffer from corrosion when galvanic incompatibility is in play. This is why sterling silver and stainless steel tableware should never be placed together in a dishwasher at the same time, as the steel items will likely experience corrosion by the end of the cycle (soap and water having served as the chemical electrolyte, and heat having accelerated the process). The term "electrolytic corrosion" is most frequently used to indicate corrosion caused by electric current applied to the metal from external sources. The mechanism of this corrosion is essentially the same as for the galvanic corrosion. [ 20 ]
https://en.wikipedia.org/wiki/Galvanic_corrosion
The galvanic series (or electropotential series ) ranks metals and semi-metals by their nobility . When two metals are submerged in an electrolyte , while also electrically connected by some external conductor, the less noble (base) metal will experience galvanic corrosion . The rate of corrosion is determined by the electrolyte, the difference in nobility, and the relative areas of the anode and cathode exposed to the electrolyte. The difference can be measured as a difference in electrode potential: the less noble metal is the one with a lower (that is, more negative) electrode potential than the nobler one, and will function as the anode (electron or anion attractor) within the electrolyte in a device functioning as described above (a galvanic cell ). Galvanic reaction is the principle upon which batteries are based. See the table of standard electrode potentials for more details. The galvanic series for stagnant (that is, low oxygen content) seawater is usually presented as a chart, not reproduced here; the order may change in different environments. [ 1 ] In that chart, certain steels appear at a second position showing where they fall when in acidic or stagnant water (as in a bilge), where crevice corrosion occurs: the same steel can occupy a very different place in the series depending on the electrolyte it is in, which makes prevention of corrosion more difficult.
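The way a galvanic series is used in practice can be shown with a small Python sketch: given approximate corrosion potentials of two connected metals in a given electrolyte, the one with the more negative potential is the less noble member and becomes the corroding anode. The numbers below are rough illustrative values for seawater, not the article's table, and the function name is an assumption for the example.

```python
# Minimal sketch: pick the corroding anode from approximate galvanic-series
# potentials (volts vs. a reference electrode, seawater; example values only).

SEAWATER_POTENTIAL = {
    "graphite": +0.25,
    "titanium": 0.00,
    "copper": -0.35,
    "mild steel": -0.60,
    "aluminium alloy": -0.80,
    "zinc": -1.00,
    "magnesium": -1.60,
}

def corroding_anode(metal_a: str, metal_b: str) -> str:
    """Return the metal expected to corrode preferentially (the less noble one)."""
    return min((metal_a, metal_b), key=SEAWATER_POTENTIAL.get)

print(corroding_anode("copper", "zinc"))          # zinc
print(corroding_anode("mild steel", "graphite"))  # mild steel
```

This is also why sacrificial anodes such as zinc or magnesium are attached to steel structures: they sit lower in the series and corrode in place of the protected metal.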
https://en.wikipedia.org/wiki/Galvanic_series
This discussion of the dental amalgam controversy outlines the debate over whether dental amalgam (the mercury alloy in dental fillings ) should be used. Supporters claim that it is safe, effective and long-lasting, while critics argue that amalgam is unsafe because it may cause mercury poisoning and other toxicity . [ 1 ] [ 2 ] [ 3 ] Supporters of amalgam fillings point out that it is safe, durable, [ 4 ] relatively inexpensive, and easy to use. [ 5 ] On average, amalgam lasts twice as long as resin composites , takes less time to place, is tolerant of saliva or blood contamination during placement (unlike composites), and is often about 20–30% less expensive. [ 6 ] Consumer Reports has suggested that many who claim dental amalgam is not safe are "prospecting for disease" and using pseudoscience to scare patients into more lucrative treatment options. [ 7 ] Those opposed to amalgam use suggest that modern composites are improving in strength. [ 8 ] In addition to their claims of possible health and ethical issues, opponents of dental amalgam fillings claim amalgam fillings contribute to mercury contamination of the environment. The World Health Organization (WHO) reports that health care facilities, including dental offices, account for as much as 5% of total wastewater mercury emissions. [ 9 ] The WHO also points out that amalgam separators, installed in the waste water lines of many dental offices, dramatically decrease the release of mercury into the public sewer system. [ 9 ] In the United States, most dental practices are prohibited from disposing amalgam waste down the drain. [ 10 ] Critics also point to cremation of dental fillings as an additional source of air pollution, contributing about 1% of global emissions. [ 11 ] The World Health Organization recommends a global phase out of dental mercury in their 2009 report on "Future Use of Materials For Dental Restorations, based on aiming for a general reduction of the use of mercury in all sectors, and based on the environmental impacts of mercury product production." [ 12 ] It is the position of the FDI World Dental Federation [ 13 ] as well as numerous dental associations and dental public health agencies worldwide [ 14 ] [ 15 ] [ 16 ] [ 17 ] [ 18 ] [ 19 ] [ 20 ] that amalgam restorations are safe and effective. Numerous other organizations have also publicly declared the safety and effectiveness of amalgam. These include the Mayo Clinic , [ 21 ] Health Canada , [ 22 ] Alzheimer's Association , [ 23 ] American Academy of Pediatrics, [ 24 ] Autism Society of America , [ 25 ] U.S. Environmental Protection Agency (EPA), [ 26 ] National Multiple Sclerosis Society, [ 27 ] New England Journal of Medicine , [ 28 ] International Journal of Dentistry , [ 29 ] National Council Against Health Fraud , [ 30 ] The National Institute of Dental and Craniofacial Research NIDCR, [ 31 ] American Cancer Society , [ 32 ] Lupus Foundation of America , [ 33 ] the American College of Medical Toxicology , [ 34 ] the American Academy of Clinical Toxicology , [ 34 ] Consumer Reports [ 7 ] Prevention , [ 35 ] WebMD [ 36 ] and the International Association for Dental Research . [ 37 ] The U.S. Food and Drug Administration (FDA) formerly stated that amalgam is "safe for adults and children ages 6 and above" [ 38 ] but now recommends against amalgam for children, pregnant/nursing women, and other high-risk groups. [ 39 ] Dental amalgam has had a long history and global impact. [ 3 ] It was first introduced in the Chinese materia medica of Su Kung in 659 A.D. 
during the Tang dynasty. [ 3 ] In Europe, Johannes Stockerus, a municipal physician in Ulm, Germany, recommended amalgam as a filling material as early as 1528. [ 3 ] In 1818, Parisian physician Louis Nicolas Regnart added one-tenth by weight of mercury to the fusible metals used as fillings at the time to create a metal alloy that remained temporarily soft at room temperature. Thus, amalgam (an alloy of mercury with another metal or metals, from the French word amalgame) was invented. This was further perfected in 1826, when Auguste Taveau of Paris used a silver paste made from mixing French silver-tin coins with mercury, which offered more plasticity and a quicker setting time. [ 3 ] In Europe, before 1818, carious teeth were either filled with a melted metal, usually gold or silver (which would often lead to death of the nerve of the tooth from thermal trauma), or the tooth would be extracted. [ 3 ] In 1855, Dr. J. Foster Flagg, a professor of dental pathology in Philadelphia, experimented with new mixtures of amalgam. In 1861, he presented his findings to the Pennsylvania Association of Dental Surgeons and, in 1881, he published his book, Plastic and Plastic Fillings . (Amalgam fillings were often called "plastic fillings" at the time.) The outcome of this work was that silver amalgam came to be regarded as "an excellent filling material" that expanded dentistry's "ability to save teeth". Around the same time, John and Charles Tomes in England conducted research on the expansion and contraction of the various amalgam products. During the American Civil War, the debate on the merits of amalgam continued. In dental meetings, decades of use and dental research brought recognition of the importance of good technique and proper mixture for long-term success. It was argued, "the fault was not in the material but in the manipulation.... Some men's amalgam is good universally, and some men's gold is bad universally; the difference lies in the preparation of the tooth and in the plug (filling)." [ 40 ] More controversy came in 1872, when an amalgam filling was reported as the cause of death of a middle-aged Nebraska man, resulting in a public outcry against the use of amalgam. [ 41 ] His physicians reported that the filling caused swelling of his mouth, throat, and windpipe, completely hindering respiration. Given that the involved tooth was a lower second molar, it was later considered very likely that the patient died from Ludwig's angina , which is a type of cellulitis , rather than mercury poisoning . Another alleged case of " ptyalism " causing headache, fever, rapid pulse, metallic taste, loss of appetite, and generalized malaise was reported in 1872 in a female patient following the insertion of eight amalgam fillings. [ 42 ] Later, however, another dentist examined the fillings and noted they had, in a short period, washed away, and that upon gentle pressure the metal crumbled away. He removed all the fillings with an explorer in three minutes and concluded that poor workmanship alone could have explained the patient's symptoms. Alfred Stock was a German chemist who reported becoming very ill in the 1920s and traced his illness to his amalgam fillings and resulting mercury intoxication. He described his recovery after the fillings were removed and believed that amalgam fillings would come to be seen as a "sin against humanity".
[ 43 ] Stock had also previously been exposed to toxic levels of mercury vapor during his work, due to his use of liquid mercury in some novel laboratory apparatus he invented. [ 44 ] Oral galvanism , amalgam disease , or galvanic shock was a term for the attribution of oral or systemic symptoms either to electric currents between metals in dental restorations and electrolytes in saliva or dental pulp, or to mercury released from the restorations. [ 45 ] [ 46 ] [ 47 ] Any association of either the currents or the mercury with such symptoms has been disproven. [ 46 ] [ 45 ] Beyond acute allergic reactions, amalgam has not been found to be associated with any adverse effects . [ 48 ] Very weak currents have been measured in the mouths of those with multiple dental fillings consisting of different alloys, but there was no association between the presence of current and symptoms, [ 45 ] and any symptoms associated with currents between oral fillings are likely to be psychosomatic in nature. [ 46 ] No association between the presence of mercury and symptoms has been found; the symptoms are likely to be psychosomatic in nature and do not improve with chelation therapy . [ 45 ] [ 47 ] [ 49 ] Claims that it causes a variety of symptoms such as oral discomfort, skin irritation , headaches and a metallic taste in the mouth have been discredited. [ 45 ] The condition was originally proposed in 1878, [ 50 ] and became well known in Sweden during the 1970s and 80s, because of a campaign to educate the public about mercury-containing amalgam fillings and to replace them with other materials such as ceramic or polymer restorations. [ 45 ] In the 1990s, several governments evaluated the effects of dental amalgam and concluded that the most likely health effects would be due to hypersensitivity or allergy. Germany, Austria, and Canada recommended against placing amalgam in certain individuals, such as pregnant women, children, those with renal dysfunction, and those with an allergy to metals. In 2004, the Life Sciences Research Office analyzed studies related to dental amalgam published after 1996 and concluded that mean urinary mercury concentration (μg of Hg/L in urine, HgU) was the most reliable estimate of mercury exposure. [ 51 ] It found that those with dental amalgam were unlikely to reach the levels where adverse effects are seen from occupational exposure (35 μg HgU). Some 95% of study participants had μg HgU below 4–5. Chewing gum, particularly for nicotine, along with more amalgam, seemed to pose the greatest risk of increasing exposure. One gum-chewer had 24.8 μg HgU. Studies have shown that the amount of mercury released during normal chewing is extremely low. It concluded that there was not enough evidence to support or refute many of the other claims such as increased risk of autoimmune disorders , but stated that the broad and nonspecific illness attributed to dental amalgam is not supported by the data. [ 51 ] Mutter in Germany, however, concludes, "removal of dental amalgam leads to permanent improvement of various chronic complaints in a relevant number of patients in various trials." [ 52 ] Hal Huggins , a Colorado dentist (prior to having his license revoked), was a notable critic of dental amalgams and other dental therapies he believed to be harmful. [ 53 ] His views on amalgam toxicity were featured on 60 Minutes [ 54 ] and he was later criticized by Consumer Reports as a dentist "prospecting for disease" with only an "aura of science".
[ 7 ] In 1996, a Colorado state judge recommended that Huggins's dental license be revoked, for tricking chronically ill patients into thinking that the true cause of their illness was mercury. Time reported the judge's conclusion that Huggins, "diagnosed 'mercury toxicity' in all his patients, including some without amalgam fillings." [ 55 ] Huggins's license was subsequently revoked by the Colorado State Board of Dental Examiners for gross negligence and the use of unnecessary and unproven procedures. [ 56 ] [ 57 ] [ 58 ] According to the WHO , all humans are exposed to some level of mercury. [ 59 ] Factors that determine whether health effects occur and their severity include the type of mercury concerned ( methylmercury and ethylmercury , commonly found in fish, being more serious than elemental mercury); the dose; the age or developmental stage of the person exposed (the foetus is most susceptible); the duration of exposure; and the route of exposure (inhalation, ingestion or dermal contact). [ 59 ] The universal standard for examining mercury toxicity is usually discussed in terms of the amount of mercury in the bloodstream for short-term exposure or the amount of mercury excreted in the urine relative to creatine for long-term mercury exposure. [ 7 ] Elemental mercury (which is a component of amalgam) is absorbed very differently than methylmercury (which is found in fish). [ 2 ] The exposure to mercury from amalgam restorations depends on the number and size of restorations, composition, chewing habits, food texture, grinding, brushing of teeth, and many other physiological factors. [ 2 ] The greatest degree of mercury exposure occurs during filling placement and removal. However, this is not the only time mercury vapors are released. When chewing for extended periods (more than 30 minutes) an increased level of mercury vapor is released. Vapor levels will return to normal approximately 90 minutes following chewing cessation. This contributes to a daily mercury exposure for those with amalgam fillings. [ 60 ] According to one dental textbook, eating seafood once a week raises urine mercury levels to 5 to 20 μg/L, which is equivalent to two to eight times the level of exposure that comes from numerous amalgam fillings. Neither exposure has any known health effect. [ 61 ] Scientists agree that dental amalgam fillings release elemental mercury vapor, but studies report different amounts. Estimates range from 1 to 3 micrograms (μg) per day according to the FDA. [ 62 ] The effects of that amount of exposure are also disputed. [ 51 ] [ 52 ] Newer studies sometimes use mercury vapor analysis instead of the standard exposure test. Because this test was designed for factories and large enclosures, Consumer Reports has reported that this is not an accurate method of analysis for the mouth. It is less reliable, less consistent, and tends to greatly exaggerate the amount of mercury inhaled. [ 7 ] Moreover, it is argued that this test additionally exaggerates the amount of mercury inhaled by assuming that all the mercury vapor released is inhaled. This assumption was reviewed by the U.S. Department of Health and Human Services and not found to be valid. Their research review found that most of the mercury vapor released from amalgam fillings is mixed with saliva and swallowed, some part is exhaled, and the remaining fraction is inhaled. [ 63 ] Of these amounts, it is important to note that the lungs absorb about 80% of inhaled mercury. 
[ 63 ] A study conducted by measuring the intraoral vapour levels over 24 hours in patients with at least nine amalgam restorations showed that the average daily dose of inhaled mercury vapour was 1.7 μg (range from 0.4 to 4.4 μg), which is approximately 1% of the threshold limit value of 300 to 500 μg/day established by the WHO, based on a maximum allowable environmental level of 50 μg/day in the workplace. [ 2 ] Critics point out that: (1) the workplace safety standards are based on allowable maxima in the workplace, not mercury body burden ; (2) the workplace safety numbers do not apply to continuous 24-hour exposure, but are limited to a normal work day and a 40-hour workweek; [ 64 ] and (3) the uptake/absorption numbers are averages and not worst-case patients (those most at risk). [ 65 ] A test that was done throughout the 1980s by some opposition groups and holistic dentists was the skin patch test for mercury allergies. As part of "prospecting for disease", Consumer Reports wrote that these groups had placed high doses of mercuric chloride on a skin patch, which was guaranteed to produce irritation on the patient's skin and subsequent revenue for the person administering the test. [ 7 ] The current recommendations for residential exposure (not including amalgam fillings already accounted for) are as follows: The ATSDR Action Level for indoor mercury vapor in residential settings is 1 μg/m 3 and the ATSDR MRL (Minimal Risk Level) for chronic exposure is 0.2 μg/m 3 [ 66 ] According to the ATSDR, the MRL(Minimal Risk Level) is an estimate of the level of daily exposure to a substance that is unlikely to cause adverse non-cancerous health effects. The Action Level is defined as an indoor air concentration of mercury that would prompt officials to consider implementing response actions. It is a recommendation and does not necessarily imply toxicity or health risks. [ 66 ] Breathing air with a concentration of 0.2 μg mercury/m 3 would lead to an inhaled amount of approximately 4 μg/day (respiratory volume of 20m 3 /day). About 80% of inhaled mercury vapor would be absorbed. [ 67 ] A 2003 monograph on mercury toxicity from the WHO concluded that dental amalgam contributes significantly to mercury body burden in humans with amalgam fillings and that dental amalgam is the most common form of exposure to elemental mercury in the general population, constituting a potentially significant source of exposure to elemental mercury. Estimates of daily intake from amalgam restorations range from 1 to 12.5 μg/day, with the majority of dental amalgam holders being exposed to less than 5 μg mercury/day. [ 67 ] They also note that this will continue to decline as the number of amalgam restorations is declining. As public pressure demands more research on amalgam safety, an increasing number of studies with larger sample sizes are being conducted. Those who are not opposed to amalgam claim that, aside from rare and localized tissue irritation, recent evidence-based research has continued to demonstrate no ill effects from the minute amounts of mercury exposure from amalgam fillings. [ 14 ] [ 68 ] [ 69 ] A 2004 systematic review conducted by the Life Sciences Research Office , whose clients include the FDA and NIH, concluded, "the current data are insufficient to support an association between mercury release from dental amalgam and the various complaints that have been attributed to this restoration material." 
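The exposure arithmetic quoted above can be written out explicitly; the inputs below are taken from the text (0.2 μg/m³ chronic MRL, roughly 20 m³ of air breathed per day, about 80% absorption of inhaled vapour) and the small script simply multiplies them.

```python
# Worked version of the residential-exposure arithmetic described above.
mrl_air = 0.2            # µg of mercury per m³ of air (ATSDR chronic MRL)
breathing_volume = 20.0  # m³ of air breathed per day (assumed figure from the text)
absorption = 0.80        # fraction of inhaled mercury vapour absorbed by the lungs

inhaled = mrl_air * breathing_volume    # ≈ 4 µg/day, as stated
absorbed = inhaled * absorption         # ≈ 3.2 µg/day retained
print(f"inhaled ≈ {inhaled:.1f} µg/day, absorbed ≈ {absorbed:.1f} µg/day")
```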
[ 51 ] A systematic review in 2009 demonstrated that mercury released from amalgam restorations does not give rise to toxic effects on the nervous system of children. [ 70 ] In 2014, a Cochrane Systematic review found "insufficient evidence to support or refute any adverse effects associated with amalgam or composite restorations." [ 71 ] Those opposed to dental amalgam suggest that mercury from dental amalgam may lead to nephrotoxicity , neurobehavioural changes, autoimmunity , oxidative stress, autism , skin and mucosa alterations, non-specific symptoms and complaints, Alzheimer's disease , calcium-building in the kidneys, kidney stones, thyroid issues, and multiple sclerosis . [ 52 ] Both those opposed and those not opposed to dental amalgam recognize that amalgam has been found to be a rare contributor to localized and temporary tissue irritation known as oral lichenoid lesions . [ 14 ] [ 68 ] [ 69 ] [ 72 ] These mild, lichenoid reactions have also been reported in composite resin fillings. [ 73 ] Those opposed to amalgam believe that amalgam fillings are also associated with increased risk of other autoimmune conditions such as multiple sclerosis (MS), lupus, thyroiditis, and eczema. [ 74 ] Consumer Reports considered these alleged associations between amalgam and chronic disease made by some health practitioners as "prospecting for diseases". [ 7 ] The National Multiple Sclerosis Society (USA) similarly has stated, "There is no scientific evidence to connect the development or worsening of MS with dental fillings containing mercury, and therefore no reason to have those fillings removed. Although poisoning with heavy metals-such as mercury, lead, or manganese can damage the nervous system and produce symptoms such as tremor and weakness, the damage is inflicted in a different way than occurs in MS and the process is also different." [ 27 ] The Lupus Foundation of America also states on their website, "At the present time, we do not have any scientific data that indicates that dental fillings may act as a trigger of lupus. In fact, it is highly unlikely that dental fillings aggravate or cause SLE." [ 33 ] In 2006, a literature review was undertaken to evaluate the research on amalgam and its potential health effects on dentists and dental staff. [ 75 ] It was reported that there is currently no conclusive epidemiological evidence regarding risks for adverse reproductive outcomes associated with mercury and dental professionals. It is mentioned that evidence to date fails to account for all confounding variables (such as alcohol consumption) and recommends more comprehensive and rigorous studies to adequately assess the hazards faced by dental personnel. [ 75 ] The American College of Medical Toxicology and the American Academy of Clinical Toxicology still claim that mercury from amalgams does not cause illness because "the amount of mercury that they release is not enough to cause a health problem". [ 34 ] In response to some people wanting their existing amalgam removed for fear of mercury poisoning, these societies advise that the removal of fillings is likely to cause a greater exposure to mercury than leaving the fillings in place. [ 34 ] These societies also claim that removal of amalgam fillings, in addition to being unnecessary health care and likely to cause more mercury exposure than leaving them in place, is also expensive. 
[ 34 ] Dentists who advocate the removal of amalgam fillings often recommend wearing breathing apparatus, using high-volume aspiration, and performing the procedure as quickly as possible. Sources of mercury from the diet, and the potential harm of the composite resins to replace the purportedly harmful amalgam fillings, may also need to be considered. [ 76 ] Alternative materials which may be suitable in some situations include composite resins, glass ionomer cements, porcelain, and gold alloys. [ 77 ] Most of these materials, with the notable exception of gold, have not been used as long as amalgam, and some are known to contain other potentially hazardous compounds. Teaching of amalgam techniques to dental students is declining in some schools in favor of composite resin, [ 78 ] and at least one school, the University of Nijmegen in the Netherlands, eliminated dental amalgam from the curriculum entirely in 2001. [ 79 ] This is largely a response to consumer pressure for white fillings for cosmetic reasons, and also because of the increasing longevity of modern resin composites. These alternative dental restorative materials are not free of potential health risks, such as allergenicity, inhalation of resin dust, cytotoxicity, and retinal damage from blue curing light. [ 80 ] Anti-amalgam sources typically promote the removal of amalgam fillings and the substitution with other materials. Detoxification may also be advised, including fasting, restricted dieting to avoid mercury-containing foods, and quasi- chelation therapies , allegedly to remove accumulated mercury from the body. [ 81 ] The American College of Medical Toxicology and the American Academy of Clinical Toxicology recommend against chelation therapy and say that chelation therapy can artificially and temporarily elevate the levels of heavy metals in the urine (a practice referred to as "provoked" urine testing). [ 34 ] They also mention that the chelating drugs may have significant side effects, including dehydration, hypocalcemia, kidney injury, liver enzyme elevations, hypotension, allergic reactions, and mineral deficiencies. [ 34 ] Better dental health overall coupled with increased demand for more modern alternatives such as resin composite fillings (which match the tooth color), as well as public concern about the mercury content of dental amalgam, have resulted in a steady decline in dental amalgam use [ 82 ] in developed countries, though overall amalgam use continues to rise worldwide. Given its superior strength, durability, and long life relative to the more expensive composite fillings, it will likely be used for many years to come. [ 83 ] [ 84 ] Over a lifetime, dietary sources of mercury are far higher than would ever be received from the presence of amalgam fillings in the mouth. For example, due to pollution of the world's oceans with heavy metals, products such as cod liver oil may contain significant levels of mercury. There is little evidence to suggest that amalgam fillings have any negative direct effects on pregnancy outcomes or on an infant post-pregnancy. A study, consisting of 72 pregnant women, was conducted to determine the effects of dental amalgam on fetuses in utero. Results indicated that although the amount of amalgam the mother had was directly related to the amount of mercury in the amniotic fluid, no negative effects on the fetus were found. 
A larger study, consisting of 5,585 women who had recently given birth, was used to determine if amalgam restorations during pregnancy had any effects on infant birthweight. Among the study group, 1,117 women had infants with low birth weights and 4,468 women had infants with normal birth weights. Approximately five percent of the women had one or more amalgam filling restorations during their pregnancy. These women had little to no difference in infant birth weight compared to the women who did not undergo amalgam restoration during pregnancy. [ 2 ] A 2006 Zogby International poll of 2,590 US adults found that 72% of respondents were not aware that mercury was a main component of dental amalgam and 92% of respondents would prefer to be told about mercury in dental amalgam before receiving it as a filling. [ 85 ] A 1993 study published in FDA Consumer found that 50% of Americans believed fillings containing mercury caused health problems. [ 86 ] Some dentists (including a member of the FDA's Dental Products Panel) suggest that there is an obligation to inform patients that amalgam contains mercury. [ 87 ] [ 88 ] A prominent debate occurred in the late 20th century, with consumer and regulatory pressure to eliminate amalgam being "at an all-time high". [ 88 ] In a 2006 nationwide poll, 76% of Americans were unaware that mercury is the primary component in amalgam fillings, [ 85 ] and this lack of informed consent was the most consistent issue raised in a recent U.S. Food and Drug Administration (FDA) panel on the issue by panel members. [ 88 ] The broad lack of knowledge among the public was also displayed when a December 1990 episode of the CBS news program 60 Minutes covered mercury in amalgam. This resulted in a nationwide amalgam scare and additional research into mercury release from amalgam. The following month Consumer Reports published an article criticizing the content of the broadcast, stating that it contained a great deal of false information and that the ADA spokesperson on the program was ill-prepared to defend the claims. [ 7 ] For example, 60 Minutes reported that Germany was planning to pass legislation within the year to ban amalgam, but the Institute of German Dentists said one month later that there was no such law pending. Also, one physiologist interviewed by Consumer Reports noted that the testimonials are mostly anecdotal, and both the reported symptoms and the rapid recovery time after the fillings are removed are physiologically inconsistent with that of mercury poisoning. Consumer Reports goes on to criticize how 60 Minutes failed to interview the many patients who had fillings or teeth removed, only to have the symptoms stay the same or get worse. [ 7 ] In 1991, the United States Food and Drug Administration concluded, "none of the data presented show a direct hazard to humans from dental amalgams." [ 89 ] In 2002, a class action lawsuit was initiated by patients who felt their amalgam fillings caused them harm. The lawsuit named the ADA, the New York Dental Association, and the Fifth District Dental Society for deceiving "[the] public about health risks allegedly associated with dental amalgam." On 18 February 2003, the New York Supreme Court dismissed the two amalgam-related lawsuits against organized dentistry, stating the plaintiffs had "failed to show a 'cognizable cause of action'". [ 90 ] The proper interpretation of the data is considered controversial only by those opposed to amalgam. 
The vast majority of past studies have concluded that amalgams are safe. However, although the vast majority of patients with amalgam fillings are exposed to levels too low to pose a health risk, many patients (i.e. those in top 0.1%) exhibit urine test results which are comparable to the maximum allowable legal limits for long-term work place (occupational) safety. [ 64 ] [ 65 ] Two recent randomized clinical trials in children [ 91 ] discovered no statistically significant differences in adverse neuropsychological or renal effects observed over five years in children whose caries were restored using dental amalgam or composite materials. In contrast, one study showed a trend of higher dental treatment need later in children with composite dental fillings, and thus claimed that amalgam fillings are more durable. However, the other study (published in JAMA ) cites increased mercury blood levels in children with amalgam fillings. The study states, "during follow-up [blood mercury levels were] 1.0 to 1.5 μg higher in the amalgam group than in the composite group." EPA considers high blood mercury levels to be harmful to the fetus and also states, "exposure at high levels can harm the brain, heart, kidneys, lungs, and immune system of people of all ages." Currently, the EPA has set the "safe" mercury exposure level to be at 5.8 μg of mercury per one liter of blood. [ 92 ] While mercury fillings themselves do not increase mercury levels above "safe" levels, they have been shown to contribute to such an increase. However, such studies were unable to find any negative neurobehavioral effects. [ 93 ] [ 92 ] [ 94 ] Environmental concerns over external costs exist as well. [ 95 ] In the United States, dental amalgam is the largest source of mercury received by sewage treatment plants. The mercury contaminates the treatment plant sludge , which is typically disposed of by land application , landfilling or incineration . [ 10 ] In the United States, several states, including New Jersey , [ 96 ] New York , [ 97 ] and Michigan , [ 98 ] required the installation of dental amalgam separators before 2017. [ 99 ] EPA promulgated an effluent guidelines regulation in 2017 which prohibits most dental practices from disposing amalgam waste down the drain. Most dental offices nationwide are now required to use amalgam separators. [ 10 ] [ 100 ] The WHO reported in 2005 that in the United Kingdom, mercury from amalgam accounted for 5% of total mercury emissions. [ 9 ] In Canada, dental amalgam was estimated to contribute one-third of the mercury in sewer system waste, but it is believed amalgam separators in dental offices may dramatically decrease this burden on the public sewer system. [ 9 ] The 2005 WHO report stated that mercury from amalgam was approximately 1% of total global mercury emissions, and that one-third of the total mercury in most sewage systems was discharged from dental offices. [ 9 ] Other studies have shown this to be a gross exaggeration or not reflective of developed countries. Concerning pollution in the United States , a study done in 1992 showed that batteries "accounted for 86 percent of discarded mercury and dental amalgam a mere 0.56 percent". [ 101 ] Mercury is an environmental contaminant and the WHO, OSHA , and NIOSH have established specific occupational exposure limits. Mercury imposes health risks upon the surrounding population. In economics, this pollution is considered an external cost not factored into the private costs of using mercury-based products. 
Environmental risks from amalgam can be mitigated by amalgam separators, and the ISO has issued standards regarding the proper handling and disposal of amalgam waste. [ 102 ] Mercury is a naturally occurring element that is present throughout the environment [ 103 ] [ 104 ] and the vast majority of the pollution (about 99%) comes from large-scale human industrial activity (such as coal-fired electricity generation, hydroelectric dams, and mining, which increase both airborne and waterborne mercury levels). [ 104 ] [ 105 ] Eventually, the airborne mercury finds its way into lakes, rivers, and oceans, where it is consumed by aquatic life. [ 104 ] Amalgam separators may dramatically decrease the release of mercury into the public sewer system, but they are not mandatory in some jurisdictions. [ 106 ] When mercury from these sources enters bodies of water, especially acidic bodies of water, it can be converted into the more toxic methylmercury. [ 107 ] Cremation of bodies containing amalgam restorations results in near-complete emission of the mercury into the atmosphere, as the temperature in cremation is far greater than the boiling point of mercury. In countries with high cremation rates (such as the UK), this mercury has become a significant concern. Proposals to remedy the situation have ranged from removing amalgam-containing teeth before cremation to installing activated carbon adsorption or other post-combustion mercury capture technology in the flue gas stream. According to the United Nations Environment Programme, it is estimated that globally about 3.6 tonnes of mercury vapor was emitted into the air through cremation in 2010, or about 1% of total global emissions. [ 11 ] Mercury emissions from cremation are growing in the US, both because cremation rates are increasing and because the number of teeth in the deceased is increasing due to better dental care. [ citation needed ] Since amalgam restorations are very durable and relatively inexpensive, many of the older deceased have amalgam restorations. [ citation needed ] According to work done in Great Britain, [ citation needed ] mercury emissions from cremation are expected to increase until at least 2020. The American Dental Association (ADA) has asserted that dental amalgam is safe and has held, "the removal of amalgam restorations from the non-allergic patient for the alleged purpose of removing toxic substances from the body, when such treatment is performed solely at the recommendation or suggestion of the dentist, is improper and unethical". [ 108 ] In its comments before the FDA's Dental Products Panel [ 109 ] of the Medical Devices Advisory Committee, [ 110 ] the ADA supported the 2009 FDA ruling on dental amalgam. [ 14 ] [ 111 ] The ADA states, "dental amalgam has an established record of safety and effectiveness, which the scientific community has extensively reviewed and affirmed." [ 112 ] [ 113 ] [ 114 ] The ADA also supports the 2017 EPA wastewater regulation and is providing information and assistance to its members in the implementation of amalgam separators. [ 115 ] The ADA asserts the best scientific evidence supports the safety of dental amalgam. [ 116 ] Clinical studies have not established a causal connection between dental amalgam and adverse health effects in the general population. [ 117 ] In 2002, Dr.
Maths Berlin of the Dental Material Commission published an overview and assessment of the scientific literature published between November 1997 and 2002 for the Swedish Government on amalgam and its possible environmental and health risks. [ 118 ] A final report was submitted to the Swedish Government in 2003 and included his report as an annex to the full report. In the final report from 2003, Berlin states that the 1997 summary had found "... no known epidemiological population study has demonstrated any adverse health effects in amalgam". He reports that researchers have been able to show effects of mercury at lower concentrations than before and states, "... the safety margin that it was thought existed concerning mercury exposure from amalgam has been erased." He recommends eliminating amalgam in dentistry for medical and environmental reasons as soon as possible. [ 118 ] After the FDA's deliberations and review of hundreds of scientific studies relating to the safety of dental amalgam, the FDA concluded, "clinical studies have not established a causal link between dental amalgam and adverse health effects in adults and children age six and older." [ 119 ] The FDA concluded that individuals age six and older are not at risk of mercury-associated health effects from mercury vapor exposure that come from dental amalgam. [ 111 ] In 2009, the FDA issued a final rule that classified dental amalgam as a "Class II" (moderate risk) device, placing it in the same category as composite resins and gold fillings. [ 14 ] In a press release announcing the reclassification, the agency again stated, "the levels [of mercury] released by dental amalgam fillings are not high enough to cause harm in patients." [ 120 ] Also, in the FDA final regulation on dental amalgam in 2009, the FDA recommended the product labeling of dental amalgam. The suggested labeling included: a warning against the use of dental amalgam in patients with mercury allergy, a warning that dental professionals use appropriate ventilation when handling dental amalgam, and a statement discussing scientific evidence on dental amalgam's risks and benefits to make informed decisions among patients and professional dentists. [ 111 ] [ 121 ] In 2020, the FDA updated its guidelines to recommend against amalgam for certain high-risk groups, including children, pregnant and nursing women, people with neurological disease, impaired kidney function, and known sensitivity to mercury, due to the potential harmful health effects of mercury vapor. [ 39 ] They acknowledge that breathing in mercury vapor may harm certain populations, but recommend against removal of amalgam fillings unless medically necessary. [ 122 ] Mercury in dental fillings is considered safe and effective in all countries practicing modern dentistry (see below). There are currently two countries, Norway and Sweden, that have introduced legislation to prohibit or restrict use of amalgam fillings; however, in both cases amalgam is part of a larger program of reducing mercury in the environment and includes the banning of mercury-based batteries, thermometers, light bulbs, sphygmomanometers, consumer electronics, vehicle components, etc. In many countries, unused dental amalgam after a treatment is subject to disposal protocols for environmental reasons. Over 100 countries are signatories to the United Nations " Minamata Convention on Mercury ". 
[ 123 ] Unlike mercury-based batteries, cosmetics, and medical devices, which were banned as of the year 2020, the treaty has not banned the use of dental amalgam, but allows phasing down amalgam use over some time appropriate to domestic needs, an approach advocated by the World Health Organization (WHO). [ 124 ] [ 125 ] FDI World Dental Federation recognizes the safety and effectiveness of amalgam restorations. FDI is a federation of approximately 200 national dental associations and specialist groups representing over 1.5 million dentists. In collaboration with the WHO, they have produced an FDI position statement and WHO consensus statement on dental amalgam. [ 13 ] Their position regarding the safety of dental amalgam is that, aside from rare allergic reactions and local side effects, "the small amount of mercury released from amalgam restorations, especially during placement and removal, has not been shown to cause any other adverse health effects." The paper goes on to say that there have been "no controlled studies published that show adverse systemic effects" from amalgam restorations, and there is no evidence that removing amalgam restorations relieves any general symptoms. More recently, FDI has published a resolution confirming that their position on the safety and effectiveness of amalgam has not changed despite the phasing out in some countries. [ 126 ] In the United States, numerous respected professional and non-profit organizations consider amalgam use safe and effective and have publicly declared such. [ 5 ] In addition to the American Dental Association , [ 14 ] [ 127 ] other American organizations, including the Mayo Clinic , [ 21 ] the U.S. Food and Drug Administration (FDA), [ 38 ] Alzheimer's Association , [ 23 ] American Academy of Pediatrics, [ 24 ] Autism Society of America , [ 25 ] U.S. Environmental Protection Agency , [ 26 ] National Multiple Sclerosis Society, [ 27 ] New England Journal of Medicine, [ 28 ] International Journal of Dentistry, [ 29 ] National Council Against Health Fraud , [ 30 ] The National Institute of Dental and Craniofacial Research NIDCR, [ 31 ] American Cancer Society , [ 32 ] Lupus Foundation of America , [ 33 ] Consumer Reports [ 7 ] and WebMD [ 36 ] have all given formal, public statements declaring that amalgam fillings are safe based on the best scientific evidence. On 28 July 2009, the U.S. Food and Drug Administration (FDA) recategorized amalgam as a class II medical device, which critics claim indicates a change in their perception of safety. The ADA has indicated that this new regulation places encapsulated amalgam in the same class of devices as most other restorative materials, including composite and gold fillings. [ 14 ] Despite the research regarding the safety of amalgam fillings, the state of California requires warning information given to patients for legal reasons (informed consent) as part of Proposition 65 . This warning also applied to resin fillings for a time, since they contain bisphenol A (BPA), a chemical known to cause reproductive toxicity at high doses. [ 128 ] In Canada, amalgam use is considered safe and effective by some groups. A 2005 position statement from the Canadian Dental Association (CDA) states, "current scientific evidence on the use of dental amalgam supports that amalgam is an effective and safe filling material that provides a long-lasting solution for a broad range of clinical situations. 
The CDA has established its position based on the current consensus of scientific and clinical experts and on recent extensive reviews of strong evidence by major North American and international organizations, which have satisfactorily countered any safety concerns." [ 15 ] Amalgam use is regulated by Health Canada, as are all medical treatments, [ 129 ] and Health Canada has also stated that dental amalgam is not causing illness in the general population. [ 22 ] Australia recognizes the safety and effectiveness of amalgam restorations. In 2012, the Australian Dental Association published a position paper on the safety of dental amalgam. [ 16 ] Their position is "Dental Amalgam has been used as a dental restorative material for more than 150 years. It has proved to be a durable, safe, and effective material which has been the subject of extensive research over this time" and "amalgam should continue to be available as a dental restorative material". [ 130 ] The Scientific Committee on Emerging and Newly Identified Health Risks (SCENIHR) is a scientific committee within the European Commission. In a 2008 document of 74 pages, its research on the subject of amalgam safety concluded "there is no scientific evidence for risks of adverse systemic effects exist [ sic ] and the current use of dental amalgam does not pose a risk of systemic disease." [ 131 ] England and Scotland recognize the safety and effectiveness of amalgam restorations. A policy statement from the British Dental Health Foundation states that they do not consider amalgams containing mercury a significant health risk. [ 132 ] Ireland recognizes the safety and effectiveness of amalgam restorations. The Irish Dental Association has published on their website: "Dental amalgam has been used on patients for over 150 years. All available worldwide research indicates that amalgam is not harmful to health.... No Government or reputable scientific, medical or dental body anywhere in the world accepts, on any published evidence, that dental amalgam is a health hazard." [ 18 ] The Irish Dental Association provides additional detail in a published patient information letter. [ 19 ] France has publicly recognized the safety and effectiveness of amalgam restorations. A position paper on the Association Dentaire Française website states that amalgam restorations have proven to have no biological or toxic effect. [ 20 ] They also state that no serious pathological finding has ever been observed and that no general toxic effect has been scientifically demonstrated to date. [ 20 ] According to the paper, the most exposed group remains dentists, among whom no occupational disease related to mercury has been identified, apart from rare allergies; these amalgam allergies are reported to be about 40 times less frequent than allergies to resin fillings. [ 20 ] During the 1980s and 1990s in Norway, there was considerable and intense public debate on the use of dental amalgam. [ 133 ] The Norwegian Dental Patients Association (Forbundet Tenner og Helse), made up of people who believe they suffered health effects from amalgam fillings, was a driving force in this debate. [ 133 ] During this time, the media often featured interviews with people claiming that their health problems were caused by amalgam fillings, and who had regained their health after replacing their amalgam fillings with a different material. Some scientific studies also reported that patients had been restored to health after having had their amalgam fillings replaced. 
However, these studies were heavily disputed at the time and the Norwegian Board of Health still maintains there is no scientifically proven connection between dental amalgam and health problems. [ 133 ] In 1991, organized through the ministry of the environment, Norway began phasing out the use of most mercury-containing products (not limited to amalgam fillings but also including mercury-based batteries, thermometers, sphygmomanometers, consumer electronics, vehicle components, etc.). [ 134 ] The ban on the import, export, and use of most mercury-based products began on 1 January 2008. [ 134 ] The Norwegian officials stressed that this is not a decision based on using an unsafe health product, but rather that the "overall, long term goal is to eliminate the use and release of mercury to the environment". [ 133 ] Despite the mercury ban, dental offices in Norway may apply for exemptions to use amalgam on a case-by-case basis. [ 133 ] Similar to Norway, from 1995 to 2009 the Environment Ministry of the Government of Sweden gradually banned the import and use of all mercury-based products (not limited to amalgam fillings alone, but also including mercury-based batteries, thermometers, sphygmomanometers, consumer electronics, vehicle components, lightbulbs, analytical chemicals, cosmetics, etc.). [ 135 ] [ 136 ] These mercury-based products were phased out for environmental reasons and precautionary health reasons. [ 137 ] Like Norway, there was considerable public pressure in the years leading up to the ban. [ 133 ] Since the ban, the Government of Sweden continued to investigate ways of reducing mercury pollution. [ 138 ] The Swedish Chemicals Agency states that they may grant exemptions on the use of amalgam on a case-by-case basis. [ 138 ] Following the Minamata Convention on Mercury, from July 2018 onwards, the EU Mercury Regulation prohibits the use of dental amalgam in children under 15 years old and pregnant or breastfeeding women. Additional requirements include the use of pre-encapsulated mercury and the ethical disposal of waste amalgam. [ 139 ] On 1 January 2025, dental amalgam was prohibited for use in the EU, except when deemed strictly necessary by the dentist based on the specific needs of the patient. [ 140 ] The British Dental Association has worked with the Council of European Dentists to prevent an immediate ban of amalgam until further research into practicalities has been undertaken, [ 141 ] which is currently ongoing. [ 142 ] [ 143 ] The European Commission will report to European Parliament by June 2020, and to the European Council by 2030 regarding the viability of ending dental amalgam use by 2030. [ 139 ] In Japan , the use of amalgam began to decline around the 1990s; since 2016, fillings with amalgam alloys have been excluded from insurance coverage . Amalgam is still allowed as of 2023, but is rarely used because it is very expensive. Dental composite and palladium alloys are used instead. [ 144 ]
https://en.wikipedia.org/wiki/Galvanic_shock
Galvanism is a term invented by the late 18th-century physicist and chemist Alessandro Volta to refer to the generation of electric current by chemical action. [ 2 ] The term also came to refer to the discoveries of its namesake, Luigi Galvani , specifically the generation of electric current within biological organisms and the contraction/convulsion of biological muscle tissue upon contact with electric current. [ 3 ] While Volta theorized and later demonstrated the phenomenon of his "Galvanism" to be replicable with otherwise inert materials, Galvani thought his discovery to be a confirmation of the existence of "animal electricity," a vital force which gave life to organic matter. [ 4 ] Galvanic phenomena were described in the literature before it was understood that they were of an electrical nature. In 1752, when the Swiss mathematician and physicist Johann Georg Sulzer placed his tongue between a piece of lead and a piece of silver, joined at their edges, he perceived a taste similar to that of iron(II) sulfate . Neither of the metals alone produced this taste. He realized that the contact between the metals probably did not produce a solution of either on the tongue. He did, however, not realize that this was an electrical phenomenon. [ 5 ] He concluded that the contact between the metals caused their particles to vibrate, producing this taste by stimulating the nerves of the tongue. [ 6 ] If we join two pieces, one of lead, and the other of silver, so that the two edges join, and if we approach them with the tongue we will feel some taste, quite similar to the taste of vitriol of iron [iron(II) sulfate], while each piece apart gives no trace of this taste. It is not probable that through this junction of the two metals, any solution of one or the other occurs, and that the dissolved particles penetrate the tongue. We must therefore conclude that the junction of these metals produces in one or the other, or in both, a vibration in their particles, and that this vibration, which must necessarily affect the nerves of the tongue, produces there the taste mentioned. According to popular legend, Galvani discovered the effects of electricity on muscle tissue when investigating an unrelated phenomenon which required skinned frogs in the 1780s and 1790s. His assistant is claimed to have accidentally touched a scalpel to the sciatic nerve of the frog and this resulted in a spark and animation of its legs. [ 7 ] This was building on the theories of Giovanni Battista Beccaria , Felice Fontana , Leopoldo Marco Antonio Caldani , and Tommaso Laghi [ it ] . [ 3 ] Galvani was investigating the effects of distant atmospheric electricity (lightning) on prepared frog legs when he discovered the legs convulsed not only when lightning struck but also when he pressed the brass hooks attached to the frog's spinal cord to the iron railing they were suspended from. [ 8 ] In his laboratory, Galvani later discovered that he could replicate this phenomenon by touching metal electrodes of brass connected to the frog's spinal cord to an iron plate. He concluded that this was proof of "animal electricity," the electric power which animated living things. [ 3 ] Alessandro Volta, a contemporary physicist, believed that the effect was explicable not by any vital force but rather it was the presence of two different metals that was generating the electricity. Volta demonstrated his theory by creating the first chemical electric battery. 
[ 9 ] Despite their differences in opinion, Volta named the phenomenon of the chemical generation of electricity "Galvanism" after Galvani. [ 2 ] On March 27, 1791, Galvani published a book about his work on animal electricity. It contained comprehensive details of his 11 years of research and experimentation on the topic. [ 10 ] The 1797 edition of Gren ’s Grundriss der Naturlehre provides the first explicit definition of 'galvanism' as clearly reflecting Volta’s opinion in the following terms: Galvani from Bologna was the first to observe muscular motions elicited by the contact between two different metals; after him, the phenomena of this sort were termed and included under the name of Galvanism. [ 11 ] Giovanni Aldini , Galvani's nephew, continued his uncle's work after Luigi Galvani died in 1798. [ 12 ] In 1803, Aldini performed a famous public demonstration of the electro-stimulation technique of deceased limbs on the corpse of an executed criminal George Foster at Newgate in London . [ 13 ] [ 14 ] The Newgate Calendar describes what happened when the galvanic process was used on the body: On the first application of the process to the face, the jaws of the deceased criminal began to quiver, and the adjoining muscles were horribly contorted, and one eye was actually opened. In the subsequent part of the process the right hand was raised and clenched, and the legs and thighs were set in motion. [ 15 ] Galvani has been called the father of electrophysiology . The debate between Galvani and Volta "would result in the creation of electrophysiology, electromagnetism, electrochemistry and the electric battery." [ 16 ] Mary Shelley's Frankenstein , wherein a man stitches together a human body from corpses and brings it to life, was inspired in part by the theory and demonstrations of Galvanism which may have been conducted by James Lind . [ 17 ] [ 18 ] Although the Creature was described in later works as a composite of whole body parts grafted together from cadavers and reanimated by the use of electricity, this description is not consistent with Shelley's work; [ 19 ] both the use of electricity and the cobbled-together image of Frankenstein's monster were more the result of James Whale's popular 1931 film adaptation of the story . Galvanism influenced metaphysical thought in the domain of abiogenesis , the underlying process of the generation of living forms. In 1836, Andrew Crosse recorded what he referred to as "the perfect insect, standing erect on a few bristles which formed its tail," as having appeared during an experiment wherein he used electricity to produce mineral crystals. While Crosse himself never claimed to have generated the insects, even in private, the scientific world at the time viewed the connection between life and electricity to be sufficiently clear that he received threats against his life for this "blasphemy." [ 20 ] Giovanni Aldini is claimed to have applied Galvanic principles (application of electricity to biological organisms) in successfully alleviating the symptoms of "several cases of insanity", and with "complete success". [ 21 ] Today, electroconvulsive therapy is used as a treatment option for severely depressed pregnant mothers [ 22 ] (as it is the least harmful for the developing fetus) and people suffering treatment-resistant major depressive disorder . It is found to be effective for half of those who receive treatment while the other half may relapse within 12 months. 
[ 23 ] The modern application of electricity to the human body for medical diagnostics and treatments is practiced under the term electrophysiology . This includes the monitoring of the electric activity of the heart, muscles, and even the brain, respectively termed electrocardiography , electromyography , and electrocorticography .
https://en.wikipedia.org/wiki/Galvanism
Galvanization ( also spelled galvanisation ) [ 1 ] is the process of applying a protective zinc coating to steel or iron , to prevent rusting . The most common method is hot-dip galvanizing , in which the parts are coated by submerging them in a bath of hot, molten zinc. [ 2 ] The zinc coating, when intact, prevents corrosive substances from reaching the underlying iron. [ 3 ] Additional electroplating such as a chromate conversion coating may be applied to provide further surface passivation to the substrate material. [ 4 ] The process is named after the Italian physician, physicist, biologist and philosopher Luigi Galvani (9 September 1737 – 4 December 1798). The earliest known example of galvanized iron was discovered on 17th-century Indian armour in the Royal Armouries Museum collection in the United Kingdom. [ 5 ] The term "galvanized" can also be used metaphorically of any stimulus which results in activity by a person or group of people. [ 6 ] In modern usage, the term "galvanizing" has largely come to be associated with zinc coatings, to the exclusion of other metals. Galvanic paint, a precursor to hot-dip galvanizing , was patented by Stanislas Sorel , of Paris , on June 10, 1837, as an adoption of a term from a highly fashionable field of contemporary science, despite having no evident relation to it. [ 7 ] Hot-dip galvanizing deposits a thick, robust layer of zinc iron alloys on the surface of a steel item. In the case of automobile bodies, where additional decorative coatings of paint will be applied, a thinner form of galvanizing is applied by electrogalvanizing . The hot-dip process generally does not reduce strength to a measurable degree, with the exception of high-strength steels where hydrogen embrittlement can become a problem. [ 8 ] Thermal diffusion galvanizing, or Sherardizing , provides a zinc diffusion coating on iron- or copper-based materials. [ 9 ] [ 10 ] Galvanized steel can last for many decades if other supplementary measures are maintained, such as paint coatings and additional sacrificial anodes . Corrosion in non-salty environments is caused mainly by levels of sulfur dioxide in the air. [ 11 ] This is the most common use for galvanized metal; hundreds of thousands of tons of steel products are galvanized annually worldwide. In developed countries, most larger cities have several galvanizing factories, and many items of steel manufacture are galvanized for protection. Typically these include street furniture, building frameworks, balconies, verandahs, staircases, ladders, walkways, and more. Hot dip galvanized steel is also used for making steel frames as a basic construction material for steel frame buildings. [ 12 ] In the early 20th century, galvanized piping swiftly took the place of previously used cast iron and lead in cold-water plumbing . Galvanized piping rusts from the inside out, building up layers of plaque on the inside of the piping, causing both water pressure problems and eventual pipe failure. These plaques can flake off, leading to visible impurities in water and a slight metallic taste. The life expectancy of galvanized piping is about 40–50 years, [ 13 ] but it may vary on how well the pipes were built and installed. Pipe longevity also depends on the thickness of zinc in the original galvanizing, which ranges on a scale from G01 to G360. [ 14 ]
https://en.wikipedia.org/wiki/Galvanization
Galvannealed or galvanneal (galvannealed steel) is the result from the processes of galvanizing followed by annealing of sheet steel . Galvannealed steel is a matte uniform grey color, which can be easily painted. In comparison to galvanized steel the coating is harder, and more brittle. Production of galvannealed sheet steel begins with hot dip galvanization of sheet steel. After passing through the galvanizing zinc bath the sheet steel passes through air knives to remove excess zinc, and is then heated in an annealing furnace for several seconds causing iron and zinc layers to diffuse into one another causing the formation of zinc-iron alloy layers at the surface. The annealing step is performed with the strip still hot after the galvanizing step, with the zinc still liquid. [ 1 ] The galvanising bath contains slightly over 0.1% aluminium, added to form a layer bonding between the iron and coated zinc. [ 2 ] [ 3 ] Annealing temperatures are around 500 to 565 °C. [ 2 ] Pre-1990 annealing lines used gas-fired heating; post-1990s the use of induction furnaces became common. [ 1 ] Three distinct alloys are identified in the galvannealed surface. From the steel boundary these are named the Gamma (Γ), Zeta (ζ), and Delta (δ) layers, of compositions Fe 3 Zn 10 , FeZn 10 , FeZn 13 respectively; resulting in an overall bulk iron content of 9-12%. The layers also contain around 1-4% aluminium. Composition depends primarily on heating time and temperature, limited by the diffusion of the two metals. [ 2 ] [ 3 ] [ 1 ] The resulting coating has a matte appearance, and is hard and brittle - under further working such as pressing or bending powder is produced from degradation of the coating, together with cracks on the surface. [ 3 ] In comparison to a zinc (galvanized) coating galvannealed has better spot weldability, and is paintable, [ 4 ] Due to iron present in the surface alloy phase galvanneal develops a reddish patina in moist environments - it is generally used painted. [ 5 ] Zinc phosphate coating is a common pre-painting surface treatment. [ 4 ] Galvannealed sheet can also be produced from electroplated zinc steel sheet. [ 6 ] Patents relating to Galvannealed wire were obtained by the Keystone Steel and Wire Company ( Peoria, Illinois , USA) c. 1923. The company used the name "Galvannealed" as a brand name. [ 7 ] The key early patent was US patent No. 1430648 (J.L. Herman, 1922, Peoria, Illinois, USA) "Process of coating and treating materials having an iron base" . The patent described the galvannealing process with specific reference to iron wires. [ 8 ] A major market for galvannealed steel is the automobile industry . [ 9 ] In the mid 1980s, the Chrysler Corporation pioneered the use of Galvannealed sheet steels in the manufacture of their vehicles. In the 1990s galvannealled coatings were used by Honda , Toyota and Ford , with hot dip galvanized , electrogalvanized and other coatings (e.g. Zn-Ni) being used by other manufacturers, with variations depending on the part within the car frame, as well as due to local price differences. [ 10 ] Galvannealed steel is the preferred material for use in the construction of permanent debris and linen chute systems. [ 11 ]
https://en.wikipedia.org/wiki/Galvannealed
Galvanoluminescence [ 1 ] [ 2 ] is the emission of light produced by the passage of an electric current through an appropriate electrolyte in which an electrode , made of certain metals such as aluminium or tantalum , has been immersed. An example is the electrolysis of sodium bromide (NaBr).
https://en.wikipedia.org/wiki/Galvanoluminescence
Galvinoxyl is a commercially available radical scavenger . [ 1 ] It finds use both as a probe for studying radical reactions and as an inhibitor of radical polymerization . It may be synthesized by oxidation of the parent phenol with lead dioxide or potassium hexacyanoferrate(III) . Its radical structure is confirmed by the loss of the O–H stretch in the IR spectrum and by electron spin resonance ; it is stable even in the presence of oxygen. [ 2 ] [ 3 ]
https://en.wikipedia.org/wiki/Galvinoxyl
Gamas's theorem is a result in multilinear algebra which states the necessary and sufficient conditions for a tensor symmetrized by an irreducible representation of the symmetric group S n {\displaystyle S_{n}} to be zero. It was proven in 1988 by Carlos Gamas. [ 1 ] Additional proofs have been given by Pate [ 2 ] and Berget. [ 3 ] Let V {\displaystyle V} be a finite-dimensional complex vector space and λ {\displaystyle \lambda } be a partition of n {\displaystyle n} . From the representation theory of the symmetric group S n {\displaystyle S_{n}} it is known that the partition λ {\displaystyle \lambda } corresponds to an irreducible representation of S n {\displaystyle S_{n}} . Let χ λ {\displaystyle \chi ^{\lambda }} be the character of this representation. The tensor v 1 ⊗ v 2 ⊗ ⋯ ⊗ v n ∈ V ⊗ n {\displaystyle v_{1}\otimes v_{2}\otimes \dots \otimes v_{n}\in V^{\otimes n}} symmetrized by χ λ {\displaystyle \chi ^{\lambda }} is defined to be χ λ ( e ) n ! ∑ σ ∈ S n χ λ ( σ ) v σ ( 1 ) ⊗ v σ ( 2 ) ⊗ ⋯ ⊗ v σ ( n ) , {\displaystyle {\frac {\chi ^{\lambda }(e)}{n!}}\sum _{\sigma \in S_{n}}\chi ^{\lambda }(\sigma )v_{\sigma (1)}\otimes v_{\sigma (2)}\otimes \dots \otimes v_{\sigma (n)},} where e {\displaystyle e} is the identity element of S n {\displaystyle S_{n}} . Gamas's theorem states that the above symmetrized tensor is non-zero if and only if it is possible to partition the set of vectors { v i } {\displaystyle \{v_{i}\}} into linearly independent sets whose sizes are in bijection with the lengths of the columns of the partition λ {\displaystyle \lambda } .
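The general statement involves the character values of an arbitrary irreducible representation, but the special case λ = (1, 1, …, 1), whose character is the sign of the permutation, is easy to check numerically: the Young diagram is a single column of length n, so Gamas's criterion says the symmetrized (here, antisymmetrized) tensor is non-zero exactly when all n vectors are linearly independent. The sketch below (Python with NumPy; the function names are ours, and only this one-column special case is handled) illustrates that behaviour.

```python
import numpy as np
from math import factorial
from itertools import permutations

def perm_sign(p):
    """Sign of a permutation given as a tuple of 0-based indices."""
    sign, seen = 1, [False] * len(p)
    for i in range(len(p)):
        if not seen[i]:
            # Follow the cycle through i; a cycle of length L contributes (-1)**(L-1).
            j, length = i, 0
            while not seen[j]:
                seen[j] = True
                j = p[j]
                length += 1
            if length % 2 == 0:
                sign = -sign
    return sign

def symmetrize_sign(vectors):
    """Tensor v1 ⊗ ... ⊗ vn symmetrized by the sign character chi(sigma) = sgn(sigma),
    i.e. the partition lambda = (1, ..., 1); chi(e) = 1, so the prefactor is 1/n!."""
    n, dim = len(vectors), len(vectors[0])
    out = np.zeros((dim,) * n)
    for p in permutations(range(n)):
        term = vectors[p[0]]
        for k in p[1:]:
            term = np.tensordot(term, vectors[k], axes=0)  # outer product builds the tensor
        out += perm_sign(p) * term
    return out / factorial(n)

# Three linearly independent vectors: the symmetrized tensor is non-zero,
# as Gamas's criterion predicts for a single column of length 3.
independent = [np.array([1.0, 0, 0]), np.array([0, 1.0, 0]), np.array([0, 0, 1.0])]
print(np.abs(symmetrize_sign(independent)).max())   # non-zero (1/6)

# Make the third vector a combination of the first two: the tensor vanishes.
dependent = [np.array([1.0, 0, 0]), np.array([0, 1.0, 0]), np.array([1.0, 1.0, 0])]
print(np.abs(symmetrize_sign(dependent)).max())     # ≈ 0
```

For partitions with more than one column the same symmetrizer can be formed, but it requires the full character table of S_n rather than just the sign of each permutation.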
https://en.wikipedia.org/wiki/Gamas's_theorem
Statistical inference might be thought of as gambling theory applied to the world around us. The myriad applications for logarithmic information measures tell us precisely how to take the best guess in the face of partial information. [ 1 ] In that sense, information theory might be considered a formal expression of the theory of gambling. It is no surprise, therefore, that information theory has applications to games of chance. [ 2 ] Kelly betting or proportional betting is an application of information theory to investing and gambling . Its discoverer was John Larry Kelly, Jr . Part of Kelly's insight was to have the gambler maximize the expectation of the logarithm of his capital, rather than the expected profit from each bet. This is important, since in the latter case, one would be led to gamble all he had when presented with a favorable bet, and if he lost, would have no capital with which to place subsequent bets. Kelly realized that it was the logarithm of the gambler's capital which is additive in sequential bets, and "to which the law of large numbers applies." A bit is the amount of entropy in a bettable event with two possible outcomes and even odds. Obviously we could double our money if we knew beforehand what the outcome of that event would be. Kelly's insight was that no matter how complicated the betting scenario is, we can use an optimum betting strategy, called the Kelly criterion , to make our money grow exponentially with whatever side information we are able to obtain. The value of this "illicit" side information is measured as mutual information relative to the outcome of the bettable event: I ( X ; Y ∣ I ) = E Y [ D K L ( P ( X ∣ Y , I ) ∥ P ( X ∣ I ) ) ] {\displaystyle I(X;Y\mid I)=\mathbb {E} _{Y}\left[D_{\mathrm {KL} }\left(P(X\mid Y,I)\,\|\,P(X\mid I)\right)\right]} where Y is the side information, X is the outcome of the bettable event, and I is the state of the bookmaker's knowledge. This is the average Kullback–Leibler divergence , or information gain, of the a posteriori probability distribution of X given the value of Y relative to the a priori distribution, or stated odds, on X . Notice that the expectation is taken over Y rather than X : we need to evaluate how accurate, in the long term, our side information Y is before we start betting real money on X . This is a straightforward application of Bayesian inference . Note that the side information Y might affect not just our knowledge of the event X but also the event itself. For example, Y might be a horse that had too many oats or not enough water. The same mathematics applies in this case, because from the bookmaker's point of view, the occasional race fixing is already taken into account when he makes his odds. The nature of side information is extremely finicky. We have already seen that it can affect the actual event as well as our knowledge of the outcome. Suppose we have an informer, who tells us that a certain horse is going to win. We certainly do not want to bet all our money on that horse just upon a rumor: that informer may be betting on another horse, and may be spreading rumors just so he can get better odds himself. Instead, as we have indicated, we need to evaluate our side information in the long term to see how it correlates with the outcomes of the races. This way we can determine exactly how reliable our informer is, and place our bets precisely to maximize the expected logarithm of our capital according to the Kelly criterion. Even if our informer is lying to us, we can still profit from his lies if we can find some reverse correlation between his tips and the actual race results. 
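As a minimal numerical sketch of the quantity just described, the snippet below (Python; the two-horse race, the tipster's reliability, and the bookmaker's stated odds are all invented for illustration) averages the Kullback–Leibler divergence between the posterior on the winner given the tip and the bookmaker's prior, over the possible tips.

```python
import numpy as np

def side_information_value(prior, posterior_given_y, p_y):
    """Average KL divergence (in bits) between the posterior P(X|Y=y) and the
    bookmaker's prior P(X), weighted by the probability of each signal y."""
    value = 0.0
    for py, post in zip(p_y, posterior_given_y):
        value += py * sum(p * np.log2(p / q) for p, q in zip(post, prior) if p > 0)
    return value

# Hypothetical two-horse race: bookmaker's implied (prior) probabilities.
prior = [0.5, 0.5]
# Our tipster's signal Y takes two values with equal probability; given the
# signal, our posterior over the winner shifts one way or the other.
p_y = [0.5, 0.5]
posterior_given_y = [[0.8, 0.2], [0.3, 0.7]]

print(round(side_information_value(prior, posterior_given_y, p_y), 3), "bits per race")
```

Because the divergence is taken against the bookmaker's stated odds rather than the true marginal of X, the number measures the edge over the bookmaker rather than the textbook mutual information, which is exactly the distinction drawn in the passage above.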
Doubling rate in gambling on a horse race is [ 3 ] W ( b , p ) = E [ log 2 ⁡ S ] = ∑ i = 1 m p i log 2 ⁡ ( b i o i ) {\displaystyle W(b,p)=\mathbb {E} [\log _{2}S]=\sum _{i=1}^{m}p_{i}\log _{2}(b_{i}o_{i})} where there are m {\displaystyle m} horses, the probability of the i {\displaystyle i} th horse winning being p i {\displaystyle p_{i}} , the proportion of wealth bet on the horse being b i {\displaystyle b_{i}} , and the odds (payoff) being o i {\displaystyle o_{i}} (e.g., o i = 2 {\displaystyle o_{i}=2} if the i {\displaystyle i} th horse winning pays double the amount bet). This quantity is maximized by proportional (Kelly) gambling: b = p {\displaystyle b=p} for which W ∗ ( p ) = ∑ i p i log 2 ⁡ o i − H ( p ) {\displaystyle W^{*}(p)=\sum _{i}p_{i}\log _{2}o_{i}-H(p)} where H ( p ) {\displaystyle H(p)} is information entropy . An important but simple relation exists between the amount of side information a gambler obtains and the expected exponential growth of his capital (Kelly): E [ log 2 ⁡ K t ] = log 2 ⁡ K 0 + ∑ i = 1 t H i {\displaystyle \mathbb {E} [\log _{2}K_{t}]=\log _{2}K_{0}+\sum _{i=1}^{t}H_{i}} for an optimal betting strategy, where K 0 {\displaystyle K_{0}} is the initial capital, K t {\displaystyle K_{t}} is the capital after the t th bet, and H i {\displaystyle H_{i}} is the amount of side information obtained concerning the i th bet (in particular, the mutual information relative to the outcome of each bettable event). This equation applies in the absence of any transaction costs or minimum bets. When these constraints apply (as they invariably do in real life), another important gambling concept comes into play: in a game with negative expected value, the gambler (or unscrupulous investor) must face a certain probability of ultimate ruin, which is known as the gambler's ruin scenario. Note that even food, clothing, and shelter can be considered fixed transaction costs and thus contribute to the gambler's probability of ultimate ruin. This equation was the first application of Shannon's theory of information outside its prevailing paradigm of data communications (Pierce). The logarithmic probability measure self-information or surprisal, [ 4 ] whose average is information entropy /uncertainty and whose average difference is KL-divergence , has applications to odds-analysis all by itself. Its two primary strengths are that surprisals: (i) reduce minuscule probabilities to numbers of manageable size, and (ii) add whenever probabilities multiply. For example, one might say that "the number of states equals two to the number of bits" i.e. #states = 2^#bits . Here the quantity that's measured in bits is the logarithmic information measure mentioned above. Hence there are N bits of surprisal in landing all heads on one's first toss of N coins. The additive nature of surprisals, and one's ability to get a feel for their meaning with a handful of coins, can help one put improbable events (like winning the lottery, or having an accident) into context. For example if one out of 17 million tickets is a winner, then the surprisal of winning from a single random selection is about 24 bits. Tossing 24 coins a few times might give you a feel for the surprisal of getting all heads on the first try. The additive nature of this measure also comes in handy when weighing alternatives. For example, imagine that the surprisal of harm from a vaccination is 20 bits. If the surprisal of catching a disease without it is 16 bits, but the surprisal of harm from the disease if you catch it is 2 bits, then the surprisal of harm from NOT getting the vaccination is only 16+2=18 bits. Whether or not you decide to get the vaccination (e.g. the monetary cost of paying for it is not included in this discussion), you can in that way at least take responsibility for a decision informed by the fact that not getting the vaccination involves more than one bit of additional risk. 
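A short sketch of the surprisal arithmetic used in the examples above (Python; the vaccination figures are the article's illustrative numbers, not real risk estimates):

```python
from math import log2

def surprisal_bits(p):
    """Surprisal of an event of probability p, in bits: sbits = -log2(p)."""
    return -log2(p)

def probability_from_bits(sbits):
    """Invert the relation: probability = 1 / 2**sbits."""
    return 1 / 2**sbits

# One winning ticket out of 17 million: about 24 bits of surprisal.
print(round(surprisal_bits(1 / 17_000_000), 1))   # ~24.0

# The vaccination comparison from the text: harm from vaccinating (20 bits)
# versus catching the disease (16 bits) and then being harmed by it (2 bits).
harm_from_vaccine = 20
harm_without_vaccine = 16 + 2
print(harm_from_vaccine - harm_without_vaccine,
      "bit(s) more surprisal on the vaccine-harm path")

# Evidence bits for a true/false assertion that is true with probability p:
p = 0.9
ebits = log2(p / (1 - p))   # odds ratio = p/(1-p) = 2**ebits
print(round(ebits, 2), "bits of evidence in favour")
```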
More generally, one can relate probability p to bits of surprisal sbits as probability = 1/2^sbits . As suggested above, this is mainly useful with small probabilities. However, Jaynes pointed out that with true-false assertions one can also define bits of evidence ebits as the surprisal against minus the surprisal for. This evidence in bits relates simply to the odds ratio = p/(1−p) = 2^ebits , and has advantages similar to those of self-information itself. Information theory can be thought of as a way of quantifying information so as to make the best decision in the face of imperfect information. That is, how to make the best decision using only the information you have available. The point of betting is to rationally assess all relevant variables of an uncertain game/race/match, then compare them to the bookmaker's assessments, which usually come in the form of odds or spreads, and to place the proper bet if the assessments differ sufficiently. [ 5 ] The area of gambling where this has the most use is sports betting. Sports handicapping lends itself to information theory extremely well because of the availability of statistics. For many years noted economists have tested different mathematical theories using sports as their laboratory, with vastly differing results. One theory regarding sports betting is that it is a random walk. A random walk is a scenario in which new information, prices, and returns fluctuate by chance; this is part of the efficient-market hypothesis. The underlying belief of the efficient market hypothesis is that the market will always make adjustments for any new information. Therefore no one can beat the market, because everyone is trading on the same information to which the market has already adjusted. However, according to Fama, [ 6 ] to have an efficient market three qualities need to be met. Statisticians have shown that it is the third of these conditions which allows for information theory to be useful in sports handicapping. When not everyone agrees on how information will affect the outcome of an event, differing opinions result.
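Tying the handicapping discussion back to the doubling-rate formula given earlier, the sketch below (Python; the three-horse race, its win probabilities, and its payoffs are invented for illustration) compares proportional (Kelly) betting with an even split of the bankroll.

```python
import numpy as np

def doubling_rate(b, p, o):
    """W(b, p) = sum_i p_i * log2(b_i * o_i): expected log2 growth per race
    when a fraction b_i of wealth is bet on horse i at payoff o_i (o-for-1)."""
    return sum(pi * np.log2(bi * oi) for pi, bi, oi in zip(p, b, o))

# Hypothetical three-horse race.
p = [0.5, 0.3, 0.2]      # our assessed win probabilities
o = [2.0, 4.0, 8.0]      # bookmaker's payoffs, close to (but not exactly) fair

kelly = p                 # proportional betting: b_i = p_i
naive = [1/3, 1/3, 1/3]   # spread the bankroll evenly

print("Kelly:", round(doubling_rate(kelly, p, o), 4), "doublings per race")
print("Naive:", round(doubling_rate(naive, p, o), 4), "doublings per race")
# The Kelly allocation gives the larger doubling rate, as the theorem guarantees.
```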
https://en.wikipedia.org/wiki/Gambling_and_information_theory
A gambrel or gambrel roof is a usually symmetrical two-sided roof with two slopes on each side. The upper slope is positioned at a shallow angle, while the lower slope is steep. This design provides the advantages of a sloped roof while maximizing headroom inside the building's upper level and shortening what would otherwise be a tall roof, as well as reducing the span of each set of rafters . The name comes from the Medieval Latin word gamba , meaning horse's hock or leg. [ 1 ] [ 2 ] The term gambrel is of American origin, [ 3 ] the older, European name being a curb (kerb, kirb) roof. Europeans historically did not distinguish between a gambrel roof and a mansard roof but called both types a mansard. In the United States, various shapes of gambrel roofs are sometimes called Dutch gambrel or Dutch Colonial gambrel with bell-cast eaves, Swedish, German, English, French, or New England gambrel. The cross-section of a gambrel roof is similar to that of a mansard roof, but a gambrel has vertical gable ends instead of being hipped at the four corners of the building. A gambrel roof overhangs the façade , whereas a mansard normally does not. Gambrel is a Norman English word, sometimes spelled gambol such as in the 1774 Boston carpenters' price book (revised 1800). Other spellings include gamerel, gamrel, gambril, gameral, gambering, cambrel, cambering, chambrel [ 4 ] referring to a wooden bar used by butchers to hang the carcasses of slaughtered animals. [ 1 ] Butcher's gambrels, later made of metal, resembled the two-sloped appearance of a gambrel roof when in use. [ 5 ] Gambrel is also a term for the joint in the upper part of a horse's hind leg, the hock . In 1858, Oliver Wendell Holmes Sr. wrote: Know old Cambridge? Hope you do.— Born there? Don't say so! I was, too. (Born in a house with a gambrel-roof,— Standing still, if you must have proof.— "Gambrel?—Gambrel?"—Let me beg You'll look at a horse's hinder leg,— First great angle above the hoof,— That's the gambrel; hence gambrel-roof.) [ 6 ] An earlier reference from the Dictionary of Americanisms , published in 1848, defines gambrel as "A hipped roof of a house, so called from the resemblance to the hind leg of a horse which by farriers is termed the gambrel ." [ 7 ] Webster's Dictionary also confusingly used the term hip in the definition of this roof. The term is also used for a single mansard roof in France and Germany. In Dutch the term 'two-sided mansard roof' is used for gambrel roofs. The origin of the gambrel roof form in North America is unknown. [ 8 ] The oldest known gambrel roof in America was on the second Harvard Hall at Harvard University built in 1677. [ 9 ] Possibly the oldest surviving house in the U.S. with a gambrel roof is the c. 1677–78 Peter Tufts House . The oldest surviving framed house in North America, the Fairbanks House , has an ell with a gambrel roof, but this roof was a later addition. Claims to the origin of the gambrel roof form in North America include:
https://en.wikipedia.org/wiki/Gambrel
Game design is the process of creating and shaping the mechanics, systems, rules, and gameplay of a game . Game design processes apply to board games , card games , dice games , casino games , role-playing games , sports , war games , or simulation games. In Elements of Game Design , game designer Robert Zubek defines game design by breaking it down into three elements: [ 3 ] In academic research , game design falls within the field of game studies (not to be confused with game theory , which studies strategic decision making, primarily in non-game situations). Game design is part of a game's development from concept to final form. Typically, the development process is iterative , with repeated phases of testing and revision. During revision, additional design or re-design may be needed. A game designer (or inventor) is a person who invents a game's concept, central mechanisms, rules, and themes. Game designers may work alone or in teams. A game developer is a person who fleshes out the details of a game's design, oversees its testing, and revises the game in response to player feedback. Often game designers also do development work on the same project. However, some publishers commission extensive development of games to suit their target audience after licensing a game from a designer. For larger games, such as collectible card games , designers and developers work in teams with separate roles. A game artist creates visual art for games. Game artists are often vital to role-playing games and collectible card games . [ 5 ] Many graphic elements of games are created by the designer when producing a prototype of the game, revised by the developer based on testing, and then further refined by the artist and combined with artwork as a game is prepared for publication or release. A game concept is an idea for a game, briefly describing its core play mechanisms, objectives, themes, and who the players represent. A game concept may be pitched to a game publisher in a similar manner as film ideas are pitched to potential film producers. Alternatively, game publishers holding a game license to intellectual property in other media may solicit game concepts from several designers before picking one to design a game. During design, a game concept is fleshed out. Mechanisms are specified in terms of components (boards, cards, tokens, etc.) and rules. The play sequence and possible player actions are defined, as well as how the game starts, ends, and win conditions (if any). A game prototype is a draft version of a game used for testing. Uses of prototyping include exploring new game design possibilities and technologies. [ 6 ] Play testing is a major part of game development. During testing, players play the prototype and provide feedback on its gameplay, the usability of its components, the clarity of its goals and rules, ease of learning, and entertainment value. During testing, various balance issues may be identified, requiring changes to the game's design. The developer then revises the design, components, presentation, and rules before testing it again. Later testing may take place with focus groups to test consumer reactions before publication. Many games have ancient origins and were not designed in the modern sense, but gradually evolved over time through play. The rules of these games were not codified until early modern times and their features gradually developed and changed through the folk process . 
For example, sports (see history of sports ), gambling, and board games are known, respectively, to have existed for at least nine thousand, [ 7 ] six thousand, [ 8 ] and four thousand years. [ 9 ] Tabletop games played today whose descent can be traced from ancient times include chess , [ 10 ] [ 11 ] go , [ 12 ] pachisi , [ 13 ] mancala , [ 14 ] [ 15 ] and pick-up sticks . [ 16 ] These games are not considered to have had a designer or been the result of a contemporary design process . After the rise of commercial game publishing in the late 19th century, many games that had formerly evolved via folk processes became commercial properties, often with custom scoring pads or preprepared material. For example, the similar public domain games Generala , Yacht , and Yatzy led to the commercial game Yahtzee in the mid-1950s. [ 17 ] [ 18 ] Today, many commercial games, such as Taboo , Balderdash , Pictionary , or Time's Up! , are descended from traditional parlour games . Adapting traditional games to become commercial properties is an example of game design. Similarly, many sports, such as soccer and baseball , are the result of folk processes, while others were designed, such as basketball , invented in 1891 by James Naismith . [ 19 ] [ 20 ] The first games in a new medium are frequently adaptations of older games. Later games often exploit the distinctive properties of a new medium. Adapting older games and creating original games for new media are both examples of game design. Technological advances have provided new media for games throughout history. For example, accurate topographic maps produced as lithographs and provided free to Prussian officers helped popularize wargaming . [ citation needed ] Cheap bookbinding (printed labels wrapped around cardboard) led to mass-produced board games with custom boards. [ citation needed ] Inexpensive (hollow) lead figurine casting contributed to the development of miniature wargaming . [ citation needed ] Cheap custom dice led to poker dice . [ citation needed ] Flying discs led to Ultimate frisbee . [ 21 ] [ 22 ] Games can be designed for entertainment, education, exercise or experimental purposes. Additionally, elements and principles of game design can be applied to other interactions, in the form of gamification . Games have historically inspired seminal research in the fields of probability , artificial intelligence , economics, and optimization theory . Applying game design to itself is a current research topic in metadesign . By learning through play [ a ] children can develop social and cognitive skills, mature emotionally, and gain the self-confidence required to engage in new experiences and environments. [ 23 ] Key ways that young children learn include playing, being with other people, being active, exploring and new experiences, talking to themselves, communicating with others, meeting physical and mental challenges, being shown how to do new things, practicing and repeating skills, and having fun. [ 24 ] Play develops children's content knowledge and provides children the opportunity to develop social skills, competencies, and disposition to learn. [ 25 ] Play-based learning is based on a Vygotskian model of scaffolding where the teacher pays attention to specific elements of the play activity and provides encouragement and feedback on children's learning. [ 26 ] When children engage in real-life and imaginary activities, play can be challenging in children's thinking. 
[ 27 ] To extend the learning process, sensitive intervention can be provided with adult support when necessary during play-based learning. [ 26 ] Different types of games pose specific game design issues. Board game design is the development of rules and presentational aspects of a board game. When a player takes part in a game, it is the player's self-subjection to the rules that create a sense of purpose for the duration of the game. [ 1 ] Maintaining the players' interest throughout the gameplay experience is the goal of board game design. [ 2 ] To achieve this, board game designers emphasize different aspects such as social interaction, strategy, and competition, and target players of differing needs by providing for short versus long-play, and luck versus skill. [ 2 ] Beyond this, board game design reflects the culture in which the board game is produced. The most ancient board games known today are over 5000 years old. They are frequently abstract in character and their design is primarily focused on a core set of simple rules. Of those that are still played today, games like go ( c. 400 BC ), mancala ( c. 700 AD ), and chess ( c. 600 AD ) have gone through many presentational and/or rule variations. In the case of chess, for example, new variants are developed constantly, to focus on certain aspects of the game, or just for variation's sake. Traditional board games date from the nineteenth and early twentieth century. Whereas ancient board game design was primarily focused on rules alone, traditional board games were often influenced by Victorian mores. Academic (e.g. history and geography) and moral didacticism were important design features for traditional games, and Puritan associations between dice and the Devil meant that early American game designers eschewed their use in board games entirely. [ 28 ] Even traditional games that did use dice, like Monopoly (based on the 1906 The Landlord's Game ), were rooted in educational efforts to explain political concepts to the masses. By the 1930s and 1940s, board game design began to emphasize amusement over education, and characters from comic strips, radio programmes, and (in the 1950s) television shows began to be featured in board game adaptations. [ 28 ] Recent developments in modern board game design can be traced to the 1980s in Germany, and have led to the increased popularity of " German-style board games " (also known as "Eurogames" or "designer games"). The design emphasis of these board games is to give players meaningful choices. [ 1 ] This is manifested by eliminating elements like randomness and luck to be replaced by skill, strategy, and resource competition, by removing the potential for players to fall irreversibly behind in the early stages of a game, and by reducing the number of rules and possible player options to produce what Alan R. Moon has described as "elegant game design". [ 1 ] The concept of elegant game design has been identified by The Boston Globe ' s Leon Neyfakh as related to Mihaly Csikszentmihalyi 's the concept of " flow " from his 1990 book, "Flow: The Psychology of Optimal Experience". [ 1 ] Modern technological advances have had a democratizing effect on board game production, with services like Kickstarter providing designers with essential startup capital and tools like 3D printers facilitating the production of game pieces and board game prototypes. [ 29 ] [ 30 ] A modern adaptation of figure games are miniature wargames like Warhammer 40,000 . 
Card games can be designed as gambling games, such as Poker , or simply for fun, such as Go Fish . As cards are typically shuffled and revealed gradually during play, most card games involve randomness, either initially or during play, and hidden information, such as the cards in a player's hand. How players play their cards, revealing information and interacting with previous plays as they do so, is central to card game design. In partnership card games, such as Bridge , rules limiting communication between players on the same team become an important part of the game design. This idea of limited communication has been extended to cooperative card games, such as Hanabi . Dice games differ from card games in that each throw of the dice is an independent event , whereas the odds of a given card being drawn are affected by all the previous cards drawn or revealed from a deck. For this reason, dice game design often centers around forming scoring combinations and managing re-rolls, either by limiting their number, as in Yahtzee or by introducing a press-your-luck element, as in Can't Stop . Casino game design can entail the creation of an entirely new casino game, the creation of a variation on an existing casino game, or the creation of a new side bet on an existing casino game. [ 32 ] Casino game mathematician, Michael Shackleford has noted that it is much more common for casino game designers today to make successful variations than entirely new casino games. [ 33 ] Gambling columnist John Grochowski points to the emergence of community-style slot machines in the mid-1990s, for example, as a successful variation on an existing casino game type. [ 34 ] Unlike the majority of other games which are designed primarily in the interest of the player, one of the central aims of casino game design is to optimize the house advantage and maximize revenue from gamblers . Successful casino game design works to provide entertainment for the player and revenue for the gambling house. To maximise player entertainment, casino games are designed with simple easy-to-learn rules that emphasize winning (i.e. whose rules enumerate many victory conditions and few loss conditions [ 33 ] ), and that provide players with a variety of different gameplay postures (e.g. card hands ). [ 32 ] Player entertainment value is also enhanced by providing gamblers with familiar gaming elements (e.g. dice and cards) in new casino games. [ 32 ] [ 33 ] To maximise success for the gambling house, casino games are designed to be easy for croupiers to operate and for pit managers to oversee. [ 32 ] [ 33 ] The two most fundamental rules of casino game design are that the games must be non-fraudable [ 32 ] (including being as nearly as possible immune from advantage gambling [ 33 ] ) and that they must mathematically favor the house winning. Shackleford suggests that the optimum casino game design should give the house an edge of smaller than 5%. [ 33 ] The design of tabletop role-playing games typically requires the establishment of setting , characters , and gameplay rules or mechanics . After a role-playing game is produced, additional design elements are often devised by the players themselves. In many instances, for example, character creation is left to the players. Early role-playing game theories developed on indie role-playing game design forums in the early 2000s. [ 35 ] [ 36 ] [ 37 ] [ 38 ] [ 39 ] Game design is a topic of study in the academic field of game studies. 
Game studies is a discipline that deals with the critical study of games, game design, players, and their role in society and culture. Prior to the late-twentieth century, the academic study of games was rare and limited to fields such as history and anthropology . As the video game revolution took off in the early 1980s, so did academic interest in games, resulting in a field that draws on diverse methodologies and schools of thought. [ 40 ] Social scientific approaches have concerned themselves with the question of, "What do games do to people?" Using tools and methods such as surveys, controlled laboratory experiments, and ethnography, researchers have investigated the impacts that playing games have on people and the role of games in everyday life. [ 41 ] Humanities approaches have concerned themselves with the question of, "What meanings are made through games?" Using tools and methods such as interviews, ethnographies, and participant observation, researchers have investigated the various roles that games play in people's lives and the meanings players assign to their experiences. [ 42 ] From within the game industry, central questions include, "How can we create better games?" and, "What makes a game good?" "Good" can be taken to mean different things, including providing an entertaining experience, being easy to learn and play, being innovative, educating the players, and/or generating novel experiences. [ 43 ] [ 44 ] [ 45 ]
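The casino game design discussion above cites Shackleford's guideline that the house edge should stay below about 5%. As a minimal sketch of how such an edge is computed (Python; the roulette wager is a standard textbook illustration, not a game discussed in this article), consider the straight-up bet on double-zero roulette, which pays 35 to 1 against a 1-in-38 chance of winning:

```python
from fractions import Fraction

def house_edge(win_prob, payout_to_1):
    """House edge = negative of the player's expected profit per unit bet."""
    expected_profit = win_prob * payout_to_1 - (1 - win_prob) * 1
    return -expected_profit

# Straight-up bet on double-zero (American) roulette: 38 pockets, pays 35 to 1.
edge = house_edge(Fraction(1, 38), 35)
print(f"{float(edge):.2%}")   # 5.26%, just above the ~5% design guideline
```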
https://en.wikipedia.org/wiki/Game_design
In game theory and related fields, a game form , game frame , ruleset , or outcome function is the set of rules that govern a game and determine its outcome based on each player's choices. A game form differs from a game in that it does not stipulate the utilities or payoffs for each agent. [ 1 ] Mathematically , a game form can be defined as a mapping from an action space [ 2 ] [ 3 ] (which describes all the possible moves a player can make) to an outcome space . The action space is also often called a message space when the actions consist of providing information about beliefs or preferences, in which case it is called a direct mechanism . [ 3 ] For example, an electoral system is a game form mapping a message space consisting of ballots to a winning candidate (the outcome). [ 1 ] Similarly, an auction is a game form that takes each bidder's price and maps them to both a winner and a set of payments by the bidders. Often, a game form is a set of rules or institutions designed to implement some normative goal (called a social choice function ), by motivating agents to act in a particular way through an appropriate choice of incentives . Then, the game form is called an implementation or mechanism . This approach is widely used in the study of auctions and electoral systems . [ 4 ] The social choice function represents the desired outcome or goal of the game, such as maximizing social welfare or achieving a fair allocation of resources. The mechanism designer 's task is to design the game form in such a way that when each player plays their best response (i.e. behaves strategically), the resulting equilibrium implements the desired social choice function.
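A minimal sketch of a game form as a mapping from actions to outcomes (Python; the bidder names and the alphabetical tie-break are illustrative assumptions): a first-price sealed-bid auction maps each profile of bids to a winner and a payment, and deliberately says nothing about the bidders' utilities, which is what distinguishes a game form from a game.

```python
from typing import Dict, Tuple

def first_price_auction(bids: Dict[str, float]) -> Tuple[str, float]:
    """Game form for a first-price sealed-bid auction: the action space is the
    set of bid profiles, the outcome space is (winner, payment)."""
    # Tie-break by bidder name purely for determinism (an illustrative choice).
    winner = max(sorted(bids), key=lambda bidder: bids[bidder])
    return winner, bids[winner]

# One profile of actions drawn from the action space:
print(first_price_auction({"alice": 12.0, "bob": 9.5, "carol": 11.0}))
# -> ('alice', 12.0)
```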
https://en.wikipedia.org/wiki/Game_form
Game semantics is an approach to formal semantics that grounds the concepts of truth or validity on game-theoretic concepts, such as the existence of a winning strategy for a player. In this framework, logical formulas are interpreted as defining games between two players. The term encompasses several related but distinct traditions, including dialogical logic (developed by Paul Lorenzen and Kuno Lorenz in Germany starting in the 1950s) and game-theoretical semantics (developed by Jaakko Hintikka in Finland). Game semantics represents a significant departure from traditional model-theoretic approaches by emphasizing the dynamic, interactive nature of logical reasoning rather than static truth assignments. It provides intuitive interpretations for various logical systems, including classical logic , intuitionistic logic , linear logic , and modal logic . The approach bears conceptual resemblances to ancient Socratic dialogues , medieval theory of Obligationes , and constructive mathematics . Since the 1990s, game semantics has found important applications in theoretical computer science , particularly in the semantics of programming languages , concurrency theory , and the study of computational complexity . In the late 1950s Paul Lorenzen was the first to introduce a game semantics for logic , and it was further developed by Kuno Lorenz . At almost the same time as Lorenzen, Jaakko Hintikka developed a model-theoretical approach known in the literature as GTS (game-theoretical semantics). Since then, a number of different game semantics have been studied in logic. Shahid Rahman ( Lille III ) and collaborators developed dialogical logic into a general framework for the study of logical and philosophical issues related to logical pluralism . Beginning 1994 this triggered a kind of renaissance with lasting consequences. This new philosophical impulse experienced a parallel renewal in the fields of theoretical computer science , computational linguistics , artificial intelligence , and the formal semantics of programming languages , for instance the work of Johan van Benthem and collaborators in Amsterdam who looked thoroughly at the interface between logic and games, and Hanno Nickau who addressed the full abstraction problem in programming languages by means of games. New results in linear logic by Jean-Yves Girard in the interfaces between mathematical game theory and logic on one hand and argumentation theory and logic on the other hand resulted in the work of many others, including S. Abramsky , J. van Benthem, A. Blass , D. Gabbay , M. Hyland , W. Hodges , R. Jagadeesan, G. Japaridze , E. Krabbe, L. Ong, H. Prakken, G. Sandu, D. Walton, and J. Woods, who placed game semantics at the center of a new concept in logic in which logic is understood as a dynamic instrument of inference. There has also been an alternative perspective on proof theory and meaning theory, advocating that Wittgenstein 's "meaning as use" paradigm as understood in the context of proof theory, where the so-called reduction rules (showing the effect of elimination rules on the result of introduction rules) should be seen as appropriate to formalise the explanation of the (immediate) consequences one can draw from a proposition, thus showing the function/purpose/usefulness of its main connective in the calculus of language ( de Queiroz (1988) , de Queiroz (1991) , de Queiroz (1994) , de Queiroz (2001) , de Queiroz (2008) , de Queiroz (2023) ). The simplest application of game semantics is to propositional logic . 
Each formula of this language is interpreted as a game between two players, known as the "Verifier" and the "Falsifier". The Verifier is given "ownership" of all the disjunctions in the formula, and the Falsifier is likewise given ownership of all the conjunctions . Each move of the game consists of allowing the owner of the principal connective to pick one of its branches; play will then continue in that subformula, with whichever player controls its principal connective making the next move. Play ends when a primitive proposition has been so chosen by the two players; at this point the Verifier is deemed the winner if the resulting proposition is true, and the Falsifier is deemed the winner if it is false. The original formula will be considered true precisely when the Verifier has a winning strategy , while it will be false whenever the Falsifier has the winning strategy. If the formula contains negations or implications, other, more complicated, techniques may be used. For example, a negation should be true if the thing negated is false, so it must have the effect of interchanging the roles of the two players. More generally, game semantics may be applied to predicate logic ; the new rules allow a principal quantifier to be removed by its "owner" (the Verifier for existential quantifiers and the Falsifier for universal quantifiers ) and its bound variable replaced at all occurrences by an object of the owner's choosing, drawn from the domain of quantification. Note that a single counterexample falsifies a universally quantified statement, and a single example suffices to verify an existentially quantified one. Assuming the axiom of choice , the game-theoretical semantics for classical first-order logic agree with the usual model-based (Tarskian) semantics . For classical first-order logic the winning strategy for the Verifier essentially consists of finding adequate Skolem functions and witnesses . For example, if S denotes ∀ x ∃ y ϕ ( x , y ) {\displaystyle \forall x\exists y\,\phi (x,y)} then an equisatisfiable statement for S is ∃ f ∀ x ϕ ( x , f ( x ) ) {\displaystyle \exists f\forall x\,\phi (x,f(x))} . The Skolem function f (if it exists) actually codifies a winning strategy for the Verifier of S by returning a witness for the existential sub-formula for every choice of x the Falsifier might make. [ 1 ] The above definition was first formulated by Jaakko Hintikka as part of his GTS interpretation. The original version of game semantics for classical (and intuitionistic) logic due to Paul Lorenzen and Kuno Lorenz was not defined in terms of models but of winning strategies over formal dialogues (P. Lorenzen, K. Lorenz 1978, S. Rahman and L. Keiff 2005). Shahid Rahman and Tero Tulenheimo developed an algorithm to convert GTS-winning strategies for classical logic into the dialogical winning strategies and vice versa. Formal dialogues and GTS games may be infinite and use end-of-play rules rather than letting players decide when to stop playing. Reaching this decision by standard means for strategic inferences ( iterated elimination of dominated strategies or IEDS) would, in GTS and formal dialogues, be equivalent to solving the halting problem and exceeds the reasoning abilities of human agents. GTS avoids this with a rule to test formulas against an underlying model; logical dialogues, with a non-repetition rule (similar to threefold repetition in Chess). Genot and Jacot (2017) [ 2 ] proved that players with severely bounded rationality can reason to terminate a play without IEDS. 
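The Verifier–Falsifier rules above amount to a recursive evaluation procedure: a formula is true exactly when the Verifier has a winning strategy. The following minimal sketch is purely illustrative (it is not taken from any of the works cited here, and the formula encoding and function name are invented); it plays the game over a propositional formula, handling negation by swapping the players' roles as described.

```python
# Minimal sketch of game semantics for propositional logic (illustrative only).
# A formula is either a proposition name (str) or a tuple:
#   ("or", A, B)   - the Verifier (owner of disjunctions) picks a branch
#   ("and", A, B)  - the Falsifier (owner of conjunctions) picks a branch
#   ("not", A)     - the two players swap roles and play on A
# `valuation` maps proposition names to True/False.

def verifier_wins(formula, valuation, swapped=False):
    """Return True iff the Verifier has a winning strategy on `formula`."""
    if isinstance(formula, str):                    # play has reached a primitive proposition
        value = valuation[formula]
        return value if not swapped else not value  # with swapped roles, Verifier wins on False
    op = formula[0]
    if op == "not":
        return verifier_wins(formula[1], valuation, not swapped)
    a, b = formula[1], formula[2]
    # With swapped roles, the Verifier owns conjunctions instead of disjunctions.
    verifier_moves = (op == "or") != swapped
    if verifier_moves:  # Verifier picks the branch that lets them win
        return verifier_wins(a, valuation, swapped) or verifier_wins(b, valuation, swapped)
    else:               # Falsifier picks the branch worst for the Verifier
        return verifier_wins(a, valuation, swapped) and verifier_wins(b, valuation, swapped)

# Example: (p or q) and (not p) under p = False, q = True is true,
# so the Verifier has a winning strategy and the call prints True.
print(verifier_wins(("and", ("or", "p", "q"), ("not", "p")), {"p": False, "q": True}))
```

A winning strategy can be read off from the recursion: at each connective, the owning player simply picks a branch on which the recursive call reports a win for them.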
For most common logics, including the ones above, the games that arise from them have perfect information —that is, the two players always know the truth values of each primitive, and are aware of all preceding moves in the game. However, with the advent of game semantics, logics, such as the independence-friendly logic of Hintikka and Sandu, with a natural semantics in terms of games of imperfect information have been proposed. The primary motivation for Lorenzen and Kuno Lorenz was to find a game-theoretic (their term was dialogical , in German Dialogische Logik [ de ] ) semantics for intuitionistic logic . Andreas Blass [ 3 ] was the first to point out connections between game semantics and linear logic . This line was further developed by Samson Abramsky , Radhakrishnan Jagadeesan , Pasquale Malacaria and independently Martin Hyland and Luke Ong , who placed special emphasis on compositionality, i.e. the definition of strategies inductively on the syntax. Using game semantics, the authors mentioned above have solved the long-standing problem of defining a fully abstract model for the programming language PCF . Consequently, game semantics has led to fully abstract semantic models for a variety of programming languages, and to new semantic-directed methods of software verification by software model checking . Shahid Rahman [ fr ] and Helge Rückert extended the dialogical approach to the study of several non-classical logics such as modal logic , relevance logic , free logic and connexive logic . Recently, Rahman and collaborators developed the dialogical approach into a general framework aimed at the discussion of logical pluralism. Foundational considerations of game semantics have been more emphasised by Jaakko Hintikka and Gabriel Sandu, especially for independence-friendly logic (IF logic, more recently information -friendly logic), a logic with branching quantifiers . It was thought that the principle of compositionality fails for these logics, so that a Tarskian truth definition could not provide a suitable semantics. To get around this problem, the quantifiers were given a game-theoretic meaning. Specifically, the approach is the same as in classical propositional logic, except that the players do not always have perfect information about previous moves by the other player. Wilfrid Hodges has proposed a compositional semantics and proved it equivalent to game semantics for IF-logics. More recently Shahid Rahman [ fr ] and the team of dialogical logic in Lille implemented dependences and independences within a dialogical framework by means of a dialogical approach to intuitionistic type theory called immanent reasoning . [ 4 ] Japaridze ’s computability logic is a game-semantical approach to logic in an extreme sense, treating games as targets to be serviced by logic rather than as technical or foundational means for studying or justifying logic. Its starting philosophical point is that logic is meant to be a universal, general-utility intellectual tool for ‘navigating the real world’ and, as such, it should be construed semantically rather than syntactically, because it is semantics that serves as a bridge between real world and otherwise meaningless formal systems (syntax). Syntax is thus secondary, interesting only as much as it services the underlying semantics. 
From this standpoint, Japaridze has repeatedly criticized the often followed practice of adjusting semantics to some already existing target syntactic constructions, with Lorenzen ’s approach to intuitionistic logic being an example. This line of thought then proceeds to argue that the semantics, in turn, should be a game semantics, because games “offer the most comprehensive, coherent, natural, adequate and convenient mathematical models for the very essence of all ‘navigational’ activities of agents: their interactions with the surrounding world”. [ 5 ] Accordingly, the logic-building paradigm adopted by computability logic is to identify the most natural and basic operations on games, treat those operators as logical operations, and then look for sound and complete axiomatizations of the sets of game-semantically valid formulas. On this path a host of familiar or unfamiliar logical operators have emerged in the open-ended language of computability logic, with several sorts of negations, conjunctions, disjunctions, implications, quantifiers and modalities. Games are played between two agents: a machine and its environment, where the machine is required to follow only computable strategies. This way, games are seen as interactive computational problems, and the machine's winning strategies for them as solutions to those problems. It has been established that computability logic is robust with respect to reasonable variations in the complexity of allowed strategies, which can be brought down as low as logarithmic space and polynomial time (one does not imply the other in interactive computations) without affecting the logic. All this explains the name “computability logic” and determines applicability in various areas of computer science. Classical logic , independence-friendly logic and certain extensions of linear and intuitionistic logics turn out to be special fragments of computability logic, obtained merely by disallowing certain groups of operators or atoms.
https://en.wikipedia.org/wiki/Game_semantics
Game theory has been used as a tool for modeling and studying interactions between cognitive radios envisioned to operate in future communications systems. Such terminals will have the capability to adapt to the context they operate in, through, for example, power and rate control as well as channel selection. Software agents embedded in these terminals will potentially be selfish, meaning they will only try to maximize the throughput/connectivity of the terminal they function for, as opposed to maximizing the welfare (total capacity) of the system they operate in. Thus, the potential interactions among them can be modeled through non-cooperative games. The researchers in this field often strive to determine the stable operating points of systems composed of such selfish terminals, and try to come up with a minimum set of rules (etiquette) to ensure that the optimality loss compared to a cooperative – centrally controlled – setting is kept to a minimum. [ 1 ] Game theory is the study of strategic decision making. More formally, it is "the study of mathematical models of conflict and cooperation between intelligent rational decision-makers." [ 1 ] An alternative term suggested "as a more descriptive name for the discipline" is interactive decision theory. [ 2 ] Game theory is mainly used in economics, political science, and psychology, as well as logic and biology. The subject first addressed zero-sum games, in which one person's gains exactly equal the net losses of the other participant(s). Today, however, game theory applies to a wide range of behavioral relations, and has developed into an umbrella term for the logical side of decision science, covering both humans and non-humans, such as computers. Classic uses include a notion of equilibrium in numerous games, in which each player has found or developed a tactic that cannot improve their results, given the other players' approach. Game theory has been used extensively in wireless networks research to develop understanding of stable operation points for networks made of autonomous/selfish nodes. The nodes are considered to be the players. Utility functions are often chosen to correspond to achieved connection rate or similar technical metrics. The studies done in this context can be grouped as below: [ 2 ] Various studies have analyzed radio resource management problems in 802.11 WLAN networks. In such random access studies, researchers have considered selfish nodes that control their channel access probabilities so as to maximize only their own utility (throughput). Power control refers to the process through which mobiles in CDMA cellular settings adjust their transmission powers so that they do not create unnecessary interference for other mobiles, while nevertheless trying to achieve the required quality of service . Power control can be centralized in nature, where the base station dictates and assigns transmitter power levels to mobiles based on their link qualities, or it can be distributed, in which case mobiles update their powers autonomously, independently of the base station, based on perceived service quality. In such distributed settings, the mobiles can be considered to be selfish agents (players) who try to maximize their utilities (often modeled as corresponding throughputs). Game theory is considered to be a powerful tool to study such scenarios. [ 3 ] Coalitional game theory is a branch of game theory that deals with cooperative behavior.
In a coalitional game, the key idea is to study the formation of cooperative groups, i.e., coalitions among a number of players. By cooperating, the players can strengthen their position in a given game as well as improve their utilities. In this context, coalitional game theory proves to be a powerful tool for modeling cooperative behavior in many wireless networking applications such as cognitive radio networks, wireless systems, physical layer security, and virtual MIMO, among others. [ 4 ] [ 5 ] [ 6 ]
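As a purely illustrative sketch of the non-cooperative power control setting described above (not a model taken from the cited studies), the following simulates selfish best-response updates in a toy uplink: each mobile repeatedly chooses the transmit power that maximizes its own utility, here assumed to be the logarithm of one plus its signal-to-interference ratio minus a price on power, given the other mobiles' current powers. The channel gains, utility form, and parameter values are all invented for the example.

```python
import numpy as np

# Toy non-cooperative uplink power control game (illustrative assumptions only).
# Each mobile i selfishly maximizes  u_i = log(1 + SIR_i) - cost * p_i
# over its own power p_i, holding the other mobiles' powers fixed (best response).

rng = np.random.default_rng(0)
n = 4                                # number of mobiles
gain = rng.uniform(0.2, 1.0, n)      # channel gains to the base station (invented)
noise = 0.1                          # receiver noise power
cost = 0.5                           # price per unit of transmit power
p_max = 5.0                          # maximum transmit power
powers = np.ones(n)                  # initial powers
grid = np.linspace(0.0, p_max, 501)  # candidate powers for the best response

def utility(i, p_i, powers):
    # interference seen by mobile i comes from all the other mobiles plus noise
    interference = noise + sum(gain[j] * powers[j] for j in range(n) if j != i)
    sir = gain[i] * p_i / interference
    return np.log(1.0 + sir) - cost * p_i

for _ in range(50):                  # iterate best responses until the powers settle
    for i in range(n):
        powers[i] = max(grid, key=lambda p: utility(i, p, powers))

print("equilibrium powers:", np.round(powers, 3))
print("utilities:", np.round([utility(i, powers[i], powers) for i in range(n)], 3))
```

In models of this kind the iteration typically settles at a Nash equilibrium, which is generally less efficient than the allocation a centralized controller could enforce; limiting that efficiency loss is exactly the role of the "etiquette" rules mentioned above.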
https://en.wikipedia.org/wiki/Game_theory_in_communication_networks
In the mathematical theory of games , in particular the study of zero-sum continuous games , not every game has a minimax value. This is the expected value to one of the players when both play a perfect strategy (which is to choose from a particular PDF ). This article gives an example of a zero-sum game that has no value . It is due to Sion and Wolfe . [ 1 ] Zero-sum games with a finite number of pure strategies are known to have a minimax value (originally proved by John von Neumann ) but this is not necessarily the case if the game has an infinite set of strategies. There follows a simple example of a game with no minimax value. The existence of such zero-sum games is interesting because many of the results of game theory become inapplicable if there is no minimax value. Players I and II choose numbers x {\displaystyle x} and y {\displaystyle y} respectively, between 0 and 1. The payoff to player I is K ( x , y ) = { − 1 if x < y < x + 1 / 2 , 0 if x = y or y = x + 1 / 2 , 1 otherwise. {\displaystyle K(x,y)={\begin{cases}-1&{\text{if }}x<y<x+1/2,\\0&{\text{if }}x=y{\text{ or }}y=x+1/2,\\1&{\text{otherwise.}}\end{cases}}} That is, after the choices are made, player II pays K ( x , y ) {\displaystyle K(x,y)} to player I (so the game is zero-sum ). If the pair ( x , y ) {\displaystyle (x,y)} is interpreted as a point on the unit square, the figure shows the payoff to player I. Player I may adopt a mixed strategy, choosing a number according to a probability density function (pdf) f {\displaystyle f} , and similarly player II chooses from a pdf g {\displaystyle g} . Player I seeks to maximize the payoff K ( x , y ) {\displaystyle K(x,y)} , player II to minimize the payoff, and each player is aware of the other's objective. Sion and Wolfe show that sup f inf g ∬ K d f d g = 1 3 {\displaystyle \sup _{f}\inf _{g}\iint K\,df\,dg={\frac {1}{3}}} but inf g sup f ∬ K d f d g = 3 7 . {\displaystyle \inf _{g}\sup _{f}\iint K\,df\,dg={\frac {3}{7}}.} These are the maximal and minimal expectations of the game's value of player I and II respectively. The sup {\displaystyle \sup } and inf {\displaystyle \inf } respectively take the supremum and infimum over pdf's on the unit interval (actually Borel probability measures ). These represent player I and player II's (mixed) strategies. Thus, player I can assure himself of a payoff of at least 3/7 if he knows player II's strategy, and player II can hold the payoff down to 1/3 if he knows player I's strategy. There is no epsilon equilibrium for sufficiently small ε {\displaystyle \varepsilon } , specifically, if ε < 1 2 ( 3 7 − 1 3 ) ≃ 0.0476 {\displaystyle \varepsilon <{\frac {1}{2}}\left({\frac {3}{7}}-{\frac {1}{3}}\right)\simeq 0.0476} . Dasgupta and Maskin [ 2 ] assert that the game values are achieved if player I puts probability weight only on the set { 0 , 1 / 2 , 1 } {\displaystyle \left\{0,1/2,1\right\}} and player II puts weight only on { 1 / 4 , 1 / 2 , 1 } {\displaystyle \left\{1/4,1/2,1\right\}} . Glicksberg's theorem shows that any zero-sum game with upper or lower semicontinuous payoff function has a value (in this context, an upper (lower) semicontinuous function K is one in which the set { P ∣ K ( P ) < c } {\displaystyle \{P\mid K(P)<c\}} (resp { P ∣ K ( P ) > c } {\displaystyle \{P\mid K(P)>c\}} ) is open for any real number c ). The payoff function of Sion and Wolfe's example is not semicontinuous. 
However, it may be made so by changing the value of K ( x , x ) and K ( x , x + 1/2) (the payoff along the two discontinuities) to either +1 or −1, making the payoff upper or lower semicontinuous, respectively. If this is done, the game then has a value. Subsequent work by Heuer [ 3 ] discusses a class of games in which the unit square is divided into three regions, the payoff function being constant in each of the regions.
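The lower and upper values reported above, 1/3 and 3/7, can be checked numerically. The following sketch is an illustrative calculation rather than part of the original example: it assumes player I mixes uniformly over {0, 1/2, 1} and player II plays 1/4, 1/2 and 1 with weights 1/7, 2/7 and 4/7 (one concrete choice of weights consistent with the supports attributed to Dasgupta and Maskin above), and verifies the guarantees these mixtures give against every pure reply on a fine grid.

```python
from fractions import Fraction as F

def K(x, y):
    """Payoff to player I in the Sion-Wolfe game."""
    if x < y < x + F(1, 2):
        return -1
    if y == x or y == x + F(1, 2):
        return 0
    return 1

# Candidate mixed strategies (weights assumed for illustration).
p_I  = {F(0): F(1, 3), F(1, 2): F(1, 3), F(1): F(1, 3)}      # player I, uniform
p_II = {F(1, 4): F(1, 7), F(1, 2): F(2, 7), F(1): F(4, 7)}   # player II

# Check against every pure strategy on a rational grid of [0, 1],
# which includes the discontinuity points 0, 1/4, 1/2, 3/4 and 1.
grid = [F(k, 1000) for k in range(1001)]

# Worst case for player I's mixture over all pure y: should equal 1/3.
lower = min(sum(w * K(x, y) for x, w in p_I.items()) for y in grid)

# Best pure reply against player II's mixture over all pure x: should equal 3/7.
upper = max(sum(w * K(x, y) for y, w in p_II.items()) for x in grid)

print(lower, upper)   # prints 1/3 3/7
```

Exact rational arithmetic is used so that the equality cases y = x and y = x + 1/2 in the payoff function are handled correctly.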
https://en.wikipedia.org/wiki/Game_without_a_value
A Gameframe is a hybrid computer system that was first used in the online video game industry . It is a combination of the technologies and architectures of supercomputers and mainframes , namely high computing power and high throughput. In 2007, Hoplon and IBM jointly started the gameframe project, in which they used an IBM System z mainframe computer with attached Cell /B.E. blades (the eight-core parallel-processing chips that power Sony 's PlayStation 3 ) to host [ 1 ] their online game Taikodom . The project was carried further by a co-operation between IBM and the University of California, San Diego in 2009. [ 2 ] System z provides a high level of security and massive workload handling, ensuring the execution of its administrative tasks and guaranteeing enduring connectivity to a huge number of clients. [ 3 ] Cell/B.E. takes over the most resource-demanding calculations, thus enabling System z to fulfill its job. The combination is both an effective and financially attractive game server system, as the most computation-intensive tasks are offloaded from the expensive CPU cycles of System z and carried out on the more economical Cell blades. Without offloading, the server system required would not be financially feasible. [ 4 ] The gameframe can handle the required transactions (e.g., keeping track of each user's spaceships, weapons, and virtual money, even when these are exchanged between players) and the simulation (trajectories of objects and checking for collisions) in a unified and consistent fashion. Thus, it can host a few thousand users at a time, and it becomes more efficient as more users are added. Games with numerous players, like World of Warcraft , have tackled this problem by splitting the work among multiple clusters , creating duplicate worlds that don't communicate. [ 5 ] The Cell-augmented mainframe runs Hoplon's virtual-world middleware , called bitVerse , which uses IBM's WebSphere XD and DB2 software. [ 6 ] Around the gameframe, the IBM Virtual Universe Community has evolved.
https://en.wikipedia.org/wiki/Gameframe
Gametangiogamy is the fusion or copulation of whole gametangia in certain members of the phyla Zygomycota and Ascomycota . The union of the multinuclear cells is followed, after a dikaryophase of variable length, by a pairwise fusion ( karyogamy ) of sexually different nuclei. In this case, karyogamy takes place simultaneously between many pairs of nuclei, not, as in gametogamy, between just two gametic nuclei ( polyfertilization ). [ 1 ] [ 2 ]
https://en.wikipedia.org/wiki/Gametangiogamy
A gametangium ( pl. : gametangia ) is a sex organ or cell in which gametes are produced that is found in many multicellular protists , algae , fungi , and the gametophytes of plants . In contrast to gametogenesis in animals , a gametangium is a haploid structure and formation of gametes does not involve meiosis . Depending on the type of gamete produced in a gametangium, several types can be distinguished. [ 1 ] Female gametangia are most commonly called archegonia . [ 2 ] They produce egg cells and are the sites for fertilization . Archegonia are common in algae and primitive plants as well as gymnosperms . In flowering plants , they are replaced by the embryo sac inside the ovule . The male gametangia are most commonly called antheridia . [ 2 ] They produce sperm cells that they release for fertilization. Antheridia producing non-motile sperm (spermatia) are called spermatangia . Some antheridia do not release their sperm. For example, the oomycete antheridium is a syncytium with many sperm nuclei and fertilization occurs via fertilization tubes growing from the antheridium and making contact with the egg cells. Antheridia are common in the gametophytes in "lower" plants such as bryophytes , ferns , cycads and ginkgo . In "higher" plants such as conifers and flowering plants, they are replaced by pollen grains . In isogamy , the gametes look alike and cannot be classified into "male" or "female." For example, in zygomycetes , two gametangia (single multinucleate cells at the end of hyphae ) form good contact with each other and fuse into a zygosporangium . Inside the zygosporangium, the nuclei from each of the original two gametangia pair up. [ clarification needed ]
https://en.wikipedia.org/wiki/Gametangium
A haplotype ( haploid genotype ) is a group of alleles in an organism that are inherited together from a single parent. [ 1 ] [ 2 ] Many organisms contain genetic material ( DNA ) which is inherited from two parents. Normally these organisms have their DNA organized in two sets of pairwise similar chromosomes . The offspring gets one chromosome in each pair from each parent. A set of pairs of chromosomes is called diploid and a set of only one half of each pair is called haploid. The haploid genotype (haplotype) is a genotype that considers the singular chromosomes rather than the pairs of chromosomes. It can be all the chromosomes from one of the parents or a minor part of a chromosome, for example a sequence of 9000 base pairs or a small set of alleles. Specific contiguous parts of the chromosome are likely to be inherited together and not be split by chromosomal crossover , a phenomenon called genetic linkage . [ 3 ] [ 4 ] As a result, identifying these statistical associations and a few alleles of a specific haplotype sequence can facilitate identifying all other such polymorphic sites that are nearby on the chromosome ( imputation ). [ 5 ] Such information is critical for investigating the genetics of common diseases , which have been investigated in humans by the International HapMap Project . [ 6 ] [ 7 ] Other parts of the genome are almost always haploid and do not undergo crossover: for example, human mitochondrial DNA is passed down through the maternal line and the Y chromosome is passed down the paternal line. In these cases, the entire sequence can be grouped into a simple evolutionary tree, with each branch founded by a unique-event polymorphism mutation (often, but not always, a single-nucleotide polymorphism (SNP)). Each clade under a branch, containing haplotypes with a single shared ancestor, is called a haplogroup . [ 8 ] [ 9 ] [ 10 ] An organism's genotype may not define its haplotype uniquely. For example, consider a diploid organism and two bi-allelic loci (such as SNPs ) on the same chromosome. Assume the first locus has alleles A or T and the second locus G or C . Both loci, then, have three possible genotypes : ( AA , AT , and TT ) and ( GG , GC , and CC ), respectively. For a given individual, there are nine possible two-locus genotype configurations. For individuals who are homozygous at one or both loci, the haplotypes are unambiguous: when the two alleles at a locus are identical, it makes no difference which copy is assigned to which haplotype, so only one resolution is possible. For individuals heterozygous at both loci, the gametic phase is ambiguous - in these cases, an observer does not know which pair of haplotypes the individual carries (e.g., AG with TC versus AC with TG). The only unequivocal method of resolving phase ambiguity is by sequencing . However, it is possible to estimate the probability of a particular haplotype when phase is ambiguous using a sample of individuals. Given the genotypes for a number of individuals, the haplotypes can be inferred by haplotype resolution or haplotype phasing techniques. These methods work by applying the observation that certain haplotypes are common in certain genomic regions. Therefore, given a set of possible haplotype resolutions, these methods choose those that use fewer different haplotypes overall.
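To make the phase-ambiguity point concrete, the short sketch below (an illustration only, with a function name invented for the example) enumerates the haplotype pairs compatible with a two-locus genotype: homozygosity at either locus leaves a single resolution, whereas the double heterozygote A/T with G/C admits two.

```python
def phase_resolutions(locus1, locus2):
    """All unordered haplotype pairs consistent with the genotypes at two loci.
    Each genotype is an unordered pair of alleles, e.g. ('A', 'T') and ('G', 'C')."""
    a, b = locus1
    resolutions = set()
    for g, h in [(locus2[0], locus2[1]), (locus2[1], locus2[0])]:
        # allele a pairs with g on one chromosome, so b must pair with h on the other
        resolutions.add(tuple(sorted([a + g, b + h])))
    return resolutions

# Homozygous at the first locus: phase is unambiguous (a single resolution).
print(phase_resolutions(('A', 'A'), ('G', 'C')))   # {('AC', 'AG')}
# Heterozygous at both loci: two possible resolutions, AG/TC versus AC/TG.
print(phase_resolutions(('A', 'T'), ('G', 'C')))   # {('AG', 'TC'), ('AC', 'TG')}
```

Statistical phasing methods, discussed next, choose among such resolutions using population-level haplotype frequencies.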
The specifics of these methods vary - some are based on combinatorial approaches (e.g., parsimony ), whereas others use likelihood functions based on different models and assumptions such as the Hardy–Weinberg principle , the coalescent theory model, or perfect phylogeny. The parameters in these models are then estimated using algorithms such as the expectation-maximization algorithm (EM), Markov chain Monte Carlo (MCMC), or hidden Markov models (HMM). Microfluidic whole genome haplotyping is a technique for the physical separation of individual chromosomes from a metaphase cell followed by direct resolution of the haplotype for each allele. In genetics , a gametic phase represents the original allelic combinations that a diploid individual inherits from both parents. [ 11 ] It is therefore a particular association of alleles at different loci on the same chromosome . Gametic phase is influenced by genetic linkage . [ 12 ] Unlike other chromosomes, Y chromosomes generally do not come in pairs. Every human male (excepting those with XYY syndrome ) has only one copy of that chromosome. This means that there is not any chance variation of which copy is inherited, and also (for most of the chromosome) not any shuffling between copies by recombination ; so, unlike autosomal haplotypes, there is effectively not any randomisation of the Y-chromosome haplotype between generations. A human male should largely share the same Y chromosome as his father, give or take a few mutations; thus Y chromosomes tend to pass largely intact from father to son, with a small but accumulating number of mutations that can serve to differentiate male lineages. In particular, the Y-DNA represented as the numbered results of a Y-DNA genealogical DNA test should match, except for mutations. Unique-event polymorphisms (UEPs) such as SNPs represent haplogroups . STRs represent haplotypes. The results that comprise the full Y-DNA haplotype from the Y chromosome DNA test can be divided into two parts: the results for UEPs, sometimes loosely called the SNP results as most UEPs are single-nucleotide polymorphisms , and the results for microsatellite short tandem repeat sequences ( Y-STRs ). The UEP results represent the inheritance of events it is believed can be assumed to have happened only once in all human history. These can be used to identify the individual's Y-DNA haplogroup , his place in the "family tree" of the whole of humanity. Different Y-DNA haplogroups identify genetic populations that are often distinctly associated with particular geographic regions; their appearance in more recent populations located in different regions represents the migrations tens of thousands of years ago of the direct patrilineal ancestors of current individuals. Genetic results also include the Y-STR haplotype , the set of results from the Y-STR markers tested. Unlike the UEPs, the Y-STRs mutate much more easily, which allows them to be used to distinguish recent genealogy. But it also means that, rather than the population of descendants of a genetic event all sharing the same result, the Y-STR haplotypes are likely to have spread apart, to form a cluster of more or less similar results. Typically, this cluster will have a definite most probable center, the modal haplotype (presumably similar to the haplotype of the original founding event), and also a haplotype diversity — the degree to which it has become spread out. 
The further in the past the defining event occurred, and the more that subsequent population growth occurred early, the greater the haplotype diversity will be for a particular number of descendants. However, if the haplotype diversity is smaller for a particular number of descendants, this may indicate a more recent common ancestor, or a recent population expansion. It is important to note that, unlike for UEPs, two individuals with a similar Y-STR haplotype may not necessarily share a similar ancestry. Y-STR events are not unique. Instead, the clusters of Y-STR haplotype results inherited from different events and different histories tend to overlap. In most cases, it is a long time since the haplogroups' defining events, so typically the cluster of Y-STR haplotype results associated with descendants of that event has become rather broad. These results will tend to significantly overlap the (similarly broad) clusters of Y-STR haplotypes associated with other haplogroups. This makes it impossible for researchers to predict with absolute certainty to which Y-DNA haplogroup a Y-STR haplotype would point. If the UEPs are not tested, the Y-STRs may be used only to predict probabilities for haplogroup ancestry, but not certainties. A similar scenario exists in trying to evaluate whether shared surnames indicate shared genetic ancestry. A cluster of similar Y-STR haplotypes may indicate a shared common ancestor, with an identifiable modal haplotype, but only if the cluster is sufficiently distinct from what may have happened by chance from different individuals who historically adopted the same name independently. Many names were adopted from common occupations, for instance, or were associated with habitation of particular sites. More extensive haplotype typing is needed to establish genetic genealogy. Commercial DNA-testing companies now offer their customers testing of more numerous sets of markers to improve definition of their genetic ancestry. The number of sets of markers tested has increased from 12 during the early years to 111 more recently. Establishing plausible relatedness between different surnames data-mined from a database is significantly more difficult. The researcher must establish that the very nearest member of the population in question, chosen purposely from the population for that reason, would be unlikely to match by accident. This is more than establishing that a randomly selected member of the population is unlikely to have such a close match by accident. Because of the difficulty, establishing relatedness between different surnames as in such a scenario is likely to be impossible, except in special cases where there is specific information to drastically limit the size of the population of candidates under consideration. Haplotype diversity is a measure of the uniqueness of a particular haplotype in a given population. The haplotype diversity (H) is computed as: [ 13 ] H = N N − 1 ( 1 − ∑ i x i 2 ) {\displaystyle H={\frac {N}{N-1}}(1-\sum _{i}x_{i}^{2})} where x i {\displaystyle x_{i}} is the (relative) haplotype frequency of each haplotype in the sample and N {\displaystyle N} is the sample size. Haplotype diversity is given for each sample. The term "haplotype" was first introduced by MHC biologist Ruggero Ceppellini during the Third International Histocompatibility Workshop to substitute "pheno-group". [ 14 ] [ 15 ]
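The haplotype diversity formula above is straightforward to apply to a sample; the short sketch below uses invented haplotype counts purely for illustration.

```python
from collections import Counter

def haplotype_diversity(haplotypes):
    """Haplotype diversity H = N/(N-1) * (1 - sum of squared relative frequencies)."""
    n = len(haplotypes)
    counts = Counter(haplotypes)
    sum_sq = sum((c / n) ** 2 for c in counts.values())
    return n / (n - 1) * (1 - sum_sq)

# Example with made-up haplotype labels: 10 samples, 4 distinct haplotypes
# with relative frequencies 0.5, 0.3, 0.1 and 0.1.
sample = ["H1"] * 5 + ["H2"] * 3 + ["H3"] + ["H4"]
print(round(haplotype_diversity(sample), 3))   # 0.711
```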
https://en.wikipedia.org/wiki/Gametic_phase
Gametogamy is the sexual fusion – copulation or fertilization – of two single-celled gametes of different sex, and the union of their gamete nuclei (and corresponding extranuclear structures) to give the zygote nucleus and the whole zygotic content. [ 1 ] [ 2 ] Forms of gametogamy are distinguished according to the morphology , size and other properties of the gametes. From such seeds develop plants whose features are identical to those of the mother plants from which the seed was taken.
https://en.wikipedia.org/wiki/Gametogamy
Gametogenesis is a biological process by which diploid or haploid precursor cells undergo cell division and differentiation to form mature haploid gametes . Depending on the biological life cycle of the organism , gametogenesis occurs by meiotic division of diploid gametocytes into various gametes, or by mitosis. For example, plants produce gametes through mitosis in gametophytes. The gametophytes grow from haploid spores after sporic meiosis. The existence of a multicellular, haploid phase in the life cycle between meiosis and gametogenesis is also referred to as alternation of generations . Male and female gametes take different forms. [ 1 ] Animals produce gametes directly through meiosis from diploid mother cells in organs called gonads ( testis in males and ovaries in females). In mammalian germ cell development, the sexually dimorphic gametes differentiate from primordial germ cells, which arise from pluripotent cells during early mammalian development. [ 2 ] Males and females of a species that reproduces sexually have different forms of gametogenesis (spermatogenesis in males and oogenesis in females). However, before turning into gametogonia, the embryonic development of gametes is the same in males and females. Gametogonia are usually seen as the initial stage of gametogenesis. However, gametogonia are themselves successors of primordial germ cells (PGCs), which migrate from the dorsal endoderm of the yolk sac along the hindgut to the genital ridge . They multiply by mitosis , and, once they have reached the genital ridge in the late embryonic stage, are referred to as gametogonia. Once the germ cells have developed into gametogonia, they are no longer the same between males and females. From gametogonia, male and female gametes develop differently – males by spermatogenesis and females by oogenesis – although, by convention, the succession of stages follows a common pattern in both. In vitro gametogenesis (IVG) is the technique of developing in vitro generated gametes , i.e., "the generation of eggs and sperm from pluripotent stem cells in a culture dish." [ 3 ] This technique is currently feasible in mice and will likely have future success in humans and nonhuman primates. [ 3 ] It allows scientists to create sperm and egg cells by reprogramming adult cells; in this way, embryos could be grown in a laboratory. Even though it is a promising technique for fighting disease, it raises several ethical problems. [ 4 ] Fungi, algae, and primitive plants form specialized haploid structures called gametangia , where gametes are produced through mitosis. In some fungi, such as the Zygomycota , the gametangia are single cells, situated on the ends of hyphae , which act as gametes by fusing into a zygote . More typically, gametangia are multicellular structures that differentiate into male and female organs ( antheridia and archegonia ). In angiosperms , the male gametes (always two) are produced inside the pollen tube (in 70% of the species) or inside the pollen grain (in 30% of the species) through the division of a generative cell into two sperm nuclei.
Depending on the species, this can occur while the pollen forms in the anther (tricellular pollen) or after pollination and growth of the pollen tube (pollen that remains bicellular in the anther and on the stigma). The female gamete is produced inside the embryo sac of the ovule . Meiosis is a central feature of gametogenesis, but the adaptive function of meiosis is currently a matter of debate. A key event during meiosis is the pairing of homologous chromosomes and recombination (exchange of genetic information) between homologous chromosomes. This process promotes the production of increased genetic diversity among progeny and the recombinational repair of damage in the DNA to be passed on to progeny. To explain the adaptive function of meiosis (as well as of gametogenesis and the sexual cycle), some authors emphasize diversity, [ 5 ] and others emphasize DNA repair . [ 6 ] In sexually reproducing organisms, meiosis is the type of cell division that leaves the gametes with half the number of chromosomes. [ 7 ] There are two key differences between mammalian and plant gametogenesis. First, there is no predetermined germline in plants. Male or female gametophyte-producing cells diverge from the reproductive meristem, a totipotent clump of developing cells in the adult plant that creates all the flower's features (both sexual and asexual structures). Second, meiosis is followed by mitotic divisions and differentiation to create the gametes. In plants, the female gametes (the egg cell and the central cell) are accompanied by sister, non-gametic cells (the synergids and the antipodal cells). During male gametogenesis, the haploid microspore passes through a mitosis to create a vegetative cell and a generative cell. The generative cell then undergoes a second mitotic division, resulting in the creation of the two sperm cells. If imprints are created during male and female gametogenesis, they could arise from premeiotic, postmeiotic, premitotic, or postmitotic events. However, if only one of the daughter cells receives parental imprints following mitosis, this would result in two functionally different female gametes or two functionally different sperm cells. Demethylation is seen in the pollen grain following the second meiotic division and before the generative cell's mitosis, as discussed above. Along with pollen differentiation, various structural and compositional DNA alterations also occur. These modifications are potential steps for the genome-wide erasure and/or reprogramming of the imprinting that happens in animals. During the growth of sperm cells, the male DNA is extensively demethylated in plants, whereas the converse is true in animals.
https://en.wikipedia.org/wiki/Gametogenesis
Gametophytic selection is the selection of one haploid pollen grain over another through the means of pollen competition (see also certation ), such that the resulting sporophytic generations are positively affected by this competition. [ 1 ] Evidence for the positive effects of gametophytic selection on the sporophyte generation has been observed in several flowering plant species, but there is still some debate as to the biological significance of gametophytic selection. [ 1 ] [ 2 ] The competitive ability of pollen grains ( microgametophytes ) is rooted in the expression of their haploid genomes. The haploid genes are expressed immediately after pollen development and during pollen germination and pollen-tube growth. [ 2 ] About 60% of genes expressed in the sporophyte are also expressed in the microgametophyte. [ 3 ] This expression influences the ability of pollen tubes to compete during growth. [ 2 ] When pollen competition occurs, the competitive ability is determined by differences in pollen-tube growth rate or in the time it takes for germination to occur. [ 4 ] The opportunity for pollen competition increases when pollen is not limiting and is abundant relative to the number of ovules present in the ovary, but this does not guarantee pollen competition. [ 2 ] [ 4 ] Studies on corn have observed a non-random success of pollen grains possessing different alleles, resulting in ratios that differ from those expected by Mendel's Law of Segregation of Genes (certation). Pollen from a heterozygous sporophyte should produce an equal distribution of the two gamete types among offspring; evidence of higher fertilization frequencies by pollen carrying one allele resulted in differences from the expected random mating ratios. [ 5 ] [ 6 ] Evidence suggests that gametophytic selection may influence the fitness of seedlings in the next sporophytic generation. [ 2 ] Studies on specific species have observed improvement of offspring quality, suggesting that the rate of pollen-tube growth in the style is positively correlated with the rate of seedling growth in the next generation. [ 7 ] In experiments on Dianthus chinensis , when pollen tubes had to grow a longer distance through the style, the offspring had increased vigor and competitive ability. [ 2 ] Pollen competition is also one of the primary drivers for cryptic self-incompatibility favoring outcrossed pollen for fertilization. [ 7 ] Faster pollen tube growth rate in Dalechampia scandens results in reduced inbreeding depression in mixed-mating systems due to intense pollen competition after self-pollination. [ 1 ] Gametophytic selection was apparently responsible for increased seed mass and radicle growth in selfed seedlings. [ 8 ] Experiments on Rumex hastatulus demonstrated that sex ratio differences were not induced by environmental or biotic variables, but that pollen competition did result in skewed sex ratios. [ 9 ] Current hypotheses suggest that gametophytic selection in early seedless land plants would have had negative repercussions due to the limitations imposed by environmental selection on independent gametophytes , like those of bryophytes and ferns. Polyploidy may have been a mechanism that avoided these repercussions in modern ferns. [ 7 ] Flowering plants may have seen benefits from gametophytic selection occurring during pollen-tube growth in the style.
[ 7 ] It has been proposed that gametophytic selection contributed to the radiation of flowering plants with closed carpels, with more efficient pollen transfer by insects enhancing the selective pressure on microgametophytes. [ 1 ] The biological importance of gametophytic selection continues to be a subject of discussion. It has been suggested that the heritable effect of the genes carried by haploid gametes may not be significant, and that differences in the number of pollen grains on the stigma, or in the distance pollen tubes travel through the style, may instead have promoted differences in seed provisioning that produced the observed differences in seedling growth, rather than heritable genetic differences resulting from pollen competition. [ 10 ]
https://en.wikipedia.org/wiki/Gametophytic_selection
Gamma-glutamyltransferase (also γ-glutamyltransferase , GGT , gamma-GT , gamma-glutamyl transpeptidase ; [ 1 ] EC 2.3.2.2 ) is a transferase (a type of enzyme ) that catalyzes the transfer of gamma- glutamyl functional groups from molecules such as glutathione to an acceptor that may be an amino acid , a peptide or water (forming glutamate ). [ 1 ] [ 2 ] : 268 GGT plays a key role in the gamma-glutamyl cycle , a pathway for the synthesis and degradation of glutathione as well as drug and xenobiotic detoxification. [ 3 ] Other lines of evidence indicate that GGT can also exert a pro-oxidant role, with regulatory effects at various levels in cellular signal transduction and cellular pathophysiology. [ 4 ] This transferase is found in many tissues, the most notable one being the liver , and has significance in medicine as a diagnostic marker. The name γ-glutamyltransferase is preferred by the Nomenclature Committee of the International Union of Biochemistry and Molecular Biology . [ 5 ] [ 2 ] The Expert Panel on Enzymes of the International Federation of Clinical Chemistry also used this name. [ 6 ] [ 2 ] The older name is gamma-glutamyl transpeptidase (GGTP). [ 2 ] GGT is present in the cell membranes of many tissues, including the kidneys , bile duct , pancreas , gallbladder , spleen , heart , brain , and seminal vesicles . [ 7 ] It is involved in the transfer of amino acids across the cellular membrane [ 8 ] and leukotriene metabolism. [ 9 ] It is also involved in glutathione metabolism by transferring the glutamyl moiety to a variety of acceptor molecules including water, certain L -amino acids, and peptides, leaving the cysteine product to preserve intracellular homeostasis of oxidative stress . [ 10 ] [ 11 ] This general reaction is: In prokaryotes and eukaryotes, GGT consists of two polypeptide chains, a heavy and a light subunit, processed from a single chain precursor by an autocatalytic cleavage. [ 12 ] The active site of GGT is known to be located in the light subunit. [ citation needed ] Co-translational N -glycosylation serves a significant role in the proper autocatalytic cleavage and proper folding of GGT. Single site mutations at asparagine residues were shown to result in a functionally active yet slightly less thermally stable version of the enzyme in vitro, while knockout of all asparagine residues resulted in an accumulation of the uncleaved, propeptide form of the enzyme. [ 12 ] A GGT test is predominantly used as a diagnostic marker for liver disease . [ citation needed ] Elevated serum GGT activity can be found in diseases of the liver, biliary system, pancreas and kidneys. [ 13 ] [ 14 ] Latent elevations in GGT are typically seen in patients with chronic viral hepatitis infections often taking 12 months or more to present. [ citation needed ] Individual test results should always be interpreted using the reference range from the laboratory that performed the test, though example reference ranges are 15–85 IU/L for men, and 5–55 IU/L for women. [ 15 ] GGT is similar to alkaline phosphatase (ALP) in detecting disease of the biliary tract . Indeed, the two markers correlate well, though there are conflicting data about whether GGT has better sensitivity . [ 16 ] [ 17 ] In general, ALP is still the first test for biliary disease . The main value of GGT is in verifying that ALP elevations are, in fact, due to biliary disease; ALP can also be increased in certain bone diseases, but GGT is not. [ 17 ] GGT is elevated by ingestion of large quantities of alcohol . 
[ citation needed ] However, determination of high levels of total serum GGT activity is not specific to alcohol intoxication, [ 18 ] and the measurement of selected serum forms of the enzyme offer more specific information. [ 19 ] Isolated elevation or disproportionate elevation compared to other liver enzymes (such as ALT or alanine transaminase ) can indicate harmful alcohol use or alcoholic liver disease , [ 20 ] and can indicate excess alcohol consumption up to 3 or 4 weeks prior to the test. [ citation needed ] The mechanism for this elevation is unclear. Alcohol might increase GGT production by inducing hepatic microsomal production, or it might cause the leakage of GGT from hepatocytes . [ 21 ] Numerous drugs can raise GGT levels, including phenobarbitone and phenytoin . [ 22 ] GGT elevation has also been occasionally reported following nonsteroidal anti-inflammatory drugs (including aspirin ) [ citation needed ] , St. John's wort [ citation needed ] and kava . [ 23 ] [ failed verification ] More recently, slightly elevated serum GGT has also been found to correlate with cardiovascular diseases and is under active investigation as a cardiovascular risk marker. GGT in fact accumulates in atherosclerotic plaques , [ 24 ] suggesting a potential role in pathogenesis of cardiovascular diseases, [ 25 ] and circulates in blood in the form of distinct protein aggregates, [ 19 ] some of which appear to be related to specific pathologies such as metabolic syndrome , alcohol addiction and chronic liver disease . Elevated levels of GGT can also be due to congestive heart failure . [ 26 ] GGT is expressed in high levels in many different tumors. It is known to accelerate tumor growth and to increase resistance to cisplatin in tumors. [ 27 ] Human proteins that belong to this family include GGT1 , GGT2 , GGT6 , GGTL3 , GGTL4 , GGTLA1 and GGTLA4 . (See Template:Leucine metabolism in humans – this diagram does not include the pathway for β-leucine synthesis via leucine 2,3-aminomutase)
https://en.wikipedia.org/wiki/Gamma-glutamyltransferase
Gamma-ray astronomy is a subfield of astronomy where scientists observe and study celestial objects and phenomena in outer space which emit cosmic electromagnetic radiation in the form of gamma rays , [ nb 1 ] i.e. photons with the highest energies (above 100 keV ) at the very shortest wavelengths. Radiation below 100 keV is classified as X-rays and is the subject of X-ray astronomy . In most cases, gamma rays from solar flares and Earth's atmosphere fall in the MeV range, but it's now known that solar flares can also produce gamma rays in the GeV range, contrary to previous beliefs. Much of the detected gamma radiation stems from collisions between hydrogen gas and cosmic rays within our galaxy . These gamma rays, originating from diverse mechanisms such as electron-positron annihilation , the inverse Compton effect and in some cases gamma decay , [ 2 ] occur in regions of extreme temperature, density, and magnetic fields, reflecting violent astrophysical processes like the decay of neutral pions . They provide insights into extreme events like supernovae , hypernovae , and the behavior of matter in environments such as pulsars and blazars . A huge number of gamma ray emitting high-energy systems like black holes , stellar coronas , neutron stars , white dwarf stars, remnants of supernova, clusters of galaxies, including the Crab Nebula and the Vela pulsar (the most powerful source so far), have been identified, alongside an overall diffuse gamma-ray background along the plane of the Milky Way galaxy. Cosmic radiation with the highest energy triggers electron-photon cascades in the atmosphere, while lower-energy gamma rays are only detectable above it. Gamma-ray bursts , like GRB 190114C , are transient phenomena challenging our understanding of high-energy astrophysical processes , ranging from microseconds to several hundred seconds. Gamma rays are difficult to detect due to their high energy and their blocking by the Earth’s atmosphere, necessitating balloon-borne detectors and artificial satellites in space. Early experiments in the 1950s and 1960s used balloons to carry instruments to access altitudes where the atmospheric absorption of gamma rays is low, followed by the launch of the first gamma-ray satellites: SAS 2 (1972) and COS-B (1975). These were defense satellites originally designed to detect gamma rays from secret nuclear testing, but they discovered puzzling gamma-ray bursts coming from deep space. In the 1970s, satellite observatories found several gamma-ray sources, among which a very strong source called Geminga was later identified as a pulsar in proximity. The Compton Gamma Ray Observatory (launched in 1991) revealed numerous gamma-ray sources in space. Today, both ground-based observatories like the VERITAS array and space-based telescopes like the Fermi Gamma-ray Space Telescope (launched in 2008) contribute significantly to gamma-ray astronomy. This interdisciplinary field involves collaboration among physicists, astrophysicists, and engineers in projects like the High Energy Stereoscopic System (H.E.S.S.), which explores extreme astrophysical environments like the vicinity of black holes in active galactic nuclei . Studying gamma rays provides valuable insights into extreme astrophysical environments, as observed by the H.E.S.S. Observatory. Ongoing research aims to expand our understanding of gamma-ray sources, such as blazars, and their implications for cosmology. 
As GeV gamma rays are important in the study of extra-solar, and especially extragalactic , astronomy, new observations may complicate some prior models and findings. [ 3 ] [ 4 ] Future developments in gamma-ray astronomy will integrate data from gravitational wave and neutrino observatories ( Multi-messenger astronomy ), enriching our understanding of cosmic events like neutron star mergers. Technological advancements, including advanced mirror designs, better camera technologies, improved trigger systems, faster readout electronics , high-performance photon detectors like Silicon photomultipliers (SiPMs), alongside innovative data processing algorithms like time-tagging techniques and event reconstruction methods, will enhance spatial and temporal resolution . Machine learning algorithms and big data analytics will facilitate the extraction of meaningful insights from vast datasets, leading to discoveries of new gamma-ray sources, identification of specific gamma-ray signatures, and improved modeling of gamma-ray emission mechanisms. Future missions may include space telescopes and lunar gamma-ray observatories (taking advantage of the Moon 's lack of atmosphere and stable environment for prolonged observations), enabling observations in previously inaccessible regions. The ground-based Cherenkov Telescope Array project, a next-generation gamma ray observatory which will incorporate many of these improvements and will be ten times more sensitive, is planned to be fully operational by 2025. [ 5 ] Long before experiments could detect gamma rays emitted by cosmic sources, scientists had known that the universe should be producing them. Work by Eugene Feenberg and Henry Primakoff in 1948, Sachio Hayakawa and I.B. Hutchinson in 1952, and, especially, Philip Morrison in 1958 [ 6 ] had led scientists to believe that a number of different processes which were occurring in the universe would result in gamma-ray emission. These processes included cosmic ray interactions with interstellar gas , supernova explosions, and interactions of energetic electrons with magnetic fields . However, it was not until the 1960s that our ability to actually detect these emissions came to pass. [ 7 ] Most gamma rays coming from space are absorbed by the Earth's atmosphere, so gamma-ray astronomy could not develop until it was possible to get detectors above all or most of the atmosphere using balloons and spacecraft. The first gamma-ray telescope carried into orbit, on the Explorer 11 satellite in 1961, picked up fewer than 100 cosmic gamma-ray photons. They appeared to come from all directions in the Universe, implying some sort of uniform "gamma-ray background". Such a background would be expected from the interaction of cosmic rays (very energetic charged particles in space) with interstellar gas. The first true astrophysical gamma-ray sources were solar flares, which revealed the strong 2.223 MeV line predicted by Morrison. This line results from the formation of deuterium via the union of a neutron and proton; in a solar flare the neutrons appear as secondaries from interactions of high-energy ions accelerated in the flare process. These first gamma-ray line observations were from OSO 3 , OSO 7 , and the Solar Maximum Mission , the latter spacecraft launched in 1980. The solar observations inspired theoretical work by Reuven Ramaty and others. [ 8 ] Significant gamma-ray emission from our galaxy was first detected in 1967 [ 9 ] by the detector aboard the OSO 3 satellite. 
It detected 621 events attributable to cosmic gamma rays. However, the field of gamma-ray astronomy took great leaps forward with the SAS-2 (1972) and the Cos-B (1975–1982) satellites. These two satellites provided an exciting view into the high-energy universe (sometimes called the 'violent' universe, because the kinds of events in space that produce gamma rays tend to be high-speed collisions and similar processes). They confirmed the earlier findings of the gamma-ray background, produced the first detailed map of the sky at gamma-ray wavelengths, and detected a number of point sources. However the resolution of the instruments was insufficient to identify most of these point sources with specific visible stars or stellar systems. A discovery in gamma-ray astronomy came in the late 1960s and early 1970s from a constellation of military defense satellites. Detectors on board the Vela satellite series, designed to detect flashes of gamma rays from nuclear bomb blasts, began to record bursts of gamma rays from deep space rather than the vicinity of the Earth. Later detectors determined that these gamma-ray bursts are seen to last for fractions of a second to minutes, appearing suddenly from unexpected directions, flickering, and then fading after briefly dominating the gamma-ray sky. Studied since the mid-1980s with instruments on board a variety of satellites and space probes, including Soviet Venera spacecraft and the Pioneer Venus Orbiter , the sources of these enigmatic high-energy flashes remain a mystery. They appear to come from far away in the Universe, and currently the most likely theory seems to be that at least some of them come from so-called hypernova explosions—supernovas creating black holes rather than neutron stars . Nuclear gamma rays were observed from the solar flares of August 4 and 7, 1972, and November 22, 1977. [ 10 ] A solar flare is an explosion in a solar atmosphere and was originally detected visually in the Sun . Solar flares create massive amounts of radiation across the full electromagnetic spectrum from the longest wavelength, radio waves , to high energy gamma rays. The correlations of the high energy electrons energized during the flare and the gamma rays are mostly caused by nuclear combinations of high energy protons and other heavier ions. These gamma rays can be observed and allow scientists to determine the major results of the energy released, which is not provided by the emissions from other wavelengths. [ 11 ] See also Magnetar#1979 discovery detection of a soft gamma repeater . Observation of gamma rays first became possible in the 1960s. Their observation is much more problematic than that of X-rays or of visible light, because gamma-rays are comparatively rare, even a "bright" source needing an observation time of several minutes before it is even detected, and because gamma rays are difficult to focus, resulting in a very low resolution. The most recent generation of gamma-ray telescopes (2000s) have a resolution of the order of 6 arc minutes in the GeV range (seeing the Crab Nebula as a single "pixel"), compared to 0.5 arc seconds seen in the low energy X-ray (1 keV) range by the Chandra X-ray Observatory (1999), and about 1.5 arc minutes in the high energy X-ray (100 keV) range seen by High-Energy Focusing Telescope (2005). Very energetic gamma rays, with photon energies over ~30 GeV, can also be detected by ground-based experiments. 
The extremely low photon fluxes at such high energies require detector effective areas that are impractically large for current space-based instruments. Such high-energy photons produce extensive showers of secondary particles in the atmosphere that can be observed on the ground, both directly by radiation counters and optically via the Cherenkov light which the ultra-relativistic shower particles emit. The Imaging Atmospheric Cherenkov Telescope technique currently achieves the highest sensitivity. Gamma radiation in the TeV range emanating from the Crab Nebula was first detected in 1989 by the Fred Lawrence Whipple Observatory at Mt. Hopkins , in Arizona in the USA. Modern Cherenkov telescope experiments like H.E.S.S. , VERITAS , MAGIC , and CANGAROO III can detect the Crab Nebula in a few minutes. The most energetic photons (up to 16 TeV ) observed from an extragalactic object originate from the blazar , Markarian 501 (Mrk 501). These measurements were done by the High-Energy-Gamma-Ray Astronomy ( HEGRA ) air Cherenkov telescopes. Gamma-ray astronomy observations are still limited by non-gamma-ray backgrounds at lower energies, and, at higher energy, by the number of photons that can be detected. Larger area detectors and better background suppression are essential for progress in the field. [ 12 ] A discovery in 2012 may allow focusing gamma-ray telescopes. [ 13 ] At photon energies greater than 700 keV, the index of refraction starts to increase again. [ 13 ] On June 19, 1988, from Birigüi (50° 20' W, 21° 20' S) at 10:15 UTC a balloon launch occurred which carried two NaI(Tl) detectors ( 600 cm 2 total area) to an air pressure altitude of 5.5 mb for a total observation time of 6 hours. [ 14 ] The supernova SN1987A in the Large Magellanic Cloud (LMC) was discovered on February 23, 1987, and its progenitor, Sanduleak -69 202 , was a blue supergiant with luminosity of 2-5 × 10 38 erg/s. [ 14 ] The 847 keV and 1238 keV gamma-ray lines from 56 Co decay have been detected. [ 14 ] During its High Energy Astronomy Observatory program in 1977, NASA announced plans to build a "great observatory" for gamma-ray astronomy. The Compton Gamma Ray Observatory (CGRO) was designed to take advantage of the major advances in detector technology during the 1980s, and was launched in 1991. The satellite carried four major instruments which have greatly improved the spatial and temporal resolution of gamma-ray observations. The CGRO provided large amounts of data which are being used to improve our understanding of the high-energy processes in our Universe. CGRO was de-orbited in June 2000 as a result of the failure of one of its stabilizing gyroscopes . BeppoSAX was launched in 1996 and deorbited in 2003. It predominantly studied X-rays, but also observed gamma-ray bursts. By identifying the first non-gamma ray counterparts to gamma-ray bursts, it opened the way for their precise position determination and optical observation of their fading remnants in distant galaxies. The High Energy Transient Explorer 2 (HETE-2) was launched in October 2000 (on a nominally 2-year mission) and was still operational (but fading) in March 2007. The HETE-2 mission ended in March 2008. Swift , a NASA spacecraft, was launched in 2004 and carries the BAT instrument for gamma-ray burst observations. Following BeppoSAX and HETE-2, it has observed numerous X-ray and optical counterparts to bursts, leading to distance determinations and detailed optical follow-up. 
These have established that most bursts originate in the explosions of massive stars ( supernovas and hypernovas ) in distant galaxies. As of 2021, Swift remains operational. [ 16 ] Currently the (other) main space-based gamma-ray observatories are INTEGRAL (International Gamma-Ray Astrophysics Laboratory), Fermi , and AGILE (Astro-rivelatore Gamma a Immagini Leggero). In November 2010, using the Fermi Gamma-ray Space Telescope , two gigantic gamma-ray bubbles, each spanning about 25,000 light-years, were detected at the heart of the Milky Way . These bubbles of high-energy radiation are suspected of erupting from a massive black hole or of being evidence of a burst of star formation millions of years ago. They were discovered after scientists filtered out the "fog of background gamma-rays suffusing the sky". This discovery confirmed previous clues that a large unknown "structure" lay at the center of the Milky Way. [ 17 ] In 2011 the Fermi team released its second catalog of gamma-ray sources detected by the satellite's Large Area Telescope (LAT), an inventory of 1,873 objects shining with the highest-energy form of light. 57% of the sources are blazars . Over half of the sources are active galaxies whose central black holes produced the gamma-ray emission detected by the LAT. One third of the sources had not been detected at other wavelengths. [ 15 ] Ground-based gamma-ray observatories include HAWC , MAGIC , HESS , and VERITAS . Ground-based observatories probe a higher energy range than space-based observatories, since their effective areas can be many orders of magnitude larger than a satellite's. In April 2018, the largest catalog yet of high-energy gamma-ray sources in space was published. [ 18 ] In an 18 May 2021 press release, China's Large High Altitude Air Shower Observatory (LHAASO) reported the detection of a dozen ultra-high-energy gamma rays with energies exceeding 1 peta-electron-volt (quadrillion electron-volts or PeV), including one at 1.4 PeV, the highest-energy photon ever observed. The authors of the report have named the sources of these PeV gamma rays PeVatrons. In 2024, LHAASO announced the detection of a 2.5 PeV gamma ray originating from the Cygnus X region. Astronomers using the Gemini South telescope in Chile observed the flash from a gamma-ray burst, identified as GRB 221009A , on 14 October 2022. Gamma-ray bursts are the most energetic flashes of light known to occur in the universe. NASA scientists estimated that the burst occurred about 2.4 billion light-years from Earth. The burst, located in the direction of the constellation Sagitta , is thought to have occurred when a giant star exploded at the end of its life before collapsing into a black hole. Photons with energies of up to 18 teraelectronvolts (TeV), and possibly as high as 251 TeV, have been reported from the burst. GRB 221009A appears to have been a long gamma-ray burst, possibly triggered by a supernova explosion. [ 19 ] [ 20 ]
https://en.wikipedia.org/wiki/Gamma-ray_astronomy
Gamma was a Soviet gamma ray telescope . It was launched on 11 July 1990 into an orbit around Earth with a height of 375 km and an inclination of 51.6 degrees. It lasted for around 2 years. On board the mission were three telescopes, all of which could be pointed at the same source. It was a joint Soviet–French project. [ 4 ] The Gamma-1 telescope was the main telescope. It consisted of 2 scintillation counters and a gas Cherenkov counter . With an effective area of around 0.2 square metres (2.2 sq ft), it operated in the energy range of 50 MeV to 6 GeV . At 100 MeV it initially had an angular resolution of 1.5 degrees , with a field of view of 5 degrees and an energy resolution of 12%. A Telezvezda star tracker increased the pointing position accuracy of the Gamma-1 telescope to 2 arcminutes by tracking stars up to an apparent magnitude of 5 within its 6 by 6 degree field of view. However, due to the failure of power to a spark chamber , for most of the mission the resolution was around 10 degrees. [ 4 ] The telescope was conceived in 1965, as part of the Soviet Cloud Space Station , which evolved into the Multi-module Orbital Complex (MOK). [ 5 ] When work on Gamma finally began in 1972, it was intended to create a Gamma observatory , the first space station module for MOK, the first modular space station in the Salyut programme . [ 6 ] For this, the scientific instruments of the observatory were to be added to a spacecraft derived from the Progress spacecraft – with the Progress in turn being a Soyuz derivative – and this spacecraft would dock to a MOK space station. However, in 1974, at the time it became a joint venture with France, the MOK space station project was canceled, and in February 1976, the Soviet space program was reconfigured. When production of the telescope was authorized on 16 February 1979, the plans for the Soviet space station modules had evolved to use the Functional Cargo Block of the TKS spacecraft instead, with the Kvant-1 Roentgen observatory eventually becoming the first such module for Mir – as a result of these changes the Gamma observatory was redesigned as the free-flying Gamma satellite . When the telescope was authorized in 1979, it was planned to be launched in 1984, but the actual launch was delayed until 1990. The Disk-M telescope operated in the energy range 20 keV – 5 MeV. It consisted of sodium iodide scintillation crystals , and had an angular resolution of 25 arcminutes. However, it stopped working shortly after the mission was launched. [ 4 ] Finally, the Pulsar X-2 telescope had 30 arcminute resolution and a 10 by 10 degree field of view, and operated in the energy range 2–25 keV. [ 4 ] Observations included studies of the Vela Pulsar , the Galactic Center , Cygnus X-1 , Hercules X-1 and the Crab Nebula . The telescopes also measured the Sun during peak solar activity . [ 4 ]
https://en.wikipedia.org/wiki/Gamma_(satellite)
Gamma helix (or γ-helix ) [ 2 ] [ 3 ] is a type of secondary structure in proteins that was predicted by Pauling , Corey , and Branson , [ 1 ] [ 4 ] but has never been observed in natural proteins. [ 3 ] The hydrogen bond in this type of helix was predicted to be between the N-H group of one amino acid and the C=O group of the amino acid six residues earlier (or, as described by Pauling, Corey, and Branson, "to the fifth amide group beyond it"). This can also be described as an i + 6 → i bond, and it would be a continuation of the series ( 3 10 helix , alpha helix , pi helix and gamma helix). This theoretical helix contains 5.1 residues per turn. [ 1 ] However, a fully developed gamma helix has the characteristics of a structure with 2.2 amino acid residues per turn, a rise of 2.75 Å per residue, and a pseudo-cyclic (C7) structure closed by an intramolecular H-bond. Depending on the amino acid's side chain (R) involved in this main-chain reversal motif, two stereoisomers can occur, with their Cα-substituent located either in the axial or in the equatorial position relative to the H-bonded pseudo-cycle . [ 5 ]
https://en.wikipedia.org/wiki/Gamma_helix
In mathematical physics , the gamma matrices , { γ 0 , γ 1 , γ 2 , γ 3 } , {\displaystyle \ \left\{\gamma ^{0},\gamma ^{1},\gamma ^{2},\gamma ^{3}\right\}\ ,} also called the Dirac matrices , are a set of conventional matrices with specific anticommutation relations that ensure they generate a matrix representation of the Clifford algebra C l 1 , 3 ( R ) . {\displaystyle \ \mathrm {Cl} _{1,3}(\mathbb {R} )~.} It is also possible to define higher-dimensional gamma matrices . When interpreted as the matrices of the action of a set of orthogonal basis vectors for contravariant vectors in Minkowski space , the column vectors on which the matrices act become a space of spinors , on which the Clifford algebra of spacetime acts. This in turn makes it possible to represent infinitesimal spatial rotations and Lorentz boosts . Spinors facilitate spacetime computations in general, and in particular are fundamental to the Dirac equation for relativistic spin 1 2 {\displaystyle {\tfrac {\ 1\ }{2}}} particles. Gamma matrices were introduced by Paul Dirac in 1928. [ 1 ] [ 2 ] In the Dirac representation , γ 0 {\displaystyle \gamma ^{0}} is the time-like, Hermitian matrix, while the other three gamma matrices are space-like, anti-Hermitian matrices. More compactly, γ 0 = σ 3 ⊗ I 2 , {\displaystyle \ \gamma ^{0}=\sigma ^{3}\otimes I_{2}\ ,} and γ j = i σ 2 ⊗ σ j , {\displaystyle \ \gamma ^{j}=i\sigma ^{2}\otimes \sigma ^{j}\ ,} where ⊗ {\displaystyle \ \otimes \ } denotes the Kronecker product and the σ j {\displaystyle \ \sigma ^{j}\ } (for j = 1, 2, 3 ) denote the Pauli matrices . In addition, for discussions of group theory the identity matrix ( I ) is sometimes included with the four gamma matrices, and there is an auxiliary, "fifth" traceless matrix used in conjunction with the regular gamma matrices. The "fifth matrix" γ 5 {\displaystyle \ \gamma ^{5}\ } is not a proper member of the main set of four; it is used for separating nominal left and right chiral representations . The gamma matrices have a group structure, the gamma group , that is shared by all matrix representations of the group, in any dimension, for any signature of the metric. For example, the 2×2 Pauli matrices are a set of "gamma" matrices in three dimensional space with metric of Euclidean signature (3, 0). In five spacetime dimensions, the four gammas, above, together with the fifth gamma-matrix to be presented below generate the Clifford algebra. The defining property for the gamma matrices to generate a Clifford algebra is the anticommutation relation { γ μ , γ ν } = γ μ γ ν + γ ν γ μ = 2 η μ ν I 4 , {\displaystyle \ \left\{\gamma ^{\mu },\gamma ^{\nu }\right\}=\gamma ^{\mu }\gamma ^{\nu }+\gamma ^{\nu }\gamma ^{\mu }=2\eta ^{\mu \nu }I_{4}\ ,} where the curly brackets { , } {\displaystyle \ \{,\}\ } represent the anticommutator , η μ ν {\displaystyle \ \eta _{\mu \nu }\ } is the Minkowski metric with signature (+ − − −) , and I 4 {\displaystyle I_{4}} is the 4 × 4 identity matrix . This defining property is more fundamental than the numerical values used in the specific representation of the gamma matrices. Covariant gamma matrices are defined by γ μ = η μ ν γ ν , {\displaystyle \ \gamma _{\mu }=\eta _{\mu \nu }\gamma ^{\nu }\ ,} and Einstein notation is assumed. Note that the other sign convention for the metric, (− + + +) , necessitates either a change in the defining equation, { γ μ , γ ν } = − 2 η μ ν I 4 , {\displaystyle \ \left\{\gamma ^{\mu },\gamma ^{\nu }\right\}=-2\eta ^{\mu \nu }I_{4}\ ,} or a multiplication of all gamma matrices by i {\displaystyle i} , which of course changes their hermiticity properties detailed below. 
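As a concrete check on these definitions, the compact Kronecker-product form quoted above can be evaluated numerically. The following sketch (an illustration only, assuming Python with NumPy rather than anything from the article itself) builds the Dirac-representation matrices and verifies the defining anticommutation relation for the (+ − − −) metric:

```python
import numpy as np

# Pauli matrices and the 2x2 identity
I2 = np.eye(2)
sigma = [np.array([[0, 1], [1, 0]], dtype=complex),     # sigma^1
         np.array([[0, -1j], [1j, 0]], dtype=complex),  # sigma^2
         np.array([[1, 0], [0, -1]], dtype=complex)]    # sigma^3

# Dirac representation: gamma^0 = sigma^3 (x) I_2,  gamma^j = i sigma^2 (x) sigma^j
gamma = [np.kron(sigma[2], I2)] + [np.kron(1j * sigma[1], s) for s in sigma]

# Minkowski metric with signature (+ - - -)
eta = np.diag([1.0, -1.0, -1.0, -1.0])

# Defining Clifford-algebra relation: {gamma^mu, gamma^nu} = 2 eta^{mu nu} I_4
for mu in range(4):
    for nu in range(4):
        anticomm = gamma[mu] @ gamma[nu] + gamma[nu] @ gamma[mu]
        assert np.allclose(anticomm, 2 * eta[mu, nu] * np.eye(4))

# gamma^0 is Hermitian, the three spatial gammas are anti-Hermitian
assert np.allclose(gamma[0], gamma[0].conj().T)
for j in range(1, 4):
    assert np.allclose(gamma[j], -gamma[j].conj().T)
print("Dirac-representation gamma matrices generate the Clifford algebra Cl(1,3).")
```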
Under the alternative sign convention for the metric the covariant gamma matrices are then defined by The Clifford algebra C l 1 , 3 ( R ) {\displaystyle \ \mathrm {Cl} _{1,3}(\mathbb {R} )\ } over spacetime V can be regarded as the set of real linear operators from V to itself, End( V ) , or more generally, when complexified to C l 1 , 3 ( R ) C , {\displaystyle \ \mathrm {Cl} _{1,3}(\mathbb {R} )_{\mathbb {C} }\ ,} as the set of linear operators from any four-dimensional complex vector space to itself. More simply, given a basis for V , C l 1 , 3 ( R ) C {\displaystyle \ \mathrm {Cl} _{1,3}(\mathbb {R} )_{\mathbb {C} }\ } is just the set of all 4×4 complex matrices, but endowed with a Clifford algebra structure. Spacetime is assumed to be endowed with the Minkowski metric η μν . A space of bispinors, U x , is also assumed at every point in spacetime, endowed with the bispinor representation of the Lorentz group . The bispinor fields Ψ of the Dirac equations, evaluated at any point x in spacetime, are elements of U x (see below). The Clifford algebra is assumed to act on U x as well (by matrix multiplication with column vectors Ψ( x ) in U x for all x ). This will be the primary view of elements of C l 1 , 3 ( R ) C {\displaystyle \ \mathrm {Cl} _{1,3}(\mathbb {R} )_{\mathbb {C} }\ } in this section. For each linear transformation S of U x , there is a transformation of End( U x ) given by S E S −1 for E in C l 1 , 3 ( R ) C ≈ End ⁡ ( U x ) . {\displaystyle \ \mathrm {Cl} _{1,3}(\mathbb {R} )_{\mathbb {C} }\approx \operatorname {End} (U_{x})~.} If S belongs to a representation of the Lorentz group, then the induced action E ↦ S E S −1 will also belong to a representation of the Lorentz group, see Representation theory of the Lorentz group . If S(Λ) is the bispinor representation acting on U x of an arbitrary Lorentz transformation Λ in the standard (4 vector) representation acting on V , then there is a corresponding operator on End ⁡ ( U x ) = C l 1 , 3 ( R ) C {\displaystyle \ \operatorname {End} \left(U_{x}\right)=\mathrm {Cl} _{1,3}\left(\mathbb {R} \right)_{\mathbb {C} }\ } given by equation: showing that the quantity of γ μ can be viewed as a basis of a representation space of the 4 vector representation of the Lorentz group sitting inside the Clifford algebra. The last identity can be recognized as the defining relationship for matrices belonging to an indefinite orthogonal group , which is η Λ T η = Λ − 1 , {\displaystyle \ \eta \Lambda ^{\textsf {T}}\eta =\Lambda ^{-1}\ ,} written in indexed notation. This means that quantities of the form should be treated as 4 vectors in manipulations. It also means that indices can be raised and lowered on the γ using the metric η μν as with any 4 vector. The notation is called the Feynman slash notation . The slash operation maps the basis e μ of V , or any 4 dimensional vector space, to basis vectors γ μ . The transformation rule for slashed quantities is simply One should note that this is different from the transformation rule for the γ μ , which are now treated as (fixed) basis vectors. The designation of the 4 tuple ( γ μ ) μ = 0 3 = ( γ 0 , γ 1 , γ 2 , γ 3 ) {\displaystyle \left(\gamma ^{\mu }\right)_{\mu =0}^{3}=\left(\gamma ^{0},\gamma ^{1},\gamma ^{2},\gamma ^{3}\right)} as a 4 vector sometimes found in the literature is thus a slight misnomer. 
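The slash notation introduced above also lends itself to a quick numerical check. The sketch below (illustrative only; it assumes NumPy, the Dirac-representation matrices quoted earlier, and two arbitrarily chosen 4-vectors) verifies that slashed quantities obey a-slash b-slash + b-slash a-slash = 2 (a · b) I 4, which is just the Clifford relation rewritten for arbitrary vectors:

```python
import numpy as np

s1 = np.array([[0, 1], [1, 0]], dtype=complex)
s2 = np.array([[0, -1j], [1j, 0]], dtype=complex)
s3 = np.array([[1, 0], [0, -1]], dtype=complex)
gamma = [np.kron(s3, np.eye(2))] + [np.kron(1j * s2, s) for s in (s1, s2, s3)]
eta = np.diag([1.0, -1.0, -1.0, -1.0])

def slash(a):
    """Feynman slash: a_slash = gamma^mu a_mu for a contravariant 4-vector a^mu."""
    a_lower = eta @ a  # lower the index with the metric
    return sum(a_lower[mu] * gamma[mu] for mu in range(4))

# Two arbitrary (illustrative) 4-vectors
a = np.array([1.0, 0.2, -0.7, 0.4])
b = np.array([2.0, -1.0, 0.3, 0.9])

# a_slash b_slash + b_slash a_slash = 2 (a.b) I_4, with a.b = eta_{mu nu} a^mu b^nu
lhs = slash(a) @ slash(b) + slash(b) @ slash(a)
assert np.allclose(lhs, 2 * (a @ eta @ b) * np.eye(4))

# In particular, a_slash a_slash = (a.a) I_4
assert np.allclose(slash(a) @ slash(a), (a @ eta @ a) * np.eye(4))
print("Slash-notation identities verified.")
```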
The latter transformation corresponds to an active transformation of the components of a slashed quantity in terms of the basis γ μ , and the former to a passive transformation of the basis γ μ itself. The elements σ μ ν = γ μ γ ν − γ ν γ μ {\displaystyle \ \sigma ^{\mu \nu }=\gamma ^{\mu }\gamma ^{\nu }-\gamma ^{\nu }\gamma ^{\mu }\ } form a representation of the Lie algebra of the Lorentz group. This is a spin representation. When these matrices, and linear combinations of them, are exponentiated, they are bispinor representations of the Lorentz group, e.g., the S(Λ) of above are of this form. The 6 dimensional space the σ μν span is the representation space of a tensor representation of the Lorentz group. For the higher order elements of the Clifford algebra in general and their transformation rules, see the article Dirac algebra . The spin representation of the Lorentz group is encoded in the spin group Spin(1, 3) (for real, uncharged spinors) and in the complexified spin group Spin(1, 3) for charged (Dirac) spinors. In natural units , the Dirac equation may be written as where ψ {\displaystyle \ \psi \ } is a Dirac spinor. Switching to Feynman notation , the Dirac equation is It is useful to define a product of the four gamma matrices as γ 5 = σ 1 ⊗ I {\displaystyle \gamma ^{5}=\sigma _{1}\otimes I} , so that Although γ 5 {\displaystyle \ \gamma ^{5}\ } uses the letter gamma, it is not one of the gamma matrices of C l 1 , 3 ( R ) . {\displaystyle \ \mathrm {Cl} _{1,3}(\mathbb {R} )~.} The index number 5 is a relic of old notation: γ 0 {\displaystyle \ \gamma ^{0}\ } used to be called " γ 4 {\displaystyle \gamma ^{4}} ". γ 5 {\displaystyle \ \gamma ^{5}\ } has also an alternative form: using the convention ε 0123 = 1 , {\displaystyle \varepsilon _{0123}=1\ ,} or using the convention ε 0123 = 1 . {\displaystyle \varepsilon ^{0123}=1~.} Proof: This can be seen by exploiting the fact that all the four gamma matrices anticommute, so where δ μ ν ϱ σ α β γ δ {\displaystyle \delta _{\mu \nu \varrho \sigma }^{\alpha \beta \gamma \delta }} is the type (4,4) generalized Kronecker delta in 4 dimensions, in full antisymmetrization . If ε α … β {\displaystyle \ \varepsilon _{\alpha \dots \beta }\ } denotes the Levi-Civita symbol in n dimensions, we can use the identity δ μ ν ϱ σ α β γ δ = ε α β γ δ ε μ ν ϱ σ {\displaystyle \delta _{\mu \nu \varrho \sigma }^{\alpha \beta \gamma \delta }=\varepsilon ^{\alpha \beta \gamma \delta }\varepsilon _{\mu \nu \varrho \sigma }} . Then we get, using the convention ε 0123 = 1 , {\displaystyle \ \varepsilon ^{0123}=1\ ,} This matrix is useful in discussions of quantum mechanical chirality . For example, a Dirac field can be projected onto its left-handed and right-handed components by: Some properties are: In fact, ψ L {\displaystyle \ \psi _{\mathrm {L} }\ } and ψ R {\displaystyle \ \psi _{\mathrm {R} }\ } are eigenvectors of γ 5 {\displaystyle \ \gamma ^{5}\ } since The Clifford algebra in odd dimensions behaves like two copies of the Clifford algebra of one less dimension, a left copy and a right copy. [ 3 ] : 68 Thus, one can employ a bit of a trick to repurpose i γ 5 as one of the generators of the Clifford algebra in five dimensions. In this case, the set { γ 0 , γ 1 , γ 2 , γ 3 , i γ 5 } therefore, by the last two properties (keeping in mind that i 2 ≡ −1 ) and those of the ‘old’ gammas, forms the basis of the Clifford algebra in 5 spacetime dimensions for the metric signature (1,4) . [ a ] . 
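The chirality machinery can likewise be checked numerically. The sketch below (illustrative only; it assumes NumPy, the Dirac-representation matrices quoted earlier, and the standard definition γ 5 = i γ 0 γ 1 γ 2 γ 3) confirms that γ 5 equals σ 1 ⊗ I 2 in the Dirac basis, squares to the identity, anticommutes with the four gammas, and yields idempotent left- and right-handed projectors:

```python
import numpy as np

# Dirac-representation gammas: gamma^0 = sigma^3 x I2, gamma^j = i sigma^2 x sigma^j
s1 = np.array([[0, 1], [1, 0]], dtype=complex)
s2 = np.array([[0, -1j], [1j, 0]], dtype=complex)
s3 = np.array([[1, 0], [0, -1]], dtype=complex)
gamma = [np.kron(s3, np.eye(2))] + [np.kron(1j * s2, s) for s in (s1, s2, s3)]

# gamma^5 = i gamma^0 gamma^1 gamma^2 gamma^3
gamma5 = 1j * gamma[0] @ gamma[1] @ gamma[2] @ gamma[3]

# In the Dirac basis this equals sigma^1 (x) I_2, it squares to the identity,
# and it anticommutes with all four gamma matrices.
assert np.allclose(gamma5, np.kron(s1, np.eye(2)))
assert np.allclose(gamma5 @ gamma5, np.eye(4))
assert all(np.allclose(gamma5 @ g + g @ gamma5, 0) for g in gamma)

# Chiral projectors P_L = (1 - gamma^5)/2 and P_R = (1 + gamma^5)/2 are idempotent
# and orthogonal, so any Dirac spinor splits into left- and right-handed parts.
PL = 0.5 * (np.eye(4) - gamma5)
PR = 0.5 * (np.eye(4) + gamma5)
assert np.allclose(PL @ PL, PL) and np.allclose(PR @ PR, PR)
assert np.allclose(PL @ PR, 0)
print("gamma^5 checks pass.")
```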
[ 4 ] : 97 In metric signature (4,1) , the set { γ 0 , γ 1 , γ 2 , γ 3 , γ 5 } is used, where the γ μ are the appropriate ones for the (3,1) signature. [ 5 ] This pattern is repeated for spacetime dimension 2 n even and the next odd dimension 2 n + 1 for all n ≥ 1 . [ 6 ] : 457 For more detail, see higher-dimensional gamma matrices . The following identities follow from the fundamental anticommutation relation, so they hold in any basis (although the last one depends on the sign choice for γ 5 {\displaystyle \gamma ^{5}} ). 1. γ μ γ μ = 4 I 4 {\displaystyle \gamma ^{\mu }\gamma _{\mu }=4I_{4}} Take the standard anticommutation relation: One can make this situation look similar by using the metric η {\displaystyle \eta } : 2. γ μ γ ν γ μ = − 2 γ ν {\displaystyle \gamma ^{\mu }\gamma ^{\nu }\gamma _{\mu }=-2\gamma ^{\nu }} Similarly to the proof of 1, again beginning with the standard anticommutation relation: 3. γ μ γ ν γ ρ γ μ = 4 η ν ρ I 4 {\displaystyle \gamma ^{\mu }\gamma ^{\nu }\gamma ^{\rho }\gamma _{\mu }=4\eta ^{\nu \rho }I_{4}} To show Use the anticommutator to shift γ μ {\displaystyle \gamma ^{\mu }} to the right Using the relation γ μ γ μ = 4 I {\displaystyle \gamma ^{\mu }\gamma _{\mu }=4I} we can contract the last two gammas, and get Finally using the anticommutator identity, we get 4. γ μ γ ν γ ρ γ σ γ μ = − 2 γ σ γ ρ γ ν {\displaystyle \gamma ^{\mu }\gamma ^{\nu }\gamma ^{\rho }\gamma ^{\sigma }\gamma _{\mu }=-2\gamma ^{\sigma }\gamma ^{\rho }\gamma ^{\nu }} 5. γ μ γ ν γ ρ = η μ ν γ ρ + η ν ρ γ μ − η μ ρ γ ν − i ϵ σ μ ν ρ γ σ γ 5 {\displaystyle \gamma ^{\mu }\gamma ^{\nu }\gamma ^{\rho }=\eta ^{\mu \nu }\gamma ^{\rho }+\eta ^{\nu \rho }\gamma ^{\mu }-\eta ^{\mu \rho }\gamma ^{\nu }-i\epsilon ^{\sigma \mu \nu \rho }\gamma _{\sigma }\gamma ^{5}} If μ = ν = ρ {\displaystyle \mu =\nu =\rho } then ϵ σ μ ν ρ = 0 {\displaystyle \epsilon ^{\sigma \mu \nu \rho }=0} and it is easy to verify the identity. That is the case also when μ = ν ≠ ρ {\displaystyle \mu =\nu \neq \rho } , μ = ρ ≠ ν {\displaystyle \mu =\rho \neq \nu } or ν = ρ ≠ μ {\displaystyle \nu =\rho \neq \mu } . On the other hand, if all three indices are different, η μ ν = 0 {\displaystyle \eta ^{\mu \nu }=0} , η μ ρ = 0 {\displaystyle \eta ^{\mu \rho }=0} and η ν ρ = 0 {\displaystyle \eta ^{\nu \rho }=0} and both sides are completely antisymmetric; the left hand side because of the anticommutativity of the γ {\displaystyle \gamma } matrices, and on the right hand side because of the antisymmetry of ϵ σ μ ν ρ {\displaystyle \epsilon _{\sigma \mu \nu \rho }} . It thus suffices to verify the identities for the cases of γ 0 γ 1 γ 2 {\displaystyle \gamma ^{0}\gamma ^{1}\gamma ^{2}} , γ 0 γ 1 γ 3 {\displaystyle \gamma ^{0}\gamma ^{1}\gamma ^{3}} , γ 0 γ 2 γ 3 {\displaystyle \gamma ^{0}\gamma ^{2}\gamma ^{3}} and γ 1 γ 2 γ 3 {\displaystyle \gamma ^{1}\gamma ^{2}\gamma ^{3}} . 6. γ 5 σ ν ρ = i 2 ϵ σ μ ν ρ σ σ μ , {\displaystyle \gamma ^{5}\sigma ^{\nu \rho }={\tfrac {i}{2}}\epsilon ^{\sigma \mu \nu \rho }\sigma _{\sigma \mu }\ ,} where σ μ ν = i 2 [ γ μ , γ ν ] = i 2 ( γ μ γ ν − γ ν γ μ ) {\displaystyle \ \sigma _{\mu \nu }={\tfrac {i}{2}}[\gamma _{\mu },\gamma _{\nu }]={\tfrac {i}{2}}(\gamma _{\mu }\gamma _{\nu }-\gamma _{\nu }\gamma _{\mu })\ } For ν = ρ , {\displaystyle \ \nu =\rho \ ,} ϵ σ μ ν ρ = 0 {\displaystyle \ \epsilon ^{\sigma \mu \nu \rho }=0\ } and both sides vanish. 
Otherwise, multiplying identity 5 by γ μ {\displaystyle \ \gamma _{\mu }\ } from the right gives that where 4 η ν ρ I 4 = 0 {\displaystyle 4\eta ^{\nu \rho }I_{4}=0} since ν ≠ ρ {\displaystyle \nu \neq \rho } . The left hand side of this equation also vanishes since γ μ γ ν γ ρ γ μ = 4 η ν ρ I 4 {\displaystyle \gamma ^{\mu }\gamma ^{\nu }\gamma ^{\rho }\gamma _{\mu }=4\eta ^{\nu \rho }I_{4}} by property 3. Rearranging gives that Note that 2 γ σ γ μ = γ σ γ μ − γ μ γ σ = [ γ σ , γ μ ] {\displaystyle 2\gamma _{\sigma }\gamma _{\mu }=\gamma _{\sigma }\gamma _{\mu }-\gamma _{\mu }\gamma _{\sigma }=[\gamma _{\sigma },\gamma _{\mu }]} for σ ≠ μ {\displaystyle \sigma \neq \mu } (for σ = μ {\displaystyle \sigma =\mu } , ϵ σ μ ν ρ {\displaystyle \epsilon ^{\sigma \mu \nu \rho }} vanishes) by the standard anticommutation relation. It follows that Multiplying from the left times − i 2 γ 5 {\displaystyle \ -{\tfrac {i}{2}}\gamma ^{5}\ } and using that ( γ 5 ) 2 = I 4 {\displaystyle \ (\gamma ^{5})^{2}=I_{4}\ } yields the desired result. The gamma matrices obey the following trace identities : Proving the above involves the use of three main properties of the trace operator: From the definition of the gamma matrices, We get or equivalently, where η μ μ {\displaystyle \ \eta ^{\mu \mu }\ } is a number, and γ μ γ μ {\displaystyle \ \gamma ^{\mu }\gamma ^{\mu }\ } is a matrix. This implies tr ⁡ ( γ ν ) = 0 {\displaystyle \operatorname {tr} (\gamma ^{\nu })=0} To show First note that We'll also use two facts about the fifth gamma matrix γ 5 {\displaystyle \gamma ^{5}} that says: So lets use these two facts to prove this identity for the first non-trivial case: the trace of three gamma matrices. Step one is to put in one pair of γ 5 {\displaystyle \gamma ^{5}} 's in front of the three original γ {\displaystyle \gamma } 's, and step two is to swap the γ 5 {\displaystyle \gamma ^{5}} matrix back to the original position, after making use of the cyclicity of the trace. This can only be fulfilled if The extension to 2n + 1 (n integer) gamma matrices, is found by placing two gamma-5s after (say) the 2n-th gamma-matrix in the trace, commuting one out to the right (giving a minus sign) and commuting the other gamma-5 2n steps out to the left [with sign change (-1)^2n = 1]. Then we use cyclic identity to get the two gamma-5s together, and hence they square to identity, leaving us with the trace equalling minus itself, i.e. 0. If an odd number of gamma matrices appear in a trace followed by γ 5 {\displaystyle \gamma ^{5}} , our goal is to move γ 5 {\displaystyle \gamma ^{5}} from the right side to the left. This will leave the trace invariant by the cyclic property. In order to do this move, we must anticommute it with all of the other gamma matrices. This means that we anticommute it an odd number of times and pick up a minus sign. A trace equal to the negative of itself must be zero. To show Begin with, For the term on the right, we'll continue the pattern of swapping γ σ {\displaystyle \gamma ^{\sigma }} with its neighbor to the left, Again, for the term on the right swap γ σ {\displaystyle \gamma ^{\sigma }} with its neighbor to the left, Eq (3) is the term on the right of eq (2), and eq (2) is the term on the right of eq (1). 
We'll also use identity number 3 to simplify terms like so: So finally Eq (1), when you plug all this information in gives The terms inside the trace can be cycled, so So really (4) is or To show begin with Add tr ⁡ ( γ 5 ) {\displaystyle \operatorname {tr} \left(\gamma ^{5}\right)} to both sides of the above to see Now, this pattern can also be used to show Simply add two factors of γ α {\displaystyle \gamma ^{\alpha }} , with α {\displaystyle \alpha } different from μ {\displaystyle \mu } and ν {\displaystyle \nu } . Anticommute three times instead of once, picking up three minus signs, and cycle using the cyclic property of the trace. So, For a proof of identity 7, the same trick still works unless ( μ ν ρ σ ) {\displaystyle \left(\mu \nu \rho \sigma \right)} is some permutation of (0123), so that all 4 gammas appear. The anticommutation rules imply that interchanging two of the indices changes the sign of the trace, so tr ⁡ ( γ μ γ ν γ ρ γ σ γ 5 ) {\displaystyle \operatorname {tr} \left(\gamma ^{\mu }\gamma ^{\nu }\gamma ^{\rho }\gamma ^{\sigma }\gamma ^{5}\right)} must be proportional to ϵ μ ν ρ σ {\displaystyle \epsilon ^{\mu \nu \rho \sigma }} ( ϵ 0123 = η 0 μ η 1 ν η 2 ρ η 3 σ ϵ μ ν ρ σ = η 00 η 11 η 22 η 33 ϵ 0123 = − 1 ) {\displaystyle \left(\epsilon ^{0123}=\eta ^{0\mu }\eta ^{1\nu }\eta ^{2\rho }\eta ^{3\sigma }\epsilon _{\mu \nu \rho \sigma }=\eta ^{00}\eta ^{11}\eta ^{22}\eta ^{33}\epsilon _{0123}=-1\right)} . The proportionality constant is 4 i {\displaystyle 4i} , as can be checked by plugging in ( μ ν ρ σ ) = ( 0123 ) {\displaystyle (\mu \nu \rho \sigma )=(0123)} , writing out γ 5 {\displaystyle \gamma ^{5}} , and remembering that the trace of the identity is 4. Denote the product of n {\displaystyle n} gamma matrices by Γ = γ μ 1 γ μ 2 … γ μ n . {\displaystyle \Gamma =\gamma ^{\mu 1}\gamma ^{\mu 2}\dots \gamma ^{\mu n}.} Consider the Hermitian conjugate of Γ {\displaystyle \Gamma } : Conjugating with γ 0 {\displaystyle \gamma ^{0}} one more time to get rid of the two γ 0 {\displaystyle \gamma ^{0}} s that are there, we see that γ 0 Γ † γ 0 {\displaystyle \gamma ^{0}\Gamma ^{\dagger }\gamma ^{0}} is the reverse of Γ {\displaystyle \Gamma } . Now, The gamma matrices can be chosen with extra hermiticity conditions which are restricted by the above anticommutation relations however. We can impose and for the other gamma matrices (for k = 1, 2, 3 ) One checks immediately that these hermiticity relations hold for the Dirac representation. The above conditions can be combined in the relation The hermiticity conditions are not invariant under the action γ μ → S ( Λ ) γ μ S ( Λ ) − 1 {\displaystyle \gamma ^{\mu }\to S(\Lambda )\gamma ^{\mu }{S(\Lambda )}^{-1}} of a Lorentz transformation Λ {\displaystyle \Lambda } because S ( Λ ) {\displaystyle S(\Lambda )} is not necessarily a unitary transformation due to the non-compactness of the Lorentz group. [ citation needed ] The charge conjugation operator, in any basis, may be defined as where ( ⋅ ) T {\displaystyle (\cdot )^{\textsf {T}}} denotes the matrix transpose . The explicit form that C {\displaystyle C} takes is dependent on the specific representation chosen for the gamma matrices, up to an arbitrary phase factor. This is because although charge conjugation is an automorphism of the gamma group , it is not an inner automorphism (of the group). Conjugating matrices can be found, but they are representation-dependent. 
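The contraction, trace, and hermiticity identities listed above can all be verified directly in a few lines. The following sketch (illustrative only; it assumes NumPy and the Dirac-representation matrices quoted earlier, with the article's convention ε^0123 = −1) runs through contraction identities 1 to 3, the basic trace identities, the γ 5 trace identity, and the relation γ 0 (γ μ)† γ 0 = γ μ:

```python
import numpy as np
from itertools import permutations, product

s1 = np.array([[0, 1], [1, 0]], dtype=complex)
s2 = np.array([[0, -1j], [1j, 0]], dtype=complex)
s3 = np.array([[1, 0], [0, -1]], dtype=complex)
gamma = [np.kron(s3, np.eye(2))] + [np.kron(1j * s2, s) for s in (s1, s2, s3)]
gamma5 = 1j * gamma[0] @ gamma[1] @ gamma[2] @ gamma[3]
eta = np.diag([1.0, -1.0, -1.0, -1.0])
gamma_low = [sum(eta[m, n] * gamma[n] for n in range(4)) for m in range(4)]  # gamma_mu
I4 = np.eye(4)

# Contraction identities 1-3
assert np.allclose(sum(gamma[m] @ gamma_low[m] for m in range(4)), 4 * I4)
for n in range(4):
    assert np.allclose(sum(gamma[m] @ gamma[n] @ gamma_low[m] for m in range(4)), -2 * gamma[n])
for n, r in product(range(4), repeat=2):
    lhs = sum(gamma[m] @ gamma[n] @ gamma[r] @ gamma_low[m] for m in range(4))
    assert np.allclose(lhs, 4 * eta[n, r] * I4)

# Trace identities: tr(gamma^mu) = 0, tr(gamma^mu gamma^nu) = 4 eta^{mu nu},
# and the trace of a product of an odd number of gammas vanishes.
assert all(abs(np.trace(g)) < 1e-12 for g in gamma)
for m, n in product(range(4), repeat=2):
    assert np.isclose(np.trace(gamma[m] @ gamma[n]), 4 * eta[m, n])
for m, n, r in product(range(4), repeat=3):
    assert abs(np.trace(gamma[m] @ gamma[n] @ gamma[r])) < 1e-12

# tr(gamma^mu gamma^nu gamma^rho gamma^sigma gamma^5) = 4i eps^{mu nu rho sigma},
# with the convention eps^{0123} = -1 used in the text.
def eps_upper(p):
    inv = sum(1 for i in range(4) for j in range(i + 1, 4) if p[i] > p[j])
    return -1 if inv % 2 == 0 else 1  # eps^{0123} = -1
for p in permutations(range(4)):
    tr = np.trace(gamma[p[0]] @ gamma[p[1]] @ gamma[p[2]] @ gamma[p[3]] @ gamma5)
    assert np.isclose(tr, 4j * eps_upper(p))

# Hermiticity conditions: gamma^0 (gamma^mu)^dagger gamma^0 = gamma^mu
for g in gamma:
    assert np.allclose(gamma[0] @ g.conj().T @ gamma[0], g)
print("All identities verified numerically.")
```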
Representation-independent identities include: The charge conjugation operator is also unitary C − 1 = C † {\displaystyle C^{-1}=C^{\dagger }} , while for C l 1 , 3 ( R ) {\displaystyle \mathrm {Cl} _{1,3}(\mathbb {R} )} it also holds that C T = − C {\displaystyle C^{\textsf {T}}=-C} for any representation. Given a representation of gamma matrices, the arbitrary phase factor for the charge conjugation operator can not always be chosen such that C † = C T {\displaystyle C^{\dagger }=C^{\textsf {T}}} , as is the case for the common four representations given below, known as Dirac, chiral and Majorana representation. The Feynman slash notation is defined by for any 4-vector a {\displaystyle a} . Here are some similar identities to the ones above, but involving slash notation: Many follow directly from expanding out the slash notation and contracting expressions of the form a μ b ν c ρ … {\displaystyle \ a_{\mu }b_{\nu }c_{\rho }\ \ldots \ } with the appropriate identity in terms of gamma matrices. The matrices are also sometimes written using the 2×2 identity matrix , I 2 {\displaystyle I_{2}} , and where k runs from 1 to 3 and the σ k are Pauli matrices . The gamma matrices we have written so far are appropriate for acting on Dirac spinors written in the Dirac basis ; in fact, the Dirac basis is defined by these matrices. To summarize, in the Dirac basis: In the Dirac basis, the charge conjugation operator is real antisymmetric, [ 9 ] : 691–700 Another common choice is the Weyl or chiral basis , in which γ k {\displaystyle \gamma ^{k}} remains the same but γ 0 {\displaystyle \gamma ^{0}} is different, and so γ 5 {\displaystyle \gamma ^{5}} is also different, and diagonal, or in more compact notation: The Weyl basis has the advantage that its chiral projections take a simple form, The idempotence of the chiral projections is manifest. By slightly abusing the notation and reusing the symbols ψ L / R {\displaystyle \psi _{\mathrm {L} /R}} we can then identify where now ψ L {\displaystyle \psi _{\mathrm {L} }} and ψ R {\displaystyle \psi _{\mathrm {R} }} are left-handed and right-handed two-component Weyl spinors. The charge conjugation operator in this basis is real antisymmetric, The Weyl basis can be obtained from the Dirac basis as via the unitary transform Another possible choice [ 10 ] of the Weyl basis has The chiral projections take a slightly different form from the other Weyl choice, In other words, where ψ L {\displaystyle \psi _{\mathrm {L} }} and ψ R {\displaystyle \psi _{\mathrm {R} }} are the left-handed and right-handed two-component Weyl spinors, as before. The charge conjugation operator in this basis is This basis can be obtained from the Dirac basis above as γ W μ = U γ D μ U † , ψ W = U ψ D {\displaystyle \gamma _{\mathrm {W} }^{\mu }=U\gamma _{\mathrm {D} }^{\mu }U^{\dagger },~~\psi _{\mathrm {W} }=U\psi _{\mathrm {D} }} via the unitary transform There is also the Majorana basis, in which all of the Dirac matrices are imaginary, and the spinors and Dirac equation are real. Regarding the Pauli matrices , the basis can be written as where C {\displaystyle C} is the charge conjugation matrix, which matches the Dirac version defined above. The reason for making all gamma matrices imaginary is solely to obtain the particle physics metric (+, −, −, −) , in which squared masses are positive. The Majorana representation, however, is real. One can factor out the i {\displaystyle \ i\ } to obtain a different representation with four component real spinors and real gamma matrices. 
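The change of basis from the Dirac to the Weyl (chiral) representation can also be exhibited explicitly. In the sketch below (illustrative only; it assumes NumPy, and the specific unitary U is one possible choice for the sign conventions used here, not necessarily the one intended in the article) the spatial gammas are unchanged, γ 0 becomes off-diagonal, and γ 5 becomes diagonal, so the chiral projectors are block-diagonal:

```python
import numpy as np

s = [np.array([[0, 1], [1, 0]], dtype=complex),
     np.array([[0, -1j], [1j, 0]], dtype=complex),
     np.array([[1, 0], [0, -1]], dtype=complex)]
I2 = np.eye(2)

# Dirac basis
gD = [np.kron(s[2], I2)] + [np.kron(1j * s[1], sk) for sk in s]

# A unitary relating the two bases (one possible choice for these sign conventions)
U = np.block([[I2, -I2], [I2, I2]]) / np.sqrt(2)

# Weyl (chiral) basis: gamma_W^mu = U gamma_D^mu U^dagger
gW = [U @ g @ U.conj().T for g in gD]

# gamma^0 becomes off-diagonal, the spatial gammas are unchanged,
# and gamma^5 becomes diagonal.
assert np.allclose(gW[0], np.block([[np.zeros((2, 2)), I2], [I2, np.zeros((2, 2))]]))
for k in range(1, 4):
    assert np.allclose(gW[k], gD[k])
g5W = 1j * gW[0] @ gW[1] @ gW[2] @ gW[3]
assert np.allclose(g5W, np.diag([-1.0, -1.0, 1.0, 1.0]))
print("Weyl basis obtained from the Dirac basis by a unitary change of basis.")
```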
The consequence of removing the i {\displaystyle \ i\ } is that the only possible metric with real gamma matrices is (−, +, +, +) . The Majorana basis can be obtained from the Dirac basis above as γ M μ = U γ D μ U † , ψ M = U ψ D {\displaystyle \gamma _{\mathrm {M} }^{\mu }=U\gamma _{\mathrm {D} }^{\mu }U^{\dagger },~~\psi _{\mathrm {M} }=U\psi _{\mathrm {D} }} via the unitary transform The Dirac algebra can be regarded as a complexification of the real algebra Cl 1,3 ( R {\displaystyle \mathbb {R} } ), called the space time algebra : Cl 1,3 ( R {\displaystyle \mathbb {R} } ) differs from Cl 1,3 ( C {\displaystyle \mathbb {C} } ): in Cl 1,3 ( R {\displaystyle \mathbb {R} } ) only real linear combinations of the gamma matrices and their products are allowed. Two things deserve to be pointed out. As Clifford algebras , Cl 1,3 ( C {\displaystyle \mathbb {C} } ) and Cl 4 ( C {\displaystyle \mathbb {C} } ) are isomorphic, see classification of Clifford algebras . The reason is that the underlying signature of the spacetime metric loses its signature (1,3) upon passing to the complexification. However, the transformation required to bring the bilinear form to the complex canonical form is not a Lorentz transformation and hence not "permissible" (at the very least impractical) since all physics is tightly knit to the Lorentz symmetry and it is preferable to keep it manifest. Proponents of geometric algebra strive to work with real algebras wherever that is possible. They argue that it is generally possible (and usually enlightening) to identify the presence of an imaginary unit in a physical equation. Such units arise from one of the many quantities in a real Clifford algebra that square to −1, and these have geometric significance because of the properties of the algebra and the interaction of its various subspaces. Some of these proponents also question whether it is necessary or even useful to introduce an additional imaginary unit in the context of the Dirac equation. [ 11 ] : x–xi In the mathematics of Riemannian geometry , it is conventional to define the Clifford algebra Cl p,q ( R {\displaystyle \mathbb {R} } ) for arbitrary dimensions p,q . The Weyl spinors transform under the action of the spin group S p i n ( n ) {\displaystyle \mathrm {Spin} (n)} . The complexification of the spin group, called the spinc group S p i n C ( n ) {\displaystyle \mathrm {Spin} ^{\mathbb {C} }(n)} , is a product S p i n ( n ) × Z 2 S 1 {\displaystyle \mathrm {Spin} (n)\times _{\mathbb {Z} _{2}}S^{1}} of the spin group with the circle S 1 ≅ U ( 1 ) . {\displaystyle S^{1}\cong U(1).} The product × Z 2 {\displaystyle \times _{\mathbb {Z} _{2}}} just a notational device to identify ( a , u ) ∈ S p i n ( n ) × S 1 {\displaystyle (a,u)\in \mathrm {Spin} (n)\times S^{1}} with ( − a , − u ) . {\displaystyle (-a,-u).} The geometric point of this is that it disentangles the real spinor, which is covariant under Lorentz transformations, from the U ( 1 ) {\displaystyle U(1)} component, which can be identified with the U ( 1 ) {\displaystyle \mathrm {U} (1)} fiber of the electromagnetic interaction. The × Z 2 {\displaystyle \times _{\mathbb {Z} _{2}}} is entangling parity and charge conjugation in a manner suitable for relating the Dirac particle/anti-particle states (equivalently, the chiral states in the Weyl basis). The bispinor , insofar as it has linearly independent left and right components, can interact with the electromagnetic field. 
This is in contrast to the Majorana spinor and the ELKO spinor (Eigenspinoren des Ladungskonjugationsoperators), which cannot ( i.e. they are electrically neutral), as they explicitly constrain the spinor so as to not interact with the S 1 {\displaystyle S^{1}} part coming from the complexification. The ELKO spinor is a Lounesto class 5 spinor. [ 12 ] : 84 However, in contemporary practice in physics, the Dirac algebra rather than the space-time algebra continues to be the standard environment the spinors of the Dirac equation "live" in. The gamma matrices are diagonalizable with eigenvalues ± 1 {\displaystyle \pm 1} for γ 0 {\displaystyle \gamma ^{0}} , and eigenvalues ± i {\displaystyle \pm i} for γ k {\displaystyle \gamma ^{k}} . This can be demonstrated for γ 0 {\displaystyle \gamma ^{0}} and follows similarly for γ i {\displaystyle \gamma ^{i}} . We can rewrite as By a well-known result in linear algebra , this means there is a basis in which γ 0 {\displaystyle \gamma ^{0}} is diagonal with eigenvalues { ± 1 } {\displaystyle \{\pm 1\}} . In particular, this implies that γ 0 {\displaystyle \gamma ^{0}} is simultaneously Hermitian and unitary, while the γ i {\displaystyle \gamma ^{i}} are simultaneously anti–Hermitian and unitary. Further, the multiplicity of each eigenvalue is two. If v {\displaystyle v} is an eigenvector of γ 0 , {\displaystyle \ \gamma ^{0}\ ,} then γ 1 v {\displaystyle \ \gamma ^{1}v\ } is an eigenvector with the opposite eigenvalue. Then eigenvectors can be paired off if they are related by multiplication by γ 1 . {\displaystyle \ \gamma ^{1}~.} Result follows similarly for γ i . {\displaystyle \ \gamma ^{i}~.} More generally, if γ μ X μ {\displaystyle \ \gamma ^{\mu }X_{\mu }\ } is not null, a similar result holds. For concreteness, we restrict to the positive norm case γ μ p μ = p / {\displaystyle \ \gamma ^{\mu }p_{\mu }=p\!\!\!/\ } with p ⋅ p = m 2 > 0 . {\displaystyle \ p\cdot p=m^{2}>0~.} The negative case follows similarly. It can be shown so by the same argument as the first result, p / {\displaystyle \ p\!\!\!/\ } is diagonalizable with eigenvalues ± m . {\displaystyle \ \pm m~.} We can adapt the argument for the second result slightly. We pick a non-null vector q μ {\displaystyle \ q_{\mu }\ } which is orthogonal to p μ . {\displaystyle p_{\mu }~.} Then eigenvectors can be paired off similarly if they are related by multiplication by q / . {\displaystyle \ q\!\!\!/~.} It follows that the solution space to p / − m = 0 {\displaystyle \ p\!\!\!/-m=0\ } (that is, the kernel of the left-hand side) has dimension 2. This means the solution space for plane wave solutions to Dirac's equation has dimension 2. This result still holds for the massless Dirac equation. In other words, if p μ {\displaystyle p_{\mu }} null, then p / {\displaystyle p\!\!\!/} has nullity 2. If p μ {\displaystyle p_{\mu }} null, then p / p / = 0. {\displaystyle p\!\!\!/p\!\!\!/=0.} By generalized eigenvalue decomposition, this can be written in some basis as diagonal in 2 × 2 {\displaystyle 2\times 2} Jordan blocks with eigenvalue 0, with either 0, 1, or 2 blocks, and other diagonal entries zero. It turns out to be the 2 block case. The zero case is not possible as if γ μ p μ = 0 , {\displaystyle \ \gamma ^{\mu }p_{\mu }=0\ ,} by linear independence of the γ μ {\displaystyle \ \gamma ^{\mu }\ } we must have p μ = 0 . {\displaystyle \ p_{\mu }=0~.} But null vectors are by definition non-zero. 
Consider ( q μ ) = ( | p | , − p ) {\displaystyle (q_{\mu })=(|\mathbf {p} |,-\mathbf {p} )} and a zero-eigenvector v {\displaystyle v} of p / {\displaystyle p\!\!\!/} . Note q μ {\displaystyle q_{\mu }} is also null and satisfies If p / v = 0 {\displaystyle p\!\!\!/v=0} , then it cannot simultaneously be a zero eigenvector of q / {\displaystyle q\!\!\!/} by (*). Considering q / v {\displaystyle q\!\!\!/v} , if we apply p / {\displaystyle p\!\!\!/} then we get p / q / v = 4 | p | v {\displaystyle p\!\!\!/q\!\!\!/v=4|\mathbf {p} |v} . Therefore, after a rescaling, v {\displaystyle v} and q / v {\displaystyle q\!\!\!/v} give a 2 × 2 {\displaystyle 2\times 2} Jordan block. This gives a pairing. There must be another zero eigenvector of There is also a pleasant structure to these pairs. If left arrows correspond to application of p / {\displaystyle p\!\!\!/} , and right arrows to application of q / {\displaystyle q\!\!\!/} , and v {\displaystyle v} is a zero eigenvector of p / {\displaystyle p\!\!\!/} , up to scalar factors we have In quantum field theory one can Wick rotate the time axis to transit from Minkowski space to Euclidean space . This is particularly useful in some renormalization procedures as well as lattice gauge theory . In Euclidean space, there are two commonly used representations of Dirac matrices: Notice that the factors of i {\displaystyle i} have been inserted in the spatial gamma matrices so that the Euclidean Clifford algebra will emerge. It is also worth noting that there are variants of this which insert instead − i {\displaystyle -i} on one of the matrices, such as in lattice QCD codes which use the chiral basis. In Euclidean space, Using the anti-commutator and noting that in Euclidean space ( γ μ ) † = γ μ {\displaystyle \left(\gamma ^{\mu }\right)^{\dagger }=\gamma ^{\mu }} , one shows that In chiral basis in Euclidean space, which is unchanged from its Minkowski version.
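The Wick-rotated matrices can be checked in the same way. The sketch below (illustrative only; it assumes NumPy and adopts one common convention in which the spatial Minkowski gammas are multiplied by −i, with sign choices varying between codes, as the text notes) confirms that the resulting Euclidean matrices are Hermitian and satisfy {γ μ, γ ν} = 2 δ μν I 4:

```python
import numpy as np

s1 = np.array([[0, 1], [1, 0]], dtype=complex)
s2 = np.array([[0, -1j], [1j, 0]], dtype=complex)
s3 = np.array([[1, 0], [0, -1]], dtype=complex)
g_mink = [np.kron(s3, np.eye(2))] + [np.kron(1j * s2, s) for s in (s1, s2, s3)]

# One common Euclidean convention (signs vary between codes): keep the Minkowski
# gamma^0 as the "time" direction and multiply the spatial gammas by -i.
g_eucl = [-1j * g for g in g_mink[1:]] + [g_mink[0]]  # ordered as 1, 2, 3, 4

delta = np.eye(4)
for mu in range(4):
    # Euclidean gammas are Hermitian ...
    assert np.allclose(g_eucl[mu], g_eucl[mu].conj().T)
    for nu in range(4):
        # ... and satisfy {gamma^mu, gamma^nu} = 2 delta^{mu nu} I_4
        anticomm = g_eucl[mu] @ g_eucl[nu] + g_eucl[nu] @ g_eucl[mu]
        assert np.allclose(anticomm, 2 * delta[mu, nu] * np.eye(4))
print("Euclidean Clifford algebra verified.")
```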
https://en.wikipedia.org/wiki/Gamma_matrices
A gamma ray cross section is a measure of the probability that a gamma ray interacts with matter. The total cross section of gamma ray interactions is composed of several independent processes: photoelectric effect , Compton (incoherent) scattering , electron-positron pair production in the nucleus field and electron-positron pair production in the electron field (triplet production). The cross section for single process listed above is a part of the total gamma ray cross section. Other effects, like the photonuclear absorption, Thomson or Rayleigh (coherent) scattering can be omitted because of their nonsignificant contribution in the gamma ray range of energies. The detailed equations for cross sections (barn/atom) of all mentioned effects connected with gamma ray interaction with matter are listed below. The photoelectric effect phenomenon describes the interaction of a gamma photon with an electron located in the atomic structure . This results in the ejection of that electron from the atom . The photoelectric effect is the dominant energy transfer mechanism for X-ray and gamma ray photons with energies below 50 keV . It is much less important at higher energies, but still needs to be taken into consideration. Usually, the cross section of the photoeffect can be approximated by the simplified equation of [ 1 ] [ 2 ] σ p h = 16 3 2 π r e 2 α 4 Z 5 k 3.5 ≈ 5 ⋅ 10 11 Z 5 E γ 3.5 b {\displaystyle \sigma _{ph}={\frac {16}{3}}{\sqrt {2}}\pi r_{e}^{2}\alpha ^{4}{\frac {Z^{5}}{k^{3.5}}}\approx 5\cdot 10^{11}{\frac {Z^{5}}{E_{\gamma }^{3.5}}}\,\mathrm {b} } where k = E γ / E e , and where E γ = hν is the photon energy given in eV and E e = m e c 2 ≈ 5,11∙10 5 eV is the electron rest mass energy , Z is an atomic number of the absorber's element, α = e 2 /(ħc) ≈ 1/137 is the fine structure constant , and r e 2 = e 4 /E e 2 ≈ 0.07941 b is the square of the classical electron radius in barns . For higher precision, however, the Sauter equation [ 3 ] is more appropriate: σ p h = 3 2 ϕ 0 α 4 ( Z E e E γ ) 5 ( γ 2 − 1 ) 3 / 2 [ 4 3 + γ ( γ − 2 ) γ + 1 ( 1 − 1 2 γ ( γ 2 − 1 ) 1 / 2 ln ⁡ γ + ( γ 2 − 1 ) 1 / 2 γ − ( γ 2 − 1 ) 1 / 2 ) ] {\displaystyle \sigma _{ph}={\frac {3}{2}}\phi _{0}\alpha ^{4}{\biggl (}Z{\frac {E_{e}}{E_{\gamma }}}{\biggr )}^{5}(\gamma ^{2}-1)^{3/2}{\Biggl [}{\frac {4}{3}}+{\frac {\gamma (\gamma -2)}{\gamma +1}}{\Biggl (}1-{\frac {1}{2\gamma (\gamma ^{2}-1)^{1/2}}}\ln {\frac {\gamma +(\gamma ^{2}-1)^{1/2}}{\gamma -(\gamma ^{2}-1)^{1/2}}}{\Biggr )}{\Biggr ]}} where γ = E γ − E B + E e E e {\displaystyle \gamma ={\frac {E_{\gamma }-E_{B}+E_{e}}{E_{e}}}} and E B is a binding energy of electron, and ϕ 0 is a Thomson cross section (ϕ 0 = 8 πe 4 /(3E e 2 ) ≈ 0.66526 barn). For higher energies (>0.5 MeV ) the cross section of the photoelectric effect is very small because other effects (especially Compton scattering ) dominates. However, for precise calculations of the photoeffect cross section in high energy range, the Sauter equation shall be substituted by the Pratt-Scofield equation [ 4 ] [ 5 ] [ 6 ] σ p h = Z 5 ( ∑ n = 1 4 a n + b n Z 1 + c n Z k − p n ) {\displaystyle \sigma _{ph}=Z^{5}{\Biggl (}\sum _{n=1}^{4}{\frac {a_{n}+b_{n}Z}{1+c_{n}Z}}k^{-p_{n}}{\Biggr )}} where all input parameters are presented in the Table below. Compton scattering (or Compton effect) is an interaction in which an incident gamma photon interacts with an atomic electron to cause its ejection and scatter of the original photon with lower energy. 
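For orientation, the simplified photoelectric cross-section formula quoted above is easy to evaluate. The sketch below (illustrative only; it assumes Python with NumPy, and the 100 keV lead example is an arbitrary choice rather than a value from the text) compares the full prefactor form with the quick 5·10^11 Z^5/E_γ^3.5 estimate:

```python
import numpy as np

# Simplified photoelectric cross-section (barn/atom), valid well below ~0.5 MeV:
#   sigma_ph = (16/3) * sqrt(2) * pi * r_e^2 * alpha^4 * Z^5 / k^3.5,  k = E_gamma / E_e
ALPHA = 1 / 137.036   # fine-structure constant
RE2_BARN = 0.07941    # classical electron radius squared, in barns
E_E = 5.11e5          # electron rest energy in eV

def sigma_photoelectric_barn(E_gamma_eV: float, Z: int) -> float:
    """Approximate photoelectric cross section per atom, in barns."""
    k = E_gamma_eV / E_E
    return (16.0 / 3.0) * np.sqrt(2.0) * np.pi * RE2_BARN * ALPHA**4 * Z**5 / k**3.5

# Example (illustrative numbers): 100 keV photons on lead (Z = 82).
sigma = sigma_photoelectric_barn(1.0e5, 82)
# The quick estimate 5e11 * Z^5 / E_gamma^3.5 quoted in the text gives nearly the same value.
estimate = 5.0e11 * 82**5 / (1.0e5) ** 3.5
print(f"sigma_ph ~ {sigma:.0f} barn (quick estimate {estimate:.0f} barn)")
```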
The probability of Compton scattering decreases with increasing photon energy. Compton scattering is thought to be the principal absorption mechanism for gamma rays in the intermediate energy range 100 keV to 10 MeV. The cross section of the Compton effect is described by the Klein-Nishina equation : σ C = Z 2 π r e 2 { 1 + k k 2 [ 2 ( 1 + k ) 1 + 2 k − ln ⁡ ( 1 + 2 k ) k ] + ln ⁡ ( 1 + 2 k ) 2 k − 1 + 3 k ( 1 + 2 k ) 2 } {\displaystyle \sigma _{C}=Z2\pi r_{e}^{2}{\Biggl \{}{\frac {1+k}{k^{2}}}{\Biggl [}{\frac {2(1+k)}{1+2k}}-{\frac {\ln {(1+2k)}}{k}}{\Biggr ]}+{\frac {\ln {(1+2k)}}{2k}}-{\frac {1+3k}{(1+2k)^{2}}}{\Biggr \}}} for energies higher than 100 keV (k>0.2). For lower energies, however, this equation shall be substituted by: [ 6 ] σ C = Z 8 3 π r e 2 1 ( 1 + 2 k ) 2 ( 1 + 2 k + 6 5 k 2 − 1 2 k 3 + 2 7 k 4 − 6 35 k 5 + 8 105 k 6 + 4 105 k 7 ) {\displaystyle \sigma _{C}=Z{\frac {8}{3}}\pi r_{e}^{2}{\frac {1}{(1+2k)^{2}}}{\biggl (}1+2k+{\frac {6}{5}}k^{2}-{\frac {1}{2}}k^{3}+{\frac {2}{7}}k^{4}-{\frac {6}{35}}k^{5}+{\frac {8}{105}}k^{6}+{\frac {4}{105}}k^{7}{\biggr )}} which is proportional to the absorber's atomic number , Z . The additional cross section connected with the Compton effect can be calculated for the energy transfer coefficient only – the absorption of the photon energy by the electron: [ 7 ] σ C , a b s = Z 2 π r e 2 [ 2 ( 1 + k ) 2 k 2 ( 1 + 2 k ) − 1 + 3 k ( 1 + 2 k ) 2 − ( 1 + k ) ( 2 k 2 − 2 k − 1 ) k 2 ( 1 + 2 k ) 2 − 4 k 2 3 ( 1 + 2 k ) 3 − ( 1 + k k 3 − 1 2 k + 1 2 k 3 ) ln ⁡ ( 1 + 2 k ) ] {\displaystyle \sigma _{C,abs}=Z2\pi r_{e}^{2}{\biggl [}{\frac {2(1+k)^{2}}{k^{2}(1+2k)}}-{\frac {1+3k}{(1+2k)^{2}}}-{\frac {(1+k)(2k^{2}-2k-1)}{k^{2}(1+2k)^{2}}}-{\frac {4k^{2}}{3(1+2k)^{3}}}-{\Bigl (}{\frac {1+k}{k^{3}}}-{\frac {1}{2k}}+{\frac {1}{2k^{3}}}{\Bigr )}\ln {(1+2k)}{\biggr ]}} which is often used in radiation protection calculations. By interaction with the electric field of a nucleus , the energy of the incident photon is converted into the mass of an electron- positron (e − e + ) pair . The cross section for the pair production effect is usually described by the Maximon equation: [ 8 ] [ 6 ] σ p a i r = Z 2 α r e 2 2 π 3 ( k − 2 k ) 3 ( 1 + 1 2 ρ + 23 40 ρ 2 + 11 60 ρ 3 + 29 960 ρ 4 ) {\displaystyle \sigma _{pair}=Z^{2}\alpha r_{e}^{2}{\frac {2\pi }{3}}{\biggl (}{\frac {k-2}{k}}{\biggr )}^{3}{\biggl (}1+{\frac {1}{2}}\rho +{\frac {23}{40}}\rho ^{2}+{\frac {11}{60}}\rho ^{3}+{\frac {29}{960}}\rho ^{4}{\biggr )}} for low energies ( k <4), where ρ = 2 k − 4 2 + k + 2 2 k {\displaystyle \rho ={\frac {2k-4}{2+k+2{\sqrt {2k}}}}} . However, for higher energies ( k >4) the Maximon equation has a form of σ p a i r = Z 2 α r e 2 { 28 9 ln ⁡ 2 k − 218 27 + ( 2 k ) 2 [ 6 ln ⁡ 2 k − 7 2 + 2 3 ln 3 ⁡ 2 k − ln 2 ⁡ 2 k − 1 3 π 2 ln ⁡ 2 k + 2 ζ ( 3 ) + π 2 6 ] − ( 2 k ) 4 [ 3 16 ln ⁡ 2 k + 1 8 ] − ( 2 k ) 6 [ 29 9 ⋅ 256 ln ⁡ 2 k − 77 27 ⋅ 512 ] } {\displaystyle \sigma _{pair}=Z^{2}\alpha r_{e}^{2}{\Biggl \{}{\frac {28}{9}}\ln {2k}-{\frac {218}{27}}+({\frac {2}{k}})^{2}{\biggl [}6\ln {2k}-{\frac {7}{2}}+{\frac {2}{3}}\ln ^{3}{2k}-\ln ^{2}{2k}-{\frac {1}{3}}\pi ^{2}\ln {2k}+2\zeta (3)+{\frac {\pi ^{2}}{6}}{\biggr ]}-({\frac {2}{k}})^{4}{\biggl [}{\frac {3}{16}}\ln {2k}+{\frac {1}{8}}{\biggr ]}-({\frac {2}{k}})^{6}{\biggl [}{\frac {29}{9\cdot 256}}\ln {2k}-{\frac {77}{27\cdot 512}}{\biggr ]}{\Biggr \}}} where ζ(3)≈1.2020569 is the Riemann zeta function . The energy threshold for the pair production effect is k =2 (the positron and electron rest mass energy ). 
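The Klein–Nishina formula above can be evaluated in the same way. The following sketch (illustrative only; it assumes Python with NumPy, and the 1 MeV aluminium example is an arbitrary choice) computes the per-atom Compton cross section for k > 0.2:

```python
import numpy as np

RE2_BARN = 0.07941   # classical electron radius squared, in barns
E_E_MEV = 0.511      # electron rest energy, MeV

def sigma_compton_barn(E_gamma_MeV: float, Z: int) -> float:
    """Total Compton cross section per atom (barn) from the Klein-Nishina formula,
    multiplied by Z as in the text; intended for k = E_gamma / E_e > 0.2."""
    k = E_gamma_MeV / E_E_MEV
    term1 = (1 + k) / k**2 * (2 * (1 + k) / (1 + 2 * k) - np.log(1 + 2 * k) / k)
    term2 = np.log(1 + 2 * k) / (2 * k)
    term3 = (1 + 3 * k) / (1 + 2 * k) ** 2
    return Z * 2 * np.pi * RE2_BARN * (term1 + term2 - term3)

# Illustrative check: 1 MeV photons on aluminium (Z = 13), roughly 2.7 barn/atom.
print(f"sigma_C ~ {sigma_compton_barn(1.0, 13):.2f} barn")
```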
The triplet production effect, in which a positron and an electron are produced in the field of another electron, is similar to pair production, but with a threshold at k =4. This effect, however, is much less probable than pair production in the field of the nucleus. The most popular form of the triplet cross section is the Borsellino–Ghizzetti equation, [ 6 ] in which a =-2.4674 and b =-1.8031. This equation is quite long, so Haug [ 9 ] proposed simpler analytical forms of the triplet cross section for the ranges 4< k <4.6, 4.6< k <6, and 6< k <18; for k >14 Haug proposed using a shorter form of the Borsellino equation. [ 9 ] [ 10 ] One can present the total cross section per atom as a simple sum of the cross sections of the individual effects: [ 2 ] σ t o t a l = σ p h + σ C + σ p a i r + σ t r i p {\displaystyle \sigma _{total}=\sigma _{ph}+\sigma _{C}+\sigma _{pair}+\sigma _{trip}} Next, using the Beer–Lambert–Bouguer law , one can calculate the linear attenuation coefficient for the photon interaction with an absorber of atomic density N : μ = σ t o t a l N {\displaystyle \mu =\sigma _{total}N} or the mass attenuation coefficient : μ d = μ ρ = σ t o t a l u A {\displaystyle \mu _{d}={\frac {\mu }{\rho }}={\frac {\sigma _{total}}{uA}}} where ρ is the mass density , u is the atomic mass unit , and A is the atomic mass of the absorber. This can be used directly in practice, e.g. in radiation protection . The analytical calculation of the cross section of each specific phenomenon is rather difficult because the appropriate equations are long and complicated. Thus, the total cross section of gamma interaction can be presented in one phenomenological equation formulated by Fornalski, [ 11 ] which can be used instead: σ t o t a l ( k , Z ) = ∑ i = 0 6 [ ( ln ⁡ k ) i ∑ j = 0 4 a i , j Z j ] {\displaystyle \sigma _{total}(k,Z)=\sum _{i=0}^{6}{\biggl [}(\ln {k})^{i}\sum _{j=0}^{4}a_{i,j}Z^{j}{\biggr ]}} where the a i,j parameters are presented in the table below. This formula is an approximation of the total cross section of gamma-ray interaction with matter, for different energies (from 1 MeV to 10 GeV, namely 2< k <20,000) and absorber atomic numbers (from Z =1 to 100). For the lower energy region (<1 MeV) the Fornalski equation is more complicated due to the larger variability of the function for different elements . Therefore, the modified equation [ 11 ] σ t o t a l ( E , Z ) = exp ⁡ ∑ i = 0 6 [ ( ln ⁡ E ) i ∑ j = 0 6 a i , j Z j ] {\displaystyle \sigma _{total}(E,Z)=\exp \sum _{i=0}^{6}{\biggl [}(\ln {E})^{i}\sum _{j=0}^{6}a_{i,j}Z^{j}{\biggr ]}} is a good approximation for photon energies from 150 keV to 10 MeV, where the photon energy E is given in MeV, and the a i,j parameters are presented in the table below with much better precision. As before, the equation is valid for all Z from 1 to 100. The US National Institute of Standards and Technology has published online [ 12 ] a complete and detailed database of cross-section values for X-ray and gamma-ray interactions with different materials at different energies. The database, called XCOM, also contains linear and mass attenuation coefficients, which are useful for practical applications.
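Turning a total cross section into attenuation coefficients, as described above, is a one-line conversion in each case. The sketch below (illustrative only; it assumes Python, and the 10 barn iron example is an arbitrary choice rather than data from the text) applies μ = σ_total N and μ/ρ = σ_total/(uA):

```python
# Converting a total cross section per atom into attenuation coefficients,
# using mu = sigma_total * N and mu/rho = sigma_total / (u * A).
BARN_CM2 = 1.0e-24   # 1 barn in cm^2
N_A = 6.02214e23     # Avogadro's number, atoms/mol
U_GRAM = 1.66054e-24 # atomic mass unit in grams

def attenuation(sigma_total_barn: float, rho_g_cm3: float, A_g_mol: float):
    """Return (linear attenuation mu in 1/cm, mass attenuation mu/rho in cm^2/g)."""
    sigma_cm2 = sigma_total_barn * BARN_CM2
    n_atoms_per_cm3 = rho_g_cm3 * N_A / A_g_mol   # atomic number density N
    mu = sigma_cm2 * n_atoms_per_cm3              # linear attenuation coefficient
    mu_over_rho = sigma_cm2 / (U_GRAM * A_g_mol)  # mass attenuation coefficient
    return mu, mu_over_rho

# Illustrative numbers only: a 10 barn total cross section in iron (rho = 7.87, A = 55.85).
mu, mu_d = attenuation(10.0, 7.87, 55.85)
print(f"mu = {mu:.3f} 1/cm, mu/rho = {mu_d:.4f} cm^2/g")
```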
https://en.wikipedia.org/wiki/Gamma_ray_cross_section
Gamma ray logging is a method of measuring naturally occurring gamma radiation to characterize the rock or sediment in a borehole or drill hole. It is a wireline logging method used in mining, mineral exploration, water-well drilling, for formation evaluation in oil and gas well drilling and for other related purposes. [ 1 ] Different types of rock emit different amounts and different spectra of natural gamma radiation . In particular, shales usually emit more gamma rays than other sedimentary rocks, such as sandstone , gypsum , salt , coal , dolomite , or limestone because radioactive potassium is a common component in their clay content, and because the cation-exchange capacity of clay causes them to absorb uranium and thorium . This difference in radioactivity between shales and sandstones/carbonate rocks allows the gamma ray tool to distinguish between shales and non-shales. But it cannot distinguish between carbonates and sandstone as they both have similar deflections on the gamma ray log. Thus gamma ray logs cannot be said to make good lithological logs by themselves, but in practice, gamma ray logs are compared side-by-side with stratigraphic logs. The gamma ray log, like other types of well logging , is done by lowering an instrument down the drill hole and recording gamma radiation variation with depth. In the United States , the device most commonly records measurements at 1/2-foot intervals. Gamma radiation is usually recorded in API units , a measurement originated by the petroleum industry. Gamma rays attenuate according to the diameter of the borehole mainly because of the properties of the fluid filling the borehole, but because gamma logs are generally used in a qualitative way, amplitude corrections are usually not necessary. Three elements and their decay chains are responsible for the radiation emitted by rock: potassium , thorium and uranium . Shales often contain potassium as part of their clay content and tend to absorb uranium and thorium as well. A common gamma-ray log records the total radiation and cannot distinguish between the radioactive elements, while a spectral gamma ray log (see below) can. For standard gamma-ray logs, the measured value of gamma-ray radiation is calculated from concentration of uranium in ppm, thorium in ppm, and potassium in weight percent: e.g., GR API = 8 × uranium concentration in ppm + 4 × thorium concentration in ppm + 16 × potassium concentration in weight percent. Due to the weighted nature of uranium concentration in the GR API calculation, anomalous concentrations of uranium can cause clean sand reservoirs to appear shaley. For this reason, spectral gamma ray is used to provide an individual reading for each element so that anomalous concentrations can be found and properly interpreted. An advantage of the gamma log over some other types of well logs is that it works through the steel and cement walls of cased boreholes. Although concrete and steel absorb some of the gamma radiation, enough travels through the steel and cement to allow for qualitative determinations. In some places, non-shales exhibit elevated levels of gamma radiation. For instance, sandstones can contain uranium minerals, potassium feldspar , clay filling, or lithic fragments that cause the rock to have higher than usual gamma readings. Coal and dolomite may contain absorbed uranium. Evaporite deposits may contain potassium minerals such as sylvite and carnallite . 
When this is the case, spectral gamma ray logging should be done to identify the source of these anomalies. Spectral logging is the technique of measuring the spectrum, or number and energy, of gamma rays emitted via natural radioactivity of the rock formation. There are three main sources of natural radioactivity on Earth: potassium (40K), thorium (principally 232Th and 230Th), and uranium (principally 238U and 235U). These radioactive isotopes each emit gamma rays that have a characteristic energy level measured in MeV. The quantity and energy of these gamma rays can be measured by a scintillometer. A log of the spectroscopic response to natural gamma ray radiation is usually presented as a total gamma ray log that plots the weight fraction of potassium (%), thorium (ppm) and uranium (ppm). The primary standards for the weight fractions are geological formations with known quantities of the three isotopes. Natural gamma ray spectroscopy logs became routinely used in the early 1970s, although they had been studied from the 1950s. Each radioactive component is associated with a characteristic gamma ray line. Another example of the use of spectral gamma ray logs is to identify specific clay types, like kaolinite or illite . This may be useful for interpreting the environment of deposition, as kaolinite can form from feldspars in tropical soils by leaching of potassium, and low potassium readings may thus indicate the presence of one or more paleosols . [ 2 ] The identification of specific clay minerals is also useful for calculating the effective porosity of reservoir rock. Gamma ray logs are also used in mineral exploration, especially exploration for phosphates, uranium , and potassium salts.
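The total-count reading described earlier in this article combines the three elemental contributions with fixed weights (GR in API units = 8 × U in ppm + 4 × Th in ppm + 16 × K in weight percent). The sketch below simply applies that weighting; the concentrations are hypothetical and only illustrate how an anomalous uranium concentration can make a clean sand look shaly on a total-count log.

```python
def gr_api(uranium_ppm, thorium_ppm, potassium_wt_pct):
    """Total gamma-ray reading in API units, using the weighting quoted above:
    GR = 8 * U(ppm) + 4 * Th(ppm) + 16 * K(wt%)."""
    return 8 * uranium_ppm + 4 * thorium_ppm + 16 * potassium_wt_pct

# A clean sand with a little potassium feldspar (illustrative numbers only):
print(gr_api(uranium_ppm=1, thorium_ppm=2, potassium_wt_pct=0.5))    # 24 API

# The same sand with anomalously high uranium looks shaly on a total-count log,
# which is why a spectral log is used to separate the three contributions:
print(gr_api(uranium_ppm=10, thorium_ppm=2, potassium_wt_pct=0.5))   # 96 API
```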
https://en.wikipedia.org/wiki/Gamma_ray_logging
Gamma ray tomography ( GRT ) is a non-invasive imaging technique primarily used to characterize multiphase flows within industrial processes. Utilizing gamma radiation attenuation, this technique allows for visualization and detailed analysis of the internal structure and dynamics of materials flowing through pipelines or vessels. Gamma ray tomography experienced substantial advancements starting in the 1990s, notably driven by research conducted at the University of Bergen , Norway. [ 1 ] [ 2 ] The university pioneered high-speed gamma-ray tomography setups optimized for studying complex multiphase flows, establishing itself as a leader in industrial tomography research. A significant development occurred with the second-generation gamma ray tomography system, collaboratively designed by the University of Bergen and Prototech [ no ] for the Saskatchewan Research Council (SRC). [ 3 ] Delivered in 2016, this advanced unit significantly enhanced real-time imaging capabilities, capturing up to 100 frames per second with improved spatial resolution. This unit has since become integral to SRC's Pipe Flow Technology Centre, facilitating advanced analysis of slurry pipeline dynamics and predictive modeling of multiphase flows. [ 4 ] Gamma ray tomography operates based on gamma-ray densitometry, governed by Beer–Lambert's law : I = I 0 B e − ∫ μ ( x ) d x {\displaystyle I=I_{0}Be^{-\int {\mu (x)dx}}} Here, I {\displaystyle I} is the measured intensity, I 0 {\displaystyle I_{0}} is the source intensity, B {\displaystyle B} is the build-up factor, μ {\displaystyle \mu } is the linear attenuation coefficient , and x {\displaystyle x} is the path length between the source and detector. According to this principle, a narrow beam of monochromatic gamma radiation emitted from a source attenuates exponentially when passing through a material, enabling measurement of the material's density distribution along defined paths. Multiple gamma-ray sources and detectors arranged around the investigated material facilitate detailed cross-sectional image reconstruction using algorithms such as the Iterative Least Squares Technique (ILST). [ 5 ] [ 6 ] The linear attenuation coefficient μ {\displaystyle \mu } depends on material properties and photon energy. To optimize measurement accuracy, careful selection of the geometry dimensions and radioactive source with an appropriate photon energy level is crucial. The relative uncertainty in gamma densitometry measurements can be expressed as: σ μ μ = 1 μ x e μ x I 0 τ {\displaystyle {\sigma _{\mu } \over {\mu }}={{1 \over \mu x}{\sqrt {e^{\mu x} \over I_{0}\tau }}}} where σ μ {\displaystyle \sigma _{\mu }} is the absolute uncertainty of μ {\displaystyle \mu } , x {\displaystyle x} is the distance between the source and detector, and τ {\displaystyle \tau } is the integration time. Since the function e x / x {\displaystyle {{\sqrt {e^{x}}}/x}} has a minimum at x = 2 {\displaystyle x=2} , selecting the product μ x {\displaystyle \mu x} close to this value minimizes uncertainty. Typical multiphase flow setups for gamma ray tomography require high temporal resolution. Rather than using scanning setups, these configurations consist of fixed pairs of radioactive sources and detector arrays symmetrically arranged around the pipe center. Gamma radiation emitted from radioactive sources such as Americium-241 is collimated into a fan-shaped beam covering the pipe's cross-section. 
Opposite these sources, individually collimated detector arrays capture narrow-beam measurements, allowing detailed cross-sectional imaging of phase distributions and densities. Semiconductor-based CdZnTe detectors are commonly utilized. [ 6 ] Though not widely implemented in daily industrial operations due to cost and complexity, gamma ray tomography remains an essential reference instrument in multiphase flow research and metering . It provides critical benchmarking data to validate and calibrate alternative multiphase measurement techniques, significantly enhancing multiphase flow research capabilities. [ 7 ] It has also been used extensively to study different multiphase flows, such as slurry flow [ 8 ] and oil–water–gas flow in various geometries. [ 9 ] [ 10 ] Combining gamma ray tomography with techniques like electrical capacitance tomography (ECT) [ 7 ] or electrical resistance tomography (ERT) [ 8 ] enhances multiphase characterization by exploiting the complementary spatial and temporal resolutions of these modalities.
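The design rule quoted earlier in this article, that the product μx should be chosen close to 2 to minimize the relative uncertainty of a densitometry measurement, can be checked numerically. This sketch evaluates the stated expression σ_μ/μ = (1/(μx))·√(e^{μx}/(I₀τ)) on a grid; the count product I₀τ used here is an arbitrary assumed value and only scales the curve, not the location of the minimum.

```python
import numpy as np

def relative_uncertainty(mu_x, counts):
    """Relative uncertainty of a single densitometry path,
    sigma_mu / mu = (1 / (mu*x)) * sqrt(exp(mu*x) / (I0 * tau)),
    where `counts` stands for the product I0 * tau."""
    return np.sqrt(np.exp(mu_x) / counts) / mu_x

mu_x = np.linspace(0.5, 5.0, 1000)
err = relative_uncertainty(mu_x, counts=1e6)               # assumed count budget
print(f"minimum near mu*x = {mu_x[np.argmin(err)]:.2f}")   # ~2.0, as stated above
```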
https://en.wikipedia.org/wiki/Gamma_ray_tomography
Gamma-ray spectroscopy is the qualitative study of the energy spectra of gamma-ray sources , such as in the nuclear industry, geochemical investigation, and astrophysics. Gamma-ray spectrometry , on the other hand, is the method used to acquire a quantitative spectrum measurement. [ 1 ] Most radioactive sources produce gamma rays, which are of various energies and intensities. When these emissions are detected and analyzed with a spectroscopy system, a gamma-ray energy spectrum can be produced. A detailed analysis of this spectrum is typically used to determine the identity and quantity of gamma emitters present in a gamma source, and is a vital tool in radiometric assay. The gamma spectrum is characteristic of the gamma-emitting nuclides contained in the source, just as, in an optical spectrometer , the optical spectrum is characteristic of the material contained in a sample. Gamma rays are the highest-energy form of electromagnetic radiation , being physically the same as all other forms (e.g., X-rays , visible light, infrared, radio) but having (in general) higher photon energy due to their shorter wavelength. Because of this, the energy of gamma-ray photons can be resolved individually, and a gamma-ray spectrometer can measure and display the energies of the gamma-ray photons detected. Radioactive nuclei ( radionuclides ) commonly emit gamma rays in the energy range from a few keV to ~10 MeV , corresponding to the typical energy levels in nuclei with reasonably long lifetimes. Such sources typically produce gamma-ray "line spectra" (i.e., many photons emitted at discrete energies ), whereas much higher energies (upwards of 1 TeV ) may occur in the continuum spectra observed in astrophysics and elementary particle physics. The difference between gamma rays and X-rays is somewhat blurred. Gamma rays arise from transitions between nuclear energy levels and are monoenergetic, whereas X-rays are either related to transitions between atomic energy levels ( characteristic X rays , which are monoenergetic), or are electrically generated (X-ray tube, linear accelerator) and have a broad energy range. [ 2 ] The main components of a gamma spectrometer are the energy-sensitive radiation detector and the electronic devices that analyse the detector output signals, such as a pulse sorter (i.e., multichannel analyzer ). Additional components may include signal amplifiers, rate meters, peak position stabilizers, and data handling devices. Gamma spectroscopy detectors are passive materials that are able to interact with incoming gamma rays. The most important interaction mechanisms are the photoelectric effect , the Compton effect , and pair production . Through these processes, the energy of the gamma ray is absorbed and converted into a voltage signal by detecting the energy difference before and after the interaction [ citation needed ] (or, in a scintillation counter , the emitted photons using a photomultiplier ). The voltage of the signal produced is proportional to the energy of the detected gamma ray. Common detector materials include sodium iodide (NaI) scintillation counters, high-purity germanium detectors, and other scintillators such as bismuth germanate (BGO) and, more recently, GAGG:Ce . To accurately determine the energy of the gamma ray, it is advantageous if the photoelectric effect occurs, as it absorbs all of the energy of the incident ray. Absorbing all the energy is also possible when a series of these interaction mechanisms takes place within the detector volume.
With Compton interaction or pair production, a portion of the energy may escape from the detector volume, without being absorbed. The absorbed energy thus gives rise to a signal that behaves like a signal from a ray of lower energy. This leads to a spectral feature overlapping the regions of lower energy. Using larger detector volumes reduces this effect. More sophisticated methods of reducing this effect include using Compton-suppression shields and employing segmented detectors with add-back (see: clover (detector) ). [ 3 ] The voltage pulses produced for every gamma ray that interacts within the detector volume are then analyzed by a multichannel analyzer (MCA). In the MCA, a pulse-shaping amplifier takes the transient voltage signal and reshapes it into a Gaussian or trapezoidal shape. From this shape, the signal is then converted into a digital form, using a fast analog-to-digital converter (ADC). In new systems with a very high-sampling-rate ADC, the analog-to-digital conversion can be performed without reshaping. Additional logic in the MCA then performs pulse-height analysis , sorting the pulses by their height into specific bins, or channels . Each channel represents a specific range of energy in the spectrum, the number of detected signals for each channel represents the spectral intensity of the radiation in this energy range. By changing the number of channels, it is possible to fine-tune the spectral resolution and sensitivity . [ 4 ] The MCA can send its data to a computer, which stores, displays, and further analyzes the data. A variety of software packages are available from several manufacturers, and generally include spectrum analysis tools such as energy calibration (converting bins to energies), peak area and net area calculation, and resolution calculation. [ 5 ] A USB sound card can serve as a cheap, consumer off-the-shelf ADC, a technique pioneered by Marek Dolleiser. Specialized computer software performs pulse-height analysis on the digitized waveform, forming a complete MCA. [ 6 ] Sound cards have high-speed but low-resolution (up to 192 kHz) ADC chips, allowing for reasonable quality for a low-to-medium count rate. [ 7 ] The "sound card spectrometer" has been further refined in amateur and professional circles. [ 8 ] [ 9 ] Gamma spectroscopy systems are selected to take advantage of several performance characteristics. Two of the most important include detector resolution and detector efficiency. Gamma rays detected in a spectroscopic system produce peaks in the spectrum. These peaks can also be called lines by analogy to optical spectroscopy. The width of the peaks is determined by the resolution of the detector, a very important characteristic of gamma spectroscopic detectors, and high resolution enables the spectroscopist to separate two gamma lines that are close to each other. Gamma spectroscopy systems are designed and adjusted to produce symmetrical peaks of the best possible resolution. The peak shape is usually a Gaussian distribution . In most spectra the horizontal position of the peak is determined by the gamma ray's energy, and the area of the peak is determined by the intensity of the gamma ray and the efficiency of the detector. The most common figure used to express detector resolution is full width at half maximum (FWHM). This is the width of the gamma ray peak at half of the highest point on the peak distribution. Energy resolution figures are given with reference to specified gamma ray energies. 
Resolution can be expressed in absolute (i.e., eV or MeV) or relative terms. For example, a sodium iodide (NaI) detector may have a FWHM of 9.15 keV at 122 keV, and 82.75 keV at 662 keV. These resolution values are expressed in absolute terms. To express the energy resolution in relative terms, the FWHM in eV or MeV is divided by the energy of the gamma ray and usually shown as percentage. Using the preceding example, the resolution of the detector is 7.5% at 122 keV, and 12.5% at 662 keV. A typical resolution of a coaxial germanium detector is about 2 keV at 1332 keV, yielding a relative resolution of 0.15%. Not all gamma rays emitted by the source that pass through the detector will produce a count in the system. The probability that an emitted gamma ray will interact with the detector and produce a count is the efficiency of the detector. High-efficiency detectors produce spectra in less time than low-efficiency detectors. In general, larger detectors have higher efficiency than smaller detectors, although the shielding properties of the detector material are also important factors. Detector efficiency is measured by comparing a spectrum from a source of known activity to the count rates in each peak to the count rates expected from the known intensities of each gamma ray. Efficiency, like resolution, can be expressed in absolute or relative terms. The same units are used (i.e., percentages); therefore, the spectroscopist must take care to determine which kind of efficiency is being given for the detector. Absolute efficiency values represent the probability that a gamma ray of a specified energy passing through the detector will interact and be detected. Relative efficiency values are often used for germanium detectors, and compare the efficiency of the detector at 1332 keV to that of a 3 in × 3 in NaI detector (i.e., 1.2×10 −3 cp s / Bq at 25 cm). Relative efficiency values greater than one hundred percent can therefore be encountered when working with very large germanium detectors. The energy of the gamma rays being detected is an important factor in the efficiency of the detector. An efficiency curve can be obtained by plotting the efficiency at various energies. This curve can then be used to determine the efficiency of the detector at energies different from those used to obtain the curve. High-purity germanium (HPGe) detectors typically have higher sensitivity. Scintillation detectors use crystals that emit light when gamma rays interact with the atoms in the crystals. The intensity of the light produced is usually proportional to the energy deposited in the crystal by the gamma ray; a well known situation where this relationship fails is the absorption of < 200 keV radiation by intrinsic and doped sodium iodide detectors. The mechanism is similar to that of a thermoluminescent dosimeter . The detectors are joined to photomultipliers ; a photocathode converts the light into electrons; and then by using dynodes to generate electron cascades through delta ray production, the signal is amplified. Common scintillators include thallium - doped sodium iodide (NaI(Tl))—often simplified to sodium iodide (NaI) detectors—and bismuth germanate (BGO). Because photomultipliers are also sensitive to ambient light, scintillators are encased in light-tight coverings. Scintillation detectors can also be used to detect alpha - and beta -radiation. 
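To make the resolution figures above concrete, the short sketch below converts the quoted absolute FWHM values into relative resolution, reproducing the percentages stated in the text (7.5 % and 12.5 % for NaI, about 0.15 % for a coaxial germanium detector).

```python
def relative_resolution_pct(fwhm_keV, energy_keV):
    """Relative energy resolution in percent: FWHM divided by the gamma energy."""
    return 100.0 * fwhm_keV / energy_keV

# The NaI and coaxial germanium figures quoted above:
print(relative_resolution_pct(9.15, 122))     # ~7.5 %
print(relative_resolution_pct(82.75, 662))    # ~12.5 %
print(relative_resolution_pct(2.0, 1332))     # ~0.15 %
```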
Thallium-doped sodium iodide (NaI(Tl)) has two principal advantages: NaI(Tl) is also convenient to use, making it popular for field applications such as the identification of unknown materials for law enforcement purposes. Electron hole recombination will emit light that can re-excite pure scintillation crystals; however, the thallium dopant in NaI(Tl) provides energy states within the band gap between the conduction and valence bands. Following excitation in doped scintillation crystals, some electrons in the conduction band will migrate to the activator states; the downward transitions from the activator states will not re-excite the doped crystal, so the crystal is transparent to this radiation. An example of a NaI spectrum is the gamma spectrum of the caesium isotope 137 Cs — see Figure 1 . 137 Cs emits a single gamma line of 662 keV. The 662 keV line shown is actually produced by 137m Ba , the decay product of 137 Cs , which is in secular equilibrium with 137 Cs . The spectrum in Figure 1 was measured using a NaI-crystal on a photomultiplier, an amplifier, and a multichannel analyzer. The figure shows the number of counts within the measuring period versus channel number. The spectrum indicates the following peaks (from left to right): The Compton distribution is a continuous distribution that is present up to channel 150 in Figure 1. The distribution arises because of primary gamma rays undergoing Compton scattering within the crystal: Depending on the scattering angle, the Compton electrons have different energies and hence produce pulses in different energy channels. If many gamma rays are present in a spectrum, Compton distributions can present analysis challenges. To reduce gamma rays, an anticoincidence shield can be used— see Compton suppression . Gamma ray reduction techniques are especially useful for small lithium -doped germanium (Ge(Li)) detectors. The gamma spectrum shown in Figure 2 is of the cobalt isotope 60 Co , with two gamma rays with 1.17 MeV and 1.33 MeV respectively. ( See the decay scheme article for the decay scheme of cobalt-60. ) The two gamma lines can be seen well-separated; the peak to the left of channel 200 most likely indicates a strong background radiation source that has not been subtracted. A backscatter peak can be seen near channel 150, similar to the second peak in Figure 1. Sodium iodide systems, as with all scintillator systems, are sensitive to changes in temperature. Changes in the operating temperature caused by changes in environmental temperature will shift the spectrum on the horizontal axis. Peak shifts of tens of channels or more are commonly observed. Such shifts can be prevented by using spectrum stabilizers . Because of the poor resolution of NaI-based detectors, they are not suitable for the identification of complicated mixtures of gamma ray-producing materials. Scenarios requiring such analyses require detectors with higher resolution. Semiconductor detectors , also called solid-state detectors, are fundamentally different from scintillation detectors: They rely on detection of the charge carriers (electrons and holes) generated in semiconductors by energy deposited by gamma ray photons. In semiconductor detectors, an electric field is applied to the detector volume. An electron in the semiconductor is fixed in its valence band in the crystal until a gamma ray interaction provides the electron enough energy to move to the conduction band . 
Electrons in the conduction band can respond to the electric field in the detector, and therefore move to the positive contact that is creating the electrical field. The gap created by the moving electron is called a "hole", and is filled by an adjacent electron. This shuffling of holes effectively moves a positive charge to the negative contact. The arrival of the electron at the positive contact and the hole at the negative contact produces the electrical signal that is sent to the preamplifier, the MCA, and on through the system for analysis. The movement of electrons and holes in a solid-state detector is very similar to the movement of ions within the sensitive volume of gas-filled detectors such as ionization chambers . Common semiconductor-based detectors include germanium , cadmium telluride , and cadmium zinc telluride . Germanium detectors provide significantly improved energy resolution in comparison to sodium iodide detectors, as explained in the preceding discussion of resolution. Germanium detectors produce the highest resolution commonly available today. However, a disadvantage is the requirement of cryogenic temperatures for the operation of germanium detectors, typically by cooling with liquid nitrogen . In a real detector setup, some photons can and will undergo one or potentially more Compton scattering processes (e.g. in the housing material of the radioactive source, in shielding material or material otherwise surrounding the experiment) before entering the detector material. This leads to a peak structure that can be seen in the above shown energy spectrum of 137 Cs (Figure 1, the first peak left of the Compton edge), the so-called backscatter peak. The detailed shape of backscatter peak structure is influenced by many factors, such as the geometry of the experiment (source geometry, relative position of source, shielding and detector) or the type of the surrounding material (giving rise to different ratios of the cross sections of Photo- and Compton-effect). The basic principle, however, is as follows: The backscatter peak usually appears wide and occurs at lower than 250 keV. [ 11 ] [ 12 ] For incident photon energies E larger than two times the rest mass of the electron (1.022 MeV), pair production can occur. The resulting positron annihilates with one of the surrounding electrons, typically producing two photons with 511 keV. In a real detector (i.e. a detector of finite size) it is possible that after the annihilation: The above Am-Be-source spectrum shows an example of single and double escape peak in a real measurement. If a gamma spectrometer is used for identifying samples of unknown composition, its energy scale must be calibrated first. Calibration is performed by using the peaks of a known source, such as caesium-137 or cobalt-60. Because the channel number is proportional to energy, the channel scale can then be converted to an energy scale. If the size of the detector crystal is known, one can also perform an intensity calibration, so that not only the energies but also the intensities of an unknown source—or the amount of a certain isotope in the source—can be determined. Because some radioactivity is present everywhere (i.e., background radiation ), the spectrum should be analyzed when no source is present. The background radiation must then be subtracted from the actual measurement. Lead absorbers can be placed around the measurement apparatus to reduce background radiation.
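The energy calibration step described above is, in the simplest case, a straight-line fit between channel number and known peak energies. The sketch below shows such a two-parameter calibration; the channel numbers are hypothetical values chosen only for illustration, while the energies correspond to the commonly used 137Cs and 60Co calibration lines.

```python
import numpy as np

def linear_energy_calibration(channels, energies_keV):
    """Fit E(keV) = gain * channel + offset from peaks of known energy."""
    gain, offset = np.polyfit(channels, energies_keV, 1)
    return gain, offset

# Hypothetical peak positions for the usual 137Cs and 60Co calibration lines:
gain, offset = linear_energy_calibration([310, 548, 622], [661.7, 1173.2, 1332.5])
print(f"E(keV) = {gain:.3f} * channel + {offset:.1f}")
print(f"a peak found at channel 412 corresponds to about {gain * 412 + offset:.0f} keV")
```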
https://en.wikipedia.org/wiki/Gamma_spectroscopy
The Gamow factor , Sommerfeld factor or Gamow–Sommerfeld factor , [ 1 ] named after physicists George Gamow or after Arnold Sommerfeld , is a probability factor for two nuclear particles' chance of overcoming the Coulomb barrier in order to undergo nuclear reactions, for example in nuclear fusion . By classical physics , there is almost no possibility for protons to fuse by crossing each other's Coulomb barrier at temperatures commonly observed to cause fusion, such as those found in the Sun . In 1927 it was discovered that there is a significant chance for nuclear fusion due to quantum tunnelling . While the probability of overcoming the Coulomb barrier increases rapidly with increasing particle energy, for a given temperature, the probability of a particle having such an energy falls off very fast, as described by the Maxwell–Boltzmann distribution . Gamow found that, taken together, these effects mean that for any given temperature, the particles that fuse are mostly in a temperature-dependent narrow range of energies known as the Gamow window . The maximum of the distribution is called the Gamow peak . The probability of two nuclear particles overcoming their electrostatic barriers is given by the following factor: [ 2 ] where E G {\displaystyle E_{\text{G}}} is the Gamow energy where μ = m a m b m a + m b {\displaystyle \mu ={\frac {m_{\text{a}}m_{\text{b}}}{m_{\text{a}}+m_{\text{b}}}}} is the reduced mass of the two particles. [ a ] The constant α {\displaystyle \alpha } is the fine-structure constant , c {\displaystyle c} is the speed of light , and Z a {\displaystyle Z_{\text{a}}} and Z b {\displaystyle Z_{\text{b}}} are the respective atomic numbers of each particle. It is sometimes rewritten using the Sommerfeld parameter η , such that where η is a dimensionless quantity used in nuclear astrophysics in the calculation of reaction rates between two nuclei and it also appears in the definition of the astrophysical S -factor . It is defined as [ 3 ] [ 4 ] where e is the elementary charge , v is the magnitude of the relative incident velocity in the centre-of-mass frame. [ b ] The derivation consists in the one-dimensional case of quantum tunnelling using the WKB approximation . [ 5 ] Considering a wave function of a particle of mass m , we take area 1 to be where a wave is emitted, area 2 the potential barrier which has height V and width l (at 0 < x < l {\textstyle 0<x<l} ), and area 3 its other side, where the wave is arriving, partly transmitted and partly reflected. For wave numbers k [m -1 ] and energy E we get: where k = 2 m E / ℏ 2 {\displaystyle k={\sqrt {2mE/\hbar ^{2}}}} and k ′ = 2 m ( V − E ) / ℏ 2 , {\textstyle k'={\sqrt {2m(V-E)/\hbar ^{2}}},} both in [1/m]. This is solved for given A and phase α by taking the boundary conditions at the barrier edges, at x = 0 {\displaystyle x=0} and x = l {\displaystyle x=l} : there Ψ 1 , 3 ( t ) {\textstyle \Psi _{1,3}(t)} and its derivatives must be equal on both sides. For k ′ l ≫ 1 {\displaystyle k'l\gg 1} , this is easily solved by ignoring the time exponential and considering the real part alone (the imaginary part has the same behaviour). We get, up to factors Ψ 1 = A e i ( k x + α ) , Ψ 3 = C 1 e − i ( k x + β ) + C 2 e i ( k x + β ′ ) , {\displaystyle \Psi _{1}=Ae^{i(kx+\alpha )},\Psi _{3}=C_{1}e^{-i(kx+\beta )}+C_{2}e^{i(kx+\beta ')},} Ψ 2 ≈ A e − k ′ x + A e k ′ x : B 1 , B 2 ≈ A {\displaystyle \Psi _{2}\approx Ae^{-k'x}+Ae^{k'x}:B_{1},B_{2}\approx A} and C 1 , C 2 ≈ 1 2 A k ′ k e k ′ l . 
{\displaystyle C_{1},C_{2}\approx {\frac {1}{2}}A{\frac {k'}{k}}e^{k'l}.} Next, the alpha decay can be modelled as a symmetric one-dimensional problem, with a standing wave between two symmetric potential barriers at q 0 < x < q 0 + l {\displaystyle q_{0}<x<q_{0}+l} and − ( q 0 + l ) < x < − q 0 {\displaystyle -(q_{0}+l)<x<-q_{0}} , and emitting waves at both outer sides of the barriers. Solving this can in principle be done by taking the solution of the first problem, translating it by q 0 {\displaystyle q_{0}} and gluing it to an identical solution reflected around x = 0 {\displaystyle x=0} . Due to the symmetry of the problem, the emitting waves on both sides must have equal amplitudes ( A ), but their phases ( α ) may be different. This gives a single extra parameter; however, gluing the two solutions at x = 0 {\textstyle x=0} requires two boundary conditions (for both the wave function and its derivative), so in general there is no solution. In particular, re-writing Ψ 3 {\textstyle \Psi _{3}} (after translation by q 0 {\textstyle q_{0}} ) as a sum of a cosine and a sine of k x {\displaystyle kx} , each having a different factor that depends on k and β; the factor of the sine must vanish, so that the solution can be glued symmetrically to its reflection. Since the factor is in general complex (hence its vanishing imposes two constraints, representing the two boundary conditions), this can in general be solved by adding an imaginary part of k , which gives the extra parameter needed. Thus E will have an imaginary part as well. The physical meaning of this is that the standing wave in the middle decays; the waves newly emitted have therefore smaller amplitudes, so that their amplitude decays in time but grows with distance. The decay constant , denoted λ [1/s], is assumed small compared to E / ℏ {\textstyle E/\hbar } . λ can be estimated without solving explicitly, by noting its effect on the probability current conservation law. Since the probability flows from the middle to the sides, we have: note the factor of 2 is due to having two emitted waves. Taking Ψ ∼ e − λ t {\displaystyle \Psi \sim e^{-\lambda t}} , this gives: Since the quadratic dependence on k ′ l {\displaystyle k'l} is negligible relative to its exponential dependence, we may write: Remembering the imaginary part added to k is much smaller than the real part, we may now neglect it and get: Note that ℏ k m = 2 E / m {\textstyle {\frac {\hbar k}{m}}={\sqrt {2E/m}}} is the particle velocity, so the first factor is the classical rate by which the particle trapped between the barriers ( 2 q 0 {\textstyle 2q_{0}} apart) hits them. Finally, moving to the three-dimensional problem, the spherically symmetric Schrödinger equation reads (expanding the wave function ψ ( r , θ , ϕ ) = χ ( r ) u ( θ , ϕ ) {\displaystyle \psi (r,\theta ,\phi )=\chi (r)u(\theta ,\phi )} in spherical harmonics and looking at the l -th term): Since ℓ > 0 {\displaystyle \ell >0} amounts to enlarging the potential, and therefore substantially reducing the decay rate (given its exponential dependence on V − E {\textstyle {\sqrt {V-E}}} ): we focus on ℓ = 0 {\displaystyle \ell =0} , and get a very similar problem to the previous one with χ ( r ) = Ψ ( r ) / r {\displaystyle \chi (r)=\Psi (r)/r} , except that now the potential as a function of r is not a step function . In short ℏ 2 2 m ( χ ¨ + 2 r χ ˙ ) = ( V ( r ) − E ) χ . 
{\textstyle {\frac {\hbar ^{2}}{2m}}\left({\ddot {\chi }}+{\frac {2}{r}}{\dot {\chi }}\right)=\left(V(r)-E\right)\chi .} The main effect of this on the amplitudes is that we must replace the argument in the exponent, taking an integral of 2 2 m ( V − E ) / ℏ {\textstyle 2{\sqrt {2m(V-E)}}/\hbar } over the distance where V ( r ) > E {\displaystyle V(r)>E} rather than multiplying by width l . We take the Coulomb potential : where ε 0 {\displaystyle \varepsilon _{0}} is the vacuum electric permittivity , e the electron charge , z = 2 is the charge number of the alpha particle and Z the charge number of the nucleus ( Z – z after emitting the particle). The integration limits are then: r 2 = z ( Z − z ) e 2 4 π ε 0 E , {\displaystyle r_{2}={\frac {z(Z-z)e^{2}}{4\pi \varepsilon _{0}E}},} where we assume the nuclear potential energy is still relatively small, and r 1 {\displaystyle r_{1}} , which is where the nuclear negative potential energy is large enough so that the overall potential is smaller than E . Thus, the argument of the exponent in λ is: This can be solved by substituting t = r / r 2 {\textstyle t={\sqrt {r/r_{2}}}} and then t = cos ⁡ ( θ ) {\textstyle t=\cos(\theta )} and solving for θ, giving: where x = r 1 / r 2 {\displaystyle x=r_{1}/r_{2}} . Since x is small, the x -dependent factor is of the order 1. Assuming x ≪ 1 {\textstyle x\ll 1} , the x -dependent factor can be replaced by arccos ⁡ 0 = π / 2 , {\textstyle \arccos 0=\pi /2,} giving: λ ≈ e − E G / E {\displaystyle \lambda \approx e^{-{\sqrt {{E_{\mathrm {G} }}/{E}}}}} with E G = π 2 m / 2 [ z ( Z − z ) e 2 ] 2 ( 4 π ε 0 ℏ ) 2 . {\displaystyle E_{\mathrm {G} }={\frac {\pi ^{2}m/2\left[z(Z-z)e^{2}\right]^{2}}{(4\pi \varepsilon _{0}\hbar )^{2}}}.} Which is the same as the formula given in the beginning of the article with Z a = z {\textstyle Z_{\text{a}}=z} , Z b = Z − z {\textstyle Z_{\text{b}}=Z-z} and the fine-structure constant α = e 2 4 π ε 0 ℏ c : E G = m / 2 / ( 4 ϵ 0 ℏ ) [ Z a e Z b e ] . {\textstyle \alpha ={\frac {e^{2}}{4\pi \varepsilon _{0}\hbar c}}:{\sqrt {E_{\rm {G}}}}={\sqrt {m/2}}/(4\epsilon _{0}\hbar )[Z_{a}eZ_{b}e].} For a radium alpha decay, Z = 88, z = 2 and m ≈ 4 m p , E G is approximately 50 GeV . Gamow calculated the slope of log ⁡ ( λ ) {\textstyle \log(\lambda )} with respect to E at an energy of 5 MeV to be ~ 10 14 J −1 , compared to the experimental value of 0.7 × 10 14 J −1 . [ c ] For an ideal gas, the Maxwell–Boltzmann distribution is proportional to where ⟨ v 2 ⟩ {\displaystyle \langle v^{2}\rangle } is the average squared speed of all particles, k B {\textstyle k_{\rm {B}}} is the Boltzmann constant and T is absolute temperature. The fusion probability is the product of the Maxwell–Boltzmann distribution factor and the Gamow factor The maximum of the fusion probability is given by ∂ P fusion / ∂ E = 0 , {\textstyle \partial P_{\text{fusion}}/\partial E=0,} which yields [ 6 ] This quantity is known as the Gamow peak. [ d ] Expanding P fusion {\displaystyle P_{\text{fusion}}} around E m a x {\displaystyle E_{\rm {max}}} gives: [ 6 ] where (in joule) is the Gamow window. [ e ] In 1927, Ernest Rutherford published an article in Philosophical Magazine on a problem related to Hans Geiger 's 1921 experiment of scattering alpha particles from uranium . [ 7 ] Previous experiments with thorium C' (now called polonium -262) [ f ] confirmed that uranium has a Coulomb barrier of 8.57 MeV, however uranium emitted alpha particles of 4.2 MeV. [ 7 ] The emitted energy was too low to overcome the barrier. 
On 29 July 1928, George Gamow, and independently the next day Ronald Wilfred Gurney and Edward Condon , submitted their solutions based on quantum tunnelling to the journal Zeitschrift für Physik . [ 7 ] Their work was based on previous work on tunnelling by J. Robert Oppenheimer , Gregor Wentzel , Lothar Wolfgang Nordheim , and Ralph H. Fowler . [ 7 ] Gurney and Condon also cited Friedrich Hund . [ 7 ] In 1931, Arnold Sommerfeld introduced a similar factor (a Gaunt factor ) for the discussion of bremsstrahlung . [ 8 ] Gamow popularized his personal version of the discovery in his 1970 book, My World Line: An Informal Autobiography. [ 7 ]
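The Gamow peak described earlier in this article can be located numerically by multiplying the Maxwell–Boltzmann factor exp(−E/kT) by the Gamow factor exp(−√(E_G/E)) and finding the maximum of the product. In the sketch below the temperature and Gamow energy are illustrative assumptions (kT of about 1 keV, roughly the solar-core scale, and an assumed E_G of 500 keV, the order of magnitude for light-ion reactions); the closed-form stationary point used for comparison follows directly from differentiating the exponent.

```python
import numpy as np

def fusion_probability(E_keV, kT_keV, E_G_keV):
    """Product of the Maxwell-Boltzmann factor exp(-E/kT) and the Gamow factor
    exp(-sqrt(E_G/E)); the overall normalisation is irrelevant for the peak."""
    return np.exp(-E_keV / kT_keV - np.sqrt(E_G_keV / E_keV))

# Illustrative inputs only: kT ~ 1 keV and an assumed Gamow energy of 500 keV.
E = np.linspace(0.1, 60.0, 20000)
P = fusion_probability(E, kT_keV=1.0, E_G_keV=500.0)
print(f"Gamow peak near E = {E[np.argmax(P)]:.1f} keV")
# Stationary point of the exponent, obtained by differentiation, for comparison:
print((500.0 * 1.0**2 / 4.0) ** (1.0 / 3.0))      # = 5.0 keV
```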
https://en.wikipedia.org/wiki/Gamow_factor
In organic chemistry , the Ganem oxidation is a name reaction that allows the preparation of carbonyl compounds from primary or secondary alkyl halides with the use of trialkylamine N -oxides, such as N -methylmorpholine N -oxide or trimethylamine N -oxide . [ 1 ] As in other oxoammonium-catalyzed oxidation reactions, the negatively charged oxygen atom of the trialkylamine N -oxide molecule attacks the alkyl halide in an S N 2 manner, displacing the halide as a leaving group. A trialkylamine then deprotonates the α-carbon atom; the resulting electron pair shifts onto the oxygen atom, which shifts its own excess electron pair onto the nitrogen atom. This generates the desired carbonyl, as well as the aforementioned trialkylamine. The reaction is an enhancement of the Kornblum oxidation protocol, which was originally developed using dimethyl sulfoxide or pyridine- N -oxide as the nucleophile. The Ganem oxidation has been used as an intermediate step in the total synthesis of (−)-okilactomycin, converting a primary alkyl halide into an aldehyde . [ 2 ]
https://en.wikipedia.org/wiki/Ganem_oxidation
Ganga Water Lift Project ( गंगाजल उद्वह परियोजना ( Hindi ) ) is a multi-phase drinking water project in Bihar , India . [ 1 ] It is an ambitious project of the Chief Minister of Bihar, Nitish Kumar, to supply safe drinking water to water-stressed towns such as Gaya, Rajgir and Nawada, located in the southern part of the state, through a pipeline carrying water lifted from the Ganga river near Hathidah Ghat in Mokama in Patna district. [ 2 ] The cost of the first phase of this project was initially approved at ₹ 2,836 crore (US$340 million); the cost was later revised to ₹ 4,174 crore (US$490 million). [ 3 ] The Government of Bihar approved the first phase of the Ganga Water Lift Scheme (GWLS) of the Water Resource Department (WRD) in December 2019. [ 4 ] The Ganga Water Lift Project is part of Nitish Kumar's 'Jal-Jivan-Hariyali Abhiyan', which aims to minimize the adverse effects of climate change. [ 5 ] The total length of the pipeline that supplies Ganga water to the three towns is 190.90 km. The Ganga water is lifted near Hathidah Ghat in Mokama and the pipeline runs alongside national and state highways. The main pipeline runs from Hathidah to Giriyak via Sarmera and Barbigha. From Giriyak, one pipeline goes to Rajgir, while another goes to Nawada. The water from the Ganga is brought to the Motnaje water treatment plant in Nawada district through a pipeline. [ 6 ] In Gaya, the urban development & housing department (UDHD) will ensure supply of water to households through the pipeline. The Public Health & Engineering Department (PHED) will be responsible for Ganga water supply in Nawada. The length of the pipeline on the Hathidah–Motnaje–Tetar–Abgilla route is 150 km. [ 7 ] A third pipeline goes to Manpur (near Gaya) via Vanganga, Tapovan and Jethia. A major water storage point has been constructed near Manpur, and similar storage points are to be constructed for the other towns. The project is being implemented in three phases. Ganga water will be supplied to Gaya, Bodhgaya and Rajgir in the first phase of the project. Nawada town would be covered in the second phase. Hyderabad-based infrastructure firm Megha Engineering & Infrastructures Ltd (MEIL) completed phase 1 of the Ganga Water Lift Project in 2022. [ 8 ] [ 9 ]
https://en.wikipedia.org/wiki/Ganga_Water_Supply_Scheme
Ganoderma microsporum immunomodulatory protein or GMI is a protein discovered from the mushroom species Ganoderma microsporum . [ 1 ] [ 2 ] GMI is a pure protein composed of 111 amino acids and exists in nature as a tetramer. [ 3 ] GMI is found in the mycelium of Ganoderma microsporum. During the life cycle of G. microsporum, GMI acts as an important signaling factor in the transition from the fungi's mycelium phase to the fruiting body phase. However, the levels of GMI found in both the mycelium and fruiting body are very low. [ citation needed ] In 2005, researchers utilized genetic and bio-engineering methods to obtain purified GMI, and proved that the protein is structurally similar to LZ-8, the first fungal immunomodulatory protein discovered in 1989. The name GMI is derived from the fact that when cultured with immune cells, GMI was found to not only increase the cells’ hormone production, but also induce higher levels of cellular activity. [ 4 ]
https://en.wikipedia.org/wiki/Ganoderma_microsporum_immunomodulatory_protein
Gans theory or Mie-Gans theory is the extension of Mie theory for the case of spheroidal particles. It gives the scattering characteristics of both oblate and prolate spheroidal particles much smaller than the excitation wavelength. Since it is a solution of the Maxwell equations it should technically not be called a theory. The theory is named after Richard Gans who first published the solution for gold particles in 1912 in an article entitled "Über die Form ultramikroskopischer Goldteilchen". [ 1 ] A subsequent article in 1915 discussed the case of silver particles. [ 2 ] In Gans theory, the absorption depends only on the aspect ratio of the particles and not on their absolute dimensions. This dependence is introduced through so-called polarization or shape factors related to the three dimensions of the particle. For the case of spheroids, this reduces to only two different factors, since the particle is rotationally symmetric around one axis. It is currently being applied in the field of nanotechnology to characterize silver and gold nanorods . [ 3 ] A popular alternative for this is the Discrete dipole approximation (DDA) method. Gans theory gives the exact solution for spheroidal particles; real nanorods, however, have a more cylindrical shape. Using DDA, it is possible to better model the exact shape of the particles. As the name suggests, this will only give an approximation.
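As a numerical illustration of the shape factors mentioned above, the sketch below uses the standard geometrical (depolarization) factors of a prolate spheroid, which depend only on the aspect ratio; for a rotationally symmetric particle the two transverse factors coincide and the three factors sum to one. The formula used is the common textbook expression for a prolate spheroid and is supplied here as an assumption rather than taken from this article.

```python
import numpy as np

def prolate_shape_factors(aspect_ratio):
    """Geometrical (de)polarization factors of a prolate spheroid with
    semi-axes a > b = c.  They depend only on the aspect ratio a/b,
    the two transverse factors are equal, and the three factors sum to 1."""
    e2 = 1.0 - 1.0 / aspect_ratio**2            # squared eccentricity
    e = np.sqrt(e2)
    L_long = (1.0 - e2) / e2 * (np.log((1.0 + e) / (1.0 - e)) / (2.0 * e) - 1.0)
    L_trans = (1.0 - L_long) / 2.0
    return L_long, L_trans, L_trans

La, Lb, Lc = prolate_shape_factors(3.0)          # a nanorod-like aspect ratio of 3
print(La, Lb, Lc, La + Lb + Lc)                  # the sum is 1; La tends to 1/3
                                                 # as the aspect ratio approaches 1
```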
https://en.wikipedia.org/wiki/Gans_theory
In mathematics , the Gan–Gross–Prasad conjecture is a restriction problem in the representation theory of real or p -adic Lie groups posed by Gan Wee Teck , Benedict Gross , and Dipendra Prasad . [ 1 ] The problem originated from a conjecture of Gross and Prasad for special orthogonal groups but was later generalized to include all four classical groups . In the cases considered, it is known that the multiplicity of the restrictions is at most one [ 2 ] [ 3 ] [ 4 ] and the conjecture describes when the multiplicity is precisely one. A motivating example is the following classical branching problem in the theory of compact Lie groups . Let π {\displaystyle \pi } be an irreducible finite-dimensional representation of the compact unitary group U ( n ) {\displaystyle U(n)} , and consider its restriction to the naturally embedded subgroup U ( n − 1 ) {\displaystyle U(n-1)} . It is known that this restriction is multiplicity-free, but one may ask precisely which irreducible representations of U ( n − 1 ) {\displaystyle U(n-1)} occur in the restriction. By the Cartan–Weyl theory of highest weights , there is a classification of the irreducible representations of U ( n ) {\displaystyle U(n)} via their highest weights which are in natural bijection with sequences of integers a _ = ( a 1 ≥ a 2 ≥ ⋯ ≥ a n ) {\displaystyle {\underline {a}}=(a_{1}\geq a_{2}\geq \cdots \geq a_{n})} . Now suppose that π {\displaystyle \pi } has highest weight a _ {\displaystyle {\underline {a}}} . Then an irreducible representation τ {\displaystyle \tau } of U ( n − 1 ) {\displaystyle U(n-1)} with highest weight b _ {\displaystyle {\underline {b}}} occurs in the restriction of π {\displaystyle \pi } to U ( n − 1 ) {\displaystyle U(n-1)} (viewed as a subgroup of U ( n ) {\displaystyle U(n)} ) if and only if a _ {\displaystyle {\underline {a}}} and b _ {\displaystyle {\underline {b}}} are interlacing, i.e. a 1 ≥ b 1 ≥ a 2 ≥ b 2 ≥ ⋯ ≥ b n − 1 ≥ a n {\displaystyle a_{1}\geq b_{1}\geq a_{2}\geq b_{2}\geq \cdots \geq b_{n-1}\geq a_{n}} . [ 5 ] The Gan–Gross–Prasad conjecture then considers the analogous restriction problem for other classical groups. [ 6 ] The conjecture has slightly different forms for the different classical groups. The formulation for unitary groups is as follows. Let V {\displaystyle V} be a finite-dimensional vector space over a field k {\displaystyle k} not of characteristic 2 {\displaystyle 2} equipped with a non-degenerate sesquilinear form that is ε {\displaystyle \varepsilon } -Hermitian (i.e. ε = 1 {\displaystyle \varepsilon =1} if the form is Hermitian and ε = − 1 {\displaystyle \varepsilon =-1} if the form is skew-Hermitian). Let W {\displaystyle W} be a non-degenerate subspace of V {\displaystyle V} such that V = W ⊕ W ⊥ {\displaystyle V=W\oplus W^{\perp }} and W ⊥ {\displaystyle W^{\perp }} is of dimension ( ε + 1 ) / 2 {\displaystyle (\varepsilon +1)/2} . Then let G = G ( V ) × G ( W ) {\displaystyle G=G(V)\times G(W)} , where G ( V ) {\displaystyle G(V)} is the unitary group preserving the form on V {\displaystyle V} , and let H = Δ G ( W ) {\displaystyle H=\Delta G(W)} be the diagonal subgroup of G {\displaystyle G} . Let π = π 1 ⊠ π 2 {\displaystyle \pi =\pi _{1}\boxtimes \pi _{2}} be an irreducible smooth representation of G {\displaystyle G} and let ν {\displaystyle \nu } be either the trivial representation (the "Bessel case") or the Weil representation (the "Fourier–Jacobi case"). 
Let φ = φ 1 × φ 2 {\displaystyle \varphi =\varphi _{1}\times \varphi _{2}} be a generic L-parameter for G = G ( V ) × G ( W ) {\displaystyle G=G(V)\times G(W)} , and let Π φ {\displaystyle \Pi _{\varphi }} be the associated Vogan L-packet. If φ {\displaystyle \varphi } is a local L-parameter for G {\displaystyle G} , then Letting η G P {\displaystyle \eta _{\mathrm {GP} }} be the "distinguished character" defined in terms of the Langlands–Deligne local constant , then furthermore For a quadratic field extension E / F {\displaystyle E/F} , let L E ( s , π 1 × π 2 ) := L E ( s , π 1 ⊠ π 2 , s t d n ⊠ s t d n − 1 ) {\displaystyle L_{E}(s,\pi _{1}\times \pi _{2}):=L_{E}(s,\pi _{1}\boxtimes \pi _{2},\mathrm {std} _{n}\boxtimes \mathrm {std} _{n-1})} where L E {\displaystyle L_{E}} is the global L-function obtained as the product of local L-factors given by the local Langlands conjectures . The conjecture states that the following are equivalent: In a series of four papers between 2010 and 2012, Jean-Loup Waldspurger proved the local Gan–Gross–Prasad conjecture for tempered representations of special orthogonal groups over p -adic fields . [ 7 ] [ 8 ] [ 9 ] [ 10 ] In 2012, Colette Moeglin and Waldspurger then proved the local Gan–Gross–Prasad conjecture for generic non-tempered representations of special orthogonal groups over p -adic fields. [ 11 ] In his 2013 thesis, Raphaël Beuzart-Plessis proved the local Gan–Gross–Prasad conjecture for the tempered representations of unitary groups in the p -adic Hermitian case under the same hypotheses needed to establish the local Langlands conjecture . [ 12 ] Hongyu He proved the Gan-Gross-Prasad conjectures for discrete series representations of the real unitary group U(p,q). [ 13 ] In a series of papers between 2004 and 2009, David Ginzburg , Dihua Jiang , and Stephen Rallis showed the (1) implies (2) direction of the global Gan–Gross–Prasad conjecture for all quasisplit classical groups. [ 14 ] [ 15 ] [ 16 ] In the Bessel case of the global Gan–Gross–Prasad conjecture for unitary groups, Wei Zhang used the theory of the relative trace formula by Hervé Jacquet and the work on the fundamental lemma by Zhiwei Yun to prove that the conjecture is true subject to certain local conditions in 2014. [ 17 ] In the Fourier–Jacobi case of the global Gan–Gross–Prasad conjecture for unitary groups, Yifeng Liu and Hang Xue showed that the conjecture holds in the skew-Hermitian case, subject to certain local conditions. [ 18 ] [ 19 ] In the Bessel case of the global Gan–Gross–Prasad conjecture for special orthogonal groups and unitary groups, Dihua Jiang and Lei Zhang used the theory of twisted automorphic descents to prove that (1) implies (2) in its full generality, i.e. for any irreducible cuspidal automorphic representation with a generic global Arthur parameter, and that (2) implies (1) subject to a certain global assumption. [ 20 ]
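The classical branching example at the start of this article, that an irreducible representation of U(n−1) occurs in the restriction of an irreducible representation of U(n) exactly when the highest weights interlace, is easy to check mechanically. The sketch below tests the interlacing condition for a pair of integer sequences; the example weights are arbitrary.

```python
def interlaces(a, b):
    """Return True when the U(n-1) highest weight b interlaces the U(n)
    highest weight a, i.e. a_1 >= b_1 >= a_2 >= ... >= b_{n-1} >= a_n,
    which is exactly the condition for b to occur in the restriction."""
    assert len(a) == len(b) + 1
    return all(a[i] >= b[i] >= a[i + 1] for i in range(len(b)))

print(interlaces([5, 3, 1], [4, 2]))   # True:  5 >= 4 >= 3 >= 2 >= 1
print(interlaces([5, 3, 1], [2, 2]))   # False: b_1 = 2 < a_2 = 3
```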
https://en.wikipedia.org/wiki/Gan–Gross–Prasad_conjecture
In management literature, gap analysis involves the comparison of actual performance with potential or desired performance. [ 1 ] If an organization does not make the best use of current resources, or forgoes investment in productive physical capital or technology, it may produce or perform below an idealized potential. This concept is similar to an economy's production being below the production possibilities frontier . Gap analysis identifies gaps between the optimized allocation and integration of the inputs (resources), and the current allocation-level. This reveals areas that can be improved. Gap analysis involves determining, documenting and improving the difference between business requirements and current capabilities. Gap analysis naturally flows from benchmarking and from other assessments. Once the general expectation of performance in an industry is understood, it is possible to compare that expectation with the company's current level of performance. This comparison becomes the gap analysis. Such analysis can be performed at the strategic or at the operational level of an organization. Gap analysis is a formal study of what a business is doing currently and where it wants to go in the future. It can be conducted, in different perspectives, as follows: Gap analysis provides a foundation for measuring investment of time, money and human resources required to achieve a particular outcome (e.g. to turn the salary payment process from paper-based to paperless with the use of a system). Note that "GAP analysis" has also been used [ by whom? ] as a means of classifying how well a product or solution meets a targeted need or set of requirements. In this case, "GAP" can be used as a ranking of "Good", "Average" or "Poor". (This terminology appears in the PRINCE2 project management publication.) The need for new products or additions to existing lines may emerge from portfolio analysis, in particular from the use of the Boston Consulting Group Growth-share matrix —or the need may emerge from the regular process of following trends in the requirements of consumers. At some point, a gap emerges between what existing products offer and what the consumer demands. The organization must fill that gap to survive and grow. Gap analysis can identify gaps in the market. Thus, comparing forecast profits to desired profits reveals the planning gap . This represents a goal for new activities in general, and new products in particular. The planning gap can be divided into three main elements: usage gap, existing gap, and product gap. The usage gap is the gap between the total potential for the market and actual current usage by all consumers in the market. Data for this calculation includes: Existing consumer usage makes up the total current market, from which market shares, for example, are calculated. It usually derives from marketing research, most accurately from panel research, but also from adhoc work. Sometimes it may be available from figures that governments or industries have collected. However, these are often based on categories that make bureaucratic sense but are less helpful in marketing terms. The ' usage gap' is thus: This is an important calculation. Many, if not most, marketers accept existing market size—suitably projected their forecast timescales—as the boundary for expansion plans. Though this is often the most realistic assumption, it may impose an unnecessary limit on horizons. 
For example: the original market for video-recorders was limited to professional users who could afford high prices. Only after some time did the technology extend to the mass market. In the public sector, where service providers usually enjoy a monopoly, the usage gap is probably the most important factor in activity development. However, persuading more consumers to take up family benefits, for example, is probably more important to the relevant government department than opening more local offices. Usage gap is most important for brand leaders. If a company has a significant share of the whole market, they may find it worthwhile to invest in making the market bigger. This option is not generally open to minor players, though they may still profit by targeting specific offerings as market extensions. All other gaps relate to the difference between existing sales (market share) and total sales of the market as a whole. The difference is the competitor share. These gaps therefore, relate to competitive activity. The product gap —also called the segment or positioning gap —is that part of the market a particular organization is excluded from because of product or service characteristics. This may be because the market is segmented and the organization does not have offerings in some segments, or because the organization positions its offerings in a way that effectively excludes certain potential consumers—because competitive offerings are much better placed for these consumers. This segmentation may result from deliberate policy. Segmentation and positioning are powerful marketing techniques, but the trade-off—against better focus—is that market segments may effectively be put beyond reach. On the other hand, product gap can occur by default; the organization has thought out its positioning, its offerings drifted to a particular market segment. The product gap may be the main element of the planning gap where an organization can have productive input; hence the emphasis on the importance of correct positioning. A gap analysis can also be used to analyze gaps in processes and the gulf between the existing outcome and the desired outcome. This step process can be illustrated by the example below: A gap analysis can also be used to compare one process to others performed elsewhere, which are often identified through benchmarking. In this usage, one compares each process side-by-side and step-by-step and then notes the differences. One then analyzes each deviation to determine if there is any benefit to changing to the alternate process. The results of this analysis (in the context of the benefits and detriments of changing processes) may support the maintenance of the current process, the wholesale adoption of an alternate process, or a fusion of different aspects of each process.
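The planning-gap and usage-gap calculations described earlier in this article are simple differences; the sketch below spells them out. The figures are invented purely for illustration.

```python
def planning_gap(desired_profit, forecast_profit):
    """Planning gap: desired profits minus forecast profits."""
    return desired_profit - forecast_profit

def usage_gap(market_potential, existing_usage):
    """Usage gap: total market potential minus actual current usage."""
    return market_potential - existing_usage

print(planning_gap(desired_profit=120, forecast_profit=95))      # 25
print(usage_gap(market_potential=1000, existing_usage=700))      # 300
```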
https://en.wikipedia.org/wiki/Gap_analysis
A gap surface plasmon (or gap plasmon) is a guided electromagnetic wave which propagates in a transparent medium located between two extremely close metallic regions. Propagating in a gap between metals forces light to propagate partially inside the metallic regions, [ 1 ] causing the gap plasmon to slow down. The velocity of the gap plasmon can be modulated by changing the thickness of the gap, even by a few nanometers. A gap plasmon is a guided mode, a solution of Maxwell's equations without source. It is the form under which light propagates inside an extremely thin gap between two metals (whether of the same nature or not). As a gap plasmon, the electromagnetic wave can propagate up to four to five times slower than in vacuum. Such a guided mode only exists for magnetic fields parallel to the interfaces (p polarization). The distance between the metallic regions has to be typically smaller than 50 nm in order to noticeably slow the guided mode. The gap plasmon in fact propagates partially inside the metal: the field of the gap plasmon penetrates the metal to a depth of typically 25 nm, called the skin depth. A slow guided mode presents a short effective wavelength and so a very large wave vector (denoted k x when the wave propagates along an Ox axis). As the thickness of the dielectric region decreases, the gap plasmon is slowed by the metal and its effective index (as well as its wavevector) increases, while its effective wavelength shrinks. Devices based on gap plasmons, such as resonators, have a typical size which is of the order of the effective wavelength. Gap plasmon resonators therefore generally have a reduced size compared to the wavelength of light in vacuum. Such miniaturization is particularly sought after in plasmonics . They can be obtained by self-assembly of chemically synthesized nanocubes or by lithography. A gap plasmon resonator is a cavity for the guided mode: the wave is reflected back and forth inside the resonator. Such structures (see picture) have a very small volume compared to the wavelength in vacuum (which allows a very large Purcell effect to be reached). Such resonators can then be used to design metasurfaces, to fabricate reflection holograms, or for subwavelength color printing. Example: chemically synthesized silver nanocubes on a gold layer, separated by a polymer (see picture). Electro-optical modulators are designed to modulate a light signal, i.e. they modulate the characteristics of a light beam (such as its wavelength, polarization state or intensity) to encode a signal. Gap-plasmon-based modulators are the smallest existing modulators. [ 3 ] Losses are reduced thanks to this small size. They operate over a large frequency range. The upper frequency limit of such devices is currently beyond the reach of electronic measuring equipment.
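The slowdown of the gap plasmon with shrinking gap thickness can be illustrated with the usual lossless metal–insulator–metal dispersion relation for the fundamental mode, tanh(k_d d/2) = −ε_d k_m/(ε_m k_d) with k_i = k0·sqrt(n_eff² − ε_i). This is a sketch under simplifying assumptions not taken from this article: the metal permittivity is set to a real, lossless value of −25 (a rough stand-in for a noble metal in the near infrared) and the dielectric permittivity to 2.25.

```python
import numpy as np
from scipy.optimize import brentq

def gap_plasmon_neff(gap_nm, wavelength_nm, eps_d=2.25, eps_m=-25.0):
    """Effective index of the fundamental metal-insulator-metal gap-plasmon
    mode from the lossless dispersion relation
    tanh(k_d * d / 2) = -(eps_d * k_m) / (eps_m * k_d),
    with k_i = k0 * sqrt(n_eff**2 - eps_i)."""
    k0 = 2 * np.pi / wavelength_nm                  # vacuum wavenumber, 1/nm

    def residual(neff):
        kd = k0 * np.sqrt(neff**2 - eps_d)
        km = k0 * np.sqrt(neff**2 - eps_m)
        return np.tanh(kd * gap_nm / 2) + eps_d * km / (eps_m * kd)

    # The mode index lies above the dielectric index; 20 is a generous upper bound.
    return brentq(residual, np.sqrt(eps_d) + 1e-6, 20.0)

for gap in (50, 20, 10, 5):                         # nanometre-scale gaps
    neff = gap_plasmon_neff(gap, wavelength_nm=800)
    print(f"gap {gap:3d} nm: n_eff = {neff:.2f}, "
          f"effective wavelength = {800 / neff:.0f} nm")
```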
https://en.wikipedia.org/wiki/Gap_surface_plasmon
In many-body physics , most commonly within condensed-matter physics , a gapped Hamiltonian is a Hamiltonian for an infinitely large many-body system where there is a finite energy gap separating the (possibly degenerate) ground space from the first excited states . A Hamiltonian that is not gapped is called gapless . The property of being gapped or gapless is formally defined through a sequence of Hamiltonians on finite lattices in the thermodynamic limit . [ 1 ] [ unreliable source? ] An example is the BCS Hamiltonian in the theory of superconductivity. In quantum many-body systems, ground states of gapped Hamiltonians have exponential decay of correlations. [ 2 ] [ 3 ] [ 4 ] In quantum field theory , a continuum limit of many-body physics, a gapped Hamiltonian induces a mass gap .
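The finite-lattice definition above can be illustrated with a small exact-diagonalization sketch for the transverse-field Ising chain, a standard example that is not taken from this article: for field h larger than the coupling J the two lowest energies stay a finite distance apart as the chain grows, while at the critical point h = J the gap closes (roughly like 1/n) in the thermodynamic limit. In the ordered phase h < J the ground space becomes (nearly) twofold degenerate, so the relevant gap would have to be measured above that degenerate space.

```python
import numpy as np
from functools import reduce

I2 = np.eye(2)
X = np.array([[0.0, 1.0], [1.0, 0.0]])
Z = np.array([[1.0, 0.0], [0.0, -1.0]])

def site_operator(op, site, n):
    """Embed a single-site operator into the 2**n-dimensional chain Hilbert space."""
    factors = [I2] * n
    factors[site] = op
    return reduce(np.kron, factors)

def tfim_hamiltonian(n, J, h):
    """Open transverse-field Ising chain: H = -J * sum Z_i Z_{i+1} - h * sum X_i."""
    H = np.zeros((2**n, 2**n))
    for i in range(n - 1):
        H -= J * site_operator(Z, i, n) @ site_operator(Z, i + 1, n)
    for i in range(n):
        H -= h * site_operator(X, i, n)
    return H

def lowest_gap(n, J, h):
    """Difference between the two lowest eigenvalues of the finite chain."""
    energies = np.linalg.eigvalsh(tfim_hamiltonian(n, J, h))
    return energies[1] - energies[0]

for h in (1.0, 2.0):          # h = J = 1 is the critical (gapless) point
    gaps = [lowest_gap(n, J=1.0, h=h) for n in (6, 8, 10)]
    print(h, [round(g, 3) for g in gaps])
# For h > J the gap stays of order 2*(h - J) as n grows;
# at h = J it shrinks towards zero with increasing n.
```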
https://en.wikipedia.org/wiki/Gapped_Hamiltonian
Despite the best efforts of government, health, and environmental agencies, improper use of hazardous chemicals is pervasive in commercial products, and can yield devastating effects, from people developing brittle bones [ 1 ] and severe congenital defects, to strips of wildlife lying dead by poisoned rivers. [ 2 ] Mevinphos is a broad-spectrum insecticide used on a wide variety of crops, including apples, peaches, strawberries, nectarines, celery, and cucumbers. [ 3 ] It belongs to the chemical group known as organophosphates , which have neurotoxic effects not only in insects, but also in birds, fish, amphibians, and mammals. [ 4 ] While not carcinogenic , mevinphos is potent via all means of exposure, including absorption, ingestion, and inhalation. [ 5 ] Organophosphates inhibit acetylcholinesterase (AChE), an enzyme responsible for regulating levels of the muscle-stimulating neurotransmitter, acetylcholine (ACh). This results in high levels of acetylcholine in the body, which causes nearly every muscle in the body to be stimulated without cessation. [ 4 ] Symptoms of organophosphate poisoning include violent convulsions, vomiting, miosis , lachrymation , sweating, salivation, diarrhea, and potentially death. [ 3 ] Most nerve gases, including sarin , soman , and tabun , are organophosphates, [ 6 ] and their use in warfare is banned by the Geneva Protocol of 1925 and deemed a war crime . [ 7 ] From 1981 to 1984, 1,156 people, consisting of field workers and agricultural officials in Salinas Valley, California, were reported to have been exposed to mevinphos and to have developed insecticide-related illnesses. The exposure began on April 23, 1981, when the field was sprayed with mevinphos at 5:00 am that morning, despite a cancellation order having been given the day before. Later that morning at 7:00 am, 44 field workers began harvesting iceberg lettuce on the farm. Two hours later, many of these workers developed symptoms of dizziness, headaches, eye irritation, visual disturbances, and nausea. [ 8 ] Thirty-one farm workers along with three agricultural officials who were in the field that morning were sent to a local hospital to be tested for plasma cholinesterase, a test that measures the levels of two substances necessary for the nervous system to work properly. [ 9 ] Two workers were kept in the hospital for further observation and treatment due to respiratory complications. Two other people had levels of plasma cholinesterase below the normal limit. The rest of the workers were disrobed, hosed down with water, and asked to get dressed, go home, and wash their clothes at home. No one was told not to come to work the next day. However, due to ongoing symptoms, many of the workers were not able to report to work the next morning. A union representative arranged for 29 workers to be taken to a second hospital for further testing and evaluation. One person was hospitalized due to bradycardia . [ 8 ] The National Institute for Occupational Safety and Health (NIOSH) began an investigation on April 24, working closely with staff from the second hospital during this acute phase of the incident.
The 29 workers reported the following signs and symptoms: “eye irritation (76%), headache (48%), visual disturbances (48%), dizziness (41%), nausea (38%), fatigue (28%), chest pain or shortness of breath (21%), skin irritation (17%), fasciculation of the eyelids (10%), fasciculation of muscles in the arm (7%), excessive sweating (7%), and diarrhea (7%), with twenty-two (76%) of the workers reporting three or more symptoms or signs.” [ 8 ] The workers were tested approximately every week over the course of 8 to 12 weeks. When initially tested during the first week, everyone's plasma cholinesterase and red blood cell (RBC) cholinesterase were below normal levels. Test levels the following week increased by 5%, and the week after that by 14%; over time, their levels kept increasing toward normal. This pattern is consistent with organophosphates inhibiting the enzyme cholinesterase, which produces toxic effects by allowing the neurotransmitter acetylcholine to accumulate in the nervous system. It is not known how many other cases from this incident went unreported and unfollowed. [ 8 ] Mevinphos is considered among the ten pesticides posing the highest health risks, on the basis of the acute illnesses reported in 1984–1990, a low oral LD50 , and a low Reference Dose (RfD). [ 10 ] On February 28, 1994, the California Environmental Protection Agency, Pesticide and Environmental Toxicology Section, recommended the cancellation of mevinphos use in California due to the inability to implement safe mitigative measures and the inability to prevent unacceptable dietary and worker exposures. [ 11 ] Following the end of WWII, organochlorines , such as DDT and polychlorinated biphenyls (PCBs) , along with other synthetic chemicals, were developed for use in agriculture. [ 12 ] Their uses included insecticides, fungicides, and in some cases, fire retardants; while effective for agriculture and forest services, these chemicals are known lipophiles (meaning that they attach to fat cells in organisms) and have been shown to bioaccumulate , passing from prey to predator and from mother to offspring throughout embryonic development and lactation. [ 12 ] Studies have shown that exposure to organochlorines like DDT can lead to increased risks of pancreatic cancer , non-Hodgkin’s lymphoma , impaired lactation, possible male infertility and testicular cancers, and DDT poisoning in those who work to manufacture the chemicals. [ 12 ] Rachel Carson 's exposé, Silent Spring , is largely credited with spurring public awareness of the ecological and human health impacts of organochlorines such as DDT. [ 12 ] Her work began a national movement to ban such chemicals. [ 12 ] However, chemical companies responded with a backlash claiming that her work was falsified, and DDT was not banned until 1972. [ 12 ] Since then, many other organochlorines have been placed under similar restrictions and bans, yet there are few regulations in place for new organochlorines being produced in labs. [ 13 ] This lack of regulation has raised concerns among members of the environmental community about the hazards of these unstudied pollutants not being monitored by the United States Environmental Protection Agency (EPA) , especially in light of budget cuts and bureaucratic inefficacy. [ 14 ] All pesticides in the U.S. must be reviewed by the Environmental Protection Agency (EPA), under the regulations of the Federal Insecticide, Fungicide, and Rodenticide Act (FIFRA), at least once every 15 years.
[ 15 ] The EPA defines “pesticides” under FIFRA by four criteria. [ 16 ] Radium is a radioactive element that is naturally found at low levels in the environment from uranium and thorium decay. It can be found virtually everywhere, including the soil, water, rocks, and flora. As radium is naturally present throughout the environment, all humans are almost always exposed to radium. [ 17 ] At natural levels, radium is quite benign. However, at excessive levels, radium poisoning can occur. When the body takes in radium, it perceives it as calcium. Consequently, the body deposits radium in the bones, which can lead to brittle bones, collapsed spines, and tooth loss. [ 18 ] Radium was discovered in 1898 by Marie and Pierre Curie . [ 18 ] In the early 1900s it was used as a radiation treatment for cancer. Expansion of radium's use in medical practice was earnestly attempted in the treatment of rheumatism and mental disorders. [ 17 ] However, these pursuits were unsuccessful. Due to radium's luminescent properties, American inventor William J. Hammer mixed radium with glue and zinc sulfide to make glow-in-the-dark paint. [ 18 ] This paint was used by the U.S. Radium Corporation and named Undark . [ 18 ] The paint was primarily used for wristwatch dials. Further application of the paint reached military equipment after the company accepted a contract with the U.S. government during WWI. [ 18 ] Starting in 1916, the U.S. Radium Corporation established factories in New Jersey that recruited dozens of women to paint the watch dials with Undark. [ 19 ] No safety equipment was given to the women, nor were any precautions taken. [ 18 ] The women were instructed to frequently lick the paint brushes in order to keep them wet and shaped to a fine point. Throughout every day, the women's clothes and skin were covered in radium paint. This led to the women developing fatal radium poisoning. [ 18 ] By the mid-1920s, dozens upon dozens of these women were falling ill and dying prolonged, horrific deaths. The radium they ingested was dissolving their bones from the inside, causing severe pain and enormous deformities, with pieces of their bodies easily breaking and falling off. By 1927, over 50 women had died from radium poisoning. [ 20 ] A smaller number of these female workers tried suing the company, not only for financial compensation to pay their large medical bills and to afford themselves income to live out the rest of their lives, but also to expose the company's wrongdoing. [ 20 ] Dial painter Grace Fryer and four other women sued U.S. Radium for $250,000. However, this lawsuit was unsuccessful because of U.S. Radium's vast team of lawyers and contractual affiliation with the U.S. government. The women were so desperate to afford medical treatment and food that they had to settle for $10,000 each and a $600 annual payment. All five women passed away within two years of the settlement. [ 18 ] The fate of the Radium Girls raised significant national concern for workers’ health rights and conditions. After the incident, precautions and safety equipment were mandated to protect workers handling radioactive material in several occupations. The Manhattan Project notably used fume hoods, personal protective equipment, sanitation, and frequent checks for contamination, in order to prevent a repeat of the Radium Girls tragedy. In 1949, the U.S. Congress passed a law that compensates workers for occupational illnesses.
[ 18 ] By the time World War II began, safety limits for handling radiation had been set by the federal government. [ 20 ] In 1934, the International Commission on Radiological Protection (ICRP) established a tolerance dose for workers of 0.2 roentgens of radiation exposure per day. In 1936, the National Committee on Radiation Protection and Measurements (NCRP) reduced the limit to 0.1 roentgens per day, which held through World War II. From 1936 through 1977 there were continual revisions by professional scientists and government agencies as to what constituted safe doses. By the end of World War II, arguments had ensued between U.S. military leaders and civilian officials as to the best practices for controlling nuclear energy and preventing the fabrication of nuclear weapons by other nations. The dispute went to Congress, resulting in the passage of the Atomic Energy Act (AEA) of 1946. [ 1 ] The EPA was created in 1970 to accept certain functions and responsibilities from other federal agencies and departments. Since its inception, the EPA has run environmental programs that address radioactive waste disposal sites, conduct off-site monitoring around nuclear power plants, and keep an eye on natural sources of radioactivity, such as radon. [ 1 ] The EPA has developed guidance on topics such as occupational radiation limits and exposures for federal agencies and members of the public. The EPA can offer recommendations on quality assurance programs for nuclear medicine under its FRC-derived authority. [ 1 ] Mercury is a naturally occurring element in the Earth’s lithosphere that can be found in elemental, inorganic, and organic-compounded forms. It is often found in coal deposits and regions rich in fossil fuels. [ 21 ] Like other heavy metals, mercury can be released from both natural and anthropogenic sources. [ 22 ] Mercury from natural sources (such as soil/sedimentary erosion or volcanic eruptions) accounts for a small percentage of rising mercury levels. [ 22 ] Meanwhile, the release of industrial mercury from mining and fossil fuel combustion has led to heightened mercury pollution in the atmosphere. [ 22 ] In fact, fossil fuel combustion accounts for 45% of human mercury release. [ 23 ] The effects of mercury exposure vary with the degree of exposure, the demographics of the individual exposed, and the mercury form or compound involved. [ 24 ] Because it is a neurotoxin , mercury can be particularly damaging to developing fetuses exposed in utero and to young children. Inhalation of mercury vapor is more deadly and can cause kidney failure and respiratory problems if not treated. [ 24 ] From an ecological perspective, mercury is concerning because of its ability to bioaccumulate in food chains, particularly in marine environments. [ 23 ] For this reason, the EPA advises against the regular consumption of fish and shellfish that are documented to contain high levels of mercury, especially for pregnant and nursing mothers and young children. [ 24 ] In recent years the United States Environmental Protection Agency has established policies to mitigate atmospheric mercury release from the combustion of fossil fuels and waste. [ 25 ] These policies include the 2011 Mercury and Air Toxics Standards, which require that power plants use controls and technologies that mitigate mercury pollution.
[ 25 ] Between 2011 and 2013, additional policies were applied to municipal and medical waste management facilities mandating that sewage and waste containing mercury not be incinerated. [ 25 ] Earlier standards from 1991 also established a maximum contaminant level, or MCL, of 0.002 mg of mercury per liter of municipal drinking water. [ 25 ] Legal measures such as the Clean Air and Clean Water Acts, the Safe Drinking Water Act , and the Resource Conservation and Recovery Act also set standards for pollutant release and clean-up in the United States. [ 26 ] Perfluorooctanoic acid , a.k.a. PFOA or C8, is a synthetic chemical surfactant that is often used in the process of making non-stick cookware. PFOA is extremely bio-persistent, with a half-life of 8 years in humans. [ 27 ] PFOA can stay in the environment and the human body over long periods of time, and can have harmful effects on people exposed to high doses. [ 28 ] A worldwide study was conducted to compare “clean blood,” i.e., blood without C8, as a control with blood that contains C8, in order to illuminate the hazardous effects on humans. However, “clean blood” could not be found among participants, because 99% of people across the globe had derivatives of C8 in their blood. Instead, samples of preserved blood from American soldiers of the Korean War were used as the control. The blood was obtained in 1950, a year before Teflon was ever sold to the public. None of the preserved blood was found to contain C8, strongly suggesting that the worldwide use of Teflon is responsible for the near-total absence of “clean blood” today. [ 29 ] [ 2 ] PFOA has been used since 1951 by the chemical company DuPont to make Teflon , a non-stick cookware coating. Fortunately for consumers, it does not exist in the final Teflon product in amounts significant enough to cause noticeable harm upon normal use. [ 28 ] However, DuPont and 3M workers who handle PFOA, as well as people who live near the plants, are not as fortunate. Starting in the early 1950s, PFOA was released by DuPont into private wells and the Ohio River without disclosure to the public or the EPA. [ 30 ] Although both companies conducted independent studies demonstrating the harmful side effects of PFOA exposure, these results were hidden from the public as a result of the EPA’s self-reporting policy on chemical toxicology in manufacturing. [ 31 ] As a result of a class action lawsuit against DuPont, the C8 Science Panel conducted a survey, concluded in 2012, using blood samples from approximately 69,000 residents of regions with heightened PFOA levels to determine correlations between PFOA exposure and chronic illnesses. [ 30 ] Those surveyed had PFOA levels ranging from 0.2 to 22,412 μg/L, with a median exposure of 28.2 μg/L. [ 30 ] These levels were significantly higher than the levels detected in the general American population, which had a median exposure of 3.9 μg/L. [ 30 ] Results from the study concluded that PFOA exposure was linked to pancreatic cancer and testicular cancer , among other conditions, with possible correlations with kidney and prostate cancer . [ 30 ] Other chronic conditions included high cholesterol , thyroid disease , ulcerative colitis , pre-eclampsia , and hypertension . [ 32 ] In 1981, two babies of female workers were found to have eye-related defects. In 1986, Buck Bailey was born with a single nostril, a serrated eyelid, and a keyhole pupil , due to his mother being exposed to PFOA on a daily basis while she worked at DuPont.
[ 2 ] Known as a “forever chemical,” PFOA does not biodegrade naturally and thus poses a high risk of bioaccumulation in exposed populations if governmental regulators do not take action. [ 32 ] The conclusions from the C8 Panel were used to justify medical monitoring, at the expense of DuPont, for all residents affected by PFOA exposure; however, some claims remain disputed by the chemical giant. [ 31 ] It was not until the early 1990s that the toxic effects of PFOA became a public concern. Wilbur Tennant, a farmer who lived on his own private land near the DuPont plant in Parkersburg, West Virginia , videotaped the calamitous effects of PFOA on his cattle and local wildlife. Calves were born with black teeth and opaque eyes. Several cows and deer were found dead by the stream. [ 29 ] [ 2 ] As it turned out, DuPont was dumping large amounts of waste PFOA into local streams that fed into Parkersburg's town water supply. So much waste PFOA was dumped that DuPont quickly lost count. Eventually, children were noticed to have black teeth, much like the calves on Tennant's farm. [ 2 ] Wilbur Tennant filed a lawsuit with environmental lawyer Robert Bilott , who also started a class-action lawsuit with nearly 80,000 plaintiffs in the same year as a result of the widespread impacts of PFOA chemicals across six water districts polluted by DuPont. [ 31 ] The class-action lawsuit settled for $343 million in damages to residents, and DuPont was ordered to pay the costs of medical monitoring. [ 31 ] In 2003, 3M phased out C8 in favor of C4 in an attempt to avoid public backlash. They urged DuPont to do the same. Instead, DuPont seized the opportunity to become the sole manufacturer of C8 and increased production. [ 29 ] [ 2 ] This lasted until the EPA banned the production of C8 in 2013. DuPont soon substituted C8 with Gen-X, a chemical that has not yet been thoroughly researched or regulated. [ 29 ] In order to detach its name from the toxic reputation of PFOA, DuPont created the spin-off company Chemours to handle production and continued dumping of Gen-X. [ 2 ] After facing several class-action lawsuits, DuPont paid 43,000 residents of Ohio $400 each to participate in a study to determine whether C8 could be linked to any diseases. Participation in the study also required each person to waive their right to sue DuPont if no links could be made. The study lasted over seven years and grew to almost 70,000 participants, including West Virginia residents. The results were concluded in 2012. Six diseases were linked: “ testicular and kidney cancer , ulcerative colitis , thyroid disease , pre-eclampsia , and high cholesterol .” [ 2 ] Part of the reason it has been difficult for residents of Parkersburg, West Virginia to challenge DuPont is that the chemical company makes large contributions to the local economy, education system, and local government. There are even buildings and a street named after DuPont. [ 2 ] The struggles to regulate and determine the toxicity of synthetic chemicals are strongly reflected in the case of DuPont’s PFOA pollution. [ 31 ] Because the EPA only regulates chemicals that have been proven toxic and uses a self-reporting system, past uses of synthetic chemicals have gone unregulated until health risks are observed, as seen with the case of C8/PFOA. [ 32 ] This lack of communication between the EPA and potential polluters is one of the reasons that many policies aimed at widespread chemical regulation fail.
[ 32 ] Current EPA policies are looking to increase involvement in reporting to reduce these breaches; however, cuts to the EPA budget limit the feasibility of these goals without the resources to expand its workforce. [ 33 ] Legal fees, lawsuits, and governmental fines have been used to discourage companies from releasing untested chemicals into the environment; however, these are often insignificant when compared to the net worth of the company. [ 31 ] So long as the market exists for these products (as seen with the successes of Teflon ), companies are likely to continue valuing their profits over environmental and health concerns. [ 32 ] PFOA belongs to a group of chemicals that are still being studied. To date, the EPA has not established statutory clean-up levels for PFOA. However, the agency has established health advisory levels for these substances based on its assessment of the latest peer-reviewed science. This advisory is meant to provide states, tribal and local officials, and drinking water system operators with information on the health risks of these chemicals, in order to enable them to take appropriate measures to protect their communities. The EPA has established a health advisory level of 70 parts per trillion for PFOA in drinking water to provide a conservative margin of protection to the most sensitive populations, thus ensuring protection for everyone. [ 34 ] Recent settlements between the EPA and DuPont/Chemours have worked to improve the lives and environment of residents near the Washington Works plant in Parkersburg, WV by mandating that DuPont pay for clean-up efforts in the region under the Safe Drinking Water Act . [ 35 ] Unfortunately, existing technology is not able to fully remove PFOA, and thus the reparations so far include providing bottled water and installing filtration systems with partial removal abilities. [ 35 ]
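For readers more used to mass-concentration units, the 70 parts-per-trillion advisory level can be restated with a short conversion sketch; it assumes only the usual approximation that one litre of dilute drinking water has a mass of one kilogram.

```python
# Minimal sketch (assumes 1 L of dilute drinking water weighs ~1 kg):
# restating the 70 parts-per-trillion advisory level in ng/L and ug/L.
advisory_ppt = 70                    # parts per trillion, by mass
ng_per_litre = advisory_ppt          # 1 ppt of water ~ 1 ng per litre at 1 kg/L
ug_per_litre = ng_per_litre / 1000   # 70 ng/L ~ 0.07 micrograms per litre

print(f"{advisory_ppt} ppt ~ {ng_per_litre} ng/L ~ {ug_per_litre} ug/L")
```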
https://en.wikipedia.org/wiki/Gaps_in_regulation_of_chemical_agents
A garage kit (ガレージキット) or resin kit is an assembly scale model kit most commonly cast in polyurethane resin . [ 1 ] They are often model figures portraying humans or other living creatures. In Japan, kits often depict anime characters, and in the United States, depictions of movie monsters are common. However, kits can be produced depicting a wide range of subjects, from characters in horror , science fiction , fantasy films , television and comic books to nudes, pin-up girls and original works of art, as well as upgrade and conversion kits for existing models and airsoft guns. [ 2 ] Originally garage kits were amateur-produced, and the term originated with dedicated hobbyists using their garages as workshops. [ 3 ] Unable to find model kits of subjects they wanted on the market, they began producing kits of their own. As the market expanded, professional companies began making similar kits. [ 2 ] Sometimes a distinction is made between true garage kits, made by amateurs, and resin kits, manufactured professionally by companies. Because of the labor-intensive casting process, garage kits are usually produced in limited numbers and are more expensive than injection-molded plastic kits. The parts are glued together using cyanoacrylate or an epoxy cement and the completed figure is painted. Some figures are sold completed, but most commonly they are sold in parts for the buyer to assemble and finish. [ 4 ] Japanese garage kits are often anime figures depicting popular characters. [ 2 ] Another major subject is " Kaiju " monsters such as Godzilla , [ 5 ] and they may also include subjects such as mecha and science fiction spaceships. [ 6 ] Garage kits can be as simple as a one-piece figure, or as complex as kits with well over one hundred parts. Most commonly they are cast in polyurethane resin , but may also be fabricated of diverse substances such as soft vinyl , white metal (a type of lead alloy ) and fabric . Originally the kits were sold and traded between hobbyists at conventions like Wonder Festival . [ 5 ] As the market grew, a number of companies began producing resin kits professionally, such as Federation Models, Volks , WAVE/Be-J, Kaiyodo , Kotobukiya and B-Club, a subsidiary of Bandai producing Gundam kits ( Gunpla ). [ 2 ] The scale of figure kits varies, but as of 2008, 1/8 seems to be the predominant scale. Prior to 1990 the dominant scale was 1/6. This shift to a smaller scale coincided with rising material, labor, and licensing costs. [ citation needed ] Other scales, such as 1/3, 1/4, 1/6, and 1/7, also exist, but are less common. Larger kits (1/3, 1/4, etc.) generally command higher prices due to the greater amounts of material required to produce them. Japanese garage kits are usually cast as separate parts which are packed with instructions and sometimes photographs of the final product. Most professionally manufactured kits come in a box while amateur-produced kits sold at conventions come in plastic bags, blank boxes or even boxes with photocopied information sheets glued onto them. They are not painted, but some of them do have decals provided by the sculptor or circle. The builder then paints and assembles the model, ideally using an airbrush . However, they can also be painted with a regular brush using a variety of techniques to achieve effects similar to those of a conventional airbrush. [ 4 ] In the 1950s and 60s, Aurora and other companies produced cheap plastic models of movie monsters, comic book heroes, and movie and television characters.
This market has since disappeared, but through the 1980s an underground market grew through which enthusiasts could acquire the old plastic model kits. [ 5 ] In the early to mid-1980s, hobbyists began creating their own garage kits of movie monsters. There was a small but enthusiastic market for these new model kits. New figures were sculpted more accurately and with more detail than the old plastic model kits, and resin was poured into flexible molds taken from them to produce rigid reproductions. They were usually produced in limited numbers and sold primarily by mail order and at toy and hobby conventions. In the mid- to late 1980s the monster model kit hobby grew toward the mainstream. By the 1990s, model kits were produced in the US and the UK, as well as in Japan, and distributed through hobby and comic stores. There was an unprecedented variety of licensed model figure kits. In the late 1990s, model kit sales went down. Hobby and comic stores and their distributors began either carrying fewer garage kits or closing down, along with their producers. As of 2009, there are two American garage kit magazines , Kitbuilders Magazine and Amazing Figure Modeler , [ 7 ] and there are garage kit conventions held annually, like WonderFest USA in Louisville, Kentucky . [ 8 ] Garage kits are generally produced in small quantities, from the tens to a few hundred copies, compared to injection-molded plastic kits which are produced in many thousands. This is due to the labor-intensive nature of the manufacturing process and the relatively low market demand. Resin casting garage kit production is the most labor-intensive. The upside is that creating the initial mold is much less costly than in the injection-molding process. Vinyl garage kits are produced by using liquid vinyl Plastisol in a spin casting process known as slush molding . It is more complex than resin casting, but less expensive and less sophisticated than the injection molding used for most plastic products. It is not something that is commonly done in a basement or garage. The legality of amateur garage kits can be questionable as they are not always properly licensed . The model might be of a copyrighted character or design that was produced by fans because no official model exists. The relatively low initial investment and ease of resin casting mean that it is also easy to create recast copies of existing original kits. Recasts are produced by making molds of parts from original model kits and then doing recasts from the new molds. This can be done for personal use, such as modification of an existing kit, but unlicensed recast copies are sometimes sold unlawfully. In some cases the original kit is no longer available, but in others it is still in active production. The recasts can be of officially licensed model kits, but when they are of unlicensed kits the sculptor usually has a hard time pursuing litigation. Recasts produced in Thailand are usually of inferior quality; however, some recasters in Hong Kong rival the originals in quality and casting, and offer their copies at prices that undercut the original. Recast kits can be found on online auction sites , where they can be difficult to control due to cumbersome site policies and seller pseudonymity . Many recasters are in East Asia but can be found all over the globe.
[ 9 ] In an effort to legitimize amateur garage kit production and sales in Japan, it is not uncommon for a license holder to issue a 'single day license' ( ja:当日版権システム ), whereby, for one day only, a license is granted for the sale of amateur garage kits. These licensing agreements are typically negotiated between an event organizer (Wonder Festival, Character Hobby, Figure Mania, etc.) and various licensing entities for licenses to characters from specific TV shows and movies. Typically, the event organizer publishes a list of available licenses in advance, and sculptors intending to sell their sculptures then submit applications (including photos of their sculpture) for approval. Applications may be rejected.
https://en.wikipedia.org/wiki/Garage_kit
The garbage can model (also known as garbage can process , or garbage can theory ) describes the chaotic reality of organizational decision making in an organized anarchy . [ 2 ] The model originated in the 1972 seminal paper, A Garbage Can Model of Organizational Choice , written by Michael D. Cohen , James G. March , and Johan P. Olsen . [ 1 ] Organized anarchies are organizations , or decision situations (also known as choice opportunities), characterized by problematic preferences, unclear technology, and fluid participation. [ 1 ] While some organizations (such as public, educational, and illegitimate organizations) are more frequently characterized by these traits of organized anarchy, the traits can be partially descriptive of any organization, part of the time. [ 1 ] [ 3 ] Within this context of an organized-anarchy view of organizational decision making, the garbage can model symbolizes the choice-opportunity/decision-situation (for example: a meeting where ideas are discussed and decided on) as a "garbage can" into which participants chaotically dump problems and solutions as they are generated. The "garbage can" term's significance is best understood by considering the manner in which items in a trash can are organized, which is a messy, chaotic mix. The model portrays problems, solutions, and participants/decision-makers as three independent "streams" that are each generated separately, and flow disconnected from each other. These three streams only meet when the fourth stream of choice opportunity arises, as a garbage can, for the streams to flow into. The mix of garbage (streams) in a single can (choice opportunity) depends on the mix of cans available, on the labels attached to each can, and on what garbage is currently being generated. The mix of garbage in a single can also depends on the speed at which the garbage is collected and removed from the scene, for example, how long before problems, solutions, or participants move on to other choice opportunities, or how long the current choice opportunity remains available. [ 1 ] This anarchic view of decision making contrasts with traditional decision theory . Organized anarchies can be characterized by a sense of chaos and dynamism. Problems and solutions are loosely coupled. Proposed solutions change during bargaining. Not all participants involved get the chance to fully participate, and all have limitations on their time and energy. Many things happen at once, all competing with each other for attention. [ 2 ] Amongst the confusion, participants try to make sense of their role in the organization. [ 2 ] The behavioral theory of organized anarchy views organizations, or decision-situations/choice-opportunities, as generally characterized by the three properties of problematic preferences, unclear technology, and fluid participation (detailed below). [ 2 ] These properties of organized anarchy are characteristic of any organization in part, part of the time. [ 1 ] The organization has no clear preferences or guidelines. [ 1 ] It operates on the basis of a variety of inconsistent and ill-defined preferences, goals, and identities. [ 2 ] The organization can be described more accurately as a loose collection of ideas, rather than as a coherent structure. Organizations discover their preferences through action more than they act on the basis of preferences. [ 1 ] It is unclear which problems matter, and which do not.
[ 2 ] The organization's processes are not understood by the organization's own members. The organization operates based on trial and error procedures, learning from accidents of past experiences, and pragmatic inventions of necessity. [ 1 ] It is not clear what the consequences are for proposed solutions, or how to solve problems with solutions that lack evidence. [ 2 ] Participants vary in how much time and effort they commit to different domains. Participant involvement also varies, depending on the time. Consequently, the boundaries of the organization are continuously uncertain and changing. Audiences and decision makers for any type of choice change suddenly and unpredictably. [ 1 ] Organizations can be viewed as vehicles for solving problems, or structures where conflict is resolved through bargaining. However, organizations also provide procedures through which participants gain an understanding of what they are doing and what they have done. [ 1 ] Organizations, especially organized anarchies, may have difficulty creating their collective platform and identity. [ 2 ] In situations of ambiguity, decision making moves away from ideas of reality, causality, and intentionality, to thoughts of meaning. Therefore, decisions become seen as vehicles for constructing meaningful interpretations of fundamentally confusing worlds, instead of outcomes produced by comprehensible environments. [ 2 ] As the complexity of decision situations increases so that they more closely resemble reality, they become meaning generators instead of consequence generators. Organized anarchies need structures and processes that symbolically reinforce their espoused values, that provide opportunities for individuals to assert and confirm their status, and that allow people to understand to which of many competing claims on their attention they should respond. They require a means through which irrelevant problems and participants can be encouraged to seek alternative ways of expressing themselves so that decision-makers can do their jobs. They should also be able to "keep people busy, occasionally entertain them, give them a variety of experiences, keep them off the streets, provide pretexts of storytelling, and allow socializing" (Weick's The Social Psychology of Organizing , p. 264). Hence, organized anarchies can be understood as meaning makers: they give members reasons and identities for being present in the organization and for addressing many types of concerns, for example in meetings where the issues raised may or may not be relevant to the existing topic of discussion. [ 2 ] Within this perspective, an organization is a collection of choices seeking problems, issues and feelings seeking decision situations where they can be raised, solutions seeking issues they may be able to solve, and decision makers seeking out work. [ 1 ] Whereas the theory of organized anarchy provided a larger view to describe how organizations and decision situations function, the garbage can model focuses on how decisions get made within these organized anarchies. [ 2 ] [ 1 ] The model details what elements are involved in the decision-making process, how the outcomes are generated, and who/what is able to access this interaction. The garbage can model views decisions as outcomes of four independent streams (detailed below) within organizations.
Prior to the garbage can model, the foundational literature imagined the decision process very differently. [ 1 ] Problems arise from people both inside and outside of the organization, and for many different reasons, all consuming attention. Examples may include family, career, distribution of status and money, or even current events in the media. [ 1 ] These problems do not need to be real, or actually important, but only to be perceived as such by the decision makers. [ 2 ] Solutions are an individual's or a collective's product. Examples may include ideas, bills, programs, and operating procedures. [ 2 ] None of the solutions need to pertain to an existing problem. Instead, participants use the solutions generated to actively seek out problems that the solutions may be able to solve. [ 1 ] Participants have other demands on their time, and actively arrive at and leave the decision-making process. They may also have different preferences for different solutions. [ 1 ] Choice opportunities give organizations chances to act in ways that can be called decisions. These opportunities occur regularly, and organizations are able to determine moments for choice. Examples may include the signing of contracts, hiring and firing employees, spending money, and assigning tasks. [ 1 ] [ 2 ] The first three streams of problems, solutions, and participants flow into the fourth stream of choice opportunities, and mix based on chance, timing, and who happens to be present. [ 2 ] While the first three streams of problems, solutions, and participants meet in the stream of choice opportunity (for example, a choice to hire a new employee), the decision/choice arena is the larger domain where all four of these streams meet. [ 2 ] [ 1 ] This arena can be the type of organization (government, school, university) or the greater setting in which this interaction is occurring. For example, a board or committee may be a choice arena, while the committee's annual elections may be a choice opportunity. Choice opportunities may also move between different choice arenas, such as a decision being passed between committees, or departments. [ 2 ] The outcomes of how the four streams mix in a choice arena can vary. Sometimes decisions are made. Other times no decisions are made. Still other times, decisions are made, but do not address the problem that they were meant to solve. [ 2 ] [ 1 ] Resolution occurs when the choices taken resolve the problem that was being addressed. This success occurs when problems arise in choice opportunities, and the decision makers present have the energy/ability to properly address the problems' demands. [ 1 ] [ 2 ] Oversight occurs when a decision is taken before the problem reaches it. This happens when choice opportunities arrive and no problems are attached to them. This may be due to problems being attached to other choice arenas at the moment. If there is sufficient energy available to make a choice quickly, participants will make the choice and move on before the relevant problem arrives. [ 1 ] [ 2 ] Flight occurs when a decision is taken after the problem goes away. This happens when problems are attached to choice opportunities for a period of time and exceed the energy of their respective decision makers to stay focused on the problem. The original problem may then move to another choice arena.
Examples are tabling, or sending decisions to subcommittees, where the problems may not get attached to solutions. [ 2 ] [ 1 ] The Fortran model simulations, used in the original paper, found that, most often, decisions are not made to resolve problems. [ 1 ] Decision-making processes were found to be very sensitive to variations in energy and time. [ 1 ] Decision makers and problems were also found to seek each other out, and continue to find each other. [ 1 ] Three key aspects of the efficiency of the decision process are problem activity, problem latency, and decision time. [ 1 ] Problem activity is the amount of time unresolved problems are actively attached to choice situations. This is a rough measure of the potential for decision conflict in an organization. [ 1 ] Problem latency is the amount of time problems spend activated but not linked to choices. [ 1 ] Decision time is the persistence of choices. [ 1 ] Good organizational structures would be assumed to keep problem activity and problem latency low by quickly solving problems with choices. Notably, this result was not observed in the garbage can model. [ 1 ] The model's processes are very interactive, and some phenomena are dependent on specific combinations of other structures at play. Important problems were found more likely to be solved than unimportant ones, and important choices were less likely to solve problems than unimportant ones. [ 1 ] Access structures and deadlines provide limitations on what can enter into the garbage can model's processes. [ 1 ] [ 2 ] Access structures are the social boundaries that influence which persons, problems, and solutions are allowed access to the choice arena. [ 2 ] The loosest access structure, unrestricted/democratic access, allows all problems, solutions, and people to enter. Any active problem has access to any active choice. [ 1 ] This creates more energy, but also permits problems, solutions, and participants to interfere with each other. Conflict and time devoted to problems (anarchy) are increased. [ 2 ] An example could be an open forum, town hall, or general body meeting. Hierarchical access gives priority entry to important actors, problems, and solutions. Both choices and problems are arranged in a hierarchy so that important problems (having low numbers) have access to many choices, and important choices (also having low numbers) are accessible to only important problems. [ 1 ] An example could be making big decisions in an executive meeting/committee, while small decisions are left for the general population. [ 2 ] Specialized access happens when only special problems and solutions can gain entry to certain meetings. Specific specialists have access to specific choices that fit their expertise. [ 2 ] Each problem has access to only one choice and each choice is accessible to only two problems. [ 1 ] Hence, choices specialize in the types of problems that can be connected to them. [ 1 ] An example could be computer specialists in a technology committee addressing technical issues. Deadlines characterize temporal boundaries: the timing of decision arenas and of the flows that can access them. [ 2 ] Constraints include arrival times of problems (seasonal or weather issues, such as a heat wave, or a blizzard), solutions (time delayed, for example by 1 or 5 year plans), participants (determined by the timing of business days, school semesters, etc.), and choice opportunities (for example, meetings based on budget cycles, or student admissions).
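The interaction of the four streams can be illustrated with a toy simulation. The sketch below is a highly simplified, hypothetical re-imagining of the idea, not a reproduction of the original Fortran model; the parameters, the unrestricted access structure, and the energy bookkeeping are all assumptions chosen only so that the three decision outcomes can appear.

```python
# Minimal sketch (hypothetical parameters, not the original Fortran model):
# a toy garbage-can run in which problems flow into choice opportunities and
# each choice ends in resolution, oversight, or flight, depending on timing
# and on how much decision energy the participants can supply.
import random

random.seed(1)

choices = [{"id": c, "opened": c, "problems": [], "outcome": None} for c in range(4)]
incoming_problems = [{"id": p, "load": random.uniform(0.5, 2.0)} for p in range(6)]
energy_per_step = 1.5   # assumed decision energy available to each choice per step

for step in range(12):
    # one problem arrives per step and attaches to a random open choice (unrestricted access)
    open_choices = [c for c in choices if c["outcome"] is None and c["opened"] <= step]
    if incoming_problems and open_choices:
        random.choice(open_choices)["problems"].append(incoming_problems.pop())

    for c in open_choices:
        load = sum(p["load"] for p in c["problems"])
        age = step - c["opened"]
        if not c["problems"] and age >= 2:
            c["outcome"] = "oversight"      # decided before any problem reached it
        elif c["problems"] and load <= energy_per_step * (age + 1):
            c["outcome"] = "resolution"     # enough accumulated energy to solve the attached problems
        elif c["problems"] and age >= 6:
            c["problems"].clear()           # problems drift off to other arenas
            c["outcome"] = "flight"

print([(c["id"], c["outcome"]) for c in choices])
```

Varying the energy supply, the access rule, or the arrival times of problems changes which of the three outcomes dominates, which is the kind of sensitivity to energy and timing described above.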
Decisions arise from the constraints of access structures and deadlines interacting with the time-dependent flows of problems, solutions, and participants. [ 2 ] While still a doctoral student at the University of Bergen in Norway, Johan P. Olsen came to the University of California, Irvine as a visiting scholar from 1968 to 1969. At that time, James G. March was both the Dean of the School of Social Sciences (1964–1969), and a professor of psychology and sociology at the University of California, Irvine (1964–1970). Coinciding with the time of Olsen's visit, and March's last year serving as a dean, Michael D. Cohen was a doctoral student at the University of California, Irvine, and was just beginning his work as a research assistant to March. All three scholars were present at the right time to witness the university conduct a search process to hire a new dean. Ultimately, the search process ended with none of the potential candidates being chosen, and the head of the search committee taking the position of dean. During an interview, Olsen describes the chaotic decision-making process that he observed at the university throughout this search process, and how it served as a foundational experience for the three scholars to later collaborate and produce their model. [ 4 ] Olsen explains in this interview how topics previously considered to be important to the decision-making process, such as whether the actors were reasonable or rational, actually proved to be less important, and were instead trumped by issues such as the time constraints of the participants involved. An example provided was a professor being present in one meeting, only to be absent from the following meeting due to professional travel commitments, which can be common for university faculty. This prompted Olsen to consider a contextual model of decision making, one that examined the ability to make calculations and implement them, as opposed to models that focused on motivation. Olsen observed decision makers give each other head nods, and other non-verbal communication, in meetings, and noted the possible communication, or miscommunication, this may have entailed. Olsen also highlighted how the search committee's decision-making process was affected by misinterpreting the silence of the current dean (March) regarding applicants as a sign of lack of support, when in fact this was not an accurate interpretation of the dean's preferences. Olsen therefore gained an interest in examining collective, as opposed to individual, decision making, and how routines and chance may affect the decision-making process. [ 4 ] All of these factors would lead into the development of the garbage can model. By 1972, March, Cohen, and Olsen had all found their way from the University of California, Irvine to Stanford University , in the positions of professor, post-doctoral fellow, and visiting professor, respectively. That year, they published the seminal paper A Garbage Can Model of Organizational Choice. [ 1 ] In this paper, the authors used version 5 of the programming language Fortran to translate their ideas into a computer simulation model of a garbage can decision-making process. [ 1 ] The model enables choices to be made and problems resolved, even when an organization may be plagued by conflict, goal ambiguity , poorly understood problems that come and go, variable environments, and distracted decision makers.
[ 1 ] There are many situations where the garbage can process of decision making cannot be eliminated, and in some of these instances, such as research or family life, the garbage can process should not be eliminated. [ 1 ] Knowing the characteristics of an organizational anarchy and a garbage can model can help people to properly identify when and where these phenomena exist, and approach them strategically. Understanding how these decision arenas operate provides tools to successfully manage what could otherwise be a problematic decision-making process. Organized anarchies can be managed so as to turn the garbage can model to one's advantage. Three different management styles can be used, as detailed below. A reformer eliminates the chaotic garbage can elements from decisions. [ 2 ] This creates greater order and control, which centralizes and rationalizes the organization. [ 2 ] In contrast to the reformer, the enthusiast tries to discover a new vision of the decision making within garbage can processes. [ 2 ] The enthusiast realizes that the planning is in large part symbolic, and is an excuse for participants to interact and generate meaning. [ 2 ] It allows participants to feel a sense of belonging, and to learn about identities and views. [ 2 ] Once the enthusiast understands that the decision arena is more for sense-making and observation than for making decisions, temporal sorting can be used as a way to organize attention. The temporal order of topics presented can suggest what is of more concern for collective discussion. Flows of problems and solutions are viewed as a matching market, where energies and connections are mobilized. [ 2 ] Assessing who is present, and where time and energy are sufficient, allows the enthusiasts to advance their case most effectively. Characteristics of the garbage can model that were seen by others as disadvantages, such as flexible implementation, uncoordinated action, and confusion, are viewed as advantages by the enthusiast. [ 2 ] The pragmatist tries to exploit the anarchy inherent in the garbage can processes to further personal agendas. [ 2 ] Timing can be manipulated to have solutions arrive when attention is low. The meeting agenda can be arranged in a personally favorable order: items the pragmatist wants discussed are placed at the top, while items that need to be passed but not discussed are placed at the bottom, so that the decision can be rushed through when there is not enough time left for discussion. [ 2 ] The pragmatist pays attention to fluctuations in interests and participant involvement, so that when certain individuals are not present, it can be easier to advance issues and solutions that may have otherwise been opposed by different participants. [ 2 ] Initiatives that are entangled with other streams can be abandoned, and if an unfavorable topic arises, the system can be overloaded to protect the pragmatist's interests. [ 2 ] This can be accomplished by bringing up different problems and solutions, which will slow the decision-making process down and make it more complex. [ 2 ] Other choice opportunities (meetings) can also be proposed to lure problems and participants away from choices that are of interest, in the process gaining time for the pragmatists to address the issues of their concern. [ 2 ] The garbage can model can be especially helpful in explaining all types of meetings where problems and solutions are fluidly discussed.
[ 2 ] The model fits well with almost any decentralized social system attempting to address issues, and the model is continuously finding its way into new domains. [ 1 ] For example, across a sample of firms involved in hydrocarbon megaprojects, researchers found that problems given the most attention are different from those responsible for budget overruns, and that the attribution of reasons for these overruns differs between project owners and supply chain firms. [ 5 ] These inconsistencies are addressed by the garbage can model. Also, trade fairs have been found to be organizational forms that have permeable, fluid participation and are diversified and spontaneous in terms of individual goals and actions, once again displaying traits characteristic of the model. [ 6 ] Several fields, such as higher education, the policy-government world, and academic research, are discussed further below. The American college or university is, in a way, a prototypical organized anarchy. [ 7 ] Students constantly enter and leave the institution, and the faculty and staff working there for longer periods of time may have many competing demands on their attention and resources, such as course instruction, research, and conference travel. Different academic departments may prioritize different, and even competing, goals for the university. University senates, in particular, provide an opportunity to see the characteristics of organized anarchy and the garbage can model in action. [ 3 ] These senates largely serve symbolic meaning-making functions, allowing participants to express themselves through membership, commitment to professional values, and the maintenance of relationships. [ 3 ] Often, committees that report to the senate take so long to work on their issue, due to constraints on participant time, or difficulty matching problems with solutions, that by the time the committee produces anything, the issue has already passed on. [ 8 ] Hence, this provides an example of the garbage can model's decision outcome of flight, where decisions are taken after problems have already gone away. The university senate is known for this latency. [ 3 ] Government can be viewed as an organized anarchy. [ 9 ] The actors (politicians) change regularly with election cycles. There are multiple, often competing, preferences. Problems arise from current events, and can gain or lose focus based on media coverage. Policies may be proposed by think tanks or lobby groups, but these policies may not gain attention until the right situation arises that promotes their relevance. John W. Kingdon built on the ideas of organized anarchy to examine these dynamics in his "Multiple Streams Approach", adapted for the field of public policy. [ 9 ] Kingdon renamed some of the terms familiar in the garbage can model. Problems remained termed problems, but solutions were renamed policies, and participants were termed politics. These streams converge, or, as Kingdon says, couple, in the policy window (choice opportunity). Ambiguity, competition, an imperfect selection process, actors having limited time, and decision-making processes being neither "comprehensively rational" nor linear, are several key elements of the multiple streams approach that clearly reflect the general properties of organized anarchy. [ 10 ] The research process in the field of social sciences, particularly in psychology, can be interpreted as an organized anarchy.
[ 11 ] The academic field of psychology is much more a loose collection of ideas and theories than a coherent structure with a shared intellectual paradigm. Technologies used to conduct research may not be fully understood. Methods for analyzing data, or conducting research, are taken from other fields when the need arises. Participation in the research process is fluid, with some research being done by students, other research being done by professors who may publish one or a few articles and then not continue as researchers, and other research being done by people who make the research process their life-long profession. Joanne Martin recognized these characteristics of organized anarchy, and applied an adapted version of the garbage can model to the psychological research process. [ 11 ] Martin's model restyled the original model's four streams. Problems took the form of theoretical problems. Solutions were seen as the results of the research process. Choice opportunities were understood as the selection of which methodology to use for the research. Finally, the stream for participants was re-termed resources, to reflect that, unlike in organizational decision making, not only were actors required to move the decision/research process forward, but specific intellects and skill-sets could also be required, as well as financing, study subjects, and access to certain environments in which to conduct the research. The garbage can model of the psychological research process describes how and why some research topics may go unaddressed, certain theoretical problems may be linked with only a single methodological approach, researchers may continue to work on the same issues throughout their careers, some methods may be seldom applied, and how and why the field may appear to make little progress at times. [ 11 ] The garbage can model continues to appear in academic articles, textbooks, and the press, being applied across many diverse domains. Features of organized anarchy have increased in modern times, and many attempts have been made to contribute to the theoretical discourse of the garbage can model by extending it to include new components. For example, fluid participation, a key characteristic of organized anarchy, has greatly increased since the original model was formulated. [ 12 ] Some recent research has sought to contribute to the theoretical discourse of the model, by finding leadership style to be a key predictor of decision structure in organized anarchy. [ 13 ] Other recent research has found problems with the computer simulation model used in the original article by Cohen, March, and Olsen, suggesting that decision making styles have not been sufficiently analyzed. [ 14 ] In 2012, the volume The Garbage Can Model of Organizational Choice: Looking Forward at Forty was published, containing a collection of papers celebrating 40 years since the original article on the garbage can model was introduced. [ 15 ] The papers collected in the volume present theories of organizational decision processes that build on the original garbage can model, at times adding new ideas to create a hybrid extension of the original, and at other times perhaps violating the original model's core assumptions, thereby proposing alternatives to the existing model. Some of these papers attempt to attach elements of economic reasoning based on rational action assumptions onto the model.
[ 15 ] Many of the volume's chapters address the problem of agency, to which the garbage can model offered a solution based on a temporal, instead of a consequential, ordering of organizational events. [ 15 ] Some of the newer models that have been proposed make assumptions that return to a consequential view of decision making, and also assume that individual preferences may play a larger role in the process. The volume's papers collectively suggest that the next logical stage of evolution for the garbage can model may be to directly model the complex network dependencies linking participants, solutions, problems, and choice opportunities, or, more generally, social processes within organizations. [ 16 ] [ 15 ] Taken as a whole, the volume contributes to defining an intellectual agenda that may well extend far beyond the next forty years of organizational research. [ 15 ]
https://en.wikipedia.org/wiki/Garbage_can_model
In computer science, garbage in, garbage out (GIGO) is the concept that flawed, biased or poor-quality ("garbage") information or input produces a result or output of similar ("garbage") quality. The adage points to the need to improve data quality in, for example, programming. Rubbish in, rubbish out (RIRO) is an alternate wording. [ 1 ] [ 2 ] The principle applies to all logical argumentation: soundness implies validity, but validity does not imply soundness, so a valid argument built on false premises can still produce a false conclusion. The expression was popular in the early days of computing. The first known use is in a 1957 syndicated newspaper article about US Army mathematicians and their work with early computers, [ 3 ] in which an Army Specialist named William D. Mellin explained that computers cannot think for themselves, and that "sloppily programmed" inputs inevitably lead to incorrect outputs. The underlying principle was noted by the inventor of the first programmable computing device design: On two occasions I have been asked, "Pray, Mr. Babbage, if you put into the machine wrong figures, will the right answers come out?" ... I am not able rightly to apprehend the kind of confusion of ideas that could provoke such a question. More recently, the Marine Accident Investigation Branch came to a similar conclusion: A loading computer is an effective and useful tool for the safe running of a ship. However, its output can only be as accurate as the information entered into it. The term may have been derived from last-in, first-out (LIFO) or first-in, first-out (FIFO). [ 6 ] The phrase can be used as an explanation for the poor quality of a digitized audio or video file. Although digitizing can be the first step in cleaning up a signal, it does not, by itself, improve the quality. Defects in the original analog signal will be faithfully recorded, but may be identified and removed in a subsequent digital signal processing step. GIGO is also used to describe failures in human decision-making due to faulty, incomplete, or imprecise data. [ 7 ] In audiology, GIGO describes the process that occurs at the dorsal cochlear nucleus (DCN) when auditory neuropathy spectrum disorder is present. This occurs when the neural firing from the cochlea has become unsynchronized, resulting in a static-filled sound being input into the DCN and then passed up the chain to the auditory cortex. [ 8 ] The term was applied by Dan Schwartz at the 2012 Worldwide ANSD Conference in St. Petersburg, Florida, on 16 March 2012, and was adopted as industry jargon to describe the electrical signal received by the dorsal cochlear nucleus and passed up the auditory chain to the superior olivary complex on the way to the auditory cortex. GIGO was also the name of a Usenet gateway program to FidoNet, MAUSnet, and other networks. [ 9 ] The phrase may also be used in the context of machine learning, where poor-quality training data will inevitably lead to a poor-quality model.
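The machine-learning remark above lends itself to a toy illustration. The following sketch fits the same least-squares line once to clean measurements and once to corrupted ones; all data, the corruption pattern, and the variable names are fabricated purely for demonstration.

```python
import numpy as np

# Toy "garbage in, garbage out" demonstration: the same least-squares fit,
# applied once to clean measurements and once to measurements in which a
# sensor silently failed and logged zeros. All values here are fabricated.
rng = np.random.default_rng(42)
x = np.linspace(0, 10, 50)
y_clean = 2.0 * x + 1.0 + rng.normal(0.0, 0.3, x.size)   # good input: true slope is 2.0

y_garbage = y_clean.copy()
y_garbage[x > 7] = 0.0        # garbage in: the last readings were recorded as zeros

slope_clean = np.polyfit(x, y_clean, 1)[0]
slope_garbage = np.polyfit(x, y_garbage, 1)[0]

print(f"slope fitted from clean input:   {slope_clean:.2f}")    # close to 2.0
print(f"slope fitted from garbage input: {slope_garbage:.2f}")  # badly wrong: garbage out
```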
https://en.wikipedia.org/wiki/Garbage_in,_garbage_out
In evolutionary biology, the GARD (Graded Autocatalysis Replication Domain) model is a general kinetic model for the homeostatic growth and fission of compositional assemblies, with specific application to lipids. [ 1 ] In the context of abiogenesis, the lipid world hypothesis [ 2 ] suggests that assemblies of simple molecules, such as lipids, can store and propagate information, and thus undergo evolution. These "compositional assemblies" have been suggested to play a role in the origin of life. The idea is that the information transferred across generations is compositional information, that is, the different types and quantities of molecules within an assembly. This differs from the information encoded in RNA or DNA, which is the specific sequence of bases in such molecules. Thus, the model is viewed as an alternative or an ancestor to the RNA world hypothesis. The composition vector of an assembly is written as $v = (n_{1}, \ldots, n_{N_{G}})$, where $n_{i}$ is the molecular count of lipid type $i$ within the assembly and $N_{G}$ is the number of different lipid types (the repertoire size). The change in the count of molecule type $i$ is described by a rate equation in which $k_{f}$ and $k_{b}$ are the basal forward (joining) and backward (leaving) rate constants, $\beta_{ij}$ is a non-negative rate enhancement exerted by molecule type $j$ within the assembly on type $i$ from the environment, and $\rho$ is the environmental concentration of each molecule type. $\beta$ is viewed as a directed, weighted, complex network. The current size of the assembly is $N = \sum_{i=1}^{N_{G}} n_{i}$. The system is kept away from equilibrium by imposing a fission event once the assembly reaches a maximal size, $N_{\max}$, usually on the order of $N_{G}$. This splitting produces two progeny of equal size, one of which is grown again. The model is simulated with a Monte Carlo scheme based on the Gillespie algorithm. In 2010, Eörs Szathmáry and collaborators chose GARD as an archetypal metabolism-first realization. They introduced into the model a selection coefficient which increases or decreases the growth rate of assemblies depending on how similar or dissimilar they are to a given target. They found that the ranking of the assemblies is unaffected by the selection pressure and concluded that GARD does not exhibit Darwinian evolution. [ 3 ] In 2012, Doron Lancet and Omer Markovitch disputed this, arguing that the 2010 paper had two major drawbacks: (1) the authors focused on a general assembly rather than on a composome or compotype (a faithfully replicating assembly and a quasispecies, respectively); and (2) they performed only a single, random simulation to test selectability. [ 4 ] The quasispecies model describes a population of replicators that replicate with relatively high mutation rates. Due to mutations and back mutations, the population eventually centres around a master replicator (the master sequence). GARD populations were shown to form a quasispecies around a master compotype and to exhibit an error catastrophe, similarly to classical quasispecies such as RNA viruses. [ 5 ]
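A minimal simulation sketch may make the kinetics above concrete. It assumes the rate law commonly used in the GARD literature, in which both the joining rate $k_{f}\rho_{i}N$ and the leaving rate $k_{b}n_{i}$ are multiplied by the mutual-catalysis factor $1 + \frac{1}{N}\sum_{j}\beta_{ij}n_{j}$; the parameter values, the log-normal choice of $\beta$, and the random binomial fission are illustrative assumptions, not prescriptions of the model.

```python
import numpy as np

# Sketch of a GARD-style compositional-assembly simulation (Gillespie steps,
# growth to N_MAX, then fission). The rate law and all parameter values below
# are assumptions made for illustration, as noted in the text above.
rng = np.random.default_rng(0)

N_G = 10                 # repertoire size (number of lipid types)
N_MAX = 40               # assembly size that triggers fission
K_F, K_B = 0.01, 0.001   # basal forward (joining) / backward (leaving) rate constants
rho = np.full(N_G, 1.0)  # environmental concentration of each lipid type
beta = rng.lognormal(mean=-2.0, sigma=1.0, size=(N_G, N_G))  # catalytic network beta_ij

def gillespie_step(n):
    """Perform one event: a molecule of some type joins or leaves the assembly."""
    N = max(n.sum(), 1)
    enhancement = 1.0 + (beta @ n) / N          # mutual catalysis by current members
    join = K_F * rho * N * enhancement          # per-type joining rates
    leave = K_B * n * enhancement               # per-type leaving rates (zero if n_i = 0)
    rates = np.concatenate([join, leave])
    event = rng.choice(2 * N_G, p=rates / rates.sum())
    if event < N_G:
        n[event] += 1
    else:
        n[event - N_G] -= 1

def fission(n):
    """Split the assembly; keep one progeny (random binomial split of each count)."""
    return np.array([rng.binomial(c, 0.5) for c in n])

def similarity(a, b):
    """Cosine similarity between two composition vectors."""
    return float(a @ b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12)

n = np.zeros(N_G, dtype=int)
n[rng.integers(N_G)] = 1                        # seed the assembly with a single molecule
generations = []
while len(generations) < 50:
    gillespie_step(n)
    if n.sum() >= N_MAX:
        generations.append(n.copy())
        n = fission(n)

sims = [similarity(a, b) for a, b in zip(generations, generations[1:])]
print("mean compositional similarity between successive generations:",
      round(float(np.mean(sims)), 3))
```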
https://en.wikipedia.org/wiki/Gard_model
Gardasil is an HPV vaccine for use in the prevention of certain strains of human papillomavirus (HPV). [ 10 ] [ 7 ] [ 8 ] [ 9 ] [ 11 ] It was developed by Merck & Co. [ 12 ] High-risk human papilloma virus (hr-HPV) genital infection is the most common sexually transmitted infection among women. [ 13 ] The HPV strains that Gardasil protects against are sexually transmitted, [ 14 ] specifically HPV types 6, 11, 16 and 18. [ 15 ] [ 16 ] HPV types 16 and 18 cause an estimated 70% of cervical cancers , [ 17 ] [ 18 ] and are responsible for most HPV-induced anal , [ 19 ] vulvar , vaginal , [ 20 ] and penile cancer cases. [ 19 ] HPV types 6 and 11 cause an estimated 90% of genital warts cases. [ 21 ] HPV type 16 is responsible for almost 90% of HPV-positive oropharyngeal cancers , [ 22 ] and the prevalence is higher in males than females. [ 22 ] Though Gardasil does not treat existing infection, vaccination is still recommended for HPV-positive individuals, as it may protect against one or more different strains of the disease. [ 23 ] The vaccine was approved for medical use in the United States in 2006, [ 24 ] [ 25 ] initially for use in females aged 9–26. [ 26 ] In 2007, the Advisory Committee on Immunization Practices recommended Gardasil for routine vaccination of girls aged 11 and 12 years. [ 27 ] As of August 2009, vaccination was recommended for both males and females before adolescence and the beginning of potential sexual activity. [ 15 ] [ 6 ] [ 28 ] By 2011, the vaccine had been approved in 120 other countries. [ 29 ] In 2014, the US Food and Drug Administration (FDA) approved a nine-valent version, Gardasil 9, to protect against infection with the strains covered by the first generation of Gardasil as well as five other HPV strains responsible for 20% of cervical cancers (types 31, 33, 45, 52, and 58). [ 6 ] [ 30 ] [ 31 ] In 2018, the FDA approved expanded use of Gardasil 9 for individuals 27 to 45 years old. [ 32 ] Gardasil is available as Gardasil which protects against 4 types of HPV (6, 11, 16, 18) and Gardasil 9 which protects against an additional 5 types (31, 33, 45, 52, 58). [ 5 ] [ 6 ] [ 8 ] [ 9 ] [ 33 ] In the United States, Gardasil is indicated for: [ 34 ] In the European Union, Gardasil is indicated for active immunization of individuals from the age of nine years against the following HPV diseases: [ 8 ] Gardasil is a vaccine to prevent HPV, that, for maximum effect, is recommended for individuals prior to them becoming sexually active. [ 23 ] Moreover, evidence supports the conclusion that women who were already infected with one or more of the four HPV types targeted by the vaccine (HPV types 6, 11, 16, or 18) were protected from clinical disease caused by the remaining HPV types in the vaccine. [ 23 ] [ failed verification ] HPV types 16 and 18 cause an estimated 70% of cervical cancers , [ 17 ] [ 18 ] and are responsible for most HPV-induced anal cancers. [ 19 ] Gardasil also protects against vulvar and vaginal cancers caused by HPV types 16 and 18, [ 20 ] as well as most penile cancers caused by these two HPV types. [ 35 ] In addition, protection against HPV types 6 and 11 may eliminate up to 90% of the cases of genital warts . [ 21 ] Common plantar warts —e.g., caused by HPV types 1, 2, and 4 [ 36 ] —are not prevented by this vaccine. In 2010, Gardasil was approved by the FDA for prevention of anal cancer and associated precancerous lesions due to HPV types 6, 11, 16, and 18 in people aged 9 through 26 years. 
[ 37 ] HPV infections, especially HPV 16, contribute to some head and neck cancer (HPV is found in an estimated 26–35% of head and neck squamous cell carcinoma). [ 38 ] [ 39 ] In principle, HPV vaccines may help reduce incidence of such cancers caused by HPV, but this has not been demonstrated. [ 40 ] [ needs update ] In June 2020, the FDA approved the use of Gardasil for the prevention of head and neck cancers. [ 41 ] [ 42 ] The FDA approved Gardasil 9 for women and men aged 27 to 45 based on the vaccine being 88% effective against persistent HPV infections that cause certain types genital warts and cancers in females. Vaccine efficacy in males in this age group was inferred. [ 32 ] A 2020 longitudinal study tracking over 1.6 million Swedish girls and women over an eleven-year period found half as many cervical cancer cases in all women who had been vaccinated, and amongst women who had been vaccinated before the age of 17 a 78% reduction in cervical cancer, "a substantially reduced risk of invasive cervical cancer at the population level." [ 43 ] An alternative vaccine known as Cervarix protects against two oncogenic strains of HPV, 16 and 18. [ 44 ] The National Cancer Institute says, "To date, protection against the targeted HPV types has been found to last for at least 10 years with Gardasil, at least 9 years with Cervarix, and at least 6 years with Gardasil 9. Long-term studies of vaccine efficacy that are still in progress will help scientists better understand the total duration of protection." [ 45 ] Gardasil has been shown to be partially effective (approximately 38%) in preventing cervical cancer caused by ten other high-risk HPV types. [ 46 ] Antibody levels at month 3 (one month post-dose number two) are substantially higher than at month 24 (18 months post-dose number three), suggesting that protection is achieved by month 3 and perhaps earlier. [ 6 ] In 2014, the World Health Organization (WHO) recommended that countries offer the vaccine in a two dose schedule to girls aged under 15, with each dose at least six months apart. [ 21 ] [ 47 ] The United Kingdom, Switzerland , Mexico, and Quebec province of Canada are among the countries or territories that have implemented this as of June 2015 [update] . The CDC recommended the vaccines be delivered in two shots over six months. [ 48 ] Gardasil is also effective in males, providing protection against genital warts, anal warts, anal cancer , and some potentially precancerous lesions caused by some HPV types. [ 16 ] [ 49 ] [ 50 ] Gardasil vaccine has been shown to decrease the risk of young men contracting genital warts. [ 51 ] In the United States, the FDA approved administration of the Gardasil vaccine to males between ages 9 and 26 in 2009. [ 52 ] [ 53 ] The FDA approved administration of the Gardasil 9 vaccine to males between ages 9 and 15 in 2014, and extended the age indication, by including males between ages 16 and 26, in 2015. [ 54 ] [ 55 ] [ 56 ] In the UK, HPV vaccines are licensed for males aged 9 to 15 and for females aged 9 to 26. [ 57 ] Men who have sex with men (MSM) are particularly at risk for conditions associated with HPV types 6, 11, 16, and 18; diseases and cancers that have a higher incidence among MSM include anal intraepithelial neoplasias, anal cancers, and genital warts. 
HPV type 16 is also responsible for almost 90% of HPV-positive oropharyngeal squamous-cell carcinoma (OPSCC), [ 22 ] a form of cancer that affects the mouth, tonsils, and throat ; [ 22 ] [ 58 ] the prevalence of HPV-positive oropharyngeal cancers is higher in males than females. [ 22 ] A 2005 study found that 95% of HIV -infected gay men also had anal HPV infection, of whom 50% had precancerous HPV-caused lesions. [ 59 ] Gardasil is given in three injections over six months. The second injection is two months after the first, and the third injection is six months after the first shot was administered. [ 15 ] [ 6 ] Alternatively, in some countries it is given as two injections with at least six months between them, for individuals aged 9 years up to and including 13 years. [ 4 ] [ 60 ] As of April 2014 [update] , more than 170 million doses of Gardasil had been distributed worldwide. [ 61 ] The vaccine was tested in thousands of females (ages 9 to 26). [ 62 ] The US Food and Drug Administration (FDA) and the US Centers for Disease Control and Prevention (CDC) consider the vaccine to be safe. It does not contain mercury , thiomersal , live viruses or dead viruses, but virus-like particles, which cannot reproduce in the human body. [ 62 ] The vaccine has mostly minor side effects, such as pain around the injection area. [ 62 ] Fainting is more common among adolescents receiving the Gardasil vaccine than in other kinds of vaccinations. Patients should remain seated for 15 minutes after they receive the HPV vaccine. [ 6 ] There have been reports that the shot is more painful than other common vaccines, and the manufacturer Merck partly attributes this to the virus-like particles within the vaccine. [ 63 ] General side effects of the shot may include joint and muscle pain, fatigue, physical weakness and general malaise. [ 6 ] [ 64 ] The FDA and the CDC said that with millions of vaccinations "by chance alone some serious adverse effects and deaths" will occur in the time period following vaccination, but they have nothing to do with the vaccine. [ 65 ] More than twenty women who received the Gardasil vaccine have died, but these deaths have not been causally connected to the shot, as correlation does not imply causation . [ 65 ] Where information has been available, the cause of death was explained by other factors. [ 66 ] [ 67 ] Likewise, a small number of cases of Guillain–Barré syndrome (GBS) have been reported following vaccination with Gardasil, though there is no evidence linking GBS to the vaccine. [ 28 ] [ 68 ] [ 69 ] It is unknown why a person develops GBS, or what initiates the disease. [ 70 ] The FDA and the CDC monitor events to see if there are patterns, or more serious events than would be expected from chance alone. [ 66 ] The majority (68%) of side effects data were reported by the manufacturer, but in about 90% of the manufacturer reported events, no follow-up information was given that would be useful to investigate the event further. [ 71 ] In February 2009, the Spanish Ministry of Health suspended use of one batch of Gardasil after health authorities in the Valencia region reported that two girls had become ill after receiving the injection. Merck has stated that there was no evidence Gardasil was responsible for the two illnesses. [ 72 ] The following are the ingredients found in the different formulations of HPV vaccines: [ 73 ] The HPV major capsid protein, L1, can spontaneously self-assemble into virus-like particles (VLPs) that resemble authentic HPV virions . 
Gardasil contains recombinant VLPs assembled from the L1 proteins of HPV types 6, 11, 16 and 18. Since VLPs lack the viral DNA , they cannot induce cancer. They do, however, trigger an antibody response that protects vaccine recipients from becoming infected with the HPV types represented in the vaccine. The L1 proteins are produced by separate fermentations in recombinant Saccharomyces cerevisiae and self-assembled into VLPs. [ 74 ] The National Cancer Institute writes: Widespread HPV vaccination has the potential to reduce cervical cancer incidence around the world by as much as 90%. In addition, the vaccines may reduce the need for screening and subsequent medical care, biopsies, and invasive procedures associated with follow-up from abnormal cervical screening, thus helping to reduce health care costs and anxieties related to follow-up procedures. [ 45 ] Whether the effects are temporary or lifelong, widespread vaccination could have a substantial public health impact. As of 2018, studies have proven that cervical cancer rates have dropped significantly since the introduction of Gardasil. [ 75 ] Before Gardasil was introduced in 2006, 270,000 women died of cervical cancer worldwide in 2002. [ 76 ] As of 2014, the mortality rate from cervical cancer has dropped 50% from 1975 which is due to the Gardasil vaccination along with increased focus on cervical screening. [ 77 ] Acting FDA administrator Andrew von Eschenbach said the vaccine will have "a dramatic effect" on the health of women around the world. [ 78 ] Gardasil is an important tool in reducing cervical cancer rates even in countries where screening programs are routine. The National Cancer Institute estimated that 9,700 women would develop cervical cancer in 2006, and 3,700 would die. [ 79 ] Merck and CSL Limited are expected [ needs update ] to market Gardasil as a cancer vaccine, rather than an STD vaccine. In the early years of Gardasil's introduction it was unclear how widespread the use of the three-shot series would be, in part because of its $525 list price ($175 each for three shots). [ 80 ] But as of 2013, vaccine coverage has been rising. [ 75 ] In 2013, about 55% of girls ages 13–17 years had at least one dose of the vaccination covered, up from 29% in 2007. Coverage for women ages 18–34 also has increased significantly since 2007. [ 75 ] Studies using different pharmacoeconomic models predict that vaccinating young women with Gardasil in combination with screening programs may be more cost effective than screening alone. [ 81 ] These results have been important in decisions by many countries to start vaccination programs. [ 82 ] For example, the Canadian government approved $300 million to buy the HPV vaccine in 2008 after deciding from studies that the vaccine would be cost-effective especially by immunizing young women. [ 83 ] Marc Steben, an investigator for the vaccine, wrote that the financial burden of HPV related cancers on the Canadian people was already $300 million per year in 2005, so the vaccine could reduce this burden and be cost-effective. [ 84 ] Since penile and anal cancers are much less common than cervical cancer, HPV vaccination of young men is likely to be much less cost-effective than for young women yet is still recommended due to the existent risk (including oral cancer). [ 65 ] The August 2009 issue of the Journal of the American Medical Association had an article reiterating the safety of Gardasil [ 68 ] and another questioning the way it was presented to doctors and parents. 
The new vaccine against 4 types of human papillomavirus (HPV), Gardasil, like other immunizations appears to be a cost-effective intervention with the potential to enhance both adolescent health and the quality of their adult lives. However, the messages and the methods by which the vaccine was marketed present important challenges to physician practice and medical professionalism. By making the vaccine's target disease cervical cancer, the sexual transmission of HPV was minimized, the threat of cervical cancer to adolescents was maximized, and the subpopulations most at risk practically ignored. The vaccine manufacturer also provided educational grants to professional medical associations (PMAs) concerned with adolescent and women's health and oncology. The funding encouraged many PMAs to create educational programs and product-specific speakers' bureaus to promote vaccine use. However, much of the material did not address the full complexity of the issues surrounding the vaccine and did not provide balanced recommendations on risks and benefits. As important and appropriate as it is for PMAs to advocate for vaccination as a public good, their recommendations must be consistent with appropriate and cost-effective use. [ 85 ] According to the CDC, as of 2012, use of the HPV vaccine had cut rates of infection with HPV-6, -11, -16 and -18 in half in American teenagers (from 11.5% to 4.3%) and by one third in American women in their early twenties (from 18.5% to 12.1%). [ 86 ] Research findings that pioneered the development of the vaccine began in 1991 by investigators Jian Zhou and Ian Frazer in The University of Queensland , Australia. Researchers at UQ found a way to form non-infectious virus-like particles (VLP), which could also strongly activate the immune system. Subsequently, the vaccine was developed in parallel by researchers at Georgetown University Medical Center in America, the University of Rochester in America, the University of Queensland in Australia, and the US National Cancer Institute . [ 87 ] MedImmune , GSK , and Merck & Co. advanced these technologies and conducted clinical trials. [ 88 ] In December 2014, the FDA approved Gardasil 9, which protects against nine strains of HPV. [ 89 ] [ 90 ] A few conservative groups, such as the Family Research Council (FRC), have expressed their fears that vaccination with Gardasil might give girls a false sense of security regarding sex and lead to promiscuity, [ 78 ] [ 91 ] [ 92 ] [ 93 ] but no evidence exists to suggest that girls who were vaccinated went on to engage in more sexual activity than unvaccinated girls. [ 94 ] Merck, the manufacturer of the vaccine, has lobbied that state governments make vaccination with Gardasil mandatory for school attendance, [ 95 ] which has upset some conservative and libertarian groups. [ 78 ] [ 96 ] [ 91 ] The governor of Texas, Rick Perry , issued an executive order adding Gardasil to the state's required vaccination list, which was later overturned by the Texas legislature. Even though Perry also allowed parents to opt out of the program more easily, Perry's order was criticized, by fellow presidential candidates Rick Santorum and Michele Bachmann during the 2012 Republican Party presidential debate as being an overreach of state power in a decision properly left to parents. [ 97 ] Canada's National Advisory Committee on Immunization recommended HPV vaccination in 2007 for women and in 2012 for men. The Gardasil vaccine has been available free of charge to girls 12-17 in Canada since 2010. 
[ 98 ] In June 2013, the Japanese government issued a notice that "cervical cancer vaccinations should no longer be recommended for girls aged 12 to 16" while an investigation is conducted into certain adverse events including pain and numbness in 38 girls. [ 99 ] The vaccines sold in Japan are Cervarix, made by GSK plc (formerly GlaxoSmithKline) of the United Kingdom, and Gardasil, made by Merck Sharp & Dohme. An estimated 3.28 million people have received the vaccination; 1,968 cases of possible side effects have been reported. [ 100 ] In January 2014, the Vaccine Adverse Reactions Review Committee concluded that there was no evidence to suggest a causal association between the HPV vaccine and the reported adverse events, but did not reinstate proactive recommendations for its use. [ 101 ] A study on girls in Sapporo showed that since the Japanese government's suspension of recommending the vaccine, completion rates for the full course of vaccination have dropped to 0.6%. [ 101 ] On 26 November 2021, the Ministry of Health, Labour, and Welfare of Japan officially issued an announcement to resume active recommendations of the HPV vaccine after 8.5 years of suspension and municipalities are expected to restart such active recommendations from April 2022. [ 102 ]
https://en.wikipedia.org/wiki/Gardasil
In condensed matter physics , the Gardner transition refers to a temperature induced transition in which the free energy basin of a disordered system divides into many marginally stable sub-basins. [ 1 ] [ 2 ] It is named after Elizabeth Gardner who first described it in 1985. [ 1 ]
https://en.wikipedia.org/wiki/Gardner_transition
The Gardner–Salinas braille codes are a proposed method of encoding mathematical and scientific notation linearly using braille cells for tactile reading by the visually impaired. The most common form of Gardner–Salinas braille is the 8-cell variety, commonly called GS8 . There is also a corresponding 6-cell form called GS6 . [ 1 ] The codes were developed as a replacement for Nemeth Braille by John A. Gardner , a physicist at Oregon State University , and Norberto Salinas, an Argentinian mathematician. However, 15 years later Nemeth code was still the standard, with no further change as of 2024 [update] . [ 2 ] The Gardner–Salinas braille codes are an example of a compact human-readable markup language . The syntax is based on the LaTeX system for scientific typesetting. [ citation needed ] The set of lower-case letters, the period, comma, semicolon, colon, exclamation mark, apostrophe, and opening and closing double quotes are the same as in Grade-2 English Braille . [ 1 ] Apart from 0, this is the same as the Antoine notation used in French and Luxembourgish Braille . Sources disagree on 0. Both claimed forms are presented above. The second is the ISO form. Note however that ISO is concerned only with a one-to-one assignment between 8-dot braille and ASCII , and so has no particular connection to Gardner–Salinas braille. GS8 upper-case letters are indicated by the same cell as standard English braille (and GS8) lower-case letters, with dot #7 added. Compare Luxembourgish Braille . Dot 8 is added to the letter forms of International Greek Braille to derive Greek letters: The single quotation marks are the ASCII back tick ` and apostrophe ' . * Encodes the fraction-slash for the single adjacent digits/letters as numerator and denominator. * Used for any > 1 digit radicand. ** Used for markup to represent inkprint text. The difference between the two underline markers is not explained.
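A small sketch can illustrate the dot-7 convention described above. The four lower-case patterns included are standard English braille; the function names and the Unicode rendering step are illustrative additions, not part of the GS8 specification.

```python
# Sketch: GS8 cells represented as sets of raised-dot numbers. The lower-case
# patterns below (a, b, c, d) are standard English braille; per the article,
# a GS8 upper-case letter uses the same cell with dot 7 added. Function names
# and the Unicode rendering are illustrative, not part of the GS8 spec.
LOWER = {"a": {1}, "b": {1, 2}, "c": {1, 4}, "d": {1, 4, 5}}

# Unicode braille patterns start at U+2800; dots 1-8 map to these bit weights.
DOT_BITS = {1: 0x01, 2: 0x02, 3: 0x04, 4: 0x08, 5: 0x10, 6: 0x20, 7: 0x40, 8: 0x80}

def gs8_cell(letter):
    """Return the GS8 dot set for a letter: upper case adds dot 7 to the lower-case cell."""
    base = LOWER[letter.lower()]
    return base | {7} if letter.isupper() else base

def to_unicode(dots):
    """Render a set of dot numbers as a Unicode braille character."""
    return chr(0x2800 + sum(DOT_BITS[d] for d in dots))

for ch in "aAbBcC":
    cell = gs8_cell(ch)
    print(ch, sorted(cell), to_unicode(cell))
```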
https://en.wikipedia.org/wiki/Gardner–Salinas_braille_codes
A Gardon gauge or circular-foil gauge is a heat flux sensor primarily intended for the measurement of high-intensity radiation. It is designed to measure the radiation flux density (in watts per square metre) from a field of view of 180 degrees. The most common application of Gardon gauges is in exposure testing of sample materials for their resistance to fire and flames. The gauge is named after its originator, Robert Gardon. [ 1 ] While heat flux sensors can be made according to various designs, the sensor of a Gardon gauge consists of a foil connected to the sensor body at its outer radius and to a thin wire at the center. The foil center and edge form the hot and cold junctions of a thermocouple, respectively; when radiation hits the sensor, the resulting temperature difference generates a signal. The gauge is typically water-cooled and does not require any power to operate. A so-called Schmidt-Boelter gauge has the same outward appearance as a Gardon gauge, but employs different sensor technology: it has a plated constantan wire wrapped around an insulating chip. [ 2 ] Both are heat flux sensors, and the difference between them is practical. Gardon gauges can be manufactured so that they withstand extremely high flux levels, whereas the range for Schmidt-Boelter technology is more limited; on the other hand, Schmidt-Boelter technology can reach higher sensitivities with a shorter response time. A high-intensity radiation spectrum extends approximately from 300 to 2,800 nm, and Gardon gauges usually cover that spectrum with a spectral sensitivity that is as "flat" as possible. For a flux density or irradiance measurement it is required by definition that the response to "beam" radiation varies with the cosine of the angle of incidence: full response when the radiation hits the sensor perpendicularly (normal to the surface, 0 degrees angle of incidence), zero response when the radiation is at the horizon (90 degrees angle of incidence, 90 degrees zenith angle), and 0.5 at 60 degrees angle of incidence. It follows from this definition that a Gardon gauge should have a so-called "directional response" or "cosine response" that is close to the ideal cosine characteristic. To attain the proper directional and spectral characteristics, the gauge's main components work together as follows: the black coating on the thermopile sensor absorbs the radiation, which is converted to heat; the heat flows through the sensor to the sensor housing and from the housing to the cooling water; and the thermopile sensor generates a voltage output signal that is proportional to the heat flux. Gardon gauges are frequently used in fire testing, typically installed vertically and next to the sample under test. Gardon and Schmidt-Boelter gauges are unprotected heat flux sensors and are therefore highly sensitive to local convection, which users should take into account. Gardon gauges are standardised according to ASTM standards.
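The two quantitative relations described above, the ideal cosine directional response and the proportionality between output voltage and heat flux, can be sketched as follows. The sensitivity value and function names are illustrative assumptions, not taken from any particular gauge or datasheet.

```python
import math

def ideal_directional_response(angle_deg):
    """Ideal cosine response: 1 at normal incidence, 0.5 at 60 degrees, 0 at 90 degrees."""
    return max(math.cos(math.radians(angle_deg)), 0.0)

def flux_from_voltage(v_out, sensitivity_v_per_w_m2=10e-6):
    """Convert output voltage to heat flux (W/m^2), assuming output is proportional to flux.
    The 10 microvolt per W/m^2 sensitivity is a made-up illustrative value."""
    return v_out / sensitivity_v_per_w_m2

for angle in (0, 30, 60, 90):
    print(f"{angle:2d} deg -> relative response {ideal_directional_response(angle):.3f}")

print(f"0.5 mV output -> {flux_from_voltage(0.5e-3):.0f} W/m^2 at the assumed sensitivity")
```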
https://en.wikipedia.org/wiki/Gardon_gauge
Garfield's proof of the Pythagorean theorem is an original proof of the Pythagorean theorem discovered by James A. Garfield (November 19, 1831 – September 19, 1881), the 20th president of the United States. The proof appeared in print in the New-England Journal of Education (Vol. 3, No. 14, April 1, 1876). [ 1 ] [ 2 ] At the time of its publication Garfield was a Congressman from Ohio. He assumed the office of President on March 4, 1881, and served only briefly, until his death on September 19, 1881, from the wounds he suffered when he was shot the previous July. [ 3 ] Garfield was the only President of the United States to have contributed anything original to mathematics. The proof is nontrivial and, according to the historian of mathematics William Dunham, "Garfield's is really a very clever proof." [ 4 ] The proof appears as the 231st proof in The Pythagorean Proposition, a compendium of 370 different proofs of the Pythagorean theorem. [ 5 ] In the figure, $ABC$ is a right-angled triangle with the right angle at $C$. The side lengths of the triangle are $a, b, c$, and the Pythagorean theorem asserts that $c^{2} = a^{2} + b^{2}$. To prove the theorem, Garfield drew a line through $B$ perpendicular to $AB$ and on this line chose a point $D$ such that $BD = BA$. Then, from $D$, he dropped a perpendicular $DE$ onto the extension of the line $CB$. From the figure, one can see that the triangles $ABC$ and $BDE$ are congruent. Since $AC$ and $DE$ are both perpendicular to $CE$, they are parallel, and so the quadrilateral $ACED$ is a trapezoid. The theorem is proved by computing the area of this trapezoid in two different ways: once with the trapezoid area formula, and once as the sum of the areas of the three right triangles into which it decomposes. Equating the two expressions and simplifying yields $c^{2} = a^{2} + b^{2}$; the computation is written out below.
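The two area computations can be written out explicitly. The following is the standard reconstruction of Garfield's calculation, using the congruence of triangles $ABC$ and $BDE$ (so $DE = BC = a$ and $BE = AC = b$) and the right angle at $B$ in triangle $ABD$:

```latex
% Trapezoid ACED has parallel sides AC = b and ED = a, separated by CE = a + b.
\text{Area}(ACED) = \tfrac{1}{2}\,(a+b)(a+b)
% The same region is the union of the right triangles ABC and BDE (legs a, b)
% and ABD (an isosceles right triangle with legs AB = BD = c).
\text{Area}(ACED) = \tfrac{1}{2}ab + \tfrac{1}{2}ab + \tfrac{1}{2}c^{2}
% Equating the two expressions and multiplying by 2:
(a+b)^{2} = 2ab + c^{2}
\quad\Longrightarrow\quad a^{2} + 2ab + b^{2} = 2ab + c^{2}
\quad\Longrightarrow\quad a^{2} + b^{2} = c^{2}
```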
https://en.wikipedia.org/wiki/Garfield's_proof_of_the_Pythagorean_theorem