A hexagonal phase of lyotropic liquid crystal is formed by some amphiphilic molecules when they are mixed with water or another polar solvent. In this phase, the amphiphile molecules are aggregated into cylindrical structures of indefinite length and these cylindrical aggregates are disposed on a hexagonal lattice, giving the phase long-range orientational order.
In normal-topology hexagonal phases, which are formed by type I amphiphiles, the hydrocarbon chains are contained within the cylindrical aggregates, such that the polar–apolar interface has a positive mean curvature. Inverse-topology hexagonal phases have water within the cylindrical aggregates, and the hydrocarbon chains fill the voids between the hexagonally packed cylinders. Normal-topology hexagonal phases are denoted H I , while inverse-topology hexagonal phases are denoted H II . When viewed by polarization microscopy , thin films of both normal- and inverse-topology hexagonal phases exhibit birefringence , giving rise to characteristic optical textures that are typically smoke-like, fan-like or mosaic in appearance. The phases are highly viscous, and small air bubbles trapped within the preparation take on highly distorted shapes. The sizes and shapes of the lamellar, micellar and hexagonal phases that arise in lipid bilayer phase behavior and mixed lipid polymorphism in aqueous dispersions can also be identified and characterized by negative-staining transmission electron microscopy . [ 1 ]
| https://en.wikipedia.org/wiki/Hexagonal_phase |
Hexahydroxybenzene triscarbonate is a chemical compound , an oxide of carbon with formula C 9 O 9 . Its molecular structure consists of a benzene core with the six hydrogen atoms replaced by three carbonate groups . It can be seen as a sixfold ester of hexahydroxybenzene (benzenehexol) and carbonic acid .
The compound was obtained by C. Nallaiah in 1984, as a tetrahydrofuran solvate . [ 1 ]
| https://en.wikipedia.org/wiki/Hexahydroxybenzene_triscarbonate |
Hexahydroxybenzene trisoxalate is a chemical compound , an oxide of carbon with formula C 12 O 12 . Its molecule consists of a benzene core with the six hydrogen atoms replaced by three oxalate groups . It can be seen as a sixfold ester of benzenehexol and oxalic acid .
The compound was first described by H. S. Verter and R. Dominic in 1967. [ 1 ]
| https://en.wikipedia.org/wiki/Hexahydroxybenzene_trisoxalate |
Hexamethylene triperoxide diamine ( HMTD ) is a high explosive organic compound . HMTD is an organic peroxide , a heterocyclic compound with a cage-like structure. It is a primary explosive . It was considered as an initiating explosive for blasting caps in the early part of the 20th century, mostly because of its high initiating power (higher than that of mercury fulminate ) and its inexpensive production. As such, it was quickly taken up as a primary explosive in mining applications. [ 1 ] However, it has since been superseded by more chemically stable compounds such as dextrinated lead azide and DDNP (which contains no lead or mercury). HMTD is widely used in amateur-made blasting caps.
First synthesised in 1885 by the German chemist Ludwig Legler, [ 2 ] HMTD may be prepared by the reaction of an aqueous solution of hydrogen peroxide and hexamine in the presence of an acid catalyst , such as citric acid , acetic acid or dilute sulfuric acid . The hydrogen peroxide needs to be at least 12% w/w concentration , as lower concentrations lead to poor yields. Citric acid is overall superior to other acids, providing a yield of up to about 50%.
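A balanced overall equation consistent with the reagents above and with HMTD's molecular formula C 6 H 12 N 2 O 6 (offered as a stoichiometric sketch rather than a quoted mechanism) is:

C 6 H 12 N 4 + 3 H 2 O 2 → C 6 H 12 N 2 O 6 + 2 NH 3

The acid catalyst presumably also serves to bind the ammonia liberated in the reaction.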
The molecule adopts a cage-like structure with the nitrogen atoms having an unusual trigonal planar geometry. [ 3 ]
Like other organic peroxides, such as acetone peroxide (TATP), HMTD is unstable and detonated by shock, friction, static electricity discharges, concentrated sulfuric acid, strong UV radiation and heat. Cases of detonation caused by the simple act of screwing a lid on a jar containing HMTD have been reported. [ 4 ] Common static electricity discharges have been reported to cause detonation. [ 5 ] It is, however, less unstable than many other peroxides under normal conditions; exposure to ultraviolet light increases its sensitivity. It also reacts with most common metals, which can lead to detonation . HMTD is chemically very stable when pure (free of acids, bases, and metal ions) and does not quickly sublime like its acetone counterparts.
HMTD is a more powerful initiating explosive than mercury fulminate, but its poor thermal and chemical stability prevents its use in detonators . [ 6 ] Nevertheless, HMTD is one of the three most widely used primary explosives in improvised, amateur made blasting caps, the others being TATP and silver acetylide .
HMTD is a common source of injury, particularly finger amputations, among amateur chemists. Most of these injuries are caused by small amounts of HMTD that inadvertently detonate in close proximity to the fingers; small amounts (grams) are generally not powerful enough to amputate fingers from distances greater than 5–10 cm. [ 7 ]
Calculated (Explo5) detonation pressure P cj at crystal density 1.597 g/cm 3 is 218 kbar with velocity of detonation VoD = 7777 m/s. Explosion temperature is 3141 K, energy of explosion is 5612 kJ/kg (or 3400 - 4000 kJ/kg per various sources) and volume of explosion gases at STP is calculated to be 826 L/kg. Loose powder has density close to 0.4 g/cm 3 , hence the common detonation velocities are closer to 3000 m/s and P cj is closer to 15 kbar. [ 8 ]
HMTD is overall slightly more sensitive than fresh TATP and can be considered slightly more dangerous than an average primary explosive. The variance of the friction force between different surfaces (e.g. different kinds of paper) is often greater than the variance between the friction sensitivities of a given pair of primary explosives, which leads to different values of friction sensitivity being measured at different laboratories.
Despite no longer being used in any military application, and despite its shock sensitivity, HMTD remains a common home-made explosive and has been used in a large number of suicide bombings and other attacks throughout the world. For example, it was one of the components in the explosives intended to bomb Los Angeles International Airport in the 2000 millennium attack plots [ 9 ] [ 10 ] and the 2016 New York and New Jersey bombings , [ 11 ] as well as one of the components of the explosives attempted to be made by the neo-Nazi terrorist organization Atomwaffen Division in the United States. [ 12 ] | https://en.wikipedia.org/wiki/Hexamethylene_triperoxide_diamine |
Hexamethylenetetramine ( HMTA ), also known as 1,3,5,7-tetraazaadamantane , is a heterocyclic organic compound with diverse applications. [ 2 ] [ 3 ] It has the chemical formula (CH 2 ) 6 N 4 and is a white crystalline compound that is highly soluble in water and polar organic solvents. It is useful in the synthesis of other organic compounds , including plastics , pharmaceuticals , and rubber additives. [ 2 ] [ 3 ] The compound is also used medically for certain conditions. [ 4 ] [ 5 ] It sublimes in vacuum at 280 °C. It has a tetrahedral cage-like structure similar to adamantane . [ 3 ] The four vertices are occupied by nitrogen atoms, which are linked by methylene groups . Although the molecular shape defines a cage, no void space is available at the interior.
Hexamethylenetetramine was discovered by Aleksandr Butlerov in 1859. [ 6 ] [ 7 ] It is prepared industrially by combining formaldehyde and ammonia : [ 8 ]

6 CH 2 O + 4 NH 3 → (CH 2 ) 6 N 4 + 6 H 2 O
The molecule behaves like an amine base, undergoing protonation, and can also act as a ligand . [ 9 ] N -alkylation with chloroallyl chloride gives quaternium-15 .
The dominant use of hexamethylenetetramine is in the production of solid (powder) or liquid phenolic resins and phenolic resin moulding compounds, in which it is added as a hardening component. These products are used as binders, e.g., in brake and clutch linings, abrasives, non-woven textiles, formed parts produced by moulding processes, and fireproof materials. [ 8 ]
The compound is also used medically as a urinary antiseptic and antibacterial medication under the name methenamine or hexamine . [ 4 ] [ 10 ] [ 11 ] [ 12 ] It is used as an alternative to antibiotics to prevent urinary tract infections (UTIs) and is sold under the brand names Hiprex , Urex , and Urotropin , among others. [ 4 ] [ 10 ] [ 12 ]
As the mandelic acid salt (methenamine mandelate) or the hippuric acid salt (methenamine hippurate), [ 13 ] it is used for the treatment of urinary tract infections . In an acidic environment, methenamine is believed to act as an antimicrobial by converting to formaldehyde . [ 13 ] [ 14 ] A systematic review of its use for this purpose in adult women found there was insufficient evidence of benefit and further research was needed. [ 15 ] A UK study showed that methenamine is as effective as daily low-dose antibiotics at preventing UTIs among women who experience recurrent UTIs. As methenamine is an antiseptic, it may avoid the issue of antibiotic resistance. [ 16 ] [ 17 ]
Methenamine acts as an over-the-counter antiperspirant due to the astringent property of formaldehyde. [ 18 ] Specifically, methenamine is used to minimize perspiration in the sockets of prosthetic devices . [ 19 ]
Methenamine silver stains are used for staining in histology ; well-known examples include Grocott's methenamine silver stain, used to detect fungi, and Jones' stain, used for basement membranes.
Together with 1,3,5-trioxane , hexamethylenetetramine is a component of hexamine fuel tablets used by campers, hobbyists, the military and relief organizations for heating camping food or military rations. It burns smokelessly, has a high energy density of 30.0 megajoules per kilogram (MJ/kg), does not liquefy while burning, and leaves no ash, although its fumes are toxic.
Standardized 0.149 g tablets of methenamine (hexamine) are used by fire-protection laboratories as a clean and reproducible fire source to test the flammability of carpets and rugs. [ 20 ]
Hexamethylenetetramine or hexamine is also used as a food additive as a preservative ( INS number 239). It is approved for this purpose in the EU, [ 21 ] where it is listed under E number E239; however, it is not approved in the USA, Russia, Australia, or New Zealand. [ 22 ]
Hexamethylenetetramine is a versatile reagent in organic synthesis . [ 23 ] It is used in the Duff reaction (formylation of arenes), [ 24 ] the Sommelet reaction (converting benzyl halides to aldehydes), [ 25 ] and the Delépine reaction (synthesis of amines from alkyl halides). [ 26 ]
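As an illustration of the last of these, the Delépine reaction is commonly summarized by the idealized two-step stoichiometry below (a textbook-style sketch, not quoted from this article): the alkyl halide first quaternizes one nitrogen of the hexamine cage, and acidic hydrolysis then liberates the primary amine as its salt.

R–X + (CH 2 ) 6 N 4 → [R–(CH 2 ) 6 N 4 ] + X −

[R–(CH 2 ) 6 N 4 ]X + 3 HCl + 6 H 2 O → R–NH 2 ·HX + 3 NH 4 Cl + 6 CH 2 O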
Hexamethylenetetramine is the base component to produce RDX and, consequently, C-4 [ 8 ] as well as octogen (a co-product with RDX), hexamine dinitrate, hexamine diperchlorate, HMTD , and R-salt .
From October 2023, sale of hexamethylenetetramine in the UK is restricted to licensed persons (as a "regulated precursor" under the terms of the Poisons Act 1972 ). [ 27 ]
Hexamethylenetetramine is also used in pyrotechnics to reduce combustion temperatures and decrease the color intensity of various fireworks. [ 28 ] Because of its ash-free combustion, hexamethylenetetramine is also utilized in indoor fireworks alongside magnesium and lithium salts . [ 29 ] [ 30 ]
Hexamethylenetetramine was first introduced into the medical setting in 1895 as a urinary antiseptic . [ 31 ] It was officially approved by the FDA for medical use in the United States in 1967. [ 32 ] However, it was only used in cases of acidic urine, whereas boric acid was used to treat urinary tract infections with alkaline urine. [ 33 ] The scientist De Eds found that there was a direct correlation between the acidity of hexamethylenetetramine's environment and the rate of its decomposition. [ 34 ] Therefore, its effectiveness as a drug depended greatly on the acidity of the urine rather than on the amount of the drug administered. [ 33 ] In an alkaline environment, hexamethylenetetramine was found to be almost completely inactive. [ 33 ]
Hexamethylenetetramine was also used as a method of treatment for soldiers exposed to phosgene in World War I . Subsequent studies have shown that large doses of hexamethylenetetramine provide some protection if taken before phosgene exposure but none if taken afterwards. [ 35 ]
Since 1990 the number of European producers has been declining. The French SNPE factory closed in 1990; in 1993, the production of hexamethylenetetramine in Leuna , Germany ceased; in 1996, the Italian facility of Agrolinz closed down; in 2001, the UK producer Borden closed; in 2006, production at Chemko, Slovak Republic, was closed. Remaining producers include INEOS in Germany, Caldic in the Netherlands, and Hexion in Italy. In the US, Eli Lilly and Company stopped producing methenamine tablets in 2002. [ 20 ] In Australia, hexamine fuel tablets are made by Thales Australia Ltd. In Mexico, hexamine is produced by Abiya. Other countries that still produce it include Russia, Saudi Arabia, and China.
In 2020, NASA announced that hexamethylenetetramine had been found in the Murchison , Murray and Tagish Lake meteorites. [ 36 ] [ 37 ] | https://en.wikipedia.org/wiki/Hexamethylenetetramine |
This page provides supplementary chemical data on n -hexane .
Handling this chemical may require notable safety precautions. It is highly recommended that you obtain the Material Safety Data Sheet ( MSDS ) for this chemical from a reliable source and follow its directions.
Table data obtained from CRC Handbook of Chemistry and Physics 44th ed. | https://en.wikipedia.org/wiki/Hexane_(data_page) |
Hexanitrobenzene , also known as HNB , is a nitrobenzene compound in which six nitro groups are bonded to all six positions of a central benzene ring. It is a high- density explosive compound with chemical formula C 6 N 6 O 12 , obtained by oxidizing the amine group of pentanitroaniline with hydrogen peroxide in sulfuric acid .
The stable conformation of this molecule has the nitro groups rotated out of the plane of the central benzene ring. The molecule adopts a propeller-like conformation in which the nitro groups are rotated about 53° from planar. [ 2 ]
HNB has the undesirable property of being moderately sensitive to light and, therefore, hard to utilize safely. As of 2021, it is not used in any production explosives applications, though it is used as a precursor chemical in one method of production of TATB , another explosive.
HNB was experimentally used as a gas source for explosively pumped gas dynamic laser . [ 3 ] In this application, HNB and tetranitromethane are preferred to more conventional explosives because the explosion products CO 2 and N 2 are a simple enough mixture to simulate gas dynamic processes and quite similar to conventional gas dynamic laser medium. The water and hydrogen products of many other explosives could interfere with vibrational states of CO 2 in this type of laser.
During World War II, a method of synthesis of hexanitrobenzene was suggested in Germany, and the product was supposed to be manufactured on a semi-industrial scale according to the following scheme:
Complete nitration of benzene is practically impossible because the nitro groups are deactivating groups for further nitration. | https://en.wikipedia.org/wiki/Hexanitrobenzene |
Hexanitrodiphenylamine (abbreviated HND) is an explosive chemical compound with the formula C 12 H 5 N 7 O 12 . Since it is made from readily available raw materials, HND was used extensively by the Japanese and less extensively by Nazi Germany during World War II, but its use was discontinued due to its toxicity.
Dinitrodiphenylamine is treated with 98% nitric acid . The starting material, dinitrodiphenylamine, is obtained from the reaction of aniline , dinitrochlorobenzene, and soda ash.
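Taking dinitrodiphenylamine as C 12 H 9 N 3 O 4 , the exhaustive nitration can be written as the idealized overall equation below (a balanced sketch, not quoted from a source):

C 12 H 9 N 3 O 4 + 4 HNO 3 → C 12 H 5 N 7 O 12 + 4 H 2 O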
HND is a booster-class explosive that was used in World War II by the Germans as a component of Hexanite (60% TNT - 40% HND) and by the Japanese as a component of Kongo (Type 98 H 2 ) (60% Trinitroanisole - 40% HND) for use in bombs, sea mines and depth charges; Seigate (Type 97 H) (60% TNT - 40% HND) for use in torpedo warheads and depth charges; and also in Otsu-B (60% TNT, 24% HND & 16% aluminium powder) for use in torpedo warheads.
Its ammonium salt, also known as Aurantia or Imperial Yellow , was discovered in 1873 by Emil Kopp and used as a yellow colorant for leather, wool and silk in the 19th and early 20th centuries. [ 1 ]
A highly toxic and poisonous explosive, it attacks the skin, causing blisters that resemble burns. Dust from HND is injurious to the mucous membranes of the mouth, nose, and lungs. Several nitroaromatic explosives, including HND, have been found to be mutagens . [ 2 ]
On 12 May 2022, construction of the Bundesautobahn 49 in Germany was halted, after traces of the explosive were found in the excavated material. The road passes over a former explosives factory near Stadtallendorf . The factory had been demolished after World War II, and it was not expected that traces of explosives had remained in the ground. [ 3 ] | https://en.wikipedia.org/wiki/Hexanitrodiphenylamine |
Hexanitroethane ( HNE ) is an organic compound with chemical formula C 2 N 6 O 12 or (O 2 N) 3 C-C(NO 2 ) 3 . It is a solid with a melting point of 135 °C.
Hexanitroethane is used in some pyrotechnic compositions as a nitrogen-rich oxidizer, e.g. in some decoy flare compositions and some propellants . Like hexanitrobenzene , HNE has been investigated as a gas source for explosively pumped gas dynamic lasers . [ citation needed ]
A composition of HNE as oxidizer with boron as fuel is being investigated as a new explosive. [ 1 ]
The first synthesis was described by Wilhelm Will in 1914, using the reaction of the potassium salt of tetranitroethane with nitric acid . [ 2 ]
A practicable method for industrial use starts with furfural , [ 3 ] which first undergoes oxidative ring-opening by bromine to mucobromic acid . [ 4 ] In the following step, mucobromic acid is reacted with potassium nitrite at just below room temperature to form the dipotassium salt of 2,3,3-trinitropropanal. The final product is obtained by nitration with nitric acid and sulfuric acid at −60 °C.
The thermal decomposition of hexanitroethane has been detected from 60 °C upwards in both the solid and solution phases. [ 5 ] Above 140 °C, it can occur explosively. [ 6 ] The decomposition is first order and is significantly faster in solution than in the solid. For the solid, the following reaction can be formulated (a balanced equation consistent with the expected gaseous products): [ 5 ]

C 2 N 6 O 12 → 2 CO 2 + 4 NO 2 + N 2
For the decomposition in solution, tetranitroethylene is first formed and can be trapped and detected as a Diels–Alder adduct, for example with anthracene or cyclopentadiene . [ 7 ] [ 8 ] | https://en.wikipedia.org/wiki/Hexanitroethane |
Hexanitrohexaazaisowurtzitane , also called HNIW and CL-20 , is a polycyclic nitroamine explosive with the formula C 6 H 6 N 12 O 12 . It has a better oxidizer -to- fuel ratio than conventional HMX or RDX . It releases 20% more energy than traditional HMX-based propellants.
In the 1980s, CL-20 was developed by the China Lake facility, primarily to be used in propellants . [ 1 ]
While most development of CL-20 has been carried out by the Thiokol Corporation , the US Navy (through ONR ) has also been interested in CL-20 for use in rocket propellants , such as for missiles , as it has lower observability characteristics such as less visible smoke. [ 2 ]
Thus far, CL-20 has only been used in the AeroVironment Switchblade 300 “kamikaze” drone, but is undergoing testing for use in the Lockheed Martin [LMT] AGM-158 C Long Range Anti-Ship Missile (LRASM) and AGM-158B Joint Air-to-Surface Standoff Missile-Extended Range (JASSM-ER). [ 3 ]
The Indian Armed Forces have also looked into CL-20. [ 4 ]
The Taiwanese National Chung-Shan Institute of Science and Technology inaugurated a CL-20 production facility in 2022, with reported integration into the HF-2 and HF-3 product lines. [ 5 ]
First, benzylamine ( 1 ) is condensed with glyoxal ( 2 ) under acidic and dehydrating conditions to yield the first intermediate compound ( 3 ). Four benzyl groups then selectively undergo hydrogenolysis using palladium on carbon and hydrogen, and the amino groups are acetylated in the same step using acetic anhydride as the solvent, giving compound ( 4 ). Finally, compound 4 is reacted with nitronium tetrafluoroborate and nitrosonium tetrafluoroborate , resulting in HNIW. [ 6 ]
In August 2011, Adam Matzger and Onas Bolton published results showing that a cocrystal of CL-20 and TNT had twice the stability of CL-20—safe enough to transport, but when heated to 136 °C (277 °F) the cocrystal may separate into liquid TNT and a crystal form of CL-20 with structural defects that is somewhat less stable than CL-20. [ 7 ] [ 8 ]
In August 2012, Onas Bolton et al. published results showing that a cocrystal of 2 parts CL-20 and 1 part HMX had similar safety properties to HMX, but with a greater firing power closer to CL-20. [ 9 ] [ 10 ]
In 2017, K.P. Katin and M.M. Maslov designed one-dimensional covalent chains based on CL-20 molecules. [ 11 ] Such chains were constructed using CH 2 molecular bridges to covalently bond the isolated CL-20 fragments, and it was theoretically predicted that their stability increases with increasing effective length. A year later, M.A. Gimaldinova and colleagues demonstrated the versatility of CH 2 molecular bridges, [ 12 ] showing that they are a universal means both to connect CL-20 fragments within a chain and to link the chains together into a linear or zigzag network. These studies confirmed that increasing the effective size and dimensionality of the CL-20 covalent systems increases their thermodynamic stability. The formation of CL-20 crystalline covalent solids therefore appears to be energetically favorable, and CL-20 molecules are capable of forming not only molecular crystals but bulk covalent structures as well. Numerical calculations of the electronic characteristics of CL-20 chains and networks revealed that they are wide-bandgap semiconductors. [ 11 ] [ 12 ]
Hexanitrostilbene (HNS), also called JD-X , is an organic compound with the formula [(O 2 N) 3 C 6 H 2 CH] 2 . It is a yellow-orange solid. [ 1 ] It is used as a heat-resistant high explosive . It is slightly soluble (0.1 - 5 g/100 mL) in butyrolactone , DMF , DMSO , and N -methylpyrrolidone .
It is produced by oxidizing trinitrotoluene (TNT) with a solution of sodium hypochlorite . HNS boasts a higher insensitivity to heat than TNT, and like TNT it is insensitive to impact. When casting TNT, HNS is added at 0.5% to form erratic micro-crystals within the TNT, which prevent cracking. [ 1 ] Because of its insensitivity but high explosive properties, HNS is used in space missions. It was the main explosive fill in the seismic source generating mortar ammunition canisters used as part of the Apollo Lunar Active Seismic Experiments . [ 2 ]
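The oxidative coupling of TNT mentioned above corresponds to the idealized overall stoichiometry below (a balanced sketch of the net reaction; the actual process proceeds through intermediates):

2 C 7 H 5 N 3 O 6 + 2 NaOCl → C 14 H 6 N 6 O 12 + 2 NaCl + 2 H 2 O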
Its heat of detonation is 4 kJ/g. [ 3 ]
It was developed by Kathryn Grove Shipp at the U.S. Naval Ordnance Laboratory in the 1960s and has been improved on since then. [ 4 ] | https://en.wikipedia.org/wiki/Hexanitrostilbene |
Hexaoxygen difluoride is a binary inorganic compound of fluorine and oxygen with the chemical formula O 6 F 2 . [ 1 ] [ 2 ] The compound is one of many known oxygen fluorides . [ 3 ]
The compound can be prepared by passing electric discharges through an F 2 –O 2 mixture of a certain molar ratio, predicted to be 6:2, at 60 to 77 K. [ 4 ]
Hexaoxygen difluoride is an oxidizing agent . At 60 K, the compound is a dark-brown crystalline solid. If slowly warmed, it decomposes to lower oxygen fluorides and ozone . If quickly warmed to 90 K, it explodes, creating O 2 and F 2 . [ 4 ]
Hexaphenylethane is a hypothetical organic compound consisting of an ethane core with six phenyl substituents . All attempts at its synthesis have been unsuccessful. [ 1 ] The trityl free radical , Ph 3 C · , was originally thought to dimerize to form hexaphenylethane. However, inspection of the NMR spectrum of this dimer reveals that it is in fact a non-symmetrical species, known as Gomberg's dimer .
A substituted derivative of hexaphenylethane, hexakis(3,5-di- t -butylphenyl)ethane, has however been prepared. It features a very long central C–C bond at 167 pm (compared to the typical bond length of 154 pm). Attractive London dispersion forces between the t -butyl substituents are believed to be responsible for the stability of this very hindered molecule. [ 2 ]
| https://en.wikipedia.org/wiki/Hexaphenylethane |
Hexaphosphabenzene is a valence isoelectronic analogue of benzene and is expected to have a similar planar structure due to resonance stabilization and its sp 2 nature. Although several other allotropes of phosphorus are stable, no evidence for the existence of P 6 has been reported. Preliminary ab initio calculations on the trimerisation of P 2 leading to the formation of cyclic P 6 were performed, and it was predicted that hexaphosphabenzene would decompose to free P 2 with an energy barrier of 13–15.4 kcal mol −1 , [ 1 ] and would therefore not be observed in the uncomplexed state under normal experimental conditions. The presence of an added solvent , such as ethanol , might lead to the formation of intermolecular hydrogen bonds which may block the destabilizing interaction between phosphorus lone pairs and consequently stabilize P 6 . [ 1 ] The moderate barrier suggests that hexaphosphabenzene could be synthesized from a [2+2+2] cycloaddition of three P 2 molecules. [ 2 ] This synthesis, however, remains to be achieved.
Isolation of hexaphosphabenzene was first achieved within a triple-decker sandwich complex in 1985 by Scherer et al. Amber coloured, air-stable crystals of [{(η 5 - Me 5 C 5 ) Mo } 2 (μ,η 6 -P 6 )] are formed by reaction of [CpMo(CO) 2 / 3 ] 2 with excess P 4 in dimethylbenzene , albeit with a yield of approximately 1%. [ 3 ] [ 4 ] The crystal structure of this complex is a centrosymmetric molecule, and both five-membered rings as well as the central bridge-ligand P 6 ring are planar and parallel. The average P–P distance for the hexaphosphabenzene within this complex is 2.170 Å. [ 3 ] [ 5 ]
Thirty years later, Fleischmann et al. improved the synthetic yield of [{(η 5 -Me 5 C 5 )Mo} 2 (μ,η 6 -P 6 )] up to 64%. This was achieved by increasing the reaction temperature of the thermolysis of [CpMo(CO) 2 / 3 ] 2 with P 4 to approximately 205 °C in boiling diisopropylbenzene , thus favouring the formation of [{(η 5 -Me 5 C 5 )Mo} 2 (μ,η 6 -P 6 )] as the thermodynamic product. [ 6 ]
Several analogues of this P 6 triple‐decker complex, in which the coordinating metal and η 5 -ligand have been varied, have also been reported. These include P 6 triple‐decker complexes of Ti , V , Nb , and W , whereby the synthetic method is still based on the originally reported thermolysis of [CpM(CO) 2 / 3 ] 2 with P 4 . [ 7 ] [ 8 ] [ 9 ] [ 10 ] [ 11 ]
If one regards the planar P 6 ring as a 6π electron donor ligand, then [{(η 5 -Me 5 C 5 )Mo} 2 (μ,η 6 -P 6 )] is a triple-decker sandwich complex with 28 valence electrons . If P 6 , similar to C 6 H 6 , is taken as a 10π electron donor , a 32 valence electron count may be obtained. In most triple-decker complexes with an electron count ranging from 26 to 34, the structure of the middle ring is planar ([{(η 5 -Cp)M} 2 (μ,η 6 -P 6 )] with M = Mo, Sc, Y, Zr, Hf, V, Nb, Ta, Cr, and W). [ 12 ] [ 13 ] In the 24 valence electron [{(η 5 -Cp)Ti} 2 (μ,η 6 -P 6 )] complex, however, a distortion is observed, and the P 6 ring is puckered. [ 7 ]
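The two counts can be made explicit with neutral-fragment bookkeeping (an illustrative tally consistent with the figures quoted above):

$$2 \times 6\,(\mathrm{Mo}) + 2 \times 5\,(\eta^5\text{-}\mathrm{C_5Me_5}) + 6\,(\mathrm{P_6}\ \text{as}\ 6\pi\ \text{donor}) = 28$$

$$2 \times 6\,(\mathrm{Mo}) + 2 \times 5\,(\eta^5\text{-}\mathrm{C_5Me_5}) + 10\,(\mathrm{P_6}\ \text{as}\ 10\pi\ \text{donor}) = 32$$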
Calculations have concluded that completely filled 2a* and 2b* orbitals in 28 valence electron complexes lead to a planar symmetrical P 6 middle ring. In 26 valence electron complexes, the occupancy of either 2a* or 2b* results in in-plane or bisallylic distortions and an asymmetric planar middle ring. The puckering of P 6 in 24 valence electron complexes is due to the stabilization of 5a, as well as that conferred by the tetravalent oxidation state of Ti in [{(η 5 -Cp)Ti} 2 (μ,η 6 -P 6 )]. [ 7 ] [ 14 ]
The reactivity of [{(η 5 - Me 5 C 5 )Mo} 2 (μ,η 6 -P 6 )] toward silver and copper monocationic salts of the weakly coordinating anion [Al{OC(CF 3 ) 3 } 4 ] − ([TEF]) was studied by Fleischmann et al. in 2015. [ 6 ] Addition of a solution of Ag[TEF] or Cu[TEF] to a solution of [{(η 5 - Me 5 C 5 )Mo} 2 (μ,η 6 -P 6 )] in chloroform results in oxidation of the complex, which can be observed by an immediate colour change from amber to dark teal. The magnetic moment of the dark teal crystals determined by the Evans NMR method is equal to 1.67 μB, which is consistent with one unpaired electron. Accordingly, [{(η 5 - Me 5 C 5 )Mo} 2 (μ,η 6 -P 6 )] + is detected by ESI mass spectrometry .
The crystal structure of the teal product shows that the triple‐decker geometry is retained during the one‐electron oxidation of [{(η 5 - Me 5 C 5 )Mo} 2 (μ,η 6 -P 6 )]. The Mo—Mo bond length of the [{(η 5 - Me 5 C 5 )Mo} 2 (μ,η 6 -P 6 )] + cation is 2.6617(4) Å, almost identical to the bond length determined for the unoxidized species at 2.6463(3) Å. However, the P—P bond lengths are strongly affected by the oxidation . While the P1—P1′ and P3—P3′ bonds are elongated, the remaining P—P bonds are shortened compared to the average P—P bond length of about 2.183 Å in the unoxidized species. Therefore, the middle deck of the 27 valence electron [{(η 5 - Me 5 C 5 )Mo} 2 (μ,η 6 -P 6 )] + complex can best be described as a bisallylic distorted P 6 ligand, intermediate between the 28 valence electron complexes with a perfectly planar symmetrical ring and those with 26 valence electrons displaying a more amplified in-plane distortion. Density functional theory (DFT) calculations confirm that this distortion is due to depopulation of the P bonding orbitals upon oxidation of the triple-decker sandwich complex . [ 6 ]
To avoid oxidation of [{(η 5 - Me 5 C 5 )Mo} 2 (μ,η 6 -P 6 )], further reactions were performed in toluene to decrease the redox potential of the cations. This resulted in a bright orange coordination product upon reaction with copper , although a mixture also containing the dark teal oxidation product was obtained upon reaction with silver .
Single‐crystal X‐ray analysis reveals that this product displays a distorted square‐planar coordination environment around the central cation through two side‐on coordinating P—P bonds. The Ag—P distances are approximately 2.6 Å, whereas the Cu—P distances are determined to be approximately 2.4 Å. The P—P bonds are therefore elongated to 2.2694(16) Å and 2.2915(14) Å upon coordination to copper and silver , respectively, whilst the remaining P—P bonds are unaffected.
In another experiment Cu[TEF] is treated with [{(η 5 - Me 5 C 5 )Mo} 2 (μ,η 6 -P 6 )] in pure toluene and the solution shows the bright orange color of the complex cation [Cu([{(η 5 - Me 5 C 5 )Mo} 2 (μ,η 6 -P 6 )]) 2 ] + . However, analysis of crystals from this solution reveals a distorted tetrahedral coordination environment around Cu. The resulting Cu—P distances are somewhat shorter than their counterparts discussed above. The coordinating P—P bonds are a little longer, which is attributed to less steric crowding in the tetrahedral coordination geometry around the Cu center.
The successful isolation of [Cu([{(η 5 - Me 5 C 5 )Mo} 2 (μ,η 6 -P 6 )]) 2 ] + either as its tetrahedral or square‐planar isomer is therefore achievable. DFT calculations show that the enthalpy for the tetrahedral to square‐planar isomerization is positive for both metals, with the tetrahedral coordination being favored. When entropy is taken into account, small positive values of the isomerization free energy are obtained for Cu + , and larger, but negative, values for Ag + . This means that the tetrahedral geometry is predominant for Cu + , although a significant percentage of the complexes adopt a square‐planar geometry in solution. For Ag + , the equilibrium is shifted significantly toward the square‐planar geometry, which is presumably why a tetrahedral coordination of [{(η 5 - Me 5 C 5 )Mo} 2 (μ,η 6 -P 6 )] and Ag + has not yet been observed.
Examination of the crystal packing reveals that these products are layered compounds that crystallize in the monoclinic C2/c space group , with alternating negatively charged layers of the [TEF] anions and positively charged layers of isolated [M([{(η 5 - Me 5 C 5 )Mo} 2 (μ,η 6 -P 6 )]) 2 ] + complexes. The layers lie inside the bc plane, alternate along the a axis, and do not form a two‐dimensional network. [ 6 ]
The treatment of [{(η 5 - Me 5 C 5 )Mo} 2 (μ,η 6 -P 6 )] with Tl[TEF] in chloroform gives an immediate color change from amber to a deep red. The crystal structure reveals a trigonal pyramidal coordination of the thallium cation, Tl + , by three side‐on coordinating P—P bonds of the P 6 ligands . Two of these P 6 ligands show shorter and uniform Tl—P distances of 3.2–3.3 Å with P—P bonds elongated to about 2.22 Å, whilst the third unit shows an unsymmetrical coordination with long Tl—P distances of approximately 3.42 and 3.69 Å and no P—P bond elongation.
Although the environment of Tl + is distinctly different from that of Cu + and Ag + , their structures are related by the two‐dimensional coordination network that propagates inside the bc plane. Crucially, whilst Cu + and Ag + form layered structures with isolated [M([{(η 5 - Me 5 C 5 )Mo} 2 (μ,η 6 -P 6 )]) 2 ] + complex cations, there is a statistical distribution of the Tl + cations inside the two‐dimensional coordination, which shows further interconnection of the P 6 ligands to form an extended 2D network that could be regarded as a supramolecular analogue of graphene . [ 6 ]
Despite the triple-decker sandwich complex {(η 5 -Me 5 C 5 )Mo} 2 (μ,η 6 -P 6 ) containing a demonstrably planar P 6 ring with equal P—P bond lengths, theoretical calculations reveal that there are at least 7 non-planar P 6 isomers lower in energy than the planar benzene -like D 6h structure. [ 1 ] [ 2 ] [ 15 ] [ 16 ] [ 17 ] [ 18 ] [ 19 ] [ 20 ] [ 21 ] [ 22 ] [ 23 ] [ 24 ] In increasing order of energy these are: benzvalene, prismane, chair, Dewar benzene, bicyclopropenyl, distorted benzene, and benzene. [ 24 ]
A pseudo Jahn–Teller effect (PJT) is responsible for distortion of the D 6h benzene -like structure into the D 2 structure, [ 25 ] [ 26 ] [ 27 ] [ 28 ] [ 29 ] [ 30 ] which occurs along the e 2u doubly degenerate mode as a result of vibronic coupling of the HOMO − 1 (e 2g ) and LUMO (e 2u ): e 2g ⊗ e 2u = a 1u ⊕ a 2u ⊕ e 2u . The distorted structure is calculated to lie just 2.7 kcal mol −1 lower in energy than the D 6h structure. If the uncomplexed structure were to be successfully synthesized, the aromaticity of the benzene -like P 6 structure would not be sufficient to stabilize the planar geometry, and the PJT effect would result in distortion of the ring. [ 31 ]
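The direct-product decomposition quoted above can be verified from the D 6 rotational characters (a standard character-table exercise, added here for completeness). For the classes {E, 2C 6 , 2C 3 , C 2 , 3C 2 ′, 3C 2 ″}, the E 2 characters are (2, −1, −1, 2, 0, 0), and multiplying them class-by-class gives

$$\chi(E_2)\,\chi(E_2) = (4,\,1,\,1,\,4,\,0,\,0) = \chi(A_1) + \chi(A_2) + \chi(E_2)$$

Attaching the parity g ⊗ u = u then yields e 2g ⊗ e 2u = a 1u ⊕ a 2u ⊕ e 2u , as stated.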
Adaptive Natural Density Partitioning (AdNDP) is a theoretical tool developed by Alexander Boldyrev that is based on the concept of the electron pair as the main element of chemical bonding models. It can therefore recover Lewis bonding elements such as 1c–2e core electrons and lone pairs, 2c–2e objects which are two-center two-electron bonds, as well as delocalized many-center bonding elements with respect to aromaticity.
The AdNDP analysis of the seven representative low-lying P 6 structures reveal that these are well described by the classical Lewis model . A lone pair on each phosphorus atom, a two-center-two-electron (2c–2e) σ-bond in every pair of adjacent P atoms, and an additional 2c–2e π-bond between adjacent 2-coordinated P atoms are found, with occupation numbers (ON) of all these bonding elements above 1.92 |e|. [ 31 ]
The chemical bonding in the chair structure is unusual. Based on fragment orbital analysis , it was concluded that two linkages between the two P 3 fragments are of the one-electron hemibond type. The AdNDP analysis reveals a lone pair on each P atom and six 2c–2e P—P σ-bonds. One 3c–2e π-bond in every P 3 triangle was revealed with the user-directed form of the AdNDP analysis, as well as a 4c–2e bond responsible for bonding between the two P 3 triangles, confirming that this isomer cannot be represented by a single Lewis structure ; it requires a resonance of two Lewis structures , or can be described by a single formula with delocalized bonding elements.
Both the D 6h benzene -like structure and the D 2 isomer of P 6 are similar to the reported AdNDP bonding pattern of the C 6 H 6 benzene molecule: [ 32 ] 2c–2e σ-bonds and lone pairs , as well as delocalized 6c–2e π-bonds. The distortion due to the PJT effect therefore does not significantly disturb the bonding picture. [ 31 ]
The planar P 6 hexagonal structure D 6h is a second-order saddle point due to the pseudo-Jahn–Teller effect (PJT) , which leads to the D 2 distorted structure. Upon sandwich complex formation the PJT effect is suppressed due to filling of the unoccupied molecular orbitals involved in vibronic coupling in P 6 with electron pairs of Mo atoms. [ 33 ] [ 34 ] [ 35 ] Specifically, from molecular orbital analysis it was determined that, upon complex formation, the LUMO of the isolated P 6 structure becomes occupied in the triple-decker complex as a result of the appreciable δ-type M → L back-donation from the occupied d x²−y² and d xy atomic orbitals of the Mo atom into the partially antibonding π molecular orbitals of P 6 , thus restoring the high symmetry and planarity of P 6 . [ 35 ]
The hexatic phase is a state of matter that is between the solid and the isotropic liquid phases in two dimensional systems of particles. It is characterized by two order parameters: a short-range positional and a quasi-long-range orientational (sixfold) order. More generally, a hexatic is any phase that contains sixfold orientational order, in analogy with the nematic phase (with twofold orientational order).
It is a fluid phase, since the shear modulus and the Young's modulus vanish due to the dissociation of dislocations . It is an anisotropic phase, since there exists a director field with sixfold symmetry. The existence of the director field implies that an elastic modulus against drilling or torsion exists within the plane, that is usually called Frank's constant after Charles Frank in analogy to liquid crystals . The ensemble becomes an isotropic liquid (and Frank's constant becomes zero) after the dissociation of disclinations at a higher temperature (or lower density). Therefore, the hexatic phase contains dislocations but no disclinations.
The KTHNY theory of two-step melting, by i) destroying positional order and ii) destroying orientational order, was developed by John Michael Kosterlitz , David J. Thouless , Bertrand Halperin , David Robert Nelson and A. P. Young in theoretical studies of topological defect unbinding in two dimensions. In 2016, Kosterlitz and Thouless were awarded the Nobel Prize in Physics (together with Duncan Haldane ) for the idea that melting in 2D is mediated by topological defects. The hexatic phase was predicted by D. Nelson and B. Halperin; it does not have a strict analogue in three dimensions.
The hexatic phase can be described by two order parameters , where the translational order is short ranged (exponential decay) and the orientational order is quasi-long ranged (algebraic decay).
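In the standard KTHNY classification, these decay laws (textbook forms of the correlation functions defined in the following paragraphs, not reproduced from this article) can be summarized as:

$$\text{crystal:}\quad G_{\vec G}(r) \sim r^{-\eta_{\vec G}}, \qquad g_6(r) \to \text{const}$$

$$\text{hexatic:}\quad G_{\vec G}(r) \sim e^{-r/\xi}, \qquad g_6(r) \sim r^{-\eta_6},\ \ \eta_6 \le 1/4$$

$$\text{isotropic:}\quad G_{\vec G}(r) \sim e^{-r/\xi}, \qquad g_6(r) \sim e^{-r/\xi_6}$$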
If the positions of the atoms or particles are known, the translational order can be determined with the translational correlation function $G_{\vec{G}}(\vec{R})$ as a function of the distance between a lattice site at position $\vec{R}$ and the origin $\vec{0}$, based on the two-dimensional density function $\rho_{\vec{G}}(\vec{R}) = e^{i\vec{G}\cdot[\vec{R}+\vec{u}(\vec{R})]}$ in reciprocal space :

$$G_{\vec{G}}(\vec{R}) = \langle \rho_{\vec{G}}(\vec{R})\, \rho_{-\vec{G}}(\vec{0}) \rangle = \langle e^{i\vec{G}\cdot[\vec{u}(\vec{R})-\vec{u}(\vec{0})]} \rangle$$
The vector $\vec{R}$ points to a lattice site within the crystal , about which the atom fluctuates with an amplitude $\vec{u}(\vec{R})$ due to thermal motion. $\vec{G}$ is a reciprocal lattice vector in Fourier space . The brackets denote a statistical average over all pairs of atoms with distance R.
The translational correlation function decays quickly, i.e. exponentially, in the hexatic phase. In a 2D crystal, the translational order is quasi-long-range and the correlation function decays rather slowly, i.e. algebraically; it is not perfectly long range, as in three dimensions, since the displacements $\vec{u}(\vec{R})$ diverge logarithmically with system size at temperatures above T = 0 due to the Mermin–Wagner theorem .
A disadvantage of the translational correlation function is that, strictly speaking, it is only well defined within the crystal. In the isotropic fluid, at the latest, disclinations are present and the reciprocal lattice vector is no longer defined.
The orientational order can be determined from the local director field of a particle at position $\vec{r}_i$, taking the angles $\theta_{ij}$ of the bonds to its $N_i$ nearest neighbours, counted in sixfold space and normalized by the number of nearest neighbours:

$$\Psi(\vec{r}_i) = \frac{1}{N_i} \sum_{j=1}^{N_i} e^{i 6 \theta_{ij}}$$
$\Psi$ is a complex number of magnitude $|\Psi(\vec{r})| \le 1$, and the orientation of the sixfold director is given by the phase. In a hexagonal crystal, this is nothing other than the orientation of the crystal axes. The local director field vanishes ($\Psi \sim 0$) for a particle with five or seven nearest neighbours, as induced by dislocations and disclinations , except for a small contribution due to thermal motion. The orientational correlation function between two particles i and k at distance $\vec{r} = \vec{r}_i - \vec{r}_k$ is now defined using the local director field:

$$g_6(\vec{r}) = \langle \Psi(\vec{r}_i)\, \Psi^*(\vec{r}_k) \rangle$$

Again, the brackets denote the statistical average over all pairs of particles with distance $|\vec{r}| = r$.
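The following is a minimal computational sketch (not part of the article) of how $\Psi$ and $g_6(r)$ can be estimated from a set of two-dimensional particle positions. It assumes the common simplification of taking the six nearest neighbours of each particle; a Voronoi/Delaunay construction is the more rigorous way to define neighbours. All function names and the binning scheme are illustrative choices.

```python
# Sketch: sixfold bond-orientational order parameter Psi and the
# orientational correlation function g_6(r) for 2D particle positions.
import numpy as np
from scipy.spatial import cKDTree

def psi6(points):
    """Local order parameter Psi for each particle (complex array)."""
    tree = cKDTree(points)
    _, idx = tree.query(points, k=7)                  # first hit is the particle itself
    bonds = points[idx[:, 1:]] - points[:, None, :]   # vectors to the 6 neighbours
    theta = np.arctan2(bonds[..., 1], bonds[..., 0])  # bond angles theta_ij
    return np.exp(6j * theta).mean(axis=1)            # (1/N_i) sum_j exp(i 6 theta_ij)

def g6(points, psi, rmax, nbins=50):
    """Estimate g_6(r) = <Psi(r_i) Psi*(r_k)> binned over pair distances."""
    tree = cKDTree(points)
    pairs = tree.query_pairs(rmax, output_type="ndarray")
    r = np.linalg.norm(points[pairs[:, 0]] - points[pairs[:, 1]], axis=1)
    corr = (psi[pairs[:, 0]] * np.conj(psi[pairs[:, 1]])).real
    edges = np.linspace(0.0, rmax, nbins + 1)
    which = np.digitize(r, edges) - 1
    g = np.array([corr[which == b].mean() if np.any(which == b) else np.nan
                  for b in range(nbins)])
    return 0.5 * (edges[1:] + edges[:-1]), g

if __name__ == "__main__":
    # A slightly perturbed triangular lattice should give |Psi| close to 1.
    rng = np.random.default_rng(0)
    n, a = 20, 1.0
    x, y = np.meshgrid(np.arange(n) * a, np.arange(n) * a * np.sqrt(3) / 2)
    x[1::2] += a / 2                                  # stagger every other row
    pts = np.c_[x.ravel(), y.ravel()] + 0.05 * rng.standard_normal((n * n, 2))
    psi = psi6(pts)
    print("mean |Psi|:", np.abs(psi).mean())
```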
All three thermodynamic phases can be identified with this orientational correlation function: it does not decay in the 2D crystal but takes a constant value (shown in blue in the figure). The stiffness against local torsion is arbitrarily large, and Frank's constant is infinite. In the hexatic phase, the correlation decays with a power law (algebraically), which gives straight lines in a log-log plot (shown in green in the figure). In the isotropic phase, the correlations decay exponentially fast; these are the red curved lines in the log-log plot (in a lin-log plot, they would be straight lines). The discrete structure of the atoms or particles is superimposed on the correlation function, giving minima at half-integral multiples of the particle distance $a$. Particles that are poorly correlated in position are also poorly correlated in their director. | https://en.wikipedia.org/wiki/Hexatic_phase |
The hexatriynyl radical , C 6 H , is an organic radical molecule consisting of a linear chain of six carbon atoms terminated by a hydrogen ( H−C≡C−C≡C−C≡C• ). The unpaired electron is located at the opposite end to the hydrogen atom, as indicated. Both experimental work and computer simulations on this species were done in the early 1990s. [ 1 ] [ 2 ]
The radical can be synthesized by photolysis . Two different examples involve
In 2006 the negatively charged anion of this molecule, C 6 H − , became the first negatively charged ion discovered in the interstellar medium , using the Green Bank Telescope . [ 3 ] Negative ions had been thought to be unstable in this environment due to the prevalence of ultraviolet light , which dislodges loosely bound extra electrons.
The laboratory synthesis starts from acetylene C 2 H 2 . The reaction takes place within a DC discharge at reduced pressure in a mixture with 15% argon . The product is observed by millimeter-wave spectroscopy.
The two species C 4 H − and C 8 H − have also been detected. | https://en.wikipedia.org/wiki/Hexatriynyl_radical |
Hexavalent chromium ( chromium(VI) , Cr(VI) , chromium 6 ) is any chemical compound that contains the element chromium in the +6 oxidation state (thus hexavalent ). [ 1 ] It has been identified as carcinogenic, which is of concern since approximately 136,000 tonnes (150,000 tons) of hexavalent chromium were produced in 1985. [ 2 ] Hexavalent chromium compounds can be carcinogens ( IARC Group 1 ), especially if airborne and inhaled where they can cause lung cancer .
Hexavalent chromium occurs only rarely in nature, an exception being crocoite (PbCrO 4 ). [ 3 ] It is however produced on a large scale industrially. Virtually all chromium ore is processed via the formation of hexavalent chromium, specifically the salt sodium dichromate . [ 2 ] Sodium chromate is converted into other hexavalent chromium compounds such as chromium trioxide and various salts of chromate and dichromate .
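The large-scale processing mentioned above proceeds by oxidative roasting of chromite ore with soda ash; a commonly cited overall equation (a textbook idealization, not taken from this article) is:

4 FeCr 2 O 4 + 8 Na 2 CO 3 + 7 O 2 → 8 Na 2 CrO 4 + 2 Fe 2 O 3 + 8 CO 2

The sodium chromate so obtained can then be converted to sodium dichromate by acidification, e.g. with sulfuric acid.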
Industrial uses of hexavalent chromium compounds include chromate pigments in dyes, paints, inks, and plastics; chromates added as anticorrosive agents to paints, primers, and other surface coatings; and chromic acid electroplated onto metal parts to provide a decorative or protective coating.
Hexavalent chromium is indeed one of the most widely used heavy metals in various sectors and industries (metallurgy, chemicals, textiles, etc.), with particular involvement in the metal coating sector through plating and coating processes that employ hexavalent chromium. [ 4 ]
Hexavalent chromium can be formed when performing "hot work" such as welding on stainless steel or melting chromium metal. In these situations the chromium is not originally hexavalent, but the high temperatures involved in the process result in oxidation that converts the chromium to a hexavalent state. [ 5 ] Hexavalent chromium can also be found in drinking water and public water systems. [ 6 ] [ 7 ]
Many hexavalent chromium compounds can be carcinogens ( IARC Group 1 ), especially if airborne and inhaled where they can cause lung cancer . Positive associations have also been observed between exposure to chromium(VI) compounds and cancer of the nose and nasal sinuses . [ 8 ] Workers in many occupations are exposed to hexavalent chromium. Problematic exposure is known to occur among workers who handle chromate-containing products and those who grind and/or weld stainless steel. [ 9 ] Workers who are exposed to hexavalent chromium are at increased risk of developing lung cancer, asthma, or damage to the nasal epithelia and skin. [ 5 ] Within the European Union , the use of hexavalent chromium in electronic equipment is largely prohibited by the Restriction of Hazardous Substances Directive and the European Union regulation on Registration, Evaluation, Authorisation and Restriction of Chemicals . [ 10 ]
Hexavalent chromium compounds can be genotoxic carcinogens . Due to its structural similarity to sulfate , chromate (a typical form of chromium(VI) at neutral pH) is transported into cells via sulfate channels. [ 11 ] Inside the cell, hexavalent chromium(VI) is reduced first to pentavalent chromium(V) then to trivalent chromium(III) without the aid of any enzymes. [ 11 ] [ 12 ] The reduction occurs via direct electron transfer primarily from ascorbate and some nonprotein thiols . [ 11 ] Vitamin C and other reducing agents combine with chromate to give chromium(III) products inside the cell. [ 11 ] The resultant chromium(III) forms stable complexes with nucleic acids and proteins . [ 11 ] This causes strand breaks and Cr–DNA adducts which are responsible for mutagenic damage. [ 11 ] According to Shi et al., the DNA can also be damaged by hydroxyl radicals produced during reoxidation of pentavalent chromium by hydrogen peroxide molecules present in the cell, which can cause double-strand breakage. [ 12 ]
Both insoluble salts of lead and barium chromates as well as soluble chromates were negative in the implantation model of lung carcinogenesis . [ 11 ] Yet, soluble chromates are a confirmed carcinogen so it would be prudent to consider all chromates carcinogenic. [ 9 ] [ 11 ]
The LD50 of lead chromate is 5 g/kg (oral, rats). This low toxicity is attributed to its extremely low solubility. Consequently lead chromate remains a common, even preferred, pigment. [ 13 ]
Chronic inhalation from occupational exposures increases the risk of respiratory cancers. [ 11 ] The most common form of lung malignancies in chromate workers is squamous cell carcinoma. [ 11 ] Ingestion of chromium(VI) through drinking water has been found to cause cancer in the oral cavity and small intestine . [ 11 ] It can also cause irritation or ulcers in the stomach and intestines, and toxicity in the liver. [ 11 ] [ 14 ] Liver toxicity shows the body's apparent inability to detoxify chromium(VI) in the GI tract where it can then enter the circulatory system. [ 11 ]
Of 2,345 unsafe products in 2015 listed by the EU Commission for Justice, Consumers and Gender Equality some 64% came from China, and 23% were clothing articles, including leather goods (and shoes) contaminated with hexavalent chromium. [ 15 ] Chromate-dyed textiles or chromate-tanned leather shoes can cause skin sensitivity. [ 15 ]
In the U.S., the OSHA PEL for airborne exposures to hexavalent chromium is 5 μg/m 3 . [ 16 ] [ 17 ] The U.S. National Institute for Occupational Safety and Health proposed a REL of 0.2 μg/m 3 for airborne exposures to hexavalent chromium. [ 18 ]
Based on the findings of the National Toxicology Program (NTP), which is headquartered in the National Institute of Environmental Health Sciences (NIEHS), California in 2014 established a state-wide drinking water maximum contaminant level (MCL) of 10 parts per billion (ppb), i.e. 10 micrograms per liter, "specifically for hexavalent chromium, not total chromium." [ 19 ] [ 20 ] [ 21 ]
For drinking water, the United States Environmental Protection Agency (EPA) does not have a Maximum Contaminant Level (MCL) for hexavalent chromium.
Attempts have been made to test the removal or reduction of hexavalent chromium from aqueous solutions. [ 22 ] Another study done by the American Industrial Hygiene Association indicates that the airborne hexavalent chromium in acidic mists of an electroplating tank collected on PVC filters was reduced over time after mist generation. [ 23 ] A number of other emerging technologies for removing chromium from water are also currently under research, including the use of cationic metal-organic frameworks to selectively adsorb chromium oxyanions . [ 24 ]
Thermus scotoductus , an extremophile living in hot water and, according to one study, inhabiting domestic water heaters, [ 25 ] is capable of reducing Cr(VI). [ 26 ] Experiments with activated sludge have also shown its ability to reduce Cr(VI) to Cr(III). [ 27 ]
Hexavalent chromium is a constituent of tobacco smoke . [ 28 ]
Hexavalent chromium was released from the Newcastle Orica Kooragang Island ammonium nitrate plant on August 8, 2011. [ 29 ] The incident occurred when the plant entered the 'start up' phase after the completion of a five-yearly maintenance overhaul. [ 30 ] The high-temperature shift catalyst began the process of 'reduction', in which steam passes through the catalyst bed and out the SP8 vent stack. [ 30 ] At this time, lower temperatures in parts of the plant caused some of the steam to condense, which caused chromium(VI) from the catalyst bed to dissolve into the liquid present. [ 30 ] The amount of condensate overwhelmed the drainage arrangements, resulting in the emission of condensate through the SP8 vent stack. [ 30 ] The leak went undetected for 30 minutes, releasing 200 kg of chromium(VI) into the atmosphere and exposing up to 20 workers at the plant and 70 nearby homes in Stockton . [ 30 ]
The town was not notified of the exposure until three days later, on the Wednesday morning, [ 29 ] which sparked a major public controversy, with Orica criticized for playing down the extent and possible risks of the leak. [ 31 ] The Office of Environment and Heritage in Stockton collected 71 samples. Low levels of chromium were detected in 11 of them. [ 29 ] These 11 samples were taken within six residential blocks close to the Orica plant; two of them were water samples collected immediately south of the six-block area. [ 29 ]
The Select Committee on the Kooragang Island Orica Chemical Leak released their report on the incident in February 2012. They found Orica's approach to addressing the leak's impact was grossly inadequate. [ 30 ] Orica failed to realize the potential impact that prevailing winds would have on an emission 60 meters high. [ 30 ] Orica failed to inspect the area immediately downwind and did not notify the Office of Environment and Heritage until August 9, 2011. [ 30 ] In Orica's initial report to the Office of Environment and Heritage, the company failed to disclose that the emissions had escaped off-site. [ 30 ] In the initial report to WorkCover, Orica did not disclose the potential impacts on workers, or that the substance emitted was chromium(VI). [ 30 ] Orica's emergency response plan was not well understood by employees, particularly regarding notification procedures. [ 30 ] The original notification of residents in Stockton covered only households immediately downwind of the emission, which failed to account for the potential contamination of the surrounding area as well. [ 30 ] The information presented in the original notification downplayed potential health risks and was incomplete, which has led to a lack of trust between Stockton residents and Orica officials. [ 30 ] [ 31 ]
In 2014, Orica pleaded guilty to nine charges before the Land and Environment court and was fined $768,000. [ 32 ] NSW Health findings ruled that it is very unlikely that anyone in Stockton would later develop cancer as a result of the incident. [ 33 ]
Toxic poultry feed contaminated by chromium-based leather tanning waste products (as opposed to the non-toxic process of vegetable tanned leather) has been shown to have entered the food supply in Bangladesh through chicken meat, the most common source of protein in the country. Tanneries in Hazaribagh Thana , an industrial neighborhood of Dhaka , emit around 21,600 cubic metres (760,000 cu ft) of toxic waste each day, and generate as much as 100 tonnes (110 tons) per day of scraps, trimmed raw hide, flesh and fat, which are processed into feed by neighborhood recycling plants and used in chicken and fish farms across the country. Chromium levels ranging from 350–4,520 micrograms (0.35–4.52 mg) per kilogram were found in different organs of chickens which had been fed the tannery-scraps feed for two months, according to Abul Hossain, a chemistry professor at the University of Dhaka . The 2014 study estimated up to 25% of the chickens in Bangladesh contained harmful levels of chromium(VI). [ 34 ]
Studies of the groundwater chemistry in eastern Central Greece (central Euboea and the Asopos valley) have revealed high concentrations of hexavalent chromium in groundwater systems, sometimes exceeding the Greek and EU drinking water maximum acceptable level for total chromium. Hexavalent chromium pollution in Greece is associated with industrial waste.
Using GFAAS for total chromium, the diphenylcarbazide–Cr(VI) complex colorimetric method for hexavalent chromium, and flame AAS and ICP-MS for other toxic elements, the concentrations of these species were investigated in several groundwater samples. The contamination of water by hexavalent chromium in central Euboea is mainly linked to natural processes, but there are also anthropogenic cases. [ 35 ]
In the Thebes – Tanagra – Malakasa basin of Eastern Central Greece , [ 36 ] an area that supports many industrial activities, chromium(VI) concentrations of up to 80 μg/L were found in the urban water supply of Oropos and up to 53 μg/L in Inofyta . Chromium(VI) concentrations ranging from 5–33 μg/L were found in groundwater that is used for Thiva 's water supply. Arsenic concentrations up to 34 μg/L, along with chromium(VI) levels up to 40 μg/L, were detected in Schimatari 's water supply.
In the Asopos River , total chromium values were up to 13 μg/L (0.00091 gr/imp gal), hexavalent chromium was less than 5 μg/L (0.00035 gr/imp gal), with other toxic elements relatively low. [ 36 ]
In 2008, defense contractor KBR was alleged to have exposed 16 members of the Indiana National Guard , as well as its own workers, to hexavalent chromium at the Qarmat Ali water treatment facility in Iraq in 2003. [ 37 ] Later, 433 members of the Oregon National Guard 's 162nd Infantry Battalion were informed of possible exposure to hexavalent chromium while escorting KBR contractors. [ 38 ]
One of the National Guard soldiers, David Moore, died of lung disease in February 2008, at age 42. His death was ruled service-related. His brother believes it was caused by hexavalent chromium exposure. [ 39 ] On November 2, 2012, a Portland, Oregon jury found KBR negligent in knowingly exposing twelve National Guard soldiers to hexavalent chromium while working at the Qarmat Ali water treatment facility and awarded $85 million in damages to the plaintiffs. [ 40 ]
Prior to 1970, the federal government had limited reach in monitoring and enforcing environmental regulations. Local governments were tasked with environmental monitoring and regulation, such as the monitoring of heavy metals in wastewater. Examples of this can be seen in larger municipalities such as Chicago , Los Angeles , and New York . [ 41 ] A specific example was in 1969, when the Chicago Metropolitan Sanitary District imposed regulations on factories that were identified as having large amounts of heavy metal discharge. [ 41 ]
On December 2, 1970, the Environmental Protection Agency (EPA) was formed. [ 42 ] With the formation of the EPA, the federal government had the funds and the oversight to influence major environmental changes. Following the formation of the EPA, the United States saw groundbreaking legislation, such as the Clean Water Act (1972) and the Safe Drinking Water Act (1974).
The Federal Water Pollution Control Act (FWPCA) of 1948 was amended in 1972 into what is more commonly known as the Clean Water Act (CWA). The amendments provided a basis for the federal government to begin regulating pollutants, implementing wastewater standards, and increasing funding for water treatment facilities, among other things. [ 43 ] Two years later, in 1974, the Safe Drinking Water Act (SDWA) was passed by Congress. The SDWA aimed to monitor and protect the United States' drinking water and the water sources it is drawn from. [ 44 ]
In 1991, as part of the SDWA, the EPA added chromium to its list of maximum contaminant level goals (MCLG), with a maximum contaminant level (MCL) of 100 ppb. [ 45 ] In 1996, the SDWA was amended to include a provision known as the Unregulated Contaminant Monitoring Rule (UCMR). [ 46 ] Under this rule, the EPA issues a list of 30 or fewer contaminants that are not normally regulated under the SDWA. Chromium was monitored under the third UCMR, from January 2013 through December 2015. [ 46 ] The EPA uses data from these reports to assist in making regulatory decisions.
The current EPA standard measures total chromium, both trivalent and hexavalent. Trivalent and hexavalent chromium are often mentioned together when, in fact, they possess vastly different properties. [ 45 ] To avoid risks to public health, the distinction between the two forms must be made clear in any publication containing information about chromium. The distinction is critical, as hexavalent chromium is carcinogenic, whereas trivalent chromium is not. [ 45 ]
In 1991, the MCL for chromium exposure was set based on the potential for "adverse dermatological effects" from long-term chromium exposure. [ 45 ] Chromium's MCL of 100 ppb has not changed since the 1991 recommendation. In 1998, the EPA released a toxicological review of hexavalent chromium. [ 45 ] This report examined the literature available at the time and concluded that chromium was associated with various health issues. [ 47 ] As of 2012 [update] , "no federal or state laws restrict the carcinogen's presence in drinking water," according to the Natural Resources Defense Council (NRDC). [ 48 ]
In December 2013, the NRDC won a lawsuit against the California Department of Public Health , and the state was required to issue a standard on the maximum contaminant level (MCL) for chromium by "no later than June 15, 2014." [ 49 ] The MCL was added to the California Code of Regulations but, in 2017, another court ruled that the standard must be eliminated because the California Department of Public Health had not proven that the standard was economically feasible. [ 50 ]
Before the EPA can adjust its policy on chromium levels in drinking water, it must release a final human health assessment. [ 45 ] The EPA mentions two specific documents under review in determining whether to adjust the current drinking water standard for chromium. [ 45 ] The first is a 2008 study by the Department of Health and Human Services' National Toxicology Program, which examined chronic oral exposure to hexavalent chromium in rats and its association with cancer. The other is a human health assessment of chromium, titled Toxicological Review of Hexavalent Chromium . The final human health assessment is currently in the draft-development stage, [ 47 ] the first of seven stages. The EPA gives no forecast of when the review will be finalized or whether a decision will be made.
Since World War II , [ 51 ] the United States Army has relied on hexavalent chromium compounds to protect its vehicles, equipment, aviation and missile systems from corrosion. The wash primer was sprayed as a pretreatment and protective layer on bare metal. [ 52 ]
From 2012 to 2015, the Army Research Laboratory (ARL) conducted research on a wash primer replacement, as part of the DoD's effort to eliminate the use of toxic wash primers in the military. [ 52 ] Studies indicated that the wash primers contained hazardous air pollutants and high levels of volatile organic compounds. [ 53 ]
The project resulted in the ARL qualifying three wash primer alternatives in 2015 [ 53 ] for use on Army depots, installations, and repair facilities. [ 52 ] The research led to the removal of chromate products from Army facilities in 2017. [ 52 ] [ 54 ]
For their efforts on the wash primer replacement, the ARL researchers won the Secretary of the Army 's "Award for Environmental Excellence in Weapon System Acquisition" for the 2016 fiscal year. [ 54 ]
The EPA currently limits total chromium in drinking water to 100 parts per billion, but there is no established limit specifically for chromium(VI). The Office of Environmental Health Hazard Assessment (OEHHA) of the California Environmental Protection Agency proposed a goal of 0.2 parts per billion in its 2009 technical support draft, despite a 2001 state law requiring that a standard be set by 2005. A final Public Health Goal of 0.02 ppb was published in the technical support document in July 2011. [ 20 ]
The Monterey Bay Unified Air Pollution Control District monitored airborne levels of hexavalent chromium at an elementary school and a fire department, as well as at the point source. It concluded that there were high levels of hexavalent chromium in the air, originating from a local cement plant operated by Cemex . [ 55 ] The levels of hexavalent chromium were 8 to 10 times higher than the air district's acceptable level at Pacific Elementary School and the Davenport Fire Department. [ 55 ] The County of Santa Cruz sought the help of the Health Services Agency (HSA) to investigate the findings of the air district's report. Cemex voluntarily ceased operations due to growing concern within the community while additional air samples were analyzed. [ 55 ] The HSA worked with Cemex to implement engineering controls, such as dust scavenging systems and other dust mitigation procedures. Cemex also changed the materials it used, replacing current materials with materials lower in chromium. [ 55 ] The HSA also monitored the surrounding schools to determine whether there were any health risks. Most schools came back with low levels; where levels were higher, a contractor was hired to clean up the chromium deposits. [ 55 ] This case highlights the previously unrecognized possibility that hexavalent chromium can be released during cement production.
In 2016, air quality officials began investigating elevated levels of hexavalent chromium in Paramount, California. [ 56 ] The city of Paramount created an action project that included more code enforcement to aid AQMD inspectors and the launch of ParamountEnvironment.org [ 57 ] to keep the public informed. [ 58 ] Over time, efforts by SCAQMD and the city of Paramount have been effective in lowering emissions to acceptable levels.
Hexavalent chromium was found in drinking water in the southern California town of Hinkley and was brought to popular attention by the involvement of Erin Brockovich and attorney Edward Masry . The source of contamination was the evaporating ponds of a PG&E ( Pacific Gas and Electric ) natural gas pipeline compressor station about 2 miles southeast of Hinkley. Between 1952 and 1966, chromium(VI) was used to prevent corrosion in the cooling stacks. The wastewater was dumped into the unlined evaporating ponds, and the chromium(VI) leaked into the groundwater. [ 59 ] The 580 ppb chromium(VI) in the groundwater in Hinkley exceeded the 100-ppb total chromium maximum contaminant level (MCL) set by the United States Environmental Protection Agency (EPA). [ 60 ] It also exceeded the California MCL of 50 ppb (as of November 2008 [update] ) for all types of chromium. [ 61 ] California first established an MCL specifically for hexavalent chromium in 2014, set at 10 ppb; [ 21 ] prior to that, only total chromium standards applied.
A later study found that from 1996 to 2008, 196 cancers were identified among residents of the census tract that included Hinkley, a slightly lower number than the 224 cancers that would have been expected given its demographic characteristics. [ 62 ] [ 63 ] [ 64 ] This finding conflicted with the conclusion of the EPA and California's Department of Public Health that chromium(VI) does in fact cause cancer; a 2013 Center for Public Integrity article published in Mother Jones critically evaluated that and other studies by researcher John Morgan. [ 65 ]
When a PG&E background study of chromium(VI) was conducted, average chromium(VI) levels in Hinkley were recorded as 1.19 ppb with a peak of 3.09 ppb. PG&E's Topock compressor station averaged 7.8 ppb and peaked at 31.8 ppb. The California MCL standard was still 50 ppb at the completion of this background study. [ 66 ] The Office of Environmental Health Hazard Assessment (OEHHA) of the California EPA proposed a health goal of 0.06 ppb of chromium(VI) in drinking water in 2009. [ 67 ] In 2010, Brockovich returned to Hinkley amid claims that the plume was spreading despite PG&E's cleanup activities. [ 68 ] PG&E continues to provide bottled water for Hinkley residents, as well as offering to buy their homes. All other ongoing cleanup documentation is maintained on the California EPA's website. [ 59 ]
In Chicago 's first-ever testing for the toxic metal contaminant, results showed that the city's drinking water contains levels of hexavalent chromium more than 11 times higher than the health standard set in California in July 2011. The water supplied to over 7 million residents had average levels of 0.23 ppb of the toxic metal. California's Office of Environmental Health Hazard Assessment designated the nation's new "public health goal" limit as 0.02 ppb. Echoing their counterparts in other cities where the metal has been detected, Chicago officials stress that local tap water is safe and suggest that if a national limit is adopted, it likely would be less stringent than California's goal. [ 69 ] [ 70 ] The Illinois Environmental Protection Agency (Illinois EPA) has developed a chromium(VI) strategic plan which outlines tasks to reduce the levels of chromium(VI) in Illinois' drinking water. One task is to work with the U.S. EPA to provide significant technical assistance to the City of Chicago, to ensure it quickly develops an effective chromium(VI)-specific monitoring program that makes use of U.S. EPA-approved methods. [ 71 ]
Cambridge Plating Company, now known as Purecoat North, was an electroplating business in Belmont, Massachusetts . The Agency for Toxic Substances and Disease Registry (ATSDR) conducted a study to evaluate the association between environmental exposures from the Cambridge Plating Company and health effects in the surrounding community. The report indicated that residents of Belmont were exposed to chromium via air emissions, as well as groundwater and soil. [ 72 ] However, six types of cancer were evaluated, and in most cases the incidence was found to be average, if not slightly below average. [ 72 ] For example, for kidney cancer the number of observed cases was 7 versus an expected 16. [ 72 ] While that was the case for most diseases, it was not for all. The incidence of leukemia among females was elevated in Belmont during 1982–1999 (32 diagnoses observed vs. 23.2 expected). [ 72 ] Elevations in females were due to four excess cases in each time period (11 diagnoses observed vs. 6.9 expected during 1988–1993; 13 diagnoses observed vs. 8.7 expected during 1994–1999), while elevations among males were based on one to three excess cases. [ 72 ] ATSDR deemed Cambridge Plating an Indeterminate Public Health Hazard in the past, but No Apparent Public Health Hazard in the present or future. [ 72 ]
In 2009, a lawsuit was filed against Prime Tanning Corporation of St. Joseph, Missouri , over alleged hexavalent chromium contamination in Cameron, Missouri . A cluster of brain tumors, above the rate expected for a town of its population size, had developed there. The lawsuit alleged that the tumors were caused by waste hexavalent chromium that had been distributed to local farmers as free fertilizer. [ 73 ] In 2010, a government study found hexavalent chromium in the soil, but not at levels hazardous to human health. In 2012, the case was settled, with $10 million distributed to over a dozen affected farmers in the northwest Missouri area. Prime Tanning still denies that its fertilizer caused any harm. Some residents claim that the tumors were a direct result of the chromium exposure, but it is difficult to determine what other future impacts might arise from exposure in the affected Missouri counties. [ 74 ]
On December 20, 2019, a green substance leaking onto I-696 , in Madison Heights , was identified as hexavalent chromium that had leaked from a basement of a local company, Electro-Plating Services. [ 75 ] [ 76 ]
In July 2022, an employee at the automotive supply company Tribar Technologies overrode alarms, leading to the release of hexavalent chromium into the Wixom wastewater system. The state of Michigan issued a no-contact order with Huron River water near the spill, but this order was lifted after revised estimates concluded that less than 20 pounds of chromium had reached the river. [ 77 ]
On April 8, 2009, the Texas Commission on Environmental Quality (TCEQ) collected groundwater samples from a domestic well on West County Road 112 in Midland, Texas (U.S.), in response to a resident complaint of yellow water. The well was found to be contaminated with chromium(VI). The Midland groundwater contamination exceeded the EPA's maximum contaminant level (MCL) of 100 parts per billion. The groundwater plume of chromium currently lies under approximately 260 acres of land at the West County Road 112 Groundwater Site. In response, the TCEQ installed filtration systems on water-well sites that showed chromium contamination. [ 78 ]
As of 2016 [update] , TCEQ had sampled water from 235 wells and installed over 45 anion-exchange filtration systems at this site, [ 78 ] determined to be centered at 2601 West County Road 112, Midland, Texas. [ 79 ] The TCEQ continues to sample wells surrounding the area to monitor the movement of the plume. In addition, it continues to monitor the effectiveness of the anion-exchange filtration systems by quarterly sampling, and the filters are maintained at no cost to the residents.
In March 2011, the West County Road 112 Ground Water site was added to the National Priorities List (NPL), also known as the Superfund list, by the U.S. Environmental Protection Agency (EPA). [ 78 ] From 2011 to 2013, TCEQ installed groundwater monitors and conducted groundwater sampling. In 2013, TCEQ began sampling residential soil and confirmed that it had been contaminated by use of the contaminated groundwater for garden and lawn care.
According to the EPA, ongoing investigations have not determined the source of the contamination, and cleanup solutions are still being developed. Until the investigations are complete and remediation is established, residents will continue to be at risk of health effects from exposure to the groundwater contamination. [ 79 ]
On January 7, 2011, it was announced that Milwaukee , Wisconsin had tested its water and found hexavalent chromium to be present. Officials stated that the quantities were so small as to be no cause for worry, although the contaminant is a carcinogen. Milwaukee's average chromium(VI) level is 0.194 parts per billion (the EPA recommended maximum contaminant level (MCL) is 100 ppb). [ 45 ] [ 80 ] All 13 water systems tested positive for chromium(VI). The chemical was detected in four of seven systems in Waukesha County , and Racine and Kenosha Counties had the highest levels, averaging more than 0.2 parts per billion. [ 80 ] Further testing was being conducted as of 2011 [update] . [ 81 ] There was no further information available as of October 2016 [update] . | https://en.wikipedia.org/wiki/Hexavalent_chromium
Hexazine (also known as hexaazabenzene ) is a hypothetical allotrope of nitrogen composed of 6 nitrogen atoms arranged in a ring-like structure analogous to that of benzene . As a neutrally charged species, it would be the final member of the azabenzene (azine) series, in which all of the methine groups of the benzene molecule have been replaced with nitrogen atoms. The two last members of this series, hexazine and pentazine , have not been observed, although all other members of the azine series have (such as pyridine , pyrimidine , pyridazine , pyrazine , triazines , and tetrazines ).
While a neutrally charged hexazine species has not yet been synthesized, two negatively charged variants, [N 6 ] 2- [ 2 ] and [N 6 ] 4- , [ 3 ] have been produced in potassium-nitrogen compounds under very high pressures (> 40 GPa) and temperatures (> 2000 K). In particular, [N 6 ] 4- is aromatic , respecting Hückel's rule , while [N 6 ] 2- is anti-aromatic .
The hexazine molecule bears a structural similarity to the very stable benzene molecule. Calculations indicate that hexazine, like benzene, would likely be aromatic. Despite this, it has yet to be synthesized. Computational studies also predict that the hexazine molecule is highly unstable, possibly due to the lone pairs on the nitrogen atoms, which may repel each other electrostatically and/or donate electron density into sigma antibonding orbitals . A figure-8-shaped isomer is predicted to be metastable. [ 4 ] | https://en.wikipedia.org/wiki/Hexazine
Hexazinone is an organic compound that is used as a broad-spectrum herbicide . It is a colorless solid. It exhibits some solubility in water but is highly soluble in most organic solvents except alkanes. A member of the triazine class of herbicides, it is manufactured by DuPont and sold under the trade name Velpar . [ 1 ]
It functions by inhibiting photosynthesis and thus is a nonselective herbicide. It is used to control grasses, broadleaf, and woody plants. In the United States approximately 33% is used on alfalfa, 31% in forestry, 29% in industrial areas, 4% on rangeland and pastures, and < 2% on sugarcane. [ 2 ]
Hexazinone is a pervasive groundwater contaminant. Its use puts groundwater at high risk of contamination because of the compound's high leaching potential. [ 3 ]
Hexazinone is widely used as a herbicide. It is a non-selective herbicide of the triazine family, used to control weeds in a broad range of settings, from sugarcane plantations, forestry field nurseries and pineapple plantations to grass along highways and railways and industrial plant sites. [ 4 ]
Hexazinone was first registered in 1975 for the overall control of weeds and later for uses in crops. [ 5 ]
Triazines like hexazinone can bind to the D-1 quinone protein of the electron transport chain in photosystem II and thereby inhibit photosynthesis. The diverted electrons can damage membranes and destroy cells. [ 6 ]
Hexazinone can be synthesized in two different reaction processes. One process starts with a reaction of methyl chloroformate with cyanamide , forming hexazinone after a five-step pathway. [ 7 ]
A second synthesis starts with methyl thiourea . [ 7 ]
The degradation of hexazinone has long been studied. [ 8 ] It degrades approximately 10% in five weeks when exposed to artificial sunlight in distilled water, though degradation in natural waters can be three to seven times greater. The pH and the temperature of the water do not affect the photodegradation significantly. [ 9 ] In soils it is mainly degraded by aerobic microorganisms. [ 10 ]
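As a rough back-of-the-envelope illustration, assuming first-order kinetics (an assumption of this sketch, not a claim of the cited studies), the quoted figure of about 10% degradation in five weeks corresponds to the following rate constant and half-life:

```python
import math

# Hypothetical first-order fit to "about 10% degraded in five weeks"
# under artificial sunlight in distilled water.
k = -math.log(0.90) / 5            # rate constant, ~0.021 per week
half_life = math.log(2) / k        # ~33 weeks
print(f"k = {k:.3f}/week, half-life = {half_life:.0f} weeks")

# The three- to sevenfold faster degradation reported for natural waters
# would shorten this half-life to roughly 5-11 weeks.
```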
Hexazinone is a broad-spectrum residual and contact herbicide , rapidly absorbed by the leaves and roots. It is tolerated by many conifers, and it is therefore a very effective herbicide for the control of annual and perennial broadleaf weeds, some grasses, and some woody species. Hexazinone acts after rain or snowmelt carries it downward into the soil, where it is absorbed by the roots. [ 11 ] It moves through the conductive tissues to the leaves, where it blocks photosynthesis within the chloroplasts . Hexazinone binds to a protein of the photosystem II complex, which blocks the electron transport. This triggers a cascade of reactions. First, triplet-state chlorophyll reacts with oxygen to form singlet oxygen . Both triplet chlorophyll and singlet oxygen then abstract hydrogen atoms from the unsaturated lipids present in the cells and the organelle membranes, forming lipid radicals. These radicals oxidize other lipids and proteins, eventually resulting in loss of the membrane integrity of the cells and organelles. This leads to loss of chlorophyll , leakage of cellular contents, cell death, and eventually death of the plant. [ 12 ] Woody plants first show yellowing of the leaves before they start to defoliate; eventually they die. [ 13 ] Sometimes plants are able to refoliate and defoliate again during the growing season. | https://en.wikipedia.org/wiki/Hexazinone
Hexcel Corporation is an American public industrial materials company, based in Stamford, Connecticut . The company develops and manufactures structural materials. Hexcel was formed from the combination of California Reinforced Plastics (founded 1948), Ciba Composites (acquired 1995) and Hercules Composites Products Division (acquired 1995). The company sells its products in commercial, military and recreational markets for use in commercial and military aircraft , space launch vehicles and satellites , wind turbine blades, sports equipment and automotive products. Hexcel works with Airbus Group , The Boeing Company , and others. [ 2 ] Since 1980, the firm has publicly traded on the New York Stock Exchange under the ticker symbol HXL. [ 3 ]
Hexcel, originally named the California Reinforced Plastics Company, was founded in 1948 by a group of engineers from the University of California at Berkeley . [ citation needed ] The company's first contract was for the research and development of honeycomb materials for use in radar domes on military aircraft. [ 4 ] In 1954, the company changed its name to Hexcel Products, Inc. The name was derived from the hexagonal cell-shaped honeycomb materials manufactured by the company. [ 5 ]
In the 1960s, Hexcel sold aluminum honeycomb and pre-impregnated fiberglass to Hubert A. Zemke and Dave McCoy for use in building skis. [ 6 ]
Hexcel expanded from military and commercial aviation to the United States space program . The landing pads on the lunar module Apollo 11 that carried men to the moon in 1969 were built from Hexcel honeycomb materials. [ 7 ] [ 8 ] [ 9 ]
In 1970, Hexcel licensed the ski from McCoy. [ 10 ] A few years later, Hexcel decided to focus on its core aerospace business and sold the ski enterprise to Hanson Boots. [ citation needed ]
In the 1980s, Hexcel purchased Stevens-Genin S.A., a French company that manufactured glass-fiber and woven industrial materials. [ 4 ] [ 11 ]
In 1981, it provided materials for the nose, doors and wings of the Space Shuttle Columbia . [ 12 ] [ 13 ] In 1986, Hexcel made most of the material used in the fuselage and wings of the Rutan Voyager – the first aircraft to make a nonstop, around-the-world trip on a single tank of fuel. [ 5 ]
In 2017, Hexcel was selected by Airbus to supply the composite materials for the H160 helicopter's fuselage structures and rotor blades. [ 14 ] Hexcel acquired the aerospace and defense business of Oxford Performance Materials, a manufacturer of carbon fiber-reinforced 3D printed parts for commercial aerospace and space and defense applications. [ 15 ]
In March 2018, Hexcel opened its manufacturing facility at the MidParc Free Trade Zone in Casablanca, Morocco. [ 16 ] The facility oversees the transformation of lightweight honeycomb materials into engineered core parts for aircraft structures, engine nacelles and helicopter blades. Hexcel also signed a strategic alliance with Arkema in Colombes, France, to combine work in carbon fiber and PEKK. [ 17 ] The alliance will result in a joint research and development laboratory in France. The companies aim to develop carbon fiber-reinforced thermoplastic tapes to produce lightweight parts for aircraft. [ 18 ]
Also in 2018, Hexcel opened a carbon fiber plant at the Les Roches-Roussillon Chemicals Industry Platform in Isère , France. [ 19 ] The plant is based at the Osiris Chemicals Industry Platform. [ 20 ] Hexcel's composite materials were also used in a new boat design for the Tour de France à la voile . [ 21 ]
In July 2018, Hexcel opened an integrated factory in Salaise-sur-Sanne near Lyon manufacturing polyacrylonitrile (PAN) , the carbon fiber precursor, its second such plant after the one in Decatur, Alabama.
In December 2018, Hexcel announced the hiring of Colleen Pritchett as President - Aerospace, in America. [ 22 ]
On May 1, 2024, Tom Gentile was named CEO, following Nick Stanage's retirement.
The company provides Airbus with over 80% of the carbon fiber it needs and is the main supplier of carbon fiber for Safran , notably for the CFM LEAP fan blades. [ 24 ]
Hexcel is creating a new R&D site in Les Avenieres , also near Lyon, focusing on out-of-autoclave processes, including resin-transfer molding and resin film infusion, to target lower production costs for Airbus' future single-aisle family. [ 24 ] Using a thermoplastic resin jointly developed with chemicals specialist Arkema , as opposed to a thermoset , would accelerate assembly , cut manufacturing costs and lighten structures. [ 24 ] | https://en.wikipedia.org/wiki/Hexcel
A unit of information is any unit of measure of digital data size. In digital computing , a unit of information is used to describe the capacity of a digital data storage device. In telecommunications , a unit of information is used to describe the throughput of a communication channel . In information theory , a unit of information is used to measure information contained in messages and the entropy of random variables.
Due to the need to work with data sizes that range from very small to very large, units of information cover a wide range of data sizes. Each unit is defined as a multiple of a smaller unit, except for the smallest unit, which is fixed by convention and hardware design. Multiplier prefixes are used to describe relatively large sizes.
For binary hardware , by far the most common hardware today, the smallest unit is the bit , a portmanteau of binary digit, [ 1 ] which represents a value that is one of two possible values, typically shown as 0 and 1. The nibble , 4 bits, represents the value of a single hexadecimal digit. The byte , 8 bits or 2 nibbles, is possibly the most commonly known and used base unit for describing data size. The word is a size that varies with, and has a special importance for, a particular hardware context. On modern hardware, a word is typically 2, 4 or 8 bytes, but the size varies dramatically on older hardware. Larger sizes can be expressed as multiples of a base unit via SI metric prefixes (powers of ten) or the newer and generally more accurate IEC binary prefixes (powers of two).
In 1928, Ralph Hartley observed a fundamental storage principle, [ 2 ] which was further formalized by Claude Shannon in 1945: the information that can be stored in a system is proportional to the logarithm of N possible states of that system, denoted log b N . Changing the base of the logarithm from b to a different number c has the effect of multiplying the value of the logarithm by a fixed constant, namely log c N = (log c b ) log b N .
Therefore, the choice of the base b determines the unit used to measure information. In particular, if b is a positive integer, then the unit is the amount of information that can be stored in a system with b possible states.
When b is 2, the unit is the shannon , equal to the information content of one "bit". A system with 8 possible states, for example, can store up to log 2 8 = 3 bits of information. Other units that have been named include:
The trit, ban, and nat are rarely used to measure storage capacity; but the nat, in particular, is often used in information theory, because natural logarithms are mathematically more convenient than logarithms in other bases.
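As an illustration of the change-of-base relationship described above, here is a minimal Python sketch (illustrative only; the function name is our own) measuring the information of an N-state system in shannons, nats and hartleys:

```python
import math

def information(states: int, base: float) -> float:
    """Information stored by a system with `states` equally likely states,
    in the unit fixed by `base`: 2 -> shannon (bit), e -> nat, 10 -> hartley (ban)."""
    return math.log(states, base)

print(information(8, 2))        # 3.0 shannons: log2(8) = 3 bits
print(information(8, math.e))   # ~2.079 nats
print(information(8, 10))       # ~0.903 hartleys
```

Dividing any two of these outputs recovers the fixed conversion constant between the corresponding units.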
Several conventional names are used for collections or groups of bits.
Historically, a byte was the number of bits used to encode a character of text in the computer, which depended on computer hardware architecture, but today it almost always means eight bits – that is, an octet . An 8-bit byte can represent 256 (2 8 ) distinct values, such as non-negative integers from 0 to 255, or signed integers from −128 to 127. The IEEE 1541-2002 standard specifies "B" (upper case) as the symbol for byte ( IEC 80000-13 uses "o" for octet in French, but also allows "B" in English). Bytes, or multiples thereof, are almost always used to specify the sizes of computer files and the capacity of storage units. Most modern computers and peripheral devices are designed to manipulate data in whole bytes or groups of bytes, rather than individual bits.
A group of four bits, or half a byte, is sometimes called a nibble , nybble or nyble. This unit is most often used in the context of hexadecimal number representations, since a nibble has the same number of possible values as one hexadecimal digit has. [ 7 ]
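A small sketch of the nibble/hex-digit correspondence (Python, illustrative):

```python
byte = 0xB7                        # one 8-bit byte
high, low = byte >> 4, byte & 0xF  # its two 4-bit nibbles
print(high, low)                   # 11 7
print(format(byte, '02X'))         # 'B7': one hexadecimal digit per nibble
```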
Computers usually manipulate bits in groups of a fixed size, conventionally called words . The number of bits in a word is usually defined by the size of the registers in the computer's CPU , or by the number of data bits that are fetched from its main memory in a single operation. In the IA-32 architecture more commonly known as x86-32, a word is 32 bits, but other past and current architectures use words with 4, 8, 9, 12, 13, 16, 18, 20, 21, 22, 24, 25, 29, 30, 31, 32, 33, 35, 36, 38, 39, 40, 42, 44, 48, 50, 52, 54, 56, 60, 64, 72 [ 8 ] bits or others.
Some machine instructions and computer number formats use two words (a "double word" or "dword"), or four words (a "quad word" or "quad").
Computer memory caches usually operate on blocks of memory that consist of several consecutive words. These units are customarily called cache blocks , or, in CPU caches , cache lines .
Virtual memory systems partition the computer's main storage into even larger units, traditionally called pages .
A unit for a large amount of data can be formed using either a metric or binary prefix with a base unit. For storage, the base unit is typically byte. For communication throughput, a base unit of bit is common. For example, using the metric kilo prefix, a kilobyte is 1000 bytes and a kilobit is 1000 bits.
Use of metric prefixes is common but often inaccurate, since binary storage hardware is organized with capacities that are powers of 2, not powers of 10 as the metric prefixes imply. In the context of computing, the metric prefixes are often intended to mean something other than their standard meaning. For example, 'kilobyte' often refers to 1024 bytes even though the standard meaning of kilo is 1000, and 'mega' normally means one million but in computing is often used to mean 2 20 = 1 048 576 . The discrepancy between the standard metric size and the intended binary size grows with each successive prefix.
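A short illustrative Python sketch (not part of the original article) computing that discrepancy:

```python
# Relative difference between the binary interpretation (powers of 1024)
# and the metric interpretation (powers of 1000) of each prefix.
for name, power in [("kilo", 1), ("mega", 2), ("giga", 3), ("tera", 4)]:
    metric, binary = 1000 ** power, 1024 ** power
    print(f"{name}: binary is {binary / metric - 1:.1%} larger")
# kilo: 2.4%   mega: 4.9%   giga: 7.4%   tera: 10.0%
```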
The International Electrotechnical Commission (IEC) issued a standard that introduces binary prefixes, which accurately represent binary sizes without changing the meaning of the standard metric terms. Rather than being based on powers of 1000, these are based on powers of 1024, itself a power of 2. [ 9 ]
The JEDEC memory standard JESD88F notes that the definitions of kilo (K), giga (G), and mega (M) based on powers of two are included only to reflect common usage, but are otherwise deprecated. [ 10 ]
Some notable unit names are today obsolete or used only in limited contexts. | https://en.wikipedia.org/wiki/Hexlet_(unit)
In chemistry , hexol is a cation with formula {[Co(NH 3 ) 4 (OH) 2 ] 3 Co} 6+ — a coordination complex consisting of four cobalt cations in oxidation state +3, twelve ammonia molecules NH 3 , and six hydroxy anions HO − , with a net charge of +6. The hydroxy groups act as bridges between the central cobalt atom and the other three, which carry the ammonia ligands .
Salts of hexol, such as the sulfate {[Co(NH 3 ) 4 (OH) 2 ] 3 Co}(SO 4 ) 3 (H 2 O) x , are of historical significance as the first synthetic non-carbon-containing chiral compounds. [ 2 ] [ 3 ]
Salts of hexol were first described by Jørgensen , [ 4 ] although it was Werner who recognized its structure. [ 5 ] The cation is prepared by heating a solution containing the cis -diaquotetramminecobalt(III) cation [Co(NH 3 ) 4 (H 2 O) 2 ] 3+ with a dilute base: [ 1 ]
Starting with the sulfate and using ammonium hydroxide as the base, depending on the conditions, one obtains the 9-hydrate, the 6-hydrate, or the 4-hydrate of hexol sulfate. These salts form dark brownish-violet or black tabular crystals, with low solubility in water. When treated with concentrated hydrochloric acid , hexol sulfate converts to cis -diaquotetramminecobalt(III) sulfate. In boiling dilute sulfuric acid , hexol sulfate further degrades with evolution of oxygen and nitrogen. [ 1 ]
The hexol cation exists as two optical isomers that are mirror images of each other, depending on the arrangement of the bonds between the central cobalt atom and the three bidentate peripheral units [Co(NH 3 ) 4 (HO) 2 ]. It belongs to the D 3 point group . Its chirality can be compared to that of the ferrioxalate anion [Fe(C 2 O 4 ) 3 ] 3− .
In a historic set of experiments, a salt of hexol with an optically active anion — specifically, its D -(+)-bromo camphorsulfonate – was resolved into separate salts of the two cation isomers by fractional crystallisation . [ 5 ] A more efficient resolution involves the bis(tartrato)diantimonate(III) anion . The hexol hexacation has a high specific rotation of 2640°. [ 6 ]
Werner also described a second achiral hexol (a minor byproduct from the production of Fremy's salt ) that he incorrectly identified as a linear tetramer. The second hexol is hexanuclear (contains six cobalt centres in each ion), not tetranuclear. [ 7 ] Its point group is C 2h , and its formula is [Co 6 (NH 3 ) 14 (OH) 8 O 2 ] 6+ , whereas that of hexol is [Co 4 (NH 3 ) 12 (OH) 6 ] 6+ . | https://en.wikipedia.org/wiki/Hexol |
The Uptake of Hexose Phosphates (Uhp) is a protein system found in bacteria . It is a type of two-component sensory transduction pathway which helps bacteria react to their environment. [ 1 ]
The uhp system is composed of UhpA, UhpB, UhpC, and UhpT. UhpB and UhpC are both transmembrane proteins which form a complex with each other. UhpA is a signal protein found in the cytoplasm . [ 2 ] UhpT is a transporter protein which facilitates the uptake of phosphorylated hexose molecules into the cell . [ 3 ]
The Uhp system takes up phosphorylated hexose sugars into bacteria. The system is triggered by phosphorylated hexose sugars outside the cell. UhpC binds to the phosphorylated hexose, which allows the phosphorylation of UhpB on one of its cytoplasmic histidines . This facilitates the phosphorylation of an aspartate on UhpA, [ 2 ] and the phosphorylated UhpA activates the transcription of UhpT. [ 2 ] UhpT then transports the phosphorylated hexose sugars into the cell. [ 3 ]
This biochemistry article is a stub . You can help Wikipedia by expanding it . | https://en.wikipedia.org/wiki/Hexose_phosphate_uptake |
In computing, a hextet , or a chomp , is a sixteen-bit aggregation, [ 1 ] [ 2 ] or four nibbles . As a nibble is typically notated as a single hexadecimal digit, a hextet corresponds to 4 hexadecimal digits. Hextet is the unofficial name for each of the 8 blocks in an IPv6 address .
A hextet is also referred to as a segment , in some documentation. [ 3 ]
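As an illustration (our own, not from the cited sources), the standard Python ipaddress module can display the eight hextets of an IPv6 address; the address below is from the 2001:db8::/32 documentation range:

```python
import ipaddress

addr = ipaddress.IPv6Address("2001:db8::8a2e:370:7334")
hextets = addr.exploded.split(":")   # canonical form: 8 groups of 4 hex digits
print(hextets)
# ['2001', '0db8', '0000', '0000', '0000', '8a2e', '0370', '7334']
print(len(hextets), "hextets x 16 bits =", len(hextets) * 16, "bits")
```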
Bob Bemer suggested the use of hextet for 16-bit groups in 2000. [ 1 ] In 2011 an Internet Draft explored various alternatives for hextet such as quibble , short for "quad nibble". [ 2 ] In response to this draft, author Trefor Davies suggested the use of the word chomp because it is in line with the current denominations bit , nibble , byte . [ 4 ]
Hextet would more properly describe a 6-bit aggregation; the exact term for 16 bits would be hexadectet , directly related to the term octet (for 8 bits). However, because hexadectet is harder to pronounce, the short form hextet is used, in analogy to how hex is commonly used as an abbreviation for hexadecimal in computing. This usage of hex to mean 16 is also in line with the similar IEEE 1754 term hexlet indicating 16 octets. [ 5 ]
Although the word hextet is not officially recognized in the IETF documents, the word is used in technical literature on IPv6 [ 6 ] [ 7 ] published after the Internet Draft. Official IETF documents simply refer to them as pieces . [ 8 ]
Cisco sources generally [ citation needed ] use the term quartet as does IPv6.com, [ 9 ] a reference either to the four digit grouping or to the fact that it represents four nibbles; however, this term is also used by some to refer to a four-bit aggregation. [ 10 ] [ 11 ] | https://en.wikipedia.org/wiki/Hextet |
n -Hexyllithium , C 6 H 13 Li, sometimes abbreviated to HxLi or NHL , is an organolithium compound used in organic synthesis as a strong base or as a lithiation reagent. It is usually encountered as a colorless or pale yellow solution in hexanes. Such solutions are highly sensitive to air and can ignite when treated with water.
In terms of chemical properties, hexyllithium and n -butyllithium (BuLi) are very similar. As a base, hexyllithium generates n -hexane as a byproduct rather than gaseous butane , which results from the use of BuLi. Another advantage for HxLi is that it is slightly less reactive. [ 2 ] Both of these aspects encourage industrial applications. It is commercially available as a solution in mixed hexanes , usually at a concentration of about 2 M for laboratory use or 33% for industrial use.
As with BuLi, the structure and formula of HxLi are often depicted as a monomer. Like all organolithium compounds, however, it exists as clusters in solution and in the solid state. [ 3 ] | https://en.wikipedia.org/wiki/Hexyllithium
In the mathematical theory of probability, the Heyde theorem is a characterization theorem for the normal distribution (the Gaussian distribution) based on the symmetry of one linear form of independent random variables given another. The theorem was proved by C. C. Heyde .
Let ξ j , j = 1 , 2 , … , n , n ≥ 2 {\displaystyle \xi _{j},j=1,2,\ldots ,n,n\geq 2} be independent random variables. Let α j , β j {\displaystyle \alpha _{j},\beta _{j}} be nonzero constants such that β i α i + β j α j ≠ 0 {\displaystyle {\frac {\beta _{i}}{\alpha _{i}}}+{\frac {\beta _{j}}{\alpha _{j}}}\neq 0} for all i ≠ j {\displaystyle i\neq j} . If the conditional distribution of the linear form L 2 = β 1 ξ 1 + ⋯ + β n ξ n {\displaystyle L_{2}=\beta _{1}\xi _{1}+\cdots +\beta _{n}\xi _{n}} given L 1 = α 1 ξ 1 + ⋯ + α n ξ n {\displaystyle L_{1}=\alpha _{1}\xi _{1}+\cdots +\alpha _{n}\xi _{n}} is symmetric then all random variables ξ j {\displaystyle \xi _{j}} have normal distributions (Gaussian distributions).
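To make the statement concrete, consider the simplest instance, n = 2 with α 1 = α 2 = 1 (a direct specialization of the theorem, rendered here in LaTeX for readability):

```latex
% Simplest instance: n = 2 with \alpha_1 = \alpha_2 = 1, so the side condition
% \beta_1/\alpha_1 + \beta_2/\alpha_2 \neq 0 reduces to \beta_1 + \beta_2 \neq 0.
\[
  \left.
  \begin{array}{l}
    \xi_1, \xi_2 \text{ independent}, \quad \beta_1, \beta_2 \neq 0, \quad \beta_1 + \beta_2 \neq 0, \\
    \beta_1\xi_1 + \beta_2\xi_2 \ \text{given} \ \xi_1 + \xi_2 \ \text{is symmetric}
  \end{array}
  \right\}
  \implies
  \xi_1, \xi_2 \ \text{are Gaussian.}
\]
```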
This probability -related article is a stub . You can help Wikipedia by expanding it . | https://en.wikipedia.org/wiki/Heyde_theorem |
In mathematical logic , Heyting arithmetic H A {\displaystyle {\mathsf {HA}}} is an axiomatization of arithmetic in accordance with the philosophy of intuitionism . [ 1 ] It is named after Arend Heyting , who first proposed it.
Heyting arithmetic can be characterized just like the first-order theory of Peano arithmetic P A {\displaystyle {\mathsf {PA}}} , except that it uses the intuitionistic predicate calculus I Q C {\displaystyle {\mathsf {IQC}}} for inference. In particular, this means that the double-negation elimination principle, as well as the principle of the excluded middle P E M {\displaystyle {\mathrm {PEM} }} , do not hold. Note that to say P E M {\displaystyle {\mathrm {PEM} }} does not hold exactly means that the excluded middle statement is not automatically provable for all propositions—indeed many such statements are still provable in H A {\displaystyle {\mathsf {HA}}} and the negation of any such disjunction is inconsistent. P A {\displaystyle {\mathsf {PA}}} is strictly stronger than H A {\displaystyle {\mathsf {HA}}} in the sense that all H A {\displaystyle {\mathsf {HA}}} - theorems are also P A {\displaystyle {\mathsf {PA}}} -theorems.
Heyting arithmetic comprises the axioms of Peano arithmetic and the intended model is the collection of natural numbers N {\displaystyle {\mathbb {N} }} . The signature includes zero " 0 {\displaystyle 0} " and the successor " S {\displaystyle S} ", and the theories characterize addition and multiplication. This impacts the logic: With 1 := S 0 {\displaystyle 1:=S0} , it is a metatheorem that ⊥ {\displaystyle \bot } can be defined as 0 = 1 {\displaystyle 0=1} and so that ¬ P {\displaystyle \neg P} is P → ⊥ {\displaystyle P\to \bot } for every proposition P {\displaystyle P} . The negation of ⊥ {\displaystyle \bot } is of the form P → P {\displaystyle P\to P} and thus a trivial proposition.
For terms , write s ≠ t {\displaystyle s\neq t} for ¬ ( s = t ) {\displaystyle \neg (s=t)} .
For a fixed term t {\displaystyle t} , the equality t = t {\displaystyle t=t} is true by reflexivity and a proposition P {\displaystyle P} is equivalent to ( t = t ) → P {\displaystyle (t=t)\to P} .
It may be shown that P ∨ Q {\displaystyle P\lor Q} can then be defined as ∃ n . ( n = 0 → P ) ∧ ( n ≠ 0 → Q ) {\displaystyle \exists n.(n=0\to P)\land (n\neq 0\to Q)} . This formal elimination of disjunctions was not possible in the quantifier-free primitive recursive arithmetic P R A {\displaystyle {\mathsf {PRA}}} . The theory may be extended with function symbols for any primitive recursive function , making P R A {\displaystyle {\mathsf {PRA}}} also a fragment of this theory. For a total function f {\displaystyle f} , one often considers predicates of the form f ( n ) = 0 {\displaystyle f(n)=0} .
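The following sketch (our own illustration, using only the decidability of equality with 0 {\displaystyle 0} and ex falso, both available in H A {\displaystyle {\mathsf {HA}}} ) indicates why the defined formula behaves as a disjunction:

```latex
% From P, the witness n = 0 works: 0 = 0 \to P holds because P is assumed,
% and 0 \neq 0 \to Q holds vacuously (0 \neq 0 together with the axiom 0 = 0
% yields \bot, whence Q by explosion). From Q, n = 1 works symmetrically,
% using the axiom S0 \neq 0.
\[
  P \vdash \exists n.\,(n = 0 \to P) \land (n \neq 0 \to Q),
  \qquad
  Q \vdash \exists n.\,(n = 0 \to P) \land (n \neq 0 \to Q).
\]
% Conversely, from such an n, the decidable case split n = 0 \lor n \neq 0
% recovers the usual elimination behaviour of P \lor Q.
```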
With explosion valid in any intuitionistic theory, if ¬ ¬ P {\displaystyle \neg \neg P} is a theorem for some P {\displaystyle P} , then by definition ¬ P {\displaystyle \neg P} is provable if and only if the theory is inconsistent. Indeed, in Heyting arithmetic the double-negation explicitly expresses ¬ P → 0 = 1 {\displaystyle \neg P\to 0=1} . For a predicate Q {\displaystyle Q} , a theorem of the form ¬ ¬ ∃ n . Q ( n ) {\displaystyle \neg \neg \exists n.Q(n)} expresses that it is inconsistent to rule out that Q ( t ) {\displaystyle Q(t)} could be validated for some t {\displaystyle t} . Constructively, this is weaker than the existence claim of such a t {\displaystyle t} .
A big part of the metatheoretical discussion will concern classically provable existence claims.
A double-negation ¬ ¬ P {\displaystyle \neg \neg P} entails ( α → ¬ P ) → ( α → 0 = 1 ) {\displaystyle (\alpha \to \neg P)\to (\alpha \to 0=1)} . So a theorem of the form ¬ ¬ P {\displaystyle \neg \neg P} also always gives new means to conclusively reject (also positive) statements α {\displaystyle \alpha } .
Recall that the implication in H A ⊢ α → ¬ ¬ α {\displaystyle {\mathsf {HA}}\vdash \alpha \to \neg \neg \alpha } can classically be reversed , and with that so can the one in H A ⊢ ( ∃ n . ¬ ψ ( n ) ) → ¬ ∀ n . ψ ( n ) {\displaystyle {\mathsf {HA}}\vdash {\big (}\exists n.\neg \psi (n){\big )}\to \neg \forall n.\psi (n)} . Here the distinction is between existence of numerical counter-examples versus absurd conclusions when assuming validity for all numbers. Inserting double-negations turns P A {\displaystyle {\mathsf {PA}}} -theorems into H A {\displaystyle {\mathsf {HA}}} -theorems. More exactly, for any formula provable in P A {\displaystyle {\mathsf {PA}}} , the classically equivalent Gödel–Gentzen negative translation of that formula is already provable in H A {\displaystyle {\mathsf {HA}}} . In one formulation, the translation procedure includes a rewriting of ( ∃ n . Q ( n ) ) N {\displaystyle {\big (}\exists n.Q(n){\big )}^{N}} to ¬ ∀ n . ¬ Q N ( n ) . {\displaystyle \neg \forall n.\neg Q^{N}(n).} The result means that all Peano arithmetic theorems have a proof that consists of a constructive proof followed by a classical logical rewriting. Roughly, the final step amounts to applications of double-negation elimination.
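For reference, one standard presentation of the Gödel–Gentzen translation clauses is the following (atoms are double-negated; in arithmetic the atomic equations are stable, so they may equivalently be left unchanged):

```latex
\begin{align*}
  \varphi^N &:= \neg\neg\varphi && \text{for atomic } \varphi \\
  (\varphi \land \psi)^N &:= \varphi^N \land \psi^N \\
  (\varphi \to \psi)^N &:= \varphi^N \to \psi^N \\
  (\varphi \lor \psi)^N &:= \neg(\neg\varphi^N \land \neg\psi^N) \\
  (\forall x.\,\varphi)^N &:= \forall x.\,\varphi^N \\
  (\exists x.\,\varphi)^N &:= \neg\forall x.\,\neg\varphi^N
\end{align*}
```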
In particular, with undecidable atomic propositions being absent, for any proposition ψ {\displaystyle \psi } not including existential quantifications or disjunctions at all, one has P A ⊢ ψ ⟺ H A ⊢ ψ {\displaystyle {\mathsf {PA}}\vdash \psi \iff {\mathsf {HA}}\vdash \psi } .
Minimal logic proves double-negation elimination for negated formulas, ¬ ¬ ( ¬ α ) ↔ ( ¬ α ) {\displaystyle \neg \neg (\neg \alpha )\leftrightarrow (\neg \alpha )} . More generally, Heyting arithmetic proves this classical equivalence for any Harrop formula .
And Σ 1 0 {\displaystyle \Sigma _{1}^{0}} -results are well behaved as well: Markov's rule at the lowest level of the arithmetical hierarchy is an admissible rule of inference , i.e. for φ {\displaystyle \varphi } with n {\displaystyle n} free, if H A ⊢ ¬ ¬ ∃ m . φ ( n , m ) {\displaystyle {\mathsf {HA}}\vdash \neg \neg \exists m.\varphi (n,m)} , then also H A ⊢ ∃ m . φ ( n , m ) {\displaystyle {\mathsf {HA}}\vdash \exists m.\varphi (n,m)} .
Instead of speaking of quantifier-free predicates, one may equivalently formulate this for a primitive recursive predicate or Kleene's T predicate , giving the variants called M R Q F {\displaystyle {\mathrm {MR} }_{\mathrm {QF} }} , resp. M R P R {\displaystyle {\mathrm {MR} _{\mathrm {PR} }}} and M R 0 {\displaystyle {\mathrm {MR} _{0}}} .
Even the related rule M R D e c {\displaystyle {\mathrm {MR} }_{\mathrm {Dec} }} is admissible, in which the tractability of φ {\displaystyle \varphi } is not based on a syntactic condition, but where the left hand side also requires ⊢ ∀ m . φ ( n , m ) ∨ ¬ φ ( n , m ) {\displaystyle \vdash \forall m.\varphi (n,m)\lor \neg \varphi (n,m)} .
Beware that in classifying a proposition based on its syntactic form, one ought not mistakenly assign a lower complexity based on some only classical valid equivalence.
As with other theories over intuitionistic logic, various instances of P E M {\displaystyle {\mathrm {PEM} }} can be proven in this constructive arithmetic. By disjunction introduction , whenever either the proposition P {\displaystyle P} or ¬ P {\displaystyle \neg P} is proven, then P ∨ ¬ P {\displaystyle P\lor \neg P} is proven as well. So for example, equipped with 0 = 0 {\displaystyle 0=0} and ∀ n . S n ≠ 0 {\displaystyle \forall n.Sn\neq 0} from the axioms, one may validate the premise for induction of excluded middle for the predicate n = 0 {\displaystyle n=0} . One then says equality to zero is decidable. Indeed, H A {\displaystyle {\mathsf {HA}}} proves equality " = {\displaystyle =} " decidable for all numbers, i.e. ∀ n . ∀ m . ( n = m ∨ n ≠ m ) {\displaystyle \forall n.\forall m.(n=m\lor n\neq m)} . Stronger yet, as equality is the only predicate symbol in Heyting arithmetic, it then follows that, for any quantifier -free formula ϕ {\displaystyle \phi } , where n , … , m {\displaystyle n,\dots ,m} are the free variables , the theory is closed under the rule H A ⊢ ϕ ( n , … , m ) ∨ ¬ ϕ ( n , … , m ) {\displaystyle {\mathsf {HA}}\vdash \phi (n,\dots ,m)\lor \neg \phi (n,\dots ,m)} .
Any theory over minimal logic proves ¬ ¬ ( P ∨ ¬ P ) {\displaystyle \neg \neg (P\lor \neg P)} for all propositions. So if the theory is consistent, it never proves the negation of an excluded middle statement.
Practically speaking, in rather conservative constructive frameworks such as H A {\displaystyle {\mathsf {HA}}} , when it is understood what type of statements are algorithmically decidable, then an unprovability result of an excluded middle disjunction expresses the algorithmic undecidability of P {\displaystyle P} .
For simple statements, the theory does not just validate such classically valid binary dichotomies ϕ ( n , … ) ∨ ¬ ϕ ( n , … ) {\displaystyle \phi (n,\dots )\lor \neg \phi (n,\dots )} . The Friedman translation can be used to establish that P A {\displaystyle {\mathsf {PA}}} 's Π 2 0 {\displaystyle \Pi _{2}^{0}} -theorems are all proven by H A {\displaystyle {\mathsf {HA}}} : For any n {\displaystyle n} and quantifier-free φ {\displaystyle \varphi } , if P A ⊢ ∃ m . φ ( n , m ) {\displaystyle {\mathsf {PA}}\vdash \exists m.\varphi (n,m)} , then also H A ⊢ ∃ m . φ ( n , m ) {\displaystyle {\mathsf {HA}}\vdash \exists m.\varphi (n,m)} .
This result may of course also be expressed with explicit universal closures ∀ n {\displaystyle \forall n} . Roughly, simple statements about computable relations that are provable classically are already provable constructively. However, in halting problems , not only quantifier-free propositions but also Π 1 0 {\displaystyle \Pi _{1}^{0}} -propositions play an important role, and as will be argued, these can be even classically independent. Similarly, already unique existence ∃ ! n . Q ( n ) {\displaystyle \exists !n.Q(n)} in an infinite domain, i.e. ∃ n . ∀ w . ( n = w ↔ Q ( w ) ) {\displaystyle \exists n.\forall w.{\big (}n=w\leftrightarrow Q(w){\big )}} , is formally not particularly simple.
So P A {\displaystyle {\mathsf {PA}}} is Π 2 0 {\displaystyle \Pi _{2}^{0}} -conservative over H A {\displaystyle {\mathsf {HA}}} . For contrast, while the classical theory of Robinson arithmetic Q {\displaystyle {\mathsf {Q}}} proves all Σ 1 0 {\displaystyle \Sigma _{1}^{0}} - P A {\displaystyle {\mathsf {PA}}} -theorems, some simple Π 1 0 {\displaystyle \Pi _{1}^{0}} - P A {\displaystyle {\mathsf {PA}}} -theorems are independent of it. Induction also plays a crucial role in Friedman's result: For example, the more workable theory obtained by strengthening Q {\displaystyle {\mathsf {Q}}} with axioms about ordering, and optionally decidable equality, does prove more Π 2 0 {\displaystyle \Pi _{2}^{0}} -statements than its intuitionistic counterpart.
The discussion here is by no means exhaustive. There are various results for when a classical theorem is already entailed by the constructive theory. Also note that it can be relevant what logic was used to obtain metalogical results. For example, many results on realizability were indeed obtained in a constructive metalogic. But when no specific context is given, stated results need to be assumed to be classical.
Independence results concern propositions such that neither they, nor their negations can be proven in a theory. If the classical theory is consistent (i.e. does not prove ⊥ {\displaystyle \bot } ) and the constructive counterpart does not prove one of its classical theorems P {\displaystyle P} , then that P {\displaystyle P} is independent of the latter. Given some independent propositions, it is easy to define more from them, especially in a constructive framework.
Heyting arithmetic has the disjunction property D P {\displaystyle {\mathrm {DP} }} : For all propositions α {\displaystyle \alpha } and β {\displaystyle \beta } , [ 2 ] if H A ⊢ α ∨ β {\displaystyle {\mathsf {HA}}\vdash \alpha \lor \beta } , then H A ⊢ α {\displaystyle {\mathsf {HA}}\vdash \alpha } or H A ⊢ β {\displaystyle {\mathsf {HA}}\vdash \beta } .
Indeed, this and its numerical generalization are also exhibited by constructive second-order arithmetic and common set theories such as C Z F {\displaystyle {\mathsf {CZF}}} and I Z F {\displaystyle {\mathsf {IZF}}} . It is a common desideratum for the informal notion of a constructive theory.
Now in a theory with D P {\displaystyle {\mathrm {DP} }} , if a proposition P {\displaystyle P} is independent , then the classically trivial P ∨ ¬ P {\displaystyle P\lor \neg P} is another independent proposition, and vice versa. A schema is not valid if there is at least one instance that cannot be proven, which is how P E M {\displaystyle {\mathrm {PEM} }} comes to fail. One may break D P {\displaystyle {\mathrm {DP} }} by adopting an excluded middle statement axiomatically without validating either of the disjuncts, as is the case in P A {\displaystyle {\mathsf {PA}}} .
More can be said: If P {\displaystyle P} is even classically independent, then also the negation ¬ P {\displaystyle \neg P} is independent—this holds whether or not ¬ ¬ P {\displaystyle \neg \neg P} is equivalent to P {\displaystyle P} . Then, constructively, the weak excluded middle W P E M {\displaystyle {\mathrm {WPEM} }} does not hold, i.e. the principle that ¬ α ∨ ¬ ¬ α {\displaystyle \neg \alpha \lor \neg \neg \alpha } would hold for all propositions is not valid. If such P {\displaystyle P} is Σ 1 0 {\displaystyle \Sigma _{1}^{0}} , unprovability of the disjunction manifests the breakdown of Π 1 0 {\displaystyle \Pi _{1}^{0}} - P E M {\displaystyle {\mathrm {PEM} }} , or what amounts to an instance of W L P O {\displaystyle {\mathrm {WLPO} }} for a primitive recursive function.
Knowledge of Gödel's incompleteness theorems aids in understanding the type of statements that are P A {\displaystyle {\mathsf {PA}}} -provable but not H A {\displaystyle {\mathsf {HA}}} -provable.
The resolution of Hilbert's tenth problem provided some concrete polynomials f {\displaystyle f} and corresponding polynomial equations , such that the claim that the latter have a solution is algorithmically undecidable . The proposition can be expressed as I f := ∃ n 1 . ⋯ ∃ n k . f ( n 1 , … , n k ) = 0 {\displaystyle {\mathrm {I} _{f}}:=\exists n_{1}.\cdots \exists n_{k}.f(n_{1},\dots ,n_{k})=0} .
Certain such zero value existence claims have a more particular interpretation: Theories such as P A {\displaystyle {\mathsf {PA}}} or Z F C {\displaystyle {\mathsf {ZFC}}} prove that these propositions are equivalent to the arithmetized claim of the theories own inconsistency. Thus, such propositions can even be written down for strong classical set theories.
In a consistent and sound arithmetic theory, such an existence claim I f {\displaystyle {\mathrm {I} _{f}}} is an independent Σ 1 0 {\displaystyle \Sigma _{1}^{0}} -proposition. Then C f := ¬ I f {\displaystyle {\mathrm {C} _{f}}:=\neg {\mathrm {I} _{f}}} , by pushing a negation through the quantifier, is seen to be an independent Goldbach-type - or Π 1 0 {\displaystyle \Pi _{1}^{0}} -proposition. To be explicit, the double-negation ¬ ¬ I f {\displaystyle \neg \neg {\mathrm {I} _{f}}} (or ¬ C f {\displaystyle \neg {\mathrm {C} _{f}}} ) is also independent. And any triple-negation is, in any case, already intuitionistically equivalent to a single negation.
The following illuminates the meaning involved in such independent statements. Given an index in an enumeration of all proofs of a theory, one can inspect what proposition it is a proof of. P A {\displaystyle {\mathsf {PA}}} is adequate in that it can correctly represent this procedure: there is a primitive recursive predicate F ( w ) := P r f ( w , ⌜ 0 = 1 ⌝ ) {\displaystyle \mathrm {F} (w):=\mathrm {Prf} (w,\ulcorner 0=1\urcorner )} expressing that a proof is one of the absurd proposition 0 = 1 {\displaystyle 0=1} . This relates to the more explicitly arithmetical predicate above, about a polynomial's return value being zero. One may metalogically reason that if P A {\displaystyle {\mathsf {PA}}} is consistent, then it indeed proves ¬ F ( w _ ) {\displaystyle \neg \mathrm {F} ({\underline {\mathrm {w} }})} for every individual index w {\displaystyle {\mathrm {w} }} .
In an effectively axiomatized theory, one may successively perform an inspection of each proof. If a theory is indeed consistent, then there does not exist a proof of an absurdity, which corresponds to the statement that the mentioned "absurdity search" will never halt. Formally in the theory, the former is expressed by the proposition ¬ ∃ w . F ( w ) {\displaystyle \neg \exists w.\mathrm {F} (w)} , negating the arithmetized inconsistency claim. The equivalent Π 1 0 {\displaystyle \Pi _{1}^{0}} -proposition ∀ w . ¬ F ( w ) {\displaystyle \forall w.\neg \mathrm {F} (w)} formalizes the never halting of the search by stating that all proofs are not a proof of an absurdity. And indeed, in an omega-consistent theory that accurately represents provability, there is no proof that the absurdity search would ever conclude by halting (explicit inconsistency not derivable), nor—as shown by Gödel—can there be a proof that the absurdity search would never halt (consistency not derivable). Reformulated, there is no proof that the absurdity search never halts (consistency not derivable), nor is there a proof that the absurdity search does not never halt (consistency not rejectable).
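This search can be pictured with a short program sketch (Python is used here for illustration; proves_absurdity is a hypothetical stand-in for the decidable predicate F ( w ) {\displaystyle \mathrm {F} (w)} , not an implementable function):

    def absurdity_search():
        # Inspect the candidate proofs one by one. proves_absurdity(w)
        # plays the role of the primitive recursive predicate
        # F(w) = Prf(w, "0 = 1"), decidable for each fixed w.
        w = 0
        while True:
            if proves_absurdity(w):  # an explicit proof of 0 = 1 was found
                return w
            w += 1                   # otherwise inspect the next proof

If the theory is consistent, this loop never halts; by the incompleteness theorems, the theory itself can prove neither that the search halts nor that it never halts.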
To reiterate, neither of these two disjuncts is P A {\displaystyle {\mathsf {PA}}} -provable, while their disjunction is trivially P A {\displaystyle {\mathsf {PA}}} -provable. Indeed, if P A {\displaystyle {\mathsf {PA}}} is consistent then it violates D P {\displaystyle {\mathrm {DP} }} .
The Σ 1 0 {\displaystyle \Sigma _{1}^{0}} -proposition expressing the existence of a proof of 0 = 1 {\displaystyle 0=1} is a logically positive statement. Nonetheless, it is historically denoted ¬ C o n P A {\displaystyle \neg {\mathrm {Con} }_{\mathsf {PA}}} , while its negation is a Π 1 0 {\displaystyle \Pi _{1}^{0}} -proposition denoted by C o n P A {\displaystyle {\mathrm {Con} }_{\mathsf {PA}}} . In a constructive context, this use of the negation sign may be misleading nomenclature.
Friedman established another interesting unprovable statement, namely that a consistent and adequate theory never proves its arithmetized disjunction property.
Already minimal logic logically proves all non-contradiction claims, and in particular ¬ ( I f ∧ ¬ I f ) {\displaystyle \neg ({\mathrm {I} _{f}}\land \neg {\mathrm {I} _{f}})} and ¬ ( C f ∧ ¬ C f ) {\displaystyle \neg ({\mathrm {C} _{f}}\land \neg {\mathrm {C} _{f}})} . Since also ( C f ∧ ¬ C f ) ↔ ¬ ( I f ∨ ¬ I f ) {\displaystyle ({\mathrm {C} _{f}}\land \neg {\mathrm {C} _{f}})\leftrightarrow \neg ({\mathrm {I} _{f}}\lor \neg {\mathrm {I} _{f}})} , the theorem ¬ ( C f ∧ ¬ C f ) {\displaystyle \neg ({\mathrm {C} _{f}}\land \neg {\mathrm {C} _{f}})} may be read as a provable double-negated excluded middle disjunction (or existence claim). But in light of the disjunction property, the plain excluded middle C f ∨ ¬ C f {\displaystyle {\mathrm {C} _{f}}\lor \neg {\mathrm {C} _{f}}} cannot be H A {\displaystyle {\mathsf {HA}}} -provable. So one of the De Morgan's laws cannot intuitionistically hold in general either.
The breakdown of the principles W P E M {\displaystyle {\mathrm {WPEM} }} and W L P O {\displaystyle {\mathrm {WLPO} }} has been explained above.
Now in P A {\displaystyle {\mathsf {PA}}} , the least number principle L N P {\displaystyle {\mathrm {LNP} }} is just one of many statements equivalent to the induction principle. The proof below shows how L N P {\displaystyle {\mathrm {LNP} }} implies P E M {\displaystyle {\mathrm {PEM} }} , and therefore why this principle also cannot be generally valid in H A {\displaystyle {\mathsf {HA}}} . However, the schema granting double-negated least number existence for every non-trivial predicate, denoted ¬ ¬ L N P {\displaystyle \neg \neg {\mathrm {LNP} }} , is generally valid. In light of Gödel's proof, the breakdown of these three principles can be understood as Heyting arithmetic being consistent with the provability reading of constructive logic.
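In a common formulation, L N P {\displaystyle {\mathrm {LNP} }} states, for any predicate Q {\displaystyle Q} , that ( ∃ n . Q ( n ) ) → ∃ n . ( Q ( n ) ∧ ∀ k . ( k < n → ¬ Q ( k ) ) ) {\displaystyle (\exists n.Q(n))\to \exists n.{\big (}Q(n)\land \forall k.(k<n\to \neg Q(k)){\big )}} .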
Markov's principle for primitive recursive predicates M P P R {\displaystyle {\mathrm {MP} _{\mathrm {PR} }}} already fails to hold as an implication schema for H A {\displaystyle {\mathsf {HA}}} , let alone the strictly stronger M P D e c {\displaystyle {\mathrm {MP} _{\mathrm {Dec} }}} . In the form of the corresponding rules, however, they are admissible, as mentioned.
Similarly, the theory does not prove the independence of premise principle I P {\displaystyle {\mathrm {IP} }} for negated predicates, although it is closed under the rule for all negated propositions, i.e. one may pull out the existential quantifier in ¬ P → ∃ n . Q ( n ) {\displaystyle \neg P\to \exists n.Q(n)} . The same holds for the version where the existential statement is replaced by a mere disjunction.
The valid implication ¬ ¬ ( α → β ) → ( α → ¬ ¬ β ) {\displaystyle \neg \neg (\alpha \to \beta )\to (\alpha \to \neg \neg \beta )} can be proven to hold also in its reversed form, using the disjunctive syllogism. However, the double-negation shift D N S {\displaystyle {\mathrm {DNS} }} is not intuitionistically provable, i.e. the schema of commutativity of " ¬ ¬ {\displaystyle \neg \neg } " with universal quantification over all numbers. This is an interesting breakdown that is explained by the consistency of ¬ D e c M {\displaystyle \neg {\mathrm {Dec} _{M}}} for some M {\displaystyle M} , as discussed in the section on Church's thesis.
Making use of the order relation on the naturals, the strong induction principle reads
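∀ n . ( ( ∀ k . ( k < n → ϕ ( k ) ) ) → ϕ ( n ) ) → ∀ n . ϕ ( n ) {\displaystyle \forall n.{\Big (}{\big (}\forall k.(k<n\to \phi (k)){\big )}\to \phi (n){\Big )}\to \forall n.\phi (n)} , in its standard form.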
In class notation, as familiar from set theory, an arithmetic statement Q ( n ) {\displaystyle Q(n)} is expressed as n ∈ B {\displaystyle n\in B} where B := { m ∈ N ∣ Q ( m ) } {\displaystyle B:=\{m\in {\mathbb {N} }\mid Q(m)\}} . For any given predicate of negated form, i.e. ϕ ( n ) := ¬ ( n ∈ B ) {\displaystyle \phi (n):=\neg (n\in B)} , a logical equivalent to induction is
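∀ n . ( ( ∀ k . ( k < n → ¬ ( k ∈ B ) ) ) → ¬ ( n ∈ B ) ) → ∀ n . ¬ ( n ∈ B ) {\displaystyle \forall n.{\Big (}{\big (}\forall k.(k<n\to \neg (k\in B)){\big )}\to \neg (n\in B){\Big )}\to \forall n.\neg (n\in B)}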
The insight is that among subclasses B ⊆ N {\displaystyle B\subseteq {\mathbb {N} }} , the property of (provably) having no least member is equivalent to being uninhabited, i.e. to being the empty class. Taking the contrapositive results in a theorem expressing that for any non-empty subclass, it cannot consistently be ruled out that there exists a member n ∈ B {\displaystyle n\in B} such that there is no member k ∈ B {\displaystyle k\in B} smaller than n {\displaystyle n} :
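¬ ( ∀ n . ¬ ( n ∈ B ) ) → ¬ ¬ ∃ n . ( n ∈ B ∧ ∀ k . ( k < n → ¬ ( k ∈ B ) ) ) {\displaystyle \neg {\big (}\forall n.\neg (n\in B){\big )}\to \neg \neg \exists n.{\big (}n\in B\land \forall k.(k<n\to \neg (k\in B)){\big )}}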
In Peano arithmetic, where double-negation elimination is always valid, this proves the least number principle in its common formulation. In the classical reading, being non-empty is equivalent to (provably) being inhabited by some least member.
A binary relation " < {\displaystyle <} " that validates the strong induction schema in the above form is always also irreflexive: Considering ϕ c ( n ) := ( n ≠ c ) {\displaystyle \phi _{c}(n):=(n\neq c)} or equivalently
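¬ ( n ∈ { c } ) {\displaystyle \neg (n\in \{c\})}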
for some fixed number c {\displaystyle c} , the above simplifies to the statement that no member k {\displaystyle k} of B = { c } {\displaystyle B=\{c\}} validates k < c {\displaystyle k<c} , which is to say ¬ ( c < c ) {\displaystyle \neg (c<c)} . (And this logical deduction did not even use any other property of the binary relation.)
More generally, if B {\displaystyle B} is non-empty and the associated (classical) least number principle can be used to prove some statement of negated form (such as ¬ ( c < c ) {\displaystyle \neg (c<c)} ), then one can extend this to a fully constructive proof. This is because implications α → ¬ β {\displaystyle \alpha \to \neg \beta } are always intuitionistically equivalent to the formally stronger ( ¬ ¬ α ) → ¬ β {\displaystyle (\neg \neg \alpha )\to \neg \beta } .
But in general, over constructive logic, the weakening of the least number principle cannot be lifted. The following example demonstrates this: For some proposition P {\displaystyle P} (say C f {\displaystyle {\mathrm {C} _{f}}} as above), consider the predicate
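Q P ( n ) := ( n = 0 ∧ P ) ∨ ( n = 1 ) {\displaystyle Q_{P}(n):=(n=0\land P)\lor (n=1)}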
This Q P {\displaystyle Q_{P}} corresponds to a subclass b P := { z ∈ { 0 } ∣ P } ∪ { 1 } {\displaystyle b_{P}:=\{z\in \{0\}\mid P\}\cup \{1\}} of the natural numbers. Any number proven or assumed to be in this class provably either equals 0 {\displaystyle 0} or 1 {\displaystyle 1} , i.e. b P ⊆ { 0 , 1 } {\displaystyle b_{P}\subseteq \{0,1\}} . As 1 = 1 {\displaystyle 1=1} , the proposition Q P ( 1 ) {\displaystyle Q_{P}(1)} or 1 ∈ b P {\displaystyle 1\in b_{P}} is trivially true and so the class is inhabited . Further, using decidability of equality and the disjunctive syllogism proves the equivalence Q P ( 0 ) ↔ P {\displaystyle Q_{P}(0)\leftrightarrow P} . In other words, whether 0 {\displaystyle 0} is a member of the class is exactly as hard as P {\displaystyle P} . If the underlying proposition P {\displaystyle P} is independent, then also the predicate is undecidable in the theory.
One may ask what the least member of the class b P {\displaystyle b_{P}} may be. As it is inhabited, the least number existence for this class cannot be rejected. In fact, since the conjunct Q P ( 1 ) {\displaystyle Q_{P}(1)} is trivially true, the existence claim of a least number validating Q P {\displaystyle Q_{P}} itself translates to the excluded middle statement for P {\displaystyle P} . Knowledge of such a number's value would determine whether or not P {\displaystyle P} holds. So for independent P {\displaystyle P} , the least number principle instance with Q P {\displaystyle Q_{P}} is also independent of H A {\displaystyle {\mathsf {HA}}} .
In set theory notation, P {\displaystyle P} is also equivalent to b P = { 0 , 1 } {\displaystyle b_{P}=\{0,1\}} , while its negation is equivalent to b P = { 1 } {\displaystyle b_{P}=\{1\}} . This demonstrates that elusive predicates can define elusive subsets. And so also in constructive set theory , while the standard order on the class of naturals is decidable, the naturals are not well-ordered . But strong induction principles, that constructively do not imply unrealizable existence claims, are also still available.
In a computable context, for a predicate M {\displaystyle M} , the classically trivial infinite disjunction
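∀ n . ( M ( n ) ∨ ¬ M ( n ) ) {\displaystyle \forall n.{\big (}M(n)\lor \neg M(n){\big )}}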
also written D e c M {\displaystyle {\mathrm {Dec} }_{M}} , can be read as a validation of the decidability of a decision problem . In class notation, M ( n ) {\displaystyle M(n)} is also written n ∈ M {\displaystyle n\in M} .
H A {\displaystyle {\mathsf {HA}}} proves no propositions not provable by P A {\displaystyle {\mathsf {PA}}} and so, in particular, it does not reject any theorems of the classical theory. But there are also predicates M {\displaystyle M} such that the axioms H A + ¬ D e c M {\displaystyle {\mathsf {HA}}+\neg {\mathrm {Dec} }_{M}} are consistent. Again, constructively, such a negation is not equivalent to the existence of a particular numerical counter-example t {\displaystyle t} to excluded middle for M ( t ) {\displaystyle M(t)} . Indeed, already minimal logic proves the double-negated excluded middle for all propositions, and so ∀ n . ¬ ¬ ( Q ( n ) ∨ ¬ Q ( n ) ) {\displaystyle \forall n.\neg \neg {\big (}Q(n)\lor \neg Q(n){\big )}} , which is equivalent to ¬ ∃ n . ¬ ( Q ( n ) ∨ ¬ Q ( n ) ) {\displaystyle \neg \exists n.\neg {\big (}Q(n)\lor \neg Q(n){\big )}} , for any predicate Q {\displaystyle Q} .
Church's rule is an admissible rule in H A {\displaystyle {\mathsf {HA}}} . Church's thesis principle C T 0 {\displaystyle {\mathrm {CT} _{0}}} may be adopted in H A {\displaystyle {\mathsf {HA}}} , while P A {\displaystyle {\mathsf {PA}}} rejects it: It implies negations such as the ones just described.
Consider the principle in the form stating that all predicates that are decidable in the logic sense above are also decidable by a total computable function . To see how it is in conflict with excluded middle, one merely needs to define a predicate that is not computably decidable.
To this end, write Δ e ( w ) := T 1 ( e , e , w ) {\displaystyle \Delta _{e}(w):=T_{1}(e,e,w)} for predicates defined from Kleene's T predicate . The indices e {\displaystyle e} of total computable functions fulfill ∀ x . ∃ w . T 1 ( e , x , w ) {\displaystyle \forall x.\exists w.T_{1}(e,x,w)} . While T 1 {\displaystyle T_{1}} can be realized in a primitive recursive fashion, the predicate ∃ w . Δ e ( w ) {\displaystyle \exists w.\Delta _{e}(w)} in e {\displaystyle e} , i.e. the class H := { e ∣ ∃ w . Δ e ( w ) } {\displaystyle H:=\{e\mid \exists w.\Delta _{e}(w)\}} of partial computable function indices with a witness describing how they halt on the diagonal, is computably enumerable but not computable . The classical complement M {\displaystyle M} defined using ¬ ∃ w . Δ e ( w ) {\displaystyle \neg \exists w.\Delta _{e}(w)} is not even computably enumerable, see halting problem . This provably undecidable problem e ∈ M {\displaystyle e\in M} provides a violating example. For any index e {\displaystyle e} , the equivalent form ∀ w . ¬ Δ e ( w ) {\displaystyle \forall w.\neg \Delta _{e}(w)} expresses that when the corresponding function is evaluated (at e {\displaystyle e} ), then all conceivable descriptions of evaluation histories ( w {\displaystyle w} ) do not describe the evaluation at hand. Specifically, this not being decidable for functions establishes the negation of what amounts to W L P O {\displaystyle {\mathrm {WLPO} }} .
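The diagonal argument behind this undecidability can be sketched as follows (a Python illustration; halts is a hypothetical total decider which, by the argument, cannot exist):

    def diagonal(e):
        # Assume halts(e) totally decides "the e-th program halts on
        # input e". This program then defeats it on its own index d:
        if halts(e):
            while True:  # loop forever exactly when halts(e) says "halts"
                pass
        return 0

    # halts(d) == True  would force diagonal(d) to loop, i.e. not halt;
    # halts(d) == False would force diagonal(d) to return, i.e. halt.
    # Either way halts is wrong at d, so no such total decider exists.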
The formal Church's principles are naturally associated with the recursive school. Markov's principle M P D e c {\displaystyle {\mathrm {MP} _{\mathrm {Dec} }}} is commonly adopted, by that school and by constructive mathematics more broadly. In the presence of Church's principle, M P D e c {\displaystyle {\mathrm {MP} _{\mathrm {Dec} }}} is equivalent to its weaker form M P P R {\displaystyle {\mathrm {MP} _{\mathrm {PR} }}} . The latter can generally be expressed as a single axiom, namely double-negation elimination for any ¬ ¬ ∃ w . T 1 ( e , x , w ) {\displaystyle \neg \neg \exists w.T_{1}(e,x,w)} . Heyting arithmetic together with both M P D e c {\displaystyle {\mathrm {MP} _{\mathrm {Dec} }}} + E C T 0 {\displaystyle {\mathrm {ECT} _{0}}} proves independence of premise for decidable predicates, I P 0 {\displaystyle {\mathrm {IP} }_{0}} . But these principles are not jointly consistent with I P {\displaystyle {\mathrm {IP} }} . C T 0 {\displaystyle {\mathrm {CT} _{0}}} also negates D N S {\displaystyle {\mathrm {DNS} }} . The intuitionist school of L. E. J. Brouwer extends Heyting arithmetic by
a collection of principles that negate both P E M {\displaystyle {\mathrm {PEM} }} as well as C T 0 {\displaystyle {\mathrm {CT} _{0}}} .
If a theory is consistent, then no proof is one of an absurdity. Kurt Gödel introduced the negative translation and proved that if Heyting arithmetic is consistent, then so is Peano arithmetic. That is to say, he reduced the consistency task for P A {\displaystyle {\mathsf {PA}}} to that of H A {\displaystyle {\mathsf {HA}}} . However, Gödel's incompleteness theorems, about the incapability of certain theories to prove their own consistency, also apply to Heyting arithmetic itself.
The standard model of the classical first-order theory P A {\displaystyle {\mathsf {PA}}} as well as any of its non-standard models is also a model for Heyting arithmetic H A {\displaystyle {\mathsf {HA}}} .
There are also constructive set theory models for full H A {\displaystyle {\mathsf {HA}}} and its intended semantics. Relatively weak set theories suffice: They need only adopt the Axiom of infinity , the Axiom schema of predicative separation to prove induction of arithmetical formulas in ω {\displaystyle \omega } , as well as the existence of function spaces on finite domains for recursive definitions .
Specifically, those theories do not require P E M {\displaystyle {\mathrm {PEM} }} , the full axiom of separation or set induction (let alone the axiom of regularity ), nor general function spaces (let alone the full axiom of power set ).
H A {\displaystyle {\mathsf {HA}}} furthermore is bi-interpretable with a weak constructive set theory in which the class of ordinals is ω {\displaystyle \omega } , so that the collection of von Neumann naturals does not exist as a set in the theory. [ 3 ] [ 4 ] Meta-theoretically, the domain of that theory is as big as the class of its ordinals and essentially given through the class F i n {\displaystyle {\mathrm {Fin} }} of all sets that are in bijection with a natural n ∈ ω {\displaystyle n\in \omega } . As an axiom this is called V = F i n {\displaystyle V={\mathrm {Fin} }} and the other axioms are those related to set algebra and order: Union and Binary Intersection, which is tightly related to the Predicative Separation schema, Extensionality, Pairing, and the Set induction schema. This theory is then already identical with the theory given by C Z F {\displaystyle {\mathsf {CZF}}} without Strong Infinity and with the finitude axiom added. The discussion of H A {\displaystyle {\mathsf {HA}}} in this set theory is as in model theory. And in the other direction, the set theoretical axioms are proven with respect to the primitive recursive relation
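n ∈ m :⟺ ⌊ m / 2 n ⌋ mod 2 = 1 {\displaystyle n\in m:\iff \lfloor m/2^{n}\rfloor {\bmod {2}}=1} , reading membership off the binary digits of the code.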
That small universe of sets can be understood as the ordered collection of finite binary sequences which encode their mutual membership. For example, the 100 2 {\displaystyle 100_{2}} 'th set contains one other set and the 110101 2 {\displaystyle 110101_{2}} 'th set contains four other sets. See BIT predicate .
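For illustration, the membership relation can be computed directly (a small Python sketch; the name member is chosen here for exposition):

    def member(n, m):
        # BIT/Ackermann coding: the set coded by m contains the set
        # coded by n exactly when binary digit n of m is 1.
        return (m >> n) & 1 == 1

    # 0b100 = 4 codes a set with one member (the set coded by 2), and
    # 0b110101 = 53 codes a set with four members (coded by 0, 2, 4 and 5).
    assert [n for n in range(3) if member(n, 0b100)] == [2]
    assert [n for n in range(6) if member(n, 0b110101)] == [0, 2, 4, 5]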
For some number n {\displaystyle \mathrm {n} } in the metatheory, the numeral in the studied object theory is denoted by n _ {\displaystyle {\underline {\mathrm {n} }}} .
In intuitionistic arithmetics, the disjunction property D P {\displaystyle {\mathrm {DP} }} is typically valid. And it is a theorem that any c.e. extension of arithmetic for which it holds also has the numerical existence property N E P {\displaystyle {\mathrm {NEP} }} :
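If ∃ n . ψ ( n ) {\displaystyle \exists n.\psi (n)} is a closed theorem, then ψ ( n _ ) {\displaystyle \psi ({\underline {\mathrm {n} }})} is a theorem for some numeral n _ {\displaystyle {\underline {\mathrm {n} }}} .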
So these properties are metalogically equivalent in Heyting arithmetic. The existence and disjunction properties in fact still hold when relativizing the existence claim by a Harrop formula α {\displaystyle \alpha } , i.e. for provable α → ∃ n . ψ ( n ) {\displaystyle \alpha \to \exists n.\psi (n)} .
Kleene , a student of Church , introduced important realizability models of the Heyting arithmetic.
In turn, his student Nels David Nelson established (in an extension of H A {\displaystyle {\mathsf {HA}}} ) that all closed theorems of H A {\displaystyle {\mathsf {HA}}} (meaning all variables are bound) can be realized. Inference in Heyting arithmetic preserves realizability. Moreover, if H A ⊢ ∀ n . ∃ m . φ ( n , m ) {\displaystyle {\mathsf {HA}}\vdash \forall n.\exists m.\varphi (n,m)} then there is a partial recursive function realizing φ {\displaystyle \varphi } in the sense that whenever the function evaluated at n {\displaystyle {\mathrm {n} }} terminates with m {\displaystyle {\mathrm {m} }} , then H A ⊢ φ ( n _ , m _ ) {\displaystyle {\mathsf {HA}}\vdash \varphi ({\underline {\mathrm {n} }},{\underline {\mathrm {m} }})} . This can be extended to any finite number of function arguments n {\displaystyle {\mathrm {n} }} .
There are also classical theorems that are not H A {\displaystyle {\mathsf {HA}}} -provable but do have a realization.
Typed versions of realizability have been introduced by Georg Kreisel . With it he demonstrated the independence of the classically valid Markov's principle for intuitionistic theories.
See also BHK interpretation and Dialectica interpretation .
In the effective topos , already the finitely axiomatizable subsystem of Heyting arithmetic with induction restricted to Σ 1 {\displaystyle \Sigma _{1}} is categorical . Categoricity here is reminiscent of Tennenbaum's theorem . The model validates H A {\displaystyle {\mathsf {HA}}} but not P A {\displaystyle {\mathsf {PA}}} and so completeness fails in this context.
Type theoretical realizations mirroring inference rules based logic formalizations have been implemented in various languages .
Heyting arithmetic has been discussed with potential function symbols added for primitive recursive functions. That theory proves the Ackermann function total.
Beyond this, axiom and formalism selection has always been a matter of debate even within the constructivist circle. Many typed extensions of H A {\displaystyle {\mathsf {HA}}} have been extensively studied in proof theory , e.g. with types of functions between numbers, and functions between those, etc. The formalities naturally become more complicated, with different possible axioms governing the application of functions. The class of functions being total can be enriched this way. The theory with finite types H A ω {\displaystyle {\mathsf {HA}}^{\omega }} when further combined with function extensionality plus an axiom of choice in N N {\displaystyle {\mathbb {N} }^{\mathbb {N} }} still proves the same arithmetic formulas as just H A {\displaystyle {\mathsf {HA}}} and has a type theoretic interpretation. However, that theory rejects Church's thesis for N N {\displaystyle {\mathbb {N} }^{\mathbb {N} }} as well as the claim that all functions in N N → N {\displaystyle {\mathbb {N} }^{\mathbb {N} }\to {\mathbb {N} }} would be continuous. But adopting, say, different extensionality rules, choice axioms, Markov's and independence principles and even Kőnig's lemma —all together but each at specific strength or levels—one can define rather "stuffed" arithmetics that may still fail to prove excluded middle at the level of Π 1 0 {\displaystyle \Pi _{1}^{0}} -formulas.
Early on, variants with intensional equality and Brouwerian choice sequences were also investigated.
Reverse mathematics studies of constructive second-order arithmetic have been performed. [ 5 ]
Formal axiomatization of the theory traces back to Heyting (1930), Herbrand and Kleene . Gödel proved the consistency result concerning P A {\displaystyle {\mathsf {PA}}} in 1933.
Heyting arithmetic should not be confused with Heyting algebras , which are the intuitionistic analogue of Boolean algebras . | https://en.wikipedia.org/wiki/Heyting_arithmetic |
A Heyting field is one of the inequivalent ways in constructive mathematics to capture the classical notion of a field . It is essentially a field with an apartness relation .
A commutative ring is a Heyting field if it is a field in the sense that
and if it is moreover local : Not only does the non-invertible 0 {\displaystyle 0} not equal the invertible 1 {\displaystyle 1} , but the following disjunctions are granted more generally
The third axiom may also be formulated as the statement that the algebraic " + {\displaystyle +} " transfers invertibility to one of its inputs: If a + b {\displaystyle a+b} is invertible, then either a {\displaystyle a} or b {\displaystyle b} is as well.
The structure defined without the third axiom may be called a weak Heyting field. The claim that every such structure with decidable equality is a Heyting field is equivalent to excluded middle. Indeed, classically all fields are already local.
An apartness relation is defined by writing a # b {\displaystyle a\#b} if a − b {\displaystyle a-b} is invertible. This relation is often now written as a ≠ b {\displaystyle a\neq b} with the warning that it is not equivalent to ¬ ( a = b ) {\displaystyle \neg (a=b)} .
The assumption ¬ ( a = 0 ) {\displaystyle \neg (a=0)} is then generally not sufficient to construct the inverse of a {\displaystyle a} . However, a # 0 {\displaystyle a\#0} is sufficient.
The prototypical Heyting field is the real numbers .
| https://en.wikipedia.org/wiki/Heyting_field |
Hafnium carbonitride ( HfCN ) is an ultra-high temperature ceramic (UHTC) mixed anion compound composed of hafnium (Hf), carbon (C) and nitrogen (N).
Ab initio molecular dynamics calculations have predicted HfCN (specifically the HfC 0.75 N 0.22 phase) to have a melting point of 4,110 ± 62 °C (4,048–4,172 °C, 7,318–7,542 °F, 4,321–4,445 K), [ 3 ] the highest known for any material. [ 3 ] [ 4 ] [ 5 ] Another approach, based on artificial neural network machine learning, pointed towards a similar composition — HfC 0.76 N 0.24 . [ 3 ] Experimental testing conducted in 2020 has confirmed a melting point above 4,000 °C (7,230 °F; 4,270 K), [ 4 ] [ 5 ] substantiating earlier predictions made with atomistic simulations in 2015. [ 6 ]
The HfC x N 1−x has been assessed to possess the following properties: [ 2 ] | https://en.wikipedia.org/wiki/HfCN |
Hafnium tetrafluoride is the inorganic compound with the formula HfF 4 . It is a white solid. It adopts the same structure as zirconium tetrafluoride , with 8-coordinate Hf(IV) centers.
Hafnium tetrafluoride forms a trihydrate, which has a polymeric structure consisting of octahedral Hf centers, described as (μ−F) 2 [HfF 2 (H 2 O) 2 ] n (H 2 O) n , with one water of crystallization . In a rare case where the chemistry of Hf and Zr differ, the trihydrate of zirconium(IV) fluoride has a molecular structure (μ−F) 2 [ZrF 3 (H 2 O) 3 ] 2 , without the lattice water. [ 3 ]
| https://en.wikipedia.org/wiki/HfF4 |
Hafnium(IV) iodide is the inorganic compound with the formula HfI 4 . It is a red-orange, moisture sensitive, sublimable solid that is produced by heating a mixture of hafnium with excess iodine . [ 2 ] It is an intermediate in the crystal bar process for producing hafnium metal.
In this compound, the hafnium centers adopt octahedral coordination geometry. Like most binary metal halides, the compound is polymeric. It is a one-dimensional polymer consisting of chains of edge-shared bioctahedral Hf 2 I 8 subunits, similar to the motif adopted by HfCl 4 . The nonbridging iodide ligands have shorter bonds to Hf than the bridging iodide ligands. [ 2 ]
Hafnium(IV) oxide is the inorganic compound with the formula HfO 2 . Also known as hafnium dioxide or hafnia , this colourless solid is one of the most common and stable compounds of hafnium . It is an electrical insulator with a band gap of 5.3~5.7 eV . [ 2 ] Hafnium dioxide is an intermediate in some processes that give hafnium metal.
Hafnium(IV) oxide is quite inert. It reacts with strong acids such as concentrated sulfuric acid and with strong bases . It dissolves slowly in hydrofluoric acid to give fluorohafnate anions. At elevated temperatures, it reacts with chlorine in the presence of graphite or carbon tetrachloride to give hafnium tetrachloride .
Hafnia typically adopts the same structure as zirconia (ZrO 2 ). Unlike TiO 2 , which features six-coordinate Ti in all phases, zirconia and hafnia consist of seven-coordinate metal centres. A variety of other crystalline phases have been experimentally observed, including cubic fluorite (Fm 3 m), tetragonal (P4 2 /nmc), monoclinic (P2 1 /c) and orthorhombic (Pbca and Pnma). [ 3 ] It is also known that hafnia may adopt two other orthorhombic metastable phases (space group Pca2 1 and Pmn2 1 ) over a wide range of pressures and temperatures, [ 4 ] presumably being the sources of the ferroelectricity observed in thin films of hafnia. [ 5 ]
Thin films of hafnium oxides deposited by atomic layer deposition are usually crystalline. Because semiconductor devices benefit from having amorphous films present, researchers have alloyed hafnium oxide with aluminum or silicon (forming hafnium silicates ), which have a higher crystallization temperature than hafnium oxide. [ 6 ]
Hafnia is used in optical coatings , and as a high-κ dielectric in DRAM capacitors and in advanced metal–oxide–semiconductor devices. [ 7 ] Hafnium-based oxides were introduced by Intel in 2007 as a replacement for silicon oxide as a gate insulator in field-effect transistors . [ 8 ] The advantage for transistors is its high dielectric constant : the dielectric constant of HfO 2 is 4–6 times higher than that of SiO 2 . [ 9 ] The dielectric constant and other properties depend on the deposition method, composition and microstructure of the material.
Hafnium oxide (as well as doped and oxygen-deficient hafnium oxide) attracts additional interest as a possible candidate for resistive-switching memories [ 10 ] and CMOS-compatible ferroelectric field effect transistors ( FeFET memory ) and memory chips. [ 11 ] [ 12 ] [ 13 ] [ 14 ]
Because of its very high melting point, hafnia is also used as a refractory material in the insulation of such devices as thermocouples , where it can operate at temperatures up to 2500 °C. [ 15 ]
Multilayered films of hafnium dioxide, silica, and other materials have been developed for use in passive cooling of buildings. The films reflect sunlight and radiate heat at wavelengths that pass through Earth's atmosphere, and can have temperatures several degrees cooler than surrounding materials under the same conditions. [ 16 ] | https://en.wikipedia.org/wiki/HfO2 |
Hafnium disulfide is an inorganic compound of hafnium and sulfur . It is a layered dichalcogenide with the chemical formula HfS 2 . A few atomic layers of this material can be exfoliated using the standard Scotch Tape technique (see graphene ) and used for the fabrication of a field-effect transistor . [ 4 ] High-yield synthesis of HfS 2 has also been demonstrated using liquid phase exfoliation, resulting in the production of stable few-layer HfS 2 flakes. [ 5 ] Hafnium disulfide powder can be produced by reacting hydrogen sulfide and hafnium oxides at 500–1300 °C. [ 6 ] | https://en.wikipedia.org/wiki/HfS2 |
A high-frequency recombination cell (Hfr cell) (also called an Hfr strain ) is a bacterium with a conjugative plasmid (for example, the F-factor ) integrated into its chromosomal DNA. The integration of the plasmid into the cell's chromosome is through homologous recombination . A conjugative plasmid that is capable of chromosome integration or can exist autonomously within the cell is also called an episome (a segment of DNA that can exist as a plasmid or become integrated into the chromosome). When conjugation occurs, Hfr cells are very efficient in delivering chromosomal genes of the cell into recipient F − cells, which lack the episome.
The Hfr strain was first characterized by Luca Cavalli-Sforza . William Hayes also isolated another Hfr strain independently. [ 2 ]
An Hfr cell can transfer a portion of the bacterial genome. Despite being integrated into the chromosomal DNA of the bacteria, the F factor of Hfr cells can still initiate conjugative transfer, without being excised from the bacterial chromosome first. Due to the F factor's inherent tendency to transfer itself during conjugation, the rest of the bacterial genome is dragged along with it. Therefore, unlike a normal F + cell, Hfr strains will attempt to transfer their entire DNA through the mating bridge , in a fashion similar to normal conjugation. In a typical conjugation, the recipient cell also becomes F + after conjugation as it receives an entire copy of the F factor plasmid; but this is not the case in conjugation mediated by Hfr cells. Due to the large size of the bacterial chromosome, it is very rare for the entire chromosome to be transferred into the F − cell, as the time required is simply too long for the cells to maintain physical contact. Therefore, as the conjugative transfer is not complete (the circular nature of the plasmid and bacterial chromosome requires complete transfer for the F factor to be transferred, as it may be cut in the middle), the recipient F − cells do not receive the complete F factor sequence, do not become F + , and remain unable to form a sex pilus . [ 3 ]
In conjugation mediated by Hfr cells, transfer of DNA starts at the origin of transfer ( oriT ) located within the F factor and then continues clockwise or counterclockwise depending on the orientation of F factor in the chromosome. Therefore, the length of chromosomal DNA transferred into the F − cell is proportional to the time that conjugation is allowed to happen. This results in sequential transfer of genes on the bacterial chromosome. Bacterial geneticists make use of this principle to map the genes on the bacterial chromosome. This technique is called interrupted mating as geneticists allow conjugation to take place for different periods of time before stopping conjugation with a high-speed blender. By using Hfr and F − strains with one strain carrying mutations in several genes, each affecting a metabolic function or causing antibiotic resistance , and examining the phenotype of the recipient cells on selective agar plates, one can deduce which genes are transferred into the recipient cells first and therefore are closer to the oriT sequence on the chromosome.
An F-prime cell contains an F-plasmid that integrated into the chromosomal DNA and carried part of the chromosomal DNA along with it when excised from the chromosome. Thus an F-prime plasmid is a plasmid containing part of the chromosomal DNA, which can be transferred to a recipient cell along with the plasmid during conjugation. [ 4 ] | https://en.wikipedia.org/wiki/Hfr_cell |
Dimethylmercury is an extremely toxic organomercury compound with the formula ( CH 3 ) 2 Hg . A volatile, flammable, dense and colorless liquid, dimethylmercury is one of the strongest known neurotoxins . Less than 0.1 mL is capable of inducing severe mercury poisoning resulting in death. [ 2 ]
The compound was one of the earliest organometallics reported, reflecting its considerable stability. The compound was first prepared by George Buckton in 1857 by a reaction of methylmercury iodide with potassium cyanide : [ 3 ]
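2 CH 3 HgI + 2 KCN → Hg(CH 3 ) 2 + Hg(CN) 2 + 2 KI (one balanced redistribution consistent with this description; the exact by-products are an assumption here)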
Later, Edward Frankland discovered that it could be synthesized by treating sodium amalgam with methyl halides :
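Hg + 2 Na + 2 CH 3 I → Hg(CH 3 ) 2 + 2 NaI (shown with methyl iodide as a representative halide; the sodium and mercury are supplied by the amalgam)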
It can also be obtained by alkylation of mercuric chloride with methyllithium :
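HgCl 2 + 2 CH 3 Li → Hg(CH 3 ) 2 + 2 LiCl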
The molecule adopts a linear structure with Hg–C bond lengths of 2.083 Å. [ 4 ]
Dimethylmercury is stable in water and reacts with mineral acids at a significant rate only at elevated temperatures, [ 5 ] [ 6 ] whereas the corresponding organocadmium and organozinc compounds (and most metal alkyls in general) hydrolyze rapidly. The difference reflects the high electronegativity of Hg (Pauling EN = 2.00) and the low affinity of Hg(II) for oxygen ligands. The compound undergoes a redistribution reaction with mercuric chloride to give methylmercury chloride:
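Hg(CH 3 ) 2 + HgCl 2 → 2 CH 3 HgCl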
Whereas dimethylmercury is a volatile liquid , methylmercury chloride is a crystalline solid . [ 7 ]
Dimethylmercury has few practical applications because of the risks involved. It has been studied for reactions involving bonding methylmercury cations to target molecules, forming potent bactericides , but methylmercury's bioaccumulation and ultimate toxicity has led to it being largely abandoned in favor of the less toxic ethylmercury and diethylmercury compounds, which perform a similar function without the bioaccumulation hazard. [ citation needed ]
In toxicology , it still finds limited use as a reference toxin. It is also used to calibrate NMR instruments for detection of mercury (δ 0 ppm for 199 Hg NMR), although diethylmercury and less toxic mercury salts are now preferred. [ 8 ] [ 9 ] [ 10 ]
Around 1960, Phil Pomerantz, who worked at the Bureau of Naval Weapons , suggested that dimethylmercury be used as a fuel mix with red fuming nitric acid . [ 11 ] This was never done, although it did lead to testing a red fuming nitric acid - unsymmetrical dimethylhydrazine rocket with elemental mercury being injected into the combustion chamber at the Naval Ordnance Test Station . [ 11 ]
Dimethylmercury is extremely toxic and dangerous to handle. Absorption of doses as low as 0.1 mL can result in severe mercury poisoning. [ 2 ] The risks are enhanced because of the compound's high vapor pressure . [ 2 ] Since it is highly lipophilic, it absorbs through the skin and into body fat very easily and can permeate many materials, including many plastics and rubber compounds. [ citation needed ]
Permeation tests showed that several types of disposable latex or polyvinyl chloride gloves (typically, about 0.1 mm thick), commonly used in most laboratories and clinical settings, had high and maximal rates of permeation by dimethylmercury within 15 seconds. [ 12 ] The American Occupational Safety and Health Administration (OSHA) advises handling dimethylmercury with highly resistant laminated gloves with an additional pair of abrasion-resistant gloves worn over the laminate pair, and also recommends using a face shield and working in a fume hood . [ 2 ] [ 13 ]
Dimethylmercury is metabolized after several days to methylmercury . [ 12 ] Methylmercury crosses the blood–brain barrier easily, probably owing to formation of a complex with cysteine . [ 13 ] It easily absorbs into the body and has a tendency to bioaccumulate . The symptoms of poisoning may be delayed by months, resulting in cases in which a diagnosis is ultimately discovered, but only at a point in which it is too late or almost too late for an effective treatment regimen to be successful. [ 13 ] Methylmercury poisoning is also known as Minamata disease . | https://en.wikipedia.org/wiki/Hg(CH3)2 |
Mercury(II) acetate , also known as mercuric acetate is a chemical compound , the mercury(II) salt of acetic acid, with the formula Hg ( O 2 C C H 3 ) 2 . Commonly abbreviated Hg(OAc) 2 , this compound is employed as a reagent to generate organomercury compounds from unsaturated organic precursors. It is a white, water-soluble solid, but some samples can appear yellowish with time owing to decomposition.
Mercury(II) acetate is a crystalline solid consisting of isolated Hg(OAc) 2 molecules with Hg-O distances of 2.07 Å . Three long, weak intermolecular Hg···O bonds of about 2.75 Å are also present, resulting in a slightly distorted square pyramidal coordination geometry at Hg. [ 2 ]
Mercury(II) acetate can be produced by reaction of mercuric oxide with acetic acid . [ 3 ]
HgO + 2 CH 3 COOH → Hg(CH 3 COO) 2 + H 2 O
Mercury(II) acetate in acetic acid solution reacts with H 2 S to rapidly precipitate the black (β) polymorph of HgS . With gentle heating of the slurry, the black solid converts to the red form. [ 4 ] The mineral cinnabar is red HgS. The precipitation of HgS as well as a few other sulfides, using hydrogen sulfide is a step in qualitative inorganic analysis .
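The precipitation corresponds to:
Hg(CH 3 COO) 2 + H 2 S → HgS + 2 CH 3 COOH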
Electron-rich arenes undergo "mercuration" upon treatment with Hg(OAc) 2 . This behavior is illustrated with phenol :
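C 6 H 5 OH + Hg(OAc) 2 → HOC 6 H 4 HgOAc + AcOH (substitution occurring mainly ortho and para to the hydroxyl group)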
The acetate group (OAc) that remains on mercury can be displaced by chloride: [ 5 ]
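HOC 6 H 4 HgOAc + NaCl → HOC 6 H 4 HgCl + NaOAc (schematically, for the mercurated phenol)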
The Hg 2+ center binds to alkenes , inducing the addition of hydroxide and alkoxide . For example, treatment of methyl acrylate with mercuric acetate in methanol gives an α-mercuri ester: [ 6 ]
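CH 2 =CHCO 2 CH 3 + Hg(OAc) 2 + CH 3 OH → CH 3 OCH 2 CH(HgOAc)CO 2 CH 3 + AcOH (regiochemistry as implied by the α-mercuri description)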
Exploiting the high affinity of mercury(II) for sulfur ligands, Hg(OAc) 2 can be used as a reagent to deprotect thiol groups in organic synthesis . Similarly Hg(OAc) 2 has been used to convert thiocarbonate esters into dithiocarbonates:
Mercury(II) acetate is used for oxymercuration reactions.
A famous use of Hg(OAc) 2 was in the synthesis of idoxuridine .
Mercuric acetate is highly toxic, owing to its water solubility and the mercury ions it releases.
Symptoms of mercury poisoning include peripheral neuropathy , skin discoloration and desquamation (peeling and/or shedding of the skin). [ 7 ] Chronic exposure may cause reduced intelligence and kidney failure . [ 8 ] | https://en.wikipedia.org/wiki/Hg(CH3COO)2 |
Mercury(II) nitrate is an inorganic compound with the chemical formula Hg ( N O 3 ) 2 . It is the mercury (II) salt of nitric acid HNO 3 . It contains mercury(II) cations Hg 2+ and nitrate anions NO − 3 , and water of crystallization H 2 O in the case of a hydrous salt. Mercury(II) nitrate forms hydrates Hg(NO 3 ) 2 · x H 2 O . Anhydrous and hydrous salts are colorless or white soluble crystalline solids that are occasionally used as reagents . Mercury(II) nitrate is made by treating mercury with hot concentrated nitric acid. Neither the anhydrous salt nor the monohydrate has been confirmed by X-ray crystallography . [ 1 ] The anhydrous material is more widely used. [ clarification needed ]
Mercury(II) nitrate is used as an oxidizing agent in organic synthesis , as a nitrification agent, as an analytical reagent in laboratories, in the manufacture of felt , and in the manufacture of mercury fulminate . [ 2 ] An alternative qualitative Zeisel test can be done with the use of mercury(II) nitrate instead of silver nitrate, leading to the formation of scarlet red mercury(II) iodide . [ 3 ]
Mercury compounds are highly toxic. The use of this compound by hatters and the subsequent mercury poisoning of said hatters is a common theory of where the phrase " mad as a hatter " came from. | https://en.wikipedia.org/wiki/Hg(NO3)2 |
Mercury(II) hydroxide or mercuric hydroxide is the metal hydroxide with the chemical formula Hg(OH) 2 . The compound has not been isolated in pure form, although it has been the subject of several studies. [ 1 ] Attempts to isolate Hg(OH) 2 yield yellow solid HgO .
The solid has been produced by irradiating a frozen mixture of mercury, oxygen and hydrogen. The mixture had been produced by evaporating mercury atoms at 50 °C into a gas consisting of neon, argon or deuterium (in separate experiments) plus 2 to 8% hydrogen and 0.2 to 2.0% oxygen. The mixture was then condensed at 5 kelvins onto a caesium iodide window, through which it could be irradiated. [ 2 ] | https://en.wikipedia.org/wiki/Hg(OH)2 |
Mercury(I) bromide or mercurous bromide is the chemical compound composed of mercury and bromine with the formula Hg 2 Br 2 . It changes color from white to yellow when heated [ 1 ] and fluoresces a salmon color when exposed to ultraviolet light. It has applications in acousto-optical devices . [ 4 ]
A very rare mineral form is called kuzminite and has the chemical formula Hg 2 (Br,Cl) 2 .
Mercury(I) bromide is prepared by the oxidation of elemental mercury with elemental bromine or by adding sodium bromide to a solution of mercury(I) nitrate . [ 1 ] It decomposes to mercury(II) bromide and elemental mercury [ when? ] . [ 4 ]
In common with other Hg(I) (mercurous) compounds which contain linear X-Hg-Hg-X units, Hg 2 Br 2 contains linear BrHg 2 Br units with an Hg-Hg bond length of 249 pm (Hg-Hg in the metal is 300 pm) and an Hg-Br bond length of 271 pm. [ 5 ] The overall coordination of each Hg atom is octahedral as, in addition to the two nearest neighbours, there are four other Br atoms at 332 pm. [ 5 ] The compound is often formulated as Hg 2 2+ 2Br − , [ 6 ] although it is actually a molecular compound. | https://en.wikipedia.org/wiki/Hg2Br2 |
Mercury(I) chloride is the chemical compound with the formula Hg 2 Cl 2 . Also known as the mineral calomel [ 4 ] (a rare mineral) or mercurous chloride , this dense white or yellowish-white, odorless solid is the principal example of a mercury (I) compound. It is a component of reference electrodes in electrochemistry . [ 5 ] [ 6 ]
The name calomel is thought to come from the Greek καλός "beautiful", and μέλας "black"; or καλός and μέλι "honey" from its sweet taste. [ 4 ] The "black" name (somewhat surprising for a white compound) is probably due to its characteristic disproportionation reaction with ammonia , which gives a spectacular black coloration due to the finely dispersed metallic mercury formed. It is also referred to as the mineral horn quicksilver or horn mercury . [ 4 ]
Calomel was taken internally and used as a laxative, [ 4 ] for example to treat George III in 1801, and disinfectant, as well as in the treatment of syphilis, until the early 20th century. Until fairly recently, [ when? ] it was also used as a horticultural fungicide, most notably as a root dip to help prevent the occurrence of clubroot amongst crops of the family Brassicaceae . [ 7 ]
Mercury became a popular remedy for a variety of physical and mental ailments during the age of " heroic medicine ". It was prescribed by doctors in America throughout the 18th century, and during the revolution, to make patients regurgitate and release their body from "impurities". Benjamin Rush was a well-known advocate of mercury in medicine and used calomel to treat sufferers of yellow fever during its outbreak in Philadelphia in 1793. Calomel was given to patients as a purgative or cathartic until they began to salivate and was often administered to patients in such great quantities that their hair and teeth fell out. [ 8 ]
Yellow fever was also treated with calomel. [ 9 ]
Lewis and Clark brought calomel on their expedition. Researchers used that same mercury, found deep in latrine pits, to retrace the expedition's routes and campsites. [ 10 ]
Mercury is unique among the group 12 metals for its ability to form the M–M bond so readily. Hg 2 Cl 2 is a linear molecule. The mineral calomel crystallizes in the tetragonal system, with space group I4/m 2/m 2/m.
The Hg–Hg bond length of 253 pm (Hg–Hg in the metal is 300 pm) and the Hg–Cl bond length in the linear Hg 2 Cl 2 unit is 243 pm. [ 11 ] The overall coordination of each Hg atom is octahedral as, in addition to the two nearest neighbours, there are four other Cl atoms at 321 pm. Longer mercury polycations exist.
Mercurous chloride forms by the reaction of elemental mercury and mercuric chloride:
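Hg + HgCl 2 → Hg 2 Cl 2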
It can be prepared via metathesis reaction involving aqueous mercury(I) nitrate using various chloride sources including NaCl or HCl.
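For example, with sodium chloride:
Hg 2 (NO 3 ) 2 + 2 NaCl → Hg 2 Cl 2 + 2 NaNO 3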
Ammonia causes Hg 2 Cl 2 to disproportionate :
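Hg 2 Cl 2 + 2 NH 3 → Hg + Hg(NH 2 )Cl + NH 4 Cl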
Mercurous chloride is employed extensively in electrochemistry , taking advantage of the ease of its oxidation and reduction reactions. The calomel electrode is a reference electrode , especially in older publications. Over the past 50 years, it has been superseded by the silver/silver chloride (Ag/AgCl) electrode. Although mercury electrodes have been widely abandoned due to the dangerous nature of mercury , many chemists believe they are still more accurate and are not dangerous as long as they are handled properly. The experimental potentials of calomel electrodes vary little from literature values, whereas other electrodes can vary by 70 to 100 millivolts. [ citation needed ]
Mercurous chloride decomposes into mercury(II) chloride and elemental mercury upon exposure to UV light.
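Hg 2 Cl 2 → HgCl 2 + Hg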
The formation of Hg can be used to calculate the number of photons in the light beam, by the technique of actinometry .
By utilizing a light reaction in the presence of mercury(II) chloride and ammonium oxalate , mercury(I) chloride, ammonium chloride and carbon dioxide are produced.
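2 HgCl 2 + (NH 4 ) 2 C 2 O 4 → Hg 2 Cl 2 + 2 NH 4 Cl + 2 CO 2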
This particular reaction was discovered by J. M. Eder (hence the name Eder reaction ) in 1880 and reinvestigated by W. E. Rosevaere in 1929. [ 12 ]
Mercury(I) bromide , Hg 2 Br 2 , is light yellow, whereas mercury(I) iodide , Hg 2 I 2 , is greenish in colour. Both are poorly soluble. Mercury(I) fluoride is unstable in the absence of a strong acid.
Mercurous chloride is toxic , although due to its low solubility in water it is generally less dangerous than its mercuric chloride counterpart. It was used in medicine as a diuretic and purgative (laxative) in the United States from the late 1700s through the 1860s. Calomel was also a common ingredient in teething powders in Britain up until 1954, causing widespread mercury poisoning in the form of pink disease , which at the time had a mortality rate of 1 in 10. [ 13 ] These medicinal uses were later discontinued when the compound's toxicity was discovered.
It has also found uses in cosmetics as soaps and skin lightening creams, but these preparations are now illegal to manufacture or import in many countries including the US, Canada, Japan and the European Union. [ 14 ] A study of workers involved in the production of these preparations showed that the sodium salt of 2,3-dimercapto-1-propanesulfonic acid (DMPS) was effective in lowering the body burden of mercury and in decreasing the urinary mercury concentration to normal levels. [ 15 ] | https://en.wikipedia.org/wiki/Hg2Cl2 |
Mercury(I) fluoride or mercurous fluoride is the chemical compound composed of mercury and fluorine with the formula Hg 2 F 2 . It consists of small yellow cubic crystals, which turn black when exposed to light. [ 1 ]
Mercury(I) fluoride is prepared by the reaction of mercury(I) carbonate with hydrofluoric acid :
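Hg 2 CO 3 + 2 HF → Hg 2 F 2 + CO 2 + H 2 O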
When added to water, mercury(I) fluoride hydrolyzes to elemental liquid mercury, mercury(II) oxide , and hydrofluoric acid: [ 1 ]
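Hg 2 F 2 + H 2 O → Hg + HgO + 2 HF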
It can be used in the Swarts reaction to convert alkyl halides into alkyl fluorides: [ 4 ]
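Hg 2 F 2 + 2 CH 3 I → 2 CH 3 F + Hg 2 I 2 (illustrated here with methyl iodide; the substrate and the mercurous by-product are schematic assumptions)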
In common with other Hg(I) (mercurous) compounds which contain linear X-Hg-Hg-X units, Hg 2 F 2 contains linear FHg 2 F units with an Hg-Hg bond length of 251 pm (Hg-Hg in the metal is 300 pm) and an Hg-F bond length of 214 pm. [ 5 ] The overall coordination of each Hg atom is a distorted octahedron ; in addition to the bonded F and other Hg of the molecule, there are four other F atoms at 272 pm. [ 5 ] The compound is often formulated as Hg 2+ 2 [F − ] 2 . [ 6 ] | https://en.wikipedia.org/wiki/Hg2F2 |
Mercury(I) iodide is a chemical compound of mercury and iodine . The chemical formula is Hg 2 I 2 . It is photosensitive and decomposes easily to mercury and HgI 2 .
Mercury(I) iodide can be prepared by directly reacting mercury and iodine.
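2 Hg + I 2 → Hg 2 I 2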
In common with other Hg(I) (mercurous) compounds which contain linear X-Hg-Hg-X units, Hg 2 I 2 contains linear IHg 2 I units with an Hg-Hg bond length of 272 pm (Hg-Hg in the metal is 300 pm) and an Hg-I bond length of 268 pm. [ 2 ] The overall coordination of each Hg atom is octahedral as, in addition to the two nearest neighbours, there are four other I atoms at 351 pm. [ 2 ] The compound is often formulated as Hg 2 2+ 2I − . [ 3 ]
Mercury(I) iodide was a commonly used as a drug in the 19th century, sometimes under the contemporary name of protiodide of mercury . It was used to treat a wide range of conditions; everything from acne to kidney disease and in particular was the treatment of choice for syphilis . It was available over the counter at any drugstore in the world, the most common form being a concoction of protiodide, licorice , glycerin and marshmallow . [ citation needed ]
Taken orally, and in low doses, protiodide causes excessive salivation, fetid breath, spongy and bleeding gums and sore teeth . Excessive use or an overdose causes physical weakness, loss of teeth, hemolysing (destruction of the red blood cells ) of the blood and necrosis of the bones and tissues of the body. Early signs of an overdose or excessive use are muscular tremors , chorea , and locomotor ataxia . Violent bloody vomiting and voiding also occur.
Protiodide is banned as a medication , even though it persisted in use as a quack remedy until the early 20th century. | https://en.wikipedia.org/wiki/Hg2I2 |
Mercury(I) oxide , also known as mercurous oxide , is an inorganic metal oxide with the chemical formula Hg 2 O.
It is a brown/black powder, insoluble in water but soluble in nitric acid . With hydrochloric acid , it reacts to form calomel, Hg 2 Cl 2 . [ 4 ] Mercury(I) oxide is toxic but without taste or smell. It is chemically unstable and converts to mercury(II) oxide and mercury metal.
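These reactions correspond to:
Hg 2 O + 2 HCl → Hg 2 Cl 2 + H 2 O
Hg 2 O → HgO + Hg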
| https://en.wikipedia.org/wiki/Hg2O |
Mercury(I) sulfate , commonly called mercurous sulphate ( UK ) or mercurous sulfate ( US ) is the chemical compound Hg 2 SO 4 . [ 3 ] Mercury(I) sulfate is a metallic compound that is a white, pale yellow or beige powder. [ 4 ] It is a metallic salt of sulfuric acid formed by replacing both hydrogen atoms with mercury(I). It is highly toxic; it could be fatal if inhaled, ingested, or absorbed through the skin.
In the crystal, mercurous sulfate is made up of Hg 2 2+ center with an Hg-Hg distance of about 2.50 Å. The SO 4 2− anions form both long and short Hg-O bonds ranging from 2.23 to 2.93 Å. [ 5 ]
Focusing on the shorter Hg-O bonds, the Hg – Hg – O bond angle is 165°±1°. [ 6 ] [ 7 ]
One way to prepare mercury(I) sulfate is to mix an acidic solution of mercury(I) nitrate with a 1:6 sulfuric acid solution: [ 8 ] [ 9 ]
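Hg 2 (NO 3 ) 2 + H 2 SO 4 → Hg 2 SO 4 + 2 HNO 3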
It can also be prepared by reacting an excess of mercury with concentrated sulfuric acid : [ 8 ]
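2 Hg + 2 H 2 SO 4 → Hg 2 SO 4 + SO 2 + 2 H 2 O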
Mercury(I) sulfate is often used in electrochemical cells . [ 10 ] [ 11 ] [ 12 ] It was first introduced in electrochemical cells by Latimer Clark in 1872. [ 13 ] It was then alternatively [ clarification needed ] used in Weston cells made by George Augustus Hulett in 1911. [ 13 ] It has been found to be a good electrode at high temperatures above 100 °C along with silver sulfate. [ 14 ]
Mercury(I) sulfate has been found to decompose at high temperatures. The decomposition process is endothermic , and it occurs between 335 °C and 500 °C.
Mercury(I) sulfate has unique properties that make the standard cells possible. It has a rather low solubility (about one gram per liter); diffusion from the cathode system is not excessive; and its solubility is sufficient to give a large potential at a mercury electrode. [ 15 ] | https://en.wikipedia.org/wiki/Hg2O4S |
Mercury(II) bromide or mercuric bromide is an inorganic compound with the formula HgBr 2 . [ 2 ] This white solid is a laboratory reagent. [ 3 ] [ 2 ] Like all mercury salts, it is highly toxic. [ 2 ]
Mercury(II) bromide can be produced by reaction of metallic mercury with bromine. [ 4 ]
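Hg + Br 2 → HgBr 2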
Mercury(II) bromide is used as a reagent in the Koenigs–Knorr reaction , which forms glycoside linkages on carbohydrates . [ 5 ] [ 6 ]
It is also used to test for the presence of arsenic , as recommended by the European Pharmacopoeia . [ 7 ] The arsenic in the sample is first converted to arsine gas by treatment with hydrogen . Arsine reacts with mercury(II) bromide: [ 8 ]
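AsH 3 + 3 HgBr 2 → As(HgBr) 3 + 3 HBr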
The white mercury(II) bromide will turn yellow, brown, or black if arsenic is present in the sample. [ 9 ]
Mercury(II) bromide reacts violently with elemental indium at high temperatures [ 10 ] and, when exposed to potassium , can form shock-sensitive explosive mixtures. [ 11 ] | https://en.wikipedia.org/wiki/HgBr2 |
Dimethylmercury is an extremely toxic organomercury compound with the formula ( CH 3 ) 2 Hg . A volatile, flammable, dense and colorless liquid, dimethylmercury is one of the strongest known neurotoxins . Less than 0.1 mL is capable of inducing severe mercury poisoning resulting in death. [ 2 ]
The compound was one of the earliest organometallics reported, reflecting its considerable stability. The compound was first prepared by George Buckton in 1857 by a reaction of methylmercury iodide with potassium cyanide : [ 3 ]
Later, Edward Frankland discovered that it could be synthesized by treating sodium amalgam with methyl halides :
It can also be obtained by alkylation of mercuric chloride with methyllithium :
The molecule adopts a linear structure with Hg–C bond lengths of 2.083 Å. [ 4 ]
Dimethylmercury is stable in water and reacts with mineral acids at a significant rate only at elevated temperatures, [ 5 ] [ 6 ] whereas the corresponding organocadmium and organozinc compounds (and most metal alkyls in general) hydrolyze rapidly. The difference reflects the high electronegativity of Hg (Pauling EN = 2.00) and the low affinity of Hg(II) for oxygen ligands. The compound undergoes a redistribution reaction with mercuric chloride to give methylmercury chloride:
Whereas dimethylmercury is a volatile liquid , methylmercury chloride is a crystalline solid . [ 7 ]
Dimethylmercury has few practical applications because of the risks involved. It has been studied for reactions involving bonding methylmercury cations to target molecules, forming potent bactericides , but methylmercury's bioaccumulation and ultimate toxicity have led to it being largely abandoned in favor of the less toxic ethylmercury and diethylmercury compounds, which perform a similar function without the bioaccumulation hazard. [ citation needed ]
In toxicology , it still finds limited use as a reference toxin. It is also used to calibrate NMR instruments for detection of mercury (δ 0 ppm for 199 Hg NMR), although diethylmercury and less toxic mercury salts are now preferred. [ 8 ] [ 9 ] [ 10 ]
Around 1960, Phil Pomerantz, who worked at the Bureau of Naval Weapons , suggested that dimethylmercury be used as a fuel with red fuming nitric acid . [ 11 ] This was never done, although it did lead to testing a red fuming nitric acid - unsymmetrical dimethylhydrazine rocket with elemental mercury being injected into the combustion chamber at the Naval Ordnance Test Station . [ 11 ]
Dimethylmercury is extremely toxic and dangerous to handle. Absorption of doses as low as 0.1 mL can result in severe mercury poisoning. [ 2 ] The risks are enhanced because of the compound's high vapor pressure . [ 2 ] Since it is highly lipophilic, it absorbs through the skin and into body fat very easily and can permeate many materials, including many plastics and rubber compounds. [ citation needed ]
Permeation tests showed that several types of disposable latex or polyvinyl chloride gloves (typically, about 0.1 mm thick), commonly used in most laboratories and clinical settings, had high and maximal rates of permeation by dimethylmercury within 15 seconds. [ 12 ] The American Occupational Safety and Health Administration (OSHA) advises handling dimethylmercury with highly resistant laminated gloves with an additional pair of abrasion-resistant gloves worn over the laminate pair, and also recommends using a face shield and working in a fume hood . [ 2 ] [ 13 ]
Dimethylmercury is metabolized after several days to methylmercury . [ 12 ] Methylmercury crosses the blood–brain barrier easily, probably owing to formation of a complex with cysteine . [ 13 ] It easily absorbs into the body and has a tendency to bioaccumulate . The symptoms of poisoning may be delayed by months, so that by the time a diagnosis is made it is often too late, or nearly too late, for treatment to succeed. [ 13 ] Methylmercury poisoning is also known as Minamata disease . | https://en.wikipedia.org/wiki/HgC2H6 |
Mercury(II) chloride ( mercury bichloride [ citation needed ] , mercury dichloride , mercuric chloride ), historically also sulema or corrosive sublimate , [ 2 ] is the inorganic chemical compound of mercury and chlorine with the formula HgCl 2 , used as a laboratory reagent . It is a white crystalline solid and a molecular compound that is very toxic to humans. Once used as a first-line treatment for syphilis , it has been replaced by the more effective and less toxic procaine penicillin since at least 1948.
Mercuric chloride is obtained by the action of chlorine on mercury or on mercury(I) chloride . It can also be produced by the addition of hydrochloric acid to a hot, concentrated solution of mercury(I) compounds such as the nitrate : [ 2 ]
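The chlorination routes can be written:
Hg + Cl2 → HgCl2
Hg2Cl2 + Cl2 → 2 HgCl2
For the hydrochloric acid route, a balanced equation consistent with the description, in which nitrate oxidizes mercury(I) to mercury(II) (offered as a reconstruction, not the source's own equation), is:
Hg2(NO3)2 + 4 HCl → 2 HgCl2 + 2 NO2 + 2 H2O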
Heating a mixture of solid mercury(II) sulfate and sodium chloride also affords volatile HgCl 2 , which can be separated by sublimation . [ 2 ]
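The solid-state reaction can be written:
HgSO4 + 2 NaCl → HgCl2 + Na2SO4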
Mercuric chloride is not a salt composed of discrete ions, but it is made of linear triatomic molecules, hence its tendency to sublime . In the crystal, each mercury atom is bonded to two chloride ligands with Hg–Cl distance of 2.38 Å ; six other chlorides are more distant at 3.38 Å. [ 3 ]
Its solubility in water increases from 6% at 20 °C (68 °F) to 36% at 100 °C (212 °F).
The main application of mercuric chloride is as a catalyst for the conversion of acetylene to vinyl chloride , the precursor to polyvinyl chloride :
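The hydrochlorination over the mercuric chloride catalyst is:
C2H2 + HCl → CH2=CHCl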
For this application, the mercuric chloride is supported on carbon in concentrations of about 5 weight percent. This technology has been eclipsed by the thermal cracking of 1,2-dichloroethane . Other significant applications of mercuric chloride include its use as a depolarizer in batteries and as a reagent in organic synthesis and analytical chemistry (see below). [ 4 ] It is being used in plant tissue culture for surface sterilisation of explants such as leaf or stem nodes.
Mercuric chloride is occasionally used to form an amalgam with metals, such as aluminium . [ 5 ] Upon treatment with an aqueous solution of mercuric chloride, aluminium strips quickly become covered by a thin layer of the amalgam. Normally, aluminium is protected by a thin layer of oxide, thus making it inert. Amalgamated aluminium exhibits a variety of reactions not observed for aluminium itself. For example, amalgamated aluminium reacts with water, generating Al(OH) 3 and hydrogen gas. Halocarbons react with amalgamated aluminium in the Barbier reaction . These alkylaluminium compounds are nucleophilic and can be used in a similar fashion to the Grignard reagent . Amalgamated aluminium is also used as a reducing agent in organic synthesis. Zinc is also commonly amalgamated using mercuric chloride.
Mercuric chloride is used to remove dithiane groups attached to a carbonyl in an umpolung reaction. This reaction exploits the high affinity of Hg 2+ for anionic sulfur ligands.
Mercuric chloride may be used as a stabilising agent for chemicals and analytical samples. Care must be taken to ensure that detected mercuric chloride does not eclipse the signals of other components in the sample, such as is possible in gas chromatography . [ 6 ]
Around 900, the authors of the Arabic writings attributed to Jabir ibn Hayyan (Latin: Geber) and the Persian physician and alchemist Abu Bakr al-Razi (Latin: Rhazes) were experimenting with sal ammoniac (ammonium chloride), which when it was distilled together with vitriol (hydrated sulfates of various metals) produced hydrogen chloride . [ 7 ] It is possible that in one of his experiments, al-Razi stumbled upon a primitive method to produce hydrochloric acid . [ 8 ] However, it appears that in most of these early experiments with chloride salts , the gaseous products were discarded, and hydrogen chloride may have been produced many times before it was discovered that it can be put to chemical use. [ 9 ]
One of the first such uses of hydrogen chloride was in the synthesis of mercury(II) chloride (corrosive sublimate), whose production from the heating of mercury either with alum and ammonium chloride or with vitriol and sodium chloride was first described in the De aluminibus et salibus ("On Alums and Salts"). [ 10 ] This eleventh- or twelfth-century Arabic alchemical text is anonymous in most manuscripts, though some manuscripts attribute it to Hermes Trismegistus , and a few falsely attribute it to Abu Bakr al-Razi. [ 11 ] It was translated into Hebrew and twice into Latin , with one Latin translation by Gerard of Cremona (1144–1187). [ 12 ]
In the process described in the De aluminibus et salibus , hydrochloric acid started to form, but it immediately reacted with the mercury to produce mercury(II) chloride. Thirteenth-century Latin alchemists , for whom the De aluminibus et salibus was one of the main reference works, were fascinated by the chlorinating properties of mercury(II) chloride, and they eventually discovered that when the metals are eliminated from the process of heating vitriols, alums, and salts, strong mineral acids can directly be distilled. [ 13 ]
Mercury(II) chloride was used as a photographic intensifier to produce positive pictures in the collodion process of the 1800s. When applied to a negative, the mercury(II) chloride whitens and thickens the image, thereby increasing the opacity of the shadows and creating the illusion of a positive image. [ 14 ]
For the preservation of anthropological and biological specimens during the late 19th and early 20th centuries, objects were dipped in or were painted with a "mercuric solution". This was done to prevent the specimens' destruction by moths, mites and mold. Objects in drawers were protected by scattering crystalline mercuric chloride over them. [ 15 ] It finds minor use in tanning, and wood was preserved by kyanizing (soaking in mercuric chloride). [ 16 ] Mercuric chloride was one of the three chemicals used for railroad tie wood treatment between 1830 and 1856 in Europe and the United States. Only limited numbers of railroad ties were treated in the United States until concerns over lumber shortages arose in the 1890s. [ 17 ] The process was generally abandoned because mercuric chloride was water-soluble and not effective for the long term, as well as being highly poisonous. Furthermore, alternative treatment processes, such as copper sulfate , zinc chloride , and ultimately creosote , were found to be less toxic. Limited kyanizing was used for some railroad ties in the 1890s and early 1900s. [ 18 ]
Mercuric chloride was a common over-the-counter disinfectant in the early twentieth century, recommended for everything from fighting measles germs [ 19 ] to protecting fur coats [ 20 ] and exterminating red ants. [ 21 ] A New York physician, Carlin Philips, wrote in 1913 that "it is one of our most popular and effective household antiseptics", but so corrosive and poisonous that it should only be available by prescription. [ 22 ] A group of physicians in Chicago made the same demand later the same month. The product frequently caused accidental poisonings and was used as a suicide method. [ 23 ]
It was used to disinfect wounds by Arab physicians in the Middle Ages . [ 24 ] It continued to be used by Arab physicians into the twentieth century, until modern medicine deemed it unsafe for use.
Syphilis was frequently treated with mercuric chloride before the advent of antibiotics . It was inhaled, ingested, injected, and applied topically. Both mercuric-chloride treatment for syphilis and poisoning during the course of treatment were so common that the latter's symptoms were often confused with those of syphilis. This use of "salts of white mercury" is referred to in the English -language folk song " The Unfortunate Rake ". [ 25 ]
Yaws was treated with mercuric chloride (labeled as Corrosive Sublimate) before the advent of antibiotics . It was applied topically to alleviate ulcerative symptoms. Evidence of this is found in Jack London's book The Cruise of the Snark in the chapter entitled "The Amateur M.D."
Between 1901 and 1904 the US Marine Hospital Service quarantined and engaged in an extensive disinfection program of San Francisco's Chinatown in response to an epidemic of bubonic plague . This program forced the closure of over 14,000 rooms and the eviction of thousands of Chinese residents whose dwellings were rendered toxic and uninhabitable from the disinfection program. Long-term mercury pollution is still a concern for construction workers in Chinatown to this day. [ 26 ]
Mercury dichloride is a highly toxic compound, [ 37 ] both acutely and as a cumulative poison. Its toxicity is due not just to its mercury content but also to its corrosive properties, which can cause serious internal damage, including ulcers to the stomach, mouth, and throat, and corrosive damage to the intestines. Mercuric chloride also tends to accumulate in the kidneys, causing severe corrosive damage which can lead to acute kidney failure . However, mercuric chloride, like all inorganic mercury salts, does not cross the blood–brain barrier as readily as organic mercury, although it is known to be a cumulative poison.
Common side effects of acute mercuric chloride poisoning include burning sensations in the mouth and throat, stomach pain, abdominal discomfort, lethargy, vomiting of blood, corrosive bronchitis, severe irritation to the gastrointestinal tract, and kidney failure. Chronic exposure can lead to symptoms more common with mercury poisoning, such as insomnia, delayed reflexes, excessive salivation, bleeding gums, fatigue, tremors, and dental problems.
Acute exposure to large amounts of mercuric chloride can cause death in as little as 24 hours, usually due to acute kidney failure or damage to the gastrointestinal tract. In other cases, victims of acute exposure have taken up to two weeks to die. [ 38 ] | https://en.wikipedia.org/wiki/HgCl2 |
Mercury(II) fluoride is a chemical compound of mercury and fluorine with the molecular formula HgF 2 , consisting of one mercury atom bonded to two fluorine atoms.
Mercury(II) fluoride is most commonly produced by the reaction of mercury(II) oxide and hydrogen fluoride :
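The balanced equation is:
HgO + 2 HF → HgF2 + H2O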
Mercury(II) fluoride can also be produced through the fluorination of mercury(II) chloride :
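A balanced form, assuming the chlorine is displaced as Cl2, is:
HgCl2 + F2 → HgF2 + Cl2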
or of mercury(II) oxide : [ 3 ]
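The corresponding balanced equation is:
2 HgO + 2 F2 → 2 HgF2 + O2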
with oxygen as byproduct.
Mercury(II) fluoride is a selective fluorination agent. [ 4 ] [ 5 ]
This inorganic compound –related article is a stub . You can help Wikipedia by expanding it . | https://en.wikipedia.org/wiki/HgF2 |
Mercury(IV) fluoride , HgF 4 , is a purported compound, the first to be reported with mercury in the +4 oxidation state . Mercury, like the other group 12 elements ( cadmium and zinc ), has an s 2 d 10 electron configuration and generally only forms bonds involving its 6s orbital. This means that the highest oxidation state mercury normally attains is +2, and for this reason it is sometimes considered a post-transition metal instead of a transition metal . HgF 4 was first reported from experiments in 2007, but its existence remains disputed; experiments conducted in 2008 could not replicate the compound. [ 1 ] [ 2 ]
Speculation about higher oxidation states for mercury had existed since the 1970s, and theoretical calculations in the 1990s predicted that it should be stable in the gas phase, with a square-planar geometry consistent with a formal d 8 configuration. However, experimental proof remained elusive until 2007, when HgF 4 was first prepared using solid neon and argon for matrix isolation at a temperature of 4 K . The compound was detected using infrared spectroscopy . [ 3 ] [ 4 ]
However, the compound's synthesis has not been replicated in other labs, and more recent theoretical studies cast doubt on the possible existence of mercury(IV) (and copernicium(IV)) fluoride. Dirac-Hartree-Fock computations including both relativistic effects and electron correlation suggest that an HgF 4 compound would be unbound by about 2 eV (and CnF 4 by 14 eV). [ 5 ]
Theoretical studies suggest that mercury is unique among the natural elements of group 12 in forming a tetrafluoride , and attribute this observation to relativistic effects . According to calculations, the tetrafluorides of the "less relativistic" elements cadmium and zinc are unstable and eliminate a fluorine molecule, F 2 , to form the metal difluoride complex. [ citation needed ] On the other hand, the tetrafluoride of the "more relativistic" synthetic element 112, copernicium , is predicted to be more stable. [ 6 ] [ failed verification ]
Subsequent density functional theory and coupled cluster calculations indicated that bonding in HgF 4 (if it really exists) involves d orbitals. This has led to the suggestion that mercury should be considered a transition metal (the group 12 metals are sometimes excluded from the transition metals because they do not oxidize beyond +2). [ 7 ] Chemical historian William B. Jensen has argued that the compound alone is insufficient to reclassify the metal, because HgF 4 represents at best a non-equilibrium transient state . [ 8 ]
HgF 4 is produced by the reaction of elemental mercury with fluorine :
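Under the matrix-isolation conditions described below, the overall reaction is:
Hg + 2 F2 → HgF4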
HgF 4 is only stable in matrix isolation at 4 K (−269 °C); upon heating, or if the HgF 4 molecules touch each other, it decomposes to mercury(II) fluoride and fluorine:
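The decomposition is:
HgF4 → HgF2 + F2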
HgF 4 is a diamagnetic , square planar molecule. The mercury atom has a formal 6s 2 5d 8 6p 6 electron configuration, and as such obeys the octet rule but not the 18-electron rule . HgF 4 is isoelectronic with the tetrafluoroaurate anion, AuF − 4 , and is valence isoelectronic with the tetrachloroaurate ( AuCl − 4 ), tetrabromoaurate ( AuBr − 4 ), and tetrachloroplatinate ( PtCl 2− 4 ) anions. | https://en.wikipedia.org/wiki/HgF4 |
Mercury(I) hydride (systematically named mercury hydride ) is an inorganic compound with the chemical formula HgH. It has not yet been obtained in bulk, hence its bulk properties remain unknown. However, molecular mercury(I) hydrides with the formulae HgH and Hg 2 H 2 have been isolated in solid gas matrices. The molecular hydrides are very unstable toward thermal decomposition . As such the compound is not well characterised, although many of its properties have been calculated via computational chemistry .
In 1979 and 1985, the Swiss chemical physicists Egger and Gerber and the Soviet chemical physicists Kolbycheva and Kolbychev independently determined on theoretical grounds that it is feasible to develop a mercury(I) hydride molecular laser.
Mercury(I) hydride is an unstable gas [ 1 ] and is the heaviest group 12 monohydride. In mercury(I) hydride, the formal oxidation states of hydrogen and mercury are −1 and +1, respectively, because the electronegativity of mercury is lower than that of hydrogen. The stability of the diatomic metal hydrides with the formula MH (M = Zn-Hg) increases as the atomic number of M increases.
The Hg-H bond is very weak and therefore the compound has only been matrix isolated at temperatures up to 6 K. [ 2 ] [ 3 ] The dihydride , HgH 2 , has also been detected this way.
A related compound is dimercurane(2), or bis(hydridomercury)( Hg — Hg ), with the formula Hg 2 H 2 , which can be considered to be dimeric mercury(I) hydride. It spontaneously decomposes into the monomeric form.
The mercury centre in mercury complexes such as hydridomercury can accept or donate a single electron by association:
Because of this acceptance or donation of the electron, hydridomercury has radical character. It is a moderately reactive monoradical. | https://en.wikipedia.org/wiki/HgH |
Mercury(II) hydride (systematically named mercurane(2) and dihydridomercury ) is an inorganic compound with the chemical formula HgH 2 (also written as [HgH 2 ] ). It is both thermodynamically and kinetically unstable at ambient temperature, and as such, little is known about its bulk properties. However, at temperatures below −125 °C (−193 °F) it is kinetically stable and forms a white, crystalline solid, which was synthesized for the first time in 1951. [ 1 ]
Mercury(II) hydride is the second simplest mercury hydride (after the significantly less stable mercury(I) hydride ). Due to its instability, it has no practical industrial uses. However, in analytical chemistry , mercury(II) hydride is fundamental to certain forms of spectrometric techniques used to determine mercury content. In addition, it is investigated for its effect on high sensitivity isotope-ratio mass spectrometry methods that involve mercury, such as MC-ICP-MS , when used to compare thallium to mercury. [ 2 ]
In solid mercury(II) hydride, the HgH 2 molecules are connected by mercurophilic bonds. Trimers and a lesser proportion of dimers are detected in the vapour. Unlike solid zinc(II) and cadmium(II) hydrides, which are network solids , solid mercury(II) hydride is a covalently bound molecular solid . This is due to relativistic effects, which also account for the relatively low decomposition temperature of −125 °C. [ 3 ]
The HgH 2 molecule is linear and symmetric in the form H-Hg-H. The bond length is 1.646543 Å. The antisymmetric stretching frequency, ν 3 , of the bond is 1912.8 cm −1 (57.34473 THz) for the isotopes 202 Hg and 1 H. [ 3 ] The energy needed to break the first Hg-H bond in HgH 2 is 70 kcal/mol. The second bond, in the resulting HgH, is much weaker, needing only 8.6 kcal/mol to break. Reacting two hydrogen atoms releases 103.3 kcal/mol, and so HgH 2 formation from hydrogen molecules and Hg gas is endothermic at 24.2 kcal/mol. [ 3 ]
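As a rough consistency check on these figures (small differences reflect rounding in the sources): dissociating H2 costs about 103.3 kcal/mol, while forming the two Hg-H bonds returns about 70 + 8.6 = 78.6 kcal/mol, giving ΔH ≈ 103.3 − 78.6 ≈ +24.7 kcal/mol, in line with the quoted endothermicity of 24.2 kcal/mol.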
Alireza Shayesteh et al. conjectured that bacteria containing the flavoprotein mercuric reductase , such as Escherichia coli , can in theory reduce soluble mercury compounds to volatile HgH 2 , which should have a transient existence in nature. [ 3 ]
Mercury(II) hydride may be prepared by the reduction of mercury(II) chloride . In this process, mercury(II) chloride and a hydride salt equivalent react to produce mercury(II) hydride according to the following equations, which depend on the stoichiometry of the reaction:
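Plausible balanced forms, assuming the hydride source is lithium aluminium hydride (the reducing agent used in the original 1951 synthesis described below) or a simple hydride salt, and written as reconstructions rather than the source's own equations:
2 HgCl2 + LiAlH4 → 2 HgH2 + LiAlCl4
HgCl2 + 2 LiH → HgH2 + 2 LiCl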
Variations of this method exist in which mercury(II) chloride is replaced by its heavier halide homologues.
Mercury(II) hydride can also be generated by direct synthesis from the elements in the gas phase or in cryogenic inert gas matrices: [ 3 ]
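Schematically:
Hg* + H2 → HgH2 (Hg* denoting mercury excited to the 1 P or 3 P state, as explained below)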
This requires excitation of the mercury atom to the 1 P or 3 P state, as atomic mercury in its ground state does not insert into the dihydrogen bond. [ 3 ] Excitation is accomplished by means of an ultraviolet laser [ 1 ] or an electric discharge. [ 3 ] The initial yield is high; however, because the product is formed in an excited state, a significant amount dissociates rapidly into mercury(I) hydride , then back into the initial reagents:
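A dissociation sequence consistent with this description is:
HgH2* → HgH + H, followed by HgH → Hg + H, the hydrogen atoms recombining to H2.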
This is the preferred method for matrix isolation research. Besides mercury(II) hydride, it also produces other mercury hydrides in lesser quantities, such as the mercury(I) hydrides (HgH and Hg 2 H 2 ).
Upon treatment with a Lewis base, mercury(II) hydride converts to an adduct. Upon treatment with a standard acid, mercury(II) hydride and its adducts convert either to a mercury salt or a mercuran(2)yl derivative and elemental hydrogen . [ citation needed ] Oxidation of mercury(II) hydride gives elemental mercury. [ citation needed ] Unless cooled below −125 °C (−193 °F), mercury(II) hydride decomposes to produce elemental mercury and hydrogen: [ 4 ]
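The decomposition is simply:
HgH2 → Hg + H2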
Mercury(II) hydride was successfully synthesized and identified in 1951 by Egon Wiberg and Walter Henle, by the reaction of mercury(II) iodide and lithium tetrahydroaluminate in a mixture of petroleum ether and tetrahydrofuran. In 1993, Legay-Sommaire announced HgH 2 production in cryogenic argon and krypton matrices with a KrF laser. [ 1 ] In 2004, solid HgH 2 was definitively synthesized, and subsequently analysed, by Xuefeng Wang and Lester Andrews , by direct matrix isolation reaction of excited mercury with molecular hydrogen. [ 4 ] In 2005, gaseous HgH 2 was synthesized by Alireza Shayesteh et al. , by the direct gas-phase reaction of excited mercury with molecular hydrogen at standard temperature; [ 5 ] and Xuefeng Wang and Lester Andrews [ 4 ] determined the structure of solid HgH 2 to be a molecular solid. | https://en.wikipedia.org/wiki/HgH2 |
Mercury(II) iodide is soluble in excess KI ( potassium iodide ), forming the soluble complex K 2 [HgI 4 ] ( potassium tetraiodomercurate(II) ), which in alkaline solution is known as Nessler's reagent .
Mercury(II) iodide is a chemical compound with the molecular formula Hg I 2 . It is typically produced synthetically but can also be found in nature as the extremely rare mineral coccinite . Unlike the related mercury(II) chloride it is hardly soluble in water (<100 ppm).
Mercury(II) iodide is produced by adding an aqueous solution of potassium iodide to an aqueous solution of mercury(II) chloride with stirring; the precipitate is filtered off, washed and dried at 70 °C.
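The precipitation is a double displacement:
HgCl2 + 2 KI → HgI2↓ + 2 KCl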
Mercury(II) iodide displays thermochromism ; when heated above 126 °C (400 K) it undergoes a phase transition , from the red alpha crystalline form to a pale yellow beta form. As the sample cools, it gradually reacquires its original colour. It has often been used for thermochromism demonstrations. [ 2 ] A third form, which is orange, is also known; this can be formed by recrystallisation and is also metastable , eventually converting back to the red alpha form. [ 3 ] The various forms can exist in a diverse range of crystal structures and as a result mercury(II) iodide possesses a surprisingly complex phase diagram . [ 4 ]
Mercury(II) iodide is used for the preparation of Nessler's reagent , which is used to detect the presence of ammonia .
Mercury(II) iodide is a semiconductor material , used in some x-ray and gamma ray detection and imaging devices operating at room temperatures. [ 5 ]
In veterinary medicine , mercury(II) iodide is used in blister ointments for exostoses , bursal enlargement, etc. [ citation needed ]
It can appear as a precipitate in many reactions. | https://en.wikipedia.org/wiki/HgI2 |
Mercury(II) sulfate , commonly called mercuric sulfate , is the chemical compound Hg S O 4 . It is an odorless salt that forms white granules or crystalline powder. In water, it hydrolyzes into a yellow, insoluble basic sulfate and sulfuric acid. [ 3 ]
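This hydrolysis is commonly written with the basic sulfate formulated as turpeth mineral, HgSO4·2HgO (a reconstruction from standard stoichiometry, not the source's own equation):
3 HgSO4 + 2 H2O → HgSO4·2HgO + 2 H2SO4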
The anhydrous compound features Hg 2+ in a highly distorted tetrahedral HgO 4 environment. Two Hg-O distances are 2.22 Å and the others are 2.28 and 2.42 Å. [ 5 ] In the monohydrate, Hg 2+ adopts a linear coordination geometry with Hg-O (sulfate) and Hg-O (water) bond lengths of 2.179 and 2.228 Å, respectively. Four weaker bonds are also observed with Hg---O distances >2.5 Å. [ 6 ]
In 1932, the Japanese chemical company Chisso Corporation began using mercury sulfate as the catalyst for the production of acetaldehyde from acetylene and water . Though it was unknown at the time, methylmercury is formed as a side product of this reaction. Exposure to and consumption of the mercury waste products, including methylmercury, that were dumped into Minamata Bay by Chisso are believed to be the cause of Minamata disease in Minamata , Japan . [ 7 ]
Mercury sulfate can be produced by treating mercury with hot concentrated sulfuric acid : [ 8 ]
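The balanced equation is:
Hg + 2 H2SO4 → HgSO4 + SO2 + 2 H2O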
Alternatively, yellow mercuric oxide also reacts with concentrated sulfuric acid: [ 9 ]
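The corresponding equation is:
HgO + H2SO4 → HgSO4 + H2O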
An acidic solution of mercury sulfate is known as Denigés' reagent . It was commonly used throughout the 20th century as a qualitative analysis reagent. If Denigés' reagent is added to a solution containing compounds that have tertiary alcohols, a yellow or red precipitate will form. [ 10 ]
Mercury sulfate, as well as other mercury(II) compounds, are commonly used as catalysts in oxymercuration-demercuration , a type of electrophilic addition reaction that results in hydration of an unsaturated compound. The hydration of an alkene gives an alcohol. The regioselectivity is that predicted by Markovnikov's rule . For an alkyne, the result is an enol , which tautomerizes to give the carbonyl. [ 11 ] At one time, this chemistry was employed commercially for the preparation of acetaldehyde from acetylene: [ 12 ]
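The overall hydration, which proceeds through the unstable enol (vinyl alcohol), is:
C2H2 + H2O → CH3CHO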
A related and specialized example is the conversion of 2,5-dimethylhexyne-2,5-diol to 2,2,5,5-tetramethyltetrahydrofuran using aqueous mercury sulfate without the addition of acid. [ 13 ]
Inhalation of HgSO 4 can result in acute poisoning: causing tightness in the chest, difficulties breathing, coughing and pain. Exposure of HgSO 4 to the eyes can cause ulceration of conjunctiva and cornea. If mercury sulfate is exposed to the skin it may cause sensitization dermatitis. Lastly, ingestion of mercury sulfate will cause necrosis, pain, vomiting, and severe purging. Ingestion can result in death within a few hours due to peripheral vascular collapse. [ 1 ]
It was used in the late 19th century to induce vomiting for medical reasons. [ 14 ] | https://en.wikipedia.org/wiki/HgO4S |
Mercury sulfide , or mercury(II) sulfide is a chemical compound composed of the chemical elements mercury and sulfur . It is represented by the chemical formula HgS. It is virtually insoluble in water. [ 4 ]
HgS is dimorphic with two crystal forms:
β-HgS precipitates as a black solid when Hg(II) salts are treated with H 2 S . The reaction is conveniently conducted with an acetic acid solution of mercury(II) acetate . With gentle heating of the slurry, the black polymorph converts to the red form. [ 6 ] β-HgS is unreactive to all but concentrated acids. [ 4 ]
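With mercury(II) acetate, for example, the precipitation can be written:
Hg(CH3COO)2 + H2S → HgS↓ + 2 CH3COOH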
Mercury is produced from the cinnabar ore by roasting in air and condensing the vapour. [ 4 ]
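The roasting reaction is:
HgS + O2 → Hg + SO2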
When α-HgS is used as a red pigment, it is known as cinnabar . The tendency of cinnabar to darken has been ascribed to conversion from red α-HgS to black β-HgS. However, β-HgS was not detected at excavations in Pompeii, where originally red walls had darkened; the darkening was instead attributed to the formation of Hg-Cl compounds (e.g., corderoite , calomel , and terlinguaite ) and calcium sulfate , gypsum. [ 7 ]
As the mercury cells used in the chlor-alkali industry ( Castner–Kellner process ) are phased out over concerns about mercury emissions, the metallic mercury from these setups is converted into mercury sulfide for underground storage.
With a band gap of 2.1 eV and good stability, HgS is a candidate material for photoelectrochemical cells . [ 8 ]
Neutralization with sulfur has been suggested for cleaning up mercury spills, but the reaction does not proceed rapidly and completely enough for emergencies. [ 9 ] | https://en.wikipedia.org/wiki/HgS |
Mercury telluride (HgTe) is a binary chemical compound of mercury and tellurium . It is a semi-metal related to the II-VI group of semiconductor materials. Alternative names are mercuric telluride and mercury(II) telluride.
HgTe occurs in nature as the mineral form coloradoite .
All properties are at standard temperature and pressure unless stated otherwise. The lattice parameter is about 0.646 nm in the cubic crystalline form. The bulk modulus is about 42.1 GPa. The thermal expansion coefficient is about 5.2×10 −6 /K. The static and dynamic dielectric constants are 20.8 and 15.1, respectively. The thermal conductivity is low at 2.7 W/(m·K). HgTe bonds are weak, leading to low hardness values. The hardness is 2.7×10 7 kg/m 2 . [ 1 ] [ 2 ] [ 3 ]
N-type doping can be achieved with elements such as boron , aluminium , gallium , or indium . Iodine and iron will also dope n-type. HgTe is naturally p-type due to mercury vacancies. P-type doping is also achieved by introducing zinc, copper, silver, or gold. [ 1 ] [ 2 ]
Mercury telluride was the first topological insulator discovered, in 2007. Topological insulators cannot support an electric current in the bulk, but electronic states confined to the surface can serve as charge carriers . [ 5 ]
HgTe bonds are weak. Their enthalpy of formation , around −32 kJ/mol, is less than a third of the value for the related compound cadmium telluride. HgTe is easily etched by acids, such as hydrobromic acid . [ 1 ] [ 2 ]
Bulk growth is from a mercury and tellurium melt in the presence of a high mercury vapour pressure. HgTe can also be grown epitaxially, for example, by sputtering or by metalorganic vapour phase epitaxy . [ 1 ] [ 2 ]
Nanoparticles of mercury telluride can be obtained via cation exchange from cadmium telluride nanoplatelets. [ 6 ] | https://en.wikipedia.org/wiki/HgTe |
The HiFive Unleashed , or HFU, is a discontinued single-board computer development board created by SiFive with the intention of increasing exposure and adoption of the open-source RISC-V architecture. [ 1 ] [ 2 ] [ 3 ]
The HFU is capable of running the Debian Linux distribution and Quake II . [ 4 ] [ 5 ]
This computing article is a stub . You can help Wikipedia by expanding it . | https://en.wikipedia.org/wiki/HiFive_Unleashed |
HiWish is a program created by NASA so that anyone can suggest a place for the HiRISE camera on the Mars Reconnaissance Orbiter to photograph. [ 1 ] [ 2 ] [ 3 ] It was started in January 2010. In the first few months of the program 3000 people signed up to use HiRISE. [ 4 ] [ 5 ] The first images were released in April 2010. [ 6 ] Over 12,000 suggestions were made by the public; suggestions were made for targets in each of the 30 quadrangles of Mars. Selected images released were used for three talks at the 16th Annual International Mars Society Convention. Below are some of the over 4,224 images that have been released from the HiWish program as of March 2016. [ 7 ]
Some landscapes look just like glaciers moving out of mountain valleys on Earth. Some have a hollowed-out appearance, looking like a glacier after almost all the ice has disappeared. What is left are the moraines—the dirt and debris carried by the glacier. The center is hollowed out because the ice is mostly gone. [ 8 ] These supposed alpine glaciers have been called glacier-like forms or glacier-like flows (GLF). [ 9 ] Glacier-like form is the later and perhaps more accurate term, because we cannot be sure the structure is currently moving. [ 10 ]
The radial and concentric cracks visible here are common when forces penetrate a brittle layer, such as a rock thrown through a glass window. These particular fractures were probably created by something emerging from below the brittle Martian surface. Ice may have accumulated under the surface in a lens shape, thus making these cracked mounds. Ice, being less dense than rock, pushed upward on the surface and generated these spider-web-like patterns. A similar process creates similar-sized mounds in arctic tundra on Earth. Such features are called "pingos", an Inuit word. [ 11 ] Pingos would contain pure water ice, so they could be sources of water for future colonists of Mars. Many features that look like the pingos on the Earth are found in Utopia Planitia (~35-50° N; ~80-115° E). [ 12 ]
There is a great deal of evidence that water once flowed in river valleys on Mars. Pictures from orbit show winding valleys, branched valleys, and even meanders with oxbow lakes . Mars probably once even had an ocean. [ 13 ] [ 14 ] [ 15 ] Some are visible in the pictures below.
Streamlined shapes represent more evidence of past flowing water on Mars. Water shaped features into streamlined shapes.
Many locations on Mars have sand dunes . The dunes are covered by a seasonal carbon dioxide frost that forms in early autumn and remains until late spring. Many martian dunes strongly resemble terrestrial dunes but images acquired by the High-Resolution Imaging Science Experiment on the Mars Reconnaissance Orbiter have shown that martian dunes in the north polar region are subject to modification via grainflow triggered by seasonal CO 2 sublimation , a process not seen on Earth. Many dunes are black because they are derived from the dark volcanic rock basalt. Extraterrestrial sand seas such as those found on Mars are referred to as "undae" from the Latin for waves.
Some of the targets suggested became possible sites for a Rover Mission in 2020. The targets were in Firsoff (crater) and Holden Crater . These locations were picked as two of 26 locations considered for a mission that will look for signs of life and gather samples for a later return to Earth. [ 16 ] [ 17 ] [ 18 ]
Recurrent slope lineae are small dark streaks on slopes that elongate in warm seasons. They may be evidence of liquid water. [ 20 ] [ 21 ] [ 22 ] However, there remains debate about whether water or much water is needed. [ 23 ] [ 24 ] [ 25 ] [ 26 ]
Many places on Mars show rocks arranged in layers. Rock can form layers in a variety of ways. Volcanoes, wind, or water can produce layers. [ 28 ] Layers can be hardened by the action of groundwater.
The layers in this group, found in craters, all come from the Arabia quadrangle .
This next group of layered terrain comes from the Louros Valles in the Coprates quadrangle .
Martian gullies are incised networks of narrow channels and their associated downslope sediment deposits, found on the planet of Mars . They are named for their resemblance to terrestrial gullies . First discovered on images from Mars Global Surveyor , they occur on steep slopes, especially on the walls of craters. Usually, each gully has a dendritic alcove at its head, a fan-shaped apron at its base, and a single thread of incised channel linking the two, giving the whole gully an hourglass shape. [ 29 ] They are believed to be relatively young because they have few, if any craters.
On the basis of their form, aspects, positions, and location amongst and apparent interaction with features thought to be rich in water ice, many researchers believed that the processes carving the gullies involve liquid water. However, this remains a topic of active research. Some later observations suggest that dry ice may be involved in the development of gullies today. [ 30 ]
Much of the Martian surface is covered with a thick ice-rich, mantle layer that has fallen from the sky a number of times in the past. [ 31 ] [ 32 ] [ 33 ] In some places a number of layers are visible in the mantle. [ 34 ]
It fell as snow and ice-coated dust. There is good evidence that this mantle is ice-rich. The shapes of the polygons common on many surfaces suggest ice-rich soil. High levels of hydrogen (probably from water) have been found with Mars Odyssey . [ 35 ] [ 36 ] [ 37 ] [ 38 ] [ 39 ] Thermal measurements from orbit suggest ice. [ 40 ] [ 41 ] The Phoenix spacecraft, which landed in a field of polygons, discovered water ice through direct observation. [ 42 ] [ 43 ] In fact, its landing rockets exposed pure ice. Theory had predicted that ice would be found under a few cm of soil. This mantle layer is called "latitude dependent mantle" because its occurrence is related to the latitude. It is this mantle that cracks and then forms polygonal ground. This cracking of ice-rich ground is predicted based on physical processes. [ 44 ] [ 45 ] [ 46 ] [ 47 ] [ 48 ] [ 49 ] [ 50 ]
Polygonal, patterned ground is quite common in some regions of Mars. [ 51 ] [ 52 ] [ 53 ] [ 54 ] [ 49 ] [ 55 ] [ 56 ] It is commonly believed to be caused by the sublimation of ice from the ground. Sublimation is the direct change of solid ice to a gas. This is similar to what happens to dry ice on the Earth. Places on Mars that display polygonal ground may indicate where future colonists can find water ice. Patterned ground forms in a mantle layer, called latitude dependent mantle , that fell from the sky when the climate was different. [ 31 ] [ 32 ] [ 57 ] [ 58 ]
HiRISE images taken under the HiWish program revealed triangular depressions in Milankovic Crater that researchers determined contain vast amounts of ice lying under only 1–2 meters of soil.
These depressions contain water ice in the straight wall that faces the pole, according to the study published in the journal Science. Eight sites were found with Milankovic Crater being the only one in the northern hemisphere. Research was conducted with instruments on board the Mars Reconnaissance Orbiter (MRO). [ 59 ] [ 60 ] [ 61 ] [ 62 ] [ 63 ]
The following images are ones referred to in this study of subsurface ice sheets. [ 64 ]
These triangular depressions are similar to those in scalloped terrain. However, scalloped terrain displays a gentle equator-facing slope and is rounded. The scarps discussed here have a steep pole-facing side and have been found between 55 and 59 degrees north and south latitude. [ 64 ]
Scalloped topography is common in the mid-latitudes of Mars, between 45° and 60° north and south. It is particularly prominent in the region of Utopia Planitia [ 65 ] [ 66 ] in the northern hemisphere and in the region of Peneus and Amphitrites Patera [ 67 ] [ 68 ] in the southern hemisphere. Such topography consists of shallow, rimless depressions with scalloped edges, commonly referred to as "scalloped depressions" or simply "scallops". Scalloped depressions can be isolated or clustered and sometimes seem to coalesce. A typical scalloped depression displays a gentle equator-facing slope and a steeper pole-facing scarp. This topographic asymmetry is probably due to differences in insolation . Scalloped depressions are believed to form from the removal of subsurface material, possibly interstitial ice, by sublimation . This process may still be happening at present. [ 69 ]
On November 22, 2016, NASA reported finding a large amount of underground ice in the Utopia Planitia region of Mars. [ 70 ] The volume of water detected has been estimated to be equivalent to the volume of water in Lake Superior . [ 71 ] [ 72 ] The estimate of the volume of water ice in the region was based on measurements from the ground-penetrating radar instrument on Mars Reconnaissance Orbiter , called SHARAD . From the data obtained from SHARAD, the " dielectric permittivity ", or dielectric constant, was determined. The dielectric constant value was consistent with a large concentration of water ice. [ 73 ] [ 74 ] [ 75 ]
A pedestal crater is a crater with its ejecta sitting above the surrounding terrain and thereby forming a raised platform (like a pedestal ). They form when an impact crater ejects material which forms an erosion-resistant layer, thus causing the immediate area to erode more slowly than the rest of the region. Some pedestals have been accurately measured to be hundreds of meters above the surrounding area. This means that hundreds of meters of material were eroded away. The result is that both the crater and its ejecta blanket stand above the surroundings. Pedestal craters were first observed during the Mariner missions. [ 76 ] [ 77 ] [ 78 ] [ 79 ]
Ring mold craters are believed to be formed from asteroid impacts into ground that has an underlying layer of ice. The impact produces a rebound of the ice layer, forming a "ring-mold" shape.
Another, later idea for their formation suggests that the impacting body goes through layers of different densities. Later, erosion could have helped shape them. It was thought that ring-mold craters could only exist in areas with large amounts of ground ice. However, with more extensive analysis of larger areas, it was found that ring mold craters sometimes form where there is not as much ice underground. [ 80 ] [ 81 ]
Dust devil tracks can be very pretty. They are caused by giant dust devils removing bright colored dust from the Martian surface; thereby exposing a dark layer. [ 83 ] [ 84 ] [ 85 ] The patterns formed by the dust devil tracks change frequently; sometimes in just a few months. [ 86 ] [ 87 ] [ 88 ] [ 89 ]
Dust devils on Mars have been photographed both from the ground and high overhead from orbit. They have even blown dust off the solar panels of two Rovers on Mars, thereby greatly extending their useful lifetime. [ 90 ] The pattern of the tracks has been shown to change every few months. [ 91 ] A study that combined data from the High Resolution Stereo Camera (HRSC) and the Mars Orbiter Camera (MOC) found that some large dust devils on Mars have a diameter of 700 metres (2,300 ft) and last at least 26 minutes. [ 92 ]
Yardangs are common in some regions on Mars, especially in what is called the " Medusae Fossae Formation ". This formation is found in the Amazonis quadrangle and near the equator. [ 93 ] They are formed by the action of wind on sand sized particles; hence yardangs often point in the direction that the winds were blowing when they were formed. [ 94 ] Because they exhibit very few impact craters they are believed to be relatively young. [ 95 ]
In spring, dark eruptions of gas and dust occur in certain areas. During the eruptions, wind often blows the material into a fan or a tail-like shape. During the winter, much frost accumulates. It freezes out directly onto the surface of the permanent polar cap, which is made of water ice covered with layers of dust and sand. The deposit begins as a layer of dusty CO 2 frost. Over the winter, it recrystallizes and becomes denser. The dust and sand particles caught in the frost slowly sink. By the time temperatures rise in the spring, the frost layer has become a slab of semi-transparent ice about 3 feet thick, lying on a substrate of dark sand and dust. This dark material absorbs light and causes the ice to sublimate (turn directly into a gas). Eventually much gas accumulates and becomes pressurized. When it finds a weak spot, the gas escapes and blows out the dust. Speeds can reach 100 miles per hour. [ 96 ] Calculations show that the plumes are 20–80 meters high. [ 97 ] [ 98 ] Dark channels can sometimes be seen; they are called "spiders". [ 99 ] [ 100 ] [ 101 ] The surface appears covered with dark spots when this process is occurring. [ 96 ] [ 102 ]
Many ideas have been advanced to explain these features. [ 103 ] [ 104 ] [ 105 ] [ 106 ] [ 107 ] [ 108 ] [ 109 ] These features can be seen in some of the pictures below.
Remnants of a 50–100 meter thick mantling, called the upper plains unit, have been discovered in the mid-latitudes of Mars. It was first investigated in the Deuteronilus Mensae ( Ismenius Lacus quadrangle ) region, but it occurs in other places as well. The remnants consist of sets of dipping layers in craters and along mesas. [ 110 ] Sets of dipping layers may be of various sizes and shapes—some look like Aztec pyramids from Central America. Dipping layers are common in some regions of Mars. They may be the remains of mantle layers. Another idea for their origin was presented at the 55th LPSC (2024) by an international team of researchers. They suggest that the layers are from past ice sheets. [ 111 ]
This unit also degrades into brain terrain . Brain terrain is a region of maze-like ridges 3–5 meters high. Some ridges may consist of an ice core, so they may be sources of water for future colonists.
Some regions of the upper plains unit display large fractures and troughs with raised rims; such regions are called ribbed upper plains. Fractures are believed to have started with small cracks from stresses. Stress is suggested to initiate the fracture process since ribbed upper plains are common when debris aprons come together or near the edge of debris aprons—such sites would generate compressional stresses. Cracks exposed more surfaces, and consequently more ice in the material sublimates into the planet's thin atmosphere. Eventually, small cracks become large canyons or troughs.
Small cracks often contain small pits and chains of pits; these are thought to be from sublimation of ice in the ground. [ 112 ] [ 113 ] Large areas of the Martian surface are loaded with ice that is protected by a meters thick layer of dust and other material. However, if cracks appear, a fresh surface will expose ice to the thin atmosphere. [ 114 ] [ 115 ] In a short time, the ice will disappear into the cold, thin atmosphere in a process called sublimation. Dry ice behaves in a similar fashion on the Earth. On Mars sublimation has been observed when the Phoenix lander uncovered chunks of ice that disappeared in a few days. [ 42 ] [ 116 ] In addition, HiRISE has seen fresh craters with ice at the bottom. After a time, HiRISE saw the ice deposit disappear. [ 117 ]
The upper plains unit is thought to have fallen from the sky. It drapes various surfaces, as if it fell evenly. As is the case for other mantle deposits, the upper plains unit has layers, is fine-grained, and is ice-rich. It is widespread; it does not seem to have a point source. The surface appearance of some regions of Mars is due to how this unit has degraded. It is a major cause of the surface appearance of lobate debris aprons . [ 113 ] The layering of the upper plains mantling unit and other mantling units is believed to be caused by major changes in the planet's climate. Models predict that the obliquity or tilt of the rotational axis has varied from its present 25 degrees to maybe over 80 degrees over geological time. Periods of high tilt will cause the ice in the polar caps to be redistributed and change the amount of dust in the atmosphere. Dust will gain a coating of ice and then fall to the ground when the ice layer is heavy enough. [ 118 ] [ 119 ] [ 120 ]
Linear ridge networks are found in various places on Mars in and around craters. [ 121 ] Ridges often appear as mostly straight segments that intersect in a lattice-like manner. They are hundreds of meters long, tens of meters high, and several meters wide. It is thought that impacts created fractures in the surface, and these fractures later acted as channels for fluids. Fluids cemented the structures. With the passage of time, surrounding material was eroded away, thereby leaving hard ridges behind.
Since the ridges occur in locations with clay, these formations could serve as a marker for clay which requires water for its formation. Water here could have supported life. [ 122 ] [ 123 ] [ 124 ]
Some places on Mars break up with large fractures that created a terrain with mesas and valleys. Some of these can be quite pretty.
There is evidence that volcanoes sometimes erupt under ice, as they do on Earth at times. What seems to happen is that much ice melts, the water escapes, and then the surface cracks and collapses. These sites exhibit concentric fractures and large pieces of ground that seem to have been pulled apart. [ 125 ] Sites like this may recently have held liquid water, hence they may be fruitful places to search for evidence of life. [ 126 ] [ 127 ]
In places large fractures break up surfaces. Sometimes straight edges are formed and large cubes are created by the fractures.
So-called "rootless cones" are caused by explosions of lava with ground ice under the flow. [ 128 ] [ 129 ] The ice melts and turns into a vapor that expands in an explosion that produces a cone or ring. Featureslike these are found in Iceland, when lavas cover water-saturated substrates. [ 130 ] [ 128 ] [ 131 ]
Some features look like volcanoes. Some of them may be mud volcanoes where pressurized mud is forced upward forming cones. These features may be places to look for life as they bring to the surface possible life that has been protected from radiation.
Strange terrain was discovered on parts of the floor of Hellas Planitia. Scientists are not sure of how it formed.
Exhumed craters seem to be in the process of being uncovered. [ 132 ] It is believed that they formed, were covered over, and are now being exhumed as the overlying material erodes. When a crater forms, it destroys what is under it. In the example below, only part of the crater is visible. If the crater had come after the layered feature, it would have removed part of the feature, and we would see the entire crater.
To suggest a location for HiRISE to image, visit the site at http://www.uahirise.org/hiwish
In the sign up process you will need to come up with an ID and a password. When you choose a target to be imaged, you have to pick an exact location on a map and write about why the image should be taken. If your suggestion is accepted, it may take 3 months or more to see your image. You will be sent an email telling you about your images. The emails usually arrive on the first Wednesday of the month in the late afternoon. | https://en.wikipedia.org/wiki/HiWish_program |
In anatomy, a hiatus is a natural fissure in a structure. Examples include: | https://en.wikipedia.org/wiki/Hiatus_(anatomy) |
Hibakujumoku ( Japanese : 被爆樹木 ; also called survivor tree or A-bombed tree in English) is a Japanese term for a tree that survived the atomic bombings of Hiroshima and Nagasaki in 1945. The term is from Japanese : 被爆 , romanized : hibaku , lit. 'bombed, A-bombed, nuked' [ 1 ] and Japanese : 樹木 , romanized : jumoku , lit. 'trees and shrubs'. [ 2 ]
The heat emitted by the explosion in Hiroshima within the first three seconds at a distance of three kilometres from the hypocenter was about 40 times greater than that from the Sun. [ 3 ] The initial radiation level at the hypocenter was approximately 240 Gy . [ 3 ] According to Hiroshima and Nagasaki: The Physical, Medical, and Social Effects of the Atomic Bombings , plants suffered damage only in the portions exposed above ground, while portions underground were not directly damaged. [ 4 ]
The rate of regeneration differed by species. Active regeneration was shown by broad-leaved trees . [ 4 ] Approximately 170 trees that grew in Hiroshima in 2011 had actually been there prior to the bombing. [ 5 ] The oleander was designated the official flower of Hiroshima for its remarkable vitality. [ 4 ]
Hibakujumoku species are listed in the UNITAR database, [ 6 ] shown below, combined with data from Hiroshima and Nagasaki: The Physical, Medical, and Social Effects of the Atomic Bombings . A more extensive list, including distance from the hypocenter for each tree, is available in Survivors: The A-bombed Trees of Hiroshima . [ 7 ]
Although not as well known as the hibakujumoku in Hiroshima, there are a number of similar survivors in the vicinity of the hypocenter in Nagasaki. Approximately 50 of these trees have been documented in English. [ 8 ]
The J-pop singer and actor Fukuyama Masaharu , who was born in Nagasaki to survivors of the atomic bomb , [ 9 ] has been active in preserving Nagasaki's hibakujumoku. His song "Kusunoki" (クスノキ), from his 2014 album Human , honours the camphor trees of Sannō Shrine . Fukuyama used the song to solicit donations which the city of Nagasaki used to establish the Kusunoki Foundation, dedicated to preserving the trees and teaching the history associated with them. [ 10 ] | https://en.wikipedia.org/wiki/Hibakujumoku |
A hibernaculum (plural form: hibernacula) (Latin, "tent for winter quarters") is a place in which an animal seeks refuge , such as a bear using a cave to overwinter . The word can be used to describe a variety of shelters used by many kinds of animals, including insects , toads , lizards , snakes , bats , rodents , and primates of various species.
Insects range in their size, structure, and general appearance, but most use hibernacula. All insects are primarily ectothermic . [ 1 ] For this reason, extremely cold temperatures, such as those experienced in the winter outside of tropical locations, cause their metabolic systems to shut down; long exposure may lead to death. Insects survive colder winters through the process of overwintering , which occurs at all stages of development and may include migration or hibernation for different insects, the latter of which must be done in hibernacula. Insects that do not migrate must halt their growth to avoid freezing to death, in a process called diapause . [ 2 ] Insects prepare to overwinter through a variety of mechanisms, such as using anti-freeze proteins or cryoprotectants in freeze-avoidant insects, like soybean aphids . Cryoprotectants are toxic, with high concentrations only tolerated at low temperatures. Thus, hibernacula are used to avoid sporadic warming and the risk of death due to high concentrations of cryoprotectants at warmer temperatures. [ 3 ] Freeze-tolerant insects, like second-generation corn-borers , can survive being frozen and therefore undergo inoculative freezing. [ 4 ] Hibernacula range in size and structure depending on the insects using them. [ 5 ]
However, insect hibernacula are generally required to be:
Some insects, like convergent lady beetles , reuse the same hibernacula year after year. They converge with other lady beetles and migrate to hibernacula used by prior generations. They are able to find old hibernacula due to hydrocarbons released by lady beetle feet, which create a lasting path that allows the beetles to retrace their steps to previously used hibernacula. [ 7 ] Their tendency to aggregate and overwinter in groups is likely due to their attraction to similar environments and to conspecifics. The beetles use rock crevices as hibernacula, often clumping together in groups; these crevices are found in rock fields to which the beetles are attracted for their high levels of vegetation and greenery. [ 8 ]
Other types of insect hibernacula include self-spun silk hibernacula, such as those made and used by spruce budworms as they moult and overwinter in their second instars. [ 9 ] An example is the eastern spruce budworm , which creates hibernacula after dispersing during its first instar and then overwinters before emerging in early May. [ 10 ] Woolly bear caterpillars overwinter as caterpillars and grow to be isabella tiger moths . They use plant debris as makeshift hibernacula to protect themselves from extreme elements. [ 6 ] Some butterflies, like the white admiral butterfly , also mature only halfway as caterpillars before hibernating for the winter. [ 11 ] For freeze-avoidant insects, ideal hibernacula are dry, as these insects are then less likely to dampen and freeze; moist hibernacula, however, promote inoculative freezing for freeze-tolerant insects. [ 12 ]
Amphibians that hibernate include several species of frogs and salamanders from the northern continental climates of North America and Eurasia and also from extreme Southern Hemisphere climates. [ 13 ] These amphibians slow their metabolism during winter to avoid unsuitable conditions, such as freezing. Most freeze avoidance strategies include overwintering in aquatic situations or burrowing in the soil to depths below the frostline. [ 14 ] A hibernaculum for amphibians should provide the following: [ 13 ]
Species from cool continental climates hibernate at temperatures from 0 to 4 °C. Some species will not survive hibernation at temperatures that exceed 4 °C. [ 13 ]
Generally, for amphibians that hibernate under ice, it is necessary for the animal to be submerged in water that is 10 to 15 cm deep, with the temperature maintained between 2 and 3 °C and not above 4 °C. The water should be well aerated, with low-intensity light levels maintained and minimal disturbance of the amphibians.
Like other amphibians, frogs show minimal capacities for freezing tolerance and survive winter by using terrestrial hibernacula where they avoid freezing. However, frogs may exhibit greater freeze-tolerance capacity at high latitude range limits, where winter climate is more severe. For example, data suggests that while cricket frogs in South Dakota survive winter by locating hibernacula that prevent freezing, their toleration of short freezing bouts may expand the range of suitable hibernacula. [ 14 ] However, overwinter mortality may be high at the northern range boundary due to colder temperatures and might limit cricket frogs from expanding their range northward. [ 14 ]
The microclimate refers to the climate of a very small or restricted area, such as the hibernaculum, especially when this differs from the climate of the surrounding area. Overwinter survival in these cricket frogs, among other frogs, depends on using hibernacula with appropriate physical microclimate characteristics, such as moist soil, that buffer the frogs from temperatures that drop below the freezing point of the body fluids for extended periods. [ 14 ] However, whether frogs can identify sites with appropriate microclimates to support overwinter survival, and what factors might inform such choices, remain unknown and will require further study. Therefore, it is still not known to what extent various types of prospective hibernacula for frogs might be suitable in the years to come, especially factoring in climate change. [ citation needed ]
As part of the Highways Agency Biodiversity Action Plan (HABAP) in the UK, the Species Action Plan (SAP) for great crested newts aims to maintain and enhance existing newt populations through appropriate management of suitable habitat. As part of the steps to implement the HABAP, newt hibernacula (e.g. log piles) have been constructed to improve the quality of the terrestrial habitat by increasing the number of potential overwintering sites. [ 15 ] It was also determined that habitat surrounding breeding ponds with plenty of cover and suitable overwintering sites may have less need for the provision of artificial hibernacula than landscapes with less woodland, hedgerow and scrub. [ 15 ] Because great crested newts show high loyalty to overwintering locations, returning to such established areas year after year, artificial hibernacula could be important in future years for conserving newts and other amphibians. [ 15 ] However, monitoring in the vicinity of these hibernacula in autumn using felt roofing tiles did not reveal the presence of any great crested newts, even though they are known to breed in nearby ponds. [ 15 ] Common toads and frogs were, however, found in the surrounding area. Therefore, further studies need to be conducted in order to create species-appropriate artificial hibernacula. [ citation needed ]
Many reptiles undergo hibernation or a process called brumation , which is similar to hibernation; both processes require the use of a hibernaculum. Staying inside an insulated hibernaculum is a strategy to avoid the harsh winter months, when frigid outside temperatures may kill an ectothermic reptile. They depress their metabolism and heart rates to reduce energy consumption so that they do not need to exit their hibernacula. Hibernating reptiles are also safer from predation inside their concealed and protected hibernacula. Various species of turtles, snakes, and lizards all use hibernacula, the forms of which can vary greatly. [ 16 ] [ 17 ]
Hibernacula are typically:
Common snapping turtles generally hibernate for about six months from early October to mid-April. They live in lakes during their active months, then travel to small offshoot streams to hibernate. Hibernacula are about 100–150 meters away from the main body of the home lake. Most snapping turtles hibernate by burrowing into the banks of alder streams or vegetated streams, but some use other structures such as abandoned beaver dens. These streams are typically less than 0.3 m deep and 0.7 m wide, covered by sunken alder roots or fallen trees, and not covered by ice in the winter. Many animals return to the same stream to hibernate in subsequent years. [ 16 ]
Unlike the more solitary snapping turtles, snakes may hibernate either alone or in large aggregations of up to several thousand individuals of the same or different species. They use a wide variety of hibernacula, including rock piles, debris-filled wells, caves, crevices, unused burrows made by other animals, and ant mounds. The common European viper has been observed using all of the hibernacula listed above. Most species seem to prefer finding an already-present suitable site rather than constructing one of their own, but they do expand upon existing structures and may make their own burrows if no quality sites are available. [ 17 ] Pine snakes and the closely related Louisiana pine snakes are two of the most well-studied hibernating snake species, and share similar hibernacula characteristics. These species sometimes construct their own burrows, or use tunnels formed from the decay of tree roots or by gophers. The tunnels form complex networks, and have side chambers which each house one snake. [ 17 ] [ 19 ]
A fossil specimen of the stem-group boa Hibernophis is known from the White River Formation of Wyoming, comprising four individuals preserved together in a hibernaculum. This indicates that the aggregating behavior of brumating snakes dates back to at least the Early Oligocene . [ 21 ]
Mesquite lizards in Mexico and the southern United States have been found hibernating in groups of 2 to 8 in cracks or under slabs of bark in mesquite trees . [ 22 ] Common collared lizards spend about 6 months hibernating, almost always solitarily, though pairs of juvenile females have been observed within the same hibernaculum. They use the undersides of rock slabs as hibernacula, digging a small chamber in the dirt just large enough for their bodies with a small tunnel for outside access. Adults use larger rock slabs, dig deeper chambers, and have longer tunnels than juveniles. [ 20 ] Perhaps the most extreme example is seen in the viviparous lizard , the most northern of all lizards. They can burrow into the soil, go under leaf litter, or use shelters like rocks as hibernacula. Although the air temperature in West Siberia can drop to −10 °C, the soil temperature at the depths where these lizards hibernate remains higher than −10 °C. This enables them to survive the harshest temperatures of any lizard. [ 18 ]
Like other animals, mammals hibernate during seasons of harsh environmental conditions and resource scarcity. As it requires less energy to maintain homeostasis and survive when an individual is hibernating, this is a cost-effective strategy to increase survival rates. [ 23 ] [ 24 ] Hibernation is usually perceived as taking place during winter, as in the most well-known hibernators bears and bats, [ 25 ] [ 24 ] but can also occur during the dry season when there is little food or water, as in the mouse lemurs of Madagascar. [ 23 ] Given that mammals can spend anywhere from 1 to 9 months hibernating, their choice in hibernaculum is essential in determining their survival. [ 24 ]
Hibernacula vary greatly, but are typically:
Many bears occupy hibernacula similar to those of smaller mammals, but theirs are, of course, much larger and can vary greatly across and within species. Most black bears excavate dens in a hillside or at the base of a tree, stump, or shrub, but some make dens at the bases of hollow trees, in hollow logs, or in rock caves or cavities. Den reuse is observed in this species, but very rarely. There are no significant den size differences between age or sex classes, except that adult males create larger entrances. [ 29 ] Grizzly bears likewise do not show age or sex class differences in den dimensions. Grizzlies prefer hibernacula sites with abundant ground and canopy cover and abundant sweet-vetch . [ 24 ] Polar bears differ from black bears, grizzlies, and other bear species in which both sexes hibernate, in that only females use hibernacula. Like other female bears, polar bears use hibernacula as maternity dens . Also like other species, they tend to dig dens into the earth, although their Arctic hibernacula are usually covered with snow by the time they emerge. [ 31 ]
Bats favor larger hibernacula where large groups may roost together, including natural caves, mines, cellars, and other kinds of underground sites and man-made structures, like ice-houses. [ 28 ] Within these hibernacula, the bats are still highly tuned to environmental factors. Little brown bats in northern latitudes hibernate for up to eight months during the winter, and leave their roosts in the warm spring weather when insect prey is plentiful again. Bats gauge the outside temperature by being attuned to the airflow at the hibernacula entrance, which is driven by temperature differences between inside and outside the hibernacula, allowing bats to leave when the temperature begins to warm. [ 25 ] Some hibernacula are shared between multiple species, such as common pipistrelles roosting with soprano pipistrelles . Behavior other than hibernating can also occur at hibernacula; common pipistrelles produce most of their mating calls and mate with each other in and near their hibernacula. [ 30 ]
Many hibernating, small-bodied mammals hibernate in simple holes in the ground, though some use complex systems of tunnels and burrows. Mountain pygmy possums in New South Wales, Australia, dig holes in the ground to form hibernacula, with the preferred location being in boulder fields under a layer of snow. During the first few months of hibernation, possums awaken occasionally and leave one hibernaculum in favor of another, seemingly in an effort to find the hibernaculum with the most suitable microclimate . [ 26 ] The reddish-gray mouse lemur also wakes and leaves the hibernaculum spontaneously and for brief periods of time. Their hibernacula are located in holes in large trees with varying levels of insulation. However, the range of insulation levels is relatively narrow, as there are often small numbers of suitably large trees. [ 23 ] There can be hibernacula differences even within a species. In Columbian ground squirrels , hibernacula size is proportional to the weight of the individual occupying it, with adults having deeper hibernacula than juveniles, unlike black bears. Most juveniles choose to hibernate within 20 meters of their mother's burrow; those that don't have higher mortality rates. [ 27 ] | https://en.wikipedia.org/wiki/Hibernaculum_(zoology) |
A hibernation factor is a protein used by cells to induce a dormant state by slowing or halting the cellular metabolism . [ 1 ] This can occur during periods of stress, [ 1 ] randomly in order to allocate "designated survivors" in a population, [ 1 ] or when bacteria cease growth (enter stationary phase ). [ 2 ] Hibernation factors can do a variety of things, including dismantling cellular machinery and halting gene expression, but the most important hibernation factors bind to the ribosome and halt protein production , which consumes a large fraction of the energy in a cell. [ 1 ] [ 2 ]
Ribosome hibernation occurs when ribosome hibernation factors bind to the ribosome and halt protein production. Ribosome hibernation is almost ubiquitous in bacteria , as well as in the plastids of plants, and may also be present in eukaryotes . [ 2 ] Ribosome hibernation factors can simply inactivate ribosomes (RaiA, Balon), link pairs into inactive dimers called 100S ribosomes (RMF and HPF), or interfere at various stages of the translation cycle (RsfS, YqjD, SRA, and EttA). [ 2 ] [ 3 ] One indicator of ribosome hibernation is the presence of a large number of 100S ribosomes, which can constitute up to 60% of the ribosomes in a cell at a time. [ 2 ]
Three proteins, RMF, RaiA, and HPF, are found only in the large class of bacteria Gammaproteobacteria . [ 2 ] RMF (Ribosome modulation factor) is a small protein, typically produced under nutrient starvation and stress conditions, [ 4 ] that is the main factor in the formation of 100S ribosomes. [ 2 ] During the formation process, RMF binds together 70S (standard) ribosomes to form 90S ribosome dimers. [ 2 ] These 90S dimers are converted by HPF (hibernation promoting factor) to form mature 100S dimers. [ 2 ] A third protein, RaiA (ribosome-associated inhibitor A), is thought both to inactivate 70S ribosomes on its own and to stabilize them, preventing them from being converted into 100S ribosomes. [ 2 ] Most non-gammaproteobacteria, as well as some plant plastids, instead contain an HPF homologue that can form 100S ribosomes by itself. [ 2 ]
Balon (from Spanish balón , "ball", after its homologue Pelota) [ 5 ] is a hibernation factor protein found in the cold-adapted bacterium Psychrobacter urativorans . [ 5 ] The protein was discovered accidentally, after a researcher unintentionally left a sample of P. urativorans in an ice bucket for too long, cold-shocking it; subsequent cryo-EM scans of the organism's ribosomes revealed the factor. [ 1 ] Unlike other factors, Balon can bind to the ribosome while protein production is in process. [ 1 ] This is important for rapid response to stress because in some cells, protein production can take up to 20 minutes to complete. [ 6 ] Rather than physically blocking the A site of the ribosome, as other hibernation factors do, Balon binds near to, but not across, the channel, allowing it to attach to the ribosome regardless of whether protein production is taking place. [ 1 ] Genetic relatives of Balon have been found in 20% of the bacterial genomes catalogued in public databases, but are absent from Escherichia coli and Staphylococcus aureus , the most widely used models for cellular dormancy. [ 1 ]
RsfS (Ribosome silencing factor S) inhibits translation by preventing the 30S and 50S subunits of the ribosome from binding to each other again after they split during ribosome recycling . [ 2 ] It has also been suggested to be a ribosome biogenesis factor rather than a hibernation factor. [ 7 ]
SRA (Stationary-phase-induced Ribosome-Associated protein) is not well understood as of 2018. [ 2 ] [ needs update ] It is a small protein of 45 amino acids and is tightly associated with the 30S ribosomal subunit. [ 2 ] It increases from an average of 0.1 molecules per ribosome to 0.4 per ribosome during the transition to stationary phase and remains so for several days. [ 2 ]
YqjD is an inner membrane protein specific to stationary phase. It binds to 70S and 100S ribosomes and has been proposed, as of 2018, [ needs update ] to mediate the localization (moving) of hibernating ribosomes to the cell membrane. [ 2 ] While cells lacking YqjD do not show altered growth rates or ribosome composition, artificially high levels of the protein quickly halt growth, in a manner dependent on the protein's ribosome-binding capability. [ 2 ]
EttA (Energy-dependent translational throttle A) is an ATP -binding protein of the ABC-F family which is thought to modulate the translation rate based on the energy level of a cell. [ 2 ] When ADP (degraded ATP, indicating low energy) levels are high, the protein inhibits ribosome activity; when ATP levels are high, translation is allowed to proceed. [ 2 ] EttA interferes specifically after the formation of the first peptide bond in the new protein and before the first translocation step induced by EF-G . [ 2 ]
In fluid dynamics , the Hicks equation , sometimes also referred to as the Bragg–Hawthorne equation or the Squire–Long equation , is a partial differential equation that describes the distribution of stream function for axisymmetric inviscid fluid, named after William Mitchinson Hicks , who first derived it in 1898. [ 1 ] [ 2 ] [ 3 ] The equation was re-derived by Stephen Bragg and William Hawthorne in 1950, by Robert R. Long in 1953, and by Herbert Squire in 1956. [ 4 ] [ 5 ] [ 6 ] The Hicks equation without swirl was first introduced by George Gabriel Stokes in 1842. [ 7 ] [ 8 ] The Grad–Shafranov equation appearing in plasma physics also takes the same form as the Hicks equation.
Representing $(r, \theta, z)$ as coordinates in the sense of a cylindrical coordinate system , with the corresponding flow velocity components denoted by $(v_r, v_\theta, v_z)$, the stream function $\psi$ that defines the meridional motion can be defined as
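(adopting here one common sign convention; the overall signs are an assumption, as they vary between authors)

$$v_r = -\frac{1}{r}\frac{\partial \psi}{\partial z}, \qquad v_z = \frac{1}{r}\frac{\partial \psi}{\partial r}$$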
that satisfies the continuity equation for axisymmetric flows automatically. The Hicks equation is then given by [ 9 ]
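With the sign convention adopted above, the equation takes the standard Bragg–Hawthorne form

$$\frac{\partial^2 \psi}{\partial r^2} - \frac{1}{r}\frac{\partial \psi}{\partial r} + \frac{\partial^2 \psi}{\partial z^2} = r^2\,\frac{\mathrm{d}H}{\mathrm{d}\psi} - \Gamma\,\frac{\mathrm{d}\Gamma}{\mathrm{d}\psi}$$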
where
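(restating the total head and the circulation function exactly as they are defined in the derivation below)

$$H(\psi) = \frac{p}{\rho} + \frac{1}{2}\left(v_r^2 + v_\theta^2 + v_z^2\right), \qquad \Gamma(\psi) = r\,v_\theta$$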
where $H(\psi)$ is the total head (cf. Bernoulli's principle ) and $2\pi\Gamma$ is the circulation , both of them being conserved along streamlines. Here, $p$ is the pressure and $\rho$ is the fluid density. The functions $H(\psi)$ and $\Gamma(\psi)$ are known functions, usually prescribed at one of the boundaries; see the example below. If there are closed streamlines in the interior of the fluid domain, say, a recirculation region, then the functions $H(\psi)$ and $\Gamma(\psi)$ are typically unknown and therefore, in those regions, the Hicks equation is not useful; the Prandtl–Batchelor theorem provides details about the closed-streamline regions.
Consider the axisymmetric flow in the cylindrical coordinate system $(r, \theta, z)$ with velocity components $(v_r, v_\theta, v_z)$ and vorticity components $(\omega_r, \omega_\theta, \omega_z)$. Since $\partial/\partial\theta = 0$ in axisymmetric flows, the vorticity components are
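(the standard axisymmetric reduction of the curl in cylindrical coordinates)

$$\omega_r = -\frac{\partial v_\theta}{\partial z}, \qquad \omega_\theta = \frac{\partial v_r}{\partial z} - \frac{\partial v_z}{\partial r}, \qquad \omega_z = \frac{1}{r}\frac{\partial (r v_\theta)}{\partial r}$$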
The continuity equation allows one to define a stream function $\psi(r, z)$ such that
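(with the same sign convention as above)

$$v_r = -\frac{1}{r}\frac{\partial \psi}{\partial z}, \qquad v_z = \frac{1}{r}\frac{\partial \psi}{\partial r}$$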
(Note that the vorticity components $\omega_r$ and $\omega_z$ are related to $r v_\theta$ in exactly the same way that $v_r$ and $v_z$ are related to $\psi$.) Therefore the azimuthal component of vorticity becomes
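(in the convention above)

$$\omega_\theta = -\frac{1}{r}\left(\frac{\partial^2 \psi}{\partial r^2} - \frac{1}{r}\frac{\partial \psi}{\partial r} + \frac{\partial^2 \psi}{\partial z^2}\right)$$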
The inviscid momentum equation $\partial \boldsymbol{v}/\partial t - \boldsymbol{v} \times \boldsymbol{\omega} = -\nabla H$, where $H = \tfrac{1}{2}(v_r^2 + v_\theta^2 + v_z^2) + p/\rho$ is the Bernoulli constant, $p$ is the fluid pressure and $\rho$ is the fluid density, when written for the axisymmetric flow field, becomes
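(a routine component-wise expansion of the Lamb form of the Euler equation, using the vorticity components above)

$$\frac{\partial v_r}{\partial t} - v_\theta\,\omega_z + v_z\,\omega_\theta = -\frac{\partial H}{\partial r}, \qquad \frac{\partial v_\theta}{\partial t} - v_z\,\omega_r + v_r\,\omega_z = 0, \qquad \frac{\partial v_z}{\partial t} - v_r\,\omega_\theta + v_\theta\,\omega_r = -\frac{\partial H}{\partial z}$$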
in which the second equation may also be written as $D(rv_\theta)/Dt = 0$, where $D/Dt$ is the material derivative . This implies that the circulation $2\pi r v_\theta$ round a material curve in the form of a circle centered on the $z$-axis is constant.
If the fluid motion is steady, the fluid particle moves along a streamline; in other words, it moves on the surface given by $\psi =$ constant. It follows then that $H = H(\psi)$ and $\Gamma = \Gamma(\psi)$, where $\Gamma = r v_\theta$. Therefore the radial and the axial components of vorticity are
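Using $v_\theta = \Gamma(\psi)/r$ together with the stream-function relations above, these are

$$\omega_r = -\frac{\partial v_\theta}{\partial z} = \frac{\mathrm{d}\Gamma}{\mathrm{d}\psi}\,v_r, \qquad \omega_z = \frac{1}{r}\frac{\partial \Gamma}{\partial r} = \frac{\mathrm{d}\Gamma}{\mathrm{d}\psi}\,v_z$$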
The components of $\boldsymbol{v}$ and $\boldsymbol{\omega}$ are locally parallel. The above expressions can be substituted into either the radial or the axial momentum equation (after removing the time derivative term) to solve for $\omega_\theta$. For instance, substituting the above expression for $\omega_r$ into the axial momentum equation leads to [ 9 ]
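(solving the axial momentum balance for $\omega_\theta$)

$$\omega_\theta = \frac{\Gamma}{r}\,\frac{\mathrm{d}\Gamma}{\mathrm{d}\psi} - r\,\frac{\mathrm{d}H}{\mathrm{d}\psi}$$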
But $\omega_\theta$ can be expressed in terms of $\psi$ as shown at the beginning of this derivation. When $\omega_\theta$ is expressed in terms of $\psi$, we get
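(equating the two expressions for $\omega_\theta$, which recovers the Hicks equation stated earlier)

$$\frac{\partial^2 \psi}{\partial r^2} - \frac{1}{r}\frac{\partial \psi}{\partial r} + \frac{\partial^2 \psi}{\partial z^2} = r^2\,\frac{\mathrm{d}H}{\mathrm{d}\psi} - \Gamma\,\frac{\mathrm{d}\Gamma}{\mathrm{d}\psi}$$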
This completes the required derivation.
Consider the problem where the fluid in the far stream exhibits a uniform axial velocity $U$ and rotates with angular velocity $\Omega$. This upstream motion corresponds to
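(taking $\psi = 0$ on the axis, in the convention above)

$$v_r = 0, \qquad v_\theta = \Omega r, \qquad v_z = U, \qquad \text{so that}\quad \psi = \tfrac{1}{2}U r^2, \qquad \Gamma = \Omega r^2$$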
From these, we obtain
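Using the upstream radial-equilibrium condition $\partial p/\partial r = \rho\,\Omega^2 r$ and eliminating $r^2 = 2\psi/U$,

$$\Gamma(\psi) = \frac{2\Omega}{U}\,\psi, \qquad H(\psi) = H_0 + \frac{2\Omega^2}{U}\,\psi,$$

with $H_0$ a constant,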
indicating that in this case, $H$ and $\Gamma$ are simple linear functions of $\psi$. The Hicks equation itself becomes
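(substituting the linear forms of $H$ and $\Gamma$ just obtained)

$$\frac{\partial^2 \psi}{\partial r^2} - \frac{1}{r}\frac{\partial \psi}{\partial r} + \frac{\partial^2 \psi}{\partial z^2} = \frac{2\Omega^2}{U}\,r^2 - \frac{4\Omega^2}{U^2}\,\psi$$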
which upon introducing $\psi(r, z) = Ur^2/2 + r f(r, z)$ becomes
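(the uniform-stream terms cancel, leaving an equation linear in $f$)

$$\frac{\partial^2 f}{\partial r^2} + \frac{1}{r}\frac{\partial f}{\partial r} - \frac{f}{r^2} + \frac{\partial^2 f}{\partial z^2} + k^2 f = 0$$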
where $k = 2\Omega/U$.
For an incompressible flow ($D\rho/Dt = 0$) with variable density, Chia-Shun Yih derived the necessary equation. The velocity field is first transformed using the Yih transformation
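commonly written as a rescaling of the velocity by the square root of the density ratio,

$$\boldsymbol{v}' = \sqrt{\frac{\rho}{\rho_0}}\;\boldsymbol{v}$$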
where $\rho_0$ is some reference density, with the corresponding Stokes stream function $\psi'$ defined such that
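(again in the sign convention used above)

$$v_r' = -\frac{1}{r}\frac{\partial \psi'}{\partial z}, \qquad v_z' = \frac{1}{r}\frac{\partial \psi'}{\partial r}$$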
Let us include the gravitational force acting in the negative $z$ direction. The Yih equation is then given by [ 10 ] [ 11 ]
where | https://en.wikipedia.org/wiki/Hicks_equation |
Hidden Figures is a 2016 American biographical drama film directed by Theodore Melfi and written by Melfi and Allison Schroeder . It is loosely based on the 2016 non-fiction book of the same name by Margot Lee Shetterly about three female African-American mathematicians : Katherine Goble Johnson ( Taraji P. Henson ), Dorothy Vaughan ( Octavia Spencer ), and Mary Jackson ( Janelle Monáe ), who worked at NASA during the Space Race . Other stars include Kevin Costner , Kirsten Dunst , Jim Parsons , Mahershala Ali , Aldis Hodge , and Glen Powell .
Principal photography began in March 2016 in Atlanta, Georgia , and wrapped in May 2016. Filming also took place at several other locations in Georgia , including East Point , Canton , Monroe , Columbus , and Madison .
Hidden Figures had a limited release on December 25, 2016, by 20th Century Fox , before going wide on January 6, 2017. The film received positive reviews, with praise for the performances (particularly Henson, Spencer and Monáe), the writing, direction, cinematography, emotional tone, and historical accuracy, although some argued it featured a white savior narrative . The film was a commercial success, grossing $236 million worldwide against its $25 million production budget. Deadline Hollywood noted it as one of the most profitable releases of 2016, and estimated that it made a net profit of $95.5 million. [ 4 ]
The film was chosen by the National Board of Review as one of the top ten films of 2016 [ 5 ] and received various awards and nominations, including three nominations at the 89th Academy Awards , including Best Picture . It also won the Screen Actors Guild Award for Outstanding Performance by a Cast in a Motion Picture .
Katherine Goble works at the West Area of Langley Research Center in Hampton, Virginia , in 1961, alongside her colleagues Mary Jackson and Dorothy Vaughan , as lowly " computers ", performing mathematical calculations without being told what they are for. All of them are African-American women; the unit is segregated by race and sex . White supervisor Vivian Mitchell assigns Katherine to assist Al Harrison's Space Task Group , given her skills in analytic geometry . She becomes the first Black woman on the team; head engineer Paul Stafford is especially dismissive.
Mary is assigned to the space capsule heat shield team, where she immediately identifies a design flaw. Encouraged by her team leader Karl Zielinski, a Polish-Jewish Holocaust survivor , Mary applies for a NASA engineer position. She is told by Mitchell that, regardless of her mathematics and physical science degree, the position requires additional courses. Mary files a petition for permission to attend all-white Hampton High School , despite her husband's opposition. Pleading her case in court, she wins over the local judge by appealing to his sense of history, allowing her to attend night classes.
Katherine meets African-American National Guard Lt. Col. Jim Johnson, who voices skepticism about women's mathematical abilities. He later apologizes and begins to spend time with Katherine and her three daughters. The Mercury 7 astronauts visit Langley, and astronaut John Glenn goes out of his way to greet the West Area women. Katherine impresses Harrison by solving a complex mathematical equation from redacted documents, as the Soviet Union 's successful launch of Yuri Gagarin increases pressure to send American astronauts into space.
Harrison confronts Katherine about her "breaks," unaware that she is forced to walk half a mile (800 meters) to use the nearest restroom designated to "colored" people. She angrily explains the discrimination she faces at work, which leads Harrison to destroy the "colored" restroom signs and abolish restroom segregation. He allows Katherine to be included in high-level meetings to calculate the space capsule's re-entry point. Stafford makes Katherine remove her name from reports, insisting that " computers " cannot author them, and her work is credited solely to Stafford.
Informed by Mitchell that there are no plans to assign a "permanent supervisor for the colored group," Dorothy learns NASA has installed an IBM 7090 electronic computer that threatens to replace human computers. When a librarian scolds her for visiting the whites-only section, Dorothy sneaks out a book about Fortran and teaches herself and her West Area co-workers programming. She visits the computer room, successfully starts the machine, and is promoted to supervise the Programming Department; she agrees to do so if thirty of her co-workers are transferred as well. Mitchell finally addresses her as "Mrs. Vaughan".
Making final arrangements for John Glenn's launch , the department no longer needs human computers; Katherine is reassigned to the West Area and marries Jim. On the day of the launch, discrepancies are found in the IBM 7090 calculations, and Katherine is asked to check the capsule's landing coordinates. She delivers the results to the control room, and Harrison allows her inside. After a successful launch and orbit, a warning indicates the capsule's heat shield may be loose. Mission Control decides to land Glenn after three orbits instead of seven, and Katherine supports Harrison's suggestion to leave the retro-rocket attached to help keep the heat shield in place. Friendship 7 lands successfully.
Though the mathematicians are ultimately replaced by electronic computers, a textual epilogue reveals Mary obtained her engineering degree and became NASA's first female African American engineer; Dorothy continued as NASA's first African American supervisor; and Katherine, accepted by Stafford as a report coauthor, went on to calculate the trajectories for the Apollo 11 and Space Shuttle missions. In 2015, she was awarded the Presidential Medal of Freedom . In 2016, NASA dedicated the Langley Research Center 's Katherine Johnson Computational Building in her honor.
In 2015, producer Donna Gigliotti acquired Margot Lee Shetterly's nonfiction book Hidden Figures , about a group of Black female mathematicians that helped NASA win the Space Race . [ 6 ] Allison Schroeder wrote the script, which was developed by Gigliotti through Levantine Films. Schroeder grew up by Cape Canaveral and her grandparents worked at NASA, where she also interned as a teenager, and as a result saw the project as a perfect fit for herself. [ 7 ] Levantine Films produced the film with Peter Chernin 's Chernin Entertainment . Fox 2000 Pictures acquired the film rights, and Theodore Melfi signed on to direct. [ 6 ] After coming aboard, Melfi revised Schroeder's script, and in particular focused on balancing the home lives of the three protagonists with their careers at NASA. [ 7 ] After the film's development was announced, actresses considered to play the lead roles included Oprah Winfrey , Viola Davis , Octavia Spencer , and Taraji P. Henson . [ 6 ]
Chernin and Jenno Topping produced, along with Gigliotti and Melfi. [ 8 ] Fox cast Henson to play the lead role of mathematician Katherine Goble Johnson . Spencer was selected to play Dorothy Vaughan , one of the three lead mathematicians at NASA. [ 9 ] Kevin Costner was cast in the film to play the fictional head of the space program. [ 10 ] Singer Janelle Monáe signed on to play the third lead mathematician, Mary Jackson. [ 11 ] Kirsten Dunst , Glen Powell , and Mahershala Ali were cast in the film: Powell to play astronaut John Glenn, [ 12 ] and Ali as Johnson's love interest. [ 13 ] [ 14 ]
Principal photography began in March 2016 on the campus of Morehouse College in Atlanta , Georgia . [ 15 ] Scenes were also shot on location in Historic Downtown Canton, Georgia . [ 16 ] Filming also took place at Lockheed Martin Aeronautics at Dobbins Air Reserve Base . [ 17 ] Jim Parsons was cast in the film to play the head engineer of the Space Task Group at NASA, Paul Stafford. [ 12 ] Pharrell Williams (a native of Virginia Beach , near Langley Research Center [ 18 ] ) came on board as a producer on the film. He also wrote original songs and handled the music department and soundtrack of the film, with Hans Zimmer and Benjamin Wallfisch . [ 19 ] Morehouse College mathematics professor Rudy L. Horne was brought in to be the on-set mathematician.
The film, set at NASA Langley Research Center in 1961, depicts segregated facilities such as the West Area Computing unit, where an all-Black group of female mathematicians were originally required to use separate dining and bathroom facilities. However, in reality, Dorothy Vaughan was promoted to supervisor of West Computing much earlier, in 1949, becoming the first Black supervisor at the National Advisory Committee for Aeronautics (NACA) and one of its few female supervisors. In 1958, when NACA became NASA, segregated facilities, including the West Computing office, were abolished. [ 20 ] Vaughan and many of the former West computers transferred to the new Analysis and Computation Division (ACD), a racially and gender-integrated group. [ 21 ]
It was Mary Jackson, not Katherine Goble Johnson, who had difficulty finding a colored bathroom – in a 1953 incident she experienced while on temporary assignment in the East Area, a region of Langley unfamiliar to her and where few Blacks worked. [ 22 ] Katherine Goble Johnson, for her part, was initially unaware that the bathrooms at Langley were segregated (in both its East and West areas during the NACA era), and used the "whites-only" bathrooms (many were not explicitly labeled as such) for years before anyone complained. She ignored the complaint, and the issue was dropped. [ 23 ] [ 24 ]
In an interview with WHRO-TV , Goble Johnson said that she did not feel segregated at NASA: "I didn't feel the segregation at NASA, because everybody there was doing research. You had a mission and you worked on it, and it was important to you to do your job [...] and play bridge at lunch. I didn't feel any segregation. I knew it was there, but I didn't feel it." [ 25 ]
Mary Jackson did not have to get a court order to attend night classes at the whites-only high school. She asked the city of Hampton for an exception, and it was granted. The school turned out to be run down and dilapidated, a hidden cost of running two parallel school systems. [ 26 ] She completed her engineering courses and earned a promotion to engineer in 1958. [ 27 ]
Katherine Goble Johnson worked mostly in Langley's West Area, not the East Area – working mainly in Building 1244 starting in mid-1953, and remaining in 1244 even after joining the Space Task Group, through at least the early 1960s and John Glenn 's historic flight . [ 28 ] [ 29 ] [ 30 ]
The scene where a coffeepot labeled "colored" appears in Katherine Goble Johnson's workplace did not happen in real life, and the book on which the film is based mentions no such incident.
Katherine Goble Johnson carpooled with Eunice Smith, a nine-year West Area computer veteran at the time Goble Johnson joined NACA. Smith was her neighbor and friend from her sorority and church choir. [ 31 ] The three Goble children were teenagers at the time of Katherine's marriage to Jim Johnson. [ 32 ]
Katherine Goble Johnson was assigned to the Flight Research Division in 1953, a move that soon became permanent. When the Space Task Group was created in 1958, engineers from the Flight Research Division formed the core of the group, and Goble Johnson was included. She coauthored a research report published by NASA in 1960, the first time a woman in the Flight Research Division had received credit as an author of a research report. [ 33 ] Goble Johnson gained access to editorial meetings as of 1958 simply through persistence, not because one particular meeting was critical. [ 34 ] [ 35 ]
The Space Task Group was led by Robert Gilruth , not the fictional character Al Harrison, who was created to simplify a more complex management structure. The scene where Harrison smashes the Colored Ladies Room sign never happened, as in real life Goble Johnson refused to walk the extra distance to use the colored bathroom and, in her words, "just went to the white one." [ 36 ] Harrison also lets her into Mission Control to witness the launch. Neither scene happened in real life, and screenwriter Theodore Melfi said he saw no problem with adding the scenes, saying, "There needs to be white people who do the right thing, there needs to be Black people who do the right thing, and someone does the right thing. And so who cares who does the right thing, as long as the right thing is achieved?" [ 36 ]
Dexter Thomas of Vice News criticized Melfi's additions as creating the white savior trope : "In this case, it means that a white person doesn't have to think about the possibility that, were they around back in the 1960s South, they might have been one of the bad ones." [ 37 ] The Atlantic ' s Megan Garber said that the film's "narrative trajectory" involved "thematic elements of the white savior". [ 38 ] Melfi said he found "hurtful" the "accusations of a 'white savior' storyline", saying:
It was very upsetting to me because I am at a place where I've lived my life colorless and I grew up in Brooklyn. I walked to school with people of all shapes, sizes, and colors, and that's how I've lived my life. So it's very upsetting that we still have to have this conversation. I get upset when I hear 'Black film,' and so does Taraji P. Henson [...] It's just a film. And if we keep labeling something 'a Black film,' or 'a white film' – basically it's modern day segregation. We're all humans. Any human can tell any human's story. I don't want to have this conversation about Black film or white film anymore. I wanna have conversations about film.
The Huffington Post ' s Zeba Blay said of Melfi's frustration:
His frustration is also a perfect example of how, when it comes to open dialogue about depictions of people of color on screen, it behooves white people (especially those who position themselves as 'allies') to listen [...] the inclusion of the bathroom scene doesn't make Melfi a bad filmmaker, or a bad person, or a racist. But his suggestion that a feel-good scene like that was needed for the marketability and overall appeal of the film speaks to the fact that Hollywood at large still has a long way to go in telling Black stories, no matter how many strides have been made. [ 39 ]
The fictional characters Vivian Mitchell and Paul Stafford are composites of several team members and reflect common social views and attitudes of the time. Karl Zielinski is based on Mary Jackson's mentor, Kazimierz "Kaz" Czarnecki . [ 40 ]
John Glenn, who was about a decade older than depicted at the time of launch, did ask specifically for Goble Johnson to verify the IBM calculations, [ 41 ] although she had several days before the launch date to complete the process. [ 42 ]
Author Margot Lee Shetterly has agreed that there are differences between her book and the movie, but found that to be understandable:
For better or for worse, there is history, there is the book and then there's the movie. Timelines had to be conflated and [there were] composite characters, and for most people [who have seen the movie] have already taken that as the literal fact. [...] You might get the indication in the movie that these were the only people doing those jobs, when in reality we know they worked in teams, and those teams had other teams. There were sections, branches, divisions, and they all went up to a director. There were so many people required to make this happen. [...] It would be great for people to understand that there were so many more people. Even though Katherine Goble Johnson, in this role, was a hero, there were so many others that were required to do other kinds of tests and checks to make [Glenn's] mission come to fruition. But I understand you can't make a movie with 300 characters. It is simply not possible. [ 43 ]
John Glenn's flight was not terminated early as stated in the movie's closing subtitles. The MA-6 mission was planned for three orbits and landed at the expected time. The press kit published before launch states that "The Mercury Operations Director may elect a one, two or three orbit mission." [ 44 ] The post-mission report also shows that retrofire was scheduled to occur on the third orbit. [ 45 ] Scott Carpenter 's subsequent flight in May was also scheduled and flew for three orbits, and Wally Schirra 's planned six-orbit flight in October required extensive modifications to the Mercury capsule's life-support system to allow him to fly a nine-hour mission. [ 46 ] The phrase "go for at least seven orbits" that is in the mission transcript refers to the fact that the Atlas booster had placed Glenn's capsule into an orbit that would be stable for at least seven orbits, not that he had permission to stay up that long.
The Mercury Control Center was located at Cape Canaveral in Florida, not at the Langley Research Center in Virginia. The orbit plots displayed in the front of the room incorrectly show a six-orbit mission, which did not happen until Wally Schirra's MA-8 mission in October 1962. The movie also incorrectly shows NASA flight controllers monitoring live telemetry from the Soviet Vostok launch, which the Soviet Union would not have been sharing with NASA in 1961.
Katherine Goble Johnson's Technical Note D-233, co-written with T.H. Skopinski, can be found on the NASA Technical Reports Server. [ 47 ]
The movie depicts the IBM 7090 as the first computer at Langley, but there were actually earlier computers there, [ 48 ] : 138 and Dorothy Vaughan had previously been programming for the IBM 704 in FORTRAN . [ 48 ] : 205–206
The movie refers to an IBM 7090 (first released in 1959), but the console shown is for an IBM 7094 (released in 1962).
The film began a limited release on December 25, 2016, before a wide release on January 6, 2017. [ 49 ] [ 50 ]
After Hidden Figures was released on December 25, 2016, certain charities, institutions and independent businesses who regard the film as relevant to the cause of improving youth awareness in education and careers in the science, technology, engineering, and mathematics (STEM) fields, organized free screenings of the film in order to spread the message of the film's subject matter. [ 51 ] A collaborative effort between Western New York STEM Hub, AT&T and the Girl Scouts of the USA allowed more than 200 Buffalo Public Schools students, Girl Scouts and teachers to see the film. WBFO's Senior Reporter Eileen Buckley stated the event was designed to help encourage a new generation of women to consider STEM careers . Research indicates that by 2020, there will be 2.4 million unfilled STEM jobs. [ 52 ] Aspiring astronaut Naia Butler-Craig wrote of the film: "I can't imagine what that would have been like: 16-year-old, impressionable, curious and space-obsessed Naia finding out that Black women had something to do with getting Americans on the moon." [ 53 ]
Also, the film's principal actors (Henson, Spencer, Monáe and Parsons), director (Melfi), producer/musical creator (Williams), and other non-profit outside groups have offered free screenings of Hidden Figures at several cinema locations around the world. Some of the screenings were open to all comers, while others were arranged to benefit girls, women and the underprivileged. The campaign began as individual activism by Spencer, and made a total of more than 1,500 seats for Hidden Figures available, free of charge, to poor individuals and families. The result was seven more screenings for people who otherwise might not have been able to afford to see the film: in Atlanta (sponsored by Monáe), in Washington, D.C. (sponsored by Henson), in Chicago (also Henson), in Houston (by Parsons), in Hazelwood, Missouri (by Melfi and actress/co-producer Kimberly Quinn), and in Norfolk and Virginia Beach, Virginia (both sponsored by Williams). [ 54 ]
In February 2017, AMC Theatres and 21st Century Fox announced that free screenings of Hidden Figures would take place in celebration of Black History Month in up to 14 select U.S. cities (including Atlanta, Chicago, Dallas, Los Angeles and Miami). The statement described the February charity screenings as building broader awareness of the film's true story of Black women mathematicians who worked at NASA during the Space Race. [ 55 ] 21st Century Fox and AMC Theatres also invited schools, community groups and non-profit organizations to apply for additional special screenings to be held in their towns. "As we celebrate Black History Month and look ahead to Women's History Month in March, this story of empowerment and perseverance is more relevant than ever," said Liba Rubenstein, 21st Century Fox's Senior Vice President of Social Impact, "We at 21CF were inspired by the grassroots movement to bring this film to audiences that wouldn't otherwise be able to see it - audiences that might include future innovators and barrier-breakers - and we wanted to support and extend that movement". [ 56 ]
Philanthropic non-profit outside groups and other local efforts by individuals have offered free screenings of Hidden Figures by using crowdfunding platforms on the Internet, that allow people to raise money for free film screening events. [ 57 ] [ 58 ] Dozens of other GoFundMe free screening campaigns have appeared since the film's general release, all by people wanting to raise money to pay for students to see the film. [ 57 ]
In 2019, The Walt Disney Company partnered with the U.S. Department of State on the third annual "Hidden No More" exchange program, which was inspired by the film and brings to the United States 50 women from around the world who have excelled in STEM careers such as spacecraft engineering, data solutions and data privacy, and STEM-related education. [ 59 ] The exchange program began in 2017 after local US embassies screened the film to their local communities. The support for the screenings was so positive that 48 countries decided each to nominate one woman in STEM to represent their country on a three-week IVLP exchange program in the United States. [ 60 ]
Following the 2017 Lego Ideas Contest , Denmark-based toy maker The Lego Group announced plans to manufacture a fan-designed Women of NASA figurine set of five female scientists, engineers and astronauts, based on real women who have worked for NASA . The minifigures planned for inclusion in the set were Katherine Johnson; computer scientist Margaret Hamilton ; astronaut, physicist and educator Sally Ride ; astronomer Nancy Grace Roman ; and astronaut and physician Mae Jemison (who is also African American). The finished set did not include Johnson. The Women of NASA set was released November 1, 2017. [ 61 ] [ 62 ] [ 63 ]
Hidden Figures was released on Digital HD on March 28, 2017, and Blu-ray, 4K Ultra HD, and DVD on April 11, 2017. [ 64 ] The film debuted at No. 3 on the home video sales chart. [ 65 ]
Hidden Figures grossed $169.6 million in the United States and Canada, and $66.3 million in other territories, for a worldwide gross of $236 million, against a production budget of $25 million. [ 3 ] Domestically, Hidden Figures was the highest-grossing Best Picture nominee at the 89th Academy Awards. [ 66 ] Deadline Hollywood calculated the net profit of the film to be $95.55 million, when factoring together all expenses and revenues, making it one of the top twenty most profitable releases of 2016. [ 4 ]
On review aggregator Rotten Tomatoes , the film has an approval rating of 93% based on 325 reviews, with an average rating of 7.6/10. The website's critical consensus reads, "In heartwarming, crowd-pleasing fashion, Hidden Figures celebrates overlooked—and crucial—contributions from a pivotal moment in American history." [ 67 ] On Metacritic , the film has a weighted average score of 74 out of 100, based on 47 critics, indicating "generally favorable" reviews. [ 68 ] Audiences polled by CinemaScore gave the film an average grade of "A+" on an A+ to F scale, [ 69 ] one of fewer than 90 films in the history of the service to receive such a score. [ 70 ]
Simon Thompson of IGN gave the film a rating of nine out of ten, writing, " Hidden Figures fills in an all too forgotten, or simply too widely unknown, blank in US history in a classy, engaging, entertaining and hugely fulfilling way. Superb performances across the board and a fascinating story alone make Hidden Figures a solid, an accomplished and deftly executed movie that entertains, engages and earns your time, money and attention." [ 71 ] Ty Burr of The Boston Globe wrote, "the film's made with more heart than art and more skill than subtlety, and it works primarily because of the women that it portrays and the actresses who portray them. Best of all, you come out of the movie knowing who Katherine Johnson and Dorothy Vaughn and Mary Jackson are, and so do your daughters and sons." [ 72 ]
Clayton Davis of Awards Circuit gave the film three and a half stars, saying "Precisely marketed as terrific adult entertainment for the Christmas season, Hidden Figures is a faithful and truly beautiful portrait of our country's consistent gloss over the racial tensions that have divided and continue to plague the fabric of our existence. Lavishly engaging from start to finish, Hidden Figures may be able to catch the most inopportune movie-goer off guard and cause them to fall for its undeniable and classic storytelling. The film is not to be missed." [ 73 ]
Other reviews criticized the film for its fictional embellishments and conventional, feel-good style. Tim Grierson, writing for Screen International, states that "Hidden Figures is almost patronisingly earnest in its depiction of sexism and racism. An air of do-gooder self-satisfaction hovers over the proceedings", [ 74 ] while Jesse Hassenger at The A.V. Club comments that "lack of surprise is in this movie's bones." [ 75 ] Eric Kohn of IndieWire argues that the film "trivializes history; as a hagiographic tribute to its brilliant protagonists, it doesn't dig into the essence of their struggles" [ 76 ] and similarly, Paul Byrnes concludes that "When a film purports to be selling history, we're entitled to ask where the history went, even if it offers a good time instead." [ 77 ]
Octavia Spencer was particularly lauded for her portrayal of Dorothy Vaughan and was nominated for the Academy Award for Best Supporting Actress , the Golden Globe Award for Best Supporting Actress – Motion Picture and the Screen Actors Guild Award for Outstanding Performance by a Female Actor in a Supporting Role . The film's ensemble cast won the Screen Actors Guild Award for Outstanding Performance by a Cast in a Motion Picture . The film itself garnered a nomination for the Academy Award for Best Picture and several nominations for its screenplay (including the Oscar and BAFTA ), soundtrack and score .
Overall, the film received three nominations at the 89th Academy Awards in 2017 , winning none.
Hidden algebra provides a formal semantics for use in the field of software engineering , especially for concurrent distributed object systems . [ 1 ] It supports correctness proofs . [ 2 ]
Hidden algebra was studied by Joseph Goguen . [ 1 ] [ 3 ] It handles features of large software-based systems, including concurrency , distribution , nondeterminism , and local states . It also handles object-oriented features such as classes , subclasses ( inheritance ), attributes , and methods . Hidden algebra generalizes process algebra and transition system approaches.
This software-engineering -related article is a stub . You can help Wikipedia by expanding it . | https://en.wikipedia.org/wiki/Hidden_algebra |
In bifurcation theory , a bounded oscillation that is born without loss of stability of a stationary set is called a hidden oscillation . In nonlinear control theory, the birth of a hidden oscillation in a time-invariant control system with bounded states means crossing a boundary, in the domain of the parameters, where local stability of the stationary states implies global stability (see, e.g. Kalman's conjecture ). If a hidden oscillation (or a set of such hidden oscillations filling a compact subset of the phase space of the dynamical system ) attracts all nearby oscillations, then it is called a hidden attractor . For a dynamical system with a unique equilibrium point that is globally attractive, the birth of a hidden attractor corresponds to a qualitative change in behaviour from monostability to bistability. In the general case, a dynamical system may turn out to be multistable and have coexisting local attractors in the phase space. While trivial attractors, i.e. stable equilibrium points , can be easily found analytically or numerically, the search for periodic and chaotic attractors can turn out to be a challenging problem (see, e.g. the second part of Hilbert's 16th problem ).
To identify a local attractor in a physical or numerical experiment, one needs to choose an initial state of the system in the attractor's basin of attraction and observe how the system's state, starting from this initial state, visualizes the attractor after a transient process. The classification of attractors as being hidden or self-excited reflects the difficulties of revealing basins of attraction and searching for the local attractors in the phase space .
Definition . [ 1 ] [ 2 ] [ 3 ] An attractor is called a hidden attractor if its basin of attraction does not intersect with a certain open neighbourhood of equilibrium points; otherwise it is called a self-excited attractor.
The classification of attractors as being hidden or self-excited was introduced by G. Leonov and N. Kuznetsov in connection with the discovery of the hidden Chua attractor [ 4 ] [ 5 ] [ 6 ] [ 7 ] for the first time in 2009. Similarly, an arbitrary bounded oscillation, not necessarily having an open neighborhood as the basin of attraction in the phase space, is classified as a self-excited or hidden oscillation.
For a self-excited attractor, its basin of attraction is connected with an unstable equilibrium and, therefore, the self-excited attractors can be found numerically by a standard computational procedure in which, after a transient process, a trajectory starting in a neighbourhood of an unstable equilibrium is attracted to the state of oscillation and then traces it (see, e.g. the self-oscillation process). Thus, self-excited attractors, even coexisting in the case of multistability , can be easily revealed and visualized numerically. In the Lorenz system , for classical parameters, the attractor is self-excited with respect to all existing equilibria, and can be visualized by any trajectory from their vicinities; however, for some other parameter values there are two trivial attractors coexisting with a chaotic attractor, which is a self-excited one with respect to the zero equilibrium only. Classical attractors in the Van der Pol , Belousov–Zhabotinsky , Rössler , Chua , and Hénon dynamical systems are self-excited.
A conjecture is that the Lyapunov dimension of a self-excited attractor does not exceed the Lyapunov dimension of one of the unstable equilibria, the unstable manifold of which intersects with the basin of attraction and visualizes the attractor. [ 8 ]
Hidden attractors have basins of attraction which are not connected with equilibria and are "hidden" somewhere in the phase space. For example, hidden attractors arise in systems without equilibria (e.g. rotating electromechanical dynamical systems with the Sommerfeld effect (1902)) and in systems with only one equilibrium, which is stable (e.g. counterexamples to Aizerman's conjecture (1949) and Kalman's conjecture (1957) on the monostability of nonlinear control systems). One of the first related theoretical problems is the second part of Hilbert's 16th problem on the number and mutual disposition of limit cycles in two-dimensional polynomial systems, where nested stable limit cycles are hidden periodic attractors. The notion of a hidden attractor has become a catalyst for the discovery of hidden attractors in many applied dynamical models. [ 1 ] [ 9 ] [ 10 ]
In general, the problem with hidden attractors is that there are no general straightforward methods to trace or predict such states for the system’s dynamics (see, e.g. [ 11 ] ). While for two-dimensional systems, hidden oscillations can be investigated using analytical methods (see, e.g., the results on the second part of Hilbert's 16th problem ), for the study of stability and oscillations in complex nonlinear multidimensional systems, numerical methods are often used.
In the multi-dimensional case the integration of trajectories with random initial data is unlikely to provide a localization of a hidden attractor, since a basin of attraction may be very small, and the attractor dimension itself may be much less than the dimension of the considered system.
Therefore, for the numerical localization of hidden attractors in multi-dimensional space, it is necessary to develop special analytical-numerical computational procedures, [ 1 ] [ 12 ] [ 8 ] which allow one to choose initial data in the attraction domain of the hidden oscillation (which does not contain neighborhoods of equilibria), and then to perform trajectory computation.
There are corresponding effective methods based on homotopy and numerical continuation : a sequence of similar systems is constructed, such that for the first (starting) system the initial data for numerical computation of an oscillating solution (starting oscillation) can be obtained analytically, and then the transformation of this starting oscillation in the transition from one system to another is followed numerically.
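A schematic sketch of such a continuation procedure in Python follows; the one-parameter family `rhs` is purely illustrative (not a system known to possess a hidden attractor), and the point is the skeleton of the method: integrate, then reuse the final state as initial data for the next system in the sequence.

```python
import numpy as np
from scipy.integrate import solve_ivp

def rhs(t, x, p):
    """Illustrative one-parameter family of systems: p = 0 is the starting
    system with an analytically known oscillation, p = 1 is the target."""
    return [x[1],
            -0.1 * x[1] - x[0] ** 3 + (1.0 - p) * x[0] + p * 0.5 * np.cos(t)]

x0 = [1.0, 0.0]  # initial data on the starting oscillation
for p in np.linspace(0.0, 1.0, 11):
    sol = solve_ivp(rhs, (0.0, 200.0), x0, args=(p,), rtol=1e-9)
    x0 = list(sol.y[:, -1])  # final state seeds the next system in the sequence
# x0 now provides initial data inside the basin of the target oscillation.
```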
The classification of attractors as self-excited or hidden ones was a fundamental premise for the emergence of the theory of hidden oscillations, which represents the modern development of Andronov's theory of oscillations. [ 13 ] : 39 It is key to determining the exact boundaries of global stability, parts of which are classified by N. Kuznetsov as trivial (i.e., determined by local bifurcations) or as hidden (i.e., determined by non-local bifurcations and by the birth of hidden oscillations). [ 14 ] [ 15 ] | https://en.wikipedia.org/wiki/Hidden_attractor |
A hidden message is information that is not immediately noticeable, and that must be discovered or uncovered and interpreted before it can be known. Hidden messages include backwards audio messages, hidden visual messages and symbolic or cryptic codes such as a crossword or cipher . Although there are many legitimate examples of hidden messages created with techniques such as backmasking and steganography , many so-called hidden messages are merely fanciful imaginings or apophany .
A backward message in an audio recording is only fully apparent when the recording is played reversed. Some backward messages are produced by deliberate backmasking , while others are simply phonetic reversals resulting from random combinations of words. Backward messages may occur in various mediums, including music, video games , music videos , movies , and television shows .
Backmasking is a recording technique in which a message is recorded backwards onto a track that is meant to be played forwards. It was popularized by The Beatles , who used backward vocals and instrumentation on their 1966 album Revolver . The technique has also been used to censor words or phrases for "clean" releases of songs [ citation needed ] .
Backmasking has been a controversial topic in the United States since the 1980s, when allegations of its use for Satanic purposes were made against prominent rock musicians , leading to record-burnings and proposed anti-backmasking legislation by state and federal governments. In debate are both the existence of backmasked Satanic messages and their purported ability to subliminally affect listeners. [ citation needed ]
Certain phrases produce a different phrase when their phonemes are reversed, a process known as phonetic reversal. For example, "kiss" backwards sounds like "sick", and so the title of Yoko Ono 's " Kiss Kiss Kiss " sounds like "Sick Sick Sick" or "Six Six Six" backwards. The chorus of Queen 's " Another One Bites the Dust " [ 1 ] was claimed, when played in reverse, to be heard as "It's fun to smoke marijuana" [ 2 ] [ 1 ] or "start to smoke marijuana". [ 3 ] The Paul is dead phenomenon was started in part because a phonetic reversal of "Number nine" (words repeated throughout Revolution 9 ) was interpreted as "Turn me on, dead man".
According to proponents of reverse speech , phonetic reversal occurs unknowingly during normal speech.
Hidden messages can be created in visual mediums with techniques such as hidden computer text and steganography .
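As a concrete illustration of steganography, the following Python sketch hides a message in the least significant bits of pixel values; the flat list of integers stands in for real image data, which would normally be read with an imaging library.

```python
def embed(pixels, message):
    """Hide message bytes in the least significant bit of each pixel."""
    bits = [(byte >> i) & 1 for byte in message for i in range(8)]
    if len(bits) > len(pixels):
        raise ValueError("cover too small for the message")
    stego = list(pixels)
    for i, bit in enumerate(bits):
        stego[i] = (stego[i] & ~1) | bit   # overwrite the LSB
    return stego

def extract(pixels, n_bytes):
    """Recover n_bytes hidden by embed()."""
    out = []
    for k in range(n_bytes):
        byte = 0
        for i in range(8):
            byte |= (pixels[8 * k + i] & 1) << i
        out.append(byte)
    return bytes(out)

cover = [120, 121, 119, 200] * 32   # stand-in for image pixel data
stego = embed(cover, b"hi")
print(extract(stego, 2))            # b'hi'
```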
In the 1980s, Coca-Cola released in South Australia an advertising poster featuring the reintroduced contour bottle with the speech bubble "Feel the Curves!!". An image hidden inside one of the ice cubes depicted an oral sex act. [ 4 ] Thousands of posters were distributed to hotels and bottle shops in Australia before the mistake was discovered by Coca-Cola management. The artist of the poster was fired and all the posters were recalled. [ 4 ] Rival PepsiCo faced a similar accusation in 1990, when its promotional Pepsi Cool Cans were accused of having the word "sex" hidden in their design if two of the cans were stacked on top of each other. [ 5 ]
Various other messages have been claimed to exist in Disney movies, some of them risqué, such as the well-known allegation of an erection showing on a priest in The Little Mermaid . [ 6 ] According to the Snopes website, one image "is clearly true [and] undeniably purposely inserted into the movie": a topless woman in two frames of The Rescuers . [ 7 ]
PETA (People for the Ethical Treatment of Animals) had an antipathy towards PETCO , a pet food retailer in San Diego, regarding the purported mistreatment of live animals at their stores. When the San Diego Padres baseball team announced that the retailer had purchased naming rights to Petco Park , PETA was unable to persuade the sports team to terminate the agreement. Later, PETA successfully purchased a commemorative display brick with what appears to be a complimentary message: "Break Open Your Cold Ones! Toast The Padres! Enjoy This Championship Organization!" However, if one takes the first letters of each word, the resulting acrostic reads "BOYCOTT PETCO". Neither PETCO nor the Padres have taken any action to remove the brick, stating that if someone walked by, they would not know it had anything to do with the PETA/PETCO feud. [ 8 ]
Secretive design language is widely used on web sites as Easter eggs or within products as hidden features, such as In-N-Out Burger 's secret menu or the new Norwegian passport design for security. [ 9 ] | https://en.wikipedia.org/wiki/Hidden_message |
In wireless networking , the hidden node problem or hidden terminal problem occurs when a node can communicate with a wireless access point (AP), but cannot directly communicate with other nodes that are communicating with that AP. [ 1 ] This leads to difficulties in the medium access control sublayer, since multiple nodes can send data packets to the AP simultaneously, which creates interference at the AP, resulting in no packet getting through.
Although some loss of packets is normal in wireless networking, and the higher layers will resend them, if one of the nodes is transferring a lot of large packets over a long period, the other node may get very little goodput .
Practical protocol solutions to the hidden node problem exist. For example, in Request To Send/Clear To Send (RTS/CTS) mechanisms, nodes send short packets to request permission from the access point to send longer data packets. Because responses from the AP are seen by all the nodes, the nodes can synchronize their transmissions so as not to interfere. However, the mechanism introduces latency , and the overhead can often exceed the benefit, particularly for short data packets.
Hidden nodes in a wireless network are nodes that are out of range of other nodes or a collection of nodes. Consider a physical star topology with an access point with many nodes surrounding it in a circular fashion: each node is within communication range of the AP, but the nodes cannot communicate with each other.
For example, in a wireless network, it is likely that the node at the far edge of the access point's range, which is known as A , can see the access point, but it is unlikely that the same node can communicate with a node on the opposite end of the access point's range, C . These nodes are known as hidden .
Another example would be where A and C are either side of an obstacle that reflects or strongly absorbs radio waves, but nevertheless they can both still see the same AP.
The problem arises when nodes A and C start to send packets simultaneously to the access point B . Because nodes A and C cannot receive each other's signals, they cannot detect the collision before or while transmitting; carrier-sense multiple access with collision detection (CSMA/CD) does not work, and collisions occur, corrupting the data received by the access point.
To overcome the hidden node problem, request-to-send/clear-to-send (RTS/CTS) handshaking ( IEEE 802.11 RTS/CTS ) is implemented at the Access Point in conjunction with the Carrier sense multiple access with collision avoidance ( CSMA/CA ) scheme. The same problem exists in a mobile ad hoc network ( MANET ).
IEEE 802.11 uses 802.11 RTS/CTS acknowledgment and handshake packets to partly overcome the hidden node problem. RTS/CTS is not a complete solution and may decrease throughput even further, but adaptive acknowledgements from the base station can help too.
The comparison with hidden stations shows that RTS/CTS frames in each traffic class are beneficial (even with short audio frames, which incur a high relative overhead from the RTS/CTS exchange). [ 2 ]
In the experimental environment, the following traffic classes are included: data (not time critical), data (time critical), video, and audio. Examples of the notation: (0|0|0|2) means 2 audio stations; (1|1|2|0) means 1 data station (not time critical), 1 data station (time critical), and 2 video stations.
Other methods that can be employed to solve the hidden node problem are:
Increasing the transmission power of the nodes can solve the hidden node problem by allowing the cell around each node to increase in size, encompassing all of the other nodes. This configuration enables the non-hidden nodes to detect, or hear, the hidden node. If the non-hidden nodes can hear the hidden node, the hidden node is no longer hidden. As wireless LANs use the CSMA/CA protocol, nodes will wait their turn before communicating with the access point .
This solution only works if one increases the transmission power on nodes that are hidden. In the typical case of a WiFi network, increasing transmission power on the access point only will not solve the problem because typically the hidden nodes are the clients (e.g. laptops, mobile devices), not the access point itself, and the clients will still not be able to hear each other. Increasing transmission power on the access point is actually likely to make the problem worse, because it will put new clients in range of the access point and thus add new nodes to the network that are hidden from other clients.
Since nodes using directional antennas are nearly invisible to nodes that are not positioned in the direction the antenna is aimed at, directional antennas should be used only for very small networks (e.g., dedicated point-to-point connections). Use omnidirectional antennas for widespread networks consisting of more than two nodes.
Increasing the power on mobile nodes may not work if, for example, the reason one node is hidden is that there is a concrete or steel wall preventing communication with other nodes. It is doubtful that one would be able to remove such an obstacle, but removal of the obstacle is another method of remedy for the hidden node problem.
Another method of solving the hidden node problem is moving the nodes so that they can all hear each other. If it is found that the hidden node problem is the result of a user moving his computer to an area that is hidden from the other wireless nodes, it may be necessary to have that user move again. The alternative to forcing users to move is extending the wireless LAN to add proper coverage to the hidden area, perhaps using additional access points.
There are several software implementations of additional protocols that essentially implement a polling or token passing strategy. Then, a master (typically the access point) dynamically polls clients for data. Clients are not allowed to send data without the master's invitation. This eliminates the hidden node problem at the cost of increased latency and less maximum throughput.
The Wi-Fi IEEE 802.11 RTS/CTS is one handshake protocol that is used. Clients that wish to send data send an RTS frame, and the access point then sends a CTS frame when it is ready for that particular node. For short packets the relative overhead is quite large, so short packets do not usually use the exchange; the minimum packet size that triggers it is generally configurable.
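The effect can be illustrated with a toy slotted-time simulation in Python; the model below is a deliberate simplification (it ignores RTS/CTS latency and frame overhead) but shows how coordination through the access point removes collisions between hidden nodes.

```python
import random

def simulate(n_slots, use_rtscts, p_tx=0.3):
    """Toy slotted model: hidden nodes A and C transmit to AP B.
    Without RTS/CTS, simultaneous transmissions collide at B.
    With RTS/CTS, the AP grants one clear-to-send per slot."""
    delivered = collisions = 0
    for _ in range(n_slots):
        a = random.random() < p_tx
        c = random.random() < p_tx
        if a and c:
            if use_rtscts:
                delivered += 1      # AP answers one RTS; the other node defers
            else:
                collisions += 1     # both transmit blindly and collide
        elif a or c:
            delivered += 1
    return delivered, collisions

random.seed(0)
print("CSMA only: delivered=%d collisions=%d" % simulate(10000, False))
print("RTS/CTS:   delivered=%d collisions=%d" % simulate(10000, True))
```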
In cellular networks the hidden node problem is addressed in practice by time-domain multiplexing for each client served by a mast, and by using spatially diverse transmitters, so that each node can potentially be served by any of three masts, greatly reducing problems with obstacles interfering with radio propagation. | https://en.wikipedia.org/wiki/Hidden_node_problem |
A hidden state of matter is a state of matter which cannot be reached under ergodic conditions, and is therefore distinct from known thermodynamic phases of the material. [ 1 ] [ 2 ] Examples exist in condensed matter systems, and are typically reached by the non-ergodic conditions created through laser photo excitation. [ 3 ] [ 4 ] Short-lived hidden states of matter have also been reported in crystals using lasers. Recently a persistent hidden state was discovered in a crystal of Tantalum(IV) sulfide (TaS 2 ), where the state is stable at low temperatures. [ 2 ] A hidden state of matter is not to be confused with hidden order , which exists in equilibrium, but is not immediately apparent or easily observed.
Using ultrashort laser pulses impinging on solid state matter, [ 3 ] the system may be knocked out of equilibrium so that not only are the individual subsystems out of equilibrium with each other but also internally. Under such conditions, new states of matter may be created which are not otherwise reachable under equilibrium, ergodic system evolution.
Such states are usually unstable and decay very rapidly, typically in nanoseconds or less. [ 4 ] The difficulty is in distinguishing a genuine hidden state from one which is simply out of thermal equilibrium. [ 5 ]
Probably the first instance of a photoinduced state is described for the organic molecular compound TTF-CA, which turns from neutral to ionic species as a result of excitation by laser pulses. [ 4 ] [ 6 ] [ 7 ] However, a similar transformation is also possible by the application of pressure, so strictly speaking the photoinduced transition is not to a hidden state under the definition given in the introductory paragraph. A few further examples are given in ref. [ 4 ] Photoexcitation has been shown to produce persistent states in vanadates [ 8 ] [ 9 ] and manganite materials, [ 10 ] [ 11 ] [ 12 ] leading to filamentary paths of a modified charge ordered phase which is sustained by a passing current. Transient superconductivity was also reported in cuprates . [ 13 ] [ 14 ]
A hypothetical schematic diagram for the transition to an H state by photo excitation is shown in the Figure (after [ 4 ] ). An absorbed photon promotes an electron from the ground state G to an excited state E (red arrow). State E rapidly relaxes via Franck–Condon relaxation to an intermediate locally reordered state I. Through interactions with others of its kind, this state collectively orders to form a macroscopically ordered metastable state H, further lowering its energy as a result. The new state has a broken symmetry with respect to the G or E state, and may also involve further relaxation compared to the I state. The barrier E B prevents state H from reverting to the ground state G. If the barrier is sufficiently large compared to the thermal energy k B T, where k B is the Boltzmann constant , the H state can be stable indefinitely. | https://en.wikipedia.org/wiki/Hidden_states_of_matter |
The hidden subgroup problem ( HSP ) is a topic of research in mathematics and theoretical computer science . The framework captures problems such as factoring , discrete logarithm , graph isomorphism , and the shortest vector problem . This makes it especially important in the theory of quantum computing because Shor's algorithms for factoring and finding discrete logarithms in quantum computing are instances of the hidden subgroup problem for finite abelian groups , while the other problems correspond to finite groups that are not abelian.
Given a group G {\displaystyle G} , a subgroup H ≤ G {\displaystyle H\leq G} , and a set X {\displaystyle X} , we say a function f : G → X {\displaystyle f:G\to X} hides the subgroup H {\displaystyle H} if for all g 1 , g 2 ∈ G , f ( g 1 ) = f ( g 2 ) {\displaystyle g_{1},g_{2}\in G,f(g_{1})=f(g_{2})} if and only if g 1 H = g 2 H {\displaystyle g_{1}H=g_{2}H} . Equivalently, f {\displaystyle f} is constant on each coset of H , while it is different between the different cosets of H .
Hidden subgroup problem : Let G {\displaystyle G} be a group, X {\displaystyle X} a finite set, and f : G → X {\displaystyle f:G\to X} a function that hides a subgroup H ≤ G {\displaystyle H\leq G} . The function f {\displaystyle f} is given via an oracle , which uses O ( log | G | + log | X | ) {\displaystyle O(\log |G|+\log |X|)} bits. Using information gained from evaluations of f {\displaystyle f} via its oracle, determine a generating set for H {\displaystyle H} .
A special case is when X {\displaystyle X} is a group and f {\displaystyle f} is a group homomorphism in which case H {\displaystyle H} corresponds to the kernel of f {\displaystyle f} .
The hidden subgroup problem is especially important in the theory of quantum computing for the following reasons.
There is an efficient quantum algorithm for solving HSP over finite abelian groups in time polynomial in log | G | {\displaystyle \log |G|} . For arbitrary groups, it is known that the hidden subgroup problem is solvable using a polynomial number of evaluations of the oracle. [ 3 ] However, the circuits that implement this may be exponential in log | G | {\displaystyle \log |G|} , making the algorithm not efficient overall; efficient algorithms must be polynomial in the number of oracle evaluations and running time. The existence of such an algorithm for arbitrary groups is open. Quantum polynomial time algorithms exist for certain subclasses of groups, such as semi-direct products of some abelian groups .
The algorithm for abelian groups uses representations , i.e. homomorphisms from $G$ to $\mathrm{GL}_{k}(\mathbb{C})$, the general linear group over the complex numbers. A representation is irreducible if it cannot be expressed as the direct sum of two or more representations of $G$. For an abelian group, all the irreducible representations are the characters , which are the representations of dimension one; there are no irreducible representations of higher dimension for abelian groups.
The quantum Fourier transform can be defined in terms of $\mathrm{Z}_{N}$, the additive cyclic group of order $N$. Introducing the character $\chi_{j}(k)=\omega_{N}^{jk}=e^{2\pi i\frac{jk}{N}}$, the quantum Fourier transform is defined by $F_{N}|j\rangle=\frac{1}{\sqrt{N}}\sum_{k=0}^{N-1}\chi_{j}(k)|k\rangle$. Furthermore, we define $|\chi_{j}\rangle=F_{N}|j\rangle$. Any finite abelian group can be written as the direct product of cyclic groups $\mathrm{Z}_{N_{1}}\times\mathrm{Z}_{N_{2}}\times\ldots\times\mathrm{Z}_{N_{m}}$. On a quantum computer, this is represented as the tensor product of $m$ registers of dimensions $N_{1},N_{2},\ldots,N_{m}$ respectively, and the overall quantum Fourier transform is $F_{N_{1}}\otimes F_{N_{2}}\otimes\ldots\otimes F_{N_{m}}$.
The set of characters of G {\displaystyle G} forms a group G ^ {\displaystyle {\widehat {G}}} called the dual group of G {\displaystyle G} . We also have a subgroup H ⊥ ≤ G ^ {\displaystyle H^{\perp }\leq {\widehat {G}}} of size | G | / | H | {\displaystyle |G|/|H|} defined by H ⊥ = { χ g : χ g ( h ) = 1 for all h ∈ H } {\displaystyle H^{\perp }=\{\chi _{g}:\chi _{g}(h)=1{\text{ for all }}h\in H\}} For each iteration of the algorithm, the quantum circuit outputs an element g ∈ G {\displaystyle g\in G} corresponding to a character χ g ∈ H ⊥ {\displaystyle \chi _{g}\in H^{\perp }} , and since χ g ( h ) = 1 {\displaystyle \chi _{g}(h)={1}} for all h ∈ H {\displaystyle h\in H} , it helps to pin down what H {\displaystyle H} is.
The algorithm is as follows:
1. Begin with the initial state $|0\rangle|0\rangle$, where the first register ranges over $G$ and the second over $X$.
2. Create a uniform superposition over $G$ in the first register: $\frac{1}{\sqrt{|G|}}\sum_{g\in G}|g\rangle|0\rangle$.
3. Query the oracle for $f$ to obtain $\frac{1}{\sqrt{|G|}}\sum_{g\in G}|g\rangle|f(g)\rangle$.
4. Measure the second register. The observed value $f(s)$ collapses the first register to a uniform superposition over the coset $s+H$: $\frac{1}{\sqrt{|H|}}\sum_{h\in H}|s+h\rangle$.
5. Apply the quantum Fourier transform to the first register, giving $\frac{1}{\sqrt{|H|}}\sum_{h\in H}|\chi_{s+h}\rangle$.
6. Rewrite the state as $\sqrt{\frac{|H|}{|G|}}\sum_{\chi_{g}\in H^{\perp}}\chi_{g}(s)|g\rangle$.
7. Measure the first register; the outcome is an element $g$ with $\chi_{g}\in H^{\perp}$. Repeat until $H^{\perp}$, and hence $H$, is determined.
The state in step 5 is equal to the state in step 6 because of the following: 1 | H | ∑ h ∈ H | χ s + h ⟩ = 1 | H | | G | ∑ h ∈ H ∑ g ∈ G χ s + h ( g ) | g ⟩ = 1 | H | | G | ∑ g ∈ G χ s ( g ) ∑ h ∈ H χ h ( g ) | g ⟩ = 1 | H | | G | ∑ g ∈ G χ g ( s ) ( ∑ h ∈ H χ g ( h ) ) | g ⟩ = | H | | G | ∑ χ g ∈ H ⊥ χ g ( s ) | g ⟩ {\displaystyle {\begin{aligned}{\frac {1}{\sqrt {|H|}}}\sum _{h\in H}|\chi _{s+h}\rangle &={\frac {1}{\sqrt {|H||G|}}}\sum _{h\in H}\sum _{g\in G}\chi _{s+h}(g)|g\rangle \\&={\frac {1}{\sqrt {|H||G|}}}\sum _{g\in G}\chi _{s}(g)\sum _{h\in H}\chi _{h}(g)|g\rangle \\&={\frac {1}{\sqrt {|H||G|}}}\sum _{g\in G}\chi _{g}(s)\left(\sum _{h\in H}\chi _{g}(h)\right)|g\rangle \\&={\sqrt {\frac {|H|}{|G|}}}\sum _{\chi _{g}\in H^{\perp }}\chi _{g}(s)|g\rangle \end{aligned}}} For the last equality, we use the following identity:
Theorem — ∑ h ∈ H χ g ( h ) = { | H | χ g ∈ H ⊥ 0 χ g ∉ H ⊥ {\displaystyle \sum _{h\in H}\chi _{g}(h)={\begin{cases}|H|&\chi _{g}\in H^{\perp }\\0&\chi _{g}\notin H^{\perp }\end{cases}}}
This can be derived from the orthogonality of characters: for characters of the subgroup $H$, $\frac{1}{|H|}\sum_{h\in H}\chi(h)\,\overline{\chi'(h)}$ equals $1$ if $\chi=\chi'$ on $H$ and $0$ otherwise. Letting $\chi'$ be the trivial character, which maps all inputs to $1$, gives $\sum_{h\in H}\chi_{g}(h)=|H|$ if $\chi_{g}$ is trivial on $H$ and $0$ otherwise. Being trivial on $H$ is exactly the condition $\chi_{g}\in H^{\perp}$; thus the summation results in $|H|$ if $\chi_{g}\in H^{\perp}$ and in $0$ if $\chi_{g}\notin H^{\perp}$.
Each measurement of the final state will result in some information gained about H {\displaystyle H} since we know that χ g ( h ) = 1 {\displaystyle \chi _{g}(h)=1} for all h ∈ H {\displaystyle h\in H} . H {\displaystyle H} , or a generating set for H {\displaystyle H} , will be found after a polynomial number of measurements. The size of a generating set will be logarithmically small compared to the size of G {\displaystyle G} . Let T {\displaystyle T} denote a generating set for H {\displaystyle H} , meaning ⟨ T ⟩ = H {\displaystyle \langle T\rangle =H} . The size of the subgroup generated by T {\displaystyle T} will at least be doubled when a new element t ∉ T {\displaystyle t\notin T} is added to it, because H {\displaystyle H} and t + H {\displaystyle t+H} are disjoint and because H ∪ t + H ⊆ ⟨ { t } ∪ T ⟩ {\displaystyle H\cup t+H\subseteq \langle \{t\}\cup T\rangle } . Therefore, the size of a generating set | T | {\displaystyle |T|} satisfies | T | ≤ log | H | ≤ log | G | {\displaystyle |T|\leq \log |H|\leq \log |G|} Thus a generating set for H {\displaystyle H} will be able to be obtained in polynomial time even if G {\displaystyle G} is exponential in size.
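For small groups the procedure can be simulated classically. The sketch below is a toy simulation (not an efficient quantum computation) for G = Z_N with hidden subgroup H = ⟨d⟩, where d divides N and the hiding function is f(g) = g mod d; every measurement outcome lands in H⊥, i.e. on a multiple of N/d.

```python
import numpy as np

def hsp_sample(N, d, rng):
    """One run of the abelian HSP algorithm for G = Z_N with hidden
    subgroup H = <d> (d divides N), hiding function f(g) = g mod d."""
    s = rng.integers(d)                 # step 4: measured f(s) fixes a coset
    state = np.zeros(N, dtype=complex)
    coset = np.arange(s, N, d)          # s + H = {s, s+d, s+2d, ...}
    state[coset] = 1.0 / np.sqrt(len(coset))
    amps = np.fft.fft(state) / np.sqrt(N)   # step 5: QFT over Z_N (unitary)
    probs = np.abs(amps) ** 2               # step 7: measurement statistics
    return rng.choice(N, p=probs / probs.sum())

rng = np.random.default_rng(1)
print(sorted({hsp_sample(12, 3, rng) for _ in range(20)}))
# subset of {0, 4, 8} = H-perp, pinning down H = <3> in Z_12
```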
Many algorithms where quantum speedups occur in quantum computing are instances of the hidden subgroup problem. The following list outlines important instances of the HSP, and whether or not they are solvable. | https://en.wikipedia.org/wiki/Hidden_subgroup_problem |
Hideyuki Matsumura ( 松村 英之 , 1930–1995) was a Japanese mathematician particularly known for his textbooks [ 1 ] [ 2 ] in commutative algebra . He received his Ph.D. in 1958 from Kyoto University under the supervision of mathematician Yasuo Akizuki . [ 3 ]
This article about a Japanese mathematician is a stub . You can help Wikipedia by expanding it . | https://en.wikipedia.org/wiki/Hideyuki_Matsumura |
For telephone services to mobile phones , hierarchical cell structure ("HCS") [ 1 ] in mobile telecommunication refers to the splitting of cells . This type of cell structure allows the network to use the geographical area effectively and serve an increasing population.
The large cell (called a "macro cell") is rearranged to include smaller cells within it, called micro and pico cells. A cricket stadium or exhibition ground can be a micro cell, and a multi-storey building can be a pico cell within the large cell. The micro/pico cell is allocated radio spectrum to serve the increased population. User Equipments (UEs) moving out of the pico/micro cells are allowed to reselect the larger cell.
The HCS cells are given priorities from 0 to 7, where 0 is the lowest priority and 7 the highest. The cells close to the serving cell are given the highest priority. Mobiles with high mobility preferentially reselect lower-priority (larger) cells to avoid continuous reselections.
Microcells can add localized capacity within a macro cell.
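A minimal sketch of such priority-based reselection logic follows; the field names and the speed threshold are invented purely for illustration and do not reflect any particular standard's parameters.

```python
def reselect(current, candidates, speed_kmh, fast=60.0):
    """Toy HCS cell reselection: cells carry a priority in 0..7; a
    fast-moving UE prefers lower-priority (larger) cells to avoid
    continuous reselections between small cells."""
    if speed_kmh > fast:
        pool = [c for c in candidates if c["priority"] <= current["priority"]]
    else:
        pool = list(candidates)
    return max(pool or [current], key=lambda c: (c["priority"], c["signal_dbm"]))

macro = {"name": "macro", "priority": 1, "signal_dbm": -95}
pico = {"name": "stadium-pico", "priority": 6, "signal_dbm": -70}
print(reselect(macro, [macro, pico], speed_kmh=5)["name"])    # stadium-pico
print(reselect(macro, [macro, pico], speed_kmh=120)["name"])  # macro
```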
This article related to telecommunications is a stub . You can help Wikipedia by expanding it . | https://en.wikipedia.org/wiki/Hierarchical_cell_structure_(telecommunications) |
Hierarchical closeness ( HC ) is a structural centrality measure used in network theory or graph theory . It is extended from closeness centrality to rank how centrally located a node is in a directed network. While the original closeness centrality of a directed network considers the most important node to be that with the least total distance from all other nodes, hierarchical closeness evaluates the most important node as the one which reaches the most nodes by the shortest paths. The hierarchical closeness explicitly includes information about the range of other nodes that can be affected by the given node. In a directed network G ( V , A ) {\displaystyle G(V,A)} where V {\displaystyle V} is the set of nodes and A {\displaystyle A} is the set of interactions, hierarchical closeness of a node i {\displaystyle i} ∈ V {\displaystyle V} called C h c ( i ) {\displaystyle C_{hc}(i)} was proposed by Tran and Kwon [ 1 ] as follows:
$C_{hc}(i) = N_{R}(i) + C_{(clo-i)}(i)$, where $N_{R}(i)$ is the reachability of node $i$ (the number of nodes reachable from $i$ by directed paths) and $C_{(clo-i)}(i)$ is the normalized closeness centrality of $i$, which takes values between 0 and 1.
In the formula, N R ( i ) {\displaystyle N_{R}(i)} represents the number of nodes in V {\displaystyle V} that can be reachable from i {\displaystyle i} . It can also represent the hierarchical position of a node in a directed network. It notes that if N R ( i ) = 0 {\displaystyle N_{R}(i)=0} , then C h c ( i ) = 0 {\displaystyle C_{hc}(i)=0} because C ( c l o − i ) ( i ) {\displaystyle C_{(clo-i)}(i)} is 0 {\displaystyle 0} . In cases where N R ( i ) > 0 {\displaystyle N_{R}(i)>0} , the reachability is a dominant factor because N R ( i ) ≥ 1 {\displaystyle N_{R}(i)\geq 1} but C ( c l o − i ) ( i ) < 1 {\displaystyle C_{(clo-i)}(i)<1} . In other words, the first term indicates the level of the global hierarchy and the second term presents the level of the local centrality.
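As a sketch of how this could be computed in practice, the following uses the NetworkX library, assuming the formula above and a Wasserman–Faust-style normalization for the closeness term (the exact normalization used by Tran and Kwon may differ).

```python
import networkx as nx

def hierarchical_closeness(G):
    """Hierarchical closeness C_hc(i) = N_R(i) + C_clo(i) on a directed
    graph: reachability plus a closeness term strictly below 1."""
    N = G.number_of_nodes()
    hc = {}
    for i in G:
        lengths = nx.single_source_shortest_path_length(G, i)
        reachable = len(lengths) - 1            # N_R(i): exclude i itself
        total = sum(lengths.values())           # sum of distances to reachable nodes
        # Wasserman-Faust normalization for graphs with unreachable nodes
        clo = (reachable / (N - 1)) * (reachable / total) if total > 0 else 0.0
        hc[i] = reachable + clo
    return hc

G = nx.DiGraph([(1, 2), (2, 3), (1, 3), (3, 4)])
print(hierarchical_closeness(G))   # node 4 reaches nothing, so its score is 0
```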
Hierarchical closeness can be used in biological networks to rank genes according to their risk of carrying disease. [1] | https://en.wikipedia.org/wiki/Hierarchical_closeness |
The hierarchical editing language for macromolecules (HELM) is a method of describing complex biological molecules. It is a notation that is machine readable to render the composition and structure of peptides , proteins , oligonucleotides , and related small molecule linkers. [ 1 ]
HELM was developed by a consortium of pharmaceutical companies in what is known as the Pistoia Alliance. Development began in 2008. In 2012 the notation was published openly and for free. [ 2 ]
The HELM open source project can be found on GitHub . [ 2 ]
The need for HELM became obvious as researchers began working on modeling and computational projects involving molecules and engineered biomolecules of this type. There was no language to describe these entities in a manner that accurately captured both their composition and the complex branching and structure common to these entity types. [ 1 ] Protein sequences can describe larger proteins, and chemical language files such as mol files can describe simple peptides. But the complexity of new research biomolecules makes describing large complex molecules difficult with chemical formats, and peptide formats are not sufficiently flexible to describe non-natural amino acids and other chemistries. [ 3 ]
In HELM, molecules are represented at four levels in a hierarchy: the complex polymer, the simple polymer, the monomer, and the atomic level. [ 4 ]
Monomers are assigned short unique identifiers in internal HELM databases and can be represented by the identifier in strings. The approach is similar to that used in Simplified molecular-input line-entry system (SMILES). An exchangeable file format allows sharing of data between companies who have assigned different identifiers to monomers. [ 5 ]
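For illustration, simple macromolecules in classic HELM notation look roughly like the following (the strings here are illustrative; the trailing "$"-separated sections, empty in these simple cases, carry connection, grouping and annotation information):

```
PEPTIDE1{A.G.C}$$$$
RNA1{R(A)P.R(G)P.R(C)}$$$$
```

The first string encodes a tripeptide of alanine, glycine and cysteine; the second a three-nucleotide RNA strand built from ribose (R), base (A, G or C) and phosphate (P) monomers.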
In 2014, ChEMBL announced plans to adopt HELM. [ 6 ] The informatics company BIOVIA developed a modified Molfile format called the Self-Contained Sequence Representation (SCSR). A standard which can incorporate individual attempts to solve the problem and be used universally, avoiding a proliferation of standards, is a goal of HELM. [ 5 ]
An editor tool is needed to visualize and work with biomolecules at the correct level of detail. The editor is needed to "zoom out" to see a large molecule at the amino-acid sequence level, then "zoom in" to the atomic level at a particular site of conjugation or derivatization. [ 7 ]
The HELM Editor and HAbE (HELM Antibody Editor) are two client tools which may in the future be released as web-based applications. [ 8 ]
At a conference in Pistoia, Italy, a group of researchers from Pfizer , AstraZeneca , GlaxoSmithKline , and Novartis formed what came to be known as the Pistoia Alliance. All parties were interested in solving problems for data aggregation, data sharing and analytics for pharmaceutical research. The alliance was incorporated in 2008. The alliance is now composed of informatics experts and researchers from industry, academia and life science service organizations. [ 9 ] | https://en.wikipedia.org/wiki/Hierarchical_editing_language_for_macromolecules |
The hierarchical equations of motion (HEOM) technique, derived by Yoshitaka Tanimura and Ryogo Kubo in 1989, [ 1 ] is a non-perturbative approach developed to study the evolution of a density matrix ρ ( t ) {\displaystyle \rho (t)} of quantum dissipative systems. The method can treat the system-bath interaction non-perturbatively as well as non-Markovian noise correlation times, without the hindrance of the typical assumptions that conventional Redfield (master) equations suffer from, such as the Born, Markovian and rotating-wave approximations. HEOM is applicable even at low temperatures where quantum effects are not negligible.
The hierarchical equation of motion for a system in a harmonic Markovian bath is [ 2 ]
where the superscript × {\displaystyle ^{\times }} denotes a commutator and the temperature-dependent super-operator Θ ^ {\displaystyle {\hat {\Theta }}} is defined below. The parameter γ {\displaystyle \gamma } is the frequency width of the Drude spectral function J ( ω ) {\displaystyle J(\omega )} (see below).
HEOMs are developed to describe the time evolution of the density matrix ρ ( t ) {\displaystyle \rho (t)} for an open quantum system. It is a non-perturbative, non-Markovian approach to propagating a quantum state in time. Motivated by the path integral formalism presented by Feynman and Vernon, Tanimura derived the HEOM from a combination of statistical and quantum dynamical techniques, [ 2 ] [ 3 ] [ 4 ] starting from a two-level spin–boson system Hamiltonian.
By writing the density matrix in path integral notation and making use of Feynman–Vernon influence functional, all the bath coordinates x j {\displaystyle x_{j}} in the interaction terms can be grouped into this influence functional which in some specific cases can be calculated in closed form.
Assuming a Drude spectral function
and a high temperature heat bath, taking the time derivative of the system density matrix, and writing it in hierarchical form yields ( n = 0 , 1 , … {\displaystyle n=0,1,\ldots } )
Here Θ {\displaystyle \Theta } reduces the system excitation and hence is referred to as the relaxation operator:
with the inverse temperature $\beta = 1/k_{B}T$ and the "super-operator" notation $\hat{A}^{\times}\hat{B} \equiv \hat{A}\hat{B} - \hat{B}\hat{A}$ (the commutator) and $\hat{A}^{\circ}\hat{B} \equiv \hat{A}\hat{B} + \hat{B}\hat{A}$ (the anticommutator).
The counter n {\displaystyle n} labels the depth in the hierarchy; for n = 0 {\displaystyle n=0} it gives the physical system density matrix, while the matrices with n ≥ 1 are auxiliary.
As with Kubo's stochastic Liouville equation in hierarchical form, it goes up to infinity in the hierarchy which is a problem numerically. Tanimura and Kubo, however, provide a method by which the hierarchy can be truncated to a finite set of N {\displaystyle N} differential equations. This "terminator" N {\displaystyle N} defines the depth of the hierarchy and is determined by some constraint sensitive to the characteristics of the system, i.e. frequency, amplitude of fluctuations, bath coupling etc. A simple relation to eliminate the ρ ^ n + 1 {\displaystyle {\hat {\rho }}_{n+1}} term is [ 5 ]
The closing line of the hierarchy is thus:
The HEOM approach allows information about the bath noise and system response to be encoded into the equations of motion. It cures the infinite energy problem of Kubo's stochastic Liouville equation by introducing the relaxation operator that ensures a return to equilibrium.
When the open quantum system is represented by $M$ levels and $M$ baths, with each bath response function represented by $K$ exponentials, a hierarchy with $\mathcal{N}$ layers will contain $\binom{\mathcal{N}+MK}{MK} = \frac{(\mathcal{N}+MK)!}{\mathcal{N}!\,(MK)!}$ matrices, each with $M^{2}$ complex-valued (containing both real and imaginary parts) elements. Therefore, the limiting factor in HEOM calculations is the amount of RAM required, since if one copy of each matrix is stored, the total RAM required would be $16\,M^{2}\binom{\mathcal{N}+MK}{MK}$ bytes (assuming double precision, i.e. two 8-byte floating-point numbers per complex element).
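A small helper implementing this count (a direct transcription of the formula above, assuming 16 bytes per complex double-precision element):

```python
from math import comb

def heom_ram_bytes(M, K, N):
    """RAM to hold one copy of every matrix in a HEOM hierarchy with
    N layers, M levels/baths and K exponentials per bath response:
    binom(N + M*K, M*K) matrices of M*M complex doubles."""
    return comb(N + M * K, M * K) * M * M * 16

# e.g. a 2-level system, 2 baths with 2 exponentials each, hierarchy depth 20:
print(heom_ram_bytes(2, 2, 20) / 1e6, "MB")
```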
The HEOM method is implemented in a number of freely available codes. A number of these are at the website of Yoshitaka Tanimura [ 6 ] including a version for GPUs [ 7 ] which used improvements introduced by David Wilkins and Nike Dattani. [ 8 ] The nanoHUB version provides a very flexible implementation. [ 9 ] An open source parallel CPU implementation is available from the Schulten group. [ 10 ] | https://en.wikipedia.org/wiki/Hierarchical_equations_of_motion |
The hierarchical fair-service curve ( HFSC ) is a network scheduling algorithm for a network scheduler proposed by Ion Stoica, Hui Zhang and T. S. Eugene Ng from Carnegie Mellon University at SIGCOMM 1997. [ 1 ] [ 2 ]
In this paper, we propose a scheduling algorithm that to the best of our knowledge is the first that can support simultaneously (a) hierarchical link-sharing service, (b) guaranteed real-time service with provable tight delay bounds, and (c) decoupled delay and bandwidth allocation (which subsumes priority scheduling). This is achieved by defining and incorporating fairness property, which is essential for link-sharing, into Service-Curve based schedulers, which can decouple the allocation of bandwidth and delay. We call the hierarchical version of the resulted algorithm a Hierarchical Fair Service Curve (H-FSC) Algorithm. We analyze the performance of H-FSC and present simulation results to demonstrate the advantages of H-FSC over previously proposed algorithms such as H-PFQ and CBQ. Preliminary experimental results based on a prototype implementation in NetBSD are also presented.
It is based on QoS concepts and CBQ .
An implementation of HFSC is available in all operating systems based on the Linux kernel , [ 3 ] such as e.g. OpenWrt , [ 4 ] and also in DD-WRT , NetBSD 5.0, FreeBSD 8.0 and OpenBSD 4.6.
This network -related software article is a stub . You can help Wikipedia by expanding it . | https://en.wikipedia.org/wiki/Hierarchical_fair-service_curve |
The Hierarchical navigable small world ( HNSW ) algorithm is a graph -based approximate nearest neighbor search technique used in many vector databases . [ 1 ] Nearest neighbor search without an index involves computing the distance from the query to each point in the database, which for large datasets is computationally prohibitive. For high-dimensional data, tree-based exact vector search techniques such as the k-d tree and R-tree do not perform well enough because of the curse of dimensionality . To remedy this, approximate k-nearest neighbor searches have been proposed, such as locality-sensitive hashing (LSH) and product quantization (PQ) that trade performance for accuracy. [ 1 ] The HNSW graph offers an approximate k-nearest neighbor search which scales logarithmically even in high-dimensional data.
It is an extension of the earlier work on navigable small world graphs presented at the Similarity Search and Applications (SISAP) conference in 2012 with an additional hierarchical navigation to find entry points to the main graph faster. [ 2 ] HNSW-based libraries are among the best performers in the approximate nearest neighbors benchmark. [ 3 ] [ 4 ]
A related technique is IVFFlat. [ 5 ]
HNSW is a key method for approximate nearest neighbor search in high-dimensional vector databases, for example in the context of embeddings from neural networks in large language models. Databases that use HNSW as their search index include Apache Lucene (and search engines built on it, such as Elasticsearch and OpenSearch), PostgreSQL with the pgvector extension, Chroma, Milvus, Qdrant, Redis, Vespa, Weaviate, and MongoDB Atlas.
Several of these use either the hnswlib library [ 15 ] provided by the original authors, or the FAISS library.
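A minimal usage sketch with the hnswlib library mentioned above (the parameter values are illustrative, not tuned recommendations):

```python
import numpy as np
import hnswlib

dim, n = 16, 10_000
data = np.random.default_rng(0).random((n, dim)).astype(np.float32)

index = hnswlib.Index(space="l2", dim=dim)   # squared-L2 distance
index.init_index(max_elements=n, ef_construction=200, M=16)
index.add_items(data, np.arange(n))
index.set_ef(50)                             # query-time recall/speed knob
labels, distances = index.knn_query(data[:3], k=5)
print(labels)
```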
This algorithms or data structures -related article is a stub . You can help Wikipedia by expanding it .
This computing article is a stub . You can help Wikipedia by expanding it . | https://en.wikipedia.org/wiki/Hierarchical_navigable_small_world |
In low-power systems, Hierarchical Value Cache refers to the hierarchical arrangement of Value Caches (VCs) in such a fashion that lower-level VCs observe higher hit rates, but undergo more switching activity on VC hits.
The organization is similar to Memory Hierarchy , where lower-level caches enjoy higher hit rates, but longer hit latencies . The architecture for Hierarchical Value Cache is mainly organized along two approaches: Hierarchical Unified Value Cache (HUVC) and Hierarchical Combinational Value Cache (HCVC). [ 1 ]
In this Value Cache architecture, all value caches store full data values, with larger value caches in the lower levels of the hierarchy . This architecture suffers from high area overhead, but reduces the bus switching activity.
The caches in HUVC are managed by an LRU policy, with each VC storing 32-bit values. Incoming data is checked simultaneously against the VC on each level, with the uppermost VC hit getting encoded . Each hit at the i th level of the HUVC incurs i bits of switching activity. By switching any i bits of the 32-bit data bus , we can encode (32!)/((32- i )! i !) distinct values; that is, we could have (32!)/((32- i )! i !) entries. However, it would require complicated logic to map VC indexes to bus values. For easy VC index encoding, we instead partition the data bus into i segments and switch one bit in each segment.
Thus, the HUVC scheme requires n control signals, where n is the depth of the VC hierarchy. The i - th control signal is switched to indicate that the VC at level i hits.
For a 4-level HUVC and a 32-bit data bus, the total VC size is 22.4 KB. This size is too large to be feasible in practice.
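The counting in the HUVC description can be checked directly; the segmented figure below assumes the bus is split into i equal segments, rounding the segment width down when i does not divide 32 (an illustrative simplification).

```python
from math import comb

for i in range(1, 5):
    free = comb(32, i)            # any i of 32 bus lines: 32!/((32-i)! i!)
    segmented = (32 // i) ** i    # one switched bit per segment
    print(f"level {i}: {free} codes (free choice), {segmented} (segmented)")
```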
In HCVC, level i contains 2^( i -1) VCs that store only partial values, instead of full values as in HUVC.
Except the case of first level, all VCs in HCVC store partial data values only. 2^( i -1) segments are generated by partitioning the data values, and each VC stores one data segment. Similar to the HUVC, the incoming data is simultaneously checked with the VC on each level, with the uppermost VC hit getting encoded.
The HCVC scheme requires n control signals, where n is the number of VCs. The i - th control signal is switched to indicate a hit in the i - th VC. The total VC size of the i - th level is 32/(2^( i -1)) words. For a 4-level HCVC with a 32-bit data bus, the total VC size is only 240 bytes.
This computing article is a stub . You can help Wikipedia by expanding it . | https://en.wikipedia.org/wiki/Hierarchical_value_cache |
The hierarchy of beliefs is a construction by Jean-François Mertens and Zamir implementing John Harsanyi 's proposal to model games with incomplete information by supposing that each player is characterized by a privately known type that describes his feasible strategies and payoffs as well as a probability distribution over other players' types. [ 1 ]
Such a probability distribution at the first level can be interpreted as a low-level belief of a player. One level up, the probability over the beliefs of other players is interpreted as beliefs about beliefs. A recursive universal construct is built, in which players have beliefs about their beliefs at different levels; this construct is called the hierarchy of beliefs.
The result is a universal space of types in which, subject to specified consistency conditions, each type corresponds to the infinite hierarchy of his probabilistic beliefs about others' probabilistic beliefs. They also showed that any subspace can be approximated arbitrarily closely by a finite subspace.
Other popular examples of the use of the construction are induction puzzles and Robert Aumann 's construction of common knowledge . [ 2 ] | https://en.wikipedia.org/wiki/Hierarchy_of_beliefs |
Spontaneous symmetry breaking , a vacuum Higgs field , and its associated fundamental particle the Higgs boson are quantum phenomena. A vacuum Higgs field is responsible for the spontaneous breaking of the gauge symmetries of fundamental interactions and provides the Higgs mechanism of generating the mass of elementary particles.
At the same time, classical gauge theory admits comprehensive geometric formulation where gauge fields are represented by connections on principal bundles . In this framework, spontaneous symmetry breaking is characterized as a reduction of the structure group G {\displaystyle G} of a principal bundle P → X {\displaystyle P\to X} to its closed subgroup H {\displaystyle H} . By the well-known theorem, such a reduction takes place if and only if there exists a global section h {\displaystyle h} of the quotient bundle P / H → X {\displaystyle P/H\to X} . This section is treated as a classical Higgs field .
A key point is that there exists a composite bundle P → P / H → X {\displaystyle P\to P/H\to X} where P → P / H {\displaystyle P\to P/H} is a principal bundle with the structure group H {\displaystyle H} . Then matter fields, possessing an exact symmetry group H {\displaystyle H} , in the presence of classical Higgs fields are described by sections of some composite bundle E → P / H → X {\displaystyle E\to P/H\to X} , where E → P / H {\displaystyle E\to P/H} is some associated bundle to P → P / H {\displaystyle P\to P/H} . Herewith, a Lagrangian of these matter fields is gauge invariant only if it factorizes through the vertical covariant differential of some connection on a principal bundle P → P / H {\displaystyle P\to P/H} , but not P → X {\displaystyle P\to X} .
An example of a classical Higgs field is a classical gravitational field identified with a pseudo-Riemannian metric on a world manifold X {\displaystyle X} . In the framework of gauge gravitation theory , it is described as a global section of the quotient bundle F X / O ( 1 , 3 ) → X {\displaystyle FX/O(1,3)\to X} where F X {\displaystyle FX} is a principal bundle of the tangent frames to X {\displaystyle X} with the structure group G L ( 4 , R ) {\displaystyle GL(4,\mathbb {R} )} .
This article about theoretical physics is a stub . You can help Wikipedia by expanding it .
This quantum mechanics -related article is a stub . You can help Wikipedia by expanding it . | https://en.wikipedia.org/wiki/Higgs_field_(classical) |
In the Standard Model of particle physics , the Higgs mechanism is essential to explain the generation mechanism of the property " mass " for gauge bosons . Without the Higgs mechanism, all bosons (one of the two classes of particles, the other being fermions ) would be considered massless , but measurements show that the W + , W − , and Z 0 bosons actually have relatively large masses of around 80 GeV/ c 2 . The Higgs field resolves this conundrum. The simplest description of the mechanism adds to the Standard Model a quantum field (the Higgs field ), which permeates all of space. Below some extremely high temperature, the field causes spontaneous symmetry breaking during interactions. The breaking of symmetry triggers the Higgs mechanism, causing the bosons with which it interacts to have mass. In the Standard Model, the phrase "Higgs mechanism" refers specifically to the generation of masses for the W ± , and Z weak gauge bosons through electroweak symmetry breaking. [ 1 ] The Large Hadron Collider at CERN announced results consistent with the Higgs particle on 14 March 2013, making it extremely likely that the field, or one like it, exists, and explaining how the Higgs mechanism takes place in nature.
The view of the Higgs mechanism as involving spontaneous symmetry breaking of a gauge symmetry is technically incorrect since by Elitzur's theorem gauge symmetries never can be spontaneously broken. Rather, the Fröhlich –Morchio–Strocchi mechanism reformulates the Higgs mechanism in an entirely gauge invariant way, generally leading to the same results. [ 2 ]
The mechanism was proposed in 1962 by Philip Warren Anderson , [ 3 ] following work in the late 1950s on symmetry breaking in superconductivity and a 1960 paper by Yoichiro Nambu that discussed its application within particle physics .
A theory able to finally explain mass generation without "breaking" gauge theory was published almost simultaneously by three independent groups in 1964: by Robert Brout and François Englert ; [ 4 ] by Peter Higgs ; [ 5 ] and by Gerald Guralnik , C. R. Hagen , and Tom Kibble . [ 6 ] [ 7 ] [ 8 ] The Higgs mechanism is therefore also called the Brout–Englert–Higgs mechanism , or Englert–Brout–Higgs–Guralnik–Hagen–Kibble mechanism , [ 9 ] Anderson–Higgs mechanism , [ 10 ] Anderson–Higgs–Kibble mechanism , [ 11 ] Higgs–Kibble mechanism by Abdus Salam [ 12 ] and ABEGHHK'tH mechanism (for Anderson, Brout, Englert, Guralnik, Hagen, Higgs, Kibble, and 't Hooft ) by Peter Higgs. [ 12 ] The Higgs mechanism in electrodynamics was also discovered independently by Eberly and Reiss in reverse as the "gauge" Dirac field mass gain due to the artificially displaced electromagnetic field as a Higgs field. [ 13 ]
On 8 October 2013, following the discovery at CERN's Large Hadron Collider of a new particle that appeared to be the long-sought Higgs boson predicted by the theory, it was announced that Peter Higgs and François Englert had been awarded the 2013 Nobel Prize in Physics . [ a ] [ 14 ]
The Higgs mechanism was incorporated into modern particle physics by Steven Weinberg and Abdus Salam , and is an essential part of the Standard Model .
In the Standard Model, at temperatures high enough that electroweak symmetry is unbroken, all elementary particles are massless. At a critical temperature, the Higgs field develops a vacuum expectation value ; some theories suggest the symmetry is spontaneously broken by tachyon condensation , and the W and Z bosons acquire masses (also called "electroweak symmetry breaking", or EWSB ). In the history of the universe, this is believed to have happened about a picosecond (10 −12 s) after the hot big bang, when the universe was at a temperature 159.5 ± 1.5 GeV / k B . [ 15 ]
Fermions, such as the leptons and quarks in the Standard Model, can also acquire mass as a result of their interaction with the Higgs field, but not in the same way as the gauge bosons.
In the Standard Model, the Higgs field is an SU(2) doublet (i.e. the standard representation with two complex components called isospin), which is a scalar under Lorentz transformations. Its electric charge is zero; its weak isospin is 1 / 2 and the third component of weak isospin is − 1 / 2 ; and its weak hypercharge (the charge for the U(1) gauge group defined up to an arbitrary multiplicative constant) is 1. Under U(1) rotations, it is multiplied by a phase, which thus mixes the real and imaginary parts of the complex spinor into each other, combining to the standard two-component complex representation of the group U(2).
The Higgs field, through the interactions specified (summarized, represented, or even simulated) by its potential, induces spontaneous breaking of three out of the four generators ("directions") of the gauge group U(2). This is often written as SU(2) L × U(1) Y , (which is strictly speaking only the same on the level of infinitesimal symmetries) because the diagonal phase factor also acts on other fields – quarks in particular. Three out of its four components would ordinarily resolve as Goldstone bosons , if they were not coupled to gauge fields.
However, after symmetry breaking, these three of the four degrees of freedom in the Higgs field mix with the three W and Z bosons ( W + , W − and Z 0 ), and are only observable as components of these weak bosons , which are made massive by their inclusion; only the single remaining degree of freedom becomes a new scalar particle: the Higgs boson . The components that do not mix with Goldstone bosons form a massless photon.
The gauge group of the electroweak part of the standard model is SU(2) L × U(1) Y . The group SU(2) is the group of all 2-by-2 unitary matrices with unit determinant; all the orthonormal changes of coordinates in a complex two dimensional vector space.
Rotating the coordinates so that the second basis vector points in the direction of the Higgs boson makes the vacuum expectation value of H the spinor (0, v ) . The generators for rotations about the x-, y-, and z-axes are given by half the Pauli matrices σ x , σ y , and σ z , so that a rotation of angle θ about the z-axis takes the vacuum to $(0,\, v e^{-i\theta/2})$.
While the T x and T y generators mix up the top and bottom components of the spinor , the T z rotations only multiply each by opposite phases. This phase can be undone by a U(1) rotation of angle 1 / 2 θ . Consequently, under both an SU(2) T z -rotation and a U(1) rotation by an amount 1 / 2 θ , the vacuum is invariant.
This combination of generators, $Q = T_{3} + \tfrac{1}{2}\, Y_{\mathsf{W}}$,
defines the unbroken part of the gauge group, where Q is the electric charge, T 3 is the generator of rotations around the 3-axis in the adjoint representation of SU(2) and Y W is the weak hypercharge generator of the U(1). This combination of generators (a 3 rotation in the SU(2) and a simultaneous U(1) rotation by half the angle) preserves the vacuum, and defines the unbroken gauge group in the standard model, namely the electric charge group. The part of the gauge field in this direction stays massless, and amounts to the physical photon.
By contrast, the broken trace-orthogonal charge T 3 − 1 2 Y W = 2 T 3 − Q {\displaystyle \ T_{3}-{\tfrac {\ 1\ }{2}}\ Y_{\mathsf {W}}=2\ T_{3}-Q\ } couples to the massive Z 0 boson.
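These algebraic statements can be checked numerically. The following minimal sketch (setting the vacuum expectation value to v = 1 and representing the hypercharge of the Higgs doublet, Y_W = 1, as the identity matrix) verifies that Q annihilates the vacuum, that the broken combination does not, and that a T_3 rotation of angle θ is undone by a U(1) phase rotation of θ/2.

```python
import numpy as np

T3 = np.diag([0.5, -0.5])          # half the Pauli matrix sigma_z
Y = np.eye(2)                      # hypercharge Y_W = 1 on the Higgs doublet
vacuum = np.array([0.0, 1.0])      # Higgs VEV (0, v) with v = 1

Q = T3 + Y / 2                     # electric charge generator
print(Q @ vacuum)                  # [0. 0.]: the vacuum is invariant

broken = T3 - Y / 2                # combination coupling to the massive Z boson
print(broken @ vacuum)             # [ 0. -1.]: the vacuum is not annihilated

theta = 0.7
Rz = np.diag(np.exp(1j * theta * np.array([0.5, -0.5])))  # exp(i theta T3)
print(np.allclose(np.exp(1j * theta / 2) * (Rz @ vacuum), vacuum))  # True
```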
In spite of the introduction of spontaneous symmetry breaking, the mass terms preclude chiral gauge invariance. For these fields, the mass terms should always be replaced by a gauge-invariant "Higgs" mechanism. One possibility is some kind of Yukawa coupling (see below) between the fermion field ψ and the Higgs field φ , with unknown couplings G ψ , which after symmetry breaking (more precisely: after expansion of the Lagrange density around a suitable ground state) again results in the original mass terms, which are now, however (i.e., by introduction of the Higgs field) written in a gauge-invariant way. The Lagrange density for the Yukawa interaction of a fermion field ψ and the Higgs field φ is $\mathcal{L}_{\text{Fermion}}(\phi, A, \psi) = \overline{\psi}\gamma^{\mu}D_{\mu}\psi + G_{\psi}\,\overline{\psi}\phi\psi$
where again the gauge field A only enters via the gauge covariant derivative operator D μ (i.e., it is only indirectly visible). The quantities γ μ are the Dirac matrices , and G ψ is the already-mentioned Yukawa coupling parameter for ψ . Now the mass-generation follows the same principle as above, namely from the existence of a finite expectation value | ⟨ ϕ ⟩ | {\displaystyle \vert \langle \phi \rangle \vert } . Again, this is crucial for the existence of the property mass .
Spontaneous symmetry breaking offered a framework to introduce bosons into relativistic quantum field theories. However, according to Goldstone's theorem , these bosons should be massless. [ 16 ] The only observed particles which could be approximately interpreted as Goldstone bosons were the pions , which Yoichiro Nambu related to chiral symmetry breaking.
A similar problem arises with Yang–Mills theory (also known as non-abelian gauge theory ), which predicts massless spin -1 gauge bosons . Massless weakly-interacting gauge bosons lead to long-range forces, which are only observed for electromagnetism and the corresponding massless photon . Gauge theories of the weak force needed a way to describe massive gauge bosons in order to be consistent.
That breaking gauge symmetries did not lead to massless particles was observed in 1961 by Julian Schwinger , [ 17 ] but he did not demonstrate that massive particles would result. This was done in Philip Warren Anderson 's 1962 paper [ 3 ] but only in non-relativistic field theory; it also discussed consequences for particle physics but did not work out an explicit relativistic model. The relativistic model was developed in 1964 by three independent groups: Robert Brout and François Englert; Peter Higgs; and Gerald Guralnik, C. R. Hagen, and Tom Kibble.
Slightly later, in 1965, but independently from the other publications [ 18 ] [ 19 ] [ 20 ] [ 21 ] [ 22 ] [ 23 ] the mechanism was also proposed by Alexander Migdal and Alexander Polyakov , [ 24 ] at that time Soviet undergraduate students. However, their paper was delayed by the editorial office of JETP , and was published late, in 1966.
The mechanism is closely analogous to phenomena previously discovered by Yoichiro Nambu involving the "vacuum structure" of quantum fields in superconductivity . [ 25 ] A similar but distinct effect (involving an affine realization of what is now recognized as the Higgs field), known as the Stueckelberg mechanism , had previously been studied by Ernst Stueckelberg .
These physicists discovered that when a gauge theory is combined with an additional field that spontaneously breaks the symmetry group, the gauge bosons can consistently acquire a nonzero mass. In spite of the large values involved (see below) this permits a gauge theory description of the weak force, which was independently developed by Steven Weinberg and Abdus Salam in 1967. Higgs's original article presenting the model was rejected by Physics Letters . When revising the article before resubmitting it to Physical Review Letters , he added a sentence at the end, [ 26 ] mentioning that it implies the existence of one or more new, massive scalar bosons, which do not form complete representations of the symmetry group; these are the Higgs bosons.
The three papers by Brout and Englert; Higgs; and Guralnik, Hagen, and Kibble were each recognized as "milestone letters" by Physical Review Letters in 2008. [ 27 ] While each of these seminal papers took similar approaches, the contributions and differences among the 1964 PRL symmetry breaking papers are noteworthy. All six physicists were jointly awarded the 2010 J. J. Sakurai Prize for Theoretical Particle Physics for this work. [ 28 ]
Benjamin W. Lee is often credited with first naming the "Higgs-like" mechanism, although there is debate around when this first occurred. [ 29 ] [ 30 ] [ 31 ] One of the first times the Higgs name appeared in print was in 1972 when Gerardus 't Hooft and Martinus J. G. Veltman referred to it as the "Higgs–Kibble mechanism" in their Nobel winning paper. [ 32 ] [ 33 ]
The proposed Higgs mechanism arose as a result of theories proposed to explain observations in superconductivity . A superconductor does not allow penetration by external magnetic fields (the Meissner effect ). This strange observation implies that the electromagnetic field somehow becomes short-ranged during this phenomenon. Successful theories arose to explain this during the 1950s, first for fermions ( Ginzburg–Landau theory , 1950), and then for bosons ( BCS theory , 1957).
In these theories, superconductivity is interpreted as arising from a charged condensate . Initially, the condensate value does not have any preferred direction. This implies that it is scalar, but its phase is capable of defining a gauge in gauge based field theories. To do this, the field must be charged. A charged scalar field must also be complex (or, described another way, it contains at least two components and a symmetry capable of rotating the components into each other). In naïve gauge theory, a gauge transformation of a condensate usually rotates the phase. However, in these circumstances, it instead fixes a preferred choice of phase. It turns out that fixing the choice of gauge so that the condensate has the same phase everywhere also causes the electromagnetic field to gain an extra term. This extra term causes the electromagnetic field to become short range.
Goldstone's theorem also plays a role in such theories. The connection is, technically, that when a condensate breaks a symmetry, the state reached by acting with a symmetry generator on the condensate has the same energy as before. This means that some kinds of oscillation do not involve a change of energy. Oscillations with unchanged energy imply that the excitations (particles) associated with the oscillation are massless.
Once attention was drawn to this theory within particle physics, the parallels were clear. A change of the usually long-range electromagnetic field to become short-ranged, within a gauge-invariant theory, was exactly the effect sought for the bosons that mediate the weak interaction (because a long-range force has massless gauge bosons, and a short-ranged force implies massive gauge bosons, suggesting that a result of this interaction is that the field's gauge bosons acquire mass, or a similar and equivalent effect). The features of a field required to do this were also quite well defined: it would have to be a charged scalar field, with at least two components, complex in order to support a symmetry able to rotate these into each other.
The Higgs mechanism occurs whenever a charged field has a vacuum expectation value. In the non-relativistic context this is a superconductor , more formally known as the Landau model of a charged Bose–Einstein condensate . In the relativistic case, the condensate is a scalar field that is relativistically invariant.
The Higgs mechanism is a type of superconductivity that occurs in the vacuum. It occurs when all of space is filled with a sea of particles which are charged, or, in field language, when a charged field has a nonzero vacuum expectation value. Interaction with the quantum fluid filling the space prevents certain forces from propagating over long distances (as it does inside a superconductor; e.g., in the Ginzburg–Landau theory ).
A superconductor expels all magnetic fields from its interior, a phenomenon known as the Meissner effect . This was mysterious for a long time, because it implies that electromagnetic forces somehow become short-range inside the superconductor. Contrast this with the behavior of an ordinary metal. In a metal, the conductivity shields electric fields by rearranging charges on the surface until the total field cancels in the interior.
But magnetic fields can penetrate to any distance, and if a magnetic monopole (an isolated magnetic pole) is surrounded by a metal the field can escape without collimating into a string. In a superconductor, however, electric charges move with no dissipation, and this allows for permanent surface currents, not only surface charges. When magnetic fields are introduced at the boundary of a superconductor, they produce surface currents that exactly neutralize them.
The Meissner effect arises due to currents in a thin surface layer, whose thickness can be calculated from the simple model of Ginzburg–Landau theory, which treats superconductivity as a charged Bose–Einstein condensate.
Suppose that a superconductor contains bosons with charge q . The wavefunction of the bosons can be described by introducing a quantum field , ψ , {\displaystyle \ \psi \ ,} which obeys the Schrödinger equation as a field equation . In units where the reduced Planck constant , ħ , is set to 1:
{\displaystyle i{\frac {\partial }{\partial t}}\psi ={\frac {\left(-i\nabla -qA\right)^{2}}{2m}}\psi }
The operator ψ ( x ) {\displaystyle \psi (x)} annihilates a boson at the point x , while its adjoint ψ † {\displaystyle \psi ^{\dagger }} creates a new boson at the same point. The wavefunction of the Bose–Einstein condensate is then the expectation value ⟨ ψ ⟩ {\displaystyle \langle \psi \rangle } of ψ ( x ) {\displaystyle \psi (x)} , which is a classical function that obeys the same equation. The interpretation of the expectation value is that it is the phase that one should give to a newly created boson so that it will coherently superpose with all the other bosons already in the condensate.
When there is a charged condensate, the electromagnetic interactions are screened. To see this, consider the effect of a gauge transformation on the field. A gauge transformation rotates the phase of the condensate by an amount which changes from point to point, and shifts the vector potential by a gradient:
{\displaystyle \psi \rightarrow e^{iq\phi (x)}\psi \qquad A\rightarrow A+\nabla \phi }
When there is no condensate, this transformation only changes the definition of the phase of ψ {\displaystyle \ \psi \ } at every point. But when there is a condensate, the phase of the condensate defines a preferred choice of phase.
The condensate wave function can be written as
{\displaystyle \psi (x)=\rho (x)\,e^{i\theta (x)}\ ,}
where ρ is a real amplitude, which determines the local density of the condensate. If the condensate were neutral, the flow would be along the gradients of θ , the direction in which the phase of the Schrödinger field changes. If the phase θ changes slowly, the flow is slow and has very little energy. But now θ can be made equal to zero just by making a gauge transformation to rotate the phase of the field.
The energy of slow changes of phase can be calculated from the Schrödinger kinetic energy,
{\displaystyle E={\frac {1}{2m}}\left|\left(\nabla -iqA\right)\psi \right|^{2}\ ,}
and taking the density of the condensate ρ to be constant,
{\displaystyle E={\frac {\rho ^{2}}{2m}}\left(\nabla \theta -qA\right)^{2}~.}
Fixing the choice of gauge so that the condensate has the same phase everywhere, the electromagnetic field energy has an extra term,
{\displaystyle E={\frac {q^{2}\rho ^{2}}{2m}}A^{2}~.}
When this term is present, electromagnetic interactions become short-ranged. Every field mode, no matter how long the wavelength, oscillates with a nonzero frequency. The lowest frequency can be read off from the energy of a long-wavelength A mode,
{\displaystyle E={\frac {1}{2}}{\dot {A}}^{2}+{\frac {q^{2}\rho ^{2}}{2m}}A^{2}~.}
This is a harmonic oscillator with frequency
{\displaystyle \omega ={\sqrt {{\frac {q^{2}\rho ^{2}}{m}}\,}}~.}
The quantity | ψ ( x ) | 2 = ρ 2 {\displaystyle \ \left|\psi (x)\right|^{2}=\rho ^{2}\ } is the density of the condensate of superconducting particles.
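The derivation above fixes the scale of the screening. As a purely illustrative numerical sketch (added here, not part of the original derivation), the effective photon mass can be re-expressed as the London penetration depth λ L = √(m/(μ 0 n q 2 )); the Cooper-pair density below is an assumed, order-of-magnitude input rather than a measured constant.

    import math

    MU_0 = 4e-7 * math.pi   # vacuum permeability, H/m
    E_CHARGE = 1.602e-19    # elementary charge, C
    M_E = 9.109e-31         # electron mass, kg

    def london_penetration_depth(pair_density):
        """Depth (m) over which a magnetic field decays inside a superconductor.

        pair_density: Cooper pairs per m^3 (assumed input). Each pair carries
        charge q = 2e and mass m = 2*m_e, matching the condensate charge q
        discussed in the text.
        """
        q, m = 2 * E_CHARGE, 2 * M_E
        return math.sqrt(m / (MU_0 * pair_density * q ** 2))

    # ~0.5e28 pairs/m^3 is a typical metallic order of magnitude (assumption)
    print(f"lambda_L ~ {london_penetration_depth(0.5e28):.1e} m")   # ~5e-8 m

The tens-of-nanometres result is the thin surface layer mentioned above.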
In an actual superconductor, the charged particles are electrons, which are fermions not bosons. So in order to have superconductivity, the electrons need to somehow bind into Cooper pairs . The charge of the condensate q is therefore twice the electron charge −e . The pairing in a normal superconductor is due to lattice vibrations, and is in fact very weak; this means that the pairs are very loosely bound. The description of a Bose–Einstein condensate of loosely bound pairs is actually more difficult than the description of a condensate of elementary particles, and was only worked out in 1957 by John Bardeen , Leon Cooper , and John Robert Schrieffer in the famous BCS theory .
Gauge invariance means that certain transformations of the gauge field do not change the energy at all. If an arbitrary gradient is added to A , the energy of the field is exactly the same. This makes it difficult to add a mass term, because a mass term tends to push the field toward the value zero. But the zero value of the vector potential is not a gauge invariant idea. What is zero in one gauge is nonzero in another.
So in order to give mass to a gauge theory, the gauge invariance must be broken by a condensate. The condensate will then define a preferred phase, and the phase of the condensate will define the zero value of the field in a gauge-invariant way. The gauge-invariant definition is that a gauge field is zero when the phase change along any path from parallel transport is equal to the phase difference in the condensate wavefunction.
The condensate value is described by a quantum field with an expectation value, just as in the Ginzburg–Landau model .
In order for the phase of the vacuum to define a gauge, the field must have a phase (that is, it must be charged). In order for a scalar field Φ to have a phase, it must be complex, or (equivalently) it should contain two fields with a symmetry which rotates them into each other. The vector potential changes the phase of the quanta produced by the field when they move from point to point. In terms of fields, it defines how much to rotate the real and imaginary parts of the fields into each other when comparing field values at nearby points.
The only renormalizable model where a complex scalar field Φ acquires a nonzero value is the 'Mexican-hat' model, where the field energy has a minimum away from zero. The action for this model is
{\displaystyle S(\phi )=\int \left({\frac {1}{2}}\partial ^{\mu }{\bar {\phi }}\,\partial _{\mu }\phi -\lambda \left(|\phi |^{2}-\Phi ^{2}\right)^{2}\right)d^{4}x\ ,}
which results in the Hamiltonian
{\displaystyle H(\phi )={\frac {1}{2}}|{\dot {\phi }}|^{2}+{\frac {1}{2}}|\nabla \phi |^{2}+\lambda \left(|\phi |^{2}-\Phi ^{2}\right)^{2}~.}
The first term is the kinetic energy of the field. The second term is the extra potential energy when the field varies from point to point. The third term is the potential energy when the field has any given magnitude.
This potential energy, the Higgs potential , V ( z , Φ ) = λ ( | z | 2 − Φ 2 ) 2 , {\displaystyle ~V\left(z,\Phi \right)=\lambda \left(\left|z\right|^{2}-\Phi ^{2}\right)^{2}\ ,} [ 34 ] has a graph which looks like a Mexican hat , which gives the model its name. In particular, the minimum energy value is not at z = 0 , but on the circle of points where the magnitude of z is Φ .
When the field Φ( x ) is not coupled to electromagnetism, the Mexican-hat potential has flat directions. Starting in any one of the circle of vacua and changing the phase of the field from point to point costs very little energy. Mathematically, if
{\displaystyle \phi (x)=\Phi e^{i\theta (x)}}
with a constant prefactor, then the action for the field θ ( x ) , i.e., the "phase" of the Higgs field Φ( x ) , has only derivative terms. This is not a surprise: Adding a constant to θ ( x ) is a symmetry of the original theory, so different values of θ ( x ) cannot have different energies. This is an example of configuring the model to conform to Goldstone's theorem : Spontaneously broken continuous symmetries (normally) produce massless excitations.
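A small numerical sketch (an illustration added here, with arbitrary assumed values for λ and Φ) makes the flat direction of the potential V(z) = λ(|z| 2 − Φ 2 ) 2 defined above explicit: moving around the circle of vacua costs no energy, while radial displacements do.

    import numpy as np

    lam, Phi = 0.5, 1.0                         # assumed illustrative values

    def V(z):
        return lam * (abs(z) ** 2 - Phi ** 2) ** 2

    # Changing the phase along the circle of vacua is free (Goldstone direction)
    around_circle = [V(Phi * np.exp(1j * t)) for t in np.linspace(0, 2 * np.pi, 8)]
    # Moving radially off the circle costs energy (the massive direction)
    radial = [V(Phi + dr) for dr in (-0.1, 0.0, 0.1)]

    print(np.round(around_circle, 12))          # all zero
    print(np.round(radial, 6))                  # nonzero away from the minimum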
The Abelian Higgs model is the Mexican-hat model coupled to electromagnetism :
{\displaystyle S(\phi ,A)=\int \left(-{\frac {1}{4}}F^{\mu \nu }F_{\mu \nu }+{\frac {1}{2}}\left|\left(\partial -iqA\right)\phi \right|^{2}-\lambda \left(|\phi |^{2}-\Phi ^{2}\right)^{2}\right)d^{4}x~.}
The Abelian Higgs model action can also be written
{\displaystyle S(\phi ,A)=\int \left(-{\frac {1}{4}}F^{\mu \nu }F_{\mu \nu }+{\frac {1}{2}}\left|D_{\mu }\phi \right|^{2}-V\left(\left|\phi \right|\right)\right)d^{4}x\ ,}
where the potential is
{\displaystyle V\left(\left|\phi \right|\right)=\lambda \left(|\phi |^{2}-\Phi ^{2}\right)^{2}\ ,}
and the covariant derivative D μ {\displaystyle D_{\mu }} is
{\displaystyle D_{\mu }=\partial _{\mu }-iqA_{\mu }~.}
For completeness, the tensor F μ ν = ∂ μ A ν − ∂ ν A μ {\displaystyle \ F_{\mu \nu }=\partial _{\mu }A_{\nu }-\partial _{\nu }A_{\mu }\ } is the Maxwell tensor, also known as the electromagnetic field strength, U ( 1 ) {\displaystyle \ \mathrm {U} (1)\ } field strength or more geometrically the curvature of the U ( 1 ) {\displaystyle \ \mathrm {U} (1)\ } connection A μ {\displaystyle A_{\mu }} . The four-vector gauge field A μ {\displaystyle \ A_{\mu }\ } is also known as the four-potential.
This makes the gauge-invariance of the action (and therefore Lagrangian and resulting equations of motion) manifest. The potential makes the non-zero vacuum expectation value evident.
The classical vacuum is again at the minimum of the potential, where the magnitude of the complex field φ is equal to Φ . But now the phase of the field is arbitrary, because gauge transformations change it. This means that the field θ ( x ) {\displaystyle \ \theta (x)\ } can be set to zero by a gauge transformation, and does not represent any actual degrees of freedom at all.
Furthermore, choosing a gauge where the phase of the vacuum is fixed, the potential energy for fluctuations of the vector field is nonzero. So in the Abelian Higgs model, the gauge field acquires a mass. To calculate the magnitude of the mass, consider a constant value of the vector potential A in the x -direction in the gauge where the condensate has constant phase. This is the same as a sinusoidally varying condensate in the gauge where the vector potential is zero. In the gauge where A is zero, the potential energy density in the condensate is the scalar gradient energy:
{\displaystyle E={\frac {1}{2}}\left|\partial _{x}\left(\Phi e^{iqAx}\right)\right|^{2}={\frac {1}{2}}q^{2}\Phi ^{2}A^{2}~.}
This energy is the same as a mass term 1 / 2 m 2 A 2 where m = q Φ .
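The identification m = qΦ can be checked symbolically. The sketch below (added for illustration, assuming sympy is available and treating all symbols as real and positive) reproduces the gradient-energy computation above.

    import sympy as sp

    x, q, Phi, A = sp.symbols("x q Phi A", positive=True)
    phi = Phi * sp.exp(sp.I * q * A * x)   # sinusoidally varying condensate, A constant

    # Scalar gradient energy density, (1/2)|d(phi)/dx|^2
    energy = sp.simplify(sp.Abs(sp.diff(phi, x)) ** 2 / 2)
    print(energy)   # A**2*Phi**2*q**2/2, i.e. (1/2) m^2 A^2 with m = q*Phi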
Start from the Lagrangian
with
Guided by the minimum of the potential V {\displaystyle V} being at | ϕ | = v , {\displaystyle \ |\phi |=v\ ,} we write the complex scalar field ϕ ( x ) {\displaystyle \ \phi (x)\ } in terms of real scalar fields ξ ( x ) {\displaystyle \ \xi (x)\ } and η ( x ) {\displaystyle \ \eta (x)\ } as follows:
The field ξ ( x ) {\displaystyle \ \xi (x)\ } is known as the Nambu-Goldstone field, and the field η ( x ) {\displaystyle \ \eta (x)\ } is known as the Higgs boson.
Upon rewriting the Lagrangian in terms of ξ {\displaystyle \ \xi \ } and η {\displaystyle \ \eta \ } one finds
At this point the only term which contains ξ {\displaystyle \ \xi \ } is the term containing ∂ μ ξ + e A μ . {\displaystyle \ \partial _{\mu }\xi +eA_{\mu }~.} But the dependence on ξ {\displaystyle \ \xi \ } can be gauged away by the gauge transformation which sends A μ + 1 e ∂ μ ξ ↦ A μ . {\displaystyle \ A_{\mu }+{\frac {1}{\ e\ }}\partial _{\mu }\xi \mapsto A_{\mu }~.} This is known as the unitary or unitarity gauge. In differential-geometric language, as is spelled out in the following box, the condensate ξ ( x ) {\displaystyle \xi (x)} has defined a canonical trivialization.
In unitary gauge, the Lagrangian can be organised into parts which depend on the gauge field and Higgs field
or into quadratic and interaction pieces
By focusing on the quadratic piece, we see that the gauge field A μ {\displaystyle \ A_{\mu }\ } has acquired a Proca mass , while the Higgs field η {\displaystyle \ \eta \ } has a mass of 2 λ v 2 . {\displaystyle \ {\sqrt {2\lambda v^{2}~}}~.}
This method largely carries over to the case where the U ( 1 ) {\displaystyle \ \mathrm {U} (1)\ } gauge symmetry is promoted to a non-abelian gauge group G . {\displaystyle \ G~.} The Nambu-Goldstone field ξ {\displaystyle \ \xi \ } is then promoted to a g {\displaystyle \ {\mathfrak {g}}} -valued field, where g {\displaystyle \ {\mathfrak {g}}\ } is the Lie algebra of G . {\displaystyle G~.}
A more mathematical or specifically differential-geometric viewpoint is that the field θ ( x ) {\displaystyle \ \theta (x)\ } picks out a canonical trivialization which breaks the right-invariance of the principal bundle that the gauge theory lives on.
This is realized most easily when the theory is based on flat spacetime R 1 , 3 , {\displaystyle \ \mathbb {R} ^{1,3}\ ,} as then the base spacetime is contractible, and hence any fibre bundle is trivial. In gauge theory one considers principal bundles with the spacetime as its base manifold, where the fibre is a torsor of the gauge group G . {\displaystyle \ G~.} Crucially, since the principal bundle must be trivial, there exists a global trivialization. In physics, one generally works under an implicit global trivialization and rarely in the more abstract principal bundle.
However, there are many choices of global trivialization, which differ from one another by a transition function, which can be written as a function
{\displaystyle g:\mathbb {R} ^{1,3}\rightarrow G~.}
From the physical viewpoint, this is known as a gauge transformation. There is a corresponding (choice of) transition function or gauge transformation at the algebra level,
{\displaystyle \alpha :\mathbb {R} ^{1,3}\rightarrow {\mathfrak {g}}\ ,}
such that exp ( α ( x ) ) = g ( x ) , {\displaystyle \ \exp(\alpha (x))=g(x)\ ,} where exp {\displaystyle ~\exp ~} is the exponential map for Lie algebras. Then we can view the phase function θ ( x ) {\displaystyle \theta (x)} as a transition function at the algebra level. It picks out a canonical global trivialization which 'differs from' the initial implicit global trivialization by θ ( x ) . {\displaystyle \ \theta (x)~.}
This breaks the (right-)invariance of the principal bundle under the action of G , {\displaystyle \ G\ ,} as this action does not preserve the canonical trivialization. Mathematically, this is the symmetry which is broken during spontaneous symmetry breaking. For the Abelian Higgs mechanism the relevant gauge group is U ( 1 ) . {\displaystyle \ \mathrm {U} (1)~.}
The Non-Abelian Higgs model has the following action
{\displaystyle S(\phi ,\mathbf {A} )=\int \left(-{\frac {1}{4}}\operatorname {tr} \left(F^{\mu \nu }F_{\mu \nu }\right)+\left|D\phi \right|^{2}-V\left(\left|\phi \right|\right)\right)d^{4}x\ ,}
where now the non-Abelian field A is contained in the covariant derivative D and in the tensor components F μ ν {\displaystyle F^{\mu \nu }} and F μ ν {\displaystyle F_{\mu \nu }} (the relation between A and those components is well-known from the Yang–Mills theory ).
It is exactly analogous to the Abelian Higgs model. Now the field ϕ {\displaystyle \phi } is in a representation of the gauge group, and the gauge covariant derivative is defined by the rate of change of the field minus the rate of change from parallel transport using the gauge field A as a connection.
Again, the expectation value of ϕ {\displaystyle \phi } defines a preferred gauge where the vacuum is constant, and fixing this gauge, fluctuations in the gauge field A come with a nonzero energy cost.
Depending on the representation of the scalar field, not every gauge field acquires a mass. A simple example is in the renormalizable version of an early electroweak model due to Julian Schwinger . In this model, the gauge group is SO(3) (or SU(2); there are no spinor representations in the model), and the gauge invariance is broken down to U(1) or SO(2) at long distances. To make a consistent renormalizable version using the Higgs mechanism, introduce a scalar field ϕ a {\displaystyle \phi ^{a}} which transforms as a vector (a triplet) of SO(3). If this field has a vacuum expectation value, it points in some direction in field space. Without loss of generality, one can choose the z -axis in field space to be the direction that ϕ {\displaystyle \phi } is pointing, and then the vacuum expectation value of ϕ {\displaystyle \phi } is (0, 0, Ã ) , where Ã is a constant with dimensions of mass ( c = ℏ = 1 {\displaystyle c=\hbar =1} ).
Rotations around the z -axis form a U(1) subgroup of SO(3) which preserves the vacuum expectation value of ϕ {\displaystyle \phi } , and this is the unbroken gauge group. Rotations around the x and y -axis do not preserve the vacuum, and the components of the SO(3) gauge field which generate these rotations become massive vector mesons. There are two massive W mesons in the Schwinger model, with a mass set by the mass scale à , and one massless U(1) gauge boson, similar to the photon.
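The counting of massive and massless bosons can be verified mechanically. In the sketch below (illustrative only, with the vacuum magnitude and gauge coupling set to 1 by assumption), the gauge-boson mass-squared matrix for a triplet vacuum value is proportional to M 2 ab = (T a v)·(T b v), a standard result for adjoint breaking; the single zero eigenvalue is the unbroken U(1) boson and the two unit eigenvalues are the massive W mesons.

    import numpy as np

    # SO(3) generators acting on a real triplet: (T_a)_{bc} = -epsilon_{abc}
    eps = np.zeros((3, 3, 3))
    for a, b, c, s in [(0, 1, 2, 1), (1, 2, 0, 1), (2, 0, 1, 1),
                       (0, 2, 1, -1), (2, 1, 0, -1), (1, 0, 2, -1)]:
        eps[a, b, c] = s
    T = [-eps[a] for a in range(3)]

    v = np.array([0.0, 0.0, 1.0])   # vev along the z-axis, magnitude 1 (assumption)

    # Gauge-boson mass-squared matrix, up to the squared gauge coupling
    M2 = np.array([[np.dot(T[a] @ v, T[b] @ v) for b in range(3)]
                   for a in range(3)])
    print(np.linalg.eigvalsh(M2))   # [0. 1. 1.]: one massless, two massive bosons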
The Schwinger model predicts magnetic monopoles at the electroweak unification scale, and does not predict the Z boson. It does not break the electroweak symmetry in the way this occurs in nature. But historically, a model similar to this (but not using the Higgs mechanism) was the first in which the weak force and the electromagnetic force were unified.
Ernst Stueckelberg discovered [ 35 ] a version of the Higgs mechanism by analyzing the theory of quantum electrodynamics with a massive photon. Effectively, Stueckelberg's model is a limit of the regular Mexican hat Abelian Higgs model, where the vacuum expectation value H goes to infinity and the charge of the Higgs field goes to zero in such a way that their product stays fixed. The mass of the Higgs boson is proportional to H , so the Higgs boson becomes infinitely massive and decouples, so is not present in the discussion. The vector meson mass, however, is equal to the product eH , and stays finite.
The interpretation is that when a U(1) gauge field does not require quantized charges, it is possible to keep only the angular part of the Higgs oscillations, and discard the radial part. The angular part of the Higgs field θ has the following gauge transformation law:
{\displaystyle \theta \rightarrow \theta +eH\alpha \qquad A\rightarrow A+\partial \alpha ~.}
The gauge covariant derivative for the angle (which is actually gauge invariant) is:
{\displaystyle D_{\mu }\theta =\partial _{\mu }\theta -eHA_{\mu }~.}
In order to keep θ fluctuations finite and nonzero in this limit, θ should be rescaled by H , so that its kinetic term in the action stays normalized. The action for the theta field is read off from the Mexican hat action by substituting ϕ = H e i θ / H {\displaystyle \phi =He^{i\theta /H}} :
{\displaystyle S=\int {\frac {1}{2}}\left(\partial \theta -eHA\right)^{2}~.}
Here eH is the gauge boson mass. By making a gauge transformation to set θ = 0 , the gauge freedom in the action is eliminated, and the action becomes that of a massive vector field:
{\displaystyle S=\int \left(-{\frac {1}{4}}F^{\mu \nu }F_{\mu \nu }+{\frac {1}{2}}e^{2}H^{2}A^{2}\right)~.}
To have arbitrarily small charges requires that the U(1) is not the circle of unit complex numbers under multiplication, but the real numbers under addition, which is only different in the global topology. Such a U(1) group is non-compact. The field θ transforms as an affine representation of the gauge group. Among the allowed gauge groups, only non-compact U(1) admits affine representations, and the U(1) of electromagnetism is experimentally known to be compact, since charge quantization holds to extremely high accuracy.
The Higgs condensate in this model has infinitesimal charge, so interactions with the Higgs boson do not violate charge conservation. The theory of quantum electrodynamics with a massive photon is still a renormalizable theory, one in which electric charge is still conserved, but magnetic monopoles are not allowed. For non-Abelian gauge theory, there is no affine limit, and the Higgs oscillations cannot be too much more massive than the vectors. | https://en.wikipedia.org/wiki/Higgs_mechanism |
In particle physics , the Higgs sector is the collection of quantum fields and/or particles that are responsible for the Higgs mechanism , i.e. for the spontaneous symmetry breaking of the Higgs field . The word "sector" refers to a subgroup of the total set of fields and particles. [ 1 ]
This particle physics –related article is a stub . You can help Wikipedia by expanding it . | https://en.wikipedia.org/wiki/Higgs_sector |
HDC ( Hybrid Digital Coding or High-Definition Coding ) with SBR ( spectral band replication ) is a proprietary lossy audio compression codec developed by iBiquity for use with HD Radio . It replaced the earlier PAC codec in 2003. [ 1 ] [ 2 ] In June 2017, the format was reverse engineered and determined to be a variant of HE-AACv1 . [ 3 ] It uses a modified discrete cosine transform (MDCT) audio coding data compression algorithm. [ 4 ]
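Since the codec was identified as an HE-AAC variant, the transform at its core is the MDCT mentioned above. The sketch below shows the textbook MDCT only, as a rough illustration; it is not iBiquity's implementation, and the frame length is an arbitrary assumption.

    import numpy as np

    def mdct(frame):
        """Forward MDCT of one 2N-sample frame into N coefficients.

        Frames are meant to be windowed and 50%-overlapped by the caller,
        which is what makes the transform lapped yet critically sampled.
        """
        two_n = len(frame)
        N = two_n // 2
        n = np.arange(two_n)
        k = np.arange(N)
        basis = np.cos(np.pi / N * (n[None, :] + 0.5 + N / 2) * (k[:, None] + 0.5))
        return basis @ frame

    # A sine window satisfies the Princen-Bradley perfect-reconstruction condition
    win = np.sin(np.pi * (np.arange(2048) + 0.5) / 2048)
    coeffs = mdct(win * np.random.randn(2048))   # 1024 coefficients per frame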
This computing article is a stub . You can help Wikipedia by expanding it . | https://en.wikipedia.org/wiki/High-Definition_Coding |
High-altitude adaptation in humans is an instance of evolutionary modification in certain human populations, including those of Tibet in Asia, the Andes of the Americas, and Ethiopia in Africa, who have evolved the ability to survive at altitudes above 2,500 meters (8,200 ft). [ 1 ] This adaptation entails irreversible, long-term physiological responses to high-altitude environments, associated with heritable behavioral and genetic changes . While the rest of the human population would suffer serious health consequences at high altitudes, the indigenous inhabitants of these regions thrive in the highest parts of the world. These humans have undergone extensive physiological and genetic changes, particularly in the regulatory systems of oxygen respiration and blood circulation when compared to the general lowland population. [ 2 ] [ 3 ]
Around 81.6 million humans (approximately 1.1% of the world's human population) live permanently at altitudes above 2,500 meters (8,200 ft), [ 4 ] which would seem to put these populations at risk for chronic mountain sickness (CMS) . [ 1 ] However, the high-altitude populations in South America, East Africa , and South Asia have lived there for millennia without apparent complications. [ 5 ] This special adaptation is now recognized as an example of natural selection in action. [ 6 ] The adaptation of the Tibetans is the fastest known example of human evolution , as it is estimated to have occurred between 1,000 BCE [ 7 ] [ 8 ] [ 9 ] and 7,000 BCE. [ 10 ] [ 11 ]
Humans are generally adapted to lowland environments where oxygen is abundant. [ 12 ] At altitudes above 2,500 meters (8,200 ft), such humans experience altitude sickness , which is a type of hypoxia , a clinical syndrome of severe lack of oxygen. Some humans develop the illness beginning at above 1,500 meters (5,000 ft). [ 13 ] Symptoms include fatigue , dizziness , breathlessness , headaches , insomnia , malaise , nausea , vomiting , body pain , loss of appetite , ear-ringing , blistering and purpling of the hands and feet , and dilated blood vessels . [ 14 ] [ 15 ] [ 16 ]
The sickness is compounded by related symptoms such as cerebral oedema (swelling of the brain) and pulmonary oedema (fluid accumulation in the lungs) . [ 17 ] [ 18 ] Over a span of multiple days, individuals experiencing the effects of high-altitude hypoxia demonstrate raised respiratory activity and elevated metabolic conditions which persist during periods of rest. Subsequently, afflicted people experience a slowly declining heart rate. Hypoxia is a primary contributor to fatalities within mountaineering groups, making it a significant risk factor in high-altitude challenges. [ 19 ] [ 20 ] In women, pregnancy can be severely affected, for example by the development of preeclampsia , which causes premature labor and low birth weight of babies, and is often complicated by profuse bleeding , seizures , or death of the mother . [ 2 ] [ 21 ]
An estimated 81.6 million humans live at an elevation higher than 2,500 meters (8,200 ft) above sea level, of which 21.7 million reside in Ethiopia , 12.5 million in China , 11.7 million in Colombia , 7.8 million in Peru , and 6.2 million in Bolivia . [ 4 ] Certain natives of Tibet, Ethiopia, and the Andes have been living at these high altitudes for generations and are resistant to hypoxia as a consequence of genetic adaptation. [ 5 ] [ 14 ] It is estimated that at 4,000 meters (13,000 ft) altitude, every lungful of air has approximately 60% of the oxygen molecules found in a lungful of air at sea level. [ 22 ] Highlanders are thus constantly exposed to a low oxygen environment, yet they live without any debilitating problems. [ 23 ]
One of the best-documented effects of high altitude on non-adapted women is a progressive reduction in birth weight . By contrast, the women of long-resident, high-altitude populations are known to give birth to heavier-weight infants than women of the lowland. This is particularly true among Tibetan babies, whose average birth weight is 294–650 g (on average ~470 g) heavier than that of the surrounding Chinese population , and their blood-oxygen level is considerably higher. [ 24 ]
Scientific investigation of high-altitude adaptation was initiated by A. Roberto Frisancho of the University of Michigan in the late 1960s among the Quechua people of Peru. [ 25 ] [ 26 ] Paul T. Baker of Penn State University ’s Department of Anthropology also conducted a considerable amount of research into human adaptation to high altitudes, and mentored students who continued this research. [ 27 ] One of these students, anthropologist Cynthia Beall of Case Western Reserve University , began conducting decades-long research on high altitude adaptation among the Tibetans in the early 1980s. [ 28 ]
Among the different native highlander populations, the underlying physiological responses to adaptation differ. For example, among four quantitative features (resting ventilation, hypoxic ventilatory response, oxygen saturation, and hemoglobin concentration), the levels of variation are significantly different between the Tibetans and the Aymara . [ 29 ] Methylation also influences oxygenation. [ 30 ]
In the early 20th century, researchers observed the impressive physical abilities of Tibetans during Himalayan climbing expeditions. They considered the possibility that these abilities resulted from an evolutionary genetic adaptation to high-altitude conditions. [ 31 ] The Tibetan plateau has an average elevation of 4,000 meters (13,000 ft) above sea level and covers more than 2.5 million km 2 ; it is the highest and largest plateau in the world. In 1990, it was estimated that 4,594,188 Tibetans live on the plateau, with 53% living at an altitude over 3,500 meters (11,500 ft). Fairly large numbers (approximately 600,000) live at an altitude exceeding 4,500 meters (14,800 ft) in the Chantong-Qingnan area. [ 32 ]
Tibetans who have been living in the Chantong-Qingnan area for 3,000 years do not exhibit the same elevated hemoglobin concentrations to cope with oxygen deficiency that are observed in other populations who have moved temporarily or permanently to high altitudes. Instead, the Tibetans inhale more air with each breath and breathe more rapidly than either sea-level populations or Andeans. Tibetans have better oxygenation at birth, enlarged lung volumes throughout life, and a higher capacity for exercise . They show a sustained increase in cerebral blood flow, lower hemoglobin concentration, and less susceptibility to chronic mountain sickness than other populations due to their longer history of high-altitude habitation. [ 33 ] [ 34 ]
With the proper physical preparation, individuals can develop short-term tolerance to high-altitude conditions. However, these biological changes are temporary and will reverse upon returning to lower elevations. [ 35 ] Moreover, while lowland people typically experience increased breathing for only a few days after entering high altitudes, Tibetans maintain this rapid breathing and elevated lung capacity throughout their lifetime. [ 36 ] This enables them to inhale large amounts of air per unit of time to compensate for low oxygen levels. Additionally, Tibetans typically have significantly higher levels of nitric oxide in their blood, often double that of lowlanders. This likely contributes to enhanced blood circulation by promoting vasodilation . [ 37 ]
Furthermore, their hemoglobin level is not significantly different (averaging 15.6 g/dl in males and 14.2 g/dl in females) [ 38 ] from that of humans living at low altitude. By contrast, lowland mountaineers experience an increase of over 2 g/dl in hemoglobin levels within two weeks at the Mt. Everest base camp. [ 39 ] Consequently, Tibetans demonstrate the capacity to mitigate the effects of hypoxia and mountain sickness throughout their lives. Even when ascending extraordinarily high peaks such as Mount Everest, they exhibit consistent oxygen uptake, heightened ventilation, augmented hypoxic ventilatory responses, expanded lung volumes, increased diffusing capacities, stable body weight, and improved sleep quality compared to lowland populations. [ 40 ]
In contrast to the Tibetans, Andean highlanders show different patterns of hemoglobin adaptation. Their hemoglobin concentration is higher than that of the lowlander population, as also happens to lowlanders who move to high altitudes. When they spend some weeks in the lowlands, their hemoglobin drops to the same levels as lowland humans. However, in contrast to lowland humans, they have increased oxygen levels in their hemoglobin; that is, more oxygen per blood volume. This confers an ability to carry more oxygen in each red blood cell, meaning a more effective transport of oxygen throughout their bodies. [ 36 ] This enables Andeans to overcome hypoxia and normally reproduce without risk of death for the mother or baby. They have developmentally-acquired enlarged residual lung volume and an associated increase in alveolar area, which are supplemented with increased tissue thickness and a moderate increase in red blood cells . Though Andean highlander children show delayed body growth, the growth of lung volume is accelerated. [ 41 ]
Among the Quechua people of the Altiplano , there is a significant variation in NOS3 (the gene encoding endothelial nitric oxide synthase , eNOS), which is associated with higher levels of nitric oxide at high altitude. [ 42 ] Nuñoa children of Quechua ancestry exhibit higher blood-oxygen content (91.3%) and lower heart rate (84.8 beats per minute) than their peers of different ethnicities, who average 89.9% blood-oxygen content and heart rates of 88–91 beats per minute. [ 43 ] Quechua women have comparatively enlarged lung volumes for increased respiration. [ 44 ]
Blood profile comparisons show that, among the Andeans, Aymara highlanders are better adapted to highlands than the Quechuas. [ 45 ] [ 46 ] Among the Bolivian Aymara people, the resting ventilation and the hypoxic ventilatory response are quite low (roughly 1.5 times lower than those of the Tibetans). The intrapopulation genetic variation is relatively small among the Aymara people. [ 47 ] [ 48 ] Moreover, when compared to Tibetans, blood hemoglobin levels at high altitudes among the Aymara are notably higher, with an average of 19.2 g/dl for males and 17.8 g/dl for females. [ 38 ]
The people of the Ethiopian highlands also live at extremely high altitudes, around 3,000 meters (9,800 ft) to 3,500 meters (11,500 ft). Highland Ethiopians exhibit elevated hemoglobin levels, like Andeans and lowlander humans at high altitudes, but do not exhibit the Andeans' increase in the oxygen content of hemoglobin. [ 49 ] Among healthy individuals, the average hemoglobin concentrations are 15.9 and 15.0 g/dl for males and females, respectively (lower than expected at such altitude, similar to the Tibetans), and the average oxygen saturation of hemoglobin is 95.3% (higher than average, like the Andeans). [ 50 ] Additionally, Ethiopian highlanders do not exhibit the significant change in blood circulation of the brain that has been observed among the Peruvian highlanders and attributed to their frequent altitude-related illnesses. [ 51 ] Yet, similar to the Andeans and Tibetans, the Ethiopian highlanders are immune to the extreme dangers posed by the high-altitude environment, and their pattern of adaptation is distinct from that of other highland peoples. [ 22 ]
The underlying molecular evolution of high-altitude adaptation has been explored in recent years. [ 23 ] Depending on geographical and environmental pressures, high-altitude adaptation involves different genetic patterns, some of which evolved relatively recently. For example, Tibetan adaptations became prevalent in the past 3,000 years, an example of rapid recent human evolution . At the turn of the 21st century, it was reported that the genetic makeup of the respiratory components of the Tibetan and the Ethiopian populations was significantly different. [ 29 ]
Substantial evidence from Tibetan highlanders suggests that variation in hemoglobin and blood-oxygen levels is adaptive in terms of Darwinian fitness. It has been documented that Tibetan women with a high likelihood of possessing one or two alleles for high blood-oxygen content (which is rare in other women) had more surviving children; the higher the oxygen capacity, the lower the infant mortality. [ 52 ] In 2010, for the first time, the genes responsible for the unique adaptive traits were identified following genome sequencing of 50 Tibetans and 40 Han Chinese from Beijing . Initially, the strongest signal of natural selection was found in a transcription factor involved in the response to hypoxia, called endothelial Per-Arnt-Sim (PAS) domain protein 1 ( EPAS1 ). It was found that one single-nucleotide polymorphism (SNP) at EPAS1 shows a 78% frequency difference between Tibetan and mainland Chinese samples, representing the fastest genetic change observed in any human gene to date. Hence, Tibetan adaptation to high altitude is recognized as one of the fastest processes of phenotypically observable evolution in humans, [ 53 ] which is estimated to have occurred a few thousand years ago, when the Tibetans split from the mainland Chinese population. The time of genetic divergence has been variously estimated as 2,750 (original estimate), [ 9 ] 4,725, [ 11 ] 8,000, [ 54 ] or 9,000 [ 10 ] years ago.
Mutations in EPAS1 occur at a higher frequency in Tibetans than in their Han neighbors and correlate with decreased hemoglobin concentrations among the Tibetans. This is known as the hallmark of their adaptation to hypoxia. Simultaneously, two genes, egl nine homolog 1 ( EGLN1 ), which inhibits hemoglobin production under high oxygen concentration, and peroxisome proliferator-activated receptor alpha ( PPARA ), were also identified as positively selected for decreased hemoglobin levels in the Tibetans. [ 55 ]
Similarly, the Sherpas , known for their Himalayan hardiness, exhibit similar patterns in the EPAS1 gene, which is further evidence that the gene is under selection pressure for adaptation to the high-altitude life of Tibetans. [ 56 ] A study in 2014 indicates that the mutant EPAS1 gene could have been inherited from archaic hominins , the Denisovans . [ 57 ] EPAS1 and EGLN1 are believed to be important genes for unique adaptive traits when compared with those of the Chinese and Japanese. [ 58 ] Comparative genome analysis in 2014 revealed that the Tibetans inherited an equal mixture of genomes from the Nepalese Sherpas and Hans, and that they acquired adaptive genes from the Sherpa lineage. Further, the population split was estimated to occur around 20,000 to 40,000 years ago, a range supported by archaeological, mitochondria DNA, and Y chromosome evidence for an initial colonization of the Tibetan plateau around 30,000 years ago. [ 59 ]
The genes EPAS1 , EGLN1 , and PPARA function in concert with the hypoxia-inducible factor ( HIF ) genes, which are in turn principal regulators of red blood cell production ( erythropoiesis ) in response to oxygen metabolism. [ 60 ] [ 61 ] [ 62 ] The genes are associated not only with decreased hemoglobin levels, but also with regulating metabolism. EPAS1 is significantly associated with increased lactate concentration, a product of anaerobic glycolysis , and PPARA is correlated with a decrease in the activity of fatty acid oxidation . [ 63 ] EGLN1 codes for an enzyme, prolyl hydroxylase 2 (PHD2), involved in erythropoiesis.
Among the Tibetans, a mutation in EGLN1 (specifically at position 12, where cytosine is replaced with guanine; and at 380, where G is replaced with C) results in mutant PHD2 (aspartic acid at position 4 becomes glutamine, and cysteine at 127 becomes serine) and this mutation inhibits erythropoiesis. This mutation is estimated to have occurred approximately 8,000 years ago. [ 64 ] Further, the Tibetans are enriched for genes in the disease class of human reproduction (such as genes from the DAZ , BPY2 , CDY , and HLA-DQ and HLA-DR gene clusters) and biological process categories of response to DNA damage stimulus and DNA repair (such as RAD51 , RAD52 , and MRE11A ), which are related to the adaptive traits of high infant birth weight and darker skin tone and are most likely due to recent local adaptation. [ 65 ]
The patterns of genetic adaptation among the Andeans are largely distinct from those of the Tibetans, with both populations showing evidence of positive natural selection in different genes or gene regions. For genes in the HIF pathway, EGLN1 is the only instance where evidence of positive selection is observed in both Tibetans and Andeans. [ 66 ] Even then, the pattern of variation for this gene differs between the two populations. [ 6 ] Furthermore, there are no significant associations between EPAS1 or EGLN1 SNP genotypes and hemoglobin concentration among the Andeans, which is characteristic of the Tibetans. [ 67 ]
The Andean pattern of adaptation is characterized by selection in a number of genes involved in cardiovascular development and function (such as BRINP3 , EDNRA , NOS2A ). [ 68 ] [ 69 ] This suggests that selection in Andeans, instead of targeting the HIF pathway like in the Tibetans, focused on adaptations of the cardiovascular system to combat chronic disease at high altitude. Analysis of ancient Andean genomes, some dating back 7,000 years, discovered selection in DST , a gene involved in cardiovascular function. [ 70 ] The whole genome sequences of 20 Andeans (half of them having chronic mountain sickness) revealed that two genes, SENP1 (an erythropoiesis regulator) and ANP32D (an oncogene) play vital roles in their weak adaptation to hypoxia. [ 71 ]
The adaptive mechanism of Ethiopian highlanders differs from those of the Tibetans and Andeans because their occupation of the highlands began comparatively early. For example, the Amhara have inhabited altitudes above 2,500 meters (8,200 ft) for at least 5,000 years and altitudes around 2,000 meters (6,600 ft) to 2,400 meters (7,900 ft) for more than 70,000 years. [ 72 ] Genomic analysis of two ethnic groups, Amhara and Oromo , has revealed that the gene variants associated with hemoglobin differences among Tibetans, and other variants at the same gene locations, do not influence the adaptation in Ethiopians. [ 73 ] Several candidate genes have been identified as possible explanations for the adaptation of Ethiopians, including CBARA1 , VAV3 , ARNT2 and THRB . Two of these genes ( THRB and ARNT2 ) are known to play a role in the HIF-1 pathway , a pathway implicated in previous work reported in Tibetan and Andean studies. This supports the hypothesis that adaptation to high altitude arose independently among different highlander populations as a result of convergent evolution . [ 74 ]
There are a wide range of potential applications for research at high altitude, including medical, physiological , and cosmic physics research.
The most obvious and direct application of high-altitude research is to understand altitude illnesses such as acute mountain sickness , and the rare but rapidly fatal conditions high-altitude pulmonary edema ( HAPE ) and high-altitude cerebral edema ( HACE ). [ 1 ] [ 2 ] Research at high altitude is also an important way to learn about sea-level conditions that are caused or complicated by hypoxia , such as chronic lung disease and sepsis . Patients with these conditions are very complex and usually suffer from several other diseases at the same time, so it is virtually impossible to work out which of their problems is caused by lack of oxygen. Altitude research gets around this by studying the effects of oxygen deprivation on otherwise healthy people.
Travelling to high altitude is often used as a way of studying the way the body responds to a shortage of oxygen. It is difficult and prohibitively expensive to conduct some of this research at sea level.
Although the thinner air contributes to the effects on the human body, research has found that most altitude sickness can be traced to the reduced atmospheric pressure itself. At low elevation, the pressure is higher because the molecules of air are compressed by the weight of the air above them; at higher elevations, the pressure is lower and the molecules are more dispersed. The percentage of oxygen in the air is the same at high altitude as at sea level, but because the total pressure is lower, each breath delivers proportionally fewer oxygen molecules to the body.
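This pressure-oxygen relationship can be made concrete with the isothermal barometric formula P(h) = P0 exp(−Mgh/RT). The sketch below (an illustration added here) uses standard physical constants; the constant mean air-column temperature is a simplifying assumption.

    import math

    P0 = 101.325                     # sea-level pressure, kPa
    M, G, R = 0.02896, 9.81, 8.314   # molar mass of air, gravity, gas constant (SI)
    T = 288.0                        # assumed mean air-column temperature, K
    FIO2 = 0.2095                    # oxygen fraction of air, the same at every altitude

    def pressure(h_m):
        """Isothermal barometric estimate of total air pressure at altitude h (m)."""
        return P0 * math.exp(-M * G * h_m / (R * T))

    for h in (0, 2500, 4000, 8848):
        p = pressure(h)
        print(f"{h:5d} m: P = {p:6.1f} kPa, inspired pO2 = {FIO2 * p:5.1f} kPa"
              f" ({100 * p / P0:5.1f}% of sea level)")

At 4,000 m this simple model already gives roughly 60% of sea-level pressure, and hence roughly 60% of the oxygen molecules per breath.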
At about 26,000 feet (8,000 m), the body reaches its limit and can no longer acclimatize to the altitude; this elevation is often referred to as the "Death Zone". [ 3 ]
This science article is a stub . You can help Wikipedia by expanding it . | https://en.wikipedia.org/wiki/High-altitude_research |
Companion standards in the IEC 62439 suite include the Media Redundancy Protocol (IEC 62439-2), the Parallel Redundancy Protocol (IEC 62439-3), the Cross-network Redundancy Protocol (IEC 62439-4) and the Beacon Redundancy Protocol (IEC 62439-5).
High-availability Seamless Redundancy ( HSR ) is a network protocol for Ethernet that provides seamless failover against failure of any single network component. Like the Parallel Redundancy Protocol (PRP), HSR is independent of the application protocol and can be used by most Industrial Ethernet protocols in the IEC 61784 suite. HSR does not cover the failure of end nodes, but redundant nodes can be connected via HSR.
HSR nodes have two ports and act as a bridge , which allows arranging them into a ring or meshed structure without dedicated switches. This is in contrast to the companion standard Parallel Redundancy Protocol (PRP), [ 1 ] with which HSR shares the operating principle. PRP and HSR are standardized by the IEC 62439-3:2016. [ 2 ]
PRP and HSR are suited for applications that demand high availability and short switchover times. [ 3 ] For such applications, the recovery time of commonly used protocols such as the Rapid Spanning Tree Protocol (RSTP) is too long. HSR has been adopted for electrical substation automation in the framework of IEC 61850 . [ 4 ] It is used in synchronized drives (e.g. in printing machines) and high-power inverters. [ 5 ]
The cost of HSR is that nodes require hardware support ( FPGA or ASIC ) to forward or discard frames within microseconds. This cost is compensated by the fact that no Ethernet switches are required. Hardware support is in any case needed when the node supports clock synchronization or security.
An HSR network node (DANH) has at least two Ethernet ports, each attached to a neighbour HSR node, so that two paths always exist between any two nodes. Therefore, as long as one path is operational, the destination application always receives one frame. HSR nodes check the redundancy continuously to detect lurking failures.
HSR is typically used in a ring topology or in another mesh topology .
Nodes with single attachment (such as a printer) are attached through a RedBox (Redundancy Box).
Redundant connections to other networks are possible, especially to a Parallel Redundancy Protocol (PRP) network.
Since HSR and PRP use the same duplicate identification mechanism, PRP and HSR networks can be connected without single point of failure and the same nodes can be built to be used in both PRP and HSR networks.
Every HSR node is a switching node, i.e. it can forward a frame received on one port to at least one other port in cut-through mode.
A source node sends the same frame over all ports to the neighbour nodes.
A destination node should receive, in the fault-free state, two identical frames within a certain time skew, forward the first frame to the application and discard the second frame when (and if) it comes.
A node forwards a frame unless it detects that it originally sent the frame itself, or that it has already sent it once.
To reduce unicast traffic, a node does not forward a frame for which it is the sole destination (Mode U). This does not apply when traffic supervision is needed.
To reduce traffic, a node may refrain from sending a frame that it already received from the opposite direction on the same port (Mode X), [ 6 ] but this does not apply to all frames.
Several algorithms that rely on learning the location of network nodes can also serve to reduce HSR traffic, such as Port Locking (PL) and Enhanced Port Locking (EPL), which close the ports that lead to nonexistent nodes. [ 7 ]
In particular, Precision Time Protocol frames (multicast) are not duplicates of one another, since they are modified by each node to correct the time. Such frames can only be retired by the node that originally inserted them, or by another node that already sent them. Also, this mode cannot be used when deterministic operation is required.
A special treatment is given to link-specific frames such as LLDP or Pdelay_Req / Pdelay_Resp Precision Time Protocol frames, for which the HSR tag is ignored, but must be present.
To simplify the detection of duplicates, the frames are identified by their source address and a sequence number that is incremented for each frame sent according to the HSR protocol. The sequence number, the frame size and the path identifier are appended in a 6-octet HSR tag (header).
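A toy sketch of this duplicate-discard rule (illustrative only; the real standard additionally specifies the exact tag layout and table-aging timers, which are omitted here): a receiver delivers the first copy of each (source address, sequence number) pair to the application and drops the second.

    class HsrReceiver:
        """Deliver the first copy of each frame; discard the duplicate.

        Duplicates are recognized by (source MAC, HSR sequence number), as
        described in the text above. The fixed-size history window is an
        arbitrary assumption standing in for the standard's aging rules.
        """

        def __init__(self, window=1024):
            self.window = window
            self.seen = set()
            self.order = []          # FIFO of keys, to bound memory

        def receive(self, src_mac, seq_num, payload):
            key = (src_mac, seq_num)
            if key in self.seen:
                return None          # second copy, from the other ring direction
            self.seen.add(key)
            self.order.append(key)
            if len(self.order) > self.window:
                self.seen.discard(self.order.pop(0))
            return payload           # first copy: pass to the application

    rx = HsrReceiver()
    assert rx.receive("aa:bb", 7, b"data") == b"data"   # copy from port A delivered
    assert rx.receive("aa:bb", 7, b"data") is None      # copy from port B discarded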
NOTE: all legacy devices should accept Ethernet frames up to 1528 octets; this is below the theoretical limit of 1535 octets.
In an HSR ring, only about half of the network bandwidth is available to applications for multicast traffic (compared to RSTP). This is because all frames are sent twice over the same network, even when there is no failure.
However, since the network infrastructure is also doubled in closed ring topologies, the nominal network bandwidth is available. E.g. in a 100 Mbit/s Ethernet ring 100 Mbit/s are available (but not 200 Mbit/s).
Since the forwarding delay of every node in an HSR ring adds to the total network latency, frames must be forwarded within microseconds. In practice, hardware support ( FPGA ) [ 8 ] is required to bring the per-hop latency down to a reasonable value (some 5 μs at 100 Mbit/s), using cut-through switching . To this purpose, each frame has an HSR tag that allows recognition of whether the frame should be forwarded or not, to avoid store-and-forward . This means that corrupted frames will not be removed from the ring until they reach a node that already sent them.
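A back-of-the-envelope calculation (using the per-hop figure quoted above and an assumed ring size) shows why the per-hop latency matters:

    per_hop_us = 5           # cut-through forwarding delay per node (from the text)
    nodes = 16               # example ring size (assumption)
    # In the worst case, a frame traverses nearly the whole ring to its destination.
    print(f"worst-case forwarding latency ~ {(nodes - 1) * per_hop_us} us")   # 75 us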
IEC 62439-3 Annex C specifies a Precision Time Protocol Industry Profile (PIP L2P2P) that allows clock synchronization down to an accuracy of 1 μs in a ring of 16 HSR nodes. This PTP profile also allows operating the HSR ring deterministically for a dedicated class of traffic, for instance Sampled Values in IEC 61850 . It has been adopted by IEEE as IEC/IEEE 61850-9-3 . [ 9 ]
Originally, the protocol was named HASAR after the initials of the five companies working for electrical utilities that created it (Hirschmann, ABB, Siemens, Alstom and RuggedCom). Marketing renamed it HSR, for "High-availability Seamless Ring", but HSR is not limited to a simple ring topology.
High-capacity data radio (HCDR) is a development of the Near-Term Digital Radio (NTDR) for the UK government as a part of the Bowman communication system . [ 1 ] It is a secure wideband 225–450 MHz UHF radio system that provides a self-managing IP -based Internet backbone capability without the need for other infrastructure communications (mobile phone, fixed communications).
There is also an export version that incorporates Advanced Encryption Standard (AES) encryption rather than UK Government Type 1 Crypto. The radio offers a link throughput (terminal to terminal) of 500 kbit/s. A deployment of over 200 HCDR-equipped military vehicles can automatically configure and self manage into a fully connected autonomous mesh network intercommunicating using mobile ad hoc network (MANET) protocols. The radio is an IPv4 -compliant three-port router having a radio port, Ethernet port and PPP serial port. The 20-watt radio has adaptive transmit power and adaptive forward error correction and can optimally achieve ground ranges up to 15 km with omnidirectional antennas . A maritime version allows radio LAN operation within flotillas of naval ships up to 20 km apart. The radio features coded modulation with internal wide-band or narrow band radio data modems .
This article related to radio communications is a stub . You can help Wikipedia by expanding it . | https://en.wikipedia.org/wiki/High-capacity_data_radio |
The term high-conductance state describes a particular state of neurons in specific states of the brain , such as during wakefulness , attentive states, or even some anesthetized states. [ 1 ] In individual neurons, the high-conductance state is formally defined by the fact that the total synaptic conductance received by the neuron is larger than its natural resting (or leak) conductance, so in a sense the neuron is "driven" by its inputs rather than being dominated by its intrinsic activity. High-conductance states have been well characterized experimentally, but they have also motivated numerous theoretical studies, in particular in relation to the considerable amount of "noise" present in such states. It is believed that this " synaptic noise " strongly shapes neuronal processing and may even confer several computational advantages on neurons (see details in the article High-Conductance State in Scholarpedia ).
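The defining inequality can be illustrated with a standard single-compartment conductance model (a sketch added here; the parameter values are typical textbook assumptions, not taken from the article): once the summed synaptic conductances exceed the leak conductance, the effective membrane time constant shrinks and the steady-state voltage is pulled toward the synaptic reversal potentials.

    # Steady state of C dV/dt = -gL(V-EL) - ge(V-Ee) - gi(V-Ei)
    C = 0.25                           # nF, membrane capacitance (assumed)
    gL = 10.0                          # nS, resting (leak) conductance (assumed)
    EL, Ee, Ei = -70.0, 0.0, -75.0     # mV, leak/excitatory/inhibitory reversals

    def effective_state(ge, gi):
        """Return steady-state voltage (mV) and effective time constant (ms)."""
        g_tot = gL + ge + gi
        v_ss = (gL * EL + ge * Ee + gi * Ei) / g_tot
        tau_ms = 1000.0 * C / g_tot    # nF/nS gives seconds, so x1000 -> ms
        return round(v_ss, 1), round(tau_ms, 1)

    print("quiescent:       ", effective_state(0.0, 0.0))    # (-70.0, 25.0)
    print("high-conductance:", effective_state(15.0, 60.0))  # depolarized, ~3 ms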
The term high-conductance state is also used to describe specific states of single ion channels . In this case, the high-conductance state corresponds to an open state of the channel which is associated with a particularly high conductance compared to other states.
This neuroscience article is a stub . You can help Wikipedia by expanding it . | https://en.wikipedia.org/wiki/High-conductance_state |
High-content screening (HCS), also known as high-content analysis (HCA) or cellomics , is a method that is used in biological research and drug discovery to identify substances such as small molecules , peptides , or RNAi that alter the phenotype of a cell in a desired manner. [ 1 ] [ 2 ] Hence high content screening is a type of phenotypic screen conducted in cells involving the analysis of whole cells or components of cells with simultaneous readout of several parameters. [ 3 ] HCS is related to high-throughput screening (HTS), in which thousands of compounds are tested in parallel for their activity in one or more biological assays, but involves assays of more complex cellular phenotypes as outputs. [ 4 ] Phenotypic changes may include increases or decreases in the production of cellular products such as proteins and/or changes in the morphology (visual appearance) of the cell. Hence HCA typically involves automated microscopy and image analysis. [ 4 ] Unlike high-content analysis, high-content screening implies a level of throughput which is why the term "screening" differentiates HCS from HCA, which may be high in content but low in throughput.
In high content screening, cells are first incubated with the substance and after a period of time, structures and molecular components of the cells are analyzed. The most common analysis involves labeling proteins with fluorescent tags , and finally changes in cell phenotype are measured using automated image analysis . Through the use of fluorescent tags with different absorption and emission maxima, it is possible to measure several different cell components in parallel. Furthermore, the imaging is able to detect changes at a subcellular level (e.g., cytoplasm vs. nucleus vs. other organelles ). Therefore, a large number of data points can be collected per cell. In addition to fluorescent labeling, various label free assays have been used in high content screening. [ 5 ]
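A minimal sketch of the automated image-analysis step described above (assuming scikit-image is available; the thresholding choices and the synthetic image stand in for real microscope data): segment nuclei from a DNA-stain channel, then read out a second fluorescent marker per cell.

    import numpy as np
    from skimage.filters import gaussian, threshold_otsu
    from skimage.measure import label, regionprops

    def per_cell_intensity(nuclei_img, marker_img):
        """Segment nuclei and return the mean marker intensity inside each one."""
        smooth = gaussian(nuclei_img, sigma=2)          # denoise the DNA channel
        mask = smooth > threshold_otsu(smooth)          # global intensity threshold
        labels = label(mask)                            # one integer label per nucleus
        return [r.mean_intensity
                for r in regionprops(labels, intensity_image=marker_img)]

    # Synthetic two-channel field of view standing in for microscope data
    rng = np.random.default_rng(0)
    nuclei = rng.random((256, 256)) * 0.1
    nuclei[40:70, 40:70] = 1.0                          # one fake nucleus
    marker = rng.random((256, 256))
    print(per_cell_intensity(nuclei, marker))           # one value per detected cell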
High-content screening (HCS) in cell-based systems uses living cells as tools in biological research to elucidate the workings of normal and diseased cells. HCS is also used to discover and optimize new drug candidates. High-content screening combines modern cell biology , with all its molecular tools, with automated high-resolution microscopy and robotic handling. Cells are first exposed to chemicals or RNAi reagents. Changes in cell morphology are then detected using image analysis . Changes in the amounts of proteins synthesized by cells are measured using a variety of techniques such as green fluorescent protein fused to endogenous proteins, or fluorescent antibodies .
The technology may be used to determine whether a potential drug is disease modifying. For example, in humans G-protein coupled receptors (GPCRs) are a large family of around 880 cell surface proteins that transduce extra-cellular changes in the environment into a cell response, like triggering an increase in blood pressure because of the release of a regulatory hormone into the blood stream. Activation of these GPCRs can involve their entry into cells and when this can be visualised it can be the basis of a systematic analysis of receptor function through chemical genetics , systematic genome wide screening or physiological manipulation.
At a cellular level, parallel acquisition of data on different cell properties, for example activity of signal transduction cascades and cytoskeleton integrity is the main advantage of this method in comparison to the faster but less detailed high throughput screening . While HCS is slower, the wealth of acquired data allows a more profound understanding of drug effects.
Automated image based screening permits the identification of small compounds altering cellular phenotypes and is of interest for the discovery of new pharmaceuticals and new cell biological tools for modifying cell function. The selection of molecules based on a cellular phenotype does not require a prior knowledge of the biochemical targets that are affected by compounds. However the identification of the biological target will make subsequent preclinical optimization and clinical development of the compound hit significantly easier. Given the increase in the use of phenotypic/visual screening as a cell biological tool, methods are required that permit systematic biochemical target identification if these molecules are to be of broad use. [ 6 ] Target identification has been defined as the rate limiting step in chemical genetics/high-content screening. [ 7 ]
High-content screening technology is mainly based on automated digital microscopy and flow cytometry , in combination with IT-systems for the analysis and storage of the data.
“High-content” or visual biology technology has two purposes: first, to acquire spatially or temporally resolved information on an event, and second, to quantify it automatically. Spatially resolved instruments are typically automated microscopes, and temporal resolution still requires some form of fluorescence measurement in most cases. This means that many HCS instruments are (fluorescence) microscopes connected to some form of image-analysis package. These take care of all the steps in taking fluorescent images of cells and provide rapid, automated and unbiased assessment of experiments.
HCS instruments on the market today can be distinguished by an array of specifications that significantly influence an instrument's versatility and overall cost. These include speed, a live-cell chamber with temperature and CO 2 control (some also have humidity control for longer-term live-cell imaging), a built-in pipettor or injector for fast kinetic assays, and additional imaging modes such as confocal, bright field, phase contrast and FRET. One of the most decisive differences is whether the instrument is optically confocal or not. Confocal microscopy amounts to imaging a thin slice through an object while rejecting the out-of-focus light that comes from outside this slice. Confocal imaging enables a higher signal-to-noise ratio and higher resolution than the more commonly applied epi-fluorescence microscopy. Depending on the instrument, confocality is achieved via laser scanning, a single spinning disk with pinholes or slits, a dual spinning disk, or a virtual slit. There are trade-offs of sensitivity, resolution, speed, photo-toxicity, photo-bleaching, instrument complexity, and price between these various confocal techniques.
What all instruments share is the ability to take, store and interpret images automatically and integrate into large robotic cell/medium handling platforms.
Many screens are analyzed using the image-analysis software that accompanies the instrument, providing a turn-key solution. Third-party software alternatives are often used for particularly challenging screens, or where a laboratory or facility has multiple instruments and wishes to standardize on a single analysis platform. However, some instrument software provides bulk importing and exporting of images and data for users who want to do such standardization on a single analysis platform without third-party software.
This technology allows a (very) large number of experiments to be performed, allowing explorative screening. Cell-based systems are mainly used in chemical genetics where large, diverse small molecule collections are systematically tested for their effect on cellular model systems. Novel drugs can be found using screens of tens of thousands of molecules, and these have promise for the future of drug development.
Beyond drug discovery, chemical genetics aims to functionalize the genome by identifying small molecules that act on most of the 21,000 gene products in a cell. High-content technology will be part of this effort, which could provide useful tools for learning where and when proteins act by knocking them out chemically. This would be most useful for genes for which knockout mice (missing one or several genes) cannot be made because the protein is required for development or growth, or is otherwise lethal in its absence. Chemical knockout could address how and where these genes work.
Further, the technology is used in combination with RNAi to identify sets of genes involved in specific mechanisms, for example cell division. Here, libraries of RNAi reagents covering a whole set of predicted genes in the target organism's genome can be used to identify relevant subsets, facilitating the annotation of genes for which no clear role had been established beforehand.
The large datasets produced by automated cell biology contain spatially resolved, quantitative data which can be used to build systems-level models and simulations of how cells and organisms function. Systems biology models of cell function would permit prediction of why, where and how the cell responds to external changes, growth and disease.
High-content screening technology allows for the evaluation of multiple biochemical and morphological parameters in intact biological systems.
For cell-based approaches the utility of automated cell biology requires an examination of how automation and objective measurement can improve the experimentation and the understanding of disease. First, it removes the influence of the investigator in most, but not all, aspects of cell biology research and second it makes entirely new approaches possible.
In review, classical 20th-century cell biology used cell lines grown in culture, where experiments were measured with instrumentation very similar to that described here, but where the investigator chose what was measured and how. In the early 1990s, the development of charge-coupled device (CCD) cameras for research created the opportunity to measure features in pictures of cells, such as how much protein is in the nucleus and how much is outside it. Sophisticated measurements soon followed using new fluorescent molecules, which are used to measure cell properties such as second-messenger concentrations or the pH of internal cell compartments. The wide use of the green fluorescent protein, a natural fluorescent protein molecule from jellyfish, then accelerated the trend toward cell imaging as a mainstream technology in cell biology. Despite these advances, the choice of which cell to image, which data to present and how to analyze them was still made by the investigator.
By analogy, imagine a football field with dinner plates laid across it: instead of looking at all of them, the investigator would choose a handful near the score line and have to leave the rest. In this analogy the field is a tissue culture dish and the plates are the cells growing on it. While this was a reasonable and pragmatic approach, automating the whole process and its analysis makes possible the analysis of the whole population of living cells, so the whole football field can be measured.
High-energy X-rays or HEX-rays are very hard X-rays, with typical energies of 80–1000 keV (1 MeV), [ a ] about one order of magnitude higher than conventional X-rays used for X-ray crystallography (and well into gamma-ray energies over 120 keV). They are produced at modern synchrotron radiation sources such as the Cornell High Energy Synchrotron Source, SPring-8, and the beamlines ID15 and BM18 at the European Synchrotron Radiation Facility (ESRF). Their main benefit is deep penetration into matter, which makes them a probe for thick samples in physics and materials science and permits an in-air sample environment and operation. The scattering angles are small, and the forward-directed diffraction allows for simple detector setups.
High energy (megavolt) X-rays are also used in cancer therapy , using beams generated by linear accelerators to suppress tumors. [ 1 ]
High-energy X-rays (HEX-rays) between 100 and 300 keV have several advantages over conventional hard X-rays, which lie in the range of 5–20 keV. [ 2 ] They can be listed as follows:
With these advantages, HEX-rays can be applied for a wide range of investigations. An overview, which is far from complete: | https://en.wikipedia.org/wiki/High-energy_X-rays |
High-energy astronomy is the study of astronomical objects that release electromagnetic radiation of highly energetic wavelengths . It includes X-ray astronomy , gamma-ray astronomy , extreme UV astronomy , neutrino astronomy , and studies of cosmic rays . The physical study of these phenomena is referred to as high-energy astrophysics . [ 1 ]
Astronomical objects commonly studied in this field may include black holes , neutron stars , active galactic nuclei , supernovae , kilonovae , supernova remnants , and gamma-ray bursts .
Some space and ground-based telescopes that have studied high energy astronomy include the following: [ 2 ]
High-energy nuclear physics studies the behavior of nuclear matter in energy regimes typical of high-energy physics . The primary focus of this field is the study of heavy-ion collisions, as compared to lighter atoms in other particle accelerators . At sufficient collision energies, these types of collisions are theorized to produce the quark–gluon plasma . In peripheral nuclear collisions at high energies one expects to obtain information on the electromagnetic production of leptons and mesons that are not accessible in electron–positron colliders due to their much smaller luminosities. [ 1 ] [ 2 ] [ 3 ]
Previous high-energy nuclear accelerator experiments studied heavy-ion collisions using projectile energies from 1 GeV/nucleon at JINR and the LBNL Bevalac up to 158 GeV/nucleon at the CERN SPS. Experiments of this type, called "fixed-target" experiments, primarily accelerate a "bunch" of ions (typically around 10⁶ to 10⁸ ions per bunch) to speeds approaching the speed of light (0.999 c ) and smash them into a target of similar heavy ions. While all collision systems are interesting, great focus was applied in the late 1990s to symmetric collision systems of gold beams on gold targets at Brookhaven National Laboratory 's Alternating Gradient Synchrotron (AGS) and lead beams on lead targets at CERN 's Super Proton Synchrotron.
High-energy nuclear physics experiments are continued at the Brookhaven National Laboratory 's Relativistic Heavy Ion Collider (RHIC) and at the CERN Large Hadron Collider. At RHIC the programme began with four experiments (PHENIX, STAR, PHOBOS, and BRAHMS), all dedicated to studying collisions of highly relativistic nuclei. Unlike fixed-target experiments, collider experiments steer two accelerated beams of ions toward each other at (in the case of RHIC) six interaction regions. At RHIC, ions can be accelerated (depending on the ion size) to between 100 GeV/nucleon and 250 GeV/nucleon. Since each colliding ion possesses this energy moving in opposite directions, the collisions achieve a center-of-mass energy of 200 GeV per nucleon pair for gold and 500 GeV for protons.
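The advantage of the collider geometry follows from relativistic kinematics: for two identical counter-propagating beams, the center-of-mass energy per nucleon pair is simply the sum of the two beam energies, whereas for a fixed target it grows only as the square root of the beam energy. As a rough sketch (with $m_N c^2 \approx 0.94$ GeV the nucleon rest energy):

$$\sqrt{s_{NN}}\big|_{\text{collider}} = 2E_{\text{beam}}, \qquad \sqrt{s_{NN}}\big|_{\text{fixed target}} \approx \sqrt{2E_{\text{lab}}\,m_N c^2} \quad (E_{\text{lab}} \gg m_N c^2).$$

Two 100 GeV/nucleon beams thus give $\sqrt{s_{NN}} = 200$ GeV, an energy that would require a fixed-target beam of roughly $2\times 10^4$ GeV per nucleon.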
The ALICE (A Large Ion Collider Experiment) detector at the LHC at CERN is specialized in studying Pb–Pb nuclei collisions at a center-of-mass energy of 2.76 TeV per nucleon pair. All major LHC detectors—ALICE, ATLAS , CMS and LHCb —participate in the heavy-ion programme. [ 4 ]
The exploration of hot hadronic matter and of multiparticle production has a long history, initiated by theoretical work on multiparticle production by Enrico Fermi in the US and Lev Landau in the USSR. These efforts paved the way for the development in the early 1960s of the thermal description of multiparticle production and the statistical bootstrap model by Rolf Hagedorn. These developments led to the search for, and discovery of, the quark–gluon plasma. The onset of production of this new form of matter remains under active investigation.
The first heavy-ion collisions at modestly relativistic conditions were undertaken at the Lawrence Berkeley National Laboratory (LBNL, formerly LBL) at Berkeley, California, U.S.A., and at the Joint Institute for Nuclear Research (JINR) in Dubna, Moscow Oblast, USSR. At the LBL, a transport line was built to carry heavy ions from the heavy-ion accelerator HILAC to the Bevatron. The energy scale of 1–2 GeV per nucleon attained initially yields compressed nuclear matter at a few times normal nuclear density. The demonstration of the possibility of studying the properties of compressed and excited nuclear matter motivated research programs at much higher energies, at accelerators available at BNL and CERN, with relativistic beams targeting laboratory fixed targets. The first collider experiments started in 1999 at RHIC, and the LHC began colliding heavy ions at an order of magnitude higher energy in 2010.
The LHC collider at CERN operates one month a year in nuclear-collision mode, with Pb nuclei colliding at 2.76 TeV per nucleon pair, about 1500 times the energy equivalent of the rest mass. Overall, 1250 valence quarks collide, generating a hot quark–gluon soup. Heavy atomic nuclei stripped of their electron cloud are called heavy ions, and one speaks of (ultra)relativistic heavy ions when the kinetic energy significantly exceeds the rest energy, as is the case at the LHC. The outcome of such collisions is the production of very many strongly interacting particles.
In August 2012 ALICE scientists announced that their experiments produced quark–gluon plasma with temperature at around 5.5 trillion kelvins , the highest temperature achieved in any physical experiments thus far. [ 5 ] This temperature is about 38% higher than the previous record of about 4 trillion kelvins, achieved in the 2010 experiments at the Brookhaven National Laboratory . [ 5 ] The ALICE results were announced at the August 13 Quark Matter 2012 conference in Washington, D.C. The quark–gluon plasma produced by these experiments approximates the conditions in the universe that existed microseconds after the Big Bang , before the matter coalesced into atoms . [ 6 ]
There are several scientific objectives of this international research program:
This experimental program follows on a decade of research at the RHIC collider at BNL and almost two decades of studies using fixed targets at the SPS at CERN and the AGS at BNL. This experimental program has already confirmed that the extreme conditions of matter necessary to reach the QGP phase can be reached. A typical temperature achieved in the QGP created is more than 100,000 times greater than in the center of the Sun, and the corresponding energy density and relativistic-matter pressure are far beyond those of ordinary nuclear matter.
High-energy phosphate can mean one of two things: the high-energy phosphate–phosphate bonds formed in compounds such as ADP and ATP, or the compounds that contain these bonds.
High-energy phosphate bonds are usually pyrophosphate bonds, acid anhydride linkages formed by taking phosphoric acid derivatives and dehydrating them. As a consequence, the hydrolysis of these bonds is exergonic under physiological conditions, releasing Gibbs free energy . [ citation needed ]
Except for PP i → 2 P i , these reactions are, in general, not allowed to go uncontrolled in the human cell but are instead coupled to other processes needing energy to drive them to completion. Thus, high-energy phosphate reactions can: [ citation needed ]
The one exception is of value because it allows a single hydrolysis, ATP + H 2 O → AMP + PP i , to effectively supply the energy of hydrolysis of two high-energy bonds, with the hydrolysis of PP i being allowed to go to completion in a separate reaction. The AMP is regenerated to ATP in two steps, with the equilibrium reaction ATP + AMP ↔ 2ADP, followed by regeneration of ATP by the usual means, oxidative phosphorylation or other energy-producing pathways such as glycolysis . [ citation needed ]
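The reaction sequence described in the paragraph above can be summarized as follows (P i denotes inorganic phosphate and PP i pyrophosphate):

$$\begin{aligned} \text{ATP} + \text{H}_2\text{O} &\longrightarrow \text{AMP} + \text{PP}_i \\ \text{PP}_i + \text{H}_2\text{O} &\longrightarrow 2\,\text{P}_i \\ \text{ATP} + \text{AMP} &\rightleftharpoons 2\,\text{ADP} \end{aligned}$$

The first two steps together consume both high-energy bonds of ATP, and the equilibrium in the third step allows the resulting AMP to re-enter the ATP pool once the ADP is rephosphorylated.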
Often, high-energy phosphate bonds are denoted by the character '~'. In this "squiggle" notation, ATP becomes A-P~P~P. The squiggle notation was invented by Fritz Albert Lipmann , who first proposed ATP as the main energy transfer molecule of the cell, in 1941. [ 4 ] Lipmann's notation emphasizes the special nature of these bonds. [ 5 ] Stryer states:
ATP is often called a high energy compound and its phosphoanhydride bonds are referred to as high-energy bonds. There is nothing special about the bonds themselves. They are high-energy bonds in the sense that free energy is released when they are hydrolyzed , for the reasons given above. Lipmann’s term "high-energy bond" and his symbol ~P (squiggle P) for a compound having a high phosphate group transfer potential are vivid, concise, and useful notations. In fact Lipmann's squiggle did much to stimulate interest in bioenergetics. [ 5 ]
The term 'high energy' with respect to these bonds can be misleading because the negative free energy change is not due directly to the breaking of the bonds themselves. The breaking of these bonds, like the breaking of most bonds, is endergonic and consumes energy rather than releasing it. The negative free energy change comes instead from the fact that the bonds formed after hydrolysis (or the phosphorylation of a residue by ATP) are lower in energy than the bonds present before hydrolysis. (This includes all of the bonds involved in the reaction, not just the phosphate bonds themselves.) This effect is due to a number of factors, including increased resonance stabilization and solvation of the products relative to the reactants, and destabilization of the reactants due to electrostatic repulsion between neighboring phosphorus atoms. [ 6 ]
High-entropy-alloy nanoparticles (HEA-NPs) are nanoparticles having five or more elements alloyed in a single-phase solid solution structure. [ 1 ] HEA-NPs possess a wide range of compositional library, distinct alloy mixing structure, and nanoscale size effect , giving them huge potential in catalysis, energy, environmental, and biomedical applications.
HEA-NPs are a structural analog to bulk high-entropy alloys (HEAs), [ 2 ] [ 3 ] but synthesized at the nanoscale. The formation of HEAs typically requires high temperature for multi-element mixing; however, high temperature acts against nano-material synthesis due to high-temperature-induced structure aggregation and surface reconstruction.
In 2018, HEA-NPs were first synthesized by carbothermal shock synthesis. [ 1 ] (The material and technology are patented. [ 4 ] [ 5 ] ) Carbothermal shock employs rapid high-temperature heating (e.g. 2000 K in 55 ms) to enable the non-equilibrium synthesis of HEA-NPs with uniform size and homogeneous mixing, despite containing immiscible combinations. Although rapid quenching is desired to maintain the solid-solution state, too fast a cooling rate can hinder structural ordering. Therefore, the cooling rate should be chosen carefully based on the temperature–time–transformation diagram. [ 6 ]
Another guide that can be used for the synthesis is the Ellingham diagram . Elements at the top of the diagram are easily reduced and tend to form HEA-NPs, while elements at the bottom of the diagram tend to form high-entropy oxide NPs. [ 6 ]
Later, other similar non-equilibrium "shock" methods were also introduced to synthesize HEA-NPs and other types of high entropy nanostructures. [ 7 ] [ 8 ] [ 9 ] Recently, a low temperature synthesis through simultaneous multi-cation exchange (below 900 K) has been demonstrated for high-entropy metal sulfide NPs, which may be applied to metal selenides, tellurides, phosphides, and halides as well. [ 10 ]
In 2024, a study showed that induction plasma can be used as a one-step method enabling the continuous synthesis of HEA-NPs directly from elemental metal powders via in-flight alloying. [ 11 ]
Due to the random distribution of elements in HEA-NPs, in addition to conventional characterization methods , other methods with higher resolution are needed for their structural analysis. To analyze the random mixing of multiple elements, atomic electron tomography can be used, which provides positional precision of 21 pm and identification of atoms by periods. [ 12 ] Furthermore, X-ray absorption spectroscopy can give information on local coordination environments, while extended X-ray absorption fine structure can be used to get coordination numbers and bond distances. [ 13 ] Combined with hard X-ray photoelectron spectroscopy or X-ray absorption near-edge structure , these analyses can be used to explore structure–property relationships in HEA-NPs. [ 14 ] In addition, due to the immense number of possibilities of compositions and surfaces (i.e., terrace, edge, and corner) available for HEA-NPs, simulations such as density functional theory calculations are also popularly used for their analysis. [ 15 ]
HEA-NPs have a large compositional library, which enables tunability in chemical composition, structure, and associated properties. In HEA-NPs, the same type of atoms can have different local density of states because their neighboring atom compositions can be different. [ 14 ] Such variations in local environment lead to diverse and tunable adsorption energy levels, which can be beneficial to satisfy the Sabatier principle especially for complex reactions. [ 6 ]
In addition, owing to the high entropy structure, HEA-NPs typically show improved structural stability. One suggested mechanism for the enhanced structural stability is through prevention of phase separation due to lattice distortions from different sized elements acting as diffusion barriers. [ 16 ] With the above merits, HEA-NPs have been used as high-performance catalysts for both thermochemical and electrochemical reactions, such as ammonia oxidation, decomposition, and water splitting. [ 1 ] [ 17 ] [ 18 ] [ 19 ] High throughput and data mining approaches are being implemented toward accelerated materials discovery in the multi-dimensional space of HEA-NPs. [ 20 ] [ 21 ] | https://en.wikipedia.org/wiki/High-entropy-alloy_nanoparticles |
High-entropy alloys ( HEAs ) are alloys that are formed by mixing equal or relatively large proportions of (usually) five or more elements. Prior to the synthesis of these substances, typical metal alloys comprised one or two major components with smaller amounts of other elements. For example, additional elements can be added to iron to improve its properties, thereby creating an iron-based alloy, but typically in fairly low proportions, such as the proportions of carbon, manganese, and others in various steels. [ 2 ] Hence, high-entropy alloys are a novel class of materials. [ 1 ] [ 2 ] The term "high-entropy alloys" was coined by Taiwanese scientist Jien-Wei Yeh [ 3 ] because the entropy increase of mixing is substantially higher when there is a larger number of elements in the mix and their proportions are more nearly equal. [ 4 ] Alternative names, such as multi-component alloys, compositionally complex alloys and multi-principal-element alloys, have also been suggested by other researchers. [ 5 ] [ 6 ]
Compositionally complex alloys (CCAs) are an up-and-coming group of materials due to their unique mechanical properties. They have high strength and toughness, can operate at higher temperatures than current alloys, and have superior ductility. Ductility is important because it quantifies the permanent deformation a material can withstand before failure, a key consideration in designing safe and reliable materials. Owing to these enhanced properties, CCAs show promise in extreme environments, which present significant challenges for a material to perform its intended function within designated safety limits. CCAs can be used in several applications such as aerospace propulsion systems, land-based gas turbines, heat exchangers, and the chemical process industry.
These alloys are currently the focus of significant attention in materials science and engineering because they have potentially desirable properties. [ 2 ] Furthermore, research indicates that some HEAs have considerably better strength-to-weight ratios , with a higher degree of fracture resistance , tensile strength , and corrosion and oxidation resistance than conventional alloys. [ 7 ] [ 8 ] [ 9 ] Although HEAs have been studied since the 1980s, research substantially accelerated in the 2010s. [ 2 ] [ 6 ] [ 10 ] [ 11 ] [ 12 ] [ 13 ] [ 14 ]
Although HEAs were considered from a theoretical standpoint as early as 1981 [ 15 ] and 1996, [ 16 ] and throughout the 1980s, it was in 1995 that Taiwanese scientist Jien-Wei Yeh came up with his idea for ways of actually creating high-entropy alloys, while driving through the Hsinchu, Taiwan, countryside. Soon after, he began creating these special alloys in his lab, in what was the only region researching these alloys for over a decade. Most countries in Europe, the United States, and other parts of the world lagged behind in the development of HEAs. Significant research interest from other countries did not develop until after 2004, when Yeh and his team of scientists built the world's first high-entropy alloys to withstand extremely high temperatures and pressures. [ 17 ] Potential applications include use in state-of-the-art race cars, spacecraft, submarines, nuclear reactors, [ 18 ] jet aircraft, nuclear weapons, long-range hypersonic missiles, and so on. [ 19 ] [ 20 ]
A few months later, after the publication of Yeh's paper, another independent paper on high-entropy alloys was published by a team from the United Kingdom composed of Brian Cantor , I. T. H. Chang, P. Knight, and A. J. B. Vincent. Yeh was also the first to coin the term "high-entropy alloy" when he attributed the high configurational entropy as the mechanism stabilizing the solid solution phase. [ 21 ] Cantor did the first work in the field in the late 1970s and early 1980s, though he did not publish until 2004. Unaware of Yeh's work, he did not describe his new materials as "high-entropy" alloys, preferring the term "multicomponent alloys". The base alloy he developed, equiatomic CrMnFeCoNi, has been the subject of considerable work in the field, and is known as the "Cantor alloy", with similar derivatives known as Cantor alloys. [ 22 ] It was one of the first HEAs to be reported to form a single-phase FCC ( face-centred cubic crystal structure ) solid solution. [ 23 ]
Before the classification of high-entropy alloys and multi-component systems as a separate class of materials, nuclear scientists had already studied a system that can now be classified as a high-entropy alloy: within nuclear fuels Mo-Pd-Rh-Ru-Tc particles form at grain boundaries and at fission gas bubbles. [ 24 ] Understanding the behavior of these "five-metal particles" was of specific interest to the medical industry because Tc-99m is an important medical imaging isotope.
There is no universally agreed-upon definition of an HEA. Yeh originally defined HEAs as alloys containing at least 5 elements with concentrations between 5 and 35 atomic percent. [ 21 ] Later research, however, suggested that this definition could be expanded. Otto et al. suggested that only alloys that form a solid solution with no intermetallic phases should be considered true high-entropy alloys, because the formation of ordered phases decreases the entropy of the system. [ 25 ] Some authors have described four-component alloys as high-entropy alloys, [ 26 ] while others have suggested that alloys meeting the other requirements of HEAs, but with only 2–4 elements [ 27 ] or a mixing entropy between R and 1.5 R , [ 28 ] should be considered "medium-entropy" alloys. [ 27 ]
Due to their multi-component composition, HEAs exhibit basic effects different from those of traditional alloys based on only one or two elements. These effects, called "the four core effects of HEAs", underlie much of the particular microstructure and properties of HEAs. [ 29 ] The four core effects are high entropy, severe lattice distortion, sluggish diffusion, and cocktail effects.
The high-entropy effect is the most important effect because it can enhance the formation of solid solutions and makes the microstructure much simpler than expected. Multi-component alloys were previously expected to have many different interactions among their elements and thus to form many different kinds of binary, ternary, and quaternary compounds and/or segregated phases; such alloys would possess complicated structures and be brittle by nature. This expectation in fact neglects the effect of high entropy. Indeed, according to the second law of thermodynamics, the state having the lowest Gibbs free energy of mixing, $\Delta G_{mix} = \Delta H_{mix} - T\Delta S_{mix}$, among all possible states is the equilibrium state. Elemental phases based on one major element have a small enthalpy of mixing ( $\Delta H_{mix}$ ) and a small entropy of mixing ( $\Delta S_{mix}$ ), and compound phases have large $\Delta H_{mix}$ but small $\Delta S_{mix}$ ; on the other hand, solid-solution phases containing multiple elements have medium $\Delta H_{mix}$ and high $\Delta S_{mix}$ . As a result, solid-solution phases become highly competitive for the equilibrium state, and more stable especially at high temperatures. [ 30 ]
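To put a number on the entropy term (a back-of-the-envelope estimate, using the ideal mixing entropy of 1.61R for a five-component equiatomic alloy derived in the phase-formation section below, and an assumed temperature of 2000 K near typical melting points):

$$-T\Delta S_{mix} \approx -(2000\ \mathrm{K}) \times 1.61 \times (8.314\ \mathrm{J\,mol^{-1}\,K^{-1}}) \approx -26.8\ \mathrm{kJ/mol},$$

a stabilization of the disordered solid solution comparable in magnitude to the formation enthalpies of many intermetallic compounds, which is why the entropy term can tip the balance at high temperature.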
Because solid-solution phases with multiple principal elements are usually found in HEAs, the conventional crystal-structure concept is extended from a one- or two-element basis to a multi-element basis. Every atom is surrounded by different kinds of atoms and thus suffers lattice strain and stress, mainly due to the atomic size differences. Besides the atomic size difference, differences in bonding energy and crystal-structure tendency among the constituent elements are also believed to cause even higher lattice distortion, because non-symmetrical bonds and electronic structures exist between an atom and its first neighbours. This distortion is believed to be the source of some of the mechanical, thermal, electrical, optical, and chemical behaviour of HEAs. Overall lattice distortion is thus more severe than in traditional alloys, in which most matrix (solvent) atoms are surrounded by atoms of the same kind. [ 30 ]
As explained in the previous section, an HEA mainly contains a random solid solution and/or an ordered solid solution, whose matrices can be regarded as whole-solute matrices. In these matrices, diffusion vacancies are surrounded by different element atoms and thus have a site-specific lattice potential energy (LPE). The large fluctuation of LPE between lattice sites means that low-LPE sites serve as traps and hinder atomic diffusion. [ 31 ] This leads to the sluggish diffusion effect.
The cocktail effect is used to emphasize the enhancement of the alloy's properties by at least five major elements. Because HEAs may have one or more phases, the overall properties arise from the combined contribution of the constituent phases. Besides, each phase is a solid solution and can be viewed as a composite whose properties come not only from the basic properties of the constituents but, by the mixture rule, also from the interactions among all the constituents and from severe lattice distortion. The cocktail effect takes into account the effects of the atomic-scale multicomponent phases and of the multiple composite phases at the micro scale. [ 32 ]
In conventional alloy design, one primary element such as iron, copper, or aluminum is chosen for its properties. Then, small amounts of additional elements are added to improve or add properties. Even among binary alloy systems, there are few common cases of both elements being used in nearly equal proportions, such as Pb–Sn solders. Therefore, much is known from experimental results about phases near the edges of binary phase diagrams and the corners of ternary phase diagrams, and much less is known about phases near the centers. In higher-order (4+ components) systems that cannot be easily represented on a two-dimensional phase diagram, virtually nothing is known. [ 22 ]
Early research on HEAs focused on forming single-phase solid solutions, which could maximize the major features of high-entropy alloys: high entropy, sluggish diffusion, severe lattice distortion, and cocktail effects. It has been pointed out that most successful materials need some secondary phase to strengthen the material, [ 33 ] [ 34 ] and that any HEA used in application will have a multiphase microstructure. [ 35 ] However, it is still important to form single-phase material, since a single-phase sample is essential for understanding the underlying mechanisms of HEAs and for testing specific microstructures to find structures producing special properties. [ 35 ]
Gibbs' phase rule, $F = C - P + 2$, can be used to determine an upper bound on the number of phases that will form in an equilibrium system. In his 2004 paper, Cantor created a 20-component alloy containing 5% each of Mn, Cr, Fe, Co, Ni, Cu, Ag, W, Mo, Nb, Al, Cd, Sn, Pb, Bi, Zn, Ge, Si, Sb, and Mg. At constant pressure, the phase rule would allow for up to 21 phases at equilibrium, but far fewer actually formed. The predominant phase was a face-centered cubic solid-solution phase, containing mainly Cr, Mn, Fe, Co, and Ni. From that result, the CrMnFeCoNi alloy, which forms only a solid-solution phase, was developed. [ 22 ]
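The count of 21 phases follows directly: fixing the pressure removes one degree of freedom, giving the condensed-system form of the phase rule, and the number of phases is maximal when no degrees of freedom remain:

$$F = C - P + 1, \qquad F \geq 0 \;\Rightarrow\; P \leq C + 1 = 20 + 1 = 21.$$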
The Hume-Rothery rules have historically been applied to determine whether a mixture will form a solid solution. Research into high-entropy alloys has found that in multi-component systems, these rules tend to be relaxed slightly. In particular, the rule that solvent and solute elements must have the same crystal structure does not seem to apply, as Cr, Mn, Fe, Co, and Ni have three different crystal structures as pure elements (and when the elements are present in equal concentrations, there can be no meaningful distinction between "solvent" and "solute" elements). [ 25 ]
Phase formation in HEAs is determined by thermodynamics and geometry. When phase formation is controlled by thermodynamics and kinetics are ignored, the Gibbs free energy of mixing $\Delta G_{mix}$ is defined as:

$$\Delta G_{mix} = \Delta H_{mix} - T\Delta S_{mix}$$

where $\Delta H_{mix}$ is the enthalpy of mixing, $T$ is the temperature, and $\Delta S_{mix}$ is the entropy of mixing. $\Delta H_{mix}$ and $T\Delta S_{mix}$ continuously compete to determine the phase of the HEA material. Other important factors include the atomic size of each element within the HEA, where the Hume-Rothery rules and Akihisa Inoue 's three empirical rules for bulk metallic glass play a role.
Disordered solids form when the atomic size difference is small and $\Delta G_{mix}$ is not negative enough. This is because every atom is about the same size and can easily substitute for another, and $\Delta H_{mix}$ is not low enough to form a compound. More-ordered HEAs form as the size difference between the elements gets larger and $\Delta G_{mix}$ gets more negative. When the size difference of each individual element becomes too large, bulk metallic glasses form instead of HEAs. High temperature and high $\Delta S_{mix}$ also promote the formation of HEAs because they significantly lower $\Delta G_{mix}$ , making the HEA easier to form because it is more stable than other phases such as intermetallics. [ 36 ]
The multi-component alloys that Yeh developed also consisted mostly or entirely of solid-solution phases, contrary to what had been expected from earlier work in multi-component systems, primarily in the field of metallic glasses. [ 21 ] [ 37 ] Yeh attributed this result to the high configurational, or mixing, entropy of a random solid solution containing numerous elements. The mixing entropy for a random ideal solid solution can be calculated by:

$$\Delta S_{mix} = -R \sum_{i=1}^{N} c_i \ln c_i$$

where $R$ is the ideal gas constant, $N$ is the number of components, and $c_i$ is the atomic fraction of component $i$ . From this it can be seen that alloys in which the components are present in equal proportions will have the highest entropy, and adding additional elements will increase the entropy. A five-component, equiatomic alloy will have a mixing entropy of 1.61 R . [ 21 ] [ 38 ]
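As a concrete check on this formula, here is a minimal Python sketch using only the standard library; the compositions are illustrative.

```python
# Ideal configurational entropy of mixing: dS_mix = -R * sum(c_i * ln c_i).
import math

R = 8.314  # ideal gas constant, J/(mol*K)

def mixing_entropy(fractions):
    """Return dS_mix in units of R for the given atomic fractions."""
    assert abs(sum(fractions) - 1.0) < 1e-9, "fractions must sum to 1"
    return -sum(c * math.log(c) for c in fractions if c > 0)

# Equiatomic alloys: entropy grows as ln(N) with the number of components,
# reproducing the 1.61R figure for a five-component equiatomic alloy.
for n in (2, 3, 5, 6):
    s = mixing_entropy([1.0 / n] * n)
    print(f"{n} components (equiatomic): dS_mix = {s:.2f} R "
          f"= {s * R:.1f} J/(mol*K)")

# A non-equiatomic composition has lower entropy than the equiatomic one.
print(f"illustrative 70/10/10/10 mix: {mixing_entropy([0.7, 0.1, 0.1, 0.1]):.2f} R")
```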
However, entropy alone is not sufficient to stabilize the solid-solution phase in every system. The enthalpy of mixing ( $\Delta H_{mix}$ ) must also be taken into account. This can be calculated using:

$$\Delta H_{mix} = \sum_{i<j}^{N} 4\,\Delta H_{AB}^{mix}\, c_i c_j$$

where $\Delta H_{AB}^{mix}$ is the binary enthalpy of mixing for elements A and B, and the sum runs over all element pairs. [ 39 ] Zhang et al. found, empirically, that in order to form a complete solid solution, $\Delta H_{mix}$ should be between −10 and 5 kJ/mol. [ 38 ] In addition, Otto et al. found that if the alloy contains any pair of elements that tend to form ordered compounds in their binary system, a multi-component alloy containing them is also likely to form ordered compounds. [ 25 ]
Both of these thermodynamic parameters can be combined into a single, unitless parameter Ω:

$$\Omega = \frac{T_m \Delta S_{mix}}{\left| \Delta H_{mix} \right|}$$

where $T_m$ is the average melting point of the elements in the alloy. Ω should be greater than or equal to 1.0 (or 1.1 in practice), meaning that entropy dominates over enthalpy at the point of solidification, which promotes solid-solution development. [ 40 ] [ 41 ]
Ω can be optimized by adjusting the element composition. Waite J. C. has proposed an optimisation algorithm to maximize Ω and demonstrated that a slight change in composition can cause a large increase in Ω. [ 35 ]
The atomic radii of the components must also be similar in order to form a solid solution. Zhang et al. proposed a parameter δ, the average lattice mismatch, representing the difference in atomic radii:

$$\delta = \sqrt{\sum_{i=1}^{N} c_i \left( 1 - \frac{r_i}{\bar{r}} \right)^2}$$

where $r_i$ is the atomic radius of element $i$ and $\bar{r} = \sum_{i=1}^{N} c_i r_i$ . Formation of a solid-solution phase requires δ ≤ 6.6%, an empirical threshold based on experiments on bulk metallic glasses (BMGs). [ 35 ] Exceptions are found on both sides of 6.6%: some alloys with 4% < δ ≤ 6.6% do form intermetallics, [ 38 ] [ 40 ] and solid-solution phases do appear in alloys with δ > 9%. [ 41 ]
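The two screening parameters can be sketched in a few lines of Python, here applied to the equiatomic Cantor alloy. The Goldschmidt atomic radii below are approximate literature values, and the melting-point and mixing-enthalpy figures fed to Ω are placeholders to show the calculation, not assessed data.

```python
# Empirical solid-solution screening parameters for HEAs:
#   delta = sqrt( sum c_i * (1 - r_i / r_bar)^2 )   (atomic-size mismatch)
#   omega = T_m * dS_mix / |dH_mix|                 (entropy vs. enthalpy)
import math

# Approximate Goldschmidt atomic radii in angstroms (literature values).
radii = {"Cr": 1.28, "Mn": 1.27, "Fe": 1.26, "Co": 1.25, "Ni": 1.24}
c = {el: 0.2 for el in radii}  # equiatomic CrMnFeCoNi

r_bar = sum(c[el] * radii[el] for el in radii)
delta = math.sqrt(sum(c[el] * (1 - radii[el] / r_bar) ** 2 for el in radii))
print(f"delta = {delta * 100:.2f}%  (criterion: <= 6.6%)")

R = 8.314                 # J/(mol*K)
dS_mix = R * math.log(5)  # equiatomic, five components: 1.61 R
T_m = 1700.0              # K, placeholder average melting point
dH_mix = -4.0e3           # J/mol, placeholder mixing enthalpy
omega = T_m * dS_mix / abs(dH_mix)
print(f"omega = {omega:.1f}  (criterion: >= 1.1)")
```

With these inputs the sketch gives δ of roughly 1%, well inside the δ ≤ 6.6% window, consistent with the single-phase solid solution observed for CrMnFeCoNi.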
The multi-element lattice in HEAs is highly distorted because all elements are solute atoms and their atomic radii differ. δ helps evaluate the lattice strain caused by the disordered crystal structure. When the atomic size difference (δ) is sufficiently large, the distorted lattice collapses and a new phase, such as an amorphous structure, forms. The lattice distortion effect can result in solid-solution hardening. [ 2 ]
For those alloys that do form solid solutions, an additional empirical parameter has been proposed to predict the crystal structure that will form. HEAs are usually FCC (face-centred cubic), BCC (body-centred cubic), HCP (hexagonal close-packed), or a mixture of these structures, and each structure has its own advantages and disadvantages in terms of mechanical properties. There are many methods to predict the structure of an HEA. The valence electron concentration (VEC) can be used to predict the stability of the HEA structure. The stability of the physical properties of the HEA is closely associated with electron concentration (this is associated with the electron-concentration rule from the Hume-Rothery rules ).
When an HEA is made by casting, only FCC structures form when the VEC is larger than 8. When the VEC is between 6.87 and 8, the HEA is a mixture of BCC and FCC, and when the VEC is below 6.87, the material is BCC. In order to produce a certain crystal structure of HEA, certain phase-stabilizing elements can be added: experimentally, adding elements such as Al and Cr can help the formation of BCC HEAs, while Ni and Co can help form FCC HEAs. [ 36 ] A sketch of this rule is given below.
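The following Python sketch implements the VEC rule described above; the per-element valence electron counts are the values conventionally used with this rule, and the thresholds are those quoted in the text (boundary cases at VEC = 8, such as CrMnFeCoNi, are assigned FCC here, matching the observed structure).

```python
# Predict the as-cast structure of an HEA from the composition-weighted
# valence electron concentration (VEC).
VEC = {"Al": 3, "Ti": 4, "V": 5, "Cr": 6, "Mn": 7,
       "Fe": 8, "Co": 9, "Ni": 10, "Cu": 11}

def predict_structure(composition):
    """composition: dict of element -> atomic fraction (must sum to 1)."""
    vec = sum(frac * VEC[el] for el, frac in composition.items())
    if vec >= 8:
        phase = "FCC"
    elif vec > 6.87:
        phase = "FCC + BCC"
    else:
        phase = "BCC"
    return vec, phase

cantor = {el: 0.2 for el in ("Cr", "Mn", "Fe", "Co", "Ni")}
vec, phase = predict_structure(cantor)
print(f"CrMnFeCoNi: VEC = {vec:.2f} -> {phase}")  # VEC = 8.00 -> FCC

# Substituting the BCC stabilizer Al lowers the VEC toward the mixed region.
al_hea = {"Al": 0.2, "Cr": 0.2, "Fe": 0.2, "Co": 0.2, "Ni": 0.2}
vec, phase = predict_structure(al_hea)
print(f"AlCrFeCoNi: VEC = {vec:.2f} -> {phase}")  # VEC = 7.20 -> FCC + BCC
```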
High-entropy alloys are difficult to manufacture using extant techniques (as of 2018), and typically require both expensive materials and specialty processing techniques. [ 42 ]
High-entropy alloys are mostly produced using methods that depend on the phase of the starting metals: whether they are combined in the liquid, solid, or gas state.
Additive manufacturing can produce alloys with a different microstructure, [ 47 ] [ 18 ] potentially increasing strength (to 1.3 gigapascals) as well as increasing ductility. [ 48 ]
Other techniques include thermal spray , laser cladding , and electrodeposition . [ 40 ] [ 49 ]
The atomic-scale complexity presents additional challenges to computational modelling of high-entropy alloys. Thermodynamic modeling using the CALPHAD method requires extrapolating from binary and ternary systems. [ 50 ] Most commercial thermodynamic databases are designed for, and may only be valid for, alloys consisting primarily of a single element. Thus, they require experimental verification or additional ab initio calculations such as density functional theory (DFT). [ 51 ] However, DFT modeling of complex, random alloys has its own challenges, as the method requires defining a fixed-size cell, which can introduce non-random periodicity. This is commonly overcome using the method of "special quasirandom structures", designed to most closely approximate the radial distribution function of a random system, [ 52 ] combined with the Vienna Ab initio Simulation Package . Using this method, it has been shown that results of a four-component equiatomic alloy begins to converge with a cell as small as 24 atoms. [ 53 ] [ 54 ] The exact muffin-tin orbital method with the coherent potential approximation (CPA) has also been employed to model HEAs. [ 53 ] [ 55 ]
Another approach based on the KKR-CPA formulation of DFT is the S ( 2 ) {\displaystyle S^{(2)}} theory for multicomponent alloys, [ 56 ] [ 57 ] which evaluates the two-point correlation function, an atomic short-range order parameter, ab initio. The S ( 2 ) {\displaystyle S^{(2)}} theory has been used with success to study the Cantor alloy CrMnFeCoNi and its derivatives, [ 58 ] the refractory HEAs, [ 59 ] [ 60 ] as well as to examine the influence of a material's magnetic state on atomic ordering tendencies. [ 61 ]
Other techniques include the 'multiple randomly populated supercell' approach, which better describes the random population of a true solid solution (although this is far more computationally demanding). [ 62 ] This method has also been used to model glassy and amorphous systems without a crystal lattice (including bulk metallic glasses ). [ 63 ] [ 64 ]
Further, modeling techniques are being used to suggest new HEAs for targeted applications. Given the combinatorial explosion of candidate compositions, the use of such modeling is necessary for targeted and rapid HEA discovery and application.
Simulations have highlighted the preference for local ordering in some high-entropy alloys and, when the enthalpies of formation are combined with terms for configurational entropy , transition temperatures between order and disorder can be estimated, [ 65 ] allowing one to understand when effects like age hardening and degradation of an alloy's mechanical properties may be an issue.
The transition temperature to reach the solid solution (miscibility gap) was recently addressed with the Lederer-Toher-Vecchio-Curtarolo thermodynamic model. [ 66 ]
CALPHAD (CALculation of PHAse Diagrams) is a method for building reliable thermodynamic databases, and it can be an effective tool when searching for single-phase HEAs. However, this method can be limited, since it needs to extrapolate from known binary or ternary phase diagrams; it also does not take into account the process of material synthesis and can only predict equilibrium phases. [ 67 ] The phase diagrams of HEAs can be explored experimentally via high-throughput experimentation (HTE). This method rapidly produces hundreds of samples, allowing the researcher to explore a region of composition in one step, and thus can be used to quickly map out the phase diagram of an HEA. [ 68 ] Another way to predict the phase of an HEA is via enthalpy concentration. This method accounts for specific combinations of single-phase HEAs and rejects similar combinations that have been shown not to be single-phase. It uses first-principles high-throughput density functional theory to calculate the enthalpies, thus requiring no experimental input, and it has shown excellent agreement with reported experimental results. [ 69 ]
The crystal structure of HEAs has been found to be the dominant factor in determining the mechanical properties. BCC HEAs typically have high yield strength and low ductility and vice versa for FCC HEAs. Some alloys have been particularly noted for their exceptional mechanical properties. A refractory alloy, VNbMoTaW maintains a high yield strength (>600 MPa (87 ksi )) even at a temperature of 1,400 °C (2,550 °F), significantly outperforming conventional superalloys such as Inconel 718. However, room temperature ductility is poor, less is known about other important high temperature properties such as creep resistance, and the density of the alloy is higher than conventional nickel-based superalloys. [ 40 ]
CrMnFeCoNi has been found to have exceptional low-temperature mechanical properties and high fracture toughness, with both ductility and yield strength increasing as the test temperature is reduced from room temperature to 77 K (−321.1 °F). This was attributed to the onset of nanoscale twin-boundary formation, an additional deformation mechanism that is not in effect at higher temperatures. At ultralow temperatures, inhomogeneous deformation by serrations has been reported. [ 70 ] As such, it may have applications as a structural material in low-temperature applications or, because of its high toughness, as an energy-absorbing material. [ 71 ] However, later research showed that lower-entropy alloys with fewer elements or non-equiatomic compositions may have higher strength [ 72 ] or higher toughness. [ 73 ] No ductile-to-brittle transition was observed in the BCC AlCrFeCoNi alloy in tests as low as 77 K. [ 40 ]
Al 0.5 CrFeCoNiCu was found to have a high fatigue life and endurance limit , possibly exceeding some conventional steel and titanium alloys, but there was significant variability in the results. This suggests the material is very sensitive to defects introduced during manufacturing such as aluminum oxide particles and microcracks. [ 74 ]
A single-phase nanocrystalline Al 20 Li 20 Mg 10 Sc 20 Ti 30 alloy was developed with a density of 2.67 g/cm³ and a microhardness of 4.9–5.8 GPa, which would give it an estimated strength-to-weight ratio comparable to ceramic materials such as silicon carbide, [ 12 ] though the high cost of scandium limits the possible uses. [ 75 ]
Rather than bulk HEAs, small-scale HEA samples (e.g. NbMoTaW micro-pillars) exhibit extraordinarily high yield strengths of 4–10 GPa, one order of magnitude higher than that of the bulk form, and their ductility is considerably improved. Additionally, such HEA films show substantially enhanced stability under high-temperature, long-duration conditions (at 1,100 °C for 3 days). Small-scale HEAs combining these properties represent a new class of materials for small-dimension devices, potentially for high-stress and high-temperature applications. [ 46 ] [ 26 ]
In 2018, new types of HEAs based on the careful placement of ordered oxygen complexes, a type of ordered interstitial complex, have been produced. In particular, alloys of titanium , hafnium , and zirconium have been shown to have enhanced work hardening and ductility characteristics. [ 76 ]
Bala et al. studied the effects of high-temperature exposure on the microstructure and mechanical properties of the Al 5 Ti 5 Co 35 Ni 35 Fe 20 high-entropy alloy. After hot rolling and air-quenching, the alloy was exposed to a temperature range of 650–900 °C for 7 days. The air-quenching caused γ′ precipitation distributed uniformly throughout the microstructure. The high-temperature exposure resulted in growth of the γ′ particles, and at temperatures higher than 700 °C, additional precipitation of γ′ was observed. The best mechanical properties were obtained after exposure to 650 °C, with a yield strength of 1050 MPa and an ultimate tensile strength of 1370 MPa. Increasing the temperature further decreased the mechanical properties. [ 77 ]
Liu et al. studied a series of quaternary non-equimolar high-entropy alloys Al x Cr 15 Co 15 Ni 70−x with x ranging from 0 to 35%. The lattice structure transitioned from FCC to BCC as the Al content increased, and with Al content in the range of 12.5 to 19.3 at%, the γ′ phase formed and strengthened the alloy at both room and elevated temperatures. With Al content at 19.3 at%, a lamellar eutectic structure formed, composed of γ′ and B2 phases. Due to the high γ′ phase fraction of 70 vol%, the alloy had a compressive yield strength of 925 MPa and a fracture strain of 29% at room temperature, and retained high yield strengths of 789, 546, and 129 MPa at 973, 1123, and 1273 K. [ 78 ]
In general, refractory high-entropy alloys have exceptional strength at elevated temperatures but are brittle at room temperature. The TiZrNbHfTa alloy is an exception, with plasticity of over 50% at room temperature; however, its strength at high temperature is insufficient. With the aim of increasing high-temperature strength, Chien-Chuang et al. modified the composition of TiZrNbHfTa and studied the mechanical properties of the refractory high-entropy alloys TiZrMoHfTa and TiZrNbMoHfTa. Both alloys have a simple BCC structure. Their experiments showed that TiZrNbMoHfTa had a yield strength six times greater than that of TiZrMoHfTa at 1200 °C, with a fracture strain of 12% retained in the alloy at room temperature. [ 79 ]
CrFeCoNiCu is an FCC alloy that was found to be paramagnetic, but upon adding titanium it forms a complex microstructure consisting of an FCC solid solution, amorphous regions and nanoparticles of Laves phase, resulting in superparamagnetic behavior. [ 80 ] High magnetic coercivity has been measured in an FeMnNiCoBi alloy. [ 49 ] Several magnetic high-entropy alloys exhibit promising soft-magnetic behavior along with strong mechanical properties. [ 81 ] Superconductivity was observed in TiZrNbHfTa alloys, with transition temperatures between 5.0 and 7.3 K. [ 82 ]
High-entropy alloys are promising for electronics due to their thermal stability and electrical conductivity. [ 83 ] They are being used for high-performance applications like power electronics , heat spreaders , sensors , and inductors , and show potential for efficient conductive materials in advanced components. [ 84 ]
Since high-entropy alloys are likely to be used in high-temperature environments, their thermal stability is very important for HEA design. Nano-crystallinity is especially critical, since an extra driving force exists for grain growth. Two aspects need to be considered for nano-crystalline HEAs: the stability of the phases formed, which is dominated by thermodynamics (see alloy design), and the retention of nanocrystallinity. [ 85 ] The stability of nano-crystalline HEAs is controlled by many factors, including grain-boundary diffusion, the presence of oxides, etc.
The high concentrations of multiple elements leads to slow diffusion . The activation energy for diffusion was found to be higher for several elements in CrMnFeCoNi than in pure metals and stainless steels, leading to lower diffusion coefficients. [ 86 ]
Some equiatomic multicomponent alloys have also been reported to show good resistance to damage by energetic radiation. [ 87 ]
High-entropy alloys are being investigated for hydrogen storage applications. [ 88 ] [ 89 ] Some high-entropy alloys such as TiZrCrMnFeNi show fast and reversible hydrogen storage at room temperature with good storage capacity for commercial applications. [ 90 ] The high-entropy materials have high potential for a wider range of energy applications, particularly in the form of high-entropy ceramics. [ 91 ] [ 92 ]
The development of high-entropy photocatalysts, initiated in 2020, is one such application; they have been employed for hydrogen production, oxygen production, carbon dioxide conversion and plastic-waste conversion. [ 93 ]
A number of tungsten-based HEAs exhibit self-sharpening, making them possible candidate materials for kinetic energy penetrators and other penetration workloads. Examples include equimolar WMoFeNi and W30Mo7FeNi (W30-Mo7-Fe31.5-Ni31.5, in atomic %). The reason for self-sharpening is that the ultra-strong μ phase induces a high local strain gradient, which generates adiabatic shear bands . [ 94 ]
Most HEAs are prepared by vacuum arc melting, which yields larger grain sizes at the μm level. As a result, studies of high-performance high-entropy alloy films (HEAFs) have attracted growing attention from materials scientists. Compared to the preparation of bulk HEA materials, HEAFs are easily achieved by rapid solidification, with a faster cooling rate of about 10⁹ K/s. [ 95 ] A rapid cooling rate can limit the diffusion of the constituent elements, inhibit phase separation, favor the formation of a single solid-solution phase or even an amorphous structure, [ 96 ] and yield a smaller grain size (nm) than in bulk HEA materials (μm). So far, many techniques have been used to fabricate HEAFs, such as spraying, laser cladding, electrodeposition, and magnetron sputtering.
Magnetron sputtering is the most-used method to fabricate HEAFs. An inert gas (Ar) is introduced into a vacuum chamber and accelerated by a high voltage applied between the substrate and the target. [ 97 ] As a result, the target is bombarded by the energetic ions, some atoms are ejected from the target surface, and these atoms reach the substrate and condense on it to form a thin film. [ 97 ] The composition of each constituent element in an HEAF can be controlled by a given target and by operational parameters such as power, gas flow, bias, and the working distance between substrate and target during film deposition. Oxide, nitride, and carbide films can also be readily prepared by introducing reactive gases such as O 2 , N 2 , and C 2 H 2 .
Until now, three routes have been investigated for preparing HEAFs via magnetron sputtering. [ 96 ] First, a single HEA target can be used. With the help of a pre-sputtering step, the elemental contents of the as-deposited films are approximately equal to those of the original target alloy, even though each element has a different sputtering yield. [ 96 ] However, preparing a single HEA target is very time-consuming and difficult. For example, it is hard to produce an equiatomic CoCrFeMnNi alloy target due to the high evaporation rate of Mn; the additional amount of Mn needed to keep each element equiatomic is hard to predict and calculate. Second, HEAFs can be synthesized by co-sputtering deposition with several metal targets. [ 96 ] A wide range of chemical compositions can be reached by varying the processing conditions such as power, bias, and gas flow. Based on published work, many researchers have doped different quantities of elements such as Al, Mo, V, Nb, Ti, and Nd into the CrMnFeCoNi system, which can modify the chemical composition and structure of the alloy and improve its mechanical properties; these HEAFs were prepared by co-sputtering deposition from a single CrMnFeCoNi alloy target together with Al/Ti/V/Mo/Nb targets. [ 98 ] [ 99 ] [ 100 ] [ 101 ] [ 102 ] However, trial and error is needed to obtain the desired composition. Take Al x CrMnFeCoNi films as an example: [ 98 ] the crystalline structure changed from a single FCC phase for x = 0.07 to duplex FCC + BCC phases for x = 0.3, and eventually to a single BCC phase for x = 1.0. The whole process was manipulated by varying the powers applied to the CoCrFeMnNi and Al targets to obtain the desired compositions, showing a phase transition from FCC to BCC with increasing Al content. The third route is via powder targets. [ 96 ] The compositions of the target are simply adjusted by altering the weight fractions of the individual powders, but these powders must be well mixed to ensure homogeneity. AlCrFeCoNiCu films were successfully deposited by sputtering pressed powder targets. [ 103 ]
Recently, more researchers have been investigating the mechanical properties of HEAFs with nitrogen incorporation, owing to superior properties such as high hardness. As mentioned above, nitride-based HEAFs can be synthesized via magnetron sputtering by introducing N 2 and Ar gases into the vacuum chamber. Different amounts of nitrogen can be obtained by adjusting the nitrogen flow ratio, R N = N 2 /(Ar + N 2 ). Most studies have increased the nitrogen flow ratio to study the correlation between phase transformation and mechanical properties.
Both values of hardness and related moduli like reduced modulus ( Er ) or elastic modulus ( E ) will significantly increase through the magnetron sputtering method. This is because the rapid cooling rate can limit the growth of grain size, i.e., HEAFs have smaller grain sizes compared to bulk counterparts, which can inhibit the motion of dislocation and then lead to an increase in mechanical properties such as hardness and elastic modulus. For instance, CoCrFeMnNiAl x films were successfully prepared by the co-sputtering method. [ 98 ] The as-deposited CoCrFeMnNi film (Al 0 ) exhibited a single FCC structure with a lower hardness of around 5.71 GPa, and the addition of a small amount of Al atoms resulted in an increase to 5.91 GPa in the FCC structure of Al 0.07 . With the further addition of Al, the hardness increased drastically to 8.36 GPa in the duplex FCC + BCC phases region. When the phase transformed to a single BCC structure, the Al 1.3 film reached a maximum hardness of 8.74 GPa. As a result, the structural transition from FCC to BCC led to hardness enhancements with the increasing Al content. It is worth noting that Al-doped CoCrFeMnNi HEAs have been processed and their mechanical properties have been characterized by Xian et al. [ 104 ] and the measured hardness values are included in Hsu et al. work for comparison. Compared to Al-doped CoCrFeMnNi HEAs, Al-doped CoCrFeMnNi HEAFs had a much higher hardness, which could be attributed to the much smaller size of HEAFs (nm vs. μm). Also, the reduced modulus in Al 0 and Al 1.3 are 172.84 and 167.19 GPa, respectively.
In addition, RF magnetron sputtering has been used to deposit CoCrFeMnNiTix HEAFs by co-sputtering CoCrFeMnNi alloy and Ti targets. [ 99 ] Adding Ti to the CoCrFeMnNi system raised the hardness sharply to 8.61 GPa for Ti0.2, indicating a strong solid-solution strengthening effect, and with further Ti addition the Ti0.8 film reached a maximum hardness of 8.99 GPa. The increase in hardness was due both to the lattice distortion effect and to the presence of an amorphous phase, each attributable to the addition of the larger Ti atoms to the CoCrFeMnNi system. This differs from bulk CoCrFeMnNiTix HEAs, which contain intermetallic precipitates in the matrix. The difference lies in the cooling rate: bulk HEAs are prepared with slower cooling rates, so intermetallic compounds appear, whereas the higher cooling rate of film deposition limits diffusion and intermetallic phases seldom form in HEAFs. The reduced moduli of Ti0.2 and Ti0.8 were 157.81 and 151.42 GPa, respectively. Other HEAFs have been successfully fabricated by magnetron sputtering; their hardness and related modulus values are listed in Table 1.
For nitride HEAFs, Huang et al. prepared (AlCrNbSiTiV)N films and investigated the effect of nitrogen content on structure and mechanical properties. [ 105 ] They found that both hardness (41 GPa) and elastic modulus (360 GPa) reached maxima at RN = 28%. An (AlCrMoTaTiZr)Nx film deposited at RN = 40% showed the highest hardness, 40.2 GPa, and an elastic modulus of 420 GPa. [ 106 ] Chang et al. fabricated (TiVCrAlZr)N on silicon substrates at nitrogen flow ratios from 0 to 66.7%; at RN = 50%, the hardness and elastic modulus of the films reached maximum values of 11 and 151 GPa, respectively. [ 107 ] Liu et al. studied (FeCoNiCuVZrAl)N HEAFs while increasing RN from 0 to 50% [ 108 ] and observed maxima of 12 GPa in hardness and 166 GPa in elastic modulus, with an amorphous structure, at RN = 30%. Other related nitride-based HEAFs are summarized in Table 2. Compared to purely metallic HEAFs (Table 1), most nitride-based films have higher hardness and elastic modulus owing to the formation of binary nitride compounds. However, some films still possess relatively low hardness, below 20 GPa, because they include non-nitride-forming elements. [ 96 ]
Many studies have focused on HEAFs, exploring different compositions and deposition techniques. Grain size, phase transformation, structure, densification, residual stress, and the nitrogen, carbon, and oxygen contents can all affect hardness and elastic modulus, so research continues into the correlation between microstructure, mechanical properties, and related applications.
Table 1. Published studies of pure metallic HEAFs prepared by magnetron sputtering, with their phases, hardness, and related modulus values.
Table 2. Current publications on nitride-based HEAFs, with their structures and the related hardness and elastic modulus values.
High-entropy ultra-high-temperature ceramics, also referred to as compositionally complex ceramics (CCCs), are a subset of ultra-high temperature ceramics (UHTCs). This class of materials is a leading choice for applications under extreme conditions, such as hypersonic flight, which involves very high temperatures, corrosion, and high strain rates. [ 130 ] [ 131 ] In general, UHTCs possess desirable properties including a high melting temperature, high thermal conductivity, high stiffness and hardness, and high corrosion resistance. [ 132 ] CCCs exemplify the tunability of UHTC systems: more elements are added to the overall composition in approximately equimolar proportions. These high-entropy materials have displayed enhanced mechanical properties and performance compared to traditional UHTC systems. [ 133 ]
As the field is still emerging, a comprehensive relationship between composition, microstructure, processing, and properties has not yet been established, and much ongoing research aims to better understand these systems and their scalability to extreme-environment applications. A multitude of factors contribute to the elevated mechanical properties of CCCs; notably, the complex microstructure and particular processing parameters enable these systems to display improved properties such as higher hardness. [ 134 ] A plausible reason why CCCs may exhibit even higher hardness than traditional UHTCs is the integration of transition metals of different sizes on the metallic sites of the high-entropy lattice, rather than a single repeating element of one size. Plastic deformation in materials proceeds by the movement of dislocations . Generally speaking, easier dislocation motion through the lattice means more deformation, while inhibition of dislocation motion means less deformation and a harder material. In ceramics, dislocation motion is already extremely limited by the constraints of the ceramic bonding structure, which explains their higher hardness compared with metals. Because the CCC lattice contains a wider variety of atomic sizes, dislocation motion becomes even more difficult, increasing the strain energy needed to move dislocations; this may explain the further improved hardness that is observed. [ 132 ] [ 134 ] In addition to the direct effects of microstructure on properties, optimizing the processing parameters of CCCs is crucial. For instance, powders may be processed using high-energy ball milling (HEBM), which relies on the principle of mechanical alloying . Mechanical alloying balances competing mechanisms of deformation and recovery, including micro-forging, cold welding, and fracturing. [ 135 ] With the proper balance achieved, this processing step yields a refined and homogeneous powder, which in turn facilitates proper densification of the final part and desirable mechanical properties. [ 136 ] Incomplete densification, or an unacceptable fraction of voids, diminishes the overall mechanical properties and leads to premature failure. In conclusion, high-entropy UHTCs, or CCCs, are extremely promising candidates for applications in extreme environments, as evidenced so far by their enhanced properties.
A high-frequency approximation (or "high-energy approximation") for scattering or other wave propagation problems, in physics or engineering , is an approximation whose accuracy increases as the features of the scatterer or medium grow large relative to the wavelength of the scattered waves or particles.
Classical mechanics and geometric optics are the most common and most extreme high-frequency approximations, in which the wave or field properties of, respectively, quantum mechanics and electromagnetism are neglected entirely.
Less extreme approximations include the WKB approximation , physical optics , the geometric theory of diffraction , the uniform theory of diffraction , and the physical theory of diffraction . When these are used to approximate quantum mechanics , they are called semiclassical approximations. A common rule of thumb for when such approximations apply is sketched below.
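The rule of thumb compares the dimensionless size parameter x = 2πa/λ with unity: ray-like high-frequency approximations become accurate when x ≫ 1. A minimal sketch, with illustrative numbers:

```python
import math

def size_parameter(feature_radius_m, wavelength_m):
    """Dimensionless size parameter x = 2*pi*a / lambda."""
    return 2.0 * math.pi * feature_radius_m / wavelength_m

# A 1 mm raindrop probed by a 10 GHz radar (lambda ~ 3 cm) gives x ~ 0.2,
# so ray-based high-frequency approximations are NOT valid there.
# The same drop in visible light (lambda ~ 500 nm) gives x ~ 1.3e4,
# so geometric-optics treatments become accurate.
for wavelength in (3e-2, 500e-9):
    x = size_parameter(1e-3, wavelength)
    print(f"lambda = {wavelength:g} m -> x = {x:.3g}")
```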
This scattering –related article is a stub . You can help Wikipedia by expanding it . | https://en.wikipedia.org/wiki/High-frequency_approximation |
High-frequency impact treatment, or the HiFIT method, is the treatment of welded steel structures at the weld transition to increase their fatigue strength.
The durability and service life of dynamically loaded, welded steel structures are in many cases determined by the welds, in particular by the weld transitions. Selective treatment of the transitions ( grinding (abrasive cutting) , abrasive blasting , hammering , etc.) can increase the durability of many designs significantly. Hammering methods have proven to be particularly effective treatment methods and were extensively studied and developed within the joint project REFRESH. [ 1 ] HiFIT (High-Frequency Impact Treatment, also called HFMI, High-Frequency Mechanical Impact) is one such hammering method; it is universally applicable, requires only simple equipment, and still offers high reproducibility and the possibility of quality control.
The HiFIT hammer operates with a hardened pin, tipped with a ball of diameter D = 3 mm, resting on the workpiece. [ 2 ] The pin hammers the weld toe with adjustable intensity at around 180–300 Hz. Local mechanical deformation occurs in the form of a treatment track: the weld toe is plastically deformed, and the induced compressive residual stress inhibits crack formation in the track and crack propagation at the surface.
The International Institute of Welding (IIW) published the guideline "Recommendations for the HFMI Treatment" [ 3 ] in October 2016. It presents an overview of high-frequency hammer (HFMI) devices, gives recommendations for the correct application of the method and for quantitative quality-assurance measurements, and provides a basis for assessing HFMI-improved welded joints using all established stress calculation concepts.
In numerous experiments at various institutes and universities, an 80 to 100 percent increase in fatigue strength and a 5- to 15-fold increase in weld life have been demonstrated. The most extensive research project, running from 2006 to 2009, was "REFRESH – Life extension of existing and new welded steel structures" (P702), within which the HiFIT device was developed and made ready for production. The report is available in book form from FOSTA (Forschungsvereinigung Stahlanwendung e.V.) and can be ordered under the number ISBN 978-3-942541-03-9 ; it contains detailed scientific verifications and validations.
The HiFIT method can be applied to both existing and new steel structures.
Targeted treatment requires that the weld transition be visible and accessible.
On existing structures, the transition is typically prepared by surface finishing : the parts must be free of loose rust and old paint, with prior sandblasting if necessary.
The device operates with a compressed air supply of 6–8 bar.
The HiFIT device is placed manually on the weld transition to be treated and guided along it during treatment.
The local deformation plastically deforms and work-hardens the weld toe.
The depth of the aftertreatment track should be between 0.2 and 0.35 mm.
The undercut at the weld toe is no longer recognizable.
The treated region is examined by visual inspection, and the treatment depth can be checked with a special gauge.
A digital display of the operating pressure allows the user to control the entire process.
When applied to existing structures, the service life can be extended considerably. Provided no macroscopically visible cracks are present, HiFIT is a very suitable remediation tool.
With timely remediation of existing structures, there is practically no difference from the life of newly treated welds, giving the potential to use existing structures far beyond their planned lifetime. The HiFIT method is used very efficiently, for example, on highway bridges of steel hollow box-section design while they remain in service. The cost of such rehabilitation is low compared to conventional methods. In the commercial vehicle industry and other industries, highly stressed welds on existing and new structures are successfully treated with HiFIT to extend their lifetime.
For new constructions, and for some existing structures, the load level of treated welds can be increased: for the same design lifetime, treated welds can transfer roughly 1.6 times the load. For cranes, for example, this has the very positive effect of a larger lifting capacity, so the efficiency of the crane increases with every lift. The sketch below relates such a stress factor to the corresponding fatigue-life factor.
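The trade-off between allowable stress range and fatigue life follows from the S-N (Basquin) line, N2/N1 = (S1/S2)^m. The sketch below assumes typical slope values (m = 3 is a common assumption for as-welded steel details; steeper slopes such as m = 5 are often used for treated welds); these are illustration values, not design values from the cited guideline.

```python
# Fatigue-life multiplier for a given allowable-stress multiplier,
# from the S-N line N2 / N1 = (S1 / S2)^m. Slopes m are assumptions.

def life_factor(stress_factor, m=3.0):
    """How much longer a weld lasts if the stress it must carry is
    reduced by 'stress_factor' (equivalently: capacity increase)."""
    return stress_factor ** m

print(life_factor(1.6))        # ~4.1x life at 1.6x stress, slope m = 3
print(life_factor(1.6, m=5))   # ~10.5x life with the steeper slope m = 5
```

With these assumed slopes, a 1.6-fold increase in transferable load is of the same order as the 5- to 15-fold life extension reported above.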
If the HiFIT process is taken into account during design, a structure can be deliberately slimmed down for the same load level and lifetime. Extensive experimental investigations of structural details and FEM-supported design methods have shown high efficiency with conventional S235 and S355J2 steels, with fine-grained steels such as S460N and S690QL, and with even higher-strength steels. The achievable material savings alone make HiFIT economically viable in most applications; counting the additional benefit of the weight advantage, the achievable payload of vehicles, for example, can be increased.
HFIM , an acronym for high-frequency impulse measurement, is a measurement technique in acoustics in which structure-borne sound signals are detected and processed with particular emphasis on short-lived signals, as these are indicative of crack formation in a solid body, mostly steel . The basic idea is to use mathematical signal-processing methods such as Fourier analysis , in combination with suitable computer hardware, to allow real-time measurement of acoustic signal amplitudes and of their distribution in frequency space. The main benefit of the technique is the enhanced signal-to-noise ratio when separating acoustic emission from a given source from unwanted contamination by other noise. It is therefore mostly applied in industrial production processes, e.g. cold forming or machining, where 100 percent quality control is required, or in condition monitoring, e.g. for quantifying tool wear.
High-frequency impulse measurement is an algorithm for obtaining frequency information from any structure- or air-borne sound source on the basis of discrete signal transformations. This is mostly done using Fourier analysis to quantify how the energy content of a sound signal is distributed in frequency space; on the software side, the tool used is the fast Fourier transform ( FFT ) implementation of this mathematical transformation. In combination with specific hardware, this makes frequency information directly accessible in-line, e.g. during a production process. Unlike classical off-line frequency analysis methods, the signal is not unfolded before transformation but is fed directly into the FFT computation. Single events, such as cracks, therefore appear as extremely short-lived signals covering the entire frequency range (the Fourier transform of a single impulse spans the entire observed frequency space), which makes them easy to separate from other noise, even when that noise is much more energetic.
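The principle can be demonstrated with a windowed FFT over a synthetic signal: a single impulse spreads its energy across nearly all frequency bins of its frame, while tonal machine noise stays concentrated in a few bins. This is a minimal sketch of the idea only, not the algorithm of any commercial HFIM device; the sampling rate, amplitudes, and thresholds are hypothetical.

```python
import numpy as np

fs = 100_000                          # sampling rate in Hz (hypothetical)
t = np.arange(0, 0.1, 1 / fs)         # 100 ms of signal
rng = np.random.default_rng(0)

signal = 0.5 * np.sin(2 * np.pi * 1_000 * t)   # tonal machine noise
signal += 0.05 * rng.standard_normal(t.size)   # broadband noise floor
signal[5_000] += 20.0                          # single crack-like impulse

win = 256                             # frame length for the windowed FFT
for i in range(t.size // win):
    frame = signal[i * win:(i + 1) * win]
    spectrum = np.abs(np.fft.rfft(frame * np.hanning(win)))
    # Fraction of frequency bins carrying significant energy:
    occupancy = np.mean(spectrum > 0.1 * spectrum.max())
    if occupancy > 0.5:               # impulse -> energy in most bins
        print(f"broadband event near t = {i * win / fs * 1e3:.1f} ms")
```

Only the frame containing the impulse exceeds the occupancy threshold, even though the tonal noise carries far more total energy, which is the separation effect described above.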
Because of its in-line capabilities, HFIM is mostly applied in industrial production processes with high quality standards, e.g. for auto parts that are relevant to the crash behavior of a car.
There are also several applications of HFIM devices in materials science laboratories where the exact timing of crack formation is relevant, for instance when determining the plasticity of a new kind of steel. | https://en.wikipedia.org/wiki/High-frequency_impulse-measurement |
High-frequency oscillations (HFOs) are brain waves with frequencies faster than ~80 Hz , generated by neuronal cell populations. They can be recorded in electroencephalogram (EEG), local field potential (LFP), or electrocorticogram (ECoG) electrophysiological recordings. HFOs occur physiologically during sharp waves and ripples , oscillatory patterns involved in memory consolidation. [ 1 ] They are also associated with brain pathophysiology such as epileptic seizures [ 2 ] and are often recorded at seizure onset, which makes them a promising biomarker for identifying the epileptogenic zone. [ 3 ] [ 4 ] Other studies point to a role of HFOs in psychiatric disorders and possible implications for psychotic episodes in schizophrenia . [ 5 ] [ 6 ] [ 7 ]
The traditional classification of frequency bands, which are associated with different functions and states of the brain, comprises the delta , theta , alpha , beta , and gamma bands. Because early experimental and medical equipment could not record fast frequencies, all oscillations above 30 Hz were historically treated as high frequency and were difficult to investigate. [ 1 ] Recent advances in electrophysiological equipment make it possible to record electric potentials with high temporal and spatial resolution and to capture the dynamics of single-cell action potentials . In neuroscience nomenclature there is still a gap between ~100 Hz and multi-unit activity (>500 Hz), so oscillations in this range are often called high gamma or HFO. In practice, such activity is isolated from broadband recordings by band-pass filtering, as in the sketch below.
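A minimal sketch of this isolation step, using a zero-phase Butterworth band-pass over a synthetic LFP trace; the 80–250 Hz band edges and all signal parameters are illustrative choices, not a clinical HFO detector.

```python
import numpy as np
from scipy.signal import butter, filtfilt

fs = 2_000                                  # sampling rate in Hz
t = np.arange(0, 2.0, 1 / fs)
rng = np.random.default_rng(1)

lfp = np.sin(2 * np.pi * 8 * t)             # slow (theta-band) background
burst = (t > 1.0) & (t < 1.05)              # 50 ms ripple-band burst
lfp = lfp + 0.5 * np.sin(2 * np.pi * 140 * t) * burst
lfp = lfp + 0.1 * rng.standard_normal(t.size)

# Zero-phase band-pass keeps the 80-250 Hz band, removing the slow wave.
b, a = butter(4, [80, 250], btype="bandpass", fs=fs)
hfo = filtfilt(b, a, lfp)

envelope = np.abs(hfo)                      # crude amplitude envelope
print(f"peak envelope during burst: {envelope[burst].max():.3f}")
print(f"peak envelope elsewhere:   {envelope[~burst].max():.3f}")
```

The brief 140 Hz burst, invisible against the large slow wave in the raw trace, dominates the filtered signal, which is why HFO studies routinely filter before detection.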
HFOs are generated by different cellular mechanisms and can be detected in many brain areas. [ 8 ] [ 9 ] In the hippocampus , this fast activity results from synchronous population spiking of pyramidal cells in the CA3 region and in the dendritic layer of CA1 , giving rise to a characteristic oscillation pattern (see more in sharp waves and ripples ). [ 10 ] HFO occurrence during memory tasks (encoding and recalling images) has also been reported in human patients, in intracranial recordings from primary visual , limbic , and higher-order cortical areas . [ 11 ] Another example of physiological HFO, at around 300 Hz, was found in the subthalamic nucleus , [ 12 ] the brain region that is the main target of high-frequency (130 Hz) deep brain stimulation for patients with Parkinson's disease .
ECoG recordings from the human somatosensory cortex have shown HFOs (reaching even 600 Hz) during sensory evoked potentials and somatosensory evoked magnetic fields after median nerve stimulation. [ 13 ] These bursts of activity are generated by the thalamocortical loop, driven by highly synchronized spiking of the thalamocortical fibres, and are thought to play a role in information processing. [ 14 ] Changes in somatosensory evoked HFO amplitude may potentially serve as a biomarker for neurological disorders and can help diagnosis in certain clinical contexts. Some oncology patients with brain tumors showed higher HFO amplitudes on the side of the tumor; the authors of this study also suggest a contribution from the thalamocortical pathways to the fast oscillations. [ 15 ] Interestingly, higher HFO amplitudes (between 400 and 800 Hz) after nerve stimulation have also been reported in the EEG signals of healthy football and racquet-sport players. [ 16 ]
Many studies report pathophysiological types of HFO, related to different psychiatric or neurological disorders, in human patients and in animal models of disease.
An increasing number of studies indicate that HFO rhythms (130–180 Hz) may arise from local NMDA receptor blockade, [ 25 ] [ 26 ] [ 27 ] [ 28 ] which is also a pharmacological model of schizophrenia. [ 26 ] These NMDA receptor-dependent fast oscillations have been detected in different brain areas, including the hippocampus , [ 29 ] nucleus accumbens [ 6 ] and prefrontal cortex . [ 30 ] Although this type of HFO has not yet been confirmed in human patients, second-generation antipsychotic drugs widely used to treat schizophrenia and schizoaffective disorders (e.g., Clozapine , Risperidone ) were shown to reduce HFO frequency. [ 6 ] Recent studies report a new source of HFO in the olfactory bulb that is surprisingly stronger than any other previously seen in the mammalian brain. [ 31 ] [ 32 ] HFO in the bulb is generated by local excitatory–inhibitory circuits, is modulated by the breathing rhythm, and can also be recorded under ketamine–xylazine anesthesia. [ 33 ] These findings may aid understanding of early symptoms in schizophrenia patients and their relatives, who can suffer from olfactory system impairments. [ 34 ]