Hydrology is the science that studies the water cycle as a whole, covering the exchange of water between the soil and the atmosphere (precipitation and evaporation) as well as between the soil and the subsurface (groundwater).
Switzerland has a varied and complex hydrological system. The climate of Switzerland delivers precipitation in the form of snow and rain and is also responsible for the evaporation of water into the atmosphere. The altitude and climate allow the formation and maintenance of many glaciers that feed rivers belonging to five major European river catchments, through which water leaves the country and joins the sea.
Switzerland is sometimes called the "water tower of Europe". [ 1 ] [ 2 ] Water from Switzerland reaches all northern, southern, western and eastern parts of Europe.
Surface water flows through a network of nearly 65,000 km of rivers, shared between the basins of five European rivers: the Rhine, the Rhone, the Po, the Danube and the Adige. The hydrological network of Switzerland thus feeds the North Sea, the Mediterranean Sea (Western Mediterranean and Adriatic Sea) and the Black Sea. Among these five rivers, two have their source in Switzerland, the Rhine and the Rhone; the other three have tributaries that originate in Switzerland. While most of the country drains into the North Sea, most of the basins (3 out of 5) drain into the Mediterranean Sea.
All major lakes of Switzerland are located in the Rhine, Rhone and Po basins. Lakes in the Danube and Adige basins are all smaller than 5 km². All basins except the Adige contain glaciers.
Groundwater refers to water located beneath the ground surface, as opposed to the surface water that forms lakes and rivers; its study is called hydrogeology. The nature and location of groundwater are determined by the geology of the underlying formations. In the mid-twentieth century, knowledge of groundwater in Switzerland suffered from significant gaps; these were partially filled in the 1980s and 1990s by a national research program.
The geological structure of Switzerland was formed by the collision of two tectonic plates, the Eurasian Plate to the north and the Adriatic Plate to the south. Geologically, the subsoil is very complex and varied, with the Alps in the south, the Jura in the northwest and the plateau between them. Large quantities of water are present in the Swiss subsurface and form a vast network linked to the geological structures. The underground lake of Saint-Léonard in Valais, about 300 m long and 25 m wide, is a notable example.
Each year, one hectare of the Swiss plateau filters an average of four million liters of clean groundwater. According to the Federal Office for the Environment, the Swiss subsurface contains about fifty billion m³ of water. Groundwater is by far the main source of drinking water in Switzerland, covering 80% of requirements; considering all uses (drinking water and industrial water), groundwater covers 58% of requirements. | https://en.wikipedia.org/wiki/Hydrology_of_Switzerland
The Catawissa Tunnel is a mine drainage tunnel in Schuylkill County, Pennsylvania, in the United States. [ 1 ] Its measured properties include the discharge, the pH, the chemical hydrology, and the water temperature. A total of 30 different metals and metalloids have been observed in the tunnel's waters. [ 2 ] The hydrological data come from a gauge on the tunnel at 40°54'39" north, 76°03'59" west and an elevation of 1,440 feet (440 m) above sea level. [ 2 ] Some of the most abundant metals in the waters of the tunnel include iron, aluminum, and manganese, with concentrations on the order of several milligrams per liter. A number of other metals have concentrations on the order of micrograms per liter, and some metals are found in even lower concentrations. Nonmetals such as nitrates, sulfates, fluorides, chlorides, and silica are also present in the tunnel, with concentrations ranging between several micrograms per liter and several milligrams per liter.
The discharge of the Catawissa Tunnel is similar to the discharges of other mine drainage tunnels in the watershed of Catawissa Creek , being on the order of several thousand gallons per minute. However, it can become significantly higher during times of heavy rainfall. Additionally, the tunnel is highly acidic , with a pH averaging slightly more than 4.
Coal mining began in the South Green Mountain Coal Basin in the middle of the 1800s. The Catawissa Tunnel was constructed in the 1930s to drain the aforementioned coal basin via gravity . [ 1 ]
The concentrations of metals in the Catawissa Tunnel are lower than those in the other mine drainage tunnels in the watershed of Catawissa Creek. [ 3 ]
The concentration of iron in the water discharged from the Catawissa Tunnel is 1.01 milligrams per liter and the daily load of iron is 6.9 pounds (3.1 kg). The maximum allowable concentration of iron is 0.58 milligrams per liter and the maximum allowable load is 4.0 pounds (1.8 kg). The iron load requires a 43 percent reduction to meet its total maximum daily load requirements. The concentration of manganese is 0.31 milligrams per liter and the load of manganese is 2.1 pounds (0.95 kg) per day. The maximum allowable manganese concentration is 0.31 milligrams per liter and the maximum allowable load is 2.1 pounds (0.95 kg) per day. The load of manganese requires no reduction to meet its total maximum daily load requirements. [ 1 ]
The concentration of aluminum in the tunnel's waters is 1.27 milligrams per liter and the daily load is 8.7 pounds (3.9 kg). The maximum allowable concentration is 0.39 milligrams per liter and the maximum allowable daily load is 2.7 pounds (1.2 kg). The aluminum load requires a 69 percent reduction to meet its total maximum daily load requirements. [ 1 ] On April 15, 1975, the concentrations of magnesium and calcium were 3.70 and 3.50 milligrams per liter, respectively. The concentration of strontium was measured to be 20 micrograms per liter and that of barium 23 micrograms per liter. The concentrations of lithium, sodium and potassium were 0.10, 50, and 60 micrograms per liter, respectively. [ 2 ]
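The required percent reductions quoted above follow from comparing each measured daily load with its maximum allowable load. A minimal sketch of that arithmetic, using the reported (rounded) loads, is shown below; small differences from the published percentages reflect rounding of the load figures.

```python
def required_reduction(measured_load, allowable_load):
    """Percent reduction needed for a measured pollutant load to meet
    its maximum allowable (TMDL) load."""
    if measured_load <= allowable_load:
        return 0.0
    return 100.0 * (measured_load - allowable_load) / measured_load

# Daily loads in pounds, as reported for the Catawissa Tunnel
print(round(required_reduction(6.9, 4.0)))  # iron: ~42 (reported as 43 percent)
print(round(required_reduction(8.7, 2.7)))  # aluminum: ~69 percent
print(round(required_reduction(2.1, 2.1)))  # manganese: 0 percent
```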
The concentrations of titanium and zirconium in the discharge of the Catawissa Tunnel are both less than 1 microgram per liter. The vanadium concentration is less than 0.5 milligrams per liter, and the chromium and molybdenum concentrations are less than 1 milligram per liter. The concentrations of cobalt, nickel, and copper are 40 micrograms per liter, 100 micrograms per liter, and 30 micrograms per liter, respectively. The silver concentration is less than one microgram per liter. The concentrations of zinc and mercury are 200 and 0.5 micrograms per liter, respectively. [ 2 ]
The concentrations of gallium, germanium, and tin in the waters of the Catawissa Tunnel are each less than 1 microgram per liter. The bismuth concentration is less than 1 microgram per liter and the arsenic concentration is 1 milligram per liter. [ 2 ]
A number of other elements have been observed in the discharge of the Catawissa Tunnel, but their concentrations are unknown. These include beryllium , boron , cadmium , and lead . [ 2 ]
On April 15, 1975, the concentration of nitrogen in the form of nitrates was measured to be 0.08 milligrams per liter in the waters of the Catawissa Tunnel. The concentration of organic carbon was measured to be 13.0 milligrams per liter. The concentration of hydrogen ions in the tunnel's waters was measured to be 0.12689 milligrams per liter. [ 2 ]
The concentration of sulfates in the waters of the Catawissa Tunnel was measured to be 58.0 milligrams per liter on April 15, 1975. The chloride concentration was 1.4 milligrams per liter and the fluoride concentration was 0.1 milligrams per liter. Additionally, the mineral silica is present in the waters of the tunnel. Its concentration was measured to be 8 micrograms per liter. [ 2 ]
The concentration of total dissolved solids in the waters of the Catawissa Tunnel was measured to be 0.19 tons per day and 0.12 tons per acre-foot on April 15, 1975. [ 2 ]
The discharge of the Catawissa Tunnel is 820,000 gallons per day. [ 1 ] The discharge of the tunnel ranges from 4,000 to 10,000 gallons per minute, although it can reach 18,000 gallons per minute during rainfall. This is fairly close to the discharges of the other mine drainage tunnels in the watershed of Catawissa Creek. [ 3 ]
The pH of the water discharged from the Catawissa Tunnel ranges from 3.8 to 4.5, with an average of 4.17. [ 1 ] However, it was measured to be 3.9 on April 15, 1975. [ 2 ] The concentration of acidity in the tunnel's water is 18.44 milligrams per liter and the daily load of acidity is 126.1 pounds (57.2 kg). The acidity load requires a 90 percent reduction to meet its total maximum daily load requirements. The concentration of alkalinity is 4.11 milligrams per liter and the daily load of alkalinity is 28.1 pounds (12.7 kg) per day. [ 1 ] The concentration of water hardness was measured to be 24.0 milligrams per liter in 1975. [ 2 ]
The water temperature of the water discharged from the Catawissa Tunnel was measured to be 7.0 °C (44.6 °F) on April 15, 1975. The specific conductance at this time was 175 micro-siemens per centimeter at 25 °C (77 °F). [ 2 ] | https://en.wikipedia.org/wiki/Hydrology_of_the_Catawissa_Tunnel |
Hydrolysis ( /haɪˈdrɒlɪsɪs/ ; from Ancient Greek hydro- 'water' and lysis 'to unbind') is any chemical reaction in which a molecule of water breaks one or more chemical bonds. The term is used broadly for substitution, elimination, and solvation reactions in which water is the nucleophile. [ 1 ]
Biological hydrolysis is the cleavage of biomolecules where a water molecule is consumed to effect the separation of a larger molecule into component parts. When a carbohydrate is broken into its component sugar molecules by hydrolysis (e.g., sucrose being broken down into glucose and fructose ), this is recognized as saccharification . [ 2 ]
Hydrolysis reactions can be the reverse of a condensation reaction in which two molecules join into a larger one and eject a water molecule. Thus hydrolysis adds water to break down, whereas condensation builds up by removing water. [ 3 ]
Usually hydrolysis is a chemical process in which a molecule of water is added to a substance. Sometimes this addition causes both the substance and the water molecule to split into two parts. In such reactions, one fragment of the target molecule (or parent molecule) gains a hydrogen ion, and a chemical bond in the compound is broken.
A common kind of hydrolysis occurs when a salt of a weak acid or weak base (or both) is dissolved in water. Water spontaneously ionizes into hydroxide anions and hydronium cations . The salt also dissociates into its constituent anions and cations. For example, sodium acetate dissociates in water into sodium and acetate ions. Sodium ions react very little with the hydroxide ions whereas the acetate ions combine with hydronium ions to produce acetic acid . In this case the net result is a relative excess of hydroxide ions, yielding a basic solution .
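As a rough, illustrative calculation (not taken from the source), the basicity produced by acetate hydrolysis can be estimated from the base ionization constant Kb = Kw/Ka of the acetate ion. The sketch below assumes Ka(acetic acid) of about 1.8 × 10⁻⁵ and a 0.10 M sodium acetate solution.

```python
import math

Kw = 1.0e-14   # ion product of water at 25 °C
Ka = 1.8e-5    # acid dissociation constant of acetic acid (assumed value)
C = 0.10       # concentration of sodium acetate in mol/L (assumed value)

# Acetate hydrolysis: CH3COO- + H2O <=> CH3COOH + OH-
Kb = Kw / Ka
oh = math.sqrt(Kb * C)        # valid when only a small fraction hydrolyzes
pH = 14 + math.log10(oh)
print(f"[OH-] ~ {oh:.2e} M, pH ~ {pH:.2f}")   # about pH 8.9, a basic solution
```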
Strong acids also undergo hydrolysis. For example, dissolving sulfuric acid ( H 2 SO 4 ) in water is accompanied by hydrolysis to give hydronium and bisulfate , the sulfuric acid's conjugate base . For a more technical discussion of what occurs during such a hydrolysis, see Brønsted–Lowry acid–base theory .
Acid–base-catalysed hydrolyses are very common; one example is the hydrolysis of amides or esters . Their hydrolysis occurs when the nucleophile (a nucleus-seeking agent, e.g., water or hydroxyl ion) attacks the carbon of the carbonyl group of the ester or amide . In an aqueous base, hydroxyl ions are better nucleophiles than polar molecules such as water. In acids, the carbonyl group becomes protonated, and this leads to a much easier nucleophilic attack. The products for both hydrolyses are compounds with carboxylic acid groups.
Perhaps the oldest commercially practiced example of ester hydrolysis is saponification (formation of soap). It is the hydrolysis of a triglyceride (fat) with an aqueous base such as sodium hydroxide (NaOH). During the process, glycerol is formed, and the fatty acids react with the base, converting them to salts. These salts are called soaps, commonly used in households. [ 4 ]
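An illustrative overall equation (a sketch using glyceryl tristearate as an assumed example fat) shows the stoichiometry of saponification:

C3H5(OOCC17H35)3 + 3 NaOH -> C3H5(OH)3 + 3 C17H35COONa

(glyceryl tristearate + sodium hydroxide -> glycerol + sodium stearate, a soap)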
In addition, in living systems, most biochemical reactions (including ATP hydrolysis) take place during the catalysis of enzymes . The catalytic action of enzymes allows the hydrolysis of proteins , fats, oils, and carbohydrates . As an example, one may consider proteases (enzymes that aid digestion by causing hydrolysis of peptide bonds in proteins ). They catalyze the hydrolysis of interior peptide bonds in peptide chains, as opposed to exopeptidases (another class of enzymes, that catalyze the hydrolysis of terminal peptide bonds, liberating one free amino acid at a time).
However, proteases do not catalyze the hydrolysis of all kinds of proteins. Their action is stereo-selective: Only proteins with a certain tertiary structure are targeted as some kind of orienting force is needed to place the amide group in the proper position for catalysis. The necessary contacts between an enzyme and its substrates (proteins) are created because the enzyme folds in such a way as to form a crevice into which the substrate fits; the crevice also contains the catalytic groups. Therefore, proteins that do not fit into the crevice will not undergo hydrolysis. This specificity preserves the integrity of other proteins such as hormones , and therefore the biological system continues to function normally.
Upon hydrolysis, an amide converts into a carboxylic acid and an amine or ammonia (which in the presence of acid are immediately converted to ammonium salts). One of the two oxygen atoms of the carboxylic acid group is derived from a water molecule, and the amine (or ammonia) gains the hydrogen ion. The hydrolysis of peptides gives amino acids. [ 4 ]
Many polyamide polymers such as nylon 6,6 hydrolyze in the presence of strong acids. The process leads to depolymerization . For this reason nylon products fail by fracturing when exposed to small amounts of acidic water. Polyesters are also susceptible to similar polymer degradation reactions. The problem is known as environmental stress cracking .
Hydrolysis is related to energy metabolism and storage. All living cells require a continual supply of energy for two main purposes: the biosynthesis of micro- and macromolecules, and the active transport of ions and molecules across cell membranes. The energy derived from the oxidation of nutrients is not used directly but, by means of a complex and long sequence of reactions, is channeled into a special energy-storage molecule, adenosine triphosphate (ATP). The ATP molecule contains pyrophosphate linkages (bonds formed when two phosphate units are combined) that release energy when needed. ATP can undergo hydrolysis in two ways. The first is the removal of the terminal phosphate to form adenosine diphosphate (ADP) and inorganic phosphate, with the reaction:

ATP + H2O -> ADP + Pi (inorganic phosphate)
The second is the removal of a terminal diphosphate to yield adenosine monophosphate (AMP) and pyrophosphate; the latter usually undergoes further cleavage into its two constituent phosphates. Biosynthesis reactions, which usually occur in chains, can be driven in the direction of synthesis when these phosphate bonds undergo hydrolysis.
Monosaccharides can be linked together by glycosidic bonds , which can be cleaved by hydrolysis. Two, three, several or many monosaccharides thus linked form disaccharides , trisaccharides , oligosaccharides , or polysaccharides , respectively. Enzymes that hydrolyze glycosidic bonds are called " glycoside hydrolases " or "glycosidases".
The best-known disaccharide is sucrose (table sugar). Hydrolysis of sucrose yields glucose and fructose . Invertase is a sucrase used industrially for the hydrolysis of sucrose to so-called invert sugar . Lactase is essential for digestive hydrolysis of lactose in milk; many adult humans do not produce lactase and cannot digest the lactose in milk.
The hydrolysis of polysaccharides to soluble sugars can be recognized as saccharification . [ 2 ] Malt made from barley is used as a source of β-amylase to break down starch into the disaccharide maltose , which can be used by yeast to produce beer . Other amylase enzymes may convert starch to glucose or to oligosaccharides. Cellulose is first hydrolyzed to cellobiose by cellulase and then cellobiose is further hydrolyzed to glucose by beta-glucosidase . Ruminants such as cows are able to hydrolyze cellulose into cellobiose and then glucose because of symbiotic bacteria that produce cellulases.
Hydrolysis of DNA occurs at a significant rate in vivo. [ 5 ] For example, it is estimated that in each human cell 2,000 to 10,000 DNA purine bases turn over every day due to hydrolytic depurination, and that this is largely counteracted by specific rapid DNA repair processes. [ 5 ] Hydrolytic DNA damages that fail to be accurately repaired may contribute to carcinogenesis and ageing . [ 5 ]
Metal ions are Lewis acids, and in aqueous solution they form metal aquo complexes of the general formula [M(H2O)n]m+. [ 6 ] [ 7 ] The aqua ions undergo hydrolysis to a greater or lesser extent. The first hydrolysis step is given generically as

[M(H2O)n]m+ + H2O ⇌ [M(H2O)n−1(OH)](m−1)+ + H3O+
Thus the aqua cations behave as acids in terms of Brønsted–Lowry acid–base theory . This effect is easily explained by considering the inductive effect of the positively charged metal ion, which weakens the O−H bond of an attached water molecule, making the liberation of a proton relatively easy.
The dissociation constant, pKa, for this reaction is more or less linearly related to the charge-to-size ratio of the metal ion. [ 8 ] Ions with low charges, such as Na+, are very weak acids with almost imperceptible hydrolysis. Large divalent ions such as Ca2+, Zn2+, Sn2+ and Pb2+ have a pKa of 6 or more and would not normally be classed as acids, but small divalent ions such as Be2+ undergo extensive hydrolysis. Trivalent ions like Al3+ and Fe3+ are weak acids whose pKa is comparable to that of acetic acid. Solutions of salts such as BeCl2 or Al(NO3)3 in water are noticeably acidic; the hydrolysis can be suppressed by adding an acid such as nitric acid, making the solution more acidic.
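As an illustration of how these pKa values translate into solution acidity (an estimate not taken from the source), a hydrated cation can be treated as a simple monoprotic weak acid. The sketch below assumes round-figure pKa values of about 5.0 for Al3+ and 2.2 for Fe3+ and a 0.10 M solution, and it neglects further hydrolysis steps and polynuclear species.

```python
import math

def ph_of_aqua_ion(pKa, conc):
    """Approximate pH of a hydrated metal cation treated as a monoprotic
    weak acid (first hydrolysis step only)."""
    Ka = 10 ** (-pKa)
    # Solve Ka = x^2 / (conc - x) for x = [H3O+]
    x = (-Ka + math.sqrt(Ka ** 2 + 4 * Ka * conc)) / 2
    return -math.log10(x)

print(ph_of_aqua_ion(5.0, 0.10))   # ~3.0 for 0.1 M Al3+ (assumed pKa)
print(ph_of_aqua_ion(2.2, 0.10))   # ~1.7 for 0.1 M Fe3+ (assumed pKa)
```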
Hydrolysis may proceed beyond the first step, often with the formation of polynuclear species via the process of olation. [ 8 ] Some "exotic" species such as Sn3(OH)4^2+ [ 9 ] are well characterized. Hydrolysis tends to proceed as pH rises, leading in many cases to the precipitation of a hydroxide such as Al(OH)3 or AlO(OH). These substances, major constituents of bauxite, are known as laterites and are formed by leaching from rocks of most of the ions other than aluminium and iron, followed by hydrolysis of the remaining aluminium and iron.
Acetals, imines, and enamines can be converted back into ketones by treatment with excess water under acid-catalyzed conditions. [ 10 ]
Acid catalysis can be applied to hydrolyses, [ 11 ] for example in the conversion of cellulose or starch to glucose. [ 12 ] [ 13 ] [ 14 ] Carboxylic acids can be produced by the acid hydrolysis of esters. [ 15 ]
Acids catalyze the hydrolysis of nitriles to amides. Acid hydrolysis does not usually refer to the acid-catalyzed addition of the elements of water to double or triple bonds by electrophilic addition, as in a hydration reaction. Acid hydrolysis is used to prepare monosaccharides, usually with mineral acids, although formic acid and trifluoroacetic acid have also been used. [ 16 ]
Acid hydrolysis can be utilized in the pretreatment of cellulosic material, so as to cut the interchain linkages in hemicellulose and cellulose. [ 17 ]
Alkaline hydrolysis usually refers to types of nucleophilic substitution reactions in which the attacking nucleophile is a hydroxide ion . The best known type is saponification : cleaving esters into carboxylate salts and alcohols . In ester hydrolysis , the hydroxide ion nucleophile attacks the carbonyl carbon. This mechanism is supported by isotope labeling experiments. For example, when ethyl propionate with an oxygen-18 labeled ethoxy group is treated with sodium hydroxide (NaOH), the oxygen-18 is completely absent from the sodium propionate product and is found exclusively in the ethanol formed. [ 18 ]
The reaction is often used to solubilize solid organic matter. Chemical drain cleaners take advantage of this method to dissolve hair and fat in pipes. The reaction is also used to dispose of human and other animal remains as an alternative to traditional burial or cremation. | https://en.wikipedia.org/wiki/Hydrolysis |
The word hydrolysis is applied to chemical reactions in which a substance reacts with water. In organic chemistry, the products of the reaction are usually molecular, being formed by combination with H and OH groups (e.g., hydrolysis of an ester to an alcohol and a carboxylic acid). In inorganic chemistry, the word most often applies to cations forming soluble hydroxide or oxide complexes, in some cases with the formation of hydroxide and oxide precipitates.
The hydrolysis reaction for a hydrated metal ion M^z+ in aqueous solution can be written generically as

p M^z+ + q H2O ⇌ Mp(OH)q^(pz−q)+ + q H+

and the corresponding overall formation (hydrolysis) constant as

*βpq = [Mp(OH)q^(pz−q)+] [H+]^q / [M^z+]^p

Stepwise hydrolysis equilibria, and the solubility equilibria of the corresponding hydroxide or oxide solids, can be written analogously with their associated equilibrium constants.
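As a small illustrative use of such constants (the numerical value below is an assumption for the example, not taken from the compilations), the ratio of the first hydrolysis product to the free aqua ion follows directly from the expression for *β1:

```python
def hydrolysis_ratio(log_beta1, pH):
    """Ratio [MOH^(z-1)+] / [M^z+] for the first hydrolysis step,
    given log *beta1 (infinite dilution) and the solution pH."""
    # *beta1 = [MOH][H+] / [M]  =>  [MOH]/[M] = *beta1 / [H+] = 10**(log_beta1 + pH)
    return 10 ** (log_beta1 + pH)

# Assumed log *beta1 of about -2.2 (illustrative trivalent ion)
for pH in (1, 2, 3, 4):
    print(pH, hydrolysis_ratio(-2.2, pH))   # hydrolysis grows tenfold per pH unit
```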
The article tabulates hydrolysis constants (log values) for individual metal ions, drawn from critical compilations at infinite dilution and T = 298.15 K. The compilations cited include Baes and Mesmer (1976), Kitamura et al. (2010), and Brown and Ekberg (2016), among others. Some entries refer to specific solid phases (2-line and 6-line ferrihydrite, goethite, hematite, maghemite, lepidocrocite, magnetite), and a few are reported at finite ionic strength (e.g. 0.5 M HClO4, 3 M NaClO4, or 0.1 M Na+ medium) where data at I = 0 are not available. Footnotes note that the number of significant figures is retained to minimise propagation of round-off errors and should not be taken to indicate the relative uncertainty of the values; that some divalent states are unstable in water, being oxidised to a higher valency state with evolution of hydrogen, so the reliability of those data is in doubt; and that entries affected by errors in the compilations concerning equilibrium and/or data elaboration are not recommended, readers being strongly advised to consult the original papers. | https://en.wikipedia.org/wiki/Hydrolysis_constant
Hydrolyzed vegetable protein ( HVP ) products are foodstuffs obtained by the hydrolysis of protein , and have a meaty, savory taste similar to broth (bouillon).
Regarding the production process, a distinction can be made between acid-hydrolyzed vegetable protein (aHVP), enzymatically produced HVP, and other seasonings, e.g., fermented soy sauce . Hydrolyzed vegetable protein products are particularly used to round off the taste of soups, sauces, meat products, snacks, and other dishes, as well as for the production of ready-to-cook soups and bouillons.
Food technologists have long known that protein hydrolysis produces a meat bouillon-like odor and taste. [ 1 ] Hydrolysates have been a part of the human diet for centuries, notably in the form of fermented soy sauce, or Shoyu . Shoyu, traditionally made from wheat and soy protein, has been produced in Japan for over 1,500 years, following its introduction from mainland China. The origins of producing these materials through the acid hydrolysis of protein (aHVP) can be traced back to the scarcity and economic challenges of obtaining meat extracts during the Napoleonic wars. [ 2 ]
In 1831, Berzelius obtained products having a meat bouillon taste when hydrolysing proteins with hydrochloric acid. [ 3 ] Julius Maggi produced acid-catalyzed hydrolyzed vegetable protein industrially for the first time in 1886. [ 4 ]
In 1906, Fischer found that amino acids contributed to the specific taste. [ 5 ] In 1954, D. Phillips found that the bouillon odor required the presence of proteins containing threonine . [ 6 ] Another important substance that gives a characteristic taste is glutamic acid .
Almost all products rich in protein are suitable for the production of HVP. Today, it is made mainly from protein resources of vegetable origin, such as defatted oil seeds (soybean meal, grapeseed meal) and protein from maize (corn gluten meal), wheat (gluten), pea, and rice. [ 7 ] The process and the feedstock determine the organoleptic properties of the end product. Proteins consist of chains of amino acids joined through amide bonds. When subjected to hydrolysis (hydrolyzed), the protein is broken down into its component amino acids.
In aHVP, hydrochloric acid is used for hydrolysis. The remaining acid is then neutralized with an alkali such as sodium hydroxide, leaving behind table salt, which makes up as much as 20% of the final product (acid-hydrolyzed vegetable protein, aHVP).
In enzymatic HVP (eHVP), proteases are used to break down the proteins under a more neutral pH and lower temperatures. [ 8 ] The amount of salt is greatly reduced.
Because of the different processing conditions, the two types of HVP have different sensory profiles. aHVP is usually dark-brown in color and has a strong savory flavor, whereas eHVP usually is lighter in color and has a mild savory flavor. [ 9 ] [ 10 ]
Acid hydrolysates are produced from various edible protein sources, with soy, corn, wheat, and casein being the most common. For the production of aHVP, the proteins are hydrolyzed by cooking with a diluted (15–20%) hydrochloric acid, at a temperature between 90 and 120 °C for up to 8 hours. After cooling, the hydrolysate is neutralized with either sodium carbonate or sodium hydroxide to a pH of 5 to 6. During hydrolysis, extraneous polymeric material known as humin , which forms from the interaction of carbohydrate and protein fragments, is generated and subsequently removed by filtration and then further refined. [ 11 ]
The source of the raw material, concentration of the acid, the temperature of the reaction, the time of the reaction, and other factors can all affect the organoleptic properties of the final product. Activated carbon treatment can be employed to remove both flavor and color components, to the required specification. Following a final filtration, the aHVP may, depending upon the application, be fortified with additional flavoring components. Thereafter, the product can be stored as a liquid at 30–40% dry matter, or alternatively it may be spray dried or vacuum dried and further used as a food ingredient. [ 11 ]
One hundred pounds (45 kg) of material containing 60% protein will yield 100 pounds of aHVP, which contains approximately 40 pounds (18 kg) of salt. This salt gain occurs during the neutralization step. [ 2 ]
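The salt gain can be rationalized with a rough neutralization mass balance, since every mole of HCl neutralized with NaOH leaves one mole of NaCl in the product. The sketch below simply back-calculates, from the roughly 40 lb of salt quoted above, how much acid and alkali the neutralization step implies; it is an order-of-magnitude illustration, not process data.

```python
# Neutralization in aHVP production: HCl + NaOH -> NaCl + H2O
M_HCl, M_NaOH, M_NaCl = 36.46, 40.00, 58.44   # molar masses, g/mol

salt_kg = 18.0                      # ~40 lb of NaCl quoted for a 100 lb batch
mol = salt_kg * 1000 / M_NaCl       # moles of NaCl, hence of HCl and of NaOH
print(f"HCl consumed  ~ {mol * M_HCl / 1000:.1f} kg")    # ~11 kg
print(f"NaOH consumed ~ {mol * M_NaOH / 1000:.1f} kg")   # ~12 kg
```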
For the production process of enzymatic HVP, enzymes are used to break down the proteins. To break down the protein to amino acids, proteases are added to the mixture of defatted protein and water. Due to the sensitivity of enzymes to a specific pH, either an acid or a base is added to match the optimum pH. Depending on the activity of the enzymes, up to 24 hours are needed to break down the proteins. The mixture is heated to inactivate the enzymes and then filtered to remove the insoluble carbohydrates (humin).
Since no salt is formed during the production process, manufacturers may add salt to eHVP preparations to extend shelf life or to provide a product similar to conventional aHVP. [ citation needed ] A vendor source states that salt is conventionally added before eHVP production to control microbial growth. With acid-tolerant enzymes, some of the salt can be replaced with a small amount of an acid (patent literature mentioning the acid-tolerant enzyme suggests a reaction pH of 4) [ 12 ] to reduce sodium content. [ 13 ]
A commonly used protease mixture is "Flavourzyme", extracted from Aspergillus oryzae , the mold used for soy sauce production. This mixture contains both endo- and exo-peptidases. [ 14 ]
The endopeptidase Alcalase may also be used, but without an exopeptidase it tends to generate a bitter flavor. As a result, it should be used with a companion exopeptidase. [ 15 ] A commercial exopeptidase produced for this purpose is "Protana Prime", [ 16 ] a mixture with both leucine aminopeptidase and carboxypeptidase D activity. [ 17 ] [ 18 ]
Beyond proteolysis, the amount of umami taste can also be increased by adding a glutaminase , which converts glutamine to glutamate. Commercial options include "Protana Boost" and others. [ 16 ] [ 19 ]
Liquid aHVP typically contains 55% water, 16% salt, and 25% organic substances (thereof 20% protein in the form of amino acids, analyzed as about 3% total nitrogen and 2% amino nitrogen).
Many amino acids have either a bitter or sweet taste. In many commercial processes, nonpolar amino acids such as L-leucine and L-isoleucine are often removed to create hydrolysates with a more mellow and less bitter character. D-tryptophan, D-histidine, D-phenylalanine, D-tyrosine, D-leucine, L-alanine, and glycine are known to be sweet, while bitterness is associated with L-tryptophan, L-phenylalanine, L-tyrosine, and L-leucine. [ 2 ] When not specified explicitly, the chirality of an amino acid is assumed to be L-, the form found in natural proteins. However, the D-forms do occur in natural food materials in smaller amounts, and the harsh chemical condition of aHVP production is known to flip a small amount of molecules to the D-form. [ 20 ] Modern aHVP production has a step for removing tyrosine and leucine from the hydrolysate. [ 11 ]
Tyrosine is an amino acid susceptible to halogenation during hydrolysis with HCl. [ 21 ] [ better source needed ] Lysine is stable under standard acid hydrolysis, but during heat treatment, the side-chain amino group can react with other compounds, such as reducing sugars, producing Maillard products. [ 22 ] [ better source needed ]
The organoleptic properties of HVP are determined not only by the amino acid composition, but also by the various aroma-bearing substances other than amino acids created during the production of both aHVP and eHVP. Aromas can be formed via amino acid decomposition, Maillard reaction, sugar cyclization, and lipid oxidation. [ 23 ] A complex mix of aromas similar to butter, meat, [ 24 ] [ 23 ] bone stock, [ 23 ] wood smoke, [ 25 ] lovage [ 26 ] and many other substances can be produced, depending on reaction conditions (time, temperature, hydrolysis method, additional feedstock such as xylose, and spices). [ 23 ] [ 27 ]
According to the European Code of Practice for Bouillons and Consommés, hydrolyzed protein products intended for retail sale must conform to the characteristics specified in that code. [ 28 ]
When foods are produced by canning, freezing, or drying, some flavor loss is almost inevitable. Manufacturers can use HVP to make up for it. [ 7 ] Therefore, HVP is used in a wide variety of products, such as in the spice, meat, fish, fine-food, snack, flavor, and soup industries.
3-MCPD , a carcinogen in rodents and a suspected human carcinogen, is created during acid-hydrolysis as glycerol released from lipid (e.g. triglycerides ) reacts with hydrochloric acid. Legal limits have been set to keep aHVP products safe for human consumption. aHVP manufacturers can reduce the amount of 3-MCPD to acceptable limits by (1) careful control of reaction time and temperature (2) timely neutralization of hydrochloric acid, optionally extending to an alkaline hydrolysis step to destroy any 3-MCPD already formed (3) replacement of hydrochloric acid with other acids such as sulfuric acid . [ 11 ]
Whether hydrolyzed vegetable protein is an allergen or not is contentious.
According to European law, wheat and soy are subject to allergen labelling in terms of Regulation (EU) 1169/2011 on food information to consumers. Since wheat and soy used for the production of HVP are not exempted from allergen labelling for formal reasons, HVP produced by using those raw materials has to be labelled with a reference to wheat or soy in the list of ingredients.
Nevertheless, strong evidence indicates that at least aHVP is not allergenic, since the proteins are degraded to single amino acids, which are not likely to trigger an allergic reaction. A 2010 study showed that aHVP does not contain detectable traces of proteins or IgE-reactive peptides, providing strong evidence that aHVP is very unlikely to trigger an allergic reaction in people who are intolerant of or allergic to soy or wheat. [ 29 ] Earlier peer-reviewed animal studies from 2006 also indicate that soy-hypersensitive dogs do not react to soy hydrolysate, a proposed protein source for soy-sensitive dogs. [ 30 ]
There are reports of a cosmetic-grade aHVP, Glupearl 19S (GP19S), inducing anaphylaxis when present in soap. Unlike food aHVP, this Japanese wheat aHVP is only very mildly hydrolyzed. [ 31 ] The unusual chemical condition makes GP19S more allergenic than pure gluten. [ 32 ] Newer regulations for cosmetic hydrolyzed wheat protein have been developed in response, requiring an average molecular mass of less than 3500 Da – about 35 residues long. In theory, "an allergen must have at least 2 IgE-binding epitopes, and each epitope must be at least 15 amino acid residues long, to trigger a type 1 hypersensitivity reaction." Experiments also show that this degree of hydrolysis is sufficient to not trigger IgE binding from GP19S-allergic patients. [ 31 ]
Allergenicity of eHVP depends on the specific food source and the enzyme used. Alcalase is able to render chickpea and green pea completely non-immunoreactive but papain only achieves partial reduction. Alcalase is also unable to make white beans non-reactive due to the antinutritional factors preventing complete digestion. [ 33 ] Alcalase, but not "Flavourzyme" (a commercial Aspergillus oryzae protease blend [ 14 ] for eHVP production), is able to make roasted peanut non-reactive. [ 34 ]
aHVP is not acceptable in the production of natural flavors in the EU, but eHVP is. [ 16 ] | https://en.wikipedia.org/wiki/Hydrolyzed_vegetable_protein |
Hydrometalation (hydrometallation) is a type of chemical reaction in organometallic chemistry in which a chemical compound with a hydrogen-to-metal bond (M-H, a metal hydride) adds to a compound with an unsaturated bond such as an alkene (RC=CR), forming a new compound with a carbon-to-metal bond (RHC-CRM). [ 1 ] The metal is less electronegative than hydrogen. The reverse reaction is beta-hydride elimination. The reaction is structurally related to carbometalation. When the substrate is an alkyne, the reaction product is a vinyl organometallic compound.
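A generic illustration (schematic only, with M an unspecified metal fragment and R an arbitrary substituent) is the addition of a metal hydride across a terminal alkene, and across an alkyne to give a vinyl organometallic:

M-H + H2C=CHR -> M-CH2-CH2R

M-H + HC≡CR -> M-CH=CHR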
Examples are hydroboration , hydroalumination, hydrosilylation and hydrozirconation . | https://en.wikipedia.org/wiki/Hydrometalation |
Hydrometallurgy is a technique within the field of extractive metallurgy, the obtaining of metals from their ores. Hydrometallurgy involves the use of aqueous solutions for the recovery of metals from ores, concentrates, and recycled or residual materials. [ 1 ] [ 2 ] Processing techniques that complement hydrometallurgy are pyrometallurgy, vapour metallurgy, and molten salt electrometallurgy. Hydrometallurgy is typically divided into three general areas: leaching, solution concentration and purification, and metal recovery.
Leaching involves the use of aqueous solutions to extract metal from metal-bearing materials which are brought into contact with them. [ 3 ] In China in the 11th and 12th centuries, this technique was used to extract copper; this was used for much of the total copper production. [ 4 ] In the 17th century it was used for the same purposes in Germany and Spain. [ 5 ]
The lixiviant solution conditions vary in terms of pH , oxidation-reduction potential , presence of chelating agents and temperature, to optimize the rate, extent and selectivity of dissolution of the desired metal component into the aqueous phase. By using chelating agents, one can selectively extract certain metals. These agents are typically amines of Schiff bases . [ 6 ]
The five basic leaching reactor configurations are in-situ, heap, vat, tank and autoclave.
In-situ leaching is also called "solution mining". This process initially involves drilling of holes into the ore deposit. Explosives or hydraulic fracturing are used to create open pathways within the deposit for solution to penetrate into. Leaching solution is pumped into the deposit where it makes contact with the ore. The solution is then collected and processed. The Beverley uranium deposit is an example of in-situ leaching.
In heap leaching processes, crushed (and sometimes agglomerated) ore is piled in a heap which is lined with an impervious layer. Leach solution is sprayed over the top of the heap, and allowed to percolate downward through the heap. The heap design usually incorporates collection sumps, which allow the "pregnant" leach solution (i.e. solution with dissolved valuable metals) to be pumped for further processing. An example is gold cyanidation , where pulverized ores are extracted with a solution of sodium cyanide , which, in the presence of air, dissolves the gold, leaving behind the nonprecious residue.
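The overall chemistry of gold cyanidation is commonly summarized by the Elsner equation (given here as general chemical background rather than material from the source):

4 Au + 8 NaCN + O2 + 2 H2O -> 4 Na[Au(CN)2] + 4 NaOH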
Vat leaching involves contacting material, which has usually undergone size reduction and classification, with leach solution in large vats.
Stirred tank , also called agitation leaching, involves contacting material, which has usually undergone size reduction and classification, with leach solution in agitated tanks. The agitation can enhance reaction kinetics by enhancing mass transfer. Tanks are often configured as reactors in series.
Autoclave reactors are used for reactions at higher temperatures, which can enhance the rate of the reaction. Similarly, autoclaves enable the use of gaseous reagents in the system.
After leaching, the leach liquor must normally undergo concentration of the metal ions that are to be recovered. Additionally, undesirable metal ions sometimes require removal. [ 1 ]
In solvent extraction, a mixture of an extractant in a diluent is used to transfer a metal from one phase to another. In solvent extraction this mixture is often referred to as the "organic" because the main constituent (the diluent) is some type of oil.
The PLS (pregnant leach solution) is mixed to emulsification with the stripped organic and allowed to separate. [ citation needed ] The metal is exchanged from the PLS to the organic phase. [ clarification needed ] The resulting streams will be a loaded organic and a raffinate. When dealing with electrowinning, the loaded organic is then mixed to emulsification with a lean electrolyte and allowed to separate. The metal will be exchanged from the organic to the electrolyte. The resulting streams will be a stripped organic and a rich electrolyte. The organic stream is recycled through the solvent extraction process, while the aqueous streams cycle through the leaching and electrowinning [ clarification needed ] processes respectively. [ citation needed ]
Chelating agents, natural zeolite , activated carbon, resins, and liquid organics impregnated with chelating agents are all used to exchange cations or anions with the solution. [ citation needed ] Selectivity and recovery are a function of the reagents used and the contaminants present.
Metal recovery is the final step in a hydrometallurgical process, in which metals suitable for sale as raw materials are produced. Sometimes, however, further refining is needed to produce ultra-high purity metals. The main types of metal recovery processes are electrolysis, gaseous reduction, and precipitation. For example, a major target of hydrometallurgy is copper, which is conveniently obtained by electrolysis. Cu 2+ ions are reduced to Cu metal at low potentials , leaving behind contaminating metal ions such as Fe 2+ and Zn 2+ .
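A simplified way to see why copper deposits while iron and zinc stay in solution is to compare standard reduction potentials. The sketch below uses textbook E° values and an assumed operating potential, and it ignores activities, overpotentials and side reactions, so it is only a qualitative illustration.

```python
# Standard reduction potentials in volts vs SHE (textbook values)
E0 = {"Cu2+ / Cu": +0.34, "Fe2+ / Fe": -0.44, "Zn2+ / Zn": -0.76}

cathode_potential = 0.0   # assumed operating cathode potential (V), illustrative

for couple, e_eq in E0.items():
    # A couple is reduced (the metal deposits) only if the cathode is held
    # below its equilibrium potential.
    deposits = cathode_potential < e_eq
    print(f"{couple}: {'deposits' if deposits else 'stays in solution'}")
```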
Electrowinning and electrorefining respectively involve the recovery and purification of metals using electrodeposition of metals at the cathode, and either metal dissolution or a competing oxidation reaction at the anode.
Precipitation in hydrometallurgy involves the chemical precipitation from aqueous solutions, either of metals and their compounds or of the contaminants. Precipitation will proceed when, through reagent addition, evaporation , pH change or temperature manipulation, the amount of a species present in the solution exceeds the maximum determined by its solubility. | https://en.wikipedia.org/wiki/Hydrometallurgy |
Hydrometeor loading is the induced drag effect on the atmosphere from a falling hydrometeor. When the hydrometeor falls at terminal velocity, the value of this drag is equal to g·rh, where g is the acceleration due to gravity and rh is the mixing ratio of the hydrometeors. Hydrometeor loading has a net negative effect in the atmospheric buoyancy equations. [ 1 ] As the hydrometeor falls toward the surface, the surrounding air provides resistance against the acceleration due to gravity, and the air in the vicinity of the hydrometeor becomes denser. [ 2 ] The increased weight of the atmosphere can reinforce an existing downdraft or even cause a downdraft to form. [ 3 ] Hydrometeor loading can also lead to increased pressure inside a mesohigh in a thunderstorm. [ 4 ] | https://en.wikipedia.org/wiki/Hydrometeor_loading
Hydrometeorology is a branch of meteorology and hydrology that studies the transfer of water and energy between the land surface and the lower atmosphere for academic research, commercial gain or operational forecasting purposes.
Whilst traditionally meteorologists and hydrologists sit within separate organisations, hydrometeorologists may work in joint project teams or virtual teams, deal with specific incidents, or be permanently co-located to deliver specific objectives. Hydrometeorologists typically have a foundation in one or other discipline before undertaking additional training and specialist forecaster training, depending on requirements. The crossover of skills and knowledge between the two disciplines can bring organisational benefits, such as more efficient use of the tools and data available, and can provide enhanced lead times ahead of hydrometeorological hazards.
UNESCO has several programs and activities in place that deal with the study of natural hazards of hydrometeorological origin and the mitigation of their effects. [ 1 ] Among these hazards are the results of natural processes and atmospheric , hydrological , or oceanographic phenomena such as floods , tropical cyclones , drought , and desertification . Many countries have established an operational hydrometeorological capability to assist with forecasting , warning, and informing the public of these developing hazards.
One of the more significant aspects of hydrometeorology involves predictions about and attempts to mitigate the effects of high precipitation events. [ 2 ] There are three primary ways to model meteorological phenomena in weather forecasting, including nowcasting , numerical weather prediction , and statistical techniques. [ 3 ] Nowcasting is good for predicting events a few hours out, utilizing observations and live radar data to combine them with numerical weather prediction models. [ 3 ] The primary technique used to forecast weather, numerical weather prediction uses mathematical models to account for the atmosphere, ocean, and many other variables when producing forecasts. [ 3 ] These forecasts are generally used to predict events days or weeks out. [ 3 ] Finally, statistical techniques use regressions and other statistical methods to create long-term projections that go out weeks and months at a time. [ 3 ] These models allow scientists to visualize how a multitude of different variables interact with one another, and they illustrate one grand picture of how the Earth's climate interacts with itself. [ 4 ]
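As a toy illustration of the statistical approach only (entirely synthetic data and a deliberately simple model, not an operational forecasting method), a linear trend can be fitted to a monthly precipitation series and extrapolated a few months ahead:

```python
import numpy as np

rng = np.random.default_rng(0)
months = np.arange(48)                                   # four years of synthetic data
rainfall = 80 + 0.3 * months + rng.normal(0, 10, 48)     # mm/month with a weak trend

# Fit a straight line and project the next three months
slope, intercept = np.polyfit(months, rainfall, 1)
future = np.arange(48, 51)
print(intercept + slope * future)
```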
A major component of hydrometeorology is mitigating the risk associated with flooding and other hydrological threats. First, there has to be knowledge of the possible hydrological threats that are expected within a specific region. [ 3 ] After analyzing the possible threats, warning systems are put in place to quickly alert people and communicate to them the identity and magnitude of the threat. [ 3 ] Many nations have their own specific regional hydrometeorological centers that communicate threats to the public. Finally, there must be proper response protocols in place to protect the public during a dangerous event. [ 3 ]
A number of countries maintain a current operational hydrometeorological service. | https://en.wikipedia.org/wiki/Hydrometeorology
Hydromethanation , [hahy-droh- meth-uh-ney-shuhn] is the process by which methane (the main constituent of natural gas ) is produced through the combination of steam , carbonaceous solids and a catalyst in a fluidized bed reactor . The process, developed over the past 60 years by multiple research groups, enables the highly efficient conversion of coal , petroleum coke and biomass (e.g. switchgrass or wood waste) into clean, pipeline quality methane. [ 1 ]
The chemistry of catalytic hydromethanation involves reacting steam and carbon to produce methane and carbon dioxide , according to the following reaction:
2C + 2H 2 O -> CH 4 + CO 2
The process utilizes a specially designed reactor and depends upon a proprietary metal catalyst to promote chemical conversion at the low temperatures where the water gas shift reaction and methanation take place.
When a feedstock treated with the catalyst is introduced into this reactor and mixed with steam, three reactions occur that efficiently convert the feedstock into methane.
C + H 2 O -> CO + H 2
CO + H 2 O -> H 2 + CO 2
2H 2 + C -> CH 4
The combination of carbon (C) from the carbon feedstock, water (H 2 O) from steam, and the catalyst produces pure methane and a pure stream of carbon dioxide (CO 2 ), which is 100% captured in the system and available for sequestration . The overall reaction is thermally neutral, requiring no addition or removal of heat, making it highly efficient.
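A back-of-the-envelope check of the "thermally neutral" claim can be made from standard enthalpies of formation; the sketch below uses approximate 298 K gas-phase textbook values and says nothing about the actual reactor conditions.

```python
# Approximate standard enthalpies of formation at 298 K, in kJ/mol (textbook values).
dHf = {
    "C(s)": 0.0,
    "H2O(g)": -241.8,
    "CH4(g)": -74.9,
    "CO2(g)": -393.5,
}

# Overall hydromethanation: 2 C + 2 H2O -> CH4 + CO2
products = dHf["CH4(g)"] + dHf["CO2(g)"]
reactants = 2 * dHf["C(s)"] + 2 * dHf["H2O(g)"]
dH_rxn = products - reactants

print(f"Reaction enthalpy ~ {dH_rxn:+.1f} kJ per mole of CH4 produced")  # ~ +15 kJ/mol
```

The result of roughly +15 kJ per mole of methane is small compared with the hundreds of kJ/mol involved in the individual gasification and methanation steps, which is the sense in which the overall reaction is close to thermoneutral.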
The development of hydromethanation is an example of process intensification, where several operations are combined into a single step to improve overall efficiency, reduce maintenance and equipment requirements, and lower capital costs.
In addition to methane, hydromethanation produces a high-purity stream of carbon dioxide (CO 2 ), an odorless, colorless greenhouse gas . This CO 2 stream is fully captured in the process and can be prevented from entering the atmosphere using a process called sequestration . The CO 2 can be injected into underground oil reserves, through a process called enhanced oil recovery (“EOR”), or geologically sequestered.
Because hydromethanation is a catalytic process that does not rely on the combustion of carbonaceous solids to capture their energy value, it does not produce the nitrogen oxides (NOx), sulfur oxides (SOx) and particulate emissions typically associated with the burning of carbon feedstocks, including certain types of biomass. Due to this quality, it intrinsically captures nearly all of the impurities found in coal and converts them into valuable chemical grade products. Ash , sulfur , nitrogen , and trace metals are all removed using commercial gas clean-up processes and are either safely disposed of or used as raw materials for other products such as sulfuric acid and fertilizer .
GreatPoint Energy , a company founded in 2005 by serial entrepreneur Andrew Perlman , is a forerunner in the development and commercialization of hydromethanation. The company has raised $150 million in venture capital [ 2 ] from Dow , AES Corporation , Suncor Energy Inc. , Peabody Energy , Advanced Technology Ventures (ATV) , Draper Fisher Jurvetson , Kleiner Perkins Caufield & Byers , Khosla Ventures and Citi Capital Advisors (CCA) . In May, 2012 GreatPoint Energy and China Wanxiang Holdings closed a $1.25 billion investment and partnership agreement to finance and construct the first phase of a one trillion cubic feet per year coal to natural gas production facility in China . [ 2 ] The deal between GreatPoint Energy and Wanxiang was the largest US venture capital investment in 2012. [ 3 ] | https://en.wikipedia.org/wiki/Hydromethanation |
Hydrometry is the monitoring of the components of the hydrological cycle including rainfall , groundwater characteristics, as well as water quality and flow characteristics of surface waters . [ 1 ] The etymology of the term hydrometry is from Greek : ὕδωρ ( hydor ) 'water' + μέτρον ( metron ) 'measure'.
Hydrometrics is a topic in applied science and engineering dealing with hydrometry. It is an engineering discipline encompassing several different areas and is primarily related to hydrology, but it specializes in the measurement of components of the hydrological cycle, particularly the bulk quantification of water resources. It draws on several areas of traditional engineering practice, including hydrology, structures, control systems, computer science, data management and communications. The International Organization for Standardization formally defines hydrometry as the "science of the measurement of water including the methods, techniques and instrumentation used". [ 2 ]
| https://en.wikipedia.org/wiki/Hydrometry
The hydromill trench cutter is a specialized type of construction equipment designed to dig the narrow but deep trenches used in the casting of slurry walls . Typically, it is a cutter attachment mounted on a crawler crane base machine, with different types of hose handling systems. The machine excavates by cutting the soil using two cutting wheels, while a powerful pump extracts the loose material mixed with some of the slurry, typically bentonite .
The hydromill, also known as a hydraulic cutter, consists of several main assemblies, including the cutting wheels, the slurry extraction pump, and the hose-handling systems described below.
Usually, there are two sets of hoses, one for hydraulic oil , and another for extraction of slurry and entrained spoil from the diaphragm wall excavation.
Hydraulic cutter assemblies have a standard length of 2.8 metres (9 ft 2 in), with widths ranging from 60 centimetres (24 in) up to 250 centimetres (98 in). The choice of cutting wheel teeth depends mainly on the type of soil to be excavated.
| https://en.wikipedia.org/wiki/Hydromill_trench_cutter
In chemistry , hydronium ( hydroxonium in traditional British English ) is the cation [H 3 O] + , also written as H 3 O + , the type of oxonium ion produced by protonation of water . It is often viewed as the positive ion present when an Arrhenius acid is dissolved in water, as Arrhenius acid molecules in solution give up a proton (a positive hydrogen ion, H + ) to the surrounding water molecules ( H 2 O ). In fact, acids must be surrounded by more than a single water molecule in order to ionize, yielding aqueous H + and conjugate base.
Three main structures for the aqueous proton have garnered experimental support: the Eigen cation, in which the hydronium ion is strongly hydrogen-bonded to three neighbouring water molecules ( H 9 O + 4 ); the Zundel cation, in which the proton is shared equally between two water molecules ( H 5 O + 2 ); and the Stoyanov cation, in which the positive charge is delocalized over six water molecules ( H 13 O + 6 ).
Spectroscopic evidence from well-defined IR spectra overwhelmingly supports the Stoyanov cation as the predominant form. [ 3 ] [ 4 ] [ 5 ] [ 6 ] [ non-primary source needed ] For this reason, it has been suggested that wherever possible, the symbol H + (aq) should be used instead of the hydronium ion. [ 2 ]
The molar concentration of hydronium or H + ions determines a solution's pH according to pH = −log 10 ([H 3 O + ]/M),
where M = mol/L. The concentration of hydroxide ions analogously determines a solution's pOH . The molecules in pure water auto-dissociate into aqueous protons and hydroxide ions in the following equilibrium: H 2 O ⇌ H + (aq) + OH − (aq)
In pure water, there is an equal number of hydroxide and H + ions, so it is a neutral solution. At 25 °C (77 °F), pure water has a pH of 7 and a pOH of 7 (this varies when the temperature changes: see self-ionization of water ). A pH value less than 7 indicates an acidic solution, and a pH value more than 7 indicates a basic solution. [ 7 ]
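A minimal sketch of these relations, assuming 25 °C so that the ion product of water is 1.0 × 10⁻¹⁴ and pH + pOH = 14:

```python
import math

def pH_from_hydronium(conc_molar: float) -> float:
    """pH = -log10([H3O+] / (1 mol/L))."""
    return -math.log10(conc_molar)

# Example: a 2.5e-4 M solution of a fully dissociated monoprotic strong acid.
pH = pH_from_hydronium(2.5e-4)
pOH = 14.0 - pH          # valid at 25 °C, where Kw = 1.0e-14
print(f"pH = {pH:.2f}, pOH = {pOH:.2f}")   # pH ≈ 3.60, pOH ≈ 10.40
```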
According to IUPAC nomenclature of organic chemistry , the hydronium ion should be referred to as oxonium . [ 8 ] Hydroxonium may also be used unambiguously to identify it. [ citation needed ]
An oxonium ion is any cation containing a trivalent oxygen atom.
Since O + and N have the same number of electrons, H 3 O + is isoelectronic with ammonia . As shown in the images above, H 3 O + has a trigonal pyramidal molecular geometry with the oxygen atom at its apex. The H−O−H bond angle is approximately 113°, [ 9 ] [ 10 ] and the center of mass is very close to the oxygen atom. Because the base of the pyramid is made up of three identical hydrogen atoms, the H 3 O + molecule's symmetric top configuration is such that it belongs to the C 3v point group . Because of this symmetry and the fact that it has a dipole moment, the rotational selection rules are Δ J = ±1 and Δ K = 0. The transition dipole lies along the c -axis and, because the negative charge is localized near the oxygen atom, the dipole moment points to the apex, perpendicular to the base plane.
The hydrated proton is very acidic: at 25 °C, its p K a is approximately 0. [ 11 ] The values commonly given for p K a aq (H 3 O + ) are 0 or −1.74. The former uses the convention that the activity of the solvent in a dilute solution (in this case, water) is 1, while the latter uses the value of the concentration of water in the pure liquid of 55.5 M. Silverstein has shown that the latter value is thermodynamically unsupportable. [ 12 ] The disagreement comes from the ambiguity that to define p K a of H 3 O + in water, H 2 O has to act simultaneously as a solute and the solvent. The IUPAC has not given an official definition of p K a that would resolve this ambiguity. Burgot has argued that H 3 O + (aq) + H 2 O (l) ⇄ H 2 O (aq) + H 3 O + (aq) is simply not a thermodynamically well-defined process. For an estimate of p K a aq (H 3 O + ), Burgot suggests taking the measured value p K a EtOH (H 3 O + ) = 0.3, the p K a of H 3 O + in ethanol, and applying the correlation equation p K a aq = p K a EtOH − 1.0 (± 0.3) to convert the ethanol p K a to an aqueous value, to give a value of p K a aq (H 3 O + ) = −0.7 (± 0.3). [ 13 ] On the other hand, Silverstein has shown that Ballinger and Long's experimental results [ 14 ] support a p K a of 0.0 for the aqueous proton. [ 15 ] Neils and Schaertel provide added arguments for a p K a of 0.0 [ 16 ]
The aqueous proton is the most acidic species that can exist in water (assuming sufficient water for dissolution): any stronger acid will ionize and yield a hydrated proton. The acidity of H + (aq) is the implicit standard used to judge the strength of an acid in water: strong acids must be better proton donors than H + (aq), as otherwise a significant portion of acid will exist in a non-ionized state (i.e.: a weak acid). Unlike H + (aq) in neutral solutions that result from water's autodissociation, in acidic solutions, H + (aq) is long-lasting and concentrated, in proportion to the strength of the dissolved acid.
pH was originally conceived to be a measure of the hydrogen ion concentration of aqueous solution. [ 17 ] Virtually all such free protons are quickly hydrated; acidity of an aqueous solution is therefore more accurately characterized by its concentration of H + (aq). In organic syntheses, such as acid catalyzed reactions, the hydronium ion ( H 3 O + ) is used interchangeably with the H + ion; choosing one over the other has no significant effect on the mechanism of reaction.
Researchers have yet to fully characterize the solvation of hydronium ion in water, in part because many different meanings of solvation exist. A freezing-point depression study determined that the mean hydration ion in cold water is approximately H 3 O + (H 2 O) 6 : [ 18 ] on average, each hydronium ion is solvated by 6 water molecules which are unable to solvate other solute molecules.
Some hydration structures are quite large: the H 3 O + (H 2 O) 20 magic ion number structure (called magic number because of its increased stability with respect to hydration structures involving a comparable number of water molecules – this is a similar usage of the term magic number as in nuclear physics ) might place the hydronium inside a dodecahedral cage. [ 19 ] However, more recent ab initio method molecular dynamics simulations have shown that, on average, the hydrated proton resides on the surface of the H 3 O + (H 2 O) 20 cluster. [ 20 ] Further, several disparate features of these simulations agree with their experimental counterparts suggesting an alternative interpretation of the experimental results.
Two other well-known structures are the Zundel cation and the Eigen cation . The Eigen solvation structure has the hydronium ion at the center of an H 9 O + 4 complex in which the hydronium is strongly hydrogen-bonded to three neighbouring water molecules. [ 21 ] In the Zundel H 5 O + 2 complex the proton is shared equally by two water molecules in a symmetric hydrogen bond . [ 22 ] A work in 1999 indicates that both of these complexes represent ideal structures in a more general hydrogen bond network defect. [ 23 ]
Isolation of the hydronium ion monomer in liquid phase was achieved in a nonaqueous, low nucleophilicity superacid solution ( HF − SbF 5 SO 2 ). The ion was characterized by high resolution 17 O nuclear magnetic resonance . [ 24 ]
A 2007 calculation of the enthalpies and free energies of the various hydrogen bonds around the hydronium cation in liquid protonated water [ 25 ] at room temperature and a study of the proton hopping mechanism using molecular dynamics showed that the hydrogen-bonds around the hydronium ion (formed with the three water ligands in the first solvation shell of the hydronium) are quite strong compared to those of bulk water.
A new model was proposed by Stoyanov based on infrared spectroscopy in which the proton exists as an H 13 O + 6 ion. The positive charge is thus delocalized over 6 water molecules. [ 26 ]
For many strong acids , it is possible to form crystals of their hydronium salt that are relatively stable. These salts are sometimes called acid monohydrates . As a rule, any acid with an ionization constant of 10^9 or higher may do this. Acids whose ionization constants are below 10^9 generally cannot form stable H 3 O + salts. For example, nitric acid has an ionization constant of 10^1.4 , and mixtures with water at all proportions are liquid at room temperature. However, perchloric acid has an ionization constant of 10^10 , and if liquid anhydrous perchloric acid and water are combined in a 1:1 molar ratio, they react to form solid hydronium perchlorate ( H 3 O + ·ClO − 4 ). [ citation needed ]
The hydronium ion also forms stable compounds with the carborane superacid H(CB 11 H(CH 3 ) 5 Br 6 ) . [ 27 ] X-ray crystallography shows a C 3v symmetry for the hydronium ion with each proton interacting with a bromine atom each from three carborane anions 320 pm apart on average. The [H 3 O] [H(CB 11 HCl 11 )] salt is also soluble in benzene . In crystals grown from a benzene solution the solvent co-crystallizes and a H 3 O·(C 6 H 6 ) 3 cation is completely separated from the anion. In the cation three benzene molecules surround hydronium forming pi-cation interactions with the hydrogen atoms. The closest (non-bonding) approach of the anion at chlorine to the cation at oxygen is 348 pm.
There are also many known examples of salts containing hydrated hydronium ions, such as the H 5 O + 2 ion in HCl·2H 2 O , the H 7 O + 3 and H 9 O + 4 ions both found in HBr·4H 2 O . [ 28 ]
Sulfuric acid is also known to form a hydronium salt H 3 O + HSO − 4 at temperatures below 8.49 °C (47.28 °F). [ 29 ]
Hydronium is an abundant molecular ion in the interstellar medium and is found in diffuse [ 30 ] and dense [ 31 ] molecular clouds as well as the plasma tails of comets. [ 32 ] Interstellar sources of hydronium observations include the regions of Sagittarius B2, Orion OMC-1, Orion BN–IRc2, Orion KL, and the comet Hale–Bopp.
Interstellar hydronium is formed by a chain of reactions started by the ionization of H 2 into H + 2 by cosmic radiation. [ 33 ] H 3 O + can produce either OH − or H 2 O through dissociative recombination reactions, which occur very quickly even at the low (≥10 K) temperatures of dense clouds. [ 34 ] This leads to hydronium playing a very important role in interstellar ion-neutral chemistry.
Astronomers are especially interested in determining the abundance of water in various interstellar climates due to its key role in the cooling of dense molecular gases through radiative processes. [ 35 ] However, H 2 O does not have many favorable transitions for ground-based observations. [ 36 ] Although observations of HDO (the deuterated version of water [ 37 ] ) could potentially be used for estimating H 2 O abundances, the ratio of HDO to H 2 O is not known very accurately. [ 36 ]
Hydronium, on the other hand, has several transitions that make it a superior candidate for detection and identification in a variety of situations. [ 36 ] This information has been used in conjunction with laboratory measurements of the branching ratios of the various H 3 O + dissociative recombination reactions [ 34 ] to provide what are believed to be relatively accurate OH − and H 2 O abundances without requiring direct observation of these species. [ 38 ] [ 39 ]
As mentioned previously, H 3 O + is found in both diffuse and dense molecular clouds. By applying the reaction rate constants ( α , β , and γ ) corresponding to all of the currently available characterized reactions involving H 3 O + , it is possible to calculate k ( T ) for each of these reactions. By multiplying these k ( T ) by the relative abundances of the products, the relative rates (in cm 3 /s) for each reaction at a given temperature can be determined. These relative rates can be converted into absolute rates by multiplying them by [H 2 ] 2 . [ 40 ] By assuming T = 10 K for a dense cloud and T = 50 K for a diffuse cloud, the results indicate that the most dominant formation and destruction mechanisms were the same for both cases. It should be mentioned that the relative abundances used in these calculations correspond to TMC-1, a dense molecular cloud, and that the calculated relative rates are therefore expected to be more accurate at T = 10 K . The three fastest formation and destruction mechanisms are listed in the table below, along with their relative rates. Note that the rates of these six reactions are such that they make up approximately 99% of hydronium ion's chemical interactions under these conditions. [ 32 ] All three destruction mechanisms in the table below are classified as dissociative recombination reactions. [ 41 ]
It is also worth noting that the relative rates for the formation reactions in the table above are the same for a given reaction at both temperatures. This is due to the reaction rate constants for these reactions having β and γ constants of 0, resulting in k = α which is independent of temperature.
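A sketch of the rate-coefficient calculation described above, using the modified-Arrhenius form k(T) = α (T/300 K)^β exp(−γ/T) in which astrochemical databases tabulate their α, β and γ parameters; the numerical coefficients below are placeholders, not the values behind the table discussed in the text.

```python
import math

def k(T, alpha, beta, gamma):
    """Modified-Arrhenius rate coefficient (cm^3 s^-1) from database-style parameters."""
    return alpha * (T / 300.0) ** beta * math.exp(-gamma / T)

# Placeholder coefficients for a single hypothetical ion-neutral reaction.
alpha, beta, gamma = 1.0e-9, 0.0, 0.0

for T in (10.0, 50.0):   # the dense-cloud and diffuse-cloud temperatures used in the text
    print(f"T = {T:>4} K : k = {k(T, alpha, beta, gamma):.2e} cm^3/s")

# With beta = gamma = 0, k(T) reduces to alpha, reproducing the temperature
# independence noted above for the dominant formation reactions.
```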
Since all three of these reactions produce either H 2 O or OH, these results reinforce the strong connection between their relative abundances and that of H 3 O + .
As early as 1973 and before the first interstellar detection, chemical models of the interstellar medium (the first corresponding to a dense cloud) predicted that hydronium was an abundant molecular ion and that it played an important role in ion-neutral chemistry. [ 42 ] However, before an astronomical search could be underway there was still the matter of determining hydronium's spectroscopic features in the gas phase, which at this point were unknown. The first studies of these characteristics came in 1977, [ 43 ] which was followed by other, higher resolution spectroscopy experiments. Once several lines had been identified in the laboratory, the first interstellar detection of H 3 O + was made by two groups almost simultaneously in 1986. [ 31 ] [ 36 ] The first, published in June 1986, reported observation of the 1⁻₁ − 2⁺₁ transition at 307,192.41 MHz in OMC-1 and Sgr B2 . The second, published in August, reported observation of the same transition toward the Orion-KL nebula.
These first detections have been followed by observations of a number of additional H 3 O + transitions. The first observations of each subsequent transition detection are given below in chronological order:
In 1991, the 3⁺₂ − 2⁻₂ transition at 364,797.427 MHz was observed in OMC-1 and Sgr B2. [ 44 ] One year later, the 3⁺₀ − 2⁻₀ transition at 396,272.412 MHz was observed in several regions, the clearest of which was the W3 IRS 5 cloud. [ 39 ]
The first far-IR detection, of the 4⁻₃ − 3⁺₃ transition at 69.524 μm (4.3121 THz), was made in 1996 near Orion BN-IRc2. [ 45 ] In 2001, three additional transitions of H 3 O + were observed in the far infrared in Sgr B2: the 2⁻₁ − 1⁺₁ transition at 100.577 μm (2.98073 THz), the 1⁻₁ − 1⁺₁ at 181.054 μm (1.65582 THz) and the 2⁻₀ − 1⁺₀ at 100.869 μm (2.9721 THz). [ 46 ] | https://en.wikipedia.org/wiki/Hydronium
Hydroperoxides or peroxols are compounds of the form ROOH, where R stands for any group, typically organic , which contain the hydroperoxy functional group ( −OOH ). Hydroperoxide also refers to the hydroperoxide anion ( − OOH ) and its salts , and to the neutral hydroperoxyl radical (•OOH), which consists of an unbonded hydroperoxy group. When R is organic, the compounds are called organic hydroperoxides . Such compounds are a subset of organic peroxides , which have the formula ROOR. Organic hydroperoxides can either intentionally or unintentionally initiate explosive polymerisation in materials with unsaturated chemical bonds . [ 1 ]
The O−O bond length in peroxides is about 1.45 Å , and the R−O−O angles (R = H , C ) are about 110° ( water -like). Characteristically, the C−O−O−H dihedral angles are about 120°. The O−O bond is relatively weak, with a bond dissociation energy of 45–50 kcal/mol (190–210 kJ/mol), less than half the strengths of C−C , C−H , and C−O bonds. [ 2 ] [ 3 ]
Hydroperoxides are typically more volatile than the corresponding alcohols .
Hydroperoxides are mildly acidic . Their p K a values range from about 11.5 for CH 3 OOH to 13.1 for Ph 3 COOH . [ 4 ]
Hydroperoxides can be reduced to alcohols with lithium aluminium hydride , as described in this idealized equation: 4 ROOH + LiAlH 4 → LiAlO 2 + 2 H 2 O + 4 ROH
This reaction is the basis of methods for analysis of organic peroxides. [ 5 ] Another way to evaluate the content of peracids and peroxides is the volumetric titration with alkoxides such as sodium ethoxide . [ 6 ] The phosphite esters and tertiary phosphines also effect reduction: ROOH + PR 3 → ROH + OPR 3
"The single most important synthetic application of alkyl hydroperoxides is without doubt the metal-catalysed epoxidation of alkenes." In the Halcon process tert-butyl hydroperoxide (TBHP) is employed for the production of propylene oxide . [ 7 ]
Of specialized interest, chiral epoxides are prepared using hydroperoxides as reagents in the Sharpless epoxidation . [ 8 ]
Hydroperoxides are intermediates in the production of many organic compounds in industry. For example, the cobalt-catalyzed oxidation of cyclohexane to cyclohexanone : [ 9 ] C 6 H 12 + O 2 → (CH 2 ) 5 CO + H 2 O
Drying oils , as found in many paints and varnishes, function via the formation of hydroperoxides.
Compounds with allylic and benzylic C−H bonds are especially susceptible to oxygenation. [ 10 ] Such reactivity is exploited industrially on a large scale for the production of phenol by the Cumene process or Hock process for its cumene and cumene hydroperoxide intermediates. [ 11 ] Such reactions rely on radical initiators that react with oxygen to form an intermediate that abstracts a hydrogen atom from a weak C-H bond. The resulting radical binds O 2 to give hydroperoxyl (ROO•), which then continues the cycle of H-atom abstraction. [ 12 ]
The most important (in a commercial sense) peroxides are produced by autoxidation , the direct reaction of O 2 with a hydrocarbon. Autoxidation is a radical reaction that begins with the abstraction of an H atom from a relatively weak C-H bond . Important compounds made in this way include tert-butyl hydroperoxide , cumene hydroperoxide and ethylbenzene hydroperoxide : [ 7 ] R 3 CH + O 2 → R 3 COOH
Auto-oxidation reaction is also observed with common ethers , such as diethyl ether , diisopropyl ether , tetrahydrofuran , and 1,4-dioxane . An illustrative product is diethyl ether peroxide . Such compounds can result in a serious explosion when distilled. [ 12 ] To minimize this problem, commercial samples of THF are often inhibited with butylated hydroxytoluene (BHT). Distillation of THF to dryness is avoided because the explosive peroxides concentrate in the residue.
Although ether hydroperoxides often form adventitiously (i.e. by autoxidation), they can be prepared in high yield by the acid-catalyzed addition of hydrogen peroxide to vinyl ethers: [ 13 ] ROCH=CH 2 + H 2 O 2 → ROCH(OOH)CH 3
Many industrial peroxides are produced using hydrogen peroxide. Reactions with aldehydes and ketones yield a series of compounds depending on conditions. Specific reactions include addition of hydrogen peroxide across the C=O double bond: R 2 C=O + H 2 O 2 → R 2 C(OH)OOH
In some cases, these hydroperoxides convert to give cyclic diperoxides:
Addition of this initial adduct to a second equivalent of the carbonyl:
Further replacement of alcohol groups:
Triphenylmethanol reacts with hydrogen peroxide to give the unusually stable hydroperoxide Ph 3 COOH . [ 14 ]
Many hydroperoxides are derived from fatty acids, steroids, and terpenes . The biosynthesis of these species is effected extensively by enzymes.
Although hydroperoxide often refers to a class of organic compounds, many inorganic or metallo-organic compounds are hydroperoxides. One example involves sodium perborate, a commercially important bleaching agent with the formula Na 2 [B 2 (O 2 ) 2 (OH) 4 ] . It acts by hydrolysis to give a boron hydroperoxide: [ 16 ]
This boron hydroperoxide then releases hydrogen peroxide:
Several metal hydroperoxide complexes have been characterized by X-ray crystallography; for example, triphenylsilicon and triphenylgermanium hydroperoxides can be obtained by reaction of the corresponding chlorides with an excess of hydrogen peroxide in the presence of base. [ 17 ] [ 18 ] Some form by the reaction of metal hydrides with oxygen gas: [ 19 ] M−H + O 2 → M−OOH
Some transition metal dioxygen complexes abstract H atoms (and sometimes protons) to give hydroperoxides: | https://en.wikipedia.org/wiki/Hydroperoxide |
The hydroperoxyl radical , also known as the hydrogen superoxide , is the protonated form of superoxide with the chemical formula HO 2 , also written HOO • . This species plays an important role in the atmosphere and as a reactive oxygen species in cell biology. [ 2 ]
The molecule has a bent structure. [ 3 ]
The superoxide anion, • O − 2 , and the hydroperoxyl radical exist in equilibrium in aqueous solution : HO • 2 ⇌ • O − 2 + H +
The p K a of HO 2 is 4.88. Therefore, about 0.3% of any superoxide present in the cytosol of a typical cell is in the protonated form. [ 4 ]
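The 0.3% figure follows from the Henderson–Hasselbalch relation; a short check, assuming a typical cytosolic pH of about 7.4 (the text does not state the pH used, so this value is an assumption).

```python
def fraction_protonated(pH: float, pKa: float) -> float:
    """Fraction of the acid/base pair present as the protonated form (here HO2•)."""
    return 1.0 / (1.0 + 10.0 ** (pH - pKa))

frac = fraction_protonated(pH=7.4, pKa=4.88)
print(f"{100 * frac:.2f}% protonated")   # ≈ 0.30%, matching the figure quoted above
```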
It oxidizes nitric oxide to nitrogen dioxide: [ 2 ] HO • 2 + NO → NO 2 + •OH
Together with its conjugate base superoxide , hydroperoxyl is an important reactive oxygen species . Unlike • O − 2 , which has reducing properties, HO • 2 can act as an oxidant in a number of biologically important reactions, such as the abstraction of hydrogen atoms from tocopherol and polyunsaturated fatty acids in the lipid bilayer . As such, it may be an important initiator of lipid peroxidation .
Gaseous hydroperoxyl is involved in reaction cycles that destroy stratospheric ozone . It is also present in the troposphere, where it is essentially a byproduct of the oxidation of carbon monoxide and of hydrocarbons by the hydroxyl radical . [ 5 ]
Because dielectric constant has a strong effect on p K a , and the dielectric constant of air is quite low, superoxide produced (photochemically) in the atmosphere is almost exclusively present as HO • 2 . As hydroperoxyl is quite reactive, it acts as a "cleanser" of the atmosphere by degrading certain organic pollutants. As such, the chemistry of HO • 2 is of considerable geochemical importance. | https://en.wikipedia.org/wiki/Hydroperoxyl |
A hydrophile is a molecule or other molecular entity that is attracted to water molecules and tends to be dissolved by water. [ 1 ]
In contrast, hydrophobes are not attracted to water and may seem to be repelled by it. Hygroscopics are attracted to water, but are not dissolved by water.
A hydrophilic molecule or portion of a molecule is one whose interactions with water and other polar substances are more thermodynamically favorable than their interactions with oil or other hydrophobic solvents. [ 2 ] [ 3 ] They are typically charge-polarized and capable of hydrogen bonding . This makes these molecules soluble not only in water but also in polar solvents .
Hydrophilic molecules (and portions of molecules) can be contrasted with hydrophobic molecules (and portions of molecules). In some cases, both hydrophilic and hydrophobic properties occur in a single molecule. An example of these amphiphilic molecules is the lipids that comprise the cell membrane . Another example is soap , which has a hydrophilic head and a hydrophobic tail, allowing it to dissolve in both water and oil.
Hydrophilic and hydrophobic molecules are also known as polar molecules and nonpolar molecules , respectively. Some hydrophilic substances do not dissolve. This type of mixture is called a colloid .
An approximate rule of thumb for hydrophilicity of organic compounds is that solubility of a molecule in water is more than 1 mass % if there is at least one neutral hydrophile group per 5 carbons, or at least one electrically charged hydrophile group per 7 carbons. [ 4 ]
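The rule of thumb lends itself to a one-line check; the sketch below merely restates the heuristic and is not a validated solubility model.

```python
def likely_water_soluble(n_carbons: int,
                         n_neutral_hydrophiles: int = 0,
                         n_charged_hydrophiles: int = 0) -> bool:
    """Heuristic: solubility above ~1 mass % if there is at least one neutral
    hydrophilic group per 5 carbons, or one charged hydrophilic group per 7 carbons."""
    if n_neutral_hydrophiles and n_carbons <= 5 * n_neutral_hydrophiles:
        return True
    if n_charged_hydrophiles and n_carbons <= 7 * n_charged_hydrophiles:
        return True
    return False

print(likely_water_soluble(n_carbons=4, n_neutral_hydrophiles=1))   # 1-butanol: True
print(likely_water_soluble(n_carbons=8, n_neutral_hydrophiles=1))   # 1-octanol: False
```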
Hydrophilic substances (ex: salts) can seem to attract water out of the air. Sugar is also hydrophilic, and like salt is sometimes used to draw water out of foods. Sugar sprinkled on cut fruit will "draw out the water" through hydrophilia, making the fruit mushy and wet, as in a common strawberry compote recipe.
Liquid hydrophilic chemicals complexed with solid chemicals can be used to optimize solubility of hydrophobic chemicals.
Examples of hydrophilic liquids include ammonia, alcohols, some amides such as urea and some carboxylic acids such as acetic acid.
Hydroxyl groups (-OH), found in alcohols, are polar and therefore hydrophilic (water-liking), but their carbon chain portion is non-polar, which makes it hydrophobic. The molecule becomes increasingly nonpolar overall, and therefore less soluble in the polar water, as the carbon chain grows longer. [ 5 ] Methanol has the shortest carbon chain of all alcohols (one carbon atom), followed by ethanol (two carbon atoms), and 1-propanol along with its isomer 2-propanol , all being miscible with water. tert -Butyl alcohol , with four carbon atoms, is the only one among its isomers to be miscible with water.
Cyclodextrins are used to make pharmaceutical solutions by capturing hydrophobic molecules as guests in host–guest complexes. Because inclusion compounds of cyclodextrins with hydrophobic molecules are able to penetrate body tissues, these can be used to release biologically active compounds under specific conditions. [ 6 ] For example, when testosterone is complexed with hydroxypropyl-beta-cyclodextrin (HPBCD), 95% absorption of testosterone was achieved in 20 minutes via the sublingual route while HPBCD itself was not absorbed, whereas hydrophobic testosterone alone is usually absorbed less than 40% via the sublingual route. [ 7 ]
Hydrophilic membrane filtration is used in several industries to filter various liquids. These hydrophilic filters are used in the medical, industrial, and biochemical fields to filter elements such as bacteria, viruses, proteins, particulates, drugs, and other contaminants. Common hydrophilic molecules include colloids, cotton, and cellulose (which cotton consists of).
Unlike other membranes, hydrophilic membranes do not require pre-wetting: they can filter liquids in their dry state. Although most are used in low-heat filtration processes, many new hydrophilic membrane fabrics are used to filter hot liquids and fluids. [ 8 ] | https://en.wikipedia.org/wiki/Hydrophile |
Hydrophilic interaction chromatography (or hydrophilic interaction liquid chromatography , HILIC ) [ 1 ] is a variant of normal phase liquid chromatography that partly overlaps with other chromatographic applications such as ion chromatography and reversed phase liquid chromatography . HILIC uses hydrophilic stationary phases with reversed-phase type eluents . The name was suggested by Andrew Alpert in his 1990 paper on the subject. [ 2 ] He described the chromatographic mechanism for it as liquid-liquid partition chromatography where analytes elute in order of increasing polarity, a conclusion supported by a review and re-evaluation of published data. [ 3 ]
Any polar chromatographic surface can be used for HILIC separations. Even non-polar bonded silicas have been used with extremely high organic solvent composition, thanks to the exposed patches of silica in between the bonded ligands on the support, which can affect the interactions. [ 4 ] With that exception, HILIC phases can be grouped into five categories of neutral polar or ionic surfaces: [ 5 ]
A typical mobile phase for HILIC chromatography includes acetonitrile ("MeCN", also designated as "ACN") with a small amount of water. However, any aprotic solvent miscible with water (e.g. THF or dioxane ) can be used. Alcohols can also be used, however, their concentration must be higher to achieve the same degree of retention for an analyte relative to an aprotic solvent–water combination. See also Aqueous normal phase chromatography .
It is commonly believed that in HILIC, the mobile phase forms a water-rich layer on the surface of the polar stationary phase vs. the water-deficient mobile phase , creating a liquid/liquid extraction system. The analyte is distributed between these two layers. However, HILIC is more than just simple partitioning and includes hydrogen donor interactions between neutral polar species as well as weak electrostatic mechanisms under the high organic solvent conditions used for retention. This distinguishes HILIC as a mechanism distinct from ion exchange chromatography . The more polar compounds will have a stronger interaction with the stationary aqueous layer than the less polar compounds. Thus, a separation based on a compound's polarity and degree of solvation takes place.
Ionic additives, such as ammonium acetate and ammonium formate, are usually used to control the mobile phase pH and ion strength. In HILIC they can also contribute to the polarity of the analyte, resulting in differential changes in retention. For extremely polar analytes (e.g. aminoglycoside antibiotics ( gentamicin ) or adenosine triphosphate ), higher concentrations of buffer (c. 100 mM) are required to ensure that the analyte will be in a single ionic form. Otherwise, asymmetric peak shape, chromatographic tailing, and/or poor recovery from the stationary phase will be observed. For the separation of neutral polar analytes (e.g. carbohydrates), no buffer is necessary.
Other salts, such as 100–300 mM sodium perchlorate, that are soluble in high-organic solvent mixtures (c. 70–90% acetonitrile ), can be used to increase the mobile phase polarity to affect elution. These salts are not volatile, so this technique is less useful with a mass spectrometer as the detector. Usually a gradient (to increasing amounts of water) is enough to promote elution.
All ions partition into the stationary phase to some degree, so an occasional "wash" with water is required to ensure a reproducible stationary phase.
The HILIC mode of separation is used extensively for separation of some biomolecules , organic and some inorganic molecules [ 11 ] by differences in polarity. Its utility has increased due to the simplified sample preparation for biological samples, when analyzing for metabolites, since the metabolic process generally results in the addition of polar groups to enhance elimination from the cellular tissue. This separation technique is also particularly suitable for glycosylation analysis [ 12 ] and quality assurance of glycoproteins and glycoforms in biologic medical products . [ 13 ] For the detection of polar compounds with the use of electrospray-ionization mass spectrometry as a chromatographic detector, HILIC can offer a ten fold increase in sensitivity over reversed-phase chromatography [ 11 ] because the organic solvent is much more volatile.
With surface chemistries that are weakly ionic, the choice of pH can affect the ionic nature of the column chemistry. Properly adjusted, the pH can be set to reduce the selectivity toward functional groups with the same charge as the column, or enhance it for oppositely charged functional groups. Similarly, the choice of pH affects the polarity of the solutes. However, for column surface chemistries that are strongly ionic, and thus resistant to pH values in the mid-range of the pH scale (pH 3.5–8.5), these separations will be reflective of the polarity of the analytes alone, and thus might be easier to understand when doing methods development.
In 2008, Alpert coined the term ERLIC [ 14 ] (electrostatic repulsion hydrophilic interaction chromatography) for HILIC separations where an ionic column surface chemistry is used to repel a common ionic polar group on an analyte or within a set of analytes, to facilitate separation by the remaining polar groups. Electrostatic effects have an order of magnitude stronger chemical potential than neutral polar effects. This allows one to minimize the influence of a common, ionic group within a set of analyte molecules, or to reduce the degree of retention from these more polar functional groups, even enabling isocratic separations in lieu of a gradient in some situations. His subsequent publication further described orientation effects [ 15 ] which others have also called ion-pair normal phase [ 16 ] or e-HILIC, reflecting retention mechanisms sensitive to a particular ionic portion of the analyte, either attractive or repulsive. ERLIC (eHILIC) separations need not be isocratic, but the net effect is the reduction of the attraction of a particularly strong polar group, which then requires less strong elution conditions, and the enhanced interaction of the remaining polar (oppositely charged ionic, or non-ionic) functional groups of the analyte(s). Based on the ERLIC column invented by Andrew Alpert, a new peptide mapping methodology was developed with the unique property of separating asparagine deamidation and isomerization products. This property would be very beneficial for future mass spectrometry-based multi-attribute monitoring in biologics quality control. [ 17 ]
For example, one could use a cation exchange (negatively charged) surface chemistry for ERLIC separations to reduce the influence on retention of anionic (negatively charged) groups (the phosphates of nucleotides or of phosphonyl antibiotic mixtures, or the sialic acid groups of modified carbohydrates), allowing separation based more on the basic and/or neutral functional groups of these molecules. Modifying the polarity of a weakly ionic group (e.g. carboxyl) on the surface is easily accomplished by adjusting the pH to be within two pH units of that group's pKa. For strongly ionic functional groups of the surface (i.e. sulfates or phosphates), one could instead use a lower amount of buffer so the residual charge is not completely ion-paired. An example of this would be the use of a 12.5 mM (rather than the recommended >20 mM) buffer at pH 9.2 as the mobile phase on a polymeric , zwitterionic , betaine-sulfonate surface to separate phosphonyl antibiotic mixtures (each containing a phosphate group). This enhances the influence of the column's sulfonic acid functional groups over that of its quaternary amine, which is slightly diminished by the pH. Commensurate with this, these analytes will show reduced retention on the column, eluting earlier and in higher amounts of organic solvent than if a neutral polar HILIC surface were used. This also increases their detection sensitivity by negative-ion mass spectrometry.
By analogy to the above, one can use an anion exchange (positively charged) column surface chemistry to reduce the influence on retention of cationic (positively charged) functional groups for a set of analytes, such as when selectively isolating phosphorylated peptides or sulfated polysaccharide molecules. Use of a pH between 1 and 2 pH units will reduce the polarity of two of the three ionizable oxygens of the phosphate group, and thus will allow easy desorption from the (oppositely charged) surface chemistry. It will also reduce the influence of negatively charged carboxyls in the analytes, since they will be protonated at this low a pH value, and thus contribute less overall polarity to the molecule. Any common, positively charged amino groups will be repelled from the column surface chemistry and thus these conditions enhance the role of the phosphate's polarity (as well as other neutral polar groups) in the separation. | https://en.wikipedia.org/wiki/Hydrophilic_interaction_chromatography |
In chemistry , hydrophobicity is the chemical property of a molecule (called a hydrophobe ) that is seemingly repelled from a mass of water . [ 1 ] In contrast, hydrophiles are attracted to water.
Hydrophobic molecules tend to be nonpolar and, thus, prefer other neutral molecules and nonpolar solvents . Because water molecules are polar, hydrophobes do not dissolve well among them. Hydrophobic molecules in water often cluster together, forming micelles . Water on hydrophobic surfaces will exhibit a high contact angle .
Examples of hydrophobic molecules include the alkanes , oils , fats , and greasy substances in general. Hydrophobic materials are used for oil removal from water, the management of oil spills , and chemical separation processes to remove non-polar substances from polar compounds. [ 2 ]
The term hydrophobic —which comes from the Ancient Greek ὑδρόφοβος ( hydróphobos ), "having a fear of water", constructed from Ancient Greek ὕδωρ (húdōr) ' water ' and Ancient Greek φόβος (phóbos) ' fear ' [ 3 ] —is often used interchangeably with lipophilic , "fat-loving". However, the two terms are not synonymous. While hydrophobic substances are usually lipophilic, there are exceptions, such as the silicones and fluorocarbons . [ 4 ] [ 5 ]
The hydrophobic interaction is mostly an entropic effect originating from the disruption of the highly dynamic hydrogen bonds between molecules of liquid water by the nonpolar solute, causing the water to compensate by forming a clathrate -like cage structure around the non-polar molecules. This structure is more highly ordered than free water molecules due to the water molecules arranging themselves to interact as much as possible with themselves, and thus results in a lower entropic state at the interface. This causes non-polar molecules to clump together to reduce the surface area exposed to water and thereby increase the entropy of the system. [ 6 ] [ 7 ] Thus, the two immiscible phases (hydrophilic vs. hydrophobic) will change so that their corresponding interfacial area will be minimal. This effect can be visualized in the phenomenon called phase separation. [ citation needed ]
Superhydrophobic surfaces, such as the leaves of the lotus plant, are those that are extremely difficult to wet. The contact angle of a water droplet on such a surface exceeds 150°. [ 8 ] This is referred to as the lotus effect , and is primarily a physical property related to interfacial tension , rather than a chemical property. [ 9 ]
In 1805, Thomas Young defined the contact angle θ by analyzing the forces acting on a fluid droplet resting on a solid surface surrounded by a gas, [ 10 ] arriving at the relation γ SG = γ SL + γ LG cos θ,
where γ SG , γ SL and γ LG are the interfacial tensions of the solid–gas, solid–liquid and liquid–gas interfaces, respectively.
θ can be measured using a contact angle goniometer .
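A small numerical illustration of Young's relation, rearranged as cos θ = (γ SG − γ SL )/γ LG ; the interfacial tensions below are invented values chosen only to show the calculation.

```python
import math

def young_contact_angle(gamma_sg: float, gamma_sl: float, gamma_lg: float) -> float:
    """Contact angle in degrees from the three interfacial tensions (same units)."""
    cos_theta = (gamma_sg - gamma_sl) / gamma_lg
    return math.degrees(math.acos(cos_theta))

# Hypothetical tensions in mN/m for water on a low-energy (hydrophobic) solid.
theta = young_contact_angle(gamma_sg=22.0, gamma_sl=40.0, gamma_lg=72.8)
print(f"theta ≈ {theta:.0f} degrees")   # ≈ 104°, i.e. a hydrophobic surface (θ > 90°)
```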
Wenzel determined that when the liquid is in intimate contact with a microstructured surface, θ will change to θ W* , given by cos θ W* = r cos θ,
where r is the ratio of the actual area to the projected area. [ 11 ] Wenzel's equation shows that microstructuring a surface amplifies the natural tendency of the surface. A hydrophobic surface (one that has an original contact angle greater than 90°) becomes more hydrophobic when microstructured – its new contact angle becomes greater than the original. However, a hydrophilic surface (one that has an original contact angle less than 90°) becomes more hydrophilic when microstructured – its new contact angle becomes less than the original. [ 12 ] Cassie and Baxter found that if the liquid is suspended on the tops of microstructures, θ will change to θ CB* , given by cos θ CB* = φ(cos θ + 1) − 1,
where φ is the area fraction of the solid that touches the liquid. [ 13 ] Liquid in the Cassie–Baxter state is more mobile than in the Wenzel state. [ citation needed ]
We can predict whether the Wenzel or Cassie–Baxter state should exist by calculating the new contact angle with both equations. By a minimization of free energy argument, the relation that predicted the smaller new contact angle is the state most likely to exist. Stated in mathematical terms, for the Cassie–Baxter state to exist, the following inequality must be true: cos θ < (φ − 1)/( r − φ). [ 14 ]
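A sketch of this comparison, with the Wenzel and Cassie–Baxter relations written as above; the intrinsic contact angle, roughness ratio r and solid fraction φ below are invented example values.

```python
import math

def predict_state(theta_deg: float, r: float, phi: float):
    """Return the predicted wetting state and both apparent contact angles (degrees)."""
    c = math.cos(math.radians(theta_deg))
    cos_w = max(-1.0, min(1.0, r * c))                 # Wenzel: cos θ_W* = r cos θ
    cos_cb = max(-1.0, min(1.0, phi * (c + 1) - 1))    # Cassie-Baxter: cos θ_CB* = φ(cos θ + 1) − 1
    theta_w = math.degrees(math.acos(cos_w))
    theta_cb = math.degrees(math.acos(cos_cb))
    # The relation giving the smaller apparent angle is taken as the more likely state.
    state = "Wenzel" if theta_w < theta_cb else "Cassie-Baxter"
    return state, round(theta_w, 1), round(theta_cb, 1)

print(predict_state(theta_deg=110.0, r=1.8, phi=0.25))
# ('Wenzel', 128.0, 146.7) — consistent with cos θ not satisfying cos θ < (φ − 1)/(r − φ)
```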
A recent alternative criterion for the Cassie–Baxter state asserts that the Cassie–Baxter state exists when the following two criteria are met: (1) contact line forces overcome body forces of unsupported droplet weight, and (2) the microstructures are tall enough to prevent the liquid that bridges microstructures from touching the base of the microstructures. [ 15 ]
A new criterion for the switch between Wenzel and Cassie-Baxter states has been developed recently based on surface roughness and surface energy . [ 16 ] The criterion focuses on the air-trapping capability under liquid droplets on rough surfaces, which could tell whether Wenzel's model or Cassie-Baxter's model should be used for certain combination of surface roughness and energy. [ citation needed ]
Contact angle is a measure of static hydrophobicity, and contact angle hysteresis and slide angle are dynamic measures. Contact angle hysteresis is a phenomenon that characterizes surface heterogeneity. [ 17 ] When a pipette injects a liquid onto a solid, the liquid will form some contact angle. As the pipette injects more liquid, the droplet will increase in volume, the contact angle will increase, but its three-phase boundary will remain stationary until it suddenly advances outward. The contact angle the droplet had immediately before advancing outward is termed the advancing contact angle. The receding contact angle is now measured by pumping the liquid back out of the droplet. The droplet will decrease in volume, the contact angle will decrease, but its three-phase boundary will remain stationary until it suddenly recedes inward. The contact angle the droplet had immediately before receding inward is termed the receding contact angle. The difference between advancing and receding contact angles is termed contact angle hysteresis and can be used to characterize surface heterogeneity, roughness, and mobility. [ 18 ] Surfaces that are not homogeneous will have domains that impede motion of the contact line. The slide angle is another dynamic measure of hydrophobicity and is measured by depositing a droplet on a surface and tilting the surface until the droplet begins to slide. In general, liquids in the Cassie–Baxter state exhibit lower slide angles and contact angle hysteresis than those in the Wenzel state. [ citation needed ]
Dettre and Johnson discovered in 1964 that the superhydrophobic lotus effect phenomenon was related to rough hydrophobic surfaces, and they developed a theoretical model based on experiments with glass beads coated with paraffin or TFE telomer. The self-cleaning property of superhydrophobic micro- nanostructured surfaces was reported in 1977. [ 19 ] Perfluoroalkyl, perfluoropolyether, and RF plasma -formed superhydrophobic materials were developed, used for electrowetting and commercialized for bio-medical applications between 1986 and 1995. [ 20 ] [ 21 ] [ 22 ] [ 23 ] Other technology and applications have emerged since the mid-1990s. [ 24 ] A durable superhydrophobic hierarchical composition, applied in one or two steps, was disclosed in 2002 comprising nano-sized particles ≤ 100 nanometers overlaying a surface having micrometer-sized features or particles ≤ 100 micrometers. The larger particles were observed to protect the smaller particles from mechanical abrasion. [ 25 ]
In recent research, superhydrophobicity has been reported by allowing alkylketene dimer (AKD) to solidify into a nanostructured fractal surface. [ 26 ] Many papers have since presented fabrication methods for producing superhydrophobic surfaces including particle deposition, [ 27 ] sol-gel techniques, [ 28 ] plasma treatments, [ 29 ] vapor deposition, [ 27 ] and casting techniques. [ 30 ] Current opportunity for research impact lies mainly in fundamental research and practical manufacturing. [ 31 ] Debates have recently emerged concerning the applicability of the Wenzel and Cassie–Baxter models. In an experiment designed to challenge the surface energy perspective of the Wenzel and Cassie–Baxter model and promote a contact line perspective, water drops were placed on a smooth hydrophobic spot in a rough hydrophobic field, a rough hydrophobic spot in a smooth hydrophobic field, and a hydrophilic spot in a hydrophobic field. [ 32 ] Experiments showed that the surface chemistry and geometry at the contact line affected the contact angle and contact angle hysteresis , but the surface area inside the contact line had no effect. An argument that increased jaggedness in the contact line enhances droplet mobility has also been proposed. [ 33 ]
Many hydrophobic materials found in nature rely on Cassie's law and are biphasic on the submicrometer level with one component air. The lotus effect is based on this principle. Inspired by it , many functional superhydrophobic surfaces have been prepared. [ 34 ]
An example of a bionic or biomimetic superhydrophobic material in nanotechnology is nanopin film . [ citation needed ]
One study presents a vanadium pentoxide surface that switches reversibly between superhydrophobicity and superhydrophilicity under the influence of UV radiation. [ 35 ] According to the study, any surface can be modified to this effect by application of a suspension of rose-like V 2 O 5 particles, for instance with an inkjet printer . Once again hydrophobicity is induced by interlaminar air pockets (separated by 2.1 nm distances). The UV effect is also explained. UV light creates electron-hole pairs , with the holes reacting with lattice oxygen, creating surface oxygen vacancies, while the electrons reduce V 5+ to V 3+ . The oxygen vacancies are met by water, and it is this water absorbency by the vanadium surface that makes it hydrophilic. By extended storage in the dark, water is replaced by oxygen and hydrophilicity is once again lost. [ citation needed ]
A significant majority of hydrophobic surfaces have their hydrophobic properties imparted by structural or chemical modification of a surface of a bulk material, through either coatings or surface treatments. That is to say, the presence of molecular species (usually organic) or structural features results in high contact angles of water. In recent years, rare earth oxides have been shown to possess intrinsic hydrophobicity. [ 36 ] The intrinsic hydrophobicity of rare earth oxides depends on surface orientation and oxygen vacancy levels, and is naturally more robust than coatings or surface treatments, having potential applications in condensers and catalysts that can operate at high temperatures or corrosive environments. [ 37 ]
Hydrophobic concrete has been produced since the mid-20th century. [ citation needed ]
Active recent research on superhydrophobic materials might eventually lead to more industrial applications. [ 38 ]
A simple routine of coating cotton fabric with silica [ 39 ] or titania [ 40 ] particles by sol-gel technique has been reported, which protects the fabric from UV light and makes it superhydrophobic.
An efficient routine has been reported for making polyethylene superhydrophobic and thus self-cleaning. [ 41 ] 99% of dirt on such a surface is easily washed away.
Patterned superhydrophobic surfaces also have promise for lab-on-a-chip microfluidic devices and can drastically improve surface-based bioanalysis. [ 42 ]
In pharmaceuticals, hydrophobicity of pharmaceutical blends affects important quality attributes of final products, such as drug dissolution and hardness . [ 43 ] Methods have been developed to measure the hydrophobicity of pharmaceutical materials. [ 44 ] [ 45 ]
The development of hydrophobic passive daytime radiative cooling (PDRC) surfaces, whose effectiveness at solar reflectance and thermal emittance is predicated on their cleanliness, has improved the "self-cleaning" of these surfaces. Scalable and sustainable hydrophobic PDRCs that avoid VOCs have further been developed. [ 46 ] | https://en.wikipedia.org/wiki/Hydrophobe |
The hydrophobic effect is the observed tendency of nonpolar substances to aggregate in an aqueous solution and to be excluded by water . [ 1 ] [ 2 ] The word hydrophobic literally means "water-fearing", and it describes the segregation of water and nonpolar substances, which maximizes the entropy of water and minimizes the area of contact between water and nonpolar molecules. In terms of thermodynamics, the hydrophobic effect is the free energy change of water surrounding a solute. [ 3 ] A positive free energy change of the surrounding solvent indicates hydrophobicity, whereas a negative free energy change implies hydrophilicity.
The hydrophobic effect is responsible for the separation of a mixture of oil and water into its two components. It is also responsible for effects related to biology, including: cell membrane and vesicle formation, protein folding , insertion of membrane proteins into the nonpolar lipid environment and protein- small molecule associations. Hence the hydrophobic effect is essential to life. [ 4 ] [ 5 ] [ 6 ] [ 7 ] Substances for which this effect is observed are known as hydrophobes .
Amphiphiles are molecules that have both hydrophobic and hydrophilic domains. Detergents are composed of amphiphiles that allow hydrophobic molecules to be solubilized in water by forming micelles and bilayers (as in soap bubbles ). They are also important to cell membranes composed of amphiphilic phospholipids that prevent the internal aqueous environment of a cell from mixing with external water.
In the case of protein folding, the hydrophobic effect is important to understanding the structure of proteins that have hydrophobic amino acids (such as valine , leucine , isoleucine , phenylalanine , tryptophan and methionine ) clustered together within the protein. Structures of globular proteins have a hydrophobic core in which hydrophobic side chains are buried from water, which stabilizes the folded state. Charged and polar side chains are situated on the solvent-exposed surface where they interact with surrounding water molecules. Minimizing the number of hydrophobic side chains exposed to water is the principal driving force behind the folding process, [ 8 ] [ 9 ] [ 10 ] although formation of hydrogen bonds within the protein also stabilizes protein structure. [ 11 ] [ 12 ]
The energetics of DNA tertiary-structure assembly were determined to be driven by the hydrophobic effect, in addition to Watson–Crick base pairing , which is responsible for sequence selectivity, and stacking interactions between the aromatic bases. [ 13 ] [ 14 ]
In biochemistry , the hydrophobic effect can be used to separate mixtures of proteins based on their hydrophobicity. Column chromatography with a hydrophobic stationary phase such as phenyl - sepharose will cause more hydrophobic proteins to travel more slowly, while less hydrophobic ones elute from the column sooner. To achieve better separation, a salt may be added (higher concentrations of salt increase the hydrophobic effect) and its concentration decreased as the separation progresses. [ 15 ]
The origin of the hydrophobic effect is not fully understood.
Some argue that the hydrophobic interaction is mostly an entropic effect originating from the disruption of highly dynamic hydrogen bonds between molecules of liquid water by the nonpolar solute. [ 16 ] A hydrocarbon chain or a similar nonpolar region of a large molecule is incapable of forming hydrogen bonds with water. Introduction of such a non-hydrogen bonding surface into water causes disruption of the hydrogen bonding network between water molecules. The hydrogen bonds are reoriented tangentially to such surface to minimize disruption of the hydrogen bonded 3D network of water molecules, and this leads to a structured water "cage" around the nonpolar surface. The water molecules that form the "cage" (or clathrate ) have restricted mobility. In the solvation shell of small nonpolar particles, the restriction amounts to some 10%. For example, in the case of dissolved xenon at room temperature a mobility restriction of 30% has been found. [ 17 ] In the case of larger nonpolar molecules, the reorientational and translational motion of the water molecules in the solvation shell may be restricted by a factor of two to four; thus, at 25 °C the reorientational correlation time of water increases from 2 to 4-8 picoseconds. Generally, this leads to significant losses in translational and rotational entropy of water molecules and makes the process unfavorable in terms of the free energy in the system. [ 18 ] By aggregating together, nonpolar molecules reduce the surface area exposed to water and minimize their disruptive effect.
The hydrophobic effect can be quantified by measuring the partition coefficients of non-polar molecules between water and non-polar solvents. The partition coefficients can be transformed to free energy of transfer which includes enthalpic and entropic components, ΔG = ΔH - TΔS . These components are experimentally determined by calorimetry . The hydrophobic effect was found to be entropy-driven at room temperature because of the reduced mobility of water molecules in the solvation shell of the non-polar solute; however, the enthalpic component of transfer energy was found to be favorable, meaning it strengthened water-water hydrogen bonds in the solvation shell due to the reduced mobility of water molecules. At the higher temperature, when water molecules become more mobile, this energy gain decreases along with the entropic component. The hydrophobic effect depends on the temperature, which leads to "cold denaturation " of proteins. [ 19 ]
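A short sketch of these quantities, assuming a measured water-to-nonpolar-solvent partition coefficient and a calorimetric enthalpy; all numbers are placeholders used only to show the arithmetic ΔG = −RT ln K and ΔG = ΔH − TΔS.

```python
import math

R = 8.314  # gas constant, J/(mol·K)

def transfer_free_energy(K_partition: float, T: float = 298.15) -> float:
    """Free energy of transfer (J/mol) from a partition coefficient: dG = -RT ln K."""
    return -R * T * math.log(K_partition)

# Hypothetical: a nonpolar solute 50x more concentrated in the organic phase.
dG = transfer_free_energy(50.0)
dH = -2.0e3                     # placeholder calorimetric enthalpy of transfer, J/mol
TdS = dH - dG                   # from dG = dH - T*dS
print(f"dG = {dG/1000:+.1f} kJ/mol, dH = {dH/1000:+.1f} kJ/mol, TdS = {TdS/1000:+.1f} kJ/mol")
```

With these placeholder numbers the transfer out of water is favorable (ΔG < 0) and mostly entropy-driven (TΔS > 0), which is the room-temperature signature of the hydrophobic effect described above.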
The hydrophobic effect can be calculated by comparing the free energy of solvation with bulk water. In this way, the hydrophobic effect not only can be localized but also decomposed into enthalpic and entropic contributions. [ 3 ] | https://en.wikipedia.org/wiki/Hydrophobic_effect |
Hydrophobic sand (or magic sand ) is a toy made from sand coated with a hydrophobic compound. The presence of the hydrophobic compound causes the grains of sand to adhere to one another and form cylinders (to minimize surface area ) when exposed to water, with a pocket of air forming around the sand. [ 1 ] This pocket of air prevents the magic sand from getting wet. A variation of this, kinetic sand , has several of the same properties, but acts like wet sand that will not dry out. Hydrophobic sand, whether the wet or dry type, will not mix with water.
The earliest reference to waterproof sand is in the 1915 book The Boy Mechanic Book 2 published by Popular Mechanics . The Boy Mechanic states waterproof sand was invented by East Indian magicians . The sand was made by mixing heated sand with melted wax . The wax would repel water when the sand was exposed to water. [ 2 ]
Magic sand was originally developed to trap ocean oil spills near the shore. This was done by sprinkling magic sand on floating petroleum, which would then mix with the oil and make it heavy enough to sink. Due to the expense of production, however, it is no longer used for this purpose.
Hydrophobic sand has also been tested by utility companies in Arctic regions as a foundation for junction boxes , as it never freezes. It is also used as an aerating medium for potted plants.
The properties of Magic sand are achieved using ordinary beach sand, which contains tiny particles of pure silica , and exposing it to vapors of trimethylsilanol ( CH 3 ) 3 Si OH , an organosilicon compound. Upon exposure, the trimethylsilanol compound bonds to the silica particles while forming water. The exteriors of the sand grains are thus coated with hydrophobic groups. When Magic sand is removed from water, it is completely dry and free-flowing. | https://en.wikipedia.org/wiki/Hydrophobic_sand |
Hydrophobic soil is a soil whose particles repel water. The layer of hydrophobicity is commonly found at or a few centimeters below the surface, parallel to the soil profile. [ 1 ] This layer can vary in thickness and abundance and is typically covered by a layer of ash or burned soil.
Hydrophobic soil is most familiarly formed when a fire or hot air disperses waxy compounds found in the uppermost litter layer consisting of organic matter. [ 2 ] After the compounds disperse, they mainly coat sandy soil particles near the surface in the upper layers of soil, making the soil hydrophobic. Other producers of hydrophobic coatings are contamination and industrial spillages along with soil microbial activity. [ 2 ] Hydrophobicity can also be seen as a natural soil property that results from the degradation of natural vegetation such as Eucalyptus that has natural wax properties. [ 3 ]
In a particular New Zealand sand, this waxy lipid coating was found to consist primarily of hydrocarbons and triglycerides that were basic in pH, along with a smaller amount of acidic long-chain fatty acids. [ 4 ] Capillary penetration amongst soil particles is limited by the hydrophobic coating on the particles, resulting in water repellence in each affected particle: the hydrophilic head of the lipid attaches itself to the sand particle, leaving the hydrophobic tail shielding the outside of the particle. [ 2 ]
Other important soil water averting factors have been found to include soil texture, microbiology, soil surface roughness, soil organic matter content, soil chemical composition, acidity, soil water content, soil type, mineralogy of clay particles, and seasonal variations of the region. [ 5 ] Soil texture plays a large role in predicting whether a soil could be water repelling as larger grained particles in the soil such as sand have smaller surface areas, making them more prone to being fully coated by hydrophobic compounds. It is much more difficult to entirely coat a silt or clay particle with more surface area, but when it does happen, the resulting water repellency of the soil is severe. [ 6 ] As soil organic matter in the form of plant or microbial biomass decomposes, physiochemical changes can release these hydrophobic compounds into the soil as well. [ 7 ] This, however, depends on the type of microbial activity present in the soil as it can also hinder the development of hydrophobic compounds. [ 6 ]
Soil water repellence is almost always tested with the water droplet penetration time (WDPT) test first because of the simplicity of the test. [ 8 ] This test is executed by recording the time it takes for one droplet of water to infiltrate a specific soil, indicating the stability of repellency. [ 2 ] Water infiltration is expressed as water entering the soil in a spontaneous fashion and correlates with the angle of the water-soil contact. If the water-soil contact angle is greater than 90º, then the soil is determined to be hydrophobic. It has also been observed that if the test droplet is placed on hydrophobic soil, it will rapidly develop a particulate skin before disappearing. [ 2 ]
Results of the WDPT are summarized in Table 1 (Characterizing the degree of hydrophobicity in soils based on the water droplet penetration test). [ 9 ]
Another method for determining soil water repellency is the molarity of ethanol droplet (MED) test. [ 10 ] The MED test uses solutions of ethanol of varying surface tensions to observe soil wetting within a time frame of 10 seconds. If there is no wetting within the specified timeframe, an aqueous solution of ethanol with lower surface tension is then placed on a different area of the sample. The results of the MED test depend on the molarity of the ethanol solution whose droplets were absorbed in the allotted 10 seconds. [ 10 ] Classifying soil water repellency from this test can be done by using a MED index where a non-water repellent soil has an index of less than or equal to 1 and a severely water repellent soil has an index of greater than or equal to 2.2. [ 8 ] The MED index, 90º surface tension, ethanol molarity, and volume percentage correlate and can be converted into one another. [ 8 ] In this test, the liquid-air surface tension value of the ethanol solution that is absorbed within this timeframe is used as the ninety-degree surface tension of the soil. The water entry pressure associated with the tested soil is another indicator of infiltration rates as it is associated with the degree of water repellency along with soil pore size. [ 8 ]
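As a rough illustration of the MED classification described above, the sketch below maps a MED index to a repellency class. Only the two thresholds quoted in the text (index ≤ 1 for non-repellent soil, ≥ 2.2 for severely repellent soil) come from the source; the intermediate label and the example values are assumptions for illustration.

```python
# Illustrative classifier for soil water repellency from a MED index.
# Thresholds <= 1.0 and >= 2.2 follow the text; the intermediate label is assumed.
def classify_med(med_index):
    if med_index <= 1.0:
        return "non-water-repellent"
    elif med_index < 2.2:
        return "intermediate water repellency"
    else:
        return "severely water-repellent"

for value in (0.5, 1.8, 3.0):   # hypothetical MED indices
    print(value, "->", classify_med(value))
```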
Hydrophobic soils and their aversion to water have consequences for plant water availability, plant-available nutrients, and the hydrology and geomorphology of the affected area. [ 5 ] By reducing the infiltration rate, hydrophobic soils shorten the time to runoff generation, which increases the overland flow of water during precipitation or irrigation events. Greater runoff increases erosion, causes uneven wetting patterns in soil, accelerates nutrient leaching (reducing soil fertility), develops different flow paths in the region, and increases the risk of contamination in soils. [ 5 ]
Drainage of nutrients occurs in weaker areas of repellency in hydrophobic soil where water preferentially drains into the soil. Because the water cannot drain into the stronger areas of hydrophobicity, the water finds pathways of preferential flow where it can infiltrate deeper into the soil profile. If irrigation or precipitation events are large, the water could potentially flow below the root zone, making it unavailable to any plant life and oftentimes taking fertilizers and nutrients with it. This additionally leads to an uneven distribution of nutrients and applied chemicals resulting in patchy vegetation. [ 11 ]
In an agricultural setting, hydrophobic soil is a large constraint on crop yields. For example, in Australia, there have been documented reports of up to 80% loss in production due to soil water repellency. [ 6 ] This is due to low rates of seed germination in soils as well as low plant available water levels. [ 3 ]
Hydrophobic soils have been found on all continents except Antarctica. [ 5 ] They occur in dry regions in the United States, southern Australia, and the Mediterranean Basin, and in wet regions including Sweden, the Netherlands, British Columbia, and Colombia. [ 6 ] Although water repellency mainly appears in coarse-textured, sand-dominated soils, it affects all soil types and has been reported in forests, pastures, agricultural plots, and shrublands. [ 6 ] Generally, the degree of hydrophobicity is more severe in the soils of legume-grass pastures than in cultivated agricultural fields. [ 3 ]
One method of managing sandy water-repellent soil is claying. This is done by adding select clay materials , such as calcium-bentonite , to give the soil more surface area per unit volume and improve soil mineralogy supporting aggregation . A hydrophobic barley field increased crop yield from 1.7 to 3.4 t/ha, and a field of lupins increased the yield by 1 t/ha within 2 years of claying. [ 3 ]
Liming is a method to reduce soil water repellency in acidic soils. [ 3 ] The liming process consists of adding calcium carbonate, which increases the pH of the soil towards neutral. Separate from hydrophobic soils, increased calcium and pH are associated with increased infiltration from improved structure and aggregation in biologically active soils .
Increasing the soil's pH increases the ability of naturally occurring humic substances to improve infiltration in hydrophobic soils. Humic acid is only water-soluble at a pH greater than 6.5, while fulvic acid is soluble across the pH range. Both acids, when in solution, reduce the surface tension of water. Conversely, it has been reported that soils deficient in dissolved fulvic acid show more severe water repellency. [ 3 ]
The agricultural practice of tilling decreases the degree of soil water repellency. Tilling crop fields reduces the carbon content of the soil through mixing and mineralization, thus decreasing the likelihood of decomposition by microorganisms that can lead to the dispersal of the hydrophobic coating that triggers soil water repellency. [ 3 ]
Naturally forming holes and cracks in hydrophobic soil patches allow water to infiltrate the surface. These can form from burrowing animals, root channels, or macropores from decayed roots. [ 12 ] These macropores have been identified as essential pathways in forest ecosystems for water to penetrate the soil because they account for approximately 35% of the near-surface volume of the soil. [ 12 ] | https://en.wikipedia.org/wiki/Hydrophobic_soil |
Hydrophobicity scales are values that define the relative hydrophobicity or hydrophilicity of amino acid residues. The more positive the value, the more hydrophobic are the amino acids located in that region of the protein. These scales are commonly used to predict the transmembrane alpha-helices of membrane proteins . When the values are read consecutively along a protein's sequence, changes in value indicate the attraction of specific protein regions towards the hydrophobic region inside the lipid bilayer .
The hydrophobic or hydrophilic character of a compound or amino acid is its hydropathic character, [ 1 ] hydropathicity, or hydropathy.
The hydrophobic effect represents the tendency of water to exclude non-polar molecules. The effect originates from the disruption of highly dynamic hydrogen bonds between molecules of liquid water. Polar chemical groups, such as OH group in methanol do not cause the hydrophobic effect. However, a pure hydrocarbon molecule, for example hexane , cannot accept or donate hydrogen bonds to water. Introduction of hexane into water causes disruption of the hydrogen bonding network between water molecules. The hydrogen bonds are partially reconstructed by building a water "cage" around the hexane molecule, similar to that in clathrate hydrates formed at lower temperatures. The mobility of water molecules in the "cage" (or solvation shell ) is strongly restricted. This leads to significant losses in translational and rotational entropy of water molecules and makes the process unfavorable in terms of free energy of the system. [ 2 ] [ 3 ] [ 4 ] [ 5 ] In terms of thermodynamics, the hydrophobic effect is the free energy change of water surrounding a solute. [ 6 ] A positive free energy change of the surrounding solvent indicates hydrophobicity, whereas a negative free energy change implies hydrophilicity. In this way, the hydrophobic effect not only can be localized but also decomposed into enthalpic and entropic contributions.
A number of different hydrophobicity scales have been developed. [ 3 ] [ 1 ] [ 7 ] [ 8 ] [ 9 ] The Expasy Protscale website lists a total of 22 hydrophobicity scales. [ 10 ]
There are clear differences between the four scales shown in the table. [ 11 ] Both the second and fourth scales place cysteine as the most hydrophobic residue, unlike the other two scales. This difference is due to the different methods used to measure hydrophobicity. The method used to obtain the Janin and Rose et al. scales was to examine proteins with known 3-D structures and define the hydrophobic character as the tendency for a residue to be found inside a protein rather than on its surface. [ 12 ] [ 13 ] Since cysteine forms disulfide bonds that must occur inside a globular structure, cysteine is ranked as the most hydrophobic. The first and third scales are derived from the physicochemical properties of the amino acid side chains. These scales result mainly from inspection of the amino acid structures. [ 14 ] [ 1 ] Biswas et al. divided the scales into five categories based on the method used to obtain them. [ 3 ]
The most common method of measuring amino acid hydrophobicity is partitioning between two immiscible liquid phases. Different organic solvents are most widely used to mimic the protein interior. However, organic solvents are slightly miscible with water, and the characteristics of both phases change, making it difficult to obtain a pure hydrophobicity scale. [ 3 ] Nozaki and Tanford proposed the first major hydrophobicity scale, for nine amino acids. [ 15 ] Ethanol and dioxane were used as the organic solvents, and the free energy of transfer of each amino acid was calculated. Non-liquid phases, such as micellar phases and vapor phases, can also be used in partitioning methods. Two scales have been developed using micellar phases. [ 16 ] [ 17 ] Fendler et al. measured the partitioning of 14 radiolabeled amino acids using sodium dodecyl sulfate (SDS) micelles . Amino acid side chain affinity for water has also been measured using vapor phases. [ 14 ] Vapor phases represent the simplest non-polar phases, because they have no interaction with the solute. [ 18 ] The hydration potential and its correlation with the appearance of amino acids on the surface of proteins was studied by Wolfenden. Aqueous and polymer phases were used in the development of a novel partitioning scale. [ 19 ] Partitioning methods nevertheless have several drawbacks. First, it is difficult to mimic the protein interior. [ 20 ] [ 21 ] In addition, the role of self-solvation makes using free amino acids very difficult. Moreover, hydrogen bonds that are lost in the transfer to organic solvents are not re-formed in the solvent, whereas they often are in the interior of a protein. [ 22 ]
Hydrophobicity scales can also be obtained by calculating the solvent-accessible surface areas for amino acid residues in the extended polypeptide chain [ 22 ] or in an alpha-helix and multiplying the surface areas by the empirical solvation parameters for the corresponding types of atoms. [ 3 ] A differential solvent-accessible surface area hydrophobicity scale, based on proteins as compacted networks near a critical point (due to self-organization by evolution), was constructed from asymptotic power-law (self-similar) behavior. [ 23 ] [ 24 ] This scale is based on a bioinformatic survey of 5526 high-resolution structures from the Protein Data Bank. The differential scale has two comparative advantages: (1) it is especially useful for treating changes in water-protein interactions that are too small to be accessible to conventional force-field calculations, and (2) for homologous structures, it can yield correlations with changes in properties from mutations in the amino acid sequences alone, without determining corresponding structural changes, either in vitro or in vivo.
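The surface-area approach described above amounts to a weighted sum over atoms. The following minimal sketch uses hypothetical atom types, accessible areas and solvation parameters; it is not a reproduction of any published scale.

```python
# Minimal sketch of a SASA-based hydrophobicity score: for each residue, sum
# (solvent-accessible area) x (empirical atomic solvation parameter) over its atoms.
# The parameters (kcal/mol/A^2) and areas (A^2) below are hypothetical placeholders.
atomic_solvation_parameter = {"C": 0.016, "N": -0.006, "O": -0.006, "S": 0.021}

def residue_score(atoms):
    """atoms: iterable of (element, accessible_area) pairs for one residue."""
    return sum(area * atomic_solvation_parameter[element] for element, area in atoms)

# hypothetical exposed areas for a leucine-like residue in an extended chain
leucine_like = [("C", 140.0), ("N", 10.0), ("O", 15.0)]
print(f"score = {residue_score(leucine_like):+.2f} kcal/mol")
```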
Reversed-phase liquid chromatography (RPLC) is the most important chromatographic method for measuring solute hydrophobicity. [ 3 ] [ 25 ] The non-polar stationary phase mimics biological membranes. The use of peptides has many advantages because, in RPLC, partitioning is not unduly affected by the terminal charges. Also, the formation of secondary structures is avoided by using short peptides. Derivatization of amino acids is necessary to ease their partition into a C18-bonded phase. Another scale, developed in 1971, used peptide retention on a hydrophilic gel. [ 26 ] 1-Butanol and pyridine were used as the mobile phase in this particular scale, and glycine was used as the reference value. Pliska and his coworkers [ 27 ] used thin-layer chromatography to relate the mobility values of free amino acids to their hydrophobicities. About a decade ago, another hydrophilicity scale was published; this scale used normal-phase liquid chromatography and showed the retention of 121 peptides on an amide-80 column. [ 28 ] The absolute values and relative rankings of hydrophobicity determined by chromatographic methods can be affected by a number of parameters, including the silica surface area and pore diameter, the choice and pH of the aqueous buffer, the temperature, and the bonding density of the stationary-phase chains. [ 3 ]
This method uses recombinant DNA technology and gives an actual measurement of protein stability. In detailed site-directed mutagenesis studies, Yutani and coworkers substituted 19 different amino acids at Trp49 of tryptophan synthase and measured the free energy of unfolding. They found that the increase in stability is directly proportional to the increase in hydrophobicity, up to a certain size limit. The main disadvantage of the site-directed mutagenesis method is that not all of the 20 naturally occurring amino acids can substitute for a single residue in a protein. Moreover, these methods are costly and are useful only for measuring protein stability. [ 3 ] [ 29 ]
The hydrophobicity scales developed by physical property methods are based on the measurement of different physical properties, such as partial molar heat capacity, transition temperature and surface tension. Physical methods are easy to use and flexible in terms of solute. The most popular such hydrophobicity scale was developed by measuring surface tension values for the 20 naturally occurring amino acids in NaCl solution. [ 30 ] The main drawback of surface tension measurements is that broken hydrogen bonds and neutralized charged groups remain at the solution-air interface. [ 3 ] [ 1 ] Another physical property method involves measuring the solvation free energy. [ 31 ] The solvation free energy is estimated as the product of the accessibility of an atom to the solvent and an atomic solvation parameter. Results indicate that the solvation free energy decreases by an average of 1 kcal/residue upon folding. [ 3 ]
Palliser and Parry have examined about 100 scales and found that they can be used to locate β-strands on the surface of proteins. [ 32 ] Hydrophobicity scales have also been used to examine the preservation of the genetic code. [ 33 ] Trinquier and coworkers observed a new ordering of the bases that better reflects the conserved character of the genetic code. [ 3 ] They proposed that the new ordering of the bases, uracil-guanine-cytosine-adenine (UGCA), better reflected the conserved character of the genetic code than the commonly used ordering UCAG. [ 3 ]
The Wimley–White whole residue hydrophobicity scales are significant for two reasons. First, they include the contributions of the peptide bonds as well as the sidechains, providing absolute values. Second, they are based on direct, experimentally determined values for transfer free energies of polypeptides.
Two whole-residue hydrophobicity scales have been measured:
The Stephen H. White website [ 34 ] provides an example of whole residue hydrophobicity scales showing the free energy of transfer ΔG(kcal/mol) from water to POPC interface and to n-octanol. [ 34 ] These two scales are then used together to make Whole residue hydropathy plots. [ 34 ] The hydropathy plot constructed using ΔG woct − ΔG wif shows favorable peaks on the absolute scale that correspond to the known TM helices. Thus, the whole residue hydropathy plots illustrate why transmembrane segments prefer a transmembrane location rather than a surface one. [ 35 ] [ 36 ] [ 37 ] [ 38 ]
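Hydropathy plots of the kind described above are usually produced by averaging a per-residue scale over a sliding window along the sequence. The sketch below is a generic implementation; the small scale dictionary and the sequence are placeholders, not the actual Wimley–White whole-residue values.

```python
# Generic sliding-window hydropathy profile (e.g. a 19-residue window is common
# for transmembrane-helix prediction). The scale below is a placeholder; a real
# analysis would substitute a published scale such as the Wimley-White values.
placeholder_scale = {"A": 0.5, "L": 1.8, "I": 1.7, "F": 1.6, "S": -0.5, "K": -1.5, "G": 0.0}

def hydropathy_profile(sequence, scale, window=19):
    values = [scale.get(residue, 0.0) for residue in sequence]
    half = window // 2
    return [sum(values[i - half:i + half + 1]) / window
            for i in range(half, len(values) - half)]

sequence = "KSGALLLIFFLAILAGSLIAK" * 2   # made-up sequence
profile = hydropathy_profile(sequence, placeholder_scale)
print(["%.2f" % v for v in profile])     # sustained positive stretches suggest TM segments
```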
Most of the existing hydrophobicity scales are derived from the properties of amino acids in their free forms or as part of a short peptide. The Bandyopadhyay-Mehler hydrophobicity scale was based on the partitioning of amino acids in the context of protein structure. Protein structure is a complex mosaic of various dielectric media generated by the arrangement of different amino acids. Hence, different parts of the protein structure most likely behave as solvents with different dielectric values. For simplicity, each protein structure was considered as an immiscible mixture of two solvents, the protein interior and the protein exterior. The local environment around each individual amino acid (termed the "micro-environment") was computed for both the protein interior and the protein exterior. The ratio gives the relative hydrophobicity scale for individual amino acids. The computation was trained on high-resolution protein crystal structures. This quantitative descriptor of the microenvironment was derived from the octanol-water partition coefficient (known as Rekker's fragmental constants), widely used for pharmacophores. The scale correlates well with existing methods based on partitioning and free-energy computations. An advantage of this scale is that it is more realistic, as it reflects the context of real protein structures. [ 9 ]
In the field of engineering , the hydrophobicity (or dewetting ability) of a flat surface (e.g., a kitchen countertop or a cooking pan) can be measured by the contact angle of a water droplet. A University of Nebraska-Lincoln team devised a computational approach that can relate the molecular hydrophobicity scale of amino-acid chains to the contact angle of a water nanodroplet. [ 39 ] The team constructed planar networks composed of unified amino-acid side chains with the native structure of a beta-sheet protein. Using molecular dynamics simulation, the team was able to measure the contact angle of a water nanodroplet on the planar networks (caHydrophobicity).
On the other hand, previous studies showed that the minimum of the excess chemical potential of a hard-sphere solute, relative to that in the bulk, exhibits a linear dependence on the cosine of the contact angle. [ 40 ] Based on the computed excess chemical potentials of the purely repulsive, methane-sized Weeks–Chandler–Andersen solute relative to that in the bulk, extrapolated values of the cosine of the contact angle are calculated (ccHydrophobicity), which can be used to quantify the hydrophobicity of amino acid side chains with complete wetting behaviors.
Hydrophosphination is the insertion of a double bond into a phosphorus - hydrogen bond. Often the hydrophosphination makes phosphorus-carbon bonds by addition of P-H bonds to carbon-carbon multiple bonds, but the reaction is probably most useful in reactions of phosphine with formaldehyde, a form of hydroxymethylation .
Like other hydrofunctionalizations, the rate and regiochemistry of the insertion reaction is influenced by the catalyst. Catalysts take many forms, but most prevalent are bases and free-radical initiators. [ 1 ] Most hydrophosphinations involve reactions of phosphine (PH3). [ 2 ]
Although most of this article concerns the addition of P-H bonds to alkenes, an important variant is the hydrophosphination of formaldehyde . Tetrakis(hydroxymethyl)phosphonium chloride (THPC) is prepared from phosphine and formaldehyde in the presence of hydrochloric acid:
PH 3 + 4 CH 2 O + HCl → [P(CH 2 OH) 4 ]Cl
It is a white water-soluble salt with applications as a precursor to fire-retardant materials [ 3 ] and as a microbiocide in commercial and industrial water systems. The hydroxymethyl groups on THPC undergo replacement reactions when THPC is treated with α,β-unsaturated nitrile, acid, amide, and epoxides. For example, base induces condensation between THPC and acrylamide with displacement of the hydroxymethyl groups. (Z = CONH 2 )
Similar reactions occur when THPC is treated with acrylic acid ; only one hydroxymethyl group is displaced, however. [ 4 ]
THPC converts to tris(hydroxymethyl)phosphine upon treatment with aqueous sodium hydroxide : [ 5 ]
[P(CH 2 OH) 4 ]Cl + NaOH → P(CH 2 OH) 3 + CH 2 O + H 2 O + NaCl
The usual application of hydrophosphination involves reactions of phosphine (PH 3 ). Typically, base catalysis allows the addition of Michael acceptors such as acrylonitrile to give tris(cyanoethyl)phosphine : [ 2 ]
PH 3 + 3 CH 2 =CHCN → P(CH 2 CH 2 CN) 3
Acid catalysis is applicable to hydrophosphination with alkenes that form stable carbocations. These alkenes include isobutylene and many analogues: [ 2 ]
Bases catalyze the addition of secondary phosphines to vinyldiphenylphosphine : [ 6 ]
Many hydrophosphination reactions are initiated by free radicals . AIBN and peroxides are typical initiators, as is ultraviolet irradiation . In this way, the commercially important tributylphosphine and trioctylphosphine are prepared in good yields from 1-butene and 1-octene , respectively. [ 2 ]
The reactions proceed by abstraction of an H atom from the phosphine precursor, producing a phosphino radical, a seven-electron species. This radical then adds to the alkene, and subsequent H-atom transfer completes the cycle. [ 7 ] Some highly efficient hydrophosphinations appear not to proceed via radicals, but alternative explanations are lacking. [ 8 ]
Metal-catalyzed hydrophosphinations are not widely used, although they have been extensively researched. Studies mainly focus on secondary and primary organophosphines (R 2 PH and RPH 2 , respectively). These substrates bind to metals, and the resulting adducts insert alkenes and alkynes into the P-H bonds via diverse mechanisms. [ 9 ] [ 10 ] [ 11 ]
Metal complexes of d 0 configuration are effective catalysts for hydrophosphinations of simple alkenes and alkynes. [ 12 ] [ 13 ] Intramolecular reactions are facile, e.g. starting with an α,ω-pentenylphosphine. The primary phosphine undergoes a σ-bond metathesis with the bis(trimethylsilyl)methylene ligand, forming the lanthanide-phosphido complex. Subsequently, the pendant terminal alkene or alkyne inserts into the Ln-P bond. Finally, protonolysis of the Ln-C bond with the starting primary phosphine releases the new phosphine and regenerates the catalyst. Given that the metal is electron-poor, the M-C bond is sufficiently reactive to be protonolyzed by the substrate primary phosphine.
Most metal catalyzed hydrophosphinations proceed via metal phosphido intermediates . Some however proceed by metal-phosphinidene intermediates, i.e. species with M=PR double bonds. One such example is the Ti-catalyzed hydrophosphination of diphenylacetylene with phenylphosphine. [ 14 ] This system involves a cationic catalyst precursor that is stabilized by the bulky 2,4,6-tri(isopropyl)phenyl- substituent on the phosphinidene and the close ionic association of methyltris(pentafluorophenyl)borate. This precursor undergoes exchange with phenylphosphine to give the titanium-phenylphosphinidene complex, which is the catalyst. The Ti=PPh species undergoes a [2+2] cycloaddition with diphenylacetylene to make the corresponding metallacyclobutene. The substrate, phenylphosphine, protonolyzes the Ti-C bond and after a proton shift regenerates the catalyst and releases the new phosphine.
Titanium-catalyzed 1,4-hydrophosphination of 1,3-dienes with diphenylphosphine has been demonstrated. [ 15 ] It is a rare example of a d 2 catalyst. In the first step, the Ti(II) precursor inserts into the P-H bond of diphenylphosphine (Ph 2 PH).
Late transition metal hydrophosphination catalysts, i.e. those reliant on the nickel-triad and neighboring elements, generally require alkenes and alkynes with electron withdrawing substituents. A strong base is required as a cocatalyst. [ 10 ]
Some late metal hydrophosphination catalysts proceed via oxidative addition of a P-H bond. For example, a Pt(0) catalyst undergoes oxidative addition of a secondary phosphine to form the corresponding hydrido Pt(II) phosphido complex. These systems catalyze hydrophosphination of acrylonitrile, although this reaction can be achieved without metal catalysts. The key P-C bond-forming step occurs through an outer-sphere, Michael-type addition. [ 10 ]
The usual mechanism of hydrophosphination for late metal catalysts involves insertion of the alkene into the metal-phosphorus bond. Insertion into the metal-hydrogen bond is also possible. In Glueck's system, the product phosphine is produced through reductive elimination of a P-C bond rather than a P-H bond. [ 16 ] [ 17 ] The Ni(0) catalyst involves oxidative addition of a P-H bond to the metal, followed by insertion of the alkene into the M-H bond.
Calcium-catalysed intermolecular hydrophosphination is known, using a β-diketiminato complex. [ 18 ]
Utilizing phosphorus(V) precursors, hydrophosphorylation entails the insertion of alkenes and alkynes into the P-H bonds of secondary phosphine oxides: [ 19 ]
The reaction can be effected both using metal catalysts or free-radical initiators. | https://en.wikipedia.org/wiki/Hydrophosphination |
Hydropneumatic devices (or hydro-pneumatic devices ) are systems that operate using water and gas. The devices are used in various applications.
A hydropneumatic device is a tool that functions by using water and gas. [ 1 ] Hydropneumatic refers to the pneumatic (gas) and hydraulic (water) components needed for operation of the devices.
Hydropneumatic accumulators and pulsation dampeners are devices which prevent a shock from forming, rather than absorbing, alleviating, arresting, attenuating, or suppressing a shock that already exists; in other words, these devices prevent the creation of a shock wave in the first place. They include pulsation dampeners, hydropneumatic accumulators, water hammer preventers, water hammer arrestors, and other devices.
Hydropneumatic water hammer preventers are chambers of sufficient volume to extend the time over which a given flow may be accelerated or decelerated, avoiding a sudden, large change in pressure. See also expansion tank . When pressure (shock) waves travel through an incompressible fluid in a piping system, especially at high flow velocity, there is a high chance of water hammer. To help prevent a swing check valve from slamming and causing water hammer, a spring-assisted non-slam check valve is installed. Rather than relying on flow or gravity to close, the non-slam design prevents a sudden velocity decrease and reverse flow. [ 2 ]
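For orientation, the magnitude of the pressure surge that such devices are intended to avoid is commonly estimated with the Joukowsky relation, ΔP ≈ ρ·c·Δv, where ρ is the liquid density, c the pressure-wave speed and Δv the sudden change in flow velocity. This relation and the numbers below are supplied here as a standard engineering estimate, not taken from the source article.

```python
# Illustrative Joukowsky estimate of a water-hammer pressure surge: dp = rho * c * dv.
# All values are assumed, order-of-magnitude inputs.
rho = 1000.0      # water density, kg/m^3
c = 1400.0        # pressure-wave speed in a water-filled pipe, m/s (about 1.4 km/s)
delta_v = 2.0     # hypothetical sudden change in flow velocity, m/s

delta_p = rho * c * delta_v                  # Pa
print(f"surge ~ {delta_p / 1e5:.0f} bar")    # ~28 bar for an abrupt 2 m/s stop
```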
The hydropneumatic water hammer preventer chamber is generally adapted to contain a separator member which prevents the escape of a pre-filled compressed inert gas . They may be
Variations on the design include
Hydropneumatic pump controllers provide a
The controllers are pressure cylinders containing a movable separator member between a gas and a liquid, said moveable member causing the actuation of directional control valve or valves. The controllers are used in a circuit after a pump that is followed by a valved-side branch, and beyond a check valve, so that this device can only discharge liquid volume by a pressure fall of the system.
Variations on the design include
Hydropneumatic pulsation filters provide a means of reducing the amplitude of pressure changes, which propagate at a velocity on the order of 1.4 km/s in water. All are used in industry.
A hydropneumatic pulsation filter is a pressure container with separate inlet and outlet, connectable to a pipe system so that all pressure changes must attempt to pass through the chamber. The entry and exit of the chamber have diameters, relative to the chamber diameter, that provide a high discharge coefficient, and no reflective surface is in close proximity. There is no sudden change in the cross-sectional area of the flow path that would reflect a pressure wave, i.e. no orifice plate(s). Variations include combination "dual purpose" devices addressing "acceleration head reduction" by means of a gas containment.
The devices have applications by frequency response
Hydropneumatic acceleration head reducers minimize the mass of liquid that has to be accelerated when the flow velocity changes. Within a piping system, pressure rises whenever an additional volume of fluid must be suddenly accommodated. This acceleration head needs to be reduced to prevent damage to pump components and excessive noise. [ 3 ] These devices are typically mountable in any orientation, such that the device can be connected directly to the suction check valve beneath the pump or directly to any vertical or horizontal discharge check valve, minimizing the length of any liquid column that will experience a velocity change. The pump connection is separate from the system connection, so that no acceleration head changes occur due to reciprocation within one port.
Applications for hydropneumatic acceleration head reducers include
Variations on the design include
Some manufacturers of pulsation dampeners provide items which do not in fact dampen pulsations. The compressibility of a gas, often nitrogen because it is inert at normal temperatures, stores any sudden volume change. Storing a sudden volume change allows the volume to change against a soft gas cushion, without the need to accelerate all the existing liquid in the system out of the way of the new volume coming from a pump. Therefore, because all the liquid in the system does not have to be suddenly accelerated, the cushion prevents "acceleration head" (force) from having to be generated. The pressure pulse is accordingly not generated in the first place, so it is not dampened at all. The gas cushion simply allows the volume change to be stored. What such manufacturers are providing are liquid accumulators, not items which remove energy.
Gas cushion (spring) pre-filled accumulators of liquids are called hydropneumatic accumulators. "Hydro" because a liquid (like water) is involved. "Pneumatic" because a gas (like air) is involved. "Accumulator" because the purpose is to store or accumulate liquid volume by easy compression of the gas. These devices are typified by having only one liquid connection that goes to a "T" on the system.
There are other forms of accumulator used for fluid power hydraulic purposes. For example, coil spring plus sealed piston; though these are less popular. Therefore, a hydraulic accumulator is not necessarily a hydropneumatic accumulator. | https://en.wikipedia.org/wiki/Hydropneumatic_device |
The Hydropower Sustainability Assessment Protocol (HSAP) is a global framework for assessing the sustainability of hydropower projects. The Protocol defines good and best practice at each stage of the life-cycle of a hydropower project across twenty-four environmental, social, technical and economic topics. [ 1 ]
The Protocol was developed between 2007 and 2010 by a multi-stakeholder forum made up of representatives from industry, civil society , donors, developing country governments and financial institutions . [ 2 ] The final version was published in 2010 after a trial period in sixteen countries. The Protocol was updated in 2018 to include good and best practice in climate change resilience and mitigation . [ 3 ]
After the Protocol's launch, the governance entity of the Protocol approved the development of two additional tools derived from the HSAP, the Hydropower ESG Gap Analysis Tool (HESG Tool) to identify gaps against basic good practice and the Hydropower Sustainability Guidelines on Good International Industry Practice (HGIIP Guidelines), a reference document presenting definitions relating to good and best industry practice. [ 4 ] [ 5 ]
The construction of a dam , power plant and reservoir creates social and physical changes in the surrounding area. As a result, hydropower projects can have both positive and negative environmental and social impacts.
The sustainability of the hydropower sector was the subject of a report by the World Commission on Dams in 2000. [ 6 ] The HSAP was developed in response to the Commission's recommendations, as well as standards set out in the Equator Principles , World Bank Safeguard Policies, IFC Performance Standards and sustainability guidelines developed by the International Hydropower Association (IHA). [ 2 ]
The Protocol is used by different hydropower stakeholders for different reasons:
Crédit Agricole , Société Générale , Standard Chartered , Citi , and UBS now refer to the Protocol in their sector guidance. [ 7 ] [ 8 ] [ 9 ] [ 10 ] [ 11 ]
The World Bank has analysed the value of the Protocol for use by their clients, concluding that it is a useful tool for guiding the development of sustainable hydropower in developing countries. [ 3 ]
The International Institute for Environment and Development has reviewed social and environmental safeguards for large dam projects, concluding that the Protocol currently offers the best available ‘measuring stick’ for the World Commission on Dams provisions. [ 12 ]
A Protocol assessment takes place over a one-week period at the project site and provides a rapid sustainability check.
A Protocol assessment does not replace an environmental and social impact assessment (ESIA), which takes place over a much longer period of time as a mandatory regulatory requirement. A Protocol assessment will, amongst other things, check the scope and quality of the ESIA which has been done.
To ensure high quality, all commercial use of the Protocol is carried out by accredited assessors. These assessors have significant experience of the hydropower sector or relevant sustainability issues, and have passed a rigorous accreditation course. [ 13 ]
The Protocol can be used at any stage of hydropower development, from the early planning stages through to operation. Each project stage is assessed using a different tool:
The Protocol covers a range of topics that need to be understood to assess the overall sustainability of a hydropower project.
The Protocol also includes ‘cross-cutting issues’ such as climate change , gender , and human rights , which feature in multiple topics.
For each sustainability topic, performance is assessed against a range of criteria at two levels: basic good practice and proven best practice.
| Criteria | Basic Good Practice | Proven Best Practice |
|---|---|---|
| Assessment | Adequate and effective assessment of the issues | The assessment takes a broad view of the issues. No significant opportunities for improvement. |
| Management | Adequate and effective management plans and processes | Plans can anticipate and respond to new issues and opportunities. No significant opportunities for improvement. |
| Stakeholder Engagement | Adequate and effective stakeholder engagement | Project is inclusive and participatory. Feedback is provided. Stakeholders involved in decision-making. No significant opportunities for improvement. |
| Stakeholder Support | General support amongst directly affected stakeholder groups | Support of nearly all directly affected stakeholder groups. In cases, formal agreements with consent of directly affected stakeholder groups. |
| Conformance and Compliance | No significant non-compliances and non-conformances | No non-compliances and non-conformances. |
| Outcomes | Impacts are avoided, minimised, and mitigated, with no significant gaps | Impacts are also compensated, and the project enhances pre-project conditions. |
A multi-stakeholder forum developed the Protocol between 2008 and 2010. [ 2 ]
The following key groups were represented: social and environmental NGOs, governments of developed and developing countries, financial institutions, development banks, and the hydropower industry.
The forum jointly reviewed, enhanced and built consensus on what a sustainable hydropower project should look like.
Policies taken into account included the World Commission on Dams ’ Criteria and Guidelines, World Bank Safeguard Policies, IFC Performance Standards, and the Equator Principles .
A draft of the Protocol was released in 2009, which was trialled in 16 countries across six continents and subjected to further consultation involving 1,933 individual stakeholders from 28 countries.
The final version was produced in 2010. [ 40 ]
The diversity of the forum was important to ensure that the Protocol became globally applicable and universally accepted. Diversity also ensured that the multiple perspectives and stakeholder interests surrounding a hydropower project were incorporated into the document.
The Protocol is governed by a multi-stakeholder body, the Hydropower Sustainability Assessment Council (HSA Council). [ 41 ]
The mission of the Council is to ensure multi-stakeholder input and confidence in the Protocol's content and application.
All individuals and organisations engaged in hydropower are welcome and encouraged to join the Council. This approach to governance ensures that all stakeholder voices are heard in the shaping of the use of the Protocol and its future development.
The Council consists of a series of Chambers, each representing a different segment of hydropower stakeholders. Each chamber elects a chair and alternate chair for a two-year term. The chamber chairs come together regularly to form the decision-making Protocol Governance Committee. [ 42 ] | https://en.wikipedia.org/wiki/Hydropower_Sustainability_Assessment_Protocol |
Hydroprocessing is a collective term for the catalytic processes of hydrocracking and hydrotreating . [ 1 ] These processes remove sulfur , oxygen , nitrogen and metals from crude oil during fuel refining, enabling lower sulfur levels in fuels. [ 2 ] [ 3 ] [ 4 ] [ 5 ]
| https://en.wikipedia.org/wiki/Hydroprocessing
Hydroquinone , also known as benzene-1,4-diol or quinol , is an aromatic organic compound that is a type of phenol , a derivative of benzene , having the chemical formula C 6 H 4 (OH) 2 . It has two hydroxyl groups bonded to a benzene ring in a para position. It is a white granular solid . Substituted derivatives of this parent compound are also referred to as hydroquinones. The name "hydroquinone" was coined by Friedrich Wöhler in 1843. [ 7 ]
In 2022, it was the 268th most commonly prescribed medication in the United States, with more than 900,000 prescriptions. [ 8 ] [ 9 ]
Hydroquinone is produced industrially in two main ways. [ 10 ]
Other, less common methods include:
The latter three methods are generally less atom-economical than oxidation with hydrogen peroxide, and their commercial practice in China produced serious pollution in 2022. [ 20 ]
The reactivity of hydroquinone's hydroxyl groups resembles that of other phenols , being weakly acidic. The resulting conjugate base easily undergoes O -alkylation to give mono- and diethers . Similarly, hydroquinone is highly susceptible to ring substitution via Friedel–Crafts alkylation. This reaction is used for the production of popular antioxidants such as 2- tert -butyl-4-methoxyphenol ( BHA ). The useful dye quinizarin is produced by diacylation of hydroquinone with phthalic anhydride . [ 10 ]
Hydroquinone can be reversibly oxidised under mild conditions to give benzoquinone . Naturally occurring hydroquinone derivatives, such as coenzyme Q , exhibit similar reactivity, wherein one hydroxyl group is exchanged for an amino group. Given the conditional reversibility and relative ubiquity of reagents, oxidation reactions of hydroquinones and hydroquinone derivatives are of significant commercial use, often used at an industrial scale.
When colorless hydroquinone and benzoquinone (bright yellow in solid form) are cocrystallized in a 1:1 ratio, a dark-green crystalline charge-transfer complex ( melting point 171 °C), known as quinhydrone ( C 6 H 6 O 2 ·C 6 H 4 O 2 ), is formed. This complex dissolves in hot water, where the two component molecules dissociate in solution. [ 21 ]
An important reaction involves the conversion of hydroquinone to its mono- and di-amine derivatives. One such derivative, methylaminophenol , used in photography, is produced according to the stoichiometry: [ 10 ]
C 6 H 4 (OH) 2 + CH 3 NH 2 → CH 3 N(H)C 6 H 4 OH + H 2 O
Diamines, used in the rubber industry as antiozonant agents, are formed via a similar pathway using aniline:
C 6 H 4 (OH) 2 + 2 C 6 H 5 NH 2 → C 6 H 4 (N(H)C 6 H 5 ) 2 + 2 H 2 O
Hydroquinone has a variety of uses principally associated with its action as a reducing agent that is soluble in water. It is a major component in most black and white photographic developers for film and paper where, with the compound metol , it reduces silver halides to elemental silver .
There are various other uses associated with its reducing power . As a polymerisation inhibitor , exploiting its antioxidant properties, hydroquinone prevents polymerization of acrylic acid , methyl methacrylate , cyanoacrylate , and other monomers that are susceptible to radical-initiated polymerization . By acting as a free radical scavenger, hydroquinone serves to prolong the shelflife of light-sensitive resins such as preceramic polymers . [ 22 ]
Hydroquinone can lose a hydrogen cation from both hydroxyl groups to form a diphenolate ion. The disodium diphenolate salt of hydroquinone is used as an alternating comonomer unit in the production of the polymer PEEK .
Hydroquinone is used as a topical application in skin whitening to reduce the color of skin. It does not have the same predisposition to cause dermatitis as metol does. This is a prescription-only ingredient in some countries, including the member states of the European Union under Directives 76/768/EEC:1976. [ 23 ] [ 24 ]
In 2006, the United States Food and Drug Administration revoked its previous approval of hydroquinone and proposed a ban on all over-the-counter preparations. [ 25 ] The FDA officially banned hydroquinone in 2020 as part of a larger reform of the over-the-counter drug review process. [ 26 ] The FDA stated that hydroquinone cannot be ruled out as a potential carcinogen . [ 27 ] This conclusion was reached based on the extent of absorption in humans and the incidence of neoplasms in rats in several studies where adult rats were found to have increased rates of tumours, including thyroid follicular cell hyperplasias , anisokaryosis (variation in nuclei sizes), mononuclear cell leukemia, hepatocellular adenomas and renal tubule cell adenomas . The Campaign for Safe Cosmetics has also highlighted concerns. [ 28 ]
Numerous studies have revealed that hydroquinone, if taken orally, can cause exogenous ochronosis , a disfiguring disease in which blue-black pigments are deposited onto the skin; however, skin preparations containing the ingredient are administered topically. In 1982, the FDA had classified hydroquinone as a safe product, generally recognized as safe and effective (GRASE); however, additional studies under the National Toxicology Program (NTP) were suggested in order to determine whether there is a risk to humans from the use of hydroquinone. [ 25 ] [ 27 ] [ 29 ] The NTP evaluation showed some evidence of long-term carcinogenic and genotoxic effects. [ 30 ]
While hydroquinone remains widely prescribed for treatment of hyperpigmentation , questions raised about its safety profile by regulatory agencies in the EU, Japan, and USA encourage the search for other agents with comparable efficacy. [ 31 ] Several such agents are already available or under research, [ 32 ] including azelaic acid , [ 33 ] kojic acid , retinoids, cysteamine, [ 34 ] topical steroids, glycolic acid , and other substances. One of these, 4-butylresorcinol , has been proved to be more effective at treating melanin-related skin disorders by a wide margin, as well as safe enough to be made available over the counter. [ 35 ]
In the anthraquinone process , substituted hydroquinones, typically anthrahydroquinone , are used to produce hydrogen peroxide , which forms spontaneously on reaction with oxygen. The type of substituted hydroquinone is selected depending on reactivity and recyclability.
Hydroquinones are one of the two primary reagents in the defensive glands of bombardier beetles , along with hydrogen peroxide (and perhaps other compounds, depending on the species), which collect in a reservoir. The reservoir opens through a muscle-controlled valve onto a thick-walled reaction chamber. This chamber is lined with cells that secrete catalases and peroxidases . When the contents of the reservoir are forced into the reaction chamber, the catalases and peroxidases rapidly break down the hydrogen peroxide and catalyze the oxidation of the hydroquinones into p -quinones . These reactions release free oxygen and generate enough heat to bring the mixture to the boiling point and vaporize about a fifth of it, producing a hot spray from the beetle's abdomen . [ 36 ]
Hydroquinone is thought to be the active toxin in Agaricus hondensis mushrooms. [ 37 ]
Hydroquinone has been shown to be one of the chemical constituents of the natural product propolis . [ 38 ]
It is also one of the chemical compounds found in castoreum . This compound is gathered from the beaver 's castor sacs. [ 39 ] | https://en.wikipedia.org/wiki/Hydroquinone |
Hydroseeding (or hydraulic mulch seeding , hydro-mulching , hydraseeding ) is a planting process that uses a slurry of seed and mulch . It is often used as an erosion control technique on construction sites, as an alternative to the traditional process of broadcasting or sowing dry seed. [ 1 ]
The hydroseeding slurry is transported in a tank, either truck-mounted or trailer-mounted and sprayed over prepared ground. Helicopters have been used to cover larger areas. Aircraft application may also be used on burned wilderness areas after a fire , and in such uses may contain only soil stabilizer to avoid introducing non-native plant species. Hydroseeding is an alternative to the traditional process of broadcasting or sowing dry seed. A study conducted along the lower Colorado River in Arizona reported that hydroseeding could be used to restore riparian vegetation in cleared land. [ 2 ]
The slurry often has other ingredients including fertilizer , tackifying agents , fiber mulch, and green dye. [ 3 ]
The technique originated in the United States in the 1940s, when Maurice Mandell of the Connecticut Highway Department discovered that by mixing seed and water together, the resulting mixture could be spread and sprayed over the steep and otherwise inaccessible slopes of the Connecticut expressways.
If planting a relatively large area, hydroseeding can be completed in a very short period of time. It can be very effective for hillsides and sloping lawns to help with erosion control and quick planting. Hydroseeding will typically cost less than planting with sod, but more than broadcast seeding. Results are often quick with high germination rates producing grass growth in about a week and mowing maintenance beginning around 3 to 4 weeks from the date of application. Fiber mulch accelerates the growing process by maintaining moisture around the seeds thereby increasing the rate of germination. [ 4 ]
Seeds are applied to tilled soil using a high pressure hose. The seeds are likely mixed into a water-based spray that often contains mulch, fertilizer, lime, or other substances that promote seed growth. [ 5 ] | https://en.wikipedia.org/wiki/Hydroseeding |
Hydrosilylation , also called catalytic hydrosilation , describes the addition of Si-H bonds across unsaturated bonds . [ 1 ] Ordinarily the reaction is conducted catalytically and usually the substrates are unsaturated organic compounds . Alkenes and alkynes give alkyl and vinyl silanes ; aldehydes and ketones give silyl ethers, while esters provide alkyl silyl mixed acetals. Hydrosilylation has been called the "most important application of platinum in homogeneous catalysis." [ 2 ]
Hydrosilylation of alkenes represents a commercially important method for preparing organosilicon compounds . The process is mechanistically similar to the hydrogenation of alkenes. In fact, similar catalysts are sometimes employed for the two catalytic processes.
The prevalent mechanism, called the Chalk-Harrod mechanism , assumes an intermediate metal complex that contains a hydride , a silyl ligand (R 3 Si), and the alkene substrate. Oxidative addition proceeds by the intermediacy of a sigma-complex , wherein the Si-H bond is not fully broken. Variations of the Chalk-Harrod mechanism exist. Some cases involve insertion of alkene into M-Si bond followed by reductive elimination, the opposite of the sequence in the Chalk-Harrod mechanism. In certain cases, hydrosilylation results in vinyl or allylic silanes resulting from beta-hydride elimination . [ 3 ]
Hydrosilylation of alkenes usually proceeds via anti-Markovnikov addition , i.e., silicon is placed at the terminal carbon when hydrosilylating a terminal alkene ; [ 1 ] however, in recent years, Markovnikov addition has become a growing field of research. [ 4 ]
Alkynes also undergo hydrosilylation, e.g., the addition of triethylsilane to diphenylacetylene : [ 5 ]
Using chiral phosphines as spectator ligands , catalysts have been developed for catalytic asymmetric hydrosilylation. A well-studied reaction is the addition of trichlorosilane to styrene to give 1-phenyl-1-(trichlorosilyl)ethane:
HSiCl 3 + C 6 H 5 CH=CH 2 → C 6 H 5 CH(CH 3 )SiCl 3
Nearly perfect enantioselectivities (ee's) can be achieved using palladium catalysts supported by binaphthyl-substituted monophosphine ligands. [ 6 ]
Silicon wafers can be etched in hydrofluoric acid (HF) to remove the native oxide and form a hydrogen-terminated silicon surface . The hydrogen-terminated surfaces undergo hydrosilation with unsaturated compounds (such as terminal alkenes and alkynes), to form a stable monolayer on the surface. For example:
The hydrosilylation reaction can be initiated with UV light at room temperature or with heat (typical reaction temperature 120-200 °C), under moisture- and oxygen-free conditions. [ 7 ] The resulting monolayer, which is stable and inert, inhibits oxidation of the base silicon layer, relevant to various device applications. [ 8 ]
Before the introduction of platinum catalysts by Speier, hydrosilylation was not widely practiced. A peroxide-catalyzed process was reported in the academic literature in 1947, [ 9 ] but the introduction of Speier's catalyst ( H 2 PtCl 6 ) was a major breakthrough.
Karstedt's catalyst was later introduced. It is a lipophilic complex that is soluble in the organic substrates of industrial interest. [ 10 ] Complexes and compounds that catalyze hydrogenation are often effective catalysts for hydrosilylation, e.g. Wilkinson's catalyst .
| https://en.wikipedia.org/wiki/Hydrosilylation
The hydrosphere (from Ancient Greek ὕδωρ ( húdōr ) ' water ' and σφαῖρα ( sphaîra ) ' sphere ' ) [ 1 ] [ 2 ] is the combined mass of water found on, under, and above the surface of a planet , minor planet , or natural satellite . Although Earth 's hydrosphere has been around for about 4 billion years, [ 3 ] [ 4 ] it continues to change in shape. This is caused by seafloor spreading and continental drift , which rearranges the land and ocean. [ 5 ]
It has been estimated that there are 1.386 billion cubic kilometres (333 million cubic miles) of water on Earth. [ 6 ] [ 7 ] [ 8 ] This includes water in gaseous, liquid and frozen forms as soil moisture, groundwater and permafrost in the Earth's crust (to a depth of 2 km); oceans and seas , lakes , rivers and streams , wetlands , glaciers , ice and snow cover on Earth's surface; vapour, droplets and crystals in the air; and part of living plants, animals and unicellular organisms of the biosphere. Saltwater accounts for 97.5% of this amount, whereas fresh water accounts for only 2.5%. Of this fresh water, 68.9% is in the form of ice and permanent snow cover in the Arctic, the Antarctic and mountain glaciers ; 30.8% is in the form of fresh groundwater; and only 0.3% of the fresh water on Earth is in easily accessible lakes, reservoirs and river systems. [ 9 ]
The total mass of Earth's hydrosphere is about 1.4 × 10 18 tonnes , which is about 0.023% of Earth's total mass. At any given time, about 2 × 10 13 tonnes of this is in the form of water vapor in the Earth's atmosphere (for practical purposes, 1 cubic metre of water weighs 1 tonne). Approximately 71% of Earth's surface, an area of some 361 million square kilometres (139.5 million square miles), is covered by ocean . The average salinity of Earth's oceans is about 35 grams of salt per kilogram of sea water (3.5%). [ 10 ]
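The figures above can be cross-checked with simple arithmetic. The following is a minimal Python sketch; the Earth mass of roughly 5.97 × 10^21 tonnes is an assumed standard reference value, not a figure given in this article.

# Rough consistency check of the hydrosphere figures quoted above.
EARTH_MASS_T = 5.97e21             # tonnes (assumed standard value)
HYDROSPHERE_MASS_T = 1.4e18        # tonnes (from the text)
VAPOUR_MASS_T = 2e13               # tonnes of atmospheric water vapour (from the text)
TOTAL_WATER_KM3 = 1.386e9          # km^3 of water on Earth (from the text)
FRESH_FRACTION = 0.025             # 2.5 % of all water is fresh
ACCESSIBLE_FRESH_FRACTION = 0.003  # 0.3 % of fresh water is easily accessible

print(f"hydrosphere / Earth mass   : {HYDROSPHERE_MASS_T / EARTH_MASS_T:.5%}")    # ~0.023 %
print(f"vapour share of hydrosphere: {VAPOUR_MASS_T / HYDROSPHERE_MASS_T:.6%}")
fresh_km3 = TOTAL_WATER_KM3 * FRESH_FRACTION
print(f"fresh water: {fresh_km3:,.0f} km^3, easily accessible: {fresh_km3 * ACCESSIBLE_FRESH_FRACTION:,.0f} km^3")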
According to Merriam-Webster, the word hydrosphere was brought into English in 1887, translating the German term hydrosphäre , introduced by Eduard Suess . [ 11 ]
The water cycle refers to the transfer of water from one state or reservoir to another. Reservoirs include atmospheric moisture (snow, rain and clouds), streams, oceans, rivers, lakes, groundwater , subterranean aquifers , polar ice caps and saturated soil. Solar energy , in the form of heat and light ( insolation ), and gravity cause the transfer from one state to another over periods from hours to thousands of years. Most evaporation comes from the oceans and is returned to the earth as snow or rain. [ 12 ] : 27 Sublimation refers to evaporation from snow and ice. Transpiration refers to the expiration of water through the minute pores or stomata of trees. Evapotranspiration is the term used by hydrologists in reference to the three processes together, transpiration, sublimation and evaporation. [ 12 ]
Marq de Villiers has described the hydrosphere as a closed system in which water exists. The hydrosphere is intricate, complex, interdependent, all-pervading, stable, and "seems purpose-built for regulating life." [ 12 ] : 26 De Villiers claimed that, "On earth, the total amount of water has almost certainly not changed since geological times: what we had then we still have. Water can be polluted, abused, and misused but it is neither created nor destroyed, it only migrates. There is no evidence that water vapor escapes into space." [ 12 ] : 26
Every year the turnover of water on Earth involves 577,000 km 3 of water. This is water that evaporates from the oceanic surface (502,800 km 3 ) and from land (74,200 km 3 ). The same amount of water falls as atmospheric precipitation, 458,000 km 3 on the ocean and 119,000 km 3 on land. The difference between precipitation and evaporation from the land surface (119,000 − 74,200 = 44,800 km 3 /year) represents the total runoff of the Earth's rivers (42,700 km 3 /year) and direct groundwater runoff to the ocean (2100 km 3 /year). These are the principal sources of fresh water to support life necessities and man's economic activities. [ 9 ]
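The balance described above can be verified directly; the short Python sketch below simply restates the paragraph's figures and checks that they are consistent.

# Annual global water balance (all values in km^3/year, taken from the text).
evap_ocean, evap_land = 502_800, 74_200
precip_ocean, precip_land = 458_000, 119_000

total_evaporation = evap_ocean + evap_land          # 577,000
total_precipitation = precip_ocean + precip_land    # 577,000
land_surplus = precip_land - evap_land               # 44,800: returned to the ocean as runoff
river_runoff, groundwater_runoff = 42_700, 2_100

assert total_evaporation == total_precipitation == 577_000
assert land_surplus == river_runoff + groundwater_runoff == 44_800
print(f"land surplus returned to the ocean: {land_surplus:,} km^3/year")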
Water is a basic necessity of life. Since two thirds of the Earth is covered by water, the Earth is also called the blue planet and the watery planet. [ notes 1 ] The hydrosphere plays an important role in the existence of the atmosphere in its present form. Oceans are important in this regard. When the Earth was formed it had only a very thin atmosphere rich in hydrogen and helium similar to the present atmosphere of Mercury. Later the gases hydrogen and helium were expelled from the atmosphere. The gases and water vapor released as the Earth cooled became its present atmosphere. Other gases and water vapor released by volcanoes also entered the atmosphere. As the Earth cooled the water vapor in the atmosphere condensed and fell as rain. The atmosphere cooled further as atmospheric carbon dioxide dissolved into the rain water. In turn, this further caused water vapor to condense and fall as rain. This rain water filled the depressions on the Earth's surface and formed the oceans. It is estimated that this occurred about 4000 million years ago. The first life forms began in the oceans. These organisms did not breathe oxygen. Later, when cyanobacteria evolved, the process of conversion of carbon dioxide into food and oxygen began. As a result, Earth's atmosphere has a distinctly different composition from that of other planets and allowed for life to evolve on Earth .
Human activity has had an impact on the water cycle. Infrastructure, such as dams, has a clear, direct impact on the water cycle by blocking and redirecting water pathways. Human-caused pollution has changed the biogeochemical cycles of some water systems, and climate change has significantly altered weather patterns. [ 13 ] Water withdrawals have increased exponentially because of agriculture, state and domestic use, and infrastructure. [ 14 ]
According to Igor A. Shiklomanov , it takes 2500 years for the complete recharge and replenishment of oceanic waters, 10,000 years for permafrost and ice, 1500 years for deep groundwater and mountainous glaciers, 17 years in lakes, and 16 days in rivers. [ 9 ]
"Specific water availability is the residual (after use) per capita quantity of fresh water." [ 9 ] Fresh water resources are unevenly distributed in terms of space and time and can go from floods to water shortages within months in the same area. In 1998, 76% of the total population had a specific water availability of less than 5.0 thousand m 3 per year per capita. Already by 1998, 35% of the global population suffered "very low or catastrophically low water supplies," and Shiklomanov predicted that the situation would deteriorate in the twenty-first century with "most of the Earth's population living under the conditions of low or catastrophically low water supply" by 2025. Only 2.5% of the water in the hydrosphere is fresh water and only 0.25% of that water is accessible for our use.
The activities of modern humans have drastic effects on the hydrosphere. For instance, water diversion, human development, and pollution all affect the hydrosphere and natural processes within. Humans are withdrawing water from aquifers and diverting rivers at an unprecedented rate. The Ogallala Aquifer is used for agriculture in the United States; if the aquifer goes dry, more than $20 billion worth of food and fiber will vanish from the world's markets. [ 15 ] The aquifer is being depleted so much faster than it is replenished that, eventually, the aquifer will run dry. Additionally, only one third of rivers are free-flowing due to the extensive use of dams, levees, hydropower , and habitat degradation. [ 16 ] Excessive water use has also caused intermittent streams to become more dry, which is dangerous because they are extremely important for water purification and habitat. [ 17 ] Other ways humans impact the hydrosphere include eutrophication , acid rain , and ocean acidification . Humans also rely on the health of the hydrosphere. It is used for water supply, navigation, fishing, agriculture, energy, and recreation. [ 18 ] | https://en.wikipedia.org/wiki/Hydrosphere |
When generating hydropower , the head is the distance that a given water source has to fall before the point where power is generated. Ultimately the force responsible for hydropower is gravity , so a hydroelectricity plant [ 1 ] with a tall/high head can produce more power than a similar plant with a short/low head. In short, for a given water flow, a larger head will be converted into greater kinetic energy . That energy is then harnessed by a water wheel or water turbine to create usable hydropower.
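As an illustration of why a taller head yields more power, the standard hydropower relation P = η ρ g Q H (a routine engineering formula, not stated in the text) can be evaluated for two otherwise identical plants. The flow rate and efficiency below are arbitrary example values, not data from this article.

def hydro_power_mw(head_m: float, flow_m3_s: float, efficiency: float = 0.9) -> float:
    """Electrical power in MW from the standard relation P = eta * rho * g * Q * H."""
    rho, g = 1000.0, 9.81          # water density (kg/m^3) and gravity (m/s^2)
    return efficiency * rho * g * flow_m3_s * head_m / 1e6

# Same flow (50 m^3/s), different heads: power scales linearly with head.
print(hydro_power_mw(head_m=20, flow_m3_s=50))    # ~8.8 MW (low head)
print(hydro_power_mw(head_m=200, flow_m3_s=50))   # ~88 MW (high head)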
Hydrostatic head is also used as a measure of the waterproofing of a fabric, commonly in clothing and equipment used for outdoor recreation . It is measured as a length (typically millimetres), representing the maximum height of a vertical column of water that could be placed on top of the fabric before water started seeping through the weave. Thus a fabric with a hydrostatic head rating of 5000 mm could hold back a column of water five metres high, but no more. [ 2 ] | https://en.wikipedia.org/wiki/Hydrostatic_head |
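A hydrostatic head rating can be converted into a pressure using p = ρ g h; the conversion below is standard physics rather than part of the source, and the 5000 mm figure is simply the example rating mentioned above.

def head_to_pressure_kpa(head_mm: float) -> float:
    """Pressure exerted by a water column of the given height (p = rho * g * h)."""
    rho, g = 1000.0, 9.81                           # kg/m^3, m/s^2
    return rho * g * (head_mm / 1000.0) / 1000.0    # convert Pa to kPa

print(head_to_pressure_kpa(5000))   # ~49 kPa: the pressure a 5000 mm rated fabric withstands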
In continuum mechanics , hydrostatic stress , also known as isotropic stress or volumetric stress , [ 1 ] is a component of stress which contains uniaxial stresses , but not shear stresses . [ 2 ] A special case of hydrostatic stress is isotropic compressive stress, which changes the volume of a body but not its shape. [ 1 ] Pure hydrostatic stress can be experienced by a point in a fluid such as water. It is often used interchangeably with "mechanical pressure " and is also known as confining stress, particularly in the field of geomechanics . [ citation needed ]
Hydrostatic stress is equivalent to the average of the uniaxial stresses along three orthogonal axes, so it is one third of the first invariant of the stress tensor (i.e. the trace of the stress tensor): [ 2 ]
σ h = I 1 3 = 1 3 tr ( σ ) {\displaystyle \sigma _{h}={\frac {I_{1}}{3}}={\frac {1}{3}}\operatorname {tr} ({\boldsymbol {\sigma }})}
For example, in Cartesian coordinates (x, y, z) the hydrostatic stress is simply: σ h = σ x x + σ y y + σ z z 3 {\displaystyle \sigma _{h}={\frac {\sigma _{xx}+\sigma _{yy}+\sigma _{zz}}{3}}}
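A short numerical illustration of the definition above, using an arbitrary example stress tensor (the values are illustrative only):

import numpy as np

# Arbitrary example Cauchy stress tensor (units of MPa).
sigma = np.array([[ 50.0, 10.0,  0.0],
                  [ 10.0, 30.0,  5.0],
                  [  0.0,  5.0, -20.0]])

# Hydrostatic (volumetric) stress: one third of the trace, i.e. the mean normal stress.
sigma_h = np.trace(sigma) / 3.0
print(sigma_h)                 # (50 + 30 - 20) / 3 = 20 MPa

# The deviatoric remainder carries the shear (shape-changing) contribution and is trace-free.
deviatoric = sigma - sigma_h * np.eye(3)
print(np.trace(deviatoric))    # ~0 by construction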
In the particular case of an incompressible fluid ,
the thermodynamic pressure coincides with the mechanical pressure (i.e. the opposite of the hydrostatic stress): p = − σ h = − 1 3 tr ( σ ) {\displaystyle p=-\sigma _{h}=-{\frac {1}{3}}\operatorname {tr} ({\boldsymbol {\sigma }})}
In the general case of a compressible fluid , the thermodynamic pressure p {\displaystyle p} is no longer proportional to the isotropic stress term (the mechanical pressure), since there is an additional term dependent on the trace of the strain rate tensor :
p = − 1 3 tr ( σ ) + ζ tr ( ϵ ) {\displaystyle p=-{\frac {1}{3}}\operatorname {tr} ({\boldsymbol {\sigma }})+\zeta \operatorname {tr} ({\boldsymbol {\epsilon }})}
where the coefficient ζ {\displaystyle \zeta } is the bulk viscosity . The trace of the strain rate tensor corresponds to the flow compression (the divergence of the flow velocity ):
tr ( ϵ ) = tr ( 1 2 ( ∇ u + ( ∇ u ) T ) ) = ∇ ⋅ u {\displaystyle \operatorname {tr} ({\boldsymbol {\epsilon }})=\operatorname {tr} \left({\frac {1}{2}}(\nabla \mathbf {u} +(\nabla \mathbf {u} )^{T})\right)=\nabla \cdot \mathbf {u} }
So the expression for the thermodynamic pressure is usually expressed as:
p = − σ h + ζ ∇ ⋅ u = p ¯ + ζ ∇ ⋅ u {\displaystyle p=-\sigma _{h}+\zeta \nabla \cdot \mathbf {u} ={\bar {p}}+\zeta \nabla \cdot \mathbf {u} }
where the mechanical pressure has been denoted with p ¯ {\textstyle {\bar {p}}} .
In some cases, the second viscosity ζ {\textstyle \zeta } can be assumed to be constant, in which case the effect of the volume viscosity ζ {\textstyle \zeta } is that the mechanical pressure is not equivalent to the thermodynamic pressure, [ 3 ] as stated above: p ¯ ≡ p − ζ ∇ ⋅ u , {\displaystyle {\bar {p}}\equiv p-\zeta \,\nabla \cdot \mathbf {u} ,} However, this difference is usually neglected (that is, whenever we are not dealing with processes such as sound absorption and attenuation of shock waves, [ 4 ] where the second viscosity coefficient becomes important) by explicitly assuming ζ = 0 {\textstyle \zeta =0} . The assumption ζ = 0 {\textstyle \zeta =0} is called the Stokes hypothesis . [ 5 ] The validity of the Stokes hypothesis can be demonstrated for monatomic gases both experimentally and from kinetic theory; [ 6 ] for other gases and liquids, the Stokes hypothesis is generally incorrect.
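A small sketch of the relation p̄ = p − ζ ∇·u follows; the numbers are arbitrary and only illustrate that the two pressures coincide when the flow is incompressible (∇·u = 0) or when the Stokes hypothesis ζ = 0 is adopted.

def mechanical_pressure(p_thermo: float, zeta: float, div_u: float) -> float:
    """Mechanical pressure p_bar = p - zeta * div(u), in consistent SI units."""
    return p_thermo - zeta * div_u

p = 101_325.0          # thermodynamic pressure, Pa
zeta = 2.0e-3          # bulk viscosity, Pa*s (illustrative value)
print(mechanical_pressure(p, zeta, div_u=0.0))      # incompressible flow: equals p
print(mechanical_pressure(p, zeta, div_u=1.0e4))    # strongly compressing flow: differs from p
print(mechanical_pressure(p, 0.0, div_u=1.0e4))     # Stokes hypothesis (zeta = 0): equals p again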
Its magnitude in a fluid, σ h {\displaystyle \sigma _{h}} , can be given by Stevin's Law : σ h = ρ g h {\displaystyle \sigma _{h}=\rho gh}
where ρ {\displaystyle \rho } is the density of the fluid, g {\displaystyle g} is the acceleration due to gravity, and h {\displaystyle h} is the depth of the point below the surface of the fluid.
For example, the magnitude of the hydrostatic stress felt at a point under ten meters of fresh water would be σ h = ρ w g h ≈ 1000 kg/m 3 × 9.81 m/s 2 × 10 m ≈ 98 kPa,
where the index w indicates "water".
Because the hydrostatic stress is isotropic, it acts equally in all directions. In tensor form, the hydrostatic stress is equal to σ h I 3 {\displaystyle \sigma _{h}I_{3}}
where I 3 {\displaystyle I_{3}} is the 3-by-3 identity matrix .
Hydrostatic compressive stress is used for the determination of the bulk modulus for materials. | https://en.wikipedia.org/wiki/Hydrostatic_stress |
A hydrostatic test is a way in which pressure vessels such as pipelines , plumbing , gas cylinders , boilers and fuel tanks can be tested for strength and leaks. The test involves filling the vessel or pipe system with a liquid, usually water, which may be dyed to aid in visual leak detection , and pressurization of the vessel to the specified test pressure. Pressure tightness can be tested by shutting off the supply valve and observing whether there is a pressure loss. The location of a leak can be visually identified more easily if the water contains a colorant. Strength is usually tested by measuring permanent deformation of the container.
Hydrostatic testing is the most common method employed for testing pipes and pressure vessels. Using this test helps maintain safety standards and durability of a vessel over time. Newly manufactured pieces are initially qualified using the hydrostatic test. They are then revalidated at regular intervals according to the relevant standard. In some cases where a hydrostatic test is not practicable a pneumatic pressure test may be an acceptable alternative. [ 1 ]
Testing of pressure vessels for transport and storage of gases is very important because such containers can explode if they fail under pressure.
Hydrostatic tests are conducted under the constraints of either the industry's or the customer's specifications, or may be required by law. The vessel is filled with a nearly incompressible liquid – usually water or oil – pressurised to test pressure, and examined for leaks or permanent changes in shape. Red or fluorescent dyes may be added to the water to make leaks easier to see. The test pressure is always considerably higher than the operating pressure to give a margin of safety, typically 166.66%, 143% or 150% of the design working pressure, depending on the regulations that apply. For example, a cylinder rated at 2015 psi (approximately 139 bar) under a US DOT specification would be tested at around 3360 psi (approximately 232 bar), i.e. roughly five-thirds of its rated pressure.
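The worked figure above follows from multiplying the rated working pressure by the applicable test factor; a minimal sketch (the governing standard, not this article, defines which factor applies):

def test_pressure(working_pressure: float, factor_percent: float) -> float:
    """Hydrostatic test pressure as a percentage of the design working pressure."""
    return working_pressure * factor_percent / 100.0

# A 2015 psi rated cylinder tested at 5/3 (166.66 %) of its working pressure:
print(round(test_pressure(2015, 166.66)))   # ~3358 psi, i.e. roughly the 3360 psi quoted above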
Water is commonly used because it is cheap, easily available, and usually harmless to the system to be tested. Hydraulic fluids and oils may be specified where contamination with water could cause problems. These fluids are nearly incompressible, so relatively little work is needed to raise them to a high pressure, and they can release only a small amount of energy in the case of a failure: only a small volume escapes under high pressure if the container fails. If a high-pressure gas were used instead, the stored gas would expand rapidly from its compressed volume (following the ideal gas relation V = nRT/p) in the event of a failure, resulting in an explosion, with the attendant risk of damage or injury.
Small pressure vessels are normally tested using a water jacket test. The vessel is visually examined for defects and then placed in a container filled with water, and in which the change in volume of the vessel can be measured, usually by monitoring the water level in a calibrated tube. The vessel is then pressurised for a specified period, usually 30 or more seconds, and if specified, the expansion will be measured by reading off the amount of liquid that has been forced into the measuring tube by the volume increase of the pressurised vessel. The vessel is then depressurised, and the permanent volume increase due to plastic deformation while under pressure ( permanent set ) is measured by comparing the final volume in the measuring tube with the volume before pressurisation.
A leak will give a similar result to permanent set, but will be detectable by holding the volume in the pressurised vessel by closing the inlet valve for a period before depressurising, as the pressure will drop steadily during this period if there is a leak. In most cases a permanent set that exceeds the specified maximum will indicate failure. A leak may also be a failure criterion, but it may be that the leak is due to poor sealing of the test equipment. If the vessel fails, it will normally go through a condemning process marking the cylinder as unsafe. [ citation needed ]
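The pass/fail arithmetic of the water jacket test described above can be sketched as follows; the 10% rejection limit is an assumption based on common cylinder requalification practice, not a figure given in the text, and the readings are invented example values.

def permanent_expansion_percent(total_expansion_ml: float, permanent_set_ml: float) -> float:
    """Permanent (plastic) expansion as a percentage of the total expansion under pressure."""
    return 100.0 * permanent_set_ml / total_expansion_ml

# Example readings from the calibrated tube (millilitres of displaced water).
total, permanent = 150.0, 9.0
ratio = permanent_expansion_percent(total, permanent)
REJECTION_LIMIT = 10.0   # assumed typical limit; the governing standard defines the real value
print(f"{ratio:.1f}% permanent expansion -> {'PASS' if ratio <= REJECTION_LIMIT else 'FAIL'}")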
The information needed to specify the test is stamped onto the cylinder. This includes the design standard, serial number, manufacturer, and manufacture date. After testing, the vessel or its nameplate will usually be stamp marked with the date of the successful test, and the test facility's identification mark. [ citation needed ]
A simpler test, that is also considered a hydrostatic test but can be performed by anyone who has a garden hose, is to pressurise the vessel by filling it with water and to physically examine the outside for leaks. This type of test is suitable for containers such as boat fuel tanks, which are not pressure vessels but must work under the hydrostatic pressure of the contents. A hydrostatic test head is usually specified as a height above the tank top. The tank is pressurised by filling water to the specified height through a temporary standpipe if necessary. It may be necessary to seal vents and other outlets during the test. [ citation needed ]
Portable fire extinguishers are safety tools that are required in most public buildings. Fire extinguishers are also recommended in homes. Over time the conditions in which they are housed, and the manner in which they are handled affect the structural integrity of the extinguisher. A structurally weakened fire extinguisher can malfunction or even burst when it is needed the most. To maintain the quality and safety of this product, hydrostatic testing is utilized. All critical components of the fire extinguisher should be tested to ensure proper function.
Hydrotesting of pipes, pipelines and vessels is performed to expose defective materials that have missed prior detection, ensure that any remaining defects are insignificant enough to allow operation at design pressures, expose possible leaks and serve as a final validation of the integrity of the constructed system. ASME B31.3 requires this testing to ensure tightness and strength.
Buried high pressure oil and gas pipelines are tested for strength by pressurising them to at least 125% of their maximum allowable working pressure (MAWP) at any point along their length. Since many long distance transmission pipelines are designed to have a steel hoop stress of 80% of specified minimum yield strength (SMYS) at the maximum allowable operating pressure (MAOP), this means that the steel is stressed to SMYS and above during testing, and test sections must be selected to ensure that excessive plastic deformation does not occur. [ citation needed ]
For piping built to ASME B31.3, if the design temperature is greater than the test temperature, then the test pressure must be adjusted for the related allowable stress at the design temperature. This is done by multiplying 1.5 MAWP by the ratio of the allowable stress at the test temperature to allowable stress at the design temperature per ASME B31.3 Section 345.4.2 Equation 24. Test pressures need not exceed a value that would produce a stress higher than yield stress at test temperature. ASME B31.3 section 345.4.2 (c)
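The adjustment described above can be written as P_T = 1.5 · P · (S_T / S), where S_T and S are the allowable stresses at the test and design temperatures. The sketch below uses made-up stress values purely for illustration; actual allowable stresses come from the code's stress tables.

def b31_3_test_pressure(design_pressure: float,
                        allowable_stress_test_temp: float,
                        allowable_stress_design_temp: float) -> float:
    """Minimum hydrostatic test pressure following the ratio described for ASME B31.3 345.4.2.

    P_T = 1.5 * P * (S_T / S). The stress values used below are illustrative, not code data.
    """
    return 1.5 * design_pressure * (allowable_stress_test_temp / allowable_stress_design_temp)

# Example: design pressure 10 bar, allowable stress 138 MPa at (ambient) test temperature
# but only 115 MPa at the hotter design temperature.
print(round(b31_3_test_pressure(10.0, 138.0, 115.0), 1))   # 18.0 bar, instead of a plain 15.0 bar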
Other codes require a more onerous approach. BS PD 8010-2 requires testing to 150% of the design pressure – which should not be less than the MAOP plus surge and other incidental effects that will occur during normal operation. [ citation needed ]
Leak testing is performed by balancing changes in the measured pressure in the test section against the theoretical pressure changes calculated from changes in the measured temperature of the test section.
Australian standard AS2885.5 "Pipelines – Gas and liquid petroleum: Part 5: Field pressure testing" gives an excellent explanation of the factors involved. [ clarification needed ]
In the aerospace industry, depending on the airline, company or customer, certain codes will need to be followed. For example, Bell Helicopter has a certain specification that will have to be followed for any parts that will be used in their helicopters. [ citation needed ]
Most countries have legislation or pressure vessel codes which requires vessels to be regularly tested, for example every two years (with a visual inspection annually) for high pressure gas cylinders and every five or ten years for lower pressure ones such as used in fire extinguishers . [ 2 ] [ clarification needed ] Gas cylinders which fail are normally destroyed as part of the testing protocol to avoid the dangers inherent in them being subsequently used. [ 3 ]
These common US standard gas cylinders have the following requirements: [ 4 ]
Typically organizations such as DOT PHMSA , ISO , ASTM and ASME specify the guidelines for the different types of pressure vessels.
Hydraulic testing is a hazardous process and should be performed with caution by competent personnel. Adhering to prescribed procedures defined in relevant technical standards appropriate to the specific application and jurisdiction will usually reduce these risks to an acceptable level.
| https://en.wikipedia.org/wiki/Hydrostatic_test |
Hydrothermal carbonization ( HTC ) (also referred to as "aqueous carbonization at elevated temperature and pressure") is a chemical process for the conversion of organic compounds to structured carbons. It can be used to make a wide variety of nanostructured carbons, simple production of brown coal substitute, synthesis gas , liquid petroleum precursors and humus from biomass with release of energy. Technically the process imitates, within a few hours, the brown coal formation process (German " Inkohlung " literally "coalification") which takes place in nature over enormously longer geological periods of 50,000 to 50 million years. It was investigated by Friedrich Bergius and first described in 1913. [ 1 ]
The carbon efficiency of most processes that convert organic matter to fuel is relatively low, i.e. the proportion of the carbon in the biomass that ends up in the usable end product is relatively low:
In poorly designed systems, the unused carbon escapes into the atmosphere as carbon dioxide, or, when fermented, as methane. Both gases are greenhouse gases with methane even more climate-active on a per molecule basis than CO 2 . In addition, the heat which is released in these processes is not generally used. Advanced modern systems capture nearly all the gases and use the heat as part of the process or for district heating .
The problem with the production of biodiesel from oil plants is that only the energy contained in the fruit can be used. If the entire plant could be used for fuel production, the energy yield could be increased by a factor of three to five with the same cultivation area when growing fast-growing plants such as willow , poplar , miscanthus , hemp , reeds or forestry , while simultaneously reducing energy, fertilizer and herbicide use, with the possibility of using - for current energy plant cultivation - poor soil. Hydrothermal carbonization makes it possible - similar to the biomass-to-liquid process - to use almost all of the carbon contained in the biomass for fuel generation. It is a new variation of an old field (biomass conversion to biofuel ) that has recently been further developed in Germany. [ 2 ] It involves moderate temperatures and pressures over an aqueous solution of biomass in a dilute acid for several hours. The resulting matter reportedly captures 100% of the carbon in a "charcoal" powder that could provide a feed source for soil amendment (similar to biochar ) and further studies in economic nanomaterial production. [ 3 ]
Biomass, in particular vegetable material (simplified in the following reaction equation as sugar with the formula C 6 H 12 O 6 ), is heated together with water to 180 °C (356 °F) in a pressure vessel . The pressure rises to about 1 megapascal (150 psi). During the reaction, oxonium ions are also formed, which reduce the pH to 5 or lower. This step can be accelerated by adding a small amount of citric acid . [ 4 ] At these low pH values, more carbon passes into the aqueous phase. The reaction is exothermic , that is, energy is released. After 12 hours, the carbon of the reactants has completely reacted: 90 to 99% of the carbon is present as an aqueous sludge of porous brown coal spheres (C 6 H 2 O) with pore sizes between 8 and 20 nm as a solid phase, and the remaining 1 to 10% of the carbon is either dissolved in the aqueous phase or converted to carbon dioxide. The reaction equation for the formation of brown coal is:
C 6 H 12 O 6 → C 6 H 2 O + 5 H 2 O Δ H = − 1.105 k J / m o l {\displaystyle \mathrm {C_{6}H_{12}O_{6}} \quad \rightarrow \quad \mathrm {C_{6}H_{2}O} +\mathrm {5\ H_{2}O\qquad \Delta H=-1.105\ \mathrm {kJ/mol} } }
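From the reaction equation, the theoretical mass yield and carbon retention of the hydrochar can be estimated; the short sketch below uses standard atomic masses, which are not given in the article.

# Theoretical yield of hydrochar from glucose-like biomass, per the reaction above.
M_C, M_H, M_O = 12.011, 1.008, 15.999

m_glucose = 6 * M_C + 12 * M_H + 6 * M_O   # C6H12O6: ~180.16 g/mol
m_char    = 6 * M_C +  2 * M_H + 1 * M_O   # C6H2O:   ~ 90.08 g/mol

print(f"theoretical mass yield: {m_char / m_glucose:.1%}")   # ~50 % of the dry biomass mass
print("carbon retention: all six carbon atoms remain in the solid (100 % in the ideal case)")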
The reaction can be stopped in several stages with incomplete elimination of water, giving different intermediate products. After a few minutes, liquid intermediate lipophilic substances are formed, but their handling is very difficult because of their high reactivity. Subsequently, these substances polymerize and peat-like structures are formed, which are present as intermediates after about 8 hours.
As a result of the exothermic reaction of hydrothermal carbonization, about 3/8 of the calorific value of the biomass based on the dry mass is released (with a high lignin , resin and/or oil content of at least 1/4). If the process is managed properly, it is possible to use this waste heat from wet biomass to produce dry biocoal and to use some of the converted energy for energy generation.
In a large-scale technical implementation of hydrothermal carbonization of sewage sludge , it has been shown that about 20% of the fuel energy content contained in 90% end-dried HTC coal is required to heat the process. Furthermore, approximately 5% of the generated energy content is necessary for the electrical operation of the plant. It has proved particularly beneficial in the case of the HTC process that, with mechanical dehydration , more than 60% of the dry substance content can be achieved in the raw carbon, and thus the energy and equipment expenditure for the final drying of the coal is low compared to conventional drying methods of these slurries. [ 5 ]
Compared to sludge digestion with subsequent drying, the energy requirement of the HTC is lower by approximately 20% of the electrical energy and approximately 70% of the thermal energy. The amount of energy produced by the HTC as a storable coal is simultaneously 10% higher. [ 6 ] Compared to conventional thermal drying of sewage sludge, the HTC saves 62% of electricity and 69% of thermal energy due to its significantly simpler drainage. [ 7 ]
C 6 H 2 O + 5 H 2 O → 6 C O + 6 H 2 {\displaystyle \mathrm {C_{6}H_{2}O} +\mathrm {5\ H_{2}O} \quad \rightarrow \quad \mathrm {6\ CO} +\mathrm {6\ H_{2}} }
This synthesis gas could be used to produce gasoline via the Fischer-Tropsch process. Alternatively, the liquid intermediates that are formed during the incomplete conversion of the biomass could be used for fuel and plastic production.
Mexico City started the construction of the first HTC module for the conversion of 23,000 tons of organic waste per year in 2022. The plant is based on the TerraNova HTC technology and includes a pyrolysis plant to provide process heat to the HTC process. [ 12 ]
In Phoenixville, Pennsylvania, in the US, HTC will be used in the first municipally owned HTC wastewater-treatment installation in North America, built by SoMax BioEnergy. [ 13 ]
In Mezzocorona (TN), Italy, the first HTC plant in the country was built in late 2019 by CarboREM and is in service treating the digestate from an existing anaerobic digestion (AD) plant. The AD plant is fed with sludge coming from regional wineries and dairies. The slurry from the HTC plant is then separated by a centrifuge; the HTC liquid is recirculated to the AD plant to produce more biogas, and nearly 500 tons per year of hydrochar are produced. Subsequently, the hydrochar is stabilized and processed by a third company as compost for re-introduction into agriculture in a circular process.
In Relzow, Germany, near Anklam ( Mecklenburg-Western Pomerania ), an HTC plant was officially inaugurated in the middle of November 2017 in the "Innovation Park Vorpommern". [ 14 ] AVA was also the first company in the world to establish an HTC plant at an industrial level, in 2010. [ 7 ]
In the summer of 2016, an HTC plant for the treatment of sewage sludge was put into operation in Jining, China, to produce renewable fuel for the local coal-fired power plant. According to the manufacturer TerraNova Energy, it is in continuous operation with an annual capacity of 14,000 tons. [ 15 ]
Hydrothermal liquefaction (HTL) is a thermal depolymerization process used to convert wet biomass , and other macromolecules, into crude-like oil under moderate temperature and high pressure. [ 1 ] The crude-like oil has high energy density with a lower heating value of 33.8-36.9 MJ/kg and 5-20 wt% oxygen and renewable chemicals. [ 2 ] [ 3 ] The process has also been called hydrous pyrolysis .
The reaction usually involves homogeneous and/or heterogeneous catalysts to improve the quality of products and yields. [ 1 ] Carbon and hydrogen of an organic material, such as biomass, peat or low-rank coals ( lignite ), are thermo-chemically converted into hydrophobic compounds with low viscosity and high solubility. Depending on the processing conditions, the fuel can be used as produced for heavy engines, including marine and rail, or upgraded to transportation fuels, [ 4 ] such as diesel, gasoline or jet fuels.
The process may be significant in the creation of fossil fuels . [ 5 ] Simple heating without water ( anhydrous pyrolysis ) has long been considered to take place naturally during the catagenesis of kerogens to fossil fuels . In recent decades it has been found that water under pressure causes more efficient breakdown of kerogens at lower temperatures than without it. The carbon isotope ratio of natural gas also suggests that hydrogen from water has been added during creation of the gas.
As early as the 1920s, the concept of using hot water and alkali catalysts to produce oil out of biomass was proposed. [ 6 ] In 1939, U.S. patent 2,177,557, [ 7 ] described a two-stage process in which a mixture of water, wood chips , and calcium hydroxide is heated in the first stage at temperatures in a range of 220 to 360 °C (428 to 680 °F), with the pressure "higher than that of saturated steam at the temperature used." This produces "oils and alcohols" which are collected. The materials are then subjected in a second stage to what is called "dry distillation", which produces "oils and ketones". Temperatures and pressures for this second stage are not disclosed.
These processes were the foundation of later HTL technologies that attracted research interest especially during the 1970s oil embargo. It was around that time that a high-pressure (hydrothermal) liquefaction process was developed at the Pittsburgh Energy Research Center (PERC) and later demonstrated (at the 100 kg/h scale) at the Albany Biomass Liquefaction Experimental Facility at Albany, Oregon, US. [ 2 ] [ 8 ] In 1982, Shell Oil developed the HTU™ process in the Netherlands. [ 8 ] Other organizations that have previously demonstrated HTL of biomass include Hochschule für Angewandte Wissenschaften Hamburg, Germany, SCF Technologies in Copenhagen, Denmark, EPA’s Water Engineering Research Laboratory, Cincinnati, Ohio, USA, and Changing World Technology Inc. (CWT), Philadelphia, Pennsylvania, USA. [ 8 ] Today, technology companies such as Licella/Ignite Energy Resources (Australia), Arbios Biotech , a Licella/Canfor joint venture, Altaca Energy (Turkey), Circlia Nordic (Denmark), Steeper Energy (Denmark, Canada) continue to explore the commercialization of HTL. [ 9 ] Construction has begun in Teesside, UK , for a catalytic hydrothermal liquefaction plant that aims to process 80,000 tonnes per year of mixed plastic waste by 2022. [ 10 ]
In hydrothermal liquefaction processes, long carbon chain molecules in biomass are thermally cracked and oxygen is removed in the form of H 2 O (dehydration) and CO 2 (decarboxylation). These reactions result in the production of high H/C ratio bio-oil. Simplified descriptions of dehydration and decarboxylation reactions can be found in the literature (e.g. Asghari and Yoshida (2006) [ 11 ] and Snåre et al. (2007). [ 12 ]
Most applications of hydrothermal liquefaction operate at temperatures between 250–550 °C and high pressures of 5–25 MPa, usually with catalysts, for 20–60 minutes, [ 2 ] [ 3 ] although higher or lower temperatures can be used to optimize gas or liquid yields, respectively. [ 8 ] At these temperatures and pressures, the water present in the biomass becomes either subcritical or supercritical, depending on the conditions, and acts as a solvent, reactant, and catalyst to facilitate the reaction of biomass to bio-oil.
The exact conversion of biomass to bio-oil is dependent on several variables: [ 1 ]
Theoretically, any biomass can be converted into bio-oil using hydrothermal liquefaction regardless of water content, and various different biomasses have been tested, from forestry and agriculture residues, [ 13 ] sewage sludges, food process wastes, to emerging non-food biomass such as algae. [ 1 ] [ 6 ] [ 8 ] [ 14 ] The composition of cellulose , hemicellulose , protein, and lignin in the feedstock influence the yield and quality of the oil from the process.
Zhang et al., [ 15 ] at the University of Illinois, report on a hydrous pyrolysis process in which swine manure is converted to oil by heating the swine manure and water in the presence of carbon monoxide in a closed container. For that process they report that a temperature of at least 275 °C (527 °F) is required to convert the swine manure to oil, and that temperatures above about 335 °C (635 °F) reduce the amount of oil produced. The Zhang et al. process produces pressures of about 7 to 18 MPa (1000 to 2600 psi - 69 to 178 atm ), with higher temperatures producing higher pressures. Zhang et al. used a retention time of 120 minutes for the reported study, but report that at higher temperatures a time of less than 30 minutes results in significant production of oil.
Barbero-López et al., [ 16 ] tested in the University of Eastern Finland the use of spent mushroom substrate and tomato plant residues as feedstock for hydrothermal liquefaction. They focused in the hydrothermal liquids produced, rich in many different constituents, and found that they are potential antifungals against several fungi causing decay on wood, but their ecotoxicity was lower than that of the commercial Cu-based wood preservative. The effectiveness of the antifungal activity of the hydrothermal liquids varied mostly due to liquid concentration and strain sensitivity, while the different feedstocks did not have such a significant effect.
A commercialized process using hydrous pyrolysis (see the article Thermal depolymerization ) was used by Changing World Technologies, Inc. (CWT) and its subsidiary Renewable Environmental Solutions, LLC (RES) to convert turkey offal. [ 17 ] [ 18 ] It is a two-stage process: the first stage converts the turkey offal to hydrocarbons at a temperature of 200 to 300 °C (392 to 572 °F), and the second stage cracks the oil into light hydrocarbons at a temperature near 500 °C (932 °F). Adams et al. report only that the first stage heating is "under pressure"; Lemley, [ 19 ] in a non-technical article on the CWT process, reports that for the first stage (for conversion) a temperature of about 260 °C (500 °F) and a pressure of about 600 psi are used, with a time for the conversion of "usually about 15 minutes". For the second stage (cracking), Lemley reports a temperature of about 480 °C (896 °F).
Temperature plays a major role in the conversion of biomass to bio-oil. The temperature of the reaction determines the depolymerization of the biomass to bio-oil, as well as the repolymerization into char . [ 1 ] While the ideal reaction temperature is dependent on the feedstock used, temperatures above ideal lead to an increase in char formation and eventually increased gas formation, while lower than ideal temperatures reduce depolymerization and overall product yields.
Similarly to temperature, the rate of heating plays a critical role in the production of the different phase streams, due to the prevalence of secondary reactions at non-optimum heating rates. [ 1 ] Secondary reactions become dominant in heating rates that are too low, leading to the formation of char. While high heating rates are required to form liquid bio-oil, there is a threshold heating rate and temperature where liquid production is inhibited and gas production is favored in secondary reactions.
Pressure (along with temperature) determines the super- or subcritical state of solvents as well as overall reaction kinetics and the energy inputs required to yield the desirable HTL products (oil, gas, chemicals, char etc.). [ 1 ]
Hydrothermal liquefaction is a fast process, resulting in low residence times for depolymerization to occur. Typical residence times are measured in minutes (15 to 60 minutes); however, the residence time is highly dependent on the reaction conditions, including feedstock, solvent ratio and temperature. As such, optimization of the residence time is necessary to ensure a complete depolymerization without allowing further reactions to occur. [ 1 ]
While water acts as a catalyst in the reaction, other catalysts can be added to the reaction vessel to optimize the conversion. [ 20 ] Previously used catalysts include water-soluble inorganic compounds and salts, including KOH and Na 2 CO 3 , as well as transition metal catalysts using nickel , palladium , platinum and ruthenium supported on either carbon , silica or alumina . The addition of these catalysts can lead to an oil yield increase of 20% or greater, due to the catalysts converting the protein, cellulose, and hemicellulose into oil. This ability for catalysts to convert biomaterials other than fats and oils to bio-oil allows for a wider range of feedstock to be used. [ citation needed ]
Biofuels that are produced through hydrothermal liquefaction are carbon neutral , meaning that there are no net carbon emissions produced when burning the biofuel. The plant materials used to produce bio-oils use photosynthesis to grow, and as such consume carbon dioxide from the atmosphere. The burning of the biofuels produced releases carbon dioxide into the atmosphere, but is nearly completely offset by the carbon dioxide consumed from growing the plants, resulting in a release of only 15-18 g of CO 2 per kWh of energy produced. This is substantially lower than the releases rate of fossil fuel technologies, which can range from releases of 955 g/kWh (coal), 813 g/kWh (oil), and 446 g/kWh (natural gas). [ 1 ] Recently, Steeper Energy announced that the carbon intensity (CI) of its Hydrofaction™ oil is 15 CO 2 eq/MJ according to GHGenius model (version 4.03a), while diesel fuel is 93.55 CO 2 eq/MJ. [ 21 ]
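The paragraph above quotes carbon intensities on two bases, g CO 2 per kWh and g CO 2 -eq per MJ; the conversion factor 1 kWh = 3.6 MJ puts the per-MJ figures onto a per-kWh-of-fuel-energy basis (this is a different basis from the lifecycle per-kWh figures earlier in the paragraph). A minimal conversion sketch:

MJ_PER_KWH = 3.6

def per_mj_to_per_kwh(ci_g_per_mj: float) -> float:
    """Convert a carbon intensity from g CO2-eq/MJ to g CO2-eq/kWh of fuel energy."""
    return ci_g_per_mj * MJ_PER_KWH

print(per_mj_to_per_kwh(15.0))    # Hydrofaction oil:  ~54 g CO2-eq per kWh of fuel energy
print(per_mj_to_per_kwh(93.55))   # diesel reference: ~337 g CO2-eq per kWh of fuel energy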
Hydrothermal liquefaction is a clean process that doesn't produce harmful compounds, such as ammonia , NO x , or SO x . [ 1 ] Instead the heteroatoms , including nitrogen, sulfur, and chlorine, are converted into harmless byproducts such as N 2 and inorganic acids that can be neutralized with bases.
The HTL process differs from pyrolysis in that it can process wet biomass and produce a bio-oil with approximately twice the energy density of pyrolysis oil. Pyrolysis is a related process to HTL, but the biomass must be processed and dried in order to increase the yield. [ 22 ] The presence of water in the pyrolysis feedstock drastically increases the energy required for decomposition, because the water's heat of vaporization must be supplied along with the heat needed to break down the organic material. Typical pyrolysis processes require a water content of less than 40% to suitably convert the biomass to bio-oil. This requires considerable pretreatment of wet biomass such as tropical grasses, which contain a water content as high as 80-85%, and even further treatment for aquatic species, which can contain higher than 90% water content. [ 1 ]
The HTL oil can contain up to 80% of the feedstock carbon content (single pass). [ 23 ] HTL oil has good potential to yield bio-oil with "drop-in" properties that can be directly distributed in existing petroleum infrastructure. [ 23 ] [ 24 ]
The energy returned on energy invested (EROEI) of these processes is uncertain and/or has not been measured. Furthermore, products of hydrous pyrolysis might not meet current fuel standards. Further processing may be required to produce fuels. [ 25 ] | https://en.wikipedia.org/wiki/Hydrothermal_liquefaction |
Hydrothermal synthesis includes the various techniques of synthesizing substances from high-temperature aqueous solutions at high pressures ; also termed "hydrothermal method". The term " hydrothermal " is of geologic origin. [ 1 ] Geochemists and mineralogists have studied hydrothermal phase equilibria since the beginning of the twentieth century. George W. Morey at the Carnegie Institution and later, Percy W. Bridgman at Harvard University did much of the work to lay the foundations necessary to containment of reactive media in the temperature and pressure range where most of the hydrothermal work is conducted. In the broadest definition, a process is considered hydrothermal if it involves water temperatures above 100 °C (212 °F) and pressures above 1 atm. [ 2 ]
In the context of materials science , hydrothermal synthesis focuses on the production of single crystals . Under high temperatures (> 300 °C) and pressures (> 100 atm), ordinarily insoluble minerals become soluble in water. [ 2 ] The crystal growth is performed in an apparatus consisting of a steel pressure vessel called an autoclave , in which the reactant ("nutrient") is supplied along with water . A temperature gradient is maintained between the opposite ends of the growth chamber. At the hotter end the nutrient solute dissolves, while at the cooler end it is deposited on a seed crystal, growing the desired crystal.
Advantages of the hydrothermal method over other types of crystal growth include the ability to create crystalline phases which are not stable at the melting point. Also, materials which have a high vapor pressure near their melting points can be grown by the hydrothermal method. The method is also particularly suitable for the growth of large good-quality crystals while maintaining control over their composition. Disadvantages of the method include the need of expensive autoclaves, and the impossibility of observing the crystal as it grows if a steel tube is used. [ 3 ] There are autoclaves made out of thick walled glass, which can be used up to 300 °C and 10 bar. [ 4 ]
The first report of the hydrothermal growth of crystals [ 5 ] was by German geologist Karl Emil von Schafhäutl (1803–1890) in 1845: he grew microscopic quartz crystals in a pressure cooker. [ 6 ] In 1848, Robert Bunsen reported growing crystals of barium and strontium carbonate at 200 °C and at pressures of 15 atmospheres, using sealed glass tubes and aqueous ammonium chloride ( "Salmiak" ) as a solvent. [ 7 ] In 1849 and 1851, French crystallographer Henri Hureau de Sénarmont (1808–1862) produced crystals of various minerals via hydrothermal synthesis. [ 8 ] [ 9 ] Later (1905) Giorgio Spezia (1842–1911) published reports on the growth of macroscopic crystals. [ 10 ] He used solutions of sodium silicate , natural crystals as seeds and supply, and a silver-lined vessel. By heating the supply end of his vessel to 320–350 °C, and the other end to 165–180 °C, he obtained about 15 mm of new growth over a 200-day period. Unlike modern practice, the hotter part of the vessel was at the top. A shortage in the electronics industry of natural quartz crystals from Brazil during World War 2 led to postwar development of a commercial-scale hydrothermal process for culturing quartz crystals, by A. C. Walker and Ernie Buehler in 1950 at Bell Laboratories. [ 11 ] Other notable contributions have been made by Nacken (1946), Hale (1948), Brown (1951), and Kohman (1955). [ 12 ]
A large number of compounds belonging to practically all classes have been synthesized under hydrothermal conditions: elements, simple and complex oxides , tungstates , molybdates , carbonates, silicates , germanates etc. Hydrothermal synthesis is commonly used to grow synthetic quartz , gems and other single crystals with commercial value. Some of the crystals that have been efficiently grown are emeralds , rubies , quartz, alexandrite and others. The method has proved to be extremely efficient both in the search for new compounds with specific physical properties and in the systematic physicochemical investigation of intricate multicomponent systems at elevated temperatures and pressures.
The crystallization vessels used are autoclaves . These are usually thick-walled steel cylinders with a hermetic seal which must withstand high temperatures and pressures for prolonged periods of time. Furthermore, the autoclave material must be inert with respect to the solvent . The closure is the most important element of the autoclave. Many designs have been developed for seals, the most famous being the Bridgman seal . In most cases, steel -corroding solutions are used in hydrothermal experiments. To prevent corrosion of the internal cavity of the autoclave, protective inserts are generally used. These may have the same shape as the autoclave and fit in the internal cavity (contact-type insert), or be "floating" type inserts which occupy only part of the autoclave interior. Inserts may be made of carbon-free iron , copper , silver , gold , platinum , titanium , glass (or quartz ), or Teflon , depending on the temperature and solution used.
This is the most extensively used method in hydrothermal synthesis and crystal growing. Supersaturation is achieved by reducing the temperature in the crystal growth zone. The nutrient is placed in the lower part of the autoclave filled with a specific amount of solvent. The autoclave is heated in order to create a temperature gradient . The nutrient dissolves in the hotter zone and the saturated aqueous solution in the lower part is transported to the upper part by convective motion of the solution. The cooler and denser solution in the upper part of the autoclave descends while the counterflow of solution ascends. The solution becomes supersaturated in the upper part as the result of the reduction in temperature and crystallization sets in.
In this technique, crystallization takes place without a temperature gradient between the growth and dissolution zones. The supersaturation is achieved by a gradual reduction in temperature of the solution in the autoclave. The disadvantage of this technique is the difficulty in controlling the growth process and introducing seed crystals . For these reasons, this technique is very seldom used.
This technique is based on the difference in solubility between the phase to be grown and that serving as the starting material.
The nutrient consists of compounds that are thermodynamically unstable under the growth conditions. The solubility of the metastable phase exceeds that of the stable phase, and the latter crystallizes as the metastable phase dissolves. This technique is usually combined with one of the other two techniques above.
Compared to the classic techniques, this room-temperature variant is expected to produce materials with comparable morphostructural and biological properties, but in a shorter time. [ 13 ]
Hydrothermal synthesis is of interest for green chemistry processes as it offers the possibility of replacing fossil-based organic solvents with just water. The hydrothermal condition can be seen as halfway between supercritical water and ordinary room-temperature water. The dielectric constant of water is reduced, allowing nonpolar substances to better dissolve. The self-dissociation constant of water also increases by three orders of magnitude, making OH- and H+ more abundant for catalyzing reactions. [ 14 ]
Processing biomass under hydrothermal conditions is already widely used in industry. It is the most common way for breaking down keratin molecules in feather meal into more digestible parts, though it comes with the downside of degrading some amino acids. [ 15 ] It can break down biomass-rich waste products such as sewage sludge and waste straw into potentially useful materials. [ 16 ] | https://en.wikipedia.org/wiki/Hydrothermal_synthesis |
The hydrothermal vent microbial community includes all unicellular organisms that live and reproduce in a chemically distinct area around hydrothermal vents . These include organisms in the microbial mat , free floating cells, or bacteria in an endosymbiotic relationship with animals. Chemolithoautotrophic bacteria derive nutrients and energy from the geological activity at Hydrothermal vents to fix carbon into organic forms. Viruses are also a part of the hydrothermal vent microbial community and their influence on the microbial ecology in these ecosystems is a burgeoning field of research. [ 1 ]
Hydrothermal vents are located where the tectonic plates are moving apart and spreading. This allows water from the ocean to enter the crust of the earth, where it is heated by the magma. The increasing pressure and temperature force the water back out of these openings; on the way out, the water accumulates dissolved minerals and chemicals from the rocks that it encounters. There are generally three kinds of vents, each characterized by its temperature and chemical composition: diffuse vents, white smoker vents, and black smoker vents. [ 2 ] Due to the absence of sunlight at these ocean depths, energy is provided by chemosynthesis, in which symbiotic bacteria and archaea form the bottom of the food chain and are able to support a variety of organisms such as Riftia pachyptila and Alvinella pompejana. These organisms use this symbiotic relationship in order to obtain and use the chemical energy that is released at these hydrothermal vent areas. [ 3 ]
Although surface-water temperatures vary widely with seasonal changes in the depth of the thermocline , temperatures beneath the thermocline and in the waters of the deep sea are relatively constant, with no seasonal or annual changes. These temperatures stay in the range of 0–3 °C, with the exception of the waters immediately surrounding the hydrothermal vents, which can get as high as 407 °C. [ 4 ] [ 5 ] These waters are prevented from boiling due to the high pressure at those depths.
With increasing depth, the high pressure begins to take effect. The pressure increases by about 10 megapascals (MPa) for every kilometre of vertical distance. This means that hydrostatic pressure can reach up to 110 MPa at the depths of the trenches. [ 6 ]
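As a quick check (assuming a trench depth of roughly 11 km, a value not stated in the text): Δ P ≈ 10 MPa/km × 11 km ≈ 110 MPa {\displaystyle \Delta P\approx 10\ \mathrm {MPa/km} \times 11\ \mathrm {km} \approx 110\ \mathrm {MPa} } , consistent with the figure quoted above.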
Salinity remains relatively constant within the deep seas around the world, at 35 parts per thousand. [ 4 ]
Although there is very little light in the hydrothermal vent environment, photosynthetic organisms have been found. [ 7 ] However, the energy that the majority of organisms use comes from chemosynthesis. The organisms use the minerals and chemicals that come out of the vents.
Vent types correlate with geochemical and physical conditions, such as temperature, mineralogy, and pH. The most common hydrothermal vents are black smokers and white smokers, with diffuse vents generally regarded as the third type. [ 2 ] Alkaline vents occur as well, along with other forms of hydrothermal vents.
Black smokers generally release water hotter than the other vent types, between 300 and 400 °C. [ 2 ] The fluids are high in metal sulfides, such as those of iron, manganese, copper, and zinc. These minerals precipitate when exposed to cold seawater, depositing on the seabed and forming chimney-like structures. [ 8 ]
White smoker vents emit milky-coloured water between 200 and 330 °C. [ 2 ] The milky colour originates from calcium-rich minerals in oceanic crustal rocks, which dissolve readily in cold seawater; when the warmed water surfaces again, white plumes are produced. White smoker vents form chimney-like structures from these precipitates.
Diffuse vents are characterized by clear, low-temperature (< 30 °C) waters. [ 2 ] Vent fluids are released and spread out slowly before breaching the sea floor. These vents create temperature and chemical gradients, supporting diverse microbial communities, including sulfur-oxidizing bacteria. [ 9 ]
Alkaline vents are characterized by fluids with a high pH, typically ranging from 9 to 11, and low temperatures ranging from 40 to 90 °C. The fluids are rich in methane (CH 4 ) and hydrogen (H 2 ), which are produced by reactions between upper-mantle peridotite and cool seawater, a process known as serpentinization. Environments hosting ultramafic rocks, such as peridotites, are complex and vast. [ 10 ]
Extreme conditions in the hydrothermal vent environment mean that the microbial communities inhabiting these areas must adapt to them. Microbes that live here are known as hyperthermophiles, microorganisms that grow at temperatures above 90 °C. These organisms are found where the fluids from the vents are expelled and mixed with the surrounding water. These hyperthermophilic microbes are thought to contain proteins with extended stability at higher temperatures due to intramolecular interactions, but the exact mechanisms are not yet clear. The stabilization mechanisms for DNA are better understood: denaturation of DNA is thought to be minimized by high salt concentrations, more specifically of Mg, K, and PO4, which are highly concentrated in hyperthermophiles. Along with this, many of the microbes have histone-like proteins bound to their DNA that can offer protection against the high temperatures. Microbes are also found in symbiotic relationships with other organisms in the hydrothermal vent environment because of their detoxification mechanisms, which allow them to metabolize the sulfide-rich waters that would otherwise be toxic to both the host organisms and the microbes. [ 11 ]
Microbial communities inhabiting deep-sea hydrothermal vent chimneys appear to be highly enriched in genes that encode enzymes employed in DNA mismatch repair and homologous recombination . [ 12 ] This finding suggests that these microbial communities have evolved extensive DNA repair capabilities to cope with the extreme DNA damaging conditions in which they exist. [ 12 ]
The most abundant bacteria in hydrothermal vents are chemolithotrophs. These bacteria use reduced chemical species, most often sulfur, as sources of energy to reduce carbon dioxide to organic carbon. [ 13 ] The chemolithotrophic abundance in a hydrothermal vent environment is determined by the available energy sources; different temperature vents have different concentrations of nutrients, suggesting large variation between vents. In general, large microbial populations are found in warm vent water plumes (25 °C), the surfaces exposed to warm vent plumes and in symbiotic tissues within certain vent invertebrates in the vicinity of the vent. [ 13 ]
These bacteria use various forms of available sulfur (S −2 , S 0 , S 2 O 3 −2 ) in the presence of oxygen. They are the predominant population in the majority of hydrothermal vents because their source of energy is widely available, and chemosynthesis rates increase in aerobic conditions. The bacteria at hydrothermal vents are similar to the types of sulfur bacteria found in other H 2 S-rich environments - except Thiomicrospira has replaced Thiobacillus. Other common species are Thiothrix and Beggiatoa, which is of particular importance because of its ability to fix nitrogen. [ 13 ]
Methane is a substantial source of energy in certain hydrothermal vents, but not others: methane is more abundant in warm vents (25 °C) than hydrogen. [ 13 ] Many types of methanotrophic bacteria exist, which require oxygen and fix CH 4 , CH 3 NH 2 , and other C 1 compounds, including CO 2 and CO, if present in vent water. [ 13 ] These types of bacteria are also found in the Riftia trophosome, indicating a symbiotic relationship. [ 13 ] Here, methane-oxidizing bacteria refers to methanotrophs , which are not the same as methanogens : Methanococcus and Methanocaldococcus jannaschii are examples of methanogens [ 13 ] found in hydrothermal vents, whereas Methylocystaceae are methanotrophs, which have been discovered in hydrothermal vent communities as well. [ 14 ]
Little is known about microbes that use hydrogen as a source of energy; however, studies have shown that they are aerobic and also symbiotic with Riftia (see below). [ 13 ] [ 15 ] These bacteria are important in the primary production of organic carbon because the geothermally produced H 2 is taken up for this process. [ 13 ] Hydrogen-oxidizing and denitrifying bacteria may be abundant in vents where NO 3 − -containing bottom seawater mixes with hydrothermal fluid. [ 13 ] Desulfonauticus submarinus is a hydrogenotroph that reduces sulfur compounds in warm vents and has been found in the tube worms R. pachyptila and Alvinella pompejana. [ 16 ]
These bacteria are commonly found in iron and manganese deposits on surfaces exposed intermittently to plumes of hydrothermal and bottom seawater. However, due to the rapid oxidation of Fe 2+ in neutral and alkaline waters (i.e. freshwater and seawater), bacteria responsible for the oxidative deposition of iron would be more commonly found in acidic waters. [ 13 ] Manganese-oxidizing bacteria would be more abundant in freshwater and seawater compared to iron-oxidizing bacteria due to the higher concentration of available metal. [ 13 ]
To illustrate the incredible diversity of hydrothermal vents, the list below is a cumulative representation of bacterial phyla and genera, in alphabetical order. As shown, Proteobacteria appears to be the most dominant phylum present in deep-sea vents.
Symbiotic chemosynthesis is an important process for hydrothermal vent communities. [ 13 ] At warm vents, common hosts of symbiotic bacteria are deep-sea clams such as Calyptogena magnifica , mussels such as Bathymodiolus thermophilus , pogonophoran tube worms such as Riftia pachyptila , and Alvinella pompejana . [ 13 ] [ 15 ] [ self-published source? ] [ 16 ] The trophosome of these animals is a specialized organ for symbionts that contains valuable molecules for chemosynthesis. These organisms have become so reliant on their symbionts that they have lost all morphological features relating to ingestion and digestion, while the bacteria are provided with H 2 S and free O 2 . [ 13 ] Additionally, methane-oxidizing bacteria have been isolated from C. magnifica and R. pachyptila , which indicates that methane assimilation may take place within the trophosome of these organisms.
Viruses are the most abundant life in the ocean, harboring the greatest reservoir of genetic diversity. [ 20 ] As their infections are often fatal, they constitute a significant source of mortality and thus have widespread influence on biological oceanographic processes, evolution and biogeochemical cycling within the ocean. [ 21 ] Evidence has been found, however, to indicate that viruses found in vent habitats have adopted a more mutualistic than parasitic evolutionary strategy in order to survive the extreme and volatile environment in which they exist. [ 22 ]
Deep-sea hydrothermal vents were found to have large numbers of viruses, indicating high viral production. [ 23 ] Samples from the Endeavour Hydrothermal Vents off the southwest coast of British Columbia showed that active venting black smokers had viral abundances from 1.45×10 5 to 9.90×10 7 per mL, with a drop-off in abundance found in the hydrothermal-vent plume (3.5×10 6 per mL) and outside the venting system (2.94×10 6 per mL). The high density of viruses and therefore of viral production (in comparison to surrounding deep-sea waters) implies that viruses are a significant source of microbial mortality at the vents. [ 23 ] As in other marine environments, deep-sea hydrothermal viruses affect the abundance and diversity of prokaryotes and therefore impact microbial biogeochemical cycling by lysing their hosts to replicate. [ 24 ]
However, in contrast to their role as a source of mortality and population control, viruses have also been postulated to enhance the survival of prokaryotes in extreme environments, acting as reservoirs of genetic information. The interactions of the virosphere with microorganisms under environmental stresses are therefore thought to aid microorganism survival through the dispersal of host genes by Horizontal Gene Transfer . [ 25 ]
Each second, "there's roughly Avogadro's number of infections going on in the ocean, and every one of those interactions can result in the transfer of genetic information between virus and host." — Curtis Suttle [ 26 ]
Temperate phages (those not causing immediate lysis) can sometimes confer phenotypes that improve fitness in prokaryotes. [ 7 ] The lysogenic life-cycle can persist stably for thousands of generations of infected bacteria, and the viruses can alter the host's phenotype by enabling the expression of genes (a process known as lysogenic conversion ), which can therefore allow hosts to cope with different environments. [ 27 ] Benefits to the host population can also be conferred by expression of phage-encoded fitness-enhancing phenotypes. [ 28 ]
A review of viral work at hydrothermal vents published in 2015 stated that vents harbour a significant proportion of lysogenic hosts and that a large proportion of viruses are temperate, indicating that the vent environments may provide an advantage to the prophage. [ 29 ]
One study of virus-host interactions in diffuse-flow hydrothermal vent environments found that the high incidence of lysogenic hosts and large populations of temperate viruses was unique in its magnitude and that these viruses are likely critical to the ecology of prokaryotes in these systems. The same study's genetic analysis found that 51% of the viral metagenome sequences were unknown (lacking homology to sequenced data), with high diversity across vent environments but lower diversity for specific vent sites, which indicates high specificity for viral targets. [ 28 ]
A metagenomic analysis of deep-sea hydrothermal vent viromes showed that viral genes manipulated bacterial metabolism , participating in metabolic pathways as well as forming branched pathways in microbial metabolism which facilitated adaptation to the extreme environment. [ 30 ]
An example of this was associated with the sulfur-consuming bacterium SUP05. A study found that 15 of 18 viral genomes sequenced from samples of vent plumes contained genes closely related to an enzyme that the SUP05 chemolithoautotrophs use to extract energy from sulfur compounds. The authors concluded that such phage genes ( auxiliary metabolic genes ) that are able to enhance the sulfur oxidation metabolism in their hosts could provide selective advantages to viruses (continued infection and replication). [ 31 ] The similarity in viral and SUP05 genes for the sulfur metabolism implies an exchange of genes in the past and could implicate the viruses as agents of evolution. [ 32 ]
Another metagenomic study found that viral genes had relatively high proportions of metabolism, vitamins and cofactor genes, indicating that viral genomes encode auxiliary metabolic genes. Coupled with the observations of a high proportion of lysogenic viruses, this indicates that viruses are selected to be integrated pro-viruses rather than free floating viruses and that the auxiliary genes can be expressed to benefit both the host and the integrated virus. The viruses enhance fitness by boosting metabolism or offering greater metabolic flexibility to their hosts. The evidence suggests that deep-sea hydrothermal vent viral evolutionary strategies promote prolonged host integration, favoring a form of mutualism rather than classic parasitism. [ 22 ]
As hydrothermal vents discharge sub-seafloor material, there is also likely a connection between vent viruses and those in the crust. [ 29 ]
Different geographic regions exhibit variations in the structure and metabolism of microbial communities at hydrothermal vents. This is due to environmental factors such as temperature, plate tectonics, the geochemistry of hydrothermal vent fluid and the availability of nutrients that make them up, as well as vent type, and dispersal barriers. [ 33 ]
Comparative analyses of vent fields have revealed distinct microbial assemblages associated with the varying geochemical conditions along mid-ocean ridges, back-arc basins, and volcanic arcs. Slow-spreading systems like the Mid-Atlantic Ridge (MAR) have significant impacts on fluid chemistry. One study analyzed microbial communities in the sulfide deposits of hydrothermal vents along the MAR; differences arose between vent field sites, even when within geographical proximity of each other. The study highlighted the difference between two rock hosts: basalt and ultramafic. The basalt-hosted vent fields observed in this study often maintained similar key chemistry, with high sulfides and metals and higher abundances of sulfur-oxidizing and sulfur-reducing bacteria. The dominant microbial groups here were found to be Gammaproteobacteria and Desulfobacterota. The sites analyzed were the Trans-Atlantic Geotraverse (TAG) and the Snake Pit vent fields. [ 34 ]
Microbial communities at hydrothermal vents mediate the transformation of energy and minerals produced by geological activity into organic material . Organic matter produced by autotrophic bacteria is then used to support the upper trophic levels . The hydrothermal vent fluid and the surrounding ocean water are rich in elements such as iron , manganese , and various species of sulfur including sulfide , sulfite , sulfate , and elemental sulfur , from which microbes can derive energy or nutrients. [ 35 ] Microbes derive energy by oxidizing or reducing elements. Different microbial species use different chemical species of an element in their metabolic processes. For example, some microbial species oxidize sulfide to sulfate while others reduce sulfate to elemental sulfur. As a result, a web of chemical pathways mediated by different microbial species transforms elements such as carbon, sulfur, nitrogen, and hydrogen from one species to another. Their activity alters the original chemical composition produced by the geological activity of the hydrothermal vent environment. [ 36 ]
Geological activity at hydrothermal vents produces an abundance of carbon compounds . [ 13 ] Hydrothermal vent plumes contain high concentrations of methane and carbon monoxide, with methane concentrations reaching 10 7 times that of the surrounding ocean water. [ 13 ] [ 37 ] Deep ocean water is also a large reservoir of carbon, with concentrations of carbon dioxide species such as dissolved CO 2 and HCO 3 − of around 2.2 mM. [ 38 ] The bountiful carbon and electron acceptors produced by geological activity support an oasis of chemoautotrophic microbial communities that fix inorganic carbon, such as CO 2 , using energy from sources such as the oxidation of sulfur, iron, manganese, hydrogen, and methane. [ 13 ] These bacteria supply a large portion of the organic carbon that supports heterotrophic life at hydrothermal vents. [ 39 ]
Carbon fixation is the incorporation of inorganic carbon into organic matter. Unlike the surface of the planet where light is a major source of energy for carbon fixation, hydrothermal vent chemolithotrophic bacteria rely on chemical oxidation to obtain the energy required. [ 18 ] Fixation of CO 2 is observed in members of Gammaproteobacteria , Campylobacterota , Alphaproteobacteria , and members of Archaea domain at hydrothermal vents. Four major metabolic pathways for carbon fixation found in microbial vent communities include the Calvin–Benson–Bassham (CBB) cycle, reductive tricarboxylic acid (rTCA) cycle, 3-hydroxypropionate (3-HP) cycle and reductive acetyl coenzyme A (acetyl-CoA) pathway. [ 18 ]
The Calvin-Benson-Bassham (CBB) cycle is the most common CO 2 fixation pathway found among autotrophs. [ 17 ] The key enzyme is ribulose-1,5-bisphosphate carboxylase/oxygenase ( RuBisCO ). [ 18 ] RuBisCO has been identified in members of the microbial community such as Thiomicrospira, Beggiatoa , zetaproteobacterium , and gammaproteobacterial endosymbionts of tubeworms , bivalves , and gastropods . [ 17 ]
The reductive tricarboxylic acid (rTCA) cycle is the second most commonly found carbon fixation pathway at hydrothermal vents. [ 17 ] The rTCA cycle is essentially a reversal of the TCA (Krebs) cycle that heterotrophs use to oxidize organic matter. Organisms that use the rTCA cycle prefer to inhabit anoxic zones in the hydrothermal vent system because some enzymes in the rTCA cycle are sensitive to the presence of O 2 . [ 18 ] It is found in sulfate-reducing deltaproteobacteria such as some members of Desulfobacter , as well as in Aquificales ( Aquifex ) and Thermoproteales . [ 18 ]
The key enzymes of 3-HP and 3-HP/4-HB cycles are acetyl-CoA/propionyl-CoA carboxylase, malonyl-CoA reductase and propionyl-CoA synthase. Most of the organisms that use this pathway are mixotrophs with the ability to use organic carbon in addition to carbon fixation. [ 18 ]
The reductive acetyl-CoA pathway has only been found in chemoautotrophs. This pathway does not require ATP because it is directly coupled to the oxidation of H 2 , which serves as the electron donor. Organisms that have been found with this pathway prefer H 2 -rich areas. Species include deltaproteobacteria such as Desulfobacterium autotrophicum , acetogens , and methanogenic Archaea . [ 18 ]
Hydrothermal vents produce high quantities of methane, which can originate from both geological and biological processes. [ 13 ] [ 37 ] Methane concentrations in hydrothermal vent plumes can exceed 300 μM depending on the vent. In comparison, the vent fluid contains 10 6 – 10 7 times more methane than the surrounding deep ocean water, in which methane ranges between 0.2 and 0.3 nM in concentration. [ 37 ] Microbial communities use the high concentrations of methane as an energy source and a source of carbon. [ 13 ] Methanotrophy , in which a species uses methane both as an energy and a carbon source, has been observed with the presence of gammaproteobacteria in the Methylococcaceae lineages. [ 17 ] Methanotrophs convert methane into carbon dioxide and organic carbon. [ 37 ] They are typically characterized by the presence of intracellular membranes, and microbes with intracellular membranes were observed to make up 20% of the microbial mat at hydrothermal vents. [ 13 ] [ 37 ]
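The enrichment factor quoted above follows directly from the stated concentrations, as the short check below illustrates; only the numbers given in this paragraph are used.

```python
# Quick check of the methane enrichment quoted above: vent plumes up to
# ~300 uM CH4 versus ~0.2-0.3 nM in the surrounding deep ocean water.

vent_um = 300                 # micromolar, upper value quoted for vent plumes
background_nm = (0.2, 0.3)    # nanomolar range quoted for deep ocean water

for bg in background_nm:
    factor = (vent_um * 1e-6) / (bg * 1e-9)
    print(f"CH4 enrichment over {bg} nM background: ~{factor:.1e}-fold")
# -> roughly 1e6-1.5e6, consistent with the lower end of the quoted range;
#    vents with higher methane concentrations give correspondingly larger factors.
```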
Energy generation via methane oxidation yields the next best source of energy after sulfur oxidation. [ 13 ] It has been suggested that microbial oxidation facilitates rapid turnover at hydrothermal vents, so much of the methane is oxidized within a short distance of the vent. [ 37 ] In hydrothermal vent communities, aerobic oxidation of methane is commonly found in endosymbiotic microbes of vent animals. [ 40 ] Anaerobic oxidation of methane (AOM) is typically coupled to the reduction of sulfate or of Fe and Mn as terminal electron acceptors, as these are the most plentiful at hydrothermal vents. [ 37 ] [ 41 ] AOM is found to be prevalent in marine sediments at hydrothermal vents [ 42 ] [ 41 ] and may be responsible for consuming 75% of the methane produced by the vent. [ 41 ] Species that perform AOM include Archaea of the phylum Thermoproteota (formerly Crenarchaeota ) and Thermococcus . [ 43 ]
Production of methane through methanogenesis can proceed from the degradation of hydrocarbons , or from the reaction of carbon dioxide or other compounds such as formate . [ 40 ] Evidence of methanogenesis can be found alongside AOM in sediments. [ 42 ] Thermophilic methanogens are found to grow in hydrothermal vent plumes at temperatures between 55 °C and 80 °C. [ 44 ] However, autotrophic methanogenesis performed by many thermophilic species requires H 2 as an electron donor, so microbial growth is limited by H 2 availability. [ 44 ] [ 39 ] Genera of thermophilic methanogens found at hydrothermal vents include Methanocaldococcus , Methanothermococcus , and Methanococcus . [ 44 ]
Microbial communities at hydrothermal vents convert sulfur compounds such as H 2 S produced by geological activity into other forms such as sulfite , sulfate , and elemental sulfur for energy or for assimilation into organic molecules . [ 36 ] Sulfide is plentiful at hydrothermal vents, with concentrations from one to tens of mM, whereas the surrounding ocean water usually contains only a few nanomolar. [ 45 ]
Reduced sulfur compounds such as H 2 S produced by hydrothermal vents are a major source of energy for sulfur metabolism in microbes. [ 13 ] Oxidation of reduced sulfur compounds into forms such as sulfite , thiosulfate , and elemental sulfur is used to produce energy for microbial metabolism, such as the synthesis of organic compounds from inorganic carbon . [ 36 ] The major metabolic pathways used for sulfur oxidation include the SOX pathway and dissimilatory oxidation. The Sox pathway is a multi-enzyme pathway capable of oxidizing sulfide, sulfite, elemental sulfur, and thiosulfate to sulfate. [ 36 ] Dissimilatory oxidation converts sulfite to elemental sulfur. [ 35 ] Sulfur-oxidizing species include the genera Thiomicrospira , Halothiobacillus , Beggiatoa , Persephonella , and Sulfurimonas . Symbiotic species of the class Gammaproteobacteria and the phylum Campylobacterota can also oxidize sulfur. [ 36 ]
Sulfur reduction uses sulfate as an electron acceptor for the assimilation of sulfur . Microbes that perform sulfate reduction typically use hydrogen , methane or organic matter as an electron donor . [ 41 ] [ 19 ] Anaerobic oxidation of methane (AOM) often uses sulfate as the electron acceptor. [ 41 ] This method is favoured by organisms living in highly anoxic areas of the hydrothermal vent, [ 19 ] and is thus one of the predominant processes that occur within the sediments. [ 39 ] Species that reduce sulfate have been identified in Archaea and among members of Deltaproteobacteria such as Desulfovibrio , Desulfobulbus , Desulfobacteria , and Desulfuromonas at hydrothermal vents. [ 19 ]
Deep ocean water contains the largest reservoir of nitrogen available to hydrothermal vents, with around 0.59 mM of dissolved nitrogen gas. [ 46 ] [ 47 ] Ammonium is the dominant species of dissolved inorganic nitrogen, and can be produced by water mass mixing below hydrothermal vents and discharged in vent fluids. [ 47 ] Quantities of available ammonium vary between vents depending on the geological activity and microbial composition. [ 47 ] Nitrate and nitrite concentrations are depleted in hydrothermal vents compared to the surrounding seawater. [ 46 ]
The study of the Nitrogen cycle in hydrothermal vent microbial communities still requires more comprehensive research. [ 46 ] However, isotope data suggest that microorganisms influence dissolved inorganic nitrogen quantities and compositions, and all pathways of the nitrogen cycle are likely to be found at hydrothermal vents. [ 47 ] Biological nitrogen fixation is important in providing some of the biologically available nitrogen to the nitrogen cycle, especially at unsedimented hydrothermal vents. [ 46 ] Nitrogen is fixed by many different microbes, including methanogens in the orders Methanomicrobiales , Methanococcales , and Methanobacteriales . [ 46 ] Thermophilic microbes have been found to be able to fix nitrogen at temperatures as high as 92 °C. [ 46 ] Nitrogen fixation may be especially prevalent in microbial mats and particulate material where levels of biologically available nitrogen are low, due to the high microbial density and the anaerobic environment, which allows the function of nitrogenase , a nitrogen-fixing enzyme. [ 46 ] Evidence has also been detected of assimilation , nitrification , denitrification , anammox , mineralization and dissimilatory nitrate reduction to ammonium . [ 47 ] For example, sulfur-oxidizing bacteria such as Beggiatoa species perform denitrification and reduce nitrate to oxidize H 2 S. [ 46 ] Nitrate assimilation is performed by symbionts of the tubeworm Riftia pachyptila . [ 46 ] | https://en.wikipedia.org/wiki/Hydrothermal_vent_microbial_communities
Hydrotreated vegetable oil ( HVO ) is a biofuel made by the hydrocracking or hydrogenation of vegetable oil . Hydrocracking breaks big molecules into smaller ones using hydrogen while hydrogenation adds hydrogen to molecules. These methods can be used to create substitutes for gasoline , diesel , propane , kerosene and other chemical feedstock . Diesel fuel produced from these sources is known as green diesel or renewable diesel .
Diesel fuel created by hydrotreating is distinct from the biodiesel made through esterification .
The majority of plant and animal oils are triglycerides , suitable for refining. Refinery feedstock includes canola , carinata ( Brassica carinata ), algae , jatropha , salicornia , palm oil , tallow and soybeans . One type of algae, Botryococcus braunii produces a different type of oil, known as a triterpene , which is transformed into alkanes by a different process. [ citation needed ]
The production of hydrotreated vegetable oils is based on introducing hydrogen molecules into the raw fat or oil molecule. This process is associated with the reduction of the carbon compound. When hydrogen is used to react with triglycerides, different types of reactions can occur, giving different combinations of products. [ 1 ] The second step of the process involves converting the triglycerides/fatty acids to hydrocarbons by hydrodeoxygenation (removing oxygen as water) and/or decarboxylation (removing oxygen as carbon dioxide).
A formulaic example of this is C 3 H 5 (RCOO) 3 + 12 H 2 ⟶ C 3 H 8 + 3 RCH 3 + 6 H 2 O
The chemical formula for HVO Diesel is C n H 2n+2
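As a rough illustration of the stoichiometry above, the sketch below applies the hydrodeoxygenation equation to tristearin, a fully saturated triglyceride chosen here only as an assumed model feedstock; real oils are mixtures, and unsaturated chains consume additional hydrogen to saturate their double bonds.

```python
# Minimal sketch of the HVO hydrodeoxygenation stoichiometry, using
# tristearin (C57H110O6, R = C17H35) as an assumed model feedstock:
#   C3H5(RCOO)3 + 12 H2 -> C3H8 + 3 RCH3 + 6 H2O
# Unsaturated oils would consume extra H2 to saturate C=C bonds.

ATOMIC_MASS = {"C": 12.011, "H": 1.008, "O": 15.999}

def molar_mass(formula: dict) -> float:
    """Molar mass in g/mol from an element-count dictionary."""
    return sum(ATOMIC_MASS[el] * n for el, n in formula.items())

tristearin = {"C": 57, "H": 110, "O": 6}      # C3H5(C17H35COO)3
propane    = {"C": 3,  "H": 8}
octadecane = {"C": 18, "H": 38}               # RCH3 with R = C17H35
water      = {"H": 2,  "O": 1}
hydrogen   = {"H": 2}

m_feed = molar_mass(tristearin)
m_in   = m_feed + 12 * molar_mass(hydrogen)
m_out  = molar_mass(propane) + 3 * molar_mass(octadecane) + 6 * molar_mass(water)
assert abs(m_in - m_out) < 1e-6               # the equation is mass-balanced

h2_per_kg_feed = 12 * molar_mass(hydrogen) / m_feed * 1000   # g H2 per kg feed
alkane_yield   = 3 * molar_mass(octadecane) / m_feed         # kg alkane per kg feed

print(f"Triglyceride molar mass: {m_feed:.1f} g/mol")
print(f"Hydrogen demand: {h2_per_kg_feed:.0f} g H2 per kg of feed (~2.7 wt%)")
print(f"Diesel-range alkane yield: {alkane_yield:.2f} kg per kg of feed")
```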
Hydrotreated oils are characterized by very good low temperature properties. The cloud point can occur below −40 °C. Therefore, these fuels are suitable for the preparation of premium fuel with a high cetane number and excellent low temperature properties. The cold filter plugging point (CFPP) virtually corresponds to the cloud point value, which is why the value of the cloud point is significant in the case of hydrotreated oils. [ 1 ]
Both HVO diesel (green diesel) and biodiesel are made from the same vegetable oil feedstock. However the processing technologies and chemical makeup of the two fuels differ. The chemical reaction commonly used to produce biodiesel is known as transesterification . [ 2 ]
The production of biodiesel also makes glycerol, but the production of HVO does not.
Neste has published the differences between biodiesel and renewable diesel (HVO) which are summarized in the table below. [ 3 ]
Hydrotreating of renewable feedstocks to produce hydrocarbon fuels is carried out at various stages across the energy industry. Some commercial examples of vegetable oil refining are:
Neste is the largest manufacturer, producing ca. 3.3 million tonnes annually (2023). [ 9 ] Neste completed its first NExBTL plant in the summer of 2007 and the second one in 2009. Petrobras planned to use 256 megalitres (1,610,000 bbl) of vegetable oils in the production of H-Bio fuel in 2007. ConocoPhillips is processing 42,000 US gallons per day (1,000 bbl/d) of vegetable oil. Other companies working on the commercialization and industrialization of renewable hydrocarbons and biofuels include Neste, REG Synthetic Fuels, LLC, ENI, UPM Biofuels, and Diamond Green Diesel, with partners across the globe. Manufacturers of these renewable diesels report greenhouse gas emissions reductions of 60-95% compared to fossil diesel, [ 10 ] [ 11 ] [ 12 ] as well as better cold-flow properties that allow the fuel to work in colder climates. [ 10 ] In addition, all of these green diesels can be introduced into any diesel engine or infrastructure without many mechanical modifications [ 13 ] at any ratio with petroleum-based diesels. [ 10 ]
Renewable diesel from vegetable oil is a growing substitute for petroleum. [ 14 ] California fleets used over 200 million US gallons (760,000 m 3 ) of renewable diesel in 2017. The California Air Resources Board predicts that over 2 billion US gallons (7,600,000 m 3 ) of fuel will be consumed in the state under its Low Carbon Fuel Standard requirements in the next ten years. Fleets operating on Renewable Diesel from various refiners and feedstocks are reported to see lower emissions, reduced maintenance costs, and nearly identical experience when driving with this fuel. [ 15 ]
A number of issues have been raised about the sustainability of HVO, primarily concerning the sourcing of its lipid feedstocks. Waste oils such as used cooking oil are a limited resource and their use cannot be scaled up beyond a certain point. Further demand for HVO would have to be met with crop-based virgin vegetable oils, but the diversion of vegetable oils from the food market into the biofuels sector has been linked to increased global food prices, and to global agricultural expansion and intensification. This is associated with a variety of ecological and environmental implications ; moreover, greenhouse gas emissions from land use change may in some circumstances negate or exceed any benefit from the displacement of fossil fuels. [ 16 ]
A 2022 study published by the International Council on Clean Transportation found that the anticipated scale-up of renewable diesel capacity in the U.S. would quickly exhaust the available supply of waste and residual oils, and increasingly rely on domestic and imported soy oil. [ 17 ] The report also noted that increased U.S. renewable diesel production risked indirectly driving the expansion of palm oil cultivation in Southeast Asia, where the palm oil industry is still endemically associated with deforestation and peat destruction.
Refinery hydrotreaters are used for processing HVO. Introducing even minor amounts of biomaterial into a diesel hydrotreater has implications and potential risk factors. [ 18 ] The main issues are corrosion, high hydrogen consumption, and catalyst deactivation. [ 19 ]
According to Haldor Topsoe's experience with their licensed units, HVO production poses certain challenges for hydrotreaters including:
Corrosion - There are several corrosion mechanisms associated with hydrotreating vegetable oils and animal fats. Most feedstocks are acidic, though this is tempered by the fatty acids being bound into tri- and di-glycerides. However, difficult feedstocks like Distillers Corn Oil can contain 10-15% free fatty acids . [ 20 ] These acids can attack non-stainless steels in the preheat train, fired heater, piping, valves and reactors. In addition, chlorides that contaminate feeds can be converted to hydrogen chloride in the reactor, which can then cause accelerated corrosion in the effluent lines and sour water systems. The presence of chlorides in a wet environment is also problematic for the common stainless steel grades 304 and 316 due to the potential for chloride-induced stress corrosion cracking. [ 21 ] In addition, the carbon dioxide formed by decarboxylation reactions during hydrotreating can form carbonic acid when contacted with water. [ 19 ]
Hydrogen Consumption - removing oxygen, cracking long-chain molecules, and saturating olefinic bonds will chemically consume two to four times the hydrogen of a conventional ULSD hydrotreater. ULSD hydrotreating chemical hydrogen consumption is typically 300-600 scf/bbl of feed depending on the aromatic saturation required for cycle oils and other cracked feedstocks. [ 22 ] Chemical consumption for HVO approaches 2,500 scf/bbl depending on the level of saturation of the feedstock and the length of the carbon chains. [ 19 ] Delivering hydrogen for consumption, in addition to quench and additional excess circulating hydrogen, can pose significant challenges to unit revamp design and operation, with hydraulics, distribution, and compressor power being critical. [ 23 ]
Fouling - alkali metals and especially phosphorus must be kept low in HVO feedstocks in order to minimize pressure drop from fouling and general catalyst deactivation. Phosphatidic glassing is an aggressive catalyst poisoning mechanism that will not only plug a reactor's pore spaces, causing rapid pressure drop, but will also interfere with the catalyst's acid sites by coating the outside of the catalyst particles and causing them to adhere to one another. [ 19 ]
HVO processing is a young technology relative to most other refining processes. The first commercial scale HVO unit in the United States started up in Louisiana in 2010 with a capacity of 100 million US gallons (380,000 m 3 ) per year. [ 24 ]
A newbuild plant was constructed in 2010 in Geismar, LA by the Syntroleum Corporation and its joint-venture partner Tyson Foods. [ 25 ] The plant initiated startup in the 3rd quarter with a target of 75 million US gallons (280,000 m 3 ) per year. [ 26 ] Feedstock for the plant was vegetable oil and pretreated rendered poultry fat. The site achieved 87% of its design capacity in 2011. [ 27 ] Corrosion, including chloride-linked stress corrosion cracking, shut the plant down in 2012 for more than a year. [ 28 ] Tyson sold its 50% ownership to Renewable Energy Group (now Chevron); the purchase of Syntroleum's stock by the same buyer was announced in 2013, with closing in 2014. [ 29 ] In 2015, two fires caused major damage to the plant. [ 30 ]
Source: [ 31 ]
Source: [ 32 ] | https://en.wikipedia.org/wiki/Hydrotreated_vegetable_oil |
Hydrous oxides are inorganic compounds of a metal, hydroxide , and weakly bound water. Some examples include:
Some of them, such as hydrous ferric oxide (HFO) and hydrous aluminium oxide (HAO), are precipitated in highly porous, poorly crystalline or amorphous forms and are therefore good adsorbents, used for example in water treatment . [ 5 ]
Some others are gels.
Hydrous oxide films may be used in various applications such as electrocatalysis, supercapacitors, and sensors. [ 6 ] [ 7 ]
HFO and HAO may also result from the oxidative weathering of rocks, producing iron and aluminum hydrous oxides in clay soils. [ 8 ]
| https://en.wikipedia.org/wiki/Hydrous_oxide
Hydrothermal liquefaction (HTL) is a thermal depolymerization process used to convert wet biomass , and other macromolecules, into crude-like oil under moderate temperature and high pressure. [ 1 ] The crude-like oil has a high energy density, with a lower heating value of 33.8-36.9 MJ/kg and an oxygen content of 5-20 wt%, and contains renewable chemicals. [ 2 ] [ 3 ] The process has also been called hydrous pyrolysis .
The reaction usually involves homogeneous and/or heterogeneous catalysts to improve the quality of products and yields. [ 1 ] Carbon and hydrogen of an organic material, such as biomass, peat or low-ranked coals ( lignite ), are thermo-chemically converted into hydrophobic compounds with low viscosity and high solubility. Depending on the processing conditions, the fuel can be used as produced for heavy engines, including marine and rail, or upgraded to transportation fuels, [ 4 ] such as diesel, gasoline or jet-fuels.
The process may be significant in the creation of fossil fuels . [ 5 ] Simple heating without water, anhydrous pyrolysis has long been considered to take place naturally during the catagenesis of kerogens to fossil fuels . In recent decades it has been found that water under pressure causes more efficient breakdown of kerogens at lower temperatures than without it. The carbon isotope ratio of natural gas also suggests that hydrogen from water has been added during creation of the gas.
As early as the 1920s, the concept of using hot water and alkali catalysts to produce oil out of biomass was proposed. [ 6 ] In 1939, U.S. patent 2,177,557, [ 7 ] described a two-stage process in which a mixture of water, wood chips , and calcium hydroxide is heated in the first stage at temperatures in a range of 220 to 360 °C (428 to 680 °F), with the pressure "higher than that of saturated steam at the temperature used." This produces "oils and alcohols" which are collected. The materials are then subjected in a second stage to what is called "dry distillation", which produces "oils and ketones". Temperatures and pressures for this second stage are not disclosed.
These processes were the foundation of later HTL technologies that attracted research interest especially during the 1970s oil embargo. It was around that time that a high-pressure (hydrothermal) liquefaction process was developed at the Pittsburgh Energy Research Center (PERC) and later demonstrated (at the 100 kg/h scale) at the Albany Biomass Liquefaction Experimental Facility at Albany, Oregon, US. [ 2 ] [ 8 ] In 1982, Shell Oil developed the HTU™ process in the Netherlands. [ 8 ] Other organizations that have previously demonstrated HTL of biomass include Hochschule für Angewandte Wissenschaften Hamburg, Germany, SCF Technologies in Copenhagen, Denmark, EPA’s Water Engineering Research Laboratory, Cincinnati, Ohio, USA, and Changing World Technology Inc. (CWT), Philadelphia, Pennsylvania, USA. [ 8 ] Today, technology companies such as Licella/Ignite Energy Resources (Australia), Arbios Biotech , a Licella/Canfor joint venture, Altaca Energy (Turkey), Circlia Nordic (Denmark), Steeper Energy (Denmark, Canada) continue to explore the commercialization of HTL. [ 9 ] Construction has begun in Teesside, UK , for a catalytic hydrothermal liquefaction plant that aims to process 80,000 tonnes per year of mixed plastic waste by 2022. [ 10 ]
In hydrothermal liquefaction processes, long carbon chain molecules in biomass are thermally cracked and oxygen is removed in the form of H 2 O (dehydration) and CO 2 (decarboxylation). These reactions result in the production of high H/C ratio bio-oil. Simplified descriptions of dehydration and decarboxylation reactions can be found in the literature (e.g. Asghari and Yoshida (2006) [ 11 ] and Snåre et al. (2007). [ 12 ]
Most applications of hydrothermal liquefaction operate at temperatures between 250-550 °C and high pressures of 5-25 MPa as well as catalysts for 20–60 minutes, [ 2 ] [ 3 ] although higher or lower temperatures can be used to optimize gas or liquid yields, respectively. [ 8 ] At these temperatures and pressures, the water present in the biomass becomes either subcritical or supercritical, depending on the conditions, and acts as a solvent, reactant, and catalyst to facilitate the reaction of biomass to bio-oil.
The exact conversion of biomass to bio-oil is dependent on several variables: [ 1 ]
Theoretically, any biomass can be converted into bio-oil using hydrothermal liquefaction regardless of water content, and various biomasses have been tested, from forestry and agricultural residues, [ 13 ] sewage sludges, and food processing wastes to emerging non-food biomass such as algae. [ 1 ] [ 6 ] [ 8 ] [ 14 ] The composition of cellulose , hemicellulose , protein, and lignin in the feedstock influences the yield and quality of the oil from the process.
Zhang et al., [ 15 ] at the University of Illinois, report on a hydrous pyrolysis process in which swine manure is converted to oil by heating the swine manure and water in the presence of carbon monoxide in a closed container. For that process they report that a temperature of at least 275 °C (527 °F) is required to convert the swine manure to oil, and that temperatures above about 335 °C (635 °F) reduce the amount of oil produced. The Zhang et al. process produces pressures of about 7 to 18 MPa (1000 to 2600 psi - 69 to 178 atm ), with higher temperatures producing higher pressures. Zhang et al. used a retention time of 120 minutes for the reported study, but report that at higher temperatures a time of less than 30 minutes results in significant production of oil.
Barbero-López et al. [ 16 ] tested, at the University of Eastern Finland, the use of spent mushroom substrate and tomato plant residues as feedstock for hydrothermal liquefaction. They focused on the hydrothermal liquids produced, which are rich in many different constituents, and found that they are potential antifungals against several fungi causing decay in wood, but their ecotoxicity was lower than that of the commercial Cu-based wood preservative. The effectiveness of the antifungal activity of the hydrothermal liquids varied mostly due to liquid concentration and strain sensitivity, while the different feedstocks did not have such a significant effect.
A commercialized process [ 17 ] using hydrous pyrolysis (see the article Thermal depolymerization ) was used by Changing World Technologies, Inc. (CWT) and its subsidiary Renewable Environmental Solutions, LLC (RES) to convert turkey offal. [ 18 ] It was a two-stage process: the first stage converted the turkey offal to hydrocarbons at a temperature of 200 to 300 °C (392 to 572 °F), and the second stage cracked the oil into light hydrocarbons at a temperature near 500 °C (932 °F). Adams et al. report only that the first stage heating is "under pressure"; Lemley, [ 19 ] in a non-technical article on the CWT process, reports that for the first stage (for conversion) a temperature of about 260 °C (500 °F) and a pressure of about 600 psi were used, with a time for the conversion of "usually about 15 minutes". For the second stage (cracking), Lemley reports a temperature of about 480 °C (896 °F).
Temperature plays a major role in the conversion of biomass to bio-oil. The temperature of the reaction determines the depolymerization of the biomass to bio-oil, as well as the repolymerization into char . [ 1 ] While the ideal reaction temperature is dependent on the feedstock used, temperatures above ideal lead to an increase in char formation and eventually increased gas formation, while lower than ideal temperatures reduce depolymerization and overall product yields.
Similarly to temperature, the rate of heating plays a critical role in the production of the different phase streams, due to the prevalence of secondary reactions at non-optimum heating rates. [ 1 ] Secondary reactions become dominant in heating rates that are too low, leading to the formation of char. While high heating rates are required to form liquid bio-oil, there is a threshold heating rate and temperature where liquid production is inhibited and gas production is favored in secondary reactions.
Pressure (along with temperature) determines the super- or subcritical state of solvents as well as overall reaction kinetics and the energy inputs required to yield the desirable HTL products (oil, gas, chemicals, char etc.). [ 1 ]
Hydrothermal liquefaction is a fast process, resulting in low residence times for depolymerization to occur. Typical residence times are measured in minutes (15 to 60 minutes); however, the residence time is highly dependent on the reaction conditions, including feedstock, solvent ratio and temperature. As such, optimization of the residence time is necessary to ensure a complete depolymerization without allowing further reactions to occur. [ 1 ]
While water acts as a catalyst in the reaction, other catalysts can be added to the reaction vessel to optimize the conversion. [ 20 ] Previously used catalysts include water-soluble inorganic compounds and salts, including KOH and Na 2 CO 3 , as well as transition metal catalysts using nickel , palladium , platinum and ruthenium supported on either carbon , silica or alumina . The addition of these catalysts can lead to an oil yield increase of 20% or greater, due to the catalysts converting the protein, cellulose, and hemicellulose into oil. This ability for catalysts to convert biomaterials other than fats and oils to bio-oil allows for a wider range of feedstock to be used. [ citation needed ]
Biofuels that are produced through hydrothermal liquefaction are carbon neutral , meaning that there are no net carbon emissions produced when burning the biofuel. The plant materials used to produce bio-oils use photosynthesis to grow, and as such consume carbon dioxide from the atmosphere. The burning of the biofuels produced releases carbon dioxide into the atmosphere, but this is nearly completely offset by the carbon dioxide consumed in growing the plants, resulting in a release of only 15-18 g of CO 2 per kWh of energy produced. This is substantially lower than the release rates of fossil fuel technologies, which are 955 g/kWh for coal, 813 g/kWh for oil, and 446 g/kWh for natural gas. [ 1 ] Recently, Steeper Energy announced that the carbon intensity (CI) of its Hydrofaction™ oil is 15 g CO 2 eq/MJ according to the GHGenius model (version 4.03a), while diesel fuel is 93.55 g CO 2 eq/MJ. [ 21 ]
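The relative savings implied by these figures follow from simple arithmetic; the sketch below uses only the numbers quoted in this paragraph.

```python
# Quick arithmetic on the emission figures quoted above (g CO2 per kWh
# and g CO2-eq per MJ); no other data are assumed.

biofuel_g_per_kwh = (15, 18)          # HTL biofuel, lifecycle range
fossil_g_per_kwh = {"coal": 955, "oil": 813, "natural gas": 446}

for fuel, rate in fossil_g_per_kwh.items():
    worst_case = 100 * (1 - biofuel_g_per_kwh[1] / rate)
    print(f"HTL biofuel vs {fuel}: at least {worst_case:.0f}% lower emissions per kWh")

# Steeper Energy Hydrofaction oil vs fossil diesel (g CO2-eq per MJ)
hydrofaction, diesel = 15.0, 93.55
print(f"Hydrofaction vs diesel: {100 * (1 - hydrofaction / diesel):.0f}% lower carbon intensity")
```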
Hydrothermal liquefaction is a clean process that does not produce harmful compounds, such as ammonia , NO x , or SO x . [ 1 ] Instead, the heteroatoms , including nitrogen, sulfur, and chlorine, are converted into harmless byproducts such as N 2 and inorganic acids that can be neutralized with bases.
The HTL process differs from pyrolysis as it can process wet biomass and produce a bio-oil that contains approximately twice the energy density of pyrolysis oil. Pyrolysis is a related process to HTL, but biomass must be processed and dried in order to increase the yield. [ 22 ] The presence of water in pyrolysis drastically increases the heat of vaporization of the organic material, increasing the energy required to decompose the biomass. Typical pyrolysis processes require a water content of less than 40% to suitably convert the biomass to bio-oil. This requires considerable pretreatment of wet biomass such as tropical grasses, which contain a water content as high as 80-85%, and even further treatment for aquatic species, which can contain higher than 90% water content. [ 1 ]
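A back-of-the-envelope calculation shows why this matters. The latent heat of vaporization of water (about 2.26 MJ/kg) and a typical dry-biomass heating value of roughly 18 MJ/kg are assumed here purely for illustration.

```python
# Rough energy penalty of vaporizing the water in wet biomass before
# (dry) pyrolysis. Assumed values: latent heat of water ~2.26 MJ/kg,
# dry-biomass lower heating value ~18 MJ/kg (illustrative only).

LATENT_HEAT_WATER = 2.26   # MJ per kg of water evaporated
BIOMASS_LHV = 18.0         # MJ per kg of dry biomass (assumed)

def vaporization_penalty(moisture_fraction: float) -> float:
    """MJ spent evaporating water per kg of dry biomass."""
    water_per_dry_kg = moisture_fraction / (1.0 - moisture_fraction)
    return water_per_dry_kg * LATENT_HEAT_WATER

for moisture in (0.40, 0.80, 0.90):
    penalty = vaporization_penalty(moisture)
    share = 100 * penalty / BIOMASS_LHV
    print(f"{moisture:.0%} moisture: {penalty:.1f} MJ per kg dry biomass "
          f"(~{share:.0f}% of its heating value)")
```

At 80-90% moisture the evaporation penalty approaches or exceeds the heating value of the dry biomass itself, which is why such feedstocks are far better suited to HTL than to dry pyrolysis.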
The HTL oil can contain up to 80% of the feedstock carbon content (single pass). [ 23 ] HTL oil has good potential to yield bio-oil with "drop-in" properties that can be directly distributed in existing petroleum infrastructure. [ 23 ] [ 24 ]
The energy returned on energy invested (EROEI) of these processes is uncertain and/or has not been measured. Furthermore, products of hydrous pyrolysis might not meet current fuel standards. Further processing may be required to produce fuels. [ 25 ] | https://en.wikipedia.org/wiki/Hydrous_pyrolysis |
In organic chemistry , hydroxamic acids are a class of organic compounds having the general formula R−C(=O)−N(−OH)−R' and bearing the functional group −C(=O)−N(−OH)− , where R and R' are typically organyl groups (e.g., alkyl or aryl ) or hydrogen . They are amides ( R−C(=O)−NH−R' ) wherein the nitrogen atom has a hydroxyl ( −OH ) substituent . They are often used as metal chelators . [ 1 ]
A common example of a hydroxamic acid is aceto- N -methylhydroxamic acid ( H 3 C−C(=O)−N(−OH)−CH 3 ). Some uncommon examples of hydroxamic acids are formo- N -chlorohydroxamic acid ( H−C(=O)−N(−OH)−Cl ) and chloroformo- N -methylhydroxamic acid ( Cl−C(=O)−N(−OH)−CH 3 ).
Hydroxamic acids are usually prepared from either esters or acid chlorides by a reaction with hydroxylamine salts. For the synthesis of benzohydroxamic acid ( C 6 H 5 −C(=O)−NH−OH or Ph−C(=O)−NH−OH , where Ph is the phenyl group ), the overall equation from the methyl ester is: [ 2 ] C 6 H 5 −CO 2 CH 3 + NH 2 OH ⟶ C 6 H 5 −C(=O)−NH−OH + CH 3 OH
Hydroxamic acids can also be synthesized from aldehydes and N -sulfonylhydroxylamine via the Angeli-Rimini reaction . [ 3 ] Alternatively, molybdenum oxide diperoxide oxidizes trimethylsilated amides to hydroxamic acids, although yields are only about 50%. [ 4 ] In a variation on the Nef reaction , primary nitro compounds kept in an acidic solution (to minimize the nitronate tautomer ) hydrolyze to a hydroxamic acid. [ 5 ]
A well-known reaction of hydroxamic acid esters is the Lossen rearrangement . [ 6 ]
The conjugate base of a hydroxamic acid is called a hydroxamate . Deprotonation occurs at the −N(−OH)− group, with the hydrogen atom being removed, resulting in a hydroxamate anion R−C(=O)−N(−O − )−R' . The resulting conjugate base presents the metal with an anionic, conjugated O , O -chelating ligand . Many hydroxamic acids and many iron hydroxamates have been isolated from natural sources. [ 8 ]
They function as ligands , usually for iron. [ 9 ] Nature has evolved families of hydroxamic acids to function as iron-binding compounds ( siderophores ) in bacteria . They extract iron(III) from otherwise insoluble sources ( rust , minerals , etc.). The resulting complexes are transported into the cell, where the iron is extracted and utilized metabolically. [ 10 ]
Ligands derived from hydroxamic acid and thiohydroxamic acid (a hydroxamic acid where one or both oxygens in the −C(=O)−N(−OH)− functional group are replaced by sulfur ) also form strong complexes with lead (II). [ 11 ]
Hydroxamic acids are used extensively in flotation of rare earth minerals during the concentration and extraction of ores to be subjected to further processing. [ 12 ] [ 13 ]
Some hydroxamic acids (e.g. vorinostat , belinostat , panobinostat , and trichostatin A ) are HDAC inhibitors with anti-cancer properties. Fosmidomycin is a natural hydroxamic acid inhibitor of 1-deoxy- D -xylulose-5-phosphate reductoisomerase ( DXP reductoisomerase ). Hydroxamic acids have also been investigated for reprocessing of irradiated fuel. [ citation needed ] | https://en.wikipedia.org/wiki/Hydroxamic_acid |
Hydroxide is a diatomic anion with chemical formula OH − . It consists of an oxygen and hydrogen atom held together by a single covalent bond , and carries a negative electric charge . It is an important but usually minor constituent of water . It functions as a base , a ligand , a nucleophile , and a catalyst . The hydroxide ion forms salts , some of which dissociate in aqueous solution, liberating solvated hydroxide ions. Sodium hydroxide is a multi-million-ton per annum commodity chemical .
The corresponding electrically neutral compound HO • is the hydroxyl radical . The corresponding covalently bound group −OH of atoms is the hydroxy group .
Both the hydroxide ion and hydroxy group are nucleophiles and can act as catalysts in organic chemistry .
Many inorganic substances which bear the word hydroxide in their names are not ionic compounds of the hydroxide ion, but covalent compounds which contain hydroxy groups .
The hydroxide ion is naturally produced from water by the self-ionization reaction: [ 2 ] H 2 O + H 2 O ⇌ H 3 O + + OH −
The equilibrium constant for this reaction, defined as K w = [H + ][OH − ],
has a value close to 10 −14 at 25 °C, so the concentration of hydroxide ions in pure water is close to 10 −7 mol∙dm −3 , to satisfy the equal charge constraint. The pH of a solution is equal to the decimal cologarithm of the hydrogen cation concentration; [ note 2 ] the pH of pure water is close to 7 at ambient temperatures. The concentration of hydroxide ions can be expressed in terms of pOH , which is close to (14 − pH), [ note 3 ] so the pOH of pure water is also close to 7. Addition of a base to water will reduce the hydrogen cation concentration and therefore increase the hydroxide ion concentration (decrease pH, increase pOH) even if the base does not itself contain hydroxide. For example, ammonia solutions have a pH greater than 7 due to the reaction NH 3 + H + ⇌ NH + 4 , which decreases the hydrogen cation concentration, which increases the hydroxide ion concentration. pOH can be kept at a nearly constant value with various buffer solutions .
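The relationships among K w , pH, and pOH described above reduce to a few lines of arithmetic; the hydroxide concentration in the second example below is an arbitrary illustrative value.

```python
import math

# Self-ionization of water at 25 °C: Kw = [H+][OH-] ≈ 1e-14, so pH + pOH ≈ 14.
# The second hydroxide concentration below is an arbitrary illustrative value.

KW = 1.0e-14

def ph_from_hydroxide(oh_conc: float) -> tuple:
    """Return (pH, pOH) for a given hydroxide concentration in mol/dm3."""
    h_conc = KW / oh_conc          # [H+] follows from the equilibrium constant
    return -math.log10(h_conc), -math.log10(oh_conc)

# Pure water: [OH-] = [H+] = 1e-7 mol/dm3
print(ph_from_hydroxide(1.0e-7))   # -> (7.0, 7.0)

# A dilute base that raises [OH-] to 1e-3 mol/dm3 (illustrative)
print(ph_from_hydroxide(1.0e-3))   # -> (11.0, 3.0)
```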
In an aqueous solution [ 4 ] the hydroxide ion is a base in the Brønsted–Lowry sense as it can accept a proton [ note 4 ] from a Brønsted–Lowry acid to form a water molecule. It can also act as a Lewis base by donating a pair of electrons to a Lewis acid. In aqueous solution both hydrogen ions and hydroxide ions are strongly solvated, with hydrogen bonds between oxygen and hydrogen atoms. Indeed, the bihydroxide ion H 3 O − 2 has been characterized in the solid state. This compound is centrosymmetric and has a very short hydrogen bond (114.5 pm ) that is similar to the length in the bifluoride ion HF − 2 (114 pm). [ 3 ] In aqueous solution the hydroxide ion forms strong hydrogen bonds with water molecules. A consequence of this is that concentrated solutions of sodium hydroxide have high viscosity due to the formation of an extended network of hydrogen bonds as in hydrogen fluoride solutions.
In solution, exposed to air, the hydroxide ion reacts rapidly with atmospheric carbon dioxide , which acts as a Lewis acid, to form, initially, the bicarbonate ion: OH − + CO 2 → HCO − 3
The equilibrium constant for this reaction can be specified either as a reaction with dissolved carbon dioxide or as a reaction with carbon dioxide gas (see Carbonic acid for values and details). At neutral or acid pH, the reaction is slow, but is catalyzed by the enzyme carbonic anhydrase , which effectively creates hydroxide ions at the active site.
Solutions containing the hydroxide ion attack glass . In this case, the silicates in glass are acting as acids. Basic hydroxides, whether solids or in solution, are stored in airtight plastic containers.
The hydroxide ion can function as a typical electron-pair donor ligand , forming such complexes as tetrahydroxoaluminate/tetrahydroxido aluminate [Al(OH) 4 ] − . It is also often found in mixed-ligand complexes of the type [ML x (OH) y ] z + , where L is a ligand. The hydroxide ion often serves as a bridging ligand , donating one pair of electrons to each of the atoms being bridged. As illustrated by [Pb 2 (OH)] 3+ , metal hydroxides are often written in a simplified format. It can even act as a 3-electron-pair donor, as in the tetramer [PtMe 3 (OH)] 4 . [ 5 ]
When bound to a strongly electron-withdrawing metal centre, hydroxide ligands tend to ionise into oxide ligands. For example, the bichromate ion [HCrO 4 ] − dissociates according to [HCrO 4 ] − ⇌ [CrO 4 ] 2− + H +
with a p K a of about 5.9. [ 6 ]
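Given this p K a , the degree of dissociation of the bichromate ion at any pH follows from the Henderson–Hasselbalch relation; the pH values in the sketch below are arbitrary illustrations, and dimerization to dichromate at higher concentrations is ignored.

```python
# Fraction of Cr(VI) present as chromate (CrO4 2-) rather than bichromate
# (HCrO4 -), from the Henderson-Hasselbalch relation with pKa ≈ 5.9 (quoted
# above). Example pH values are arbitrary; dichromate formation is ignored.

PKA = 5.9

def chromate_fraction(ph: float) -> float:
    ratio = 10 ** (ph - PKA)        # [CrO4 2-] / [HCrO4 -]
    return ratio / (1 + ratio)

for ph in (4.0, 5.9, 7.0, 9.0):
    print(f"pH {ph}: {100 * chromate_fraction(ph):.1f}% chromate")
```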
The infrared spectra of compounds containing the OH functional group have strong absorption bands in the region centered around 3500 cm −1 . [ 7 ] The high frequency of molecular vibration is a consequence of the small mass of the hydrogen atom as compared to the mass of the oxygen atom, and this makes detection of hydroxyl groups by infrared spectroscopy relatively easy. A band due to an OH group tends to be sharp. However, the band width increases when the OH group is involved in hydrogen bonding. A water molecule has an HOH bending mode at about 1600 cm −1 , so the absence of this band can be used to distinguish an OH group from a water molecule.
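The connection noted above between the small hydrogen mass and the high O−H stretching frequency can be illustrated with the harmonic-oscillator relation ν̃ = (1/2πc)·√(k/μ). The force constant of about 780 N/m used below is an assumed, typical order-of-magnitude value for an O−H bond, not a figure taken from this article.

```python
import math

# Harmonic-oscillator estimate of the O-H stretching wavenumber.
# Assumed force constant k ≈ 780 N/m (typical order of magnitude for O-H).
# The small mass of H gives a small reduced mass and hence a band in the
# 3500-3700 cm^-1 region; deuteration lowers the frequency markedly.

C_CM = 2.998e10          # speed of light, cm/s
AMU = 1.6605e-27         # kg per atomic mass unit
K_OH = 780.0             # N/m, assumed

def stretch_wavenumber(m1_amu: float, m2_amu: float, k: float = K_OH) -> float:
    mu = (m1_amu * m2_amu) / (m1_amu + m2_amu) * AMU   # reduced mass, kg
    return math.sqrt(k / mu) / (2 * math.pi * C_CM)    # wavenumber, cm^-1

print(f"O-H stretch: ~{stretch_wavenumber(15.995, 1.008):.0f} cm^-1")
print(f"O-D stretch: ~{stretch_wavenumber(15.995, 2.014):.0f} cm^-1")
```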
When the OH group is bound to a metal ion in a coordination complex , an M−OH bending mode can be observed. For example, in [Sn(OH) 6 ] 2− it occurs at 1065 cm −1 . The bending mode for a bridging hydroxide tends to be at a lower frequency as in [( bipyridine )Cu(OH) 2 Cu( bipyridine )] 2+ (955 cm −1 ). [ 8 ] M−OH stretching vibrations occur below about 600 cm −1 . For example, the tetrahedral ion [Zn(OH) 4 ] 2− has bands at 470 cm −1 ( Raman -active, polarized) and 420 cm −1 (infrared). The same ion has a (HO)–Zn–(OH) bending vibration at 300 cm −1 . [ 9 ]
Sodium hydroxide solutions, also known as lye and caustic soda, are used in the manufacture of pulp and paper , textiles , drinking water , soaps and detergents , and as a drain cleaner . Worldwide production in 2004 was approximately 60 million tonnes . [ 10 ] The principal method of manufacture is the chloralkali process .
Solutions containing the hydroxide ion are generated when a salt of a weak acid is dissolved in water. Sodium carbonate is used as an alkali, for example, by virtue of the hydrolysis reaction CO 2− 3 + H 2 O ⇌ HCO − 3 + OH −
An example of the use of sodium carbonate as an alkali is when washing soda (another name for sodium carbonate) acts on insoluble esters , such as triglycerides , commonly known as fats, to hydrolyze them and make them soluble.
Bauxite , a basic hydroxide of aluminium , is the principal ore from which the metal is manufactured. [ 11 ] Similarly, goethite (α-FeO(OH)) and lepidocrocite (γ-FeO(OH)), basic hydroxides of iron , are among the principal ores used for the manufacture of metallic iron. [ 12 ]
Aside from NaOH and KOH, which enjoy very large scale applications, the hydroxides of the other alkali metals also are useful. Lithium hydroxide (LiOH) is used in breathing gas purification systems for spacecraft , submarines , and rebreathers to remove carbon dioxide from exhaled gas. [ 13 ]
The hydroxide of lithium is preferred to that of sodium because of its lower mass. Sodium hydroxide , potassium hydroxide , and the hydroxides of the other alkali metals are also strong bases . [ 14 ]
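The preference for lithium hydroxide on a mass basis can be quantified from the absorption reaction 2 MOH + CO 2 → M 2 CO 3 + H 2 O; the short sketch below compares the theoretical CO 2 capacity per gram of LiOH and NaOH.

```python
# Theoretical CO2 uptake per gram of absorbent, from 2 MOH + CO2 -> M2CO3 + H2O.
# Illustrates why the lighter LiOH is preferred for breathing gas scrubbers.

MOLAR_MASS = {"LiOH": 23.95, "NaOH": 40.00, "CO2": 44.01}  # g/mol

for hydroxide in ("LiOH", "NaOH"):
    capacity = MOLAR_MASS["CO2"] / (2 * MOLAR_MASS[hydroxide])  # g CO2 per g MOH
    print(f"{hydroxide}: {capacity:.2f} g CO2 absorbed per gram")
# -> LiOH: 0.92 g/g, NaOH: 0.55 g/g
```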
Beryllium hydroxide Be(OH) 2 is amphoteric . [ 15 ] The hydroxide itself is insoluble in water, with a solubility product log K * sp of −11.7. Addition of acid gives soluble hydrolysis products, including the trimeric ion [Be 3 (OH) 3 (H 2 O) 6 ] 3+ , which has OH groups bridging between pairs of beryllium ions making a 6-membered ring. [ 16 ] At very low pH the aqua ion [Be(H 2 O) 4 ] 2+ is formed. Addition of hydroxide to Be(OH) 2 gives the soluble tetrahydroxoberyllate or tetrahydroxido beryllate anion, [Be(OH) 4 ] 2− .
The solubility in water of the other hydroxides in this group increases with increasing atomic number . [ 17 ] Magnesium hydroxide Mg(OH) 2 is a strong base (up to the limit of its solubility, which is very low in pure water), as are the hydroxides of the heavier alkaline earths: calcium hydroxide , strontium hydroxide , and barium hydroxide . A solution or suspension of calcium hydroxide is known as limewater and can be used to test for the weak acid carbon dioxide. The reaction Ca(OH) 2 + CO 2 ⇌ Ca 2+ + HCO − 3 + OH − illustrates the basicity of calcium hydroxide. Soda lime , which is a mixture of the strong bases NaOH and KOH with Ca(OH) 2 , is used as a CO 2 absorbent.
The simplest hydroxide of boron B(OH) 3 , known as boric acid , is an acid. Unlike the hydroxides of the alkali and alkaline earth metals, it does not dissociate in aqueous solution. Instead, it reacts with water molecules acting as a Lewis acid, releasing protons: B(OH) 3 + 2 H 2 O ⇌ B(OH) − 4 + H 3 O +
A variety of oxyanions of boron are known, which, in the protonated form, contain hydroxide groups. [ 18 ]
Aluminium hydroxide Al(OH) 3 is amphoteric and dissolves in alkaline solution: [ 15 ] Al(OH) 3 + OH − ⇌ Al(OH) − 4
In the Bayer process [ 19 ] for the production of pure aluminium oxide from bauxite minerals this equilibrium is manipulated by careful control of temperature and alkali concentration. In the first phase, aluminium dissolves in hot alkaline solution as Al(OH) − 4 , but other hydroxides usually present in the mineral, such as iron hydroxides, do not dissolve because they are not amphoteric. After removal of the insolubles, the so-called red mud , pure aluminium hydroxide is made to precipitate by reducing the temperature and adding water to the extract, which, by diluting the alkali, lowers the pH of the solution. Basic aluminium hydroxide AlO(OH), which may be present in bauxite, is also amphoteric.
In mildly acidic solutions, the hydroxo/hydroxido complexes formed by aluminium are somewhat different from those of boron, reflecting the greater size of Al(III) vs. B(III). The concentration of the species [Al 13 (OH) 32 ] 7+ is very dependent on the total aluminium concentration. Various other hydroxo complexes are found in crystalline compounds. Perhaps the most important is the basic hydroxide AlO(OH), a polymeric material known by the names of the mineral forms boehmite or diaspore , depending on crystal structure. Gallium hydroxide , [ 15 ] indium hydroxide , and thallium(III) hydroxide are also amphoteric. Thallium(I) hydroxide is a strong base. [ 20 ]
Carbon forms no simple hydroxides. The hypothetical compound C(OH) 4 ( orthocarbonic acid or methanetetrol) is unstable in aqueous solution, losing water: [ 21 ] C(OH) 4 → OC(OH) 2 + H 2 O
Carbon dioxide is also known as carbonic anhydride, meaning that it forms by dehydration of carbonic acid H 2 CO 3 (OC(OH) 2 ). [ 22 ]
Silicic acid is the name given to a variety of compounds with a generic formula [SiO x (OH) 4−2 x ] n . [ 23 ] [ 24 ] Orthosilicic acid has been identified in very dilute aqueous solution. It is a weak acid with p K a1 = 9.84, p K a2 = 13.2 at 25 °C. It is usually written as H 4 SiO 4 , but the formula Si(OH) 4 is generally accepted. [ 6 ] [ dubious – discuss ] Other silicic acids such as metasilicic acid (H 2 SiO 3 ), disilicic acid (H 2 Si 2 O 5 ), and pyrosilicic acid (H 6 Si 2 O 7 ) have been characterized. These acids also have hydroxide groups attached to the silicon; the formulas suggest that these acids are protonated forms of poly oxyanions .
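The quoted dissociation constants can be used to estimate how much orthosilicic acid is deprotonated at a given pH; the pH values in the sketch below, including a seawater-like pH of 8.1, are illustrative assumptions.

```python
# Speciation of orthosilicic acid, Si(OH)4, from the constants quoted above:
# pKa1 = 9.84 and pKa2 = 13.2 at 25 °C. The example pH values are
# illustrative assumptions (8.1 is roughly that of seawater).

PKA1, PKA2 = 9.84, 13.2

def silicic_acid_fractions(ph: float):
    ka1, ka2 = 10 ** -PKA1, 10 ** -PKA2
    h = 10 ** -ph
    denom = h * h + h * ka1 + ka1 * ka2
    return (h * h / denom,          # Si(OH)4
            h * ka1 / denom,        # SiO(OH)3 -
            ka1 * ka2 / denom)      # SiO2(OH)2 2-

for ph in (8.1, 9.84, 12.0):
    neutral, mono, di = silicic_acid_fractions(ph)
    print(f"pH {ph}: {100*neutral:.1f}% Si(OH)4, "
          f"{100*mono:.1f}% SiO(OH)3-, {100*di:.2f}% SiO2(OH)2 2-")
```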
Few hydroxo complexes of germanium have been characterized. Tin(II) hydroxide Sn(OH) 2 was prepared in anhydrous media. When tin(II) oxide is treated with alkali the pyramidal hydroxo complex Sn(OH) − 3 is formed. When solutions containing this ion are acidified, the ion [Sn 3 (OH) 4 ] 2+ is formed together with some basic hydroxo complexes. The structure of [Sn 3 (OH) 4 ] 2+ has a triangle of tin atoms connected by bridging hydroxide groups. [ 25 ] Tin(IV) hydroxide is unknown but can be regarded as the hypothetical acid from which stannates , with a formula [Sn(OH) 6 ] 2− , are derived by reaction with the (Lewis) basic hydroxide ion. [ 26 ]
Hydrolysis of Pb 2+ in aqueous solution is accompanied by the formation of various hydroxo-containing complexes, some of which are insoluble. The basic hydroxo complex [Pb 6 O(OH) 6 ] 4+ is a cluster of six lead centres with metal–metal bonds surrounding a central oxide ion. The six hydroxide groups lie on the faces of the two external Pb 4 tetrahedra. In strongly alkaline solutions soluble plumbate ions are formed, including [Pb(OH) 6 ] 2− . [ 27 ]
In the higher oxidation states of the pnictogens , chalcogens , halogens , and noble gases there are oxoacids in which the central atom is attached to oxide ions and hydroxide ions. Examples include phosphoric acid H 3 PO 4 , and sulfuric acid H 2 SO 4 . In these compounds one or more hydroxide groups can dissociate with the liberation of hydrogen cations as in a standard Brønsted–Lowry acid. Many oxoacids of sulfur are known and all feature OH groups that can dissociate. [ 28 ]
Telluric acid is often written with the formula H 2 TeO 4 ·2H 2 O but is better described structurally as Te(OH) 6 . [ 29 ]
Orthoperiodic acid [ note 6 ] can lose all its protons, eventually forming the periodate ion [IO 4 ] − . It can also be protonated in strongly acidic conditions to give the octahedral ion [I(OH) 6 ] + , completing the isoelectronic series, [E(OH) 6 ] z , E = Sn, Sb, Te, I; z = −2, −1, 0, +1. Other acids of iodine(VII) that contain hydroxide groups are known, in particular in salts such as the mesoperiodate ion that occurs in K 4 [I 2 O 8 (OH) 2 ]·8H 2 O. [ 30 ]
As is common outside of the alkali metals, hydroxides of the elements in lower oxidation states are complicated. For example, phosphorous acid H 3 PO 3 predominantly has the structure OP(H)(OH) 2 , in equilibrium with a small amount of P(OH) 3 . [ 31 ] [ 32 ]
The oxoacids of chlorine , bromine , and iodine have the formula O (n−1)/2 A(OH), where n is the oxidation number : +1, +3, +5, or +7, and A = Cl, Br, or I. The only oxoacid of fluorine is F(OH), hypofluorous acid . When these acids are neutralized the hydrogen atom is removed from the hydroxide group. [ 33 ]
The hydroxides of the transition metals and post-transition metals usually have the metal in the +2 (M = Mn, Fe, Co, Ni, Cu, Zn) or +3 (M = Fe, Ru, Rh, Ir) oxidation state. None are soluble in water, and many are poorly defined. One complicating feature of the hydroxides is their tendency to undergo further condensation to the oxides, a process called olation . Hydroxides of metals in the +1 oxidation state are also poorly defined or unstable. For example, silver hydroxide Ag(OH) decomposes spontaneously to the oxide (Ag 2 O). Copper(I) and gold(I) hydroxides are also unstable, although stable adducts of CuOH and AuOH are known. [ 34 ] The polymeric compounds M(OH) 2 and M(OH) 3 are in general prepared by increasing the pH of an aqueous solution of the corresponding metal cation until the hydroxide precipitates out of solution. Conversely, the hydroxides dissolve in acidic solution. Zinc hydroxide Zn(OH) 2 is amphoteric, forming the tetrahydroxido zincate ion Zn(OH) 2− 4 in strongly alkaline solution. [ 15 ]
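As a rough illustration of the pH-controlled precipitation described above, the pH at which a hydroxide M(OH) n begins to precipitate can be estimated from its solubility product. The short Python sketch below does this calculation; the K sp values in it are illustrative order-of-magnitude figures assumed for the example, not data from this article.

```python
import math

def precipitation_pH(ksp, metal_conc, n):
    """Estimate the pH at which M(OH)n starts to precipitate from solution.

    ksp        -- solubility product of M(OH)n (assumed value)
    metal_conc -- molar concentration of the metal cation
    n          -- number of hydroxide ions per formula unit
    """
    # Precipitation begins once [M][OH]^n exceeds Ksp.
    oh = (ksp / metal_conc) ** (1.0 / n)
    poh = -math.log10(oh)
    return 14.0 - poh  # assumes Kw = 1e-14 (25 degrees C)

# Illustrative, order-of-magnitude Ksp values (assumptions, not article data):
print(round(precipitation_pH(ksp=2.8e-39, metal_conc=0.01, n=3), 1))  # Fe(OH)3: ~pH 1.8
print(round(precipitation_pH(ksp=3.0e-17, metal_conc=0.01, n=2), 1))  # Zn(OH)2: ~pH 6.7
```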
Numerous mixed ligand complexes of these metals with the hydroxide ion exist. In fact, these are in general better defined than the simpler derivatives. Many can be made by deprotonation of the corresponding metal aquo complex .
Vanadic acid H 3 VO 4 shows similarities with phosphoric acid H 3 PO 4 though it has a much more complex vanadate oxoanion chemistry. Chromic acid H 2 CrO 4 , has similarities with sulfuric acid H 2 SO 4 ; for example, both form acid salts A + [HMO 4 ] − . Some metals, e.g. V, Cr, Nb, Ta, Mo, W, tend to exist in high oxidation states. Rather than forming hydroxides in aqueous solution, they convert to oxo clusters by the process of olation , forming polyoxometalates . [ 35 ]
In some cases, the products of partial hydrolysis of metal ion, described above, can be found in crystalline compounds. A striking example is found with zirconium (IV). Because of the high oxidation state, salts of Zr 4+ are extensively hydrolyzed in water even at low pH. The compound originally formulated as ZrOCl 2 ·8H 2 O was found to be the chloride salt of a tetrameric cation [Zr 4 (OH) 8 (H 2 O) 16 ] 8+ in which there is a square of Zr 4+ ions with two hydroxide groups bridging between Zr atoms on each side of the square and with four water molecules attached to each Zr atom. [ 36 ]
The mineral malachite is a typical example of a basic carbonate. The formula, Cu 2 CO 3 (OH) 2 shows that it is halfway between copper carbonate and copper hydroxide . Indeed, in the past the formula was written as CuCO 3 ·Cu(OH) 2 . The crystal structure is made up of copper, carbonate and hydroxide ions. [ 36 ] The mineral atacamite is an example of a basic chloride. It has the formula Cu 2 Cl(OH) 3 . In this case the composition is nearer to that of the hydroxide than that of the chloride: CuCl 2 ·3Cu(OH) 2 . [ 37 ] Copper forms hydroxyphosphate ( libethenite ), arsenate ( olivenite ), sulfate ( brochantite ), and nitrate compounds. White lead is a basic lead carbonate, (PbCO 3 ) 2 ·Pb(OH) 2 , which has been used as a white pigment because of its opaque quality, though its use is now restricted because it can be a source for lead poisoning . [ 36 ]
The hydroxide ion appears to rotate freely in crystals of the heavier alkali metal hydroxides at higher temperatures so as to present itself as a spherical ion, with an effective ionic radius of about 153 pm. [ 38 ] Thus, the high-temperature forms of KOH and NaOH have the sodium chloride structure, [ 39 ] which gradually freezes in a monoclinically distorted sodium chloride structure at temperatures below about 300 °C. The OH groups still rotate even at room temperature around their symmetry axes and, therefore, cannot be detected by X-ray diffraction . [ 40 ] The room-temperature form of NaOH has the thallium iodide structure. LiOH, however, has a layered structure, made up of tetrahedral Li(OH) 4 and (OH)Li 4 units. [ 38 ] This is consistent with the weakly basic character of LiOH in solution, indicating that the Li–OH bond has much covalent character.
The hydroxide ion displays cylindrical symmetry in hydroxides of divalent metals Ca, Cd, Mn, Fe, and Co. For example, magnesium hydroxide Mg(OH) 2 ( brucite ) crystallizes with the cadmium iodide layer structure, with a kind of close-packing of magnesium and hydroxide ions. [ 38 ] [ 41 ]
The amphoteric hydroxide Al(OH) 3 has four major crystalline forms: gibbsite (most stable), bayerite , nordstrandite , and doyleite . [ note 7 ] All these polymorphs are built up of double layers of hydroxide ions—the aluminium atoms on two-thirds of the octahedral holes between the two layers—and differ only in the stacking sequence of the layers. [ 42 ] The structures are similar to the brucite structure. However, whereas the brucite structure can be described as a close-packed structure, in gibbsite the OH groups on the underside of one layer rest on the groups of the layer below. This arrangement led to the suggestion that there are directional bonds between OH groups in adjacent layers. [ 43 ] This is an unusual form of hydrogen bonding since the two hydroxide ions involved would be expected to point away from each other. The hydrogen atoms have been located by neutron diffraction experiments on α-AlO(OH) ( diaspore ). The O–H–O distance is very short, at 265 pm; the hydrogen is not equidistant between the oxygen atoms and the short OH bond makes an angle of 12° with the O–O line. [ 44 ] A similar type of hydrogen bond has been proposed for other amphoteric hydroxides, including Be(OH) 2 , Zn(OH) 2 , and Fe(OH) 3 . [ 38 ]
A number of mixed hydroxides are known with stoichiometry A 3 M III (OH) 6 , A 2 M IV (OH) 6 , and AM V (OH) 6 . As the formula suggests these substances contain M(OH) 6 octahedral structural units. [ 45 ] Layered double hydroxides may be represented by the formula [M z + 1− x M 3+ x (OH) 2 ] q + (X n − ) q ⁄ n · y H 2 O . Most commonly, z = 2, and M 2+ = Ca 2+ , Mg 2+ , Mn 2+ , Fe 2+ , Co 2+ , Ni 2+ , Cu 2+ , or Zn 2+ ; hence q = x .
Potassium hydroxide and sodium hydroxide are two well-known reagents in organic chemistry .
The hydroxide ion may act as a base catalyst . [ 46 ] The base abstracts a proton from a weak acid to give an intermediate that goes on to react with another reagent. Common substrates for proton abstraction are alcohols , phenols , amines , and carbon acids . The p K a value for dissociation of a C–H bond is extremely high, but the p K a values of the alpha hydrogens of a carbonyl compound are about 3 log units lower. Typical p K a values are 16.7 for acetaldehyde and 19 for acetone . [ 47 ] Dissociation can occur in the presence of a suitable base.
The conjugate acid of the base should have a p K a no more than about 4 log units below that of the substrate, or the equilibrium will lie almost completely to the left.
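To make the "about 4 log units" rule of thumb concrete, the position of the proton-transfer equilibrium HA + B − ⇌ A − + HB follows from the difference between the two p K a values. The sketch below uses the acetone value quoted above together with an assumed p K a of 15 for the conjugate acid of the base; it is an illustration, not a statement about any specific reagent.

```python
def proton_transfer_K(pka_substrate, pka_conjugate_acid_of_base):
    """Equilibrium constant for  HA + B-  <=>  A- + HB.

    K = Ka(HA) / Ka(HB) = 10 ** (pKa(HB) - pKa(HA))
    """
    return 10.0 ** (pka_conjugate_acid_of_base - pka_substrate)

# Acetone (pKa ~19, quoted above) with a base whose conjugate acid has an
# assumed pKa of 15, i.e. 4 log units below the substrate:
K = proton_transfer_K(19.0, 15.0)
print(f"K = {K:.1e}")  # 1.0e-04, so the equilibrium lies far to the left
# With equal starting concentrations, the deprotonated fraction is roughly sqrt(K):
print(f"fraction deprotonated ~ {K ** 0.5:.1%}")
```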
The hydroxide ion by itself is not a strong enough base, but it can be converted to one by adding sodium hydroxide to ethanol
to produce the ethoxide ion. The pK a for self-dissociation of ethanol is about 16, so the alkoxide ion is a strong enough base. [ 48 ] The addition of an alcohol to an aldehyde to form a hemiacetal is an example of a reaction that can be catalyzed by the presence of hydroxide. Hydroxide can also act as a Lewis-base catalyst. [ 49 ]
The hydroxide ion is intermediate in nucleophilicity between the fluoride ion F − , and the amide ion NH − 2 . [ 50 ] Ester hydrolysis under alkaline conditions (also known as base hydrolysis )
is an example of a hydroxide ion serving as a nucleophile. [ 51 ]
Early methods for manufacturing soap treated triglycerides from animal fat (the ester) with lye .
Other cases where hydroxide can act as a nucleophilic reagent are amide hydrolysis, the Cannizzaro reaction , nucleophilic aliphatic substitution , nucleophilic aromatic substitution , and in elimination reactions . The reaction medium for KOH and NaOH is usually water but with a phase-transfer catalyst the hydroxide anion can be shuttled into an organic solvent as well, for example in the generation of the reactive intermediate dichlorocarbene . | https://en.wikipedia.org/wiki/Hydroxide |
In chemistry , a hydroxy or hydroxyl group is a functional group with the chemical formula −OH and composed of one oxygen atom covalently bonded to one hydrogen atom. In organic chemistry , alcohols and carboxylic acids contain one or more hydroxy groups. Both the negatively charged anion HO − , called hydroxide , and the neutral radical HO· , known as the hydroxyl radical , consist of an unbonded hydroxy group.
According to IUPAC definitions, the term hydroxyl refers to the hydroxyl radical ( ·OH ) only, while the functional group −OH is called a hydroxy group . [ 1 ]
Water, alcohols, carboxylic acids , and many other hydroxy-containing compounds can be readily deprotonated due to a large difference between the electronegativity of oxygen (3.5) and that of hydrogen (2.1). Hydroxy-containing compounds engage in intermolecular hydrogen bonding , which increases the electrostatic attraction between molecules and thus leads to higher boiling and melting points than are found for compounds that lack this functional group . Organic compounds, which are often poorly soluble in water, become water-soluble when they contain two or more hydroxyl groups, as illustrated by sugars and amino acids . [ 2 ] [ 3 ]
The hydroxy group is pervasive in chemistry and biochemistry. Many inorganic compounds contain hydroxyl groups, including sulfuric acid , the chemical compound produced on the largest scale industrially. [ 4 ]
Hydroxy groups participate in the dehydration reactions that link simple biological molecules into long chains. The joining of a fatty acid to glycerol to form a triacylglycerol removes the −OH from the carboxy end of the fatty acid. The joining of two aldehyde sugars to form a disaccharide removes the −OH from the carboxy group at the aldehyde end of one sugar . The creation of a peptide bond to link two amino acids to make a protein removes the −OH from the carboxy group of one amino acid. [ citation needed ]
Hydroxyl radicals are highly reactive and undergo chemical reactions that make them short-lived. When biological systems are exposed to hydroxyl radicals, they can cause damage to cells, including those in humans, where they can react with DNA , lipids , and proteins . [ 5 ]
The Earth's night sky is illuminated by diffuse light, called airglow , that is produced by radiative transitions of atoms and molecules. [ 6 ] Among the most intense such features observed in the Earth's night sky is a group of infrared transitions at wavelengths between 700 nanometers and 900 nanometers. In 1950, Aden Meinel showed that these were transitions of the hydroxyl molecule, OH. [ 7 ]
In 2009, India's Chandrayaan-1 satellite and the National Aeronautics and Space Administration (NASA) Cassini spacecraft and Deep Impact probe each detected evidence of water by evidence of hydroxyl fragments on the Moon . As reported by Richard Kerr, "A spectrometer [the Moon Mineralogy Mapper, also known as "M3"] detected an infrared absorption at a wavelength of 3.0 micrometers that only water or hydroxyl—a hydrogen and an oxygen bound together—could have created." [ 8 ] NASA also reported in 2009 that the LCROSS probe revealed an ultraviolet emission spectrum consistent with hydroxyl presence. [ 9 ]
On 26 October 2020, NASA reported definitive evidence of water on the sunlit surface of the Moon, in the vicinity of the crater Clavius , obtained by the Stratospheric Observatory for Infrared Astronomy (SOFIA) . [ 10 ] The SOFIA Faint Object infrared Camera for the SOFIA Telescope (FORCAST) detected emission bands at a wavelength of 6.1 micrometers that are present in water but not in hydroxyl. The abundance of water on the Moon's surface was inferred to be equivalent to the contents of a 12-ounce bottle of water per cubic meter of lunar soil. [ 11 ]
The Chang'e 5 probe, which landed on the Moon on 1 December 2020, carried a mineralogical spectrometer that could measure infrared reflectance spectra of lunar rock and regolith. The reflectance spectrum of a rock sample at a wavelength of 2.85 micrometers indicated localized water/hydroxyl concentrations as high as 180 parts per million. [ 12 ]
The Venus Express orbiter collected Venus science data from April 2006 until December 2014. In 2008, Piccioni, et al. reported measurements of night-side airglow emission in the atmosphere of Venus made with the Visible and Infrared Thermal Imaging Spectrometer (VIRTIS) on Venus Express. They attributed emission bands in wavelength ranges of 1.40–1.49 micrometers and 2.6–3.14 micrometers to vibrational transitions of OH. [ 13 ] This was the first evidence for OH in the atmosphere of any planet other than Earth. [ 13 ]
In 2013, OH near-infrared spectra were observed in the night glow in the polar winter atmosphere of Mars by use of the Compact Reconnaissance Imaging Spectrometer for Mars (CRISM). [ 14 ]
In 2021, evidence for OH in the dayside atmosphere of the exoplanet WASP-33b was found in its emission spectrum at wavelengths between 1 and 2 micrometers. [ 15 ] Evidence for OH in the atmosphere of exoplanet WASP-76b was subsequently found. [ 16 ] Both WASP-33b and WASP-76b are ultra-hot Jupiters and it is likely that any water in their atmospheres is present as dissociated ions. | https://en.wikipedia.org/wiki/Hydroxy_group |
Hydroxyacetylaminofluorene is a derivative of 2-acetylaminofluorene used as a biochemical tool in the study of carcinogenesis . | https://en.wikipedia.org/wiki/Hydroxyacetylaminofluorene
Hydroxyarchaeol is a core lipid unique to archaea , similar to archaeol , with a hydroxide functional group at the carbon-3 position of one of its ether side chains. [ 1 ] It is found exclusively in certain taxa of methanogenic archaea , [ 2 ] and is a common biomarker for methanogenesis and methane-oxidation. Isotopic analysis of hydroxyarchaeol can be informative about the environment and substrates for methanogenesis. [ 3 ]
Hydroxyarchaeol was first identified by Dennis G. Sprott and colleagues in 1990 from Methanosaeta concilii by a combination of TLC , NMR and mass spectrometric analysis. [ 1 ]
The lipid consists of a glycerol backbone with two C 20 phytanyl ether chains attached, one of which has a hydroxyl (-OH) group attached at the C3 carbon. It is one of the major core lipids of methanogenic archaea alongside archaeol, forming the basis of their cell membrane . The two major forms are sn-2- and sn-3-hydroxyarchaeol, depending on if the hydroxyl group is on the sn-2 or sn-3 phytanyl chain of the glycerol backbone. [ 4 ]
Use of hydroxyarchaeol as a biomarker was a primary way to identify methanogens in the environment, though it has become supplementary to metagenomic and 16S rRNA techniques for identifying phylogeny. [ 2 ] [ 4 ] [ 5 ] [ 3 ] While hydroxyarchaeol has only been identified in methanogenic archaea, not all methanogens count it among their core lipids. [ 2 ] [ 4 ] Other methanogens may contain different derivatives of archaeol, including cyclic archaeol and caldarchaeol based on taxonomic differences. [ 2 ] Hydroxyarchaeol has been identified in many different taxa, including within the orders Methanococcales , Methanosarcinales , which contains the genus Methanosaeta , and a genus from the order Methanobacteriales . [ 2 ] There is evidence that there is a taxonomic preference for the sn-2 vs sn-3 form based on phylogeny, as a mix of the two forms do not tend to appear in the same organism, but the reason for this difference is not well understood. [ 1 ] Because of the hydroxyl group, which is prone to degradation over time, hydroxyarchaeol has not been observed in ancient samples, and thus is thought to indicate modern sources of methanogens . [ 6 ]
Original measurements of hydroxyarchaeol were done using TLC and NMR, but have become dominated by gas-chromatograph/mass spectrometry. For most methods, extraction of the core lipid is typically done using variations of a Bligh-Dyer method, [ 7 ] which makes use of the various polarities and miscibility of dichloromethane ( DCM ), methanol, and water. Acidic conditions using trichloroacetic acid (TCA) during extraction and additional cleanup of samples with polar solvents such as DCM is often needed to better isolate the lipids of interest. [ 1 ] [ 3 ] [ 5 ]
Prior to GC-MS analysis, the intact hydroxyarchaeol lipid is typically hydrolyzed to the core lipid component and derivatized by adding trimethyl silyl (TMS) groups to the free hydroxyl functional groups. [ 1 ] [ 5 ] [ 3 ] This allows for the lipid to volatilize in the GC and reach the MS analyzer. Because hydroxyarchaeol has multiple sites that can be modified after TMS derivatization, the observed mass spectra can be either the mono- or di-TMS derivative, and need to be compared to authentic standards to properly identify and quantify. [ 8 ] For identification and quantification, the mass spectrometer typically utilizes a quadrupole mass analyzer, but isotopic analysis uses an isotope-ratio mass spectrometer (IRMS) that has higher mass resolution and sensitivity. [ 5 ] [ 3 ]
The relative isotopic ratio of carbon ( δ 13 C ) found in hydroxyarchaeol is used to identify what the methane-associated organism is using as a carbon source. [ 3 ] Carbon sources in the environment will have a measurable δ 13 C signature that can be matched with the biomarkers found in an organism, which will gain the isotopic signature of its food source. Since archaea that make hydroxyarchaeol can harness a number of carbon sources, including dissolved inorganic carbon ( DIC ), methanol , trimethylamine , and methane , [ 2 ] [ 3 ] this is a useful way to determine which is the primary source of energy, or if there is a mixture of use in the environment.
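The source-apportionment idea described above can be reduced to a simple two-end-member isotope mass balance. The sketch below is a simplified illustration of that bookkeeping; the δ 13 C values in it are placeholders, not measurements from the cited study, and real analyses also account for biosynthetic fractionation.

```python
def source_fraction(delta_biomarker, delta_source_a, delta_source_b, epsilon=0.0):
    """Two-end-member mixing: fraction of biomarker carbon derived from source A.

    delta_* are d13C values in permil; epsilon is an optional, assumed
    biosynthetic fractionation applied equally to both sources.
    """
    a = delta_source_a + epsilon
    b = delta_source_b + epsilon
    return (delta_biomarker - b) / (a - b)

# Placeholder d13C values (permil) -- purely illustrative, not from the cited study:
f_methane = source_fraction(delta_biomarker=-95.0,
                            delta_source_a=-110.0,  # assumed value for methane
                            delta_source_b=-20.0)   # assumed value for DIC
print(f"~{f_methane:.0%} of lipid carbon from methane in this toy example")
```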
Hydroxyarchaeol has been found in peat bogs [ 6 ] and methane seeps in the deep ocean [ 3 ] [ 5 ] as a marker of both methanogens and methanotrophs. The deep sea sediment hydroxyarchaeol had very depleted δ 13 C at methane seeps. Both the methane and DIC present also had depleted δ 13 C values, but not as a perfect match to the identified biomarker. [ 3 ] By modeling the isotopic ratio of DIC and methane to the isotopic ratio of the biomarkers, the researchers could estimate the relative contribution to biosynthesis and metabolic pathways that each source had for the organism. The model could predict a relative contribution that matched well with actual measurements, indicating there was mixed metabolism occurring at these sites, with specific biosynthetic pathways using different proportions of carbon derived from each source. [ 3 ] This method made use of hydroxyarchaeol in the bulk sample to target the metabolism of a specific group of microbes without need for exhaustive separations of different organisms, making it useful for environmental analysis. | https://en.wikipedia.org/wiki/Hydroxyarchaeol |
Hydroxybenzotriazole (abbreviated HOBt) is an organic compound with the formula C 6 H 4 N 3 OH . It is a derivative of benzotriazole . It is a white crystalline powder, which as a commercial product contains some water (~11.7% wt as the HOBt monohydrate crystal). Anhydrous HOBt is explosive. [ citation needed ] It is mainly used to suppress the racemization and to improve the efficiency of peptide synthesis . [ 1 ]
Automated peptide synthesis involves the condensation of the amino group of protected amino acids with the activated ester. HOBt is used to produce such activated esters which react with amines at ambient temperature to give amides. [ 2 ]
HOBt is also used for the synthesis of amides from carboxylic acids aside from amino acids. These substrates may not be convertible to the acyl chlorides . [ 3 ]
Due to reclassification as UN0508 , a class 1.3C explosive, hydroxybenzotriazole and its monohydrate crystal are no longer allowed to be transported by sea or air as per 49CFR ( USDOT hazardous materials regulations). However, UNECE draft proposal ECE/TRANS/WP.15/AC.1/HAR/2009/1 has been circulated to UN delegates and, if implemented, would amend current regulations thus allowing for the monohydrate crystal to be shipped under the less-stringent code of UN3474 as a class 4.1 desensitized explosive. HOBt was demonstrated to not exhibit dermal corrosion or irritation but did exhibit eye irritation. [ 4 ] The sensitization potential of HOBt was shown to be low (non-sensitizing at 1% in LLNA testing according to OECD 429 [ 5 ] ). | https://en.wikipedia.org/wiki/Hydroxybenzotriazole |
Hydroxycorticosteroids ( OHCSs ) are corticosteroids that have an additional hydroxy (-OH) group.
There are two main positions where the hydroxy group may be added: at carbon atom 11, and at carbon atom 17.
11-hydroxycorticosteroids (11-OHCSs) include:
17-hydroxycorticosteroids (17-OHCSs) include:
This organic chemistry article is a stub . You can help Wikipedia by expanding it . | https://en.wikipedia.org/wiki/Hydroxycorticosteroids |
Hydroxyl ion absorption is the absorption in optical fibers of electromagnetic radiation , including the near- infrared , due to the presence of trapped hydroxyl ions remaining from water as a contaminant. [ 1 ]
The hydroxyl (OH − ) ion can penetrate glass during or after product fabrication, resulting in significant attenuation of discrete optical wavelengths, e.g. , centred at 1.383 μm, used for communications via optical fibres. [ 2 ]
This article incorporates public domain material from Federal Standard 1037C . General Services Administration . Archived from the original on 22 January 2022. | https://en.wikipedia.org/wiki/Hydroxyl_ion_absorption
In analytical chemistry , the hydroxyl value is defined as the number of milligrams of potassium hydroxide (KOH) required to neutralize the acetic acid taken up on acetylation of one gram of a chemical substance that contains free hydroxyl groups . The analytical method used to determine hydroxyl value traditionally involves acetylation of the free hydroxyl groups of the substance with acetic anhydride in pyridine solvent. After completion of the reaction, water is added, and the remaining unreacted acetic anhydride is converted to acetic acid and measured by titration with potassium hydroxide.
The hydroxyl value can be calculated using the following equation. Note that a chemical substance may also have a measurable acid value affecting the measured endpoint of the titration. The acid value ( AV ) of the substance, determined in a separate experiment, enters into this equation as a correction factor in the calculation of the hydroxyl value ( HV ):
Where HV is the hydroxyl value; V B is the amount (ml) potassium hydroxide solution required for the titration of the blank; V acet is the amount (ml) of potassium hydroxide solution required for the titration of the acetylated sample; W acet is the weight of the sample (in grams) used for acetylation; N is the normality of the titrant; 56.1 is the molecular weight of potassium hydroxide (g/mol); AV is a separately determined acid value of the chemical substance.
The content of free hydroxyl groups in a substance can also be determined by methods other than acetylation. [ 1 ] Determinations of hydroxyl content by other methods may instead be expressed as a weight percentage (wt. %) of hydroxyl groups in units of the mass of hydroxide functional groups in grams per 100 grams of substance. The conversion between hydroxyl value and other hydroxyl content measurements is obtained by multiplying the hydroxyl value by the factor 17/560. [ 2 ] The chemical substance may be a fat, oil, natural or synthetic ester , or other polyol. [ 3 ]
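The equation referred to above can be written out from the variable definitions as HV = 56.1 × N × (V B − V acet ) / W acet + AV. The following Python sketch implements that relationship together with the 17/560 conversion factor; the titration figures in it are made up purely for illustration.

```python
def hydroxyl_value(v_blank_ml, v_sample_ml, normality, sample_mass_g, acid_value=0.0):
    """Hydroxyl value: mg KOH equivalent to the hydroxyl content of 1 g of sample."""
    return (v_blank_ml - v_sample_ml) * normality * 56.1 / sample_mass_g + acid_value

def hydroxyl_weight_percent(hv):
    """Convert a hydroxyl value to wt.% OH using the 17/560 factor quoted above."""
    return hv * 17.0 / 560.0

# Made-up titration figures, for illustration only:
hv = hydroxyl_value(v_blank_ml=50.0, v_sample_ml=32.5, normality=0.5,
                    sample_mass_g=1.25, acid_value=1.2)
print(round(hv, 1), "mg KOH/g;", round(hydroxyl_weight_percent(hv), 2), "wt.% OH")
```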
ASTM D 1957 [ 4 ] and ASTM E222-10 [ 5 ] describe several versions of this method of determining hydroxyl value.
The value is important because it helps determine the Stoichiometry of a system for example in polyurethanes. [ 6 ] The value may also be used to calculate equivalent weight and if the functionality is known, the molecular weight also. [ 7 ] | https://en.wikipedia.org/wiki/Hydroxyl_number |
The hydroxyl radical , • HO , is the neutral form of the hydroxide ion (HO – ). Hydroxyl radicals are highly reactive and consequently short-lived; however, they form an important part of radical chemistry . Most notably hydroxyl radicals are produced from the decomposition of hydroperoxides (ROOH) or, in atmospheric chemistry , by the reaction of excited atomic oxygen with water. It is also an important radical formed in radiation chemistry, since it leads to the formation of hydrogen peroxide and oxygen , which can enhance corrosion and stress corrosion cracking in coolant systems subjected to radioactive environments. Hydroxyl radicals are also produced during UV-light dissociation of hydrogen peroxide (H 2 O 2 ) (suggested in 1879) and likely in Fenton chemistry , where trace amounts of reduced transition metals catalyze peroxide-mediated oxidations of organic compounds.
In organic synthesis hydroxyl radicals are most commonly generated by photolysis of 1-Hydroxy-2(1H)-pyridinethione .
The hydroxyl radical is often referred to as the "detergent" of the troposphere because it reacts with many pollutants, often acting as the first step to their removal. It also has an important role in eliminating some greenhouse gases like methane and ozone . [ 3 ] The rate of reaction with the hydroxyl radical often determines how long many pollutants last in the atmosphere, if they do not undergo photolysis or are rained out . For instance, methane, which reacts relatively slowly with hydroxyl radicals, has an average lifetime of >5 years and many CFCs have lifetimes of 50+ years. Pollutants, such as larger hydrocarbons , can have very short average lifetimes of less than a few hours.
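The link between the rate constant and atmospheric lifetime mentioned above is simply τ ≈ 1 / (k·[OH]) for a pseudo-first-order loss. The snippet below shows the arithmetic using an approximate rate constant for methane and a commonly assumed global-mean OH concentration; both numbers are illustrative assumptions, not values taken from this article.

```python
SECONDS_PER_YEAR = 3.15e7

def lifetime_years(k_cm3_per_s, oh_number_density_cm3):
    """Pseudo-first-order lifetime of a trace gas against reaction with OH."""
    return 1.0 / (k_cm3_per_s * oh_number_density_cm3) / SECONDS_PER_YEAR

# Assumed, approximate inputs: k(OH + CH4) ~ 6e-15 cm^3 s^-1 near room temperature,
# global-mean [OH] ~ 1e6 molecules cm^-3.
print(round(lifetime_years(6e-15, 1e6), 1), "years for methane")
# A reactive hydrocarbon with an assumed k ~ 3e-11 cm^3 s^-1 lasts only hours:
print(round(lifetime_years(3e-11, 1e6) * 365 * 24, 1), "hours")
```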
The first reaction with many volatile organic compounds (VOCs) is the removal of a hydrogen atom, forming water and an alkyl radical (R • ):
The alkyl radical will typically react rapidly with oxygen forming a peroxy radical:
The fate of this radical in the troposphere depends on factors such as the amount of sunlight, the level of pollution in the atmosphere, and the nature of the alkyl radical that formed it.
Hydroxyl radicals can occasionally be produced as a byproduct of immune action . Macrophages and microglia most frequently generate this compound when exposed to very specific pathogens , such as certain bacteria. The destructive action of hydroxyl radicals has been implicated in several neurological autoimmune diseases such as HIV-associated dementia , when immune cells become over-activated and toxic to neighboring healthy cells. [ 4 ]
The hydroxyl radical can damage virtually all types of macromolecules: carbohydrates, nucleic acids ( mutations ), lipids ( lipid peroxidation ) and amino acids (e.g. conversion of Phe to m- Tyrosine and o- Tyrosine ). The hydroxyl radical has a very short in vivo half-life of approximately 10 −9 seconds and a high reactivity. [ 5 ] This makes it a very dangerous compound to the organism. [ 6 ] [ 7 ]
Unlike superoxide , which can be detoxified by superoxide dismutase , the hydroxyl radical cannot be eliminated by an enzymatic reaction. Mechanisms for scavenging peroxyl radicals for the protection of cellular structures include endogenous antioxidants such as melatonin and glutathione , and dietary antioxidants such as mannitol and vitamin E . [ 6 ]
The hydroxyl radical • HO is one of the main chemical species controlling the oxidizing capacity of the Earth's atmosphere, having a major impact on the concentrations and distribution of greenhouse gases and pollutants. It is the most widespread oxidizer in the troposphere , the lowest part of the atmosphere. Understanding • HO variability is important to evaluating human impacts on the atmosphere and climate. The • HO species has a lifetime in the Earth atmosphere of less than one second. [ 8 ] Understanding the role of • HO in the oxidation of atmospheric methane (CH 4 ), first to carbon monoxide (CO) and then to carbon dioxide (CO 2 ), is important for assessing the residence time of this greenhouse gas, the overall carbon budget of the troposphere, and its influence on the process of global warming.
The lifetime of • HO radicals in the Earth atmosphere is very short; therefore, • HO concentrations in the air are very low, and very sensitive techniques are required for its direct detection. [ 9 ] Global average hydroxyl radical concentrations have been measured indirectly by analyzing methyl chloroform (CH 3 CCl 3 ) present in the air. The results obtained by Montzka et al. (2011) [ 10 ] show that the interannual variability in • HO estimated from CH 3 CCl 3 measurements is small, indicating that global • HO is generally well buffered against perturbations. This small variability is consistent with measurements of methane and other trace gases primarily oxidized by • HO, as well as global photochemical model calculations.
The first experimental evidence for the presence of 18 cm absorption lines of the hydroxyl ( • HO) radical in the radio absorption spectrum of Cassiopeia A was obtained by Weinreb et al. (Nature, Vol. 200, pp. 829, 1963) based on observations made during the period October 15–29, 1963. [ 11 ]
• HO is a diatomic molecule. The electronic angular momentum along the molecular axis is +1 or −1, and the electronic spin angular momentum S = 1/2. Because of spin-orbit coupling, the spin angular momentum can be oriented parallel or antiparallel to the orbital angular momentum, producing the splitting into Π 1/2 and Π 3/2 states. The 2 Π 3/2 ground state of • HO is split by the lambda doubling interaction (an interaction between the nuclear rotation and the unpaired electron motion around its orbit). Hyperfine interaction with the unpaired spin of the proton further splits the levels.
In order to study gas phase interstellar chemistry, it is convenient to distinguish two types of interstellar clouds: diffuse clouds, with T = 30–100 K and n = 10–1000 cm −3 , and dense clouds, with T = 10–30 K and densities of roughly 10 4 cm −3 or higher. Ion chemical routes in both dense and diffuse clouds have been established in several works (Hartquist 1990).
The • HO radical is linked with the production of H 2 O in molecular clouds. Studies of • HO distribution in Taurus Molecular Cloud-1 (TMC-1) [ 20 ] suggest that in dense gas, • HO is mainly formed by dissociative recombination of H 3 O + . Dissociative recombination is the reaction in which a molecular ion recombines with an electron and dissociates into neutral fragments. Important formation mechanisms for • HO are:
H 3 O + + e − → • HO + H 2 (1a) Dissociative recombination
H 3 O + + e − → • HO + • H + • H (1b) Dissociative recombination
HCO 2 + + e − → • HO + CO (2a) Dissociative recombination
• O + HCO → • HO + CO (3a) Neutral-neutral
H − + H 3 O + → • HO + H 2 + • H (4a) Ion-molecular ion neutralization
Experimental data on association reactions of • H and • HO suggest that radiative association involving atomic and diatomic neutral radicals may be considered as an effective mechanism for the production of small neutral molecules in the interstellar clouds. [ 21 ] The formation of O 2 occurs in the gas phase via the neutral exchange reaction between • O and • HO, which is also the main sink for • HO in dense regions. [ 20 ]
Atomic oxygen takes part both in the production and destruction of • HO, so the abundance of • HO depends mainly on the abundance of H 3 + . The important chemical pathways for the destruction of • HO radicals are:
• HO + • O → O 2 + • H (1A) Neutral-neutral
• HO + C + → CO + + • H (2A) Ion-neutral
• HO + • N → NO + • H (3A) Neutral-neutral
• HO + C → CO + • H (4A) Neutral-neutral
• HO + • H → H 2 O + photon (5A) Neutral-neutral
Rate constants can be derived from the published online reaction dataset. Rate constants have the form:
k(T) = α(T/300) β exp(−γ/T) cm 3 s −1
The following table has the rate constants calculated for a typical temperature in a dense cloud T=10 K.
Formation rates r ix can be obtained using the rate constants k(T) and the abundances of the reactant species C and D:
r ix =k(T) ix [C][D]
where [Y] represents the abundance of the species Y. In this approach, abundances were taken from The UMIST database for astrochemistry 2006 , and the values are relative to the H 2 density. The following table shows the ratio r ix /r 1a in order to give a view of the most important reactions.
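A minimal sketch of this bookkeeping (evaluating k(T) at T = 10 K and forming the ratios r ix /r 1a ) is given below. The α, β, γ parameters and the fractional abundances in it are placeholders chosen only to show the mechanics, not the actual database values.

```python
import math

def rate_constant(alpha, beta, gamma, T):
    """Modified-Arrhenius (Kooij) form: k(T) = alpha * (T/300)**beta * exp(-gamma/T)."""
    return alpha * (T / 300.0) ** beta * math.exp(-gamma / T)

def formation_rate(k, x_c, x_d):
    """r = k [C][D], with the abundances given relative to H2."""
    return k * x_c * x_d

T = 10.0  # typical dense-cloud temperature, K
# Placeholder parameters (alpha, beta, gamma) and abundances -- not the database values:
reactions = {
    "1a": (1.0e-7, -0.5, 0.0, 1.0e-8, 1.0e-7),
    "2a": (1.0e-7, -0.6, 0.0, 1.0e-9, 1.0e-7),
}
rates = {name: formation_rate(rate_constant(a, b, g, T), xc, xd)
         for name, (a, b, g, xc, xd) in reactions.items()}
for name, r in rates.items():
    print(name, f"r/r_1a = {r / rates['1a']:.2f}")
```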
The results suggest that reaction (1a) is the most prominent reaction in dense clouds. This is consistent with Harju et al. 2000.
The next table shows the results of the same procedure applied to the destruction reactions:
The results show that reaction (1A) is the main sink for • HO in dense clouds.
Discoveries of the microwave spectra of a considerable number of molecules prove the existence of rather complex molecules in interstellar clouds and provide the possibility of studying dense clouds, which are obscured by the dust they contain. [ 22 ] The • HO molecule has been observed in the interstellar medium since 1963 through its 18-cm transitions. [ 23 ] In the subsequent years • HO was observed by its rotational transitions at far infrared wavelengths, mainly in the Orion region. Because each rotational level of • HO is split by lambda doubling, astronomers can observe a wide variety of energy states from the ground state.
Very high densities are required to thermalize the rotational transitions of • HO, [ 24 ] so it is difficult to detect far-infrared emission lines from a quiescent molecular cloud. Even at H 2 densities of 10 6 cm −3 , dust must be optically thick at infrared wavelengths. But the passage of a shock wave through a molecular cloud is precisely the process which can bring the molecular gas out of equilibrium with the dust, making observations of far-infrared emission lines possible. A moderately fast shock may produce a transient rise in the • HO abundance relative to hydrogen. So, it is possible that far-infrared emission lines of • HO can be a good diagnostic of shock conditions.
Diffuse clouds are of astronomical interest because they play a primary role in the evolution and thermodynamics of the ISM. Observation of the abundant atomic hydrogen at 21 cm has shown good signal-to-noise ratio in both emission and absorption. Nevertheless, HI observations have a fundamental difficulty when they are directed at low-mass regions of hydrogen nuclei, such as the central parts of a diffuse cloud: the thermal width of the hydrogen lines is of the same order as the internal velocity structures of interest, so cloud components of various temperatures and central velocities are indistinguishable in the spectrum. Molecular line observations in principle do not suffer from this problem. Unlike HI, molecules generally have excitation temperature T ex << T kin , so that emission is very weak even from abundant species. CO and • HO are considered to be the most easily studied candidate molecules. CO has transitions in a region of the spectrum (wavelength < 3 mm) where there are no strong background continuum sources, but • HO has its 18 cm emission line, convenient for absorption observations. [ 16 ] Observational studies provide the most sensitive means of detection of molecules with subthermal excitation, and can give the opacity of the spectral line, which is a central issue in modeling the molecular region.
Studies based in the kinematic comparison of • HO and HI absorption lines from diffuse clouds are useful in determining their physical conditions, especially because heavier elements provide higher velocity resolution.
• HO masers , a type of astrophysical maser , were the first masers to be discovered in space and have been observed in more environments than any other type of maser.
In the Milky Way , • HO masers are found in stellar masers (evolved stars), interstellar masers (regions of massive star formation), or in the interface between supernova remnants and molecular material. Interstellar HO masers are often observed from molecular material surrounding ultracompact H II regions (UC H II). But there are masers associated with very young stars that have yet to create UC H II regions. [ 25 ] This class of • HO masers appears to form near the edges of very dense material, places where H 2 O masers form, and where total densities drop rapidly, and UV radiation from young stars can dissociate the H 2 O molecules. So, observations of • HO masers in these regions, can be an important way to probe the distribution of the important H 2 O molecule in interstellar shocks at high spatial resolutions .
Application in water purification
Hydroxyl radicals also play a key role in the oxidative destruction of organic pollutants . [ 26 ] | https://en.wikipedia.org/wiki/Hydroxyl_radical |
Hydroxyl tagging velocimetry ( HTV ) is a velocimetry method used in humid air flows. The method is often used in high-speed combusting flows because the high velocity and temperature accentuate its advantages over similar methods. HTV uses a laser (often an argon-fluoride excimer laser operating at ~193 nm) to dissociate the water in the flow into H + OH. Before entering the flow optics are used to create a grid of laser beams. The water in the flow is dissociated only where beams of sufficient energy pass through the flow, thus creating a grid in the flow where the concentrations of hydroxyl (OH) are higher than in the surrounding flow. Another laser beam (at either ~248 nm or ~308 nm) in the form of a sheet is also passed through the flow in the same plane as the grid. This laser beam is tuned to a wavelength that causes the hydroxyl molecules to fluoresce in the UV spectrum . The fluorescence is then captured by a charge-coupled device (CCD) camera. Using electronic timing methods the picture of the grid can be captured at nearly the same instant that the grid is created.
By delaying the pulse of the fluorescence laser and the camera shot, an image of the grid that has now displaced downstream can be captured. Computer programs are then used to compare the two images and determine the displacement of the grid. By dividing the displacement by the known time delay the two dimensional velocity field (in the plane of the grid) can be determined. Flow ratios, however, are shown to affect the impingement locations, where increased air flow ratios can reduce the required combustor size by isolating reaction products solely within the secondary cavity. [ 1 ]
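The displacement-over-delay step described above reduces to simple arithmetic once the grid intersections have been located in both images. The sketch below assumes that the intersection coordinates, the pulse delay, and the pixel scale are already known; locating the intersections is the job of the image-correlation software and is not shown.

```python
import numpy as np

def velocity_field(grid_t0, grid_t1, dt, pixel_size_m):
    """Two-component velocity at each grid intersection.

    grid_t0, grid_t1 -- (N, 2) arrays of intersection coordinates (pixels) in the
                        undelayed and delayed images
    dt               -- delay between the write and read laser pulses, in seconds
    pixel_size_m     -- physical size of one pixel in the measurement plane, in metres
    """
    displacement_m = (np.asarray(grid_t1) - np.asarray(grid_t0)) * pixel_size_m
    return displacement_m / dt  # (N, 2) array of (u, v) components in m/s

# Toy numbers: two intersections displaced about 5 px during a 10 microsecond delay
v = velocity_field([[100.0, 200.0], [150.0, 200.0]],
                   [[105.0, 201.0], [155.0, 199.0]],
                   dt=10e-6, pixel_size_m=50e-6)
print(v)  # roughly 25 m/s in x and a few m/s in y at each point
```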
Other molecular tagging velocimetry (MTV) methods have used ozone (O 3 ), excited oxygen and nitric oxide as the tag instead of hydroxyl. In the case of ozone the method is known as ozone tagging velocimetry or OTV. OTV has been developed and tested in many room air temperature applications with very accurate test results. OTV consists of an initial "write" step, where a 193-nm pulsed excimer laser creates ozone grid lines via oxygen (O 2 ) UV absorption, and a subsequent "read" step, where a 248-nm excimer laser photodissociates the formed O 3 and fluoresces the vibrationally excited O 2 product thus revealing the grid lines' displacement. | https://en.wikipedia.org/wiki/Hydroxyl_tagging_velocimetry |
Hydroxylamine (also known as hydroxyammonia ) is an inorganic compound with the chemical formula N H 2 O H . The compound exists as hygroscopic colorless crystals . [ 4 ] Hydroxylamine is almost always provided and used as an aqueous solution or more often as one of its salts such as hydroxylammonium sulfate , a water-soluble solid.
Hydroxylamine and its salts are consumed almost exclusively to produce Nylon-6 . The oxidation of NH 3 to hydroxylamine is a step in biological nitrification . [ 5 ]
Hydroxylamine was first prepared as hydroxylammonium chloride in 1865 by the German chemist Wilhelm Clemens Lossen (1838-1906); he reacted tin and hydrochloric acid in the presence of ethyl nitrate . [ 6 ] It was first prepared in pure form in 1891 by the Dutch chemist Lobry de Bruyn and by the French chemist Léon Maurice Crismer (1858-1944). [ 7 ] [ 8 ] The coordination complex ZnCl 2 (NH 2 OH) 2 (zinc dichloride di(hydroxylamine)), known as Crismer's salt, releases hydroxylamine upon heating. [ 9 ]
Hydroxylamine and its N -substituted derivatives are pyramidal at nitrogen, with bond angles very similar to those of amines. The conformation of hydroxylamine places the NOH anti to the lone pair on nitrogen, seeming to minimize lone pair-lone pair interactions. [ 10 ]
Hydroxylamine or its salts (salts containing hydroxylammonium cations [NH 3 OH] + ) can be produced via several routes but only two are commercially viable. It is also produced naturally as discussed in a section on biochemistry .
NH 2 OH is mainly produced as its sulfuric acid salt , hydroxylammonium sulfate ( [NH 3 OH][SO 4 ] ), by the hydrogenation of nitric oxide over platinum catalysts in the presence of sulfuric acid. [ 11 ]
Another route to NH 2 OH is the Raschig process : aqueous ammonium nitrite is reduced by HSO − 3 and SO 2 at 0 °C to yield a hydroxylamido- N , N -disulfonate anion :
This ammonium hydroxylamine disulfonate anion is then hydrolyzed to give hydroxylammonium sulfate :
Julius Tafel discovered that hydroxylamine hydrochloride or sulfate salts can be produced by electrolytic reduction of nitric acid with HCl or H 2 SO 4 respectively: [ 12 ] [ 13 ]
Hydroxylamine can also be produced by the reduction of nitrous acid or potassium nitrite with bisulfite :
Hydrochloric acid disproportionates nitromethane to hydroxylamine hydrochloride and carbon monoxide via the hydroxamic acid . [ citation needed ]
A direct lab synthesis of hydroxylamine from molecular nitrogen in water plasma was demonstrated in 2024. [ 14 ]
Solid NH 2 OH can be collected by treatment with liquid ammonia . Ammonium sulfate , [NH 4 ] 2 SO 4 , a side-product insoluble in liquid ammonia, is removed by filtration; the liquid ammonia is evaporated to give the desired product. [ 4 ] The net reaction is:
Base, such as sodium butoxide, can be used to free the hydroxylamine from hydroxylammonium chloride : [ 4 ]
Hydroxylamine is a weak base; its conjugate acid, the hydroxylammonium ion, has a p K a of 6.03.
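As a quick illustration of what a conjugate-acid p K a of about 6 implies, the fraction of hydroxylamine present as the hydroxylammonium cation at a given pH follows from the Henderson–Hasselbalch relation, as in the short sketch below.

```python
def protonated_fraction(ph, pka=6.03):
    """Fraction present as [NH3OH]+ for the equilibrium [NH3OH]+ <=> NH2OH + H+."""
    return 1.0 / (1.0 + 10.0 ** (ph - pka))

for ph in (3.0, 6.03, 7.4):
    print(f"pH {ph}: {protonated_fraction(ph):.0%} protonated")
# ~100% protonated at pH 3, 50% at pH 6.03, and only a few percent at pH 7.4
```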
Hydroxylamine reacts with alkylating agents usually at the nitrogen atom:
The reaction of NH 2 OH with an aldehyde or ketone produces an oxime .
This reaction can be useful in the purification of ketones and aldehydes: if hydroxylamine is added to an aldehyde or ketone in solution, an oxime forms, which generally precipitates from solution; heating the precipitate with aqueous acid then restores the original aldehyde or ketone. [ 15 ]
NH 2 OH reacts with chlorosulfonic acid to give hydroxylamine- O -sulfonic acid : [ 16 ]
It isomerizes to the amine oxide H 3 N + −O − . [ 17 ]
Hydroxylamine derivatives substituted in place of the hydroxyl or amine hydrogen are (respectively) called O - or N ‑hydroxylamines. In general N ‑hydroxylamines are more common. Examples are N ‑ tert ‑butylhydroxylamine or the glycosidic bond in calicheamicin . N , O ‑Dimethylhydroxylamine is a precursor to Weinreb amides .
Similarly to amines, one can distinguish hydroxylamines by their degree of substitution: primary, secondary and tertiary. When stored exposed to air for weeks, secondary hydroxylamines degrade to nitrones . [ 18 ]
N ‑organylhydroxylamines, R−NH−OH , where R is an organyl group, can be reduced to amines R−NH 2 : [ 19 ]
Oximes such as dimethylglyoxime are also employed as ligands .
The hydrolysis of N-substituted oximes, hydroxamic acids, and nitrones easily provides hydroxylamines.
Alkylating of hydroxylamine or N-alkylhydroxylamines proceeds usually at nitrogen. One challenge is dialkylation when only monoalkylation is desired.
For O-alkylation of hydroxylamines, strong base such as sodium hydride is required to first deprotonate the OH group: [ 20 ]
Amine oxidation with benzoyl peroxide is a common method to synthesize hydroxylamines. Care must be taken to prevent over-oxidation to a nitrone . Other methods include:
Approximately 95% of hydroxylamine is used in the synthesis of cyclohexanone oxime , a precursor to Nylon 6 . [ 11 ] The treatment of this oxime with acid induces the Beckmann rearrangement to give caprolactam . [ 21 ] The latter can then undergo a ring-opening polymerization to yield Nylon 6. [ 22 ]
Hydroxylamine and its salts are commonly used as reducing agents in myriad organic and inorganic reactions. They can also act as antioxidants for fatty acids.
High concentrations of hydroxylamine are used by biologists to introduce mutations by acting as a DNA nucleobase amine-hydroxylating agent. [ 23 ] It is thought to act mainly via hydroxylation of cytidine to hydroxyaminocytidine, which is misread as thymidine, thereby inducing C:G to T:A transition mutations. [ 24 ] However, high concentrations or over-reaction of hydroxylamine in vitro are seemingly able to modify other regions of the DNA and lead to other types of mutations. [ 24 ] This may be due to the ability of hydroxylamine to undergo uncontrolled free radical chemistry in the presence of trace metals and oxygen; in the absence of these free radical effects, Ernst Freese noted that hydroxylamine was unable to induce reversion of its C:G to T:A transition mutations and even considered hydroxylamine to be the most specific mutagen known. [ 25 ] Practically, it has been largely surpassed by more potent mutagens such as EMS , ENU , or nitrosoguanidine , but being a very small mutagenic compound with high specificity, it has found some specialized uses, such as mutation of DNA packed within bacteriophage capsids [ 26 ] and mutation of purified DNA in vitro . [ 27 ]
An alternative industrial synthesis of paracetamol developed by Hoechst – Celanese involves the conversion of a ketone to a ketoxime with hydroxylamine.
Some non-chemical uses include removal of hair from animal hides and photographic developing solutions. [ 2 ] In the semiconductor industry, hydroxylamine is often a component in the "resist stripper", which removes photoresist after lithography.
Hydroxylamine can also be used to better characterize the nature of a post-translational modification onto proteins. For example, poly(ADP-Ribose) chains are sensitive to hydroxylamine when attached to glutamic or aspartic acids but not sensitive when attached to serines. [ 28 ] Similarly, Ubiquitin molecules bound to serines or threonines residues are sensitive to hydroxylamine, but those bound to lysine (isopeptide bond) are resistant. [ 29 ]
In biological nitrification, the oxidation of NH 3 to hydroxylamine is mediated by the ammonia monooxygenase (AMO). [ 5 ] Hydroxylamine oxidoreductase (HAO) further oxidizes hydroxylamine to nitrite. [ 30 ]
Cytochrome P460, an enzyme found in the ammonia-oxidizing bacteria Nitrosomonas europea , can convert hydroxylamine to nitrous oxide , a potent greenhouse gas . [ 31 ]
Hydroxylamine can also be used to highly selectively cleave asparaginyl - glycine peptide bonds in peptides and proteins. [ 32 ] It also bonds to and permanently disables (poisons) heme-containing enzymes . It is used as an irreversible inhibitor of the oxygen-evolving complex of photosynthesis on account of its similar structure to water.
Hydroxylamine is a skin irritant but is of low toxicity.
A detonator can easily explode aqueous solutions concentrated above 80% by weight, and even 50% solution might prove detonable if tested in bulk. [ 33 ] [ 34 ] In air, the combustion is rapid and complete:
In the absence of air, pure hydroxylamine requires stronger heating, and the detonation does not correspond to complete combustion:
At least two factories dealing in hydroxylamine have been destroyed since 1999 with loss of life. [ 35 ] It is known, however, that ferrous and ferric iron salts accelerate the decomposition of 50% NH 2 OH solutions. [ 36 ] Hydroxylamine and its derivatives are more safely handled in the form of salts .
It is an irritant to the respiratory tract , skin, eyes, and other mucous membranes . It may be absorbed through the skin, is harmful if swallowed, and is a possible mutagen . [ 37 ] | https://en.wikipedia.org/wiki/Hydroxylamine |
Hydroxylamine- O -sulfonic acid ( HOSA ) or aminosulfuric acid is the inorganic compound with molecular formula H 3 NO 4 S that is formed by the sulfonation of hydroxylamine with oleum . [ 2 ] It is a white, water-soluble and hygroscopic , solid, commonly represented by the condensed structural formula H 2 NOSO 3 H, though it actually exists as a zwitterion [ 3 ] and thus is more accurately represented as + H 3 NOSO 3 − . It is used as a reagent for the introduction of amine groups (–NH 2 ), for the conversion of aldehydes into nitriles and alicyclic ketones into lactams (cyclic amides ), and for the synthesis of variety of nitrogen-containing heterocycles. [ 3 ] [ 4 ] [ 5 ]
According to a laboratory procedure [ 2 ] hydroxylamine- O -sulfonic acid can be prepared by treating hydroxylamine sulfate with fuming sulfuric acid ( oleum ). The industrial process is similar. [ 6 ]
The sulfonation of hydroxylamine can also be effected with chlorosulfonic acid [ 3 ] by a method first published in 1925 [ 7 ] and refined for Organic Syntheses . [ 8 ]
The hydroxylamine- O -sulfonic acid, which should be stored at 0 °C to prevent decomposition, can be checked by iodometric titration. [ 9 ]
Analogous to sulfamic acid (H 3 N + SO 3 − ) and as is the case generally for amino acids, HOSA exists in the solid state as a zwitterion : H 3 N + OSO 3 − . It resembles an ammonia molecule coordinate covalently bonded to a sulfate group. [ 10 ]
HOSA reacts under basic conditions as an electrophile and under neutral or acidic conditions as a nucleophile . [ 4 ] [ 11 ]
It reacts with tertiary amines to give trisubstituted hydrazinium salts and with pyridine to give the 1-aminopyridinium salt. [ 12 ]
From 1-aminopyridinium salts the photochemically active 1- N -iminopyridinium ylides are accessible by acylation . [ 13 ] The photochemical rearrangement of the resulting 1- N -iminopyridinium ylides leads in high yields to 1 H -1,2- diazepines [ 14 ]
N -amination of 1 H -benzotriazole with hydroxylamine- O -sulfonic acid yields a mixture of 1-aminobenzotriazole (major product) and 2-aminobenzotriazole (minor product). From 1-aminotriazole, benzyne is formed in an almost quantitative yield by oxidation with lead(IV) acetate , which rapidly dimerizes to biphenylene in good yields. [ 15 ]
Electron deficient heterocycles , such as tetrazole , can be N - aminated with hydroxylamine- O -sulfonic acid, while even more electron-deficient compounds, such as 5-nitrotetrazole , react only with stronger aminating agents such as O -tosylhydroxylamine or O - mesitylene sulfonylhydroxylamine to give amino compounds, which have been investigated as explosives. [ 16 ]
In the N -amination of the unsubstituted tetrazole, a mixture of 1-amino- and 2-aminotetrazole is obtained.
Sulfur compounds (such as thioethers ) can also be aminated with hydroxylamine- O -sulfonic acid to give sulfinimines (isosteric with sulfoxides but far more unstable), and phosphorus compounds (such as triphenylphosphine ) can be aminated to phosphine imides via the intermediate aminotriphenylphosphonium hydrogen sulfate. [ 17 ]
The reaction of hydroxylamine- O -sulfonic acid with metal salts of sulfinic acids in sodium acetate solution produces primary sulfonamides in very good yields. [ 18 ]
Diimine can be formed in situ from hydroxylamine- O -sulfonic acid or from hydroxylamine- O -sulfonic acid/hydroxylamine sulfate mixtures, and it selectively hydrogenates conjugated multiple bonds. [ 20 ]
At room temperature and below, hydroxylamine- O -sulfonic acid reacts with ketones and aldehydes as a nucleophile to the corresponding oxime- O -sulfonic acids or their salts. [ 19 ] The oxime- O -sulfonic acids of aldehydes react above room temperature upon elimination of sulfuric acid in high yields to nitriles . [ 20 ]
Under similar conditions, aliphatic ketones provide oximes in very high yields, while arylalkyl ketones react in a Beckmann rearrangement to give amides. When heated to reflux for several hours under acidic conditions (e.g., in the presence of concentrated formic acid ), alicyclic ketones react to provide lactams in high yields. [ 21 ]
Under basic conditions and in the presence of primary amines, hydroxylamine- O -sulfonic acid reacts with aldehydes and ketones (e.g. cyclohexanone [ 22 ] ) to form diaziridines, which can easily be oxidized to the more stable diazirines.
The reaction also provides substituted aziridines from simple aldehydes and ketones with high yield and diastereoselectivity. [ 23 ]
1,2-Benzisoxazole is efficiently produced by nucleophilic attack of hydroxylamine- O -sulfonic acid to the carbonyl group of 2-hydroxybenzaldehyde followed by cyclization. [ 24 ]
1,2-Benzisoxazole is a structural element in the antipsychotic risperidone and paliperidone , as well as the anticonvulsant zonisamide .
In a one-pot reaction , N -aryl[3,4- d ]pyrazolopyrimidines are obtained in good yields from simple 4,6-dichloropyrimidine-5-carboxaldehyde, [ 25 ]
which can be used as purine analogs for a wide range of diagnostic and therapeutic applications. [ 26 ]
The chemiluminescence of the system luminol / cobalt(II) chloride is dramatically enhanced by the addition of hydroxylamine- O -sulfonic acid. [ 27 ] | https://en.wikipedia.org/wiki/Hydroxylamine-O-sulfonic_acid |
Hydroxylammonium nitrate or hydroxylamine nitrate (HAN) is an inorganic compound with the chemical formula [NH3OH]+[NO3]−. It is a salt derived from hydroxylamine and nitric acid . In its pure form, it is a colourless hygroscopic solid. It has potential to be used as a rocket propellant , either in solution as a monopropellant or in bipropellants. [ 1 ] Hydroxylammonium nitrate (HAN)-based propellants are a viable and effective option for future "green" propellant-based missions, as they offer 50% higher performance for a given propellant tank compared to commercially used hydrazine .
The compound is a salt with separated hydroxyammonium and nitrate ions . [ 2 ] Hydroxylammonium nitrate is unstable because it contains both a reducing agent (hydroxylammonium cation) and an oxidizer ( nitrate ), [ 3 ] the situation being analogous to ammonium nitrate . It is usually handled as an aqueous solution with a small amount of nitric acid as a stabilizer. [ 4 ] : 1641 The solution is corrosive and toxic, and may be carcinogenic. Solid HAN is unstable, especially in the presence of trace amounts of iron(III) .
HAN has applications as a component of rocket propellant , in both solid and liquid form. HAN and ammonium dinitramide (ADN), another energetic ionic compound, were investigated as less-toxic replacements for toxic hydrazine for monopropellant rockets where only a catalyst is needed to cause decomposition. [ 5 ] HAN and ADN will work as monopropellants in water solution, as well as when dissolved with fuel liquids such as methanol .
HAN is used by the Network Centric Airborne Defense Element boost-phase interceptor being developed by Raytheon. [ 6 ] As a solid propellant oxidizer, it is typically bonded with glycidyl azide polymer (GAP), hydroxyl-terminated polybutadiene (HTPB), or carboxy-terminated polybutadiene (CTPB) and requires preheating to 200-300 °C to decompose. [ citation needed ] When used as a monopropellant, the catalyst is a noble metal, similar to the other monopropellants that use silver , palladium , or iridium . [ citation needed ]
HAN also enabled the development of solid propellants that could be controlled electrically and switched on and off. [ 7 ] Developed by DSSP for special effects [ 8 ] and microthrusters, these were the first HAN-based propellants flown in space, aboard the Naval Research Laboratory SpinSat, launched in 2014. [ 9 ] [ 10 ]
It was used in a fuel/oxidizer blend known as "AF-M315E" [ 1 ] in the high-thrust engines of the Green Propellant Infusion Mission , [ 11 ] [ 12 ] [ 13 ] which was initially expected to be launched in 2015 and was eventually launched and deployed on 25 June 2019. [ 14 ] The specific impulse of AF-M315E is 257 s. [ 1 ] Fuel components such as methanol, glycine , TEAN ( triethanolammonium nitrate), and amines can be added to aqueous HAN solutions to form high-performance monopropellants for space propulsion systems. [ 15 ]
China Aerospace Science and Technology Corporation (CASC) launched a demonstration of a HAN-based thruster aboard a microsatellite in January 2018. [ 16 ]
The Japanese technology demonstration satellite Innovative Satellite Technology Demonstration-1 , launched in January 2019, carries a demonstration thruster using HAN, which operated successfully in orbit. [ 17 ] [ 18 ] [ 19 ]
HAN is sometimes used in nuclear reprocessing as a reducing agent for plutonium ions. [ 20 ] | https://en.wikipedia.org/wiki/Hydroxylammonium_nitrate |
The hydroxymethyl group is a substituent with the structural formula −CH2−OH . It consists of a methylene bridge ( −CH2− unit) bonded to a hydroxyl group ( −OH ), which makes molecules bearing the hydroxymethyl group alcohols . It has the same chemical formula as the methoxy group ( −O−CH3 ), differing only in the attachment site and orientation with respect to the rest of the molecule. However, their chemical properties are different. [ 1 ] [ 2 ]
Hydroxymethyl is the side chain of the encoded amino acid serine . [ 3 ]
| https://en.wikipedia.org/wiki/Hydroxymethyl_group
The stability problem of functional equations originated from a question of Stanisław Ulam , posed in 1940, concerning the stability of group homomorphisms . In the next year, Donald H. Hyers [ 1 ] gave a partial affirmative answer to the question of Ulam in the context of Banach spaces in the case of additive mappings; this was the first significant breakthrough and a step toward more solutions in this area. Since then, a large number of papers have been published in connection with various generalizations of Ulam's problem and Hyers's theorem. In 1978, Themistocles M. Rassias [ 2 ] succeeded in extending Hyers's theorem for mappings between Banach spaces by considering an unbounded Cauchy difference [ 3 ] subject to a continuity condition upon the mapping. He was the first to prove the stability of the linear mapping . This result of Rassias attracted several mathematicians worldwide and stimulated further investigation of the stability problems of functional equations.
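Rassias's 1978 result, in its commonly cited form, can be stated as follows (a standard formulation from the literature, with generic notation):

```latex
\textbf{Theorem (Rassias, 1978).} Let $E, E'$ be Banach spaces and let $f\colon E \to E'$ satisfy
\[
  \|f(x+y) - f(x) - f(y)\| \le \theta \left( \|x\|^{p} + \|y\|^{p} \right) \quad \text{for all } x, y \in E
\]
for some $\theta \ge 0$ and $0 \le p < 1$. Then there exists a unique additive mapping $T\colon E \to E'$ with
\[
  \|f(x) - T(x)\| \le \frac{2\theta}{2 - 2^{p}} \, \|x\|^{p} \quad \text{for all } x \in E,
\]
and if $t \mapsto f(tx)$ is continuous in $t$ for each fixed $x$, then $T$ is linear.
```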
In recognition of the large influence of S. M. Ulam , D. H. Hyers, and Th. M. Rassias on the study of stability problems of functional equations, the stability phenomenon proved by Th. M. Rassias led to the development of what is now known as Hyers–Ulam–Rassias stability [ 4 ] of functional equations . For an extensive presentation of the stability of functional equations in the context of Ulam's problem, the interested reader is referred to the books by S.-M. Jung, [ 5 ] S. Czerwik, [ 6 ] Y.J. Cho, C. Park, Th.M. Rassias and R. Saadati, [ 7 ] Y.J. Cho, Th.M. Rassias and R. Saadati, [ 8 ] and Pl. Kannappan, [ 9 ] as well as to the following papers. [ 10 ] [ 11 ] [ 12 ] [ 13 ] In 1950, T. Aoki [ 14 ] considered an unbounded Cauchy difference, which Rassias later generalised to the linear case. This result is known as Hyers–Ulam–Aoki stability of the additive mapping. [ 15 ] Aoki (1950) had not considered continuity of the mapping, whereas Rassias (1978) imposed an extra continuity hypothesis, which yielded a formally stronger conclusion. | https://en.wikipedia.org/wiki/Hyers–Ulam–Rassias_stability
A hyetograph is a graphical representation of the distribution of rainfall intensity over time. For instance, in the 24-hour rainfall distributions as developed by the Soil Conservation Service (now the NRCS or Natural Resources Conservation Service ), rainfall intensity progressively increases until it reaches a maximum and then gradually decreases. Where this maximum occurs and how fast the maximum is reached is what differentiates one distribution from another. One important aspect to understand is that the distributions are for design storms, not necessarily actual storms. In other words, a real storm may not behave in this same fashion. The maximum intensity may not be reached as uniformly as shown in the SCS hyetographs.
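As a simple illustration of the concept (a toy triangular design storm, not one of the actual SCS distributions; the depth, duration, and peak position below are arbitrary example values), a hyetograph can be tabulated as intensity per time step:

```python
def triangular_hyetograph(total_depth_mm, duration_hr, peak_fraction=0.5, steps=12):
    """Tabulate a toy triangular design-storm hyetograph: intensity rises linearly to a
    peak at peak_fraction of the duration, then falls linearly back to zero.
    Returns (time_hr, intensity_mm_per_hr) pairs sampled at interval midpoints."""
    dt = duration_hr / steps
    peak_time = peak_fraction * duration_hr
    peak_intensity = 2.0 * total_depth_mm / duration_hr  # triangle area equals total depth
    series = []
    for i in range(steps):
        t = (i + 0.5) * dt
        if t <= peak_time:
            intensity = peak_intensity * t / peak_time
        else:
            intensity = peak_intensity * (duration_hr - t) / (duration_hr - peak_time)
        series.append((t, intensity))
    return series

# Example: a 24-hour design storm delivering 100 mm of rain, peaking at mid-storm.
for t, i in triangular_hyetograph(100.0, 24.0):
    print(f"t = {t:5.1f} h   intensity = {i:5.2f} mm/h")
```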
| https://en.wikipedia.org/wiki/Hyetograph
In medicine, the hygiene hypothesis states that early childhood exposure to particular microorganisms (such as the gut flora and helminth parasites) protects against allergies by properly tuning the immune system . [ 1 ] [ 2 ] In particular, a lack of such exposure is thought to lead to poor immune tolerance . [ 1 ] The time period for exposure begins before birth and ends at school age. [ 3 ]
While early versions of the hypothesis referred to microorganism exposure in general, later versions apply to a specific set of microbes that have co-evolved with humans. [ 1 ] [ 4 ] [ 2 ] The updates have been given various names, including the microbiome depletion hypothesis, the microflora hypothesis, and the "old friends" hypothesis . [ 4 ] [ 5 ] There is a significant amount of evidence supporting the idea that lack of exposure to these microbes is linked to allergies or other conditions, [ 2 ] [ 6 ] [ 7 ] although it is still rejected by many scientists. [ 4 ] [ 8 ] [ 9 ]
The term "hygiene hypothesis" has been described as a misnomer because people incorrectly interpret it as referring to their own cleanliness. [ 1 ] [ 8 ] [ 10 ] [ 11 ] Having worse personal hygiene, such as not washing hands before eating, only increases the risk of infection without affecting the risk of allergies or immune disorders. [ 1 ] [ 4 ] [ 9 ] Hygiene is essential for protecting vulnerable populations such as the elderly from infections, preventing the spread of antibiotic resistance , and combating emerging infectious diseases such as Ebola . [ 12 ] The hygiene hypothesis does not suggest that having more infections during childhood would be an overall benefit. [ 1 ] [ 8 ]
The idea of a link between parasite infection and immune disorders was first suggested in 1968 [ 13 ] before the advent of large scale DNA sequencing techniques . The original formulation of the hygiene hypothesis dates from 1989, when David Strachan proposed that lower incidence of infection in early childhood could be an explanation for the rise in allergic diseases such as asthma and hay fever during the 20th century. [ 14 ]
The hygiene hypothesis has also been expanded beyond allergies, and is also studied in the context of a broader range of conditions affected by the immune system, particularly inflammatory diseases . [ 15 ] These include type 1 diabetes , [ 16 ] multiple sclerosis, [ 17 ] [ 10 ] and also some types of depression [ 17 ] [ 18 ] and cancer. [ 19 ] For example, the global distribution of multiple sclerosis is negatively correlated with that of the helminth Trichuris trichiura and its incidence is negatively correlated with Helicobacter pylori infection. [ 10 ] Strachan's original hypothesis could not explain how various allergic conditions spiked or increased in prevalence at different times, such as why respiratory allergies began to increase much earlier than food allergies, which did not become more common until near the end of the 20th century. [ 12 ]
In 2003, Graham Rook proposed the "old friends" hypothesis which has been described as a more rational explanation for the link between microbial exposure and inflammatory disorders. [ 20 ] The hypothesis states that the vital microbial exposures are not colds, influenza, measles and other common childhood infections which have evolved relatively recently over the last 10,000 years, but rather the microbes already present during mammalian and human evolution, that could persist in small hunter-gatherer groups as microbiota, tolerated latent infections, or carrier states. He proposed that coevolution with these species has resulted in their gaining a role in immune system development. [ citation needed ]
Strachan's original formulation of the hygiene hypothesis also centred around the idea that smaller families provided insufficient microbial exposure partly because of less person-to-person spread of infections, but also because of "improved household amenities and higher standards of personal cleanliness". [ 14 ] It seems likely that this was the reason he named it the "hygiene hypothesis". Although the "hygiene revolution" of the nineteenth and twentieth centuries may have been a major factor, it now seems more likely that, while public health measures such as sanitation , potable water and garbage collection were instrumental in reducing our exposure to cholera , typhoid and so on, they also deprived people of their exposure to the "old friends" that occupy the same environmental habitats. [ 21 ]
The rise of autoimmune diseases and acute lymphoblastic leukemia in young people in the developed world was linked to the hygiene hypothesis. [ 22 ] [ 23 ] [ 24 ] Autism may be associated with changes in the gut microbiome and early infections. [ 25 ] The risk of chronic inflammatory diseases also depends on factors such as diet, pollution, physical activity, obesity, socio-economic factors, and stress. Genetic predisposition is also a factor. [ 26 ] [ 27 ] [ 28 ]
Since allergies and other chronic inflammatory diseases are largely diseases of the last 100 years or so, the "hygiene revolution" of the last 200 years came under scrutiny as a possible cause. During the 1800s, radical improvements to sanitation and water quality occurred in Europe and North America. The introduction of toilets and sewer systems, the cleanup of city streets, and cleaner food were part of this program. This in turn led to a rapid decline in infectious diseases, particularly during the period 1900–1950, through reduced exposure to infectious agents. [ 21 ]
Although the idea that exposure to certain infections may decrease the risk of allergy is not new, Strachan was one of the first to formally propose it, in an article published in the British Medical Journal in 1989. This article proposed to explain the observation that hay fever and eczema , both allergic diseases, were less common in children from larger families, who were presumably exposed to more infectious agents through their siblings, than in children from families with only one child. [ 29 ] The increased occurrence of allergies had previously been thought to be a result of increasing pollution. [ 8 ] The hypothesis was extensively investigated by immunologists and epidemiologists and has become an important theoretical framework for the study of chronic inflammatory disorders. [ citation needed ]
The "old friends hypothesis" proposed in 2003 [ 20 ] may offer a better explanation for the link between microbial exposure and inflammatory diseases. [ 18 ] [ 20 ] This hypothesis argues that the vital exposures are not common cold and other recently evolved infections, which are no older than 10,000 years, but rather microbes already present in hunter-gatherer times when the human immune system was evolving. Conventional childhood infections are mostly " crowd infections " that kill or immunise and thus cannot persist in isolated hunter-gatherer groups. Crowd infections started to appear after the Neolithic agricultural revolution, when human populations increased in size and proximity. The microbes that co-evolved with mammalian immune systems are much more ancient. According to this hypothesis, humans became so dependent on them that their immune systems can neither develop nor function properly without them.
Rook proposed that these microbes most likely include lactobacilli, saprophytic (environmental) mycobacteria, and helminths.
The modified hypothesis later expanded to include exposure to symbiotic bacteria and parasites. [ 30 ]
"Evolution turns the inevitable into a necessity." This means that the majority of mammalian evolution took place in mud and rotting vegetation, and more than 90 percent of human evolution took place in isolated hunter-gatherer and farming communities. Therefore, human immune systems have evolved to anticipate certain types of microbial input, making the inevitable exposure into a necessity. The organisms implicated in the hygiene hypothesis are not proven to cause the observed disease prevalence; however, there are sufficient data on lactobacilli, saprophytic environmental mycobacteria, and helminths and their associations. These bacteria and parasites have commonly been found in vegetation, mud, and water throughout evolution. [ 18 ] [ 20 ]
Multiple possible mechanisms have been proposed for how the 'Old Friends' microorganisms prevent autoimmune diseases and asthma.
The "microbial diversity" hypothesis, proposed by Paolo Matricardi and developed by von Hertzen, [ 31 ] [ 32 ] holds that diversity of microbes in the gut and other sites is a key factor for priming the immune system, rather than stable colonization with a particular species. Exposure to diverse organisms in early development builds a "database" that allows the immune system to identify harmful agents and normalize once the danger is eliminated. [ citation needed ]
For allergic disease, the most important times for exposure are: early in development; later during pregnancy; and the first few days or months of infancy. Exposure needs to be maintained over a significant period. This fits with evidence that delivery by Caesarean section may be associated with increased allergies, whilst breastfeeding can be protective. [ 21 ]
Humans and the microbes they harbor have co-evolved for thousands of centuries; however, it is thought that the human species has gone through numerous phases in history characterized by different pathogen exposures. For instance, in very early human societies, limited interaction between members selected for a relatively limited group of pathogens that had high transmission rates. The human immune system is thought to be subjected to selective pressure from pathogens, which is responsible for down-regulating certain alleles and therefore phenotypes in humans. The thalassemia genes shaped by the selection pressure exerted by Plasmodium species might be a model for this theory, [ 33 ] but this has not been shown in vivo.
Recent comparative genomic studies have shown that immune response genes (protein coding and non-coding regulatory genes) have less evolutionary constraint, and are rather more frequently targeted by positive selection from pathogens that coevolve with the human subject. Of all the various types of pathogens known to cause disease in humans, helminths warrant special attention, because of their ability to modify the prevalence or severity of certain immune-related responses in human and mouse models. In fact recent research has shown that parasitic worms have served as a stronger selective pressure on select human genes encoding interleukins and interleukin receptors when compared to viral and bacterial pathogens. Helminths are thought to have been as old as the adaptive immune system , suggesting that they may have co-evolved, also implying that our immune system has been strongly focused on fighting off helminthic infections, insofar as to potentially interact with them early in infancy. The host-pathogen interaction is a very important relationship that serves to shape the immune system development early on in life. [ 34 ] [ 35 ] [ 36 ] [ 37 ]
The primary proposed mechanism of the hygiene hypothesis is an imbalance between the TH1 and TH2 subtypes of T helper cells . [ 10 ] [ 38 ] Insufficient activation of the TH1 arm, which stimulates the cell-mediated defences of the immune system, would lead to an overactive TH2 arm, which stimulates antibody-mediated immunity and in turn leads to allergic disease. [ 39 ]
However, this explanation cannot account for the rise in incidence (similar to the rise of allergic diseases) of several TH1-mediated autoimmune diseases , including inflammatory bowel disease , multiple sclerosis and type I diabetes (Bach). On the other hand, the north–south gradient seen in the prevalence of multiple sclerosis has been found to be inversely related to the global distribution of parasitic infection (Bach). Additionally, research has shown that multiple sclerosis patients infected with parasites displayed TH2-type immune responses, as opposed to the proinflammatory TH1 immune phenotype seen in non-infected multiple sclerosis patients (Fleming). Parasite infection has also been shown to improve inflammatory bowel disease and may act in a similar fashion as it does in multiple sclerosis (Lee). [ citation needed ]
Allergic conditions are caused by inappropriate immunological responses to harmless antigens driven by a TH2-mediated immune response. TH2 cells produce interleukin 4 , interleukin 5 , interleukin 6 , interleukin 13 and predominantly stimulate immunoglobulin E production. [ 23 ] Many bacteria and viruses elicit a TH1-mediated immune response, which down-regulates TH2 responses. TH1 immune responses are characterized by the secretion of pro-inflammatory cytokines such as interleukin 2 , IFNγ , and TNFα . Factors that favor a predominantly TH1 phenotype include: older siblings, large family size, early day care attendance, infection (TB, measles, or hepatitis), rural living, or contact with animals. A TH2-dominated phenotype is associated with high antibiotic use, western lifestyle, urban environment, diet, and sensitivity to dust mites and cockroaches. TH1 and TH2 responses are reciprocally inhibitory, so when one is active, the other is suppressed. [ 40 ] [ 41 ] [ 42 ]
An alternative explanation is that the developing immune system must receive stimuli (from infectious agents, symbiotic bacteria, or parasites) to adequately develop regulatory T cells . Without such stimuli, it becomes more susceptible to autoimmune diseases and allergic diseases, because of insufficiently repressed TH1 and TH2 responses, respectively. [ 43 ] For example, all chronic inflammatory disorders show evidence of failed immunoregulation. [ 26 ] Secondly, helminths, non-pathogenic ambient pseudocommensal bacteria, or certain gut commensals and probiotics drive immunoregulation. They block or treat models of all chronic inflammatory conditions. [ 44 ]
There is a significant amount of evidence supporting the idea that microbial exposure is linked to allergies or other conditions, [ 2 ] [ 6 ] [ 7 ] although scientific disagreement still exists. [ 4 ] [ 8 ] [ 9 ] Since hygiene is difficult to define or measure directly, surrogate markers are used such as socioeconomic status, income, and diet. [ 38 ]
Studies have shown that various immunological and autoimmune diseases are much less common in the developing world than the industrialized world and that immigrants to the industrialized world from the developing world increasingly develop immunological disorders in relation to the length of time since arrival in the industrialized world. [ 23 ] This is true for asthma and other chronic inflammatory disorders. [ 18 ] The increase in allergy rates is primarily attributed to diet and reduced microbiome diversity, although the mechanistic reasons are unclear. [ 45 ]
The use of antibiotics in the first year of life has been linked to asthma and other allergic diseases, [ 46 ] and increased asthma rates are also associated with birth by Caesarean section . [ 47 ] However, at least one study suggests that personal hygienic practices may be unrelated to the incidence of asthma. [ 9 ] Antibiotic usage reduces the diversity of gut microbiota. Although several studies have shown associations between antibiotic use and later development of asthma or allergy, other studies suggest that the effect is due to more frequent antibiotic use in asthmatic children. Trends in vaccine use may also be relevant, but epidemiological studies provide no consistent support for a detrimental effect of vaccination/immunization on atopy rates. [ 21 ] In support of the old friends hypothesis, the intestinal microbiome was found to differ between allergic and non-allergic Estonian and Swedish children (although this finding was not replicated in a larger cohort), and the biodiversity of the intestinal flora in patients with Crohn's disease was diminished. [ 23 ]
The hygiene hypothesis does not apply to all populations. [ 9 ] [ 38 ] For example, in the case of inflammatory bowel disease , it is primarily relevant when a person's level of affluence increases, either due to changes in society or by moving to a more affluent country, but not when affluence remains constant at a high level. [ 38 ]
The hygiene hypothesis has difficulty explaining why allergic diseases also occur in less affluent regions. [ 9 ] Additionally, exposure to some microbial species actually increases future susceptibility to disease instead, as in the case of infection with rhinovirus (the main source of the common cold ) which increases the risk of asthma. [ 4 ] [ 48 ]
Current research suggests that manipulating the intestinal microbiota may be able to treat or prevent allergies and other immune-related conditions. [ 2 ] Various approaches are under investigation. Probiotics (drinks or foods) have never been shown to reintroduce microbes to the gut. As yet, therapeutically relevant microbes have not been specifically identified. [ 49 ] However, probiotic bacteria have been found to reduce allergic symptoms in some studies. [ 15 ] Other approaches being researched include prebiotics , which promote the growth of gut flora, and synbiotics , the use of prebiotics and probiotics at the same time. [ 2 ]
Should these therapies become accepted, public policy implications include providing green spaces in urban areas or even providing access to agricultural environments for children. [ 50 ]
Helminthic therapy is the treatment of autoimmune diseases and immune disorders by means of deliberate infestation with helminth larvae or ova . Helminthic therapy emerged from the search for reasons why the incidence of immunological disorders and autoimmune diseases correlates with the level of industrial development. [ 51 ] [ 52 ] The exact relationship between helminths and allergies is unclear, in part because studies tend to use different definitions and outcomes, and because of the wide variety among both helminth species and the populations they infect. [ 53 ] The infections induce a type 2 immune response, which likely evolved in mammals as a result of such infections; chronic helminth infection has been linked with a reduced sensitivity in peripheral T cells, and several studies have found deworming to lead to an increase in allergic sensitivity. [ 54 ] [ 13 ] However, in some cases helminths and other parasites are a cause of developing allergies instead. [ 4 ] In addition, such infections are not themselves a treatment as they are a major disease burden and in fact they are one of the most important neglected diseases . [ 54 ] [ 13 ] The development of drugs that mimic the effects without causing disease is in progress. [ 4 ]
The reduction of public confidence in hygiene has significant possible consequences for public health. [ 12 ] Hygiene is essential for protecting vulnerable populations such as the elderly from infections, preventing the spread of antibiotic resistance , and for combating emerging infectious diseases such as SARS and Ebola . [ 12 ]
The misunderstanding of the term "hygiene hypothesis" has resulted in unwarranted opposition to vaccination as well as other important public health measures. [ 8 ] It has been suggested that public awareness of the initial form of the hygiene hypothesis has led to an increased disregard for hygiene in the home. [ 55 ] The effective communication of science to the public has been hindered by the presentation of the hygiene hypothesis and other health-related information in the media. [ 12 ]
No evidence supports the idea that reducing modern practices of cleanliness and hygiene would have any impact on rates of chronic inflammatory and allergic disorders, but a significant amount of evidence indicates that reducing hygiene would increase the risks of infectious diseases. [ 21 ] The phrase "targeted hygiene" has been used in order to recognize the importance of hygiene in avoiding pathogens. [ 1 ]
If home and personal cleanliness contributes to reduced exposure to vital microbes, its role is likely to be small. The idea that homes can be made "sterile" through excessive cleanliness is implausible, and the evidence shows that after cleaning, microbes are quickly replaced by dust and air from outdoors, by shedding from the body and other living things, as well as from food. [ 21 ] [ 56 ] [ 57 ] [ 58 ] The key point may be that the microbial content of urban housing has altered, not because of home and personal hygiene habits, but because they are part of urban environments. Diet and lifestyle changes also affect the gut, skin and respiratory microbiota. [ citation needed ]
At the same time that concerns about allergies and other chronic inflammatory diseases have been increasing, so also have concerns about infectious disease. [ 21 ] [ 59 ] [ 60 ] Infectious diseases continue to exert a heavy health toll. Preventing pandemics and reducing antibiotic resistance are global priorities, and hygiene is a cornerstone of containing these threats. [ citation needed ]
The International Scientific Forum on Home Hygiene has developed a risk management approach to reducing home infection risks. This approach uses microbiological and epidemiological evidence to identify the key routes of infection transmission in the home. These data indicate that the critical routes involve the hands, hand and food contact surfaces and cleaning utensils. Clothing and household linens involve somewhat lower risks. Surfaces that contact the body, such as baths and hand basins, can act as infection vehicles, as can surfaces associated with toilets. Airborne transmission can be important for some pathogens. A key aspect of this approach is that it maximises protection against pathogens and infection, but is more relaxed about visible cleanliness in order to sustain normal exposure to other human, animal and environmental microbes. [ 56 ] | https://en.wikipedia.org/wiki/Hygiene_hypothesis |
Hygroscopy is the phenomenon of attracting and holding water molecules via either absorption or adsorption from the surrounding environment , which is usually at normal or room temperature. If water molecules become suspended among the substance's molecules, adsorbing substances can become physically changed, e.g. changing in volume, boiling point , viscosity or some other physical characteristic or property of the substance. For example, a finely dispersed hygroscopic powder, such as a salt, may become clumpy over time due to collection of moisture from the surrounding environment.
Deliquescent materials are sufficiently hygroscopic that they dissolve in the water they absorb, forming an aqueous solution .
Hygroscopy is essential for many plant and animal species' attainment of hydration, nutrition, reproduction and/or seed dispersal . Biological evolution created hygroscopic solutions for water harvesting, filament tensile strength, bonding and passive motion – natural solutions being considered in future biomimetics . [ 1 ] [ 2 ]
The word hygroscopy ( / h aɪ ˈ ɡ r ɒ s k ə p i / ) uses combining forms of hygro- (for moisture or humidity) and -scopy . Unlike any other -scopy word, it no longer refers to a viewing or imaging mode. It did begin that way, with the word hygroscope referring in the 1790s to measuring devices for humidity level. These hygroscopes used materials, such as certain animal hairs, that appreciably changed shape and size when they became damp. Such materials were then said to be hygroscopic because they were suitable for making a hygroscope. Eventually, the word hygroscope ceased to be used for any such instrument in modern usage , but the word hygroscopic (tending to retain moisture) lived on, and thus also hygroscopy (the ability to do so). Nowadays an instrument for measuring humidity is called a hygrometer ( hygro- + -meter ).
Early hygroscopy literature began circa 1880. [ 3 ] Studies by Victor Jodin ( Annales Agronomiques , October 1897) focused on the biological properties of hygroscopicity. [ 4 ] He noted pea seeds, both living and dead (without germinative capacity), responded similarly to atmospheric humidity, their weight increasing or decreasing in relation to hygrometric variation.
Marcellin Berthelot viewed hygroscopicity from the physical side, as a physico-chemical process. Berthelot's principle of reversibility (briefly, that water dried from plant tissue could be restored hygroscopically) was published in "Recherches sur la desiccation des plantes et des tissues végétaux; conditions d'équilibre et de réversibilité" ( Annales de Chimie et de Physique , April 1903). [ 4 ]
Léo Errera viewed hygroscopicity from perspectives of the physicist and the chemist. [ 4 ] His memoir "Sur l'Hygroscopicité comme cause de l'action physiologique à distance" ( Recueil de l'lnstitut Botanique Léo Errera, Université de Bruxelles , tome vi., 1906) provided a hygroscopy definition that remains valid to this day. Hygroscopy is "exhibited in the most comprehensive sense, as displayed
Hygroscopic substances include cellulose fibers (such as cotton and paper), sugar , caramel , honey , glycerol , ethanol , wood , methanol , sulfuric acid , many fertilizer chemicals, many salts and a wide variety of other substances. [ 5 ]
If a compound dissolves in water, then it is considered to be hydrophilic . [ 6 ]
Zinc chloride and calcium chloride , as well as potassium hydroxide and sodium hydroxide (and many different salts ), are so hygroscopic that they readily dissolve in the water they absorb: this property is called deliquescence . Not only is sulfuric acid hygroscopic in concentrated form but its solutions are hygroscopic down to concentrations of 10% v/v or below. A hygroscopic material will tend to become damp and cakey when exposed to moist air (such as the salt inside salt shakers during humid weather).
Because of their affinity for atmospheric moisture , desirable hygroscopic materials might require storage in sealed containers. Some hygroscopic materials, e.g., sea salt and sulfates, occur naturally in the atmosphere and serve as cloud seeds , cloud condensation nuclei (CCNs). Being hygroscopic, their microscopic particles provide an attractive surface for moisture vapour to condense and form droplets. Modern-day human cloud seeding efforts began in 1946. [ 7 ]
When added to foods or other materials for the express purpose of maintaining moisture content , hygroscopic materials are known as humectants .
Materials and compounds exhibit different hygroscopic properties, and this difference can lead to detrimental effects, such as stress concentration in composite materials . The volume of a particular material or compound is affected by ambient moisture and may be considered its coefficient of hygroscopic expansion (CHE) (also referred to as CME, or coefficient of moisture expansion) or the coefficient of hygroscopic contraction (CHC)—the difference between the two terms being a difference in sign convention.
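By analogy with the coefficient of thermal expansion, the CHE/CME is commonly used in a linear relation between hygroscopic strain and moisture uptake (a standard engineering approximation; the symbols here are generic):

```latex
\varepsilon_h = \beta \, \Delta M
\qquad \text{(cf. thermal expansion } \varepsilon_t = \alpha \, \Delta T \text{)}
```

where $\varepsilon_h$ is the hygroscopic strain, $\beta$ the coefficient of hygroscopic (moisture) expansion, and $\Delta M$ the change in moisture content.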
Differences in hygroscopy can be observed in plastic-laminated paperback book covers—often, in a suddenly moist environment, the book cover will curl away from the rest of the book. The unlaminated side of the cover absorbs more moisture than the laminated side and increases in area, causing a stress that curls the cover toward the laminated side. This is similar to the function of a thermostat's bimetallic strip . Inexpensive dial-type hygrometers make use of this principle using a coiled strip.

Deliquescence is the process by which a substance absorbs moisture from the atmosphere until it dissolves in the absorbed water and forms a solution. Deliquescence occurs when the vapour pressure of the solution that is formed is less than the partial pressure of water vapour in the air.
While some similar forces are at work here, it is different from capillary attraction , a process where glass or other solid substances attract water, but are not changed in the process (e.g., water molecules do not become suspended between the glass molecules).
Deliquescence, like hygroscopy, is also characterized by a strong affinity for water and tendency to absorb moisture from the atmosphere if exposed to it. Unlike hygroscopy, however, deliquescence involves absorbing sufficient water to form an aqueous solution . Most deliquescent materials are salts , including calcium chloride , magnesium chloride , zinc chloride , ferric chloride , carnallite , potassium carbonate , potassium phosphate , ferric ammonium citrate , ammonium nitrate , potassium hydroxide , and sodium hydroxide . Owing to their very high affinity for water, these substances are often used as desiccants , which is also an application for concentrated sulfuric and phosphoric acids . Some deliquescent compounds are used in the chemical industry to remove water produced by chemical reactions (see drying tube ). [ 8 ]
Hygroscopy appears in both plant and animal kingdoms, the latter benefiting via hydration and nutrition. Some amphibian species secrete a hygroscopic mucus that harvests moisture from the air. Orb web building spiders produce hygroscopic secretions that preserve the stickiness and adhesion force of their webs. One aquatic reptile species is able to travel beyond aquatic limitations, onto land, due to its hygroscopic integument .
Plants benefit from hygroscopy via hydration [ 1 ] and reproduction – demonstrated by convergent evolution examples. [ 2 ] Hygroscopic movement (hygrometrically activated movement) is integral in fertilization, seed/spore release, dispersal and germination. The phrase "hygroscopic movement" originated in 1904's " Vorlesungen Über Pflanzenphysiologie ", translated in 1907 as "Lectures on Plant Physiology" ( Ludwig Jost and R.J. Harvey Gibson , Oxford, 1907). [ 9 ] When movement becomes larger scale, affected plant tissues are colloquially termed hygromorphs. [ 10 ] Hygromorphy is a common mechanism of seed dispersal as the movement of dead tissues respond to hygrometric variation, [ 11 ] e.g. spore release from the fertile margins of Onoclea sensibilis . Movement occurs when plant tissue matures, dies and desiccates, cell walls drying, shrinking; [ 12 ] and also when humidity re-hydrates plant tissue, cell walls enlarging, expanding. [ 11 ] The direction of the resulting force depends upon the architecture of the tissue and is capable of producing bending, twisting or coiling movements.
Typical of hygroscopic movement are plant tissues with "closely packed long (columnar) parallel thick-walled cells (that) respond by expanding longitudinally when exposed to humidity and shrinking when dried (Reyssat et al., 2009)". [ 10 ] Cell orientation, pattern structure (annular, planar, bi-layered or tri-layered) and the effects of the opposite-surface's cell orientation control the hygroscopic reaction. Moisture responsive seed encapsulations rely on valves opening when exposed to wetting or drying; discontinuous tissue structures provide such predetermined breaking points (sutures), often implemented via reduced cell wall thickness or seams within bi- or tri-layered structures. [ 2 ] Graded distributions varying in density and/or cell orientation focus hygroscopic movement, frequently observed as biological actuators (a hinge function); e.g. pinecones ( Pinus spp. ), the ice plant ( Aizoaceae spp. ) and the wheat awn ( Triticum spp. ), [ 20 ] described below.
Two angiospermae families have similar methods of dispersal, though method of implementation varies within family: Geraniaceae family examples are the common stork's-bill ( Erodium cicutarium ) and geraniums ( Pelargonium sp. ); Poaceae family, Needle-and-Thread ( Hesperostipa comata ) and wheat ( Triticum spp. ). All rely upon a bi-layered parallel fiber hygroscopic cell physiology to control the awn's movement for dispersal and self-burial of seeds. [ 2 ] Alignment of cellulose fibrils in the awn's controlling cell wall determines direction of movement. If fiber alignments are tilted, non-parallel venation, a helix develops and awn movement becomes twisting (coiling) instead of bending; [ 21 ] e.g. coiling occurs in awns of Erodium , [ 2 ] and Hesperostipa . [ 29 ]
Hygroscopicity is a general term used to describe a material's ability to absorb moisture from the environment. [ 31 ] There is no standard quantitative definition of hygroscopicity, so generally the qualification of hygroscopic and non-hygroscopic is determined on a case-by-case basis. For example, pharmaceuticals that pick up more than 5% by mass, between 40 and 90% relative humidity at 25 °C, are described as hygroscopic, while materials that pick up less than 1% under the same conditions are regarded as non-hygroscopic. [ 32 ]
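The thresholds quoted above lend themselves to a simple classification rule; the sketch below encodes them in Python (the "intermediate" label is an assumption, since no standard term is given for the range between 1% and 5%):

```python
def classify_hygroscopicity(mass_gain_percent):
    """Classify a pharmaceutical material by its moisture uptake (percent mass gained
    between 40% and 90% relative humidity at 25 degrees C), using the thresholds above.
    The 'intermediate' label for the 1-5% range is an assumption made for this sketch."""
    if mass_gain_percent > 5.0:
        return "hygroscopic"
    if mass_gain_percent < 1.0:
        return "non-hygroscopic"
    return "intermediate"

print(classify_hygroscopicity(7.2))   # hygroscopic
print(classify_hygroscopicity(0.4))   # non-hygroscopic
```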
The amount of moisture held by hygroscopic materials is usually proportional to the relative humidity. Tables containing this information can be found in many engineering handbooks and are also available from suppliers of various materials and chemicals.
Hygroscopy also plays an important role in the engineering of plastic materials. Some plastics, e.g. nylon , are hygroscopic while others are not.
Many engineering polymers are hygroscopic, including nylon , ABS , polycarbonate , cellulose , carboxymethyl cellulose , and poly(methyl methacrylate) (PMMA, plexiglas , perspex ).
Other polymers, such as polyethylene and polystyrene , do not normally absorb much moisture, but are able to carry significant moisture on their surface when exposed to liquid water. [ 33 ]
Type-6 nylon (a polyamide ) can absorb up to 9.5% of its weight in moisture. [ 34 ]
The hygroscopic properties of different substances are often exploited in baking to achieve differences in moisture content and, hence, crispiness. Different varieties of sugars are used in different quantities to produce a crunchy, crisp cookie (British English: biscuit) versus a soft, chewy cake. Sugars such as honey , brown sugar , and molasses are examples of sweeteners used to create moister and chewier cakes. [ 35 ]
Several hygroscopic approaches to harvest atmospheric moisture have been demonstrated and require further development to assess their potentials as a viable water source.
Hygroscopic glues are candidates for commercial development. The most common cause of synthetic glue failure at high humidity is attributed to water lubricating the contact area, impacting bond quality. Hygroscopic glues may allow more durable adhesive bonds by absorbing (pulling) inter-facial environmental moisture away from the glue-substrate boundary. [ 14 ]
Integrating hygroscopic movement into smart building designs and systems is frequently mentioned, e.g. self-opening windows. [ 20 ] Such movement is appealing, an adaptive, self-shaping response that requires no external force or energy. However, capabilities of current material choices are limited. Biomimetic design of hygromorphic wood composites and hygro-actuated building systems have been modeled and evaluated. [ 37 ] | https://en.wikipedia.org/wiki/Hygroscopy |
Hylotheism (from Gk. hyle , 'matter' and theos , 'God') is the belief that matter and God are the same, so in other words, defining God as matter. [ 1 ] [ 2 ]
The American Lutheran Church–Missouri Synod defines hylotheism as "Theory equating matter with God or merging one into the other", which it states is a "Synonym for pantheism and materialism ". [ 3 ]
| https://en.wikipedia.org/wiki/Hylotheism
Hyman Bass ( / ˈ h aɪ m ən b æ s / ; born October 5, 1932) [ 1 ] is an American mathematician , known for work in algebra and in mathematics education . From 1959 to 1998 he was Professor in the Mathematics Department at Columbia University . He is currently the Samuel Eilenberg Distinguished University Professor of Mathematics and Professor of Mathematics Education at the University of Michigan .
Born to a Jewish family in Houston , Texas , [ 1 ] he earned his B.A. in 1955 from Princeton University and his Ph.D. in 1959 from the University of Chicago . His thesis, titled Global dimensions of rings , was written under the supervision of Irving Kaplansky .
He has held visiting appointments at the Institute for Advanced Study in Princeton, New Jersey , [ 2 ] Institut des Hautes Études Scientifiques and École Normale Supérieure (Paris), Tata Institute of Fundamental Research (Bombay), University of Cambridge , University of California, Berkeley , University of Rome , IMPA (Rio), National Autonomous University of Mexico , Mittag-Leffler Institute (Stockholm), and the University of Utah . He was president of the American Mathematical Society .
Bass formerly chaired the Mathematical Sciences Education Board (1992–2000) at the National Academy of Sciences , and the Committee on Education of the American Mathematical Society. He was the President of ICMI from 1999 to 2006. [ 3 ] Since 1996 he has been collaborating with Deborah Ball and her research group at the University of Michigan on the mathematical knowledge and resources entailed in the teaching of mathematics at the elementary level. He has worked to build bridges between diverse professional communities and stakeholders involved in mathematics education .
His research interests have been in algebraic K-theory , commutative algebra and algebraic geometry , algebraic groups , geometric methods in group theory , and ζ functions on finite simple graphs .
Bass was elected as a member of the National Academy of Sciences in 1982. [ 4 ] In 1983, he was elected a Fellow of the American Academy of Arts and Sciences . [ 5 ] In 2002 he was elected a fellow of The World Academy of Sciences . [ 6 ] He is a 2006 National Medal of Science laureate. [ 7 ] In 2009 he was elected a member of the National Academy of Education. [ 8 ] In 2012 he became a fellow of the American Mathematical Society . [ 9 ] He was awarded the Mary P. Dolciani Award in 2013. [ 10 ] | https://en.wikipedia.org/wiki/Hyman_Bass |
A hymenophore refers to the hymenium -bearing structure of a fungal fruiting body . [ 1 ] Hymenophores can be smooth surfaces, lamellae , folds, tubes , or teeth. The term was coined by Robert Hooke in 1665.
| https://en.wikipedia.org/wiki/Hymenophore
Hyper-IL-6 is a designer cytokine , which was generated by the German biochemist Stefan Rose-John. [ 1 ] Hyper-IL-6 is a fusion protein of the four-helical cytokine Interleukin -6 and the soluble Interleukin-6 receptor which are covalently linked by a flexible peptide linker. [ 1 ] Interleukin-6 on target cells binds to a membrane bound Interleukin-6 receptor. [ 2 ] The complex of Interleukin-6 and the Interleukin-6 receptor associate with a second receptor protein called gp130, which dimerises and initiates intracellular signal transduction . [ 3 ] Gp130 is expressed on all cells of the human body whereas the Interleukin-6 receptor is only found on few cells such as hepatocytes and some leukocytes. [ 4 ] Neither Interleukin-6 nor the Interleukin-6 receptor have a measurable affinity for gp130. [ 5 ] Therefore, cells, which only express gp130 but no Interleukin-6 receptor are not responsive to Interleukin-6. [ 5 ] It was found, however, that the membrane-bound Interleukin-6 receptor can be cleaved from the cell membrane generating a soluble Interleukin-6 receptor. [ 6 ] The soluble Interleukin-6 receptor can bind the ligand Interleukin-6 with similar affinity as the membrane-bound Interleukin-6 receptor and the complex of Interleukin-6 and the soluble Interleukin-6 receptor can bind to gp130 on cells, which only express gp130 but no Interleukin-6 receptor. [ 7 ] The mode of signaling via the soluble Interleukin-6 receptor has been named Interleukin-6 trans-signaling whereas Interleukin-6 signaling via the membrane-bound Interleukin-6 receptor is referred to as Interleukin-6 classic signaling. [ 8 ] Therefore, the generation of the soluble Interleukin-6 receptor enables cells to respond to Interleukin-6, which in the absence of soluble Interleukin-6 receptor would be completely unresponsive to the cytokine. [ 8 ]
In order to generate a molecular tool to discriminate between Interleukin-6 classic signaling and Interleukin-6 trans-signaling, a cDNA coding for human Interleukin-6 and a cDNA coding for the human soluble Interleukin-6 receptor were connected by a cDNA coding for a 13-amino-acid linker, which was long enough to bridge the 40 Å distance between the COOH terminus of the soluble Interleukin-6 receptor and the NH2 terminus of human Interleukin-6. [ 9 ] The generated cDNA was expressed in yeast cells and in mammalian cells. [ 10 ]
Hyper-IL-6 has been used to test which cells depend on Interleukin-6 trans-signaling in their response to the cytokine Interleukin-6. To this end, cells were treated with Interleukin-6 and, alternatively, with Hyper-IL-6. Cells that respond to Interleukin-6 alone express an Interleukin-6 receptor, whereas cells that respond only to Hyper-IL-6 but not to Interleukin-6 alone depend on Interleukin-6 trans-signaling for their response to the cytokine. [ 11 ] It turned out that hematopoietic stem cells, [ 12 ] neural cells, [ 13 ] smooth muscle cells [ 14 ] and endothelial cells [ 15 ] are typical target cells of Interleukin-6 trans-signaling.
The Hyper-IL-6 protein has also been used to explore the physiologic role of Interleukin-6 trans-signaling in vivo. It turned out that this signaling mode was involved in many types of inflammation [ 16 ] and cancer. [ 17 ]
Hyper-IL-6 has helped to establish the concept of Interleukin-6 trans-signaling. [ 18 ] Interleukin-6 trans-signaling mediates the pro-inflammatory activities of Interleukin-6 whereas Interleukin-6 classic signaling governs the protective and regenerative Interleukin-6 activities. [ 19 ] Recently, in breast cancer patients, it was shown with the help of Hyper-IL-6 that IL-6 trans-signaling via phosphoinositide 3-kinase signaling activates disseminated cancer cells long before metastases are formed. [ 20 ] In addition, it was demonstrated in mice that Hyper-IL-6 transneuronal delivery enabled functional recovery after severe spinal cord injury. [ 21 ] | https://en.wikipedia.org/wiki/Hyper-IL-6
Hyper-encryption is a form of encryption invented by Michael O. Rabin which uses a high-bandwidth source of public random bits , together with a secret key that is shared by only the sender and recipient(s) of the message. [ 1 ] It uses the assumptions of Ueli Maurer 's bounded-storage model as the basis of its secrecy. Although everyone can see the data, decryption by adversaries without the secret key is still not feasible, because of the space limitations of storing enough data to mount an attack against the system.
Unlike almost all other cryptosystems except the one-time pad , hyper-encryption can be proved to be information-theoretically secure , provided the storage bound cannot be surpassed. Moreover, if the necessary public information cannot be stored at the time of transmission, the plaintext can be shown to be impossible to recover, regardless of the computational capacity available to an adversary in the future, even if they have access to the secret key at that future time.
A highly energy-efficient implementation of a hyper-encryption chip was demonstrated by Krishna Palem et al. using the Probabilistic CMOS or PCMOS technology and was shown to be ~205 times more efficient in terms of Energy-Performance-Product. [ 2 ] [ 3 ]
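The bounded-storage-model idea underlying hyper-encryption can be sketched as follows: a shared secret key selects positions in a long public random stream, and the selected bits are used as a one-time pad. The Python sketch below is a toy illustration only, not Rabin's actual protocol; the key-to-position derivation, stream length, and other details are illustrative assumptions:

```python
import hashlib
import secrets

def pad_positions(key: bytes, n_bits: int, stream_len: int):
    """Derive n_bits positions in the public random stream from the shared key.
    (Toy key-to-position derivation; real bounded-storage schemes specify this carefully.)"""
    positions = []
    counter = 0
    while len(positions) < n_bits:
        digest = hashlib.sha256(key + counter.to_bytes(4, "big")).digest()
        for i in range(0, len(digest), 4):
            if len(positions) == n_bits:
                break
            positions.append(int.from_bytes(digest[i:i + 4], "big") % stream_len)
        counter += 1
    return positions

def xor_with_stream(bits, public_stream, key):
    """XOR the given bits with stream bits sampled at key-derived positions (a one-time pad
    whose pad material comes from the public broadcast, not from the short secret key)."""
    positions = pad_positions(key, len(bits), len(public_stream))
    return [b ^ public_stream[p] for b, p in zip(bits, positions)]

# Toy usage: both parties sample the public random stream while it is broadcast;
# only holders of `key` know which positions form the pad.
key = secrets.token_bytes(16)
public_stream = [secrets.randbelow(2) for _ in range(1_000_000)]  # stands in for the broadcast
message = [1, 0, 1, 1, 0, 0, 1, 0]
ciphertext = xor_with_stream(message, public_stream, key)
assert xor_with_stream(ciphertext, public_stream, key) == message
```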
| https://en.wikipedia.org/wiki/Hyper-encryption
The Hyper Text Coffee Pot Control Protocol ( HTCPCP ) is a facetious communication protocol for controlling, monitoring, and diagnosing coffee pots . It is specified in RFC 2324 , published on 1 April 1998 as an April Fools' Day RFC , [ 2 ] as part of an April Fools prank . [ 3 ] An extension, HTCPCP-TEA, was published as RFC 7168 on 1 April 2014 [ 4 ] to support brewing teas, also as an April Fools' Day RFC; it makes use of the 418 error response.
RFC 2324 was written by Larry Masinter , who describes it as a satire, saying "This has a serious purpose – it identifies many of the ways in which HTTP has been extended inappropriately." [ 5 ] The wording of the protocol made it clear that it was not entirely serious; for example, it notes that "there is a strong, dark, rich requirement for a protocol designed espressoly for the brewing of coffee".
Despite the joking nature of its origins, or perhaps because of it, the protocol has remained as a minor presence online. The editor Emacs includes a fully functional client-side implementation of it, [ 6 ] and a number of bug reports exist complaining about Mozilla 's lack of support for the protocol. [ 7 ] Ten years after the publication of HTCPCP, the Web-Controlled Coffee Consortium (WC3) published a first draft of "HTCPCP Vocabulary in RDF " [ 8 ] in parody of the World Wide Web Consortium (W3C)'s "HTTP Vocabulary in RDF". [ 9 ]
On April 1, 2014, RFC 7168 extended HTCPCP to fully handle teapots. [ 4 ]
HTCPCP is an extension of HTTP . HTCPCP requests are identified with the Uniform Resource Identifier (URI) scheme coffee (or the corresponding word in any other of the 29 listed languages) and contain several additions to the HTTP methods.
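For illustration (not part of the protocol text itself), a request in the style defined by RFC 2324 can be issued from Python's standard http.client module; the host name and the Accept-Additions value below are hypothetical, and a real coffee pot server would be required for a response:

```python
import http.client

# Hypothetical coffee-pot host; RFC 2324 identifies resources with the "coffee:" URI
# scheme, while the request itself is carried over an HTTP-style connection.
conn = http.client.HTTPConnection("pot.example.com", 80, timeout=5)
try:
    conn.request(
        "BREW",                      # request method added by RFC 2324
        "/pot-0",
        body="start",                # coffee-message-body: "start" or "stop"
        headers={
            "Content-Type": "message/coffeepot",
            "Accept-Additions": "Whole-milk",   # header defined in RFC 2324; value illustrative
        },
    )
    response = conn.getresponse()
    print(response.status, response.reason)     # a teapot would be expected to answer 418
except OSError as exc:
    print("no coffee pot reachable:", exc)
finally:
    conn.close()
```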
It also defines four error responses , among them 418 ("I'm a teapot").
On 5 August 2017, Mark Nottingham, chairman of the IETF HTTPBIS Working Group, called for the removal of status code 418 "I'm a teapot" from the Node.js platform, a code implemented in reference to the original 418 "I'm a teapot" established in Hyper Text Coffee Pot Control Protocol. [ 12 ] On 6 August 2017, Nottingham requested that references to 418 "I'm a teapot" be removed from the programming language Go [ 13 ] and subsequently from Python 's Requests [ 14 ] and ASP.NET 's HttpAbstractions library [ 15 ] as well.
In response, 15-year-old developer Shane Brunswick created a website, save418.com, [ 16 ] and established the "Save 418 Movement", asserting that references to 418 "I'm a teapot" in different projects serve as "a reminder that the underlying processes of computers are still made by humans". Brunswick's site went viral in the hours following its publishing, garnering thousands of upvotes on the social platform Reddit , [ 17 ] and causing the mass adoption of the "#save418" Twitter hashtag he introduced on his site. Heeding the public outcry, Node.js, Go, Python's Requests, and ASP.NET's HttpAbstractions library decided against removing 418 "I'm a teapot" from their respective projects. The unanimous support from the aforementioned projects and the general public prompted Nottingham to begin the process of having 418 marked as a reserved HTTP status code, [ 18 ] ensuring that 418 will not be replaced by an official status code for the foreseeable future.
On 5 October 2020, Python 3.9 was released with an updated HTTP library including the 418 IM_A_TEAPOT status code. [ 19 ] In the corresponding pull request, the Save 418 movement was directly cited in support of adoption. [ 20 ]
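As a sketch of how the reserved code can be used, a minimal handler built on Python's standard library (3.9 or later, where HTTPStatus.IM_A_TEAPOT is available) might answer every request with 418; the address, port, and method handling below are illustrative:

```python
from http import HTTPStatus
from http.server import BaseHTTPRequestHandler, HTTPServer

class TeapotHandler(BaseHTTPRequestHandler):
    """Minimal handler that refuses to brew coffee, answering with 418 ("I'm a teapot")."""

    def do_GET(self):
        self.send_response(HTTPStatus.IM_A_TEAPOT)   # available in the http module since Python 3.9
        self.send_header("Content-Type", "text/plain; charset=utf-8")
        self.end_headers()
        self.wfile.write(b"I'm a teapot\n")

    # BaseHTTPRequestHandler dispatches on "do_<METHOD>", so the BREW method of
    # RFC 2324 can be served by aliasing it to the same handler.
    do_BREW = do_GET

if __name__ == "__main__":
    HTTPServer(("127.0.0.1", 8080), TeapotHandler).serve_forever()
```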
The status code 418 is sometimes returned by servers when blocking a request, instead of the more appropriate 403 Forbidden , [ 21 ] or 404 Not Found . [ 22 ]
Around the time of the 2022 Russian invasion of Ukraine , the Russian military website mil.ru returned the HTTP 418 status code when accessed from outside of Russia as a DDoS attack protection measure. [ 23 ] [ 24 ] The change was first noticed in December of 2021. [ 25 ] | https://en.wikipedia.org/wiki/Hyper_Text_Coffee_Pot_Control_Protocol |
A hyperaccumulator is a plant capable of growing in soil or water with high concentrations of metals , absorbing them through their roots, and concentrating extremely high levels of metals in their tissues. [ 1 ] [ 2 ] The metals are concentrated at levels that are toxic to closely related species not adapted to growing on the metalliferous soils. Compared to non-hyperaccumulating species, hyperaccumulator roots extract the metal from the soil at a higher rate, transfer it more quickly to their shoots, and store large amounts in leaves and roots. [ 1 ] [ 3 ] The ability to hyperaccumulate toxic metals compared to related species has been shown to be due to differential gene expression and regulation of the same genes in both plants. [ 1 ]
Hyperaccumulating plants are of interest for their ability to extract metals from the soils of contaminated sites ( phytoremediation ) to return the ecosystem to a less toxic state. The plants also hold potential to be used to mine metals from soils with very high concentrations ( phytomining ) by growing the plants, then harvesting them for the metals in their tissues. [ 1 ]
The genetic advantage of hyperaccumulation of metals may be that the toxic levels of heavy metals in leaves deter herbivores or increase the toxicity of other anti-herbivory metabolites. [ 1 ]
In most plants, metals accumulate predominantly in the roots, giving a low shoot-to-root ratio of metal concentrations. In hyperaccumulators, however, this ratio is reversed: metal concentrations are abnormally high in the leaves and much lower in the roots. [ 4 ] Metals are efficiently shuttled from the root to the shoot, an enhanced ability that protects the roots from metal toxicity. [ 4 ]
Tolerance: Research on hyperaccumulation has long faced a puzzle concerning tolerance. There are several different understandings of how tolerance relates to accumulation, but they share a few points. Evidence indicates that tolerance and accumulation are separate traits, each moderated by genetic and physiological mechanisms. [ 5 ] The physiological mechanisms associated with tolerance are classified either as exclusion, in which the movement of metals at the soil/root or root/shoot interfaces is blocked, or as accumulation, in which metals that have been rendered non-toxic are allowed into the aerial parts of the plant. [ 6 ]
Characteristics of certain physiological elements:
There are certain characteristics that are specific to certain species. For example, when presented with a low supply of zinc, Thlaspi caerulescens accumulated higher zinc concentrations than other, non-accumulator plant species. [ 7 ] Further evidence indicated that when T. caerulescens was grown on soil with an adequate amount of contamination, the species accumulated 24-60 times more zinc than Raphanus sativus (radish). [ 7 ] Additionally, the capacity to experimentally manipulate soil metal concentrations with soil amendments has allowed researchers to identify the maximum soil concentrations that hyperaccumulator species can tolerate and the minimum soil concentrations needed to reach hyperaccumulation. [ 4 ] From these findings, two distinct categories of hyperaccumulation arose: active and passive hyperaccumulation. [ 4 ] Active hyperaccumulation is attained at relatively low soil concentrations, while passive hyperaccumulation is induced by exceedingly high soil concentrations. [ 4 ]
Several gene families are involved in the processes of hyperaccumulation including upregulation of absorption and sequestration of heavy metals. [ 8 ] These hyperaccumulation genes (HA genes) are found in over 450 plant species, including the model organisms Arabidopsis and Brassicaceae . The expression of such genes is used to determine whether a species is capable of hyperaccumulation. Expression of HA genes provides the plant with capacity to uptake and sequester metals such as As, Co, Fe, Cu, Cd, Pb, Hg, Se, Mn, Zn, Mo and Ni in 100–1000x the concentration found in sister species or populations. [ 9 ]
The ability to hyperaccumulate is determined by two major factors: environmental exposure and the expression of the ZIP gene family.
Although experiments have shown that the hyperaccumulation is partially dependent on environmental exposure (i.e. only plants exposed to metal are observed with high concentrations of that metal), hyperaccumulation is ultimately dependent on the presence and upregulation of genes involved with that process. It has been shown that hyperaccumulation capacities can be inherited in Thlaspi caerulescens (Brassicaceae) and others. As there is a wide variety among hyperaccumulating species that span across different plant families, it is likely that HA genes were eco-typically selected for.
In most hyperaccumulating plants, the main mechanism for metal transport is the set of proteins encoded by genes in the ZIP family; however, other families such as the HMA, MATE, YSL and MTP families have also been observed to be involved. The ZIP gene family is a novel, plant-specific gene family that encodes Cd, Mn, Fe and Zn transporters. The ZIP family plays a role in supplying Zn to metalloproteins. [ 10 ]
In one study on Arabidopsis , it was found that the metallophyte Arabidopsis halleri expressed a member of the ZIP family that was not expressed in a non-metallophyte sister species. This gene was an iron regulated transporter (IRT-protein) that encoded several primary transporters involved with cellular uptake of cations above the concentration gradient. When this gene was transformed into yeast, hyperaccumulation was observed. [ 11 ] This suggests that overexpression of ZIP family genes that encode cation transporters is a characteristic genetic feature of hyperaccumulation.
Other gene families that have been observed ubiquitously in hyperaccumulators are the ZTP and ZNT families. A study on T. caerulescens identified the ZTP family as a plant-specific family with high sequence similarity to other zinc transporters. Both the ZTP and ZNT families, like the ZIP family, are zinc transporters. [ 12 ] In hyperaccumulating species, these genes, specifically the ZNT1 and ZNT2 alleles, are chronically overexpressed. [ 13 ]
While the precise mechanism by which these genes facilitate hyperaccumulation is unknown, expression patterns strongly correlate with individual hyperaccumulation capacity and metal exposure, implying that these gene families play a regulatory role. Because the presence and expression of zinc transporter gene families are highly prevalent in hyperaccumulators, the ability to accumulate a diverse range of heavy metals is most likely due to the zinc transporters' inability to discriminate against specific metal ions. The response of the plants to hyperaccumulation of any metal also supports this theory, as it has been observed that AhHMHA3 is expressed in hyperaccumulating individuals. AhHMHA3 has been identified as being expressed in response to, and in aid of, Zn detoxification. [ 9 ] In another study, using metallophytic and non-metallophytic Arabidopsis populations, back crosses indicated pleiotropy between Cd and Zn tolerances. [ 14 ] This response suggests that plants are unable to detect specific metals, and that hyperaccumulation is likely a result of an overexpressed Zn transportation system. [ 15 ]
The overall effect of these expression patterns has been hypothesized to assist in plant defense systems. In one hypothesis, "the elemental defense hypothesis", provided by Poschenrieder, it is suggested that the expression of these genes assist in antiherbivory or pathogen defenses by making tissues toxic to organisms attempting to feed on that plant. [ 10 ] Another hypothesis, "the joint hypothesis", provided by Boyd, suggests that expression of these genes assists in systemic defense. [ 16 ]
An important trait of hyperaccumulating plant species is enhanced translocation of the absorbed metal to the shoot. Metal toxicity is tolerated by plant species that are native to metalliferous soils. Exclusion, in which plants resist undue metal uptake and transport, and absorption and sequestration, in which plants pick up vast quantities of metal and pass it to the shoot, where it is accumulated, are the two basic methods for metal tolerance. [ 17 ] Hyperaccumulators are plants that have both the second technique and the ability to absorb more than 100 times higher metal concentrations than typical organisms. [ 18 ]
T. caerulescens is found mostly in Zn/Pb-rich soils, as well as serpentines and non-mineralized soils. It was discovered to be a Zn hyperaccumulator because of its ability to extract vast quantities of heavy metals from soils. [ 19 ] When grown on mildly polluted soils, a closely related species, Thlaspi ochroleucum , is a heavy metal-tolerant plant, but it accumulates much less Zn in the shoots than T. caerulescens . Thus, T. ochroleucum is a non-hyperaccumulator, while T. caerulescens , of the same genus, is a hyperaccumulator. The transfer of Zn from roots to shoots differed significantly between these two species. T. caerulescens had much higher shoot/root Zn concentration ratios than T. ochroleucum , which always had higher Zn concentrations in the roots . When Zn was withheld, the amount of Zn previously accumulated in the roots of T. caerulescens decreased even more than in T. ochroleucum , with a concomitantly greater rise in the amount of Zn in the shoots. The decrease in root Zn may be mostly due to transport to the shoots, since the amount of Zn in the shoots increased during the same time span. [ 20 ]
A heavy metal transporter cDNA mediates high-affinity Zn 2+ uptake as well as low-affinity Cd 2+ uptake. This transporter is expressed at very high levels in roots and shoots of the hyperaccumulator. According to Pence et al. (1999), overexpression of a Zn transporter gene, ZNT1, in root and shoot tissue is an essential component of the Zn hyperaccumulation trait in T. caerulescens . [ 21 ] This increased gene expression has been shown to be the basis for increased Zn 2+ uptake from the soil in T. caerulescens roots, and it is possible that the same process underpins the enhanced Zn 2+ uptake into leaf cells.
13. Souri Z, Karimi N, Luisa M. Sandalio. 2017. Arsenic Hyperaccumulation Strategies: An Overview. Frontiers in Cell and Developmental Biology. 5, 67. DOI: 10.3389/fcell.2017.00067. | https://en.wikipedia.org/wiki/Hyperaccumulator |
This list covers known nickel hyperaccumulators , accumulators or plant species tolerant to nickel.
See also:
Notes | https://en.wikipedia.org/wiki/Hyperaccumulators_table_–_2:_Nickel |
This list covers hyperaccumulators , plant species which accumulate, or are tolerant of, radionuclides ( Cd , Cs-137 , Co , Pu-238 , Ra , Sr , U-234 , 235 , 238 ), hydrocarbons and organic solvents ( Benzene , BTEX , DDT , Dieldrin , Endosulfan , Fluoranthene , MTBE , PCB , PCNB , TCE and by-products), and inorganic compounds ( Potassium ferrocyanide ).
See also: | https://en.wikipedia.org/wiki/Hyperaccumulators_table_–_3 |
A hyperbaric stretcher is a lightweight pressure vessel for human occupancy (PVHO) designed to accommodate one person undergoing initial hyperbaric treatment during or while awaiting transport or transfer to a treatment chamber . [ 1 ] [ 2 ]
Originally developed as advanced diving equipment, it has since been used for other medical conditions such as altitude sickness , carbon monoxide poisoning and smoke inhalation , air and gas embolism and is viewed as potentially important equipment for the early treatment of blast related injuries within the combat zone with the anticipated benefit that traumatic brain injury may not develop in the ensuing months. [ 2 ]
There is currently only one unit approved under the US National Standard - ASME PVHO-1 (2007) and Case 12. This unit, known as the SOS Hyperlite or by the US military as the EEHS (Emergency Evacuation Hyperbaric Stretcher) is, or has been, in service with the US Army, Navy, Air Force, Coast Guard, NOAA and NASA as well as being supplied to other Government Agencies. The EEHS has a length of 2.26 metres (89 inches) and a diameter of 59 cm. (23.5 inches) and operates at a pressure of up to 2.3 bar (33 psi) above ambient pressure with a built-in safety factor of over 6:1. It is pressurised with air and the occupant breathes oxygen or air through a demand mask (BIBS) during treatment. [ 2 ]
The Hyperlite also complies with Lloyds Register and ISO 9001/2000 requirements, and is CE marked. It has applications in military, commercial, scientific, and recreational diving, and in hyperbaric medicine. It is made of flexible material and when the internal pressure matches the external pressure, it is collapsible, which can make transfer under pressure possible with relatively small hyperbaric chambers. [ 2 ]
A hyperbaric stretcher must be portable, and should be compatible with transfer under pressure to and from full size hyperbaric chambers. This can be achieved by making the unit small enough to be loaded inside the hyperbaric facility for transfer under pressure, [ 2 ] or by having a mating flange compatible with the larger chamber, by way of an adapter if necessary. Some types of treatment may be done in the hyperbaric stretcher, provided the patient is sufficiently fit for unattended recompression. [ 2 ]
| https://en.wikipedia.org/wiki/Hyperbaric_stretcher |
In finance , economics , and decision theory , hyperbolic absolute risk aversion ( HARA ) [ 1 ] : p.39, [ 2 ] : p.389, [ 3 ] [ 4 ] [ 5 ] [ 6 ] refers to a type of risk aversion that is particularly convenient to model mathematically and to obtain empirical predictions from. It refers specifically to a property of von Neumann–Morgenstern utility functions , which are typically functions of final wealth (or some related variable), and which describe a decision-maker's degree of satisfaction with the outcome for wealth. The final outcome for wealth is affected both by random variables and by decisions. Decision-makers are assumed to make their decisions (such as, for example, portfolio allocations ) so as to maximize the expected value of the utility function.
Notable special cases of HARA utility functions include the quadratic utility function , the exponential utility function , and the isoelastic utility function .
A utility function is said to exhibit hyperbolic absolute risk aversion if and only if the level of risk tolerance T ( W ) {\displaystyle T(W)} —the reciprocal of absolute risk aversion A ( W ) {\displaystyle A(W)} —is a linear function of wealth W : T ( W ) = 1 A ( W ) = W 1 − γ + b a , {\displaystyle T(W)={\frac {1}{A(W)}}={\frac {W}{1-\gamma }}+{\frac {b}{a}},}
where A ( W ) is defined as − U ″( W ) / U ′( W ). A utility function U ( W ) has this property, and thus is a HARA utility function, if and only if it has the form U ( W ) = 1 − γ γ ( a W 1 − γ + b ) γ , {\displaystyle U(W)={\frac {1-\gamma }{\gamma }}\left({\frac {aW}{1-\gamma }}+b\right)^{\gamma },}
with restrictions on wealth and the parameters such that a > 0 {\displaystyle a>0} and b + a W 1 − γ > 0. {\displaystyle b+{\frac {aW}{1-\gamma }}>0.} For a given parametrization, this restriction puts a lower bound on W if γ < 1 {\displaystyle \gamma <1} and an upper bound on W if γ > 1 {\displaystyle \gamma >1} . For the limiting case as γ {\displaystyle \gamma } → 1, L'Hôpital's rule shows that the utility function becomes linear in wealth; and for the limiting case as γ {\displaystyle \gamma } goes to 0, the utility function becomes logarithmic: U ( W ) = log ( a W + b ) {\displaystyle U(W)={\text{log}}(aW+b)} .
Absolute risk aversion is decreasing if A ′ ( W ) < 0 {\displaystyle A'(W)<0} (equivalently T '( W ) > 0), which occurs if and only if γ {\displaystyle \gamma } is finite and less than 1; this is considered the empirically plausible case, since it implies that an investor will put more funds into risky assets the more funds are available to invest. Constant absolute risk aversion occurs as γ {\displaystyle \gamma } goes to positive or negative infinity, and the particularly implausible case of increasing absolute risk aversion occurs if γ {\displaystyle \gamma } is greater than one and finite. [ 2 ]
Relative risk aversion is defined as R ( W )= WA ( W ); it is increasing if R ′ ( W ) > 0 {\displaystyle R'(W)>0} , decreasing if R ′ ( W ) < 0 {\displaystyle R'(W)<0} , and constant if R ′ ( W ) = 0 {\displaystyle R'(W)=0} . Thus relative risk aversion is increasing if b > 0 (for γ ≠ 1 {\displaystyle \gamma \neq 1} ), constant if b = 0, and decreasing if b < 0 (for − ∞ < γ < 1 {\displaystyle -\infty <\gamma <1} ). [ 2 ]
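As a sanity check on the statements above, the following sketch recomputes A ( W ), T ( W ) and R ( W ) symbolically from the HARA form quoted earlier. It is an illustration using sympy, not part of the cited literature; the variable names are arbitrary.

```python
import sympy as sp

W, a, b, gamma = sp.symbols('W a b gamma', positive=True)

# HARA utility in the form quoted above (gamma not equal to 0 or 1).
U = (1 - gamma) / gamma * (a * W / (1 - gamma) + b) ** gamma

A = -sp.diff(U, W, 2) / sp.diff(U, W)      # absolute risk aversion -U''/U'
T = sp.simplify(1 / A)                     # risk tolerance

print(T)                                   # expect b/a + W/(1 - gamma), up to formatting
print(sp.simplify(sp.diff(T, W, 2)))       # 0: risk tolerance is linear in wealth

R = sp.simplify(W * A)                     # relative risk aversion R(W) = W*A(W)
print(sp.simplify(sp.diff(R, W).subs(b, 0)))   # 0 when b = 0: constant relative risk aversion
```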
If all investors have HARA utility functions with the same exponent, then in the presence of a risk-free asset a two-fund monetary separation theorem results: [ 7 ] every investor holds the available risky assets in the same proportions as do all other investors, and investors differ from each other in their portfolio behavior only with regard to the fraction of their portfolios held in the risk-free asset rather than in the collection of risky assets.
Moreover, if an investor has a HARA utility function and a risk-free asset is available, then the investor's demands for the risk-free asset and all risky assets are linear in initial wealth. [ 7 ]
In the capital asset pricing model , there exists a representative investor utility function depending on the individual investors' utility functions and wealth levels, independent of the assets available, if and only if all investors have HARA utility functions with the same exponent. The representative utility function depends on the distribution of wealth, and one can describe market behavior as if there were a single investor with the representative utility function. [ 1 ]
With a complete set of state-contingent securities , a sufficient condition for security prices in equilibrium to be independent of the distribution of initial wealth holdings is that all investors have HARA utility functions with identical exponent and identical rate of time preference between beginning-of-period and end-of-period consumption. [ 8 ]
In a discrete time dynamic portfolio optimization context, under HARA utility optimal portfolio choice involves partial myopia if there is a risk-free asset and there is serial independence of asset returns: to find the optimal current-period portfolio, one needs to know no future distributional information about the asset returns except the future risk-free returns. [ 3 ]
With asset returns that are independently and identically distributed through time and with a risk-free asset, risky asset proportions are independent of the investor's remaining lifetime. [ 1 ] : ch.11
With asset returns whose evolution is described by Brownian motion and which are independently and identically distributed through time, and with a risk-free asset, one can obtain an explicit solution for the demand for the unique optimal mutual fund , and that demand is linear in initial wealth. [ 2 ] | https://en.wikipedia.org/wiki/Hyperbolic_absolute_risk_aversion |
In geometry , hyperbolic angle is a real number determined by the area of the corresponding hyperbolic sector of xy = 1 in Quadrant I of the Cartesian plane . The hyperbolic angle parametrizes the unit hyperbola , which has hyperbolic functions as coordinates. In mathematics, hyperbolic angle is an invariant measure as it is preserved under hyperbolic rotation .
The hyperbola xy = 1 is rectangular with semi-major axis 2 {\displaystyle {\sqrt {2}}} , analogous to the circular angle equaling the area of a circular sector in a circle with radius 2 {\displaystyle {\sqrt {2}}} .
Hyperbolic angle is used as the independent variable for the hyperbolic functions sinh, cosh, and tanh, because these functions may be premised on hyperbolic analogies to the corresponding circular (trigonometric) functions by regarding a hyperbolic angle as defining a hyperbolic triangle .
The parameter thus becomes one of the most useful in the calculus of real variables.
Consider the rectangular hyperbola { ( x , 1 x ) : x > 0 } {\displaystyle \textstyle \{(x,{\frac {1}{x}}):x>0\}} , and (by convention) pay particular attention to the branch x > 1 {\displaystyle x>1} .
First define: the hyperbolic angle in standard position is the angle at (0, 0) between the ray to (1, 1) and the ray to ( x , 1/ x ) , where x > 1 . The magnitude of this angle is defined to be the area of the corresponding hyperbolic sector, which works out to be ln x .
Note that, because of the role played by the natural logarithm, the hyperbolic angle is unbounded (unlike the circular angle), and the angles to x 1 {\displaystyle x_{1}} and x 2 {\displaystyle x_{2}} add to give the angle to x 1 x 2 {\displaystyle x_{1}x_{2}} , just as logarithms turn multiplication into addition.
Finally, extend the definition of hyperbolic angle to that subtended by any interval on the hyperbola. Suppose a , b , c , d {\displaystyle a,b,c,d} are positive real numbers such that a b = c d = 1 {\displaystyle ab=cd=1} and c > a > 1 {\displaystyle c>a>1} , so that ( a , b ) {\displaystyle (a,b)} and ( c , d ) {\displaystyle (c,d)} are points on the hyperbola x y = 1 {\displaystyle xy=1} and determine an interval on it. Then the squeeze mapping f : ( x , y ) → ( b x , a y ) {\displaystyle \textstyle f:(x,y)\to (bx,ay)} maps the angle ∠ ( ( a , b ) , ( 0 , 0 ) , ( c , d ) ) {\displaystyle \angle \!\left((a,b),(0,0),(c,d)\right)} to the standard position angle ∠ ( ( 1 , 1 ) , ( 0 , 0 ) , ( b c , a d ) ) {\displaystyle \angle \!\left((1,1),(0,0),(bc,ad)\right)} . By the result of Gregoire de Saint-Vincent , the hyperbolic sectors determined by these angles have the same area, which is taken to be the magnitude of the angle. This magnitude is ln ( b c ) = ln ( c / a ) = ln c − ln a {\displaystyle \operatorname {ln} {(bc)}=\operatorname {ln} (c/a)=\operatorname {ln} c-\operatorname {ln} a} .
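A small numeric sketch of the two claims just made: the sector of xy = 1 between the rays to ( a , 1/ a ) and ( c , 1/ c ) has area ln( c / a ), and a squeeze mapping leaves that area unchanged. The quadrature routine and the sample values are illustrative choices.

```python
import math

def sector_area(a, c, n=100_000):
    """Area of the hyperbolic sector of xy = 1 between the rays to (a, 1/a) and
    (c, 1/c), via (1/2) * integral of (x dy - y dx) along the arc; the radii
    through the origin contribute nothing because x dy - y dx vanishes there."""
    # On the arc x = t, y = 1/t:  x dy - y dx = -2 dt / t  (sign is orientation only)
    h = (c - a) / n
    return 0.5 * sum((2.0 / (a + (i + 0.5) * h)) * h for i in range(n))  # midpoint rule

a, c = 1.5, 6.0
print(sector_area(a, c))             # ~ 1.3863
print(math.log(c / a))               # ln 4 ~ 1.3863

k = 3.0                              # squeeze (x, y) -> (kx, y/k) sends the sector to
print(sector_area(k * a, k * c))     # another sector of the same area
```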
A unit circle x 2 + y 2 = 1 {\displaystyle x^{2}+y^{2}=1} has a circular sector with an area half of the circular angle in radians. Analogously, a unit hyperbola x 2 − y 2 = 1 {\displaystyle x^{2}-y^{2}=1} has a hyperbolic sector with an area half of the hyperbolic angle.
There is also a projective resolution between circular and hyperbolic cases: both curves are conic sections , and hence are treated as projective ranges in projective geometry . Given an origin point on one of these ranges, other points correspond to angles. The idea of addition of angles, basic to science, corresponds to addition of points on one of these ranges as follows:
Circular angles can be characterized geometrically by the property that if two chords P 0 P 1 and P 0 P 2 subtend angles L 1 and L 2 at the centre of a circle, their sum L 1 + L 2 is the angle subtended by a chord P 0 Q , where P 0 Q is required to be parallel to P 1 P 2 .
The same construction can also be applied to the hyperbola. If P 0 is taken to be the point (1, 1) , P 1 the point ( x 1 , 1/ x 1 ) , and P 2 the point ( x 2 , 1/ x 2 ) , then the parallel condition requires that Q be the point ( x 1 x 2 , 1/( x 1 x 2 )) . It thus makes sense to define the hyperbolic angle from P 0 to an arbitrary point on the curve as a logarithmic function of the point's value of x . [ 1 ] [ 2 ]
Whereas in Euclidean geometry moving steadily in an orthogonal direction to a ray from the origin traces out a circle, in a pseudo-Euclidean plane steadily moving orthogonally to a ray from the origin traces out a hyperbola. In Euclidean space, the multiple of a given angle traces equal distances around a circle while it traces exponential distances upon the hyperbolic line. [ 3 ]
Both circular and hyperbolic angle provide instances of an invariant measure . Arcs with an angular magnitude on a circle generate a measure on certain measurable sets on the circle whose magnitude does not vary as the circle turns or rotates . For the hyperbola the turning is by squeeze mapping , and the hyperbolic angle magnitudes stay the same when the plane is squeezed by a mapping ( x , y ) ↦ ( rx , y / r ) {\displaystyle (x,y)\mapsto (rx,\ y/r)} with r > 0 .
There is also a curious relation to a hyperbolic angle and the metric defined on Minkowski space . Just as two dimensional Euclidean geometry defines its line element as d s 2 = d x 2 + d y 2 , {\displaystyle ds^{2}=dx^{2}+dy^{2},}
the line element on Minkowski space is [ 4 ] d s 2 = d x 2 − d y 2 . {\displaystyle ds^{2}=dx^{2}-dy^{2}.}
Consider a curve embedded in two dimensional Euclidean space, x = f ( t ) , y = g ( t ) , {\displaystyle x=f(t),\quad y=g(t),}
where the parameter t {\displaystyle t} is a real number that runs between a {\displaystyle a} and b {\displaystyle b} ( a ⩽ t < b {\displaystyle a\leqslant t<b} ). The arclength of this curve in Euclidean space is computed as: S = ∫ a b ( d x d t ) 2 + ( d y d t ) 2 d t . {\displaystyle S=\int _{a}^{b}{\sqrt {\left({\frac {dx}{dt}}\right)^{2}+\left({\frac {dy}{dt}}\right)^{2}}}\,dt.}
If x 2 + y 2 = 1 {\displaystyle x^{2}+y^{2}=1} defines a unit circle, a single parameterized solution set to this equation is x = cos t {\displaystyle x=\cos t} and y = sin t {\displaystyle y=\sin t} . Letting 0 ⩽ t < θ {\displaystyle 0\leqslant t<\theta } , computing the arclength S {\displaystyle S} gives S = θ {\displaystyle S=\theta } . Now doing the same procedure, except replacing the Euclidean element with the Minkowski line element, S = ∫ a b ( d x d t ) 2 − ( d y d t ) 2 d t , {\displaystyle S=\int _{a}^{b}{\sqrt {\left({\frac {dx}{dt}}\right)^{2}-\left({\frac {dy}{dt}}\right)^{2}}}\,dt,}
and defining a unit hyperbola as y 2 − x 2 = 1 {\displaystyle y^{2}-x^{2}=1} with its corresponding parameterized solution set y = cosh t {\displaystyle y=\cosh t} and x = sinh t {\displaystyle x=\sinh t} , and by letting 0 ⩽ t < η {\displaystyle 0\leqslant t<\eta } (the hyperbolic angle), we arrive at the result of S = η {\displaystyle S=\eta } . Just as the circular angle is the length of a circular arc using the Euclidean metric, the hyperbolic angle is the length of a hyperbolic arc using the Minkowski metric.
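The following sketch compares the two computations numerically, using the line elements and parametrizations quoted above; the quadrature scheme and the sample angle are illustrative choices.

```python
import math

def arclength(dx, dy, t_end, sign, n=100_000):
    """Integrate ds = sqrt(dx^2 + sign*dy^2) dt over 0 <= t < t_end.
    sign = +1 gives the Euclidean line element, sign = -1 the Minkowski one."""
    h = t_end / n
    return sum(math.sqrt(dx((i + 0.5) * h) ** 2 + sign * dy((i + 0.5) * h) ** 2) * h
               for i in range(n))

# Unit circle x = cos t, y = sin t: the Euclidean arc length equals the circular angle.
theta = 1.2
print(arclength(lambda t: -math.sin(t), math.cos, theta, +1))   # ~ 1.2

# Unit hyperbola x = sinh t, y = cosh t: the Minkowski arc length equals the hyperbolic angle.
eta = 1.2
print(arclength(math.cosh, math.sinh, eta, -1))                 # ~ 1.2
```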
The quadrature of the hyperbola is the evaluation of the area of a hyperbolic sector . It can be shown to be equal to the corresponding area against an asymptote . The quadrature was first accomplished by Gregoire de Saint-Vincent in 1647 in Opus geometricum quadrature circuli et sectionum coni . As expressed by a historian,
A. A. de Sarasa interpreted the quadrature as a logarithm and thus the geometrically defined natural logarithm (or "hyperbolic logarithm") is understood as the area under y = 1/ x to the right of x = 1 . As an example of a transcendental function , the logarithm is more familiar than its motivator, the hyperbolic angle. Nevertheless, the hyperbolic angle plays a role when the theorem of Saint-Vincent is advanced with squeeze mapping .
Circular trigonometry was extended to the hyperbola by Augustus De Morgan in his textbook Trigonometry and Double Algebra . [ 6 ] In 1878 W.K. Clifford used the hyperbolic angle to parametrize a unit hyperbola , describing it as "quasi- harmonic motion ".
In 1894 Alexander Macfarlane circulated his essay "The Imaginary of Algebra", which used hyperbolic angles to generate hyperbolic versors , in his book Papers on Space Analysis . [ 7 ] The following year Bulletin of the American Mathematical Society published Mellen W. Haskell 's outline of the hyperbolic functions . [ 8 ]
When Ludwik Silberstein penned his popular 1914 textbook on the new theory of relativity , he used the rapidity concept based on hyperbolic angle a , where tanh a = v / c , the ratio of velocity v to the speed of light . He wrote:
Silberstein also uses Lobachevsky 's concept of angle of parallelism Π( a ) to obtain cos Π( a ) = v / c . [ 9 ]
The hyperbolic angle is often presented as if it were an imaginary number , cos i x = cosh x {\textstyle \cos ix=\cosh x} and sin i x = i sinh x , {\textstyle \sin ix=i\sinh x,} so that the hyperbolic functions cosh and sinh can be presented through the circular functions. But in the Euclidean plane we might alternately consider circular angle measures to be imaginary and hyperbolic angle measures to be real scalars, cosh i x = cos x {\textstyle \cosh ix=\cos x} and sinh i x = i sin x . {\textstyle \sinh ix=i\sin x.}
These relationships can be understood in terms of the exponential function , which for a complex argument z {\textstyle z} can be broken into even and odd parts cosh z = 1 2 ( e z + e − z ) {\textstyle \cosh z={\tfrac {1}{2}}(e^{z}+e^{-z})} and sinh z = 1 2 ( e z − e − z ) , {\textstyle \sinh z={\tfrac {1}{2}}(e^{z}-e^{-z}),} respectively. Then
e z = cosh z + sinh z = cos ( i z ) − i sin ( i z ) , {\displaystyle e^{z}=\cosh z+\sinh z=\cos(iz)-i\sin(iz),}
or if the argument is separated into real and imaginary parts z = x + i y , {\textstyle z=x+iy,} the exponential can be split into the product of scaling e x {\textstyle e^{x}} and rotation e i y , {\textstyle e^{iy},}
e x + i y = e x e i y = ( cosh x + sinh x ) ( cos y + i sin y ) . {\displaystyle e^{x+iy}=e^{x}e^{iy}=(\cosh x+\sinh x)(\cos y+i\sin y).}
As infinite series ,
e z = ∑ k = 0 ∞ z k k ! = 1 + z + 1 2 z 2 + 1 6 z 3 + 1 24 z 4 + … cosh z = ∑ k even z k k ! = 1 + 1 2 z 2 + 1 24 z 4 + … sinh z = ∑ k odd z k k ! = z + 1 6 z 3 + 1 120 z 5 + … cos z = ∑ k even ( i z ) k k ! = 1 − 1 2 z 2 + 1 24 z 4 − … i sin z = ∑ k odd ( i z ) k k ! = i ( z − 1 6 z 3 + 1 120 z 5 − … ) {\displaystyle {\begin{alignedat}{3}e^{z}&=\,\,\sum _{k=0}^{\infty }{\frac {z^{k}}{k!}}&&=1+z+{\tfrac {1}{2}}z^{2}+{\tfrac {1}{6}}z^{3}+{\tfrac {1}{24}}z^{4}+\dots \\\cosh z&=\sum _{k{\text{ even}}}{\frac {z^{k}}{k!}}&&=1+{\tfrac {1}{2}}z^{2}+{\tfrac {1}{24}}z^{4}+\dots \\\sinh z&=\,\sum _{k{\text{ odd}}}{\frac {z^{k}}{k!}}&&=z+{\tfrac {1}{6}}z^{3}+{\tfrac {1}{120}}z^{5}+\dots \\\cos z&=\sum _{k{\text{ even}}}{\frac {(iz)^{k}}{k!}}&&=1-{\tfrac {1}{2}}z^{2}+{\tfrac {1}{24}}z^{4}-\dots \\i\sin z&=\,\sum _{k{\text{ odd}}}{\frac {(iz)^{k}}{k!}}&&=i\left(z-{\tfrac {1}{6}}z^{3}+{\tfrac {1}{120}}z^{5}-\dots \right)\\\end{alignedat}}}
The infinite series for cosine is derived from cosh by turning it into an alternating series , and the series for sine comes from making sinh into an alternating series. | https://en.wikipedia.org/wiki/Hyperbolic_angle |
In mathematics , hyperbolic coordinates are a method of locating points in quadrant I of the Cartesian plane, Q = { ( x , y ) : x > 0 , y > 0 } . {\displaystyle Q=\{(x,y):x>0,\ y>0\}.}
Hyperbolic coordinates take values in the hyperbolic plane defined as: H P = { ( u , v ) : u ∈ R , v > 0 } . {\displaystyle HP=\{(u,v):u\in \mathbb {R} ,\ v>0\}.}
These coordinates in HP are useful for studying logarithmic comparisons of direct proportion in Q and measuring deviations from direct proportion.
For ( x , y ) {\displaystyle (x,y)} in Q {\displaystyle Q} take
u = ln x y = 1 2 ln x y {\displaystyle u=\ln {\sqrt {\frac {x}{y}}}={\tfrac {1}{2}}\ln {\frac {x}{y}}}
and
v = x y . {\displaystyle v={\sqrt {xy}}.}
The parameter u is the hyperbolic angle to ( x, y ) and v is the geometric mean of x and y .
The inverse mapping is x = v e u , y = v e − u . {\displaystyle x=ve^{u},\quad y=ve^{-u}.}
The function Q → H P {\displaystyle Q\rightarrow HP} is a continuous mapping , but not an analytic function .
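A minimal round-trip check of the mapping and its inverse as written above; the sample point is arbitrary.

```python
import math

def to_hp(x, y):
    """(x, y) in the quadrant Q  ->  (u, v) in HP, using the formulas above."""
    u = 0.5 * math.log(x / y)    # hyperbolic angle to (x, y)
    v = math.sqrt(x * y)         # geometric mean of x and y
    return u, v

def from_hp(u, v):
    """Inverse mapping HP -> Q."""
    return v * math.exp(u), v * math.exp(-u)

x, y = 5.0, 0.8
u, v = to_hp(x, y)
print(u, v)
print(from_hp(u, v))             # recovers (5.0, 0.8) up to rounding
```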
Since HP carries the metric space structure of the Poincaré half-plane model of hyperbolic geometry , the bijective correspondence Q ↔ H P {\displaystyle Q\leftrightarrow HP} brings this structure to Q . It can be grasped using the notion of hyperbolic motions . Since geodesics in HP are semicircles with centers on the boundary, the geodesics in Q are obtained from the correspondence and turn out to be rays from the origin or petal -shaped curves leaving and re-entering the origin. And the hyperbolic motion of HP given by a left-right shift corresponds to a squeeze mapping applied to Q .
Since hyperbolas in Q correspond to lines parallel to the boundary of HP , they are horocycles in the metric geometry of Q .
If one only considers the Euclidean topology of the plane and the topology inherited by Q , then the lines bounding Q seem close to Q . Insight from the metric space HP shows that the open set Q has only the origin as boundary when viewed through the correspondence. Indeed, consider rays from the origin in Q , and their images, vertical rays from the boundary R of HP . Any point in HP is an infinite distance from the point p at the foot of the perpendicular to R , but a sequence of points on this perpendicular may tend in the direction of p . The corresponding sequence in Q tends along a ray toward the origin. The old Euclidean boundary of Q is no longer relevant.
Fundamental physical variables are sometimes related by equations of the form k = x y . For instance, V = I R ( Ohm's law ), P = V I ( electrical power ), P V = k T ( ideal gas law ), and f λ = v (relation of wavelength , frequency , and velocity in the wave medium). When the k is constant, the other variables lie on a hyperbola, which is a horocycle in the appropriate Q quadrant.
For example, in thermodynamics the isothermal process explicitly follows the hyperbolic path and work can be interpreted as a hyperbolic angle change. Similarly, a given mass M of gas with changing volume will have variable density δ = M / V , and the ideal gas law may be written P = k T δ so that an isobaric process traces a hyperbola in the quadrant of absolute temperature and gas density.
For hyperbolic coordinates in the theory of relativity see the History section.
There are many natural applications of hyperbolic coordinates in economics :
The hyperbolic functions sinh, cosh, and tanh can be illustrated with hyperbolic coordinates. Let A = ( e − t , e t ) , B = ( e t , e − t ) , C = A + B = ( e t + e − t , e t + e − t ) , O = ( 0 , 0 ) . {\displaystyle A=(e^{-t},\ e^{t}),\quad B=(e^{t},\ e^{-t}),\quad C=A+B,\quad O=(0,0).}
Then BCAO forms a rhombus with diagonals intersecting at M = ( e t + e − t 2 , e t + e − t 2 ) {\displaystyle M=({\frac {e^{t}+e^{-t}}{2}},\ {\frac {e^{t}+e^{-t}}{2}})} . The hyperbolic cosine is defined as cosh t = e t + e − t 2 , {\displaystyle \cosh t={\frac {e^{t}+e^{-t}}{2}},} so M = ( cosh t , cosh t ).
The semi-diagonal MA is equipollent to ( − e − t + e t 2 , e t − e − t 2 ) = ( − sinh t , sinh t ) {\displaystyle ({\frac {-e^{-t}+e^{t}}{2}},\ {\frac {e^{t}-e^{-t}}{2}})=(-\sinh t,\ \sinh t)} . Evidently the diagonals divide the rhombus into four congruent right triangles.
The angle MOA is the hyperbolic angle parameter t of cosh and sinh, and tanh t = sinh t cosh t {\displaystyle \tanh t={\frac {\sinh t}{\cosh t}}} and has a value in (–1, 1).
The geometric mean is an ancient concept, but hyperbolic angle was developed in this configuration by Gregoire de Saint-Vincent . He was attempting to perform quadrature with respect to the rectangular hyperbola y = 1/ x . That challenge was a standing open problem since Archimedes performed the quadrature of the parabola . The curve passes through (1,1) where it is opposite the origin in a unit square . The other points on the curve can be viewed as rectangles having the same area as this square. Such a rectangle may be obtained by applying a squeeze mapping to the square. Another way to view these mappings is via hyperbolic sectors . Starting from (1,1) the hyperbolic sector of unit area ends at (e, 1/e), where e is 2.71828…, according to the development of Leonhard Euler in Introduction to the Analysis of the Infinite (1748).
Taking (e, 1/e) as the vertex of a rectangle of unit area, and applying again the squeeze that made it from the unit square, yields ( e 2 , e − 2 ) . {\displaystyle (e^{2},\ e^{-2}).} Generally n squeezes yield ( e n , e − n ) . {\displaystyle (e^{n},\ e^{-n}).} A. A. de Sarasa noted a similar observation of G. de Saint Vincent, that as the abscissas increased in a geometric series , the sum of the areas against the hyperbola increased in arithmetic series , and this property corresponded to the logarithm already in use to reduce multiplications to additions. Euler's work made the natural logarithm a standard mathematical tool, and elevated mathematics to the realm of transcendental functions . The hyperbolic coordinates are formed on the original picture of G. de Saint-Vincent, which provided the quadrature of the hyperbola, and transcended the limits of algebraic functions .
In 1875 Johann von Thünen published a theory of natural wages [ 1 ] which used geometric mean of a subsistence wage and market value of the labor using the employer's capital.
In special relativity the focus is on the 3-dimensional hypersurface in the future of spacetime where various velocities arrive after a given proper time . Scott Walter [ 2 ] explains that in November 1907 Hermann Minkowski alluded to a well-known three-dimensional hyperbolic geometry while speaking to the Göttingen Mathematical Society, but not to a four-dimensional one. [ 3 ] In tribute to Wolfgang Rindler , the author of a standard introductory university-level textbook on relativity, hyperbolic coordinates of spacetime are called Rindler coordinates . | https://en.wikipedia.org/wiki/Hyperbolic_coordinates |
Hyperbolic motion is the motion of an object with constant proper acceleration in special relativity . It is called hyperbolic motion because the equation describing the path of the object through spacetime is a hyperbola , as can be seen when graphed on a Minkowski diagram whose coordinates represent a suitable inertial (non-accelerated) frame. This motion has several interesting features, among them that it is possible to outrun a photon if given a sufficient head start, as may be concluded from the diagram. [ 1 ]
Hermann Minkowski (1908) showed the relation between a point on a worldline and the magnitude of four-acceleration and a "curvature hyperbola" ( German : Krümmungshyperbel ). [ 2 ] In the context of Born rigidity , Max Born (1909) subsequently coined the term "hyperbolic motion" ( German : Hyperbelbewegung ) for the case of constant magnitude of four-acceleration, then provided a detailed description for charged particles in hyperbolic motion, and introduced the corresponding "hyperbolically accelerated reference system" ( German : hyperbolisch beschleunigtes Bezugsystem ). [ 3 ] Born's formulas were simplified and extended by Arnold Sommerfeld (1910). [ 4 ] For early reviews see the textbooks by Max von Laue (1911, 1921) [ 5 ] or Wolfgang Pauli (1921). [ 6 ] See also Galeriu (2015) [ 7 ] or Gourgoulhon (2013), [ 8 ] and Acceleration (special relativity)#History .
The proper acceleration α {\displaystyle \alpha } of a particle is defined as the acceleration that a particle "feels" as it accelerates from one inertial reference frame to another. If the proper acceleration is directed parallel to the line of motion, it is related to the ordinary three-acceleration in special relativity a = d u / d T {\displaystyle a=du/dT} by α = γ 3 a = γ 3 d u d T , {\displaystyle \alpha =\gamma ^{3}a=\gamma ^{3}{\frac {du}{dT}},}
where u {\displaystyle u} is the instantaneous speed of the particle, γ {\displaystyle \gamma } the Lorentz factor , c {\displaystyle c} is the speed of light , and T {\displaystyle T} is the coordinate time. Solving for the equation of motion gives the desired formulas, which can be expressed in terms of coordinate time T {\displaystyle T} as well as proper time τ {\displaystyle \tau } . For simplification, all initial values for time, location, and velocity can be set to 0, thus: [ 5 ] [ 6 ] [ 9 ] [ 10 ] [ 11 ] u ( T ) = α T 1 + ( α T / c ) 2 , X ( T ) = c 2 α ( 1 + ( α T / c ) 2 − 1 ) , {\displaystyle u(T)={\frac {\alpha T}{\sqrt {1+(\alpha T/c)^{2}}}},\quad X(T)={\frac {c^{2}}{\alpha }}\left({\sqrt {1+(\alpha T/c)^{2}}}-1\right),} or, as functions of proper time, u ( τ ) = c tanh α τ c , c T ( τ ) = c 2 α sinh α τ c , X ( τ ) = c 2 α ( cosh α τ c − 1 ) . {\displaystyle u(\tau )=c\tanh {\frac {\alpha \tau }{c}},\quad cT(\tau )={\frac {c^{2}}{\alpha }}\sinh {\frac {\alpha \tau }{c}},\quad X(\tau )={\frac {c^{2}}{\alpha }}\left(\cosh {\frac {\alpha \tau }{c}}-1\right).}
This gives ( X + c 2 / α ) 2 − c 2 T 2 = c 4 / α 2 {\displaystyle \left(X+c^{2}/\alpha \right)^{2}-c^{2}T^{2}=c^{4}/\alpha ^{2}} , which is a hyperbola in time T and the spatial location variable X {\displaystyle X} . In this case, the accelerated object is located at X = 0 {\displaystyle X=0} at time T = 0 {\displaystyle T=0} . If instead there are initial values different from zero, the formulas for hyperbolic motion assume the form: [ 12 ] [ 13 ] [ 14 ]
The worldline for hyperbolic motion (which from now on will be written as a function of proper time) can be simplified in several ways. For instance, the expression X = c 2 α ( cosh α τ c − 1 ) , c T = c 2 α sinh α τ c {\displaystyle X={\frac {c^{2}}{\alpha }}\left(\cosh {\frac {\alpha \tau }{c}}-1\right),\quad cT={\frac {c^{2}}{\alpha }}\sinh {\frac {\alpha \tau }{c}}}
can be subjected to a spatial shift of amount c 2 / α {\displaystyle c^{2}/\alpha } , thus X = c 2 α cosh α τ c , c T = c 2 α sinh α τ c , {\displaystyle X={\frac {c^{2}}{\alpha }}\cosh {\frac {\alpha \tau }{c}},\quad cT={\frac {c^{2}}{\alpha }}\sinh {\frac {\alpha \tau }{c}},}
by which the observer is at position X = c 2 / α {\displaystyle X=c^{2}/\alpha } at time T = 0 {\displaystyle T=0} . Furthermore, by setting x = c 2 / α {\displaystyle x=c^{2}/\alpha } and introducing the rapidity η = artanh u c = α τ c {\displaystyle \eta =\operatorname {artanh} {\frac {u}{c}}={\frac {\alpha \tau }{c}}} , [ 14 ] the equations for hyperbolic motion reduce to [ 4 ] [ 16 ] X = x cosh η , c T = x sinh η , {\displaystyle X=x\cosh \eta ,\quad cT=x\sinh \eta ,}
with the hyperbola X 2 − c 2 T 2 = x 2 {\displaystyle X^{2}-c^{2}T^{2}=x^{2}} .
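A numeric sketch, assuming the zero-initial-condition parametrization quoted above, checking that the worldline stays on the stated hyperbola and that the coordinate speed never reaches c ; the acceleration value and proper times are arbitrary.

```python
import math

c = 299_792_458.0        # speed of light, m/s
alpha = 9.81             # constant proper acceleration, m/s^2

def worldline(tau):
    """Coordinate time T and position X after proper time tau (zero initial values)."""
    T = (c / alpha) * math.sinh(alpha * tau / c)
    X = (c ** 2 / alpha) * (math.cosh(alpha * tau / c) - 1.0)
    return T, X

for tau in (1.0, 3.156e7, 3.156e8):               # 1 s, ~1 year, ~10 years of proper time
    T, X = worldline(tau)
    lhs = (X + c ** 2 / alpha) ** 2 - (c * T) ** 2
    print(lhs / (c ** 2 / alpha) ** 2)            # ~ 1: the event lies on the hyperbola
    print(c * math.tanh(alpha * tau / c) < c)     # instantaneous speed always below c
```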
Born (1909), [ 3 ] Sommerfeld (1910), [ 4 ] von Laue (1911), [ 5 ] Pauli (1921) [ 6 ] also formulated the equations for the electromagnetic field of charged particles in hyperbolic motion. [ 7 ] This was extended by Hermann Bondi & Thomas Gold (1955) [ 17 ] and Fulton & Rohrlich (1960) [ 18 ] [ 19 ]
This is related to the controversially [ 20 ] [ 21 ] discussed question, whether charges in perpetual hyperbolic motion do radiate or not, and whether this is consistent with the equivalence principle – even though it is about an ideal situation, because perpetual hyperbolic motion is not possible. While early authors such as Born (1909) or Pauli (1921) argued that no radiation arises, later authors such as Bondi & Gold [ 17 ] and Fulton & Rohrlich [ 18 ] [ 19 ] showed that radiation does indeed arise.
In equation ( 2 ) for hyperbolic motion, the expression x {\displaystyle x} was constant, whereas the rapidity η {\displaystyle \eta } was variable. However, as pointed out by Sommerfeld, [ 16 ] one can define x {\displaystyle x} as a variable, while making η {\displaystyle \eta } constant. This means that the equations become transformations indicating the simultaneous rest shape of an accelerated body with hyperbolic coordinates ( x , y , z , η ) {\displaystyle (x,y,z,\eta )} as seen by a comoving observer: c T = x sinh η , X = x cosh η , Y = y , Z = z . {\displaystyle cT=x\sinh \eta ,\quad X=x\cosh \eta ,\quad Y=y,\quad Z=z.}
By means of this transformation, the proper time becomes the time of the hyperbolically accelerated frame. These coordinates, which are commonly called Rindler coordinates (similar variants are called Kottler-Møller coordinates or Lass coordinates ), can be seen as a special case of Fermi coordinates or Proper coordinates, and are often used in connection with the Unruh effect . Using these coordinates, it turns out that observers in hyperbolic motion possess an apparent event horizon , beyond which no signal can reach them.
A lesser known method for defining a reference frame in hyperbolic motion is the employment of the special conformal transformation , consisting of an inversion , a translation , and another inversion. [ 22 ] It is commonly interpreted as a gauge transformation in Minkowski space, though some authors alternatively use it as an acceleration transformation (see Kastrup for a critical historical survey). [ 23 ] It has the form
Using only one spatial dimension by x μ = ( t , x ) {\displaystyle x^{\mu }=(t,x)} , and further simplifying by setting x = 0 {\displaystyle x=0} , and using the acceleration a μ = ( 0 , − α / 2 ) {\displaystyle a^{\mu }=(0,-\alpha /2)} , it follows [ 24 ]
with the hyperbola ( X − 1 / α ) 2 − T 2 = 1 / α 2 {\displaystyle \left(X-1/\alpha \right)^{2}-T^{2}=1/\alpha ^{2}} . It turns out that at t = ± ( x + 2 / α ) {\displaystyle t=\pm (x+2/\alpha )} the time becomes singular, to which Fulton & Rohrlich & Witten [ 24 ] remark that one has to stay away from this limit, while Kastrup [ 23 ] (who is very critical of the acceleration interpretation) remarks that this is one of the strange results of this interpretation. | https://en.wikipedia.org/wiki/Hyperbolic_motion_(relativity) |
In geometry , the relation of hyperbolic orthogonality between two lines separated by the asymptotes of a hyperbola is a concept used in special relativity to define simultaneous events. Two events will be simultaneous when they are on a line hyperbolically orthogonal to a particular timeline. This dependence on a certain timeline is determined by velocity, and is the basis for the relativity of simultaneity . Furthermore, keeping time and space axes hyperbolically orthogonal, as in Minkowski space, gives a constant result when measurements are taken of the speed of light.
Two lines are hyperbolic orthogonal when they are reflections of each other over the asymptote of a given hyperbola .
Two particular hyperbolas are frequently used in the plane: (A) xy = 1 , with y = 0 as asymptote: when reflected in the x -axis, a line y = mx becomes y = − mx , so hyperbolically orthogonal lines have slopes that are additive inverses; and (B) x 2 − y 2 = 1 , with y = x as asymptote: here the reflection of y = mx across y = x is the line y = x / m , so hyperbolically orthogonal lines have slopes that are reciprocals of each other.
The relation of hyperbolic orthogonality actually applies to classes of parallel lines in the plane, where any particular line can represent the class. Thus, for a given hyperbola and asymptote A , a pair of lines ( a , b ) are hyperbolic orthogonal if there is a pair ( c , d ) such that a ‖ c , b ‖ d {\displaystyle a\rVert c,\ b\rVert d} , and c is the reflection of d across A .
Similar to the perpendicularity of a circle radius to the tangent , a radius to a hyperbola is hyperbolic orthogonal to a tangent to the hyperbola. [ 1 ] [ 2 ]
A bilinear form is used to describe orthogonality in analytic geometry, with two elements orthogonal when their bilinear form vanishes. In the plane of complex numbers z 1 = u + i v , z 2 = x + i y {\displaystyle z_{1}=u+iv,\quad z_{2}=x+iy} , the bilinear form is x u + y v {\displaystyle xu+yv} , while in the plane of hyperbolic numbers w 1 = u + j v , w 2 = x + j y , {\displaystyle w_{1}=u+jv,\quad w_{2}=x+jy,} the bilinear form is x u − y v . {\displaystyle xu-yv.}
The bilinear form may be computed as the real part of the complex product of one number with the conjugate of the other. Then Re ( z 1 z 2 ¯ ) = x u + y v {\displaystyle \operatorname {Re} (z_{1}{\bar {z_{2}}})=xu+yv} and Re ( w 1 w 2 ¯ ) = x u − y v {\displaystyle \operatorname {Re} (w_{1}{\bar {w_{2}}})=xu-yv} , and two vectors are Euclidean orthogonal, respectively hyperbolic orthogonal, exactly when the corresponding real part vanishes.
The notion of hyperbolic orthogonality arose in analytic geometry in consideration of conjugate diameters of ellipses and hyperbolas. [ 4 ] If g and g ′ represent the slopes of the conjugate diameters, then g g ′ = − b 2 a 2 {\displaystyle gg'=-{\frac {b^{2}}{a^{2}}}} in the case of an ellipse and g g ′ = b 2 a 2 {\displaystyle gg'={\frac {b^{2}}{a^{2}}}} in the case of a hyperbola. When a = b the ellipse is a circle and the conjugate diameters are perpendicular while the hyperbola is rectangular and the conjugate diameters are hyperbolic-orthogonal.
In the terminology of projective geometry , the operation of taking the hyperbolic orthogonal line is an involution . Suppose the slope of a vertical line is denoted ∞ so that all lines have a slope in the projectively extended real line . Then whichever hyperbola (A) or (B) is used, the operation is an example of a hyperbolic involution where the asymptote is invariant. Hyperbolically orthogonal lines lie in different sectors of the plane, determined by the asymptotes of the hyperbola, thus the relation of hyperbolic orthogonality is a heterogeneous relation on sets of lines in the plane.
One of the premises of relativity is that the speed of light does not depend on the inertial frame of reference in which the measurements are done. This premise has been associated with null results in the Michelson–Morley experiment . As long as space and time axes are hyperbolically orthogonal, the measurement of the speed of light will give the same result. The seeming paradox of light speed invariance with respect to moving observers is resolved in special relativity by this feature of Minkowski space .
Since Hermann Minkowski 's foundation for spacetime study in 1908, the concept of points in a spacetime plane being hyperbolic-orthogonal to a timeline (tangent to a world line ) has been used to define simultaneity of events relative to the timeline, or relativity of simultaneity . In Minkowski's development the hyperbola of type (B) above is in use. [ 5 ] Two vectors ( x 1 , y 1 , z 1 , t 1 ) and ( x 2 , y 2 , z 2 , t 2 ) are normal (meaning hyperbolic orthogonal) when c 2 t 1 t 2 − x 1 x 2 − y 1 y 2 − z 1 z 2 = 0. {\displaystyle c^{2}t_{1}t_{2}-x_{1}x_{2}-y_{1}y_{2}-z_{1}z_{2}=0.}
When c = 1 and the y s and z s are zero, x 1 ≠ 0, t 2 ≠ 0, then c t 1 x 1 = x 2 c t 2 {\displaystyle {\frac {c\ t_{1}}{x_{1}}}={\frac {x_{2}}{c\ t_{2}}}} .
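A minimal numeric illustration with c = 1 and y = z = 0: the worldline direction of an observer moving with speed v and the direction of that observer's line of simultaneity satisfy the normality condition quoted above. The helper function and sample values are illustrative, not taken from the article.

```python
def hyperbolically_orthogonal(v1, v2, tol=1e-12):
    """Vectors (x, t) with c = 1 and y = z = 0 are normal when t1*t2 - x1*x2 = 0."""
    (x1, t1), (x2, t2) = v1, v2
    return abs(t1 * t2 - x1 * x2) < tol

v = 0.6                       # observer's velocity as a fraction of c
time_axis = (v, 1.0)          # direction of the moving observer's worldline
space_axis = (1.0, v)         # direction of that observer's simultaneity line

print(hyperbolically_orthogonal(time_axis, space_axis))    # True
print(hyperbolically_orthogonal((0.0, 1.0), (1.0, 0.0)))   # rest-frame axes: also True
```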
Given a hyperbola with asymptote A , its reflection in A produces the conjugate hyperbola . Any diameter of the original hyperbola is reflected to a conjugate diameter . The directions indicated by conjugate diameters are taken for space and time axes in relativity.
As E. T. Whittaker wrote in 1910, "[the] hyperbola is unaltered when any pair of conjugate diameters are taken as new axes, and a new unit of length is taken proportional to the length of either of these diameters." [ 6 ] On this principle of relativity , he then wrote the Lorentz transformation in the modern form using rapidity .
Edwin Bidwell Wilson and Gilbert N. Lewis developed the concept within synthetic geometry in 1912. They note "in our plane no pair of perpendicular [hyperbolic-orthogonal] lines is better suited to serve as coordinate axes than any other pair" [ 1 ] | https://en.wikipedia.org/wiki/Hyperbolic_orthogonality |
A hyperbolic sector is a region of the Cartesian plane bounded by a hyperbola and two rays from the origin to it. For example, the rays to the two points ( a , 1/ a ) and ( b , 1/ b ) on the rectangular hyperbola xy = 1 , together with the arc between them, bound such a sector; so does the corresponding region when this hyperbola is re-scaled and its orientation is altered by a rotation leaving the center at the origin, as with the unit hyperbola . A hyperbolic sector in standard position has a = 1 and b > 1 .
Hyperbolic sectors are the basis for the hyperbolic functions .
The area of a hyperbolic sector in standard position is the natural logarithm of b .
Proof: Integrate under 1/ x from 1 to b , add triangle {(0, 0), (1, 0), (1, 1)}, and subtract triangle {(0, 0), ( b , 0), ( b , 1/ b )} (both triangles of which have the same area). [ 1 ]
When in standard position, a hyperbolic sector corresponds to a positive hyperbolic angle at the origin, with the measure of the latter being defined as the area of the former.
When in standard position, a hyperbolic sector determines a hyperbolic triangle , the right triangle with one vertex at the origin, base on the diagonal ray y = x , and third vertex on the hyperbola x y = 1 , {\displaystyle xy=1,}
with the hypotenuse being the segment from the origin to the point ( x, y ) on the hyperbola. The length of the base of this triangle is 2 cosh u , {\displaystyle {\sqrt {2}}\cosh u,}
and the altitude is 2 sinh u , {\displaystyle {\sqrt {2}}\sinh u,}
where u is the appropriate hyperbolic angle . The usual definitions of the hyperbolic functions can be seen via the legs of right triangles plotted with hyperbolic coordinates . When the length of these legs is divided by the square root of 2 , they can be graphed as the unit hyperbola with hyperbolic cosine and sine coordinates.
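A small numeric check of the leg lengths stated above, using the point ( e u , e − u ) on xy = 1; dividing the legs by the square root of 2 recovers cosh u and sinh u as described. The sample angle is arbitrary.

```python
import math

u = 0.9
x, y = math.exp(u), math.exp(-u)     # point on xy = 1 at hyperbolic angle u from (1, 1)

base     = (x + y) / math.sqrt(2)    # length of the projection onto the ray y = x
altitude = (x - y) / math.sqrt(2)    # distance from the point to that ray

print(base,     math.sqrt(2) * math.cosh(u))          # equal
print(altitude, math.sqrt(2) * math.sinh(u))          # equal
print(base / math.sqrt(2), altitude / math.sqrt(2))   # cosh u and sinh u
```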
The analogy between circular and hyperbolic functions was described by Augustus De Morgan in his Trigonometry and Double Algebra (1849). [ 2 ] William Burnside used such triangles, projecting from a point on the hyperbola xy = 1 onto the main diagonal, in his article "Note on the addition theorem for hyperbolic functions". [ 3 ]
It is known that f( x ) = x p has an algebraic antiderivative except in the case p = –1 corresponding to the quadrature of the hyperbola. The other cases are given by Cavalieri's quadrature formula . Whereas quadrature of the parabola had been accomplished by Archimedes in the third century BC (in The Quadrature of the Parabola ), the hyperbolic quadrature required the invention in 1647 of a new function: Gregoire de Saint-Vincent addressed the problem of computing the areas bounded by a hyperbola. His findings led to the natural logarithm function, once called the hyperbolic logarithm since it is obtained by integrating, or finding the area, under the hyperbola. [ 4 ]
Before 1748 and the publication of Introduction to the Analysis of the Infinite , the natural logarithm was known in terms of the area of a hyperbolic sector. Leonhard Euler changed that when he introduced transcendental functions such as 10 x . Euler identified e as the value of b producing a unit of area (under the hyperbola or in a hyperbolic sector in standard position). Then the natural logarithm could be recognized as the inverse function to the transcendental function e x .
To accommodate the case of negative logarithms and the corresponding negative hyperbolic angles, different hyperbolic sectors are constructed according to whether x is greater or less than one. A variable right triangle with area 1/2 is V = { ( x , 1 / x ) , ( x , 0 ) , ( 0 , 0 ) } . {\displaystyle V=\{(x,1/x),\ (x,0),\ (0,0)\}.} The isosceles case is T = { ( 1 , 1 ) , ( 1 , 0 ) , ( 0 , 0 ) } . {\displaystyle T=\{(1,1),\ (1,0),\ (0,0)\}.} The natural logarithm is known as the area under y = 1/ x between one and x . A positive hyperbolic angle is given by the area of ∫ 1 x d t t + T − V . {\displaystyle \int _{1}^{x}{\frac {dt}{t}}+T-V.} A negative hyperbolic angle is given by the negative of the area ∫ x 1 d t t + V − T . {\displaystyle \int _{x}^{1}{\frac {dt}{t}}+V-T.} This convention is in accord with a negative natural logarithm for x in (0,1).
When Felix Klein 's book on non-Euclidean geometry was published in 1928, it provided a foundation for the subject by reference to projective geometry . To establish hyperbolic measure on a line, Klein noted that the area of a hyperbolic sector provided visual illustration of the concept. [ 5 ]
Hyperbolic sectors can also be drawn to the hyperbola y = 1 + x 2 {\displaystyle y={\sqrt {1+x^{2}}}} . The area of such hyperbolic sectors has been used to define hyperbolic distance in a geometry textbook. [ 6 ] | https://en.wikipedia.org/wiki/Hyperbolic_sector |
In dynamical systems theory , a subset Λ of a smooth manifold M is said to have a hyperbolic structure with respect to a smooth map f if its tangent bundle may be split into two invariant subbundles , one of which is contracting and the other is expanding under f , with respect to some Riemannian metric on M . An analogous definition applies to the case of flows .
In the special case when the entire manifold M is hyperbolic, the map f is called an Anosov diffeomorphism . The dynamics of f on a hyperbolic set, or hyperbolic dynamics , exhibits features of local structural stability and has been much studied, cf. Axiom A .
Let M be a compact smooth manifold , f : M → M a diffeomorphism , and Df : TM → TM the differential of f . An f -invariant subset Λ of M is said to be hyperbolic , or to have a hyperbolic structure , if the restriction to Λ of the tangent bundle of M admits a splitting into a Whitney sum of two Df -invariant subbundles, called the stable bundle and the unstable bundle and denoted E s and E u . With respect to some Riemannian metric on M , the restriction of Df to E s must be a contraction and the restriction of Df to E u must be an expansion. Thus, there exist constants 0< λ <1 and c >0 such that T Λ M = E s ⊕ E u {\displaystyle T_{\Lambda }M=E^{s}\oplus E^{u}}
and
( D f ) x E x s = E f ( x ) s {\displaystyle (Df)_{x}E_{x}^{s}=E_{f(x)}^{s}} and ( D f ) x E x u = E f ( x ) u {\displaystyle (Df)_{x}E_{x}^{u}=E_{f(x)}^{u}} for all x ∈ Λ {\displaystyle x\in \Lambda }
and
‖ D f n v ‖ ≤ c λ n ‖ v ‖ {\displaystyle \|Df^{n}v\|\leq c\lambda ^{n}\|v\|} for all v ∈ E s {\displaystyle v\in E^{s}} and n > 0
and
‖ D f − n v ‖ ≤ c λ n ‖ v ‖ {\displaystyle \|Df^{-n}v\|\leq c\lambda ^{n}\|v\|} for all v ∈ E u {\displaystyle v\in E^{u}} and n > 0 .
If Λ is hyperbolic then there exists a Riemannian metric for which c = 1 — such a metric is called adapted .
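The definition is abstract, so a standard concrete example (not discussed in this article) may help: Arnold's cat map, the automorphism of the 2-torus induced by the matrix [[2, 1], [1, 1]], is an Anosov diffeomorphism, and its eigendirections play the roles of E s and E u . A minimal numeric sketch:

```python
import numpy as np

# Arnold's cat map on the 2-torus: the whole torus is a hyperbolic set.
A = np.array([[2.0, 1.0],
              [1.0, 1.0]])

eigvals, eigvecs = np.linalg.eig(A)
print(eigvals)                        # one expanding (~2.618) and one contracting (~0.382) eigenvalue

lam = min(eigvals)                    # ~ 0.382 < 1
v_s = eigvecs[:, np.argmin(eigvals)]  # contracting eigendirection: the role of E^s
print(np.linalg.norm(A @ v_s), lam * np.linalg.norm(v_s))   # equal: ||Df v|| = lam ||v|| on E^s
```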
This article incorporates material from Hyperbolic Set on PlanetMath , which is licensed under the Creative Commons Attribution/Share-Alike License . | https://en.wikipedia.org/wiki/Hyperbolic_set |
In mathematics , hyperbolic space of dimension n is the unique simply connected , n -dimensional Riemannian manifold of constant sectional curvature equal to −1. [ 1 ] It is homogeneous , and satisfies the stronger property of being a symmetric space . There are many ways to construct it as an open subset of R n {\displaystyle \mathbb {R} ^{n}} with an explicitly written Riemannian metric; such constructions are referred to as models. Hyperbolic 2-space, H 2 , which was the first instance studied, is also called the hyperbolic plane .
It is also sometimes referred to as Lobachevsky space or Bolyai–Lobachevsky space after the names of the authors who first published on the topic of hyperbolic geometry . Sometimes the qualifier "real" is added to distinguish it from complex hyperbolic spaces .
Hyperbolic space serves as the prototype of a Gromov hyperbolic space , which is a far-reaching notion including differential-geometric as well as more combinatorial spaces via a synthetic approach to negative curvature. Another generalisation is the notion of a CAT(−1) space.
The n {\displaystyle n} -dimensional hyperbolic space or hyperbolic n {\displaystyle n} -space , usually denoted H n {\displaystyle \mathbb {H} ^{n}} , is the unique simply connected, n {\displaystyle n} -dimensional complete Riemannian manifold with a constant negative sectional curvature equal to −1. [ 1 ] The unicity means that any two Riemannian manifolds that satisfy these properties are isometric to each other. It is a consequence of the Killing–Hopf theorem .
To prove the existence of such a space as described above one can explicitly construct it, for example as an open subset of R n {\displaystyle \mathbb {R} ^{n}} with a Riemannian metric given by a simple formula. There are many such constructions or models of hyperbolic space, each suited to different aspects of its study. They are isometric to each other according to the previous paragraph, and in each case an explicit isometry can be explicitly given. Here is a list of the better-known models, which are described in more detail in their namesake articles: the hyperboloid model , the Klein model , the Poincaré ball model and the Poincaré half-space model .
Hyperbolic space, developed independently by Nikolai Lobachevsky , János Bolyai and Carl Friedrich Gauss , is a geometric space analogous to Euclidean space , but such that Euclid's parallel postulate is no longer assumed to hold. Instead, the parallel postulate is replaced by the following alternative (in two dimensions): given any line L and point P not on L , there are at least two distinct lines passing through P which do not intersect L .
It is then a theorem that there are infinitely many such lines through P . This axiom still does not uniquely characterize the hyperbolic plane up to isometry ; there is an extra constant, the curvature K < 0 , that must be specified. However, it does uniquely characterize it up to homothety , meaning up to bijections that only change the notion of distance by an overall constant. By choosing an appropriate length scale, one can thus assume, without loss of generality, that K = −1 .
The hyperbolic plane cannot be isometrically embedded into Euclidean 3-space by Hilbert's theorem . On the other hand the Nash embedding theorem implies that hyperbolic n-space can be isometrically embedded into some Euclidean space of larger dimension (5 for the hyperbolic plane by the Nash embedding theorem).
When isometrically embedded to a Euclidean space every point of a hyperbolic space is a saddle point .
The volume of balls in hyperbolic space increases exponentially with respect to the radius of the ball rather than polynomially as in Euclidean space. Namely, if B ( r ) {\displaystyle B(r)} is any ball of radius r {\displaystyle r} in H n {\displaystyle \mathbb {H} ^{n}} then: V o l ( B ( r ) ) = V o l ( S n − 1 ) ∫ 0 r sinh n − 1 ( t ) d t {\displaystyle \mathrm {Vol} (B(r))=\mathrm {Vol} (S^{n-1})\int _{0}^{r}\sinh ^{n-1}(t)dt} where S n − 1 {\displaystyle S^{n-1}} is the total volume of the Euclidean ( n − 1 ) {\displaystyle (n-1)} -sphere of radius 1.
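A numeric sketch of the exponential growth claim, evaluating the formula above for H 3 ; the quadrature and the radii are illustrative choices, and the last printed column should level off near π/2.

```python
import math

def hyperbolic_ball_volume(r, n=3, steps=100_000):
    """Vol(B(r)) in H^n: Vol(S^{n-1}) times the integral of sinh^{n-1}(t) from 0 to r."""
    vol_sphere = 2 * math.pi ** (n / 2) / math.gamma(n / 2)   # area of the Euclidean unit (n-1)-sphere
    h = r / steps
    integral = sum(math.sinh((i + 0.5) * h) ** (n - 1) * h for i in range(steps))
    return vol_sphere * integral

n = 3
for r in (2.0, 4.0, 8.0):
    v = hyperbolic_ball_volume(r, n)
    print(r, v, v / math.exp((n - 1) * r))   # ratio tends to a constant (~ pi/2 for n = 3)
```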
The hyperbolic space also satisfies a linear isoperimetric inequality , that is there exists a constant i {\displaystyle i} such that any embedded disk whose boundary has length r {\displaystyle r} has area at most i ⋅ r {\displaystyle i\cdot r} . This is to be contrasted with Euclidean space where the isoperimetric inequality is quadratic.
There are many more metric properties of hyperbolic space that differentiate it from Euclidean space. Some can be generalised to the setting of Gromov-hyperbolic spaces, which is a generalisation of the notion of negative curvature to general metric spaces using only the large-scale properties. A finer notion is that of a CAT(−1)-space.
Every complete , connected , simply connected manifold of constant negative curvature −1 is isometric to the real hyperbolic space H n . As a result, the universal cover of any closed manifold M of constant negative curvature −1, which is to say, a hyperbolic manifold , is H n . Thus, every such M can be written as H n / Γ, where Γ is a torsion-free discrete group of isometries on H n . That is, Γ is a lattice in SO + ( n , 1) .
Two-dimensional hyperbolic surfaces can also be understood according to the language of Riemann surfaces . According to the uniformization theorem , every Riemann surface is either elliptic, parabolic or hyperbolic. Most hyperbolic surfaces have a non-trivial fundamental group π 1 = Γ ; the groups that arise this way are known as Fuchsian groups . The quotient space H 2 / Γ of the upper half-plane modulo the fundamental group is known as the Fuchsian model of the hyperbolic surface. The Poincaré half plane is also hyperbolic, but is simply connected and noncompact . It is the universal cover of the other hyperbolic surfaces.
The analogous construction for three-dimensional hyperbolic manifolds is the Kleinian model . | https://en.wikipedia.org/wiki/Hyperbolic_space
A hyperbolization procedure is a procedure that turns a polyhedral complex K {\displaystyle K} into a non-positively curved space H ( K ) {\displaystyle {\mathcal {H}}(K)} , retaining some of its topological features. Roughly speaking, the procedure consists in replacing every cell of K {\displaystyle K} with a copy of a certain non-positively curved manifold with boundary, which is fixed a priori and is called the hyperbolizing cell of the procedure.
There are many different hyperbolization procedures available in the literature. While they all satisfy some common axioms, they differ by what kind of polyhedral complex is allowed as input and what kind of hyperbolizing cell is used. As a result, different procedures preserve different topological features and provide spaces with different geometric flavors. The first hyperbolization procedures were introduced by Mikhael Gromov in [ 1 ] and later other versions were developed by several mathematicians including Ruth Charney , Michael W. Davis , and Pedro Ontaneda .
It is important to note that the word "hyperbolization" here does not have the same meaning that it has in the uniformization or hyperbolization results typical of low-dimensional geometry. Indeed, the space H ( K ) {\displaystyle {\mathcal {H}}(K)} is not homeomorphic to K {\displaystyle K} . For instance, H ( K ) {\displaystyle {\mathcal {H}}(K)} is always aspherical , regardless of whether K {\displaystyle K} is aspherical. Moreover, despite the name of the procedure, H ( K ) {\displaystyle {\mathcal {H}}(K)} is not always guaranteed to be negatively curved, so some authors refer to these procedures as asphericalization procedures.
An assignment K → H ( K ) {\displaystyle K\to {\mathcal {H}}(K)} is a hyperbolization procedure if it satisfies the following properties:
It follows in particular that if K {\displaystyle K} is a closed orientable n {\displaystyle n} - manifold , then so is H ( K ) {\displaystyle {\mathcal {H}}(K)} .
The following are some examples of common hyperbolization procedures.
In [ 2 ] Charney and Davis introduced a hyperbolization procedure for which H ( K ) {\displaystyle {\mathcal {H}}(K)} is locally CAT(-1). In particular, when K {\displaystyle K} is compact, the fundamental group π 1 ( H ( K ) ) {\displaystyle \pi _{1}({\mathcal {H}}(K))} is a Gromov hyperbolic group . The hyperbolizing cell in this procedure is a real hyperbolic manifold with boundary and corners constructed via arithmetic methods.
In [ 3 ] Ontaneda showed that if K is a smooth triangulation of a smooth manifold, then the strict hyperbolization procedure of Charney-Davis [ 2 ] can be refined to ensure that H ( K ) {\displaystyle {\mathcal {H}}(K)} is a smooth manifold and that it admits a Riemannian metric of negative sectional curvature . Moreover, it is possible to pinch the curvature arbitrarily close to − 1 {\displaystyle -1} .
Any hyperbolization procedure H {\displaystyle {\mathcal {H}}} admits a relative version, which allows one to work relative to a subcomplex, i.e., to keep it unaltered under the hyperbolization. [ 1 ] [ 4 ] More precisely, if L ⊆ K {\displaystyle L\subseteq K} is a subcomplex, then one can attach to K {\displaystyle K} the cone over L {\displaystyle L} , apply the hyperbolization procedure to the coned-off complex, and then remove a small neighborhood of the cone point. Thanks to axiom (3) above, the link of the cone point is a copy of L {\displaystyle L} , so removing a small neighborhood of the cone point results in a boundary component homeomorphic to L {\displaystyle L} .
If H {\displaystyle {\mathcal {H}}} is the strict hyperbolization of Charney-Davis, then Belegradek showed that the relative version of H {\displaystyle {\mathcal {H}}} results in a space whose fundamental group is hyperbolic relative to π 1 ( K ) {\displaystyle \pi _{1}(K)} . [ 5 ]
The following are some classical applications of hyperbolization procedures. The general recipe consists in constructing a complex or manifold with some desired topological features, and then applying a hyperbolization procedure to infuse it with non-positive or negative curvature. Depending on which procedure is used, one can get more geometric control on the output.
| https://en.wikipedia.org/wiki/Hyperbolization_procedures
In geometry , Thurston's geometrization theorem or hyperbolization theorem implies that closed atoroidal Haken manifolds are hyperbolic, and in particular satisfy the Thurston conjecture .
One form of the hyperbolization theorem states: If M is a compact irreducible atoroidal Haken manifold whose boundary has zero Euler characteristic , then the interior of M has a complete hyperbolic structure of finite volume.
The Mostow rigidity theorem implies that if a manifold of dimension at least 3 has a hyperbolic structure of finite volume, then it is essentially unique.
The conditions that the manifold M should be irreducible and atoroidal are necessary, as hyperbolic manifolds have these properties. However the condition that the manifold be Haken is unnecessarily strong. Thurston's hyperbolization conjecture states that a closed irreducible atoroidal 3-manifold with infinite fundamental group is hyperbolic, and this follows from Perelman's proof of the Thurston geometrization conjecture.
Thurston (1982 , 2.3) showed that if a compact 3-manifold is prime, homotopically atoroidal, and has non-empty boundary, then it has a complete hyperbolic structure unless it is homeomorphic to a certain manifold ( T 2 ×[0,1])/ Z /2 Z with boundary T 2 .
A hyperbolic structure on the interior of a compact orientable 3-manifold has finite volume if and only if all boundary components are tori, except for the manifold T 2 ×[0,1] which has a hyperbolic structure but none of finite volume ( Thurston 1982 , p. 359).
Thurston never published a complete proof of his theorem for reasons that he explained in ( Thurston 1994 ), though parts of his argument are contained in Thurston ( 1986 , 1998a , 1998b ). Wall (1984) and Morgan (1984) gave summaries of Thurston's proof. Otal (1996) gave a proof in the case of manifolds that fiber over the circle, and Otal (1998) and Kapovich (2009) gave proofs for the generic case of manifolds that do not fiber over the circle. Thurston's geometrization theorem also follows from Perelman's proof using Ricci flow of the more general Thurston geometrization conjecture .
Thurston's original argument for this case was summarized by Sullivan (1981) . Otal (1996) gave a proof in the case of manifolds that fiber over the circle.
Thurston's geometrization theorem in this special case states that if M is a 3-manifold that fibers over the circle and whose monodromy is a pseudo-Anosov diffeomorphism, then the interior of M has a complete hyperbolic metric of finite volume.
Otal (1998) and Kapovich (2009) gave proofs of Thurston's theorem for the generic case of manifolds that do not fiber over the circle.
The idea of the proof is to cut a Haken manifold M along an incompressible surface, to obtain a new manifold N . By induction one assumes that the interior of N has a hyperbolic structure, and the problem is to modify it so that it can be extended to the boundary of N and glued together. Thurston showed that this follows from the existence of a fixed point for a map of Teichmüller space called the skinning map . The core of the proof of the geometrization theorem is to prove that if N is not an interval bundle over a surface and M is atoroidal, then the skinning map has a fixed point. (If N is an interval bundle then the skinning map has no fixed point, which is why one needs a separate argument when M fibers over the circle.) McMullen (1990) gave a new proof of the existence of a fixed point of the skinning map.
Hyperboloid structures are architectural structures designed using a hyperboloid of one sheet. Often these are tall structures, such as towers, where the hyperboloid geometry's structural strength is used to support an object high above the ground. Hyperboloid geometry is often used for decorative effect as well as structural economy. The first hyperboloid structures were built by Russian engineer Vladimir Shukhov (1853–1939), [ 1 ] including the Shukhov Tower in Polibino, Dankovsky District, Lipetsk Oblast , Russia.
Hyperboloid structures have negative Gaussian curvature , meaning they curve inward rather than curving outward or being straight. As doubly ruled surfaces , they can be made with a lattice of straight beams, and hence are easier to build than curved surfaces that do not have a ruling and must instead be built with curved beams. [ 2 ]
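To make the doubly ruled property concrete, the short sketch below (plain Python with NumPy; the parametrization is a standard one and is not taken from this article) checks numerically that a family of straight lines lies entirely on the hyperboloid of one sheet x² + y² − z² = 1.

```python
# For each fixed angle u, v -> (cos u - v sin u, sin u + v cos u, v) is a straight line,
# and every sampled point of it satisfies x^2 + y^2 - z^2 = 1 (one of the two rulings).
import numpy as np

def ruling_point(u, v):
    return np.array([np.cos(u) - v * np.sin(u),
                     np.sin(u) + v * np.cos(u),
                     v])

for u in np.linspace(0.0, 2.0 * np.pi, 8, endpoint=False):
    for v in (-2.0, 0.0, 2.0):
        x, y, z = ruling_point(u, v)
        assert abs(x**2 + y**2 - z**2 - 1.0) < 1e-12
print("sampled points of the straight rulings lie on x^2 + y^2 - z^2 = 1")
```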
Hyperboloid structures are superior in stability against outside forces compared with "straight" buildings, but have shapes often creating large amounts of unusable volume (low space efficiency). Hence they are more commonly used in purpose-driven structures, such as water towers (to support a large mass), cooling towers, and aesthetic features. [ 3 ]
A hyperboloid structure is beneficial for cooling towers . At the bottom, the widening of the tower provides a large area for installation of fill to promote thin film evaporative cooling of the circulated water. As the water first evaporates and rises, the narrowing effect helps accelerate the laminar flow , and then as it widens out, contact between the heated air and atmospheric air supports turbulent mixing. [ citation needed ]
In the 1880s, Shukhov began to work on the problem of the design of roof systems to use a minimum of materials, time and labor. His calculations were most likely derived from mathematician Pafnuty Chebyshev 's work on the theory of best approximations of functions. Shukhov's mathematical explorations of efficient roof structures led to his invention of a new system that was innovative both structurally and spatially. By applying his analytical skills to the doubly curved surfaces Nikolai Lobachevsky named "hyperbolic", Shukhov derived a family of equations that led to new structural and constructional systems, known as hyperboloids of revolution and hyperbolic paraboloids .
The steel gridshells of the exhibition pavilions of the 1896 All-Russian Industrial and Handicrafts Exposition in Nizhny Novgorod were the first publicly prominent examples of Shukhov's new system. Two pavilions of this type were built for the Nizhni Novgorod exposition, one oval in plan and one circular. The roofs of these pavilions were doubly curved gridshells formed entirely of a lattice of straight angle-iron and flat iron bars. Shukhov himself called them azhurnaia bashnia ("lace tower", i.e., lattice tower ). The patent of this system, for which Shukhov applied in 1895, was awarded in 1899.
Shukhov also turned his attention to the development of an efficient and easily constructed structural system (gridshell) for a tower carrying a large load at the top – the problem of the water tower. His solution was inspired by observing the action of a woven basket supporting a heavy weight. Again, it took the form of a doubly curved surface constructed of a light network of straight iron bars and angle iron. Over the next 20 years, he designed and built nearly 200 of these towers, no two exactly alike, most with heights in the range of 12m to 68m.
At least as early as 1911, Shukhov began experimenting with the concept of forming a tower out of stacked sections of hyperboloids. Stacking the sections permitted the form of the tower to taper more at the top, with a less pronounced "waist" between the shape-defining rings at bottom and top. Increasing the number of sections would increase the tapering of the overall form, to the point that it began to resemble a cone.
By 1918 Shukhov had developed this concept into the design of a nine-section stacked hyperboloid radio broadcasting tower in Moscow. Shukhov designed a 350m tower, which would have surpassed the Eiffel Tower in height by 50m, while using less than a quarter of the amount of material. His design, as well as the full set of supporting calculations analyzing the hyperbolic geometry and sizing the network of members, was completed by February 1919. However, the 2200 tons of steel required to build the tower to 350m were not available. In July 1919, Lenin decreed that the tower should be built to a height of 150m, and the necessary steel was to be made available from the army's supplies. Construction of the smaller tower with six stacked hyperboloids began within a few months, and Shukhov Tower was completed by March 1922.
Antoni Gaudi and Shukhov carried out experiments with hyperboloid structures nearly simultaneously, but independently, in 1880–1895. Antoni Gaudi used structures in the form of hyperbolic paraboloid (hypar) and hyperboloid of revolution in the Sagrada Família in 1910. [ 4 ] In the Sagrada Família, there are a few places on the nativity facade – a design not equated with Gaudi's ruled-surface design, where the hyperboloid crops up. All around the scene with the pelican, there are numerous examples (including the basket held by one of the figures). There is a hyperboloid adding structural stability to the cypress tree (by connecting it to the bridge). The "bishop's mitre" spires are capped with hyperboloids. [ citation needed ]
In the Palau Güell , there is one set of interior columns along the main facade with hyperbolic capitals. The crown of the famous parabolic vault is a hyperboloid. The vault of one of the stables at the Church of Colònia Güell is a hyperboloid. There is a unique column in the Park Güell that is a hyperboloid. The famous Spanish engineer and architect Eduardo Torroja designed a thin-shell water tower in Fedala [ 5 ] and the roof of Hipódromo de la Zarzuela [ 6 ] in the form of hyperboloid of revolution. Le Corbusier and Félix Candela used hyperboloid structures ( hypar ). [ citation needed ]
A hyperboloid cooling tower by Frederik van Iterson and Gerard Kuypers was patented in the Netherlands on August 16, 1916. [ 7 ] The first Van Iterson cooling tower was built and put to use at the Dutch State Mine ( DSM ) Emma in 1918. A whole series of the same and later designs would follow. [ 8 ]
The Georgia Dome (1992) was the first Hypar- Tensegrity dome to be built. [ 9 ] | https://en.wikipedia.org/wiki/Hyperboloid_structure |
In particle physics , the hypercharge (a portmanteau of hyperonic and charge ) Y of a particle is a quantum number conserved under the strong interaction . The concept of hypercharge provides a single charge operator that accounts for properties of isospin , electric charge , and flavour . The hypercharge is useful to classify hadrons ; the similarly named weak hypercharge has an analogous role in the electroweak interaction .
Hypercharge is one of two quantum numbers of the SU(3) model of hadrons , alongside isospin I 3 . The isospin alone was sufficient for two quark flavours — namely u and d — whereas presently 6 flavours of quarks are known.
SU(3) weight diagrams (see below) are two-dimensional, with the coordinates referring to two quantum numbers: I 3 (also known as I z ), which is the z component of isospin, and Y , which is the hypercharge (defined by strangeness S , charm C , bottomness B′ , topness T′ , and baryon number B ). Mathematically, hypercharge is [ 1 ] Y = B + S + C + B′ + T′ .
Strong interactions conserve hypercharge (and weak hypercharge ), but weak interactions do not .
The Gell-Mann–Nishijima formula relates isospin and electric charge by Q = I 3 + Y / 2 , where I 3 is the third component of isospin and Q is the particle's charge.
Isospin creates multiplets of particles whose average charge is related to the hypercharge by ⟨ Q ⟩ = Y / 2 , since the hypercharge is the same for all members of a multiplet, and the average of the I 3 values is 0.
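As a small worked illustration (plain Python; the quantum-number assignments for nucleons and pions are the standard ones, and the helper name is chosen here), the relation Q = I 3 + Y/2 can be inverted to Y = 2(Q − I 3), and the average charge of each multiplet is seen to equal Y/2.

```python
def hypercharge(Q, I3):
    # Gell-Mann-Nishijima relation Q = I3 + Y/2, solved for Y
    return 2 * (Q - I3)

# (charge Q, isospin projection I3) for two familiar multiplets
nucleons = {"p": (1, +0.5), "n": (0, -0.5)}                  # expect Y = +1
pions = {"pi+": (1, 1), "pi0": (0, 0), "pi-": (-1, -1)}      # expect Y = 0

for name, members in (("nucleon doublet", nucleons), ("pion triplet", pions)):
    Y_values = [hypercharge(Q, I3) for Q, I3 in members.values()]
    average_Q = sum(Q for Q, _ in members.values()) / len(members)
    print(name, "Y per member:", Y_values, " average charge:", average_Q)
```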
These definitions in their original form hold only for the three lightest quarks.
The SU(2) model has multiplets characterized by a quantum number J , which is the total angular momentum . Each multiplet consists of 2 J + 1 substates with equally-spaced values of J z , forming a symmetric arrangement seen in atomic spectra and isospin. This formalizes the observation that certain strong baryon decays were not observed, leading to the prediction of the mass, strangeness and charge of the Ω − baryon .
The SU(3) has supermultiplets containing SU(2) multiplets. SU(3) now needs two numbers to specify all its sub-states which are denoted by λ 1 and λ 2 .
( λ 1 + 1) specifies the number of points in the topmost side of the hexagon while ( λ 2 + 1) specifies the number of points on the bottom side.
Hypercharge was a concept developed in the 1960s, to organize groups of particles in the " particle zoo " and to develop ad hoc conservation laws based on their observed transformations. With the advent of the quark model , it is now obvious that strong hypercharge, Y , is the following combination of the numbers of up ( n u ), down ( n d ), strange ( n s ), charm ( n c ), top ( n t ) and bottom ( n b ):
In modern descriptions of hadron interaction, it has become more obvious to draw Feynman diagrams that trace through the individual constituent quarks (which are conserved) composing the interacting baryons and mesons , rather than bothering to count strong hypercharge quantum numbers. Weak hypercharge , however, remains an essential part of understanding the electroweak interaction . | https://en.wikipedia.org/wiki/Hypercharge |
Hyperchromicity is the increase of absorbance ( optical density ) of a material. The most famous example is the hyperchromicity of DNA that occurs when the DNA duplex is denatured. [ 1 ] The UV absorption is increased when the two single DNA strands are being separated, either by heat or by addition of denaturant or by increasing the pH level. The opposite, a decrease of absorbance is called hypochromicity .
Heat denaturation of DNA, also called melting , causes the double helix structure to unwind to form single-stranded DNA. When DNA in solution is heated above its melting temperature (usually more than 80 °C), the double-stranded DNA unwinds to form single-stranded DNA. The bases become unstacked and can thus absorb more light. In their native state, the bases of DNA absorb light in the 260-nm wavelength region. When the bases become unstacked, the wavelength of maximum absorbance does not change, but the amount absorbed increases by 37%. A double-stranded DNA molecule dissociating into two single strands produces a sharp cooperative transition.
Hyperchromicity can be used to track the condition of DNA as temperature changes. The transition/melting temperature (T m ) is the temperature at which the UV absorbance is halfway between the maximum and minimum, i.e. where 50% of the DNA is denatured. A tenfold increase in monovalent cation concentration increases the melting temperature by 16.6 °C.
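A minimal sketch of the salt-dependence rule just quoted, assuming it takes the usual logarithmic form ΔT m = 16.6 · log10(c2/c1) for monovalent cation concentrations c1 and c2 (the function name and the example concentrations are illustrative only):

```python
import math

def tm_shift(c1_molar, c2_molar):
    """Estimated change in melting temperature (deg C) when [Na+] goes from c1 to c2."""
    return 16.6 * math.log10(c2_molar / c1_molar)

print(tm_shift(0.05, 0.5))    # tenfold increase -> +16.6 deg C
print(tm_shift(0.05, 0.2))    # fourfold increase -> about +10 deg C
```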
The hyperchromic effect is the striking increase in absorbance of DNA upon denaturation. The two strands of DNA are bound together mainly by stacking interactions, hydrogen bonds and the hydrophobic effect between the complementary bases. The hydrogen bonds limit the resonance of the aromatic rings, so the absorbance of the sample is limited as well. When the DNA double helix is treated with denaturing agents, the interactions holding the double-helical structure together are disrupted. The double helix then separates into two single strands in a random-coil conformation. The base-base interactions are then reduced, increasing the UV absorbance of the DNA solution because many bases are in free form and do not form hydrogen bonds with complementary bases. As a result, the absorbance of single-stranded DNA is 37% higher than that of double-stranded DNA at the same concentration.
| https://en.wikipedia.org/wiki/Hyperchromicity
In organic chemistry , hyperconjugation ( σ-conjugation or no-bond resonance ) refers to the delocalization of electrons with the participation of bonds of primarily σ-character . Usually, hyperconjugation involves the interaction of the electrons in a sigma (σ) orbital (e.g. C–H or C–C) with an adjacent unpopulated non-bonding p or antibonding σ* or π* orbitals to give a pair of extended molecular orbitals . However, sometimes, low-lying antibonding σ* orbitals may also interact with filled orbitals of lone pair character (n) in what is termed negative hyperconjugation . [ 1 ] Increased electron delocalization associated with hyperconjugation increases the stability of the system. [ 2 ] [ 3 ] In particular, the new orbital with bonding character is stabilized, resulting in an overall stabilization of the molecule. [ 4 ] Only electrons in bonds that are in the β position can have this sort of direct stabilizing effect — donating from a sigma bond on an atom to an orbital in another atom directly attached to it. However, extended versions of hyperconjugation (such as double hyperconjugation [ 5 ] ) can be important as well. The Baker–Nathan effect , sometimes used synonymously for hyperconjugation, [ 6 ] is a specific application of it to certain chemical reactions or types of structures. [ 7 ]
Hyperconjugation can be used to rationalize a variety of chemical phenomena, including the anomeric effect , the gauche effect , the rotational barrier of ethane , the beta-silicon effect , the vibrational frequency of exocyclic carbonyl groups, and the relative stability of substituted carbocations and substituted carbon centred radicals , and the thermodynamic Zaitsev's rule for alkene stability. More controversially, hyperconjugation is proposed by quantum mechanical modeling to be a better explanation for the preference of the staggered conformation rather than the old textbook notion of steric hindrance . [ 8 ] [ 9 ]
Hyperconjugation affects several properties. [ 6 ] [ 10 ]
Hyperconjugation was suggested as the reason for the increased stability of carbon-carbon double bonds as the degree of substitution increases. Early studies of hyperconjugation were performed in the research group of George Kistiakowsky . Their work, first published in 1937, was intended as a preliminary progress report of thermochemical studies of energy changes during addition reactions of various unsaturated and cyclic compounds. The importance of hyperconjugation in accounting for this effect has received support from quantum chemical calculations. [ 12 ] The key interaction is believed to be the donation of electron density from the neighboring C–H σ bond into the π* antibonding orbital of the alkene (σ C–H →π*). The effect is almost an order of magnitude weaker than the case of alkyl substitution on carbocations (σ C–H →p C ), since an unfilled p orbital is lower in energy, and, therefore, better energetically matched to a σ bond. When this effect manifests in the formation of the more substituted product in thermodynamically controlled E1 reactions, it is known as Zaitsev's rule , although in many cases the kinetic product also follows this rule. ( See Hofmann's rule for cases where the kinetic product is the less substituted one. )
One set of experiments by Kistiakowsky involved collecting heats of hydrogenation data during gas-phase reactions of a range of compounds that contained one alkene unit. When comparing a range of mono alkyl -substituted alkenes, they found that any alkyl group noticeably increased the stability, but that the choice of different specific alkyl groups had little to no effect. [ 13 ]
A portion of Kistiakowsky's work involved a comparison of other unsaturated compounds in the form of CH 2 =CH(CH 2 )n-CH=CH 2 (n=0,1,2). These experiments revealed an important result: when n=0, there is an effect of conjugation on the molecule, where the ΔH value is lowered by 3.5 kcal . This is likened to the addition of two alkyl groups into ethylene. Kistiakowsky also investigated open chain systems, where the largest value of heat liberated was found to be during the addition to a molecule in the 1,4-position. Cyclic molecules proved to be the most problematic, as it was found that the strain of the molecule would have to be considered. The strain of five-membered rings increased with a decreasing degree of unsaturation. This was a surprising result that was further investigated in later work with cyclic acid anhydrides and lactones . Cyclic molecules like benzene and its derivatives were also studied, as their behaviors were different from other unsaturated compounds. [ 13 ]
Despite the thoroughness of Kistiakowsky's work, it was not complete and needed further evidence to back up his findings. His work was a crucial first step to the beginnings of the ideas of hyperconjugation and conjugation effects.
The conjugation of 1,3- butadiene was first evaluated by Kistiakowsky; a conjugative contribution of 3.5 kcal/mol was found based on the energetic comparison of hydrogenation between conjugated species and unconjugated analogues. [ 13 ] Rogers, who used the method first applied by Kistiakowsky, reported that the conjugation stabilization of 1,3-butadiyne was zero, as the difference of Δ hyd H between the first and second hydrogenation was zero. The heats of hydrogenation (Δ hyd H) were obtained by the computational G3(MP2) quantum chemistry method. [ 14 ]
Another group led by Houk [ 15 ] suggested that the method employed by Rogers and Kistiakowsky was inappropriate, because comparisons of heats of hydrogenation evaluate not only conjugation effects but also other structural and electronic differences. They obtained -70.6 kcal/mol and -70.4 kcal/mol for the first and second hydrogenation respectively by ab initio calculation, which confirmed Rogers' data. However, they interpreted the data differently by taking into account the hyperconjugation stabilization. To quantify the hyperconjugation effect, they designed the following isodesmic reactions in 1-butyne and 1-butene .
Deleting the hyperconjugative interactions gives virtual states that have energies that are 4.9 and 2.4 kcal/mol higher than those of 1-butyne and 1-butene , respectively. Employment of these virtual states results in a 9.6 kcal/mol conjugative stabilization for 1,3-butadiyne and 8.5 kcal/mol for 1,3-butadiene.
A relatively recent work by Fernández and Frenking (2006) summarized the trends in hyperconjugation among various groups of acyclic molecules, using energy decomposition analysis or EDA. Fernández and Frenking define this type of analysis as "...a method that uses only the pi orbitals of the interacting fragments in the geometry of the molecule for estimating pi interactions. [ 16 ] " For this type of analysis, the formation of bonds between various molecular moieties is a combination of three component terms. ΔE elstat represents what Fernández and Frenking call a molecule's “quasiclassical electrostatic attractions. [ 16 ] ” The second term, ΔE Pauli , represents the molecule's Pauli repulsion. ΔE orb , the third term, represents stabilizing interactions between orbitals, and is defined as the sum of ΔE pi and ΔE sigma . The total energy of interaction, ΔE int , is the result of the sum of the 3 terms. [ 16 ]
One group whose ΔE pi values were analyzed very thoroughly was a set of enones that varied in substituent.
Fernández and Frenking reported that the methyl , hydroxyl , and amino substituents resulted in a decrease in ΔE pi from the parent 2-propenal . Conversely, halide substituents of increasing atomic mass resulted in increasing ΔE pi . Because both the enone study and Hammett analysis study substituent effects (although in different species), Fernández and Frenking felt that comparing the two to investigate possible trends might yield significant insight into their own results. They observed a linear relationship between the ΔE pi values for the substituted enones and the corresponding Hammett constants. The slope of the graph was found to be -51.67, with a correlation coefficient of -0.97 and a standard deviation of 0.54. [ 16 ] Fernández and Frenking conclude from this data that ..."the electronic effects of the substituents R on pi conjugation in homo- and heteroconjugated systems is similar and thus appears to be rather independent of the nature of the conjugating system.". [ 16 ] [ 17 ]
An instance where hyperconjugation may be overlooked as a possible chemical explanation is in rationalizing the rotational barrier of ethane (C 2 H 6 ). It had been accepted as early as the 1930s that the staggered conformations of ethane were more stable than the eclipsed conformation . Wilson had proven that the energy barrier between any pair of eclipsed and staggered conformations is approximately 3 kcal/mol, and the generally accepted rationale for this was the unfavorable steric interactions between hydrogen atoms.
In their 2001 paper, however, Pophristic and Goodman [ 8 ] revealed that this explanation may be too simplistic. [ 18 ] Goodman focused on three principal physical factors: hyperconjugative interactions, exchange repulsion defined by the Pauli exclusion principle , and electrostatic interactions ( Coulomb interactions ). By comparing a traditional ethane molecule and a hypothetical ethane molecule with all exchange repulsions removed, potential curves were prepared by plotting torsional angle versus energy for each molecule. The analysis of the curves determined that the staggered conformation had no connection to the amount of electrostatic repulsions within the molecule. These results demonstrate that Coulombic forces do not explain the favored staggered conformations, despite the fact that central bond stretching decreases electrostatic interactions. [ 8 ]
Goodman also conducted studies to determine the contribution of vicinal (between two methyl groups) vs. geminal (between the atoms in a single methyl group) interactions to hyperconjugation. In separate experiments, the geminal and vicinal interactions were removed, and the most stable conformer for each interaction was deduced. [ 8 ]
From these experiments, it can be concluded that hyperconjugative effects delocalize charge and stabilize the molecule. Further, it is the vicinal hyperconjugative effects that keep the molecule in the staggered conformation. [ 8 ] Thanks to this work, the following model of the stabilization of the staggered conformation of ethane is now more accepted:
Hyperconjugation can also explain several other phenomena whose explanations may also not be as intuitive as that for the rotational barrier of ethane. [ 18 ]
The matter of the rotational barrier of ethane is not settled within the scientific community. An analysis within quantitative molecular orbital theory shows that 2-orbital-4-electron (steric) repulsions are dominant over hyperconjugation. [ 19 ] A valence bond theory study also emphasizes the importance of steric effects. [ 20 ] | https://en.wikipedia.org/wiki/Hyperconjugation |
Hypercubane is a hypothetical polycyclic hydrocarbon with the chemical formula C 40 H 24 . It is a molecular analog of the four-dimensional hypercube or tesseract . Hypercubane possesses an unconventional geometry of the carbon framework. It has O h symmetry like classic cubane C 8 H 8 . The structure is that of octamethyl cubane—a carbon attached to each corner of cubane itself—having each of those carbon substituents joined to each of its neighbors by an ethylene -1,2-diyl linker to form an outer cage. The edge of each inner core and its outer linker form a cyclohexene .
Hypercubane was first proposed in 2014 by Pichierri and studied computationally by density functional theory . [ 1 ] The initial model of hypercubane was constructed from octamethylcubane by removing unnecessary hydrogen atoms and adding the ethylene bridges as well as intercarbon bonds between the sp 2 and sp 3 atoms. To facilitate the future hypercubane spectroscopic identification chemical shifts for both 13 C and 1 H NMR-active nuclei have been calculated by Pichierri. [ 1 ] Two years later, in 2016, studying the pyrolysis of hypercubane by means of tight-binding molecular dynamics simulations, Maslov and Katin demonstrated that hypercubane possessed high thermal stability comparable with the classic cubane C 8 H 8 . [ 2 ] It was shown that hypercubane lifetime at room temperature tended to infinity. Therefore, it can be assumed that hypercubane is a kinetically stable molecular system. Among the possible hypercubane decomposition products at high temperatures (more than 1000 K) one can observe polycyclic airscrew-like hydrocarbon C 34 H 18 based on three combined graphene fragments passivated by hydrogen atoms and three isolated acetylene molecules. [ 2 ]
As of April 2025, there has been no method describing the synthesis of hypercubane.
| https://en.wikipedia.org/wiki/Hypercubane
Hypercyclic morphogenesis refers to the emergence of a higher order of self-reproducing structure, organization, or hierarchy within a system; the concept was first introduced by J. Barkley Rosser, Jr. in 1991 (Chap. 12). It combines the idea of the hypercycle , due to Manfred Eigen and Peter Schuster (1979), with that of morphogenesis , due to D'Arcy W. Thompson (1917). The hypercycle involves the problem in biochemistry of molecules combining in a self-reacting group that is able to stay together, posited by Eigen and Schuster as the foundation for the emergence of multi-cellular organisms. Thompson saw morphogenesis as a central part of the development of an organism, as cell differentiation leads to new organs appearing as it develops and grows. Alan Turing (1952) would study the chemistry and mathematics involved in such a process, which would also be studied mathematically by René Thom (1972) in his formulation of catastrophe theory .
Rosser suggested applications in political economy such as the emergence of the European Union out of the conscious actions of the leaders of its constituent nation states (1992), or the appearance of a higher level in an urban hierarchy during economic development (1994). It has been applied to the emergence of higher levels in an ecological hierarchy (Rosser, Folke, Günther, Isomäki, Perrings, and Puu, 1994), and it can be argued that the final stage of such a development for combined ecologic-economic systems would be the noosphere of Vladimir I. Vernadsky (1945). | https://en.wikipedia.org/wiki/Hypercyclic_morphogenesis |
Hyperdata are data objects linked to other data objects in other places, as hypertext indicates text linked to other text in other places. Hyperdata enables the formation of a web of data, evolving from the "data on the Web" that is not inter-related (or at least, not linked). [ 1 ]
In the same way that hypertext usually refers to the World Wide Web but is a broader term, hyperdata usually refers to the Semantic Web , but may also be applied more broadly to other data-linking technologies such as microformats – including XHTML Friends Network .
A hypertext link indicates that a link exists between two documents or "information resources". Hyperdata links go beyond simply such a connection, and express semantics about the kind of connection being made. [ 2 ] For instance, in a document about Hillary Clinton , a hypertext link might be made from the word senator to a document about the United States Senate . In contrast, a hyperdata link from the same word to the same document might also state that senator was one of Hillary Clinton's roles, titles, or positions (depending on the ontology being used to define this link).
The Semantic Web introduces the controversial concept of links to non-data resources . In the Semantic Web, links are not limited to "information resources" or documents, such as the typical Web page. Hyperdata links may refer to a physical structure (e.g., "the Eiffel Tower"), a place (" Champ de Mars " where the Eiffel Tower stands), a person ( Gustave Eiffel , the man responsible for the tower's construction), or other "non-information resources". The links in this article are hypertext, not hyperdata, and they all lead to documents which describe the entities named. [ citation needed ]
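The following sketch (using the rdflib library; the example.org URIs and property names are hypothetical) shows what such a typed hyperdata link looks like as RDF, echoing the senator example above and including a link to a non-information resource.

```python
from rdflib import Graph, Namespace, URIRef

EX = Namespace("https://example.org/ontology/")                 # hypothetical ontology
clinton = URIRef("https://example.org/person/HillaryClinton")   # a non-information resource
senate = URIRef("https://example.org/org/UnitedStatesSenate")

g = Graph()
g.add((clinton, EX.holdsPosition, EX.Senator))   # a typed link, not just "these pages are connected"
g.add((EX.Senator, EX.positionIn, senate))
print(g.serialize(format="turtle"))
```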
A hyperdata browser (also called a Semantic Web browser ) is a browser used to navigate the Semantic Web. Semantic Web architecture does not necessarily involve the HTML document format, which typical HTML Web browsers rely upon. A hyperdata browser specifically requests RDF data from Web servers , often through content negotiation or conneg , starting from the same URL as the traditional Web browser; the Web server may immediately return the requested RDF, or it may deliver a redirection to a new URI where the RDF may actually be found, or the RDF may be embedded in the same HTML document which would be returned to a Web browser which did not request RDF. The RDF data will generally describe the resource represented by the originally requested URI . The hyperdata browser then renders the information received as an HTML page that contains hyperlinks for users to navigate to indicated resources. [ citation needed ] | https://en.wikipedia.org/wiki/Hyperdata
In nuclear physics , hyperdeformation refers to theoretically predicted states of an atomic nucleus with an extremely elongated shape and very high angular momentum. Less elongated states ( superdeformation ) have been well observed, but the experimental evidence for hyperdeformation is more limited. Hyperdeformed states correspond to an axis ratio of 3:1. They would be caused by a third minimum in the potential energy surface , the second causing superdeformation and the first minimum being normal deformation. [ 1 ] [ 2 ] [ 3 ] Hyperdeformation is predicted to be found in 107 Cd .
| https://en.wikipedia.org/wiki/Hyperdeformation
In mathematics , the Gaussian or ordinary hypergeometric function 2 F 1 ( a , b ; c ; z ) is a special function represented by the hypergeometric series , that includes many other special functions as specific or limiting cases . It is a solution of a second-order linear ordinary differential equation (ODE). Every second-order linear ODE with three regular singular points can be transformed into this equation.
For systematic lists of some of the many thousands of published identities involving the hypergeometric function, see the reference works by Erdélyi et al. (1953) and Olde Daalhuis (2010) . There is no known system for organizing all of the identities; indeed, there is no known algorithm that can generate all identities; a number of different algorithms are known that generate different series of identities. The theory of the algorithmic discovery of identities remains an active research topic.
The term "hypergeometric series" was first used by John Wallis in his 1655 book Arithmetica Infinitorum .
Hypergeometric series were studied by Leonhard Euler , but the first full systematic treatment was given by Carl Friedrich Gauss ( 1813 ).
Studies in the nineteenth century included those of Ernst Kummer ( 1836 ), and the fundamental characterisation by Bernhard Riemann ( 1857 ) of the hypergeometric function by means of the differential equation it satisfies.
Riemann showed that the second-order differential equation for 2 F 1 ( z ), examined in the complex plane, could be characterised (on the Riemann sphere ) by its three regular singularities .
The cases where the solutions are algebraic functions were found by Hermann Schwarz ( Schwarz's list ).
The hypergeometric function is defined for | z | < 1 by the power series
2 F 1 ( a , b ; c ; z ) = ∑ n = 0 ∞ ( a ) n ( b ) n ( c ) n z n n ! = 1 + a b c z 1 ! + a ( a + 1 ) b ( b + 1 ) c ( c + 1 ) z 2 2 ! + ⋯ . {\displaystyle {}_{2}F_{1}(a,b;c;z)=\sum _{n=0}^{\infty }{\frac {(a)_{n}(b)_{n}}{(c)_{n}}}{\frac {z^{n}}{n!}}=1+{\frac {ab}{c}}{\frac {z}{1!}}+{\frac {a(a+1)b(b+1)}{c(c+1)}}{\frac {z^{2}}{2!}}+\cdots .}
It is undefined (or infinite) if c equals a non-positive integer . Here ( q ) n is the (rising) Pochhammer symbol , [ note 1 ] which is defined by:
( q ) n = { 1 n = 0 q ( q + 1 ) ⋯ ( q + n − 1 ) n > 0 {\displaystyle (q)_{n}={\begin{cases}1&n=0\\q(q+1)\cdots (q+n-1)&n>0\end{cases}}}
The series terminates if either a or b is a nonpositive integer, in which case the function reduces to a polynomial:
2 F 1 ( − m , b ; c ; z ) = ∑ n = 0 m ( − 1 ) n ( m n ) ( b ) n ( c ) n z n . {\displaystyle {}_{2}F_{1}(-m,b;c;z)=\sum _{n=0}^{m}(-1)^{n}{\binom {m}{n}}{\frac {(b)_{n}}{(c)_{n}}}z^{n}.}
For complex arguments z with | z | ≥ 1 it can be analytically continued along any path in the complex plane that avoids the branch points 1 and infinity. In practice, most computer implementations of the hypergeometric function adopt a branch cut along the line z ≥ 1 .
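The series definition can be evaluated directly for |z| < 1. The following sketch (assuming SciPy is installed; the function name and truncation length are this sketch's own) compares a truncated sum with SciPy's built-in implementation.

```python
from scipy.special import hyp2f1

def hyp2f1_series(a, b, c, z, terms=200):
    total, term = 0.0, 1.0      # term holds (a)_n (b)_n / ((c)_n n!) * z^n
    for n in range(terms):
        total += term
        term *= (a + n) * (b + n) / ((c + n) * (n + 1)) * z
    return total

a, b, c, z = 0.3, 1.7, 2.5, 0.4
print(hyp2f1_series(a, b, c, z))   # truncated series
print(hyp2f1(a, b, c, z))          # reference value
```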
As c → − m , where m is a non-negative integer, one has 2 F 1 ( z ) → ∞ . Dividing by the value Γ( c ) of the gamma function , we have the limit:
lim c → − m 2 F 1 ( a , b ; c ; z ) Γ ( c ) = ( a ) m + 1 ( b ) m + 1 ( m + 1 ) ! z m + 1 2 F 1 ( a + m + 1 , b + m + 1 ; m + 2 ; z ) {\displaystyle \lim _{c\to -m}{\frac {{}_{2}F_{1}(a,b;c;z)}{\Gamma (c)}}={\frac {(a)_{m+1}(b)_{m+1}}{(m+1)!}}z^{m+1}{}_{2}F_{1}(a+m+1,b+m+1;m+2;z)}
2 F 1 ( z ) is the most common type of generalized hypergeometric series p F q , and is often designated simply F ( z ) .
Using the identity ( a ) n + 1 = a ( a + 1 ) n {\displaystyle (a)_{n+1}=a(a+1)_{n}} , it is shown that
d d z 2 F 1 ( a , b ; c ; z ) = a b c 2 F 1 ( a + 1 , b + 1 ; c + 1 ; z ) {\displaystyle {\frac {d}{dz}}\ {}_{2}F_{1}(a,b;c;z)={\frac {ab}{c}}\ {}_{2}F_{1}(a+1,b+1;c+1;z)}
and more generally,
d n d z n 2 F 1 ( a , b ; c ; z ) = ( a ) n ( b ) n ( c ) n 2 F 1 ( a + n , b + n ; c + n ; z ) {\displaystyle {\frac {d^{n}}{dz^{n}}}\ {}_{2}F_{1}(a,b;c;z)={\frac {(a)_{n}(b)_{n}}{(c)_{n}}}\ {}_{2}F_{1}(a+n,b+n;c+n;z)}
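The first-derivative case of this formula is easy to check numerically, for example with a central finite difference (SciPy assumed; the parameter values are arbitrary):

```python
from scipy.special import hyp2f1

a, b, c, z, h = 0.5, 1.2, 2.3, 0.3, 1e-6
numeric = (hyp2f1(a, b, c, z + h) - hyp2f1(a, b, c, z - h)) / (2 * h)
formula = a * b / c * hyp2f1(a + 1, b + 1, c + 1, z)
print(numeric, formula)   # the two values agree closely
```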
Many of the common mathematical functions can be expressed in terms of the hypergeometric function, or as limiting cases of it. Some typical examples are
2 F 1 ( 1 , 1 ; 2 ; − z ) = ln ( 1 + z ) z 2 F 1 ( a , b ; b ; z ) = ( 1 − z ) − a ( b arbitrary ) 2 F 1 ( 1 2 , 1 2 ; 3 2 ; z 2 ) = arcsin ( z ) z 2 F 1 ( 1 3 , 2 3 ; 3 2 ; − 27 x 2 4 ) = 3 x 3 + 27 x 2 + 4 2 3 − 2 3 x 3 + 27 x 2 + 4 3 x 3 {\displaystyle {\begin{aligned}_{2}F_{1}\left(1,1;2;-z\right)&={\frac {\ln(1+z)}{z}}\\_{2}F_{1}(a,b;b;z)&=(1-z)^{-a}\quad (b{\text{ arbitrary}})\\_{2}F_{1}\left({\frac {1}{2}},{\frac {1}{2}};{\frac {3}{2}};z^{2}\right)&={\frac {\arcsin(z)}{z}}\\\,_{2}F_{1}\left({\frac {1}{3}},{\frac {2}{3}};{\frac {3}{2}};-{\frac {27x^{2}}{4}}\right)&={\frac {{\sqrt[{3}]{\frac {3x{\sqrt {3}}+{\sqrt {27x^{2}+4}}}{2}}}-{\sqrt[{3}]{\frac {2}{3x{\sqrt {3}}+{\sqrt {27x^{2}+4}}}}}}{x{\sqrt {3}}}}\\\end{aligned}}}
When a = 1 and b = c , the series reduces to a plain geometric series , i.e.
2 F 1 ( 1 , b ; b ; z ) = 1 F 0 ( 1 ; ; z ) = 1 + z + z 2 + z 3 + z 4 + ⋯ {\displaystyle {\begin{aligned}_{2}F_{1}\left(1,b;b;z\right)&={}_{1}F_{0}\left(1;;z\right)=1+z+z^{2}+z^{3}+z^{4}+\cdots \end{aligned}}}
hence, the name hypergeometric . This function can be considered as a generalization of the geometric series .
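Three of the special cases listed above can be spot-checked numerically (SciPy assumed; the sample point is arbitrary):

```python
import numpy as np
from scipy.special import hyp2f1

z = 0.37
print(hyp2f1(1, 1, 2, -z), np.log(1 + z) / z)                  # logarithm case
print(hyp2f1(0.7, 2.5, 2.5, z), (1 - z) ** -0.7)               # (1 - z)^(-a) case with b = c
print(hyp2f1(0.5, 0.5, 1.5, z ** 2), np.arcsin(z) / z)         # arcsine case
```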
The confluent hypergeometric function (or Kummer's function) can be given as a limit of the hypergeometric function
M ( a , c , z ) = lim b → ∞ 2 F 1 ( a , b ; c ; b − 1 z ) {\displaystyle M(a,c,z)=\lim _{b\to \infty }{}_{2}F_{1}(a,b;c;b^{-1}z)}
so all functions that are essentially special cases of it, such as Bessel functions , can be expressed as limits of hypergeometric functions. These include most of the commonly used functions of mathematical physics.
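The confluent limit can be observed numerically by letting b grow (SciPy assumed; SciPy's hyp1f1 is Kummer's function):

```python
from scipy.special import hyp2f1, hyp1f1

a, c, z = 0.8, 1.9, 0.6
for b in (10.0, 100.0, 1000.0):
    print(b, hyp2f1(a, b, c, z / b))   # approaches the confluent value as b grows
print("1F1:", hyp1f1(a, c, z))         # Kummer's M(a, c, z)
```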
Legendre functions are solutions of a second order differential equation with 3 regular singular points so can be expressed in terms of the hypergeometric function in many ways, for example
2 F 1 ( a , 1 − a ; c ; z ) = Γ ( c ) z 1 − c 2 ( 1 − z ) c − 1 2 P − a 1 − c ( 1 − 2 z ) {\displaystyle {}_{2}F_{1}(a,1-a;c;z)=\Gamma (c)z^{\tfrac {1-c}{2}}(1-z)^{\tfrac {c-1}{2}}P_{-a}^{1-c}(1-2z)}
Several orthogonal polynomials, including the Jacobi polynomials P (α,β) n and their special cases the Legendre polynomials , Chebyshev polynomials , Gegenbauer polynomials , and Zernike polynomials , can be written in terms of hypergeometric functions using
2 F 1 ( − n , α + 1 + β + n ; α + 1 ; x ) = n ! ( α + 1 ) n P n ( α , β ) ( 1 − 2 x ) {\displaystyle {}_{2}F_{1}(-n,\alpha +1+\beta +n;\alpha +1;x)={\frac {n!}{(\alpha +1)_{n}}}P_{n}^{(\alpha ,\beta )}(1-2x)}
Other polynomials that are special cases include Krawtchouk polynomials , Meixner polynomials , Meixner–Pollaczek polynomials .
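The Jacobi-polynomial relation displayed above can be verified numerically (SciPy assumed; the parameter values are arbitrary):

```python
from math import factorial
from scipy.special import hyp2f1, eval_jacobi, poch

n, alpha, beta, x = 4, 0.5, 1.5, 0.3
lhs = hyp2f1(-n, alpha + 1 + beta + n, alpha + 1, x)
rhs = factorial(n) / poch(alpha + 1, n) * eval_jacobi(n, alpha, beta, 1 - 2 * x)
print(lhs, rhs)   # the two values agree
```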
Given z ∈ C ∖ { 0 , 1 } {\displaystyle z\in \mathbb {C} \setminus \{0,1\}} , let
τ = i 2 F 1 ( 1 2 , 1 2 ; 1 ; 1 − z ) 2 F 1 ( 1 2 , 1 2 ; 1 ; z ) . {\displaystyle \tau ={\rm {i}}{\frac {{}_{2}F_{1}{\bigl (}{\frac {1}{2}},{\frac {1}{2}};1;1-z{\bigr )}}{{}_{2}F_{1}{\bigl (}{\frac {1}{2}},{\frac {1}{2}};1;z{\bigr )}}}.}
Then
λ ( τ ) = θ 2 ( τ ) 4 θ 3 ( τ ) 4 = z {\displaystyle \lambda (\tau )={\frac {\theta _{2}(\tau )^{4}}{\theta _{3}(\tau )^{4}}}=z}
is the modular lambda function , where
θ 2 ( τ ) = ∑ n ∈ Z e π i τ ( n + 1 / 2 ) 2 , θ 3 ( τ ) = ∑ n ∈ Z e π i τ n 2 . {\displaystyle \theta _{2}(\tau )=\sum _{n\in \mathbb {Z} }e^{\pi i\tau (n+1/2)^{2}},\quad \theta _{3}(\tau )=\sum _{n\in \mathbb {Z} }e^{\pi i\tau n^{2}}.}
The j-invariant , a modular function , is a rational function in λ ( τ ) {\displaystyle \lambda (\tau )} .
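A high-precision numerical check of this construction (using the mpmath library, which provides both 2F1 and the Jacobi theta functions; the sample value of z is arbitrary) recovers z from τ:

```python
from mpmath import mp, mpf, mpc, hyp2f1, jtheta, exp, pi

mp.dps = 30
z = mpf("0.3")
half = mpf(1) / 2
tau = mpc(0, 1) * hyp2f1(half, half, 1, 1 - z) / hyp2f1(half, half, 1, z)
q = exp(pi * mpc(0, 1) * tau)                    # nome q = e^(i*pi*tau)
lam = (jtheta(2, 0, q) / jtheta(3, 0, q)) ** 4   # modular lambda function
print(lam)                                       # approximately 0.3, recovering z
```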
Incomplete beta functions B x ( p , q ) are related by
B x ( p , q ) = x p p 2 F 1 ( p , 1 − q ; p + 1 ; x ) . {\displaystyle B_{x}(p,q)={\tfrac {x^{p}}{p}}{}_{2}F_{1}(p,1-q;p+1;x).}
The complete elliptic integrals K and E are given by [ 1 ]
K ( k ) = π 2 2 F 1 ( 1 2 , 1 2 ; 1 ; k 2 ) , E ( k ) = π 2 2 F 1 ( − 1 2 , 1 2 ; 1 ; k 2 ) . {\displaystyle {\begin{aligned}K(k)&={\tfrac {\pi }{2}}\,_{2}F_{1}\left({\tfrac {1}{2}},{\tfrac {1}{2}};1;k^{2}\right),\\E(k)&={\tfrac {\pi }{2}}\,_{2}F_{1}\left(-{\tfrac {1}{2}},{\tfrac {1}{2}};1;k^{2}\right).\end{aligned}}}
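These two formulas are easy to confirm numerically; note that SciPy's ellipk and ellipe take the parameter m = k² rather than the modulus k (SciPy assumed; the sample modulus is arbitrary):

```python
import numpy as np
from scipy.special import hyp2f1, ellipk, ellipe

k = 0.6
m = k ** 2
print(np.pi / 2 * hyp2f1(0.5, 0.5, 1, m), ellipk(m))     # complete elliptic integral K
print(np.pi / 2 * hyp2f1(-0.5, 0.5, 1, m), ellipe(m))    # complete elliptic integral E
```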
The hypergeometric function is a solution of Euler's hypergeometric differential equation
z ( 1 − z ) d 2 w d z 2 + [ c − ( a + b + 1 ) z ] d w d z − a b w = 0. {\displaystyle z(1-z){\frac {d^{2}w}{dz^{2}}}+\left[c-(a+b+1)z\right]{\frac {dw}{dz}}-ab\,w=0.}
which has three regular singular points : 0,1 and ∞. The generalization of this equation to three arbitrary regular singular points is given by Riemann's differential equation . Any second order linear differential equation with three regular singular points can be converted to the hypergeometric differential equation by a change of variables.
Solutions to the hypergeometric differential equation are built out of the hypergeometric series 2 F 1 ( a , b ; c ; z ). The equation has two linearly independent solutions. At each of the three singular points 0, 1, ∞, there are usually two special solutions of the form x s times a holomorphic function of x , where s is one of the two roots of the indicial equation and x is a local variable vanishing at a regular singular point. This gives 3 × 2 = 6 special solutions, as follows.
Around the point z = 0, two independent solutions are, if c is not a non-positive integer,
2 F 1 ( a , b ; c ; z ) {\displaystyle \,_{2}F_{1}(a,b;c;z)}
and, on condition that c is not an integer,
z 1 − c 2 F 1 ( 1 + a − c , 1 + b − c ; 2 − c ; z ) {\displaystyle z^{1-c}\,_{2}F_{1}(1+a-c,1+b-c;2-c;z)}
If c is a non-positive integer 1− m , then the first of these solutions does not exist and must be replaced by z m F ( a + m , b + m ; 1 + m ; z ) . {\displaystyle z^{m}F(a+m,b+m;1+m;z).} The second solution does not exist when c is an integer greater than 1, and is equal to the first solution, or its replacement, when c is any other integer. So when c is an integer, a more complicated expression must be used for a second solution, equal to the first solution multiplied by ln( z ), plus another series in powers of z , involving the digamma function . See Olde Daalhuis (2010) for details.
Around z = 1, if c − a − b is not an integer, one has two independent solutions
2 F 1 ( a , b ; 1 + a + b − c ; 1 − z ) {\displaystyle \,_{2}F_{1}(a,b;1+a+b-c;1-z)}
and
( 1 − z ) c − a − b 2 F 1 ( c − a , c − b ; 1 + c − a − b ; 1 − z ) {\displaystyle (1-z)^{c-a-b}\;_{2}F_{1}(c-a,c-b;1+c-a-b;1-z)}
Around z = ∞, if a − b is not an integer, one has two independent solutions
z − a 2 F 1 ( a , 1 + a − c ; 1 + a − b ; z − 1 ) {\displaystyle z^{-a}\,_{2}F_{1}\left(a,1+a-c;1+a-b;z^{-1}\right)}
and
z − b 2 F 1 ( b , 1 + b − c ; 1 + b − a ; z − 1 ) . {\displaystyle z^{-b}\,_{2}F_{1}\left(b,1+b-c;1+b-a;z^{-1}\right).}
Again, when the conditions of non-integrality are not met, there exist other solutions that are more complicated.
Any 3 of the above 6 solutions satisfy a linear relation, as the space of solutions is 2-dimensional, giving ( 6 3 ) = 20 {\displaystyle {\tbinom {6}{3}}=20} linear relations between them, called connection formulas .
A second order Fuchsian equation with n singular points has a group of symmetries acting (projectively) on its solutions, isomorphic to the Coxeter group W( D n ) of order 2 n −1 n !. The hypergeometric equation is the case n = 3, with group of order 24 isomorphic to the symmetric group on 4 points, as first described by Kummer . The appearance of the symmetric group is accidental and has no analogue for more than 3 singular points, and it is sometimes better to think of the group as an extension of the symmetric group on 3 points (acting as permutations of the 3 singular points) by a Klein 4-group (whose elements change the signs of the differences of the exponents at an even number of singular points). Kummer's group of 24 transformations is generated by the three transformations taking a solution F ( a , b ; c ; z ) to one of
( 1 − z ) − a F ( a , c − b ; c ; z z − 1 ) F ( a , b ; 1 + a + b − c ; 1 − z ) ( 1 − z ) − b F ( c − a , b ; c ; z z − 1 ) {\displaystyle {\begin{aligned}(1-z)^{-a}F\left(a,c-b;c;{\tfrac {z}{z-1}}\right)\\F(a,b;1+a+b-c;1-z)\\(1-z)^{-b}F\left(c-a,b;c;{\tfrac {z}{z-1}}\right)\end{aligned}}}
which correspond to the transpositions (12), (23), and (34) under an isomorphism with the symmetric group on 4 points 1, 2, 3, 4. (The first and third of these are actually equal to F ( a , b ; c ; z ) whereas the second is an independent solution to the differential equation.)
Applying Kummer's 24 = 6×4 transformations to the hypergeometric function gives the 6 = 2×3 solutions above corresponding to each of the 2 possible exponents at each of the 3 singular points, each of which appears 4 times because of the identities
2 F 1 ( a , b ; c ; z ) = ( 1 − z ) c − a − b 2 F 1 ( c − a , c − b ; c ; z ) Euler transformation 2 F 1 ( a , b ; c ; z ) = ( 1 − z ) − a 2 F 1 ( a , c − b ; c ; z z − 1 ) Pfaff transformation 2 F 1 ( a , b ; c ; z ) = ( 1 − z ) − b 2 F 1 ( c − a , b ; c ; z z − 1 ) Pfaff transformation {\displaystyle {\begin{aligned}{}_{2}F_{1}(a,b;c;z)&=(1-z)^{c-a-b}\,{}_{2}F_{1}(c-a,c-b;c;z)&&{\text{Euler transformation}}\\{}_{2}F_{1}(a,b;c;z)&=(1-z)^{-a}\,{}_{2}F_{1}(a,c-b;c;{\tfrac {z}{z-1}})&&{\text{Pfaff transformation}}\\{}_{2}F_{1}(a,b;c;z)&=(1-z)^{-b}\,{}_{2}F_{1}(c-a,b;c;{\tfrac {z}{z-1}})&&{\text{Pfaff transformation}}\end{aligned}}}
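The Euler and Pfaff transformations above can be spot-checked numerically (SciPy assumed; the parameter values are arbitrary):

```python
from scipy.special import hyp2f1

a, b, c, z = 0.4, 1.3, 2.1, 0.35
f = hyp2f1(a, b, c, z)
euler = (1 - z) ** (c - a - b) * hyp2f1(c - a, c - b, c, z)
pfaff1 = (1 - z) ** (-a) * hyp2f1(a, c - b, c, z / (z - 1))
pfaff2 = (1 - z) ** (-b) * hyp2f1(c - a, b, c, z / (z - 1))
print(f, euler, pfaff1, pfaff2)   # all four values agree
```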
The hypergeometric differential equation may be brought into the Q-form
d 2 u d z 2 + Q ( z ) u ( z ) = 0 {\displaystyle {\frac {d^{2}u}{dz^{2}}}+Q(z)u(z)=0}
by making the substitution w = uv and eliminating the first-derivative term. One finds that
Q = z 2 [ 1 − ( a − b ) 2 ] + z [ 2 c ( a + b − 1 ) − 4 a b ] + c ( 2 − c ) 4 z 2 ( 1 − z ) 2 {\displaystyle Q={\frac {z^{2}[1-(a-b)^{2}]+z[2c(a+b-1)-4ab]+c(2-c)}{4z^{2}(1-z)^{2}}}}
and v is given by the solution to
d d z log v ( z ) = − c − z ( a + b + 1 ) 2 z ( 1 − z ) = − c 2 z − 1 + a + b − c 2 ( z − 1 ) {\displaystyle {\frac {d}{dz}}\log v(z)=-{\frac {c-z(a+b+1)}{2z(1-z)}}=-{\frac {c}{2z}}-{\frac {1+a+b-c}{2(z-1)}}}
which is
v ( z ) = z − c / 2 ( 1 − z ) ( c − a − b − 1 ) / 2 . {\displaystyle v(z)=z^{-c/2}(1-z)^{(c-a-b-1)/2}.}
The Q-form is significant in its relation to the Schwarzian derivative ( Hille 1976 , pp. 307–401).
The Schwarz triangle maps or Schwarz s -functions are ratios of pairs of solutions.
s k ( z ) = ϕ k ( 1 ) ( z ) ϕ k ( 0 ) ( z ) {\displaystyle s_{k}(z)={\frac {\phi _{k}^{(1)}(z)}{\phi _{k}^{(0)}(z)}}}
where k is one of the points 0, 1, ∞. The notation
D k ( λ , μ , ν ; z ) = s k ( z ) {\displaystyle D_{k}(\lambda ,\mu ,\nu ;z)=s_{k}(z)}
is also sometimes used. Note that the connection coefficients become Möbius transformations on the triangle maps.
Note that each triangle map is regular at z ∈ {0, 1, ∞} respectively, with
s 0 ( z ) = z λ ( 1 + O ( z ) ) s 1 ( z ) = ( 1 − z ) μ ( 1 + O ( 1 − z ) ) {\displaystyle {\begin{aligned}s_{0}(z)&=z^{\lambda }(1+{\mathcal {O}}(z))\\s_{1}(z)&=(1-z)^{\mu }(1+{\mathcal {O}}(1-z))\end{aligned}}} and s ∞ ( z ) = z ν ( 1 + O ( 1 z ) ) . {\displaystyle s_{\infty }(z)=z^{\nu }(1+{\mathcal {O}}({\tfrac {1}{z}})).}
In the special case of λ, μ and ν real, with 0 ≤ λ,μ,ν < 1 then the s-maps are conformal maps of the upper half-plane H to triangles on the Riemann sphere , bounded by circular arcs. This mapping is a generalization of the Schwarz–Christoffel mapping to triangles with circular arcs. The singular points 0,1 and ∞ are sent to the triangle vertices. The angles of the triangle are πλ, πμ and πν respectively.
Furthermore, in the case of λ=1/ p , μ=1/ q and ν=1/ r for integers p , q , r , then the triangle tiles the sphere, the complex plane or the upper half plane according to whether λ + μ + ν – 1 is positive, zero or negative; and the s-maps are inverse functions of automorphic functions for the triangle group 〈 p , q , r 〉 = Δ( p , q , r ).
The monodromy of a hypergeometric equation describes how fundamental solutions change when analytically continued around paths in the z plane that return to the same point.
That is, when the path winds around a singularity of 2 F 1 , the value of the solutions at the endpoint will differ from the starting point.
Two fundamental solutions of the hypergeometric equation are related to each other by a linear transformation; thus the monodromy is a mapping (group homomorphism):
π 1 ( C ∖ { 0 , 1 } , z 0 ) → GL ( 2 , C ) {\displaystyle \pi _{1}(\mathbf {C} \setminus \{0,1\},z_{0})\to {\text{GL}}(2,\mathbf {C} )}
where π 1 is the fundamental group . In other words, the monodromy is a two dimensional linear representation of the fundamental group. The monodromy group of the equation is the image of this map, i.e. the group generated by the monodromy matrices. The monodromy representation of the fundamental group can be computed explicitly in terms of the exponents at the singular points. [ 2 ] If (α, α'), (β, β') and (γ,γ') are the exponents at 0, 1 and ∞, then, taking z 0 near 0, the loops around 0 and 1 have monodromy matrices
g 0 = ( e 2 π i α 0 0 e 2 π i α ′ ) g 1 = ( μ e 2 π i β − e 2 π i β ′ μ − 1 μ ( e 2 π i β − e 2 π i β ′ ) ( μ − 1 ) 2 e 2 π i β ′ − e 2 π i β μ e 2 π i β ′ − e 2 π i β μ − 1 ) , {\displaystyle {\begin{aligned}g_{0}&={\begin{pmatrix}e^{2\pi i\alpha }&0\\0&e^{2\pi i\alpha ^{\prime }}\end{pmatrix}}\\g_{1}&={\begin{pmatrix}{\mu e^{2\pi i\beta }-e^{2\pi i\beta ^{\prime }} \over \mu -1}&{\mu (e^{2\pi i\beta }-e^{2\pi i\beta ^{\prime }}) \over (\mu -1)^{2}}\\e^{2\pi i\beta ^{\prime }}-e^{2\pi i\beta }&{\mu e^{2\pi i\beta ^{\prime }}-e^{2\pi i\beta } \over \mu -1}\end{pmatrix}},\end{aligned}}}
where
μ = sin π ( α + β ′ + γ ′ ) sin π ( α ′ + β + γ ′ ) sin π ( α ′ + β ′ + γ ′ ) sin π ( α + β + γ ′ ) . {\displaystyle \mu ={\sin \pi (\alpha +\beta ^{\prime }+\gamma ^{\prime })\sin \pi (\alpha ^{\prime }+\beta +\gamma ^{\prime }) \over \sin \pi (\alpha ^{\prime }+\beta ^{\prime }+\gamma ^{\prime })\sin \pi (\alpha +\beta +\gamma ^{\prime })}.}
If 1− a , c − a − b , a − b are non-integer rational numbers with denominators k , l , m , then the monodromy group is finite if and only if 1 / k + 1 / l + 1 / m > 1 {\displaystyle 1/k+1/l+1/m>1} ; see Schwarz's list or Kovacic's algorithm .
If B is the beta function then
B ( b , c − b ) 2 F 1 ( a , b ; c ; z ) = ∫ 0 1 x b − 1 ( 1 − x ) c − b − 1 ( 1 − z x ) − a d x ℜ ( c ) > ℜ ( b ) > 0 , {\displaystyle \mathrm {B} (b,c-b)\,_{2}F_{1}(a,b;c;z)=\int _{0}^{1}x^{b-1}(1-x)^{c-b-1}(1-zx)^{-a}\,dx\qquad \Re (c)>\Re (b)>0,}
provided that z is not a real number greater than or equal to 1. This can be proved by expanding (1 − zx ) − a using the binomial theorem and then integrating term by term for z with absolute value smaller than 1, and by analytic continuation elsewhere. When z is a real number greater than or equal to 1, analytic continuation must be used, because (1 − zx ) is zero at some point in the support of the integral, so the value of the integral may be ill-defined. This was given by Euler in 1748 and implies Euler's and Pfaff's hypergeometric transformations.
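Euler's integral representation can be confirmed by direct numerical quadrature for real parameters with c > b > 0 and z < 1 (SciPy assumed; the parameter values are arbitrary):

```python
from scipy.integrate import quad
from scipy.special import beta, hyp2f1

a, b, c, z = 0.7, 1.4, 3.2, 0.5
integral, _ = quad(lambda x: x ** (b - 1) * (1 - x) ** (c - b - 1) * (1 - z * x) ** (-a), 0, 1)
print(integral, beta(b, c - b) * hyp2f1(a, b, c, z))   # the two values agree
```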
Other representations, corresponding to other branches , are given by taking the same integrand, but taking the path of integration to be a closed Pochhammer cycle enclosing the singularities in various orders. Such paths correspond to the monodromy action.
Barnes used the theory of residues to evaluate the Barnes integral
1 2 π i ∫ − i ∞ i ∞ Γ ( a + s ) Γ ( b + s ) Γ ( − s ) Γ ( c + s ) ( − z ) s d s {\displaystyle {\frac {1}{2\pi i}}\int _{-i\infty }^{i\infty }{\frac {\Gamma (a+s)\Gamma (b+s)\Gamma (-s)}{\Gamma (c+s)}}(-z)^{s}\,ds}
as
Γ ( a ) Γ ( b ) Γ ( c ) 2 F 1 ( a , b ; c ; z ) , {\displaystyle {\frac {\Gamma (a)\Gamma (b)}{\Gamma (c)}}\,_{2}F_{1}(a,b;c;z),}
where the contour is drawn to separate the poles 0, 1, 2... from the poles − a , − a − 1, ..., − b , − b − 1, ... . This is valid as long as z is not a nonnegative real number.
The Gauss hypergeometric function can be written as a John transform ( Gelfand, Gindikin & Graev 2003 , 2.1.2).
The six functions
2 F 1 ( a ± 1 , b ; c ; z ) , 2 F 1 ( a , b ± 1 ; c ; z ) , 2 F 1 ( a , b ; c ± 1 ; z ) {\displaystyle {}_{2}F_{1}(a\pm 1,b;c;z),\quad {}_{2}F_{1}(a,b\pm 1;c;z),\quad {}_{2}F_{1}(a,b;c\pm 1;z)}
are called contiguous to 2 F 1 ( a , b ; c ; z ) . Gauss showed that 2 F 1 ( a , b ; c ; z ) can be written as a linear combination of any two of its contiguous functions, with rational coefficients in terms of a , b , c , and z . This gives
( 6 2 ) = 15 {\displaystyle {\begin{pmatrix}6\\2\end{pmatrix}}=15}
relations, given by identifying any two lines on the right hand side of
z d F d z = z a b c F ( a + , b + , c + ) = a ( F ( a + ) − F ) = b ( F ( b + ) − F ) = ( c − 1 ) ( F ( c − ) − F ) = ( c − a ) F ( a − ) + ( a − c + b z ) F 1 − z = ( c − b ) F ( b − ) + ( b − c + a z ) F 1 − z = z ( c − a ) ( c − b ) F ( c + ) + c ( a + b − c ) F c ( 1 − z ) {\displaystyle {\begin{aligned}z{\frac {dF}{dz}}&=z{\frac {ab}{c}}F(a+,b+,c+)\\&=a(F(a+)-F)\\&=b(F(b+)-F)\\&=(c-1)(F(c-)-F)\\&={\frac {(c-a)F(a-)+(a-c+bz)F}{1-z}}\\&={\frac {(c-b)F(b-)+(b-c+az)F}{1-z}}\\&=z{\frac {(c-a)(c-b)F(c+)+c(a+b-c)F}{c(1-z)}}\end{aligned}}}
where F = 2 F 1 ( a , b ; c ; z ), F ( a +) = 2 F 1 ( a + 1, b ; c ; z ) , and so on. Repeatedly applying these relations gives a linear relation over C (z) between any three functions of the form
2 F 1 ( a + m , b + n ; c + l ; z ) , {\displaystyle {}_{2}F_{1}(a+m,b+n;c+l;z),}
where m , n , and l are integers. [ 3 ]
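The contiguous relations lend themselves to a simple numerical check; the sketch below, with arbitrarily chosen sample parameters, verifies that three of the expressions listed above for z dF / dz agree.

```python
# Spot-check of three contiguous relations for z dF/dz (sample parameters only).
from scipy.special import hyp2f1

a, b, c, z = 0.4, 1.3, 2.2, 0.35
F = hyp2f1(a, b, c, z)

dF = z * a * b / c * hyp2f1(a + 1, b + 1, c + 1, z)   # z ab/c F(a+, b+, c+)
print(dF, a * (hyp2f1(a + 1, b, c, z) - F))           # a (F(a+) - F)
print(dF, b * (hyp2f1(a, b + 1, c, z) - F))           # b (F(b+) - F)
print(dF, (c - 1) * (hyp2f1(a, b, c - 1, z) - F))     # (c - 1)(F(c-) - F)
```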
Gauss used the contiguous relations to give several ways to write a quotient of two hypergeometric functions as a continued fraction, for example:
2 F 1 ( a + 1 , b ; c + 1 ; z ) 2 F 1 ( a , b ; c ; z ) = 1 1 + ( a − c ) b c ( c + 1 ) z 1 + ( b − c − 1 ) ( a + 1 ) ( c + 1 ) ( c + 2 ) z 1 + ( a − c − 1 ) ( b + 1 ) ( c + 2 ) ( c + 3 ) z 1 + ( b − c − 2 ) ( a + 2 ) ( c + 3 ) ( c + 4 ) z 1 + ⋱ {\displaystyle {\frac {{}_{2}F_{1}(a+1,b;c+1;z)}{{}_{2}F_{1}(a,b;c;z)}}={\cfrac {1}{1+{\cfrac {{\frac {(a-c)b}{c(c+1)}}z}{1+{\cfrac {{\frac {(b-c-1)(a+1)}{(c+1)(c+2)}}z}{1+{\cfrac {{\frac {(a-c-1)(b+1)}{(c+2)(c+3)}}z}{1+{\cfrac {{\frac {(b-c-2)(a+2)}{(c+3)(c+4)}}z}{1+{}\ddots }}}}}}}}}}}
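The continued fraction can be evaluated by truncating at a finite depth. The sketch below is a minimal illustration with sample parameters and an ad hoc truncation depth; the general pattern of the partial numerators is inferred from the four terms displayed above, and the result is compared with the quotient of two hypergeometric values.

```python
# Truncated evaluation of the continued fraction above (sample parameters only).
from scipy.special import hyp2f1

def gauss_cf(a, b, c, z, depth=30):
    def coeff(n):                                # n-th partial numerator, n = 1, 2, 3, ...
        if n % 2 == 1:
            k = (n - 1) // 2
            num = (a - c - k) * (b + k)          # (a-c)b, (a-c-1)(b+1), ...
        else:
            k = n // 2 - 1
            num = (b - c - 1 - k) * (a + 1 + k)  # (b-c-1)(a+1), (b-c-2)(a+2), ...
        return num / ((c + n - 1) * (c + n)) * z
    value = 1.0
    for n in range(depth, 0, -1):                # evaluate from the bottom up
        value = 1.0 + coeff(n) / value
    return 1.0 / value

a, b, c, z = 0.5, 1.1, 2.3, 0.4
print(gauss_cf(a, b, c, z))
print(hyp2f1(a + 1, b, c + 1, z) / hyp2f1(a, b, c, z))   # should agree closely
```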
Transformation formulas relate two hypergeometric functions at different values of the argument z .
Euler's transformation is 2 F 1 ( a , b ; c ; z ) = ( 1 − z ) c − a − b 2 F 1 ( c − a , c − b ; c ; z ) . {\displaystyle {}_{2}F_{1}(a,b;c;z)=(1-z)^{c-a-b}{}_{2}F_{1}(c-a,c-b;c;z).} It follows by combining the two Pfaff transformations 2 F 1 ( a , b ; c ; z ) = ( 1 − z ) − b 2 F 1 ( b , c − a ; c ; z z − 1 ) 2 F 1 ( a , b ; c ; z ) = ( 1 − z ) − a 2 F 1 ( a , c − b ; c ; z z − 1 ) {\displaystyle {\begin{aligned}{}_{2}F_{1}(a,b;c;z)&=(1-z)^{-b}{}_{2}F_{1}\left(b,c-a;c;{\tfrac {z}{z-1}}\right)\\{}_{2}F_{1}(a,b;c;z)&=(1-z)^{-a}{}_{2}F_{1}\left(a,c-b;c;{\tfrac {z}{z-1}}\right)\\\end{aligned}}} which in turn follow from Euler's integral representation. For extension of Euler's first and second transformations, see Rathie & Paris (2007) and Rakha & Rathie (2011) .
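A minimal numerical check of Euler's transformation and one of the Pfaff transformations, at an arbitrary sample point (the values are illustrative only):

```python
# Spot-check of Euler's and Pfaff's transformations (sample parameters only).
from scipy.special import hyp2f1

a, b, c, z = 0.7, 1.4, 3.1, 0.3
print(hyp2f1(a, b, c, z))                                 # 2F1(a, b; c; z)
print((1 - z)**(c - a - b) * hyp2f1(c - a, c - b, c, z))  # Euler's transformation
print((1 - z)**(-a) * hyp2f1(a, c - b, c, z / (z - 1)))   # a Pfaff transformation
```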
It can also be written as linear combination 2 F 1 ( a , b ; c , z ) = Γ ( c ) Γ ( c − a − b ) Γ ( c − a ) Γ ( c − b ) 2 F 1 ( a , b ; a + b + 1 − c ; 1 − z ) + Γ ( c ) Γ ( a + b − c ) Γ ( a ) Γ ( b ) ( 1 − z ) c − a − b 2 F 1 ( c − a , c − b ; 1 + c − a − b ; 1 − z ) . {\displaystyle {\begin{aligned}{}_{2}F_{1}(a,b;c,z)={}&{\frac {\Gamma (c)\Gamma (c-a-b)}{\Gamma (c-a)\Gamma (c-b)}}{}_{2}F_{1}(a,b;a+b+1-c;1-z)\\[6pt]&{}+{\frac {\Gamma (c)\Gamma (a+b-c)}{\Gamma (a)\Gamma (b)}}(1-z)^{c-a-b}{}_{2}F_{1}(c-a,c-b;1+c-a-b;1-z).\end{aligned}}}
If two of the numbers 1 − c , c − 1, a − b , b − a , a + b − c , c − a − b are equal or one of them is 1/2 then there is a quadratic transformation of the hypergeometric function, connecting it to a different value of z related by a quadratic equation. The first examples were given by Kummer (1836) , and a complete list was given by Goursat (1881) . A typical example is
2 F 1 ( a , b ; 2 b ; z ) = ( 1 − z ) − a 2 2 F 1 ( 1 2 a , b − 1 2 a ; b + 1 2 ; z 2 4 z − 4 ) {\displaystyle {}_{2}F_{1}(a,b;2b;z)=(1-z)^{-{\frac {a}{2}}}{}_{2}F_{1}\left({\tfrac {1}{2}}a,b-{\tfrac {1}{2}}a;b+{\tfrac {1}{2}};{\frac {z^{2}}{4z-4}}\right)}
If 1− c , a − b , a + b − c differ by signs or two of them are 1/3 or −1/3 then there is a cubic transformation of the hypergeometric function, connecting it to a different value of z related by a cubic equation. The first examples were given by Goursat (1881) . A typical example is
2 F 1 ( 3 2 a , 1 2 ( 3 a − 1 ) ; a + 1 2 ; − z 2 3 ) = ( 1 + z ) 1 − 3 a 2 F 1 ( a − 1 3 , a ; 2 a ; 2 z ( 3 + z 2 ) ( 1 + z ) − 3 ) {\displaystyle {}_{2}F_{1}\left({\tfrac {3}{2}}a,{\tfrac {1}{2}}(3a-1);a+{\tfrac {1}{2}};-{\tfrac {z^{2}}{3}}\right)=(1+z)^{1-3a}\,{}_{2}F_{1}\left(a-{\tfrac {1}{3}},a;2a;2z(3+z^{2})(1+z)^{-3}\right)}
There are also some transformations of degree 4 and 6. Transformations of other degrees only exist if a , b , and c are certain rational numbers ( Vidunas 2005 ). For example, 2 F 1 ( 1 4 , 3 8 ; 7 8 ; z ) ( z 4 − 60 z 3 + 134 z 2 − 60 z + 1 ) 1 / 16 = 2 F 1 ( 1 48 , 17 48 ; 7 8 ; − 432 z ( z − 1 ) 2 ( z + 1 ) 8 ( z 4 − 60 z 3 + 134 z 2 − 60 z + 1 ) 3 ) . {\displaystyle {}_{2}F_{1}\left({\tfrac {1}{4}},{\tfrac {3}{8}};{\tfrac {7}{8}};z\right)(z^{4}-60z^{3}+134z^{2}-60z+1)^{1/16}={}_{2}F_{1}\left({\tfrac {1}{48}},{\tfrac {17}{48}};{\tfrac {7}{8}};{\tfrac {-432z(z-1)^{2}(z+1)^{8}}{(z^{4}-60z^{3}+134z^{2}-60z+1)^{3}}}\right).}
See Slater (1966 , Appendix III) for a list of summation formulas at special points, most of which also appear in Bailey (1935) . Gessel & Stanton (1982) gives further evaluations at more points. Koepf (1995) shows how most of these identities can be verified by computer algorithms.
Gauss's summation theorem, named for Carl Friedrich Gauss , is the identity
2 F 1 ( a , b ; c ; 1 ) = Γ ( c ) Γ ( c − a − b ) Γ ( c − a ) Γ ( c − b ) , ℜ ( c ) > ℜ ( a + b ) {\displaystyle {}_{2}F_{1}(a,b;c;1)={\frac {\Gamma (c)\Gamma (c-a-b)}{\Gamma (c-a)\Gamma (c-b)}},\qquad \Re (c)>\Re (a+b)}
which follows from Euler's integral formula by putting z = 1. It includes the Vandermonde identity as a special case.
For the special case where a = − m {\displaystyle a=-m} , 2 F 1 ( − m , b ; c ; 1 ) = ( c − b ) m ( c ) m {\displaystyle {}_{2}F_{1}(-m,b;c;1)={\frac {(c-b)_{m}}{(c)_{m}}}}
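A quick numerical check of Gauss's summation theorem above, for one arbitrary parameter choice with Re( c ) > Re( a + b ) (the values are illustrative; SciPy evaluates 2 F 1 at z = 1 directly when the series converges there):

```python
# Spot-check of Gauss's summation theorem (sample parameters with c - a - b > 0).
from scipy.special import hyp2f1, gamma

a, b, c = 0.3, 0.5, 2.1
print(hyp2f1(a, b, c, 1.0))
print(gamma(c) * gamma(c - a - b) / (gamma(c - a) * gamma(c - b)))
```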
Dougall's formula generalizes this to the bilateral hypergeometric series at z = 1.
There are many cases where hypergeometric functions can be evaluated at z = −1 by using a quadratic transformation to change z = −1 to z = 1 and then using Gauss's theorem to evaluate the result. A typical example is Kummer's theorem, named for Ernst Kummer :
2 F 1 ( a , b ; 1 + a − b ; − 1 ) = Γ ( 1 + a − b ) Γ ( 1 + 1 2 a ) Γ ( 1 + a ) Γ ( 1 + 1 2 a − b ) {\displaystyle {}_{2}F_{1}(a,b;1+a-b;-1)={\frac {\Gamma (1+a-b)\Gamma (1+{\tfrac {1}{2}}a)}{\Gamma (1+a)\Gamma (1+{\tfrac {1}{2}}a-b)}}}
which follows from Kummer's quadratic transformations
2 F 1 ( a , b ; 1 + a − b ; z ) = ( 1 − z ) − a 2 F 1 ( a 2 , 1 + a 2 − b ; 1 + a − b ; − 4 z ( 1 − z ) 2 ) = ( 1 + z ) − a 2 F 1 ( a 2 , a + 1 2 ; 1 + a − b ; 4 z ( 1 + z ) 2 ) {\displaystyle {\begin{aligned}_{2}F_{1}(a,b;1+a-b;z)&=(1-z)^{-a}\;_{2}F_{1}\left({\frac {a}{2}},{\frac {1+a}{2}}-b;1+a-b;-{\frac {4z}{(1-z)^{2}}}\right)\\&=(1+z)^{-a}\,_{2}F_{1}\left({\frac {a}{2}},{\frac {a+1}{2}};1+a-b;{\frac {4z}{(1+z)^{2}}}\right)\end{aligned}}}
and Gauss's theorem by putting z = −1 in the first identity. For generalization of Kummer's summation, see Lavoie, Grondin & Rathie (1996) .
Gauss's second summation theorem is
2 F 1 ( a , b ; 1 2 ( 1 + a + b ) ; 1 2 ) = Γ ( 1 2 ) Γ ( 1 2 ( 1 + a + b ) ) Γ ( 1 2 ( 1 + a ) ) Γ ( 1 2 ( 1 + b ) ) . {\displaystyle _{2}F_{1}\left(a,b;{\tfrac {1}{2}}\left(1+a+b\right);{\tfrac {1}{2}}\right)={\frac {\Gamma ({\tfrac {1}{2}})\Gamma ({\tfrac {1}{2}}\left(1+a+b\right))}{\Gamma ({\tfrac {1}{2}}\left(1+a)\right)\Gamma ({\tfrac {1}{2}}\left(1+b\right))}}.}
Bailey's theorem is
2 F 1 ( a , 1 − a ; c ; 1 2 ) = Γ ( 1 2 c ) Γ ( 1 2 ( 1 + c ) ) Γ ( 1 2 ( c + a ) ) Γ ( 1 2 ( 1 + c − a ) ) . {\displaystyle _{2}F_{1}\left(a,1-a;c;{\tfrac {1}{2}}\right)={\frac {\Gamma ({\tfrac {1}{2}}c)\Gamma ({\tfrac {1}{2}}\left(1+c\right))}{\Gamma ({\tfrac {1}{2}}\left(c+a\right))\Gamma ({\tfrac {1}{2}}\left(1+c-a\right))}}.}
For generalizations of Gauss's second summation theorem and Bailey's summation theorem, see Lavoie, Grondin & Rathie (1996) .
There are many other formulas giving the hypergeometric function as an algebraic number at special rational values of the parameters, some of which are listed in Gessel & Stanton (1982) and Koepf (1995) . Some typical examples are given by
2 F 1 ( a , − a ; 1 2 ; x 2 4 ( x − 1 ) ) = ( 1 − x ) a + ( 1 − x ) − a 2 , {\displaystyle {}_{2}F_{1}\left(a,-a;{\tfrac {1}{2}};{\tfrac {x^{2}}{4(x-1)}}\right)={\frac {(1-x)^{a}+(1-x)^{-a}}{2}},}
which can be restated as
T a ( cos x ) = 2 F 1 ( a , − a ; 1 2 ; 1 2 ( 1 − cos x ) ) = cos ( a x ) {\displaystyle T_{a}(\cos x)={}_{2}F_{1}\left(a,-a;{\tfrac {1}{2}};{\tfrac {1}{2}}(1-\cos x)\right)=\cos(ax)}
whenever −π < x < π and T is the (generalized) Chebyshev polynomial . | https://en.wikipedia.org/wiki/Hypergeometric_function |
In mathematics , hypergeometric identities are equalities involving sums over hypergeometric terms, i.e. the coefficients occurring in hypergeometric series . These identities occur frequently in solutions to combinatorial problems, and also in the analysis of algorithms .
These identities were traditionally found 'by hand'. There exist now several algorithms which can find and prove all hypergeometric identities.
There are two definitions of hypergeometric terms, both used in different cases as explained below. See also hypergeometric series .
A term t k is a hypergeometric term if
t k + 1 t k {\displaystyle {\frac {t_{k+1}}{t_{k}}}}
is a rational function in k .
A term F ( n , k ) is a hypergeometric term if
F ( n , k + 1 ) F ( n , k ) {\displaystyle {\frac {F(n,k+1)}{F(n,k)}}}
is a rational function in k .
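As a small illustration of the first definition, the SymPy sketch below checks that the term t k = binomial( n , k ) is hypergeometric by simplifying the ratio t k+1 / t k to a rational function of k (the example term is an arbitrary illustrative choice).

```python
# Checking that binomial(n, k) is a hypergeometric term in k (illustrative example).
import sympy as sp

n, k = sp.symbols('n k', integer=True, positive=True)
t = sp.binomial(n, k)

ratio = sp.combsimp(t.subs(k, k + 1) / t)     # t_{k+1} / t_k
print(ratio)                                  # a rational function of k: (n - k)/(k + 1)
print(ratio.is_rational_function(k))          # True
```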
There exist two types of sums over hypergeometric terms, the definite and indefinite sums. A definite sum is of the form
∑ k t k . {\displaystyle \sum _{k}t_{k}.}
The indefinite sum is of the form
∑ k = 0 n t k . {\displaystyle \sum _{k=0}^{n}t_{k}.}
Although in the past proofs have been found for many specific identities, there exist several general algorithms to find and prove identities. These algorithms first find a simple expression for a sum over hypergeometric terms and then provide a certificate which anyone can use to check and prove the correctness of the identity.
For each of the hypergeometric sum types there exist one or more methods to find a simple expression . These methods also provide a certificate that anyone can use to check and prove the correctness of the identity: Gosper's algorithm handles indefinite sums, while Sister Celine's method and Zeilberger's algorithm of creative telescoping handle definite sums.
The book A = B by Marko Petkovšek , Herbert Wilf and Doron Zeilberger describes the three main approaches mentioned above. | https://en.wikipedia.org/wiki/Hypergeometric_identity |
In mathematics , a hypergraph is a generalization of a graph in which an edge can join any number of vertices . In contrast, in an ordinary graph, an edge connects exactly two vertices.
Formally, a directed hypergraph is a pair ( X , E ) {\displaystyle (X,E)} , where X {\displaystyle X} is a set of elements called nodes , vertices , points , or elements and E {\displaystyle E} is a set of pairs of subsets of X {\displaystyle X} . Each of these pairs ( D , C ) ∈ E {\displaystyle (D,C)\in E} is called an edge or hyperedge ; the vertex subset D {\displaystyle D} is known as its tail or domain , and C {\displaystyle C} as its head or codomain .
The order of a hypergraph ( X , E ) {\displaystyle (X,E)} is the number of vertices in X {\displaystyle X} . The size of the hypergraph is the number of edges in E {\displaystyle E} . The order of an edge e = ( D , C ) {\displaystyle e=(D,C)} in a directed hypergraph is | e | = ( | D | , | C | ) {\displaystyle |e|=(|D|,|C|)} : that is, the number of vertices in its tail followed by the number of vertices in its head.
The definition above generalizes from a directed graph to a directed hypergraph by defining the head or tail of each edge as a set of vertices ( C ⊆ X {\displaystyle C\subseteq X} or D ⊆ X {\displaystyle D\subseteq X} ) rather than as a single vertex. A graph is then the special case where each of these sets contains only one element. Hence any standard graph theoretic concept that is independent of the edge orders | e | {\displaystyle |e|} will generalize to hypergraph theory.
An undirected hypergraph ( X , E ) {\displaystyle (X,E)} is an undirected graph whose edges connect not just two vertices, but an arbitrary number. [ 2 ] An undirected hypergraph is also called a set system or a family of sets drawn from the universal set .
Hypergraphs can be viewed as incidence structures . In particular, there is a bipartite "incidence graph" or " Levi graph " corresponding to every hypergraph, and conversely, every bipartite graph can be regarded as the incidence graph of a hypergraph when it is 2-colored and it is indicated which color class corresponds to hypergraph vertices and which to hypergraph edges.
Hypergraphs have many other names. In computational geometry , an undirected hypergraph may sometimes be called a range space and then the hyperedges are called ranges . [ 3 ] In cooperative game theory, hypergraphs are called simple games (voting games); this notion is applied to solve problems in social choice theory . In some literature edges are referred to as hyperlinks or connectors . [ 4 ]
The collection of hypergraphs is a category with hypergraph homomorphisms as morphisms .
Undirected hypergraphs are useful in modelling such things as satisfiability problems, [ 5 ] databases, [ 6 ] machine learning, [ 7 ] and Steiner tree problems . [ 8 ] They have been extensively used in machine learning tasks as the data model and classifier regularization (mathematics) . [ 9 ] The applications include recommender system (communities as hyperedges), [ 10 ] [ 11 ] image retrieval (correlations as hyperedges), [ 12 ] and bioinformatics (biochemical interactions as hyperedges). [ 13 ] Representative hypergraph learning techniques include hypergraph spectral clustering that extends the spectral graph theory with hypergraph Laplacian, [ 14 ] and hypergraph semi-supervised learning that introduces extra hypergraph structural cost to restrict the learning results. [ 15 ] For large scale hypergraphs, a distributed framework [ 7 ] built using Apache Spark is also available. It can be desirable to study hypergraphs where all hyperedges have the same cardinality; a k-uniform hypergraph is a hypergraph such that all its hyperedges have size k . (In other words, one such hypergraph is a collection of sets, each such set a hyperedge connecting k nodes.) So a 2-uniform hypergraph is a graph, a 3-uniform hypergraph is a collection of unordered triples, and so on.
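In code, an undirected hypergraph is often represented simply as a vertex set together with a family of vertex subsets. The minimal sketch below (all names illustrative) uses Python frozensets and includes a helper that tests the k-uniformity property just described.

```python
# A minimal undirected hypergraph as a vertex set plus a family of frozensets.
def is_k_uniform(hyperedges, k):
    """True when every hyperedge contains exactly k vertices."""
    return all(len(e) == k for e in hyperedges)

X = {1, 2, 3, 4, 5}
E = [frozenset({1, 2, 3}), frozenset({2, 3, 4}), frozenset({3, 4, 5})]

print(is_k_uniform(E, 3))   # True: a 3-uniform hypergraph
print(is_k_uniform(E, 2))   # False: not an ordinary graph
```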
Directed hypergraphs can be used to model things including telephony applications, [ 16 ] detecting money laundering , [ 17 ] operations research, [ 18 ] and transportation planning. They can also be used to model Horn-satisfiability . [ 19 ]
Many theorems and concepts involving graphs also hold for hypergraphs, in particular:
In directed hypergraphs: transitive closure , and shortest path problems. [ 18 ]
Although hypergraphs are more difficult to draw on paper than graphs, several researchers have studied methods for the visualization of hypergraphs.
In one possible visual representation for hypergraphs, similar to the standard graph drawing style in which curves in the plane are used to depict graph edges, a hypergraph's vertices are depicted as points, disks, or boxes, and its hyperedges are depicted as trees that have the vertices as their leaves. [ 20 ] [ 21 ] If the vertices are represented as points, the hyperedges may also be shown as smooth curves that connect sets of points, or as simple closed curves that enclose sets of points. [ 22 ] [ 23 ] [ 24 ]
In another style of hypergraph visualization, the subdivision model of hypergraph drawing, [ 25 ] the plane is subdivided into regions, each of which represents a single vertex of the hypergraph. The hyperedges of the hypergraph are represented by contiguous subsets of these regions, which may be indicated by coloring, by drawing outlines around them, or both. An order- n Venn diagram , for instance, may be viewed as a subdivision drawing of a hypergraph with n hyperedges (the curves defining the diagram) and 2 n − 1 vertices (represented by the regions into which these curves subdivide the plane). In contrast with the polynomial-time recognition of planar graphs , it is NP-complete to determine whether a hypergraph has a planar subdivision drawing, [ 26 ] but the existence of a drawing of this type may be tested efficiently when the adjacency pattern of the regions is constrained to be a path, cycle, or tree. [ 27 ]
An alternative representation of the hypergraph called PAOH [ 1 ] is shown in the figure on top of this article. Edges are vertical lines connecting vertices. Vertices are aligned on the left. The legend on the right shows the names of the edges. It has been designed for dynamic hypergraphs but can be used for simple hypergraphs as well.
Classic hypergraph coloring is assigning one of the colors from set { 1 , 2 , 3 , . . . , λ } {\displaystyle \{1,2,3,...,\lambda \}} to every vertex of a hypergraph in such a way that each hyperedge contains at least two vertices of distinct colors. In other words, there must be no monochromatic hyperedge with cardinality at least 2. In this sense it is a direct generalization of graph coloring. The minimum number of used distinct colors over all colorings is called the chromatic number of a hypergraph.
Hypergraphs for which there exists a coloring using up to k colors are referred to as k-colorable . The 2-colorable hypergraphs are exactly the bipartite ones.
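Checking whether a given assignment of colors is a proper classic hypergraph coloring is straightforward: every hyperedge with at least two vertices must receive at least two distinct colors. The sketch below (illustrative edges and colorings) does exactly that.

```python
# Validity check for a classic hypergraph coloring (illustrative example).
def is_proper_coloring(hyperedges, coloring):
    for e in hyperedges:
        if len(e) >= 2 and len({coloring[v] for v in e}) == 1:
            return False                      # monochromatic hyperedge of size >= 2
    return True

E = [frozenset({1, 2, 3}), frozenset({3, 4}), frozenset({1, 4, 5})]
print(is_proper_coloring(E, {1: 1, 2: 2, 3: 1, 4: 2, 5: 1}))   # True: a proper 2-coloring
print(is_proper_coloring(E, {1: 1, 2: 1, 3: 1, 4: 2, 5: 1}))   # False: {1,2,3} is monochromatic
```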
There are many generalizations of classic hypergraph coloring. One of them is the so-called mixed hypergraph coloring, when monochromatic edges are allowed. Some mixed hypergraphs are uncolorable for any number of colors. A general criterion for uncolorability is unknown. When a mixed hypergraph is colorable, then the minimum and maximum number of used colors are called the lower and upper chromatic numbers respectively. [ 28 ]
A hypergraph can have various properties, such as:
Because hypergraph links can have any cardinality, there are several notions of the concept of a subgraph, called subhypergraphs , partial hypergraphs and section hypergraphs .
Let H = ( X , E ) {\displaystyle H=(X,E)} be the hypergraph consisting of vertices
and having edge set
where I v {\displaystyle I_{v}} and I e {\displaystyle I_{e}} are the index sets of the vertices and edges respectively.
A subhypergraph is a hypergraph with some vertices removed. Formally, the subhypergraph H A {\displaystyle H_{A}} induced by A ⊆ X {\displaystyle A\subseteq X} is defined as
An alternative term is the restriction of H to A . [ 30 ] : 468
An extension of a subhypergraph is a hypergraph where each hyperedge of H {\displaystyle H} which is partially contained in the subhypergraph H A {\displaystyle H_{A}} is fully contained in the extension E x ( H A ) {\displaystyle Ex(H_{A})} . Formally
The partial hypergraph is a hypergraph with some edges removed. [ 30 ] : 468 Given a subset J ⊂ I e {\displaystyle J\subset I_{e}} of the edge index set, the partial hypergraph generated by J {\displaystyle J} is the hypergraph
Given a subset A ⊆ X {\displaystyle A\subseteq X} , the section hypergraph is the partial hypergraph
The dual H ∗ {\displaystyle H^{*}} of H {\displaystyle H} is a hypergraph whose vertices and edges are interchanged, so that the vertices are given by { e i } {\displaystyle \lbrace e_{i}\rbrace } and whose edges are given by { X m } {\displaystyle \lbrace X_{m}\rbrace } where
When a notion of equality is properly defined, as done below, the operation of taking the dual of a hypergraph is an involution , i.e.,
A connected graph G with the same vertex set as a connected hypergraph H is a host graph for H if every hyperedge of H induces a connected subgraph in G . For a disconnected hypergraph H , G is a host graph if there is a bijection between the connected components of G and of H , such that each connected component G' of G is a host of the corresponding H' .
The 2-section (or clique graph , representing graph , primal graph , Gaifman graph ) of a hypergraph is the graph with the same vertices of the hypergraph, and edges between all pairs of vertices contained in the same hyperedge.
Let V = { v 1 , v 2 , … , v n } {\displaystyle V=\{v_{1},v_{2},~\ldots ,~v_{n}\}} and E = { e 1 , e 2 , … e m } {\displaystyle E=\{e_{1},e_{2},~\ldots ~e_{m}\}} . Every hypergraph has an n × m {\displaystyle n\times m} incidence matrix .
For an undirected hypergraph, I = ( b i j ) {\displaystyle I=(b_{ij})} where b i j = 1 {\displaystyle b_{ij}=1} if v i ∈ e j {\displaystyle v_{i}\in e_{j}} and b i j = 0 {\displaystyle b_{ij}=0} otherwise.
The transpose I t {\displaystyle I^{t}} of the incidence matrix defines a hypergraph H ∗ = ( V ∗ , E ∗ ) {\displaystyle H^{*}=(V^{*},\ E^{*})} called the dual of H {\displaystyle H} , where V ∗ {\displaystyle V^{*}} is an m -element set and E ∗ {\displaystyle E^{*}} is an n -element set of subsets of V ∗ {\displaystyle V^{*}} . For v j ∗ ∈ V ∗ {\displaystyle v_{j}^{*}\in V^{*}} and e i ∗ ∈ E ∗ , v j ∗ ∈ e i ∗ {\displaystyle e_{i}^{*}\in E^{*},~v_{j}^{*}\in e_{i}^{*}} if and only if b i j = 1 {\displaystyle b_{ij}=1} .
For a directed hypergraph, the heads and tails of each hyperedge e j {\displaystyle e_{j}} are denoted by H ( e j ) {\displaystyle H(e_{j})} and T ( e j ) {\displaystyle T(e_{j})} respectively. [ 19 ] I = ( b i j ) {\displaystyle I=(b_{ij})} where
A hypergraph H may be represented by a bipartite graph BG as follows: the sets X and E are the parts of BG , and ( x 1 , e 1 ) are connected with an edge if and only if vertex x 1 is contained in edge e 1 in H .
Conversely, any bipartite graph with fixed parts and no unconnected nodes in the second part represents some hypergraph in the manner described above. This bipartite graph is also called incidence graph .
A parallel for the adjacency matrix of a hypergraph can be drawn from the adjacency matrix of a graph. In the case of a graph, the adjacency matrix is a square matrix which indicates whether pairs of vertices are adjacent . Likewise, we can define the adjacency matrix A = ( a i j ) {\displaystyle A=(a_{ij})} for a hypergraph in general where the hyperedges e k ≤ m {\displaystyle e_{k\leq m}} have real weights w e k ∈ R {\displaystyle w_{e_{k}}\in \mathbb {R} } with
a i j = { w e k i f ( v i , v j ) ∈ E 0 o t h e r w i s e . {\displaystyle a_{ij}=\left\{{\begin{matrix}w_{e_{k}}&\mathrm {if} ~(v_{i},v_{j})\in E\\0&\mathrm {otherwise} .\end{matrix}}\right.}
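The incidence matrix and the duality described above are easy to make concrete. The NumPy sketch below builds the vertex-by-edge incidence matrix of a small illustrative hypergraph and reads its transpose as the incidence matrix of the dual.

```python
# Incidence matrix of a small hypergraph and of its dual (illustrative example).
import numpy as np

vertices = ['v1', 'v2', 'v3', 'v4']
edges = [{'v1', 'v2'}, {'v2', 'v3', 'v4'}, {'v1', 'v4'}]

I = np.array([[1 if v in e else 0 for e in edges] for v in vertices])
print(I)        # rows indexed by vertices, columns by hyperedges
print(I.T)      # incidence matrix of the dual hypergraph H*
```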
In contrast with ordinary undirected graphs, for which there is a single natural notion of cycles and acyclic graphs, hypergraphs have multiple natural non-equivalent definitions of cycles, each of which collapses to the ordinary notion of cycle when the graph case is considered.
A first notion of cycle was introduced by Claude Berge [ 31 ] . A Berge cycle in a hypergraph is an alternating sequence of distinct vertices and edges ( v 1 , e 1 , … , v n , e n ) {\displaystyle (v_{1},e_{1},\dots ,v_{n},e_{n})} where v i , v i + 1 {\displaystyle v_{i},v_{i+1}} are both in e i {\displaystyle e_{i}} for each i ∈ [ n ] {\displaystyle i\in [n]} (with indices taken modulo n {\displaystyle n} ).
Under this definition a hypergraph is acyclic if and only if its incidence graph (the bipartite graph defined above) is acyclic. Thus Berge-cyclicity can obviously be tested in linear time by an exploration of the incidence graph.
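Since Berge-acyclicity is equivalent to acyclicity of the incidence graph, it can be tested with any cycle-detection routine on that bipartite graph. The sketch below uses a small union-find pass over the (vertex, hyperedge) incidences; function and variable names are illustrative.

```python
# Berge-acyclicity test via cycle detection in the bipartite incidence graph.
def is_berge_acyclic(vertices, hyperedges):
    parent = {('v', v): ('v', v) for v in vertices}
    parent.update({('e', i): ('e', i) for i in range(len(hyperedges))})

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]     # path halving
            x = parent[x]
        return x

    for i, e in enumerate(hyperedges):
        for v in e:
            rv, re = find(('v', v)), find(('e', i))
            if rv == re:                      # this incidence closes a cycle
                return False
            parent[rv] = re
    return True

print(is_berge_acyclic({1, 2, 3, 4}, [{1, 2}, {2, 3, 4}]))    # True
print(is_berge_acyclic({1, 2, 3}, [{1, 2}, {2, 3}, {1, 3}]))  # False: contains a Berge cycle
```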
This definition is particularly used for k {\displaystyle k} -uniform hypergraphs, where all hyperedges are of size k {\displaystyle k} . A tight cycle of length n {\displaystyle n} in a hypergraph H {\displaystyle H} is a sequence of distinct vertices v 1 , … , v n {\displaystyle v_{1},\dots ,v_{n}} such that every consecutive k {\displaystyle k} -tuple { v i , … , v i + k − 1 } {\displaystyle \{v_{i},\dots ,v_{i+k-1}\}} (indices modulo n {\displaystyle n} ) forms a hyperedge in H {\displaystyle H} .
This notion was introduced by Katona and Kierstead [ 32 ] and has since garnered considerable attention, particularly in the study of Hamiltonicity in extremal combinatorics. [ 33 ] [ 34 ]
Rödl, Szemerédi, and Ruciński showed that every n {\displaystyle n} -vertex k {\displaystyle k} -uniform hypergraph H {\displaystyle H} in which every ( k − 1 ) {\displaystyle (k-1)} -subset of vertices is contained in at least n / 2 + o ( n ) {\displaystyle n/2+o(n)} hyperedges contains a Hamilton cycle.
This corresponds to an approximate hypergraph-extension of the celebrated Dirac's theorem about Hamilton cycles in graphs. [ 35 ]
The maximum number of hyperedges in a (tightly) acyclic k {\displaystyle k} -uniform hypergraph remains unknown. The best known bounds, obtained by Sudakov and Tomon [ 36 ] , show that every n {\displaystyle n} -vertex k {\displaystyle k} -uniform hypergraph with at least n k − 1 + o ( 1 ) {\displaystyle n^{k-1+o(1)}} hyperedges must contain a tight cycle. This bound is optimal up to the o ( 1 ) {\displaystyle o(1)} error term.
An ℓ {\displaystyle \ell } -cycle generalizes the notion of a tight cycle.
It consists in a sequence of vertices v 1 , … , v n {\displaystyle v_{1},\dots ,v_{n}} and hyperedges e 1 , … , e t {\displaystyle e_{1},\dots ,e_{t}} where each e i {\displaystyle e_{i}} consists of k {\displaystyle k} consecutive vertices in the sequence and | e i ∩ e i + 1 | = ℓ {\displaystyle |e_{i}\cap e_{i+1}|=\ell } for every 1 ≤ i ≤ t {\displaystyle 1\leq i\leq t} . Since every edge of the ℓ {\displaystyle \ell } -cycle contains exactly k − ℓ {\displaystyle k-\ell } vertices which are not contained in the previous edge, n {\displaystyle n} must be divisible by k − ℓ {\displaystyle k-\ell } . Note that ℓ = k − 1 {\displaystyle \ell =k-1} recovers the definition of a tight cycle.
The definition of Berge-acyclicity might seem to be very restrictive: for instance, if a hypergraph has some pair v ≠ v ′ {\displaystyle v\neq v'} of vertices and some pair f ≠ f ′ {\displaystyle f\neq f'} of hyperedges such that v , v ′ ∈ f {\displaystyle v,v'\in f} and v , v ′ ∈ f ′ {\displaystyle v,v'\in f'} , then it is Berge-cyclic.
We can define a weaker notion of hypergraph acyclicity, [ 6 ] later termed α-acyclicity. This notion of acyclicity is equivalent to the hypergraph being conformal (every clique of the primal graph is covered by some hyperedge) and its primal graph being chordal ; it is also equivalent to reducibility to the empty graph through the GYO algorithm [ 37 ] [ 38 ] (also known as Graham's algorithm), a confluent iterative process which removes hyperedges using a generalized definition of ears . In the domain of database theory , it is known that a database schema enjoys certain desirable properties if its underlying hypergraph is α-acyclic. [ 39 ] Besides, α-acyclicity is also related to the expressiveness of the guarded fragment of first-order logic .
We can test in linear time if a hypergraph is α-acyclic. [ 40 ]
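A straightforward (quadratic-time, not the linear-time) rendition of the GYO reduction mentioned above is sketched below: vertices lying in at most one remaining hyperedge and hyperedges contained in another hyperedge are removed until nothing changes, and the hypergraph is α-acyclic exactly when every hyperedge disappears. All names and examples are illustrative.

```python
# GYO (Graham) reduction: a simple alpha-acyclicity test (not the linear-time version).
def is_alpha_acyclic(hyperedges):
    edges = [set(e) for e in hyperedges]
    changed = True
    while changed:
        changed = False
        for e in edges:                        # drop vertices in at most one hyperedge
            lonely = {v for v in e if sum(v in f for f in edges) <= 1}
            if lonely:
                e -= lonely
                changed = True
        for i, e in enumerate(edges):          # drop empty or covered hyperedges
            if not e or any(i != j and e <= f for j, f in enumerate(edges)):
                del edges[i]
                changed = True
                break
    return not edges

print(is_alpha_acyclic([{1, 2, 3}, {2, 3, 4}, {3, 4, 5}]))   # True
print(is_alpha_acyclic([{1, 2}, {2, 3}, {1, 3}]))            # False: a triangle
```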
Note that α-acyclicity has the counter-intuitive property that adding hyperedges to an α-cyclic hypergraph may make it α-acyclic (for instance, adding a hyperedge containing all vertices of the hypergraph will always make it α-acyclic). Motivated in part by this perceived shortcoming, Ronald Fagin [ 41 ] defined the stronger notions of β-acyclicity and γ-acyclicity. We can state β-acyclicity as the requirement that all subhypergraphs of the hypergraph are α-acyclic, which is equivalent [ 41 ] to an earlier definition by Graham. [ 38 ] The notion of γ-acyclicity is a more restrictive condition which is equivalent to several desirable properties of database schemas and is related to Bachman diagrams . Both β-acyclicity and γ-acyclicity can be tested in polynomial time .
Those four notions of acyclicity are comparable: γ-acyclicity which implies β-acyclicity which implies α-acyclicity. Moreover, Berge-acyclicity implies all of them. None of the reverse implications hold including the Berge one. In other words, these four notions are different. [ 41 ]
A hypergraph homomorphism is a map from the vertex set of one hypergraph to another such that each edge maps to one other edge.
A hypergraph H = ( X , E ) {\displaystyle H=(X,E)} is isomorphic to a hypergraph G = ( Y , F ) {\displaystyle G=(Y,F)} , written as H ≃ G {\displaystyle H\simeq G} if there exists a bijection
and a permutation π {\displaystyle \pi } of I {\displaystyle I} such that
The bijection ϕ {\displaystyle \phi } is then called the isomorphism of the graphs. Note that
When the edges of a hypergraph are explicitly labeled, one has the additional notion of strong isomorphism . One says that H {\displaystyle H} is strongly isomorphic to G {\displaystyle G} if the permutation is the identity. One then writes H ≅ G {\displaystyle H\cong G} . Note that all strongly isomorphic graphs are isomorphic, but not vice versa.
When the vertices of a hypergraph are explicitly labeled, one has the notions of equivalence , and also of equality . One says that H {\displaystyle H} is equivalent to G {\displaystyle G} , and writes H ≡ G {\displaystyle H\equiv G} if the isomorphism ϕ {\displaystyle \phi } has
and
Note that
If, in addition, the permutation π {\displaystyle \pi } is the identity, one says that H {\displaystyle H} equals G {\displaystyle G} , and writes H = G {\displaystyle H=G} . Note that, with this definition of equality, graphs are self-dual:
A hypergraph automorphism is an isomorphism from a vertex set into itself, that is a relabeling of vertices. The set of automorphisms of a hypergraph H (= ( X , E )) is a group under composition, called the automorphism group of the hypergraph and written Aut( H ).
Consider the hypergraph H {\displaystyle H} with edges
and
Then clearly H {\displaystyle H} and G {\displaystyle G} are isomorphic (with ϕ ( a ) = α {\displaystyle \phi (a)=\alpha } , etc. ), but they are not strongly isomorphic. So, for example, in H {\displaystyle H} , vertex a {\displaystyle a} meets edges 1, 4 and 6, so that,
In graph G {\displaystyle G} , there does not exist any vertex that meets edges 1, 4 and 6:
In this example, H {\displaystyle H} and G {\displaystyle G} are equivalent, H ≡ G {\displaystyle H\equiv G} , and the duals are strongly isomorphic: H ∗ ≅ G ∗ {\displaystyle H^{*}\cong G^{*}} .
The rank r ( H ) {\displaystyle r(H)} of a hypergraph H {\displaystyle H} is the maximum cardinality of any of the edges in the hypergraph. If all edges have the same cardinality k , the hypergraph is said to be uniform or k-uniform , or is called a k-hypergraph . A graph is just a 2-uniform hypergraph.
The degree d(v) of a vertex v is the number of edges that contain it. H is k-regular if every vertex has degree k .
The dual of a uniform hypergraph is regular and vice versa.
Two vertices x and y of H are called symmetric if there exists an automorphism such that ϕ ( x ) = y {\displaystyle \phi (x)=y} . Two edges e i {\displaystyle e_{i}} and e j {\displaystyle e_{j}} are said to be symmetric if there exists an automorphism such that ϕ ( e i ) = e j {\displaystyle \phi (e_{i})=e_{j}} .
A hypergraph is said to be vertex-transitive (or vertex-symmetric ) if all of its vertices are symmetric. Similarly, a hypergraph is edge-transitive if all edges are symmetric. If a hypergraph is both edge- and vertex-symmetric, then the hypergraph is simply transitive .
Because of hypergraph duality, the study of edge-transitivity is identical to the study of vertex-transitivity.
A partition theorem due to E. Dauber [ 42 ] states that, for an edge-transitive hypergraph H = ( X , E ) {\displaystyle H=(X,E)} , there exists a partition
of the vertex set X {\displaystyle X} such that the subhypergraph H X k {\displaystyle H_{X_{k}}} generated by X k {\displaystyle X_{k}} is transitive for each 1 ≤ k ≤ K {\displaystyle 1\leq k\leq K} , and such that
where r ( H ) {\displaystyle r(H)} is the rank of H .
As a corollary, an edge-transitive hypergraph that is not vertex-transitive is bicolorable.
Graph partitioning (and in particular, hypergraph partitioning) has many applications to IC design [ 43 ] and parallel computing . [ 44 ] [ 45 ] [ 46 ] Efficient and scalable hypergraph partitioning algorithms are also important for processing large scale hypergraphs in machine learning tasks. [ 7 ]
One possible generalization of a hypergraph is to allow edges to point at other edges. There are two variations of this generalization. In one, the edges consist not only of a set of vertices, but may also contain subsets of vertices, subsets of subsets of vertices and so on ad infinitum . In essence, every edge is just an internal node of a tree or directed acyclic graph , and vertices are the leaf nodes. A hypergraph is then just a collection of trees with common, shared nodes (that is, a given internal node or leaf may occur in several different trees). Conversely, every collection of trees can be understood as this generalized hypergraph. Since trees are widely used throughout computer science and many other branches of mathematics, one could say that hypergraphs appear naturally as well. So, for example, this generalization arises naturally as a model of term algebra ; edges correspond to terms and vertices correspond to constants or variables.
For such a hypergraph, set membership then provides an ordering, but the ordering is neither a partial order nor a preorder , since it is not transitive. The graph corresponding to the Levi graph of this generalization is a directed acyclic graph . Consider, for example, the generalized hypergraph whose vertex set is V = { a , b } {\displaystyle V=\{a,b\}} and whose edges are e 1 = { a , b } {\displaystyle e_{1}=\{a,b\}} and e 2 = { a , e 1 } {\displaystyle e_{2}=\{a,e_{1}\}} . Then, although b ∈ e 1 {\displaystyle b\in e_{1}} and e 1 ∈ e 2 {\displaystyle e_{1}\in e_{2}} , it is not true that b ∈ e 2 {\displaystyle b\in e_{2}} . However, the transitive closure of set membership for such hypergraphs does induce a partial order , and "flattens" the hypergraph into a partially ordered set .
Alternately, edges can be allowed to point at other edges, irrespective of the requirement that the edges be ordered as directed, acyclic graphs. This allows graphs with edge-loops, which need not contain vertices at all. For example, consider the generalized hypergraph consisting of two edges e 1 {\displaystyle e_{1}} and e 2 {\displaystyle e_{2}} , and zero vertices, so that e 1 = { e 2 } {\displaystyle e_{1}=\{e_{2}\}} and e 2 = { e 1 } {\displaystyle e_{2}=\{e_{1}\}} . As this loop is infinitely recursive, sets that are the edges violate the axiom of foundation . In particular, there is no transitive closure of set membership for such hypergraphs. Although such structures may seem strange at first, they can be readily understood by noting that the equivalent generalization of their Levi graph is no longer bipartite , but is rather just some general directed graph .
The generalized incidence matrix for such hypergraphs is, by definition, a square matrix, of a rank equal to the total number of vertices plus edges. Thus, for the above example, the incidence matrix is simply | https://en.wikipedia.org/wiki/Hypergraph |
In mathematics, the hypergraph regularity method is a powerful tool in extremal graph theory that refers to the combined application of the hypergraph regularity lemma and the associated counting lemma. It is a generalization of the graph regularity method, which refers to the use of Szemerédi's regularity and counting lemmas.
Very informally, the hypergraph regularity lemma decomposes any given k {\displaystyle k} -uniform hypergraph into a random-like object with bounded parts (with appropriate notions of boundedness and randomness) that is usually easier to work with. On the other hand, the hypergraph counting lemma estimates the number of hypergraphs of a given isomorphism class in some collections of the random-like parts. This is an extension of Szemerédi's regularity lemma, which partitions any given graph into a bounded number of parts such that edges between the parts behave almost randomly. Similarly, the hypergraph counting lemma is a generalization of the graph counting lemma, which estimates the number of copies of a fixed graph as a subgraph of a larger graph.
There are several distinct formulations of the method, all of which imply the hypergraph removal lemma and a number of other powerful results, such as Szemerédi's theorem , as well as some of its multidimensional extensions. The following formulations are due to V. Rödl , B. Nagle, J. Skokan, M. Schacht , and Y. Kohayakawa , [ 1 ] for alternative versions see Tao (2006), [ 2 ] and Gowers (2007). [ 3 ]
In order to state the hypergraph regularity and counting lemmas formally, we need to define several rather technical terms to formalize appropriate notions of pseudo-randomness (random-likeness) and boundedness, as well as to describe the random-like blocks and partitions.
Notation
The following defines an important notion of relative density, which roughly describes the fraction of j {\displaystyle j} -edges spanned by ( j − 1 ) {\displaystyle (j-1)} -edges that are in the hypergraph. For example, when j = 3 {\displaystyle j=3} , the quantity d ( G ( 3 ) | Q ( 2 ) ) {\displaystyle d({\mathcal {G}}^{(3)}\vert \mathbf {Q} ^{(2)})} is equal to the fraction of triangles formed by 2-edges in the subhypergraph that are 3-edges.
Definition [Relative density]. For j ≥ 3 {\displaystyle j\geq 3} , fix some classes V i 1 , … , V i j {\displaystyle V_{i_{1}},\ldots ,V_{i_{j}}} of G ( 1 ) {\displaystyle {\mathcal {G}}^{(1)}} with 1 ≤ i 1 < … < i j ≤ l {\displaystyle 1\leq i_{1}<\ldots <i_{j}\leq l} . Suppose r ≥ 1 {\displaystyle r\geq 1} is an integer. Let Q ( j − 1 ) = { Q 1 ( j − 1 ) , … , Q r ( j − 1 ) } {\displaystyle \mathbf {Q} ^{(j-1)}=\{Q_{1}^{(j-1)},\ldots ,Q_{r}^{(j-1)}\}} be a subhypergraph of the induced j {\displaystyle j} -partite graph G ( j − 1 ) [ V i 1 , … , V i j ] {\displaystyle {\mathcal {G}}^{(j-1)}[V_{i_{1}},\ldots ,V_{i_{j}}]} . Define the relative density d ( G ( j ) | Q ( j − 1 ) ) = | G ( j ) ∩ ∪ s ∈ [ r ] K j ( Q s j − 1 ) | | ∪ s ∈ [ r ] K j ( Q s j − 1 ) | {\displaystyle d\left({\mathcal {G}}^{(j)}\vert \mathbf {Q} ^{(j-1)}\right)={\frac {\left|{\mathcal {G}}^{(j)}\cap \cup _{s\in [r]}{\mathcal {K}}_{j}(Q_{s}^{j-1})\right|}{\left|\cup _{s\in [r]}{\mathcal {K}}_{j}(Q_{s}^{j-1})\right|}}} .
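For j = 3 and r = 1 the relative density is simply the fraction of triangles spanned by the 2-edges that are themselves 3-edges. The toy computation below, with a tiny invented example, makes this concrete; it is only an illustration of the definition above.

```python
# Toy relative density d(G3 | G2) for j = 3, r = 1 (illustrative example).
from itertools import combinations

G2 = {frozenset(p) for p in [(1, 2), (1, 3), (1, 4), (2, 3), (2, 4), (3, 4)]}
G3 = {frozenset(t) for t in [(1, 2, 3), (2, 3, 4)]}
vertices = {1, 2, 3, 4}

triangles = {frozenset(t) for t in combinations(sorted(vertices), 3)
             if all(frozenset(p) in G2 for p in combinations(t, 2))}

print(len(G3 & triangles) / len(triangles))   # 2 of the 4 triangles carry a 3-edge -> 0.5
```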
What follows is the appropriate notion of pseudorandomness that the regularity method will use. Informally, by this concept of regularity, ( j − 1 ) {\displaystyle (j-1)} -edges ( G ( j − 1 ) {\displaystyle {\mathcal {G}}^{(j-1)}} ) have some control over j {\displaystyle j} -edges ( G ( j ) {\displaystyle {\mathcal {G}}^{(j)}} ). More precisely, this defines a setting where density of j {\displaystyle j} edges in large subhypergraphs is roughly the same as one would expect based on the relative density alone. Formally,
Definition [( δ j , d j , r {\displaystyle \delta _{j},d_{j},r} )-regularity]. Suppose δ j , d j {\displaystyle \delta _{j},d_{j}} are positive real numbers and r ≥ 1 {\displaystyle r\geq 1} is an integer. G ( j ) {\displaystyle {\mathcal {G}}^{(j)}} is ( δ j , d j , r {\displaystyle \delta _{j},d_{j},r} )-regular with respect to G ( j − 1 ) {\displaystyle {\mathcal {G}}^{(j-1)}} if for any choice of classes V i 1 , … , V i j {\displaystyle V_{i_{1}},\ldots ,V_{i_{j}}} and any collection of subhypergraphs Q ( j − 1 ) = { Q 1 ( j − 1 ) , … , Q r ( j − 1 ) } {\displaystyle \mathbf {Q} ^{(j-1)}=\{Q_{1}^{(j-1)},\ldots ,Q_{r}^{(j-1)}\}} of G ( j − 1 ) [ V i 1 , … , V i j ] {\displaystyle {\mathcal {G}}^{(j-1)}[V_{i_{1}},\ldots ,V_{i_{j}}]} satisfying | ∪ s ∈ [ r ] K j ( Q s ) ( j − 1 ) | ≥ δ j | K j ( G ( j − 1 ) [ V i 1 , … , V i j ] ) | {\displaystyle \left|\cup _{s\in [r]}{\mathcal {K}}_{j}(Q_{s})^{(j-1)}\right|\geq \delta _{j}\left|{\mathcal {K}}_{j}({\mathcal {G}}^{(j-1)}[V_{i_{1}},\ldots ,V_{i_{j}}])\right|} we have d ( G ( j ) | Q ( j − 1 ) ) = d j ± δ j {\displaystyle d({\mathcal {G}}^{(j)}\vert \mathbf {Q} ^{(j-1)})=d_{j}\pm \delta _{j}} .
Roughly speaking, the following describes the pseudorandom blocks into which the hypergraph regularity lemma decomposes any large enough hypergraph. In Szemerédi regularity, 2-edges are regularized versus 1-edges (vertices). In this generalized notion, j {\displaystyle j} -edges are regularized versus ( j − 1 ) {\displaystyle (j-1)} -edges for all 2 ≤ j ≤ h {\displaystyle 2\leq j\leq h} . More precisely, this defines a notion of regular hypergraph called ( l , h ) {\displaystyle (l,h)} -complex, in which existence of j {\displaystyle j} -edge implies existence of all underlying ( j − 1 ) {\displaystyle (j-1)} -edges, as well as their relative regularity. For example, if { x , y , z } {\displaystyle \{x,y,z\}} is a 3-edge then { x , y } {\displaystyle \{x,y\}} , { x , z } {\displaystyle \{x,z\}} , and { y , z } {\displaystyle \{y,z\}} are 2-edges in the complex. Moreover, the density of 3-edges over all possible triangles made by 2-edges is roughly the same in every collection of subhypergraphs.
Definition [ ( δ , d , r ) {\displaystyle (\delta ,\mathbf {d} ,r)} -regular ( l , h ) {\displaystyle (l,h)} -complex]. An ( l , h ) {\displaystyle (l,h)} -complex G {\displaystyle \mathbf {G} } is a system { G ( j ) } j = 1 h {\displaystyle \{{\mathcal {G}}^{(j)}\}_{j=1}^{h}} of l {\displaystyle l} -partite j {\displaystyle j} graphs G ( j ) {\displaystyle {\mathcal {G}}^{(j)}} satisfying G ( j ) ⊂ K j ( G ( j − 1 ) ) {\displaystyle {\mathcal {G}}^{(j)}\subset {\mathcal {K}}_{j}({\mathcal {G}}^{(j-1)})} . Given vectors of positive real numbers δ = ( δ 2 , … , δ h ) {\displaystyle \delta =(\delta _{2},\ldots ,\delta _{h})} , d = ( d 2 , … , d h ) {\displaystyle \mathbf {d} =(d_{2},\ldots ,d_{h})} , and an integer r ≥ 1 {\displaystyle r\geq 1} , we say ( l , h ) {\displaystyle (l,h)} -complex is ( δ , d , r ) {\displaystyle (\delta ,\mathbf {d} ,r)} -regular if
The following describes the equitable partition that the hypergraph regularity lemma will induce. A ( μ , δ , d , r ) {\displaystyle (\mu ,\delta ,\mathbf {d} ,r)} -equitable family of partitions is a sequence of partitions of 1-edges (vertices), 2-edges (pairs), 3-edges (triples), etc. This is an important distinction from the partition obtained by Szemerédi's regularity lemma, where only vertices are being partitioned. In fact, Gowers [ 3 ] demonstrated that a vertex partition alone cannot give a sufficiently strong notion of regularity to imply the hypergraph counting lemma.
Definition [ ( μ , δ , d , r ) {\displaystyle (\mu ,\delta ,\mathbf {d} ,r)} -equitable partition]. Let μ > 0 {\displaystyle \mu >0} be a real number, r ≥ 1 {\displaystyle r\geq 1} be an integer, and δ = ( δ 2 , … , δ k − 1 ) {\displaystyle \delta =(\delta _{2},\ldots ,\delta _{k-1})} , d = ( d 2 , … , d k − 1 ) {\displaystyle \mathbf {d} =(d_{2},\ldots ,d_{k-1})} be vectors of positive reals. Let a = ( a 1 , … , a k − 1 ) {\displaystyle \mathbf {a} =(a_{1},\ldots ,a_{k-1})} be a vector of positive integers and V {\displaystyle V} be an n {\displaystyle n} -element vertex set. We say that a family of partitions P = P ( k − 1 , a ) = { P ( 1 ) … , P ( k − 1 ) } {\displaystyle {\mathcal {P}}={\mathcal {P}}(k-1,\mathbf {a} )=\{{\mathcal {P}}^{(1)}\,\ldots ,{\mathcal {P}}^{(k-1)}\}} on V {\displaystyle V} is ( μ , δ , d , r ) {\displaystyle (\mu ,\delta ,\mathbf {d} ,r)} -equitable if it satisfies the following:
Finally, the following defines what it means for a k {\displaystyle k} -uniform hypergraph to be regular with respect to a partition. In particular, this is the main definition that describes the output of hypergraph regularity lemma below.
Definition [Regularity with respect to a partition]. We say that a k {\displaystyle k} -graph H ( k ) {\displaystyle {\mathcal {H}}^{(k)}} is ( δ k , r ) {\displaystyle (\delta _{k},r)} -regular with respect to a family of partitions P {\displaystyle {\mathcal {P}}} if all but at most δ k n k {\displaystyle \delta _{k}n^{k}} k − {\displaystyle k-} edges K {\displaystyle K} of H ( k ) {\displaystyle {\mathcal {H}}^{(k)}} have the property that K ∈ K k ( G ( 1 ) ) {\displaystyle K\in {\mathcal {K}}_{k}({\mathcal {G}}^{(1)})} and if P = { P ( j ) } j = 1 k − 1 {\displaystyle \mathbf {P} =\{P^{(j)}\}_{j=1}^{k-1}} is unique ( k , k − 1 ) {\displaystyle (k,k-1)} -complex for which K ∈ K k ( P ( k − 1 ) ) {\displaystyle K\in {\mathcal {K}}_{k}(P^{(k-1)})} , then H ( k ) {\displaystyle {\mathcal {H}}^{(k)}} is ( δ k , d ( H ( k ) | P ( k − 1 ) ) , r ) {\displaystyle (\delta _{k},d({\mathcal {H}}^{(k)}\vert P^{(k-1)}),r)} regular with respect to P {\displaystyle {\mathcal {P}}} .
For all positive real μ {\displaystyle \mu } , δ k {\displaystyle \delta _{k}} , and functions r : N × ( 0 , 1 ] k − 2 → N {\displaystyle r\,\colon \mathbb {N} \times (0,1]^{k-2}\to \mathbb {N} } , δ j : ( 0 , 1 ] k − j → ( 0 , 1 ] {\displaystyle \delta _{j}\,\colon (0,1]^{k-j}\to (0,1]} for j = 2 , … , k − 1 {\displaystyle j=2,\ldots ,k-1} there exists T 0 {\displaystyle T_{0}} and n 0 {\displaystyle n_{0}} so that the following holds. For any k {\displaystyle k} -uniform hypergraph H ( k ) {\displaystyle {\mathcal {H}}^{(k)}} on n ≥ n 0 {\displaystyle n\geq n_{0}} vertices, there exists a family of partitions P = P ( k − 1 , a ) {\displaystyle {\mathcal {P}}={\mathcal {P}}(k-1,\mathbf {a} )} and a vector d = ( d 2 , … , d k − 1 ) {\displaystyle \mathbf {d} =(d_{2},\ldots ,d_{k-1})} so that, for r = r ( a 1 , d ) {\displaystyle r=r(a_{1},\mathbf {d} )} and δ = ( δ 2 , … , δ k − 1 ) {\displaystyle \mathbf {\delta } =(\delta _{2},\ldots ,\delta _{k-1})} where δ j = δ j ( d j , … , d k − 1 ) {\displaystyle \delta _{j}=\delta _{j}(d_{j},\ldots ,d_{k-1})} for all j {\displaystyle j} , the following holds.
For all integers 2 ≤ k ≤ l {\displaystyle 2\leq k\leq l} the following holds: ∀ γ > 0 ∀ d k > 0 ∃ δ k > 0 ∀ d k − 1 > 0 ∃ δ k − 1 > 0 ⋯ ∀ d 2 > 0 ∃ δ 2 > 0 {\displaystyle \forall \gamma >0\;\;\forall d_{k}>0\;\;\exists \delta _{k}>0\;\;\forall d_{k-1}>0\;\;\exists \delta _{k-1}>0\;\cdots \;\forall d_{2}>0\;\;\exists \delta _{2}>0} and there are integers r {\displaystyle r} and m 0 {\displaystyle m_{0}} so that, with d = ( d 2 , … , d k ) {\displaystyle \mathbf {d} =(d_{2},\ldots ,d_{k})} , δ = ( δ 2 , … , δ k ) {\displaystyle \delta =(\delta _{2},\ldots ,\delta _{k})} , and m ≥ m 0 {\displaystyle m\geq m_{0}} ,
if G = { G ( j ) } j = 1 k {\displaystyle \mathbf {G} =\{{\mathcal {G}}^{(j)}\}_{j=1}^{k}} is a ( δ , d , r ) {\displaystyle (\delta ,\mathbf {d} ,r)} -regular ( l , k ) {\displaystyle (l,k)} complex with vertex partition G ( 1 ) = V 1 ∪ ⋯ ∪ V l {\displaystyle {\mathcal {G}}^{(1)}=V_{1}\cup \cdots \cup V_{l}} and | V i | = m {\displaystyle |V_{i}|=m} , then
| K l ( G ( k ) ) | = ( 1 ± γ ) ∏ h = 2 k d h ( l h ) × m l {\displaystyle \left|{\mathcal {K}}_{l}({\mathcal {G}}^{(k)})\right|=(1\pm \gamma )\prod _{h=2}^{k}d_{h}^{\binom {l}{h}}\times m^{l}} .
The main application through which most others follow is the hypergraph removal lemma , which roughly states that given fixed F ( k ) {\displaystyle {\mathcal {F}}^{(k)}} and large H ( k ) {\displaystyle {\mathcal {H}}^{(k)}} k {\displaystyle k} -uniform hypergraphs, if H ( k ) {\displaystyle {\mathcal {H}}^{(k)}} contains few copies of F ( k ) {\displaystyle {\mathcal {F}}^{(k)}} , then one can delete few hyperedges in H ( k ) {\displaystyle {\mathcal {H}}^{(k)}} to eliminate all of the copies of F ( k ) {\displaystyle {\mathcal {F}}^{(k)}} . To state it more formally,
For all l ≥ k ≥ 2 {\displaystyle l\geq k\geq 2} and every μ > 0 {\displaystyle \mu >0} , there exists ζ > 0 {\displaystyle \zeta >0} and n 0 > 0 {\displaystyle n_{0}>0} so that the following holds. Suppose F ( k ) {\displaystyle {\mathcal {F}}^{(k)}} is a k {\displaystyle k} -uniform hypergraph on l {\displaystyle l} vertices and H ( k ) {\displaystyle {\mathcal {H}}^{(k)}} is that on n ≥ n 0 {\displaystyle n\geq n_{0}} vertices. If H ( k ) {\displaystyle {\mathcal {H}}^{(k)}} contains at most ζ n l {\displaystyle \zeta n^{l}} copies of F ( k ) {\displaystyle {\mathcal {F}}^{(k)}} , then one can delete μ n k {\displaystyle \mu n^{k}} hyperedges in H ( k ) {\displaystyle {\mathcal {H}}^{(k)}} to make it F ( k ) {\displaystyle {\mathcal {F}}^{(k)}} -free.
One of the original motivations for graph regularity method was to prove Szemerédi's theorem , which states that every dense subset of Z {\displaystyle \mathbb {Z} } contains an arithmetic progression of arbitrary length. In fact, by a relatively simple application of the triangle removal lemma , one can prove that every dense subset of Z {\displaystyle \mathbb {Z} } contains an arithmetic progression of length 3.
The hypergraph regularity method and hypergraph removal lemma can prove high-dimensional and ring analogues of density version of Szemerédi's theorems, originally proved by Furstenberg and Katznelson. [ 4 ] In fact, this approach yields first quantitative bounds for the theorems.
This theorem roughly implies that any dense subset of Z d {\displaystyle \mathbb {Z} ^{d}} contains any finite pattern of Z d {\displaystyle \mathbb {Z} ^{d}} . The case when d = 1 {\displaystyle d=1} and the pattern is an arithmetic progression of a given length is equivalent to Szemerédi's theorem.
Source: [ 4 ]
Let T {\displaystyle T} be a finite subset of R d {\displaystyle \mathbb {R} ^{d}} and let δ > 0 {\displaystyle \delta >0} be given. Then there exists a finite subset C ⊂ R d {\displaystyle C\subset \mathbb {R} ^{d}} such that every Z ⊂ C {\displaystyle Z\subset C} with | Z | > δ | C | {\displaystyle |Z|>\delta |C|} contains a homothetic copy of T {\displaystyle T} . (i.e. a set of the form z + λ T {\displaystyle z+\lambda T} , for some z ∈ R d {\displaystyle z\in \mathbb {R} ^{d}} and λ ∈ R {\displaystyle \lambda \in \mathbb {R} } )
Moreover, if T ⊂ [ − t ; t ] d {\displaystyle T\subset [-t;t]^{d}} for some t ∈ N {\displaystyle t\in \mathbb {N} } , then there exists N 0 ∈ N {\displaystyle N_{0}\in \mathbb {N} } such that C = [ − N , N ] d {\displaystyle C=[-N,N]^{d}} has this property for all N ≥ N 0 {\displaystyle N\geq N_{0}} .
Another possible generalization that can be proven by the removal lemma is when the dimension is allowed to grow.
Let A {\displaystyle A} be a finite ring. For every δ > 0 {\displaystyle \delta >0} , there exists M 0 {\displaystyle M_{0}} such that, for M ≥ M 0 {\displaystyle M\geq M_{0}} , any subset Z ⊂ A M {\displaystyle Z\subset A^{M}} with | Z | > δ | A M | {\displaystyle |Z|>\delta |A^{M}|} contains a coset of an isomorphic copy of A {\displaystyle A} (as a left A {\displaystyle A} -module).
In other words, there are some r , u ∈ A M {\displaystyle \mathbf {r} ,\mathbf {u} \in A^{M}} such that r + φ ( A ) ⊂ Z {\displaystyle r+\varphi (A)\subset Z} , where φ : A → A M {\displaystyle \varphi \colon A\to A^{M}} , φ ( a ) = a u {\displaystyle \varphi (a)=a\mathbf {u} } is an injection. | https://en.wikipedia.org/wiki/Hypergraph_regularity_method |
Hypergravity is defined as the condition where the force of gravity (real or perceived) exceeds that on the surface of the Earth . [ 1 ] This is expressed as being greater than 1 g . Hypergravity conditions are created on Earth for research on human physiology in aerial combat and space flight, as well as testing of materials and equipment for space missions. Manufacturing of titanium aluminide turbine blades in 20 g is being explored by researchers at the European Space Agency (ESA) via an 8-meter wide Large Diameter Centrifuge (LDC) . [ 2 ]
NASA scientists looking at meteorite impacts discovered that most strains of bacteria were able to reproduce under acceleration exceeding 7,500 g . [ 3 ]
Recent research carried out on extremophiles in Japan involved a variety of bacteria, including Escherichia coli and Paracoccus denitrificans , being subjected to conditions of extreme gravity. The bacteria were cultivated while being rotated in an ultracentrifuge at high speeds corresponding to 403,627 g . Another study, published in the Proceedings of the National Academy of Sciences, reports that some bacteria can exist even in extreme "hypergravity": they can still live and breed despite gravitational forces 400,000 times greater than those felt on Earth. Paracoccus denitrificans was one of the bacteria which displayed not only survival but also robust cellular growth under these conditions of hyperacceleration, which are usually found only in cosmic environments, such as on very massive stars or in the shock waves of supernovas . Analysis showed that the small size of prokaryotic cells is essential for successful growth under hypergravity. The research has implications for the feasibility of the existence of exobacteria and panspermia .
High gravity conditions generated by centrifuge is applied in the chemical industry, casting, and material synthesis. [ 4 ] [ 5 ] [ 6 ] [ 7 ] The convection and mass transfer are greatly affected by the gravitational condition. Researchers reported that the high-gravity level can effectively affect the phase composition and morphology of the products. [ 4 ]
See G-force: Human tolerance
Ever since Pearl proposed the rate of living theory of aging, numerous studies have demonstrated its validity in poikilotherms. In mammals, however, satisfactory experimental demonstration is still lacking because an externally imposed increase of basal metabolic rate of these animals (e.g. by placement in the cold) is usually accompanied by general homeostatic disturbance and stress. The present study was based on the finding that rats exposed to slightly increased gravity are able to adapt with little chronic stress but at a higher level of basal metabolic expenditure (increased 'rate of living').

The rate of aging of 17-month-old rats that had been exposed to 3.14 g in an animal centrifuge for 8 months was larger than that of controls, as shown by apparently elevated lipofuscin content in heart and kidney, reduced numbers and increased size of mitochondria of heart tissue, and inferior liver mitochondria respiration (reduced 'efficiency': 20% larger ADP:O ratio, P less than 0.01; reduced 'speed': 8% lower respiratory control ratio, P less than 0.05). [ 8 ] Steady-state food intake per day per kg body weight, which is presumably proportional to 'rate of living' or specific basal metabolic expenditure, was about 18% higher than in controls (P less than 0.01) after an initial 2-month adaptation period.

Finally, though half of the centrifuged animals lived only a little shorter than controls (average about 343 vs. 364 days on the centrifuge, difference statistically nonsignificant), the remaining half (longest survivors) lived on the centrifuge an average of 520 days (range 483–572) compared to an average of 574 days (range 502–615) for controls, computed from onset of centrifugation, or 11% shorter (P less than 0.01). Therefore, these results show that a moderate increase of the level of basal metabolism of young adult rats adapted to hypergravity compared to controls in normal gravity is accompanied by a roughly similar increase in the rate of organ aging and reduction of survival, in agreement with Pearl's rate of living theory of aging, previously experimentally demonstrated only in poikilotherms.
Pups from gestating rats exposed to hypergravity (1.8 g ) or to normal gravity at the perinatal period were evaluated. [ 9 ] By comparison to controls, the hypergravity group had shorter latencies before choosing a maze arm in a T-maze and fewer exploratory pokes in a hole board. During dyadic encounters, the hypergravity group had a lower number of self-grooming episodes and shorter latencies before crossing under the opposing rat. | https://en.wikipedia.org/wiki/Hypergravity |
In mathematics , the n -th hyperharmonic number of order r , denoted by H n ( r ) {\displaystyle H_{n}^{(r)}} , is recursively defined by the relations:
H n ( 0 ) = 1 n {\displaystyle H_{n}^{(0)}={\frac {1}{n}}}
and
In particular, H n = H n ( 1 ) {\displaystyle H_{n}=H_{n}^{(1)}} is the n -th harmonic number .
The hyperharmonic numbers were discussed by J. H. Conway and R. K. Guy in their 1995 book The Book of Numbers . [ 1 ] : 258
By definition, the hyperharmonic numbers satisfy the recurrence relation

H_n^{(r)} = H_{n-1}^{(r)} + H_n^{(r-1)}.

In place of the recurrences, there is a more effective closed formula to calculate these numbers:

H_n^{(r)} = \binom{n+r-1}{r-1}\left(H_{n+r-1} - H_{r-1}\right).
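As a quick sanity check of the two characterizations above, the sketch below evaluates the recursive definition and the closed formula with exact rational arithmetic; the function names are ad hoc helpers for this illustration only, not from any particular library.

```python
from fractions import Fraction
from functools import lru_cache
from math import comb

@lru_cache(maxsize=None)
def hyperharmonic(n: int, r: int) -> Fraction:
    """H_n^{(r)} from the recursion H_n^{(r)} = sum_{k=1..n} H_k^{(r-1)}, with H_n^{(0)} = 1/n."""
    if r == 0:
        return Fraction(1, n)
    return sum((hyperharmonic(k, r - 1) for k in range(1, n + 1)), Fraction(0))

def harmonic(n: int) -> Fraction:
    return sum((Fraction(1, k) for k in range(1, n + 1)), Fraction(0))

def hyperharmonic_closed(n: int, r: int) -> Fraction:
    """Closed form: C(n+r-1, r-1) * (H_{n+r-1} - H_{r-1})."""
    return comb(n + r - 1, r - 1) * (harmonic(n + r - 1) - harmonic(r - 1))

assert hyperharmonic(5, 3) == hyperharmonic_closed(5, 3) == Fraction(459, 20)
assert hyperharmonic(7, 1) == harmonic(7)   # order 1 recovers the ordinary harmonic numbers
```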
The hyperharmonic numbers have a strong relation to the combinatorics of permutations. The generalization of the identity

H_n = \frac{1}{n!}\left[{n+1 \atop 2}\right]

reads as

H_n^{(r)} = \frac{1}{n!}\left[{n+r \atop r+1}\right]_r,

where \left[{n \atop r}\right]_r is an r -Stirling number of the first kind. [ 2 ]
The above expression with binomial coefficients easily gives that for every fixed order r ≥ 2 we have the asymptotic relation [ 3 ]

H_n^{(r)} \sim \frac{n^{r-1}}{(r-1)!}\,\ln n \qquad (n \to \infty),

that is, the quotient of the left and right hand sides tends to 1 as n tends to infinity.
An immediate consequence is that the series

\sum_{n=1}^{\infty} \frac{H_n^{(r)}}{n^m}

converges when m > r.
The generating function of the hyperharmonic numbers is

\sum_{n=0}^{\infty} H_n^{(r)} z^n = -\frac{\ln(1-z)}{(1-z)^r}.
The exponential generating function is much harder to deduce. One has that for all r = 1, 2, ...
where ₂F₂ is a hypergeometric function . The r = 1 case for the harmonic numbers is a classical result; the general one was proved in 2009 by I. Mező and A. Dil. [ 4 ]
The next relation connects the hyperharmonic numbers to the Hurwitz zeta function : [ 3 ]
It is known that the harmonic numbers are never integers except in the case n = 1. The same question can be posed with respect to the hyperharmonic numbers: are there integer hyperharmonic numbers? István Mező proved [ 5 ] that if r = 2 or r = 3, these numbers are never integers except in the trivial case n = 1. He conjectured that this is always the case, namely, that the hyperharmonic numbers of order r are never integers except when n = 1. This conjecture was justified for a class of parameters by R. Amrane and H. Belbachir. [ 6 ] In particular, these authors proved that H_n^{(r)} is not an integer for all r < 26 and n = 2, 3,... Extension to higher orders was made by Göral and Sertbaş. [ 7 ] These authors have also shown that H_n^{(r)} is never an integer when n is even or a prime power, or r is odd.
Another result is the following. [ 8 ] Let S(x) be the number of non-integer hyperharmonic numbers H_n^{(r)} such that (n, r) ∈ [0, x] × [0, x]. Then, assuming Cramér's conjecture ,
Note that the number of integer lattice points in [0, x] × [0, x] is x^2 + O(x), which shows that most of the hyperharmonic numbers cannot be integers.
The problem was finally settled by D. C. Sertbaş, who found that there are infinitely many hyperharmonic integers, although they are quite large. The smallest hyperharmonic number which is an integer found so far is [ 9 ] | https://en.wikipedia.org/wiki/Hyperharmonic_number
Hyperhydricity (previously known as vitrification) is a physiological malformation that results in excessive hydration , low lignification , impaired stomatal function and reduced mechanical strength of tissue culture-generated plants. The consequence is poor regeneration of such plants without intensive greenhouse acclimation for outdoor growth. [ 1 ] Additionally, it may also lead to leaf-tip and bud necrosis in some cases, which often leads to loss of apical dominance in the shoots. [ 2 ] In general, the main symptom of hyperhydricity is a translucent appearance signified by a shortage of chlorophyll and a high water content. Specifically, the presence of a thin or absent cuticular layer, a reduced number of palisade cells , irregular stomata , less developed cell walls and large intercellular spaces in the mesophyll cell layer have been described as some of the anatomical changes associated with hyperhydricity. [ 3 ]
The main causes of hyperhydricity in plant tissue culture are factors that trigger oxidative stress, such as high salt concentration, high relative humidity , low light intensity , gas accumulation in the atmosphere of the jar, the length of time intervals between subcultures , the number of subcultures, the concentration and type of gelling agent , the type of explants used, microelement concentrations and hormonal imbalances. [ 4 ] Hyperhydricity is commonly apparent in liquid culture -grown plants or when there is a low concentration of gelling agent. A high ammonium concentration also contributes to hyperhydricity. [ 5 ]
Hyperhydricity can be reduced by modifying the atmosphere of the culture vessels. Adjusting the relative humidity in the vessel is one of the most important parameters to be controlled. Use of gas-permeable membranes may help in this regard, as this allows increased exchange of water vapor and other gases such as ethylene with the surrounding environment. Using a higher concentration of gelling agent, or a higher-strength gelling agent, may reduce the risk of hyperhydricity. Hyperhydricity can also be controlled by bottom cooling, which allows water to condense on the medium, [ 6 ] the use of the cytokinin meta-topolin (6-(3-hydroxybenzylamino)purine), a combination of lower cytokinin and ammonium nitrate concentrations in the medium, use of nitrate or glutamine as the sole nitrogen source, and decreasing the ratio of NH4+:NO3- in the medium. [ 7 ] In studies on calcium deficiency in tissue cultures of Lavandula angustifolia , it was shown that an increase in calcium in the medium reduced hyperhydricity. [ 8 ]
| https://en.wikipedia.org/wiki/Hyperhydricity
In nonstandard analysis , a hyperinteger n is a hyperreal number that is equal to its own integer part . A hyperinteger may be either finite or infinite. A finite hyperinteger is an ordinary integer . An example of an infinite hyperinteger is given by the class of the sequence (1, 2, 3, ...) in the ultrapower construction of the hyperreals.
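A small worked illustration of the ultrapower example above is given below; the notation (a free ultrafilter 𝒰 on ℕ) is introduced only for this sketch.

```latex
% In the ultrapower construction, a hyperreal is an equivalence class [(a_1, a_2, a_3, \dots)]
% of real sequences modulo a free ultrafilter \mathcal{U} on \mathbb{N}.
\omega := [(1, 2, 3, \dots)] \quad\text{is a hyperinteger, since } {}^{*}\!\lfloor \omega \rfloor = [(\lfloor 1\rfloor, \lfloor 2\rfloor, \dots)] = \omega .
% It is infinite: for every standard n, the set \{ k : k > n \} is cofinite, hence lies in \mathcal{U}, so \omega > n.
% Its reciprocal 1/\omega = [(1, 1/2, 1/3, \dots)] is infinitesimal: for every standard \varepsilon > 0,
% \{ k : 1/k < \varepsilon \} \in \mathcal{U}, hence 0 < 1/\omega < \varepsilon .
```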
The standard integer part function :

⌊ · ⌋ : ℝ → ℝ

is defined for all real x and equals the greatest integer not exceeding x . By the transfer principle of nonstandard analysis, there exists a natural extension:

*⌊ · ⌋ : *ℝ → *ℝ

defined for all hyperreal x , and we say that x is a hyperinteger if x = *⌊x⌋. Thus the hyperintegers are the image of the integer part function on the hyperreals.
The set *ℤ of all hyperintegers is an internal subset of the hyperreal line *ℝ. The set of all finite hyperintegers (i.e. ℤ itself) is not an internal subset. Elements of the complement *ℤ ∖ ℤ are called, depending on the author, nonstandard , unlimited , or infinite hyperintegers. The reciprocal of an infinite hyperinteger is always an infinitesimal .
Nonnegative hyperintegers are sometimes called hypernatural numbers. Similar remarks apply to the sets ℕ and *ℕ. Note that the latter gives a non-standard model of arithmetic in the sense of Skolem . | https://en.wikipedia.org/wiki/Hyperinteger
The Hyperion Cantos is a series of science fiction novels by Dan Simmons . The title was originally used for the collection of the first pair of books in the series, Hyperion and The Fall of Hyperion , [ 1 ] [ 2 ] and later came to refer to the overall storyline, including Endymion , The Rise of Endymion , and a number of short stories. [ 3 ] [ 4 ] More narrowly, inside the fictional storyline, after the first volume, the Hyperion Cantos is an epic poem written by the character Martin Silenus covering in verse form the events of the first two books. [ 5 ]
Of the four novels, Hyperion received the Hugo and Locus Awards in 1990; [ 6 ] The Fall of Hyperion won the Locus and British Science Fiction Association Awards in 1991; [ 7 ] and The Rise of Endymion received the Locus Award in 1998. [ 8 ] All four novels were also nominated for various science fiction awards.
First published in 1989, Hyperion has the structure of a frame story , similar to Geoffrey Chaucer 's Canterbury Tales and Giovanni Boccaccio 's Decameron . The story weaves the interlocking tales of a diverse group of travelers sent on a pilgrimage to the Time Tombs on Hyperion. The travelers have been sent by the Hegemony (the government of the human star systems), the All Thing, and the Church of the Final Atonement, alternately known as the Shrike Church, to make a request of the Shrike. As they progress in their journey, each of the pilgrims tells their tale.
This book concludes the story begun in Hyperion . It abandons the storytelling frame structure of the first novel, and is instead presented primarily as a series of dreams by John Keats.
The story commences 274 years after the events in the previous novel. Few main characters from the first two books are present in the later two. The main character is Raul Endymion, an ex-soldier who receives a death sentence after an unfair trial. He is rescued by Martin Silenus and asked to perform a series of extraordinarily difficult tasks. The main task is to rescue and protect Aenea, the daughter of Brawne Lamia (one of the main characters of Hyperion), a messiah who arrives from the time period just after the first books via time travel . The Catholic Church has become a dominant force in the human universe and views Aenea as a potential threat to its power. The group of Aenea, Endymion, and A. Bettik (an android ) evades the Church's forces on several worlds through use of the Consul's spaceship, ending the story on Earth.
This final novel in the series finishes the story begun in Endymion , expanding on the themes in Endymion , as Raul and Aenea battle the Church and meet their respective destinies.
The series also includes three short stories:
The Hyperion universe originated when Simmons was an elementary school teacher, as an extended tale he told at intervals to his young students; this is recorded in " The Death of the Centaur ", and its introduction. It then inspired his short story "Remembering Siri", which eventually became the nucleus around which Hyperion and The Fall of Hyperion formed. After the quartet was published came the short story " Orphans of the Helix ". "Orphans" is currently the final work in the Cantos, both chronologically and internally.
The original Hyperion Cantos has been described as a single novel published in two volumes, which were released separately for reasons of length. [ 3 ] [ 9 ] In his introduction to "Orphans of the Helix", Simmons elaborates:
Some readers may know that I've written four novels set in the "Hyperion Universe"— Hyperion , The Fall of Hyperion , Endymion , and The Rise of Endymion . A perceptive subset of those readers—perhaps the majority—know that this so-called epic actually consists of two long and mutually dependent tales, the two Hyperion stories combined and the two Endymion stories combined, broken into four books because of the realities of publishing. [ 10 ]
Much of the appeal of the series stems from its extensive use of references and allusions to a wide array of thinkers such as Teilhard de Chardin , John Muir , and Norbert Wiener ; to the poetry of John Keats , the famous 19th-century English Romantic poet; to Norse mythology ; and to the monk Ummon . A large number of technological elements are acknowledged by Simmons to be inspired by elements of Out of Control: The New Biology of Machines, Social Systems, and the Economic World . [ 11 ]
The Hyperion series has many echoes of Jack Vance , explicitly acknowledged in one of the later books.
The title of the first novel, "Hyperion", is taken from one of Keats's poems, the unfinished epic Hyperion . Similarly, the title of the third novel is from Keats's poem Endymion . Quotes from actual Keats poems and the fictional Cantos of Martin Silenus are interspersed throughout the novels. Simmons goes so far as to have two artificial reincarnations of John Keats ("cybrids": artificial intelligences in human bodies) play a major role in the series.
Much of the action in the series takes place on the planet Hyperion. It is described as having one-fifth less gravity than Earth standard . Hyperion has a number of peculiar indigenous flora and fauna, notably Tesla trees, which are essentially large electricity-spewing trees. It is also a "labyrinthine" planet, which means that it is home to ancient subterranean labyrinths of unknown purpose. Most importantly, Hyperion is the location of the Time Tombs, large artifacts surrounded by "anti-entropic" fields that allow them to move backward through time.
In the fictional universe of the Hyperion Cantos , the Hegemony of Man encompasses over 200 planets. The faster-than-light communications technology, the Fatlines, is said to operate through tachyon bursts. However, in later books it is revealed that they operate through the Void Which Binds. The Farcaster network was given to humanity by the TechnoCore, and it is another use of the Void Which Binds that allows this instantaneous travel between worlds. The Hawking Drive was developed by human scientists, allowing the faster-than-light travel which led to the Hegira (from the Arabic word هجرة Hijra , meaning 'migration'). The Gideon drive, a Core-provided starship drive, allows for near-instantaneous travel between any two points in human-occupied space. The drive's use kills any human on board a Gideon-propelled starship; thus, the technology is only of use with remote probes or when used in conjunction with the Pax's resurrection technology. The resurrection creche can regenerate someone carrying a cruciform from their remains. Treeships are living trees propelled through space by ergs, spider-like solid-state alien beings that emit force fields .
The region of the Tombs is also the home of the Shrike, a menacing half-mechanical, half-organic four-armed creature that features prominently in the series. [ 12 ] It appears in all four Hyperion Cantos books and is an enigma in the initial two; its purpose is not revealed until the second book, but is still left nebulous. The Shrike appears to act both autonomously and as a servant of some unknown force or entity. In the first two Hyperion books, it exists solely in the area around the Time Tombs on the planet Hyperion. Its portrayal is changed significantly in the last two books, Endymion and The Rise of Endymion . In these novels, the Shrike appears effectively unfettered and protects the heroine Aenea against assassins of the opposing TechnoCore.
Surrounded in mystery, the object of fear, hatred, and even worship by members of the Church of the Final Atonement (the Shrike Cult), the Shrike's origins are described as uncertain. It is portrayed as composed of razorwire , thorns, blades, and cutting edges, having fingers like scalpels and long, curved toe blades. It has the ability to control the flow of time, and may thus appear to travel infinitely fast. The Shrike may kill victims in a flash or it may transport them to an eternity of impalement upon an enormous artificial 'Tree of Thorns,' or 'Tree of Pain' in Hyperion's distant future. The Tree of Thorns is described as an unimaginably large, metallic tree, alive with the agonized writhing of countless human victims of all ages and races. [ 13 ] It is also hinted in the second book that the Tree of Thorns is actually a simulation generated by a mystical interface which connects to human brains via a strong and pulsing (as if it were alive) cord. The name Shrike seems a reference to the Loggerhead Shrike , a small fierce bird that impales its victims on thorns, spines, or twigs. [ 14 ]
In the fictional universe of the Hyperion Cantos , the Hegemony of Man encompasses over 200 planets. The following planets appear or are specifically mentioned in the Hyperion Cantos . | https://en.wikipedia.org/wiki/Hyperion_Cantos |
Hyperjacking is an attack in which a hacker takes malicious control over the hypervisor that creates the virtual environment within a virtual machine (VM) host. [ 1 ] The point of the attack is to target the operating system that is below that of the virtual machines so that the attacker's program can run and the applications on the VMs above it will be completely oblivious to its presence.
Hyperjacking involves installing a malicious, fake hypervisor that can manage the entire server system. Regular security measures are ineffective because the operating system will not be aware that the machine has been compromised. In hyperjacking, the hypervisor specifically operates in stealth mode and runs beneath the machine, making it more difficult to detect and more likely to gain access to computer servers where it can affect the operation of the entire institution or company. If the hacker gains access to the hypervisor, everything that is connected to that server can be manipulated. [ 2 ] The hypervisor represents a single point of failure when it comes to the security and protection of sensitive information. [ 3 ]
For a hyperjacking attack to succeed, an attacker would have to take control of the hypervisor by the following methods: [ 4 ]
Some basic design features in a virtual environment can help mitigate the risks of hyperjacking:
As of early 2015, there had not been any report of an actual demonstration of a successful hyperjacking besides "proof of concept" testing. The VENOM vulnerability ( CVE-2015-3456 ) was revealed in May 2015 and had the potential to affect many datacenters. [ 5 ] Hyperjackings are rare due to the difficulty of directly accessing hypervisors; however, hyperjacking is considered a real-world threat. [ 6 ]
On September 29, 2022, Mandiant and VMware jointly made public their findings that a hacker group has successfully executed malware -based hyperjacking attacks in the wild, [ 7 ] affecting multiple target systems in an apparent espionage campaign. [ 8 ] [ 9 ] In response, Mandiant released a security guide with recommendations for hardening the VMware ESXi hypervisor environment. [ 10 ] | https://en.wikipedia.org/wiki/Hyperjacking |
Hyperland is a 50-minute-long documentary film about hypertext and surrounding technologies. It was written by Douglas Adams and produced and directed by Max Whitby [ 2 ] for BBC Two in 1990. It stars Douglas Adams as a computer user and Tom Baker , with whom Adams had already worked on Doctor Who , as a personification of a software agent .
In hindsight, what Hyperland describes and predicts is an approximation of today's World Wide Web . [ 3 ]
The self-proclaimed "fantasy documentary" begins with Adams asleep by the fireside with his television still on. In a dream that follows, Adams, fed up by game shows and generally passive, non-interactive linear content, takes his TV to a rubbish dump, where he meets Tom, played by Tom Baker. Tom is a software agent , who shows him the future of TV: interactive multimedia . [ 4 ]
Much like Apple Inc.'s Knowledge Navigator concept, Tom acts as a butler within a virtual space populated with hypermedia : linked text, sound, pictures and movies represented by animated icons. The documentary is centred on Adams browsing these media and discovering their interconnectedness.
This process leads him, for example, from the topic Atlantic Ocean to literature about the sea to The Rime of the Ancient Mariner by Samuel Taylor Coleridge to the poem Kubla Khan by the same author to Xanadu and back to the topic of hypertext via Ted Nelson 's Project Xanadu . The references to Coleridge and to Kubla Khan are rather knowing nods to Adams' own book Dirk Gently's Holistic Detective Agency , where they play significant roles in the plot. Dirk Gently was published in 1987 and also touches on the themes of interconnectedness, suggesting that this was a subject Adams had thought about at some length and for some time.
Many aspects of the documentary demonstrate Adams' noted enthusiasm for technology, and for Apple computers in particular. At the beginning of the documentary a Macintosh Portable can be seen, and most of the projects presented run on Apple hardware. Even the general design of the animated icons and environments featured in his dream is inspired by pre-OS X era Mac OS icons and design cues.
While Adams is browsing, many people and projects related to the general theme of hypertext and multimedia are presented:
The dream (and the documentary) ends with a vision of how information might be accessed in 2005. In hindsight, Hyperland does describe a number of features of the modern web and, apart from some underestimates of graphics and processing power available, the documentary paints a not inaccurate picture of hypermedia and hypertext and how they are used today. This is especially noteworthy considering that it predates the public release of the first Web browser by about a year. | https://en.wikipedia.org/wiki/Hyperland |
A superlens , or super lens , is a lens which uses metamaterials to go beyond the diffraction limit . The diffraction limit is a feature of conventional lenses and microscopes that limits the fineness of their resolution depending on the illumination wavelength and the numerical aperture (NA) of the objective lens. Many lens designs have been proposed that go beyond the diffraction limit in some way, but constraints and obstacles face each of them. [ 1 ]
In 1873 Ernst Abbe reported that conventional lenses are incapable of capturing some fine details of any given image. The superlens is intended to capture such details. This limitation of conventional lenses has inhibited progress in the biological sciences . This is because a virus or DNA molecule cannot be resolved with the highest powered conventional microscopes. This limitation extends to the minute processes of cellular proteins moving alongside microtubules of a living cell in their natural environments. Additionally, computer chips and the interrelated microelectronics continue to be manufactured at progressively smaller scales. This requires specialized optical equipment , which is also limited because these use conventional lenses. Hence, the principles governing a superlens show that it has potential for imaging DNA molecules, cellular protein processes, and aiding in the manufacture of even smaller computer chips and microelectronics. [ 2 ] [ 3 ] [ 4 ] [ 5 ]
Conventional lenses capture only the propagating light waves . These are waves that travel from a light source or an object to a lens, or the human eye. This can alternatively be studied as the far field . In contrast, a superlens captures propagating light waves and waves that stay on top of the surface of an object, which, alternatively, can be studied as both the far field and the near field . [ 6 ] [ 7 ]
In the early 20th century the term "superlens" was used by Dennis Gabor to describe something quite different: a compound lenslet array system. [ 8 ]
An image of an object can be defined as a tangible or visible representation of the features of that object. A requirement for image formation is interaction with fields of electromagnetic radiation . Furthermore, the level of feature detail, or image resolution , is limited by the wavelength of the radiation. For example, with optical microscopy , image production and resolution depend on the wavelength of visible light. However, with a superlens, this limitation may be removed, and a new class of image generated. [ 9 ]
Electron beam lithography can overcome this resolution limit . Optical microscopy, on the other hand, cannot, being limited to some value just above 200 nanometers . [ 4 ] However, new technologies combined with optical microscopy are beginning to allow increased feature resolution (see sections below).
One definition of the resolution barrier is a resolution cut-off at half the wavelength of light . The visible spectrum has a range that extends from 390 nanometers to 750 nanometers. Green light , halfway in between, is around 500 nanometers. Microscopy takes into account parameters such as the lens aperture , the distance from the object to the lens, and the refractive index of the observed material. This combination defines the resolution cutoff, or microscopy optical limit , which works out to about 200 nanometers. Therefore, conventional lenses, which literally construct an image of an object by using "ordinary" light waves, discard the information that produces the very fine, minuscule details of the object, which is carried in evanescent waves . These dimensions are less than 200 nanometers. For this reason, conventional optical systems, such as microscopes , have been unable to accurately image very small, nanometer-sized structures or nanometer-sized organisms in vivo , such as individual viruses , or DNA molecules . [ 4 ] [ 5 ]
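The often-quoted 200 nm figure can be recovered from the Abbe criterion d = λ/(2 NA); a minimal sketch follows, assuming green light of 500 nm and an illustrative numerical aperture of 1.25 (the exact NA is not given in the text).

```python
def abbe_limit_nm(wavelength_nm: float, numerical_aperture: float) -> float:
    """Smallest resolvable feature size under the Abbe criterion: d = lambda / (2 * NA)."""
    return wavelength_nm / (2.0 * numerical_aperture)

# Green light (500 nm) with an assumed high-NA immersion objective (NA = 1.25):
print(abbe_limit_nm(500.0, 1.25))  # 200.0 nm, matching the optical limit quoted above
```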
The limitations of standard optical microscopy ( bright-field microscopy ) lie in three areas:
Live biological cells in particular generally lack sufficient contrast to be studied successfully, because the internal structures of the cell are mostly colorless and transparent. The most common way to increase contrast is to stain the different structures with selective dyes , but often this involves killing and fixing the sample. Staining may also introduce artifacts , apparent structural details that are caused by the processing of the specimen and are thus not a legitimate feature of the specimen.
The conventional glass lens is pervasive throughout our society and in the sciences . It is one of the fundamental tools of optics simply because it interacts with various wavelengths of light. At the same time, the wavelength of light can be analogous to the width of a pencil used to draw the ordinary images. The limit intrudes in all kinds of ways. For example, the laser used in a digital video system cannot read details from a DVD that are smaller than the wavelength of the laser. This limits the storage capacity of DVDs. [ 10 ]
Thus, when an object emits or reflects light there are two types of electromagnetic radiation associated with this phenomenon . These are the near field radiation and the far field radiation. As implied by its description, the far field escapes beyond the object. It is then easily captured and manipulated by a conventional glass lens. However, useful (nanometer-sized) resolution details are not observed, because they are hidden in the near field. They remain localized, staying much closer to the light emitting object, unable to travel, and unable to be captured by the conventional lens. Controlling the near field radiation, for high resolution, can be accomplished with a new class of materials not easily obtained in nature. These are unlike familiar solids , such as crystals , which derive their properties from atomic and molecular units. The new material class, termed metamaterials , obtains its properties from its artificially larger structure. This has resulted in novel properties, and novel responses, which allow for details of images that surpass the limitations imposed by the wavelength of light. [ 10 ]
This has led to the desire to view live biological cell interactions in real time, in a natural environment , and hence the need for subwavelength imaging . Subwavelength imaging can be defined as optical microscopy with the ability to see details of an object or organism below the wavelength of visible light (see discussion in the above sections). In other words, it is the capability to observe, in real time, below 200 nanometers. Optical microscopy is a non-invasive technique and technology because everyday light is the transmission medium . Imaging below the optical limit in optical microscopy (subwavelength) can be engineered for the cellular level, and in principle for the nanometer level.
For example, in 2007 a technique was demonstrated where a metamaterials-based lens coupled with a conventional optical lens could manipulate visible light to see ( nanoscale ) patterns that were too small to be observed with an ordinary optical microscope. This has potential applications not only for observing a whole living cell, but also for observing cellular processes , such as how proteins and fats move in and out of cells. In the technology domain, it could be used to improve the first steps of photolithography and nanolithography , essential for manufacturing ever smaller computer chips . [ 4 ] [ 11 ]
Subwavelength focusing has become a unique imaging technique, allowing visualization of features of the viewed object which are smaller than the wavelength of the photons in use. A photon is the minimum unit of light. While previously thought to be physically impossible, subwavelength imaging has been made possible through the development of metamaterials. This is generally accomplished using a layer of metal such as gold or silver a few atoms thick, which acts as a superlens, or by means of 1D and 2D photonic crystals . [ 12 ] [ 13 ] There is a subtle interplay between propagating waves, evanescent waves, near-field imaging and far-field imaging, discussed in the sections below. [ 4 ] [ 14 ]
Metamaterial lenses (superlenses) are able to reconstruct nanometer-sized images by producing a negative refractive index in each instance. This compensates for the swiftly decaying evanescent waves. Prior to metamaterials, numerous other techniques had been proposed and even demonstrated for creating super-resolution microscopy . As far back as 1928, the Irish physicist Edward Hutchinson Synge is given credit for conceiving and developing the idea for what would ultimately become near-field scanning optical microscopy . [ 15 ] [ 16 ] [ 17 ]
In 1974 proposals for two- dimensional fabrication techniques were presented. These proposals included contact imaging to create a pattern in relief, photolithography, electron-beam lithography , X-ray lithography , or ion bombardment, on an appropriate planar substrate. [ 18 ] The shared technological goals of the metamaterial lens and the variety of lithography aim to optically resolve features having dimensions much smaller than that of the vacuum wavelength of the exposing light. [ 19 ] [ 20 ] In 1981 two different techniques of contact imaging of planar (flat) submicroscopic metal patterns with blue light (400 nm ) were demonstrated. One demonstration resulted in an image resolution of 100 nm and the other a resolution of 50 to 70 nm. [ 20 ]
In 1995, John Guerra combined a transparent grating having 50 nm lines and spaces (the "metamaterial") with a conventional microscope immersion objective. The resulting "superlens" resolved a silicon sample also having 50 nm lines and spaces, far beyond the classical diffraction limit imposed by the illumination having 650 nm wavelength in air. [ 21 ]
Since at least 1998 near field optical lithography was designed to create nanometer-scale features. Research on this technology continued as the first experimentally demonstrated negative index metamaterial came into existence in 2000–2001. The effectiveness of electron-beam lithography was also being researched at the beginning of the new millennium for nanometer-scale applications. Imprint lithography was shown to have desirable advantages for nanometer-scaled research and technology. [ 19 ] [ 22 ]
Advanced deep UV photolithography can now offer sub-100 nm resolution, yet the minimum feature size and spacing between patterns are determined by the diffraction limit of light. Its derivative technologies such as evanescent near-field lithography, near-field interference lithography, and phase-shifting mask lithography were developed to overcome the diffraction limit. [ 19 ]
In the year 2000, John Pendry proposed using a metamaterial lens to achieve nanometer-scaled imaging for focusing below the wavelength of light. [ 1 ] [ 23 ]
The original problem of the perfect lens: The general expansion of an EM field emanating from a source consists of both propagating waves and near-field or evanescent waves. An example of a 2-D line source with an electric field which has S-polarization will have plane waves consisting of propagating and evanescent components, which advance parallel to the interface. [ 24 ] As both the propagating and the smaller evanescent waves advance in a direction parallel to the medium interface, evanescent waves decay in the direction of propagation. Ordinary (positive index) optical elements can refocus the propagating components, but the exponentially decaying inhomogeneous components are always lost, leading to the diffraction limit for focusing to an image. [ 24 ]
A superlens is a lens which is capable of subwavelength imaging, allowing for magnification of near field rays. Conventional lenses have a resolution on the order of one wavelength due to the so-called diffraction limit. This limit hinders imaging very small objects, such as individual atoms, which are much smaller than the wavelength of visible light. A superlens is able to beat the diffraction limit. An example is the initial lens described by Pendry, which uses a slab of material with a negative index of refraction as a flat lens . In theory, a perfect lens would be capable of perfect focus – meaning that it could perfectly reproduce the electromagnetic field of the source plane at the image plane.
The performance limitation of conventional lenses is due to the diffraction limit. Following Pendry (2000), the diffraction limit can be understood as follows. Consider an object and a lens placed along the z-axis so the rays from the object are traveling in the +z direction. The field emanating from the object can be written, using the angular spectrum method , as a superposition of plane waves :

E(x, y, z) = \iint A(k_x, k_y)\, e^{i(k_x x + k_y y + k_z z)}\, dk_x\, dk_y,

where k_z is a function of k_x, k_y:

k_z = \sqrt{\frac{\omega^2}{c^2} - k_x^2 - k_y^2}.

Only the positive square root is taken, as the energy is going in the + z direction. All of the components of the angular spectrum of the image for which k_z is real are transmitted and re-focused by an ordinary lens. However, if

k_x^2 + k_y^2 > \frac{\omega^2}{c^2},

then k_z becomes imaginary, and the wave is an evanescent wave, whose amplitude decays as the wave propagates along the z axis. This results in the loss of the high- angular-frequency components of the wave, which contain information about the high-frequency (small-scale) features of the object being imaged. The highest resolution that can be obtained can be expressed in terms of the wavelength:

\Delta \approx \frac{2\pi}{k_{\max}} = \frac{2\pi c}{\omega} = \lambda.

A superlens overcomes the limit. A Pendry-type superlens has an index of n = −1 (ε = −1, μ = −1), and in such a material, transport of energy in the + z direction requires the z component of the wave vector to have the opposite sign:

k_z' = -\sqrt{\frac{\omega^2}{c^2} - k_x^2 - k_y^2}.
For large transverse wavenumbers (the high-angular-frequency components), the evanescent wave now grows , so with the proper lens thickness all components of the angular spectrum can be transmitted through the lens undistorted. There are no problems with conservation of energy , as evanescent waves carry none in the direction of growth: the Poynting vector is oriented perpendicularly to the direction of growth. For traveling waves inside a perfect lens, the Poynting vector points in the direction opposite to the phase velocity. [ 3 ]
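To make the decay-and-regrowth argument concrete, the sketch below propagates a single transverse Fourier component a distance d through vacuum and then through an idealized, lossless n = −1 slab of the same thickness, whose transfer factor is taken to be exp(−i k_z d) as in the lossless Pendry limit. The wavelength, thickness and k_x/k_0 values are illustrative assumptions, not taken from any of the cited experiments.

```python
import numpy as np

def kz(kx: float, k0: float) -> complex:
    """Longitudinal wavenumber k_z = sqrt(k0^2 - kx^2); purely imaginary for evanescent waves."""
    return np.sqrt(complex(k0 ** 2 - kx ** 2))

wavelength = 365e-9            # assumed illumination wavelength (m)
k0 = 2 * np.pi / wavelength
d = 40e-9                      # assumed propagation distance and slab thickness (m)

for kx_over_k0 in (0.5, 2.0, 4.0):            # below and above the free-space cutoff
    k_z = kz(kx_over_k0 * k0, k0)
    vacuum = np.exp(1j * k_z * d)             # |.| < 1 for evanescent components (decay)
    ideal_slab = np.exp(-1j * k_z * d)        # idealized lossless n = -1 slab (growth)
    print(f"kx/k0 = {kx_over_k0}: |vacuum| = {abs(vacuum):.3f}, "
          f"|slab| = {abs(ideal_slab):.3f}, product = {abs(vacuum * ideal_slab):.3f}")
```

The product of the two factors is 1 for every component, which is the sense in which the ideal slab "undoes" the evanescent decay; any loss or deviation from ε = μ = −1 breaks this exact cancellation, as discussed below.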
Normally, when a wave passes through the interface of two materials, the wave appears on the opposite side of the normal . However, if the interface is between a material with a positive index of refraction and another material with a negative index of refraction, the wave will appear on the same side of the normal. Pendry's idea of a perfect lens is a flat material where n =−1. Such a lens allows near-field rays, which normally decay due to the diffraction limit, to focus once within the lens and once outside the lens, allowing subwavelength imaging. [ 25 ]
Superlens construction was at one time thought to be impossible. In 2000, Pendry claimed that a simple slab of left-handed material would do the job. [ 26 ] The experimental realization of such a lens took, however, some more time, because it is not that easy to fabricate metamaterials with both negative permittivity and permeability . Indeed, no such material exists naturally and construction of the required metamaterials is non-trivial. Furthermore, it was shown that the parameters of the material are extremely sensitive (the index must equal −1); small deviations make the subwavelength resolution unobservable. [ 27 ] [ 28 ] Due to the resonant nature of metamaterials, on which many (proposed) implementations of superlenses depend, metamaterials are highly dispersive. The sensitive nature of the superlens to the material parameters causes superlenses based on metamaterials to have a limited usable frequency range. This initial theoretical superlens design consisted of a metamaterial that compensated for wave decay and reconstructs images in the near field. Both propagating and evanescent waves could contribute to the resolution of the image. [ 1 ] [ 23 ] [ 29 ]
Pendry also suggested that a lens having only one negative parameter would form an approximate superlens, provided that the distances involved are also very small and provided that the source polarization is appropriate. For visible light this is a useful substitute, since engineering metamaterials with a negative permeability at the frequency of visible light is difficult. Metals are then a good alternative as they have negative permittivity (but not negative permeability). Pendry suggested using silver due to its relatively low loss at the predicted wavelength of operation (356 nm). In 2003 Pendry's theory was first experimentally demonstrated [ 13 ] at RF/microwave frequencies. In 2005, two independent groups verified Pendry's lens at UV range, both using thin layers of silver illuminated with UV light to produce "photographs" of objects smaller than the wavelength. [ 30 ] [ 31 ] Negative refraction of visible light was experimentally verified in an yttrium orthovanadate (YVO 4 ) bicrystal in 2003. [ 32 ]
It was discovered that a simple superlens design for microwaves could use an array of parallel conducting wires. [ 33 ] This structure was shown to be able to improve the resolution of MRI imaging.
In 2004, the first superlens with a negative refractive index provided resolution three times better than the diffraction limit and was demonstrated at microwave frequencies. [ 34 ] In 2005, the first near field superlens was demonstrated by N.Fang et al. , but the lens did not rely on negative refraction . Instead, a thin silver film was used to enhance the evanescent modes through surface plasmon coupling. [ 35 ] [ 36 ] Almost at the same time Melville and Blaikie succeeded with a near field superlens. Other groups followed. [ 30 ] [ 37 ] Two developments in superlens research were reported in 2008. [ 38 ] In the second case, a metamaterial was formed from silver nanowires which were electrochemically deposited in porous aluminium oxide. The material exhibited negative refraction. [ 39 ] The imaging performance of such isotropic negative dielectric constant slab lenses were also analyzed with respect to the slab material and thickness. [ 40 ] Subwavelength imaging opportunities with planar uniaxial anisotropic lenses, where the dielectric tensor components are of the opposite sign, have also been studied as a function of the structure parameters. [ 41 ]
The superlens has not yet been demonstrated at visible or near- infrared frequencies (Nielsen, R. B.; 2010). Furthermore, as dispersive materials, these are limited to functioning at a single wavelength. Proposed solutions are metal–dielectric composites (MDCs) [ 42 ] and multilayer lens structures. [ 43 ] The multi-layer superlens appears to have better subwavelength resolution than the single layer superlens. Losses are less of a concern with the multi-layer system, but so far it appears to be impractical because of impedance mis-match. [ 35 ]
While the evolution of nanofabrication techniques continues to push the limits in fabrication of nanostructures, surface roughness remains an inevitable source of concern in the design of nano-photonic devices. The impact of this surface roughness on the effective dielectric constants and subwavelength image resolution of multilayer metal–insulator stack lenses has also been studied. [ 44 ]
When the world is observed through conventional lenses, the sharpness of the image is determined by and limited to the wavelength of light. Around the year 2000, a slab of negative index metamaterial was theorized to create a lens with capabilities beyond conventional ( positive index ) lenses. Pendry proposed that a thin slab of negative refractive metamaterial might overcome known problems with common lenses to achieve a "perfect" lens that would focus the entire spectrum, both the propagating as well as the evanescent spectra. [ 1 ] [ 45 ]
A slab of silver was proposed as the metamaterial. More specifically, such silver thin film can be regarded as a metasurface . As light moves away (propagates) from the source, it acquires an arbitrary phase . Through a conventional lens the phase remains consistent, but the evanescent waves decay exponentially . In the flat metamaterial DNG slab, normally decaying evanescent waves are contrarily amplified . Furthermore, as the evanescent waves are now amplified, the phase is reversed. [ 1 ]
Therefore, a type of lens was proposed, consisting of a metal film metamaterial. When illuminated near its plasma frequency , the lens could be used for superresolution imaging that compensates for wave decay and reconstructs images in the near-field. In addition, both propagating and evanescent waves contribute to the resolution of the image. [ 1 ]
Pendry suggested that left-handed slabs allow "perfect imaging" if they are completely lossless, impedance matched , and their refractive index is −1 relative to the surrounding medium. Theoretically, this would be a breakthrough in that the optical version resolves objects as minuscule as nanometers across. Pendry predicted that Double negative metamaterials (DNG) with a refractive index of n=−1 , can act, at least in principle, as a "perfect lens" allowing imaging resolution which is limited not by the wavelength, but rather by material quality. [ 1 ] [ 46 ] [ 47 ] [ 48 ]
Further research demonstrated that Pendry's theory behind the perfect lens was not exactly correct. The analysis of the focusing of the evanescent spectrum (equations 13–21 in reference [ 1 ] ) was flawed. In addition, this applies to only one (theoretical) instance, namely one particular medium that is lossless and nondispersive, with its constituent parameters defined as: [ 45 ]
However, the final intuitive result of this theory that both the propagating and evanescent waves are focused, resulting in a converging focal point within the slab and another convergence (focal point) beyond the slab turned out to be correct. [ 45 ]
If the DNG metamaterial medium has a large negative index or becomes lossy or dispersive , Pendry's perfect lens effect cannot be realized. As a result, the perfect lens effect does not exist in general. According to FDTD simulations at the time (2001), the DNG slab acts like a converter from a pulsed cylindrical wave to a pulsed beam. Furthermore, in reality (in practice), a DNG medium must be and is dispersive and lossy, which can have either desirable or undesirable effects, depending on the research or application. Consequently, Pendry's perfect lens effect is inaccessible with any metamaterial designed to be a DNG medium. [ 45 ]
Another analysis, in 2002, [ 24 ] of the perfect lens concept showed it to be in error while using the lossless, dispersionless DNG as the subject. This analysis mathematically demonstrated that subtleties of evanescent waves, restriction to a finite slab and absorption had led to inconsistencies and divergencies that contradict the basic mathematical properties of scattered wave fields. For example, this analysis stated that absorption , which is linked to dispersion , is always present in practice, and absorption tends to transform amplified waves into decaying ones inside this medium (DNG). [ 24 ]
A third analysis of Pendry's perfect lens concept, published in 2003, [ 49 ] used the recent demonstration of negative refraction at microwave frequencies [ 50 ] as confirming the viability of the fundamental concept of the perfect lens. In addition, this demonstration was thought to be experimental evidence that a planar DNG metamaterial would refocus the far field radiation of a point source. However, the perfect lens would require significantly different values for permittivity , permeability, and spatial periodicity than the demonstrated negative refractive sample. [ 49 ] [ 50 ]
This study agrees that any deviation from conditions where ε=μ=−1 results in the normal, conventional, imperfect image that degrades exponentially i.e., the diffraction limit. The perfect lens solution in the absence of losses is again, not practical, and can lead to paradoxical interpretations. [ 24 ]
It was determined that although resonant surface plasmons are undesirable for imaging, these turn out to be essential for recovery of decaying evanescent waves. This analysis discovered that metamaterial periodicity has a significant effect on the recovery of types of evanescent components. In addition, achieving subwavelength resolution is possible with current technologies. Negative refractive indices have been demonstrated in structured metamaterials. Such materials can be engineered to have tunable material parameters, and so achieve the optimal conditions. Losses up to microwave frequencies can be minimized in structures utilizing superconducting elements. Furthermore, consideration of alternate structures may lead to configurations of left-handed materials that can achieve subwavelength focusing. Such structures were being studied at the time. [ 24 ]
An effective approach for the compensation of losses in metamaterials, called plasmon injection scheme, has been recently proposed. [ 51 ] The plasmon injection scheme has been applied theoretically to imperfect negative index flat lenses with reasonable material losses and in the presence of noise [ 52 ] [ 53 ] as well as hyperlenses. [ 54 ] It has been shown that even imperfect negative index flat lenses assisted with plasmon injection scheme can enable subdiffraction imaging of objects which is otherwise not possible due to the losses and noise. Although plasmon injection scheme was originally conceptualized for plasmonic metamaterials, [ 51 ] the concept is general and applicable to all types of electromagnetic modes. The main idea of the scheme is the coherent superposition of the lossy modes in the metamaterial with an appropriately structured external auxiliary field. This auxiliary field accounts for the losses in the metamaterial, hence effectively reducing the losses experienced by the signal beam or object field in the case of a metamaterial lens. The plasmon injection scheme can be implemented either physically [ 53 ] or equivalently through a deconvolution post-processing method. [ 52 ] [ 54 ] However, the physical implementation has shown to be more effective than the deconvolution. Physical construction of convolution and selective amplification of the spatial frequencies within a narrow bandwidth are the keys to the physical implementation of the plasmon injection scheme. This loss compensation scheme is ideally suited especially for metamaterial lenses since it does not require gain medium, nonlinearity, or any interaction with phonons. Experimental demonstration of the plasmon injection scheme has not yet been shown partly because the theory is rather new.
Pendry's theoretical lens was designed to focus both propagating waves and the near-field evanescent waves. From the permittivity "ε" and magnetic permeability "μ" an index of refraction "n" is derived. The index of refraction determines how light is bent on traversing from one material to another. In 2003, it was suggested that a metamaterial constructed with alternating, parallel layers of n = −1 materials and n = +1 materials would be a more effective design for a metamaterial lens . It is an effective medium made up of a multi-layer stack, which exhibits birefringence , n_z = ∞, n_x = 0. The effective refractive indices are then perpendicular and parallel , respectively. [ 55 ]
Like a conventional lens, the z-direction is along the axis of the roll. The resonant frequency (w 0 ) – close to 21.3 MHz – is determined by the construction of the roll. Damping is achieved by the inherent resistance of the layers and the lossy part of permittivity. [ 55 ]
Simply put, as the field pattern is transferred from the input to the output face of a slab, so the image information is transported across each layer. This was experimentally demonstrated. To test the two-dimensional imaging performance of the material, an antenna was constructed from a pair of anti-parallel wires in the shape of the letter M. This generated a line of magnetic flux, so providing a characteristic field pattern for imaging. It was placed horizontally, and the material, consisting of 271 Swiss rolls tuned to 21.5 MHz, was positioned on top of it. The material does indeed act as an image transfer device for the magnetic field. The shape of the antenna is faithfully reproduced in the output plane, both in the distribution of the peak intensity, and in the "valleys" that bound the M. [ 55 ]
A consistent characteristic of the very near (evanescent) field is that the electric and magnetic fields are largely decoupled. This allows for nearly independent manipulation of the electric field with the permittivity and the magnetic field with the permeability. [ 55 ]
Furthermore, this is a highly anisotropic system . Therefore, the transverse (perpendicular) components of the EM field that radiate through the material, that is, the wavevector components k_x and k_y, are decoupled from the longitudinal component k_z. So the field pattern should be transferred from the input to the output face of a slab of material without degradation of the image information. [ 55 ]
In 2003, a group of researchers showed that optical evanescent waves would be enhanced as they passed through a silver metamaterial lens. This was referred to as a diffraction-free lens. Although a coherent , high-resolution, image was not intended, nor achieved, regeneration of the evanescent field was experimentally demonstrated. [ 56 ] [ 57 ]
By 2003, it had been known for decades that evanescent waves could be enhanced by producing excited states at the interface surfaces. However, the use of surface plasmons to reconstruct evanescent components was not tried until Pendry's recent proposal (see " Perfect lens " above). By studying films of varying thickness it has been noted that a rapidly growing transmission coefficient occurs under the appropriate conditions. This demonstration provided direct evidence that the foundation of superlensing is solid, and suggested the path that would enable the observation of superlensing at optical wavelengths. [ 57 ]
In 2005, a coherent, high-resolution image was produced (based on the 2003 results). A thinner slab of silver (35 nm) was better for sub-diffraction-limited imaging, resulting in resolution at one-sixth of the illumination wavelength. This type of lens was used to compensate for wave decay and to reconstruct images in the near field. Prior attempts to create a working superlens had used a slab of silver that was too thick. [ 23 ] [ 46 ]
Objects were imaged as small as 40 nm across. In 2005 the imaging resolution limit for optical microscopes was at about one tenth the diameter of a red blood cell . With the silver superlens this results in a resolution of one hundredth of the diameter of a red blood cell. [ 56 ]
Conventional lenses, whether man-made or natural, create images by capturing the propagating light waves all objects emit and then bending them. The angle of the bend is determined by the index of refraction and has always been positive until the fabrication of artificial negative index materials. Objects also emit evanescent waves that carry details of the object, but are unobtainable with conventional optics. Such evanescent waves decay exponentially and thus never become part of the image resolution, an optics threshold known as the diffraction limit. Breaking this diffraction limit, and capturing evanescent waves are critical to the creation of a 100-percent perfect representation of an object. [ 23 ]
In addition, conventional optical materials suffer a diffraction limit because only the propagating components are transmitted (by the optical material) from a light source. [ 23 ] The non-propagating components, the evanescent waves, are not transmitted. [ 24 ] Moreover, lenses that improve image resolution by increasing the index of refraction are limited by the availability of high-index materials, and point by point subwavelength imaging of electron microscopy also has limitations when compared to the potential of a working superlens. Scanning electron and atomic force microscopes are now used to capture detail down to a few nanometers. However, such microscopes create images by scanning objects point by point, which means they are typically limited to non-living samples, and image capture times can take up to several minutes. [ 23 ]
With current optical microscopes, scientists can only make out relatively large structures within a cell, such as its nucleus and mitochondria. With a superlens, optical microscopes could one day reveal the movements of individual proteins traveling along the microtubules that make up a cell's skeleton, the researchers said. Optical microscopes can capture an entire frame with a single snapshot in a fraction of a second. With superlenses this opens up nanoscale imaging to living materials, which can help biologists better understand cell structure and function in real time. [ 23 ]
Advances of magnetic coupling in the THz and infrared regime provided the realization of a possible metamaterial superlens. However, in the near field, the electric and magnetic responses of materials are decoupled. Therefore, for transverse magnetic (TM) waves, only the permittivity needed to be considered. Noble metals, then become natural selections for superlensing because negative permittivity is easily achieved. [ 23 ]
By designing the thin metal slab so that the surface current oscillations (the surface plasmons) match the evanescent waves from the object, the superlens is able to substantially enhance the amplitude of the field. Superlensing results from the enhancement of evanescent waves by surface plasmons. [ 23 ] [ 56 ]
The key to the superlens is its ability to significantly enhance and recover the evanescent waves that carry information at very small scales. This enables imaging well below the diffraction limit. No lens is yet able to completely reconstitute all the evanescent waves emitted by an object, so the goal of a 100-percent perfect image will persist. However, many scientists believe that a true perfect lens is not possible because there will always be some energy absorption loss as the waves pass through any known material. In comparison, the superlens image is substantially better than the one created without the silver superlens. [ 23 ]
In February 2004, an electromagnetic radiation focusing system, based on a negative index metamaterial plate, accomplished subwavelength imaging in the microwave domain. This showed that obtaining separated images at much less than the wavelength of light is possible. [ 58 ] Also, in 2004, a silver layer was used for sub- micrometre near-field imaging. Super high resolution was not achieved, but this was intended. The silver layer was too thick to allow significant enhancements of evanescent field components. [ 30 ]
In early 2005, feature resolution was achieved with a different silver layer. Though this was not an actual image, it was intended. Dense feature resolution down to 250 nm was produced in a 50 nm thick photoresist using illumination from a mercury lamp . Using simulations ( FDTD ), the study noted that resolution improvements could be expected for imaging through silver lenses, rather than another method of near field imaging. [ 59 ]
Building on this prior research, super resolution was achieved at optical frequencies using a 50 nm flat silver layer. The capability of resolving an image beyond the diffraction limit, for far-field imaging , is defined here as superresolution. [ 30 ]
The image fidelity is much improved over earlier results of the previous experimental lens stack. Imaging of sub-micrometre features has been greatly improved by using thinner silver and spacer layers, and by reducing the surface roughness of the lens stack. The ability of the silver lenses to image the gratings has been used as the ultimate resolution test, as there is a concrete limit for the ability of a conventional (far field) lens to image a periodic object – in this case the image is a diffraction grating. For normal-incidence illumination the minimum spatial period that can be resolved with wavelength λ through a medium with refractive index n is λ/n. Zero contrast would therefore be expected in any (conventional) far-field image below this limit, no matter how good the imaging resist might be. [ 30 ]
For this (super)lens stack, the computed diffraction-limited resolution is 243 nm. Gratings with periods from 500 nm down to 170 nm were imaged, with the depth of the modulation in the resist decreasing as the grating period decreases. All of the gratings with periods above the diffraction limit (243 nm) are well resolved. [ 30 ] The key results of this experiment are the super-imaging of the sub-diffraction-limit 200 nm and 170 nm periods. In both cases the gratings are resolved, even though the contrast is diminished, giving experimental confirmation of Pendry's superlensing proposal. [ 30 ]
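The quoted 243 nm figure is consistent with the λ/n rule from the previous paragraph under plausible assumptions, for example mercury i-line illumination at 365 nm and a resist index of about 1.5; both numbers are assumptions made here for illustration rather than values stated in the text.

```python
# Quick check of the conventional far-field limit quoted above.
# Assumed values: 365 nm (mercury i-line) illumination, resist index ~1.5.
wavelength_nm = 365.0
n_resist = 1.5
min_period = wavelength_nm / n_resist
print(f"Conventional minimum resolvable period: {min_period:.0f} nm")   # ~243 nm

for period_nm in (500, 350, 250, 200, 170):
    regime = "resolved conventionally" if period_nm >= min_period else "requires superlensing"
    print(f"{period_nm} nm grating: {regime}")
```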
Gradient Index (GRIN) – The larger range of material response available in metamaterials should lead to improved GRIN lens design. In particular, since the permittivity and permeability of a metamaterial can be adjusted independently, metamaterial GRIN lenses can presumably be better matched to free space. The GRIN lens is constructed by using a slab of NIM with a variable index of refraction in the y direction, perpendicular to the direction of propagation z. [ 60 ]
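As a concrete picture of how a transverse index gradient focuses light, the sketch below traces paraxial rays through a slab whose index varies parabolically across y. The parabolic profile, gradient strength and slab length are illustrative assumptions, not the design parameters of the metamaterial GRIN lens described in the cited work.

```python
# Minimal sketch of paraxial ray focusing in a GRIN slab with a parabolic index
# profile n(y) = n0 * (1 - 0.5 * g**2 * y**2).  Profile, gradient strength and
# slab length are illustrative assumptions, not the cited design.
import numpy as np

n0, g = 1.5, 2.0e4                  # g in 1/m; ray pitch 2*pi/g ~ 0.31 mm

def trace_ray(y0, slope0, z_end, dz=1e-7):
    """Integrate the paraxial ray equation d2y/dz2 ~ (1/n) dn/dy ~ -g**2 * y."""
    y, s = y0, slope0
    for _ in range(int(z_end / dz)):
        s += -g**2 * y * dz         # update slope from the index gradient
        y += s * dz                 # advance the ray height
    return y

# Parallel rays entering at different heights all cross the axis after a quarter
# pitch, the GRIN analogue of a focal plane.
quarter_pitch = (np.pi / 2) / g
for y0 in (10e-6, 20e-6, 40e-6):
    y_exit = trace_ray(y0, 0.0, quarter_pitch)
    print(f"entry {y0 * 1e6:4.0f} um -> exit {y_exit * 1e6:7.3f} um")   # all close to 0
```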
In 2005, a group proposed a theoretical way to overcome the near-field limitation using a new device termed a far-field superlens (FSL), which is a properly designed periodically corrugated metallic slab-based superlens. [ 61 ]
Imaging was experimentally demonstrated in the far field, taking the next step after near-field experiments. The key element is termed as a far-field superlens (FSL) which consists of a conventional superlens and a nanoscale coupler. [ 62 ]
An approach is presented for subwavelength focusing of microwaves using both a time-reversal mirror placed in the far field and a random distribution of scatterers placed in the near field of the focusing point. [ 63 ]
Once capability for near-field imaging was demonstrated, the next step was to project a near-field image into the far-field. This concept, including technique and materials, is dubbed "hyperlens". [ 64 ] [ 65 ]
In May 2012, calculations showed an ultraviolet (1200–1400 THz) hyperlens can be created using alternating layers of boron nitride and graphene . [ 66 ]
In February 2018, a mid-infrared (~5–25 μm) hyperlens was introduced, made from a variably doped indium arsenide multilayer, which offered drastically lower losses. [ 67 ]
The capability of a metamaterial hyperlens for sub-diffraction-limited imaging is described below.
With conventional optical lenses, the far field is too distant for evanescent waves to arrive intact. When imaging an object, this limits the optical resolution of lenses to the order of the wavelength of light. The non-propagating evanescent waves carry the object's fine, high-spatial-frequency detail, which is exactly what conventional lenses cannot transmit. Projecting image details normally lost to diffraction into the far field therefore requires recovery of the evanescent waves. [ 68 ]
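The decay of these components can be made quantitative with the free-space dispersion relation: a transverse spatial frequency kx larger than k0 = 2π/λ has an imaginary out-of-plane wavevector and dies off exponentially with distance. The wavelength and feature sizes below are arbitrary illustrative choices.

```python
# Minimal sketch: why fine detail is lost in the far field.  A feature of size a
# corresponds roughly to a transverse spatial frequency kx ~ pi/a; once kx exceeds
# k0 = 2*pi/lambda the out-of-plane wavevector is imaginary and the wave decays.
import numpy as np

wavelength = 500e-9
k0 = 2 * np.pi / wavelength
for feature in (500e-9, 300e-9, 200e-9, 100e-9, 50e-9):
    kx = np.pi / feature
    kz = np.sqrt(complex(k0**2 - kx**2))
    if kz.imag == 0:
        print(f"{feature * 1e9:5.0f} nm detail: propagates to the far field")
    else:
        decay_length = 1 / kz.imag       # 1/e decay distance of the evanescent wave
        print(f"{feature * 1e9:5.0f} nm detail: evanescent, 1/e length "
              f"{decay_length * 1e9:.0f} nm")
```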
In essence, the step leading up to this investigation and demonstration was the employment of an anisotropic metamaterial with a hyperbolic dispersion. The effect was such that ordinarily evanescent waves propagate along the radial direction of the layered metamaterial. On a microscopic level, the large spatial-frequency waves propagate through coupled surface plasmon excitations between the metallic layers. [ 68 ]
In 2007, just such an anisotropic metamaterial was employed as a magnifying optical hyperlens. The hyperlens consisted of a curved periodic stack of thin silver and alumina layers (each 35 nanometers thick) deposited on a half-cylindrical cavity fabricated on a quartz substrate. The radial and tangential permittivities have different signs. [ 68 ]
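The opposite signs can be estimated with standard effective-medium formulas for a deeply subwavelength metal/dielectric multilayer. The permittivities and the equal filling fraction below are illustrative, silver- and alumina-like values, not the figures reported for the fabricated device.

```python
# Minimal sketch: effective-medium permittivities of an equal-thickness
# metal/dielectric multilayer.  The silver-like and alumina-like values are
# illustrative assumptions, not taken from the hyperlens paper.
eps_metal = -2.4 + 0.25j      # silver-like at a near-UV wavelength
eps_diel = 3.1                # alumina-like
f = 0.5                       # metal filling fraction (equal layer thicknesses)

# Permittivity parallel to the layers (tangential) and perpendicular to them
# (radial, i.e. along the stacking direction of the concentric shells).
eps_tangential = f * eps_metal + (1 - f) * eps_diel
eps_radial = eps_metal * eps_diel / (f * eps_diel + (1 - f) * eps_metal)

print("tangential:", eps_tangential)   # real part > 0
print("radial:    ", eps_radial)       # real part < 0
# Opposite signs of the real parts along the two directions give the hyperbolic
# dispersion that lets large-wavevector (evanescent) components propagate radially.
```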
Upon illumination, the scattered evanescent field from the object enters the anisotropic medium and propagates along the radial direction. Combined with another effect of the metamaterial, a magnified image forms at the outer boundary of the hyperlens. Once the magnified feature is larger than the diffraction limit, it can be imaged with a conventional optical microscope, demonstrating magnification and projection of a sub-diffraction-limited image into the far field. [ 68 ]
The hyperlens magnifies the object by transforming the scattered evanescent waves into propagating waves in the anisotropic medium, projecting a high-resolution image into the far field. This type of metamaterials-based lens, paired with a conventional optical lens, is therefore able to reveal patterns too small to be discerned with an ordinary optical microscope. In one experiment, the lens was able to distinguish two 35-nanometer lines etched 150 nanometers apart. Without the metamaterial lens, the microscope showed only one thick line. [ 14 ]
In a control experiment, the line-pair object was imaged without the hyperlens. The line pair could not be resolved because the diffraction limit of the (optical) aperture was 260 nm. Because the hyperlens supports the propagation of a very broad spectrum of wave vectors, it can magnify arbitrary objects with sub-diffraction-limited resolution. [ 68 ]
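The role of magnification can be seen with simple arithmetic: a cylindrical hyperlens stretches tangential features by roughly the ratio of its outer to inner radius, so the 150 nm spacing only needs to be magnified past the 260 nm aperture limit. The radii used below are placeholder values for illustration, not the dimensions of the fabricated device.

```python
# Minimal sketch: a cylindrical hyperlens magnifies tangential features by roughly
# the ratio of its outer to inner radius.  The radii are illustrative placeholders.
inner_radius_nm = 240.0
outer_radius_nm = 840.0
magnification = outer_radius_nm / inner_radius_nm      # 3.5x here

object_spacing_nm = 150.0      # the line pair from the experiment
aperture_limit_nm = 260.0      # conventional diffraction limit quoted above

image_spacing_nm = object_spacing_nm * magnification
print(f"magnification ~ {magnification:.1f}x, image spacing ~ {image_spacing_nm:.0f} nm")
print("resolvable by conventional optics:", image_spacing_nm > aperture_limit_nm)  # True
```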
Although this work was limited to a cylindrical hyperlens, the next step is to design a spherical lens, which would exhibit three-dimensional capability. Near-field optical microscopy uses a tip to scan an object. In contrast, the optical hyperlens magnifies an image that is sub-diffraction-limited, and the magnified image is then projected into the far field. [ 14 ] [ 68 ]
The optical hyperlens shows a notable potential for applications, such as real-time biomolecular imaging and nanolithography. Such a lens could be used to watch cellular processes that have been impossible to see. Conversely, it could be used to project an image with extremely fine features onto a photoresist as a first step in photolithography, a process used to make computer chips. The hyperlens also has applications for DVD technology. [ 14 ] [ 68 ]
In 2010, a spherical hyperlens for two-dimensional imaging at visible frequencies was demonstrated experimentally. The spherical hyperlens was based on silver and titanium oxide in alternating layers and had strong anisotropic hyperbolic dispersion, allowing super-resolution in the visible spectrum. The resolution was 160 nm in the visible spectrum. It could enable biological imaging at the cellular and DNA level, with the strong benefit of magnifying sub-diffraction resolution into the far field. [ 69 ]
In 2007, researchers demonstrated super-imaging using materials that create a negative refractive index, with lensing achieved in the visible range. [ 46 ]
Continual improvements in optical microscopy are needed to keep up with the progress in nanotechnology and microbiology . Advancement in spatial resolution is key. Conventional optical microscopy is limited by a diffraction limit on the order of 200 nanometers for visible wavelengths. This means that viruses , proteins, DNA molecules and many other samples are hard to observe with a regular (optical) microscope. The lens previously demonstrated with negative refractive index material, a thin planar superlens, does not provide magnification beyond the diffraction limit of conventional microscopes. Therefore, features smaller than the conventional diffraction limit still cannot be imaged. [ 46 ]
Another approach to achieving super-resolution at visible wavelengths is the recently developed spherical hyperlens based on alternating silver and titanium oxide layers. It has strong anisotropic hyperbolic dispersion, allowing super-resolution by converting evanescent waves into propagating waves. This method is a non-fluorescence-based super-resolution technique, yielding real-time imaging without any reconstruction of images or information. [ 69 ]
By 2008 the diffraction limit had been surpassed, and lateral imaging resolutions of 20 to 50 nm had been achieved by several "super-resolution" far-field microscopy techniques, including stimulated emission depletion (STED) and its related RESOLFT (reversible saturable optically linear fluorescent transitions) microscopy; saturated structured illumination microscopy (SSIM); stochastic optical reconstruction microscopy (STORM); photoactivated localization microscopy (PALM); and other methods using similar principles. [ 70 ]
This began with a proposal by Pendry in 2003. Magnifying the image required a new design concept in which the surface of the negatively refracting lens is curved. One cylinder touches another cylinder, resulting in a curved cylindrical lens that reproduces the contents of the smaller cylinder in magnified but undistorted form outside the larger cylinder. Coordinate transformations are required to curve the original perfect lens into the cylindrical lens structure. [ 71 ]
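One way to picture such a transformation, assumed here purely for illustration rather than taken from the cited proposal, is the conformal map w = ln z, which turns concentric circles into parallel lines. A flat slab lens in the transformed space then corresponds to a cylindrical annulus in real space, and angular features are magnified by the ratio of the radii.

```python
# Minimal sketch of a magnifying coordinate transformation: the conformal map
# w = ln(z) turns concentric circles (radius r) into parallel lines (Re w = ln r).
# A flat slab lens in w-space therefore corresponds to a cylindrical annulus in
# real space.  The map and the radii below are illustrative assumptions.
import numpy as np

def to_slab_space(z):
    """Map a point of real (cylindrical-lens) space to flat-slab space."""
    return np.log(z)

r_in, r_out = 1.0, 3.0                   # inner/outer radii, arbitrary units
theta = np.linspace(0.0, np.pi / 6, 5)   # a small angular feature on the inner circle

inner_pts = r_in * np.exp(1j * theta)
outer_pts = r_out * np.exp(1j * theta)   # same angular positions on the outer circle

# In slab space both sets share the transverse coordinate Im(w) = theta, which is
# what the flat perfect lens transfers unchanged.
print(np.allclose(to_slab_space(inner_pts).imag, to_slab_space(outer_pts).imag))  # True

# Back in real space the arc length of the feature has grown by r_out / r_in:
arc_in = r_in * (theta[-1] - theta[0])
arc_out = r_out * (theta[-1] - theta[0])
print(f"magnification = {arc_out / arc_in:.1f}")   # 3.0
```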
This was followed in 2005 by a 36-page conceptual and mathematical proof that the cylindrical superlens works in the quasistatic regime . The debate over the perfect lens is discussed there first. [ 72 ]
In 2007, a superlens utilizing coordinate transformation was again the subject. However, in addition to image transfer, other useful operations were discussed: translation, rotation, mirroring and inversion, as well as the superlens effect. Furthermore, magnifying elements free from geometric aberrations on both the input and output sides are described, which use free-space sourcing (rather than a waveguide). These magnifying elements also operate in the near and far field, transferring the image from the near field to the far field. [ 73 ]
The cylindrical magnifying superlens was experimentally demonstrated in 2007 by two groups, Liu et al. [ 68 ] and Smolyaninov et al. [ 46 ] [ 74 ]
Work in 2007 demonstrated that a quasi-periodic array of nanoholes in a metal screen was able to focus the optical energy of a plane wave into subwavelength spots (hot spots). The spots lay a few tens of wavelengths away on the other side of the array, that is, on the side opposite the incident plane wave . The quasi-periodic array of nanoholes thus functioned as a light concentrator. [ 75 ]
In June 2008, this was followed by the demonstrated capability of an array of quasi-crystal nanoholes in a metal screen. Rather than merely concentrating hot spots, an image of the point source is formed a few tens of wavelengths from the array, on its far side (the image plane). This type of array also exhibited a one-to-one linear displacement from the location of the point source to its corresponding, parallel location on the image plane; in other words, from x to x + δx. For example, other point sources were similarly displaced from x′ to x′ + δx′, from x″ to x″ + δx″, and so on. Instead of functioning as a light concentrator, the array performs the function of conventional lens imaging with a one-to-one correspondence, albeit with a point source. [ 75 ]
However, more complicated structures can be resolved by treating them as constructions of multiple point sources. The fine details, and the brighter image, normally associated with the high numerical apertures of conventional lenses can be reliably produced. Notable applications for this technology arise when conventional optics is not suitable for the task at hand, for example in X-ray imaging or nano-optical circuits. [ 75 ]
In 2010, a nano-wire array prototype, described as a three-dimensional (3D) metamaterial nanolens and consisting of bulk nanowires deposited in a dielectric substrate, was fabricated and tested. [ 76 ] [ 77 ]
The metamaterial nanolens was constructed of millions of nanowires, each 20 nanometers in diameter, precisely aligned in a packaged configuration. The lens is able to depict a clear, high-resolution image of nano-sized objects because it uses both normally propagating EM radiation and evanescent waves to construct the image. Super-resolution imaging was demonstrated over a distance of 6 times the wavelength (λ), in the far field, with a resolution of at least λ/4. This is a significant improvement over previous research and demonstrations of other near-field and far-field imaging, including the nanohole arrays discussed above. [ 76 ] [ 77 ]
Between 2009 and 2012, the light transmission properties of holey metal films in the metamaterial limit, where the unit length of the periodic structures is much smaller than the operating wavelength, were analyzed theoretically. [ 78 ]
Theoretically it appears possible to transport a complex electromagnetic image through a tiny subwavelength hole with diameter considerably smaller than the diameter of the image, without losing the subwavelength details. [ 79 ]
When observing the complex processes in a living cell, significant processes (changes) or details are easy to overlook. This can more easily occur when watching changes that take a long time to unfold and require high-spatial-resolution imaging. However, recent research offers a solution for scrutinizing activities that occur over hours or even days inside cells, potentially solving many of the mysteries associated with molecular-scale events occurring in these tiny structures. [ 80 ]
A joint research team, working at the National Institute of Standards and Technology (NIST) and the National Institute of Allergy and Infectious Diseases (NIAID), has discovered a method of using nanoparticles to illuminate the cellular interior to reveal these slow processes. Nanoparticles, thousands of times smaller than a cell, have a variety of applications. One type of nanoparticle called a quantum dot glows when exposed to light. These semiconductor particles can be coated with organic materials, which are tailored to be attracted to specific proteins within the part of a cell a scientist wishes to examine. [ 80 ]
Notably, quantum dots last longer than many of the organic dyes and fluorescent proteins previously used to illuminate the interiors of cells. They also have the advantage of tracking changes in cellular processes, whereas most high-resolution techniques, such as electron microscopy, only provide images of cellular processes frozen at one moment. Using quantum dots, cellular processes involving the dynamic motions of proteins can be observed. [ 80 ]
The research focused primarily on characterizing quantum dot properties, contrasting them with other imaging techniques. In one example, quantum dots were designed to target a specific type of human red blood cell protein that forms part of a network structure in the cell's inner membrane. When these proteins cluster together in a healthy cell, the network provides mechanical flexibility to the cell so it can squeeze through narrow capillaries and other tight spaces. But when the cell gets infected with the malaria parasite, the structure of the network protein changes. [ 80 ]
Because the clustering mechanism is not well understood, it was decided to examine it with the quantum dots. If a technique could be developed to visualize the clustering, then the progress of a malaria infection, which has several distinct developmental stages, could be understood. [ 80 ]
Research efforts revealed that as the membrane proteins bunch up, the quantum dots attached to them are induced to cluster themselves and glow more brightly, permitting real time observation as the clustering of proteins progresses. More broadly, the research discovered that when quantum dots attach themselves to other nanomaterials, the dots' optical properties change in unique ways in each case. Furthermore, evidence was discovered that quantum dot optical properties are altered as the nanoscale environment changes, offering greater possibility of using quantum dots to sense the local biochemical environment inside cells. [ 80 ]
Some concerns remain over toxicity and other properties. However, the overall findings indicate that quantum dots could be a valuable tool to investigate dynamic cellular processes. [ 80 ]
The abstract from the related published research paper states (in part): Results are presented regarding the dynamic fluorescence properties of bioconjugated nanocrystals or quantum dots (QDs) in different chemical and physical environments. A variety of QD samples was prepared and compared: isolated individual QDs, QD aggregates, and QDs conjugated to other nanoscale materials...
This article incorporates public domain material from the National Institute of Standards and Technology | https://en.wikipedia.org/wiki/Hyperlens |