In mathematics, a Hodge–Tate module is an analogue of a Hodge structure over p-adic fields. Serre (1967) introduced and named Hodge–Tate structures using the results of Tate (1967) on p-divisible groups.
Suppose that G is the absolute Galois group of a p-adic field K. Then G has a canonical cyclotomic character χ given by its action on the p-th power roots of unity. Let C be the completion of the algebraic closure of K. Then a finite-dimensional vector space over C with a semi-linear action of the Galois group G is said to be of Hodge–Tate type if it is generated by eigenvectors on which G acts through integral powers of χ.
| https://en.wikipedia.org/wiki/Hodge–Tate_module |
A hodograph is a diagram that gives a vectorial visual representation of the movement of a body or a fluid . It is the locus of one end of a variable vector, with the other end fixed. [ 1 ] The position of any plotted data on such a diagram is proportional to the velocity of the moving particle. [ 2 ] It is also called a velocity diagram . It appears to have been used by James Bradley , but its practical development is mainly from Sir William Rowan Hamilton , who published an account of it in the Proceedings of the Royal Irish Academy in 1846. [ 2 ]
It is used in physics , astronomy , solid and fluid mechanics to plot deformation of material, motion of planets or any other data that involves the velocities of different parts of a body.
In meteorology, hodographs are used to plot winds from soundings of the Earth's atmosphere. It is a polar diagram where wind direction is indicated by the angle from the center axis and its strength by the distance from the center. In the figure to the right, the values of wind at four heights above ground are given at the bottom and plotted as the vectors $\vec{V}_0$ to $\vec{V}_4$; note that the directions are plotted as indicated in the upper right corner.
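As a rough illustration of how such a diagram can be constructed, the sketch below converts a hypothetical wind sounding (direction and speed at a few heights; all values are invented for this example and are not taken from the article) into u and v components and traces the hodograph through the tips of the wind vectors.

```python
import numpy as np
import matplotlib.pyplot as plt

# Hypothetical sounding: height (m), wind direction (deg, meteorological
# convention = direction the wind blows FROM), and speed (knots).
heights = [0, 500, 1000, 2000, 3000]
directions = [160, 180, 200, 230, 250]
speeds = [10, 18, 25, 30, 35]

# Convert to u (eastward) and v (northward) components.
dirs = np.radians(directions)
u = -np.array(speeds) * np.sin(dirs)
v = -np.array(speeds) * np.cos(dirs)

fig, ax = plt.subplots(figsize=(5, 5))
# Draw each wind vector from the origin ...
for ui, vi, z in zip(u, v, heights):
    ax.annotate("", xy=(ui, vi), xytext=(0, 0),
                arrowprops=dict(arrowstyle="->", color="gray"))
    ax.text(ui, vi, f"{z} m", fontsize=8)
# ... and the hodograph itself: the curve joining the vector tips.
ax.plot(u, v, "o-", color="tab:blue")
ax.set_xlabel("u (kt)")
ax.set_ylabel("v (kt)")
ax.set_aspect("equal")
ax.grid(True)
plt.show()
```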
With the hodograph and thermodynamic diagrams like the tephigram, meteorologists can calculate quantities such as vertical wind shear and temperature advection.
It is a method of presenting the velocity field of a point in planar motion. The velocity vector, drawn at scale, is shown perpendicular rather than tangent to the point path, usually oriented away from the center of curvature of the path. [ 3 ]
Hodograph transformation is a technique used to transform nonlinear partial differential equations into linear ones. It consists of interchanging the dependent and independent variables in the equation to achieve linearity. [ 4 ]
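As a sketch of how the interchange produces linearity (a standard textbook-style illustration, not taken from the article; the symbols a, b, c, d, u, v, x, y are chosen here for the example), consider one equation of a quasilinear first-order system whose coefficients depend only on the dependent variables:

```latex
\[
a(u,v)\,u_x + b(u,v)\,u_y + c(u,v)\,v_x + d(u,v)\,v_y = 0 .
\]
Regarding $x$ and $y$ as functions of $u$ and $v$ (possible wherever the Jacobian
$J = u_x v_y - u_y v_x$ is nonzero), the inverse-function relations
\[
u_x = J\,y_v , \qquad u_y = -J\,x_v , \qquad v_x = -J\,y_u , \qquad v_y = J\,x_u
\]
turn the equation, after dividing by $J$, into
\[
a(u,v)\,y_v - b(u,v)\,x_v - c(u,v)\,y_u + d(u,v)\,x_u = 0 ,
\]
which is linear in the new unknowns $x(u,v)$ and $y(u,v)$, since the coefficients
no longer depend on the unknown functions.
```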
The study of the hodograph, as a method of investigating the motion of a body, was introduced by Sir W. R. Hamilton. The hodograph may be defined as the path traced out by the extremity of a vector which continually represents, in direction and magnitude, the velocity of a moving body. In applying the method of the hodograph to a planet, the orbit of which is in one plane, we shall find it convenient to suppose the hodograph turned round its origin through a right angle, so that the vector of the hodograph is perpendicular instead of parallel to the velocity it represents. | https://en.wikipedia.org/wiki/Hodograph |
The Hoesch reaction or Houben–Hoesch reaction is an organic reaction in which a nitrile reacts with an arene compound to form an aryl ketone . The reaction is a type of Friedel–Crafts acylation with hydrogen chloride and a Lewis acid catalyst .
The synthesis of 2,4,6-trihydroxyacetophenone (THAP) from phloroglucinol is representative. [ 1 ] If two equivalents of the nitrile are added, 2,4-diacetylphloroglucinol is the product.
An imine can be isolated as an intermediate reaction product. The attacking electrophile is possibly [ 2 ] a species of the type R-C + =NHCl − . The arene must be electron-rich, i.e., of the phenol or aniline type. A related reaction is the Gattermann reaction, in which hydrocyanic acid rather than a nitrile is used.
The reaction is named after Kurt Hoesch [ 3 ] and Josef Houben, [ 4 ] who reported this new reaction type in 1915 and 1926, respectively.
The mechanism of the reaction involves two steps. The first step is a nucleophilic addition to the nitrile with the aid of a polarizing Lewis acid, forming an imine, which is later hydrolyzed during the aqueous workup to yield the final aryl ketone. | https://en.wikipedia.org/wiki/Hoesch_reaction |
Hoffman modulation contrast microscopy (HMC microscopy) is an optical microscopy technique for enhancing the contrast in unstained biological specimens. The technique was invented by Robert Hoffman in 1975. [ 2 ] Like differential interference contrast microscopy (DIC microscopy), contrast is increased by using components in the light path which convert phase gradients in the specimen into differences in light intensity that are rendered in an image that appears three-dimensional. The 3D appearance may be misleading, as a feature which appears to cast a shadow may not necessarily have a distinct physical geometry corresponding to the shadow. The technique is particularly suitable for optical sectioning at lower magnifications. [ 3 ] [ 4 ]
An example of the use of HMC illumination is in in-vitro fertilisation , where under brightfield illumination the near-transparent oocyte is hard to see clearly.
HMC systems typically consist of a condenser with a slit aperture, an objective with a slit aperture, and a polariser which is fitted between the condenser and the illumination source and is used to control the degree of contrast. The principle of HMC is used by a number of microscope manufacturers who have introduced their own variants of the technique. | https://en.wikipedia.org/wiki/Hoffman_modulation_contrast_microscopy |
Hoffman nucleation theory is a theory developed by John D. Hoffman and coworkers in the 1970s and 80s that attempts to describe the crystallization of a polymer in terms of the kinetics and thermodynamics of polymer surface nucleation . [ 1 ] The theory introduces a model where a surface of completely crystalline polymer is created and introduces surface energy parameters to describe the process. Hoffman nucleation theory is more of a starting point for polymer crystallization theory and is better known for its fundamental roles in the Hoffman–Weeks lamellar thickening and Lauritzen–Hoffman growth theory .
Polymers exhibit different morphologies at the molecular level, which give rise to their macroscopic properties. Long-range disorder in the polymer chain is characteristic of amorphous solids, and such chain segments are considered amorphous. Long-range order is characteristic of crystalline material, and such chain segments are considered crystalline.
The thermal characteristics of polymers are fundamentally different from those of most solid materials. Solid materials typically have one melting point , the T m , above which the material loses internal molecular ordering and becomes a liquid . Polymers have both a melting temperature T m and a glass transition temperature T g . Above the T m , the polymer chains lose their molecular ordering and exhibit reptation , or mobility. Below the T m , but still above the T g , the polymer chains lose some of their long-range mobility and can form either crystalline or amorphous regions. In this temperature range, as the temperature decreases, amorphous regions can transition into crystalline regions, causing the bulk material to become more crystalline over all. Below the T g , molecular motion is stopped and the polymer chains are essentially frozen in place. In this temperature range, amorphous regions can no longer transition into crystalline regions, and the polymer as a whole has reached its maximum crystallinity.
Hoffman nucleation theory addresses the amorphous-to-crystalline polymer transition, which can only occur in the temperature range between the T m and T g . The transition from an amorphous to a crystalline single polymer chain is related to the random thermal energy required to align and fold sections of the chain to form ordered regions termed lamellae , which are components of even larger structures called spherulites. The crystallization of polymers can be brought about by several different methods and is a complex topic in itself.
Nucleation is the formation and growth of a new phase with or without the presence of an external surface. The presence of such a surface results in heterogeneous nucleation, whereas in its absence homogeneous nucleation occurs. Heterogeneous nucleation occurs where pre-existing nuclei are present, such as tiny dust particles suspended in a liquid or gas, or a surface such as glass (SiO 2 ). For the process of Hoffman nucleation and its progression to Lauritzen–Hoffman growth theory, homogeneous nucleation is the main focus. Homogeneous nucleation occurs where no such contaminants are present and is less commonly seen. It begins with small clusters of molecules forming from one phase to the next. As the clusters grow, they aggregate through the condensation of other molecules. The size continues to increase and ultimately forms macroscopic droplets (or bubbles, depending on the system).
Nucleation is often described mathematically through the change in Gibbs free energy of n moles of vapor at vapor pressure P that condenses into a drop. In polymer crystallization, the nucleation barrier consists of both enthalpic and entropic components that must be overcome. This barrier involves selection processes taking place on different length and time scales, which relates to the multiple growth regimes discussed later. [ 2 ] The barrier is the free energy that must be overcome in order to form nuclei; it arises because forming a nucleus out of the bulk creates a new interface, with an associated interfacial free energy. The interfacial free energy is always a positive term and acts to destabilize small nuclei; once a nucleus exceeds a critical size, growth of the polymer crystal continues as a favorable process.
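As a hedged sketch of the classical picture referred to above (standard homogeneous nucleation theory rather than anything specific to Hoffman's treatment; the symbols below are chosen here for the illustration), the free-energy change for condensing n moles of vapor at pressure P into a spherical drop of radius r can be written as

```latex
\[
\Delta G \;=\; -\,nRT\,\ln\!\frac{P}{P_{\mathrm{eq}}} \;+\; 4\pi r^{2}\gamma ,
\]
where $P_{\mathrm{eq}}$ is the equilibrium vapor pressure, $R$ the gas constant,
$T$ the temperature, and $\gamma$ the interfacial free energy per unit area.
The first (bulk) term is negative for a supersaturated vapor ($P > P_{\mathrm{eq}}$)
and favors growth, while the positive interfacial term destabilizes small nuclei;
their competition produces a free-energy barrier at a critical nucleus size,
beyond which growth proceeds spontaneously.
```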
The Lauritzen–Hoffman plot (right) models the three different regimes when $\log G + U^{*}/[k(T - T_{0})]$ is plotted against $(T\,\Delta T)^{-1}$. [ 3 ] It can be used to describe the rate at which secondary nucleation competes with lateral addition at the growth front at different temperatures. This theory can be used to help understand the preferences of nucleation and growth based on the polymer's properties, including its standard melting temperature.
For many polymers, the initial lamellar thickness formed at T c is roughly the same as that present at T m , and melting can thus be modeled fairly well by the Gibbs–Thomson equation. However, since this implies that the lamellar thickness is unchanged over the given supercooling range (T m – T c ), whereas homogeneous nucleation in many polymers involves a change of thickness at the growth front, Hoffman and Weeks pursued a more accurate representation. [ 4 ] In this regard, the Hoffman–Weeks plot was created and can be modeled through the equation
$$T_{\text{m}} = \frac{T_{\text{c}}}{\beta} + \left(1 - \frac{1}{\beta}\right) T_{\text{m}}^{\circ}$$
where β represents a thickening factor given by L = L 0 β, and T c and T m are the crystallization and melting temperatures, respectively.
Applying this experimentally for a constant β allows the determination of the equilibrium melting temperature, T m °, at the intersection of the fitted line with T m = T c . [ 3 ]
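A minimal numerical sketch of this extrapolation, assuming a set of invented (T_c, T_m) data points and a constant thickening factor β (none of these values come from the article): the observed melting temperatures are fitted linearly against the crystallization temperatures, and the equilibrium melting temperature is read off where the fit crosses the line T_m = T_c.

```python
import numpy as np

# Hypothetical Hoffman-Weeks data: crystallization temperatures T_c and the
# melting temperatures T_m observed for samples crystallized at those T_c (K).
T_c = np.array([390.0, 395.0, 400.0, 405.0, 410.0])
T_m = np.array([408.0, 410.5, 413.0, 415.5, 418.0])

# Fit T_m = slope * T_c + intercept; per the Hoffman-Weeks equation,
# slope = 1/beta and intercept = (1 - 1/beta) * T_m_eq.
slope, intercept = np.polyfit(T_c, T_m, 1)

beta = 1.0 / slope
# The equilibrium melting temperature is where the fit meets T_m = T_c:
#   T = slope * T + intercept  =>  T = intercept / (1 - slope).
T_m_eq = intercept / (1.0 - slope)

print(f"thickening factor beta ~ {beta:.2f}")
print(f"equilibrium melting temperature T_m0 ~ {T_m_eq:.1f} K")
```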
The crystallization process of polymers does not always obey simple chemical rate equations . Polymers can crystallize through a variety of different regimes and unlike simple molecules, the polymer crystal lamellae have two very different surfaces. The two most prominent theories in polymer crystallization kinetics are the Avrami equation and Lauritzen–Hoffman growth theory. [ 5 ]
The Lauritzen–Hoffman growth theory breaks the kinetics of polymer crystallization into ultimately two rates. The model begins with the addition of monomers onto a growing surface; this initial step is generally associated with the nucleation of the polymer. From there, the kinetics are determined by the rate at which the polymer spreads laterally along the growth surface, compared with the rate at which new layers are nucleated on that surface (the secondary nucleation rate). These two rates can result in three situations. [ 6 ]
For Regime I, the lateral growth rate along the front, referred to as g , greatly exceeds the secondary nucleation rate, i , which is therefore the rate-determining step (RDS). In this instance of g >> i , monolayers are formed one at a time, so that if the substrate has a length of L p and thickness b , the overall linear growth can be described through the equation
$$G_{\text{I}} = b\,i\,L_{\text{p}}$$
and the rate of nucleation in specific can further be described by
$$G_{\text{I,n}} = e^{-K_{\text{g}}/(T\,\Delta T)}$$
with K g equal to
$$K_{\text{g}} = \frac{4\,b\,\sigma_{\text{l}}\,\sigma_{\text{f}}\,T_{\text{m}}^{0}}{k\,\Delta h}$$
where b is the thickness of a single stem (monolayer), σ l and σ f are the lateral and fold surface free energies, T m 0 is the equilibrium melting temperature, k is the Boltzmann constant, and Δh is the enthalpy of fusion per unit volume.
This shows that in Regime I, lateral growth along the front dominates at temperatures close to the melting temperature; however, at more extreme undercoolings other factors, such as diffusion, can impact nucleation rates.
In Regime II, the lateral growth rate is either comparable or smaller than the nucleation rate g ≤ i , which causes secondary (or more) layers to form before the initial layer has been covered. This allows the linear growth rate to be modeled by
$$G_{\text{II}} = b\sqrt{i\,g}$$
Using the assumption that g and i are independent of time, the rate at which new layers are formed can be approximated and the rate of nucleation in regime II can be expressed as
$$G_{\text{II,n}} = e^{-K_{\text{g}}'/(T\,\Delta T)}$$
with K g ' equal to about 1/2 of the K g from Regime I,
$$K_{\text{g}}' = \frac{2\,b\,\sigma_{\text{l}}\,\sigma_{\text{f}}\,T_{\text{m}}^{0}}{k\,\Delta h}$$
Lastly, Regime III in the L-H model depicts the scenario where lateral growth is inconsequential to the overall rate, since the nucleation of multiple sites causes i >> g . This means that the growth rate can be modeled by the same equation as Regime I,
$$G_{\text{III}} = b\,i\,L_{\text{p}} = G_{\text{III}}^{\circ}\, e^{-U^{*}/[k(T - T_{0})]\, -\, K_{\text{g}}/(T\,\Delta T)}$$
where G III ° is the prefactor for Regime III and can be experimentally determined through applying the Lauritzen–Hoffman Plot. [ 7 ]
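A hedged numerical sketch of how these expressions behave (all parameter values below are invented for illustration and are not taken from the article): the code evaluates the Lauritzen–Hoffman growth-rate form G = G_0 exp(−U*/[k(T − T_0)]) exp(−K_g/(T ΔT)) over a range of crystallization temperatures, using K_g for Regimes I and III and K_g/2 for Regime II.

```python
import numpy as np

k_B = 1.380649e-23          # Boltzmann constant, J/K

# Illustrative (made-up) parameters for a hypothetical polymer.
T_m0 = 450.0                # equilibrium melting temperature, K
T_0  = 320.0                # temperature at which chain transport stops, K
U    = 1.0e-20              # transport activation energy, J
K_g  = 2.5e5                # nucleation constant for Regimes I and III, K^2
G_0  = 1.0                  # pre-exponential factor (arbitrary units)

def growth_rate(T, K):
    """Lauritzen-Hoffman growth rate at crystallization temperature T."""
    dT = T_m0 - T                                 # undercooling
    transport = np.exp(-U / (k_B * (T - T_0)))    # chain mobility term
    nucleation = np.exp(-K / (T * dT))            # secondary nucleation term
    return G_0 * transport * nucleation

for Ti in np.linspace(360.0, 440.0, 9):
    print(f"T = {Ti:5.1f} K   G(I/III) = {growth_rate(Ti, K_g):.3e}   "
          f"G(II) = {growth_rate(Ti, K_g / 2):.3e}")
```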
A polymer's crystallization depends on the time it takes for layers of its chains to fold and orient themselves in the same direction. This time increases with the molecule's weight and degree of branching. [ 8 ] The table below shows that the growth rate is higher for Sclair 14B.1 than for Sclair 2907 (20%), where 2907 is less highly branched than 14B.1. [ 8 ] Here G c is the crystal growth rate, i.e., how quickly the polymer orders itself layer by layer, and t is the time it takes to order.
Many additional tests have since been run to apply and compare Hoffman's principles to reality. Among the experiments done, some of the more notable secondary nucleation tests are briefly explained in the table below. | https://en.wikipedia.org/wiki/Hoffman_nucleation_theory |
The Hoffmann kiln is a series of batch process kilns . Hoffmann kilns are the most common kiln used in production of bricks and some other ceramic products. Patented by German Friedrich Hoffmann for brickmaking in 1858, it was later used for lime -burning, and was known as the Hoffmann continuous kiln .
A Hoffmann kiln consists of a main fire passage surrounded on each side by several small rooms. Each room contains a pallet of bricks. In the main fire passage there is a fire wagon , that holds a fire that burns continuously. Each room is fired for a specific time, until the bricks are vitrified properly, and thereafter the fire wagon is rolled to the next room to be fired.
Each room is connected to the next room by a passageway carrying hot gases from the fire. In this way, the hottest gases are directed into the room that is currently being fired. Then the gases pass into the adjacent room that is scheduled to be fired next. There the gases preheat the brick. As the gases pass through the kiln circuit, they gradually cool as they transfer heat to the brick as it is preheated and dried. This is essentially a counter-current heat exchanger , which makes for a very efficient use of heat and fuel. This efficiency is a principal advantage of the Hoffmann kiln, and is one of the reasons for its original development and continued use throughout history. [ 1 ] In addition to the inner opening to the fire passage, each room also has an outside door, through which recently fired brick is removed, and replaced with wet brick to be dried and then fired in the next firing cycle.
In a classic Hoffmann kiln, the fire may burn continuously for years, even decades; in Iran, there are kilns that are still active and have been working continuously for 35 years. Any fuel may be used in a Hoffmann kiln, including gasoline , natural gas , heavy petroleum and wood fuel . The dimensions of a typical Hoffmann kiln vary widely, but are on average about 5 m (height) × 15 m (width) × 150 m (length).
The first kiln of this class was put into operation on November 22, 1859 [ 2 ] in Scholwin (since 1946, Skolwin), [ 3 ] near Stettin, which was then part of Prussia. In 1867 there were already 250 of them, most in the Prussian part of Germany, fifty in England and three in France. In Italy, their expansion began in 1870, after being shown at the Paris Exhibition. [ 4 ] In September 1870, the first brick factory according to Hoffmann's patent was inaugurated in Australia. [ 5 ] The first continuous Hoffmann system kilns installed in Spain would have been in 1880, near Madrid. [ 6 ] In 1900 there were already more than 4,000 kilns of this type, distributed throughout Europe, Russia, the Americas, Africa and even the East Indies. [ 7 ] In 1904, an oven according to the patent of the British William Sercombe and based on the Hoffmann model began operating in Palmerston North, New Zealand. [ 8 ] Hoffman kilns are still in use for brick production in some parts of the world, especially in places where labor costs are low and modern technology is not easily accessible.
The Hoffmann kiln is used in almost every country.
In the British Isles there are only a few Hoffmann kilns remaining, some of which have been preserved. [ 9 ]
The only ones with a chimney are at Prestongrange Industrial Heritage Museum and Llanymynech Heritage Area . The site at Llanymynech , close to Oswestry was used for lime-burning and has recently been partially restored as part of an industrial archaeology conservation project supported by English Heritage and the Heritage Lottery Fund . [ 9 ]
Two examples in North Yorkshire , the Hoffmann lime-burning kiln at Meal Bank Quarry, Ingleton , and that at the former Craven and Murgatroyd lime works, Langcliffe , are scheduled ancient monuments . [ 10 ] [ 11 ]
There is an intact but abandoned Hoffmann kiln without a chimney present at Minera Limeworks ; the site is abandoned but all entrances to the kiln have been grated-off, preventing access. The kiln is in a very poor state of repair, with trees growing out of the walls and the roof. Minera Quarry Trust hopes one day to develop the area into something of a tourist attraction. The Grade II listed Hoffmann brick kiln in Ilkeston , Derbyshire , is also badly neglected, although the recently installed fencing offers some protection for the building and for visitors. [ 12 ]
At Prestongrange Museum, outside Prestonpans in East Lothian , the Hoffman kiln is still standing and visitors can listen to more about it via a mobile phone tour. [ 13 ]
There is a nearly complete kiln in Horeb, Carmarthenshire . [ 13 ]
There is still a working kiln at Kings Dyke in Peterborough , which is the last site of the London Brick Company , owned by Forterra PLC . [ citation needed ]
In Victoria, Australia , at the Brunswick brickworks , there are two surviving kilns converted to residences, and a chimney from a third kiln; there is another in Box Hill , also in Melbourne . [ 14 ]
In Adelaide , South Australia , the last remaining Hoffman kiln in the state is at the old Hallett Brickworks site in Torrensville. [ 14 ] [ 15 ] [ 16 ]
There is one at St Peters in Sydney , New South Wales . [ 14 ]
In Western Australia , the kiln at the Maylands Brickworks in the Perth suburb of Maylands , which operated from 1927 to 1982 is the only remaining Hoffman kiln in the state. [ 17 ]
There is a complete kiln in the restored Tsalapatas brick Factory in Volos Greece that has been converted to an industrial museum. [ 19 ]
There are two in New Zealand. [ citation needed ]
Kaohsiung city in Taiwan is also home to a Hoffman kiln, built by the Japanese government in 1899. [ 20 ] [ circular reference ] | https://en.wikipedia.org/wiki/Hoffmann_kiln |
Hofmann elimination is an elimination reaction of an amine to form alkenes . The least stable alkene (the one with the fewest substituents on the carbons of the double bond), called the Hofmann product , is formed. This tendency, known as the Hofmann alkene synthesis rule , is in contrast to usual elimination reactions, where Zaitsev's rule predicts the formation of the most stable alkene. It is named after its discoverer, August Wilhelm von Hofmann . [ 1 ] [ 2 ]
The reaction starts with the formation of a quaternary ammonium iodide salt by treatment of the amine with excess methyl iodide ( exhaustive methylation ), followed by treatment with silver oxide and water to form a quaternary ammonium hydroxide. When this salt is decomposed by heat, the Hofmann product is preferentially formed due to the steric bulk of the leaving group causing the hydroxide to abstract the more easily accessible hydrogen.
In the Hofmann elimination, the least substituted alkene is typically favored due to intramolecular steric interactions. The quaternary ammonium group is large, and interactions with alkyl groups on the rest of the molecule are undesirable. As a result, the conformation necessary for the formation of the Zaitsev product is less energetically favorable than the conformation required for the formation of the Hofmann product, and the Hofmann product is formed preferentially. The Cope elimination is very similar to the Hofmann elimination in principle, but occurs under milder conditions. It also favors the formation of the Hofmann product, for the same reasons. [ 3 ]
An example of a Hofmann elimination (not involving a contrast between a Zaitsev product and a Hofmann product) is the synthesis of trans-cyclooctene . [ 4 ] The trans isomer is selectively trapped as a complex with silver nitrate (in this diagram the trans form looks like a cis form, but see the trans-cyclooctene article for better images):
In a related chemical test , known as the Herzig–Meyer alkimide group determination , a tertiary amine with at least one methyl group and lacking a beta-proton is allowed to react with hydrogen iodide to the quaternary ammonium salt which when heated degrades to methyl iodide and the secondary amine. [ 5 ] | https://en.wikipedia.org/wiki/Hofmann_elimination |
The Hofmann rearrangement ( Hofmann degradation ) is the organic reaction of a primary amide to a primary amine with one less carbon atom. [ 1 ] [ 2 ] [ 3 ] The reaction involves oxidation of the nitrogen followed by rearrangement of the carbonyl and nitrogen to give an isocyanate intermediate. The reaction can form a wide range of products, including alkyl and aryl amines.
The reaction is named after its discoverer, August Wilhelm von Hofmann , and should not be confused with the Hofmann elimination , another name reaction for which he is eponymous .
The reaction of bromine with sodium hydroxide forms sodium hypobromite in situ , which transforms the primary amide into an intermediate isocyanate. The formation of an intermediate nitrene is not possible because it implies also the formation of a hydroxamic acid as a byproduct, which has never been observed. The intermediate isocyanate is hydrolyzed to a primary amine, giving off carbon dioxide . [ 2 ]
Several reagents can be substituted for bromine. Sodium hypochlorite , [ 4 ] lead tetraacetate , [ 5 ] N -bromosuccinimide , and (bis(trifluoroacetoxy)iodo)benzene [ 6 ] have all been used for Hofmann rearrangements.
The intermediate isocyanate can be trapped with various nucleophiles to form stable carbamates or other products rather than undergoing decarboxylation. In the following example, the intermediate isocyanate is trapped by methanol . [ 7 ]
In a similar fashion, the intermediate isocyanate can be trapped by tert -butyl alcohol , yielding the tert -butoxycarbonyl (Boc)-protected amine.
The Hofmann rearrangement can also be used to yield carbamates from α,β-unsaturated or α-hydroxy amides [ 2 ] [ 8 ] or nitriles from α,β-acetylenic amides [ 2 ] [ 9 ] in good yields (≈70%). | https://en.wikipedia.org/wiki/Hofmann_rearrangement |
In organic chemistry , the Hofmann–Löffler reaction (also referred to as Hofmann–Löffler–Freytag reaction , Löffler–Freytag reaction , Löffler–Hofmann reaction , as well as Löffler's method ) is a cyclization reaction with remote C–H functionalization. [ 1 ] In the reaction, thermal or photochemical decomposition of N -halogenated amine 1 in the presence of a strong acid (concentrated sulfuric acid or concentrated CF 3 CO 2 H ) generates a nitrogen radical intermediate . The radical then abstracts an intramolecular hydrogen atom to give a cyclic amine 2 ( pyrrolidine or, in some cases, piperidine ).
In 1878, the structure of piperidine was still unknown, and A. W. Hofmann believed it unsaturated. [ 2 ] Following standard analytical technique, Hofmann added hydrogen chloride or bromine to it in an attempt to induce hydrohalogenation . Instead, he produced N -haloamines and N -haloamides, whose reactions under acidic and basic conditions he investigated. [ 3 ] [ 4 ]
Heating 1-bromo-2-propylpiperidine (3) in hot sulfuric acid , followed by basic work-up, gave a tertiary amine, [ 5 ] [ 6 ] later identified as δ-coniceine (4) . [ 7 ]
No further examples of the reaction were reported for about 25 years. But in 1909, K. Löffler and C. Freytag extended the transformation to simple secondary amines and applied the process in their elegant synthesis of nicotine (6) from N -bromo- N -methyl-4-(pyridin-3-yl)butan-1-amine (5) . [ 8 ] [ 9 ] [ 10 ]
The reaction mechanism only became clear around 1950, when S. Wawzonek investigated various N -haloamine cyclizations . [ 11 ] [ 12 ] [ 13 ] Noting that hydrogen peroxide or ultraviolet light greatly improved yields, Wawzonek and Thelan [ 11 ] suggested a free-radical mechanism. E. J. Corey et al. then examined several features of the reaction: stereochemistry, hydrogen isotope effect, initiation, inhibition, catalysis, intermediates and selectivity of hydrogen transfer. [ 14 ] The results, presented below, conclusively supported Wawzonek and Thelan's hypothesis.
According to Wawzonek and Thelan's 1949 proposal, [ 11 ] an acid first protonates an N -chloroamine, which, in the presence of heat, light, or other initiators, homolyzes to ammonium and chloride free radicals. The ammonium radical intramolecularly abstracts a sterically favored hydrogen atom to afford an alkyl radical which, in a chain reaction, abstracts chlorine from another N -chloroammonium ion to form an alkyl chloride and a new ammonium radical. The alkyl chloride later cyclizes during the basic work-up to the cyclic tertiary amine. [ 15 ]
Because the hydrogen abstraction is radical, any chiral configuration at the δ-carbon racemizes. [ 14 ]
The reaction also has a quite large hydrogen isotope effect : in the decomposition of 10 , the ratio of 1,2-dimethylpyrrolidine 11 and 1,2-dimethylpyrrolidine-2- d 12 (determined by combustion and IR spectra ) suggests k H ⁄ k D ≈ 3.42–3.54 .
Comparable reactions at a primary carbon also give k H ⁄ k D ≫1 , which strongly suggests that the breaking of the C-H bond proceeds to a rather considerable extent in the transition state . [ 14 ]
Molecular oxygen inhibits the reaction ( trapping the radicals), but Fe 2+ salts initiate it. [ 14 ]
Further investigations demonstrated that both the rate of the ultraviolet-catalyzed decomposition of dibutylchloroamine and the yield of newly formed pyrrolidine are strongly dependent on the acidity of the reaction medium – faster and higher-yielding reaction was observed with increasing sulfuric acid concentration. [ 14 ]
An important question in discussing the role of the acid is whether the N -haloamine reacts in the free base or the salt form in the initiation step. Based on the pK a values of the conjugate acids of 2° alkyl amines (which are generally in the range 10–11), it is evident that N -chloroamines exist largely as salts in a solution of high sulfuric acid concentration. As a result, in the case of chemical or thermal initiation, it is reasonable to assume that it is the N -chloroammonium ion which affords the ammonium free radical. The situation changes, however, when the reaction is initiated upon irradiation with UV light. The radiation must be absorbed and the quantum of the incident light must be large enough to dissociate the N-Cl bond in order for a photochemical reaction to occur. Because the conjugate acids of the N -chloroamines have no appreciable UV absorption above 225 nm, whereas the free N -chloroamine absorb UV light of sufficient energy to cause dissociation (λ max 263 nm, ε max 300), [ 16 ] E. J. Corey postulated that in this case it is actually the small percentage of free N -chloroamine that is responsible for most of the initiation. It was also suggested that the newly generated neutral nitrogen radical is immediately protonated. However, it is important to realize that an alternative scenario might be in operation when the reaction is initiated with the UV light; namely, the free N -haloamine might not undergo dissociation upon irradiation, but it might function as a photosensitizer instead.
While it was proposed that the higher acid concentration decreases the rate of the initiation step, the acid catalysis involves acceleration of the propagation steps and/or retardation of the chain termination. The influence of certain acidic solvents on the photolytic Hofmann–Löffler–Freytag reaction was also studied by Neale and co-workers. [ 17 ]
Isolation of 4-chlorodibutylamine from decomposition of dibutylchloroamine in H 2 SO 4 confirmed the intermediacy of δ–chloroamines. [ 13 ] When the acidic solution is made basic, the δ–chloroamine cyclizes to give a cyclic amine and a chloride ion.
In order to determine the structural and geometrical factors affecting the intramolecular hydrogen atom transfer, a number of different N -chloroamines were examined in the Hofmann–Löffler–Freytag reaction. The systems were judiciously chosen in order to obtain data on the following points: relative migration tendencies of primary (1°), secondary (2°) and tertiary (3°) hydrogens; relative rates of 1,5- and 1,6-hydrogen rearrangements; and facility of hydrogen rearrangements in cyclic systems of restricted geometry.
Investigation of the free-radical decomposition of N -chlorobutylamylamine 13 made it possible to compare 1° and 2° hydrogen migration. It was reported that only 1- n -butyl-2-methylpyrrolidine 14 was formed under the reaction conditions; no 1- n -amylpyrrolidine 15 was detected. This observation provided substantial evidence that the radical attack exhibits a strong preference for 2° over 1° hydrogen.
Tendency for 3° vs. 1° hydrogen migration was studied with n -butylisohexylamine 16. When 16 was subjected to the standard reaction conditions, rapid disappearance of 16 was observed, but no pyrrolidine product could be isolated. This result suggested that there is a high selectivity for the 3° hydrogen, but the intermediate tertiary chloro compound 17 is rapidly solvolyzed.
Similarly, no cyclic amine was observed with the reaction of n -amylisohexylamine, which demonstrates the selectivity for the 3° vs. 2° hydrogen migration.
A qualitative study of products from the Hofmann–Löffler–Freytag reaction of N -chloromethyl- n -hexylamine 18 was performed in order to evaluate the relative ease of 1,5- and 1,6-hydrogen migration. UV-catalyzed decomposition of 18 followed by basification produced a 9:1 mixture of 1-methyl-2-ethylpyrrolidine 19 and 1,2-dimethylpiperidine 20, which demonstrates that the extent of formation of six-membered rings can be appreciable.
In terms of the geometrical requirements in the intramolecular rearrangement of hydrogen, it was observed that under identical reaction conditions the UV light-catalyzed decomposition of methylcyclohexylchloroamine and N -chloroazacycloheptane proceeds far more slowly than that of dibutylchloroamine. These findings indicate that the prevailing geometries are in these two cases unfavourable for the rearrangement to occur and the Cδ–H–N bond angle required for the intramolecular hydrogen transfer cannot be easily attained.
It is generally accepted that the first step in the Hofmann–Löffler–Freytag reaction conducted in acidic medium is the protonation of the N -halogenated amine 21 to form the corresponding N -halogenated ammonium salt 22. In case of thermal or chemical initiation of the free radical chain reaction, the N -halogenated ammonium salt 22 undergoes homolytic cleavage of the nitrogen-halogen bond to generate the nitrogen-centered radical cation 23. In contrast, it has been argued that the UV light-catalyzed initiation involves the free form of the N -haloamine and a rapid protonation of the newly generated neutral nitrogen radical (see the section devoted to mechanistic studies for arguments supporting this statement).
Intramolecular 1,5-hydrogen atom transfer produces carbon-centered radical 24, which subsequently abstracts a halogen atom from the N -halogenated ammonium salt 22. This affords the protonated δ-halogenated amine 25 and regenerates the nitrogen-centered radical cation 23, the chain carrier of the reaction. Upon treatment with base, 25 undergoes deprotonation followed by an intramolecular S N 2 reaction to yield pyrrolidine 28 via intermediate 27.
The preferential abstraction of the δ–hydrogen atom corresponds to a six-membered transition state, which can adopt the unstrained cyclohexane chair-type conformation 29.
The Hofmann–Löffler–Freytag reaction is conceptually related to the well-known Barton reaction .
Because the original strongly acidic reaction conditions are often not compatible with the sensitive functional and protecting groups of complex substrates, several modifications of the Hofmann–Löffler–Freytag reaction were introduced:
Similarly, S. W. Baldwin and T. J. Doll examined a modification of the Hofmann–Löffler–Freytag reaction during their studies towards the synthesis of the alkaloid gelsemicine 41. The formation of the pyrrolidine ring of 40 was accomplished by irradiation of N -chloroamide 39. [ 19 ]
The great advantage of the Suárez modification is that the reaction can be performed under very mild neutral conditions compatible with the stability of the protective groups most frequently used in synthetic organic chemistry. Consequently, it permits the use of the Hofmann–Löffler–Freytag reaction with more sensitive molecules. Other notable features of this methodology are the following: (1) the unstable iodoamide intermediates are generated in situ; (2) the iodoamide homolysis proceeds thermally at low temperature (20–40 °C) or by irradiation with visible light, which obviates the need for a UV lamp. The Suárez modification has found numerous applications in synthesis (vide infra).
The most prevalent synthetic utility of the Hofmann–Löffler–Freytag reaction is the assembly of the pyrrolidine ring.
The procedure for the Hofmann–Löffler–Freytag reaction traditionally requires strongly acidic conditions, which limits its appeal. Nonetheless, it has been successfully applied to functionalization of a wide variety of structurally diverse molecules as exemplified below.
In 1980, J. P. Lavergne et al. [ 31 ] used this methodology to prepare L-proline 49.
P. E. Sonnet and J. E. Oliver [ 32 ] employed classic Hofmann–Löffler–Freytag reaction conditions in the synthesis of potential ant sex pheromone precursors (i.e. octahydroindolizine 51).
Another example of the construction of a bicyclic amine through the standard Hofmann–Löffler–Freytag methodology is the Waegell's synthesis [ 33 ] of azabicyclo[3.2.1]octane derivative 53.
The Hofmann–Löffler–Freytag reaction was employed to synthesize the bridged nitrogen structure of (±)-6,15,16-iminopodocarpane-8,11,13-triene 55, an intermediate useful for the preparation of the kobusine-type alkaloids, from a bicyclic chloroamine 54. [ 34 ] Irradiation of 54 with a 400 W high-pressure mercury lamp in trifluoroacetic acid under a nitrogen atmosphere at room temperature for 5 h afforded a moderate yield of the product.
Derivatives of adamantane have also been prepared using the Hofmann–Löffler–Freytag reaction. [ 35 ] When N -chloroamine 56 was treated with sulfuric acid and heat, 2-adamantanone was formed, but photolysis of 56 in the sulfuric acid–acetic acid mixture, using a low-pressure mercury lamp at 25 °C for 1 hour, gave a good yield (85%) of the desired product 57. The cyclization of 57 presented considerable difficulties, but it was finally achieved in 34% yield under forcing conditions (heating at 290 °C for 10 min).
Similarly, it has been demonstrated [ 36 ] that derivatives of diaza-2,6 adamantane such as 60 might be formed under standard Hofmann–Löffler–Freytag reaction conditions; however, the yields are only moderate.
R. P. Deshpande and U. R. Nayak [ 37 ] reported that the Hofmann–Löffler–Freytag reaction is applicable to the synthesis of pyrrolidines containing a longifolene nucleus, e.g. 62.
An outstanding application of the Hofmann–Löffler–Freytag reaction is found in the preparation of the steroidal alkaloid derivatives. J. Hora [ 38 ] and G. van de Woude [ 39 ] [ 40 ] [ 41 ] used this procedure in their syntheses of conessine derivatives shown below.
In case of 64 and 66, the five-membered nitrogen ring is formed by attack on the unactivated C-18 methyl group of the precursor (63 or 65, respectively) by a suitably placed nitrogen-centered radical at C-20. The ease of this reaction is due to the fact that in the rigid steroid framework the β-C-18 methyl group and the β-C-20 side chain carrying the nitrogen radical are suitably arranged in space in order to allow the 1,5-hydrogen abstraction to proceed via the six-membered transition state.
A number of examples of the Hofmann–Löffler–Freytag reaction under neutral conditions have been presented in the section devoted to modifications and improvements of the original reaction conditions. Hence, the main focus of this section are the applications of the Suárez modification of the Hofmann–Löffler–Freytag reaction.
The Suárez modification of the Hofmann–Löffler–Freytag reaction was the basis of the new synthetic method developed by H. Togo et al. [ 42 ] [ 43 ] The authors demonstrated that various N -alkylsaccharins ( N -alkyl-1,2-benzisothiazoline-3-one-1,1-dioxides) 77 are easily prepared in moderate to good yields by the reaction of N -alkyl( o -methyl)arenesulfonamides 70 with PhI(OAc) 2 in the presence of iodine under the irradiation of a tungsten lamp. 1,5-Hydrogen abstraction/iodination of the o -methyl group is repeated three times and is most likely followed by cyclization to the diiodo intermediate 76, which then undergoes hydrolysis.
A very interesting transformation is observed when sulfonamides of primary amides bearing an aromatic ring at the γ-position are treated with various iodanes and iodine under the irradiation with a tungsten lamp. [ 44 ] The reaction leads to 1,2,3,4-tetrahydroquinoline derivatives and is a good preparative method of six-membered cyclic aromatic amines. For instance, sulfonamide 78 undergoes an intramolecular radical cyclization to afford 79 in relatively good yield.
By the same procedure, 3,4-dihydro-2,1-benzothiazine-2,2-dioxides 81 are obtained from the N -alkyl 2-(aryl)ethanesulfonamides via the sulfonamidyl radical. [ 45 ]
E. Suárez et al. [ 46 ] reported that the amidyl radical intermediates, produced by photolysis of medium-sized lactams, e.g. 82 , in the presence of PhI(OAc) 2 and iodine, undergo transannular hydrogen abstraction to afford intramolecularly functionalized compounds such as oxoindolizidines 83.
E. Suárez and co-workers [ 27 ] also applied their methodology in the synthesis of chiral 8-oxa-6-azabicyclo[3.2.1]-octane 85 and 7-oxa-2-azabicyclo[2.2.1]heptane 87 ring systems. This reaction can be considered to be an intramolecular N -glycosidation that goes through an intramolecular 1,5-hydrogen abstraction promoted by an N -amido radical followed by oxidation of the transient C-radical intermediate to an oxycarbenium ion, which is subsequently trapped by an internal nucleophile.
The utility of the Suárez modification of the Hofmann–Löffler–Freytag reaction was demonstrated by its application in the synthesis of a number of steroid and triterpene compounds. [ 25 ] [ 26 ] [ 28 ] [ 29 ] [ 47 ] As illustrated below, the phosphoramidate-initiated functionalizations generally proceed in higher yields than the reactions involving N -nitro or N -cyanamides.
In 2008 P.S. Baran et al. [ 48 ] reported a new method for the synthesis of 1,3-diols using a variant of the Hofmann–Löffler–Freytag reaction.
In 2017, Nagib et al. [ 49 ] [ 50 ] reported a new method for the synthesis of 1,2-amino-alcohols using a variant of the Hofmann–Löffler–Freytag reaction to promote β selective C-H amination of alcohols. In 2020, an asymmetric variant was disclosed by the same team. [ 51 ] | https://en.wikipedia.org/wiki/Hofmann–Löffler_reaction |
The Hofmeister series or lyotropic series is a classification of ions in order of their lyotropic properties, that is, their ability to salt out or salt in proteins. [ 1 ] [ 2 ] The effects of these changes were first worked out by Franz Hofmeister , who studied the effects of cations and anions on the solubility of proteins . [ 3 ]
Highly charged ions interact strongly with water, breaking hydrogen bonds and inducing electrostatic structuring of nearby water, [ 4 ] and are thus called "structure-makers" or " kosmotropes ". [ 5 ] Conversely, weak ions can disrupt the structure of water, and are thus called "structure-breakers" or " chaotropes ". [ 5 ] The order of the tendency of ions to make or break water structure is the basis of the Hofmeister series.
Hofmeister discovered a series of salts that have consistent effects on the solubility of proteins and, as it was discovered later, on the stability of their secondary and tertiary structures . Anions appear to have a larger effect than cations , [ 6 ] and are usually ordered as follows: [ 5 ] [ 7 ] [ 8 ]
This is a partial list, as many more salts have been studied; the same applies to cations. The order of cations is usually given as: [ 5 ] [ 7 ] [ 9 ]
When oppositely charged kosmotropic cations and anions are in solution together, they are attracted to each other, rather than to water, and the same can be said for chaotropic cations and anions. [ 5 ] Thus, the preferential associations of oppositely charged ions can be ordered as:
Combining kosmotropic anions with kosmotropic cations reduces the kosmotropic effect of these ions because they are pairing to each other too strongly to be structuring water. [ 5 ] Kosmotropic anions do not readily pair with chaotropic cations. The combination of kosmotropic anions with chaotropic cations is the best ion combination to stabilize proteins. [ 4 ]
The mechanism of the Hofmeister series is not entirely clear, but does not seem to result from changes in general water structure, instead more specific interactions between ions and proteins and ions and the water molecules directly contacting the proteins may be more important. [ 10 ] Simulation studies have shown that the variation in solvation energy between the ions and the surrounding water molecules underlies the mechanism of the Hofmeister series. [ 11 ] [ 12 ] A quantum chemical investigation suggests an electrostatic origin to the Hofmeister series. [ 13 ] This work provides site-centred radial charge densities of the ions' interacting atoms (to approximate the electrostatic potential energy of interaction), and these appear to quantitatively correlate with many reported Hofmeister series for electrolyte properties, reaction rates and macromolecular stability (such as polymer solubility, and virus and enzyme activities).
Early members of the series increase solvent surface tension and decrease the solubility of nonpolar molecules (" salting out "); in effect, they strengthen the hydrophobic interaction . By contrast, later salts in the series increase the solubility of nonpolar molecules (" salting in ") and decrease the order in water; in effect, they weaken the hydrophobic effect . [ 14 ] [ 15 ]
The "salting out" effect is commonly exploited in protein purification through the use of ammonium sulfate precipitation . [ 16 ] However, these salts also interact directly with proteins (which are charged and have strong dipole moments) and may even bind specifically (e.g., phosphate and sulfate binding to ribonuclease A ). As a result, these interactions can lead to protein denaturation or the formation of artificial adducts . [ 16 ]
Ions that have a strong "salting in" effect such as I − and SCN − are strong denaturants, because they salt in the peptide group, and thus, interact much more strongly with the unfolded form of a protein than with its native form. Consequently, they shift the chemical equilibrium of the unfolding reaction towards unfolded protein. [ 17 ]
The denaturing of proteins by an aqueous solution containing many types of ions is more complicated as all the ions can act, according to their Hofmeister activity, i.e., a fractional number specifying the position of the ion in the series (given previously) in terms of its relative efficiency in denaturing a reference protein.
At high salt concentrations lysozyme protein aggregation obeys the Hofmeister series originally observed by Hofmeister in the 1870s, but at low salt concentrations electrostatic interactions rather than ion dispersion forces affect protein stability resulting in the series being reversed. [ 18 ] [ 5 ] However, at high concentrations of salt, the solubility of the proteins drop sharply and proteins can precipitate out, referred to as " salting out ". [ 19 ]
Ion binding to carboxylic surface groups of macromolecules can either follow the Hofmeister series or the reversed Hofmeister series depending on the pH . [ 20 ]
The concept of Hofmeister ionicity I h has been invoked by Dharma-wardana et al. [ 21 ] where it is proposed to define I h as a sum over all ionic species, of the product of the ionic concentration (mole fraction) and a fractional number specifying the "Hofmeister strength" of the ion in denaturing a given reference protein. The concept of ionicity (as a measure of the Hofmeister strength) used here has to be distinguished from ionic strength as used in electrochemistry, and also from its use in the theory of solid semiconductors. [ 22 ]
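In symbols, this definition can be sketched as follows (the notation x_i and h_i is chosen here for the illustration and is not from the cited paper):

```latex
\[
I_h \;=\; \sum_i x_i\, h_i ,
\]
where the sum runs over the ionic species in solution, $x_i$ is the mole
fraction of species $i$, and $h_i$ is its fractional "Hofmeister strength"
in denaturing the chosen reference protein.
```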
The relative stability of metal ion-protein binding , which affects the conformational stability of metal cofactor-containing proteins in denaturing solution , is reflected by the Irving–Williams series . [ 23 ] | https://en.wikipedia.org/wiki/Hofmeister_series |
In condensed matter physics , Hofstadter's butterfly is a graph of the spectral properties of non-interacting two-dimensional electrons in a perpendicular magnetic field in a lattice . The fractal, self-similar nature of the spectrum was discovered in the 1976 Ph.D. work of Douglas Hofstadter [ 1 ] and is one of the early examples of modern scientific data visualization. The name reflects the fact that, as Hofstadter wrote, "the large gaps [in the graph] form a very striking pattern somewhat resembling a butterfly." [ 1 ]
The Hofstadter butterfly plays an important role in the theory of the integer quantum Hall effect and the theory of topological quantum numbers .
The first mathematical description of electrons on a 2D lattice, acted on by a perpendicular homogeneous magnetic field, was studied by Rudolf Peierls and his student P. G. Harper in the 1950s. [ 2 ] [ 3 ]
Hofstadter first described the structure in 1976 in an article on the energy levels of Bloch electrons in perpendicular magnetic fields. [ 1 ] It gives a graphical representation of the spectrum of Harper's equation at different frequencies. One key aspect of the mathematical structure of this spectrum – the splitting of energy bands for a specific value of the magnetic field, along a single dimension (energy) – had been previously mentioned in passing by Soviet physicist Mark Azbel in 1964 [ 4 ] (in a paper cited by Hofstadter), but Hofstadter greatly expanded upon that work by plotting all values of the magnetic field against all energy values, creating the two-dimensional plot that first revealed the spectrum's uniquely recursive geometric properties. [ 1 ]
Written while Hofstadter was at the University of Oregon , his paper was influential in directing further research. It predicted on theoretical grounds that the allowed energy level values of an electron in a two-dimensional square lattice , as a function of a magnetic field applied perpendicularly to the system, formed what is now known as a fractal set . That is, the distribution of energy levels for small-scale changes in the applied magnetic field recursively repeats patterns seen in the large-scale structure. [ 1 ] "Gplot", as Hofstadter called the figure, was described as a recursive structure in his 1976 article in Physical Review B , [ 1 ] written before Benoit Mandelbrot 's newly coined word "fractal" was introduced in an English text. Hofstadter also discusses the figure in his 1979 book Gödel, Escher, Bach . The structure became generally known as "Hofstadter's butterfly".
David J. Thouless and his team discovered that the butterfly's wings are characterized by Chern integers , which provide a way to calculate the Hall conductance in Hofstadter's model. [ 5 ]
In 1997 the Hofstadter butterfly was reproduced in experiments with a microwave guide equipped with an array of scatterers. [ 6 ] The similarity between the mathematical description of the microwave guide with scatterers and Bloch's waves in the magnetic field allowed the reproduction of the Hofstadter butterfly for periodic sequences of the scatterers.
In 2001, Christian Albrecht, Klaus von Klitzing , and coworkers realized an experimental setup to test Thouless et al. 's predictions about Hofstadter's butterfly with a two-dimensional electron gas in a superlattice potential. [ 7 ] [ 2 ]
In 2013, three separate groups of researchers independently reported evidence of the Hofstadter butterfly spectrum in graphene devices fabricated on hexagonal boron nitride substrates. [ 8 ] [ 9 ] [ 10 ] In this instance the butterfly spectrum results from the interplay between the applied magnetic field and the large-scale moiré pattern that develops when the graphene lattice is oriented with near zero-angle mismatch to the boron nitride.
In September 2017, John Martinis's group at Google, in collaboration with the Angelakis group at CQT Singapore , published results from a simulation of 2D electrons in a perpendicular magnetic field using interacting photons in 9 superconducting qubits . The simulation recovered Hofstadter's butterfly, as expected. [ 11 ]
In 2021 the butterfly was observed in twisted bilayer graphene at the second magic angle. [ 12 ]
In his original paper, Hofstadter considers the following derivation: [ 1 ] a charged quantum particle in a two-dimensional square lattice, with a lattice spacing $a$, is described by a periodic Schrödinger equation, under a perpendicular static homogeneous magnetic field restricted to a single Bloch band. For a 2D square lattice, the tight-binding energy dispersion relation is
$$W(\mathbf{k}) = E_{0}\left(\cos(k_{x}a) + \cos(k_{y}a)\right),$$
where $W(\mathbf{k})$ is the energy function, $\mathbf{k} = (k_{x}, k_{y})$ is the crystal momentum , and $E_{0}$ is an empirical parameter. The magnetic field $\mathbf{B} = \nabla\times\mathbf{A}$, where $\mathbf{A}$ is the magnetic vector potential , can be taken into account by using the Peierls substitution , replacing the crystal momentum with the canonical momentum $\hbar\mathbf{k} \to \mathbf{p} - q\mathbf{A}$, where $\mathbf{p} = (p_{x}, p_{y})$ is the particle momentum operator and $q$ is the charge of the particle ($q = -e$ for the electron, where $e$ is the elementary charge ). For convenience we choose the gauge $\mathbf{A} = (0, Bx, 0)$.
Using the fact that $e^{ip_{j}a}$ is a translation operator, so that $e^{ip_{x}a}\psi(x, y) = \psi(x+a, y)$, where $j = x, y$ and $\psi(\mathbf{r}) = \psi(x, y)$ is the particle's two-dimensional wave function , one can use $W(\mathbf{p} - q\mathbf{A})$ as an effective Hamiltonian to obtain the following time-independent Schrödinger equation:
$$W(\mathbf{p} - q\mathbf{A})\,\psi(x, y) = E\,\psi(x, y).$$
Considering that the particle can only hop between points in the lattice, we write $x = na$, $y = ma$, where $n, m$ are integers. Hofstadter makes the following ansatz : $\psi(x, y) = g_{n}e^{i\nu m}$, where $\nu$ depends on the energy, in order to obtain Harper's equation (also known as the almost Mathieu operator for $\lambda = 1$):
$$g_{n+1} + g_{n-1} + 2\cos(2\pi n\alpha - \nu)\,g_{n} = \epsilon\,g_{n},$$
where $\epsilon = 2E/E_{0}$ and $\alpha = \phi(B)/\phi_{0}$; here $\phi(B) = Ba^{2}$ is proportional to the magnetic flux through a lattice cell and $\phi_{0} = 2\pi\hbar/q$ is the magnetic flux quantum . The flux ratio $\alpha$ can also be expressed in terms of the magnetic length $l_{\mathrm{m}} = \sqrt{\hbar/eB}$, such that $\alpha = (2\pi)^{-1}(a/l_{\mathrm{m}})^{2}$. [ 1 ]
Hofstadter's butterfly is the resulting plot of $\epsilon_{\alpha}$ as a function of the flux ratio $\alpha$, where $\epsilon_{\alpha}$ is the set of all possible $\epsilon$ that are solutions of Harper's equation.
Due to the properties of the cosine function, the pattern is periodic in $\alpha$ with period 1 (it repeats for each flux quantum per unit cell). The graph in the region of $\alpha$ between 0 and 1 has reflection symmetry about the lines $\alpha = \tfrac{1}{2}$ and $\epsilon = 0$. [ 1 ] Note that $\epsilon$ is necessarily bounded between −4 and 4. [ 1 ]
Harper's equation has the particular property that the solutions depend on the rationality of $\alpha$. By imposing periodicity over $n$, one can show that if $\alpha = P/Q$ (a rational number), where $P$ and $Q$ are coprime integers, there are exactly $Q$ energy bands. [ 1 ] For large $Q \gg P$, the energy bands converge to thin energy bands corresponding to the Landau levels .
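A minimal numerical sketch of this construction (an illustrative implementation, not Hofstadter's original program): for each rational flux ratio α = P/Q with P and Q coprime, Harper's equation with Bloch boundary conditions reduces to a Q × Q matrix whose eigenvalues trace out the Q bands; scanning many rationals and plotting all eigenvalues against α reproduces the butterfly.

```python
import numpy as np
import matplotlib.pyplot as plt
from math import gcd

def harper_eigenvalues(P, Q, nu, k):
    """Eigenvalues of the Q x Q Harper matrix for flux ratio alpha = P/Q,
    transverse phase nu and Bloch phase k along the chain."""
    alpha = P / Q
    n = np.arange(Q)
    H = np.diag(2.0 * np.cos(2.0 * np.pi * alpha * n - nu)).astype(complex)
    for j in range(Q - 1):          # nearest-neighbour hopping
        H[j, j + 1] = 1.0
        H[j + 1, j] = 1.0
    H[0, Q - 1] += np.exp(-1j * k)  # periodic (Bloch) boundary condition
    H[Q - 1, 0] += np.exp(1j * k)
    return np.linalg.eigvalsh(H)

alphas, energies = [], []
for Q in range(2, 40):
    for P in range(1, Q):
        if gcd(P, Q) != 1:
            continue
        for nu in (0.0, np.pi / 2):          # coarse sampling of the phases
            for k in (0.0, np.pi):
                eps = harper_eigenvalues(P, Q, nu, k)
                alphas.extend([P / Q] * Q)
                energies.extend(eps)

plt.scatter(alphas, energies, s=0.1, color="black")
plt.xlabel(r"flux ratio $\alpha = \phi/\phi_0$")
plt.ylabel(r"$\epsilon$")
plt.title("Hofstadter butterfly (sketch)")
plt.show()
```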
Gregory Wannier showed that by taking into account the density of states, one can obtain a Diophantine equation that describes the system, [ 13 ] as
$$\rho(\epsilon) = S + T\,\alpha,$$
where
$$\rho(\epsilon) = \frac{n}{n_{0}},$$
where $S$ and $T$ are integers, and $\rho(\epsilon)$ is the density of states at a given $\alpha$. Here $n$ counts the number of states up to the Fermi energy , and $n_{0}$ corresponds to the levels of the completely filled band (from $\epsilon = -4$ to $\epsilon = 4$). This equation characterizes all the solutions of Harper's equation. Most importantly, one can derive that when $\alpha$ is an irrational number , there are infinitely many solutions for $\epsilon_{\alpha}$.
The union of all ϵ α {\displaystyle \epsilon _{\alpha }} forms a self-similar fractal that is discontinuous between rational and irrational values of α {\displaystyle \alpha } . This discontinuity is nonphysical, and continuity is recovered for a finite uncertainty in B {\displaystyle B} [ 1 ] or for lattices of finite size. [ 14 ] The scale at which the butterfly can be resolved in a real experiment depends on the system's specific conditions. [ 2 ]
The phase diagram of electrons in a two-dimensional square lattice, as a function of a perpendicular magnetic field, chemical potential and temperature, has infinitely many phases. Thouless and coworkers showed that each phase is characterized by an integral Hall conductance, where all integer values are allowed. These integers are known as Chern numbers . [ 2 ] | https://en.wikipedia.org/wiki/Hofstadter's_butterfly |
Hua-Zhong "Hogan" Yu (于化忠) is presently a professor of materials and analytical chemistry [ 1 ] at Simon Fraser University in metro Vancouver , Canada, where he leads a research laboratory working on Surfaces and Materials for Sensing. [ 2 ] He is also an associate editor for Analyst , the journal for Analytical and Bioanalytical Sciences from the Royal Society of Chemistry in UK, and an adjunct professor in the College of Biomedical Engineering, Taiyuan University of Technology in Shanxi, China.
Born and raised in the Chinese countryside, Yu obtained his B.Sc. (Chemistry) from Shandong University in 1991 at the age of 20. He then received his joint M.Sc. from Shandong University and Dalian Institute of Chemical Physics (Chemical Physics) in 1994, and his Ph.D. from Peking University (Materials Chemistry, with Prof. Zhong-Fan Liu) in 1997. [ 1 ] He did his postdoctoral research with Nobel Laureate Ahmed Zewail and electrochemist Fred Anson [ 3 ] at the California Institute of Technology from 1997 to 1999.
After short stays at NRC and Acadia University , Yu was appointed to the Department of Chemistry at Simon Fraser University in 2001 as an assistant professor and promoted to a tenured full professor in 2009. He is now a principal investigator of the CFI-funded Centre for Nanomaterials and Microstructures ( 4D LABS ) and an associate member of the Department of Molecular Biology and Biochemistry, both at SFU. Yu has been pursuing cutting-edge research on solving fundamental problems that have direct impact on applied analytical science and technology. His innovation of adapting mobile electronics (office scanners, disc players, and now smartphones) for portable molecular analysis and his contribution to the de novo construction of ultrasensitive electronic biosensors for disease markers have led to the possibility of performing many quantitative chemical analyses on-site and biomedical diagnostic tests at point-of-care settings. He has published more than 160 peer-reviewed articles and holds or has filed 14 national/international patents. | https://en.wikipedia.org/wiki/Hogan_Yu
Density functional theory ( DFT ) is a computational quantum mechanical modelling method used in physics , chemistry and materials science to investigate the electronic structure (or nuclear structure ) (principally the ground state ) of many-body systems , in particular atoms, molecules, and the condensed phases . Using this theory, the properties of a many-electron system can be determined by using functionals - that is, functions that accept a function as input and output a single real number. [ 1 ] In the case of DFT, these are functionals of the spatially dependent electron density . DFT is among the most popular and versatile methods available in condensed-matter physics , computational physics , and computational chemistry .
DFT has been very popular for calculations in solid-state physics since the 1970s. However, DFT was not considered accurate enough for calculations in quantum chemistry until the 1990s, when the approximations used in the theory were greatly refined to better model the exchange and correlation interactions. Computational costs are relatively low when compared to traditional methods, such as exchange-only Hartree–Fock theory and its descendants that include electron correlation. Since then, DFT has become an important tool for methods of nuclear spectroscopy such as Mössbauer spectroscopy or perturbed angular correlation , in order to understand the origin of specific electric field gradients in crystals.
Despite recent improvements, there are still difficulties in using density functional theory to properly describe: intermolecular interactions (of critical importance to understanding chemical reactions), especially van der Waals forces (dispersion); charge transfer excitations; transition states , global potential energy surfaces , dopant interactions and some strongly correlated systems; and in calculations of the band gap and ferromagnetism in semiconductors . [ 2 ] The incomplete treatment of dispersion can adversely affect the accuracy of DFT (at least when used alone and uncorrected) in the treatment of systems which are dominated by dispersion (e.g. interacting noble gas atoms) [ 3 ] or where dispersion competes significantly with other effects (e.g. in biomolecules ). [ 4 ] The development of new DFT methods designed to overcome this problem, by alterations to the functional [ 5 ] or by the inclusion of additive terms, [ 6 ] [ 7 ] [ 8 ] [ 9 ] [ 10 ] is a current research topic. Classical density functional theory uses a similar formalism to calculate the properties of non-uniform classical fluids.
Despite the current popularity of these alterations or of the inclusion of additional terms, they are reported [ 11 ] to stray away from the search for the exact functional. Further, DFT potentials obtained with adjustable parameters are no longer true DFT potentials, [ 12 ] given that they are not functional derivatives of the exchange correlation energy with respect to the charge density. Consequently, it is not clear if the second theorem of DFT holds [ 12 ] [ 13 ] in such conditions.
In the context of computational materials science , ab initio (from first principles) DFT calculations allow the prediction and calculation of material behavior on the basis of quantum mechanical considerations, without requiring higher-order parameters such as fundamental material properties. In contemporary DFT techniques the electronic structure is evaluated using a potential acting on the system's electrons. This DFT potential is constructed as the sum of external potentials V ext , which is determined solely by the structure and the elemental composition of the system, and an effective potential V eff , which represents interelectronic interactions. Thus, a problem for a representative supercell of a material with n electrons can be studied as a set of n one-electron Schrödinger-like equations , which are also known as Kohn–Sham equations . [ 14 ]
Although density functional theory has its roots in the Thomas–Fermi model for the electronic structure of materials, DFT was first put on a firm theoretical footing by Walter Kohn and Pierre Hohenberg in the framework of the two Hohenberg–Kohn theorems (HK). [ 15 ] The original HK theorems held only for non-degenerate ground states in the absence of a magnetic field , although they have since been generalized to encompass these. [ 16 ] [ 17 ]
The first HK theorem demonstrates that the ground-state properties of a many-electron system are uniquely determined by an electron density that depends on only three spatial coordinates. It laid the groundwork for reducing the many-body problem of N electrons with 3 N spatial coordinates to three spatial coordinates, through the use of functionals of the electron density. This theorem has since been extended to the time-dependent domain to develop time-dependent density functional theory (TDDFT), which can be used to describe excited states.
The second HK theorem defines an energy functional for the system and proves that the ground-state electron density minimizes this energy functional.
In work for which Kohn later shared the Nobel prize in chemistry , the HK theorem was further developed by Walter Kohn and Lu Jeu Sham to produce Kohn–Sham DFT (KS DFT). Within this framework, the intractable many-body problem of interacting electrons in a static external potential is reduced to a tractable problem of noninteracting electrons moving in an effective potential . The effective potential includes the external potential and the effects of the Coulomb interactions between the electrons, e.g., the exchange and correlation interactions. Modeling the latter two interactions becomes the difficulty within KS DFT. The simplest approximation is the local-density approximation (LDA), which is based upon exact exchange energy for a uniform electron gas , which can be obtained from the Thomas–Fermi model , and from fits to the correlation energy for a uniform electron gas. Non-interacting systems are relatively easy to solve, as the wavefunction can be represented as a Slater determinant of orbitals . Further, the kinetic energy functional of such a system is known exactly. The exchange–correlation part of the total energy functional remains unknown and must be approximated.
Another approach, less popular than KS DFT but arguably more closely related to the spirit of the original HK theorems, is orbital-free density functional theory (OFDFT), in which approximate functionals are also used for the kinetic energy of the noninteracting system.
As usual in many-body electronic structure calculations, the nuclei of the treated molecules or clusters are seen as fixed (the Born–Oppenheimer approximation ), generating a static external potential V , in which the electrons are moving. A stationary electronic state is then described by a wavefunction Ψ( r 1 , …, r N ) satisfying the many-electron time-independent Schrödinger equation
where, for the N -electron system, Ĥ is the Hamiltonian , E is the total energy, T ^ {\displaystyle {\hat {T}}} is the kinetic energy, V ^ {\displaystyle {\hat {V}}} is the potential energy from the external field due to positively charged nuclei, and Û is the electron–electron interaction energy. The operators T ^ {\displaystyle {\hat {T}}} and Û are called universal operators, as they are the same for any N -electron system, while V ^ {\displaystyle {\hat {V}}} is system-dependent. This complicated many-particle equation is not separable into simpler single-particle equations because of the interaction term Û .
There are many sophisticated methods for solving the many-body Schrödinger equation based on the expansion of the wavefunction in Slater determinants . While the simplest one is the Hartree–Fock method, more sophisticated approaches are usually categorized as post-Hartree–Fock methods. However, the problem with these methods is the huge computational effort, which makes it virtually impossible to apply them efficiently to larger, more complex systems.
Here DFT provides an appealing alternative, being much more versatile, as it provides a way to systematically map the many-body problem, with Û , onto a single-body problem without Û . In DFT the key variable is the electron density n ( r ) , which for a normalized Ψ is given by
This relation can be reversed, i.e., for a given ground-state density n 0 ( r ) it is possible, in principle, to calculate the corresponding ground-state wavefunction Ψ 0 ( r 1 , …, r N ) . In other words, Ψ is a unique functional of n 0 , [ 15 ]
and consequently the ground-state expectation value of an observable Ô is also a functional of n 0 :
In particular, the ground-state energy is a functional of n 0 :
where the contribution of the external potential ⟨ Ψ [ n 0 ] | V ^ | Ψ [ n 0 ] ⟩ {\displaystyle {\big \langle }\Psi [n_{0}]{\big |}{\hat {V}}{\big |}\Psi [n_{0}]{\big \rangle }} can be written explicitly in terms of the ground-state density n 0 {\displaystyle n_{0}} :
More generally, the contribution of the external potential ⟨ Ψ | V ^ | Ψ ⟩ {\displaystyle {\big \langle }\Psi {\big |}{\hat {V}}{\big |}\Psi {\big \rangle }} can be written explicitly in terms of the density n {\displaystyle n} :
The functionals T [ n ] and U [ n ] are called universal functionals, while V [ n ] is called a non-universal functional, as it depends on the system under study. Having specified a system, i.e., having specified V ^ {\displaystyle {\hat {V}}} , one then has to minimize the functional
with respect to n ( r ) , assuming one has reliable expressions for T [ n ] and U [ n ] . A successful minimization of the energy functional will yield the ground-state density n 0 and thus all other ground-state observables.
The variational problems of minimizing the energy functional E [ n ] can be solved by applying the Lagrangian method of undetermined multipliers . [ 18 ] First, one considers an energy functional that does not explicitly have an electron–electron interaction energy term,
where T ^ {\displaystyle {\hat {T}}} denotes the kinetic-energy operator, and V ^ s {\displaystyle {\hat {V}}_{\text{s}}} is an effective potential in which the particles are moving. Based on E s {\displaystyle E_{s}} , Kohn–Sham equations of this auxiliary noninteracting system can be derived:
which yields the orbitals φ i that reproduce the density n ( r ) of the original many-body system
The effective single-particle potential can be written as
where V ( r ) {\displaystyle V(\mathbf {r} )} is the external potential, the second term is the Hartree term describing the electron–electron Coulomb repulsion , and the last term V XC is the exchange–correlation potential. Here, V XC includes all the many-particle interactions. Since the Hartree term and V XC depend on n ( r ) , which depends on the φ i , which in turn depend on V s , the problem of solving the Kohn–Sham equation has to be done in a self-consistent (i.e., iterative ) way. Usually one starts with an initial guess for n ( r ) , then calculates the corresponding V s and solves the Kohn–Sham equations for the φ i . From these one calculates a new density and starts again. This procedure is then repeated until convergence is reached. A non-iterative approximate formulation called Harris functional DFT is an alternative approach to this.
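To make the self-consistency cycle concrete, the following is a deliberately simplified one-dimensional Kohn–Sham loop in Python. It is a sketch only: the harmonic external potential, the soft-Coulomb form of the Hartree term, the crude LDA-style exchange expression, and the linear density mixing are all assumptions chosen for illustration rather than a production exchange–correlation treatment.

```python
import numpy as np

# Toy one-dimensional Kohn-Sham self-consistency loop (illustrative units).
n_grid = 200
x = np.linspace(-5.0, 5.0, n_grid)
dx = x[1] - x[0]
n_electrons = 2                                 # occupy the two lowest orbitals (spinless toy)

# Kinetic operator -1/2 d^2/dx^2 by central finite differences
laplacian = (np.diag(np.ones(n_grid - 1), -1)
             - 2.0 * np.eye(n_grid)
             + np.diag(np.ones(n_grid - 1), 1)) / dx**2
kinetic = -0.5 * laplacian

v_ext = 0.5 * x**2                              # external potential: harmonic trap

def hartree_potential(density):
    """Soft-Coulomb Hartree term, a common 1D stand-in for 1/|r - r'|."""
    return np.array([np.sum(density / np.sqrt((xi - x)**2 + 1.0)) * dx for xi in x])

def exchange_potential(density):
    """Crude LDA-style exchange potential, v_x = -(3 n / pi)^(1/3); illustrative only."""
    return -(3.0 * density / np.pi) ** (1.0 / 3.0)

density = np.full(n_grid, n_electrons / (n_grid * dx))   # uniform initial guess
for iteration in range(200):
    v_eff = v_ext + hartree_potential(density) + exchange_potential(density)
    eigenvalues, orbitals = np.linalg.eigh(kinetic + np.diag(v_eff))
    new_density = np.zeros(n_grid)
    for i in range(n_electrons):
        phi = orbitals[:, i] / np.sqrt(dx)      # grid-normalised Kohn-Sham orbital
        new_density += phi**2
    if np.max(np.abs(new_density - density)) < 1e-6:
        break                                   # self-consistency reached
    density = 0.7 * density + 0.3 * new_density # linear mixing stabilises the iteration
```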
The same theorems can be proven in the case of relativistic electrons, thereby providing generalization of DFT for the relativistic case. Unlike the nonrelativistic theory, in the relativistic case it is possible to derive a few exact and explicit formulas for the relativistic density functional.
Let one consider an electron in the hydrogen-like ion obeying the relativistic Dirac equation . The Hamiltonian H for a relativistic electron moving in the Coulomb potential can be chosen in the following form ( atomic units are used):
where V = − eZ / r is the Coulomb potential of a pointlike nucleus, p is a momentum operator of the electron, and e , m and c are the elementary charge , electron mass and the speed of light respectively, and finally α and β are a set of Dirac 2 × 2 matrices :
To find out the eigenfunctions and corresponding energies, one solves the eigenfunction equation
where Ψ = (Ψ(1), Ψ(2), Ψ(3), Ψ(4)) T is a four-component wavefunction , and E is the associated eigenenergy. It is demonstrated in Brack (1983) [ 19 ] that application of the virial theorem to the eigenfunction equation produces the following formula for the eigenenergy of any bound state:
and analogously, the virial theorem applied to the eigenfunction equation with the square of the Hamiltonian yields
It is easy to see that both of the above formulae represent density functionals. The former formula can be easily generalized for the multi-electron case.
One may observe that neither of the functionals written above has extremals, of course, if a reasonably wide set of functions is allowed for variation. Nevertheless, it is possible to design a density functional with the desired extremal properties out of them. Let us do so in the following way:
where n e in the Kronecker delta symbol of the second term denotes any extremal for the functional represented by the first term of the functional F . The second term amounts to zero for any function that is not an extremal for the first term of the functional F . To proceed further, we would like to find the Lagrange equation for this functional. To do this, we isolate the linear part of the functional's increment when the argument function is altered:
Using the equation written above, it is easy to find the following formula for the functional derivative:
where A = mc 2 ∫ n e d τ , and B = √ m 2 c 4 + emc 2 ∫ Vn e d τ , and V ( τ 0 ) is the value of the potential at some point specified by the support of the variation function δn , which is supposed to be infinitesimal. To advance toward the Lagrange equation, we equate the functional derivative to zero and, after simple algebraic manipulations, arrive at the following equation:
Evidently, this equation can have a solution only if A = B . This last condition provides us with the Lagrange equation for the functional F , which can finally be written down in the following form:
Solutions of this equation represent extremals for the functional F . It is easy to see that all real densities, that is, densities corresponding to the bound states of the system in question, are solutions of the above equation, which could be called the Kohn–Sham equation in this particular case. Looking back at the definition of the functional F , we clearly see that the functional produces the energy of the system for the appropriate density, because the first term amounts to zero for such a density and the second one delivers the energy value.
The major problem with DFT is that the exact functionals for exchange and correlation are not known, except for the free-electron gas . However, approximations exist which permit the calculation of certain physical quantities quite accurately. [ 20 ] One of the simplest approximations is the local-density approximation (LDA), where the functional depends only on the density at the coordinate where the functional is evaluated:
The local spin-density approximation (LSDA) is a straightforward generalization of the LDA to include electron spin :
In LDA, the exchange–correlation energy is typically separated into the exchange part and the correlation part: ε XC = ε X + ε C . The exchange part is called the Dirac (or sometimes Slater) exchange , which takes the form ε X ∝ n 1/3 . There are, however, many mathematical forms for the correlation part. Highly accurate formulae for the correlation energy density ε C ( n ↑ , n ↓ ) have been constructed from quantum Monte Carlo simulations of jellium . [ 21 ] A simple first-principles correlation functional has been recently proposed as well. [ 22 ] [ 23 ] Although unrelated to the Monte Carlo simulation, the two variants provide comparable accuracy. [ 24 ]
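For illustration, the Dirac (Slater) exchange term can be evaluated directly from a density sampled on a real-space grid. The helper below is a hypothetical sketch, assuming Hartree atomic units, in which the LDA exchange energy is E_x[n] = -(3/4)(3/π)^(1/3) ∫ n(r)^(4/3) d³r.

```python
import numpy as np

def lda_exchange_energy(density, volume_element):
    """Dirac/Slater LDA exchange energy on a uniform grid (Hartree atomic units):
    E_x[n] = -(3/4) * (3/pi)**(1/3) * sum_i n_i**(4/3) * dV."""
    c_x = 0.75 * (3.0 / np.pi) ** (1.0 / 3.0)
    return -c_x * np.sum(density ** (4.0 / 3.0)) * volume_element
```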
The LDA assumes that the density is the same everywhere. Because of this, the LDA has a tendency to underestimate the exchange energy and over-estimate the correlation energy. [ 25 ] The errors due to the exchange and correlation parts tend to compensate each other to a certain degree. To correct for this tendency, it is common to expand in terms of the gradient of the density in order to account for the non-homogeneity of the true electron density. This allows corrections based on the changes in density away from the coordinate. These expansions are referred to as generalized gradient approximations (GGA) [ 26 ] [ 27 ] [ 28 ] and have the following form:
Using the latter (GGA), very good results for molecular geometries and ground-state energies have been achieved.
Potentially more accurate than the GGA functionals are the meta-GGA functionals, a natural development after the GGA (generalized gradient approximation). Meta-GGA DFT functional in its original form includes the second derivative of the electron density (the Laplacian), whereas GGA includes only the density and its first derivative in the exchange–correlation potential.
Functionals of this type are, for example, TPSS and the Minnesota Functionals . These functionals include a further term in the expansion, depending on the density, the gradient of the density and the Laplacian ( second derivative ) of the density.
Difficulties in expressing the exchange part of the energy can be relieved by including a component of the exact exchange energy calculated from Hartree–Fock theory. Functionals of this type are known as hybrid functionals .
The DFT formalism described above breaks down, to various degrees, in the presence of a vector potential, i.e. a magnetic field . In such a situation, the one-to-one mapping between the ground-state electron density and wavefunction is lost. Generalizations to include the effects of magnetic fields have led to two different theories: current density functional theory (CDFT) and magnetic field density functional theory (BDFT). In both these theories, the functional used for the exchange and correlation must be generalized to include more than just the electron density. In current density functional theory, developed by Vignale and Rasolt, [ 17 ] the functionals become dependent on both the electron density and the paramagnetic current density. In magnetic field density functional theory, developed by Salsbury, Grayce and Harris, [ 29 ] the functionals depend on the electron density and the magnetic field, and the functional form can depend on the form of the magnetic field. In both of these theories it has been difficult to develop functionals beyond their equivalent to LDA, which are also readily implementable computationally.
In general, density functional theory finds increasingly broad application in chemistry and materials science for the interpretation and prediction of complex system behavior at an atomic scale. Specifically, DFT computational methods are applied for synthesis-related systems and processing parameters. In such systems, experimental studies are often encumbered by inconsistent results and non-equilibrium conditions. Examples of contemporary DFT applications include studying the effects of dopants on phase transformation behavior in oxides, magnetic behavior in dilute magnetic semiconductor materials, and the study of magnetic and electronic behavior in ferroelectrics and dilute magnetic semiconductors . [ 2 ] [ 30 ] It has also been shown that DFT gives good results in the prediction of sensitivity of some nanostructures to environmental pollutants like sulfur dioxide [ 31 ] or acrolein , [ 32 ] as well as prediction of mechanical properties. [ 33 ]
In practice, Kohn–Sham theory can be applied in several distinct ways, depending on what is being investigated. In solid-state calculations, the local density approximations are still commonly used along with plane-wave basis sets, as an electron-gas approach is more appropriate for electrons delocalised through an infinite solid. In molecular calculations, however, more sophisticated functionals are needed, and a huge variety of exchange–correlation functionals have been developed for chemical applications. Some of these are inconsistent with the uniform electron-gas approximation; however, they must reduce to LDA in the electron-gas limit. Among physicists, one of the most widely used functionals is the revised Perdew–Burke–Ernzerhof exchange model (a direct generalized gradient parameterization of the free-electron gas with no free parameters); however, this is not sufficiently calorimetrically accurate for gas-phase molecular calculations. In the chemistry community, one popular functional is known as BLYP (from the name Becke for the exchange part and Lee, Yang and Parr for the correlation part). Even more widely used is B3LYP, which is a hybrid functional in which the exchange energy, in this case from Becke's exchange functional, is combined with the exact energy from Hartree–Fock theory. Along with the component exchange and correlation funсtionals, three parameters define the hybrid functional, specifying how much of the exact exchange is mixed in. The adjustable parameters in hybrid functionals are generally fitted to a "training set" of molecules. Although the results obtained with these functionals are usually sufficiently accurate for most applications, there is no systematic way of improving them (in contrast to some of the traditional wavefunction -based methods like configuration interaction or coupled cluster theory). In the current DFT approach it is not possible to estimate the error of the calculations without comparing them to other methods or experiments.
Density functional theory is generally highly accurate but highly computationally-expensive. In recent years, DFT has been used with machine learning techniques - especially graph neural networks - to create machine learning potentials . These graph neural networks approximate DFT, with the aim of achieving similar accuracies with much less computation, and are especially beneficial for large systems. They are trained using DFT-calculated properties of a known set of molecules. Researchers have been trying to approximate DFT with machine learning for decades, but have only recently made good estimators. Breakthroughs in model architecture and data preprocessing that more heavily encoded theoretical knowledge, especially regarding symmetries and invariances, have enabled huge leaps in model performance. Using backpropagation, the process by which neural networks learn from training errors, to extract meaningful information about forces and densities, has similarly improved machine learning potentials accuracy. By 2023, for example, the DFT approximator Matlantis could simulate 72 elements, handle up to 20,000 atoms at a time, and execute calculations up to 20,000,000 times faster than DFT with similar accuracy, showcasing the power of DFT approximators in the artificial intelligence age. ML approximations of DFT have historically faced substantial transferability issues, with models failing to generalize potentials from some types of elements and compounds to others; improvements in architecture and data have slowly mitigated, but not eliminated, this issue. For very large systems, electrically nonneutral simulations, and intricate reaction pathways, DFT approximators often remain insufficiently computationally-lightweight or insufficiently accurate. [ 34 ] [ 35 ] [ 36 ] [ 37 ] [ 38 ]
The predecessor to density functional theory was the Thomas–Fermi model , developed independently by both Llewellyn Thomas and Enrico Fermi in 1927. They used a statistical model to approximate the distribution of electrons in an atom. The mathematical basis postulated that electrons are distributed uniformly in phase space with two electrons in every h 3 {\displaystyle h^{3}} of volume. [ 39 ] For each element of coordinate space volume d 3 r {\displaystyle \mathrm {d} ^{3}\mathbf {r} } we can fill out a sphere of momentum space up to the Fermi momentum p F {\displaystyle p_{\text{F}}} [ 40 ]
Equating the number of electrons in coordinate space to that in phase space gives
n ( r ) = 8 π 3 h 3 p F 3 ( r ) {\displaystyle n(\mathbf {r} )={\frac {8\pi }{3h^{3}}}p_{\text{F}}^{3}(\mathbf {r} )} .
Solving for p F and substituting into the classical kinetic energy formula then leads directly to a kinetic energy represented as a functional of the electron density:
T TF [ n ] = C F ∫ n ( r ) 5 / 3 d 3 r {\displaystyle T_{\text{TF}}[n]=C_{\text{F}}\int n(\mathbf {r} )^{5/3}\,\mathrm {d} ^{3}r}
where
C F = 3 h 2 10 m ( 3 8 π ) 2 / 3 {\displaystyle C_{\text{F}}={\frac {3h^{2}}{10m}}\left({\frac {3}{8\pi }}\right)^{2/3}} .
As such, they were able to calculate the energy of an atom using this kinetic-energy functional combined with the classical expressions for the nucleus–electron and electron–electron interactions (which can both also be represented in terms of the electron density).
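As a small numerical illustration, the Thomas–Fermi kinetic energy can be evaluated directly from a density sampled on a grid. The helper below is hypothetical and assumes Hartree atomic units, in which the prefactor reduces to C_F = (3/10)(3π²)^(2/3).

```python
import numpy as np

def thomas_fermi_kinetic_energy(density, volume_element):
    """T_TF[n] = C_F * sum_i n_i**(5/3) * dV on a uniform real-space grid,
    with C_F = (3/10) * (3*pi**2)**(2/3) in Hartree atomic units."""
    c_f = 0.3 * (3.0 * np.pi**2) ** (2.0 / 3.0)
    return c_f * np.sum(density ** (5.0 / 3.0)) * volume_element
```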
Although this was an important first step, the Thomas–Fermi equation's accuracy is limited because the resulting kinetic-energy functional is only approximate, and because the method does not attempt to represent the exchange energy of an atom as a consequence of the Pauli principle . An exchange-energy functional was added by Paul Dirac in 1928.
However, the Thomas–Fermi–Dirac theory remained rather inaccurate for most applications. The largest source of error was in the representation of the kinetic energy, followed by the errors in the exchange energy, and due to the complete neglect of electron correlation .
Edward Teller (1962) showed that Thomas–Fermi theory cannot describe molecular bonding. This can be overcome by improving the kinetic-energy functional.
The kinetic-energy functional can be improved by adding the von Weizsäcker (1935) correction: [ 41 ] [ 42 ]
T W [ n ] = ℏ 2 8 m ∫ | ∇ n ( r ) | 2 n ( r ) d 3 r {\displaystyle T_{\text{W}}[n]={\frac {\hbar ^{2}}{8m}}\int {\frac {|\nabla n(\mathbf {r} )|^{2}}{n(\mathbf {r} )}}\,\mathrm {d} ^{3}r} .
The Hohenberg–Kohn theorems relate to any system consisting of electrons moving under the influence of an external potential.
Theorem 1. The external potential (and hence the total energy), is a unique functional of the electron density.
Theorem 2. The functional that delivers the ground-state energy of the system gives the lowest energy if and only if the input density is the true ground-state density.
The many-electron Schrödinger equation can be very much simplified if electrons are divided in two groups: valence electrons and inner core electrons . The electrons in the inner shells are strongly bound and do not play a significant role in the chemical binding of atoms ; they also partially screen the nucleus, thus forming with the nucleus an almost inert core. Binding properties are almost completely due to the valence electrons, especially in metals and semiconductors. This separation suggests that inner electrons can be ignored in a large number of cases, thereby reducing the atom to an ionic core that interacts with the valence electrons. The use of an effective interaction, a pseudopotential , that approximates the potential felt by the valence electrons, was first proposed by Fermi in 1934 and Hellmann in 1935. In spite of the simplification pseudo-potentials introduce in calculations, they remained forgotten until the late 1950s.
A crucial step toward more realistic pseudo-potentials was given by William C. Topp and John Hopfield , [ 43 ] who suggested that the pseudo-potential should be adjusted such that they describe the valence charge density accurately. Based on that idea, modern pseudo-potentials are obtained inverting the free-atom Schrödinger equation for a given reference electronic configuration and forcing the pseudo-wavefunctions to coincide with the true valence wavefunctions beyond a certain distance r l . The pseudo-wavefunctions are also forced to have the same norm (i.e., the so-called norm-conserving condition) as the true valence wavefunctions and can be written as
where R l ( r ) is the radial part of the wavefunction with angular momentum l , and PP and AE denote the pseudo-wavefunction and the true (all-electron) wavefunction respectively. The index n in the true wavefunctions denotes the valence level. The distance r l beyond which the true and the pseudo-wavefunctions are equal is also dependent on l .
The electrons of a system will occupy the lowest Kohn–Sham eigenstates up to a given energy level according to the Aufbau principle . This corresponds to the steplike Fermi–Dirac distribution at absolute zero. If there are several degenerate or close to degenerate eigenstates at the Fermi level , it is possible to get convergence problems, since very small perturbations may change the electron occupation. One way of damping these oscillations is to smear the electrons, i.e. to allow fractional occupancies. [ 44 ] One approach is to assign a finite temperature to the electron Fermi–Dirac distribution. Other approaches are to assign a cumulative Gaussian distribution to the electrons or to use the Methfessel–Paxton method. [ 45 ] [ 46 ]
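A minimal sketch of the finite-temperature (Fermi–Dirac) smearing idea is shown below; the helper name and the bisection search for the Fermi level are assumptions made for illustration. Given the Kohn–Sham eigenvalues, it assigns fractional occupations at a chosen electronic temperature so that they sum to the required number of electrons.

```python
import numpy as np

def fermi_dirac_occupations(eigenvalues, n_electrons, kT, n_iter=100):
    """Fractional occupations f_i = 1 / (exp((e_i - mu)/kT) + 1), with the Fermi
    level mu located by bisection so that sum(f_i) equals n_electrons."""
    eigenvalues = np.asarray(eigenvalues, dtype=float)
    lo = eigenvalues.min() - 20.0 * kT
    hi = eigenvalues.max() + 20.0 * kT
    for _ in range(n_iter):
        mu = 0.5 * (lo + hi)
        arg = np.clip((eigenvalues - mu) / kT, -60.0, 60.0)   # avoid overflow in exp
        occupations = 1.0 / (np.exp(arg) + 1.0)
        if occupations.sum() > n_electrons:
            hi = mu        # too many electrons: lower the Fermi level
        else:
            lo = mu
    return occupations, mu
```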
Classical density functional theory is a classical statistical method to investigate the properties of many-body systems consisting of interacting molecules, macromolecules, nanoparticles or microparticles. [ 47 ] [ 48 ] [ 49 ] [ 50 ] The classical non-relativistic method is correct for classical fluids with particle velocities less than the speed of light and thermal de Broglie wavelength smaller than the distance between particles. The theory is based on the calculus of variations of a thermodynamic functional, which is a function of the spatially dependent density function of particles, thus the name. The same name is used for quantum DFT, which is the theory to calculate the electronic structure of electrons based on spatially dependent electron density with quantum and relativistic effects. Classical DFT is a popular and useful method to study fluid phase transitions , ordering in complex liquids, physical characteristics of interfaces and nanomaterials . Since the 1970s it has been applied to the fields of materials science , biophysics , chemical engineering and civil engineering . [ 51 ] Computational costs are much lower than for molecular dynamics simulations, which provide similar data and a more detailed description but are limited to small systems and short time scales. Classical DFT is valuable to interpret and test numerical results and to define trends although details of the precise motion of the particles are lost due to averaging over all possible particle trajectories. [ 52 ] As in electronic systems, there are fundamental and numerical difficulties in using DFT to quantitatively describe the effect of intermolecular interaction on structure, correlations and thermodynamic properties.
Classical DFT addresses the difficulty of describing thermodynamic equilibrium states of many-particle systems with nonuniform density. [ 53 ] Classical DFT has its roots in theories such as the van der Waals theory for the equation of state and the virial expansion method for the pressure. In order to account for correlation in the positions of particles the direct correlation function was introduced as the effective interaction between two particles in the presence of a number of surrounding particles by Leonard Ornstein and Frits Zernike in 1914. [ 54 ] The connection to the density pair distribution function was given by the Ornstein–Zernike equation . The importance of correlation for thermodynamic properties was explored through density distribution functions. The functional derivative was introduced to define the distribution functions of classical mechanical systems. Theories were developed for simple and complex liquids using the ideal gas as a basis for the free energy and adding molecular forces as a second-order perturbation. A term in the gradient of the density was added to account for non-uniformity in density in the presence of external fields or surfaces. These theories can be considered precursors of DFT.
To develop a formalism for the statistical thermodynamics of non-uniform fluids, functional differentiation was used extensively by Percus and Lebowitz (1961), which led to the Percus–Yevick equation linking the density distribution function and the direct correlation. [ 55 ] Other closure relations were also proposed, such as the Classical-map hypernetted-chain method and the BBGKY hierarchy . In the late 1970s classical DFT was applied to the liquid–vapor interface and the calculation of surface tension . Other applications followed: the freezing of simple fluids, formation of the glass phase, the crystal–melt interface and dislocation in crystals, properties of polymer systems, and liquid crystal ordering. Classical DFT was applied to colloid dispersions, which were discovered to be good models for atomic systems. [ 56 ] By assuming local chemical equilibrium and using the local chemical potential of the fluid from DFT as the driving force in fluid transport equations, equilibrium DFT is extended to describe non-equilibrium phenomena and fluid dynamics on small scales.
Classical DFT allows the calculation of the equilibrium particle density and prediction of thermodynamic properties and behavior of a many-body system on the basis of model interactions between particles. The spatially dependent density determines the local structure and composition of the material. It is determined as a function that optimizes the thermodynamic potential of the grand canonical ensemble . The grand potential is evaluated as the sum of the ideal-gas term with the contribution from external fields and an excess thermodynamic free energy arising from interparticle interactions. In the simplest approach the excess free-energy term is expanded on a system of uniform density using a functional Taylor expansion . The excess free energy is then a sum of the contributions from s -body interactions with density-dependent effective potentials representing the interactions between s particles. In most calculations the terms in the interactions of three or more particles are neglected (second-order DFT). When the structure of the system to be studied is not well approximated by a low-order perturbation expansion with a uniform phase as the zero-order term, non-perturbative free-energy functionals have also been developed. The minimization of the grand potential functional in arbitrary local density functions for fixed chemical potential, volume and temperature provides self-consistent thermodynamic equilibrium conditions, in particular, for the local chemical potential . The functional is not in general a convex functional of the density; solutions may not be local minima . Limiting to low-order corrections in the local density is a well-known problem, although the results agree (reasonably) well on comparison to experiment.
A variational principle is used to determine the equilibrium density. It can be shown that for constant temperature and volume the correct equilibrium density minimizes the grand potential functional Ω {\displaystyle \Omega } of the grand canonical ensemble over density functions n ( r ) {\displaystyle n(\mathbf {r} )} . In the language of functional differentiation (Mermin theorem):
The Helmholtz free energy functional F {\displaystyle F} is defined as F = Ω + ∫ d 3 r n ( r ) μ ( r ) {\displaystyle F=\Omega +\int d^{3}\mathbf {r} \,n(\mathbf {r} )\mu (\mathbf {r} )} .
The functional derivative in the density function determines the local chemical potential: μ ( r ) = δ F ( r ) / δ n ( r ) {\displaystyle \mu (\mathbf {r} )=\delta F(\mathbf {r} )/\delta n(\mathbf {r} )} .
In classical statistical mechanics the partition function is a sum over probability for a given microstate of N classical particles as measured by the Boltzmann factor in the Hamiltonian of the system. The Hamiltonian splits into kinetic and potential energy, which includes interactions between particles, as well as external potentials. The partition function of the grand canonical ensemble defines the grand potential. A correlation function is introduced to describe the effective interaction between particles.
The s -body density distribution function is defined as the statistical ensemble average ⟨ … ⟩ {\displaystyle \langle \dots \rangle } of particle positions. It measures the probability to find s particles at points in space r 1 , … , r s {\displaystyle \mathbf {r} _{1},\dots ,\mathbf {r} _{s}} :
From the definition of the grand potential, the functional derivative with respect to the local chemical potential is the density; higher-order density correlations for two, three, four or more particles are found from higher-order derivatives:
The radial distribution function with s = 2 measures the change in the density at a given point for a change of the local chemical interaction at a distant point.
In a fluid the free energy is a sum of the ideal free energy and the excess free-energy contribution Δ F {\displaystyle \Delta F} from interactions between particles. In the grand ensemble the functional derivatives in the density yield the direct correlation functions c s {\displaystyle c_{s}} :
The one-body direct correlation function plays the role of an effective mean field . The functional derivative in density of the one-body direct correlation results in the direct correlation function between two particles c 2 {\displaystyle c_{2}} . The direct correlation function is the correlation contribution to the change of local chemical potential at a point r {\displaystyle \mathbf {r} } for a density change at r ′ {\displaystyle \mathbf {r} '} and is related to the work of creating density changes at different positions. In dilute gases the direct correlation function is simply the pair-wise interaction between particles ( Debye–Huckel equation ). The Ornstein–Zernike equation between the pair and the direct correlation functions is derived from the equation
Various assumptions and approximations adapted to the system under study lead to expressions for the free energy. Correlation functions are used to calculate the free-energy functional as an expansion on a known reference system. If the non-uniform fluid can be described by a density distribution that is not far from uniform density a functional Taylor expansion of the free energy in density increments leads to an expression for the thermodynamic potential using known correlation functions of the uniform system. In the square gradient approximation a strong non-uniform density contributes a term in the gradient of the density. In a perturbation theory approach the direct correlation function is given by the sum of the direct correlation in a known system such as hard spheres and a term in a weak interaction such as the long range London dispersion force . In a local density approximation the local excess free energy is calculated from the effective interactions with particles distributed at uniform density of the fluid in a cell surrounding a particle. Other improvements have been suggested such as the weighted density approximation for a direct correlation function of a uniform system which distributes the neighboring particles with an effective weighted density calculated from a self-consistent condition on the direct correlation function.
The variational Mermin principle leads to an equation for the equilibrium density and system properties are calculated from the solution for the density. The equation is a non-linear integro-differential equation and finding a solution is not trivial, requiring numerical methods, except for the simplest models. Classical DFT is supported by standard software packages, and specific software is currently under development. Assumptions can be made to propose trial functions as solutions, and the free energy is expressed in the trial functions and optimized with respect to parameters of the trial functions. Examples are a localized Gaussian function centered on crystal lattice points for the density in a solid, the hyperbolic function tanh ( r ) {\displaystyle \tanh(r)} for interfacial density profiles.
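As an illustration of such a numerical solution, the sketch below solves the equilibrium condition for a toy one-dimensional fluid by damped Picard iteration. The mean-field form of the excess free energy, the soft Gaussian pair kernel (a Gaussian-core-like model), and all parameter values are assumptions made for the example.

```python
import numpy as np

# Toy classical DFT: ideal-gas free energy plus a mean-field excess term in 1D.
# Equilibrium condition: kT*ln(n(x)) + V_ext(x) + (u * n)(x) = mu  (thermal wavelength set to 1).
kT, mu = 1.0, 0.0
x = np.linspace(-10.0, 10.0, 400)
dx = x[1] - x[0]
v_ext = 0.05 * x**2                                   # confining external field
u = 0.5 * np.exp(-np.subtract.outer(x, x) ** 2)       # soft repulsive pair kernel u(x - x')

n = np.full_like(x, 0.1)                              # uniform initial guess
for _ in range(2000):
    v_mf = (u @ n) * dx                               # mean-field potential from the excess term
    n_new = np.exp((mu - v_ext - v_mf) / kT)          # ideal-gas (Boltzmann) relation
    if np.max(np.abs(n_new - n)) < 1e-9:
        break                                         # equilibrium profile reached
    n = 0.9 * n + 0.1 * n_new                         # damped Picard mixing
```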
Classical DFT has found many applications, for example:
The extension of classical DFT towards nonequilibrium systems is known as dynamical density functional theory (DDFT). [ 58 ] DDFT makes it possible to describe the time evolution of the one-body density ρ ( r , t ) {\displaystyle \rho ({\boldsymbol {r}},t)} of a colloidal system, which is governed by the equation
∂ ρ ( r , t ) ∂ t = Γ ∇ ⋅ ( ρ ( r , t ) ∇ δ F [ ρ ] δ ρ ( r , t ) ) {\displaystyle {\frac {\partial \rho ({\boldsymbol {r}},t)}{\partial t}}=\Gamma \nabla \cdot \left(\rho ({\boldsymbol {r}},t)\nabla {\frac {\delta F[\rho ]}{\delta \rho ({\boldsymbol {r}},t)}}\right)}
with the mobility Γ {\displaystyle \Gamma } and the free energy F {\displaystyle F} . DDFT can be derived from the microscopic equations of motion for a colloidal system (Langevin equations or Smoluchowski equation) based on the adiabatic approximation, which corresponds to the assumption that the two-body distribution in a nonequilibrium system is identical to that in an equilibrium system with the same one-body density. For a system of noninteracting particles, DDFT reduces to the standard diffusion equation. | https://en.wikipedia.org/wiki/Hohenberg-Kohn_theorems |
In radiation thermodynamics , a hohlraum ( German: [ˈhoːlˌʁaʊ̯m] ; a non-specific German word for a "hollow space", "empty room", or "cavity") is a cavity whose walls are in radiative equilibrium with the radiant energy within the cavity. First proposed by Gustav Kirchhoff in 1860 and used in the study of black-body radiation ( hohlraumstrahlung ), [ 1 ] this idealized cavity can be approximated in practice by a hollow container of any opaque material. The radiation escaping through a small perforation in the wall of such a container will be a good approximation of black-body radiation at the temperature of the interior of the container. [ 2 ] Indeed, a hohlraum can even be constructed from cardboard, as shown by Purcell's Black Body Box, a hohlraum demonstrator. [ 3 ]
In spectroscopy, the Hohlraum effect occurs when an object achieves thermodynamic equilibrium with an enclosing hohlraum. As a consequence of Kirchhoff’s law , everything optically blends together and contrast between the walls and the object effectively disappears. [ 4 ]
Hohlraums are used in High Energy Density Physics (HEDP) and Inertial Confinement Fusion (ICF) experiments to convert laser energy to thermal x-rays for imploding capsules, heating targets, and generating thermal radiation waves. [ 5 ] They may also be used in nuclear weapon designs.
The indirect drive approach to inertial confinement fusion is as follows: the fusion fuel capsule is held inside a cylindrical hohlraum. The hohlraum body is manufactured using a high-Z (high atomic number) element, usually gold or uranium.
Inside the hohlraum is a fuel capsule containing deuterium and tritium (D-T) fuel. A frozen layer of D-T ice adheres inside the fuel capsule.
The fuel capsule wall is synthesized using light elements such as plastic, beryllium, or high density carbon, i.e. diamond. The outer portion of the fuel capsule explodes outward when ablated by the x-rays produced by the hohlraum wall upon irradiation by lasers. Due to Newton's third law, the inner portion of the fuel capsule implodes, causing the D-T fuel to be supercompressed, activating a fusion reaction.
The radiation source (e.g., laser ) is pointed at the interior of the hohlraum rather than at the fuel capsule itself. The hohlraum absorbs and re-radiates the energy as X-rays , a process known as indirect drive. The advantage to this approach, compared to direct drive, is that high mode structures from the laser spot are smoothed out when the energy is re-radiated from the hohlraum walls. The disadvantage to this approach is that low mode asymmetries are harder to control. It is important to be able to control both high mode and low mode asymmetries to achieve a uniform implosion .
The hohlraum walls must have surface roughness less than 1 micron, and hence accurate machining is required during fabrication. Any imperfection of the hohlraum wall during fabrication will cause uneven and non-symmetrical compression of the fuel capsule inside the hohlraum during inertial confinement fusion. Imperfections must therefore be carefully prevented, and surface finishing is extremely important, because during ICF laser shots the intense pressure and temperature make the results highly susceptible to the roughness of the hohlraum surface. The fuel capsule must be precisely spherical, with texture roughness less than one nanometer, for fusion ignition to start; otherwise, instability will cause fusion to fizzle. The fuel capsule contains a small fill hole, less than 5 microns in diameter, through which it is injected with D-T gas.
The X-ray intensity around the capsule must be very symmetrical to avoid hydrodynamic instabilities during compression. Earlier designs had radiators at the ends of the hohlraum, but it proved difficult to maintain adequate X-ray symmetry with this geometry. By the end of the 1990s, target physicists developed a new family of designs in which the ion beams are absorbed in the hohlraum walls, so that X-rays are radiated from a large fraction of the solid angle surrounding the capsule. With a judicious choice of absorbing materials, this arrangement, referred to as a "distributed-radiator" target, gives better X-ray symmetry and target gain in simulations than earlier designs. [ 6 ]
The term hohlraum is also used to describe the casing of a thermonuclear bomb following the Teller-Ulam design . The casing's purpose is to contain and focus the energy of the primary ( fission ) stage in order to implode the secondary ( fusion ) stage. | https://en.wikipedia.org/wiki/Hohlraum |
A hoist controller is the controller for a hoist . The term is used primarily in the context of electrically operated hoists, but it is apparent that the control systems of many 20th century steam hoists also incorporated controllers of significant complexity. Consider the control system of the Quincy Mine No. 2 Hoist. [ 1 ] This control system included interlocks to close the throttle valve at the end of trip and to prevent opening the throttle again until the winding engine was reversed. The control system also incorporated a governor to control the speed of the hoist and indicator wheels to show the hoist operator the positions of the skips in the mine shaft .
The hoist controllers for modern electric mining hoists have long included such features as automatic starting of the hoist when the weight of coal or ore in the skip reaches a set point, automatic acceleration of the hoist to full speed and automatic deceleration at the end of travel. [ 2 ] Hoist controllers need both velocity and absolute position references, typically taken from the winding drum of the hoist. [ 3 ] Modern hoist controllers replace many of the mechanical analog mechanisms of earlier controllers with digital control systems.
This technology-related article is a stub . You can help Wikipedia by expanding it . | https://en.wikipedia.org/wiki/Hoist_controller |
The hok/sok system is a postsegregational killing mechanism employed by the R1 plasmid in Escherichia coli . It was the first type I toxin-antitoxin pair to be identified through characterisation of a plasmid -stabilising locus . [ 1 ] It is a type I system because the toxin is neutralised by a complementary RNA, rather than a partnered protein (type II toxin-antitoxin). [ 2 ]
The hok/sok system involves three genes: [ 3 ]
When E. coli undergoes cell division , the two daughter cells inherit the long-lived hok mRNA from the parent cell. Due to the short half-life of the sok antitoxin, daughter cells inherit only small amounts and it quickly degrades. [ 3 ]
If a daughter cell has inherited the R1 plasmid, it has inherited the sok gene and a strong promoter which brings about high levels of transcription, so much so that in an R1-positive cell the Sok transcript exists in considerable molar excess over Hok mRNA. [ 5 ] Sok RNA then indirectly inhibits the translation of hok by inhibiting mok translation. There is a complementary region where sok transcript binds hok mRNA directly, but it does not occlude the Shine-Dalgarno sequence . Instead, sok RNA regulates the translation of the mok open reading frame , which nearly entirely overlaps that of hok . It is this translation-coupling which effectively allows sok RNA to repress the translation of hok mRNA. [ 6 ]
The sok transcript forms a duplex with the leader region of hok mRNA and this is recognized by RNase III and degraded. The cleavage products are very unstable and soon decay. [ 7 ]
Daughter cells without a copy of the R1 plasmid die because they do not have the means to produce more sok antitoxin transcript to inhibit translation of the inherited hok mRNA. The killing system is said to be postsegregational (PSK), [ 8 ] since cell death occurs after segregation of the plasmid. [ 9 ] [ 10 ]
The hok gene codes for a 52 amino acid toxic protein which causes cell death by depolarization of the cell membrane . [ 11 ] [ 12 ] It works in a similar way to holin proteins which are produced by bacteriophages before cell lysis . [ 2 ] [ 13 ]
hok/sok homologues denoted flmA/B (FlmA is the protein toxin and FlmB RNA the antisense regulator) [ 14 ] are carried on the F plasmid and operate in the same way to maintain the stability of the plasmid. [ 15 ] The F plasmid contains another homologous toxin-antitoxin system called srnB . [ 11 ]
The first type I toxin-antitoxin system to be found in gram-positive bacteria is the RNAI-RNAII system of the pAD1 plasmid in Enterococcus faecalis . Here, RNAI encodes a toxic protein Fst while RNAII is the regulatory sRNA. [ 16 ]
In E. coli strain K-12 there are four long direct repeats (ldr) which encode short open reading frames of 35 codons organised in a homologous manner to the hok/sok system. One of the repeats encodes LdrD, a toxic protein which causes cell death. An unstable antisense RNA regulator (Rd1D) blocks the translation of the LdrD transcript. [ 17 ] A mok homologue which overlaps each ldr loci has also been found. [ 3 ]
IstR RNA works in a similar system in conjunction with the toxic TisB protein. [ 18 ] | https://en.wikipedia.org/wiki/Hok/sok_system |
The reflected binary code ( RBC ), also known as reflected binary ( RB ) or Gray code after Frank Gray , is an ordering of the binary numeral system such that two successive values differ in only one bit (binary digit).
For example, the representation of the decimal value "1" in binary would normally be " 001 ", and "2" would be " 010 ". In Gray code, these values are represented as " 001 " and " 011 ". That way, incrementing a value from 1 to 2 requires only one bit to change, instead of two.
Gray codes are widely used to prevent spurious output from electromechanical switches and to facilitate error correction in digital communications such as digital terrestrial television and some cable TV systems. The use of Gray code in these devices helps simplify logic operations and reduce errors in practice. [ 3 ]
Many devices indicate position by closing and opening switches. If that device uses natural binary codes , positions 3 and 4 are next to each other but all three bits of the binary representation differ: decimal 3 is encoded as 011 , while decimal 4 is encoded as 100 .
The problem with natural binary codes is that physical switches are not ideal: it is very unlikely that physical switches will change states exactly in synchrony. In the transition between the two states shown above, all three switches change state. In the brief period while all are changing, the switches will read some spurious position. Even without keybounce , the transition might look like 011 — 001 — 101 — 100 . When the switches appear to be in position 001 , the observer cannot tell if that is the "real" position 1, or a transitional state between two other positions. If the output feeds into a sequential system, possibly via combinational logic , then the sequential system may store a false value.
This problem can be solved by changing only one switch at a time, so there is never any ambiguity of position, resulting in codes assigning to each of a contiguous set of integers , or to each member of a circular list, a word of symbols such that no two code words are identical and each two adjacent code words differ by exactly one symbol. These codes are also known as unit-distance , [ 4 ] [ 5 ] [ 6 ] [ 7 ] [ 8 ] single-distance , single-step , monostrophic [ 9 ] [ 10 ] [ 7 ] [ 8 ] or syncopic codes , [ 9 ] in reference to the Hamming distance of 1 between adjacent codes.
In principle, there can be more than one such code for a given word length, but the term Gray code was first applied to a particular binary code for non-negative integers, the binary-reflected Gray code , or BRGC . Bell Labs researcher George R. Stibitz described such a code in a 1941 patent application, granted in 1943. [ 11 ] [ 12 ] [ 13 ] Frank Gray introduced the term reflected binary code in his 1947 patent application, remarking that the code had "as yet no recognized name". [ 14 ] He derived the name from the fact that it "may be built up from the conventional binary code by a sort of reflection process".
In the standard encoding of the Gray code the least significant bit follows a repetitive pattern of 2 on, 2 off (... 11001100 ...); the next digit a pattern of 4 on, 4 off; the i -th least significant bit a pattern of 2 i on 2 i off. The most significant digit is an exception to this: for an n -bit Gray code, the most significant digit follows the pattern 2 n −1 on, 2 n −1 off, which is the same (cyclic) sequence of values as for the second-most significant digit, but shifted forwards 2 n −2 places. The four-bit version of this is shown below:

Decimal  Binary  Gray
0        0000    0000
1        0001    0001
2        0010    0011
3        0011    0010
4        0100    0110
5        0101    0111
6        0110    0101
7        0111    0100
8        1000    1100
9        1001    1101
10       1010    1111
11       1011    1110
12       1100    1010
13       1101    1011
14       1110    1001
15       1111    1000
For decimal 15 the code rolls over to decimal 0 with only one switch change. This is called the cyclic or adjacency property of the code. [ 15 ]
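The binary-reflected code can be computed directly with bitwise operations. The short Python sketch below (the helper names are illustrative) converts an integer to its Gray code with a single shift and XOR, converts back by cascading XORs, and prints the four-bit table while checking that successive code words, including the wrap-around from 15 to 0, differ in exactly one bit.

```python
def binary_to_gray(n: int) -> int:
    """Reflected binary (Gray) code of a non-negative integer."""
    return n ^ (n >> 1)

def gray_to_binary(g: int) -> int:
    """Invert the encoding by XOR-ing in successively shifted copies."""
    n = g
    while g:
        g >>= 1
        n ^= g
    return n

previous = binary_to_gray(15)                      # start from 15 to test the cyclic property
for i in range(16):
    g = binary_to_gray(i)
    assert gray_to_binary(g) == i                  # round trip is exact
    assert bin(previous ^ g).count("1") == 1       # exactly one bit changes per step
    print(f"{i:2d}  {i:04b}  {g:04b}")
    previous = g
```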
In modern digital communications , Gray codes play an important role in error correction . For example, in a digital modulation scheme such as QAM where data is typically transmitted in symbols of 4 bits or more, the signal's constellation diagram is arranged so that the bit patterns conveyed by adjacent constellation points differ by only one bit. By combining this with forward error correction capable of correcting single-bit errors, it is possible for a receiver to correct any transmission errors that cause a constellation point to deviate into the area of an adjacent point. This makes the transmission system less susceptible to noise .
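As a concrete illustration, a 16-QAM constellation can be Gray-labelled by giving each axis its own 2-bit Gray code, so that horizontally or vertically adjacent points differ in exactly one of the four bits. The mapping below is a generic sketch; the bit ordering and the amplitude levels are assumptions, not the mapping of any particular standard.

```python
def gray(n: int) -> int:
    """Reflected binary code of a non-negative integer."""
    return n ^ (n >> 1)

levels = [-3, -1, 1, 3]                      # amplitude levels on each axis
constellation = {}                           # 4-bit symbol -> complex constellation point
for i, a in enumerate(levels):               # in-phase index
    for q, b in enumerate(levels):           # quadrature index
        symbol = (gray(i) << 2) | gray(q)    # two Gray-coded bits per axis
        constellation[symbol] = complex(a, b)

# Neighbouring points along either axis differ in exactly one bit, so a small
# noise-induced decision error corrupts only a single bit of the symbol.
```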
Despite the fact that Stibitz described this code [ 11 ] [ 12 ] [ 13 ] before Gray, the reflected binary code was later named after Gray by others who used it. Two different 1953 patent applications use "Gray code" as an alternative name for the "reflected binary code"; [ 16 ] [ 17 ] one of those also lists "minimum error code" and "cyclic permutation code" among the names. [ 17 ] A 1954 patent application refers to "the Bell Telephone Gray code". [ 18 ] Other names include "cyclic binary code", [ 12 ] "cyclic progression code", [ 19 ] [ 12 ] "cyclic permuting binary" [ 20 ] or "cyclic permuted binary" (CPB). [ 21 ] [ 22 ]
The Gray code is sometimes misattributed to 19th century electrical device inventor Elisha Gray . [ 13 ] [ 23 ] [ 24 ] [ 25 ]
Reflected binary codes were applied to mathematical puzzles before they became known to engineers.
The binary-reflected Gray code represents the underlying scheme of the classical Chinese rings puzzle , a sequential mechanical puzzle mechanism described by the Frenchman Louis Gros in 1872. [ 26 ] [ 13 ]
It can serve as a solution guide for the Towers of Hanoi problem, based on a game by the Frenchman Édouard Lucas in 1883. [ 27 ] [ 28 ] [ 29 ] [ 30 ] Similarly, the so-called Towers of Bucharest and Towers of Klagenfurt game configurations yield ternary and pentary Gray codes. [ 31 ]
Martin Gardner wrote a popular account of the Gray code in his August 1972 "Mathematical Games" column in Scientific American . [ 32 ]
The code also forms a Hamiltonian cycle on a hypercube , where each bit is seen as one dimension.
When the French engineer Émile Baudot changed from using a 6-unit (6-bit) code to 5-unit code for his printing telegraph system, in 1875 [ 33 ] or 1876, [ 34 ] [ 35 ] he ordered the alphabetic characters on his print wheel using a reflected binary code, and assigned the codes using only three of the bits to vowels. With vowels and consonants sorted in their alphabetical order, [ 36 ] [ 37 ] [ 38 ] and other symbols appropriately placed, the 5-bit character code has been recognized as a reflected binary code. [ 13 ] This code became known as Baudot code [ 39 ] and, with minor changes, was eventually adopted as International Telegraph Alphabet No. 1 (ITA1, CCITT-1) in 1932. [ 40 ] [ 41 ] [ 38 ]
About the same time, the German-Austrian Otto Schäffler [ de ] [ 42 ] demonstrated another printing telegraph in Vienna using a 5-bit reflected binary code for the same purpose, in 1874. [ 43 ] [ 13 ]
Frank Gray , who became famous for inventing the signaling method that came to be used for compatible color television, invented a method to convert analog signals to reflected binary code groups using vacuum tube -based apparatus. Filed in 1947, the method and apparatus were granted a patent in 1953, [ 14 ] and the name of Gray stuck to the codes. The " PCM tube " apparatus that Gray patented was made by Raymond W. Sears of Bell Labs, working with Gray and William M. Goodall, who credited Gray for the idea of the reflected binary code. [ 44 ]
Gray was most interested in using the codes to minimize errors in converting analog signals to digital; his codes are still used today for this purpose.
Gray codes are used in linear and rotary position encoders ( absolute encoders and quadrature encoders ) in preference to weighted binary encoding. This avoids the possibility that, when multiple bits change in the binary representation of a position, a misread will result from some of the bits changing before others.
For example, some rotary encoders provide a disk which has an electrically conductive Gray code pattern on concentric rings (tracks). Each track has a stationary metal spring contact that provides electrical contact to the conductive code pattern. Together, these contacts produce output signals in the form of a Gray code. Other encoders employ non-contact mechanisms based on optical or magnetic sensors to produce the Gray code output signals.
Regardless of the mechanism or precision of a moving encoder, position measurement error can occur at specific positions (at code boundaries) because the code may be changing at the exact moment it is read (sampled). A binary output code could cause significant position measurement errors because it is impossible to make all bits change at exactly the same time. If, at the moment the position is sampled, some bits have changed and others have not, the sampled position will be incorrect. In the case of absolute encoders, the indicated position may be far away from the actual position and, in the case of incremental encoders, this can corrupt position tracking.
In contrast, the Gray code used by position encoders ensures that the codes for any two consecutive positions will differ by only one bit and, consequently, only one bit can change at a time. In this case, the maximum position error will be small, indicating a position adjacent to the actual position.
Due to the Hamming distance properties of Gray codes, they are sometimes used in genetic algorithms . [ 15 ] They are very useful in this field, since mutations in the code allow for mostly incremental changes, but occasionally a single bit-change can cause a big leap and lead to new properties.
Gray codes are also used in labelling the axes of Karnaugh maps since 1953 [ 45 ] [ 46 ] [ 47 ] as well as in Händler circle graphs since 1958, [ 48 ] [ 49 ] [ 50 ] [ 51 ] both graphical methods for logic circuit minimization .
In modern digital communications , 1D- and 2D-Gray codes play an important role in error prevention before an error correction is applied. As described above, the constellation diagram of a digital modulation scheme such as QAM is arranged so that the bit patterns conveyed by adjacent constellation points differ by only one bit; combined with forward error correction capable of correcting single-bit errors, this makes the transmission system less susceptible to noise .
Digital logic designers use Gray codes extensively for passing multi-bit count information between synchronous logic that operates at different clock frequencies. The logic is considered operating in different "clock domains". It is fundamental to the design of large chips that operate with many different clocking frequencies.
If a system has to cycle sequentially through all possible combinations of on-off states of some set of controls, and the changes of the controls require non-trivial expense (e.g. time, wear, human work), a Gray code minimizes the number of setting changes to just one change for each combination of states. An example would be testing a piping system for all combinations of settings of its manually operated valves.
A balanced Gray code, which flips every bit equally often, can be constructed. [ 52 ] Since the bit-flips are evenly distributed, this is optimal in the following way: balanced Gray codes minimize the maximal count of bit-flips for each digit.
George R. Stibitz had already utilized a reflected binary code in a binary pulse counting device in 1941. [ 11 ] [ 12 ] [ 13 ]
A typical use of Gray code counters is building a FIFO (first-in, first-out) data buffer that has read and write ports that exist in different clock domains. The input and output counters inside such a dual-port FIFO are often stored using Gray code to prevent invalid transient states from being captured when the count crosses clock domains. [ 53 ] The updated read and write pointers need to be passed between clock domains when they change, to be able to track FIFO empty and full status in each domain. Each bit of the pointers is sampled non-deterministically for this clock domain transfer. So for each bit, either the old value or the new value is propagated. Therefore, if more than one bit in the multi-bit pointer is changing at the sampling point, a "wrong" binary value (neither new nor old) can be propagated. By guaranteeing only one bit can be changing, Gray codes guarantee that the only possible sampled values are the new or old multi-bit value. Typically Gray codes of power-of-two length are used.
Sometimes digital buses in electronic systems are used to convey quantities that can only increase or decrease by one at a time, for example the output of an event counter which is being passed between clock domains or to a digital-to-analog converter. The advantage of Gray codes in these applications is that differences in the propagation delays of the many wires that represent the bits of the code cannot cause the received value to go through states that are out of the Gray code sequence. This is similar to the advantage of Gray codes in the construction of mechanical encoders; however, the source of the Gray code is an electronic counter in this case. The counter itself must count in Gray code, or, if the counter runs in binary, the output value from the counter must be reclocked after it has been converted to Gray code, because when a value is converted from binary to Gray code, [ nb 1 ] differences in the arrival times of the binary data bits at the binary-to-Gray conversion circuit can make the code pass briefly through states that are wildly out of sequence. Adding a clocked register after the circuit that converts the count value to Gray code may introduce a clock cycle of latency, so counting directly in Gray code may be advantageous. [ 54 ]
To produce the next count value in a Gray-code counter, it is necessary to have some combinational logic that will increment the current count value that is stored. One way to increment a Gray code number is to convert it into ordinary binary code, [ 55 ] add one to it with a standard binary adder, and then convert the result back to Gray code. [ 56 ] Other methods of counting in Gray code are discussed in a report by Robert W. Doran , including taking the output from the first latches of the master-slave flip flops in a binary ripple counter. [ 57 ]
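The convert, add one, convert back approach can be illustrated with a short C sketch (a software illustration only, not the combinational logic an actual hardware counter would use; the helper names are hypothetical):

#include <stdint.h>

/* Gray to binary: each binary bit is the XOR of that Gray bit and all
 * higher Gray bits. */
static uint32_t gray_to_binary(uint32_t gray)
{
    for (uint32_t mask = gray >> 1; mask != 0; mask >>= 1)
        gray ^= mask;
    return gray;
}

/* Next count value: convert to ordinary binary, add one with a standard
 * binary adder (here the C "+"), and convert the result back to Gray code. */
uint32_t gray_increment(uint32_t gray)
{
    uint32_t binary = gray_to_binary(gray) + 1;
    return binary ^ (binary >> 1);      /* binary to Gray */
}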
As the execution of program code typically causes an instruction memory access pattern of locally consecutive addresses, bus encodings using Gray code addressing instead of binary addressing can reduce the number of state changes of the address bits significantly, thereby reducing the CPU power consumption in some low-power designs. [ 58 ] [ 59 ]
The binary-reflected Gray code list for n bits can be generated recursively from the list for n − 1 bits by reflecting the list (i.e. listing the entries in reverse order), prefixing the entries in the original list with a binary 0 , prefixing the entries in the reflected list with a binary 1 , and then concatenating the original list with the reversed list. [ 13 ] For example, generating the n = 3 list from the n = 2 list:
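Starting from the 2-bit list 00, 01, 11, 10, the reflected list is 10, 11, 01, 00; prefixing the original entries with 0 gives 000, 001, 011, 010; prefixing the reflected entries with 1 gives 110, 111, 101, 100; and concatenating the two halves yields the 3-bit list 000, 001, 011, 010, 110, 111, 101, 100. A short C sketch of this reflect-and-prefix construction (a minimal illustration with a hypothetical helper and a small fixed maximum width, not an optimized generator) follows:

#include <stdio.h>
#include <string.h>

#define MAX_BITS 8   /* arbitrary small limit for this sketch */

/* Build the n-bit binary-reflected Gray code list by the reflect-and-prefix
 * rule: reflect the (n-1)-bit list, prefix the original half with 0 and the
 * reflected half with 1, and concatenate. */
void brgc(unsigned n, char list[][MAX_BITS + 1])
{
    unsigned len = 1;
    list[0][0] = '\0';                          /* G_0: one word of length zero */
    for (unsigned bit = 0; bit < n; bit++) {
        for (unsigned i = 0; i < len; i++) {    /* reflected half, prefixed with 1 */
            list[len + i][0] = '1';
            strcpy(&list[len + i][1], list[len - 1 - i]);
        }
        for (unsigned i = 0; i < len; i++) {    /* original half, prefixed with 0 */
            memmove(&list[i][1], list[i], strlen(list[i]) + 1);
            list[i][0] = '0';
        }
        len *= 2;
    }
}

int main(void)
{
    char list[1u << MAX_BITS][MAX_BITS + 1];
    brgc(3, list);
    for (unsigned i = 0; i < 8; i++)
        printf("%s\n", list[i]);                /* 000 001 011 010 110 111 101 100 */
    return 0;
}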
The one-bit Gray code is G 1 = ( 0,1 ). This can be thought of as built recursively as above from a zero-bit Gray code G 0 = ( Λ ) consisting of a single entry of zero length. This iterative process of generating G n +1 from G n makes the following properties of the standard reflecting code clear:
These characteristics suggest a simple and fast method of translating a binary value into the corresponding Gray code. Each bit is inverted if the next higher bit of the input value is set to one. This can be performed in parallel by a bit-shift and exclusive-or operation if they are available: the n th Gray code is obtained by computing n ⊕ ⌊ n 2 ⌋ {\displaystyle n\oplus \left\lfloor {\tfrac {n}{2}}\right\rfloor } . Prepending a 0 bit leaves the order of the code words unchanged, prepending a 1 bit reverses the order of the code words. If the bits at position i {\displaystyle i} of codewords are inverted, the order of neighbouring blocks of 2 i {\displaystyle 2^{i}} codewords is reversed. For example, if bit 0 is inverted in a 3 bit codeword sequence, the order of two neighbouring codewords is reversed
If bit 1 is inverted, blocks of 2 codewords change order:
If bit 2 is inverted, blocks of 4 codewords reverse order:
Thus, performing an exclusive or on a bit b i {\displaystyle b_{i}} at position i {\displaystyle i} with the bit b i + 1 {\displaystyle b_{i+1}} at position i + 1 {\displaystyle i+1} leaves the order of codewords intact if b i + 1 = 0 {\displaystyle b_{i+1}={\mathtt {0}}} , and reverses the order of blocks of 2 i + 1 {\displaystyle 2^{i+1}} codewords if b i + 1 = 1 {\displaystyle b_{i+1}={\mathtt {1}}} . Now, this is exactly the same operation as the reflect-and-prefix method to generate the Gray code.
A similar method can be used to perform the reverse translation, but the computation of each bit depends on the computed value of the next higher bit so it cannot be performed in parallel. Assuming g i {\displaystyle g_{i}} is the i {\displaystyle i} th Gray-coded bit ( g 0 {\displaystyle g_{0}} being the most significant bit), and b i {\displaystyle b_{i}} is the i {\displaystyle i} th binary-coded bit ( b 0 {\displaystyle b_{0}} being the most-significant bit), the reverse translation can be given recursively: b 0 = g 0 {\displaystyle b_{0}=g_{0}} , and b i = g i ⊕ b i − 1 {\displaystyle b_{i}=g_{i}\oplus b_{i-1}} . Alternatively, decoding a Gray code into a binary number can be described as a prefix sum of the bits in the Gray code, where each individual summation operation in the prefix sum is performed modulo two.
To construct the binary-reflected Gray code iteratively, at step 0 start with the c o d e 0 = 0 {\displaystyle \mathrm {code} _{0}={\mathtt {0}}} , and at step i > 0 {\displaystyle i>0} find the bit position of the least significant 1 in the binary representation of i {\displaystyle i} and flip the bit at that position in the previous code c o d e i − 1 {\displaystyle \mathrm {code} _{i-1}} to get the next code c o d e i {\displaystyle \mathrm {code} _{i}} . The bit positions start 0, 1, 0, 2, 0, 1, 0, 3, ... [ nb 2 ] See find first set for efficient algorithms to compute these values.
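For illustration, a small C sketch of this iterative stepping rule (using GCC/Clang's __builtin_ctz as the find-first-set primitive, an assumption about the toolchain) is:

#include <stdio.h>

/* Iterative construction: starting from code 0, step i flips the bit whose
 * position is the index of the least significant 1 in the binary
 * representation of i. */
int main(void)
{
    unsigned code = 0;
    printf("%2u: %u\n", 0u, code);
    for (unsigned i = 1; i < 16; i++) {
        code ^= 1u << __builtin_ctz(i);   /* code now equals i ^ (i >> 1) */
        printf("%2u: %u\n", i, code);
    }
    return 0;
}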
The following functions in C convert between binary numbers and their associated Gray codes. While it may seem that Gray-to-binary conversion requires each bit to be handled one at a time, faster algorithms exist. [ 60 ] [ 55 ] [ nb 1 ]
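The original listing is not reproduced in this text; a minimal sketch of such conversion functions, assuming 32-bit unsigned values, is:

#include <stdint.h>

/* Binary to Gray: the Gray code of num is num XOR (num >> 1); all bits are
 * handled in parallel by one shift and one exclusive-or. */
uint32_t binary_to_gray(uint32_t num)
{
    return num ^ (num >> 1);
}

/* Gray to binary: each binary bit is the XOR of the corresponding Gray bit
 * with all higher Gray bits, computed here one shift at a time. */
uint32_t gray_to_binary(uint32_t num)
{
    for (uint32_t mask = num >> 1; mask != 0; mask >>= 1)
        num ^= mask;
    return num;
}

For 32-bit values the Gray-to-binary loop can also be unrolled logarithmically ( num ^= num >> 1; num ^= num >> 2; num ^= num >> 4; num ^= num >> 8; num ^= num >> 16; ), which is one commonly used faster variant.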
On newer processors, the number of ALU instructions in the decoding step can be reduced by taking advantage of the CLMUL instruction set . If MASK is the constant binary string of ones ending with a single zero digit, then carryless multiplication of MASK with the Gray encoding of x will always give either x or its bitwise negation.
In practice, "Gray code" almost always refers to a binary-reflected Gray code (BRGC). However, mathematicians have discovered other kinds of Gray codes. Like BRGCs, each consists of a list of words, where each word differs from the next in only one digit (each word has a Hamming distance of 1 from the next word).
It is possible to construct binary Gray codes with n bits with a length of less than 2^n , if the length is even. One possibility is to start with a balanced Gray code and remove pairs of values either at the beginning and the end, or in the middle. [ 61 ] OEIS sequence A290772 [ 62 ] gives the number of possible Gray sequences of length 2 n that include zero and use the minimum number of bits.
The following list maps each three-digit ternary number to its ternary reflected Gray code:
0 → 000   1 → 001   2 → 002
10 → 012   11 → 011   12 → 010
20 → 020   21 → 021   22 → 022
100 → 122   101 → 121   102 → 120
110 → 110   111 → 111   112 → 112
120 → 102   121 → 101   122 → 100
200 → 200   201 → 201   202 → 202
210 → 212   211 → 211   212 → 210
220 → 220   221 → 221   222 → 222
There are many specialized types of Gray codes other than the binary-reflected Gray code. One such type of Gray code is the n -ary Gray code , also known as a non-Boolean Gray code . As the name implies, this type of Gray code uses non- Boolean values in its encodings.
For example, a 3-ary ( ternary ) Gray code would use the values 0,1,2. [ 31 ] The ( n , k )- Gray code is the n -ary Gray code with k digits. [ 63 ] The sequence of elements in the (3, 2)-Gray code is: 00,01,02,12,11,10,20,21,22. The ( n , k )-Gray code may be constructed recursively, as the BRGC, or may be constructed iteratively . An algorithm to iteratively generate the ( N , k )-Gray code is presented (in C ):
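The original C listing is not reproduced in this text. A minimal sketch of one iterative encoder (hypothetical names; it produces the cyclic "modular" variant, in which a digit may wrap from n − 1 to 0, as discussed in the next paragraph) is:

#include <stdio.h>

/* Convert value into a k-digit base-n Gray code; gray[0] holds the least
 * significant digit.  At most 32 digits are assumed in this sketch. */
void to_gray(unsigned base, unsigned digits, unsigned value, unsigned gray[])
{
    unsigned baseN[32];
    unsigned i, shift = 0;

    for (i = 0; i < digits; i++) {       /* extract the ordinary base-n digits */
        baseN[i] = value % base;
        value /= base;
    }
    while (i--) {                        /* most significant digit first */
        gray[i] = (baseN[i] + shift) % base;
        shift += base - gray[i];         /* offset applied to all lower digits */
    }
}

int main(void)
{
    unsigned g[2];
    for (unsigned v = 0; v < 9; v++) {   /* a cyclic (3,2)-Gray sequence */
        to_gray(3, 2, v, g);
        printf("%u%u\n", g[1], g[0]);
    }
    return 0;
}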
There are other Gray code algorithms for ( n , k )-Gray codes. The ( n , k )-Gray code produced by the above algorithm is always cyclical; some algorithms, such as that by Guan, [ 63 ] lack this property when k is odd. On the other hand, while only one digit at a time changes with this method, it can change by wrapping (looping from n − 1 to 0). In Guan's algorithm, the count alternately rises and falls, so that the numeric difference between two Gray code digits is always one.
Gray codes are not uniquely defined, because a permutation of the columns of such a code is a Gray code too. The above procedure produces a code in which the lower the significance of a digit, the more often it changes, making it similar to normal counting methods.
See also Skew binary number system , a variant ternary number system where at most two digits change on each increment, as each increment can be done with at most one digit carry operation.
Although the binary reflected Gray code is useful in many scenarios, it is not optimal in certain cases because of a lack of "uniformity". [ 52 ] In balanced Gray codes , the number of changes in different coordinate positions are as close as possible. To make this more precise, let G be an R -ary complete Gray cycle having transition sequence ( δ k ) {\displaystyle (\delta _{k})} ; the transition counts ( spectrum ) of G are the collection of integers defined by
λ k = | { j ∈ Z R n : δ j = k } | , for k ∈ Z n {\displaystyle \lambda _{k}=|\{j\in \mathbb {Z} _{R^{n}}:\delta _{j}=k\}|\,,{\text{ for }}k\in \mathbb {Z} _{n}}
A Gray code is uniform or uniformly balanced if its transition counts are all equal, in which case we have λ k = R n n {\displaystyle \lambda _{k}={\tfrac {R^{n}}{n}}} for all k . Clearly, when R = 2 {\displaystyle R=2} , such codes exist only if n is a power of 2. [ 64 ] If n is not a power of 2, it is possible to construct well-balanced binary codes where the difference between two transition counts is at most 2; so that (combining both cases) every transition count is either 2 ⌊ 2 n 2 n ⌋ {\displaystyle 2\left\lfloor {\tfrac {2^{n}}{2n}}\right\rfloor } or 2 ⌈ 2 n 2 n ⌉ {\displaystyle 2\left\lceil {\tfrac {2^{n}}{2n}}\right\rceil } . [ 52 ] Gray codes can also be exponentially balanced if all of their transition counts are adjacent powers of two, and such codes exist for every power of two. [ 65 ]
For example, a balanced 4-bit Gray code has 16 transitions, which can be evenly distributed among all four positions (four transitions per position), making it uniformly balanced: [ 52 ]
whereas a balanced 5-bit Gray code has a total of 32 transitions, which cannot be evenly distributed among the positions. In this example, four positions have six transitions each, and one has eight: [ 52 ]
We will now show a construction [ 66 ] and implementation [ 67 ] for well-balanced binary Gray codes which allows us to generate an n -digit balanced Gray code for every n . The main principle is to inductively construct an ( n + 2)-digit Gray code G ′ {\displaystyle G'} given an n -digit Gray code G in such a way that the balanced property is preserved. To do this, we consider partitions of G = g 0 , … , g 2 n − 1 {\displaystyle G=g_{0},\ldots ,g_{2^{n}-1}} into an even number L of non-empty blocks of the form
{ g 0 } , { g 1 , … , g k 2 } , { g k 2 + 1 , … , g k 3 } , … , { g k L − 2 + 1 , … , g − 2 } , { g − 1 } {\displaystyle \left\{g_{0}\right\},\left\{g_{1},\ldots ,g_{k_{2}}\right\},\left\{g_{k_{2}+1},\ldots ,g_{k_{3}}\right\},\ldots ,\left\{g_{k_{L-2}+1},\ldots ,g_{-2}\right\},\left\{g_{-1}\right\}}
where k 1 = 0 {\displaystyle k_{1}=0} , k L − 1 = − 2 {\displaystyle k_{L-1}=-2} , and k L ≡ − 1 ( mod 2 n ) {\displaystyle k_{L}\equiv -1{\pmod {2^{n}}}} ). This partition induces an ( n + 2 ) {\displaystyle (n+2)} -digit Gray code given by
If we define the transition multiplicities
m i = | { j : δ k j = i , 1 ≤ j ≤ L } | {\displaystyle m_{i}=\left|\left\{j:\delta _{k_{j}}=i,1\leq j\leq L\right\}\right|}
to be the number of times the digit in position i changes between consecutive blocks in a partition, then for the ( n + 2)-digit Gray code induced by this partition the transition spectrum λ i ′ {\displaystyle \lambda '_{i}} is
λ i ′ = { 4 λ i − 2 m i , if 0 ≤ i < n L , otherwise {\displaystyle \lambda '_{i}={\begin{cases}4\lambda _{i}-2m_{i},&{\text{if }}0\leq i<n\\L,&{\text{ otherwise }}\end{cases}}}
The delicate part of this construction is to find an adequate partitioning of a balanced n -digit Gray code such that the code induced by it remains balanced, but for this only the transition multiplicities matter; joining two consecutive blocks over a digit i {\displaystyle i} transition and splitting another block at another digit i {\displaystyle i} transition produces a different Gray code with exactly the same transition spectrum λ i ′ {\displaystyle \lambda '_{i}} , so one may for example [ 65 ] designate the first m i {\displaystyle m_{i}} transitions at digit i {\displaystyle i} as those that fall between two blocks. Uniform codes can be found when R ≡ 0 ( mod 4 ) {\displaystyle R\equiv 0{\pmod {4}}} and R n ≡ 0 ( mod n ) {\displaystyle R^{n}\equiv 0{\pmod {n}}} , and this construction can be extended to the R -ary case as well. [ 66 ]
Long run (or maximum gap ) Gray codes maximize the distance between consecutive changes of digits in the same position. That is, the minimum run-length of any bit remains unchanged for as long as possible. [ 68 ]
Monotonic codes are useful in the theory of interconnection networks, especially for minimizing dilation for linear arrays of processors. [ 69 ] If we define the weight of a binary string to be the number of 1s in the string, then although we clearly cannot have a Gray code with strictly increasing weight, we may want to approximate this by having the code run through two adjacent weights before reaching the next one.
We can formalize the concept of monotone Gray codes as follows: consider the partition of the hypercube Q n = ( V n , E n ) {\displaystyle Q_{n}=(V_{n},E_{n})} into levels of vertices that have equal weight, i.e.
V n ( i ) = { v ∈ V n : v has weight i } {\displaystyle V_{n}(i)=\{v\in V_{n}:v{\text{ has weight }}i\}}
for 0 ≤ i ≤ n {\displaystyle 0\leq i\leq n} . These levels satisfy | V n ( i ) | = ( n i ) {\displaystyle |V_{n}(i)|=\textstyle {\binom {n}{i}}} . Let Q n ( i ) {\displaystyle Q_{n}(i)} be the subgraph of Q n {\displaystyle Q_{n}} induced by V n ( i ) ∪ V n ( i + 1 ) {\displaystyle V_{n}(i)\cup V_{n}(i+1)} , and let E n ( i ) {\displaystyle E_{n}(i)} be the edges in Q n ( i ) {\displaystyle Q_{n}(i)} . A monotonic Gray code is then a Hamiltonian path in Q n {\displaystyle Q_{n}} such that whenever δ 1 ∈ E n ( i ) {\displaystyle \delta _{1}\in E_{n}(i)} comes before δ 2 ∈ E n ( j ) {\displaystyle \delta _{2}\in E_{n}(j)} in the path, then i ≤ j {\displaystyle i\leq j} .
An elegant construction of monotonic n -digit Gray codes for any n is based on the idea of recursively building subpaths P n , j {\displaystyle P_{n,j}} of length 2 ( n j ) {\displaystyle 2\textstyle {\binom {n}{j}}} having edges in E n ( j ) {\displaystyle E_{n}(j)} . [ 69 ] We define P 1 , 0 = ( 0 , 1 ) {\displaystyle P_{1,0}=({\mathtt {0}},{\mathtt {1}})} , P n , j = ∅ {\displaystyle P_{n,j}=\emptyset } whenever j < 0 {\displaystyle j<0} or j ≥ n {\displaystyle j\geq n} , and
P n + 1 , j = 1 P n , j − 1 π n , 0 P n , j {\displaystyle P_{n+1,j}={\mathtt {1}}P_{n,j-1}^{\pi _{n}},{\mathtt {0}}P_{n,j}}
otherwise. Here, π n {\displaystyle \pi _{n}} is a suitably defined permutation and P π {\displaystyle P^{\pi }} refers to the path P with its coordinates permuted by π {\displaystyle \pi } . These paths give rise to two monotonic n -digit Gray codes G n ( 1 ) {\displaystyle G_{n}^{(1)}} and G n ( 2 ) {\displaystyle G_{n}^{(2)}} given by
G n ( 1 ) = P n , 0 P n , 1 R P n , 2 P n , 3 R ⋯ and G n ( 2 ) = P n , 0 R P n , 1 P n , 2 R P n , 3 ⋯ {\displaystyle G_{n}^{(1)}=P_{n,0}P_{n,1}^{R}P_{n,2}P_{n,3}^{R}\cdots {\text{ and }}G_{n}^{(2)}=P_{n,0}^{R}P_{n,1}P_{n,2}^{R}P_{n,3}\cdots }
The choice of π n {\displaystyle \pi _{n}} which ensures that these codes are indeed Gray codes turns out to be π n = E − 1 ( π n − 1 2 ) {\displaystyle \pi _{n}=E^{-1}\left(\pi _{n-1}^{2}\right)} . The first few values of P n , j {\displaystyle P_{n,j}} are shown in the table below.
These monotonic Gray codes can be efficiently implemented in such a way that each subsequent element can be generated in O ( n ) time. The algorithm is most easily described using coroutines .
Monotonic codes have an interesting connection to the Lovász conjecture , which states that every connected vertex-transitive graph contains a Hamiltonian path. The "middle-level" subgraph Q 2 n + 1 ( n ) {\displaystyle Q_{2n+1}(n)} is vertex-transitive (that is, its automorphism group is transitive, so that each vertex has the same "local environment" and cannot be differentiated from the others, since we can relabel the coordinates as well as the binary digits to obtain an automorphism ) and the problem of finding a Hamiltonian path in this subgraph is called the "middle-levels problem", which can provide insights into the more general conjecture. The question has been answered affirmatively for n ≤ 15 {\displaystyle n\leq 15} , and the preceding construction for monotonic codes ensures a Hamiltonian path of length at least 0.839 N , where N is the number of vertices in the middle-level subgraph. [ 70 ]
Another type of Gray code, the Beckett–Gray code , is named for Irish playwright Samuel Beckett , who was interested in symmetry . His play " Quad " features four actors and is divided into sixteen time periods. Each period ends with one of the four actors entering or leaving the stage. The play begins and ends with an empty stage, and Beckett wanted each subset of actors to appear on stage exactly once. [ 71 ] Clearly the set of actors currently on stage can be represented by a 4-bit binary Gray code. Beckett, however, placed an additional restriction on the script: he wished the actors to enter and exit so that the actor who had been on stage the longest would always be the one to exit. The actors could then be represented by a first in, first out queue , so that (of the actors onstage) the actor being dequeued is always the one who was enqueued first. [ 71 ] Beckett was unable to find a Beckett–Gray code for his play, and indeed, an exhaustive listing of all possible sequences reveals that no such code exists for n = 4. It is known today that such codes do exist for n = 2, 5, 6, 7, and 8, and do not exist for n = 3 or 4. An example of an 8-bit Beckett–Gray code can be found in Donald Knuth 's Art of Computer Programming . [ 13 ] According to Sawada and Wong, the search space for n = 6 can be explored in 15 hours, and more than 9500 solutions for the case n = 7 have been found. [ 72 ]
Snake-in-the-box codes, or snakes , are the sequences of nodes of induced paths in an n -dimensional hypercube graph , and coil-in-the-box codes, [ 73 ] or coils , are the sequences of nodes of induced cycles in a hypercube. Viewed as Gray codes, these sequences have the property of being able to detect any single-bit coding error. Codes of this type were first described by William H. Kautz in the late 1950s; [ 5 ] since then, there has been much research on finding the code with the largest possible number of codewords for a given hypercube dimension.
Yet another kind of Gray code is the single-track Gray code (STGC) developed by Norman B. Spedding [ 74 ] [ 75 ] and refined by Hiltgen, Paterson and Brandestini in Single-track Gray Codes (1996). [ 76 ] [ 77 ] The STGC is a cyclical list of P unique binary encodings of length n such that two consecutive words differ in exactly one position, and when the list is examined as a P × n matrix , each column is a cyclic shift of the first column. [ 78 ]
The name comes from their use with rotary encoders , where a number of tracks are being sensed by contacts, resulting for each in an output of 0 or 1 . To reduce noise due to different contacts not switching at exactly the same moment in time, one preferably sets up the tracks so that the data output by the contacts are in Gray code. To get high angular accuracy, one needs lots of contacts; in order to achieve at least 1° accuracy, one needs at least 360 distinct positions per revolution, which requires a minimum of 9 bits of data, and thus the same number of contacts.
If all contacts are placed at the same angular position, then 9 tracks are needed to get a standard BRGC with at least 1° accuracy. However, if the manufacturer moves a contact to a different angular position (but at the same distance from the center shaft), then the corresponding "ring pattern" needs to be rotated the same angle to give the same output. If the most significant bit (the inner ring in Figure 1) is rotated enough, it exactly matches the next ring out. Since both rings are then identical, the inner ring can be cut out, and the sensor for that ring moved to the remaining, identical ring (but offset at that angle from the other sensor on that ring). Those two sensors on a single ring make a quadrature encoder. That reduces the number of tracks for a "1° resolution" angular encoder to 8 tracks. Reducing the number of tracks still further cannot be done with BRGC.
For many years, Torsten Sillke [ 79 ] and other mathematicians believed that it was impossible to encode position on a single track such that consecutive positions differed at only a single sensor, except for the 2-sensor, 1-track quadrature encoder. So for applications where 8 tracks were too bulky, people used single-track incremental encoders (quadrature encoders) or 2-track "quadrature encoder + reference notch" encoders.
Norman B. Spedding, however, registered a patent in 1994 with several examples showing that it was possible. [ 74 ] Although it is not possible to distinguish 2^n positions with n sensors on a single track, it is possible to distinguish close to that many. Etzion and Paterson conjecture that when n is itself a power of 2, n sensors can distinguish at most 2^n − 2n positions and that for prime n the limit is 2^n − 2 positions. [ 80 ] The authors went on to generate a 504-position single track code of length 9 which they believe is optimal. Since this number is larger than 2^8 = 256, more than 8 sensors are required by any code, although a BRGC could distinguish 512 positions with 9 sensors.
An STGC for P = 30 and n = 5 is reproduced here:
Each column is a cyclic shift of the first column, and from any row to the next row only one bit changes. [ 81 ] The single-track nature (like a code chain) is useful in the fabrication of these wheels (compared to BRGC), as only one track is needed, thus reducing their cost and size.
The Gray code nature is useful (compared to chain codes , also called De Bruijn sequences ), as only one sensor will change at any one time, so the uncertainty during a transition between two discrete states will only be plus or minus one unit of angular measurement the device is capable of resolving. [ 82 ]
Since this 30 degree example was added, there has been a lot of interest in examples with higher angular resolution. In 2008, Gary Williams, [ 83 ] [ user-generated source? ] based on previous work, [ 80 ] discovered a 9-bit single track Gray code that gives a 1 degree resolution. This Gray code was used to design an actual device which was published on the site Thingiverse . This device [ 84 ] was designed by etzenseep (Florian Bauer) in September 2022.
An STGC for P = 360 and n = 9 is reproduced here:
Two-dimensional Gray codes are used in communication to minimize the number of bit errors in quadrature amplitude modulation (QAM) adjacent points in the constellation . In a typical encoding the horizontal and vertical adjacent constellation points differ by a single bit, and diagonal adjacent points differ by 2 bits. [ 85 ]
Two-dimensional Gray codes also have uses in location identifications schemes, where the code would be applied to area maps such as a Mercator projection of the earth's surface and an appropriate cyclic two-dimensional distance function such as the Mannheim metric be used to calculate the distance between two encoded locations, thereby combining the characteristics of the Hamming distance with the cyclic continuation of a Mercator projection. [ 86 ]
If a subsection of a specific code value is extracted from that value, for example the last 3 bits of a 4-bit Gray code, the resulting code will be an "excess Gray code". This code shows the property of counting backwards in those extracted bits if the original value is increased further. The reason for this is that Gray-encoded values do not exhibit the overflow behaviour, known from classic binary encoding, when they are increased past the "highest" value.
Example: The highest 3-bit Gray code, 7, is encoded as (0)100. Adding 1 results in number 8, encoded in Gray as 1100. The last 3 bits do not overflow and count backwards if you further increase the original 4 bit code.
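A small C sketch illustrating this effect (printing the 4-bit Gray code and its extracted low 3 bits as the underlying count passes 7) might be:

#include <stdio.h>

/* As n increases past 7, the low 3 bits of the 4-bit Gray code run
 * backwards through the 3-bit Gray sequence instead of overflowing. */
int main(void)
{
    for (unsigned n = 7; n <= 15; n++) {
        unsigned gray = n ^ (n >> 1);
        printf("n = %2u  gray = %u%u%u%u  low 3 bits = %u%u%u\n", n,
               (gray >> 3) & 1u, (gray >> 2) & 1u, (gray >> 1) & 1u, gray & 1u,
               (gray >> 2) & 1u, (gray >> 1) & 1u, gray & 1u);
    }
    return 0;
}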
When working with sensors that output multiple Gray-encoded values in a serial fashion, one should therefore pay attention to whether the sensor produces those multiple values encoded in one single Gray code or as separate ones, as otherwise the values might appear to be counting backwards when an "overflow" is expected.
The bijective mapping { 0 ↔ 00 , 1 ↔ 01 , 2 ↔ 11 , 3 ↔ 10 } establishes an isometry between the metric space over the finite field Z 2 2 {\displaystyle \mathbb {Z} _{2}^{2}} with the metric given by the Hamming distance and the metric space over the finite ring Z 4 {\displaystyle \mathbb {Z} _{4}} (the usual modular arithmetic ) with the metric given by the Lee distance . The mapping is suitably extended to an isometry of the Hamming spaces Z 2 2 m {\displaystyle \mathbb {Z} _{2}^{2m}} and Z 4 m {\displaystyle \mathbb {Z} _{4}^{m}} . Its importance lies in establishing a correspondence between various "good" but not necessarily linear codes as Gray-map images in Z 2 2 {\displaystyle \mathbb {Z} _{2}^{2}} of ring-linear codes from Z 4 {\displaystyle \mathbb {Z} _{4}} . [ 87 ] [ 88 ]
There are a number of binary codes similar to Gray codes, including:
The following binary-coded decimal (BCD) codes are Gray code variants as well: | https://en.wikipedia.org/wiki/Hoklas_code |
The Holarctic realm is a biogeographic realm that comprises the majority of habitats found throughout the continents in the Northern Hemisphere . It corresponds to the floristic Boreal Kingdom . It includes both the Nearctic zoogeographical region (which covers most of North America ) and Alfred Wallace 's Palearctic zoogeographical region (which covers North Africa and all of Eurasia except for Southeast Asia , the Indian subcontinent , and the southern Arabian Peninsula ).
These regions are further subdivided into a variety of ecoregions . Many ecosystems and the animal and plant communities that depend on them extend across a number of continents and cover large portions of the Holarctic realm. This continuity is the result of those regions’ shared glacial history.
Within the Holarctic realm, there are a variety of ecosystems. The type of ecosystem found in a given area depends on its latitude and the local geography. In the far north, a band of Arctic tundra encircles the shore of the Arctic Ocean . The ground beneath this land is permafrost (frozen year-round). In these difficult growing conditions, few plants can survive. South of the tundra, the boreal forest stretches across North America and Eurasia. This land is characterized by coniferous trees . Further south, the ecosystems become more diverse. Some areas are temperate grassland , while others are temperate forests dominated by deciduous trees . Many of the southernmost parts of the Holarctic are deserts , which are dominated by plants and animals adapted to the dry conditions. [ 1 ]
A variety of animal species are distributed across continents, throughout much of the Holarctic realm. These include the brown bear, grey wolf, red fox, wolverine, moose, caribou, golden eagle and common raven.
The brown bear ( Ursus arctos ) is found in mountainous and semi-open areas distributed throughout the Holarctic. It once occupied much larger areas, but has been driven out by human development and the resulting habitat fragmentation . Today it is only found in remaining wilderness areas.
The grey wolf ( Canis lupus ) is found in a wide variety of habitats from tundra to desert, with different populations adapted for each. Its historical distribution encompasses the vast majority of the Holarctic realm, though human activities such as development and active extermination have extirpated the species from much of this range.
The red fox ( Vulpes vulpes ) is a highly adaptable predator. It has the widest distribution of any terrestrial carnivore, and is adapted to a wide range of habitats, including areas of intense human development. Like the wolf, it is distributed throughout the majority of the Holarctic, but it has avoided extirpation.
The wolverine ( Gulo gulo ) is a large member of the weasel family found primarily in the arctic and in boreal forests, ranging south in mountainous regions. It is distributed in such areas throughout Eurasia and North America.
The moose ( Alces alces ) is the largest member of the deer family. It is found throughout most of the boreal forest through continental Eurasia into Scandinavia, eastern North America, and boreal and montane regions of western North America. In some areas it ranges south into the deciduous forest.
The caribou, or reindeer ( Rangifer tarandus ) is found in boreal forest and tundra in the northern parts of the Holarctic. In Eurasia, it has been domesticated. It is divided into several subspecies, which are adapted to different habitats and geographic areas.
The golden eagle ( Aquila chrysaetos ) is one of the best-known birds of prey in the Northern Hemisphere. It is the most widely distributed species of eagle. Golden eagles use their agility and speed combined with powerful feet and massive, sharp talons to snatch up a variety of prey (mainly hares, rabbits, marmots and other ground squirrels).
The common raven ( Corvus corax ) is the most widespread of the corvids , and one of the largest. It is found in a variety of habitats, but primarily wooded northern areas. It has been known to adapt well to areas of human activity. Their distribution also makes up most of the Holarctic realm.
Leptothorax acervorum is a small red Holarctic ant widely distributed across Eurasia, ranging from central Spain and Italy to the northernmost parts of Scandinavia and Siberia.
Zygiella x-notata is a species of orb-weaving spider with a Holarctic distribution, mostly inhabiting urban and suburban regions of Europe and parts of North America.
Hemerobius humulinus is a species of brown lacewing in the family Hemerobiidae. It is found in Europe and Northern Asia (excluding China), North America, and Southern Asia. [ 2 ]
The continuity of the northern parts of the Holarctic results from their shared glacial history. During the Pleistocene (Ice Age), these areas were subjected to repeated glaciations. Icecaps expanded, scouring the land of life and reshaping its topography. During glacial periods, species survived in refugia , small areas that maintained a suitable climate due to local geography. These areas are believed to have been primarily in southern regions, but some genetic and paleontological evidence points to additional refugia in the sheltered areas of the north. [ 3 ]
Wherever these areas were found, they became source populations during interglacial periods . When the glaciers receded, plants and animals spread rapidly into the newly opened areas. Different taxa responded to these rapidly changing conditions in different ways. Tree species spread outward from refugia during interglacial periods, but in varied patterns, with different trees dominating in different periods. [ 4 ] Insects , on the other hand, shifted their ranges with the climate, maintaining consistency in species for the most part throughout the period. [ 5 ] Their high degree of mobility allowed them to move as the glaciers advanced or retreated, maintaining a constant habitat despite the climatic oscillations . Despite their apparent lack of mobility, plants managed to colonize new areas rapidly as well. Studies of fossil pollen indicate that trees recolonized these lands at an exponential rate. [ 6 ] Mammals recolonized at varying rates. Brown bears, for instance, moved quickly from refugia with the receding glaciers, becoming one of the first large mammals to recolonize the land. [ 7 ] The Last Glacial Period ended about 10,000 years ago, resulting in the present distribution of ecoregions.
Another factor contributing to the continuity of Holarctic ecosystems is the movement between continents allowed by the Bering land bridge , which was exposed by the lowering of sea level due to the expansion of the ice caps. The communities found in the Palearctic and the Nearctic are different, but have many species in common. This is the result of several faunal interchanges that took place across the Bering land bridge. However, these migrations were mostly limited to large, cold-tolerant species. [ 8 ] Today it is mainly these species which are found throughout the realm.
As the Holarctic is an enormous area, it is subject to environmental problems of international scale. The primary threats throughout the region result from global warming and habitat fragmentation . [ citation needed ] The former is of particular concern in the north, as these ecosystems are adapted to cold. The latter is more of a concern in the south, where development is prevalent.
Global warming is a threat to all the Earth's ecosystems, but it is a more immediate threat to those found in cold climates. The communities of species found at these latitudes are adapted to the cold, so any significant warming can upset the balance. For instance, insects struggle to survive the cold winters typical of the boreal forest. Many do not make it, especially in harsh winters. However, recently the winters have grown milder, which has had a drastic effect on the forest. Winter mortality of some insect species drastically decreased, allowing the population to build on itself in subsequent years. In some areas the effects have been severe. Spruce beetle outbreaks have wiped out up to ninety percent of the Kenai Peninsula 's spruce trees; this is blamed primarily on a series of unusually warm years since 1987. [ 9 ]
In this case a native species has caused massive disturbance of habitat as a result of climate change. Warming temperatures may also allow pest species to enlarge their range, moving into habitats that were previously unsuitable. Studies of potential areas for outbreaks of bark beetles indicate that as the climate shifts, these beetles will expand to the north and to higher elevations than they have previously affected. [ 10 ] With warmer temperatures, insect infestation will become a greater problem throughout the northern parts of the Holarctic.
Another potential effect of global warming to northern ecosystems is the melting of permafrost . This can have significant effects on the plant communities that are adapted to the frozen soil, and may also have implications for further climate change. As permafrost melts, any trees growing above it may die, and the land shifts from forest to peatland . In the far north, shrubs may later take over what was formerly tundra. The precise effect depends on whether the water that was locked up is able to drain off. In either case, the habitat will undergo a shift. Melting permafrost may also accelerate climate change in the future. Within the permafrost, vast quantities of carbon are locked up. If this soil melts, the carbon may be released into the air as either carbon dioxide or methane . Both of these are greenhouse gases . [ 11 ]
Habitat fragmentation threatens a wide variety of habitats throughout the world, and the Holarctic is no exception. Fragmentation has a variety of negative effects on populations. As populations become cut off, their genetic diversity suffers and they become susceptible to sudden disasters and extinction. While the northern parts of the Holarctic represent some of the largest areas of wilderness left on Earth, the southern parts are in some places extensively developed. This realm contains most of the world's developed countries , including the United States and the nations of Western Europe. Temperate forests were the primary ecosystem in many of the most developed areas today. These lands are now used for intensive agriculture or have become urbanized. As lands have been developed for agricultural uses and human occupation, natural habitat has for the most part become limited to areas considered unsuitable for human use, such as slopes or rocky areas. [ 12 ] This pattern of development limits the ability of animals, especially large ones, to migrate from place to place.
Large carnivores are particularly affected by habitat fragmentation. These mammals, such as brown bears and wolves, require large areas of land with relatively intact habitat to survive as individuals. Much larger areas are required to maintain a sustainable population. They may also serve as keystone species , regulating the populations of the species they prey on. Thus, their conservation has direct implications for a wide range of species, and is difficult to accomplish politically due to the large size of the areas they need. [ 13 ] With increasing development, these species in particular are at risk, which could have effects that carry down throughout the ecosystem.
The threats to the Holarctic realm are not going unrecognized. Many efforts are being made to mitigate these threats, with the hope of preserving the biodiversity of the region. International agreements to combat global warming may help to lessen the effects of climate change on this region. Efforts are also underway to fight habitat fragmentation, both on local and regional scales.
The most comprehensive effort to combat global warming to date is the Kyoto Protocol . Developed countries that sign this protocol agree to cut their collective greenhouse gas emissions to five percent below 1990 levels during the period from 2008 to 2012. The vast majority of these nations are found within the Holarctic. Each country is given a target for emission levels, and they may trade emissions credits in a market-based system that includes developing countries as well. Once this period is ended, a new agreement will be written to further mitigate the effects of climate change . The process of drafting a new agreement has already begun. In late 2007, an international meeting in Bali was held to begin planning for the successor to the Kyoto Protocol. This agreement will aim to build on the successes and failures of Kyoto to produce a more effective method of cutting greenhouse gas emissions ( UNFCCC ). If these efforts are successful, the biodiversity of the Holarctic and the rest of the world will see fewer effects of climate change.
Fighting habitat fragmentation is a major challenge in conserving the wide-ranging species of the Holarctic. Some efforts are limited to a local scale of protection, while others are regional in scope. Local efforts include creating reserves and establishing safe routes for animals to cross roads and other human-made barriers. Regional efforts to combat habitat fragmentation take a broader scope.
One major such effort in the Holarctic is the Yellowstone to Yukon Conservation Initiative . This organization was started in 1997 to help establish a contiguous network of protection for the northern Rocky Mountains , from mid Wyoming to the border between Alaska and Canada's Yukon . It brings together a wide variety of environmental organizations for a shared purpose. The goal of the Initiative is to create a core of protected areas, connected by corridors and surrounded by buffer zones. This will build on the many existing protected areas in this region, with a focus on integrating existing and future human activities into the conservation plan rather than seeking to exclude them (Yellowstone to Yukon). If these efforts are successful, they will be especially beneficial to wide-ranging species such as grizzly bears . If these species can survive, other members of the communities they live in will survive as well. | https://en.wikipedia.org/wiki/Holarctic_realm |
The Holborn 9100 was a personal computer introduced in 1981 by a small Dutch company called Holborn [ nl ] . Only around 200 of these devices were sold, with Holborn going into bankruptcy in 1983.
The Holborn 9100 was released in 1981 by Dutch computer manufacturer Holborn. The appearance was designed by an industrial design office named Studio Vos and developed by Hans Polak and Henny Beavers. The name comes from the phrase "born in Holland." There were two main versions, a larger one with a proprietary multi-user operating system named Holborn OS with support for a light pen , and a smaller machine with the operating system CP/M and no light pen support. These versions were split into four different models released: the 9100, 7100, 6500, and 6100. The systems consist primarily of a combined desktop monitor , terminal , and keyboard with an external unit housing two 8 inch floppy drives and an optional hard drive . The 6500 series had a detachable keyboard. [ 1 ] The computer was primarily marketed to small companies for administration and bookkeeping purposes. [ 2 ] The lower-numbered models were produced after the release of the 9100. [ 1 ] Around 200 machines were produced, with 50 of them being the larger models. [ 2 ] Due to the system's high price at the time of 30,000 guilders [ 3 ] or US$10,000 (equivalent to $30,591 in 2023) and competition from IBM , the system was a commercial failure and caused the manufacturer to go bankrupt in 1983. The most popular version was the 6100. [ 4 ] It is estimated that around 20 systems survive. [ 1 ]
Holborn OS (HOS) was a multi-user operating system developed by Holborn. It was stored entirely on ROM and used floppy disks to start applications. [ 5 ] The application software was menu driven and could be controlled by a light pen to aid use by technically unskilled users. [ 6 ]
Holborn Computers was founded in 1979 by Hans Polak and Dick Gerdzen. The goal of the company was to supply computers based on microprocessor technology to specific types of retailers. [ 6 ] The company saw a market for cheaper computer systems as IBM computers were expensive at the time. Henny Beavers joined soon afterwards. The company aimed to design and manufacture the systems as quickly as possible to get them out in the market. Once the Holborn 9100 was released in 1981, they marketed it using displays at trade shows and advertisements. Due to competition from more popular IBM computers, Holborn remained dependent on financial aid from the Overijssel development company OOM. [ 5 ] OOM could not supply enough financial resources and Gerdzen had to leave the company, causing the company to go bankrupt on 27 April 1983. They ultimately considered the 9100 a failure. [ 4 ]
The Holborn 9100 is noted for its design, being referred to as "something out of a retro-futuristic movie" by cybernews. [ 7 ] Inexhibit noted that "The design of the Holborn set it apart from most computers of the time," calling it futuristic and organic. [ 1 ] | https://en.wikipedia.org/wiki/Holborn_9100 |
The Holbrook Superconductor Project is the world's first production superconducting transmission power cable. [ 1 ] The lines were commissioned in 2008. [ 2 ] The suburban Long Island electrical substation is fed by a 600 meter long tunnel containing approximately 155,000 meters of high-temperature superconductor wire manufactured by American Superconductor , installed underground and chilled to superconducting temperature with liquid nitrogen . [ 3 ]
The project was funded by the United States Department of Energy , and operates as part of the Long Island Power Authority (LIPA) power grid. [ 1 ] The project team comprised American Superconductor, Nexans , Air Liquide and LIPA. It broke ground on July 4, 2006, was first energized April 22, 2008, and was commissioned on June 25, 2008. [ 4 ] Between commissioning and March 2009 refrigeration events impacted normal operation. [ 4 ]
The superconductor is bismuth strontium calcium copper oxide (BSCCO) which superconducts at liquid nitrogen temperatures. Other parts of the system include a 13,000 U.S. gallons (49,000 L) liquid nitrogen storage tank, a Brayton cycle Helium refrigerator , and a number of cryostats which manage the transition between cryogenic and ambient temperatures . [ 1 ] The system capacity is 574 MVA with an operating voltage of 138 kV at a maximum current of 2400 A. | https://en.wikipedia.org/wiki/Holbrook_Superconductor_Project |
The hole drilling method is a method for measuring residual stresses [ 1 ] [ 2 ] in a material. Residual stress occurs in a material in the absence of external loads. Residual stress interacts with the applied loading on the material to affect the overall strength, fatigue, and corrosion performance of the material. Residual stresses are measured experimentally. The hole drilling method is one of the most widely used methods for residual stress measurement. [ 3 ]
The hole drilling method can measure macroscopic residual stresses near the material surface. The principle is based on drilling a small hole into the material. When the material containing residual stress is removed, the remaining material reaches a new equilibrium state. The new equilibrium state has associated deformations around the drilled hole. These deformations are related to the residual stress in the volume of material that was removed through drilling. The deformations around the hole are measured during the experiment using strain gauges or optical methods. The original residual stress in the material is calculated from the measured deformations. The hole drilling method is popular for its simplicity, and it is suitable for a wide range of applications and materials.
Key advantages of the hole drilling method include rapid preparation, versatility of the technique for different materials, and reliability. Conversely, the hole drilling method is limited in depth of analysis and specimen geometry, and is at least semi-destructive.
The idea of measuring the residual stress by drilling a hole and registering the change of the hole diameter was first proposed by Mathar in 1934. In 1966 Rendler and Vignis introduced a systematic and repeatable hole drilling procedure for measuring the residual stress. In the following period the method was further developed in terms of drilling techniques, measurement of the relieved deformations, and the residual stress evaluation itself. A very important milestone was the use of the finite element method to compute the calibration coefficients and to evaluate the residual stresses from the measured relieved deformations (Schajer, 1981). This in particular allowed the evaluation of residual stresses that are not constant along the depth, and it opened further possibilities for using the method, e.g., for inhomogeneous materials, coatings, etc. The measurement and evaluation procedure is standardised in ASTM E837 [ 4 ] of the American Society for Testing and Materials, which has also contributed to the popularity of the method. Hole drilling is currently one of the most widespread methods of measuring residual stress, and modern computational methods are used for the evaluation. The method continues to be developed, especially in terms of drilling techniques and the possibilities of measuring the deformations. Some laboratories, such as the company MELIAD, offer residual stress measurement services and the sale of measurement equipment according to ASTM E837. Today the method is in use at several large companies in the energy and aeronautics sectors.
The hole drilling method of measuring the residual stresses is based on drilling a small hole in the material surface. This relieves the residual stresses and the associated deformations around the hole. The relieved deformations are measured in at least three independent directions around the hole. The original residual stress in the material is then evaluated based on the measured deformations and using the so-called calibration coefficients. The hole is made by a cylindrical end mill or by alternative techniques. Deformations are most often measured using strain gauges (strain gauge rosettes).
The biaxial stress state in the surface plane can be measured. The method is often referred to as semi-destructive because the material damage is small. It is relatively simple and fast, and the measuring device is usually portable. Disadvantages include the partially destructive character of the technique, limited resolution, and lower accuracy of the evaluation in the case of nonuniform stresses or inhomogeneous material properties.
The so-called calibration coefficients play an important role in the residual stress evaluation. They are used to convert the relieved deformations to the original residual stress in the material. The coefficients can be derived theoretically for a through hole and a homogeneous stress field; in that case they depend only on the material properties, the hole radius, and the distance from the hole. In the vast majority of practical applications, however, the preconditions for using the theoretically derived coefficients are not met, e.g., the measured strain is averaged over the area of the strain gauge rather than taken at a point, the hole is blind instead of through, etc. Therefore, coefficients that take the practical aspects of the measurement into account are used. They are mostly determined by numerical computation using the finite element method. They express the relation between the relieved deformations and the residual stresses, taking into account the hole size, hole depth, geometry of the strain gauge rosette, material, and other parameters.
The evaluation of the residual stresses depends on the method used to calculate them from the measured relieved deformations. All the evaluation methods are built on the same basic principles; they differ in the preconditions for use, in the accuracy requirements on the calibration coefficients, and in whether additional influences can be taken into account. In general, the hole is made in successive steps and the relieved deformations are measured after each step.
Several methods have been developed for the evaluation of residual stresses from the relieved deformations. The fundamental method is the equivalent uniform stress method . The coefficients for a particular hole diameter, rosette type, and hole depth are published in the standard ASTM E837. [ 4 ] The method is suitable for a stress that is constant or changes only slightly with depth. It can be used as a rough guideline for non-constant stresses; in that case, however, the method may give highly distorted results.
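For a standard three-gauge rosette and a uniform stress field, the evaluation reduces to closed-form expressions in the three relieved strains and two calibration constants, often denoted A and B. The sketch below is a minimal illustration of that classical calculation; the constants, the example strain readings, and the angle convention are placeholders only and must be taken from the applicable standard and calibration data for a real rosette and hole geometry.

```python
import math

def principal_residual_stresses(e1, e2, e3, A, B):
    """Classical uniform-stress hole-drilling evaluation for a three-gauge rosette.

    e1, e2, e3 : relieved strains from gauges at 0, 45 and 90 degrees
    A, B       : calibration constants (negative for standard rosettes), strain per MPa
    Returns (sigma_max, sigma_min, beta_deg); beta_deg is measured from gauge 1,
    with the exact sign/quadrant convention depending on the rosette layout.
    """
    mean = (e1 + e3) / (4.0 * A)
    dev = math.sqrt((e3 - e1) ** 2 + (e1 + e3 - 2.0 * e2) ** 2) / (4.0 * B)
    s1, s2 = mean - dev, mean + dev
    beta_deg = 0.5 * math.degrees(math.atan2(e1 - 2.0 * e2 + e3, e3 - e1))
    return max(s1, s2), min(s1, s2), beta_deg

# Made-up example: strains in m/m, calibration constants in (m/m)/MPa
print(principal_residual_stresses(-110e-6, -75e-6, -25e-6, A=-0.6e-6, B=-1.2e-6))
```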
The most general method is the integral method . It accounts for the fact that the contribution of the stress relieved at a given depth changes with the total depth of the hole. The calibration coefficients are expressed as matrices, and the evaluation leads to a system of equations whose solution is a vector of residual stresses at particular depths. A numerical simulation is required to obtain the calibration coefficients. The integral method and its coefficients are defined in the standard ASTM E837. [ 4 ]
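Because the strain measured after each drilling increment depends on the stresses in all increments drilled so far, the calibration coefficients of the integral method form a lower-triangular matrix, and the evaluation amounts to solving a small linear system. The sketch below illustrates only this final step for a single strain component; the coefficient matrix and strain readings are made-up placeholders, whereas in practice the coefficients come from finite element calibration for the specific rosette, hole diameter, and increment sequence, and three strain components are processed.

```python
import numpy as np

# Hypothetical example: 3 drilling increments, one strain component per increment.
# a[i, j] = strain relieved after step i per unit stress acting in increment j (j <= i),
# in (um/m) per MPa. These values are placeholders, not real ASTM E837 coefficients.
a = np.array([
    [-0.10,  0.00,  0.00],
    [-0.13, -0.08,  0.00],
    [-0.15, -0.11, -0.06],
])

eps = np.array([-12.0, -22.0, -29.0])  # measured relieved strains after each step, um/m

# Lower-triangular system  a @ sigma = eps  ->  residual stress in each increment (MPa)
sigma = np.linalg.solve(a, eps)
print(sigma)  # approximately [120, 80, 36.7] MPa for these placeholder inputs
```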
There are other evaluation methods that have lower demands on the calibration coefficients and on the evaluation process itself. These include the average stress method and the incremental strain method . Both methods are based on the assumption that the change in deformation is caused solely by the stress relieved in the drilled increment. They are therefore suitable only when the stress profile changes little with depth, and both give numerically correct results for uniform stresses.
The power series method and the spline method are other modifications of the integral method. Both take into account the distance of the relieved stress from the surface as well as the total hole depth. In contrast to the integral method, the resulting stress values are approximated by a polynomial or a spline. The power series method is very stable but cannot capture rapidly changing stress values. The spline method is more stable and less susceptible to errors than the integral method and can capture the actual stress values better than the power series method. Its main disadvantage is the complicated mathematics required to solve a system of nonlinear equations.
The hole drilling method finds use in many industrial areas dealing with material production and processing. The most important technologies include heat treatment, mechanical and thermal surface finishing, machining, welding, coating, and the manufacturing of composites. Despite its relative universality, the method requires these fundamental preconditions to be met: it must be possible to drill the material, it must be possible to apply strain gauge rosettes (or other means of measuring the deformations), and the material properties must be known. Additional conditions can affect the accuracy and repeatability of the measurement. These include especially the size and shape of the sample, the distance of the measured area from the edges, the homogeneity of the material, the presence of residual stress gradients, etc. Hole drilling can be performed in the laboratory or as a field measurement, making it well suited to measuring actual stresses in large components that cannot be moved. | https://en.wikipedia.org/wiki/Hole_drilling_method |
Holevo's theorem is an important limitative theorem in quantum computing , an interdisciplinary field of physics and computer science . It is sometimes called Holevo's bound , since it establishes an upper bound on the amount of information that can be known about a quantum state (accessible information). It was published by Alexander Holevo in 1973.
Suppose Alice wants to send a classical message to Bob by encoding it into a quantum state, and suppose she can prepare a state from some fixed set { ρ 1 , . . . , ρ n } {\displaystyle \{\rho _{1},...,\rho _{n}\}} , with the i-th state prepared with probability p i {\displaystyle p_{i}} . Let X {\displaystyle X} be the classical register containing the choice of state made by Alice. Bob's objective is to recover the value of X {\displaystyle X} from measurement results on the state he received. Let Y {\displaystyle Y} be the classical register containing Bob's measurement outcome. Note that Y {\displaystyle Y} is therefore a random variable whose probability distribution depends on Bob's choice of measurement .
Holevo's theorem bounds the amount of correlation between the classical registers X {\displaystyle X} and Y {\displaystyle Y} , regardless of Bob's measurement choice, in terms of the Holevo information . This is useful in practice because the Holevo information does not depend on the measurement choice, and therefore its computation does not require performing an optimization over the possible measurements.
More precisely, define the accessible information between X {\displaystyle X} and Y {\displaystyle Y} as the (classical) mutual information between the two registers maximized over all possible choices of measurements on Bob's side: I a c c ( X : Y ) = sup { Π i B } i I ( X : Y | { Π i B } i ) , {\displaystyle I_{\rm {acc}}(X:Y)=\sup _{\{\Pi _{i}^{B}\}_{i}}I(X:Y|\{\Pi _{i}^{B}\}_{i}),} where I ( X : Y | { Π i B } i ) {\displaystyle I(X:Y|\{\Pi _{i}^{B}\}_{i})} is the (classical) mutual information of the joint probability distribution given by p i j = p i Tr ( Π j B ρ i ) {\displaystyle p_{ij}=p_{i}\operatorname {Tr} (\Pi _{j}^{B}\rho _{i})} . There is currently no known formula to analytically solve the optimization in the definition of accessible information in the general case. Nonetheless, we always have the upper bound: I a c c ( X : Y ) ≤ χ ( η ) ≡ S ( ∑ i p i ρ i ) − ∑ i p i S ( ρ i ) , {\displaystyle I_{\rm {acc}}(X:Y)\leq \chi (\eta )\equiv S\left(\sum _{i}p_{i}\rho _{i}\right)-\sum _{i}p_{i}S(\rho _{i}),} where η ≡ { ( p i , ρ i ) } i {\displaystyle \eta \equiv \{(p_{i},\rho _{i})\}_{i}} is the ensemble of states Alice is using to send information, and S {\displaystyle S} is the von Neumann entropy . This χ ( η ) {\displaystyle \chi (\eta )} is called the Holevo information or Holevo χ quantity .
Note that the Holevo information also equals the quantum mutual information of the classical-quantum state corresponding to the ensemble: χ ( η ) = I ( ∑ i p i | i ⟩ ⟨ i | ⊗ ρ i ) , {\displaystyle \chi (\eta )=I\left(\sum _{i}p_{i}|i\rangle \!\langle i|\otimes \rho _{i}\right),} with I ( ρ A B ) ≡ S ( ρ A ) + S ( ρ B ) − S ( ρ A B ) {\displaystyle I(\rho _{AB})\equiv S(\rho _{A})+S(\rho _{B})-S(\rho _{AB})} the quantum mutual information of the bipartite state ρ A B {\displaystyle \rho _{AB}} . It follows that Holevo's theorem can be concisely summarized as a bound on the accessible information in terms of the quantum mutual information for classical-quantum states.
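Because the Holevo bound involves only von Neumann entropies of the ensemble and of its average state, it is straightforward to evaluate numerically. The following sketch computes χ for a small, made-up ensemble of two qubit states; the states and probabilities are arbitrary illustrative choices, not taken from the article.

```python
import numpy as np

def von_neumann_entropy(rho):
    """S(rho) = -Tr(rho log2 rho), computed from the eigenvalues."""
    evals = np.linalg.eigvalsh(rho)
    evals = evals[evals > 1e-12]            # drop numerically zero eigenvalues
    return float(-np.sum(evals * np.log2(evals)))

def holevo_chi(probs, states):
    """Holevo information chi of the ensemble {(p_i, rho_i)}."""
    rho_avg = sum(p * rho for p, rho in zip(probs, states))
    return von_neumann_entropy(rho_avg) - sum(
        p * von_neumann_entropy(rho) for p, rho in zip(probs, states))

# Example ensemble: |0> and |+> prepared with equal probability (placeholder choice)
rho_0 = np.array([[1, 0], [0, 0]], dtype=complex)
rho_plus = 0.5 * np.array([[1, 1], [1, 1]], dtype=complex)
print(holevo_chi([0.5, 0.5], [rho_0, rho_plus]))   # ~0.60 bits, i.e. below 1 bit
```

The result is strictly less than one bit, illustrating that Bob cannot fully recover Alice's choice when the signal states are non-orthogonal.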
Consider the composite system that describes the entire communication process, which involves Alice's classical input X {\displaystyle X} , the quantum system Q {\displaystyle Q} , and Bob's classical output Y {\displaystyle Y} . The classical input X {\displaystyle X} can be written as a classical register ρ X := ∑ x = 1 n p x | x ⟩ ⟨ x | {\displaystyle \rho ^{X}:=\sum \nolimits _{x=1}^{n}p_{x}|x\rangle \langle x|} with respect to some orthonormal basis { | x ⟩ } x = 1 n {\displaystyle \{|x\rangle \}_{x=1}^{n}} . By writing X {\displaystyle X} in this manner, the von Neumann entropy S ( X ) {\displaystyle S(X)} of the state ρ X {\displaystyle \rho ^{X}} corresponds to the Shannon entropy H ( X ) {\displaystyle H(X)} of the probability distribution { p x } x = 1 n {\displaystyle \{p_{x}\}_{x=1}^{n}} :
The initial state of the system, where Alice prepares the state ρ x {\displaystyle \rho _{x}} with probability p x {\displaystyle p_{x}} , is described by the classical-quantum state ρ X Q := ∑ x = 1 n p x | x ⟩ ⟨ x | ⊗ ρ x {\displaystyle \rho ^{XQ}:=\sum \nolimits _{x=1}^{n}p_{x}|x\rangle \langle x|\otimes \rho _{x}} .
Afterwards, Alice sends the quantum state to Bob. As Bob only has access to the quantum system Q {\displaystyle Q} but not the input X {\displaystyle X} , he receives a mixed state of the form ρ := tr X ( ρ X Q ) = ∑ x = 1 n p x ρ x {\displaystyle \rho :=\operatorname {tr} _{X}\left(\rho ^{XQ}\right)=\sum \nolimits _{x=1}^{n}p_{x}\rho _{x}} . Bob measures this state with respect to the POVM elements { E y } y = 1 m {\displaystyle \{E_{y}\}_{y=1}^{m}} , and the probabilities { q y } y = 1 m {\displaystyle \{q_{y}\}_{y=1}^{m}} of measuring the outcomes y = 1 , 2 , … , m {\displaystyle y=1,2,\dots ,m} form the classical output Y {\displaystyle Y} . This measurement process can be described as a quantum instrument
where q y | x = tr ( E y ρ x ) {\displaystyle q_{y|x}=\operatorname {tr} \left(E_{y}\rho _{x}\right)} is the probability of outcome y {\displaystyle y} given the state ρ x {\displaystyle \rho _{x}} , while ρ y | x = W E y ρ x E y W † / q y | x {\displaystyle \rho _{y|x}=W{\sqrt {E_{y}}}\rho _{x}{\sqrt {E_{y}}}W^{\dagger }/q_{y|x}} for some unitary W {\displaystyle W} is the normalised post-measurement state . Then, the state of the entire system after the measurement process is
Here I X {\displaystyle {\mathcal {I}}^{X}} is the identity channel on the system X {\displaystyle X} . Since E Q {\displaystyle {\mathcal {E}}^{Q}} is a quantum channel , and the quantum mutual information is monotonic under completely positive trace-preserving maps, [ 1 ] S ( X : Q ′ Y ) ≤ S ( X : Q ) {\displaystyle S(X:Q'Y)\leq S(X:Q)} . Additionally, as the partial trace over Q ′ {\displaystyle Q'} is also completely positive and trace-preserving, S ( X : Y ) ≤ S ( X : Q ′ Y ) {\displaystyle S(X:Y)\leq S(X:Q'Y)} . These two inequalities give S ( X : Y ) ≤ S ( X : Q ) {\displaystyle S(X:Y)\leq S(X:Q)} .
On the left-hand side, the quantities of interest depend only on
with joint probabilities p x , y = p x q y | x {\displaystyle p_{x,y}=p_{x}q_{y|x}} . Clearly, ρ X Y {\displaystyle \rho ^{XY}} and ρ Y := tr X ( ρ X Y ) {\displaystyle \rho ^{Y}:=\operatorname {tr} _{X}(\rho ^{XY})} , which are in the same form as ρ X {\displaystyle \rho ^{X}} , describe classical registers. Hence,
Meanwhile, S ( X : Q ) {\displaystyle S(X:Q)} depends on the term
where I Q {\displaystyle I^{Q}} is the identity operator on the quantum system Q {\displaystyle Q} . Then, the right-hand side is
which completes the proof.
In essence, the Holevo bound proves that given n qubits , although they can "carry" a larger amount of (classical) information (thanks to quantum superposition), the amount of classical information that can be retrieved , i.e. accessed , can be only up to n classical (non-quantum encoded) bits . It was also established, both theoretically and experimentally, that there are computations where quantum bits carry more information through the process of the computation than is possible classically. [ 2 ] | https://en.wikipedia.org/wiki/Holevo's_theorem |
Holger Militz (born in 1960) is a German wood scientist and professor at the University of Goettingen , [ 1 ] who is an elected fellow ( FIAWS ) and distinguished member of the International Academy of Wood Science , [ 2 ] and recipient of the Schweighofer Prize. [ 3 ]
Militz was born in 1960 in Waldbröl , a small town in the countryside of Germany. [ 4 ]
He pursued his studies in wood science at the University of Hamburg . He then completed his PhD work in 1990 at the University of Wageningen in the Netherlands, focusing on enhancing the impregnation of wood through anatomical cell wall changes.
Between 1987 and 2000, he held positions in the Netherlands, initially serving as the head of wood technology at TNO Timber Research and later becoming the director of SHR Timber Research in Wageningen .
During the 1990s at SHR, Militz and his research coworkers set up the first feasible pilot plant for wood acetylation , which led to the scaling-up of the now-commercial process that had been initiated by the American chemist Alfred J. Stamm at the Forest Products Laboratory in the 1940s.
Since 2000, he has held the position of full professor of Wood Biology and Wood Products at the Georg-August-University of Göttingen . From 2010 to 2018 he was also a part-time professor at the Norwegian University of Life Sciences .
His main research interests include wood technology, wood decay, wood protection and, especially, wood modification using green technologies. He has over 600 publications in scientific journals and books in the area of wood science and technology. [ 5 ]
Militz has won several awards for his long-standing work in the area of wood products and wood modification. [ 6 ] [ 7 ] [ 8 ] He has been an active member of the editorial boards of the international wood journals Holzforschung , European Journal of Wood and Wood Products , Wood Research , and Holztechnologie , and has been the chairman of the ECWM – Wood Modification in Europe since 2001.
In October 2023, a meta-research study carried out by John Ioannidis et al. at Stanford University included Holger Militz in the Elsevier 2022 data set, where he was ranked in the top 2% of researchers in wood science ( forestry – materials ), with a c-score of 3.495, one of the three highest in this scientific area. [ 9 ] In August 2024, Militz received the same international distinction for his research and scientific work in wood sciences ( Elsevier Data 2023 ). [ 10 ] | https://en.wikipedia.org/wiki/Holger_Militz |
A holistic community (also referred to as a closed or unitary community) is an ecosystem where species within the community are interdependent , relying on each other to maintain the balance and stability of the system. These communities are described as working like one unit, meaning that every species plays an important part in the overall well-being of the ecosystem in which the community resides, much like the organelles within a cell , or even the cells making up one organism . Holistic communities have sharp boundaries, and the ranges of their member species are interdependent rather than independent. Co-evolution is likely to be found in communities structured after this model, as a result of the interdependence and high rates of interaction found among the different populations. Species compositions of communities change sharply at environmental edges (known as ecotones ).
According to a widespread narrative, the ideas of a holistic ecological community were introduced by plant ecologist Frederic Clements in 1916, and countered by Henry Gleason in 1917, when he proposed the individualistic/open community concept (in applications to plants). [ 1 ] However, this seems to be wrong in at least two essential respects:
While Warming might have been the first to propose an organismic theory of ecological communities, one of the first to elaborate such a theory has been the limnologist, zoologist and ecologist August Thienemann . According to Thienemann , a biocoenosis "is not just an aggregate, a sum of organisms that coexist in the same biotope owing to alike exogenous habitat conditions but a (supra-individual) whole, a togetherness and a for-each-other of organisms" (Thienemann 1939: 275). He even assumes that the members of a biocoenosis feature "specific mutual relations that are vital for their life" (ibid.: 268), whereby this mutual "bond either exists directly from organism to organism or operates indirectly by the medium of vitally created modifications of the physiographic conditions of the biotope " (Thienemann 1941: 105). [ 3 ]
Neither fully organismic nor fully individualistic communities have been found to exist in nature; both are theoretical concepts that can be applied to empirical communities. For example, a community's composition can be better explained by holism than individualism, or vice versa. This ecological concept is based on the broader concept of holism , which describes the functionality of any system as having many individual parts, all of which are extremely important to the system's viability.
"A community has been viewed as a super organism with integrity analogous to that of cells in an organism. This is the holistic or unitary view of a community, and one championed by Clements (1916). He regarded the community to be a highly integrated unit that operated very much within itself with little interaction with surrounding communities—a closed community." [ 4 ]
"The holistic model considers all living beings as its subjects who are manifestations of the Absolute and part of the whole. It includes all relations that arise between them. The most efficient satisfaction of their interest is the most important designation of the holistic system (holistic community). The holistic community is equally responsible for the human development and for the harmonious evolution of all other subjects of holistic model. The subjects of holistic model are the following:
This characteristic illustrates that the holistic model is universal and exceeds just human relations. It is not just humanistic but also respectful for all life in general. The necessity to identify the subjects in the frame of the whole is the first condition for the proper satisfaction of their interests. Without differentiating them and without knowing well how they function, it is impossible to conduct rational action which aims to fulfil the needs of everyone in the system and to refine their environment." [ 5 ] | https://en.wikipedia.org/wiki/Holistic_community |
In agriculture , holistic management (from ὅλος holos , a Greek word meaning "all, whole, entire, total") is an approach to managing resources that was originally developed by Allan Savory [ 1 ] for grazing management . [ 2 ] [ better source needed ] Holistic management has been likened to "a permaculture approach to rangeland management". [ 3 ] Holistic management is a registered trademark of Holistic Management International (no longer associated with Allan Savory). It has faced criticism from many researchers who argue it is unable to provide the benefits claimed. [ 4 ] [ 5 ]
"Holistic management" describes a systems thinking approach to managing resources. Originally developed by Allan Savory, it is now being adapted for use in managing other systems with complex social, ecological and economic factors.
Holistic planned grazing is similar to rotational grazing but differs in that it more explicitly recognizes and provides a framework for adapting to the four basic ecosystem processes: the water cycle , [ 6 ] [ 7 ] the mineral cycle including the carbon cycle , [ 8 ] [ 9 ] energy flow , and community dynamics (the relationship between organisms in an ecosystem ), [ 10 ] giving equal importance to livestock production and social welfare. Holistic Management has been likened to "a permaculture approach to rangeland management". [ 3 ]
The Holistic Management decision-making framework uses six key steps to guide the management of resources: [ 11 ] [ 1 ]
Savory stated four key principles of Holistic Management planned grazing, which he intended to take advantage of the symbiotic relationship between large herds of grazing animals and the grasslands that support them: [ 12 ]
The idea of holistic planned grazing was developed in the 1960s by Allan Savory , a wildlife biologist in his native Southern Rhodesia . Setting out to understand desertification in the context of the larger environmental movement , and influenced by the work of André Voisin , [ 13 ] [ 14 ] he hypothesized that the spread of deserts, the loss of wildlife, and the resulting human impoverishment were related to the reduction of the natural herds of large grazing animals and, even more, the changed behavior of the few remaining herds. [ 2 ] Savory hypothesized further that livestock could be substituted for natural herds to provide important ecosystem services like nutrient cycling . [ 15 ] [ 16 ] However, while livestock managers had found that rotational grazing systems can work for livestock management purposes, scientific experiments demonstrated that they do not necessarily resolve ecological problems such as desertification. As Savory saw it, a more comprehensive framework for the management of grassland systems — an adaptive, holistic management plan — was needed. For that reason, Holistic Management has been used as a whole farm/ranch planning tool. In 1984, he founded the Center for Holistic Resource Management, which became Holistic Management International.
In many regions, pastoralism and communal land use are blamed for environmental degradation caused by overgrazing . After years of research and experience, Savory came to understand this assertion was often wrong, and that sometimes removing animals actually made matters worse. [ disputed – discuss ] This concept is a variation of the trophic cascade , where humans are seen as the top level predator and the cascade follows from there.
Savory developed a management system that he claimed would improve grazing systems. Holistic planned grazing is one of a number of newer grazing management systems that aim to more closely simulate the behavior of natural herds of wildlife and has been claimed to improve riparian habitats and water quality over systems that often led to land degradation , and claimed to improve range condition for both livestock and wildlife . [ 6 ] [ 7 ] [ 17 ] [ 18 ] [ 19 ]
Savory claims that Holistic Planned Grazing holds potential in mitigating climate change , while building soil, increasing biodiversity, and reversing desertification. [ 20 ] [ 21 ] This practice uses fencing and/or herders to restore grasslands . [ 7 ] [ 22 ] [ 23 ] Carefully planned movements of large herds of livestock mimic the processes of nature where grazing animals are kept concentrated by pack predators and forced to move on after eating, trampling, and manuring an area, returning only after it has fully recovered. This grazing method seeks to emulate what occurred during the past 40 million years as the expansion of grass-grazer ecosystems built deep, rich grassland soils , sequestering carbon, and consequently cooling the planet. [ 24 ]
While originally developed as a tool for range land use [ 22 ] and restoring desertified land, [ 25 ] the Holistic Management system can be applied to other areas with multiple complex socioeconomic and environmental factors. One such example is integrated water resources management , which promotes sector integration in development and management of water resources to ensure that water is allocated fairly between different users, maximizing economic and social welfare without compromising the sustainability of vital ecosystems. [ 26 ] [ failed verification ] Another example is mine reclamation . [ 27 ] A further use of Holistic Management is in certain forms of no-till crop production, intercropping, and permaculture. [ 3 ] [ 28 ] [ 29 ] [ 30 ] Holistic Management has been acknowledged [ weasel words ] by the United States Department of Agriculture. [ 30 ] [ 31 ] The most comprehensive use of Holistic Management is as a whole farm/ranch planning tool, which has been used successfully by farmers and ranchers. For that reason, the USDA invested six years of Beginning Farmer/Rancher Development funding to use it to train beginning women farmers and ranchers.
There are many peer-reviewed studies and journalistic publications that dispute the claims of Holistic Management theory. [ 5 ] [ 32 ] [ 33 ]
A 2014 review examined five specific ecological assumptions of Holistic Management and found that none were supported by scientific evidence in the Western US. [ 34 ] A paper by Richard Teague et al . claims that the different criticisms had examined rotational systems in general and not holistic planned grazing. [ 35 ] A meta-analysis of relevant studies between 1972 and 2016 found that Holistic Planned Grazing had no better effect than continuous grazing on plant cover, plant biomass and animal production, although it may have benefited some areas with higher precipitation . [ 36 ] Conversely, at least three studies have documented soil improvement as measured by soil carbon , soil nitrogen, soil biota , water retention, nutrient-holding capacity, and ground litter on grazed land using multi-pasture grazing methods compared to continuously grazed land. [ 7 ] [ 37 ] [ 38 ]
There is also evidence that multi-pasture grazing methods may increase water retention compared to non-grazed land. [ 22 ] However, George Wuerthner, writing in The Wildlife News in a 2013 article titled, "Allan Savory: Myth And Reality" stated, "The few scientific experiments that Savory supporters cite as vindication of his methods (out of hundreds that refute his assertions), often fail to actually test his theories. Several of the studies cited on HM web site had utilization levels (degree of vegetation removed) well below the level that Savory actually recommends." [ 39 ]
These critiques have been challenged on the grounds that many studies examined rotational grazing systems in general and not Holistic Management or Holistic Planned Grazing. [ 35 ] In addition to a grazing method, Holistic Management involves goal setting, experiential learning and an emphasis on monitoring and adaptive decision-making that have not been captured by many scientific field trials. [ 33 ] [ 38 ] This has been proposed as a reason why many land managers have reported a more positive experience of Holistic Management than scientific studies. [ 40 ] However, a 2022 review of 22 “farm-scale” studies, many of which included adaptive management, again found that Holistic Management had no effect on or reduced plant or animal productivity. [ 40 ] The same study found that Holistic Management was associated with improved social cohesion and peer-to-peer learning, but concluded that the “social cohesion, learning and networking so prevalent on HM farms could be adopted by any farming community without accepting the unfounded HM rhetoric”. [ 40 ]
Savory has also faced criticisms for claiming the carbon sequestration potential of holistic grazing is immune from empirical scientific study. [ 41 ] For instance, in 2000, Savory said that "the scientific method never discovers anything" and “the scientific method protects us from cranks like me". [ 42 ] A 2017 factsheet authored by Savory stated that “Every study of holistic planned grazing that has been done has provided results that are rejected by range scientists because there was no replication!". [ 43 ] TABLE Debates sums this up by saying "Savory argues that standardisation, replication, and therefore experimental testing of HPG [Holistic Planned Grazing] as a whole (rather than just the grazing system associated with it) is not possible, and that therefore, it is incapable of study by experimental science", but "he does not explain how HPG can make causal knowledge claims with regards to combating desertification and climate mitigation, without recourse to science demonstrating such connections." [ 41 ]
There is a less developed evidence base comparing Holistic management with the absence of livestock on grasslands. Several peer-reviewed studies have found that excluding livestock completely from semi-arid grasslands can lead to significant recovery of vegetation and soil carbon sequestration. [ 44 ] [ 45 ] [ 46 ] [ 47 ] [ 48 ] A 2021 peer-reviewed paper found that sparsely grazed and natural grasslands account for 80% of the total cumulative carbon sink of the world’s grasslands, whereas managed grasslands (i.e. with greater livestock density) have been a net greenhouse gas source over the past decade. [ 49 ] A 2011 study found that multi-paddock grazing of the type endorsed by Savory resulted in more soil carbon sequestration than heavy continuous grazing, but very slightly less soil carbon sequestration than "graze exclosure" (excluding grazing livestock from land). [ 7 ] Another peer-reviewed paper found that if current pastureland was restored to its former state as wild grasslands, shrublands, and sparse savannas without livestock this could store an estimated 15.2 - 59.9 Gt additional carbon. [ 50 ]
In 2013 the Savory Institute published a response to some of their critics. [ 51 ] The same month Savory was a guest speaker with TED and gave a presentation titled "How to Fight Desertification and Reverse Climate Change". [ 52 ] [ 53 ] In his TED Talk, Savory has claimed that holistic grazing could reduce carbon dioxide levels to pre-industrial levels in a span of 40 years, solving the problems caused by climate change . Commenting on his TED talk, Savory has since denied claiming that holistic grazing can reverse climate change, saying that “I have only used the words address climate change… although I have written and talked about reversing man-made desertification”. [ 41 ]
RealClimate.org published a piece saying that Savory's claims that his technique can bring atmospheric carbon "back to pre-industrial levels" are "simply not reasonable." [ 54 ] [ 55 ] According to Skeptical Science , "it is not possible to increase productivity, increase numbers of cattle and store carbon using any grazing strategy, never-mind Holistic Management [...] Long term studies on the effect of grazing on soil carbon storage have been done before, and the results are not promising. [...] Because of the complex nature of carbon storage in soils, increasing global temperature, risk of desertification and methane emissions from livestock, it is unlikely that Holistic Management, or any management technique, can reverse climate change." [ 56 ]
According to a 2016 study published by the Swedish University of Agricultural Sciences , the actual rate at which improved grazing management could contribute to carbon sequestration is seven times lower than the claims made by Savory. The study concludes that Holistic Management cannot reverse climate change. [ 55 ] A study by the Food and Climate Research Network in 2017 has concluded that Savory's claims about carbon sequestration are "unrealistic" and very different from those issued by peer-reviewed studies. [ 57 ] The FCRN study estimates that, on the basis of meta-study of the scientific literature, the total global soil carbon sequestration potential from grazing management ranges from 0.3-0.8 Gt CO2eq per year, which is equivalent to offsetting a maximum of 4-11% of current total global livestock emissions, and that “Expansion or intensification in the grazing sector as an approach to sequestering more carbon would lead to substantial increases in methane, nitrous oxide and land use change-induced CO2 emissions” [ 57 ] Project Drawdown estimates the total carbon sequestration potential of improved managed grazing at 13.72 - 20.92 Gigatons CO2eq between 2020–2050, equal to 0.46-0.70 Gt CO2eq per year. [ 58 ] A 2022 peer-reviewed paper estimated the carbon sequestration potential of improved grazing management at a similar level of 0.15-0.70 Gt CO2eq per year. [ 59 ]
Savory received the 2003 Banksia International Award [ 60 ] and in 2010 the Africa Centre for Holistic Management in Zimbabwe, Operation Hope (a " proof of concept " project using Holistic Management) was named the winner of the 2010 Buckminster Fuller Challenge for "recognizing initiatives which take a comprehensive, anticipatory, design approach to radically advance human well being and the health of our planet's ecosystems". [ 21 ] [ 61 ] In addition, numerous Holistic Management practitioners have received awards for their environmental stewardship achieved through Holistic Management practices. | https://en.wikipedia.org/wiki/Holistic_management_(agriculture) |
Holland's schema theorem , also called the fundamental theorem of genetic algorithms , [ 1 ] is an inequality that results from coarse-graining an equation for evolutionary dynamics . The Schema Theorem says that short, low-order schemata with above-average fitness increase exponentially in frequency in successive generations. The theorem was proposed by John Holland in the 1970s. It was initially widely taken to be the foundation for explanations of the power of genetic algorithms . However, this interpretation of its implications has been criticized in several publications reviewed in [ 2 ] , where the Schema Theorem is shown to be a special case of the Price equation with the schema indicator function as the macroscopic measurement.
A schema is a template that identifies a subset of strings with similarities at certain string positions. Schemata are a special case of cylinder sets , and hence form a topological space .
Consider binary strings of length 6. The schema 1*10*1 describes the set of all strings of length 6 with 1's at positions 1, 3 and 6 and a 0 at position 4. The * is a wildcard symbol, which means that positions 2 and 5 can have a value of either 1 or 0. The order of a schema o ( H ) {\displaystyle o(H)} is defined as the number of fixed positions in the template, while the defining length δ ( H ) {\displaystyle \delta (H)} is the distance between the first and last specific positions. The order of 1*10*1 is 4 and its defining length is 5. The fitness of a schema is the average fitness of all strings matching the schema. The fitness of a string is a measure of the value of the encoded problem solution, as computed by a problem-specific evaluation function. Using the established methods and genetic operators of genetic algorithms , the schema theorem states that short, low-order schemata with above-average fitness increase exponentially in successive generations. Expressed as an equation: E ⁡ ( m ( H , t + 1 ) ) ≥ m ( H , t ) f ( H ) a t [ 1 − p ] {\displaystyle \operatorname {E} (m(H,t+1))\geq {m(H,t)f(H) \over a_{t}}[1-p]} .
Here m ( H , t ) {\displaystyle m(H,t)} is the number of strings belonging to schema H {\displaystyle H} at generation t {\displaystyle t} , f ( H ) {\displaystyle f(H)} is the observed average fitness of schema H {\displaystyle H} and a t {\displaystyle a_{t}} is the observed average fitness at generation t {\displaystyle t} . The probability of disruption p {\displaystyle p} is the probability that crossover or mutation will destroy the schema H {\displaystyle H} . Under the assumption that p m ≪ 1 {\displaystyle p_{m}\ll 1} , it can be expressed as: p = δ ( H ) l − 1 p c + o ( H ) p m {\displaystyle p={\delta (H) \over l-1}p_{c}+o(H)p_{m}}
where o ( H ) {\displaystyle o(H)} is the order of the schema, l {\displaystyle l} is the length of the code, p m {\displaystyle p_{m}} is the probability of mutation and p c {\displaystyle p_{c}} is the probability of crossover. So a schema with a shorter defining length δ ( H ) {\displaystyle \delta (H)} is less likely to be disrupted. An often misunderstood point is why the Schema Theorem is an inequality rather than an equality. The answer is in fact simple: the Theorem neglects the small, yet non-zero, probability that a string belonging to the schema H {\displaystyle H} will be created "from scratch" by mutation of a single string (or recombination of two strings) that did not belong to H {\displaystyle H} in the previous generation. Moreover, the expression for p {\displaystyle p} is clearly pessimistic: depending on the mating partner, recombination may not disrupt the schema even when a cross point is selected between the first and the last fixed position of H {\displaystyle H} .
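Since both formulas involve only elementary quantities, the bound is easy to evaluate once the schema statistics and operator probabilities are known. The following sketch simply plugs made-up values for the population statistics into the two expressions above; all numbers are placeholders chosen for illustration.

```python
def schema_bound(m_Ht, f_H, f_avg, delta_H, o_H, l, p_c, p_m):
    """Lower bound on the expected number of strings matching schema H in the
    next generation, per Holland's schema theorem."""
    p_disrupt = (delta_H / (l - 1)) * p_c + o_H * p_m
    return m_Ht * (f_H / f_avg) * (1 - p_disrupt)

# Schema 1*10*1 over strings of length 6: order 4, defining length 5.
# Population statistics (counts, fitness values) and operator rates are placeholders.
print(schema_bound(m_Ht=20, f_H=1.2, f_avg=1.0, delta_H=5, o_H=4,
                   l=6, p_c=0.7, p_m=0.01))   # -> 6.24
```

With these particular numbers the disruption probability is high (the schema is long relative to the string), so the bound is weak; a shorter, lower-order schema with the same fitness advantage would yield a larger bound, which is exactly the point of the theorem.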
The schema theorem holds under the assumption of a genetic algorithm that maintains an infinitely large population, but does not always carry over to (finite) practice: due to sampling error in the initial population, genetic algorithms may converge on schemata that have no selective advantage. This happens in particular in multimodal optimization , where a function can have multiple peaks: the population may drift to prefer one of the peaks, ignoring the others. [ 3 ]
The reason that the Schema Theorem cannot explain the power of genetic algorithms is that it holds for all problem instances, and cannot distinguish between problems in which genetic algorithms perform poorly, and problems for which genetic algorithms perform well. | https://en.wikipedia.org/wiki/Holland's_schema_theorem |
A hollow fiber bioreactor is a three-dimensional cell culturing system based on hollow fibers, which are small, semi-permeable capillary membranes arranged in a parallel array with a typical molecular weight cut-off (MWCO) range of 10-30 kDa. These hollow fiber membranes are often bundled and housed within tubular polycarbonate shells to create hollow fiber bioreactor cartridges. Within the cartridges, which are also fitted with inlet and outlet ports, are two compartments: the intracapillary (IC) space within the hollow fibers, and the extracapillary (EC) space surrounding the hollow fibers.
Cells are seeded into the EC space of the hollow fiber bioreactor and expand there. Cell culture medium is pumped through the IC space and delivers oxygen and nutrients to the cells via hollow fiber membrane perfusion. As the cells expand, their waste products and CO 2 also perfuse the hollow fiber membranes and are carried away by the pumping of medium through the IC space. As waste products build up due to increased cell mass, the rate of medium flow can also be increased so that cell growth is not inhibited by waste product toxicity.
Because thousands of hollow fibers may be packed into a single hollow fiber bioreactor, they increase the surface area of the cartridge considerably. As a result, cells can fill up the EC space to densities >10 8 cells/ml. However, the cartridge itself takes up a very small volume (oftentimes the volume of a 12-oz soda can). The fact that hollow fiber bioreactors are very small and yet enable very high cell densities has led to their development for both research and commercial applications, including monoclonal antibody and influenza vaccine [ 1 ] production. Likewise, because hollow fiber bioreactors use significantly less medium and growth factors than traditional cell culture methods such as stirred-tank bioreactors , they offer significant cost savings. Finally, hollow fiber bioreactors are sold as single-use disposables, resulting in significant time savings for laboratory staff and technicians.
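To get a feel for how much growth surface a cartridge of this size can provide, the total membrane area can be estimated from the fiber geometry. The fiber count and dimensions below are purely illustrative assumptions, not the specifications of any particular commercial cartridge.

```python
import math

# Assumed, illustrative cartridge geometry
n_fibers = 5000            # number of hollow fibers in the cartridge
fiber_diameter = 200e-6    # m (200 micrometre fiber diameter)
fiber_length = 0.20        # m (20 cm long cartridge)

# Lateral surface area of all fibers combined
area = n_fibers * math.pi * fiber_diameter * fiber_length
print(f"Total membrane surface area: {area:.2f} m^2")              # ~0.63 m^2

# For comparison, a T-175 culture flask offers 175 cm^2 of growth surface
print(f"Equivalent number of T-175 flasks: {area / 175e-4:.0f}")   # ~36
```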
In 1972, the Richard Knazek [ 2 ] group at the NIH reported how mouse fibroblasts cultured on 1.5 cm 3 hollow fiber capillary membranes composed of cellulose acetate were able to form 1 mm-wide nodules in 28 days. The group recorded the final cell number as approximately 1.7 x 10 7 cells from a starter batch of only 200,000 cells. When the same group cultured human choriocarcinoma cells on polymeric and silicone polycarbonate capillary membranes totaling less than 3 cm 3 in volume, the cells expanded to an amount approximating 2.17 x 10 8 cells.
The Knazek group was awarded the patent for hollow fiber bioreactor technology in 1974. [ 3 ] Based on this patented technology, companies began building different and larger (commercial) scale hollow fiber bioreactors, with significant development and technological improvement occurring in the late 1980s to early 1990s. By 1990, at least three companies were reported to offer commercially available hollow fiber bioreactors. [ 4 ]
One engineering advance included adding a gas exchange cartridge, which enabled better control of system's pH and oxygen levels. Similar to a mammalian lung , the gas exchange cartridge efficiently oxygenated the culture medium, allowing the bioreactor to support higher numbers of cells. Combined with the ability to add or remove CO 2 for precise pH control, the limitations commonly associated with large-scale cell culture were eliminated, resulting in densely packed cell cultures that could be maintained for several months.
In addition, control of the fluid dynamics within each hollow fiber bioreactor led to further optimization of the cell culture environment. By alternating the pressure gradient across the hollow fiber membrane, media could flow back and forth between the EC side (cell compartment) and the IC side (hollow fiber lumen). This process, combined with the axial media flow created when media passes down the length of the fibers, optimized the growth environment throughout the entire bioreactor.
This concept is termed EC cycling, [ 5 ] and was developed as a solution to the gradients that form within hollow fiber bioreactors when media is pushed down the length of their fibers. Higher hydrostatic pressure at the axial end (media entering the fiber lumen) compared to the distal end of the bioreactor creates a Starling flow in the EC space, which is similar to what is observed in the body. This phenomenon also creates a nutrient-rich axial region and a nutrient-depleted distal region within the bioreactor. By incorporating EC cycling, the effects of Starling flow are eliminated and the entire bioreactor becomes nutrient-rich and optimized for cell growth.
Optimal IC and EC space perfusion rates must be achieved in order to efficiently deliver media nutrients and growth supplements, respectively, and to collect supernatant. During the cell growth phase within these bioreactors, the media feed rate is increased to accommodate the expanding cell population. More specifically, the IC media perfusion rate is increased to provide additional glucose and oxygen to the cells while continually removing metabolic wastes such as lactic acid . When the cell space is completely filled with cells, the media feed rate plateaus, resulting in constant glucose consumption, oxygen uptake and lactate production rates.
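The medium feed rate needed to keep up with the expanding culture can be estimated from a simple glucose mass balance: the feed must supply glucose at least as fast as the cells consume it while keeping the outlet concentration above some minimum. The cell number, specific consumption rate, and concentrations below are rough, assumed values chosen only to illustrate the calculation.

```python
# Assumed, illustrative culture parameters
n_cells = 5e9          # total cells in the EC space
q_glucose = 2e-13      # mol glucose consumed per cell per hour (assumed)
c_in = 25e-3           # mol/L glucose in fresh medium (~25 mM)
c_min = 10e-3          # mol/L minimum acceptable outlet concentration

consumption = n_cells * q_glucose          # mol/h consumed by the culture
feed_rate = consumption / (c_in - c_min)   # L/h of medium needed
print(f"Glucose consumption: {consumption * 1e3:.1f} mmol/h")   # 1.0 mmol/h
print(f"Required medium feed: {feed_rate * 1000:.0f} mL/h")     # ~67 mL/h
```

Doubling the cell number doubles the required feed rate, which is why the IC perfusion rate is ramped up during the growth phase and then plateaus once the EC space is full.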
With the introduction of hybridoma technology in 1975, [ 6 ] cell culture could be applied towards the generation of secreted proteins such as monoclonal antibodies, growth hormones , and even some categories of vaccines. In order to produce these proteins on a commercial scale, new methods for culturing large batches of cells had to be developed. One such technological development was the hollow fiber bioreactor.
Hollow fiber bioreactors are used to generate high concentrations of cell-derived products including monoclonal antibodies, recombinant proteins , growth factors, viruses and virus-like particles. This is possible because the semi-permeable hollow fiber membranes allow for the passage of low molecular weight nutrients and wastes from the cell-containing EC into the non-cell-containing IC space, but they do not allow the passage of larger products such as antibodies. Therefore, as a cell line (e.g., hybridoma) expands and expresses a target protein, that protein remains within the EC space and is not flushed out. At a given time point (or continually during the culture), the harvest supernatant (product) is collected, clarified and refrigerated for a future downstream application.
Smaller hollow fiber bioreactors are often used for selection and optimization of cell lines [ 7 ] [ 8 ] prior to stepping up to larger cell culturing systems. Doing so saves on growth factor costs because a significant portion of the cell culture media does not require the addition of expensive components like fetal bovine serum. Likewise, the smaller hollow fiber bioreactors can be housed in a laboratory incubator just like cell culture plates and flasks.
Recently, hollow fiber bioreactors have been tested as novel platforms for the commercial production of high-titer influenza A virus. [ 9 ] In this study, both adherent and suspension Madin-Darby Canine Kidney Epithelial Cells (MDCK) were infected with two different strains of influenza: A/PR/8/34 (H1N1), and the pandemic strain A/Mexico/4108/2009 (H1N1). High titers were achieved for both the suspension and adherent strains; furthermore, the hollow fiber bioreactor technology was found comparable in its production capacity to that of other commercial bioreactors on the market, including classic stirred-tank and wave bioreactors (Wave) and ATF perfusion systems. | https://en.wikipedia.org/wiki/Hollow_fiber_bioreactor |
Hollow fiber membranes ( HFMs ) are a class of artificial membranes containing a semi-permeable barrier in the form of a hollow fiber. Originally developed in the 1960s for reverse osmosis applications, hollow fiber membranes have since become prevalent in water treatment, desalination, cell culture, medicine, and tissue engineering. [ 1 ] Most commercial hollow fiber membranes are packed into cartridges which can be used for a variety of liquid and gaseous separations.
HFMs are commonly produced using artificial polymers . The specific production methods involved are heavily dependent on the type of polymer used as well as its molecular weight . HFM production, commonly referred to as "spinning", can be divided into four general types: melt spinning, dry spinning, wet spinning, and dry-jet wet spinning.
Common to each of these methods is the use of a spinneret , a device containing a needle through which solvent is extruded and an annulus through which a polymer solution is extruded. As the polymer is extruded through the annulus of the spinneret, it retains a hollow cylindrical shape. As the polymer exits the spinneret, it solidifies into a membrane through a process known as phase inversion . The properties of the membrane -such as average pore diameter and membrane thickness- can be finely tuned by changing the dimensions of the spinneret, temperature and composition of "dope" (polymer) and "bore" (solvent) solutions, length of air gap (for dry-jet wet spinning), temperature and composition of the coagulant, as well as the speed at which produced fiber is collected by a motorized spool. Extrusion of the polymer and solvent through the spinneret can be accomplished either through the use of gas-extrusion or a metered pump. Some of the polymers most commonly used for fabricating HFMs include cellulose acetate , polysulfone , polyethersulfone , and polyvinylidene fluoride . [ 5 ]
After the fibers are created, they are typically assembled into a membrane module, with many fibers in parallel. The fiber bundle is potted in a resin or epoxy at both ends. [ 6 ] The potted ends may be cut through to expose the fiber openings. The bundle is typically placed inside a cylinder that has inlets and outlets at opposite ends for the bore (lumen) side and side ports that allow flow over the membranes on the shell side. Typically, the higher-pressure feed is on the bore side, to avoid fiber collapse.
The properties of HFMs can be characterized using the same techniques commonly used for other types of membranes. The primary properties of interest for HFMs are average pore diameter and pore distribution, measurable via a technique known as porosimetry , a feature of several laboratory instruments used for measuring pore size. [ 7 ] Pore diameter can also be measured via a technique known as evapoporometry , in which evaporation of 2-propanol through the pores of a membrane is related to pore size via the Kelvin equation . [ 8 ] [ 9 ] Depending on the diameters of pores in an HFM, scanning electron microscopy or transmission electron microscopy can be used to yield a qualitative perspective of pore size.
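As an illustration of the Kelvin-equation step used in evapoporometry, the snippet below converts a measured relative vapor pressure of 2-propanol into an equivalent pore diameter. The surface tension and molar volume are approximate literature values for 2-propanol, and the relative pressure is a made-up example; a real measurement follows the instrument's own calibration and data-reduction procedure.

```python
import math

R = 8.314        # J/(mol K), gas constant
T = 298.15       # K, assumed measurement temperature
gamma = 0.021    # N/m, surface tension of 2-propanol (approximate)
V_m = 7.65e-5    # m^3/mol, molar volume of liquid 2-propanol (approximate)

p_rel = 0.95     # example measured relative vapor pressure p/p0 (assumed)

# Kelvin equation: ln(p/p0) = -2 * gamma * V_m / (r * R * T)  ->  solve for radius r
r = -2 * gamma * V_m / (R * T * math.log(p_rel))
print(f"Equivalent pore diameter: {2 * r * 1e9:.0f} nm")   # ~51 nm for these inputs
```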
Hollow fiber membranes are ubiquitously used in industrial separations, especially the filtration of drinking water. [ 11 ]
Industrial water filters are mainly equipped with ultrafiltration hollow fiber membranes, while domestic water filtration systems use microfiltration hollow fiber membranes. In microfiltration, a membrane pore diameter of 0.1 micrometers cuts off microorganisms such as bacteria, Giardia cysts and other intestinal parasites, as well as removing sediments. Ultrafiltration membranes are capable of removing not only bacteria but also viruses.
Hollow fibers are commonly used as substrates for specialized bioreactor systems ; some hollow fiber cartridges can culture billions of anchorage-dependent cells within a relatively small (<100 mL) bioreactor volume. [ 12 ]
Hollow fibers can be used for drug efficacy testing in cancer research, as an alternative to the traditional, but more expensive, xenograft model. [ 13 ]
Hollow fiber membranes are used in membrane oxygenators for extracorporeal membrane oxygenation , which oxygenates blood and temporarily takes over the function of the lungs in critically ill patients. | https://en.wikipedia.org/wiki/Hollow_fiber_membrane |
A hollow structural section ( HSS ) is a type of metal profile with a hollow cross section . The term is used predominantly in the United States, or other countries which follow US construction or engineering terminology.
HSS members can be circular, square, or rectangular sections, although other shapes such as elliptical are also available. Per code, HSS is composed only of structural steel.
HSS is sometimes mistakenly referenced as hollow structural steel . Rectangular and square HSS are also commonly called tube steel or box section . Circular HSS are sometimes mistakenly called steel pipe , although true steel pipe is actually dimensioned and classed differently from HSS. (HSS dimensions are based on exterior dimensions of the profile; pipes are also manufactured to an exterior tolerance, albeit to a different standard.) The corners of HSS are heavily rounded, having a radius which is approximately twice the wall thickness. The wall thickness is uniform around the section.
In the UK, or other countries which follow British construction or engineering terminology, the term HSS is not used. Rather, the three basic shapes are referenced as CHS, SHS, and RHS, being circular, square, and rectangular hollow sections. Typically, these designations will also relate to metric sizes, thus the dimensions and tolerances differ slightly from HSS.
HSS, especially rectangular sections, are commonly used in welded steel frames where members experience loading in multiple directions. Square and circular HSS have very efficient shapes for this multiple-axis loading as they have uniform geometry along two or more cross-sectional axes, and thus uniform strength characteristics. This makes them good choices for columns . They also have excellent resistance to torsion .
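The efficiency of a square HSS as a column can be illustrated by comparing its radius of gyration with that of a solid square bar containing the same amount of steel: the hollow section places the material farther from the centroid and therefore resists buckling much better. The section dimensions below are arbitrary example values, and the corner rounding described above is ignored for simplicity.

```python
import math

# Example square HSS: 100 mm x 100 mm outside, 6 mm wall (corner radii ignored)
b, t = 0.100, 0.006                       # m
A_hss = b**2 - (b - 2*t)**2               # cross-sectional area, m^2
I_hss = (b**4 - (b - 2*t)**4) / 12.0      # second moment of area, m^4
r_hss = math.sqrt(I_hss / A_hss)          # radius of gyration, m

# Solid square bar with the same cross-sectional area
s = math.sqrt(A_hss)
r_solid = s / math.sqrt(12.0)

print(f"HSS radius of gyration:   {r_hss * 1000:.1f} mm")    # ~38.5 mm
print(f"Solid bar of equal area:  {r_solid * 1000:.1f} mm")   # ~13.7 mm
```

Since the elastic buckling load of a column of given length and area grows with the square of its radius of gyration, the hollow section in this example is several times more resistant to buckling than a solid bar of the same weight per unit length.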
HSS can also be used as beams , although wide flange or I-beam shapes are in many cases a more efficient structural shape for this application. However, the HSS has superior resistance to lateral torsional buckling .
The flat square surfaces of rectangular HSS can ease construction, and they are sometimes preferred for architectural aesthetics in exposed structures, although elliptical HSS are becoming more popular in exposed structures for the same aesthetic reasons.
In the recent past, HSS was commonly available in mild steel such as A500 grade B . Today, HSS is commonly available in mild steel as A500 grade C . Other steel grades available for HSS are A847 (weathering steel), A1065 (large sections, up to 50 inches square, made with the SAW process), and the recently approved A1085 (higher strength and tighter tolerances than A500).
Square HSS is made in the same way as pipe: during the manufacturing process, flat steel plate is gradually formed into a round shape until the edges meet, ready to be welded. The edges are then welded together to form the mother tube. The mother tube then passes through a series of shaping stands which form the round section into the final square or rectangular shape. Most American manufacturers adhere to the ASTM A500 or the newly adopted ASTM A1085 standards, while Canadian manufacturers follow both ASTM A500 and CSA G40.21. European hollow sections are generally in accordance with the EN 10210 standard.
HSS is often filled with concrete to improve fire rating, as well as robustness. When this is done, the product is referred to as a Lally column after its inventor John Lally of Waltham, Massachusetts. (The pronunciation is often corrupted to lolly column .) For example, barriers around parking areas ( bollards ) made of HSS are often filled, to at least bumper height, with concrete. This is an inexpensive (when replacement costs are factored in) way of adding compressive strength to the bollard, which can help prevent unsightly local denting, though it does not generally significantly increase the overall structural properties of the bollard. | https://en.wikipedia.org/wiki/Hollow_structural_section |
Hollywood is an RNA splicing database containing data for the splicing of orthologous genes in different species. [ 1 ]
This Biological database -related article is a stub . You can help Wikipedia by expanding it . | https://en.wikipedia.org/wiki/Hollywood_(database) |
Lite-Trac is a trading name of Holme Farm Supplies Ltd , a manufacturer of agricultural machinery registered in England and based in Peterborough. [ 1 ] The Lite-Trac name comes from "lite tractor", due to the patented chassis design enabling the inherently very heavy machines manufactured by the company to have a light footprint for minimum soil compaction.
Holme Farm Supplies Ltd agricultural products, sold under the Lite-Trac name, include tool carriers, self-propelled lime and fertiliser spreaders, sprayers , granular applicators and tank masters. Lite-Trac is currently the manufacturer of Europe's largest four-wheeled self-propelled crop sprayers . [ 2 ] The company's products are identifiable by the combination of unpainted stainless steel tanks and booms with bright yellow cabs and detailing.
Lite-Trac had existed as a name since before 2006, with first prototypes built in 2004. [ 3 ] Following the success of initial testing, Lite-Trac was incorporated in 2008 to develop and manufacture the SS2400 Tool Carrier with its patented chassis layout. Since then Lite-Trac has gone on to develop its own crop sprayers which exploit the potential of the SS2400 chassis and has branched out into spreading and granular application.
Lite-Trac manufactures a range of agricultural machinery .
The Lite-Trac SS2400 is a tool carrier with a 14 tonne payload capacity. The patented chassis layout with its mid-mounted driveline gives equal weight distribution on all four wheels. The fully mechanical driveline is particularly suited to heavy work in hilly conditions and includes a six-speed automatic powershift gearbox with a locking torque converter, capable of 50 km/h. It also features a 260 horsepower (194 kW) engine , air suspension , anti-roll bars and a forward-mounted cab . [ 4 ]
When it comes to self-propelled sprayers, there is big and bigger. Then we come to the Lite Trac.
A Lite-Trac crop sprayer, or liquid fertiliser applicator, mounts onto the SS2400 Tool Carrier centrally between both axles to maintain equal weight distribution on all four wheels and a low centre of gravity whether empty or full. [ 6 ] The stainless steel tanks are manufactured in capacities of up to 8,000 litres, whilst Pommier aluminium booms of up to 48 meters can be fitted, making these Europe's largest four-wheeled self-propelled sprayers.
Lite-Trac also manufacture bespoke sprayers. [ 7 ]
The Agri-Spread high-capacity demountable spreader mounts onto the SS2400 Tool Carrier and is specifically designed to take full advantage of its 50/50 weight distribution. [ 8 ] [ 9 ] The spreader is capable of spreading a wide range of materials, including fertiliser, fibre foss, lime and poultry waste.
The standard hopper can hold up to 12 t of lime or 5.6 t of fertiliser, and extension sides are available for lighter materials. Spreading widths of 24 m and above are achievable at rates of 20 kg/ha to 4 t/ha, depending on the product.
Lite-Trac pneumatic applicators for precision application of slug pellets and Avadex are manufactured in widths of up to 36 metres.
Lite-Trac showcases its machinery at the following shows and events: | https://en.wikipedia.org/wiki/Holme_Farm_Supplies_Ltd |
Holmes is a cognitive computing system developed by the Indian technology corporation Wipro and announced in 2016. Its name is a reference to IBM's Watson , and is a backronym for "Heuristics and Ontology-based Learning Machines and Experiential Systems". [ 1 ]
Its uses include development of digital virtual agents, predictive systems, cognitive process automation, visual computing applications, knowledge virtualization, robotics and drones. The vision for the HOLMES platform was created by Ramprasad K.R. (Rampi), who was the chief technologist for AI at Wipro .
This computing article is a stub . You can help Wikipedia by expanding it . | https://en.wikipedia.org/wiki/Holmes_(computer) |
In the theory of partial differential equations , Holmgren's uniqueness theorem , or simply Holmgren's theorem , named after the Swedish mathematician Erik Albert Holmgren (1873–1943), is a uniqueness result for linear partial differential equations with real analytic coefficients. [ 1 ]
We will use the multi-index notation :
Let α = { α 1 , … , α n } ∈ N 0 n , {\displaystyle \alpha =\{\alpha _{1},\dots ,\alpha _{n}\}\in \mathbb {N} _{0}^{n},} ,
with N 0 {\displaystyle \mathbb {N} _{0}} standing for the nonnegative integers;
denote | α | = α 1 + ⋯ + α n {\displaystyle |\alpha |=\alpha _{1}+\cdots +\alpha _{n}} and ∂ x α = ∂ x 1 α 1 ⋯ ∂ x n α n {\displaystyle \partial _{x}^{\alpha }=\partial _{x_{1}}^{\alpha _{1}}\cdots \partial _{x_{n}}^{\alpha _{n}}} .
Holmgren's theorem in its simpler form could be stated as follows: if P {\displaystyle P} is an elliptic partial differential operator with real-analytic coefficients and P u {\displaystyle Pu} is real-analytic in a connected open set Ω ⊂ R n {\displaystyle \Omega \subset \mathbb {R} ^{n}} , then u {\displaystyle u} is also real-analytic in Ω {\displaystyle \Omega } .
This statement, with "analytic" replaced by "smooth", is Hermann Weyl 's classical lemma on elliptic regularity : [ 2 ]
This statement can be proved using Sobolev spaces .
Let Ω {\displaystyle \Omega } be a connected open neighborhood in R n {\displaystyle \mathbb {R} ^{n}} , and let Σ {\displaystyle \Sigma } be an analytic hypersurface in Ω {\displaystyle \Omega } , such that there are two open subsets Ω + {\displaystyle \Omega _{+}} and Ω − {\displaystyle \Omega _{-}} in Ω {\displaystyle \Omega } , nonempty and connected, not intersecting Σ {\displaystyle \Sigma } nor each other, such that Ω = Ω − ∪ Σ ∪ Ω + {\displaystyle \Omega =\Omega _{-}\cup \Sigma \cup \Omega _{+}} .
Let P = ∑ | α | ≤ m A α ( x ) ∂ x α {\displaystyle P=\sum _{|\alpha |\leq m}A_{\alpha }(x)\partial _{x}^{\alpha }} be a differential operator with real-analytic coefficients.
Assume that the hypersurface Σ {\displaystyle \Sigma } is noncharacteristic with respect to P {\displaystyle P} at every one of its points: σ m ( P ) ( x , ξ ) ≠ 0 for all ( x , ξ ) ∈ N ∗ Σ with ξ ≠ 0 {\displaystyle \sigma _{m}(P)(x,\xi )\neq 0{\text{ for all }}(x,\xi )\in N^{*}\Sigma ,\ \xi \neq 0} .
Above, σ m ( P ) ( x , ξ ) = ∑ | α | = m A α ( x ) ξ α {\displaystyle \sigma _{m}(P)(x,\xi )=\sum _{|\alpha |=m}A_{\alpha }(x)\xi ^{\alpha }} is the principal symbol of P {\displaystyle P} . N ∗ Σ {\displaystyle N^{*}\Sigma } is the conormal bundle to Σ {\displaystyle \Sigma } , defined as N ∗ Σ = { ( x , ξ ) ∈ T ∗ R n : x ∈ Σ , ξ | T x Σ = 0 } {\displaystyle N^{*}\Sigma =\{(x,\xi )\in T^{*}\mathbb {R} ^{n}:x\in \Sigma ,\,\xi |_{T_{x}\Sigma }=0\}} .
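As a concrete illustration (an added worked example, not part of the article), the noncharacteristic condition can be checked for the wave operator and the hypersurface { t = 0 }; constant factors and sign conventions in the symbol are ignored here:

```latex
% Illustrative worked example (not from the article): noncharacteristic hypersurfaces
% for the wave and heat operators in coordinates (t, x) on R^{1+n}.
\[
  P_{\mathrm{wave}} = \partial_t^2 - \sum_{j=1}^{n} \partial_{x_j}^2 ,
  \qquad
  \sigma_2(P_{\mathrm{wave}})(t,x;\tau,\xi) = \tau^2 - |\xi|^2 .
\]
% The conormal bundle of \Sigma = \{ t = 0 \} consists of the covectors (\tau, 0) with \tau \neq 0, and there
\[
  \sigma_2(P_{\mathrm{wave}})(t,x;\tau,0) = \tau^2 \neq 0 ,
\]
% so \{ t = 0 \} is noncharacteristic for the wave operator. By contrast, for the heat operator
% P = \partial_t - \sum_j \partial_{x_j}^2 the principal (second-order) symbol is -|\xi|^2,
% which vanishes on the same covectors (\tau, 0), so \{ t = 0 \} is characteristic for it.
```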
The classical formulation of Holmgren's theorem is as follows:
Consider the problem
with the Cauchy data
Assume that F ( t , x , z ) {\displaystyle F(t,x,z)} is real-analytic with respect to all its arguments in the neighborhood of t = 0 , x = 0 , z = 0 {\displaystyle t=0,x=0,z=0} and that ϕ k ( x ) {\displaystyle \phi _{k}(x)} are real-analytic in the neighborhood of x = 0 {\displaystyle x=0} .
Note that the Cauchy–Kowalevski theorem does not exclude the existence of solutions which are not real-analytic. [ citation needed ]
On the other hand, in the case when F ( t , x , z ) {\displaystyle F(t,x,z)} is polynomial of order one in z {\displaystyle z} , so that
Holmgren's theorem states that the solution u {\displaystyle u} is real-analytic and hence, by the Cauchy–Kowalevski theorem, is unique. | https://en.wikipedia.org/wiki/Holmgren's_uniqueness_theorem |
Holmium titanate is an inorganic compound with the chemical formula Ho 2 Ti 2 O 7 .
Holmium titanate is a spin ice material [ 3 ] like dysprosium titanate and holmium stannate . [ 4 ]
This inorganic compound –related article is a stub . You can help Wikipedia by expanding it . | https://en.wikipedia.org/wiki/Holmium_titanate |
In economics , Holmström's theorem is an impossibility theorem or trilemma attributed to Bengt R. Holmström proving that no incentive system for a team of agents can make all of the following true: (1) income equals outflow, so the scheme distributes exactly the output among the team members (the budget balances); (2) the system has a Nash equilibrium ; and (3) the outcome is Pareto efficient .
Thus a Pareto-efficient system with a balanced budget has no point at which no agent can do better by changing their own effort level while everyone else's effort stays the same, so the agents can never settle down to a stable strategy; a Pareto-efficient system with a Nash equilibrium either does not distribute all revenue or spends more than it has; and a system with a Nash equilibrium and a balanced budget does not maximise the total profit of everybody.
The Gibbard–Satterthwaite theorem in social choice theory is a related impossibility theorem dealing with voting systems .
Suppose there is a team of n > 1 risk neutral agents whose preference functions are strictly concave and increasing, and also additively separable in money and effort. Then, under an incentive system that distributes exactly the output among the team members, any Nash equilibrium is Pareto inefficient.
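A minimal numerical sketch of this result (illustrative only; the linear production function, quadratic effort cost, and equal budget-balanced sharing rule are assumptions made here, not taken from Holmström's paper) shows that under equal sharing the symmetric Nash effort level 1/n falls short of the Pareto-efficient level 1:

```python
# Illustrative sketch of the team moral-hazard ("1/n") problem behind Holmström's theorem.
# Assumptions (not from the article): output y = sum of efforts, effort cost c(e) = e**2 / 2,
# and a budget-balanced equal-sharing rule s_i(y) = y / n.

def nash_effort(n: int) -> float:
    # Agent i maximizes y/n - e_i**2/2 with y = e_i + (others' efforts);
    # the first-order condition is 1/n = e_i, so the symmetric Nash effort is 1/n.
    return 1.0 / n

def efficient_effort() -> float:
    # A planner maximizes total surplus sum(e_j) - sum(e_j**2/2);
    # the first-order condition is 1 = e_j, so the efficient effort is 1.
    return 1.0

def total_surplus(effort: float, n: int) -> float:
    # Total surplus = output minus total effort cost, with identical efforts.
    return n * effort - n * effort**2 / 2

if __name__ == "__main__":
    for n in (2, 5, 10):
        e_nash, e_star = nash_effort(n), efficient_effort()
        print(f"n={n}: Nash effort {e_nash:.2f} vs efficient {e_star:.2f}; "
              f"surplus {total_surplus(e_nash, n):.2f} vs {total_surplus(e_star, n):.2f}")
```

For any n > 1 the Nash surplus 1 − 1/(2n) is strictly below the efficient surplus n/2, which is the inefficiency the theorem describes.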
Rasmusen [ 1 ] studies the relaxation of this problem obtained by removing the assumption that the agents are risk neutral (Holmström: "linear in money").
The economic reason for Holmström's result is a "Sharing problem". A team member faces efficient incentives if he receives the full marginal returns from an additional unit of his input. Under a budget-balanced sharing scheme, however, the team members cannot be incentivized this way. This problem would be circumvented if the output could be distributed n times instead of only once. This requires that the team members promise fixed payments to an "Anti-Sharer", as demonstrated by Kirstein and Cooter. [ 2 ] However, if one of the team members takes over the role of the Anti-Sharer, this player has no incentive whatsoever to spend effort. The article derives conditions under which internal Anti-Sharing induces the team members to spend more effort than a budget-balanced sharing contract. Holmström's research paper "Moral Hazard and Observability" demonstrates the related point that executive pay should not rely mainly on aggregate company statistics, chiefly the stock price and its movements. These indicators reflect collective outcomes rather than individual effort, which aligns with the theorem's implication that linking incentives too tightly to aggregate results can lead to inefficiencies. [ 3 ]
Holochain is an open source framework for developing and deploying distributed applications. [ 1 ] Its purpose is to enable the kinds of activities people do on the Internet every day (wikis, blogs, social networks, marketplaces, etc.) without using centralized servers. [ 2 ] Instead, Holochain applications are run on the users’ devices. [ 3 ] It has been proposed as an alternative technology to blockchain -based systems and centralized platforms. [ 2 ] [ 4 ] [ 5 ] [ 6 ]
Although the Holochain project has been in development for more than 10 years, the first Beta release happened in January 2023. [ 7 ] Therefore, it has not yet been tested as extensively as blockchains in real-world environments. [ 3 ] [ 4 ]
Co-founders of Holochain are Arthur Brock and Eric Harris-Braun. [ 8 ]
Holochain is an open-source framework for developing distributed applications, utilizing a novel implementation of distributed hash tables (DHT) called RRDHT, [ 1 ] inspired by BitTorrent , along with concepts similar to Git for data management, and cryptographic signatures for ensuring data integrity. [ 3 ] [ 9 ] The main difference between Holochain and other distributed ledger technologies (DLTs) is that Holochain is agent-centric while other DLTs are data-centric. [ 10 ] [ 11 ] It has been classified as a type of private and permissionless DLT for building decentralized applications. [ 12 ]
An agent-centric approach to applications means that each agent stores only relevant data, and unlike Bitcoin and other blockchain systems, there is no singular centralized database to be maintained. [ 13 ] [ 14 ] Where blockchains operate with a global consensus model, often leading to scalability issues, [ 3 ] Holochain implements an agent-centric model. This means that rather than maintaining a single, unified ledger of all transactions or interactions, each user on Holochain has their own individual chain of data, or "source chain", that represents their history of actions. The personal chain interacts with a DHT that provides a means for the larger network to verify the data by peers that are randomly selected. This difference makes it more efficient than blockchain , as demonstrated by its ability to run over 50 full nodes on a single Raspberry Pi . [ 15 ] In the scientific literature, there are performance analyses conducted to compare the efficiency, resource usage, privacy, and security levels of Holochain-based solutions and blockchain-based solutions. [ 16 ] [ 17 ] Holochain has been compared to proof-of-work and proof-of-stake blockchains in terms of its environmental impact. [ 18 ]
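The agent-centric model described above can be sketched roughly in Python. This is only an illustration of the general idea of hash-linked source chains whose entries are published to a content-addressed store playing the role of the DHT; the class names, fields, and functions are invented for the example and do not correspond to Holochain's actual API:

```python
# Illustrative sketch (not Holochain's actual API): each agent keeps a hash-linked
# "source chain" of its own actions and publishes entries to a shared,
# content-addressed store standing in for the DHT. Signatures are omitted.
import hashlib
import json

def sha256(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

class Agent:
    def __init__(self, name: str, dht: dict):
        self.name = name
        self.dht = dht      # shared dict standing in for the distributed hash table
        self.chain = []     # this agent's private source chain

    def commit(self, payload: dict) -> str:
        prev = self.chain[-1]["hash"] if self.chain else None
        entry = {"author": self.name, "prev": prev, "payload": payload}
        entry_hash = sha256(json.dumps(entry, sort_keys=True).encode())
        self.chain.append({"hash": entry_hash, "entry": entry})
        self.dht[entry_hash] = entry    # publish for peer lookup and validation
        return entry_hash

def validate(dht: dict, entry_hash: str) -> bool:
    # A peer can re-hash the stored entry to check its integrity.
    entry = dht[entry_hash]
    return sha256(json.dumps(entry, sort_keys=True).encode()) == entry_hash

if __name__ == "__main__":
    dht = {}
    alice = Agent("alice", dht)
    h = alice.commit({"type": "post", "text": "hello"})
    print(validate(dht, h))   # True: any peer can verify the published entry
```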
Here are some Holochain-based projects being developed by independent groups:
hREA (Holochain Resource-Event-Agent ) is an implementation of the Valueflows specification, providing a scalable and distributed framework for economic network coordination. It allows for transparent and trusted resource and information flows between decentralized agents in various ecosystems, enabling organizations and businesses to track and trace value flows in both conventional and alternative economies. [ 19 ]
HummHive is a platform that allows people to create, store, and share content, like stories and media, securely and privately. It gives users the freedom to customize their experience and manage their audience without relying on centralized servers or being subjected to tracking and advertising. [ 20 ]
MewsFeed is a microblogging platform, alternative to Twitter or Mastodon . [ 21 ] [ 22 ]
AD4M is a framework for distributed social spaces, enabling seamless interoperability among various apps. With its core concepts of agents, languages, and perspectives, AD4M facilitates collaborative data sharing and group interactions, while remaining technology-agnostic, allowing developers to integrate with various technologies like Holochain, blockchains, and centralized APIs. [ 23 ]
Flux is a distributed social platform that allows users to create private communities and organize communication channels. Built on the AD4M framework, it aims to provide a customizable user interface and feature set, with plans for future features such as distributed governance and Web3 integrations. [ 24 ]
Internet of Energy Network (IOEN) is an open-source project that aims to promote clean energy by enabling community and local energy system balancing and coordination. It employs a dual-layered solution with IOENFuel for local transactions and the IOEN Token for global value transfer. [ 25 ] [ 26 ]
Kizuna is an open-source, non-profit messaging app built on Holochain, focused on user privacy and data control. It offers encrypted communication and self-destructing messages, with an emphasis on censorship resistance. Kizuna is ad-free and claims to be independent of corporate interests. [ 27 ]
Neighbourhoods focuses on user-centric social apps and collaboration tools. It allows users to customize group interactions and store records on their devices for peer-to-peer communication. The project features an embedded marketplace for developers and uses its native currency, $NHT, to support the code commons. [ 28 ]
Sustafy is a supply chain transparency app that collects real-time origin data for handmade craft makers, helping them comply with the EU's Digital Product Passport (DPP) legislation. By supporting the UN Sustainable Development Goals, the project aims to provide artisans and small-scale farmers with improved access to financial tools, markets, and cooperatives. [ 29 ]
Trust Graph is an open protocol and toolkit for building and reading distributed trust graphs, designed to share trust, ratings, and attestation information across decentralized apps and platforms. [ 30 ]
Hylo is a platform for managing and organizing communities, providing tools for communication, collaboration, and membership management. Suitable for various group sizes, it supports conversation organization, file sharing, and networking among affiliated communities. [ 31 ]
Despite its potential advantages and innovative approach, Holochain also faces several challenges and limitations. This section outlines some of these issues, including technical limitations, adoption barriers, and competition from other technologies. | https://en.wikipedia.org/wiki/Holochain |
The hologenome theory of evolution [ 1 ] [ 2 ] [ 3 ] [ 4 ] recasts the individual animal or plant (and other multicellular organisms) as a community or a " holobiont " – the host plus all of its symbiotic microbes. Consequently, the collective genomes of the holobiont form a "hologenome". Holobionts and hologenomes are structural entities [ 5 ] that replace misnomers in the context of host-microbiota symbioses such as superorganism (i.e., an integrated social unit composed of conspecifics), organ, and metagenome . Variation in the hologenome may encode phenotypic plasticity of the holobiont and can be subject to evolutionary changes caused by selection and drift, if portions of the hologenome are transmitted between generations with reasonable fidelity. One of the important outcomes of recasting the individual as a holobiont subject to evolutionary forces is that genetic variation in the hologenome can be brought about by changes in the host genome and also by changes in the microbiome, including new acquisitions of microbes, horizontal gene transfers, and changes in microbial abundance within hosts. Although there is a rich literature on binary host–microbe symbioses, the hologenome concept distinguishes itself by including the vast symbiotic complexity inherent in many multicellular hosts.
Lynn Margulis coined the term holobiont in her 1991 book Symbiosis as a Source of Evolutionary Innovation: Speciation and Morphogenesis (MIT Press), [ 6 ] though this was not in the context of diverse populations of microbes. The term holobiont is derived from the Ancient Greek ὅλος (hólos, "whole"), and the word biont for a unit of life. [ 7 ]
In September 1994, Richard Jefferson coined the term hologenome when he introduced the hologenome theory of evolution at a presentation at Cold Spring Harbor Laboratory . [ 1 ] [ 8 ] [ 9 ] At the CSH Symposium and earlier, the unsettling number and diversity of microbes that were being discovered through the powerful tool of PCR-amplification of 16S ribosomal RNA genes was exciting, but it was confusing the interpretation of diverse studies. A number of speakers referred to microbial contributions to mammalian or plant DNA samples as 'contamination'. In his lecture, Jefferson argued that these were likely not contamination, but rather essential components of the samples that reflected the actual genetic composition of the organism being studied, integral to the complex system in which it lives. This implied that the logic of the organism's performance and capabilities would be embedded only in the hologenome. Observations on the ubiquity of microbes in plant and soil samples as well as laboratory work on molecular genetics of vertebrate-associated microbial enzymes impacting hormone action informed this hypothesis. [ 10 ] Reference was made to work indicating that mating pheromones were only released after skin microbiota activated the precursors. [ 11 ]
At the 14th South African Congress of Biochemistry and Molecular Biology in 1997, [ 12 ] Jefferson described how the modulation of steroid and other hormone levels by microbial glucuronidases and arylsulfatase profoundly impacted the performance of the composite entity. Following on work done isolating numerous and diverse glucuronidases from microbial samples of African animal feces, [ 13 ] and their differential cleavage of hormones, he hypothesized that this phenomenon, microbially-mediated hormone modulation, could underlie evolution of disease and social behavior as well as the holobiont fitness and system resilience. In his lectures, Jefferson coined and defined the term 'Ecotherapeutics', referring to adjustment of the population structure of the microbial composition in plants and animals - the microbiome - and their support ecosystem to improve performance. [ 9 ] [ 12 ] In 2007, Jefferson followed with a series of posts on the logic of hologenome theory on Cambia's Science as Social Enterprise page. [ 14 ]
In 2008, Eugene Rosenberg and Ilana Zilber-Rosenberg apparently independently used the term hologenome and developed the hologenome theory of evolution. [ 15 ] This theory was originally based on their observations of Vibrio shiloi-mediated bleaching of the coral Oculina patagonica . Since its first introduction, the theory has been promoted as a fusion of Lamarckism and Darwinism and expanded to all of evolution, not just that of corals. The history of the development of the hologenome theory and the logic undergirding its development was the focus of a cover article by Carrie Arnold in New Scientist in January, 2013. [ 16 ] A comprehensive treatment of the theory, including updates by the Rosenbergs on neutrality, pathogenesis and multi-level selection, can be found in their 2013 book . [ 2 ]
In 2013, Robert Brucker and Seth Bordenstein [ 17 ] re-invigorated the hologenome concept by showing that the gut microbiomes of closely related Nasonia wasp species are distinguishable, and contribute to hybrid death. This set interactions between hosts and microbes in a conceptual continuum with interactions between genes in the same genome. In 2015, Bordenstein and Kevin R. Theis outlined a conceptual framework that aligns with pre-existing theories in biology. [ 4 ]
Multicellular life is made possible by the coordination of physically and temporally distinct processes, most prominently through hormones . Hormones mediate critical activities in vertebrates, including ontogeny, somatic and reproductive physiology, sexual development, performance and behaviour.
Many of these hormones – including most steroids and thyroxines – are secreted in inactive form through the endocrine and apocrine systems into epithelial corridors in which microbiota are widespread and diverse, including gut, urinary tract, lung and skin. There, the inactive hormones can be re-activated by cleavage of the glucuronide or sulfate residue, allowing them to be reabsorbed. Thus the concentration and bioavailability of many of the hormones are impacted by microbial cleavage of conjugated intermediaries, itself determined by a diverse population with redundant enzymatic capabilities. Aspects of enterohepatic circulation have been known for decades, but had been viewed as an ancillary effect of detoxification and excretion of metabolites and xenobiotics, including effects on the lifetimes of pharmaceuticals such as birth control formulations.
The basic premise of Jefferson's first exposition of the hologenome theory is that a spectrum of hormones can be re-activated and resorbed from epithelia, potentially modulating effective time and dose relationships of many vertebrate hormones. The ability to alter and modulate, amplify and suppress, disseminate and recruit new capabilities as microbially-encoded 'traits' means that sampling, sensing and responding to the environment become intrinsic features and emergent capabilities of the holobiont, with mechanisms that can provide rapid, sensitive, nuanced and persistent performance changes.
Studies by Froebe et al. [ 18 ] in 1990 indicated that essential mating pheromones, including androstenols, required activation by skin-associated microbial glucuronidases and sulfatases. In the absence of microbial populations in the skin, no detectable aromatic pheromone was released, as the pro-pheromone remained water-soluble and non-volatile. This effectively meant that the microbes in the skin were essential to produce a mating signal. [ 19 ]
Subsequent re-articulation describing the hologenome theory by Rosenberg and Zilber-Rosenberg, published 13 years after Jefferson's definition of the theory, was based on their observations of corals, and the coral probiotic hypothesis.
Coral reefs are the largest structures created by living organisms, and contain abundant and highly complex microbial communities. A coral "head" is a colony of genetically identical polyps , which secrete an exoskeleton near the base. Depending on the species, the exoskeleton may be hard, based on calcium carbonate , or soft and proteinaceous. Over many generations, the colony creates a large skeleton that is characteristic of the species. Diverse forms of life take up residence in a coral colony, including photosynthetic algae such as Symbiodinium , as well as a wide range of bacteria including nitrogen fixers , [ 20 ] and chitin decomposers, [ 21 ] all of which form an important part of coral nutrition. [ 22 ] The association between coral and its microbiota is species dependent, and different bacterial populations are found in mucus, skeleton and tissue from the same coral fragment. [ 23 ]
Over the past several decades, major declines in coral populations have occurred. Climate change , water pollution and overfishing are three stress factors that have been described as leading to disease susceptibility. Over twenty different coral diseases have been described, but of these, only a handful have had their causative agents isolated and characterized.
Coral bleaching is the most serious of these diseases. In the Mediterranean Sea, the bleaching of Oculina patagonica was first described in 1994 and, through a rigorous application of Koch's Postulates , determined to be due to infection by Vibrio shiloi . [ 24 ] From 1994 to 2002, bacterial bleaching of O. patagonica occurred every summer in the eastern Mediterranean. Surprisingly, however, after 2003, O. patagonica in the eastern Mediterranean has been resistant to V. shiloi infection, although other diseases still cause bleaching.
The surprise stems from the knowledge that corals are long lived, with lifespans on the order of decades, [ 25 ] and do not have adaptive immune systems . Their innate immune systems do not produce antibodies, and they should seemingly not be able to respond to new challenges except over evolutionary time scales. Yet multiple researchers have documented variations in bleaching susceptibility that may be termed 'experience-mediated tolerance'. [ 26 ] [ 27 ] The puzzle of how corals managed to acquire resistance to a specific pathogen led Eugene Rosenberg and Ilana Zilber-Rosenberg to propose the Coral Probiotic Hypothesis. [ 23 ] This hypothesis proposes that a dynamic relationship exists between corals and their symbiotic microbial communities. Beneficial mutations can arise and spread among the symbiotic microbes much faster than in the host corals. By altering its microbial composition, the "holobiont" can adapt to changing environmental conditions far more rapidly than by genetic mutation and selection in the host species alone.
Extrapolating the coral probiotic hypothesis to other organisms, including higher plants and animals, led to the Rosenberg's support for and publications around the hologenome theory of evolution.
The framework of the hologenome theory of evolution is as follows (condensed from Rosenberg et al. , 2007): [ 28 ]
Some authors supplement the above principles with an additional one. If a given holobiont is to be considered a unit of natural selection:
Ten principles of holobionts and hologenomes were presented in PLOS Biology: [ 4 ]
Many case studies clearly demonstrate the importance of an organism's associated microbiota to its existence. (For example, see the numerous case studies in the Microbiome article.) However, horizontal versus vertical transmission of endosymbionts must be distinguished. [ 31 ] Endosymbionts whose transmission is predominantly vertical may be considered as contributing to the heritable genetic variation present in a host species. [ 29 ]
In the case of colonial organisms such as corals, the microbial associations of the colony persist even though individual members of the colony, reproducing asexually, live and die. Corals also have a sexual mode of reproduction, resulting in planktonic larva; it is less clear whether microbial associations persist through this stage of growth. Also, the bacterial community of a colony may change with the seasons. [ 23 ]
Many insects maintain heritable obligate symbiosis relationships with bacterial partners. For example, normal development of female wasps of the species Asobara tabida is dependent on Wolbachia infection. If "cured" of the infection, their ovaries degenerate. [ 32 ] Transmission of the infection is vertical through the egg cytoplasm.
In contrast, many obligate symbiosis relationships have been described in the literature where transmission of the symbionts is via horizontal transfer. A well-studied example is the nocturnally feeding squid Euprymna scolopes , which camouflages its outline against the moonlit ocean surface by emitting light from its underside with the aid of the symbiotic bacterium Vibrio fischeri . [ 33 ] The Rosenbergs cite this example within the context of the hologenome theory of evolution. [ 34 ] Squid and bacterium maintain a highly co-evolved relationship. The newly hatched squid collects its bacteria from the sea water, and lateral transfer of symbionts between hosts permits faster transfer of beneficial mutations within a host species than are possible with mutations within the host genome.
Another traditional distinction between endosymbionts has been between primary and secondary symbionts. [ 29 ] Primary endosymbionts reside in specialized host cells that may be organized into larger, organ-like structures (in insects, the bacteriome ). Associations between hosts and primary endosymbionts are usually ancient, with an estimated age of tens to hundreds of millions of years. According to endosymbiotic theory , extreme cases of primary endosymbionts include mitochondria , plastids (including chloroplasts ), and possibly other organelles of eukaryotic cells. Primary endosymbionts are usually transmitted exclusively vertically, and the relationship is always mutualistic and generally obligate for both partners. Primary endosymbiosis is surprisingly common. An estimated 15% of insect species, for example, harbor this type of endosymbiont. [ 35 ] In contrast, secondary endosymbiosis is often facultative, at least from the host point of view, and the associations are less ancient. Secondary endosymbionts do not reside in specialized host tissues, but may dwell in the body cavity dispersed in fat, muscle, or nervous tissue, or may grow within the gut. Transmission may be via vertical, horizontal, or both vertical and horizontal transfer. The relationship between host and secondary endosymbiont is not necessarily beneficial to the host; indeed, the relationship may be parasitic. [ 29 ]
The distinction between vertical and horizontal transfer, and between primary and secondary endosymbiosis is not absolute, but follows a continuum, and may be subject to environmental influences. For example, in the stink bug Nezara viridula , the vertical transmission rate of symbionts, which females provide to offspring by smearing the eggs with gastric caeca , was 100% at 20 °C, but decreased to 8% at 30 °C. [ 36 ] Likewise, in aphids, the vertical transmission of bacteriocytes containing the primary endosymbiont Buchnera is drastically reduced at high temperature. [ 37 ] In like manner, the distinction between commensal , mutualistic , and parasitic relationships is also not absolute. An example is the relationship between legumes and rhizobial species: N 2 uptake is energetically more costly than the uptake of fixed nitrogen from the soil, so soil N is preferred if not limiting. During the early stages of nodule formation, the plant-rhizobial relationship actually resembles a pathogenesis more than it does a mutualistic association. [ citation needed ]
Lamarckism , the concept that an organism can pass on characteristics that it acquired during its lifetime to its offspring (also known as inheritance of acquired characteristics or soft inheritance ) incorporated two common ideas of its time: first, that the use or disuse of a trait during an organism's lifetime strengthens or weakens it; and second, that characteristics acquired in this way can be passed on to offspring.
Although Lamarckian theory was rejected by the neo-Darwinism of the modern evolutionary synthesis in which evolution occurs through random variations being subject to natural selection , the hologenome theory has aspects that harken back to Lamarckian concepts. In addition to the traditionally recognized modes of variation ( i.e. sexual recombination , chromosomal rearrangement, mutation), the holobiont allows for two additional mechanisms of variation that are specific to the hologenome theory: (1) changes in the relative population of existing microorganisms ( i.e. amplification and reduction) and (2) acquisition of novel strains from the environment, which may be passed on to offspring. [ 34 ]
Changes in the relative population of existing microorganisms corresponds to Lamarckian "use and disuse", while the ability to acquire novel strains from the environment, which may be passed on to offspring, corresponds to Lamarckian "inheritance of acquired traits". The hologenome theory, therefore, is said by its proponents to incorporate Lamarckian aspects within a Darwinian framework. [ 34 ]
The pea aphid Acyrthosiphon pisum maintains an obligate symbiotic relationship with the bacterium Buchnera aphidicola , which is transmitted maternally to the embryos that develop within the mother's ovarioles . Pea aphids live on sap, which is rich in sugars but deficient in amino acids. They rely on their Buchnera endosymbiotic population for essential amino acids, supplying in exchange nutrients as well as a protected intracellular environment that allows Buchnera to grow and reproduce. [ 38 ] The relationship is actually more complicated than mutual nutrition; some strains of Buchnera increases host thermotolerance, while other strains do not. Both strains are present in field populations, suggesting that under some conditions, increased heat tolerance is advantageous to the host, while under other conditions, decreased heat tolerance but increased cold tolerance may be advantageous. [ 39 ] One can consider the variant Buchnera genomes as alleles for the larger hologenome. [ 30 ] The association between Buchnera and aphids began about 200 million years ago, with host and symbiont co-evolving since that time; in particular, it has been discovered that genome size in various Buchnera species has become extremely reduced, in some cases down to 450 kb, which is far smaller even than the 580 kb genome of Mycoplasma genitalium . [ 40 ]
Development of mating preferences, i.e. sexual selection , is considered to be an early event in speciation . In 1989, Dodd reported mating preferences in Drosophila that were induced by diet. [ 41 ] It has recently been demonstrated that when otherwise identical populations of Drosophila were switched in diet between molasses medium and starch medium, that the "molasses flies" preferred to mate with other molasses flies, while the "starch flies" preferred to mate with other starch flies. This mating preference appeared after only one generation and was maintained for at least 37 generations. The origin of these differences were changes in the flies' populations of a particular bacterial symbiont, Lactobacillus plantarum . Antibiotic treatment abolished the induced mating preferences. It has been suggested that the symbiotic bacteria changed the levels of cuticular hydrocarbon sex pheromones , [ 42 ] however several other research papers have been unable to replicate this effect. [ 43 ] [ 44 ] [ 45 ]
Zilber-Rosenberg and Rosenberg (2008) have tabulated many of the ways in which symbionts are transmitted and their contributions to the fitness of the holobiont, beginning with mitochondria found in all eukaryotes , chloroplast in plants, and then various associations described in specific systems. The microbial contributions to host fitness included provision of specific amino acids, growth at high temperatures, provision of nutritional needs from cellulose, nitrogen metabolism, recognition signals, more efficient food utilization, protection of eggs and embryos against metabolism, camouflage against predators, photosynthesis, breakdown of complex polymers, stimulation of the immune system, angiogenesis , vitamin synthesis, fiber breakdown, fat storage, supply of minerals from the soil, supply of organics, acceleration of mineralization, carbon cycling, and salt tolerance. [ 15 ]
The hologenome theory is debated. [ 46 ] A major criticism by Ainsworth et al. has been their claim that V. shiloi was misidentified as the causative agent of coral bleaching, and that its presence in bleached O. patagonica was simply that of opportunistic colonization. [ 47 ]
If this is true, the original observation that led to Rosenberg's later articulation of the theory would be invalid. On the other hand, Ainsworth et al. [ 47 ] performed their samplings in 2005, two years after the Rosenberg group discovered O. patagonica no longer to be susceptible to V. shiloi infection; therefore their finding that bacteria are not the primary cause of present-day bleaching in Mediterranean coral O. patagonica should not be considered surprising. The rigorous satisfaction of Koch's postulates, as employed in Kushmaro et al. (1997), [ 24 ] is generally accepted as providing a definitive identification of infectious disease agents.
Baird et al. (2009) [ 25 ] have questioned basic assumptions made by Reshef et al. (2006) [ 23 ] in presuming that (1) coral generation times are too slow to adjust to novel stresses over the observed time scales, and that (2) the scale of dispersal of coral larvae is too large to allow for adaptation to local environments. They may simply have underestimated the potential rapidity of conventional means of natural selection. In cases of severe stress, multiple cases have been documented of ecologically significant evolutionary change occurring over a handful of generations. [ 48 ] Novel adaptive mechanisms such as switching symbionts might not be necessary for corals to adjust to rapid climate change or novel stressors. [ 25 ]
Organisms in symbiotic relationships evolve to accommodate each other, and the symbiotic relationship increases the overall fitness of the participant species. Although the hologenome theory is still being debated, it has gained a significant degree of popularity within the scientific community as a way of explaining rapid adaptive changes that are difficult to accommodate within a traditional Darwinian framework. [ 34 ]
Definitions and uses of the words holobiont and hologenome also differ between proponents and skeptics, [ 5 ] and the misuse of the terms has led to confusions over what comprises evidence related to the hologenome. Ongoing discourse is attempting to clear this confusion. Theis et al. clarify that "critiquing the hologenome concept is not synonymous with critiquing coevolution, and arguing that an entity is not a primary unit of selection dismisses the fact that the hologenome concept has always embraced multilevel selection." [ 5 ]
For instance, [ 49 ] Chandler and Turelli (2014) criticize the conclusions of Brucker and Bordenstein (2013), noting that their observations are also consistent with an alternative explanation. Brucker and Bordenstein (2014) responded to these criticisms, claiming they were unfounded [ 50 ] because of factual inaccuracies and altered arguments and definitions that were not advanced by Brucker and Bordenstein (2013).
Recently, Forest L Rohwer and colleagues developed a novel statistical test to examine the potential for the hologenome theory of evolution in coral species. [ 51 ] They found that coral species do not inherit microbial communities, and are instead colonized by a core group of microbes that associate with a diversity of species. The authors conclude: "Identification of these two symbiont communities supports the holobiont model and calls into question the hologenome theory of evolution." However, other studies in coral adhere to the original and pluralistic definitions of holobionts and hologenomes. [ 52 ] David Bourne, Kathleen Morrow and Nicole Webster clarify that "The combined genomes of this coral holobiont form a coral hologenome, and genomic interactions within the hologenome ultimately define the coral phenotype." | https://en.wikipedia.org/wiki/Hologenome_theory_of_evolution |
Holographic direct sound printing ( HDSP ) is a method of 3D printing which uses acoustic holograms , developed by researchers at Concordia University. [ 1 ] [ 2 ] Researchers claim that the printing process can be carried out 20 times faster and that it has the advantages that an entire object can be created at once and that several objects can be created at the same time. According to researchers, it can be used to print inside opaque surfaces, for example inside the human body, thus opening new opportunities in medicine.
It is based on Direct Sound Printing method, introduced in 2022.
A similar method, to print 3D objects using ultrasound holograms, based on acoustic trapping, was proposed by researchers at the Max Planck Institute for Medical Research and Heidelberg University , in February 2023. [ 3 ]
This technology-related article is a stub . You can help Wikipedia by expanding it . | https://en.wikipedia.org/wiki/Holographic_direct_sound_printing |
Holographic interference microscopy ( HIM ) is holographic interferometry applied to microscopy for the visualization of phase micro-objects. Phase micro-objects are invisible because they do not change the intensity of light; they insert only invisible phase shifts. Holographic interference microscopy distinguishes itself from other microscopy methods by using a hologram and interference to convert invisible phase shifts into intensity changes.
Other microscopy methods related to holographic interference microscopy are phase contrast microscopy and holographic interferometry.
Holography was born as a "new microscopy principle": D. Gabor invented holography for electron microscopy . For various reasons his idea has not been applied in that branch of microscopy . However, the invention of holography opened up new possibilities for imaging phase micro-objects, because holographic interference methods in microscopy allow not only qualitative but also quantitative study. Combining holographic interference microscopy with methods of numerical processing has solved the problem of 3D imaging of untreated, native biological phase micro-objects. [ 1 ] [ 2 ] [ 3 ]
In the holographic interference method the images appear as the result of the interference of two object waves that have passed along the same path through the microscope optical system but at different points in time: the "empty" object wave reconstructed from the hologram , and the object wave disturbed by the phase micro-objects under study. The hologram of the "empty" object wave is recorded using a reference beam, and it is used as an optical element of the holographic interference microscope.
Depending on the conditions of the interference, two methods of holographic interference microscopy can be realized: the holographic phase-contrast method and the holographic interference-contrast method. In the first case, the phase shifts inserted by the phase micro-object into the light wave passing through it are converted into intensity changes in its image; in the second case, they are converted into deviations of interference fringes.
The holographic phase-contrast method is the holographic interference microscopy technique for phase micro-object visualization that converts the phase shifts inserted by the micro-object into the wave of light passing through it into intensity changes in the image. The method is based on the holographic addition ( constructive interference ) or holographic subtraction ( destructive interference ) of the "empty" wave reconstructed from the hologram , and the object wave disturbed by the phase micro-objects under study. The image can be considered as an interferogram in interference fringes of infinite width.
The method solves the same problem as F. Zernike's phase contrast method, but it has some advantages in comparison. Due to the equal intensities of the interfering waves, the holographic phase-contrast method gives images of maximal contrast. The size of the micro-object does not restrict the application of the method, whereas F. Zernike's phase contrast method works the more successfully, the thinner and smaller the micro-object. The image in the holographic phase-contrast method is the result of the interaction of two identical waves, and it is free of aberrations .
The method can be realized as the method of holographic addition and subtraction in an interference fringe . A small angle is introduced between the interfering waves so that the period of the resulting system of interference fringes significantly exceeds the size of the images. The conditions for the waves to be antiphased or in phase (holographic subtraction or addition) are automatically created within a dark and a bright interference fringe, respectively.
The intensities in the image of the micro-object I i m + {\displaystyle I_{im+}} and the intensity of the background I b + {\displaystyle I_{b+}} in the case of wave addition in a bright interference fringe are determined by the expressions:
I i m + ( x ′ , y ′ ) = 2 I 0 [ 1 + c o s f ( x , y ) ] {\displaystyle I_{im+}(x',y')=2I_{0}[1+cosf(x,y)]} ; I b + = 4 I 0 {\displaystyle I_{b+}=4I_{0}}
and the intensities in the image of the micro-object I i m − {\displaystyle I_{im-}} and the intensity of the background I b − {\displaystyle I_{b-}} in the case of wave subtraction in the dark interference fringe (waves are antiphased):
I i m − ( x ′ , y ′ ) = 2 I 0 [ 1 − c o s f ( x , y ) ] {\displaystyle I_{im-}(x',y')=2I_{0}[1-cosf(x,y)]} ; I b − = 0 {\displaystyle I_{b-}=0}
where f ( x , y ) {\displaystyle f(x,y)} is the phase shift inserted by the micro-object into the wave transmitted through it; I 0 {\displaystyle I_{0}} is the intensity of each of the two waves. So, dark images of phase micro-objects can be observed against the bright background in the case of wave addition, and bright images against the dark background – in the case of wave subtraction. The contrast of images is maximal.
The intensity distribution in the images depends on the phase shifts inserted by the micro-objects under study. So, the method allows the phase shifts to be measured, and 3D images of the phase micro-objects can be reconstructed by computer processing of their phase-contrast images.
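As a rough illustration of such processing (a sketch only; a real reconstruction must deal with noise, calibration of I 0 , and the sign and 2π ambiguities of the cosine), the subtraction-mode intensity formula above can be inverted numerically:

```python
# Sketch (illustrative): recover the phase shift f(x, y) from a holographic
# subtraction image I_minus = 2*I0*(1 - cos f), assuming 0 <= f <= pi.
import numpy as np

def phase_from_subtraction(i_minus: np.ndarray, i0: float) -> np.ndarray:
    cos_f = 1.0 - i_minus / (2.0 * i0)
    return np.arccos(np.clip(cos_f, -1.0, 1.0))

if __name__ == "__main__":
    i0 = 1.0
    true_phase = np.linspace(0.0, np.pi, 5)          # phase shifts across the object
    image = 2.0 * i0 * (1.0 - np.cos(true_phase))    # simulated subtraction image
    print(phase_from_subtraction(image, i0))         # recovers the phase values
```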
The high sensitivity to vibrations is the main drawback of the method. It requires developing the hologram in place. So, the method remains "exotic", and it is not widely applied.
The holographic interference-contrast method is the holographic interference microscopy technique for phase micro-object visualization that converts the phase shifts inserted by the micro-object into the transmitted light wave into deviations of interference fringes in its image. A certain angle is introduced between the "empty" wave and the wave disturbed by the phase micro-objects, so that a system of straight interference fringes is obtained; the fringes are deviated in the image of the micro-object. The image can be considered as an interferogram in fringes of finite width. The deviation h ( x ′ , y ′ ) {\displaystyle h(x',y')} of the interference fringe at a point of the image is linearly dependent on the phase shift f ( x , y ) {\displaystyle f(x,y)} inserted at the corresponding point of the micro-object:
h ( x ′ , y ′ ) = f ( x , y ) T / 2 π {\displaystyle h(x',y')=f(x,y)T/2\pi } ,
where T {\displaystyle T} is the set period of the system of interference fringes. So, the interference-contrast image ( interferogram ) visualizes the phase silhouette of the micro-object in the form of the deviated lines, and the phase shifts can be measured just with a "ruler". This makes it possible to calculate the optical thickness of the micro-object at every point. The method allows the thickness of the micro-object to be measured if its refractive index is known, or its refractive index to be measured if the thickness is known. If the micro-object has a homogeneous refractive index distribution, it is possible to reconstruct its physical 3D shape by digital processing of the images.
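For example, the linear relation above lets a measured fringe deviation be converted into a phase shift and then into a thickness when the refractive-index difference between the object and the surrounding medium is known. The following sketch uses made-up numbers purely for illustration:

```python
# Sketch (illustrative values): fringe deviation h -> phase shift f -> thickness d.
import math

def phase_from_deviation(h: float, period: float) -> float:
    # f = 2*pi*h/T, the inverse of h = f*T/(2*pi)
    return 2.0 * math.pi * h / period

def thickness_from_phase(f: float, wavelength: float, dn: float) -> float:
    # Optical path difference (n - n0)*d = f*lambda/(2*pi)
    return f * wavelength / (2.0 * math.pi * dn)

if __name__ == "__main__":
    h, period = 0.25, 1.0      # fringe deviation of a quarter period (arbitrary units)
    wavelength = 0.633e-6      # He-Ne laser wavelength in metres (assumed)
    dn = 0.05                  # assumed refractive-index difference (n - n0)
    f = phase_from_deviation(h, period)
    d = thickness_from_phase(f, wavelength, dn)
    print(f"phase shift {f:.3f} rad, thickness {d * 1e6:.2f} micrometres")
```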
The method can be used for thick and thin, small and large micro-objects. Due to equal intensities of the interfering waves, contrast of images is maximal. The "empty" wave reconstructed from the hologram is a replica of the object wave. So, due to interference of identical waves optical aberrations of the optical system are compensated, and images are free of optical aberrations.
Both methods of holographic interference microscopy can be realized in a single device. The holographic interference microscope uses an optical microscope in a conventional off-axis holographic set-up, with a reference wave, as is usual in holography , a laser as a coherent source of light, and a hologram. The "empty" object wave produced by the objective in the absence of the micro-objects under study is recorded on the hologram using the reference wave. The developed hologram is returned to its original position, and it works as a fixed optical element of the holographic interference microscope. The images appear under simultaneous observation of the real object wave disturbed by the micro-object and the "empty" object wave reconstructed from the hologram. The period of the observed interference pattern is adjusted simply by a cross shift of the hologram from its initial position.
The main drawbacks of the HIM methods are coherent noise and the speckle structure of the images, which appear as a result of using a coherent source of light.
The methods of holographic interference microscopy were worked out and applied for phase micro-object study in the 1980s. [ 4 ] [ 5 ] [ 6 ] [ 7 ] [ 8 ]
In the late 1990s, a computer began to be used for 3D imaging of phase micro-objects by their interferograms. 3D images were obtained for the first time when investigating blood erythrocytes. [ 9 ] From the beginning of the 21st century, holographic interference microscopy has become digital holographic interference microscopy.
Digital holographic interference microscopy ( DHIM ) is a combination of the holographic interference microscopy with digital methods of image processing for 3D imaging of phase micro-objects. The holographic phase-contrast or interference-contrast images (interferograms) are recorded by a digital camera from which a computer reconstructs 3D images by using numerical algorithms .
The closest method to digital holographic interference microscopy is digital holographic microscopy . Both methods solve the same problem of 3D imaging of micro-objects, and both use a reference wave to obtain phase information. Digital holographic interference microscopy is the more "optical" method; this makes it more direct and precise, and it uses clear and simple numerical algorithms. Digital holographic microscopy is the more "digital" method; it is less direct, and its use of complicated approximate numerical algorithms does not reach the same optical accuracy.
Digital holographic interference microscopy allows 3D imaging and non-invasive quantitative study of biomedical micro-objects, such as cells of an organism. The method has been successfully used for study of 3D morphology of blood erythrocytes in different diseases; [ 10 ] [ 11 ] [ 12 ] [ 13 ] to study how ozone therapy affects the shape of erythrocytes, [ 14 ] to study alteration of 3D shape of blood erythrocytes in a patient with sickle-cell anemia when the oxygen concentration in blood was reduced, and the effect of gamma-radiation in a superlethal dose on the shape of rat erythrocytes. [ 15 ]
The method can be used for measurements of thickness of thin transparent films, crystals, and/or 3D imaging of their surfaces for quality control. [ 16 ] [ 17 ] [ 18 ] | https://en.wikipedia.org/wiki/Holographic_interference_microscopy |
The holographic principle is a property of string theories and a supposed property of quantum gravity that states that the description of a volume of space can be thought of as encoded on a lower-dimensional boundary to the region – such as a light-like boundary like a gravitational horizon . [ 1 ] [ 2 ] First proposed by Gerard 't Hooft , it was given a precise string theoretic interpretation by Leonard Susskind , [ 3 ] who combined his ideas with previous ones of 't Hooft and Charles Thorn . [ 3 ] [ 4 ] Susskind said, "The three-dimensional world of ordinary experience—the universe filled with galaxies, stars, planets, houses, boulders, and people—is a hologram, an image of reality coded on a distant two-dimensional surface." [ 5 ] As pointed out by Raphael Bousso , [ 6 ] Thorn observed in 1978 that string theory admits a lower-dimensional description in which gravity emerges from it in what would now be called a holographic way. The prime example of holography is the AdS/CFT correspondence .
The holographic principle was inspired by the Bekenstein bound of black hole thermodynamics , which conjectures that the maximum entropy in any region scales with the radius squared , rather than cubed as might be expected. In the case of a black hole , the insight was that the information content of all the objects that have fallen into the hole might be entirely contained in surface fluctuations of the event horizon . The holographic principle resolves the black hole information paradox within the framework of string theory. [ 5 ] However, there exist classical solutions to the Einstein equations that allow values of the entropy larger than those allowed by an area law (radius squared), hence in principle larger than those of a black hole. These are the so-called " Wheeler's bags of gold". The existence of such solutions conflicts with the holographic interpretation, and their effects in a quantum theory of gravity including the holographic principle are not yet fully understood. [ 7 ]
The physical universe is widely seen to be composed of "matter" and "energy". In his 2003 article published in Scientific American magazine, Jacob Bekenstein speculatively summarized a current trend started by John Archibald Wheeler , which suggests scientists may "regard the physical world as made of information , with energy and matter as incidentals". Bekenstein asks "Could we, as William Blake memorably penned, 'see a world in a grain of sand', or is that idea no more than ' poetic license '?", [ 8 ] referring to the holographic principle.
Bekenstein's topical overview "A Tale of Two Entropies" [ 9 ] describes potentially profound implications of Wheeler's trend, in part by noting a previously unexpected connection between the world of information theory and classical physics. This connection was first described shortly after the seminal 1948 papers of American applied mathematician Claude Shannon introduced today's most widely used measure of information content, now known as Shannon entropy . As an objective measure of the quantity of information, Shannon entropy has been enormously useful, as the design of all modern communications and data storage devices, from cellular phones to modems to hard disk drives and DVDs , rely on Shannon entropy.
In thermodynamics (the branch of physics dealing with heat), entropy is popularly described as a measure of the " disorder " in a physical system of matter and energy. In 1877, Austrian physicist Ludwig Boltzmann described it more precisely in terms of the number of distinct microscopic states that the particles composing a macroscopic "chunk" of matter could be in, while still "looking" like the same macroscopic "chunk". As an example, for the air in a room, its thermodynamic entropy would equal the logarithm of the count of all the ways that the individual gas molecules could be distributed in the room and all the ways they could be moving.
Shannon's efforts to find a way to quantify the information contained in, for example, a telegraph message, led him unexpectedly to a formula with the same form as Boltzmann's . In an article in the August 2003 issue of Scientific American titled "Information in the Holographic Universe", Bekenstein summarizes that "Thermodynamic entropy and Shannon entropy are conceptually equivalent: the number of arrangements that are counted by Boltzmann entropy reflects the amount of Shannon information one would need to implement any particular arrangement" of matter and energy. The only salient difference between the thermodynamic entropy of physics and Shannon's entropy of information is in the units of measure; the former is expressed in units of energy divided by temperature, the latter in essentially dimensionless "bits" of information.
The holographic principle states that the entropy of ordinary mass (not just black holes) is also proportional to surface area and not volume; that volume itself is illusory and the universe is really a hologram which is isomorphic to the information "inscribed" on the surface of its boundary. [ 10 ]
The anti-de Sitter/conformal field theory correspondence , sometimes called Maldacena duality (after ref. [ 11 ] ) or gauge/gravity duality , is a conjectured relationship between two kinds of physical theories. On one side are anti-de Sitter spaces (AdS) which are used in theories of quantum gravity , formulated in terms of string theory or M-theory . On the other side of the correspondence are conformal field theories (CFT) which are quantum field theories , including theories similar to the Yang–Mills theories that describe elementary particles.
The duality represents a major advance in understanding of string theory and quantum gravity. [ 12 ] This is because it provides a non-perturbative formulation of string theory with certain boundary conditions and because it is the most successful realization of the holographic principle.
It also provides a powerful toolkit for studying strongly coupled quantum field theories. [ 13 ] Much of the usefulness of the duality results from a strong-weak duality: when the fields of the quantum field theory are strongly interacting, the ones in the gravitational theory are weakly interacting and thus more mathematically tractable. This fact has been used to study many aspects of nuclear and condensed matter physics by translating problems in those subjects into more mathematically tractable problems in string theory.
The AdS/CFT correspondence was first proposed by Juan Maldacena in late 1997. [ 11 ] Important aspects of the correspondence were elaborated in articles by Steven Gubser , Igor Klebanov , and Alexander Markovich Polyakov , and by Edward Witten . By 2015, Maldacena's article had over 10,000 citations, becoming the most highly cited article in the field of high energy physics . [ 14 ]
An object with relatively high entropy is microscopically random, like a hot gas. A known configuration of classical fields has zero entropy: there is nothing random about electric and magnetic fields , or gravitational waves . Since black holes are exact solutions of Einstein's equations , they were thought not to have any entropy.
But Jacob Bekenstein noted that this leads to a violation of the second law of thermodynamics . If one throws a hot gas with entropy into a black hole, once it crosses the event horizon , the entropy would disappear. The random properties of the gas would no longer be seen once the black hole had absorbed the gas and settled down. One way of salvaging the second law is if black holes are in fact random objects with an entropy that increases by an amount greater than the entropy of the consumed gas.
Given a fixed volume, a black hole whose event horizon encompasses that volume should be the object with the highest amount of entropy. Otherwise, imagine something with a larger entropy; then, by throwing more mass into that something, we would obtain a black hole with less entropy, violating the second law. [ 3 ]
In a sphere of radius R , the entropy in a relativistic gas increases as the energy increases. The only known limit is gravitational ; when there is too much energy, the gas collapses into a black hole. Bekenstein used this to put an upper bound on the entropy in a region of space, and the bound was proportional to the area of the region. He concluded that the black hole entropy is directly proportional to the area of the event horizon. [ 15 ] Gravitational time dilation causes time, from the perspective of a remote observer, to stop at the event horizon. Due to the natural limit on maximum speed of motion , this prevents falling objects from crossing the event horizon no matter how close they get to it. Since any change in quantum state requires time to flow, all objects and their quantum information state stay imprinted on the event horizon. This reinforced the conclusion that, as seen by any remote observer, the black hole entropy scales with the area of the event horizon.
Stephen Hawking had shown earlier that the total horizon area of a collection of black holes always increases with time. The horizon is a boundary defined by light-like geodesics ; it is those light rays that are just barely unable to escape. If neighboring geodesics start moving toward each other they eventually collide, at which point their extension lies inside the black hole, contradicting the assumption that they generate the horizon. So the geodesics must always be moving apart, and the number of geodesics which generate the boundary, the area of the horizon, always increases. Hawking's result was called the second law of black hole thermodynamics , by analogy with the law of entropy increase .
At first, Hawking did not take the analogy too seriously. He argued that the black hole must have zero temperature, since black holes do not radiate and therefore cannot be in thermal equilibrium with any black body of positive temperature. [ 16 ] Then he discovered that black holes do radiate. When heat is added to a thermal system, the change in entropy is the increase in mass–energy divided by temperature:
dS = δMc²/T {\displaystyle dS={\frac {\delta Mc^{2}}{T}}}
(Here the term δMc² is substituted for the thermal energy added to the system, generally by non-integrable random processes, in contrast to dS, which is a function of a few "state variables" only, i.e. in conventional thermodynamics only of the Kelvin temperature T and a few additional state variables, such as the pressure.)
If black holes have a finite entropy, they should also have a finite temperature. In particular, they would come to equilibrium with a thermal gas of photons. This means that black holes would not only absorb photons, but they would also have to emit them in the right amount to maintain detailed balance .
Time-independent solutions to field equations do not emit radiation, because a time-independent background conserves energy. Based on this principle, Hawking set out to show that black holes do not radiate. But, to his surprise, a careful analysis convinced him that they do , and in just the right way to come to equilibrium with a gas at a finite temperature. Hawking's calculation fixed the constant of proportionality at 1/4; the entropy of a black hole is one quarter its horizon area in Planck units . [ 17 ]
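As an illustrative numerical sketch (not drawn from the cited articles), the snippet below evaluates the Bekenstein–Hawking entropy S = k_B A/(4 ℓ_P²), one quarter of the horizon area in Planck units, and the Hawking temperature T = ħc³/(8πGMk_B) for a black hole of one solar mass. The constants are standard SI values rounded for readability.

```python
import math

# Physical constants (SI units, rounded)
G     = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
c     = 2.998e8        # speed of light, m s^-1
hbar  = 1.055e-34      # reduced Planck constant, J s
k_B   = 1.381e-23      # Boltzmann constant, J K^-1
M_sun = 1.989e30       # solar mass, kg

def schwarzschild_radius(M):
    """Radius of the event horizon of a non-rotating, uncharged black hole."""
    return 2.0 * G * M / c**2

def horizon_area(M):
    """Area of the event horizon."""
    return 4.0 * math.pi * schwarzschild_radius(M)**2

def bekenstein_hawking_entropy(M):
    """S = k_B * A / (4 * l_P^2): one quarter of the horizon area in Planck units."""
    planck_length_sq = hbar * G / c**3
    return k_B * horizon_area(M) / (4.0 * planck_length_sq)

def hawking_temperature(M):
    """T = hbar * c^3 / (8 * pi * G * M * k_B)."""
    return hbar * c**3 / (8.0 * math.pi * G * M * k_B)

M = M_sun
print(f"Entropy     S ~ {bekenstein_hawking_entropy(M):.2e} J/K")
print(f"Temperature T ~ {hawking_temperature(M):.2e} K")
```

For a solar-mass black hole this gives an entropy of roughly 1.5 × 10⁵⁴ J/K and a temperature of about 6 × 10⁻⁸ K.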
The entropy is proportional to the logarithm of the number of microstates , the enumerated ways a system can be configured microscopically while leaving the macroscopic description unchanged. Black hole entropy is deeply puzzling – it says that the logarithm of the number of states of a black hole is proportional to the area of the horizon, not the volume in the interior. [ 10 ]
Later, Raphael Bousso came up with a covariant version of the bound based upon null sheets. [ 18 ]
Hawking's calculation suggested that the radiation which black holes emit is not related in any way to the matter that they absorb. The outgoing light rays start exactly at the edge of the black hole and spend a long time near the horizon, while the infalling matter only reaches the horizon much later. The infalling and outgoing mass/energy interact only when they cross. It is implausible that the outgoing state would be completely determined by some tiny residual scattering. [ citation needed ]
Hawking interpreted this to mean that when black holes absorb some photons in a pure state described by a wave function , they re-emit new photons in a thermal mixed state described by a density matrix . This would mean that quantum mechanics would have to be modified because, in quantum mechanics, states which are superpositions with probability amplitudes never become states which are probabilistic mixtures of different possibilities. [ note 1 ]
Troubled by this paradox, Gerard 't Hooft analyzed the emission of Hawking radiation in more detail. [ 19 ] [ self-published source? ] He noted that when Hawking radiation escapes, there is a way in which incoming particles can modify the outgoing particles. Their gravitational field would deform the horizon of the black hole, and the deformed horizon could produce different outgoing particles than the undeformed horizon. When a particle falls into a black hole, it is boosted relative to an outside observer, and its gravitational field assumes a universal form. 't Hooft showed that this field makes a logarithmic tent-pole shaped bump on the horizon of a black hole, and like a shadow, the bump is an alternative description of the particle's location and mass. For a four-dimensional spherical uncharged black hole, the deformation of the horizon is similar to the type of deformation which describes the emission and absorption of particles on a string-theory world sheet . Since the deformations on the surface are the only imprint of the incoming particle, and since these deformations would have to completely determine the outgoing particles, 't Hooft believed that the correct description of the black hole would be by some form of string theory.
This idea was made more precise by Leonard Susskind, who had also been developing holography, largely independently. Susskind argued that the oscillation of the horizon of a black hole is a complete description [ note 2 ] of both the infalling and outgoing matter, because the world-sheet theory of string theory was just such a holographic description. While short strings have zero entropy, he could identify long highly excited string states with ordinary black holes. This was a deep advance because it revealed that strings have a classical interpretation in terms of black holes.
This work showed that the black hole information paradox is resolved when quantum gravity is described in an unusual string-theoretic way, assuming the string-theoretical description is complete, unambiguous and non-redundant. [ 21 ] The space-time in quantum gravity would emerge as an effective description of the theory of oscillations of a lower-dimensional black-hole horizon, which suggests that any black hole with appropriate properties, not just strings, would serve as a basis for a description of string theory.
In 1995, Susskind, along with collaborators Tom Banks , Willy Fischler , and Stephen Shenker , presented a formulation of the new M-theory using a holographic description in terms of charged point black holes, the D0 branes of type IIA string theory . The matrix theory they proposed was first suggested as a description of 2-branes (membranes) in eleven-dimensional supergravity by Bernard de Wit , Jens Hoppe , and Hermann Nicolai . The later authors reinterpreted the same matrix models as a description of the dynamics of point black holes in particular limits. Holography allowed them to conclude that the dynamics of these black holes give a complete non-perturbative formulation of M-theory . In 1997, Juan Maldacena gave the first holographic descriptions of a higher-dimensional object, the 3+1-dimensional type IIB membrane , which resolved a long-standing problem of finding a string description which describes a gauge theory . These developments simultaneously explained how string theory is related to some forms of supersymmetric quantum field theories.
Information content is defined as the logarithm of the reciprocal of the probability that a system is in a specific microstate, and the information entropy of a system is the expected value of the system's information content. This definition of entropy is equivalent to the standard Gibbs entropy used in classical physics. Applying this definition to a physical system leads to the conclusion that, for a given energy in a given volume, there is an upper limit to the density of information (the Bekenstein bound) about the whereabouts of all the particles which compose matter in that volume. In particular, a given volume has an upper limit of information it can contain, at which it will collapse into a black hole.
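As a minimal illustration of these definitions (a toy example, not tied to black holes or to the cited sources), the snippet below computes the information content of each microstate of a hypothetical four-state system and the system's entropy as the expected value of that content; the probabilities are invented for the example.

```python
import math

# Hypothetical probabilities of a toy system's four microstates (they sum to 1)
microstate_probs = [0.5, 0.25, 0.125, 0.125]

def information_content(p):
    """Information content of a microstate: log of the reciprocal of its probability (in bits)."""
    return math.log2(1.0 / p)

# Information entropy = expected value of the information content
entropy = sum(p * information_content(p) for p in microstate_probs)

for i, p in enumerate(microstate_probs):
    print(f"microstate {i}: p = {p:.3f}, information content = {information_content(p):.3f} bits")
print(f"entropy = {entropy:.3f} bits")
```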
This suggests that matter itself cannot be subdivided infinitely many times and there must be an ultimate level of fundamental particles . As the degrees of freedom of a particle are the product of all the degrees of freedom of its sub-particles, were a particle to have infinite subdivisions into lower-level particles, the degrees of freedom of the original particle would be infinite, violating the maximal limit of entropy density. The holographic principle thus implies that the subdivisions must stop at some level.
The most rigorous realization of the holographic principle is the AdS/CFT correspondence by Juan Maldacena. However, J. David Brown and Marc Henneaux had rigorously proved in 1986 that the asymptotic symmetry of 2+1-dimensional gravity gives rise to a Virasoro algebra , whose corresponding quantum theory is a 2-dimensional conformal field theory. [ 22 ]
The Fermilab physicist Craig Hogan claims that the holographic principle would imply quantum fluctuations in spatial position [ 23 ] that would lead to apparent background noise or "holographic noise" measurable at gravitational wave detectors, in particular GEO 600 . [ 24 ] However these claims have not been widely accepted, or cited, among quantum gravity researchers and appear to be in direct conflict with string theory calculations. [ 25 ]
Analyses in 2011 of measurements of the gamma-ray burst GRB 041219A, observed in 2004 by the INTEGRAL space observatory launched in 2002 by the European Space Agency , show that Craig Hogan's noise is absent down to a scale of 10⁻⁴⁸ meters, as opposed to the scale of 10⁻³⁵ meters predicted by Hogan and the scale of 10⁻¹⁶ meters found in measurements of the GEO 600 instrument. [ 26 ] Research continued at Fermilab under Hogan as of 2013. [ 27 ]
Jacob Bekenstein claimed to have found a way to test the holographic principle with a tabletop photon experiment. [ 28 ] | https://en.wikipedia.org/wiki/Holographic_principle |
Holomastigotoides is a genus of parabasalids found in the hindgut of lower termites . It is characterized by its dense, organized arrangement of flagella on the cell surface and the presence of a mitotic spindle outside its nucleus during the majority of its cell cycle. As a symbiont of termites, Holomastigotoides is able to ingest wood and aid its host in digestion. [ 1 ] In return, Holomastigotoides is supplied with a stable habitat and steady supply of food. Holomastigotoides has notably been studied to observe the mechanisms of chromosomal pairing and segregation in haploid and diploid cells. [ 2 ]
Holomastigotoides was first described by Max Hartmann in 1910. Hartmann mistakenly identified Holomastigotoides as the female form of the parabasalid Trichonympha hertwigi , which he observed living in a species of termite, Coptotermes sp. , in Brazil. [ 3 ] [ 4 ] After initial discovery, Giovanni Battista Grassi and Anna Foa reclassified Hartmann's “male” form of T. hertwigi to Holomastigotoides in 1911, thus establishing the first use of the genus. [ 3 ] The original host species of Holomastigotoides described by Hartmann was later invalidated due to lack of description, and Coptotermes testaceus was subsequently named the type host for Holomastigotoides hertwigi as it is the only species of Coptotermes native to Brazil. [ 4 ]
The following species are recognized: [ 5 ]
Holomastigotoides is an obligate symbiont of lower termites. [ 5 ] Holomastigotoides lives in hindguts of lower termites, where it feeds on wood and assists the termite in wood digestion. This allows the termite to access and use nutrients found in wood that they would not have been able to digest otherwise. [ 1 ] Holomastigotoides can be transferred from termite to termite by way of feeding on anal secretions of other termites during juvenile stages. [ 1 ] [ 5 ]
Since discovery, Holomastigotoides species have been found in multiple termite genera, including Coptotermes, Heterotermes , Prorhinotermes , Psammotermes , and Anacanthotermes . [ 1 ] [ 4 ] [ 6 ] It is possible for multiple species of Holomastigotoides to reside in an individual host termite species. This may be a result of speciation of Holomastigotoides within a single host species or a result of possible co-speciation between Holomastigotoides and its hosts. [ 6 ]
Holomastigotoides is a cone-shaped cell. [ 1 ] One of the most notable features of Holomastigotoides is the high density of flagella on the cell surface, with some reports of up to 10 000 flagella on a single cell. [ 1 ] [ 7 ] The organization of the flagella in Holomastigotoides is attributed to the arrangement of its flagellar bands in a spiral formation around the cell. The flagellar bands originate from the anterior apex of the cell and spiral posteriorly in progressively larger spirals, wrapping around the circumference of the cell. [ 1 ] [ 7 ] An individual flagellar band is made up of many basal bodies arranged in a single row, and a single flagellum emerges from each basal body, giving Holomastigotoides its characteristic, highly flagellated appearance. The basal bodies of a flagellar band are linked by a fiber system that consists of three different fiber types. Each flagellar band is associated with an axostyle, endoplasmic reticulum, and Golgi bodies. [ 1 ] [ 7 ] The high density of external flagella helps prevent pieces of ingested wood in the termite hindgut from contacting and damaging cell surfaces. The number of flagellar bands varies based on the species of Holomastigotoides . [ 1 ] The posterior base of Holomastigotoides cells is not flagellated, and contains vesicles that are likely used for phagocytosis of wood. [ 1 ]
Near the anterior apex of the cell, the basal bodies are arranged tightly together within the flagellar bands, to such an extent that some basal bodies will overlap with each other. The fiber system associated with the basal bodies is also compressed in this apical region, and thus the fiber types are more difficult to distinguish. [ 1 ] As basal bodies become more widely spaced further away from the cell apex, the fiber types are also easier to distinguish. Basal bodies transition into flagella distally, and the transition point is indicated by a transition plate. [ 1 ] An axosome is found between the transition plate and the central microtubules of an individual flagellum. [ 1 ]
Holomastigotoides also possesses parabasal bodies, as is characteristic of parabasalids. The parabasal bodies consist of a Golgi body and a parabasal fiber, and are closely associated with the basal bodies of the flagella. Golgi bodies have been observed to overlap with parabasal fibers near the base of the nucleus. [ 1 ]
The basal bodies of a flagellar band are linked by a fiber system, which consists of the parabasal fiber, fibrous ribbon, and KI fiber. The parabasal fiber provides a surface for microtubule formation, and there is one parabasal fiber for each flagellar band. The parabasal fiber possesses a dark lining that has been suggested to be a microtubule organizing centre for the axostyle. [ 1 ] The size of parabasal fibers decreases as they extend further past the apex, to the point where they cannot be observed in the mid-region or base of the cell. [ 1 ] Parabasal fibers are densely concentrated in the cell's apex, and axostyles closely associated with the parabasal fibers also accumulate in this location. [ 1 ] The fibrous ribbon is a long sheet that looks like an accordion, and connects all the basal bodies in an individual flagellar band. An individual fibrous ribbon is as long as the length of an individual flagellar body. [ 1 ] KI fibers are named for their distinctive shape, and specifically link basal bodies in triplets. KI fibers can change shape, which also changes the distance between basal bodies and regulates how close or far they are from each other. [ 1 ] The fibrous ribbon and KI fiber are thought to have a role in controlling cell shape by moving the flagellar bands. [ 7 ] They also play roles in regulating the direction a Holomastigotoides cell moves in, coordinating the beating of flagella, and assisting in accommodating large pieces of wood during phagocytosis. [ 1 ] [ 7 ]
Axostyles can be located along the entire length of a flagellar band. [ 1 ] They can extend from the cytoplasm to the cell base and surround the nucleus. They can also be found in the cortical cytoplasm, which is the cytoplasm that falls between the plasma membrane and flagellar basal bodies. [ 1 ] Axostyles in the cortical cytoplasm extend along the entire length of the flagellar bands. Some axostyles follow the spiral arrangement of the flagellar bands and regulate the positions of the Golgi bodies and endoplasmic reticulum in the cell. [ 1 ] Notably, flagellar bands 4 and 5 are specialized, and possess extensions into the cytoplasm that contain the poles of the cell's extra-nuclear mitotic spindle. [ 1 ]
Centrin is a protein found in the cytoskeleton of eukaryotic cells, and plays a role in cell division . [ 7 ] In Holomastigotoides cells, there is a high concentration of centrin at the apex of the cell associated with the parabasal fibers, the flagellar bands, and the mitotic spindle. As these are sites where changes in cell shape and movement are initiated, this implies a possible role of centrin in controlling cell shape, direction of movement, and mitosis. [ 7 ] Holomastigotoides has been observed to change cell shape and direction of movement constantly. Intracellular calcium ion concentration affects centrin, which in turn can change flagellar band structure and basal body orientation. [ 7 ]
In the cytoplasm, food vacuoles are distributed widely and contain ingested wood. [ 1 ] Ingested wood particles and glycogen have also been observed to be freely distributed throughout the cytoplasm. [ 1 ] [ 6 ]
Instead of mitochondria , hydrogenosomes are found in Holomastigotoides cells. [ 1 ] They are responsible for producing ATP when converting pyruvate to acetate, providing Holomastigotoides cells with energy. The hydrogenosomes are located either between the plasma membrane and flagellar basal bodies or dispersed throughout the cytoplasm. [ 1 ] They are thought to accumulate near the basal bodies to support high energy demands of the flagella, and have been observed to divide independently. [ 1 ] Golgi bodies can be found on the interior side of flagellar bands, spaced evenly. Endoplasmic reticulum elements can be found between Golgi and basal bodies. [ 1 ]
The nucleus of Holomastigotoides is located in the anterior apex of the cell, and is associated with a mitotic spindle located outside of the nucleus. [ 1 ] [ 7 ] This mitotic spindle is persistent throughout most of the cell cycle, which is unusual for eukaryotic cells and characteristic of Holomastigotoides . [ 1 ] An extranuclear matrix surrounds the nuclear envelope, except at the points where it contacts the mitotic spindle. [ 7 ] Kinetochores insert into the nuclear envelope at the points of contact with the spindle poles. [ 1 ] The nucleus maintains its characteristic position at the cell's apex through contact between kinetochores and spindle poles and apical parabasal fibers. [ 1 ] [ 7 ] In many other eukaryotic cells, most of the cytoplasmic microtubules are dissociated to form the mitotic spindle. [ 1 ] However, this is not the case in Holomastigotoides cells. The mitotic spindle of Holomastigotoides is unique in that it remains in the cell during most of the cell cycle, along with the flagella. [ 1 ] Spindle poles are present to maintain spindle microtubules while the mitotic spindle is present. This is possible because cytoplasmic microtubules and mitotic microtubules have different origins in the Holomastigotoides cell. [ 1 ] The microtubules used for the cytoskeleton and mitosis are separate, and thus the cytoskeleton does not need to be disassembled for cell division to be initiated in Holomastigotoides . [ 1 ] The persistence of the extra-nuclear mitotic spindle and presence of MPM-2, a mitotic protein, indicates that Holomastigotoides spend most of their cell cycle in a suspended stage of prophase . [ 7 ]
Holomastigotoides has two forms: haploid and diploid. In the haploid form, it possesses two chromosomes. In the diploid form, it possesses four chromosomes. [ 2 ] Forms with greater ploidies have also been observed, and ploidies can vary between individuals belonging to the same species of Holomastigotoides. [ 8 ] [ 9 ]
The chromosomes of Holomastigotoides can easily be distinguished due to size, as one will be shorter than the other. [ 2 ] As the chromosomes replicate, they uncoil and appear to extend in length. After replication, the sister chromatids re-coil and shorten before separating and pairing with their homologues. [ 2 ] Chromosomes have been observed to have terminal centromeres. Crossing over has been observed, possibly to prevent complete segregation or no segregation of the chromatids. [ 2 ]
Holomastigotoides has been observed to reproduce through asexual division. During cell division, the nucleus and chromosomes elongate longitudinally. [ 10 ] A constriction forms in the middle of the nucleus until two daughter nuclei are produced, effectively splitting the chromosomes in half so that each daughter nucleus has the same chromosomes. Chromosome division has been observed to occur in a longitudinal direction, rather than transverse. [ 10 ] In Holomastigotoides , telophase has been observed in greater detail. Telophase occurs via the separation and coiling of flagellar band. [ 5 ] While this flagellar band coils, it pulls a daughter nucleus to the basal end of the cell. [ 7 ] The number of flagellar bands in a daughter cell is determined by duplication of basal bodies at the end of cell division. [ 1 ]
The species of Holomastigotoides found in the Rhinotermitidae form a monophyletic group, which suggests that Holomastigotoides has been ancestrally present in this group of termites. [ 4 ] This is supported by the observation of Holomastigotoides in Prorhinotermes simplex and other genera in the Rhinotermitidae. P. simplex branches separately from other genera in the Rhinotermitidae, implying the ancestral condition of Holomastigotoides . [ 4 ] Two Holomastigotoides species in Coptotermes testaceus branch with two Holomastigotoides species in C. formosanus , which suggests that Holomastigotoides may have speciated alongside its host termites. [ 4 ] However, the presence of multiple Holomastigotoides species in host species eliminates the possibility that Holomastigotoides strictly co-speciated with its host termites, and other mechanisms are likely involved in the phenomena observed. [ 6 ]
There is strong support for Holomastigotoides to form a monophyletic group with species found in Coptotermes . [ 5 ] | https://en.wikipedia.org/wiki/Holomastigotoides |
The Fermilab Holometer in Illinois is intended to be the world's most sensitive laser interferometer , surpassing the sensitivity of the GEO600 and LIGO systems, and theoretically able to detect holographic fluctuations in spacetime . [ 1 ] [ 2 ] [ 3 ]
According to the director of the project, the Holometer should be capable of detecting fluctuations in the light of a single attometer , meeting or exceeding the sensitivity required to detect the smallest units in the universe called Planck units . [ 1 ] Fermilab states: "Everyone is familiar these days with the blurry and pixelated images, or noisy sound transmission, associated with poor internet bandwidth. The Holometer seeks to detect the equivalent blurriness or noise in reality itself, associated with the ultimate frequency limit imposed by nature." [ 2 ]
Craig Hogan , a particle astrophysicist at Fermilab, states about the experiment, "What we’re looking for is when the lasers lose step with each other. We’re trying to detect the smallest unit in the universe. This is really great fun, a sort of old-fashioned physics experiment where you don’t know what the result will be."
Experimental physicist Hartmut Grote of the Max Planck Institute in Germany states that although he is skeptical that the apparatus will successfully detect the holographic fluctuations, if the experiment is successful "it would be a very strong impact to one of the most open questions in fundamental physics. It would be the first proof that space-time, the fabric of the universe, is quantized ." [ 1 ]
The Holometer started collecting data in 2014 to help determine whether the universe fits the holographic principle . [ 4 ] The hypothesis that holographic noise may be observed in this manner has been criticized on the grounds that the theoretical framework used to derive the noise violates Lorentz-invariance . However, Lorentz-invariance violation is already very strongly constrained, an issue that the mathematical treatment addresses only unsatisfactorily. [ 5 ]
The Fermilab Holometer has also found uses beyond studying holographic fluctuations of spacetime. It has placed constraints on the existence of high-frequency gravitational waves and primordial black holes . [ 6 ]
The Holometer will consist of two 39 m arm-length power-recycled Michelson interferometers , similar to the LIGO instruments. The interferometers will be able to be operated in two spatial configurations, termed "nested" and "back-to-back". [ 7 ] According to Hogan's hypothesis, in the nested configuration the interferometers' beamsplitters should appear to wander in step with each other (that is, the wandering should be correlated ); conversely, in the back-to-back configuration any wandering of the beamsplitters should be uncorrelated. [ 7 ] The presence or absence of the correlated wandering effect in each configuration can be determined by cross-correlating the interferometers' outputs.
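The cross-correlation logic can be sketched with a toy calculation (an illustration only, not the Holometer's actual analysis pipeline): two simulated outputs share a weak common fluctuation buried in much larger independent readout noise, and averaging their product over many samples reveals the shared component, while two fully independent outputs average toward zero.

```python
import numpy as np

rng = np.random.default_rng(0)
n_samples = 1_000_000

# Hypothetical common fluctuation shared by both simulated interferometer outputs
common = 0.1 * rng.standard_normal(n_samples)

# Each instrument adds its own independent readout noise, much larger than the shared part
output_a = common + rng.standard_normal(n_samples)
output_b = common + rng.standard_normal(n_samples)

# Zero-lag cross-correlation estimates the variance of the shared component (~0.01 here),
# analogous to the "nested" configuration in which correlated wandering is expected
print(f"shared signal present: {np.mean(output_a * output_b):+.4f}")

# With no shared component (analogous to "back-to-back"), the estimate averages toward zero
independent_a = rng.standard_normal(n_samples)
independent_b = rng.standard_normal(n_samples)
print(f"no shared signal:      {np.mean(independent_a * independent_b):+.4f}")
```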
The experiment started one year of data collection in August 2014. [ 8 ] A paper about the project titled Now Broadcasting in Planck Definition by Craig Hogan ends with the statement "We don't know what we will find." [ 9 ]
A new result of the experiment released on December 3, 2015, after a year of data collection, has ruled out Hogan's theory of a pixelated universe to a high degree of statistical significance (4.6 sigma). The study found that space-time is not quantized at the scale being measured. [ 10 ] | https://en.wikipedia.org/wiki/Holometer |
Holomovement is a theoretical concept proposed by physicist David Bohm to describe a dynamic and unbroken totality that underlies all of reality. It forms the foundation of Bohm's interpretation of quantum mechanics and his metaphysical model, particularly as articulated in his book Wholeness and the Implicate Order (1980). The holomovement integrates two key ideas: undivided wholeness and constant process. It suggests that everything in the universe is interconnected and in continual motion, with all forms and structures being temporary abstractions from this deeper, flowing unity.
Louis de Broglie introduced a formalism for quantum mechanics at the 1927 Solvay Congress which explained quantum effects in terms of underlying processes such as a hypothesized pilot wave . This was met with strong criticism, particularly by Wolfgang Pauli , which caused de Broglie to abandon this suggestion. [ 1 ] In 1952, Bohm revived the notion of a pilot wave guiding elementary particles in a way that withstood Pauli's criticism. [ 2 ] Bohm and Basil Hiley criticized a solely epistemological model, which only accounts for what can be known about physical processes, and developed this pilot-wave theory into an ontological interpretation. [ 3 ]
Bohm felt the extended version of this causal interpretation , [ 4 ] [ 5 ] particularly the notion of quantum potential , implied a "radically new notion of unbroken wholeness of the entire universe". [ 6 ] In this wholeness, which he termed the holomovement , "all things found in the unfolded, explicate order emerge from the holomovement in which they are enfolded as potentialities and ultimately they fall back into it." [ 7 ]
Bohm's dissatisfaction with mechanistic explanations in physics led him to propose a new worldview that emphasized interconnectedness and process. Influenced by his collaborations with Hiley and later F. David Peat , Bohm expanded his framework into a metaphysical model encompassing not only physical reality but also consciousness and cosmology. [ 8 ]
Bohm defines 'holomovement' as an "unknown and indescribable totality." He goes on to say:
"Thus in its totality, the holomovement is not limited in any specifiable way at all. It is not required to conform to any particular order, or to be bounded by any particular measure. Thus, the holomovement is undefinable and immeasurable."
In the first essay of Wholeness and the Implicate Order , Bohm introduces the idea of "undivided wholeness in flowing movement" as a paradigm shift from the fragmentary view of classical physics. He argues that all things are temporary abstractions from a continuous process of becoming, and that wholeness precedes the parts. [ 10 ] Bohm's notion has been interpreted by scholars as a shift toward a process-based ontology grounded in quantum realism. [ 11 ]
Bohm distinguishes between two orders of reality: the implicate (enfolded) order and the explicate (unfolded) order. The implicate order represents the hidden, generative structure of reality from which observable phenomena emerge. The holomovement is the ground from which the implicate and explicate orders arise, and into which they return. [ 10 ]
Echoing the philosophy of Heraclitus , Bohm emphasizes that all reality is process: "All is flux." He contrasts this with the mechanistic view of isolated particles and static laws, proposing instead that process and movement are the primary realities. [ 10 ] Bohm's emphasis on flux and interrelation has been compared to classical Chinese thought, including the processual logic of the Yijing (Book of Changes), which models reality in terms of instability and transformation. [ 12 ]
Bohm proposed, in a metaphysical extension of his quantum theory, that life and consciousness might emerge from the same implicate order that underlies physical processes. [ 13 ] This view has been taken up in transpersonal psychology and speculative cosmology , but remains outside mainstream neuroscience. [ 14 ]
Recent interpretations in integrative biology have extended the holomovement concept to propose models of "omni-local consciousness," suggesting that consciousness may be a fundamental and distributed property of the holofield. [ 15 ]
The holomovement has also been invoked in spiritual and activist communities as a metaphor for collective awakening and planetary coherence, sometimes framing it as a foundation for a "new story" in sociocultural evolution. [ 16 ]
Theckedath, in his review [ 17 ] of The Undivided Universe: An Ontological Interpretation of Quantum Theory by D. Bohm, B. J. Hiley, criticized their characterization of holomovement as having two "poles", one mental and one physical. According to Theckedath, the mental pole adds an element of mysticism to the holomovement concept and separates holomovement from objective matter, creating a "notion of motion without matter".
Paavo Pylkkänen and Gordon Globus have explored its potential relevance to mind-matter interactions and holistic neuroscience. [ 18 ] In the field of religious studies, Wouter Hanegraaff has classified the holomovement as a "scientific myth" characteristic of New Age metaphysics. [ 14 ] Nonetheless, it has inspired dialogues in fields such as systems theory , consciousness studies , and transpersonal psychology . [ 19 ]
The holomovement has also been cited in speculative ethical frameworks concerning posthuman and extraterrestrial intelligences, where it serves as a basis for modeling universal interconnectivity and moral coherence. [ 20 ]
Theologian Kevin J. Sharpe has proposed that Bohm's holomovement provides a viable framework for a non-dualistic metaphysical theology that preserves transcendence while allowing for dynamic immanence. [ 21 ] Kabbalist and science scholar Jeffrey Gordon has argued that Bohm's concept of holomovement resonates with kabbalistic notions of divine unfolding, reflecting broader efforts to align mystical cosmologies with emerging scientific paradigms. [ 22 ] Bohm's focus on vibratory enfoldment has also been compared to tantric meditative models in which primordial sound and vibration structure the unfolding of reality. [ 23 ] | https://en.wikipedia.org/wiki/Holomovement |
Holonomic brain theory is a branch of neuroscience investigating the idea that consciousness is formed by quantum effects in or between brain cells. Holonomic refers to representations in a Hilbert phase space defined by both spectral and space-time coordinates. [ 1 ] Holonomic brain theory is opposed [ citation needed ] by traditional neuroscience, which investigates the brain's behavior by looking at patterns of neurons and the surrounding chemistry.
This specific theory of quantum consciousness was developed by neuroscientist Karl Pribram initially in collaboration with physicist David Bohm building on the initial theories of holograms originally formulated by Dennis Gabor . It describes human cognition by modeling the brain as a holographic storage network. [ 2 ] [ 3 ] Pribram suggests these processes involve electric oscillations in the brain's fine-fibered dendritic webs, which are different from the more commonly known action potentials involving axons and synapses. [ 4 ] [ 5 ] [ 6 ] These oscillations are waves and create wave interference patterns in which memory is encoded naturally, and the wave function may be analyzed by a Fourier transform . [ 4 ] [ 5 ] [ 6 ] [ 7 ] [ 8 ]
Gabor, Pribram and others noted the similarities between these brain processes and the storage of information in a hologram, which can also be analyzed with a Fourier transform. [ 2 ] [ 9 ] In a hologram, any part of the hologram with sufficient size contains the whole of the stored information. In this theory, a piece of a long-term memory is similarly distributed over a dendritic arbor so that each part of the dendritic network contains all the information stored over the entire network. [ 2 ] [ 9 ] [ 10 ] This model allows for important aspects of human consciousness, including the fast associative memory that allows for connections between different pieces of stored information and the non-locality of memory storage (a specific memory is not stored in a specific location, i.e. a certain cluster of neurons). [ 2 ] [ 11 ] [ 12 ]
In 1946 Dennis Gabor invented the hologram mathematically, describing a system where an image can be reconstructed through information that is stored throughout the hologram. [ 4 ] He demonstrated that the information pattern of a three-dimensional object can be encoded in a beam of light, which is more-or-less two-dimensional. Gabor also developed a mathematical model for demonstrating a holographic associative memory . [ 13 ] One of Gabor's colleagues, Pieter Jacobus Van Heerden, also developed a related holographic mathematical memory model in 1963. [ 14 ] [ 15 ] [ 16 ] This model contained the key aspect of non-locality, which became important years later when, in 1967, experiments by both Braitenberg and Kirschfield showed that exact localization of memory in the brain was false. [ 10 ]
Karl Pribram had worked with psychologist Karl Lashley on Lashley's engram experiments, which used lesions to determine the exact location of specific memories in primate brains. [ 2 ] Lashley made small lesions in the brains and found that these had little effect on memory. On the other hand, Pribram removed large areas of cortex, leading to multiple serious deficits in memory and cognitive function. Memories were not stored in a single neuron or exact location, but were spread over the entirety of a neural network. Lashley suggested that brain interference patterns could play a role in perception, but was unsure how such patterns might be generated in the brain or how they would lead to brain function. [ 17 ]
Several years later an article by neurophysiologist John Eccles described how a wave could be generated at the branching ends of pre-synaptic axons. Multiple of these waves could create interference patterns. Soon after, Emmett Leith was successful in storing visual images through the interference patterns of laser beams, inspired by Gabor's previous use of Fourier transformations to store information within a hologram. [ 18 ] After studying the work of Eccles and that of Leith, [ 17 ] Pribram put forward the hypothesis that memory might take the form of interference patterns that resemble laser-produced holograms. [ 19 ] In 1980, physicist David Bohm presented his ideas of holomovement and Implicate and explicate order . [ 20 ] Pribram became aware of Bohm's work in 1975 [ 21 ] and realized that, since a hologram could store information within patterns of interference and then recreate that information when activated, it could serve as a strong metaphor for brain function. [ 17 ] Pribram was further encouraged in this line of speculation by the fact that neurophysiologists Russell and Karen DeValois [ 22 ] together established "the spatial frequency encoding displayed by cells of the visual cortex was best described as a Fourier transform of the input pattern." [ 23 ]
A main characteristic of a hologram is that every part of the stored information is distributed over the entire hologram. [ 3 ] Both processes of storage and retrieval are carried out in a way described by Fourier transformation equations. [ 24 ] As long as a part of the hologram is large enough to contain the interference pattern , that part can recreate the entirety of the stored image, but the image may have unwanted changes, called noise . [ 9 ]
An analogy to this is the broadcasting region of a radio antenna. In each smaller individual location within the entire area it is possible to access every channel, similar to how the entirety of the information of a hologram is contained within a part. [ 4 ] Another analogy of a hologram is the way sunlight illuminates objects in the visual field of an observer. It doesn't matter how narrow the beam of sunlight is. The beam always contains all the information of the object, and when conjugated by a lens of a camera or the eyeball, produces the same full three-dimensional image. The Fourier transform formula converts spatial forms to spatial wave frequencies and vice versa, as all objects are in essence vibratory structures. Different types of lenses, acting similarly to optic lenses , can alter the frequency nature of information that is transferred.
This non-locality of information storage within the hologram is crucial, because even if most parts are damaged, the entirety will be contained within even a single remaining part of sufficient size. Pribram and others noted the similarities between an optical hologram and memory storage in the human brain. According to the holonomic brain theory, memories are stored within certain general regions, but stored non-locally within those regions. [ 25 ] This allows the brain to maintain function and memory even when it is damaged. [ 3 ] [ 24 ] [ 26 ] It is only when there exist no parts big enough to contain the whole that the memory is lost. [ 4 ] This can also explain why some children retain normal intelligence when large portions of their brain—in some cases, half—are removed. It can also explain why memory is not lost when the brain is sliced in different cross-sections. [5]
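A minimal numerical sketch of this "whole in every part" behaviour (an analogy constructed for illustration, not Pribram's actual model or data): a pattern with three localized features is stored as its Fourier transform, only a small piece of the stored coefficients is kept, and the inverse transform still returns the entire pattern, blurred but with every feature in place.

```python
import numpy as np

# A "memory": three localized features at different positions
x = np.arange(1024)
memory = (np.exp(-0.5 * ((x - 200) / 8.0) ** 2)
          + np.exp(-0.5 * ((x - 520) / 8.0) ** 2)
          + np.exp(-0.5 * ((x - 830) / 8.0) ** 2))

# "Storage": the Fourier transform spreads every sample of the memory over all coefficients
hologram = np.fft.rfft(memory)

# Keep only a small piece of the store (the lowest 32 of 513 coefficients)
fragment = np.zeros_like(hologram)
fragment[:32] = hologram[:32]

# "Retrieval" from the fragment: the whole pattern returns, blurred but complete
reconstruction = np.fft.irfft(fragment, n=memory.size)

recovered = [int(np.argmax(reconstruction[i:i + 341])) + i for i in (0, 341, 682)]
print("original feature positions: 200, 520, 830")
print("recovered (approximate):   ", recovered)
print("correlation with original:  %.2f" % np.corrcoef(memory, reconstruction)[0, 1])
```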
Pribram proposed that neural holograms were formed by the diffraction patterns of oscillating electric waves within the cortex. [ 26 ] Representation occurs as a dynamical transformation in a distributed network of dendritic microprocesses. [ 27 ] It is important to note the difference between the idea of a holonomic brain and a holographic one. Pribram does not suggest that the brain functions as a single hologram. Rather, the waves within smaller neural networks create localized holograms within the larger workings of the brain. [ 6 ] This patch holography is called holonomy or windowed Fourier transformations.
A holographic model can also account for other features of memory that more traditional models cannot. The Hopfield memory model has an early memory saturation point before which memory retrieval drastically slows and becomes unreliable. [ 24 ] On the other hand, holographic memory models have much larger theoretical storage capacities. Holographic models can also demonstrate associative memory, store complex connections between different concepts, and resemble forgetting through " lossy storage ". [ 13 ]
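The associative-recall property can be sketched with a convolution–correlation memory in the spirit of Gabor's holographic scheme (an illustrative toy, of the kind known as a holographic reduced representation, and not Pribram's own formulation): several pairs of random vectors are bound by circular convolution and superimposed in a single trace, and presenting one member of a pair approximately retrieves its partner while the other stored items stay near zero.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 2048  # dimensionality of each stored item

def random_item():
    """Random vector with elements ~ N(0, 1/n), the usual normalization for this scheme."""
    return rng.normal(0.0, 1.0 / np.sqrt(n), n)

def bind(a, b):
    """Circular convolution, computed in the Fourier domain."""
    return np.fft.irfft(np.fft.rfft(a) * np.fft.rfft(b), n)

def unbind(trace, cue):
    """Circular correlation with the cue approximately inverts the binding."""
    return np.fft.irfft(np.fft.rfft(trace) * np.conj(np.fft.rfft(cue)), n)

# Store several associations superimposed in a single memory trace
pairs = [(random_item(), random_item()) for _ in range(5)]
trace = sum(bind(a, b) for a, b in pairs)

# Cue with the first member of pair 0 and compare the result with every stored partner
retrieved = unbind(trace, pairs[0][0])
similarities = [float(np.dot(retrieved, b) / (np.linalg.norm(retrieved) * np.linalg.norm(b)))
                for _, b in pairs]
print("cosine similarity to each stored partner:", np.round(similarities, 2))
# The partner of the cue (index 0) stands out; the others remain close to zero.
```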
In classic brain theory the summation of electrical inputs to the dendrites and soma (cell body) of a neuron either inhibit the neuron or excite it and set off an action potential down the axon to where it synapses with the next neuron. However, this fails to account for different varieties of synapses beyond the traditional axodendritic (axon to dendrite). There is evidence for the existence of other kinds of synapses, including serial synapses and those between dendrites and soma and between different dendrites. [ 5 ] Many synaptic locations are functionally bipolar, meaning they can both send and receive impulses from each neuron, distributing input and output over the entire group of dendrites. [ 5 ]
Processes in this dendritic arbor, the network of teledendrons and dendrites, occur due to the oscillations of polarizations in the membrane of the fine-fibered dendrites, not due to the propagated nerve impulses associated with action potentials. [ 4 ] Pribram posits that the length of the delay of an input signal in the dendritic arbor before it travels down the axon is related to mental awareness. [ 5 ] [ 28 ] The shorter the delay the more unconscious the action, while a longer delay indicates a longer period of awareness. A study by David Alkon showed that after unconscious Pavlovian conditioning there was a proportionally greater reduction in the volume of the dendritic arbor, akin to synaptic elimination when experience increases the automaticity of an action. [ 5 ] Pribram and others theorize that, while unconscious behavior is mediated by impulses through nerve circuits, conscious behavior arises from microprocesses in the dendritic arbor. [ 4 ]
At the same time, the dendritic network is extremely complex, able to receive 100,000 to 200,000 inputs in a single tree, due to the large amount of branching and the many dendritic spines protruding from the branches. [ 5 ] Furthermore, synaptic hyperpolarization and depolarization remains somewhat isolated due to the resistance from the narrow dendritic spine stalk, allowing a polarization to spread without much interruption to the other spines. This spread is further aided intracellularly by the microtubules and extracellularly by glial cells . These polarizations act as waves in the synaptodendritic network, and the existence of multiple waves at once gives rise to interference patterns. [ 5 ]
Pribram suggests that there are two layers of cortical processing: a surface structure of separated and localized neural circuits and a deep structure of the dendritic arborization that binds the surface structure together. The deep structure contains distributed memory, while the surface structure acts as the retrieval mechanism. [ 4 ] Binding occurs through the temporal synchronization of the oscillating polarizations in the synaptodendritic web. It had been thought that binding only occurred when there was no phase lead or lag present, but a study by Saul and Humphrey found that cells in the lateral geniculate nucleus do in fact produce these. [ 5 ] Here phase lead and lag act to enhance sensory discrimination, acting as a frame to capture important features. [ 5 ] These filters are also similar to the lenses necessary for holographic functioning.
Pribram notes that holographic memories show large capacities, parallel processing and content addressability for rapid recognition, associative storage for perceptual completion and for associative recall. [ 29 ] [ 30 ] In systems endowed with memory storage, these interactions therefore lead to progressively more self-determination. [ 27 ]
While Pribram originally developed the holonomic brain theory as an analogy for certain brain processes, several papers (including some more recent ones by Pribram himself) have proposed that the similarity between hologram and certain brain functions is more than just metaphorical, but actually structural. [ 11 ] [ 28 ] [ 31 ] Others still maintain that the relationship is only analogical. [ 32 ] Several studies have shown that the same series of operations used in holographic memory models are performed in certain processes concerning temporal memory and optomotor responses . This indicates at least the possibility of the existence of neurological structures with certain holonomic properties. [ 10 ] Other studies have demonstrated the possibility that biophoton emission (biological electrical signals that are converted to weak electromagnetic waves in the visible range) may be a necessary condition for the electric activity in the brain to store holographic images. [ 11 ] [ 31 ] These may play a role in cell communication and certain brain processes including sleep, but further studies are needed to strengthen current ones. [ 28 ] Other studies have shown the correlation between more advanced cognitive function and homeothermy . Taking holographic brain models into account, this temperature regulation would reduce distortion of the signal waves, an important condition for holographic systems. [ 11 ] See: Computation approach in terms of holographic codes and processing. [ 33 ]
Pribram's holonomic model of brain function did not receive widespread attention at the time, but other quantum models have been developed since, including brain dynamics by Jibu & Yasue and Vitiello's dissipative quantum brain dynamics. Though not directly related to the holonomic model, they continue to move beyond approaches based solely in classic brain theory. [ 3 ] [ 11 ]
In 1969 scientists D. Willshaw, O. P. Buneman and H. Longuet-Higgins proposed an alternative, non-holographic model that fulfilled many of the same requirements as Gabor's original holographic model. The Gabor model did not explain how the brain could use Fourier analysis on incoming signals or how it would deal with the low signal-to-noise ratio in reconstructed memories. Longuet-Higgins's correlograph model built on the idea that any system could perform the same functions as a Fourier holograph if it could correlate pairs of patterns. It uses minute pinholes that do not produce diffraction patterns to create a similar reconstruction as that in Fourier holography. [ 3 ] Like a hologram, a discrete correlograph can recognize displaced patterns and store information in a parallel and non-local way so it usually will not be destroyed by localized damage. [ 34 ] They then expanded the model beyond the correlograph to an associative net where the points become parallel lines arranged in a grid. Horizontal lines represent axons of input neurons while vertical lines represent output neurons. Each intersection represents a modifiable synapse. Though this cannot recognize displaced patterns, it has a greater potential storage capacity. This was not necessarily meant to show how the brain is organized, but instead to show the possibility of improving on Gabor's original model. [ 34 ] One property of the associative net that makes it attractive as a neural model is that good retrieval can be obtained even when some of the storage elements are damaged or when some of the components of the address are incorrect. [ 35 ] P. Van Heerden countered this model by demonstrating mathematically that the signal-to-noise ratio of a hologram could reach 50% of ideal. He also used a model with a 2D neural hologram network for fast searching imposed upon a 3D network for large storage capacity. A key quality of this model was its flexibility to change the orientation and fix distortions of stored information, which is important for our ability to recognize an object as the same entity from different angles and positions, something the correlograph and association network models lack. [ 16 ]
In classical mechanics , holonomic constraints are relations between the position variables (and possibly time) [ 1 ] that can be expressed in the following form:
f ( u 1 , u 2 , u 3 , … , u n , t ) = 0 {\displaystyle f(u_{1},u_{2},u_{3},\ldots ,u_{n},t)=0}
where { u 1 , u 2 , u 3 , … , u n } {\displaystyle \{u_{1},u_{2},u_{3},\ldots ,u_{n}\}} are n generalized coordinates that describe the system (in unconstrained configuration space ). For example, the motion of a particle constrained to lie on the surface of a sphere is subject to a holonomic constraint, but if the particle is able to fall off the sphere under the influence of gravity, the constraint becomes non-holonomic. For the first case, the holonomic constraint may be given by the equation
r 2 − a 2 = 0 {\displaystyle r^{2}-a^{2}=0}
where r {\displaystyle r} is the distance from the centre of a sphere of radius a {\displaystyle a} , whereas the second non-holonomic case may be given by
r 2 − a 2 ≥ 0 {\displaystyle r^{2}-a^{2}\geq 0}
Velocity-dependent constraints (also called semi-holonomic constraints) [ 2 ] such as
f ( u 1 , u 2 , … , u n , u ˙ 1 , u ˙ 2 , … , u ˙ n , t ) = 0 {\displaystyle f(u_{1},u_{2},\ldots ,u_{n},{\dot {u}}_{1},{\dot {u}}_{2},\ldots ,{\dot {u}}_{n},t)=0}
are not usually holonomic. [ citation needed ]
In classical mechanics a system may be defined as holonomic if all constraints of the system are holonomic. For a constraint to be holonomic it must be expressible as a function : f ( u 1 , u 2 , u 3 , … , u n , t ) = 0 , {\displaystyle f(u_{1},\ u_{2},\ u_{3},\ \ldots ,\ u_{n},\ t)=0,\,} i.e. a holonomic constraint depends only on the coordinates u j {\displaystyle u_{j}} and maybe time t {\displaystyle t} . [ 1 ] It does not depend on the velocities or any higher-order derivative with respect to t . A constraint that cannot be expressed in the form shown above is a nonholonomic constraint .
As described above, a holonomic system is (simply speaking) a system in which one can deduce the state of a system by knowing only the change of positions of the components of the system over time, but not needing to know the velocity or in what order the components moved relative to each other. In contrast, a nonholonomic system is often a system where the velocities of the components over time must be known to be able to determine the change of state of the system, or a system where a moving part is not able to be bound to a constraint surface, real or imaginary. Examples of holonomic systems are gantry cranes, pendulums, and robotic arms. Examples of nonholonomic systems are Segways , unicycles, and automobiles.
The configuration space u {\displaystyle \mathbf {u} } lists the displacement of the components of the system, one for each degree of freedom . A system that can be described using a configuration space is called scleronomic .
u = [ u 1 u 2 … u n ] T {\displaystyle \mathbf {u} ={\begin{bmatrix}u_{1}&u_{2}&\ldots &u_{n}\end{bmatrix}}^{\mathrm {T} }}
The event space is identical to the configuration space except for the addition of a variable t {\displaystyle t} to represent the change in the system over time (if needed to describe the system). A system that must be described using an event space, instead of only a configuration space, is called rheonomic . Many systems can be described either scleronomically or rheonomically. For example, the total allowable motion of a pendulum can be described with a scleronomic constraint, but the motion over time of a pendulum must be described with a rheonomic constraint.
u = [ u 1 u 2 … u n t ] T {\displaystyle \mathbf {u} ={\begin{bmatrix}u_{1}&u_{2}&\ldots &u_{n}&t\end{bmatrix}}^{\mathrm {T} }}
The state space q {\displaystyle \mathbf {q} } is the configuration space, plus terms describing the velocity of each term in the configuration space.
q = [ u u ˙ ] = [ u 1 … u n u ˙ 1 … u ˙ n ] T {\displaystyle \mathbf {q} ={\begin{bmatrix}\mathbf {u} \\\mathbf {\dot {u}} \end{bmatrix}}={\begin{bmatrix}u_{1}&\ldots &u_{n}&{\dot {u}}_{1}&\ldots &{\dot {u}}_{n}\end{bmatrix}}^{\mathrm {T} }}
The state-time space adds time t {\displaystyle t} .
q = [ u u ˙ ] = [ u 1 … u n t u ˙ 1 … u ˙ n ] T {\displaystyle \mathbf {q} ={\begin{bmatrix}\mathbf {u} \\\mathbf {\dot {u}} \end{bmatrix}}={\begin{bmatrix}u_{1}&\ldots &u_{n}&t&{\dot {u}}_{1}&\ldots &{\dot {u}}_{n}\end{bmatrix}}^{\mathrm {T} }}
As shown on the right, a gantry crane is an overhead crane that is able to move its hook in 3 axes as indicated by the arrows. Intuitively, we can deduce that the crane should be a holonomic system as, for a given movement of its components, it doesn't matter what order or velocity the components move: as long as the total displacement of each component from a given starting condition is the same, all parts and the system as a whole will end up in the same state. Mathematically we can prove this as such:
We can define the configuration space of the system as: u = [ x y z ] {\displaystyle \mathbf {u} ={\begin{bmatrix}x\\y\\z\end{bmatrix}}}
We can say that the deflections of the components of the crane from their "zero" positions are B {\displaystyle B} , G {\displaystyle G} , and O {\displaystyle O} , for the blue, green, and orange components, respectively. The orientation and placement of the coordinate system do not matter in whether a system is holonomic, but in this example the components happen to move parallel to its axes. If the origin of the coordinate system is at the back-bottom-left of the crane, then we can write the position constraint equation as: ( x − B ) + ( y − G ) + ( z − ( h − O ) ) = 0 {\displaystyle (x-B)+(y-G)+(z-(h-O))=0}
Where h {\displaystyle h} is the height of the crane. Optionally, we may simplify to the standard form where all constants are placed after the variables: x + y + z − ( B + G + h − O ) = 0 {\displaystyle x+y+z-(B+G+h-O)=0}
Because we have derived a constraint equation in holonomic form (specifically, our constraint equation has the form f ( x , y , z ) = 0 {\displaystyle f(x,y,z)=0} where { x , y , z } ∈ u {\displaystyle \{x,y,z\}\in \mathbf {u} } ), we can see that this system must be holonomic.
As shown on the right, a simple pendulum is a system composed of a weight and a string. The string is attached at the top end to a pivot and at the bottom end to a weight. Because the string is inextensible, its length is a constant. This system is holonomic because it obeys the holonomic constraint
x 2 + y 2 − L 2 = 0 , {\displaystyle {x^{2}+y^{2}}-L^{2}=0,}
where ( x , y ) {\displaystyle (x,\ y)} is the position of the weight and L {\displaystyle L} is length of the string.
The particles of a rigid body obey the holonomic constraint
( r i − r j ) 2 − L i j 2 = 0 , {\displaystyle (\mathbf {r} _{i}-\mathbf {r} _{j})^{2}-L_{ij}^{2}=0,\,}
where r i {\displaystyle \mathbf {r} _{i}} , r j {\displaystyle \mathbf {r} _{j}} are respectively the positions of particles P i {\displaystyle P_{i}} and P j {\displaystyle P_{j}} , and L i j {\displaystyle L_{ij}} is the distance between them. If a given system is holonomic, rigidly attaching additional parts to components of the system in question cannot make it non-holonomic, assuming that the degrees of freedom are not reduced (in other words, assuming the configuration space is unchanged).
Consider the following differential form of a constraint:
∑ j A i j d u j + A i d t = 0 {\displaystyle \sum _{j}\ A_{ij}\,du_{j}+A_{i}\,dt=0}
where A i j , A i {\displaystyle A_{ij},A_{i}} are the coefficients of the differentials d u j , d t {\displaystyle du_{j},dt} for the i th constraint equation. This form is called the Pfaffian form or the differential form .
If the differential form is integrable, i.e., if there is a function f i ( u 1 , u 2 , u 3 , … , u n , t ) = 0 {\displaystyle f_{i}(u_{1},\ u_{2},\ u_{3},\ \ldots ,\ u_{n},\ t)=0} satisfying the equality
d f i = ∑ j A i j d u j + A i d t = 0 {\displaystyle df_{i}=\sum _{j}\ A_{ij}\,du_{j}+A_{i}\,dt=0}
then this constraint is a holonomic constraint; otherwise, it is nonholonomic. Therefore, all holonomic and some nonholonomic constraints can be expressed using the differential form. Examples of nonholonomic constraints that cannot be expressed this way are those that are dependent on generalized velocities. [ clarification needed ] With a constraint equation in Pfaffian form, whether the constraint is holonomic or nonholonomic depends on whether the Pfaffian form is integrable. See Universal test for holonomic constraints below for a description of a test to verify the integrability (or lack of) of a Pfaffian form constraint.
When the constraint equation of a system is written in Pfaffian constraint form, there exists a mathematical test to determine whether the system is holonomic.
For a constraint equation, or i {\displaystyle i} sets of constraint equations (note that variable(s) representing time can be included, as from above A i ∈ A i j {\displaystyle A_{i}\in A_{ij}} and d t ∈ d u j {\displaystyle \,dt\in du_{j}} in the following form): ∑ j n A i j d u j = 0 ; {\displaystyle \sum _{j}^{n}\ A_{ij}\,du_{j}=0;\,}
we can use the test equation: A γ ( ∂ A β ∂ u α − ∂ A α ∂ u β ) + A β ( ∂ A α ∂ u γ − ∂ A γ ∂ u α ) + A α ( ∂ A γ ∂ u β − ∂ A β ∂ u γ ) = 0 {\displaystyle A_{\gamma }\left({\frac {\partial A_{\beta }}{\partial u_{\alpha }}}-{\frac {\partial A_{\alpha }}{\partial u_{\beta }}}\right)+A_{\beta }\left({\frac {\partial A_{\alpha }}{\partial u_{\gamma }}}-{\frac {\partial A_{\gamma }}{\partial u_{\alpha }}}\right)+A_{\alpha }\left({\frac {\partial A_{\gamma }}{\partial u_{\beta }}}-{\frac {\partial A_{\beta }}{\partial u_{\gamma }}}\right)=0} where α , β , γ = 1 , 2 , 3 … n {\displaystyle \alpha ,\beta ,\gamma =1,2,3\ldots n} in ( n 3 ) = n ( n − 1 ) ( n − 2 ) 6 {\textstyle {\binom {n}{3}}={\frac {n(n-1)(n-2)}{6}}} combinations of test equations per constraint equation, for all i {\displaystyle i} sets of constraint equations.
In other words, a system of three variables would have to be tested once with one test equation with the terms α , β , γ {\displaystyle \alpha ,\beta ,\gamma } being terms 1 , 2 , 3 {\displaystyle 1,2,3} in the constraint equation (in any order), but to test a system of four variables the test would have to be performed up to four times with four different test equations, with the terms α , β , γ {\displaystyle \alpha ,\beta ,\gamma } being terms 1 , 2 , 3 {\displaystyle 1,2,3} , 1 , 2 , 4 {\displaystyle 1,2,4} , 1 , 3 , 4 {\displaystyle 1,3,4} , and 2 , 3 , 4 {\displaystyle 2,3,4} in the constraint equation (each in any order) in four different tests. For a system of five variables, ten tests would have to be performed on a holonomic system to verify that fact, and for a system of five variables with three sets of constraint equations, thirty tests (assuming a simplification like a change-of-variable could not be performed to reduce that number). For this reason, it is advisable when using this method on systems of more than three variables to use common sense as to whether the system in question is holonomic, and only pursue testing if the system likely is not. Additionally, it is likewise best to use mathematical intuition to try to predict which test would fail first and begin with that one, skipping tests at first that seem likely to succeed.
If every test equation is true for the entire set of combinations for all constraint equations, the system is holonomic. If it is untrue for even one test combination, the system is nonholonomic.
Consider this dynamical system described by a constraint equation in Pfaffian form. cos ( θ ) d x + sin ( θ ) d y + [ y cos ( θ ) − x sin ( θ ) ] d θ = 0 {\displaystyle \cos(\theta )dx+\sin(\theta )dy+\left[y\cos(\theta )-x\sin(\theta )\right]d\theta =0}
The configuration space, by inspection, is u = [ x y θ ] T {\displaystyle \mathbf {u} ={\begin{bmatrix}x&y&\theta \end{bmatrix}}^{\mathrm {T} }} . Because there are only three terms in the configuration space, there will be only one test equation needed.
We can organize the terms of the constraint equation as such, in preparation for substitution: A α = cos θ {\displaystyle A_{\alpha }=\cos \theta } A β = sin θ {\displaystyle A_{\beta }=\sin \theta } A γ = y cos θ − x sin θ {\displaystyle A_{\gamma }=y\cos \theta -x\sin \theta } u α = d x {\displaystyle u_{\alpha }=dx} u β = d y {\displaystyle u_{\beta }=dy} u γ = d θ {\displaystyle u_{\gamma }=d\theta }
Substituting the terms, our test equation becomes: ( y cos θ − x sin θ ) [ ∂ ∂ x sin θ − ∂ ∂ y cos θ ] + sin θ [ ∂ ∂ θ cos θ − ∂ ∂ x ( y cos θ − x sin θ ) ] + cos θ [ ∂ ∂ y ( y cos θ − x sin θ ) − ∂ ∂ θ sin θ ] = 0 {\displaystyle \left(y\cos \theta -x\sin \theta \right)\left[{\frac {\partial }{\partial x}}\sin \theta -{\frac {\partial }{\partial y}}\cos \theta \right]+\sin \theta \left[{\frac {\partial }{\partial \theta }}\cos \theta -{\frac {\partial }{\partial x}}(y\cos \theta -x\sin \theta )\right]+\cos \theta \left[{\frac {\partial }{\partial y}}(y\cos \theta -x\sin \theta )-{\frac {\partial }{\partial \theta }}\sin \theta \right]=0}
After calculating all partial derivatives, we get: ( y cos θ − x sin θ ) [ 0 − 0 ] + sin θ [ − sin θ − ( − sin θ ) ] + cos θ [ cos θ − cos θ ] = 0 {\displaystyle (y\cos \theta -x\sin \theta )\left[0-0\right]+\sin \theta \left[-\sin \theta -(-\sin \theta )\right]+\cos \theta \left[\cos \theta -\cos \theta \right]=0}
Simplifying, we find that: 0 = 0 {\displaystyle 0=0} We see that our test equation is true, and thus, the system must be holonomic.
We have finished our test, but now knowing that the system is holonomic, we may wish to find the holonomic constraint equation. We can attempt to find it by integrating each term of the Pfaffian form and attempting to unify them into one equation, as such: ∫ cos θ d x = x cos θ + f ( y , θ ) {\displaystyle \int \cos \theta \,dx=x\cos \theta +f(y,\theta )} ∫ sin θ d y = y sin θ + f ( x , θ ) {\displaystyle \int \sin \theta \,dy=y\sin \theta +f(x,\theta )} ∫ ( y cos θ − x sin θ ) d θ = y sin θ + x cos θ + f ( x , y ) {\displaystyle \int \left(y\cos \theta -x\sin \theta \right)d\theta =y\sin \theta +x\cos \theta +f(x,y)}
It's easy to see that we can combine the results of our integrations to find the holonomic constraint equation: y sin θ + x cos θ + C = 0 {\displaystyle y\sin \theta +x\cos \theta +C=0} where C is the constant of integration.
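The test and the integration above can be checked mechanically with a computer algebra system. The following is a minimal sketch, not part of the original article, using Python with SymPy; the function name pfaffian_test and the second, contrasting example (the classic knife-edge constraint sin θ dx − cos θ dy = 0, which is nonholonomic) are illustrative choices rather than content from the source.

```python
from itertools import combinations

import sympy as sp

def pfaffian_test(coeffs, variables):
    """Universal test for a single Pfaffian constraint sum_j A_j du_j = 0.

    Returns True (integrable, hence holonomic) if every test expression
        A_c*(dA_b/du_a - dA_a/du_b) + A_b*(dA_a/du_c - dA_c/du_a) + A_a*(dA_c/du_b - dA_b/du_c)
    vanishes identically for every combination of three variables."""
    for (Aa, ua), (Ab, ub), (Ac, uc) in combinations(list(zip(coeffs, variables)), 3):
        expr = (Ac * (sp.diff(Ab, ua) - sp.diff(Aa, ub))
                + Ab * (sp.diff(Aa, uc) - sp.diff(Ac, ua))
                + Aa * (sp.diff(Ac, ub) - sp.diff(Ab, uc)))
        if sp.simplify(expr) != 0:
            return False
    return True

x, y, theta = sp.symbols('x y theta')

# Constraint from the example above:
# cos(theta) dx + sin(theta) dy + (y cos(theta) - x sin(theta)) dtheta = 0
example = [sp.cos(theta), sp.sin(theta), y * sp.cos(theta) - x * sp.sin(theta)]
print(pfaffian_test(example, [x, y, theta]))       # True -> holonomic, as found above

# Its integral x*cos(theta) + y*sin(theta) = const reproduces the Pfaffian coefficients:
f = x * sp.cos(theta) + y * sp.sin(theta)
print([sp.simplify(sp.diff(f, v)) for v in (x, y, theta)])

# A contrasting (assumed) example, the knife-edge constraint sin(theta) dx - cos(theta) dy = 0:
knife_edge = [sp.sin(theta), -sp.cos(theta), sp.Integer(0)]
print(pfaffian_test(knife_edge, [x, y, theta]))    # False -> nonholonomic
```

For more than three variables, the combinations(...) loop automatically generates the n ( n − 1 ) ( n − 2 ) / 6 test equations per constraint described above.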
For a given Pfaffian constraint where every coefficient of every differential is a constant, in other words, a constraint in the form: ∑ j A i j d u j + A i d t = 0 ; { A i j , A i ; j = 1 , 2 , … ; i = 1 , 2 , … } ∈ R {\displaystyle \sum _{j}\ A_{ij}\,du_{j}+A_{i}\,dt=0;\;\{A_{ij},A_{i};\,j=1,2,\ldots ;\,i=1,2,\ldots \}\in \mathbb {R} }
the constraint must be holonomic.
We may prove this as follows: consider a system of constraints in Pfaffian form where every coefficient of every differential is a constant, as described directly above. To test whether this system of constraints is holonomic, we use the universal test . We can see that in the test equation, there are three terms that must sum to zero. Therefore, if each of those three terms in every possible test equation is zero, then all test equations are true and thus the system is holonomic. Each term of each test equation is in the form: A 3 ( ∂ A 2 ∂ u 1 − ∂ A 1 ∂ u 2 ) {\displaystyle A_{3}\left({\frac {\partial A_{2}}{\partial u_{1}}}-{\frac {\partial A_{1}}{\partial u_{2}}}\right)} where:
Additionally, there are i {\displaystyle i} sets of test equations.
We can see that, by definition, all A n {\displaystyle A_{n}} are constants. It is well known in calculus that any derivative (full or partial) of any constant is 0 {\displaystyle 0} . Hence, each term reduces to: A 3 ( 0 − 0 ) {\displaystyle A_{3}{\big (}0-0{\big )}}
and hence each term is zero, the left side of each test equation is zero, each test equation is true, and the system is holonomic.
Any system that can be described by a Pfaffian constraint and has a configuration space or state space of only two variables or one variable is holonomic.
We may prove this as such: consider a dynamical system with a configuration space or state space described as: u = [ u 1 u 2 ] T {\displaystyle \mathbf {u} ={\begin{bmatrix}u_{1}&u_{2}\end{bmatrix}}^{\mathrm {T} }}
if the system is described by a state space, we simply say that u 2 {\displaystyle u_{2}} equals our time variable t {\displaystyle t} . This system will be described in Pfaffian form: A i 1 d u 1 + A i 2 d u 2 = 0 {\displaystyle A_{i1}\,du_{1}+A_{i2}\,du_{2}=0}
with i {\displaystyle i} sets of constraints. The system will be tested by using the universal test. However, the universal test requires three variables in the configuration or state space. To accommodate this, we simply add a dummy variable λ {\displaystyle \lambda } to the configuration or state space to form: u = [ u 1 u 2 λ ] T {\displaystyle \mathbf {u} ={\begin{bmatrix}u_{1}&u_{2}&\lambda \end{bmatrix}}^{\mathrm {T} }}
Because the dummy variable λ {\displaystyle \lambda } is by definition not a measure of anything in the system, its coefficient in the Pfaffian form must be 0 {\displaystyle 0} . Thus we revise our Pfaffian form: A i 1 d u 1 + A i 2 d u 2 + 0 d λ = 0 {\displaystyle A_{i1}\,du_{1}+A_{i2}\,du_{2}+0\,d\lambda =0}
Now we may use the test as such, for a given constraint i {\displaystyle i} if there are a set of constraints: 0 ( ∂ A i 2 ∂ u 1 − ∂ A i 1 ∂ u 2 ) + A i 2 ( ∂ A i 1 ∂ λ − ∂ ∂ u 1 0 ) + A i 1 ( ∂ ∂ u 2 0 − ∂ A i 2 ∂ λ ) = 0 {\displaystyle 0\left({\frac {\partial A_{i2}}{\partial u_{1}}}-{\frac {\partial A_{i1}}{\partial u_{2}}}\right)+A_{i2}\left({\frac {\partial A_{i1}}{\partial \lambda }}-{\frac {\partial }{\partial u_{1}}}0\right)+A_{i1}\left({\frac {\partial }{\partial u_{2}}}0-{\frac {\partial A_{i2}}{\partial \lambda }}\right)=0}
Upon realizing that : ∂ ∂ λ f ( u 1 , u 2 ) = 0 {\displaystyle {\frac {\partial }{\partial \lambda }}f(u_{1},u_{2})=0} because the dummy variable λ {\displaystyle \lambda } cannot appear in the coefficients used to describe the system, we see that the test equation must be true for all sets of constraint equations and thus the system must be holonomic. A similar proof can be conducted with one actual variable in the configuration or state space and two dummy variables to confirm that one-degree-of-freedom systems describable in Pfaffian form are also always holonomic.
In conclusion, we realize that even though it is possible to model nonholonomic systems in Pfaffian form, any system modellable in Pfaffian form with two or fewer degrees of freedom (the number of degrees of freedom is equal to the number of terms in the configuration space) must be holonomic.
Important note: realize that the test equation was satisfied only because the dummy variable, and hence the dummy differential included in the test, makes any function of the actual configuration or state space variables differentiate to 0 {\displaystyle 0} with respect to it. Having a system with a configuration or state space of: u = [ u 1 u 2 u 3 ] T {\displaystyle \mathbf {u} ={\begin{bmatrix}u_{1}&u_{2}&u_{3}\end{bmatrix}}^{\mathrm {T} }}
and a set of constraints where one or more constraints are in the Pfaffian form: A i 1 d u 1 + A i 2 d u 2 + 0 d u 3 = 0 {\displaystyle A_{i1}du_{1}+A_{i2}du_{2}+0du_{3}=0}
does not guarantee the system is holonomic, as even though one differential has a coefficient of 0 {\displaystyle 0} , there are still three degrees of freedom described in the configuration or state space.
The holonomic constraint equations can help us easily remove some of the dependent variables in our system. For example, if we want to remove x d {\displaystyle x_{d}} , which is a parameter in the constraint equation f i {\displaystyle f_{i}} , we can rearrange the equation into the following form, assuming it can be done,
x d = g i ( x 1 , x 2 , x 3 , … , x d − 1 , x d + 1 , … , x N , t ) , {\displaystyle x_{d}=g_{i}(x_{1},\ x_{2},\ x_{3},\ \dots ,\ x_{d-1},\ x_{d+1},\ \dots ,\ x_{N},\ t),\,}
and replace the x d {\displaystyle x_{d}} in every equation of the system using the above function. This can always be done for general physical systems, provided that the derivative of f i {\displaystyle f_{i}} is continuous; by the implicit function theorem , the solution g i {\displaystyle g_{i}} is then guaranteed to exist in some open set. Thus, it is possible to remove all occurrences of the dependent variable x d {\displaystyle x_{d}} .
Suppose that a physical system has N {\displaystyle N} degrees of freedom. Now, h {\displaystyle h} holonomic constraints are imposed on the system. Then, the number of degrees of freedom is reduced to m = N − h {\displaystyle m=N-h} . We can use m {\displaystyle m} independent generalized coordinates ( q j {\displaystyle q_{j}} ) to completely describe the motion of the system. The transformation equation can be expressed as follows:
x i = x i ( q 1 , q 2 , … , q m , t ) , i = 1 , 2 , … N . {\displaystyle x_{i}=x_{i}(q_{1},\ q_{2},\ \ldots ,\ q_{m},\ t)\ ,\qquad i=1,\ 2,\ \ldots N.\,}
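As a brief illustration of such a transformation equation (a sketch with assumed symbol names, not taken from the article), the simple pendulum above has N = 2 Cartesian coordinates and h = 1 holonomic constraint, so m = 1 generalized coordinate, the angle q measured from the downward vertical, suffices:

```python
import sympy as sp

q, L = sp.symbols('q L', positive=True)

# Transformation equations x_i = x_i(q) for the simple pendulum:
# one generalized coordinate q, the angle from the downward vertical.
x = L * sp.sin(q)
y = -L * sp.cos(q)

# The holonomic constraint x^2 + y^2 - L^2 = 0 is satisfied identically,
# so the N = 2 Cartesian coordinates reduce to m = N - h = 1 generalized coordinate.
print(sp.simplify(x**2 + y**2 - L**2))   # 0
```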
In order to study classical physics rigorously and methodically, we need to classify systems. Based on the previous discussion, we can classify physical systems into holonomic systems and non-holonomic systems . One of the conditions for the applicability of many theorems and equations is that the system must be a holonomic system. For example, if a physical system is a holonomic system and a monogenic system , then Hamilton's principle is the necessary and sufficient condition for the correctness of Lagrange's equation . [ 3 ]
Holophonics is a binaural recording system created by Hugo Zuccarelli that is based on the claim that the human auditory system acts as an interferometer . It relies on phase variance , just like stereophonic sound . The sound characteristics of holophonics are most clearly heard through headphones , though they can be effectively demonstrated with two-channel stereo speakers, provided that they are phase-coherent. The word "holophonics" is related to "acoustic hologram ".
Holophonics was created by Argentine inventor Hugo Zuccarelli in 1980, during his studies at the Politecnico di Milano university. In 1983, Zuccarelli released a recording entitled Zuccarelli Holophonics (The Matchbox Shaker) in the United Kingdom (UK) that was produced by CBS . The recording consisted entirely of short recordings of sounds designed to show off the Holophonics system. These included a shaking matchbox, haircut and blower, bees, balloon, plastic bag, birds, airplanes, fireworks , thunder and racing cars. In its early years, Holophonics was used by various artists, including Pink Floyd for The Final Cut , [ 1 ] Roger Waters' solo album The Pros and Cons of Hitch Hiking , [ 2 ] and Psychic TV 's Dreams Less Sweet [ citation needed ] . The system has been used in film soundtracks , popular music, television and theme parks. [ 3 ] The most famous sound effects were recorded in Modena at Umbi's Studios by sound engineer Maurizio Maggi. Holophonics is patented and registered by Umberto Maggi (Italy).
Zuccarelli states that the human auditory system is a sound emitter, producing a reference sound that combines with incoming sound to form an interference pattern inside the ear. The nature of this pattern is sensitive to the direction of the incoming sound. According to the hypothesis, the cochlea detects and analyzes this pattern as if it were an acoustic hologram. The brain then interprets this data and infers the direction of the sound. An article by Zuccarelli presenting this theory was printed in the magazine New Scientist in 1983. The article was soon followed by two letters casting doubt on Zuccarelli's theory and his scientific abilities. [ 4 ] [ 5 ]
To date, there has been no evidence provided that any acoustic emissions are used for sound localization. Holophonics, like binaural recording, instead reproduces the interaural differences (arrival time and amplitude between the ears), as well as rudimentary head-related transfer functions (HRTF). These create the illusion that sounds produced in the membrane of a speaker emanate from specific directions.
While otoacoustic emissions do exist, there is no evidence to support the assertion that these play a role in sound localization, nor is any mechanism for this "interference" effect claimed by Zuccarelli supported. On the contrary, there is abundant literature proving that properly presented spatial cues via HRTF synthesis (mimicking binaural heads) or binaural recording is adequate to reproduce realistic spatial recordings comparable to real listening, and comparable to the Holophonics demonstrations. [ 6 ] | https://en.wikipedia.org/wiki/Holophonics |
Holozoic nutrition (Greek: holo -whole ; zoikos -of animals) is a type of heterotrophic nutrition that is characterized by the internalization ( ingestion ) and internal processing of liquid or solid food particles. [ 1 ] Protozoa , such as amoebas , and most free-living animals, such as humans, exhibit this type of nutrition, in which food is taken into the body as a liquid or solid and then further broken down.
In holozoic nutrition, the energy and organic building blocks are obtained by ingesting and then digesting other organisms or pieces of other organisms, including blood, flesh and decaying organic matter. This contrasts with holophytic nutrition , in which energy and organic building blocks are obtained through photosynthesis or chemosynthesis , and with saprozoic nutrition, in which digestive enzymes are released externally and the resulting monomers (small organic molecules) are absorbed directly from the environment.
There are several stages of holozoic nutrition, which often occur in separate compartments within an organism (such as the stomach and intestines): ingestion, digestion, absorption, assimilation, and egestion.
The Holstein–Herring method , [ 1 ] [ 2 ] [ 3 ] [ 4 ] also called the surface integral method , [ 5 ] [ 6 ] or Smirnov's method , [ 7 ] is an effective means of getting the exchange energy splittings of asymptotically degenerate energy states in molecular systems. Although the exchange energy becomes elusive at large internuclear distances, it is of prominent importance in theories of molecular binding and magnetism. This splitting results from the symmetry under exchange of identical nuclei ( Pauli exclusion principle ). The basic idea was pioneered by Theodore Holstein , Conyers Herring and Boris M. Smirnov in the 1950s and 1960s.
The method can be illustrated for the hydrogen molecular ion or more generally, atom-ion systems or one-active electron systems, as follows. We consider states that are represented by even or odd functions with respect to behavior under space inversion. This is denoted with the suffixes g and u from the German gerade and ungerade and are standard practice for the designation of electronic states of diatomic molecules, whereas for atomic states the terms even and odd are used.
The electronic time-independent Schrödinger equation can be written as: ( − ℏ 2 2 m ∇ 2 + V ) ψ = E ψ {\displaystyle \left(-{\frac {\hbar ^{2}}{2m}}\nabla ^{2}+V\right)\psi =E\psi }
where E is the (electronic) energy of a given quantum mechanical state (eigenstate), with the electronic state function ψ = ψ ( r ) {\displaystyle \psi =\psi (\mathbf {r} )} depending on the spatial coordinates of the electron and where V {\displaystyle V} is the electron-nuclear Coulomb potential energy function. For the hydrogen molecular ion , this is: V = − e 2 4 π ε 0 ( 1 r a + 1 r b − 1 R ) {\displaystyle V=-{\frac {e^{2}}{4\pi \varepsilon _{0}}}\left({\frac {1}{r_{a}}}+{\frac {1}{r_{b}}}-{\frac {1}{R}}\right)} where r a {\displaystyle r_{a}} and r b {\displaystyle r_{b}} are the distances of the electron from the two nuclei and R {\displaystyle R} is the internuclear distance.
For any gerade (or even) state, the electronic Schrödinger wave equation can be written in atomic units ( ℏ = m = e = 4 π ε 0 = 1 {\displaystyle \hbar =m=e=4\pi \varepsilon _{0}=1} ) as: ( − 1 2 ∇ 2 + V ) ψ + = E + ψ + {\displaystyle \left(-{\tfrac {1}{2}}\nabla ^{2}+V\right)\psi _{+}=E_{+}\psi _{+}}
For any ungerade (or odd) state, the corresponding wave equation can be written as: ( − 1 2 ∇ 2 + V ) ψ − = E − ψ − {\displaystyle \left(-{\tfrac {1}{2}}\nabla ^{2}+V\right)\psi _{-}=E_{-}\psi _{-}}
For simplicity, we assume real functions (although the result can be generalized to the complex case). We then multiply the gerade wave equation by ψ − {\displaystyle \psi _{-}} on the left and the ungerade wave equation on the left by ψ + {\displaystyle \psi _{+}} and subtract to obtain: Δ E ψ + ψ − = 1 2 ( ψ − ∇ 2 ψ + − ψ + ∇ 2 ψ − ) {\displaystyle \Delta E\,\psi _{+}\psi _{-}={\tfrac {1}{2}}\left(\psi _{-}\nabla ^{2}\psi _{+}-\psi _{+}\nabla ^{2}\psi _{-}\right)}
where Δ E = E − − E + {\displaystyle \Delta E=E_{-}-E_{+}} is the exchange energy splitting . Next, without loss of generality, we define orthogonal single-particle functions, ϕ A {\displaystyle \phi _{A}^{}} and ϕ B {\displaystyle \phi _{B}^{}} , located at the nuclei and write:
This is similar to the LCAO ( linear combination of atomic orbitals ) method used in quantum chemistry, but we emphasize that the functions ϕ A {\displaystyle \phi _{A}^{}} and ϕ B {\displaystyle \phi _{B}^{}} are in general polarized, i.e. they are not pure eigenfunctions of angular momentum with respect to their nuclear center (see also below).
Note, however, that in the limit as R → ∞ {\displaystyle R\rightarrow \infty } , these localized functions ϕ A , B {\displaystyle \phi _{A,B}^{}} collapse into the well-known atomic (hydrogenic) psi functions ϕ A , B 0 {\displaystyle \phi _{A,B}^{0}} . We denote M {\displaystyle M} as the mid-plane located exactly between the two nuclei (see diagram for hydrogen molecular ion for more details), with z {\displaystyle {\mathbf {z} }} representing the unit normal vector of this plane (which is parallel to the Cartesian z {\displaystyle z} -direction), so that the full R 3 {\displaystyle \mathbf {R} ^{3}} space is divided into left ( L {\displaystyle L} ) and right ( R {\displaystyle R} ) halves. By considerations of symmetry:
This implies that:
Also, these localized functions are normalized, which leads to:
and conversely. Integration of the above in the whole space left to the mid-plane yields:
and
From a variation of the divergence theorem on the above, we finally obtain:
where d S {\displaystyle d{\mathbf {S} }} is a differential surface element of the mid-plane. This is the Holstein–Herring formula. From the latter, Conyers Herring was the first to show [ 3 ] that the lead term for the asymptotic expansion of the energy difference between the two lowest states of the hydrogen molecular ion, namely the first excited state 2 p σ u {\displaystyle 2p\sigma _{u}} and the ground state 1 s σ g {\displaystyle 1s\sigma _{g}} (as expressed in molecular notation —see graph for energy curves), was found to be: Δ E = E − − E + ∼ 4 e R e − R {\displaystyle \Delta E=E_{-}-E_{+}\sim {\frac {4}{e}}\,R\,e^{-R}} (in atomic units).
Previous calculations based on the LCAO of atomic orbitals had erroneously given a lead coefficient of 4 / 3 {\displaystyle 4/3} instead of 4 / e {\displaystyle 4/e} . While it is true that for the Hydrogen molecular ion, the eigenenergies can be mathematically expressed in terms of a generalization of the Lambert W function , these asymptotic formulae are more useful in the long range and the Holstein–Herring method has a much wider range of applications than this particular molecule.
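Assuming the leading-order behaviour Δ E ≈ (4/e) R e^(−R) in atomic units, with lead coefficient 4/e as stated above, the following short numerical sketch (not from the source; the function name is an editorial choice) shows how slowly the splitting decays and how the erroneous 4/3 lead coefficient compares with Herring's 4/e at large internuclear distances:

```python
import math

def exchange_splitting(R, lead_coeff):
    """Assumed leading-order asymptotic exchange splitting for H2+ in atomic units:
    Delta E ~ lead_coeff * R * exp(-R) (see the discussion above)."""
    return lead_coeff * R * math.exp(-R)

herring = 4.0 / math.e   # Herring's lead coefficient
lcao = 4.0 / 3.0         # erroneous lead coefficient from early LCAO treatments

for R in (6.0, 10.0, 15.0):
    print(f"R = {R:4.1f} a.u.:  4/e -> {exchange_splitting(R, herring):.3e},"
          f"  4/3 -> {exchange_splitting(R, lcao):.3e}")
# The two estimates differ by a constant factor e/3 (about 0.906) at every R.
```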
The Holstein–Herring formula had limited applications until around 1990 when Kwong-Tin Tang , Jan Peter Toennies , and C. L. Yiu [ 8 ] demonstrated that ϕ A {\displaystyle \phi _{A}^{}} can be a polarized wave function, i.e. an atomic wave function localized at a particular nucleus but perturbed by the other nuclear center, and consequently without apparent gerade or ungerade symmetry, and nonetheless the Holstein–Herring formula above can be used to generate the correct asymptotic series expansions for the exchange energies. In this way, one has successfully recast a two-center formulation into an effective one-center formulation. Subsequently, it has been applied with success to one-active electron systems. Later, Scott et al. explained and clarified their results while sorting out subtle but important issues concerning the true convergence of the polarized wave function. [ 9 ] [ 10 ] [ 11 ]
The outcome meant that it was possible to solve for the asymptotic exchange energy splittings to any order. The Holstein–Herring method has been extended to the two-active electron case i.e. the hydrogen molecule for the two lowest discrete states of H 2 {\displaystyle {\text{H}}_{2}} [ 12 ] and also for general atom-atom systems. [ 13 ]
The Holstein–Herring formula can be physically interpreted as the electron undergoing " quantum tunnelling " between both nuclei, thus creating a current whose flux through the mid-plane allows us to isolate the exchange energy. The energy is thus shared, i.e. exchanged , between the two nuclear centers. Related to the tunnelling effect, a complementary interpretation from Sidney Coleman 's Aspects of Symmetry (1985) has an " instanton " travelling near and about the classical paths within path integral formulation . Note that the volume integral in the denominator of the Holstein–Herring formula is sub-dominant in R {\displaystyle R} . Consequently this denominator is almost unity for sufficiently large internuclear distances R {\displaystyle R} and only the surface integral of the numerator need be considered. | https://en.wikipedia.org/wiki/Holstein–Herring_method |
In quantum mechanics , the Holstein–Primakoff transformation is a mapping from boson creation and annihilation operators to the spin operators , effectively truncating their infinite-dimensional Fock space to finite-dimensional subspaces.
One important aspect of quantum mechanics is the occurrence of—in general— non-commuting operators which represent observables , quantities that can be measured.
A standard example of a set of such operators are the three components of the angular momentum operators, which are crucial in many quantum systems.
These operators are complicated, and one would like to find a simpler representation, which can be used to generate approximate calculational schemes.
The transformation was developed [ 1 ] in 1940 by Theodore Holstein , a graduate student at the time, [ 2 ] and Henry Primakoff . This method has found widespread applicability and has been extended in many different directions.
There is a close link to other methods of boson mapping of operator algebras: in particular, the (non-Hermitian) Dyson –Maleev [ 3 ] [ 4 ] technique, and to a lesser extent the Jordan–Schwinger map . [ 5 ] There is, furthermore, a close link to the theory of (generalized) coherent states in Lie algebras .
The basic idea can be illustrated with the standard example of the spin operators of quantum mechanics.
For any set of right-handed orthogonal axes, define the components of the spin vector operator as S x {\displaystyle S_{x}} , S y {\displaystyle S_{y}} and S z {\displaystyle S_{z}} , which are mutually noncommuting , i.e., [ S x , S y ] = i ℏ S z {\displaystyle \left[S_{x},S_{y}\right]=i\hbar S_{z}} and its cyclic permutations.
In order to uniquely specify the states of a spin, one may diagonalise any set of commuting operators. Normally one uses the SU(2) Casimir operators S 2 {\displaystyle S^{2}} and S z {\displaystyle S_{z}} , which leads to
states with the quantum numbers | s , m s ⟩ {\displaystyle \left|s,m_{s}\right\rangle } , satisfying S 2 | s , m s ⟩ = ℏ 2 s ( s + 1 ) | s , m s ⟩ {\displaystyle S^{2}\left|s,m_{s}\right\rangle =\hbar ^{2}s(s+1)\left|s,m_{s}\right\rangle } and S z | s , m s ⟩ = ℏ m s | s , m s ⟩ {\displaystyle S_{z}\left|s,m_{s}\right\rangle =\hbar m_{s}\left|s,m_{s}\right\rangle } .
The projection quantum number m s {\displaystyle m_{s}} takes on all the values ( − s , − s + 1 , … , s − 1 , s ) {\displaystyle (-s,-s+1,\ldots ,s-1,s)} .
Consider a single particle of spin s (i.e., look at a single irreducible representation of SU(2)). Now take the state with maximal projection | s , m s = + s ⟩ {\displaystyle \left|s,m_{s}=+s\right\rangle } , the extremal weight state, as a vacuum for a set of boson operators, and each subsequent state with lower projection quantum number as a boson excitation of the previous one: | s , s − n ⟩ ↦ 1 n ! ( a † ) n | s , s ⟩ {\displaystyle |s,\,s-n\rangle \,\mapsto \,{\tfrac {1}{\sqrt {n!}}}\left(a^{\dagger }\right)^{n}|s,\,s\rangle } , where n = s − m s {\displaystyle n=s-m_{s}} counts the number of bosons.
Each additional boson then corresponds to a decrease of ħ in the spin projection. Thus, the spin raising and lowering operators S + = S x + i S y {\displaystyle S_{+}=S_{x}+iS_{y}} and S − = S x − i S y {\displaystyle S_{-}=S_{x}-iS_{y}} , so that [ S + , S − ] = 2 ℏ S z {\displaystyle [S_{+},S_{-}]=2\hbar S_{z}} , correspond (in the sense detailed below) to the bosonic annihilation and creation operators, respectively.
The precise relations between the operators must be chosen to ensure the correct commutation relations for the spin operators, such that they act on a finite-dimensional space, unlike the original Fock space.
The resulting Holstein–Primakoff transformation can be written as
S + = ℏ 2 s 1 − a † a 2 s a , S − = ℏ 2 s a † 1 − a † a 2 s , S z = ℏ ( s − a † a ) . {\displaystyle S_{+}=\hbar {\sqrt {2s}}{\sqrt {1-{\frac {a^{\dagger }a}{2s}}}}\,a~,\qquad S_{-}=\hbar {\sqrt {2s}}a^{\dagger }\,{\sqrt {1-{\frac {a^{\dagger }a}{2s}}}}~,\qquad S_{z}=\hbar (s-a^{\dagger }a)~.}
The transformation is particularly useful in the case where s is large, when the square roots can be expanded as Taylor series , to give an expansion in decreasing powers of s .
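A direct way to see that the transformation reproduces the spin algebra on the physical subspace, without any expansion, is to build the operators as finite matrices. The following is a minimal numerical sketch (not from the source; the function and variable names are editorial choices), taking ħ = 1 and restricting the square root to the (2s+1)-dimensional physical subspace:

```python
import numpy as np

def holstein_primakoff(s):
    """Spin matrices from the Holstein-Primakoff transformation,
    restricted to the (2s+1)-dimensional physical subspace, with hbar = 1."""
    dim = int(round(2 * s)) + 1
    n = np.arange(dim)                         # boson occupation numbers 0 .. 2s
    a = np.diag(np.sqrt(n[1:]), k=1)           # truncated annihilation operator
    adag = a.conj().T                          # creation operator
    root = np.diag(np.sqrt(1.0 - n / (2 * s))) # sqrt(1 - a^dag a / 2s) on the subspace
    Sp = np.sqrt(2 * s) * root @ a             # S_+
    Sm = np.sqrt(2 * s) * adag @ root          # S_-
    Sz = np.diag(s - n)                        # S_z = s - a^dag a
    return Sp, Sm, Sz

Sp, Sm, Sz = holstein_primakoff(s=1.5)
print(np.allclose(Sp @ Sm - Sm @ Sp, 2 * Sz))   # True: [S+, S-] = 2 Sz
print(np.allclose(Sz @ Sp - Sp @ Sz, Sp))       # True: [Sz, S+] = S+
```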
As an alternative to a Taylor expansion, there has been recent progress [ 6 ] [ 7 ] with a resummation of the series that yields expressions that are polynomial in bosonic operators but still mathematically exact (on the physical subspace). The first method develops a resummation [ 6 ] that is exact for spin s = 1 / 2 {\displaystyle s=1/2} , while the latter [ 7 ] employs a Newton series (a finite difference) expansion with an identical result, as shown below
S + ( 1 / 2 ) = ℏ 2 s [ 1 + ( 1 − 1 2 s − 1 ) a † a ] a , S − ( 1 / 2 ) = ( S + ( 1 / 2 ) ) † , S z ( 1 / 2 ) = ℏ ( s − a † a ) . {\displaystyle S_{+}^{(1/2)}=\hbar {\sqrt {2s}}\left[1+\left({\sqrt {1-{\frac {1}{2s}}}}-1\right)a^{\dagger }a\right]a,\qquad S_{-}^{(1/2)}=(S_{+}^{(1/2)})^{\dagger },\qquad S_{z}^{(1/2)}=\hbar (s-a^{\dagger }a)~.}
While the expression above is not exact for spins higher than 1/2, it is an improvement over the Taylor series. Exact expressions also exist for higher spins and include 2 s + 1 {\displaystyle 2s+1} terms. As in the result above, the expressions for higher spins also satisfy S + = S − † {\displaystyle S_{+}=S_{-}^{\dagger }} , and therefore the resummation is Hermitian.
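As a quick consistency check of the s = 1/2 case (again an editorial sketch with ħ = 1 and assumed variable names), one can build the resummed operators on a small Fock space and confirm that, on the physical subspace n ∈ {0, 1}, they reduce to the exact spin-1/2 matrices and obey [S₊, S₋] = 2S_z:

```python
import numpy as np

dim = 5                                       # small Fock-space truncation for the check
n = np.arange(dim)
a = np.diag(np.sqrt(n[1:]), k=1)              # boson annihilation operator
adag = a.conj().T
num = adag @ a                                # number operator a^dag a

s = 0.5
factor = np.eye(dim) + (np.sqrt(1 - 1 / (2 * s)) - 1) * num   # 1 + (sqrt(1 - 1/2s) - 1) a^dag a
Sp = np.sqrt(2 * s) * factor @ a              # resummed S_+
Sm = Sp.conj().T                              # S_- = (S_+)^dagger
Sz = s * np.eye(dim) - num                    # S_z = s - a^dag a

# On the physical subspace n in {0, 1} these reduce to the exact spin-1/2 operators:
print(Sp[:2, :2])                             # [[0, 1], [0, 0]]  (sigma_+)
print(Sz[:2, :2])                             # diag(+1/2, -1/2)
print(np.allclose((Sp @ Sm - Sm @ Sp)[:2, :2], 2 * Sz[:2, :2]))   # True
```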
There also exists a non-Hermitian Dyson–Maleev variant realization (due to Freeman Dyson and S. V. Maleev), which is related to the above and valid for all spins,
satisfying the same commutation relations and characterized by the same Casimir invariant.
The technique can be further extended to the Witt algebra , [ 8 ] which is the centerless Virasoro algebra . | https://en.wikipedia.org/wiki/Holstein–Primakoff_transformation |
The Holt-Dern process is a method by which silver and gold can be extracted from low-grade ores. [ 1 ]
The method was applied in mining at Park City, Utah , and in the Tintic Mining District at the Tintic Smelter Site . It was named for George Dern and T. P. Holt .
This chemical process -related article is a stub . You can help Wikipedia by expanding it . | https://en.wikipedia.org/wiki/Holt-Dern_process |
The Holton Taxol total synthesis , published by Robert A. Holton and his group at Florida State University in 1994, was the first total synthesis of Taxol (generic name: paclitaxel). [ 1 ] [ 2 ]
The Holton Taxol total synthesis is a good example of a linear synthesis . The synthesis starts from patchoulene oxide, a commercially available natural compound . [ 3 ] This epoxide can be obtained in two steps from the terpene patchoulol and also from borneol . [ 4 ] [ 5 ] The reaction sequence is also enantioselective , synthesizing (+)-Taxol from (−)-patchoulene oxide or (−)-Taxol from (−)-borneol with a reported specific rotation of ±47° (c=0.19 / MeOH). The Holton sequence to Taxol is relatively short compared to that of the other groups (46 linear steps from patchoulene oxide). One of the reasons is that patchoulene oxide already contains 15 of the 20 carbon atoms required for the Taxol ABCD ring framework.
Other raw materials required for this synthesis include 4-pentenal, m-chloroperoxybenzoic acid , methyl magnesium bromide and phosgene . Two key chemical transformations in this sequence are a Chan rearrangement and a sulfonyloxaziridine enolate oxidation .
It was envisaged that Taxol ( 51 ) could be accessed through tail addition of the Ojima lactam 48 to alcohol 47 . Of the four rings of Taxol, the D ring was formed last, the result of a simple intramolecular S N 2 reaction of hydroxytosylate 38 , which could be synthesized from hydroxyketone 27 . Formation of the six-membered C ring took place through a Dieckmann condensation of lactone 23 , which could be obtained through a Chan rearrangement of carbonate ester 15. Substrate 15 could be derived from ketone 6 , which, after several oxidations and rearrangements, could be furnished from commercially available patchoulene oxide 1 .
As shown in Scheme 1 , the first steps in the synthesis created the bicyclo[5.3.1]undecane AB ring system of Taxol. Reaction of epoxide 1 with tert-butyllithium removed the acidic α-epoxide proton, leading to an elimination reaction and simultaneous ring-opening of the epoxide to give allylic alcohol 2 . The allylic alcohol was epoxidized to epoxyalcohol 3 using tert-butyl hydroperoxide and titanium(IV)tetraisopropoxide . In the subsequent reaction, the Lewis acid boron trifluoride catalyzed the ring opening of the epoxide followed by skeletal rearrangement and an elimination reaction to give unsaturated diol 4 . The newly created hydroxyl group was protected as the triethylsilyl ether ( 5 ). A tandem epoxidation with meta-chloroperbenzoic acid and Lewis acid-catalyzed Grob fragmentation gave ketone 6 , which was then protected as the tert-butyldimethylsilyl ether 7 in 94% yield over three steps.
As shown in Scheme 2 , the next phase involved addition of the carbon atoms required for the formation of the C ring. Ketone 7 was treated with magnesium bromide diisopropylamide and underwent an aldol reaction with 4-pentenal ( 8 ) to give β-hydroxyketone 9 . The hydroxyl group was protected as the asymmetric carbonate ester (10) . Oxidation of the enolate of ketone 10 with (-)- camphorsulfonyl oxaziridine ( 11 ) gave α-hydroxyketone 12 . Reduction of the ketone group with 20 equivalents of sodium bis(2-methoxyethoxy)aluminumhydride (Red-Al) gave triol 13 , which was immediately converted to carbonate 14 by treatment with phosgene . Swern oxidation of alcohol 14 gave ketone 15 . The next step set the final carbon-carbon bond between the B and C rings. This was achieved through a Chan rearrangement of 15 using lithium tetramethylpiperidide to give α-hydroxylactone 16 in 90% yield. The hydroxyl group was reductively removed using samarium(II) iodide to give an enol, and chromatography of this enol on silica gel gave the separable diastereomers cis 17c (77%) and trans 17t (15%), which could be recycled to 17c through treatment with potassium tert-butoxide . Treatment of pure 17c with lithium tetramethylpiperidide and (±)- camphorsulfonyl oxaziridine gave separable α-hydroxyketones 18c (88%) and 18t (8%) in addition to some recovered starting material ( 3% ). Reduction of pure ketone 18c using Red-Al followed by basic work-up resulted in epimerization to give the required trans-fused diol 19 in 88% yield.
As shown in Scheme 3 , diol 19 was protected with phosgene as a carbonate ester ( 20 ). The terminal alkene group of 20 was next converted to a methyl ester using ozonolysis followed by oxidation with potassium permanganate and esterification with diazomethane . Ring expansion to give the cyclohexane C ring 24 was achieved using a Dieckman condensation of lactone 23 with lithium diisopropylamide as a base at -78 °C. Decarboxylation of 24 required protection of the hydroxyl group as the 2-methoxy-2-propyl (MOP) ether ( 25 ). With the protecting group in place, decarboxylation was effected with potassium thiophenolate in dimethylformamide to give protected hydroxy ketone 26 . In the next two steps the MOP protecting group was removed under acidic conditions, and alcohol 27 was reprotected as the more robust benzyloxymethyl ether 28 . The ketone was converted to the trimethylsilyl enol ether 29 , which was subsequently oxidized in a Rubottom oxidation using m -chloroperbezoic acid to give the trimethylsilyl protected acyloin 30 . At this stage the final missing carbon atom in the Taxol ring framework was introduced in a Grignard reaction of ketone 30 using a 10-fold excess of methylmagnesium bromide to give tertiary alcohol 31 . Treatment of this tertiary alcohol with the Burgess reagent ( 32 ) gave exocyclic alkene 33 .
In this section of the Holton Taxol synthesis ( Scheme 4 ), the oxetane D ring was completed and ring B was functionalized with the correct substituents. Allylic alcohol 34 , obtained from deprotection of silyl enol ether 33 with hydrofluoric acid , was oxidized with osmium tetroxide in pyridine to give triol 35 . After protection of the primary hydroxyl group, the secondary hydroxyl group in 36 was converted to a good leaving group using p-toluenesulfonyl chloride . Subsequent deprotection of the trimethylsilyl ether 37 gave tosylate 38 , which underwent cyclization to give oxetane 39 by nucleophilic displacement of the tosylate that occurred with inversion of configuration . The remaining unprotected tertiary alcohol was acylated , and the triethylsilyl group was removed to give allylic alcohol 41 . The carbonate ester was cleaved by reaction with phenyllithium in tetrahydrofuran at -78 °C to give alcohol 42 . The unprotected secondary alcohol was oxidized to ketone 43 using tetrapropylammonium perruthenate (TPAP) and N-methylmorpholine N-oxide (NMO) . This ketone was deprotonated with potassium tert-butoxide in tetrahydrofuran at low temperature and further oxidized by reaction with benzeneseleninic anhydride to give α-hydroxyketone 44 . Further treatment of 44 with potassium tert-butoxide furnished α-hydroxyketone 45 through a Lobry-de Bruyn-van Ekenstein Rearrangement . Substrate 45 was subsequently acylated to give α-acetoxyketone 46 .
In the final stages of the synthesis ( Scheme 5 ), the hydroxyl group in 46 was deprotected to give alcohol 47 . Reaction of the lithium alkoxide of 47 with the Ojima lactam 48 adds the tail in 49 . Deprotection of the triethylsilyl ether with hydrofluoric acid and removal of the BOM group under reductive conditions gave (−)-Taxol 51 in 46 steps.
Patchoulene oxide ( 1 ) could be accessed from the terpene patchoulol ( 52 ) through a series of acid-catalyzed carbocation rearrangements followed by an elimination following Zaitsev's rule to give patchoulene ( 53 ). The driving force for the rearrangement is relief of ring strain . Epoxidation of 53 with peracetic acid gave patchoulene oxide 1 .
The total synthesis makes use of multiple protecting groups, as seen in the sequence above: triethylsilyl (TES) ethers, a tert-butyldimethylsilyl (TBS) ether, carbonate esters, a 2-methoxy-2-propyl (MOP) ether, a benzyloxymethyl (BOM) ether, and trimethylsilyl (TMS) ethers.
Holus is a 3D-image simulation product under development by H+Technology. The concept was first developed in 2013, before funding via Kickstarter allowed the product to be taken to market. The purpose of Holus is to simulate holographic experiences, and it is technically different from typical hologram stickers found on credit cards and currency notes.
Holus has been criticized by some commentators as a revamping of Pepper's ghost , a 19th-century optical trick. [ 1 ] [ 2 ]
Holus was developed in late 2013 by a team in Vancouver, British Columbia , Canada . [ 3 ] [ 4 ]
Shortly before H+ Tech began looking for funding for the device, Holus won a number of awards for its design. These included the Vancouver User Experience Award in the non-profit category, for partnering with Ronald McDonald House to build the Magic Room, and the People's Choice Award for excellence in joy, elegance, and creativity. [ 5 ]
Its first major coverage came from a review by the Massachusetts Institute of Technology in early 2015. At the time, the technology was demonstrated to bring animals to life within the 3D glass box. The product was referred to in the review as roughly the "size of a microwave". [ 6 ] The concept went on to win two awards at the NextBC awards in Canada in early 2015. [ 7 ]
In order to build mass versions of the product, a Kickstarter campaign was launched in order to take the idea to market. It used a similar technology to the optical illusion known as Pepper's ghost . This drew criticism from some during its Kickstarter campaign. It launched its Kickstarter campaign in June 2015 and generated twice its target of $40,000 within the first 48 hours. [ 8 ]
The technology is similar to that used to display holographic performances of the music artists Tupac Shakur and Michael Jackson . [ 9 ] Since then the technology has advanced, with a number of startups entering the market. One of these was H+ Technology, which first began working on the technology in early 2013. The aim of the product has remained the same since then: to produce 3D technology that can be used in the home on a tabletop. [ citation needed ]
Due to the technology being in its infancy, the media has covered the R&D of the product and its potential. [ 10 ] Spatial light modulators have been mentioned as one potential development on future versions of Holus. The University of British Columbia and Simon Fraser University have both assisted with the research work of such displays. [ 11 ] | https://en.wikipedia.org/wiki/Holus |
In the ancient Israelite religion , the holy anointing oil ( Biblical Hebrew : שמן המשחה , romanized: shemen ha-mishchah , lit. 'oil of anointing') formed an integral part of the ordination of the priesthood and the High Priest as well as in the consecration of the articles of the Tabernacle ( Exodus 30:26) [ 1 ] and subsequent temples in Jerusalem . The primary purpose of anointing with the holy anointing oil was to sanctify, to set the anointed person or object apart as qodesh , or "holy" (Exodus 30:29). [ 2 ]
Originally, the oil was used exclusively for the priests and the Tabernacle articles, but its use was later extended to include kings (1 Samuel 10:1). [ 3 ] It was forbidden to be used on an outsider (Exodus 30:33) [ 4 ] or to be used on the body of any common person (Exodus 30:32a) [ 5 ] and the Israelites were forbidden to duplicate any like it for themselves (Exodus 30:32b). [ 6 ]
Some segments of Christianity have continued the practice of using holy anointing oil as a devotional practice, as well as in various liturgies. [ 7 ] A variant form, known as oil of Abramelin , is used in Ecclesia Gnostica Catholica , the ecclesiastical arm of Ordo Templi Orientis (O.T.O.), an international fraternal initiatory organization devoted to promulgating the Law of Thelema . [ 8 ]
A number of religious groups have traditions of continuity of the holy anointing oil, with part of the original oil prepared by Moses remaining to this day. These groups include rabbinical Judaism , [ 9 ] the Armenian Church , [ 10 ] the Assyrian Church of the East , [ 11 ] The Church of Jesus Christ of Latter-day Saints , [ 12 ] the Coptic Church , [ 13 ] [ 14 ] the Saint Thomas Nazrani churches, [ 15 ] and others.
The holy anointing oil described in Exodus 30:22–25 [ 16 ] was created from: [ 17 ] pure myrrh (500 shekels), sweet cinnamon (250 shekels), kaneh bosem (250 shekels), cassia (500 shekels), and a hin of olive oil.
While sources agree about the identity of four of the five ingredients of anointing oil, the identity of the fifth, kaneh bosem , has been a matter of debate. The Bible indicates that it was an aromatic cane or grass, which was imported from a distant land by way of the spice routes , and that a related plant grows in Israel (kaneh bosem is referenced as a cultivated plant in the Song of Songs 4:14). [ 18 ] [ 19 ] Several different plants have been named as possibly being the kaneh bosem .
Most lexicographers, botanists, and biblical commentators translate kaneh bosem as "cane balsam". [ 20 ] [ 21 ] The Aramaic Targum Onkelos renders the Hebrew kaneh bosem in Aramaic as q'nei busma . [ 22 ] Ancient translations and sources identify this with the plant variously referred to as sweet cane, or sweet flag (the Septuagint , the Rambam on Kerithoth 1:1, Saadia Gaon and Jonah ibn Janah ). This plant is known to botanists as Acorus calamus . [ 23 ] According to Aryeh Kaplan in The Living Torah , "It appears that a similar species grew in the Holy Land, in the Hula region in ancient times (Theophrastus, History of Plants 9:7)." [ 24 ]
Maimonides , in contrast, indicates that it was the Indian plant, rosha grass ( Cymbopogon martinii ), which resembles red straw. [ 25 ] Many standard reference works on Bible plants by Michael Zohary (University of Jerusalem, Cambridge, 1985), James A. Duke (2010), and Hans Arne Jensen (Danish 2004, English translation 2012) support this conclusion, arguing that the plant was a variety of Cymbopogon . James A. Duke, quoting Zohary, notes that it is "hopeless to speculate" about the exact species, but that Cymbopogon citratus (Indian lemon-grass) and Cymbopogon schoenanthus are also possibilities. [ 26 ] [ 27 ] Kaplan follows Maimonides in identifying it as the Cymbopogon martinii or palmarosa plant. [ 24 ] [ 28 ]
Sula Benet , in Early Diffusion and Folk Uses of Hemp (1967), identified it as cannabis . [ 29 ] Rabbi Aryeh Kaplan notes that "On the basis of cognate pronunciation and Septuagint readings, some identify Keneh bosem with the English and Greek cannabis , the hemp plant." Benet argued that equating Keneh Bosem with sweet cane could be traced to a mistranslation in the Septuagint , which mistook Keneh Bosem, later referred to as "cannabos" in the Talmud, as "kalabos", a common Egyptian marsh cane plant. [ 29 ]
Customs varied in the cultures of the Middle East. However, anointing with special oil in Israel was either a strictly priestly or kingly right. When a prophet was anointed, it was because he was first a priest. [ citation needed ] When a non-king was anointed, such as Elijah's anointing of Hazael and Jehu , it was a sign that Hazael was to become king of Aram (Syria) and Jehu was to become king of Israel. [ 30 ] Extra-biblical sources show that it was common to anoint kings in many ancient Near Eastern monarchies. Therefore, in Israel, anointing was not only a sacred act but also a socio-political one. [ 31 ]
In the Hebrew Bible, bad smells appear as indications of the presence of disease, decay, rotting processes and death (Exodus 7:18), [ 32 ] [ 33 ] while pleasant aromas suggest places that were biologically clean and conducive to habitation and/or food production and harvesting. Spices and oils were chosen which assisted mankind in orienting themselves and in creating a sense of safety as well as a sense of elevation above the physical world of decay. The sense of smell was also considered highly esteemed by deity. In Deuteronomy 4:28 and Psalms 115:5–6, [ 34 ] [ 35 ] the sense of smell is included in connection with the polemics against idols. In the Hebrew Bible God takes pleasure in inhaling the "soothing odor" ( reah hannihoah ) of offerings (Genesis 8:21; [ 36 ] the phrase is also seen in other verses). [ 37 ]
To the ancient Israelite there was no oil or fat with more symbolic meaning than olive oil. [ citation needed ] It was used as an emollient, a fuel for lighting lamps, for nutrition, and for many other purposes. It was scented olive oil that was chosen to be a holy anointing oil for the Israelites.
The Talmud asserts that the original anointing oil prepared by Moses remained miraculously intact and was used by future generations without replacement, including in the future Third Temple when it is rebuilt. [ 9 ] [ 38 ] This suggests that, following ancient customs, new oil was added to the old thus continuing the original oil for all time. [ citation needed ]
Anointing oil is used in Christian communities for various reasons. Anointing of the sick is prescribed in this passage in the New Testament:
Is any sick among you? let him call for the elders of the church; and let them pray over him, anointing him with oil in the name of the Lord.
The epithet " Christ " as a title for Jesus refers to "the anointed one".
The holy anointing oil of the Armenian Church is called the holy muron ('muron' means myrrh ). [ 40 ] The church holds a special reverence for the continuity factor of the oil. [ 10 ] [ 41 ] According to tradition, a portion of the holy anointing oil of Exodus 30, which Moses and Aaron had blessed, still remained in Jesus' time. Jesus Christ blessed this oil and then gave some of it to Thaddeus, who took the holy oil to Armenia and healed King Abkar of a terrible skin disease by anointing him with the holy oil. Thaddeus is said to have buried a bottle of the holy anointing oil in Daron under an evergreen tree. Gregory the Illuminator discovered the hidden treasure and mixed it with muron that he had blessed. It is said that "To this day, whenever a new batch of muron is prepared and blessed, a few drops of the old one go into it, so that the Armenian muron always contains a small amount of the original oil blessed by Moses, Jesus Christ, and Gregory the Illuminator." [ 10 ]
The holy muron is composed of olive oil and 48 aromas and flowers. The remaining portion of the previous blessed holy oil is poured into the newly prepared oil during the blessing ceremony and passes the blessing from generation to generation. It is said that this procedure has been followed for nearly 1700 years. The Catholicos of all Armenians in Etchmiadzin combines a new mixture of holy muron in the cauldron every seven years using a portion of the holy muron from the previous blend. This is distributed to all of the Armenian churches throughout the world. Before Christianity, muron was reserved solely for the enthroning of royalty and for very special events. In later years, it was used with extreme unction and to heal the sick, and to anoint ordained clergy. [ 42 ]
It is said by the Assyrian Church that the holy anointing oil "was given and handed down to us by our holy fathers Mar Addai and Mar Mari and Mar Tuma." The holy anointing oil of the Assyrian Church is variously referred to as the Oil of the Holy Horn, the Oil of the Qarna, or the Oil of Unction. This holy oil is an apostolic tradition, believed to have originated from the oil consecrated by the apostles themselves, and which by succession has been handed down in the Church into the modern day. [ 43 ] [ page needed ] The original oil which the disciples blessed began to run low and more oil was added to it. The Assyrian Church believes that this has continued to this very day with new oil being added as the oil level lowers. This succession of holy oil is believed to be a continuity of the blessings placed upon the oil from the beginning. [ 11 ]
Both the Oil of Unction and the Holy Leaven are referred to as "leaven", although there is no actual leavening agent present in the oil. Yohanan bar Abgareh referred to the oil in 905, as did Shlemon d-Basra in the 13th century. Yohanan bar Zo'bee in the 14th century integrated the Holy Oil of unction with baptism and other rites. [ citation needed ]
Isaaq Eshbadhnaya in the 15th century wrote the Scholion which is a commentary on specific theological topics, stating that John the Baptist gave John the Evangelist a baptismal vessel of water from Christ's baptism, which was collected by John the Baptist from water dripping from Christ after his baptism in Jordan River. Jesus gave each disciple a "loaf," at the Last Supper, but the Scholion states that to John he gave two loaves, with the instructions to eat only one and to save the other. At the crucifixion, John collected the water from Jesus's side in the vessel and the blood he collected on the loaf from the Last Supper. After the descent of the Holy Spirit on Pentecost the disciples took the vessel and mixed it with oil and each took a horn of it. The loaf they ground up and added flour and salt to it. Each took a portion of the holy oil and the holy bread which were distributed in every land by the hand of those who missionized there. [ 44 ] [ 45 ]
The Assyrian Church has two types of holy oils; the one is ordinary olive oil, blessed or not blessed, the other is the oil of the Holy Horn which is believed to have been handed down from the apostles. The Holy Horn is constantly renewed by the addition of oil blessed by a bishop on Maundy Thursday. While almost anyone can by tradition be anointed with the regular oil, the oil of the Holy Horn is restricted for ordination and sanctification purposes. [ citation needed ]
The holy anointing oil of the Coptic Church is referred to as the holy myron ('myron' means myrrh). The laying on of hands for the dwelling of the Holy Spirit is believed to have been a specific rite of the apostles and their successors the bishops, and as the regions of mission increased, consequently numbers of Christian believers and converts increased. It was not possible for the apostles to wander through all the countries and cities to lay hands on all of those baptized, so they established anointment by the holy myron as an alternative, it is believed, for the laying on of the hands for the Holy Spirit's indwelling.
The first who made the myron were the apostles who had kept the fragrant oils which were on the body of Jesus Christ during his burial, and they added the spices which were brought by those women who prepared them to anoint Christ, but had discovered he had been resurrected. They melted all these spices in pure olive oil, prayed on it in the upper room in Zion, and made it a holy anointing oil. They decided that their successors, the bishops, must renew the making of the myron whenever it is nearly used up, by incorporating the original oil with the new. Today the Coptic Church uses it for ordination, in the sanctification of baptismal water, and in the consecration of churches and church altars and vessels.
It is said that when Mark the Evangelist went to Alexandria, he took with him some of the holy myron oil made by the apostles and that he used it in the sacrament of Chrism , as did the patriarchs who succeeded him. This continued until the era of Athanasius the Apostolic , the 20th patriarch, who then decided to remake the myron in Alexandria. Hence, it is reported, he prepared all of the needed perfumes and spices, with pure olive oil, from which God ordered Moses to make the holy anointing oil as specified in the recipe in the thirtieth chapter of the book of Exodus. Then the sanctification of the holy myron was fulfilled in Alexandria, and Athanasius was entrusted with the holy oil, which contained spices which touched Jesus's body while it was in the tomb, as well as the original oil which had been prepared by the apostles and brought to Egypt by Mark. He distributed the oil to the churches abroad: to the See of Rome, Antioch and Constantinople, together with a document of its authenticity, and all of the patriarchs are said to have rejoiced in receiving it. [ 46 ]
The Coptic Church informs that the fathers of the Church and scholars like Justin Martyr , Tertullian , Hippolytus , Origen , Ambrose , and Cyril of Jerusalem , spoke about the holy myron and how they received its use in anointing by tradition. For example, Hippolytus, in his Apostolic Tradition , speaks of the holy oil "according to ancient custom" [ 47 ] Origen writes about the holy oil "according to the tradition of the church" [ 48 ] Cyril of Jerusalem goes into further detail in speaking about the grace of the Holy Spirit in the holy myron: "this oil is not just any oil: after the epiclesis of the Spirit, it becomes charism of Christ and power of the Holy Spirit through the presence of the deity". [ 49 ]
The use of the holy myron is mentioned by the early fathers and scholars, and is also documented by Abu'l-Barakat Ibn Kabar, a 14th-century Coptic priest and scholar, in his book Misbah az-Zulmah fi idah al-khidmah (The Lamp of Darkness in Clarifying the Service). According to his account, the holy apostles took from the spices that were used to anoint the body of Jesus Christ when he was buried, [ 50 ] added pure olive oil to them, and prayed over the mixture in Upper Zion, the first church, where the Holy Spirit descended in the upper room.
This holy oil was then distributed among all of the apostles so that wherever they preached, new converts would be anointed with it as a seal. They also commanded that whenever a new batch of Holy Myron was made, they add to it the old holy myron to keep the first holy myron continually with all that would ever be made afterwards.
According to the available resources, the holy myron in the Church of Egypt has been made 34 times. [ 51 ] [ 52 ] [ 53 ] [ 54 ] [ 55 ]
According to tradition, Thomas the Apostle laid the original foundation for Christianity in India . It is reported that Jewish communities already present in India enticed Thomas to make his missionary journey there. It is said that he brought holy anointing oil with him and that the St. Thomas Christians still have this oil to this day. [ 15 ]
Patriarch Ya'qub, of the Syrian Malabar Nasrani Church, is remembered for his celebration of the liturgy and his humble encouragement to accept the simple way of life. After he consecrated sacred myron in the Mor Gabriel monastery in 1964, holy myron flowed from the glass container the following day and many people were said to have been healed by it. [ 56 ]
In many evangelical denominations, such as those of the Baptist, Methodist and Pentecostal traditions, holy anointing oil is often used in the anointing of the sick and in deliverance ministry . [ 57 ] It is additionally used "anoint babies as a sign of blessing and protection for the new life ahead" and to "anoint clergy as they begin a new assignment in ministry". [ 58 ] Bottles of holy anointing oil are often sold at Christian religious goods stores , being purchased by both clergy and laity for use in prayer or house blessings . [ 59 ]
In Mandaeism , anointing sesame oil , called misha ( ࡌࡉࡔࡀ ) in Mandaic , is used during rituals such as the masbuta (baptism) and masiqta (death mass), both of which are performed by Mandaean priests . [ 60 ]
Abramelin oil , also called oil of Abramelin , is an anointing oil used in Western esotericism , especially in ceremonial magic . It is blended from aromatic plant materials. Its name came about due to its having been described in a medieval grimoire called The Book of the Sacred Magic of Abramelin the Mage (1897) written by Abraham the Jew (presumed to have lived from c. 1362 – c. 1458). The recipe is adapted from that of the biblical holy anointing oil described in the Book of Exodus (30:22-25) and attributed to Moses . In the English translation The Book of Abramelin: A New Translation (2006) by Steven Guth of Georg Dehn, which was compiled from all the known German manuscript sources, [ 61 ] [ 62 ] [ 63 ] [ 64 ] the formula reads as follows:
Take one part of the best myrrh , half a part of cinnamon , one part of cassia , one part galanga root, and a quarter of the combined total weight of good, fresh olive oil . Make these into an ointment or oil as is done by the chemists. Keep it in a clean container until you need it. Put the container together with the other accessories in the cupboard under the altar. [ 65 ]
In the first printed edition, Peter Hammer, 1725, the recipe reads:
Nimm Myrrhen des besten 1 Theil, Zimmt 1/2 Theil, soviel des Calmus als Zimmet, Cassien soviel als der Myrrhen im Gewicht und gutes frisches Baumöl... " (Take 1 part of the best myrrh, 1/2 part cinnamon, as much calamus as cinnamon, of cassia as much as the myrrh in weight and good fresh tree oil...) [ 66 ]
Note that the proportions in this edition conform with the recipe for holy anointing oil from the Bible ( Exodus 30:22-25). [ 67 ]
The original popularity of Abramelin oil rested on the importance magicians place upon Jewish traditions of holy oils and, more recently, upon S. L. MacGregor Mathers ' translation of The Book of Abramelin and the resurgence of occultism in the 20th century, as found in the works of the Hermetic Order of the Golden Dawn and of Aleister Crowley , the founder of Thelema , who used a similar version of the oil in his system of Magick . Its use has since spread into other modern occult traditions. [ 68 ] There are multiple recipes in use today.
This oil is currently used in several ceremonies of the Thelemic church, Ecclesia Gnostica Catholica , including the rites of confirmation [ 69 ] and ordination . [ 70 ] It is also commonly used to consecrate magical implements and temple furniture. [ 71 ] The eucharistic host of the Gnostic Mass —called the Cake of Light —includes this oil as an important ingredient. [ 8 ]
According to the S. L. MacGregor Mathers English translation from 1897, which derives from an incomplete French manuscript copy of The Book of Abramelin , the recipe is:
You shall prepare the sacred oil in this manner: Take of myrrh in tears, one part; of fine cinnamon, two parts; of galangal half a part; and the half of the total weight of these drugs of the best oil olive. The which aromatics you shall mix together according unto the art of the apothecary, and shall make thereof a balsam, the which you shall keep in a glass vial which you shall put within the cupboard (formed by the interior) of the altar. [ 72 ]
Early in the 20th century, Aleister Crowley created his own version of Abramelin oil, which is called "oil of Abramelin" in The Book of the Law . [ 73 ] It was based on S. L. MacGregor Mathers' substitution of galangal for calamus. Crowley also abandoned the book's method of preparation—which specifies blending myrrh "tears" (resin) and "fine" (finely ground) cinnamon—instead opting for using distilled essential oils in a base of olive oil. His recipe (from his commentary to The Book of the Law ) reads as follows: [ 74 ]
Crowley weighed out his proportions of essential oils according to the recipe specified by Mathers' translation for weighing out raw materials. The result is to give the cinnamon a strong presence, so that when it is placed upon the skin "it should burn and thrill through the body with an intensity as of fire". [ 75 ] This formula is unlike the grimoire recipe and it cannot be used for practices that require the oil to be poured over the head. Rather, Crowley intended it to be applied in small amounts, usually to the top of the head or the forehead, [ 76 ] and to be used for anointing of magical equipment as an act of consecration. [ 71 ]
Oil of Abramelin was seen as highly important by Crowley, and he used his version of it throughout his life. In Crowley's magical system, the oil came to symbolize the aspiration to what he called the Great Work —"The oil consecrates everything that is touched with it; it is his aspiration; all acts performed in accordance with that are holy". [ 77 ] Crowley went on to say:
The Holy Oil is the Aspiration of the Magician; it is that which consecrates him to the performance of the Great Work; and such is its efficacy that it also consecrates all the furniture of the Temple and the instruments thereof. It is also the grace or chrism; for this aspiration is not ambition; it is a quality bestowed from above. For this reason the Magician will anoint first the top of his head before proceeding to consecrate the lower centres in their turn (...) It is the pure light translated into terms of desire. It is not the Will of the Magician, the desire of the lower to reach the higher; but it is that spark of the higher in the Magician which wishes to unite the lower with itself. [ 76 ]
Crowley also had a symbolic view of the ingredients:
This oil is compounded of four substances. The basis of all is the oil of the olive. The olive is, traditionally, the gift of Minerva, the Wisdom of God, the Logos. In this are dissolved three other oils; oil of myrrh, oil of cinnamon, oil of galangal. The Myrrh is attributed to Binah, the Great Mother, who is both the understanding of the Magician and that sorrow and compassion which results from the contemplation of the Universe. The Cinnamon represents Tiphereth, the Sun -- the Son, in whom Glory and Suffering are identical. The Galangal represents both Kether and Malkuth, the First and the Last, the One and the Many, since in this Oil they are One. [...] These oils taken together represent therefore the whole Tree of Life. The ten Sephiroth are blended into the perfect gold. [ 76 ]
Mathers' use of the ingredient galangal instead of calamus and/or Crowley's innovative use of essential oils rather than raw ingredients has resulted in some changes from the original recipe: | https://en.wikipedia.org/wiki/Holy_anointing_oil |
Homarine ( N -methyl picolinic acid betaine ) is an organic compound with the chemical formula C 7 H 7 NO 2 . [ 2 ] It is commonly found in aquatic organisms from phytoplankton to crustaceans , although it is not found in vertebrates. [ 3 ] [ 4 ]
Homarine functions as an osmolyte by affecting the ionic strength of the cytosol and thereby maintaining osmotic pressure within the cell. [ 5 ]
Homarine may also act as a methyl group donor in the biosynthesis of various other N -methylated chemicals, such as glycine betaine and choline . The process of methyl donation converts homarine into picolinic acid and is reversible. [ 6 ]
The name of this chemical comes from the initial discovery of the molecule in 1933 in lobster tissue: [ 4 ] the word homarine as an adjective means "of, or relating to, lobsters" (i.e. genus Homarus ). | https://en.wikipedia.org/wiki/Homarine |
The HomeLink Wireless Control System is a radio frequency (RF) transmitter integrated into some automobiles that can be programmed to activate devices such as garage door openers , RF-controlled lighting , gates and locks, including those with rolling codes .
The system typically features three buttons, most often found on the driver-side visor or on the overhead console, which can be programmed via a training sequence to replace existing remote controls. It is compatible with most RF-controlled garage door openers, as well as home automation systems such as those based on the X10 protocol.
HomeLink is compatible with radio frequency devices operating between 288 and 433 MHz. Select 2007 and newer vehicles are compatible up to 433 MHz. [ 1 ]
HomeLink won the Automotive News PACE Award in 1997, for supplying automotive technology to improve consumer interaction between the car and the home. [ 2 ]
By 2003, it had been installed on over 20,000,000 automobiles. [ 3 ] Originally supplied by Johnson Controls , the HomeLink product line was sold to Gentex in 2013. [ 4 ] | https://en.wikipedia.org/wiki/HomeLink_Wireless_Control_System |
HomeOS was the working title of a home automation operating system being developed at Microsoft Research in the early 2010s. [ 1 ] [ 2 ] Microsoft Research announced the project in 2010 and abandoned it in 2012. [ 3 ]
HomeOS communicated with Lab of Things, a cloud-based Internet of Things infrastructure also developed by Microsoft . [ 4 ] [ 5 ] [ 6 ]
Microsoft's slogan for their HomeOS project was "Enabling smarter homes for everyone." [ 7 ]
Microsoft's HomeOS development team wrote three sample applications that make use of multiple devices , including a "sticky media" app that plays music in parts of the house that are lit up, but not in other rooms; a two-factor authentication app that uses audio from smartphones and images from a front-door camera to turn on lights when a user is identified; and a home browser for viewing and controlling a user's access to all devices in a home. [ 8 ]
Some staff who worked on the project cited Microsoft CEO Steve Ballmer 's focus on enterprise applications , productivity software , and cloud computing as the reason for the stalled development . [ 3 ] | https://en.wikipedia.org/wiki/HomeOS |
HomeRF was a wireless networking specification for home devices. It was developed in 1998 by the Home Radio Frequency Working Group, a consortium of mobile wireless companies that included Proxim Wireless , Intel , Siemens AG , Motorola , Philips and more than 100 other companies. [ 1 ]
The group was disbanded in January 2003, after other wireless networks became accessible to home users and Microsoft began including support for them in its Windows operating systems . As a result, HomeRF fell into obsolescence.
Initially called Shared Wireless Access Protocol (SWAP) and later just HomeRF, this open specification allowed PCs, peripherals, cordless phones and other consumer devices to share and communicate voice and data in and around the home without the complication and expense of running new wires. HomeRF combined several wireless technologies in the 2.4 GHz ISM band , including IEEE 802.11 FH (the frequency-hopping version of wireless data networking) and DECT (the most prevalent digital cordless telephony standard in the world) to meet the unique home networking requirements for security, quality of service (QoS) and interference immunity—issues that still plagued Wi-Fi (802.11b and g). [ citation needed ]
HomeRF used frequency hopping spread spectrum (FHSS) in the 2.4 GHz frequency band and in theory could achieve a maximum of 10 Mbit/s throughput; its nodes could travel within a 50-meter range of a wireless access point while remaining connected to the personal area network (PAN). Several standards and working groups focused on wireless networking technology in radio frequency (RF). Other standards include the popular IEEE 802.11 family, IEEE 802.16 , and Bluetooth .
Proxim Wireless was the only supplier of HomeRF chipsets, and since Proxim also made end products, other manufacturers complained that they had to buy components from their competitor. The fact that HomeRF was developed by a consortium and not an official standards body also put it at a disadvantage against Wi-Fi and its IEEE 802.11 standard. [ citation needed ]
AT&T joined the group because HomeRF was designed for high-speed broadband services and the need to support PCs, phones, stereos and televisions; but last-mile deployment occurred more slowly than expected and with slower speeds. So it was natural that the home networking market focused more on multi-PC households sharing Internet connections for email and browsing than on integrating phone and entertainment services into a broadband service bundle. As a result, the original promoter companies gradually started pulling out of the group rather than supporting multiple standards. They included IBM, Hewlett-Packard, Compaq, Microsoft, and lastly Intel. That left only companies like Motorola, National Semiconductor, Proxim, and Siemens. Even Proxim started pulling away when negative media surrounding HomeRF started affecting its core data networking business, and that left Siemens to do the work of integrating voice, data and video. Siemens was willing to do it alone with HomeRF technology but was concerned by growing uncertainties in the cordless phone market, including mobile phone as home phone, VoIP over Wi-Fi, and 5 GHz vs. 2.4 GHz. When Siemens eventually got out of the cordless phone market, it was the final nail in the HomeRF coffin. [ citation needed ]
HomeRF achieved some success because of its low cost and ease of installation. [ 2 ] By September 2000, some confusion arose from the "home" in the name, leading some to associate HomeRF only with home networks and to use other technologies, such as IEEE 802.11b, for businesses. [ 3 ] A digital media receiver for audio was marketed under the name "Motorola SimpleFi" that used HomeRF. [ 4 ] [ 5 ] In March 2001, Intel announced they would not support further development of HomeRF technology for its Anypoint line. [ 6 ] The group promoting 802.11 technology, the Wireless Ethernet Compatibility Alliance (WECA), changed its name to the Wi-Fi Alliance in 2002, as the Wi-Fi brand became popular. [ 7 ]
WECA members lobbied the FCC for two years, effectively delaying the approval of wideband frequency hopping; this helped 802.11b catch up and gain an insurmountable lead in the market, a lead that was then extended with 802.11g. The use of OFDM in 802.11a and .11g solved many of the RF interference problems of .11b. WPA and 802.1X also improved security over WEP encryption, which was especially important in the corporate world. [ citation needed ]
By January 2003, the Home Radio Frequency Working Group had disbanded. [ 8 ] Archives of the HomeRF Working Group are maintained by Palo Wireless and Wayne Caswell. [ 1 ] [ 9 ] | https://en.wikipedia.org/wiki/HomeRF |
Home Assistant is free and open-source software used for home automation . It serves as an integration platform and smart home hub , allowing users to control smart home devices. The software emphasizes local control and privacy and is designed to be independent of any specific Internet of Things (IoT) ecosystem. [ 2 ] [ 3 ] [ 4 ] [ 5 ] Its interface can be accessed through a web-based user interface , by using companion apps for Android and iOS , or by voice commands via a supported virtual assistant , such as Google Assistant , Amazon Alexa , Apple Siri , and Home Assistant's own "Assist" (a built-in local voice assistant) using natural language. [ 6 ] [ 7 ] [ 8 ]
The Home Assistant software application is commonly run on a computer appliance with the "Home Assistant Operating System", acting as a central control system for home automation (commonly called a smart home hub/gateway/bridge/controller). [ 9 ] [ 10 ] [ 11 ] [ 12 ] Its purpose is to control third-party IoT connectivity devices, software, applications and services via modular integration components. These include native integrations for common wired or wireless communication protocols and standards for IoT products such as Bluetooth , Zigbee , Z-Wave , EnOcean , and Thread / Matter (used to create either local personal area networks or direct ad hoc connections with small smart home devices using low-power digital radios ), as well as Wi-Fi and Ethernet connected devices on a home network / local area network (LAN) . [ 13 ] [ 14 ] [ 15 ] [ 16 ]
Home Assistant also supports controlling devices and services connected via open or proprietary ecosystems and commercial smart home hubs/gateways/bridges, as long as they provide public access via some kind of open API or MQTT interface that allows for third-party integration over either the local area network or the Internet . This includes integrations for Alexa Smart Home (Amazon Echo) , Google Nest (Google Home) , HomeKit (Apple Home) , Samsung SmartThings , and Philips Hue . [ 17 ] [ 18 ] [ 19 ]
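As an illustration of this kind of open-API integration, the following sketch calls a service through Home Assistant's REST API from an external Python script. It is a minimal example only: the base URL, the long-lived access token and the entity name are placeholder assumptions and would differ on a real installation.

```python
# Minimal sketch of third-party integration over the local network using
# Home Assistant's REST API. BASE_URL, TOKEN and the entity_id are placeholders.
import requests

BASE_URL = "http://homeassistant.local:8123"   # address of a local instance
TOKEN = "YOUR_LONG_LIVED_ACCESS_TOKEN"         # created under the user's profile

def turn_on_light(entity_id: str) -> None:
    """Ask Home Assistant to call the light.turn_on service for one entity."""
    response = requests.post(
        f"{BASE_URL}/api/services/light/turn_on",
        headers={"Authorization": f"Bearer {TOKEN}"},
        json={"entity_id": entity_id},
        timeout=10,
    )
    response.raise_for_status()

if __name__ == "__main__":
    turn_on_light("light.living_room")
```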
Information from all devices and their attributes (entities) that the application sees can be used and controlled via automations or scripts using scheduling or subroutines (including preconfigured "blueprints"), e.g. for controlling lighting, climate, entertainment systems and smart home appliances. [ 20 ] [ 21 ] [ 22 ] [ 23 ]
The project was started as a Python application by Paulus Schoutsen in September 2013 and first published publicly on GitHub in November 2013. [ 24 ]
In July 2017, a managed operating system called Hass.io was initially introduced to make it easier to use Home Assistant on single-board computers like the Raspberry Pi series. This has since been renamed "Home Assistant Operating System" (often referred to as "Home Assistant OS"), and uses a bundled "supervisor" management system that allows users to manage, back up and update the local installation, and to extend the functionality of the software with add-ons (plug-in applications) that run as services on the same platform for tighter integration with Home Assistant core. [ 25 ]
An optional "Home Assistant Cloud" subscription service was introduced in December 2017 as an external cloud computing service officially supported by the Home Assistant founders to solve the complexities associated with secured remote access, as well as linking to various third-party cloud services, such as Amazon Alexa and Google Assistant. [ 26 ] Nabu Casa, Inc. was formed in September 2018 to take over this subscription service. [ 27 ] The company's funding is based solely on revenue from the "Home Assistant Cloud" subscription service. The money earned is used to finance the project's infrastructure and to pay for full-time employees contributing to the Home Assistant and ESPHome projects. [ 28 ]
In January 2020, branding was adjusted to make it easier to refer to different parts of the project. The main piece of software was renamed Home Assistant Core , while the full suite of software with the Hass.io embedded operating system and its bundled "supervisor" management system was renamed Home Assistant (though it is also commonly referred to as "HAOS", short for "Home Assistant OS"). [ 29 ]
In April 2024, ownership of the Home Assistant source code and brand name was transferred to the newly created "Open Home Foundation" non-profit organization. The founder of Home Assistant made statements in the announcement that this transfer of ownership and change in governance should mean no practical change to its developers or users as it was primarily done to ensure that Home Assistant source code will remain a free and open-source software and with a continued focus on privacy and local control. Statements in the press release also included secondary plans and goals of making Home Assistant transition from an enthusiast platform to a mainstream consumer product. Ownership of many of the open-source libraries that Home Assistant uses as dependencies and other related entities was also transferred to the Open Home Foundation non-profit organization. [ 30 ] [ 31 ] [ 32 ] [ 33 ]
Home Assistant is supported and can be installed on multiple platforms. Pre-installed hardware appliances are also available for purchase from a few different manufacturers. Compatible hardware platforms include single-board computers (for example Hardkernel ODROID , Raspberry Pi , Intel NUC ), operating systems like Windows , macOS , Linux as well as virtual machines and NAS systems. [ 34 ] Windows support is via a Windows VM or installing the Windows Subsystem for Linux (WSL). [ 35 ] [ 36 ]
On officially recommended hardware platforms like the Hardkernel ODROID N2+ and Raspberry Pi 4/5 single-board computers, the installation requires flashing a corresponding system image onto a microSD card, eMMC , or other local storage from which the system can boot. [ 34 ] It is possible to use Home Assistant as a gateway or bridge for devices using different IoT technologies like Zigbee or Z-Wave ; necessary hardware can be mounted onto GPIO (Serial/I2C/SMBus) , UART , or using USB ports. [ 37 ] [ 38 ] Moreover, it can connect directly or indirectly to local IoT devices, control hubs/gateways/bridges or cloud services from many different vendors, including other open and closed smart home ecosystems. [ 39 ] [ 40 ] [ 41 ] [ 42 ]
In December 2020, a customized ODROID N2+ computer appliance with bundled software was introduced under the product name "Home Assistant Blue" as an officially supported common hardware reference platform. The same package is also referred to as "ODROID-N2+ Home Assistant Bundle" when sold without the official custom-made enclosure. It comes with Home Assistant OS pre-installed on local eMMC storage, a power-adapter, and a custom Home Assistant themed enclosure. Home Assistant founders made it clear that the release of official hardware would not keep them from supporting other hardware platforms like the Raspberry Pi series. [ 43 ] [ 44 ]
In September 2021, Home Assistant developers at Nabu Casa announced a crowdfunding campaign on Crowd Supply for pre-orders of "Home Assistant Yellow" (initially called "Home Assistant Amber"), a new official home automation controller hardware platform with Home Assistant pre-installed, a spiritual successor to "Home Assistant Blue". "Home Assistant Yellow" is designed to be an appliance, and its internals are architected around a carrier board (or "baseboard") for computer-on-modules compatible with the Raspberry Pi Compute Module 4 (CM4) embedded computer, an integrated M.2 expansion slot meant for either an NVMe SSD as expanded storage or an AI accelerator card, and an onboard EFR32 based radio module made by Silicon Labs capable of acting as a Zigbee Coordinator or Thread Leader (Thread Border Router) , with an optional variant offering PoE (Power over Ethernet) support. The most notable features missing from "Home Assistant Yellow" are an HDMI or DisplayPort output to connect a monitor (likely because, like most smart home hubs, it is purpose-built to act as a headless system ), as well as onboard Bluetooth , Wi-Fi , and a USB 3.0 port by default. Shipping of "Home Assistant Yellow" is targeted for June 2022. [ 45 ] [ 46 ]
In June 2022, Home Assistant developers at Nabu Casa announced their officially supported "Home Assistant SkyConnect", a multi-protocol IoT USB radio dongle capable of Zigbee and/or Thread low-power wireless protocols, that enable plug-and-play support for Home Assistant's built-in Zigbee gateway (the "ZHA" integration) and experimental Thread/Matter integrations. [ 47 ] [ 48 ]
In September 2023, Home Assistant developers at Nabu Casa announced their officially supported "Home Assistant Green" as an entry-level computer appliance that is meant to make it easier for new users to get started with Home Assistant from scratch. It does however only feature an Ethernet port (for connection to the user's LAN) and two USB ports. That is, unlike the previous "Home Assistant Yellow" this new computer appliance does not include any built-in IoT radios for Zigbee and Thread low-power wireless protocols, so users wanting to connect such devices will need to buy separate USB radio dongles for each such protocol. [ 49 ] [ 50 ]
The primary front-end dashboard system is called Home Assistant dashboards, [ 51 ] which offers different cards to display information and control devices. Cards can display information provided by a connected device or control a resource (lights, thermostats, and other devices). The interface design language is based on Material Design and can be customized using global themes. The GUI is customizable using the integrated editor or by modifying the underlying YAML code. Cards can be extended with custom resources, which are often created by community members.
Home Assistant acts as a central smart home controller hub by combining different devices and services in a single place and integrating them as entities. The provided rule-based automation system allows creating custom routines based on a trigger event, conditions and actions, including scripts. These enable building automation , management of security alarms and video surveillance for home security systems , as well as monitoring of energy measuring devices . [ 52 ] [ 53 ] [ 54 ] [ 55 ] Since December 2020, it has been possible to use automation blueprints - pre-made automations from the community that can be easily added to an existing system. [ 56 ]
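As a rough illustration of the trigger/condition/action pattern that such rule-based automations follow (Home Assistant's own automations are normally written in YAML, not Python), the hypothetical Python sketch below encodes one routine; the entity names and states are illustrative assumptions only.

```python
# Hypothetical sketch of a trigger/condition/action rule. The entity names
# and the state dictionary are illustrative, not Home Assistant's real schema.
from datetime import datetime

def hallway_light_rule(states: dict) -> None:
    # Trigger: motion has been detected in the hallway.
    if states.get("binary_sensor.hallway_motion") != "on":
        return
    # Condition: only act when the sun is below the horizon.
    if states.get("sun.sun") != "below_horizon":
        return
    # Action: switch the hallway light on (recorded here as a state change).
    states["light.hallway"] = "on"
    print(f"{datetime.now().isoformat()} hallway light turned on")

hallway_light_rule({"binary_sensor.hallway_motion": "on", "sun.sun": "below_horizon"})
```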
Home Assistant is an on-premises software product with a focus on local control, which has been described as beneficial to the security of the platform, specifically when compared to closed-source home automation software based on proprietary hardware and cloud services. [ 2 ] [ 3 ] [ 4 ] [ 5 ]
There is no remote access enabled by default and data is stored solely on the device itself. User accounts can be secured with two-factor authentication to prevent access even if the user password becomes compromised. Add-ons receive a security rating based on their access to system resources.
At the beginning of January 2021, cybersecurity analyst Oriel Goel found a directory traversal security vulnerability in custom integrations from third parties. Home Assistant made a public service announcement on January 22, 2021, disclosing the vulnerability and stating that the issues had been addressed in Home Assistant version 2021.1.5, which was released on January 23. Later in January 2021, it made a second disclosure about another security vulnerability that had also been fixed. There is no information about whether any vulnerability was ever exploited. [ 57 ] [ 58 ]
In March 2023, a full authentication bypass was discovered in Home Assistant, earning a CVSS score of 10/10. [ 59 ] This issue affected Home Assistant's default remote access solution, Nabu Casa, because Nabu Casa's remote access security model exposes the local Home Assistant server to the public internet. The vulnerability allowed bad actors full control of any Home Assistant server they could reach, due to the full authentication bypass. [ 60 ]
Home Assistant took second place in 2017 [ 61 ] and 2018 [ 62 ] for the Thomas Krenn Award (formerly Open Source Grant), later winning first place in 2019. [ 63 ] Home Assistant also won a DINACon award in 2018 for their "Open Internet Award" category, [ 64 ] [ 65 ] as well as being a nominee for the same awards in 2013. [ 66 ]
Home Assistant has been included in a number of product and platform comparisons, where, like many other non-commercial smart home hubs/gateways/bridges/controllers for home automation , it has often been criticized for forcing users into a tedious file-based setup procedure using text-based YAML markup-language instead of graphical user interfaces. [ 67 ] [ 68 ] [ 69 ] [ 52 ] [ 70 ] However, newer versions of Home Assistant produced by the core development team make the configuration (from initial installation as well as most basic configurations) more user-friendly by allowing configuration using the web-based graphical user interface as well as the original YAML scripting. [ 71 ] [ 72 ] [ 73 ] [ 74 ] [ 75 ] [ 76 ]
GitHub's " State of the Octoverse " in 2019 listed Home Assistant as the tenth biggest open-source project on its platform with 6,300 contributors. [ 77 ] At the GitHub's " State of the Octoverse " in 2024, Home Assistant became the top open source project on the platform by contributors, with more than 21,000 users contributing, which only included the Home Assistant core application (and not the frontend and not other libraries that the project develops for Home Assistant). [ 78 ] | https://en.wikipedia.org/wiki/Home_Assistant |
The Home Computing Initiative (HCI) was a UK Government program which allowed employers to provide personal computers , software and computer peripherals to their employees without the benefit being taxed as a salary. The HCI was introduced in 1999 to improve the IT literacy of the British workforce. It was also aimed at bridging Britain's digital divide - the increasing gap between those who have access to, and the skills to use, information technology, and those who do not. The program gained traction four years later, in 2003, after it was re-branded. The Trade Union Congress and the Department of Trade and Industry also made the initiative more user-friendly by publishing standard guidelines that employers could easily adopt.
The HCI program was a lease agreement between the employer and the employee. The agreement usually lasted for three years, costing a maximum of £500 a year. [ 1 ] At the end of the lease period, the employee was given the option to purchase the computer at its market value, which was typically £10 at that time.
The HCI scheme was very popular. More than 1250 firms, employing 4.5 million people, had adopted the scheme. [ 2 ]
On 23 March 2006, in his UK Budget , Chancellor Gordon Brown announced the removal of HCI tax exemption for employer-loaned computers. As of 6 April 2006 the HCI program was discontinued. [ 1 ] [ 2 ] The move was made without any consultation with the employers or employees' bodies, a stark contrast to the extensive consultation that preceded its creation.
The Treasury of the United Kingdom initially claimed that the scheme was dominated by higher-rate taxpayers. However, research by the HCI Alliance found that 75% of employees who purchased personal computers through the HCI were basic or starting rate taxpayers and 50% were " blue collar " workers. [ 2 ] The HCI Alliance, created in 2003, was a group of industry leaders who worked together with the UK Government. Their aim was to increase access to personal computing in the UK.
Another reason for the HCI being cancelled was that computers had become relatively more affordable. Most people in the workplace had access to computers and therefore, the purpose of the scheme had been achieved. [ 3 ] [ 4 ]
In the days following the budget announcement, a significant lobbying campaign ensued, resulting in the treasury announcing that it would consider alternatives to HCI in its current format rather than disbanding it altogether. This led to the creation of the Educational Technology Allowance.
In 2008, the Gordon Brown administration announced the £300 million Educational Technology Allowance incentive. The program granted up to £700 to low-income households with schooling children who had no internet access at home. The policy was aimed towards helping approximately 1.4 million children who did not have access to a broadband connection at home. [ 5 ] The program was piloted in two local authority areas in 2010 and was completely rolled out across England in 2011. The funding for the Educational Technology Allowance came from the Children's and Schools budget.
The money could be used by families to pay for computer equipment, technical support and cabling in the street, if necessary. [ 6 ] The Educational Technology Allowance was not offered in Scotland, Wales and Northern Ireland. | https://en.wikipedia.org/wiki/Home_Computer_Initiative |
A Home Node B , or HNB , is the 3GPP 's term for a 3G femtocell or Small Cell .
A Node B is an element of a 3G macro Radio Access Network, or RAN . A femtocell performs many of the functions of a Node B, but is optimized for deployment in indoor premises and small-coverage public hotspots. The femtocell concept was originally conceived for the residential environment. However, it has evolved to include other usages such as enterprise and public hotspots.
Home eNode B is an LTE counterpart of the HNB.
Within an HNB Access Network there are three new network elements: the Home Node B (or femtocell), the Security Gateway (SeGW) and the Home Node B Gateway, or HNB-GW.
Between the HNB and the HNB-GW is a new interface known as Iu-h .
Home Node B (HNB) – Connected to an existing residential broadband service, an HNB provides 3G radio coverage for 3G handsets within a home. HNBs incorporate the capabilities of a standard Node B as well as the radio resource management functions of a standard Radio Network Controller RNC .
Security Gateway (SeGW) - Installed in an operator’s network, the Security Gateway establishes IPsec tunnels with HNBs using IKEv2 signaling for IPsec tunnel management. IPsec tunnels are responsible for delivering all voice, messaging and packet data services between HNB and the core network. The SeGW forwards traffic to HNB-GW.
HNB Gateway (HNB-GW) - Installed within an operator’s network, the HNB Gateway aggregates traffic from a large number of HNBs back into an existing core service network through the standard Iu-cs and Iu-ps interfaces.
Iu-h Interface - Residing between an HNB and the HNB-GW, the Iu-h interface defines the security architecture used to provide secure, scalable communications over the Internet. The Iu-h interface also defines an efficient, reliable method for transporting Iu-based traffic as well as a new protocol ( HNBAP ) for enabling highly scalable ad hoc HNB deployment.
O&M Interface - Management interface between HNB and Home NodeB Management System (HMS). It uses TR-069 as the management protocol and TR-196 data model. The main purpose is for the configuration of the HNB.
The following 3GPP documents are currently available: | https://en.wikipedia.org/wiki/Home_NodeB |
Home automation for the elderly and disabled focuses on making it possible for older adults and people with disabilities to remain at home, safe and comfortable. Home automation is becoming a viable option for older adults and people with disabilities who would prefer to stay in the comfort of their homes rather than move to a healthcare facility. This field uses much of the same technology and equipment as home automation for security, entertainment, and energy conservation but tailors it towards older adults and people with disabilities.
There are two basic forms of home automation systems for the elderly: embedded health systems and private health networks . Embedded health systems integrate sensors and microprocessors in appliances, furniture, and clothing which collect data that is analyzed and can be used to diagnose diseases and recognize risk patterns. Private health networks implement wireless technology to connect portable devices and store data in a household health database. Due to the need for more healthcare options for the aging population "there is a significant interest from industry and policy makers in developing these technologies". [ 1 ]
Home automation is implemented in the homes of older adults and people with disabilities in order to maintain their independence and safety, while also saving the costs and anxiety of moving to a health care facility. [ 2 ] For those with disabilities, smart homes provide an opportunity for independence, offering emergency assistance systems, security features, fall prevention, automated timers, and alerts, and allowing monitoring by family members via an internet connection.
Telehealth is the use of electronic technology services to provide patient care and improve the healthcare delivery system. The term is often confused with telemedicine , which specifically involves remote clinical services of healthcare delivery. Telehealth is the delivery of both remote clinical and non-clinical healthcare services . Telehealth moves diagnosis, treatment, education, and self-management away from health care providers and into people's homes. [ 3 ]
The goal of telehealth is to complement the traditional healthcare setting. There is an increased demand on the healthcare system from a growing elderly population and shortage of healthcare providers. [ 4 ] Many elderly and disabled patients are faced with limited access to health care and providers. Telehealth may bridge the gap between patient demand and healthcare accessibility. [ 4 ] Telehealth may also decrease healthcare costs and mitigate transportation concerns. [ 5 ] For the elderly and disabled populations, telehealth would allow them to stay in the comfort and convenience of their homes. [ 6 ]
Geriatrics is the branch of healthcare concerned with providing care to the elderly population. The elderly population faces many health complications. According to the National Institutes of Health, "the main threats are non-communicable diseases, including heart, stroke, cancer, diabetes, hypertension, and dementia". Telehealth may help provide management and monitoring of chronic disease in patients' homes. [ 6 ]
One telemonitoring device measures vital signs: blood pressure, pulse, oxygen saturation, and weight. [ 6 ] Another telemonitoring device is video-conferencing, which can provide patient-provider consultation and electronic delivery of medication instructions and general health information. [ 7 ] Some studies have analyzed the effectiveness of telehealth in the elderly population. Some have found positive effects, including reduction of symptoms and improved self-efficacy in elderly people with chronic conditions. [ 8 ] Other studies have found the opposite effect, with telehealth home care producing greater mortality than the traditional hospital setting. [ 9 ] Still other studies have found inconclusive results.
Persons with severe functional disabilities are statistically the highest users of all health care services and represent a large portion of health care costs and designated service. The disabled population requires many self-monitoring and self-management strategies that place a strenuous burden on healthcare providers. Telecommunications technologies may provide a way to meet the healthcare demands of the disabled population. According to the National Institutes of Health, "the largest proportion of health care... result from individuals with severe functional disabilities, such as stroke and traumatic brain injury". [ 10 ] Patients with functional disabilities may benefit from telehealth care. According to the World Health Organization, functional limitation refers to the physical or mental conditions, which impair, interfere with, or impede one or more of the individual's major life activities and instrumental activities of daily living. [ 11 ] Patients with spina bifida, musculoskeletal disorders, mental illness, or neurological disorders may also benefit from telehealth care services. Telehealth technologies include vital sign telemonitoring devices, exercise routines, problem-solving assessments, and therapeutic self-care management tasks. [ 10 ] Telehealth care, however, is not sufficient in complete care but rather a supplement of separate care.
Concerns of telehealth implementation include the limited scope of research that confirm conclusive benefits of telehealth in comparison to the healthcare setting. Currently there is no definitive conclusion that telehealth is the superior mode of healthcare delivery. [ 12 ] There are also ethical issues about patient autonomy and patient acceptance of telehealth as a mode of healthcare. Lack of face-to-face patient-provider care in contrast to direct care in a traditional healthcare setting is an ethical concern for many. [ 13 ]
In 2015 the Texas Medical Board ruled that state physicians had to physically meet patients before remotely treating ailments or prescribing medication. The telemedicine company Teladoc sued [ 14 ] over the rule in Teladoc v. Texas Medical Board , arguing the bill violated antitrust laws [ 15 ] by inflating prices and limiting the supply of health care providers in the state. The bill, meant to go active on June 3, 2015, was then stalled. [ 14 ] Teladoc voluntarily dropped the lawsuit in 2017 after Texas passed a new bill allowing for remote treatment without a prior in-person interaction, which Teladoc Health had lobbied heavily for. [ 15 ] On September 15, 2017, the Texas Medical Board amended its regulations to allow state-licensed healthcare providers to care for patients without required face-to-face interaction, [ 16 ] potentially affecting up to 28 million patients in Texas. [ 16 ]
Home automation for healthcare can range from very simple alerts to lavish computer controlled network interfaces. Some of the monitoring or safety devices that can be installed in a home include lighting and motion sensors, environmental controls, video cameras, automated timers, emergency assistance systems, and alerts.
In order to maintain the security of the home many home automation systems integrate features such as remote keyless entry systems which will allow seniors to view who is at the door and then remotely open the door. Home networks can also be programmed to automatically lock doors and shut blinds in order to maintain privacy.
Emergency assistance for older adults and people with disabilities can be classified into three categories: First, Second, and Third Generation emergency assistance systems or tools. [ 17 ]
These simple systems and tools include personal alarm systems and emergency response telephones that do not have to be integrated into a smart home system. [ 17 ] A typical system consists of a small wireless pendant transceiver to be worn around the neck or wrist. The system has a central unit plugged into a telephone jack, with a loudspeaker and microphone. When the pendant is activated a 24-hour control center is contacted.
Generally, the 24-hour control center speaks to the user and identifies what help is required, e.g. dispatching emergency services. The control center also has information about the user, e.g. medical symptoms, medication allergies, etc. The unit has a built-in rechargeable battery backup and the ability to notify the control center if the battery is running low or if the system loses power. Modern systems have active wireless pendants that are polled frequently to report battery and signal-strength status, whereas an older-style pendant could have a failed battery, rendering it useless when required in an emergency.
These systems and tools generate alarms and alerts automatically if significant changes are observed in the user's vital signs. [ 17 ] These systems are usually fully integrated into a home network and allow health professionals to monitor patients at home. The system consists of an antenna that a patient holds over their implanted cardiac device to transmit data for downloading over the telephone line and viewing by the patient's physician. The collected data can be accessed by the patient or family members. Another example of this type of system is a Smart Shirt that measures heart rate, electrocardiogram results, respiration, temperature and other vital functions and alerts the patient or physician if there is a problem. [ 18 ]
These types of systems would help older adults and people with disabilities deal with loneliness and depression by connecting them with other elderly or disabled individuals through the Internet, reducing their sense of isolation. [ 17 ]
Home automation systems may include automatic reminder systems for the elderly. [ 2 ] Such systems are connected to the Internet and make announcements over an intercom. They can prompt about doctor's appointments and taking medicine, as well as everyday activities such as turning off the stove, closing the blinds, locking doors, etc. Users choose what activities to be reminded of. The system can be set up to automatically perform tasks based on user activity, such as turning on the lights or adjusting room temperature when the user enters specified areas. Other systems can remind users at home or away from home to take their medicine, and how much, by using an alarm wristwatch with text message and medical alert. Reminder systems can also remind about everyday tasks such as eating lunch or walking the dog.
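As a rough sketch of how such a time-based reminder might be scheduled in software, the hypothetical example below checks a fixed daily schedule and issues announcements; the times, messages and the announce() placeholder are illustrative assumptions rather than the design of any particular commercial system.

```python
# Hypothetical sketch of a daily reminder loop for a home reminder system.
# The schedule and announce() are placeholders for a real intercom/speaker.
import time
from datetime import datetime

REMINDERS = {                      # time of day -> announcement
    "08:00": "Time to take your morning medication.",
    "12:30": "Remember to eat lunch.",
    "18:00": "Please check that the stove is turned off.",
}

def announce(message: str) -> None:
    """Stand-in for sending the message to an intercom, speaker or phone."""
    print(f"[{datetime.now():%H:%M}] {message}")

def run_reminders() -> None:
    announced_today = set()
    while True:
        now = datetime.now().strftime("%H:%M")
        if now in REMINDERS and now not in announced_today:
            announce(REMINDERS[now])
            announced_today.add(now)
        if now == "00:00":         # reset the list at midnight
            announced_today.clear()
        time.sleep(30)

# run_reminders()  # commented out: the loop runs indefinitely
```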
Some communities offer free telephone reassurance services [ 19 ] to residents, which includes both safety check calls as well as reminders. These services have been credited with saving the lives of many elderly and senior citizens who choose to remain at home. [ 20 ]
Smart homes can implement medication dispensing devices in order to ensure that necessary medications are taken at appropriate times. Automated pill dispensers dispense only the pills that are to be taken at that time; locked versions are available for Alzheimer's patients. For diabetic patients, a talking glucose monitor allows the patient to check their blood sugar level and take the appropriate injection. [ 2 ] Digital thermometers are able to recognize a fever and alert physicians. Blood pressure and pulse monitors dispense hypertensive medications when needed.
There are also spoon-feeding robots.
Domestic robots , connected to the domotic network, are included to perform or help in household chores such as cooking, cleaning etc. Dedicated robots can administer medications and alert a remote caregiver if the patient is about to miss his or her medicine dose (oral or no-oral medications). [ 21 ]
The recent advances made in tailoring home automation toward the elderly have generated opposition. It has been stated that "Smart home technology will be helpful only if it is tailored to meet the individual needs of each patient". This currently creates a problem because many of the interfaces designed for home automation "are not designed to take functional limitations, associated with age, into consideration". [ 2 ] Another presented problem involves making the system user-friendly for the elderly who often have difficulty operating electronic devices. The cost of the systems has also presented a challenge, as the U.S. government currently provides no assistance to seniors who choose to install these systems (in some countries such as Spain the Dependency Law includes this assistance).
The biggest concern expressed by potential users of smart home technology is "fear of lack of human responders or the possible replacement of human caregivers by technology", [ 2 ] but home automation should be seen as something that augments, but does not replace, human care. | https://en.wikipedia.org/wiki/Home_automation_for_the_elderly_and_disabled |
Home construction or residential construction is the process of constructing a house , apartment building, or similar residential building [ 1 ] generally referred to as a ' home ' when giving consideration to the people who might now or someday reside there. Beginning with simple pre-historic shelters, home construction techniques have evolved to produce the vast multitude of living accommodations available today. Different levels of wealth and power have warranted various sizes, luxuries, and even defenses in a "home". Environmental considerations and cultural influences have created an immensely diverse collection of architectural styles, creating a wide array of possible structures for homes.
The cost of housing and access to it is often controlled by the modern realty trade , which frequently has a certain level of market force speculation. The level of economic activity in the home-construction section is reported as housing starts , though this is contrarily denominated in terms of distinct habitation units, rather than distinct construction efforts. 'Housing' is also the chosen term in the related concepts of housing tenure , affordable housing , and housing unit (aka dwelling). Four of the primary trades involved in home construction are carpenters , masons , electricians and plumbers , but there are many others as well.
Global access to homes is not consistent around the world, with many economies not providing adequate support for the right to housing . Sustainable Development Goal 11 includes a goal to create "Adequate, safe, and affordable housing and basic services and upgrade slums". [ 2 ] Based on current and expected global population growth, UN habitat projects needing 96,000 new dwelling units built each day to meet global demands. [ 3 ] An important part of housing construction to meet this global demand, is upgrading and retrofitting existing buildings to provide adequate housing.
While homes may have originated in pre-history, there are many notable stages through which cultures pass to reach the current level of modernization. Countries and communities throughout the world currently exhibit very diverse concepts of housing, at many different stages of home development.
Two methods for constructing a home can be distinguished: the method in which architects simply assume free choice of materials and parts, and the method in which reclaimed materials are used, and the house is thus during its entire construction a "work in progress" (meaning every single aspect of it is subject to change at any given time, depending on what materials are found).
The second method has been used throughout history, as materials have always been scarce.
In Britain, there is comparatively little demand for innovative homes produced through radically different production methods, materials, and components. Over the years, a combination of trade protectionism and technical-product conservatism all round has also stymied the growth of indigenous producers of housing products such as aluminum cladding and curtain walling, wall tiles, advanced specialist ironmongery, and structural steel. [ 4 ]
Civil Site Plans, Architectural Drawings and Specifications comprise the document set needed to construct a new home. Specifications consist of a precise description of the materials to be used in construction. Specifications are typically organized by each trade required to construct a home.
The modern family home has many more systems and facets of construction than one might initially believe. With sufficient study, an average person can understand everything there is to know about any given phase of home construction. The do it yourself (DIY) boom of the late twentieth century was due, in large part, to this fact. An international proliferation of kitset home and prefabricated home suppliers, often using components of Chinese origin, has further increased supply and made DIY home building more prevalent. [ 5 ]
The process often starts with a planning stage in which plans are prepared by an architect and approved by the client and any regulatory authority. [ 6 ] Then the site is cleared, foundations are laid and trenches for connection to services such as sewerage , water, and electricity are established. If the house is wooden-framed, a framework is constructed to support the boards, siding and roof. If the house is of brick construction, then courses of bricks are laid to construct the walls. Floors, beams and internal walls are constructed as the building develops, with plumbing and wiring for water and electricity being installed as appropriate. Once the main structure is complete, internal fitting with lights and other fitments is done, and the home is decorated and furnished with furniture, cupboards, carpets, curtains and other fittings. [ 7 ] [ 8 ] [ better source needed ]
To avoid running out of money, homeowners may consider building a house in phases. [ 9 ] This phased approach allows homeowners to prioritize essential components of the house, such as the foundation, structure, and basic utilities, while deferring less critical elements to later phases. It provides the flexibility to pause construction temporarily, if necessary, and resume when funds become available.
The cost of building a house varies by country widely. According to data from the National Association of Realtors , the median cost of buying an existing single-family house in the United States is $274,600, whereas the average cost to build is $296,652. [ 10 ] [ 11 ] [ when? ] Several different factors can impact the cost of building a house, including the size of the dwelling, the location, and availability of resources, the slope of the land, the quality of the fixtures and fittings, and the difficulty in finding construction and building materials talent. [ 12 ] Some of the typical expenses involved in a site cost can be connections to services such as water, sewer, electricity, and gas; fences; retaining walls; site clearance (trees, roots, bushes); site survey; soil tests. [ 13 ]
Data from the U.S. Census and the Bureau of Labor Statistics show that the average floor area of a home in the United States has steadily increased over the past one hundred years, with an estimated 18.5 square foot increase in the average floor area per year. In 1920, the average floor area was 1,048 square feet (97.4 m 2 ), which rose to 1,500 square feet (140 m 2 ) by 1970 and today sits at around 2,261 square feet (210.1 m 2 ). [ 14 ] [ 15 ]
Some have criticized the housebuilding industry. Mass housebuilders can be risk averse, preferring cost-efficient building methods rather than adopting new technologies for improved building performance . [ 16 ] Traditional vernacular building methods that suit local conditions and climates can be dispensed with in favour of a generic 'cookie-cutter' housing type. [ 16 ] | https://en.wikipedia.org/wiki/Home_construction |
A home lift is a type of lift specifically designed for private homes. Home lifts do not require a shaft and usually have an open cab, which means that they can generally be more basic and lower in cost than a home elevator, which requires a shaft and usually has an enclosed cab.
Home lifts usually take into consideration the following non-functional requirements :
A home lift may be subject to specific country codes or directives. For example, the European Machinery Directive 2006/42/EC requires compliance with 194 parameters of safety for a lift to be installed inside a private property. [ 1 ]
Home lifts are compact lifts for 2 to 4 persons which typically run on domestic electricity. Unlike hydraulic lifts or traditional " gear and counterweight " operated elevators, a home lift doesn't require additional space for a machine room, overhead clearance, or a pit, making it more suitable for domestic and private use. Often, maintenance costs are also lower than for a more conventional lift.
The driving system for a home lift can be built inside the lift structure itself and features a screw , an electric motor , and a nut mounted behind the control panel of the lift's platform; it is thus referred to as a "screw and nut" system. When the lift is operated, the engine forces the nut to rotate around the screw, pushing the lift up and down. Most home lifts come with an open platform structure to free even more space and grant access from 3 different sides of the platform. This requires all producers to include specific safety mechanisms and, in some countries , to limit the travel speed. [ citation needed ] .
Home lifts have been present on the market for decades, and represent a growing trend. Many home lift producers sell their products through their own networks, but it is not rare to see them providing their lifts to bigger elevating system groups. Several lift manufacturers enter new markets like India with customization and installation partners who have scaled up their technical capabilities. [ 2 ]
Electric home lifts are powered by an electric motor that plugs into a common power socket (e.g. 13 A ), like any other household appliance. They use a steel roped drum-braked gear motor drive system which means it is self-contained within the roof space of the lift car itself. 'Through floor' dual rail lifts create a self-supporting structure and the weight of the entire structure and lift are in compression through the rails into the floor of the home.
Cable-driven home lifts consist of a shaft, a cabin, a control system and counterweights. Some models also require a technical room . Cable-driven lifts are similar to those found in commercial buildings. These elevators take up most space due to the shaft and the equipment room, so installing a cable system in a new building is much easier than trying to retrofit an existing building. Traction elevators need a pulley system for movement. They are less common for new buildings, as hydraulic technology is used in most cases.
Chain-driven home lifts are similar to cable-driven lifts, but they use a chain wrapped around a drum instead of a cable to raise and lower the car. Chains are more durable than cables and do not need to be replaced as often. Chain-driven home lifts also do not require a separate machine room, which saves space.
Machine room-less home lifts operate by sliding up and down a travel path with a counterweight. This type is an excellent choice for existing residential buildings, since neither machine rooms nor pits reaching into the ground are required. However, traction elevators still require additional space above the elevator roof to accommodate the components required to raise and lower the car. Shaftless home lifts consist of a rectangular elevator cabin positioned on a rail. The lift travels on the route from the lower floor to the upper floor and back.
Hydraulic home lifts are driven by a piston that moves in a cylinder. Since the drive system is completely housed in the elevator shaft, no machine room is required and the control system is small enough to fit into a cabinet on a wall near the elevator. For holed hydraulic systems, the cylinder must extend into the ground to a depth corresponding to the elevator's travel, while holeless hydraulic systems do not require a pit.
Pneumatic home lifts use a vacuum system inside a tube to drive their movement. A pit or machine room is not required, so pneumatic home lifts are the easiest to retrofit into an existing home. Pneumatic lifts consist of acrylic or glass tubes (typically about 800 mm in diameter), resembling a larger version of the pneumatic mail tubes found in older buildings. Pneumatic elevators are not hidden in the wall and are normally placed near a staircase . [ 3 ]
Screw-and-nut driven home lifts are designed around a motor that rotates a nut around a screw, moving the lift up and down. The design is known to be reliable, safe and space-efficient, and requires less maintenance than hydraulic or belt-driven elevators. It is most commonly used for travel of up to six floors. [ 4 ]
Home lifts, whether pre-installed or retrofitted, usually come with some design options so that the owner can make the lift fit their house. Colour and size are the most common choices, with finishes such as white, grey and black. However, some lift producers go beyond this and provide options for the art wall (back wall), carpet colours and patterns, giving the customer a variety of options to consider and to match each home's interior design.
A home range is the area in which an animal lives and moves on a periodic basis. It is related to the concept of an animal's territory which is the area that is actively defended. The concept of a home range was introduced by W. H. Burt in 1943. He drew maps showing where the animal had been observed at different times. An associated concept is the utilization distribution which examines where the animal is likely to be at any given time. Data for mapping a home range used to be gathered by careful observation, but in more recent years animals have been fitted with transmission collars or similar GPS devices.
The simplest way of measuring the home range is to construct the smallest possible convex polygon around the data but this tends to overestimate the range. The best known methods for constructing utilization distributions are the so-called bivariate Gaussian or normal distribution kernel density methods . More recently, nonparametric methods such as the Burgman and Fox's alpha-hull and Getz and Wilmers local convex hull have been used. Software is available for using both parametric and nonparametric kernel methods.
The concept of the home range can be traced back to a publication in 1943 by W. H. Burt, who constructed maps delineating the spatial extent or outside boundary of an animal's movement during the course of its everyday activities. [ 1 ] Associated with the concept of a home range is the concept of a utilization distribution , which takes the form of a two dimensional probability density function that represents the probability of finding an animal in a defined area within its home range. [ 2 ] [ 3 ] The home range of an individual animal is typically constructed from a set of location points that have been collected over a period of time, identifying the position in space of an individual at many points in time. Such data are now collected automatically, using collars placed on individuals that transmit at regular intervals through satellites, mobile cellphone networks or global positioning system ( GPS ) technology.
The simplest way to draw the boundaries of a home range from a set of location data is to construct the smallest possible convex polygon around the data. This approach is referred to as the minimum convex polygon (MCP) method which is still widely employed, [ 4 ] [ 5 ] [ 6 ] [ 7 ] but has many drawbacks including often overestimating the size of home ranges. [ 8 ]
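As an illustration of the MCP calculation (a minimal sketch: the coordinates are invented, and the use of SciPy's ConvexHull is just one convenient way to obtain the hull, not part of the method's definition):

```python
# Minimum convex polygon (MCP) home-range estimate: the smallest convex
# polygon enclosing every location fix, reported together with its area.
import numpy as np
from scipy.spatial import ConvexHull

# Hypothetical relocation fixes (x, y) in metres, e.g. from a GPS collar.
fixes = np.array([[0, 0], [120, 35], [80, 190], [15, 160],
                  [200, 90], [60, 60], [140, 150]], dtype=float)

hull = ConvexHull(fixes)
boundary = fixes[hull.vertices]        # hull vertices in counter-clockwise order

# Shoelace formula gives the polygon area, i.e. the MCP home-range estimate.
x, y = boundary[:, 0], boundary[:, 1]
area = 0.5 * abs(np.dot(x, np.roll(y, 1)) - np.dot(y, np.roll(x, 1)))
print(f"MCP home range: {area:.1f} square metres")
```

Because the polygon must enclose every fix, a single long excursion can greatly enlarge the estimate, which is one source of the overestimation mentioned above.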
The best known methods for constructing utilization distributions are the so-called bivariate Gaussian or normal distribution kernel density methods . [ 9 ] [ 10 ] [ 11 ] This group of methods is part of a more general group of parametric kernel methods that employ distributions other than the normal distribution as the kernel elements associated with each point in the set of location data.
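A sketch of the kernel approach follows (the simulated fixes, the grid resolution and the 95% isopleth convention are illustrative assumptions rather than fixed parts of the method):

```python
# Bivariate Gaussian kernel utilization distribution, with the home range
# taken as the area of the smallest region containing 95% of the density.
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(0)
fixes = rng.normal(loc=[500.0, 300.0], scale=[80.0, 60.0], size=(200, 2))

kde = gaussian_kde(fixes.T)            # gaussian_kde expects shape (dims, n)

# Evaluate the utilization distribution on a regular grid around the fixes.
xs = np.linspace(fixes[:, 0].min() - 250, fixes[:, 0].max() + 250, 150)
ys = np.linspace(fixes[:, 1].min() - 250, fixes[:, 1].max() + 250, 150)
X, Y = np.meshgrid(xs, ys)
density = kde(np.vstack([X.ravel(), Y.ravel()])).reshape(X.shape)

# Threshold the density so that the cells above it hold 95% of the mass;
# the area of those cells is the 95% kernel home-range estimate.
cell_area = (xs[1] - xs[0]) * (ys[1] - ys[0])
d = np.sort(density.ravel())[::-1]
cumulative = np.cumsum(d) * cell_area
idx = min(np.searchsorted(cumulative, 0.95), d.size - 1)
home_range = np.count_nonzero(density >= d[idx]) * cell_area
print(f"95% kernel home range: {home_range:.0f} square units")
```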
Recently, the kernel approach to constructing utilization distributions was extended to include a number of nonparametric methods such as the Burgman and Fox's alpha-hull method [ 12 ] and Getz and Wilmers local convex hull (LoCoH) method. [ 13 ] This latter method has now been extended from a purely fixed-point LoCoH method to fixed radius and adaptive point/radius LoCoH methods. [ 14 ]
Although, currently, more software is available to implement parametric than nonparametric methods (because the latter approach is newer), the cited papers by Getz et al. demonstrate that LoCoH methods generally provide more accurate estimates of home range sizes and have better convergence properties as sample size increases than parametric kernel methods.
Home range estimation methods that have been developed since 2005 include:
Computer packages for using parametric and nonparametric kernel methods are available online. [ 21 ] [ 22 ] [ 23 ] [ 24 ] In the appendix of a 2017 JMIR article, the home ranges for over 150 different bird species in Manitoba are reported. [ 25 ] | https://en.wikipedia.org/wiki/Home_range |
A home theater PC ( HTPC ) or media center computer is a convergent device that combines some or all the capabilities of a personal computer with a software application that focuses on video, photo, audio playback, and sometimes video recording functionality. Since the mid-2000s, other types of consumer electronics , including game consoles and dedicated media devices, have crossed over to manage video and music content. The term "media center" also refers to specialized application software designed to run on standard personal computers . [ 1 ]
HTPC and other convergent devices integrate components of a home theater into a unit co-located with a home entertainment system. An HTPC system typically has a remote control and the software interface normally has a 10-foot (3 m) user interface design so that it can be comfortably viewed at typical television viewing distances. An HTPC can be purchased pre-configured with the required hardware and software needed to add video programming or music to the PC. Enthusiasts can also piece together a system out of discrete components as part of a software-based HTPC. [ 1 ] [ 2 ] [ 3 ] [ 4 ] [ 5 ]
Since 2007, digital media players and smart TV software have been incorporated into consumer electronics through software or hardware changes, including video game consoles, Blu-ray players, networked media players , televisions, and set-top boxes . The increased availability of specialized devices, coupled with paid and free digital online content, now offers an alternative to multipurpose (and more costly) personal computers. [ 6 ]
The HTPC as a concept is the product of several technology innovations including high-powered home computers, digital media, and the shift from standard-resolution CRT to high-definition monitors, projectors, and large-screen televisions.
Integrating televisions and personal computers dates back to the late 1980s with tuner cards that could be added to Amiga computers via the Video Toaster . This adaptation would allow a small video window to appear on the screen with broadcast or cable content. Apple Computer also developed the Macintosh TV in late 1993 that included a tuner card built into a Macintosh LC 520 chassis but quickly withdrew from the market with only 10,000 units shipped. [ 7 ] [ 8 ]
In 1996 Gateway 2000 unveiled the Destination computer, which included a tuner card and video card. The unit cost $4,000 and mostly integrated television viewing and computer functions on one color monitor. [ 7 ] The Destination was called a "PC-TV Combo" but by December the term "Home-theater PC" appeared in mainstream media: "The home theater PC will be a combination entertainment and information appliance." [ 9 ]
By 2000, DVD players had become relatively ubiquitous and consumers were seeking ways to improve the picture. The value of using a computer instead of a standalone DVD player drove more usage of the PC as a home media device. In particular, the desire for progressive scanning DVD players ( 480p instead of 480i ) with better video fidelity led some consumers to consider their computers instead of very expensive DVD players. [ 10 ]
As DVD players dropped in price, so did PCs and their related video-processing and storage capabilities. In 2000, DVD decryption software using the DeCSS algorithm allowed DVD owners to consolidate their DVD video libraries on hard drives. [ 11 ] Innovations such as TiVo and ReplayTV allowed viewers to store and timeshift broadcast content using specially designed computers. ReplayTV for instance ran on a VxWorks platform. Incorporating these capabilities into PCs was well within the ability of a computer hobbyist who was willing to build and program these systems. Key benefits of these DIY projects included lower cost and more features. [ 12 ] Advancements in hardware identified another weak link: the absence of media management software to make it easy to display and control the video from a distance. [ 10 ]
By 2002, major software developments also facilitated media management, hardware integration, and content presentation. MythTV provided a free and open source solution using Linux . The concept was to combine a digital tuner with digital video recording, program guides, and computer capabilities with a 10-foot (3 m) user interface. [ 13 ] XBMC was another free and open software project started with re-purposing the Xbox as a home theater PC but has since been ported to Windows and Macintosh operating systems in various forms including Boxee and Plex . [ 14 ] Mainstream commercial software packages included Microsoft's Windows XP Media Center Edition (2002) and Apple's Front Row (2005), bundled with Mac OS X until 10.7. By early 2006, commercial examples of this integration included the Mac mini which had the Apple Remote, 5.1 digital audio, and an updated Front Row interface that would play shared media. Because of these features and the Mini's small form factor, consumers began using the Mini as a Mac-based home theater PC. [ 15 ]
As digital cable and satellite became the norm, HTPC software became more dependent on external decoder boxes, and the subscription costs that came with them. For instance, MythTV is capable of capturing unencrypted HDTV streams, such as those broadcast over the air or on cable using a QAM tuner. However, most U.S. cable and satellite set-top boxes provide only encrypted HD streams for "non-basic" content, which can be decoded only by OpenCable-approved hardware or software. [ 16 ] [ 17 ] In September 2009, OEM restrictions were officially lifted for cableCARD devices, [ 18 ] opening up the possibility of HTPC integration. [ 19 ]
The advent of fully digital HDTV displays helped to complete the value and ease of use of an HTPC system. Digital projectors , plasma and LCD displays often came pre-configured to accept computer video outputs including VGA , DVI and component video . Furthermore, both the computers and the displays could include video scalers to better conform the image to the screen format and resolutions. Likewise, computers also included HDMI ports that carry both audio and video signals to home video displays or AV receivers.
The simplified integration of computer and home theater displays has allowed for fully digital content distribution over the internet. For instance, by 2007 Netflix "watch instantly" subscribers could view streaming content using their HTPCs with a browser [ 20 ] or with plug-ins with applications such as Plex and XBMC. Similar plug-ins are also available for Hulu , YouTube , and broadcasters such as NBC , CBS and PBS . [ 21 ]
The media itself may be stored , received by terrestrial , satellite or cable broadcasting or streamed from the internet. Stored media is kept either on a local hard drive or on network attached storage . Some software is capable of doing other tasks, such as finding news ( RSS ) from the Internet .
Beyond functioning as a standard PC, normally HTPCs have some additional characteristics:
Standard PC units are usually connected to a CRT or LCD display, while HTPCs are designed to be connected to a television . All HTPCs should feature a TV-out option, using either an HDMI , DVI , DisplayPort , component video , VGA (for some LCD televisions), S-Video , or composite video output. [ 22 ]
Integrating a HTPC into a typical living room requires a way of controlling it from a distance. Many TV tuner/capture cards include remote controls for use with the applications included with the card. Software such as Boxee , GB-PVR , SageTV , MediaPortal and Beyond TV support the use of Windows MCE and other remote controls. Another option is an in-air mouse pointer such as the Wii Remote , GlideTV Navigator, or Loop Pointer, which gives cursor control from a distance. It is also possible to use common wireless keyboards and other peripherals to achieve the same effect (though the range may not be as long as a typical remote control's). [ 22 ]
Some HTPCs, such as the Plex/Mac Mini combination, support programmable remote controls designed for a wide range of typical home theater devices. [ 24 ] More recent innovations include remote-control applications for Android and Apple iOS smartphones and tablets. [ 23 ]
Because of the nature of the HTPC, units require higher-than-average capacities for storage of pictures, music, television shows, videos, and other multimedia . [ 22 ] Although designed almost as 'permanent storage' devices, these units can quickly run out of space. Because of restrictions on internal space for hard disk drives and a desire for low noise levels, many HTPC units use a NAS (Network Attached Storage) device, or another type of network-connected file server . [ 3 ]
A TV tuner card is a computer component that allows television signals to be received by a computer. Most TV tuners also function as video capture cards, allowing them to record television programs onto a hard disk. Several manufacturers build combined TV tuner plus capture cards for PCs. Many such cards offer hardware MPEG encoding to reduce the computing requirements. Some cards are designed for analog TV signals such as standard definition cable or off the air television, while others are designed for high-definition digital TV. [ 22 ]
A network TV tuner or TV gateway is a TV server that converts TV signals from satellite, cable or antenna to IP. With multiple TV tuners, the TV gateway can stream multiple TV channels to devices across the network. Several TV gateway manufacturers build the device to stream the entire DVB stream, relying on the host player device to process the feed and to capture or record, while other devices such as the VBox Home TV Gateway provide a variety of options, from full PVR and live TV features to streaming of specific DVB layers to support less powerful devices and to save network bandwidth.
A common user complaint with using standard PCs as HTPC units is background noise, especially in quieter film scenes. Most personal computers are designed for maximum performance, while the functions of a HTPC system may not be processor-intensive. Thus, passive cooling systems, low-noise fans, vibration-absorbing elastic mounts for fans and hard drives, and other noise-minimizing devices are used in place of conventional cooling systems. [ 22 ]
HTPC options exist for each of the major operating systems: Microsoft Windows , Mac OS X and Linux . The software is sometimes called "Media Center Software".
A number of media center software solutions exist for Linux-, Unix-, and BSD-based operating systems; for example MythTV is a fully fledged integrated suite of software which incorporates TV recording, video library, video game library, image/picture gallery, information portal and music collection playback among other capabilities. Kodi is also available (as it is for many platforms), and can be used to present all the available media including TV programmes recorded by MythTV. Freevo , VDR , SageTV and Boxee are other solutions.
Linux, partially due to its open-source nature, is available in customised versions with the media center software pre-installed and superfluous software removed. Examples include MythBuntu (based on Xubuntu ), Ubuntu TV and Kodibuntu (formerly XBMCbuntu), all based on Ubuntu .
LinuxMCE is a complete home automation solution including lighting/curtains, security, and MythTV capability.
For Mac OS X versions before 10.7 (Lion), HTPC functionality is built into the operating system itself. Specifically, the programs Front Row and Cover Flow , used in conjunction with the Apple Remote , allow users easily to browse and view any multimedia content stored on their Macs. With the July 2011 release of Mac OS X Lion, Front Row has been discontinued. [ 15 ]
Several third-party applications provide HTPC support, including Plex , [ 25 ] and XBMC . [ 26 ]
Beyond the operating system itself, add-on hardware-plus-software combinations (for adding more full-featured HTPC abilities to any Mac) include Elgato 's EyeTV series PVRs , [ 26 ] AMD 's " ATI Wonder" external USB 2.0 TV-tuners, and various individual devices from third-party manufacturers.
For Microsoft Windows , a common approach was to install a version that contains Windows Media Center ( Home Premium, Professional, Enterprise or Ultimate for Windows 7 , or Home Premium or Ultimate for Windows Vista ). Windows Media Center included additional software that covered the PVR functions of the proposed HTPC, including free program guide information and automatic program recording. Windows 7, Windows Vista Home Premium and Windows Vista Ultimate included an MPEG2 decoder. [ 2 ] [ 3 ] [ 4 ] With the introduction of Windows 8, Media Center was no longer included with the operating system; instead it was necessary to buy Windows 8 Pro and then purchase the Media Center Pack via the Windows Control Panel. Windows Media Center is not available at all for Windows 10, though it may be restored in a number of unofficial ways.
Alternative HTPC software may be built with the addition of a third party software PVR to a Windows PC. SageTV, GB-PVR, and DVBViewer have integrated placeshifting comparable to the Slingbox , allowing client PCs and the Hauppauge MediaMVP to be connected to the server over the network. Snapstream provides heuristic commercial detection and program recompression. When using a faster CPU, SageTV and Beyond TV can record content from TV capture cards which do not include hardware MPEG2 compression. For a free alternative, GB-PVR and MediaPortal provide full home theater support and good multi-card DVR capabilities. [ original research? ] GB-PVR also has a free client, free mediaMVP client, and free network media playback. MediaPortal provides a full client/server set-up with live TV/DVR (recorded or timeshifted) streaming. MediaPortal is open-source and offers a variety of skins and plugins for music videos, Netflix, Pandora and others. [ 2 ] [ 3 ] [ 4 ] [ 27 ]
Although digital media players are often built using similar components to personal computers, they are often smaller, quieter and less costly than the full-featured computers adapted to multi-media entertainment. [ 6 ]
In recent years, convergence devices for home entertainment including gaming systems, DVRs, Blu-Ray players and dedicated devices such as the Roku have also started managing local video, music and streaming internet content. Likewise, some managed video services such as Verizon's FiOS allow users to incorporate their photographs, video, and music from their personal computers to their FiOS set-top-box including DVRs. [ 30 ] Gaming systems such as the Nintendo Wii , [ 31 ] Sony PlayStation 3 [ 32 ] and the Microsoft Xbox 360 [ 33 ] [ 34 ] support media management beyond their original gaming orientation.
As computing power increases and costs fall, traditional media devices such as televisions have been given network capabilities. So-called Smart TVs from Sony , Samsung , and LG (to name a few) have models that allow owners to access free or subscription media content available on the Internet. [ 35 ] The rapid growth in the availability of online content, including music, video and games, has also made it easier for consumers to use these networked devices. YouTube , for instance, is a common plug-in available on most networked devices. Netflix has also struck deals with many consumer electronics makers to have their interface available for their streaming subscribers. This symbiotic relationship between Netflix and consumer electronics makers has helped propel Netflix to become the largest subscription video service in the U.S., [ 36 ] using up to 20% of U.S. bandwidth at peak times. [ 37 ]
Other digital media retailers such as Apple, Amazon.com and Blockbuster have purchase and rental options for video and music on demand. Apple in particular has developed a tightly integrated device and content management ecosystem with their iTunes Store , personal computers, iOS devices , and the Apple TV digital media receiver . [ 38 ] The most recent version of the Apple TV, at $99, has lost the hard drive included in its predecessor and fully depends either on streaming internet content, or another computer on the home network for media. [ 39 ]
The convergence of content, technology, and broadband access allows consumers to stream television shows and movies to their high-definition televisions in competition with traditional service providers ( cable TV and satellite television ). The research company SNL Kagan expects 12 million households, roughly 10%, to go without cable, satellite or telco video service by 2015, using over-the-top services. [ 40 ] This represents a new trend in the broadcast television industry, as the list of options for watching movies and TV over the Internet grows every day. Research also shows that even as traditional television service providers are trimming their customer base, they are adding broadband Internet customers. Nearly 76.6 million U.S. households get broadband from leading cable and telephone companies, [ 41 ] although only a portion have sufficient speeds to support quality video streaming. [ 42 ] [ dead link ] Convergent devices for home entertainment will likely play a much larger role in the future of broadcast television, effectively shifting traditional revenue streams while providing consumers with more options. [ 42 ]
The Homeland Security Information Network ( HSIN ) is a web-based platform, run by the Department of Homeland Security , which is designed to allow local, state, tribal, and federal government agencies to share "Sensitive But Unclassified (SBU)" information with each other over a secure channel. [ 1 ] [ 2 ] [ 3 ]
The HSIN provides three main functional categories. First, it provides a SharePoint web portal system which allows agencies and events to have a basic workspace for collaboration. Second, it provides a Jabber chat system, with user managed rooms. Third, it provides the Common Operational Picture, a custom executive situational awareness web application based on Oracle HTML DB. [ 4 ]
The Department of Homeland Security has publicly announced that the network has so far been hacked at least twice in 2009—once in March and once in April. [ 5 ]
This computing article is a stub . You can help Wikipedia by expanding it . | https://en.wikipedia.org/wiki/Homeland_Security_Information_Network |
In fluid mechanics , a homentropic flow has uniform and constant entropy . It distinguishes itself from an isentropic or particle isentropic flow , where the entropy level of each fluid particle does not change with time, but may vary from particle to particle. This means that a homentropic flow is necessarily isentropic, but an isentropic flow need not be homentropic.
A homentropic and perfect gas is an example of a barotropic fluid where the pressure and density are related by P ( ρ ) = K ρ γ , {\displaystyle P(\rho )=K\rho ^{\gamma },} where K {\textstyle K} is a constant. [ 1 ]
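For example, because such a flow is barotropic, the local speed of sound follows directly from the relation above (a standard consequence stated here only as an illustration): c 2 = d P / d ρ = γ K ρ γ − 1 = γ P / ρ {\displaystyle c^{2}=dP/d\rho =\gamma K\rho ^{\gamma -1}=\gamma P/\rho } , so the sound speed in a homentropic perfect gas depends only on the local density (or, equivalently, the local pressure).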
This fluid dynamics –related article is a stub . You can help Wikipedia by expanding it . | https://en.wikipedia.org/wiki/Homentropic_flow |
Homeobox protein NANOG (hNanog) is a transcription factor that helps embryonic stem cells (ESCs) maintain pluripotency by suppressing cell determination factors . [ 5 ] hNanog is encoded in humans by the NANOG gene. Several types of cancer are associated with NANOG . [ 6 ]
The name NANOG derives from Tír na nÓg (Irish for "Land of the Young"), a name given to the Celtic Otherworld in Irish and Scottish mythology. [ 7 ] [ 8 ]
The human hNanog protein, encoded by the NANOG gene, consists of 305 amino acids and possesses three functional domains: the N-terminal domain, the C-terminal domain, and the conserved homeodomain motif. The homeodomain region facilitates DNA binding. The NANOG gene is located on chromosome 12, and the mRNA contains a 915 bp open reading frame (ORF) with 4 exons and 3 introns. [ 8 ]
The N-terminal region of hNanog is rich in serine , threonine and proline residues, and the C-terminus contains a tryptophan -rich domain. The homeodomain in hNANOG ranges from residues 95 to 155. There are also additional NANOG genes (NANOG2, NANOG p8) which potentially affect ESCs' differentiation. Scientists have shown that NANOG is fundamental for self-renewal and pluripotency, and NANOG p8 is highly expressed in cancer cells. [ 9 ]
NANOG is a transcription factor in embryonic stem cells (ESCs) and is thought to be a key factor in maintaining pluripotency . NANOG is thought to function in concert with other factors such as POU5F1 (Oct-4) and SOX2 to establish ESC identity. These cells offer an important area of study because of their ability to maintain pluripotency. In other words, these cells have the ability to become virtually any cell of any of the three germ layers ( endoderm , ectoderm , mesoderm ). It is for this reason that understanding the mechanisms that maintain a cell's pluripotency is critical for researchers to understand how stem cells work, and may lead to future advances in treating degenerative diseases.
NANOG has been described to be expressed in the posterior side of the epiblast at the onset of gastrulation. [ 10 ] There, NANOG has been implicated in inhibiting embryonic hematopoiesis by repressing the expression of the transcription factor Tal1 . [ 11 ] In this embryonic stage, NANOG represses Pou3f1 , a transcription factor crucial for the anterior-posterior axis formation. [ 10 ]
Analysis of arrested embryos demonstrated that embryos express pluripotency marker genes such as POU5F1 , NANOG and Rex1 . Derived human ESC lines also expressed specific pluripotency markers:
These markers allowed for the differentiation in vitro and in vivo conditions into derivatives of all three germ layers. [ 12 ]
POU5F1 , TDGF1 (CRIPTO), SALL4 , LECT1, and BUB1 are also related genes all responsible for self-renewal and pluripotent differentiation. [ 13 ]
The NANOG protein has been found to be a transcriptional activator for the Rex1 promoter, playing a key role in sustaining Rex1 expression. Knockdown of NANOG in embryonic stem cells results in a reduction of Rex1 expression, while forced expression of NANOG stimulates Rex1 expression. [ 14 ]
Besides the effects of NANOG in the embryonic stages of life, ectopic expression of NANOG in the adult stem cells can restore the proliferation and differentiation potential that is lost due to organismal aging or cellular senescence. [ 15 ] [ 16 ] [ 17 ] [ 18 ] [ 19 ]
NANOG is highly expressed in cancer stem cells and may thus function as an oncogene to promote carcinogenesis. High expression of NANOG correlates with poor survival in cancer patients. [ 20 ] [ 21 ] [ 22 ]
Recent research has shown that the localization of NANOG and other transcription factors has potential consequences for cellular function. Experimental evidence has shown that the level of NANOG p8 expression is elevated especially in cancer cells, which means that the NANOG p8 gene is a critical member in cancer stem cells (CSCs), so knocking it down could reduce cancer malignancy. [ 9 ]
The NANOG p8 gene has been evaluated as a prognostic and predictive cancer biomarker. [ 23 ]
Nanog is a transcription factor that controls both self-renewal and pluripotency of embryonic stem cells . Similarly, the expression of Nanog family proteins is increased in many types of cancer and correlates with a worse prognosis. [ 9 ]
Humans and chimpanzees share ten NANOG pseudogenes (NanogP2–P11), two of which are located on the X chromosome; these pseudogenes are characterized by 5' promoter sequences and the absence of introns, a result of mRNA retrotransposition. [ 8 ] All occur in the same places in both genomes: one duplication pseudogene and nine retropseudogenes. Of the nine shared NANOG retropseudogenes, two lack the poly-(A) tails characteristic of most retropseudogenes, indicating that copying errors occurred during their creation. Because it is highly improbable that the same pseudogenes (copying errors included) would exist in the same places in two unrelated genomes , evolutionary biologists point to NANOG and its pseudogenes as providing evidence of common descent between humans and chimpanzees. [ 24 ]
In graph theory , two graphs G {\displaystyle G} and G ′ {\displaystyle G'} are homeomorphic if there is a graph isomorphism from some subdivision of G {\displaystyle G} to some subdivision of G ′ {\displaystyle G'} . If the edges of a graph are thought of as lines drawn from one vertex to another (as they are usually depicted in diagrams), then two graphs are homeomorphic to each other in the graph-theoretic sense precisely if their diagrams are homeomorphic in the topological sense. [ 1 ]
In general, a subdivision of a graph G (sometimes known as an expansion [ 2 ] ) is a graph resulting from the subdivision of edges in G . The subdivision of some edge e with endpoints { u , v } yields a graph containing one new vertex w , and with an edge set replacing e by two new edges, { u , w } and { w , v }. For directed edges, this operation shall preserve their propagating direction.
For example, the edge e , with endpoints { u , v }, can be subdivided into two edges, e 1 and e 2 , connected through a new vertex w of degree 2 (or of indegree 1 and outdegree 1 if the edge is directed).
Determining whether for graphs G and H , H is homeomorphic to a subgraph of G , is an NP-complete problem. [ 3 ]
The reverse operation, smoothing out or smoothing a vertex w with regards to the pair of edges ( e 1 , e 2 ) incident on w , removes both edges containing w and replaces ( e 1 , e 2 ) with a new edge that connects the other endpoints of the pair. Here, it is emphasized that only degree-2 (i.e., 2-valent) vertices can be smoothed. The limit of this operation is realized by the graph that has no more degree-2 vertices.
For example, the simple connected graph with two edges, e 1 = { u , w } and e 2 = { w , v }, has a vertex (namely w ) that can be smoothed away, resulting in a single edge { u , v }.
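Both operations can be sketched in a few lines of code on an undirected graph stored as an adjacency map (the representation and the function names here are illustrative assumptions, not a standard library interface):

```python
# Edge subdivision and vertex smoothing on a simple undirected graph
# represented as a dict mapping each vertex to the set of its neighbours.

def subdivide_edge(adj, u, v, w):
    """Replace the edge {u, v} by {u, w} and {w, v}, where w is a new vertex."""
    adj[u].discard(v)
    adj[v].discard(u)
    adj[w] = {u, v}
    adj[u].add(w)
    adj[v].add(w)

def smooth_vertex(adj, w):
    """Remove a degree-2 vertex w and join its two neighbours by a single edge."""
    u, v = adj.pop(w)                  # w must have exactly two neighbours
    adj[u].discard(w)
    adj[v].discard(w)
    adj[u].add(v)
    adj[v].add(u)

# Subdividing {u, v} and then smoothing the new vertex recovers the original graph.
g = {"u": {"v"}, "v": {"u"}}
subdivide_edge(g, "u", "v", "w")       # g is now the path u - w - v
smooth_vertex(g, "w")                  # back to the single edge u - v
print(g)                               # {'u': {'v'}, 'v': {'u'}}
```

The smoothing function above assumes the two neighbours of w are distinct and not already adjacent; handling those cases would require a multigraph representation.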
The barycentric subdivision subdivides each edge of the graph. This is a special subdivision, as it always results in a bipartite graph . This procedure can be repeated, so that the n th barycentric subdivision is the barycentric subdivision of the n −1st barycentric subdivision of the graph. The second such subdivision is always a simple graph .
It is evident that subdividing a graph preserves planarity . Kuratowski's theorem states that
In fact, a graph homeomorphic to K 5 or K 3,3 is called a Kuratowski subgraph .
A generalization, following from the Robertson–Seymour theorem , asserts that for each integer g , there is a finite obstruction set of graphs L ( g ) = { G i ( g ) } {\displaystyle L(g)=\left\{G_{i}^{(g)}\right\}} such that a graph H is embeddable on a surface of genus g if and only if H contains no homeomorphic copy of any of the G i ( g ) {\displaystyle G_{i}^{(g)\!}} . For example, L ( 0 ) = { K 5 , K 3 , 3 } {\displaystyle L(0)=\left\{K_{5},K_{3,3}\right\}} consists of the Kuratowski subgraphs.
In the following example, graph G and graph H are homeomorphic.
If G′ is the graph created by subdivision of the outer edges of G and H′ is the graph created by subdivision of the inner edge of H , then G′ and H′ have a similar graph drawing:
Therefore, there exists an isomorphism between G' and H' , meaning G and H are homeomorphic.
The following mixed graphs are homeomorphic. The directed edges are shown to have an intermediate arrow head. | https://en.wikipedia.org/wiki/Homeomorphism_(graph_theory) |
In mathematics , particularly topology , the homeomorphism group of a topological space is the group consisting of all homeomorphisms from the space to itself with function composition as the group operation . They are important to the theory of topological spaces, generally exemplary of automorphism groups and topologically invariant in the group isomorphism sense.
There is a natural group action of the homeomorphism group of a space on that space. Let X {\displaystyle X} be a topological space and denote the homeomorphism group of X {\displaystyle X} by G {\displaystyle G} . The action is defined as follows:
G × X ⟶ X ( φ , x ) ⟼ φ ( x ) {\displaystyle {\begin{aligned}G\times X&\longrightarrow X\\(\varphi ,x)&\longmapsto \varphi (x)\end{aligned}}}
This is a group action since for all φ , ψ ∈ G {\displaystyle \varphi ,\psi \in G} ,
φ ⋅ ( ψ ⋅ x ) = φ ( ψ ( x ) ) = ( φ ∘ ψ ) ( x ) {\displaystyle \varphi \cdot (\psi \cdot x)=\varphi (\psi (x))=(\varphi \circ \psi )(x)} ,
where ⋅ {\displaystyle \cdot } denotes the group action, and the identity element of G {\displaystyle G} (which is the identity function on X {\displaystyle X} ) sends points to themselves. If this action is transitive , then the space is said to be homogeneous .
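As a simple illustration (an elementary example, not drawn from the references cited here): for the two-point discrete space X = { a , b } , both bijections of X are homeomorphisms, so Homeo ( X ) {\displaystyle \operatorname {Homeo} (X)} is the two-element group generated by the swap of a and b ; since the swap carries each point to the other, the action is transitive and X is homogeneous. By contrast, the closed interval [0, 1] is not homogeneous: any homeomorphism must send endpoints to endpoints (removing an interior point disconnects the interval while removing an endpoint does not), so no element of the homeomorphism group carries 0 to an interior point.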
As with other sets of maps between topological spaces, the homeomorphism group can be given a topology, such as the compact-open topology .
In the case of regular , locally compact spaces the group multiplication is then continuous.
If the space is compact and Hausdorff , the inversion is continuous as well and Homeo ( X ) {\displaystyle \operatorname {Homeo} (X)} becomes a topological group . If X {\displaystyle X} is Hausdorff, locally compact, and locally connected this holds as well. [ 1 ] Some locally compact separable metric spaces exhibit an inversion map that is not continuous, resulting in Homeo ( X ) {\displaystyle {\text{Homeo}}(X)} not forming a topological group. [ 1 ]
In geometric topology especially, one considers the quotient group obtained by quotienting out by isotopy , called the mapping class group : M C G ( X ) = H o m e o ( X ) / H o m e o 0 ( X ) {\displaystyle {\rm {MCG}}(X)={\rm {Homeo}}(X)/{\rm {Homeo}}_{0}(X)} , where H o m e o 0 ( X ) {\displaystyle {\rm {Homeo}}_{0}(X)} denotes the subgroup of homeomorphisms isotopic to the identity.
The MCG can also be interpreted as the 0th homotopy group , M C G ( X ) = π 0 ( H o m e o ( X ) ) {\displaystyle {\rm {MCG}}(X)=\pi _{0}({\rm {Homeo}}(X))} .
This yields the short exact sequence : 1 → H o m e o 0 ( X ) → H o m e o ( X ) → M C G ( X ) → 1 {\displaystyle 1\to {\rm {Homeo}}_{0}(X)\to {\rm {Homeo}}(X)\to {\rm {MCG}}(X)\to 1} .
In some applications, particularly surfaces, the homeomorphism group is studied via this short exact sequence, and by first studying the mapping class group and group of isotopically trivial homeomorphisms, and then (at times) the extension. | https://en.wikipedia.org/wiki/Homeomorphism_group |
Homeorhesis , derived from the Greek for "similar flow", is a concept encompassing dynamical systems which return to a trajectory , as opposed to systems which return to a particular state, which is termed homeostasis .
Homeorhesis is steady flow. Biological systems are often inaccurately described as homeostatic, that is, as being in a steady state. A steady state implies an equilibrium that is never actually reached, since organisms and ecosystems are not in closed environments. During his tenure at the State University of New York at Oneonta, Dr William Butts [ 1 ] correctly applied the term homeorhesis to biological organisms. The term was created by C.H. Waddington and first used in biology in his book Strategy of the Genes (1957), where he described the tendency of developing or changing organisms to continue developing or adapting to their environment and changing towards a given state.
In ecology the concept is important as an element of the Gaia hypothesis , where the system under consideration is the ecological balance of different forms of life on the planet. James Lovelock and Lynn Margulis , coauthors of Gaia hypothesis, wrote in particular that only homeorhetic, and not homeostatic, balances are involved in the theory. [ 2 ] That is, the composition of Earth's atmosphere, hydrosphere, and lithosphere are regulated around "set points" as in homeostasis, but those set points change with time.
This ecology -related article is a stub . You can help Wikipedia by expanding it .
This systems -related article is a stub . You can help Wikipedia by expanding it . | https://en.wikipedia.org/wiki/Homeorhesis |
In evolutionary developmental biology , homeosis is the transformation of one organ into another, arising from mutation in or misexpression of certain developmentally critical genes , specifically homeotic genes . In animals, these developmental genes specifically control the development of organs on their anteroposterior axis. [ 1 ] In plants, however, the developmental genes affected by homeosis may control anything from the development of a stamen or petals to the development of chlorophyll. [ 2 ] Homeosis may be caused by mutations in Hox genes , found in animals, or others such as the MADS-box family in plants. Homeosis is a characteristic that has helped insects become as successful and diverse as they are. [ 3 ]
Homeotic mutations work by changing segment identity during development. For example, the Ultrabithorax genotype gives a phenotype wherein metathoracic and first abdominal segments become mesothoracic segments. [ 4 ] Another well-known example is Antennapedia : a gain-of-function allele causes legs to develop in the place of antennae . [ 5 ]
In botany , Rolf Sattler has revised the concept of homeosis (replacement) by his emphasis on partial homeosis in addition to complete homeosis; [ 6 ] this revision is now widely accepted.
Homeotic mutants in angiosperms are thought to be rare in the wild: in the annual plant Clarkia ( Onagraceae ), homeotic mutants are known where the petals are replaced by a second whorl of sepal-like organs, originating in a mutation of a single gene. [ 7 ] The absence of lethal or deleterious consequences in floral mutants resulting in distinct morphological expressions has been a factor in the evolution of Clarkia , and perhaps also in many other plant groups. [ 8 ]
Following the work on homeotic mutants by Ed Lewis , [ 9 ] the phenomenology of homeosis in animals was further elaborated by discovery of a conserved DNA binding sequence present in many homeotic proteins. [ 10 ] Thus, the 60 amino acid DNA binding protein domain was named the homeodomain , while the 180 bp nucleotide sequence encoding it was named the homeobox . The homeobox gene clusters studied by Ed Lewis were named the Hox genes , although many more homeobox genes are encoded by animal genomes than those in the Hox gene clusters.
The homeotic-function of certain proteins was first postulated to be that of a "selector" as proposed by Antonio Garcia-Bellido . [ 11 ] By definition selectors were imagined to be ( transcription factor ) proteins that stably determined one of two possible cell fates for a cell and its cellular descendants in a tissue .
While most animal homeotic functions are associated with homeobox-containing factors, not all homeotic proteins in animals are encoded by homeobox genes, and further not all homeobox genes are necessarily associated with homeotic functions or ( mutant ) phenotypes.
The concept of homeotic selectors was further elaborated or at least qualified by Michael Akam in a so-called "post-selector gene" model that incorporated additional findings and "walked back" the "orthodoxy" of selector-dependent stable binary switches. [ 12 ]
The concept of tissue compartments is deeply intertwined with the selector model of homeosis because the selector-mediated maintenance of cell fate can be restricted into different organizational units of an animal's body plan . [ 13 ] In this context, newer insights into homeotic mechanisms were found by Albert Erives and colleagues by focusing on enhancer DNAs that are co-targeted by homeotic selectors and different combinations of developmental signals. [ 14 ] This work identifies a protein biochemical difference between the transcription factors that function as homeotic selectors versus the transcription factors that function as effectors of developmental signaling pathways, such as the Notch signaling pathway and the BMP signaling pathway . [ 14 ] This work proposes that homeotic selectors function to "license" enhancer DNAs in a restricted tissue compartment so that the enhancers are enabled to read-out developmental signals, which are then integrated via polyglutamine -mediated aggregation. [ 14 ]
Like the complex multicellularity seen in animals , the multicellularity of land plants is developmentally organized into tissue and organ units via transcription factor genes with homeotic effects. [ 15 ] Although plants have homeobox-containing genes, plant homeotic factors tend to possess MADS-box DNA binding domains.
Animal genomes also possess a small number of MADS-box factors.
Thus, in the independent evolution of multicellularity in plants and animals, different eukaryotic transcription factor families were co-opted to serve homeotic functions.
MADS-domain factors have been proposed to function as co-factors to more specialized factors and thereby help to determine organ identity. [ 15 ] This has been proposed to correspond more closely to the interpretation of animal homeotics outlined by Michael Akam . [ 16 ] | https://en.wikipedia.org/wiki/Homeosis |
In biology , homeostasis ( British also homoeostasis ; / h ɒ m i oʊ ˈ s t eɪ s ɪ s , - m i ə -/ hoh-mee-oh- STAY -sis ) is the state of steady internal physical and chemical conditions maintained by living systems . [ 1 ] This is the condition of optimal functioning for the organism and includes many variables, such as body temperature and fluid balance , being kept within certain pre-set limits (homeostatic range). Other variables include the pH of extracellular fluid , the concentrations of sodium , potassium , and calcium ions , as well as the blood sugar level , and these need to be regulated despite changes in the environment, diet, or level of activity. Each of these variables is controlled by one or more regulators or homeostatic mechanisms, which together maintain life.
Homeostasis is brought about by a natural resistance to change when already in optimal conditions, [ 2 ] and equilibrium is maintained by many regulatory mechanisms; it is thought to be the central motivation for all organic action. All homeostatic control mechanisms have at least three interdependent components for the variable being regulated: a receptor, a control center, and an effector. [ 3 ] The receptor is the sensing component that monitors and responds to changes in the environment, either external or internal. Receptors include thermoreceptors and mechanoreceptors . Control centers include the respiratory center and the renin-angiotensin system . An effector is the target acted on, to bring about the change back to the normal state. At the cellular level, effectors include nuclear receptors that bring about changes in gene expression through up-regulation or down-regulation and act in negative feedback mechanisms. An example of this is in the control of bile acids in the liver . [ 4 ]
Some centers, such as the renin–angiotensin system , control more than one variable. When the receptor senses a stimulus, it reacts by sending action potentials to a control center. The control center sets the maintenance range—the acceptable upper and lower limits—for the particular variable, such as temperature. The control center responds to the signal by determining an appropriate response and sending signals to an effector , which can be one or more muscles, an organ, or a gland . When the signal is received and acted on, negative feedback is provided to the receptor that stops the need for further signaling. [ 5 ]
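The receptor–control center–effector loop described above can be illustrated with a toy negative-feedback simulation (the numbers and the proportional-response rule are invented for illustration and are not a physiological model):

```python
# Toy negative-feedback loop: a regulated variable starts away from its set
# point and an effector response proportional to the error pulls it back.

SET_POINT = 37.0          # target core temperature in degrees Celsius
GAIN = 0.3                # strength of the effector response per unit of error

def receptor(true_value):
    """Sense the regulated variable (here: perfectly, with no noise)."""
    return true_value

def control_center(sensed_value):
    """Compare the sensed value with the set point and report the error."""
    return SET_POINT - sensed_value

def effector(error):
    """Return a corrective change that opposes the error (negative feedback)."""
    return GAIN * error

temperature = 35.0        # start below the set point, e.g. after cold exposure
for step in range(10):
    error = control_center(receptor(temperature))
    temperature += effector(error)
    print(f"step {step}: temperature = {temperature:.2f} C")
# The value converges towards 37.0; as the error shrinks, so does the
# effector output, which is the role of negative feedback described above.
```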
The cannabinoid receptor type 1 , located at the presynaptic neuron , is a receptor that can stop stressful neurotransmitter release to the postsynaptic neuron; it is activated by endocannabinoids such as anandamide ( N -arachidonoylethanolamide) and 2-arachidonoylglycerol via a retrograde signaling process in which these compounds are synthesized by and released from postsynaptic neurons, and travel back to the presynaptic terminal to bind to the CB1 receptor for modulation of neurotransmitter release to obtain homeostasis. [ 6 ]
The polyunsaturated fatty acids are lipid derivatives of omega-3 ( docosahexaenoic acid , and eicosapentaenoic acid ) or of omega-6 ( arachidonic acid ). They are synthesized from membrane phospholipids and used as precursors for endocannabinoids to mediate significant effects in the fine-tuning adjustment of body homeostasis. [ 7 ]
The word homeostasis ( / ˌ h oʊ m i oʊ ˈ s t eɪ s ɪ s / [ 8 ] [ 9 ] hoh-mee-oh- STAY -sis [ 10 ] ) uses combining forms of homeo- and -stasis , Neo-Latin from Greek : ὅμοιος homoios , "similar" and στάσις stasis , "standing still", yielding the idea of "staying the same".
The concept of the regulation of the internal environment was described by French physiologist Claude Bernard in 1849, and the word homeostasis was coined by Walter Bradford Cannon in 1926. [ 11 ] [ 12 ] In 1932, Joseph Barcroft , a British physiologist, was the first to say that higher brain function required the most stable internal environment. Thus, to Barcroft homeostasis was not only organized by the brain—homeostasis served the brain. [ 13 ] Homeostasis is an almost exclusively biological term, referring to the concepts described by Bernard and Cannon, concerning the constancy of the internal environment in which the cells of the body live and survive. [ 11 ] [ 12 ] [ 14 ] The term cybernetics is applied to technological control systems such as thermostats , which function as homeostatic mechanisms but are often defined much more broadly than the biological term of homeostasis. [ 5 ] [ 15 ] [ 16 ] [ 17 ]
The metabolic processes of all organisms can only take place in very specific physical and chemical environments. The conditions vary with each organism, and also with whether the chemical processes take place inside the cell or in the interstitial fluid bathing the cells. The best-known homeostatic mechanisms in humans and other mammals are regulators that keep the composition of the extracellular fluid (or the "internal environment") constant, especially with regard to the temperature , pH , osmolality , and the concentrations of sodium , potassium , glucose , carbon dioxide , and oxygen . However, a great many other homeostatic mechanisms, encompassing many aspects of human physiology , control other entities in the body. Where the levels of variables are higher or lower than those needed, they are often prefixed with hyper- and hypo- , respectively such as hyperthermia and hypothermia or hypertension and hypotension . [ citation needed ]
If an entity is homeostatically controlled it does not imply that its value is necessarily absolutely steady in health. Core body temperature is, for instance, regulated by a homeostatic mechanism with temperature sensors in, amongst others, the hypothalamus of the brain . [ 18 ] However, the set point of the regulator is regularly reset. [ 19 ] For instance, core body temperature in humans varies during the course of the day (i.e. has a circadian rhythm ), with the lowest temperatures occurring at night, and the highest in the afternoons. Other normal temperature variations include those related to the menstrual cycle . [ 20 ] [ 21 ] The temperature regulator's set point is reset during infections to produce a fever. [ 18 ] [ 22 ] [ 23 ] Organisms are capable of adjusting somewhat to varied conditions such as temperature changes or oxygen levels at altitude, by a process of acclimatisation .
Homeostasis does not govern every activity in the body. [ 24 ] [ 25 ] For instance, the signal (be it via neurons or hormones ) from the sensor to the effector is, of necessity, highly variable in order to convey information about the direction and magnitude of the error detected by the sensor. [ 26 ] [ 27 ] [ 28 ] Similarly, the effector's response needs to be highly adjustable to reverse the error – in fact it should be very nearly in proportion (but in the opposite direction) to the error that is threatening the internal environment. [ 16 ] [ 17 ] For instance, arterial blood pressure in mammals is homeostatically controlled and measured by stretch receptors in the walls of the aortic arch and carotid sinuses at the beginnings of the internal carotid arteries . [ 18 ] The sensors send messages via sensory nerves to the medulla oblongata of the brain indicating whether the blood pressure has fallen or risen, and by how much. The medulla oblongata then distributes messages along motor or efferent nerves belonging to the autonomic nervous system to a wide variety of effector organs, whose activity is consequently changed to reverse the error in the blood pressure. One of the effector organs is the heart whose rate is stimulated to rise ( tachycardia ) when the arterial blood pressure falls, or to slow down ( bradycardia ) when the pressure rises above the set point. [ 18 ] Thus the heart rate (for which there is no sensor in the body) is not homeostatically controlled but is one of the effector responses to errors in arterial blood pressure. Another example is the rate of sweating . This is one of the effectors in the homeostatic control of body temperature, and therefore highly variable in rough proportion to the heat load that threatens to destabilize the body's core temperature, for which there is a sensor in the hypothalamus of the brain. [ citation needed ]
Mammals regulate their core temperature using input from thermoreceptors in the hypothalamus , brain, [ 18 ] [ 29 ] spinal cord , internal organs , and great veins. [ 30 ] [ 31 ] Apart from the internal regulation of temperature, a process called allostasis can come into play that adjusts behaviour to adapt to the challenge of very hot or cold extremes (and to other challenges). [ 32 ] These adjustments may include seeking shade and reducing activity, seeking warmer conditions and increasing activity, or huddling. [ 33 ] Behavioral thermoregulation takes precedence over physiological thermoregulation since necessary changes can be affected more quickly and physiological thermoregulation is limited in its capacity to respond to extreme temperatures. [ 34 ]
When the core temperature falls, the blood supply to the skin is reduced by intense vasoconstriction . [ 18 ] The blood flow to the limbs (which have a large surface area) is similarly reduced and returned to the trunk via the deep veins which lie alongside the arteries (forming venae comitantes ). [ 29 ] [ 33 ] [ 35 ] This acts as a counter-current exchange system that short-circuits the warmth from the arterial blood directly into the venous blood returning into the trunk, causing minimal heat loss from the extremities in cold weather. [ 29 ] [ 33 ] [ 36 ] The subcutaneous limb veins are tightly constricted, [ 18 ] not only reducing heat loss from this source but also forcing the venous blood into the counter-current system in the depths of the limbs.
The metabolic rate is increased, initially by non-shivering thermogenesis , [ 37 ] followed by shivering thermogenesis if the earlier reactions are insufficient to correct the hypothermia .
When core temperature rises are detected by thermoreceptors , the sweat glands in the skin are stimulated via cholinergic sympathetic nerves to secrete sweat onto the skin, which, when it evaporates, cools the skin and the blood flowing through it. Panting is an alternative effector in many vertebrates, which cools the body also by the evaporation of water, but this time from the mucous membranes of the throat and mouth. [ 38 ]
Blood sugar levels are regulated within fairly narrow limits. [ 39 ] In mammals, the primary sensors for this are the beta cells of the pancreatic islets . [ 40 ] [ 41 ] The beta cells respond to a rise in the blood sugar level by secreting insulin into the blood and simultaneously inhibiting their neighboring alpha cells from secreting glucagon into the blood. [ 40 ] This combination (high blood insulin levels and low glucagon levels) act on effector tissues, the chief of which is the liver , fat cells , and muscle cells . The liver is inhibited from producing glucose , taking it up instead, and converting it to glycogen and triglycerides . The glycogen is stored in the liver, but the triglycerides are secreted into the blood as very low-density lipoprotein (VLDL) particles which are taken up by adipose tissue , there to be stored as fats. The fat cells take up glucose through special glucose transporters ( GLUT4 ), whose numbers in the cell wall are increased as a direct effect of insulin acting on these cells. The glucose that enters the fat cells in this manner is converted into triglycerides (via the same metabolic pathways as are used by the liver) and then stored in those fat cells together with the VLDL-derived triglycerides that were made in the liver. Muscle cells also take glucose up through insulin-sensitive GLUT4 glucose channels, and convert it into muscle glycogen. [ 42 ]
A fall in blood glucose causes insulin secretion to stop, and glucagon to be secreted from the alpha cells into the blood. This inhibits the uptake of glucose from the blood by the liver, fat cells, and muscle. Instead the liver is strongly stimulated to manufacture glucose from glycogen (through glycogenolysis ) and from non-carbohydrate sources (such as lactate and de-aminated amino acids ) using a process known as gluconeogenesis . [ 43 ] The glucose thus produced is discharged into the blood correcting the detected error ( hypoglycemia ). The glycogen stored in muscles remains in the muscles, and is only broken down, during exercise, to glucose-6-phosphate and thence to pyruvate to be fed into the citric acid cycle or turned into lactate . It is only the lactate and the waste products of the citric acid cycle that are returned to the blood. The liver can take up only the lactate, and, by the process of energy-consuming gluconeogenesis , convert it back to glucose. [ citation needed ]
Iron homeostasis is a crucial physiological process that regulates iron levels in the body, ensuring that this essential nutrient is available for vital functions while preventing potential toxicity from excess iron. [ 44 ] The primary site for iron absorption is the duodenum , where dietary iron exists in two forms: heme iron, sourced from animal products, and non-heme iron , found in plant foods. Heme iron is more efficiently absorbed than non-heme iron, which requires factors like vitamin C for optimal uptake. Once absorbed, iron enters the bloodstream bound to transferrin , a transport protein that delivers it to various tissues and organs. Cells uptake iron through transferrin receptors, making it available for critical processes such as oxygen transport and DNA synthesis. Excess iron is stored in the liver, spleen, and bone marrow as ferritin and hemosiderin. The regulation of iron levels is primarily controlled by the hormone hepcidin , produced by the liver, which adjusts intestinal absorption and the release of stored iron based on the body’s needs. Disruptions in iron homeostasis can lead to conditions such as iron deficiency anemia or iron overload disorders like hemochromatosis , highlighting the importance of maintaining the delicate balance of this vital nutrient for overall health.
Copper is absorbed, transported, distributed, stored, and excreted in the body according to complex homeostatic processes which ensure a constant and sufficient supply of the micronutrient while simultaneously avoiding excess levels. [ 45 ] If an insufficient amount of copper is ingested for a short period of time, copper stores in the liver will be depleted. Should this depletion continue, a copper health deficiency condition may develop. If too much copper is ingested, an excess condition can result. Both of these conditions, deficiency and excess, can lead to tissue injury and disease. However, due to homeostatic regulation, the human body is capable of balancing a wide range of copper intakes for the needs of healthy individuals. [ 46 ]
Many aspects of copper homeostasis are known at the molecular level. [ 47 ] Copper's essentiality is due to its ability to act as an electron donor or acceptor as its oxidation state fluxes between Cu 1+ ( cuprous ) and Cu 2+ ( cupric ). As a component of about a dozen cuproenzymes , copper is involved in key redox (i.e., oxidation-reduction) reactions in essential metabolic processes such as mitochondrial respiration, synthesis of melanin , and cross-linking of collagen . [ 48 ] Copper is an integral part of the antioxidant enzyme copper-zinc superoxide dismutase, and has a role in iron homeostasis as a cofactor in ceruloplasmin. [ citation needed ]
Information about changes in the levels of oxygen, carbon dioxide, and plasma pH is sent to the respiratory center in the brainstem , where these variables are regulated.
The partial pressure of oxygen and carbon dioxide in the arterial blood is monitored by the peripheral chemoreceptors ( PNS ) in the carotid artery and aortic arch . A change in the partial pressure of carbon dioxide is detected as altered pH in the cerebrospinal fluid by central chemoreceptors ( CNS ) in the medulla oblongata of the brainstem . Information from these sets of sensors is sent to the respiratory center which activates the effector organs – the diaphragm and other muscles of respiration . An increased level of carbon dioxide in the blood, or a decreased level of oxygen, will result in a deeper breathing pattern and increased respiratory rate to bring the blood gases back to equilibrium.
Too little carbon dioxide, and, to a lesser extent, too much oxygen in the blood can temporarily halt breathing, a condition known as apnea , which freedivers use to prolong the time they can stay underwater.
The partial pressure of carbon dioxide is more of a deciding factor in the monitoring of pH. [ 49 ] However, at high altitude (above 2500 m) the monitoring of the partial pressure of oxygen takes priority, and hyperventilation keeps the oxygen level constant. With the lower level of carbon dioxide, to keep the pH at 7.4 the kidneys secrete hydrogen ions into the blood and excrete bicarbonate into the urine. [ 50 ] [ 51 ] This is important in acclimatization to high altitude . [ 52 ]
The kidneys measure the oxygen content rather than the partial pressure of oxygen in the arterial blood. When the oxygen content of the blood is chronically low, oxygen-sensitive cells secrete erythropoietin (EPO) into the blood. [ 53 ] The effector tissue is the red bone marrow which produces red blood cells (RBCs, also called erythrocytes ). The increase in RBCs leads to an increased hematocrit in the blood, and a subsequent increase in hemoglobin that increases the oxygen carrying capacity. This is the mechanism whereby high altitude dwellers have higher hematocrits than sea-level residents, and also why persons with pulmonary insufficiency or right-to-left shunts in the heart (through which venous blood by-passes the lungs and goes directly into the systemic circulation) have similarly high hematocrits. [ 54 ] [ 55 ]
Regardless of the partial pressure of oxygen in the blood, the amount of oxygen that can be carried depends on the hemoglobin content. The partial pressure of oxygen may be sufficient, for example in anemia , but the hemoglobin content will be insufficient and, consequently, so will the oxygen content. Given an adequate supply of iron, vitamin B12 and folic acid , EPO can stimulate RBC production, restoring hemoglobin and oxygen content to normal. [ 54 ] [ 56 ]
The brain can regulate blood flow over a range of blood pressure values by vasoconstriction and vasodilation of the arteries. [ 57 ]
High pressure receptors called baroreceptors in the walls of the aortic arch and carotid sinus (at the beginning of the internal carotid artery ) monitor the arterial blood pressure . [ 58 ] Rising pressure is detected when the walls of the arteries stretch due to an increase in blood volume . This causes heart muscle cells to secrete the hormone atrial natriuretic peptide (ANP) into the blood. This acts on the kidneys to inhibit the secretion of renin and aldosterone causing the release of sodium, and accompanying water into the urine, thereby reducing the blood volume. [ 59 ] This information is then conveyed, via afferent nerve fibers , to the solitary nucleus in the medulla oblongata . [ 60 ] From here motor nerves belonging to the autonomic nervous system are stimulated to influence the activity of chiefly the heart and the smallest diameter arteries, called arterioles . The arterioles are the main resistance vessels in the arterial tree , and small changes in diameter cause large changes in the resistance to flow through them. When the arterial blood pressure rises the arterioles are stimulated to dilate making it easier for blood to leave the arteries, thus deflating them, and bringing the blood pressure down, back to normal. At the same time, the heart is stimulated via cholinergic parasympathetic nerves to beat more slowly (called bradycardia ), ensuring that the inflow of blood into the arteries is reduced, thus adding to the reduction in pressure, and correcting the original error.
Low pressure in the arteries causes the opposite reflex: constriction of the arterioles, and a speeding up of the heart rate (called tachycardia ). If the drop in blood pressure is very rapid or excessive, the medulla oblongata stimulates the adrenal medulla , via "preganglionic" sympathetic nerves , to secrete epinephrine (adrenaline) into the blood. This hormone enhances the tachycardia and causes severe vasoconstriction of the arterioles to all but the essential organs in the body (especially the heart, lungs, and brain). These reactions usually correct the low arterial blood pressure ( hypotension ) very effectively.
The plasma ionized calcium (Ca 2+ ) concentration is very tightly controlled by a pair of homeostatic mechanisms. [ 61 ] The sensor for the first one is situated in the parathyroid glands , where the chief cells sense the Ca 2+ level by means of specialized calcium receptors in their membranes. The sensors for the second are the parafollicular cells in the thyroid gland . The parathyroid chief cells secrete parathyroid hormone (PTH) in response to a fall in the plasma ionized calcium level; the parafollicular cells of the thyroid gland secrete calcitonin in response to a rise in the plasma ionized calcium level.
The effector organs of the first homeostatic mechanism are the bones , the kidney , and, via a hormone released into the blood by the kidney in response to high PTH levels in the blood, the duodenum and jejunum . Parathyroid hormone (in high concentrations in the blood) causes bone resorption , releasing calcium into the plasma. This is a very rapid action which can correct a threatening hypocalcemia within minutes. High PTH concentrations cause the excretion of phosphate ions via the urine. Since phosphates combine with calcium ions to form insoluble salts (see also bone mineral ), a decrease in the level of phosphates in the blood, releases free calcium ions into the plasma ionized calcium pool. PTH has a second action on the kidneys. It stimulates the manufacture and release, by the kidneys, of calcitriol into the blood. This steroid hormone acts on the epithelial cells of the upper small intestine, increasing their capacity to absorb calcium from the gut contents into the blood. [ 62 ]
The second homeostatic mechanism, with its sensors in the thyroid gland, releases calcitonin into the blood when the blood ionized calcium rises. This hormone acts primarily on bone, causing the rapid removal of calcium from the blood and depositing it, in insoluble form, in the bones. [ 63 ]
The two homeostatic mechanisms working through PTH on the one hand, and calcitonin on the other can very rapidly correct any impending error in the plasma ionized calcium level by either removing calcium from the blood and depositing it in the skeleton, or by removing calcium from the skeleton and returning it to the blood. The skeleton acts as an extremely large calcium store (about 1 kg) compared with the plasma calcium store (about 180 mg). Longer-term regulation occurs through calcium absorption or loss from the gut.
Another example is provided by the best-characterised endocannabinoids , anandamide ( N -arachidonoylethanolamide; AEA) and 2-arachidonoylglycerol (2-AG). Their synthesis occurs through the action of a series of intracellular enzymes activated in response to a rise in intracellular calcium levels; by activating CB1 and/or CB2 and adjoining receptors , they contribute to homeostasis and to putative protective mechanisms that prevent tumor development by limiting cell growth and migration. [ 64 ]
The homeostatic mechanism which controls the plasma sodium concentration is rather more complex than most of the other homeostatic mechanisms described on this page.
The sensor is situated in the juxtaglomerular apparatus of the kidneys, which senses the plasma sodium concentration in a surprisingly indirect manner. Instead of measuring it directly in the blood flowing past the juxtaglomerular cells , these cells respond to the sodium concentration in the renal tubular fluid after it has already undergone a certain amount of modification in the proximal convoluted tubule and loop of Henle . [ 65 ] These cells also respond to the rate of blood flow through the juxtaglomerular apparatus, which, under normal circumstances, is directly proportional to the arterial blood pressure , making this tissue an ancillary arterial blood pressure sensor.
In response to a lowering of the plasma sodium concentration, or to a fall in the arterial blood pressure, the juxtaglomerular cells release renin into the blood. [ 65 ] [ 66 ] [ 67 ] Renin is an enzyme which cleaves a decapeptide (a short protein chain, 10 amino acids long) from a plasma α-2-globulin called angiotensinogen . This decapeptide is known as angiotensin I . [ 65 ] It has no known biological activity. However, when the blood circulates through the lungs a pulmonary capillary endothelial enzyme called angiotensin-converting enzyme (ACE) cleaves a further two amino acids from angiotensin I to form an octapeptide known as angiotensin II . Angiotensin II is a hormone which acts on the adrenal cortex , causing the release into the blood of the steroid hormone , aldosterone . Angiotensin II also acts on the smooth muscle in the walls of the arterioles causing these small diameter vessels to constrict, thereby restricting the outflow of blood from the arterial tree, causing the arterial blood pressure to rise. This, therefore, reinforces the measures described above (under the heading of "Arterial blood pressure"), which defend the arterial blood pressure against changes, especially hypotension .
The angiotensin II-stimulated aldosterone released from the zona glomerulosa of the adrenal glands has an effect particularly on the epithelial cells of the distal convoluted tubules and collecting ducts of the kidneys. Here it causes the reabsorption of sodium ions from the renal tubular fluid , in exchange for potassium ions which are secreted from the blood plasma into the tubular fluid to exit the body via the urine. [ 65 ] [ 68 ] The reabsorption of sodium ions from the renal tubular fluid halts further sodium ion losses from the body, and therefore prevents the worsening of hyponatremia . The hyponatremia can only be corrected by the consumption of salt in the diet. However, it is not certain whether a "salt hunger" can be initiated by hyponatremia, or by what mechanism this might come about.
When the plasma sodium ion concentration is higher than normal ( hypernatremia ), the release of renin from the juxtaglomerular apparatus is halted, ceasing the production of angiotensin II, and its consequent aldosterone-release into the blood. The kidneys respond by excreting sodium ions into the urine, thereby normalizing the plasma sodium ion concentration. The low angiotensin II levels in the blood lower the arterial blood pressure as an inevitable concomitant response.
The reabsorption of sodium ions from the tubular fluid as a result of high aldosterone levels in the blood does not, of itself, cause renal tubular water to be returned to the blood from the distal convoluted tubules or collecting ducts . This is because sodium is reabsorbed in exchange for potassium and therefore causes only a modest change in the osmotic gradient between the blood and the tubular fluid. Furthermore, the epithelium of the distal convoluted tubules and collecting ducts is impermeable to water in the absence of antidiuretic hormone (ADH) in the blood. ADH is part of the control of fluid balance . Its levels in the blood vary with the osmolality of the plasma, which is measured in the hypothalamus of the brain. Aldosterone's action on the kidney tubules prevents sodium loss from the extracellular fluid (ECF). So there is no change in the osmolality of the ECF, and therefore no change in the ADH concentration of the plasma. However, low aldosterone levels cause a loss of sodium ions from the ECF, which could potentially cause a change in extracellular osmolality and therefore in ADH levels in the blood.
High potassium concentrations in the plasma cause depolarization of the zona glomerulosa cells' membranes in the outer layer of the adrenal cortex . [ 69 ] This causes the release of aldosterone into the blood.
Aldosterone acts primarily on the distal convoluted tubules and collecting ducts of the kidneys, stimulating the excretion of potassium ions into the urine. [ 65 ] It does so, however, by activating the basolateral Na + /K + pumps of the tubular epithelial cells. These sodium/potassium exchangers pump three sodium ions out of the cell into the interstitial fluid, and two potassium ions into the cell from the interstitial fluid. This creates an ionic concentration gradient which results in the reabsorption of sodium (Na + ) ions from the tubular fluid into the blood, and the secretion of potassium (K + ) ions from the blood into the urine (the lumen of the collecting duct). [ 70 ] [ 71 ]
The total amount of water in the body needs to be kept in balance. Fluid balance involves keeping the fluid volume stabilized, and also keeping the levels of electrolytes in the extracellular fluid stable. Fluid balance is maintained by the process of osmoregulation and by behavior. Osmotic pressure is detected by osmoreceptors in the median preoptic nucleus in the hypothalamus . Measurement of the plasma osmolality to give an indication of the water content of the body relies on the fact that water losses from the body (through unavoidable water loss through the skin which is not entirely waterproof and therefore always slightly moist, water vapor in the exhaled air , sweating , vomiting , normal feces and especially diarrhea ) are all hypotonic , meaning that they are less salty than the body fluids (compare, for instance, the taste of saliva with that of tears. The latter has almost the same salt content as the extracellular fluid, whereas the former is hypotonic with respect to the plasma. Saliva does not taste salty, whereas tears are decidedly salty). Nearly all normal and abnormal losses of body water therefore cause the extracellular fluid to become hypertonic . Conversely, excessive fluid intake dilutes the extracellular fluid, causing the hypothalamus to register hypotonic hyponatremia conditions.
When the hypothalamus detects a hypertonic extracellular environment, it causes the secretion of an antidiuretic hormone (ADH) called vasopressin which acts on the effector organ, which in this case is the kidney . The effect of vasopressin on the kidney tubules is to reabsorb water from the distal convoluted tubules and collecting ducts , thus preventing aggravation of the water loss via the urine. The hypothalamus simultaneously stimulates the nearby thirst center causing an almost irresistible (if the hypertonicity is severe enough) urge to drink water. The cessation of urine flow prevents the hypovolemia and hypertonicity from getting worse; the drinking of water corrects the defect.
Hypo-osmolality results in very low plasma ADH levels. This results in the inhibition of water reabsorption from the kidney tubules, causing high volumes of very dilute urine to be excreted, thus getting rid of the excess water in the body.
Urinary water loss, when the body water homeostat is intact, is a compensatory water loss, correcting any water excess in the body. However, since the kidneys cannot generate water, the thirst reflex is the all-important second effector mechanism of the body water homeostat, correcting any water deficit in the body.
The plasma pH can be altered by respiratory changes in the partial pressure of carbon dioxide; or altered by metabolic changes in the carbonic acid to bicarbonate ion ratio. The bicarbonate buffer system regulates the ratio of carbonic acid to bicarbonate to be equal to 1:20, at which ratio the blood pH is 7.4 (as explained in the Henderson–Hasselbalch equation ). A change in the plasma pH gives an acid–base imbalance .
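As a worked illustration of that statement (using the commonly quoted apparent pK a of about 6.1 for the bicarbonate system, an assumed standard value not stated in the text above), the Henderson–Hasselbalch equation reproduces the normal plasma pH at the 20:1 ratio:

```latex
% Henderson–Hasselbalch relation for the bicarbonate buffer,
% with the commonly used apparent pK_a of about 6.1.
\mathrm{pH} \;=\; \mathrm{p}K_a + \log_{10}\!\frac{[\mathrm{HCO_3^-}]}{[\mathrm{H_2CO_3}]}
\;\approx\; 6.1 + \log_{10}(20) \;\approx\; 6.1 + 1.3 \;=\; 7.4
```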
In acid–base homeostasis there are two mechanisms that can help regulate the pH. Respiratory compensation , a mechanism of the respiratory center , adjusts the partial pressure of carbon dioxide by changing the rate and depth of breathing, to bring the pH back to normal. The partial pressure of carbon dioxide also determines the concentration of carbonic acid, and the bicarbonate buffer system can also come into play. Renal compensation can help the bicarbonate buffer system.
The sensor for the plasma bicarbonate concentration is not known for certain. It is very probable that the renal tubular cells of the distal convoluted tubules are themselves sensitive to the pH of the plasma. [ citation needed ] The metabolism of these cells produces carbon dioxide, which is rapidly converted to hydrogen ions and bicarbonate through the action of carbonic anhydrase . [ 72 ] When the ECF pH falls (becoming more acidic), the renal tubular cells excrete hydrogen ions into the tubular fluid to leave the body via urine. Bicarbonate ions are simultaneously secreted into the blood; this decreases the carbonic acid and consequently raises the plasma pH. [ 72 ] The converse happens when the plasma pH rises above normal: bicarbonate ions are excreted into the urine, and hydrogen ions are released into the plasma.
When hydrogen ions are excreted into the urine, and bicarbonate into the blood, the latter combines with the excess hydrogen ions in the plasma that stimulated the kidneys to perform this operation. The resulting reaction in the plasma is the formation of carbonic acid which is in equilibrium with the plasma partial pressure of carbon dioxide. This is tightly regulated to ensure that there is no excessive build-up of carbonic acid or bicarbonate. The overall effect is therefore that hydrogen ions are lost in the urine when the pH of the plasma falls. The concomitant rise in the plasma bicarbonate mops up the increased hydrogen ions (caused by the fall in plasma pH) and the resulting excess carbonic acid is disposed of in the lungs as carbon dioxide. This restores the normal ratio between bicarbonate and the partial pressure of carbon dioxide and therefore the plasma pH.
The converse happens when a high plasma pH stimulates the kidneys to secrete hydrogen ions into the blood and to excrete bicarbonate into the urine. The hydrogen ions combine with the excess bicarbonate ions in the plasma, once again forming an excess of carbonic acid which can be exhaled, as carbon dioxide, in the lungs, keeping the plasma bicarbonate ion concentration, the partial pressure of carbon dioxide and, therefore, the plasma pH, constant.
Cerebrospinal fluid (CSF) allows for regulation of the distribution of substances between cells of the brain, [ 73 ] and neuroendocrine factors, to which slight changes can cause problems or damage to the nervous system. For example, high glycine concentration disrupts temperature and blood pressure control, and high CSF pH causes dizziness and syncope . [ 74 ]
Inhibitory neurons in the central nervous system play a homeostatic role in the balance of neuronal activity between excitation and inhibition. Inhibitory neurons using GABA make compensating changes in the neuronal networks, preventing runaway levels of excitation. [ 75 ] An imbalance between excitation and inhibition is seen to be implicated in a number of neuropsychiatric disorders . [ 76 ]
The neuroendocrine system is the mechanism by which the hypothalamus maintains homeostasis, regulating metabolism , reproduction, eating and drinking behaviour, energy utilization, osmolarity and blood pressure.
The regulation of metabolism is carried out by hypothalamic interconnections to other glands. [ 77 ] Three endocrine glands of the hypothalamic–pituitary–gonadal axis (HPG axis) often work together and have important regulatory functions. Two other regulatory endocrine axes are the hypothalamic–pituitary–adrenal axis (HPA axis) and the hypothalamic–pituitary–thyroid axis (HPT axis).
The liver also has many regulatory functions in metabolism. An important function is the production and control of bile acids . Too much bile acid can be toxic to cells, and its synthesis can be inhibited by activation of FXR , a nuclear receptor . [ 4 ]
At the cellular level, homeostasis is carried out by several mechanisms including transcriptional regulation that can alter the activity of genes in response to changes.
The amount of energy consumed through dietary intake must align closely with the amount of energy expended by the body in order to maintain overall energy balance, a state known as energy homeostasis. This critical process is managed through the regulation of appetite, which is influenced by two key hormones: ghrelin and leptin . Ghrelin is known as the hunger hormone , as it plays a significant role in stimulating feelings of hunger, thereby prompting individuals to seek out and consume food. On the other hand, leptin serves a different function; it signals satiety, or the feeling of fullness, telling the body that it has consumed enough food.
In a comprehensive review conducted in 2019 that examined various weight-change interventions—including dieting, exercise, and instances of overeating—it was determined that the body’s mechanisms for regulating weight homeostasis are not capable of precisely correcting for energetic errors . These energetic errors refer to the notable loss or gain of calories that can occur in the short term. This research highlights the complexity of energy balance, showing that the body may struggle to adjust rapidly to fluctuations in calorie intake or expenditure, thereby complicating the process of maintaining a stable body weight in response to immediate changes in energy consumption and usage. [ 78 ]
Many diseases are the result of a homeostatic failure. Almost any homeostatic component can malfunction either as a result of an inherited defect , an inborn error of metabolism , or an acquired disease. Some homeostatic mechanisms have inbuilt redundancies, which ensures that life is not immediately threatened if a component malfunctions; but sometimes a homeostatic malfunction can result in serious disease, which can be fatal if not treated. A well-known example of a homeostatic failure is shown in type 1 diabetes mellitus . Here blood sugar regulation is unable to function because the beta cells of the pancreatic islets are destroyed and cannot produce the necessary insulin . The blood sugar rises in a condition known as hyperglycemia . [ 79 ]
The plasma ionized calcium homeostat can be disrupted by the constant, unchanging, over-production of parathyroid hormone by a parathyroid adenoma , resulting in the typical features of hyperparathyroidism , namely high plasma ionized Ca 2+ levels and the resorption of bone, which can lead to spontaneous fractures. The abnormally high plasma ionized calcium concentrations cause conformational changes in many cell-surface proteins (especially ion channels and hormone or neurotransmitter receptors), [ 80 ] giving rise to lethargy, muscle weakness, anorexia, constipation and labile emotions. [ 81 ]
The body water homeostat can be compromised by the inability to secrete ADH in response to even the normal daily water losses via the exhaled air, the feces , and insensible sweating . On receiving a zero blood ADH signal, the kidneys produce huge unchanging volumes of very dilute urine, causing dehydration and death if not treated.
As organisms age, the efficiency of their control systems becomes reduced. The inefficiencies gradually result in an unstable internal environment that increases the risk of illness, and leads to the physical changes associated with aging. [ 5 ]
Various chronic diseases are kept under control by homeostatic compensation, which masks a problem by compensating for it (making up for it) in another way. However, the compensating mechanisms eventually wear out or are disrupted by a new complicating factor (such as the advent of a concurrent acute viral infection), which sends the body reeling through a new cascade of events. Such decompensation unmasks the underlying disease, worsening its symptoms. Common examples include decompensated heart failure , kidney failure , and liver failure . [ citation needed ]
In the Gaia hypothesis , James Lovelock [ 82 ] stated that the entire mass of living matter on Earth (or any planet with life) functions as a vast homeostatic superorganism that actively modifies its planetary environment to produce the environmental conditions necessary for its own survival. In this view, the entire planet maintains several forms of homeostasis (the primary one being temperature homeostasis). Whether this sort of system is present on Earth is open to debate. However, some relatively simple homeostatic mechanisms are generally accepted. For example, it is sometimes claimed that when atmospheric carbon dioxide levels rise, certain plants may be able to grow better and thus act to remove more carbon dioxide from the atmosphere. However, warming has exacerbated droughts, making water the actual limiting factor on land. When sunlight is plentiful and the atmospheric temperature climbs, it has been claimed that the phytoplankton of the ocean surface waters, acting as global sunshine (and therefore heat) sensors, may thrive and produce more dimethyl sulfide (DMS). The DMS molecules act as cloud condensation nuclei , which produce more clouds, and thus increase the atmospheric albedo , and this feeds back to lower the temperature of the atmosphere. However, rising sea temperature has stratified the oceans, separating warm, sunlit waters from cool, nutrient-rich waters. Thus, nutrients have become the limiting factor, and plankton levels have actually fallen over the past 50 years, not risen. As scientists discover more about Earth, vast numbers of positive and negative feedback loops are being discovered that, together, maintain a metastable condition, sometimes within a very broad range of environmental conditions.
Predictive homeostasis is an anticipatory response to an expected challenge in the future, such as the stimulation of insulin secretion by gut hormones which enter the blood in response to a meal. [ 40 ] This insulin secretion occurs before the blood sugar level rises, lowering the blood sugar level in anticipation of a large influx into the blood of glucose resulting from the digestion of carbohydrates in the gut. [ 83 ] Such anticipatory reactions are open loop systems which are based, essentially, on "guess work", and are not self-correcting. [ 84 ] Anticipatory responses always require a closed loop negative feedback system to correct the 'over-shoots' and 'under-shoots' to which the anticipatory systems are prone.
The term has come to be used in other fields, for example:
An actuary may refer to risk homeostasis , where (for example) people who have anti-lock brakes have no better safety record than those without anti-lock brakes, because the former unconsciously compensate for the safer vehicle via less-safe driving habits. Previous to the innovation of anti-lock brakes, certain maneuvers involved minor skids, evoking fear and avoidance: Now the anti-lock system moves the boundary for such feedback, and behavior patterns expand into the no-longer punitive area. It has also been suggested that ecological crises are an instance of risk homeostasis in which a particular behavior continues until proven dangerous or dramatic consequences actually occur. [ 85 ] [ self-published source? ]
Sociologists and psychologists may refer to stress homeostasis , the tendency of a population or an individual to stay at a certain level of stress , often generating artificial stresses if the "natural" level of stress is not enough. [ 86 ] [ self-published source? ]
Jean-François Lyotard , a postmodern theorist, has applied this term to societal 'power centers' that he describes in The Postmodern Condition , as being 'governed by a principle of homeostasis,' for example, the scientific hierarchy, which will sometimes ignore a radical new discovery for years because it destabilizes previously accepted norms.
Familiar technological homeostatic mechanisms include:
The use of sovereign power, codes of conduct, religious and cultural practices and other dynamic processes in a society can be described as a part of an evolved homeostatic system of regularizing life and maintaining an overall equilibrium that protects the security of the whole from internal and external imbalances or dangers. [ 93 ] [ 94 ] Healthy civic cultures can be said to have achieved an optimal homeostatic balance between multiple contradictory concerns such as in the tension between respect for individual rights and concern for the public good, [ 95 ] or that between governmental effectiveness and responsiveness to the interests of citizens. [ 96 ] [ 97 ] | https://en.wikipedia.org/wiki/Homeostasis |
Homeotic genes are genes which regulate the development of anatomical structures in various organisms such as echinoderms, [ 1 ] insects, mammals, and plants. Homeotic genes often encode transcription factor proteins, and these proteins affect development by regulating downstream gene networks involved in body patterning. [ 2 ]
Mutations in homeotic genes cause displaced body parts ( homeosis ), such as antennae growing at the posterior of the fly instead of at the head. [ 3 ] Mutations that lead to development of ectopic structures are usually lethal. [ 4 ]
There are several subsets of homeotic genes. They include many of the Hox and ParaHox genes that are important for segmentation . [ 5 ] Hox genes are found in bilateral animals, including Drosophila (in which they were first discovered) and humans. Hox genes are a subset of the homeobox genes. The Hox genes are often conserved across species, so some of the Hox genes of Drosophila are homologous to those in humans. In general, Hox genes play a role of regulating expression of genes as well as aiding in development and assignment of specific structures during embryonic growth. This can range from segmentation in Drosophila to central nervous system (CNS) development in vertebrates. [ 6 ] Both Hox and ParaHox are grouped as HOX-Like (HOXL) genes, a subset of the ANTP class (named after the Drosophila gene, Antennapedia ). [ 7 ]
They also include the MADS-box -containing genes involved in the ABC model of flower development . [ 8 ] Besides flower-producing plants, the MADS-box motif is also present in other organisms such as insects, yeasts, and mammals. They have various functions depending on the organism including flower development, proto-oncogene transcription, and gene regulation in specific cells (such as muscle cells). [ 9 ]
Despite the terms being commonly interchanged, not all homeotic genes are Hox genes; the MADS-box genes are homeotic but not Hox genes. Thus, the Hox genes are a subset of homeotic genes.
One of the most commonly studied model organisms in regards to homeotic genes is the fruit fly Drosophila melanogaster . Its homeotic Hox genes occur in either the Antennapedia complex (ANT-C) or the Bithorax complex (BX-C) discovered by Edward B. Lewis . [ 10 ] Each of the complexes focuses on a different area of development. The antennapedia complex consists of five genes, including proboscipedia , and is involved in the development of the front of the embryo, forming the segments of the head and thorax. [ 11 ] The bithorax complex consists of three main genes and is involved in the development of the back of the embryo, namely the abdomen and the posterior segments of the thorax. [ 12 ]
During development (starting at the blastoderm stage of the embryo), these genes are constantly expressed to assign structures and roles to the different segments of the fly's body. [ 13 ] For Drosophila , these genes can be analyzed using the Flybase database.
Much research has been done on homeotic genes in different organisms, ranging from basic understanding of how the molecules work to mutations to how homeotic genes affect the human body. Changing the expression levels of homeotic genes can negatively impact the organism. For example, in one study, a pathogenic phytoplasma caused homeotic genes in a flowering plant to either be significantly upregulated or downregulated. This led to severe phenotypic changes including dwarfing, defects in the pistils, hypopigmentation, and the development of leaf-like structures on most floral organs. [ 14 ] In another study, it was found that the homeotic gene Cdx2 acts as a tumor suppressor . In normal expression levels, the gene prevents tumorgenesis and colorectal cancer when exposed to carcinogens ; however, when Cdx2 was not well expressed, carcinogens caused tumor development. [ 15 ] These studies, along with many others, show the importance of homeotic genes even after development. | https://en.wikipedia.org/wiki/Homeotic_gene |
In liquid crystals , homeotropic alignment is one of the ways in which liquid crystalline molecules can be aligned. Homeotropic alignment is the state in which a rod-like liquid crystalline molecule aligns perpendicularly to the substrate. In the polydomain state, the parts are also called homeotropic domains. In contrast, the state in which the molecule aligns parallel to the substrate is called homogeneous alignment . [ 1 ]
There are various other ways of alignment in liquid crystals. Because homeotropic alignment is not optically anisotropic in the viewing direction, a dark field is observed between crossed polarizers in polarizing optical microscopy .
By conoscope observation, however, a cross image is observed in the homeotropic alignments. Homeotropic alignment often appears in the smectic A phase (S A ).
In discotic liquid crystals , homeotropic alignment is defined as the state in which the axis of the column structure, which is formed by disc-like liquid crystalline molecules , aligns perpendicularly to the substrate. In other words, this alignment resembles a state in which columns of piled-up coins are arranged in an orderly way on a table.
In practice, homeotropic alignment is usually achieved with surfactants and detergents (for example lecithin), some silanes, or special polyimides (such as PI 1211). Generally, liquid crystals align homeotropically at an air or glass interface.
This crystallography -related article is a stub . You can help Wikipedia by expanding it . | https://en.wikipedia.org/wiki/Homeotropic_alignment |
In superconductivity , Homes's law is an empirical relation that states that a superconductor's critical temperature ( T c ) is proportional to the strength of the superconducting state at temperatures well below T c , close to zero temperature (also referred to as the fully formed superfluid density, ρ s0 ), multiplied by the electrical resistivity ρ dc measured just above the critical temperature. In cuprate high-temperature superconductors the relation takes the form ρ s0 α ∝ σ dc α T c , i.e. the fully formed superfluid density scales with the product of the normal-state dc conductivity and the critical temperature. Many novel superconductors are anisotropic, so the resistivity and the superfluid density are tensor quantities; the superscript α denotes the crystallographic direction along which these quantities are measured. Note that this expression assumes that the conductivity and temperature have both been recast in units of cm −1 (or s −1 ), and that the superfluid density has units of cm −2 (or s −2 ); the constant of proportionality is then dimensionless. The expected form for a BCS dirty-limit superconductor has a slightly larger numerical constant of ~8.1.
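For reference, the scaling relation is commonly written with an explicit numerical constant; the values below (about 4.4 for the cuprates and about 8.1 for a BCS dirty-limit superconductor, with all quantities recast in the cm −1 -based units described above) are the figures usually quoted in the literature and should be read as approximate:

```latex
% Homes scaling, all quantities in cm^{-1}-based units (superfluid density in cm^{-2});
% the constants are the approximate values usually quoted in the literature.
\rho_{s0}^{\alpha} \simeq 4.4\,\sigma_{dc}^{\alpha}\,T_{c} \quad \text{(cuprates)},
\qquad
\rho_{s0} \simeq 8.1\,\sigma_{dc}\,T_{c} \quad \text{(BCS, dirty limit)}
```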
The law is named for physicist Christopher Homes and was first presented in the July 29, 2004 edition of Nature , [ 1 ] and was the subject of a News and Views article by Jan Zaanen in the same issue [ 2 ] in which he speculated that the high transition temperatures observed in the cuprate superconductors are because the metallic states in these materials are as viscous as permitted by the laws of quantum physics. A more detailed version of this scaling relation subsequently appeared in Physical Review B in 2005, [ 3 ] in which it was argued that any material that falls on the scaling line is likely in the dirty limit (the superconducting coherence length ξ 0 is much greater than the normal-state mean-free path l , ξ 0 ≫ l ); however, a paper by Vladimir Kogan in Physical Review B in 2013 has shown that the scaling relation is valid even when ξ 0 ~ l , [ 4 ] suggesting that only materials in the clean limit (ξ 0 ≪ l ) will fall off of this scaling line.
Francis Pratt and Stephen Blundell have argued that Homes's law is violated in the organic superconductors . This work was first presented in Physical Review Letters in March 2005. [ 5 ] On the other hand, it has more recently been demonstrated by Sasa Dordevic and coworkers that if the dc conductivity and the superfluid density are measured on the same sample at the same time using either infrared or microwave impedance spectroscopy, then the organic superconductors do indeed fall on the universal scaling line, along with a number of other exotic superconductors. This work was published in Scientific Reports in 2013. [ 6 ] | https://en.wikipedia.org/wiki/Homes's_law
In game theory , the homicidal chauffeur problem is a mathematical pursuit problem which pits a hypothetical runner, who can only move slowly, but is highly maneuverable, against the driver of a motor vehicle, which is much faster but far less maneuverable, who is attempting to run him down. Both runner and driver are assumed to never tire. The question to be solved is: under what circumstances, and with what strategy, can the driver of the car guarantee that he can always catch the pedestrian, or the pedestrian guarantee that he can indefinitely elude the car?
The problem is often used as an unclassified proxy for missile defense and other military targeting, allowing scientists to publish on it without security implications. [ 1 ]
The problem was proposed by Rufus Isaacs in a 1951 report [ 2 ] for the RAND Corporation , and in the book Differential Games . [ 3 ]
The homicidal chauffeur problem is a classic example of a differential game played in continuous time in a continuous state space . The calculus of variations and level set methods can be used as a mathematical framework for investigating solutions of the problem. Although the problem is phrased as a recreational problem, it is an important model problem for mathematics used in a number of real-world applications.
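To make the setup concrete, here is a minimal simulation sketch of the two players' kinematics under naive strategies: the car follows a pure-pursuit rule limited by its turning radius, and the runner simply flees along the line joining the two. All numerical values (speeds, turning radius, capture radius) and both strategies are assumptions made for illustration; they are not the optimal strategies of Isaacs's solution.

```python
import math

# Illustrative simulation of the homicidal chauffeur kinematics.
# Speeds, turning radius, capture radius, and both strategies are assumptions.
DT = 0.01            # integration time step
CAR_SPEED = 2.0      # the car is faster ...
RUNNER_SPEED = 1.0   # ... but the runner is more maneuverable
TURN_RADIUS = 1.0    # minimum turning radius of the car
CAPTURE_RADIUS = 0.3 # the runner is "caught" inside this distance

def simulate(runner_xy=(0.0, 3.0), car_xy=(0.0, 0.0), car_heading=0.0, t_max=30.0):
    rx, ry = runner_xy
    cx, cy = car_xy
    th = car_heading
    t = 0.0
    while t < t_max:
        dx, dy = rx - cx, ry - cy
        dist = math.hypot(dx, dy)
        if dist <= CAPTURE_RADIUS:
            return ("captured", round(t, 2))
        # Car: pure pursuit -- steer toward the runner, limited by the turn radius.
        desired = math.atan2(dy, dx)
        err = (desired - th + math.pi) % (2 * math.pi) - math.pi
        max_turn = (CAR_SPEED / TURN_RADIUS) * DT
        th += max(-max_turn, min(max_turn, err))
        cx += CAR_SPEED * math.cos(th) * DT
        cy += CAR_SPEED * math.sin(th) * DT
        # Runner: naive evasion -- run directly away from the car.
        if dist > 0:
            rx += RUNNER_SPEED * (dx / dist) * DT
            ry += RUNNER_SPEED * (dy / dist) * DT
        t += DT
    return ("escaped so far", t_max)

if __name__ == "__main__":
    print(simulate())
```

With these particular parameter choices the naive runner is caught fairly quickly; the interest of the original problem lies in whether smarter evasion, such as sharp turns inside the car's minimum turning circle, can prevent capture.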
A discrete version of the problem was described by Martin Gardner (in his book Mathematical Carnival , chapter 16), where a squad car of speed 2 chases a crook of speed 1 on a rectangular grid, where the squad car but not the crook is constrained not to make left-hand turns or U-turns. | https://en.wikipedia.org/wiki/Homicidal_chauffeur_problem |
Homing is the inherent ability of an animal to navigate towards an original location through unfamiliar areas. This location may be a home territory or a breeding spot.
Homing abilities can be used to find the way back to home in a migration . It is often used in reference to going back to a breeding spot seen years before, as in the case of salmon . Homing abilities can also be used to go back to familiar territory when displaced over long distances, such as with the red-bellied newt .
Some animals use true navigation for their homing. This means in familiar areas they will use landmarks such as roads, rivers or mountains when flying, or islands and other landmarks while swimming. However, this only works in familiar territory. Homing pigeons , for example, will often navigate using familiar landmarks , such as roads. [ 1 ] Sea turtles will also use landmarks to orient themselves. [ 2 ]
Many animals use magnetic orientation based on the Earth's magnetic field to find their way home. This is usually used together with other methods, such as a sun compass, as in bird migration and in the case of turtles. This is also commonly used when no other methods are available, as in the case of lobsters , [ 3 ] which live underwater, and mole rats , [ 4 ] which home through their burrows .
Celestial orientation, navigation using the stars, is commonly used for homing. Displaced marbled newts , for example, can only home when stars are visible. [ 5 ]
There is evidence that olfaction , or smell, is used in homing with several salamanders, such as the red-bellied newt . [ 6 ] Olfaction is also necessary for the homing of salmon . [ 7 ]
Topographic memory, memory of the contours surrounding the destination, is one common method for navigation. This is mainly used by animals with less intelligence, such as molluscs. Limpets use this to find their way back to the home scrape; although whether this is true homing has been disputed. [ 8 ] | https://en.wikipedia.org/wiki/Homing_(biology) |
The homing endonucleases are a collection of endonucleases encoded either as freestanding genes within introns , as fusions with host proteins, or as self-splicing inteins . They catalyze the hydrolysis of genomic DNA within the cells that synthesize them, but do so at very few, or even singular, locations. Repair of the hydrolyzed DNA by the host cell frequently results in the gene encoding the homing endonuclease having been copied into the cleavage site, hence the term 'homing' to describe the movement of these genes. Homing endonucleases can thereby transmit their genes horizontally within a host population, increasing their allele frequency at greater than Mendelian rates.
Although the origin and function of homing endonucleases is still being researched, the most established hypothesis considers them as selfish genetic elements , [ 1 ] similar to transposons , because they facilitate the perpetuation of the genetic elements that encode them independent of providing a functional attribute to the host organism.
Homing endonuclease recognition sequences are long enough to occur randomly only with a very low probability (approximately once every 7 × 10 9 bp ), [ 2 ] and are normally found in one or very few instances per genome . Generally, owing to the homing mechanism, the gene encoding the endonuclease (the HEG, "homing endonuclease gene") is located within the recognition sequence which the enzyme cuts, thus interrupting the homing endonuclease recognition sequence and limiting DNA cutting only to sites that do not (yet) carry the HEG.
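As a rough sanity check on that figure (assuming, for illustration, equal base frequencies and independence between positions), a specific recognition sequence of length n is expected about once every 4 n base pairs, so a frequency of roughly once per 7 × 10 9 bp corresponds to a recognition site of about 16–17 bp, in line with the long sites described above:

```latex
% Expected spacing of an n-bp site, assuming equal base frequencies.
4^{n} \approx 7\times10^{9}
\;\Longrightarrow\;
n \approx \frac{\ln\!\left(7\times10^{9}\right)}{\ln 4} \approx 16.3
```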
Prior to transmission, one allele carries the gene (HEG + ) while the other does not (HEG − ), and is therefore susceptible to being cut by the enzyme. Once the enzyme is synthesized, it breaks the chromosome in the HEG − allele, initiating a response from the cellular DNA repair system. The damage is repaired using recombination , taking the pattern of the opposite, undamaged DNA allele, HEG + , that contains the gene for the endonuclease. Thus, the gene is copied to the allele that initially did not have it and it is propagated through successive generations. [ 3 ] This process is called "homing". [ 3 ]
Homing endonucleases are always indicated with a prefix that identifies their genomic origin, followed by a hyphen: "I-" for homing endonucleases encoded within an intron, "PI-" (for "protein insert") for those encoded within an intein. Some authors have proposed using the prefix "F-" ("freestanding") for viral enzymes and other natural enzymes not encoded by introns nor inteins, [ 4 ] and "H-" ("hybrid") for enzymes synthesized in a laboratory. [ 5 ] Next, a three-letter name is derived from the binominal name of the organism, taking one uppercase letter from the genus name and two lowercase letters from the specific name. (Some mixing is usually done for hybrid enzymes.) Finally, a Roman numeral distinguishes different enzymes found in the same organism:
Homing endonucleases differ from Type II restriction enzymes in several respects: [ 4 ]
Currently there are six known structural families. Their conserved structural motifs are: [ 4 ]
The yeast homing endonuclease PI-Sce is a LAGLIDADG-type endonuclease encoded as an intein that splices itself out of another protein ( P17255 ). The high-resolution structure reveals two domains : an endonucleolytic centre resembling the C-terminal domain of Hedgehog proteins , and a Hint domain (Hedgehog/Intein) containing the protein-splicing active site . [ 31 ] | https://en.wikipedia.org/wiki/Homing_endonuclease |
In chemical nomenclature , nor- is a prefix to name a structural analog that can be derived from a parent compound by the removal of one carbon atom along with the accompanying hydrogen atoms. The nor-compound can be derived by removal of a CH 3 , CH 2 , or CH group, or of a C atom. The "nor-" prefix also includes the elimination of a methylene bridge in a cyclic parent compound, followed by ring contraction . (The prefix " homo- " which indicates the next higher member in a homologous series , is usually limited to noncyclic carbons). [ 1 ] [ 2 ] [ 3 ] The terms desmethyl- or demethyl- are synonyms of "nor-".
"Nor" is an abbreviation of normal. Originally, the term was used to denote the completely demethylated form of the parent compound. [ 4 ] Later, the meaning was restricted to the removal of one group. Nor is written directly in front of the stem name, without a hyphen between, unless there is another prefix after nor (for example α-). If multiple groups are eliminated the prefix dinor, trinor, tetranor, etcetera is used. The prefix is preceded by the position number (locant) of the carbon atoms that disappear (for example 2,3-dinor). The original numbering of the parent compound is retained. According to IUPAC nomenclature, this prefix is not written with italic letters [ 5 ] and unlike nor, when it is a di or higher nor, at the end of the numbers separated by commas, a hyphen is used (as for example 2,3-dinor-6-keto Prostaglandin F1α is produced by beta oxidation of the parent compound 6-keto Prostaglandin F1α). [ 6 ] Here, though, carbon 1 and 2 are lost by oxidation. The new carbon 1 has now become a CCOH similar to the parent compound, looking as if just carbon 2 and 3 have been removed from the parent compound. "Dinor" does not have to be reduction in adjacent carbons, e.g. 5-Acetyl-4,18-dinor-retinoic acid, where 4 referred to a ring carbon and 18 referred to a methyl group on the 5th carbon on the ring. [ 3 ]
The alternative use of "nor" in naming the unbranched form of a compound within a series of isomers (also referred to as "normal") is obsolete and not allowed in IUPAC names.
Perhaps the earliest known use of the prefix "nor" is that by A. Matthiessen and G.C. Foster in 1867 in a publication about the reaction between a strong acid and opianic acid. Opianic acid (C 10 H 10 O 5 ) is a compound with two methyl groups; in the publication in question the authors called it "dimethyl nor-opianic acid". After reaction with a strong acid, a compound was obtained with only one methyl group (C 9 H 8 O 5 ). This partially demethylated opianic acid they called "methyl normal opianic acid". The completely demethylated compound (C 8 H 6 O 5 ) was denoted by the term "normal opianic acid", abbreviated as "nor-opianic acid".
Similarly Matthiessen and Foster called narcotine , which has three methoxy groups, "trimethyl nor-narcotine". The singular demethylated narcotine was called "dimethyl nor-narcotine", the more demethylated narcotine "methyl nor-narcotine" and the completely demethylated form "normal narcotine" or "nor-narcotine". [ 7 ]
"Since that time the meaning of the prefix has been generalized to denote the replacement of one or more methyl groups by H, or the disappearance of CH 2 from a carbon chain" . [ 4 ]
At present, the meaning is restricted to denote the removal of only one group from the parent structure, rather than the completely demethylated form of the parent compound. [ 1 ]
In literature, "nor" is sometimes called the "next lower homologue", although in this context "homologue" is an inexact term. "Nor" only refers to the removal of one carbon atom with the accompanying hydrogen, not the removal of other units. "Nor" compares two related compounds; it does not describe the relation to a homologous series .
It is suggested that "nor" is an acronym of German " N o hne R adikal" (" nitrogen without radical "). At first, the British pharmacologist John H. Gaddum followed this theory, [ 8 ] but in response to a review of A.M. Woolman, [ 9 ] Gaddum retracted his support for this etymology. [ 4 ] Woolman believed that "N ohne Radikal" was a German mnemonic and likely a backronym , rather than the real meaning of the prefix "nor". This can be argued with the fact "that the prefix nor is used for many compounds which contain no nitrogen at all" . [ 9 ]
Originally, "nor" had an ambiguous meaning, as the term "normal" could also refer to the unbranched form in a series of isomers, for example as with alkanes , alkanols and some amino acids. [ 10 ] [ 11 ] [ 12 ]
Names of unbranched alkanes and alkanols, like " normal butane " and " normal propyl alcohol ", which are obsolete now, [ 13 ] have become the prefix n- , however, not "nor". [ 14 ] Other "normal" compounds got the prefix "nor". The IUPAC encourages that older trivial names , like norleucine and norvaline , not be used; [ 11 ] the use of the prefix for isomeric compounds was already discouraged in 1955 or earlier. [ 10 ] | https://en.wikipedia.org/wiki/Homo- |
The homo Favorskii rearrangement is the rearrangement of β-halo ketones and cyclobutanones, which in ring systems may yield ring contraction. [ 1 ] This rearrangement takes place in the presence of a base, yielding a carboxylic acid derivative corresponding to the nucleophile (most often the base itself). E1cb elimination will occur if the α-carbon adjacent to the halogen atom has hydrogens on it.
The reaction proceeds in an analogous manner to that of the Favorskii rearrangement . The major difference is that the cyclopropanone intermediate is replaced by a cyclobutanone intermediate, and therefore the intermediate's formation cannot be viewed as a 2-electron electrocyclization reaction . [ 1 ]
The selectivity is similar to the Favorskii rearrangement in that the most stable carbanion is formed.
The homo-Favorskii rearrangement is a key step in the synthesis of Kelsoene, constructing its four-membered ring. In this particular example, the nucleophile is absent and the base, t-BuOK , is very bulky. Therefore, the cyclobutanone intermediate can be isolated and is further reacted to yield the product. [ 2 ] [ 3 ] | https://en.wikipedia.org/wiki/Homo-Favorskii_rearrangement |
Homo consumericus ( mock Latin for consumerist person) is a neologism used in social sciences, notably by Gilles Lipovetsky in Le Bonheur Paradoxal (2006) [ 2 ] and Gad Saad in his 2007 book, The Evolutionary Bases of Consumption [ 3 ] According to these and other scholars, the phenomenon of mass consumption can be compared to certain traits of human psychology described by evolutionary scientists pointing out similarities between Darwinian principles and consumer behavior . [ 4 ] [ 5 ] Lipovetsky has noted that modern times have brought about the rise of a "third" type of Homo consumericus , who is unpredictable and insatiable. [ 6 ] A similar expression, Homo Consumens , was used by Erich Fromm in Socialist Humanism , written in 1965. [ 7 ] Fromm wrote: "Homo consumens is the man whose main goal is not primarily to own things, but to consume more and more, and thus to compensate for his inner vacuity, passivity, loneliness, and anxiety." The expression Homo Consumens has been used by several other authors, including Mihailo Marković . [ 8 ] | https://en.wikipedia.org/wiki/Homo_consumericus |
The term Homo economicus , or economic man , is the portrayal of humans as agents who are consistently rational and narrowly self-interested , and who pursue their subjectively defined ends optimally . It is a wordplay on Homo sapiens , used in some economic theories and in pedagogy . [ 1 ]
In game theory , Homo economicus is often (but not necessarily) modelled through the assumption of perfect rationality . It assumes that agents always act in a way that maximizes utility as a consumer and profit as a producer , [ 2 ] and that they are capable of arbitrarily complex deductions towards that end. They will always be capable of thinking through all possible outcomes and choosing the course of action which will result in the best possible outcome.
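As an illustration of what "maximizing utility as a consumer" means in this idealised picture, the following sketch enumerates every affordable consumption bundle and picks the one with the highest utility. The goods, prices, budget, and the square-root utility function are all hypothetical choices made for the example; the point is only the exhaustive compare-and-choose behaviour attributed to Homo economicus.

```python
# Toy consumer-choice sketch of the Homo economicus assumption.
# Goods, prices, budget, and the utility form are hypothetical.
from itertools import product

PRICES = {"bread": 2, "cheese": 5}
BUDGET = 20

def utility(bundle):
    # Diminishing marginal utility via square roots (an assumed functional form).
    return sum(qty ** 0.5 for qty in bundle.values())

def affordable_bundles():
    max_bread = BUDGET // PRICES["bread"]
    max_cheese = BUDGET // PRICES["cheese"]
    for b, c in product(range(max_bread + 1), range(max_cheese + 1)):
        if b * PRICES["bread"] + c * PRICES["cheese"] <= BUDGET:
            yield {"bread": b, "cheese": c}

# The idealised agent considers every feasible option and picks the maximum.
best = max(affordable_bundles(), key=utility)
print(best, round(utility(best), 3))
```

A behavioural or boundedly rational agent, by contrast, would typically search only part of this space or rely on heuristics rather than exhaustive comparison.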
The rationality implied in Homo economicus does not restrict what sort of preferences are admissible. Only naive applications of the Homo economicus model assume that agents know what is best for their long-term physical and mental health. For example, an agent's utility function could be linked to the perceived utility of other agents (such as one's husband or children), making Homo economicus compatible with other models such as Homo reciprocans , which emphasizes human cooperation .
As a theory on human conduct, it contrasts to the concepts of behavioral economics , which examines cognitive biases and other irrationalities , and to bounded rationality , which assumes that practical elements such as cognitive and time limitations restrict the rationality of agents.
The term "economic man" was used for the first time in the late nineteenth century by critics of John Stuart Mill 's work on political economy. [ 3 ] Below is a passage from Mill's work that critics referred to:
[Political economy] does not treat the whole of man's nature as modified by the social state, nor of the whole conduct of man in society. It is concerned with him solely as a being who desires to possess wealth, and who is capable of judging the comparative efficacy of means for obtaining that end. [ 4 ]
Later in the same work, Mill stated that he was proposing "an arbitrary definition of man, as a being who inevitably does that by which he may obtain the greatest amount of necessaries, conveniences, and luxuries, with the smallest quantity of labour and physical self-denial with which they can be obtained."
Adam Smith , in The Theory of Moral Sentiments , had claimed that individuals have sympathy for the well-being of others. On the other hand, in The Wealth of Nations , Smith wrote:
It is not from the benevolence of the butcher, the brewer, or the baker that we expect our dinner, but from their regard to their own interest. [ 5 ]
This comment is perfectly in line with the notion of Homo economicus and the idea, propounded by Smith in The Wealth of Nations and, in the 20th century, by the likes of Ayn Rand (in The Virtue of Selfishness , for example), that pursuing one's individual self-interest promotes social well-being. In Book V, Chapter I, Smith argues, "The man whose whole life is spent in performing a few simple operations, of which the effects are perhaps always the same, or very nearly the same, has no occasion to exert his understanding or to exercise his invention in finding out expedients for removing difficulties which never occur. He naturally loses, therefore, the habit of such exertion, and generally becomes as stupid and ignorant as it is possible for a human creature to become." This could be seen as prefiguring one part of Marx's theory of alienation of labor; and also as a pro-worker argument against the division of labor and the restrictions it places upon freedom of occupation. But even so, taken in the context of the work as a whole, Smith clearly intends it in a pro-capitalism, pro-bourgeoisie, way: "removing difficulties", such as reducing the time needed for travel and trade, through "expedients", such as steam-engine ships, here means the typical argument that capitalism brings freedom of entrepreneurship and innovation, which then bring prosperity. Thus, Smith is not unreasonably called "The Father of Capitalism"; early on, he theorized many of today's most widespread and deep-seated pro-capitalism arguments.
Within early neoclassical theory, the role of Homo economicus was tied to a general objective of discovering laws and principles that would accelerate growth in the national economy and improve the welfare of ordinary citizens. These laws and principles were determined by two governing factors, natural and social. [ 6 ] The concept also served as the foundation of the neoclassical theory of the firm , which assumed that individual agents act rationally amongst other rational individuals. [ 7 ] On this view, following Adam Smith, the actions of rational, self-interested agents promote the general good, understood as the efficient allocation of material wealth. However, social scientists have doubted the actual importance of income and wealth to overall happiness in societies. [ 8 ]
The term 'Homo economicus' was initially critiqued for its portrayal of the economic agent as a narrowly defined, money-making animal, a characterization heavily influenced by the works of Adam Smith and John Stuart Mill. Authors from the English Historical School of Economics sought to demote this model from its broad classification under the 'genus homo', arguing that it insufficiently captured the complex ethical and behavioral dimensions of human decision-making. Their critique emphasized the need for a more nuanced understanding of human agency beyond the mere pursuit of economic rationality. [ 9 ]
Economists in the late 19th century—such as Francis Edgeworth , William Stanley Jevons , Léon Walras , and Vilfredo Pareto —built mathematical models on these economic assumptions. In the 20th century, the rational choice theory of Lionel Robbins came to dominate mainstream economics. The term "economic man" then took on a more specific meaning: a person who acted rationally on complete knowledge out of self-interest and the desire for wealth.
Homo economicus is a term used for an approximation or model of Homo sapiens that acts to obtain the highest possible well-being for themself given available information about opportunities and other constraints, both natural and institutional , on their ability to achieve their predetermined goals. This approach has been formalized in certain social sciences models, particularly in economics .
Homo economicus is usually seen as "rational" in the sense that well-being as defined by the utility function is optimized given perceived opportunities. [ 10 ] That is, the individual seeks to attain very specific and predetermined goals to the greatest extent with the least possible cost. Note that this kind of "rationality" does not say that the individual's actual goals are "rational" in some larger ethical, social, or human sense, only that they try to attain them at minimal cost. Only naïve applications of the Homo economicus model assume that this hypothetical individual knows what is best for their long-term physical and mental health and can be relied upon to always make the right decision for themself. See rational choice theory and rational expectations for further discussion; the article on rationality widens the discussion.
As with many models in social science, these assumptions are at best approximations. The term is often used derogatorily in academic literature, perhaps most commonly by sociologists , many of whom tend to prefer structural explanations to ones based on rational action by individuals.
The use of the Latin form Homo economicus is certainly long established; Persky [ 3 ] traces it back to Pareto (1906) [ 11 ] but notes that it may be older. The English term economic man can be found even earlier, in John Kells Ingram 's A History of Political Economy (1888). [ 12 ] The Oxford English Dictionary (O.E.D.) cites the use of Homo oeconomicus by C. S. Devas in his 1883 work The Groundwork of Economics in reference to Mill's writings, as one of a number of phrases that imitate the scientific name for the human species:
Mill has only examined the Homo oeconomicus , or dollar-hunting animal. [ 13 ]
According to the OED , the human genus name Homo is
Used with L. or mock-L. adjs. in names imitating Homo sapiens, etc., and intended to personify some aspect of human life or behaviour (indicated by the adj.). Homo faber ("feIb@(r)) [H. Bergson L'Evolution Créatrice (1907) ii. 151], a term used to designate man as a maker of tools.) Variants are often comic: Homo insipiens; Homo turisticus. [ 14 ]
Note that such forms should logically keep the capital for the "genus" name, i.e., Homo economicus rather than homo economicus. Actual usage is inconsistent.
Amartya Sen has argued there are grave pitfalls in assuming that rationality is limited to selfish rationality. Economics should build into its assumptions the notion that people can give credible commitments to a course of conduct. He demonstrates the absurdity of the narrowness of the assumptions made by some economists with the following example of two strangers meeting on a street. [ 15 ]
"Where is the railway station?" he asks me. "There," I say, pointing at the post office, "and would you please post this letter for me on the way?" "Yes," he says, determined to open the envelope and check whether it contains something valuable.
Homo economicus bases its choices on a consideration of its own personal "utility function".
The system established by the concept of the homo economicus has become the basis for the concepts used in economics. [ 16 ] "Self-interest is the main motivation of human beings in their transactions" is a theoretical structure in the concept of homo economicus. Over the years, economists have studied and discussed institutional economics, behavioural economics, political economy, economic anthropology and ecological economics . The economic man solution is considered to be inadequate and flawed. [ 17 ]
Economists Thorstein Veblen , John Maynard Keynes , Herbert A. Simon , and many of the Austrian School criticise Homo economicus as an actor with too great an understanding of macroeconomics and economic forecasting in his decision making. They stress uncertainty and bounded rationality in the making of economic decisions, rather than relying on the rational man who is fully informed of all circumstances impinging on his decisions. They argue that perfect knowledge never exists, which means that all economic activity implies risk. Austrian economists rather prefer to use as a model tool the Homo agens .
Empirical studies by Amos Tversky questioned the assumption that investors are rational. In 1995, Tversky demonstrated the tendency of investors to make risk-averse choices in gains, and risk-seeking choices in losses. The investors appeared as very risk-averse for small losses but indifferent for a small chance of a very large loss. This violates economic rationality as usually understood. Further research on this subject, showing other deviations from conventionally defined economic rationality, is being done in the growing field of experimental or behavioral economics . Some of the broader issues involved in this criticism are studied in decision theory , of which rational choice theory is only a subset.
Behavioral economists Richard Thaler and Daniel Kahneman have criticized the notion of economic agents possessing stable and well-defined preferences that they consistently act upon in a self-interested manner. Using insights from psychological experiments, they found explanations for anomalies in economic decision-making that seemed to violate rational choice theory. Writing a column in the Journal of Economic Perspectives under the title Anomalies , Thaler wrote features on the many ways observed economic behavior in markets deviated from theory. One such anomaly was the endowment effect by which individual preferences are framed based on reference positions (Kahneman et al., 1990). In an experiment in which one group was given a mug and the other was asked how much they were willing to pay (WTP) for the mug, it was found that the willingness to accept (WTA) of those endowed with the mug greatly exceeded the WTP of the other group. This was seen as falsifying the Coase theorem , in which for every person the WTA equals the WTP, an assumption that underlies the efficient-market hypothesis . From this they argued the endowment effect acts on us by making it painful for us to give up the endowment. Kahneman also argued against the rational-agent model in which agents make decisions with all of the relevant context including weighing all possible future opportunities and risks. Evidence supports the claim that decisions are often made by "narrow framing", with investors making portfolio decisions in isolation from their entire portfolio (Nicholas Barberis et al., 2003). Shlomo Benartzi and Thaler found that investors also tended to use unreasonable time periods in evaluating their investments. [ 18 ]
In criticizing the Homo economicus model, Daniel Kahneman and Amos Tversky noted that many mainstream economists had relied on deductive logic to develop the idea, whereas they themselves applied inductive logic. Further experimental findings at odds with Homo economicus showed that individuals constantly adjust their choices according to changes in their income and market prices. Furthermore, Kahneman and Tversky conducted experiments exploring prospect theory , where results from several experiments concluded that individuals generally put higher importance on avoiding a loss than on making a gain. [ 6 ] Homo economicus assumptions have been criticized not only by economists on the basis of logical arguments, but also on empirical grounds by cross-cultural comparison. Economic anthropologists such as Marshall Sahlins , [ 19 ] Karl Polanyi , [ 20 ] Marcel Mauss [ 21 ] and Maurice Godelier [ 22 ] have demonstrated that in traditional societies, choices people make regarding production and exchange of goods follow patterns of reciprocity which differ sharply from what the Homo economicus model postulates. Such systems have been termed gift economy rather than market economy. Criticisms of the Homo economicus model put forward from the standpoint of ethics usually refer to this traditional ethic of kinship-based reciprocity that held together traditional societies. Philosophers Amartya Sen and Axel Honneth are noted for their criticisms of the normative assumptions made by the self-interested utility function. [ 23 ]
Swiss economist Bruno Frey points to the excessive emphasis on extrinsic motivation (rewards and punishments from the social environment) as opposed to intrinsic motivation . For example, it is difficult if not impossible to understand how Homo economicus would be a hero in war or would get inherent pleasure from craftsmanship . Frey and others argue that too much emphasis on rewards and punishments can "crowd out" (discourage) intrinsic motivation: paying a boy for doing household tasks may push him from doing those tasks "to help the family" to doing them simply for the reward.
Another weakness is highlighted by economic sociologists, who argue that Homo economicus ignores an extremely important question, i.e. the origins of tastes and the parameters of the utility function by social influences, training, education, and the like. The exogeneity of tastes (preferences) in this model is the major distinction from Homo sociologicus , in which tastes are taken as partially or even totally determined by the societal environment. Comparisons between economics and sociology have resulted in a corresponding term Homo sociologicus , introduced by German sociologist Ralf Dahrendorf in 1958, to parody the image of human nature given in some sociological models that attempt to limit the social forces that determine individual tastes and social values. [ 24 ] (The alternative or additional source of these would be biology.) Hirsch et al. say that Homo sociologicus is largely a tabula rasa upon which societies and cultures write values and goals; unlike economicus , sociologicus acts not to pursue selfish interests but to fulfill social roles [ 25 ] (though the fulfillment of social roles may have a selfish rationale—e.g. politicians or socialites ). This "individual" may appear to be all society and no individual.
The science of " neuroeconomics ", emerging as of 2015, suggests that there are serious shortcomings in the conventional theories of economic rationality. [ 26 ] Rational economic decision making has been shown to produce high levels of cortisol , epinephrine and corticosteroids , associated with elevated levels of stress. It seems that the dopaminergic system is only activated upon achieving the reward, and otherwise the "pain" receptors, particularly in the prefrontal cortex of the left hemisphere of the brain, show a high level of activation. [ 27 ] Serotonin and oxytocin levels are minimised, and the general immune system shows a level of suppression. Such a pattern is associated with a generalised reduction in the levels of trust. Unsolicited "gift giving", considered irrational from the point of view of Homo economicus , by comparison, shows an elevated stimulation of the pleasure circuits of the whole brain, reduction in the levels of stress, optimal functioning of the immune system, reduction in corticosteroids, epinephrine and cortisol, activation of the substantia nigra , the striatum and the nucleus accumbens (associated with the placebo effect ), all associated with the building of social trust. Mirror neurons result in a win-win positive sum game in which the person giving the gift receives a pleasure equivalent to the person receiving it. [ 28 ] This confirms the findings of anthropology which suggest that a " gift economy " preceded the more recent market systems where win-lose or risk-avoidance lose-lose calculations apply. [ 29 ]
Critics [ citation needed ] learning from the broadly defined psychoanalytic tradition criticize the Homo economicus model as ignoring the inner conflicts that real-world individuals suffer, as between short-term and long-term goals ( e.g., eating chocolate cake and losing weight) or between individual goals and societal values. Such conflicts may lead to "irrational" behavior involving inconsistency, psychological paralysis, neurosis, and psychic pain. Further irrational human behaviour can occur as a result of habit, laziness, mimicry and simple obedience. [ citation needed ] According to Sergio Caruso, one should distinguish between the purely "methodological" version of Homo economicus , aimed at practical use in the economic sphere (e.g. economic calculus), and the "anthropological" version, aimed at depicting a certain type of man, or even human nature in general. The former has proved unrealistic and liable to be corrected by resorting to economic psychology . Depicting different types of "economic man" (each depending on the social context) is possible with the help of cultural anthropology and social psychology , provided those types are conceived as socially and/or historically determined abstractions (such as Weber's , Korsch's , and Fromm's concepts of Idealtypus , "historical specification", and "social character"). Marxist theoretician Gramsci admitted the Homo economicus as a useful abstraction within economic theory, provided that we grant there are as many homines oeconomici as there are modes of production. However the concept of Homo economicus puts aside all other aspects of human nature (such as Homo faber , Homo loquens , Homo ludens , Homo reciprocans , and so on). [ 30 ] [ page needed ]
In advanced-level theoretical economics, scholars have modified models enough to more realistically depict real-life decision-making. For example, models of individual behavior under bounded rationality and of people suffering from envy can be found in the literature. [ 31 ] | https://en.wikipedia.org/wiki/Homo_economicus |
Homoaromaticity , in organic chemistry , refers to a special case of aromaticity in which conjugation is interrupted by a single sp 3 hybridized carbon atom. Although this sp 3 center disrupts the continuous overlap of p-orbitals , traditionally thought to be a requirement for aromaticity, considerable thermodynamic stability and many of the spectroscopic, magnetic, and chemical properties associated with aromatic compounds are still observed for such compounds. This formal discontinuity is apparently bridged by p-orbital overlap, maintaining a contiguous cycle of π electrons that is responsible for this preserved chemical stability . [ 1 ]
The concept of homoaromaticity was pioneered by Saul Winstein in 1959, prompted by his studies of the “tris-homocyclopropenyl” cation. [ 2 ] Since the publication of Winstein's paper, much research has been devoted to understanding and classifying these molecules, which represent an additional class of aromatic molecules included under the continuously broadening definition of aromaticity.
To date, homoaromatic compounds are known to exist as cationic and anionic species, and some studies support the existence of neutral homoaromatic molecules, though these are less common. [ 3 ] The 'homotropylium' cation (C 8 H 9 + ) is perhaps the best studied example of a homoaromatic compound.
The term "homoaromaticity" derives from the structural similarity between homoaromatic compounds and the analogous homo-conjugated alkenes previously observed in the literature. [ 2 ] The IUPAC Gold Book requires that Bis-, Tris-, etc. prefixes be used to describe homoaromatic compounds in which two, three, etc. sp 3 centers separately interrupt conjugation of the aromatic system.
The concept of homoaromaticity has its origins in the debate over the non-classical carbonium ions that occurred in the 1950s. Saul Winstein , a famous proponent of the non-classical ion model, first described homoaromaticity while studying the 3-bicyclo[3.1.0]hexyl cation.
In a series of acetolysis experiments, Winstein et al. observed that the solvolysis reaction occurred empirically faster when the tosyl leaving group was in the equatorial position. The group ascribed this difference in reaction rates to the anchimeric assistance invoked by the "cis" isomer. This result thus supported a non-classical structure for the cation. [ 4 ]
Winstein subsequently observed that this non-classical model of the 3-bicyclo[3.1.0]hexyl cation is analogous to the previously well-studied aromatic cyclopropenyl cation. Like the cyclopropenyl cation, positive charge is delocalized over three equivalent carbons containing two π electrons. This electronic configuration thus satisfies Huckel's rule (requiring 4n+2 π electrons) for aromaticity. Indeed, Winstein noticed that the only fundamental difference between this aromatic propenyl cation and his non-classical hexyl cation was the fact that, in the latter ion, conjugation is interrupted by three −CH 2 − units . The group thus proposed the name "tris-homocyclopropenyl"—the tris-homo counterpart to the cyclopropenyl cation.
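The (4n+2) π-electron count invoked here can be checked mechanically. The sketch below is only an arithmetic illustration of Hückel's rule; the electron counts in the loop are chosen as examples.

```python
def satisfies_huckel(pi_electrons: int) -> bool:
    """True if the delocalized pi-electron count equals 4n + 2 for some integer n >= 0."""
    return pi_electrons >= 2 and (pi_electrons - 2) % 4 == 0

# 2 electrons (cyclopropenyl / tris-homocyclopropenyl cation) and 6 electrons
# (tropylium / homotropylium cation) satisfy the rule; 4 and 8 do not.
for count in (2, 4, 6, 8):
    print(count, satisfies_huckel(count))
```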
The criterion for aromaticity has evolved as new developments and insights continue to contribute to our understanding of these remarkably stable organic molecules . [ 5 ] The required characteristics of these molecules have thus remained the subject of some controversy. Classically, aromatic compounds were defined as planar molecules that possess a cyclically delocalized system of (4n+2)π electrons, satisfying Huckel's rule . Most importantly, these conjugated ring systems are known to exhibit enormous thermochemical stability relative to predictions based on localized resonance structures. Three important features seem to characterize aromatic compounds: [ 6 ]
A number of exceptions to these conventional rules exist, however. Many molecules, including Möbius 4nπ electron species, pericyclic transition states , molecules in which delocalized electrons circulate in the ring plane or through σ (rather than π ) bonds, many transition-metal sandwich molecules, and others have been deemed aromatic though they somehow deviate from the conventional parameters for aromaticity. [ 7 ]
Consequently, the criterion for homoaromatic delocalization remains similarly ambiguous and somewhat controversial. The homotropylium cation, (C 8 H 9 + ), though not the first example of a homoaromatic compound ever discovered, has proven to be the most studied of the compounds classified as homoaromatic, and is therefore often considered the classic example of homoaromaticity. By the mid-1980s, there were more than 40 reported substituted derivatives of the homotropylium cation, reflecting the importance of this ion in formulating our understanding of homoaromatic compounds. [ 6 ]
After initial reports of a "homoaromatic" structure for the tris-homocyclopropenyl cation were published by Winstein, many groups began to report observations of similar compounds. One of the best studied of these molecules is the homotropylium cation, the parent compound of which was first isolated as a stable salt by Pettit, et al. in 1962, when the group reacted cyclooctatetraene with strong acids. [ 8 ] Much of the early evidence for homoaromaticity comes from observations of unusual NMR properties associated with this molecule.
While characterizing the compound resulting from protonation of cyclooctatetraene by 1 H NMR spectroscopy , the group observed that the resonance corresponding to the two protons bonded to the same methylene bridge carbon exhibited an astonishing degree of separation in chemical shift .
From this observation, Pettit, et al. concluded that the classical structure of the cyclooctatrienyl cation must be incorrect. Instead, the group proposed the structure of the bicyclo[5.1.0]octadienyl compound, theorizing that the cyclopropane bond located on the interior of the eight-membered ring must be subject to considerable delocalization , thus explaining the dramatic difference in observed chemical shift. Upon further consideration, Pettit was inclined to represent the compound as the "homotropylium ion," which shows the "internal cyclopropane" bond totally replaced by electron delocalization. This structure shows how delocalization is cyclic and involves 6 π electrons, consistent with Huckel's rule for aromaticity. The magnetic field of the NMR could thus induce a ring current in the ion, responsible for the significant differences in resonance between the exo and endo protons of this methylene bridge. Pettit, et al. thus emphasized the remarkable similarity between this compound and the aromatic tropylium ion, describing a new "homo-counterpart" to an aromatic species already known, precisely as predicted by Winstein.
Subsequent NMR studies undertaken by Winstein and others sought to evaluate the properties of metal carbonyl complexes with the homotropylium ion. Comparison between a molybdenum-complex and an iron-complex proved particularly fruitful. Molybdenum tricarbonyl was expected to coordinate to the homotropylium cation by accepting 6 π electrons, thereby preserving the homoaromatic features of the complex. By contrast, iron tricarbonyl was expected to coordinate to the cation by accepting only 4 π electrons from the homotropylium ion, creating a complex in which the electrons of the cation are localized. Studies of these complexes by 1 H NMR spectroscopy showed a large difference in chemical shift values for methylene protons of the Mo-complex, consistent with a homoaromatic structure, but detected virtually no comparable difference in resonance for the same protons in the Fe-complex. [ 9 ]
An important piece of early evidence in support of the homotropylium cation structure that did not rely on the magnetic properties of the molecule involved the acquisition of its UV spectrum . Winstein et al. determined that the absorption maxima for the homotropylium cation exhibited a considerably shorter wavelength than would be predicted for the classical cyclooctatrienyl cation or the bicyclo[5.1.0]octadienyl compound with the fully formed internal cyclopropane bond (and a localized electronic structure). Instead, the UV spectrum most resembled that of the aromatic tropylium ion . Further calculations allowed Winstein to determine that the bond order between the two carbon atoms adjacent to the outlying methylene bridge is comparable to that of the π-bond separating the corresponding carbon atoms in the tropylium cation. [ 10 ] Although this experiment proved to be highly illuminating, UV spectra are generally considered to be poor indicators of aromaticity or homoaromaticity. [ 6 ]
More recently, work has been done to investigate the structure of the purportedly homoaromatic homotropylium ion by employing various other experimental techniques and theoretical calculations. One key experimental study involved analysis of a substituted homotropylium ion by X-ray crystallography . These crystallographic studies have been used to demonstrate that the internuclear distance between the atoms at the base of the cyclopropenyl structure is indeed longer than would be expected for a normal cyclopropane molecule, while the external bonds appear to be shorter, indicating involvement of the internal cyclopropane bond in charge delocalization. [ 6 ]
The molecular orbital explanation of the stability of homoaromaticity has been widely discussed with numerous diverse theories, mostly focused on the homotropenylium cation as a reference. R.C. Haddon initially proposed a Möbius model in which the outer electrons of the sp 3 hybridized methylene bridge carbon (C2) back-donate to the adjacent carbons, stabilizing the C1-C3 distance. [ 11 ]
Homoaromaticity can better be explained using Perturbation Molecular Orbital Theory (PMO) as described in a 1975 study by Robert C. Haddon. The homotropenylium cation can be considered as a perturbed version of the tropenylium cation due to the addition of a homoconjugate linkage interfering with the resonance of the original cation. [ 12 ]
The most important factor in influencing homoaromatic character is the addition of a single homoconjugate linkage into the parent aromatic compound. The location of the homoconjugate bond is not important as all homoaromatic species can be derived from aromatic compounds that possess symmetry and equal bond order between all carbons. The insertion of a homoconjugate linkage perturbs the π-electron density an amount δβ, which depending on the ring size, must be greater than 0 and less than 1, where 0 represents no perturbation and 1 represents total loss of aromaticity (destabilization equivalent to the open chain form). [ 12 ] It is believed that with increasing ring size, the resonance stabilization of homoaromaticity is offset by the strain in forming the homoconjugate bridge. In fact, the maximum ring size for homoaromaticity is fairly low as a 16-membered annulene ring favours the formation of the aromatic dication over the strained bridged homocation. [ 13 ]
A significant second-order effect on the Perturbation Molecular Orbital model of homoaromaticity is the addition of a second homoconjugate linkage and its influence on stability. The effect is often a doubling of the instability brought about by the addition of a single homoconjugate linkage, although there is an additional term that depends on the proximity of the two linkages. In order to minimize δβ and thus keep the coupling term to a minimum, bishomoaromatic compounds form depending on the conformation of greatest stability by resonance and smallest steric hindrance. The synthesis of the 1,3-bishomotropenylium cation by protonating cis-bicyclo[6.1.0]nona-2,4,6-triene agrees with theoretical calculations and maximizes stability by forming the two methylene bridges at the 1st and 3rd carbons. [ 12 ]
The addition of a substituent to a homoaromatic compound has a large influence over the stability of the compound. Depending on the relative locations of the substituent and the homoconjugate linkage, the substituent can have either a stabilizing or a destabilizing effect. This interaction is best demonstrated by looking at a substituted tropenylium cation. If an inductively electron-donating group is attached to the cation at the 1st or 3rd carbon position, it has a stabilizing effect, improving the homoaromatic character of the compound. However, if this same substituent is attached at the 2nd or 4th carbon, the interaction between the substituent and the homoconjugate bridge has a destabilizing effect. Therefore, protonation of methyl- or phenyl-substituted cyclooctatetraenes will result in the 1-substituted isomer of the homotropenylium cation. [ 12 ]
Following the discovery of the first homoaromatic compounds, research has gone into synthesizing new homoaromatic compounds that possess similar stability to their aromatic parent compounds. There are several classes of homoaromatic compounds, each of which has been predicted theoretically and proven experimentally.
The most established and well-known homoaromatic species are cationic homoaromatic compounds. As stated earlier, the homotropenylium cation is one of the most studied homoaromatic compounds. Many homoaromatic cationic compounds use as a basis a cyclopropenyl cation, a tropylium cation, or a cyclobutadiene dication as these compounds exhibit strong aromatic character. [ 14 ]
In addition to the homotropylium cation, another well established cationic homoaromatic compound is the norbornen-7-yl cation, which has been shown to be strongly homoaromatic, proven both theoretically and experimentally. [ 15 ]
An intriguing case of σ-bishomoaromaticity can be found in the dications of pagodanes . In these 4-center-2-electron systems the delocalization happens in the plane that is defined by the four carbon atoms (prototype for the phenomenon of σ-aromaticity is cyclopropane which gains about 11.3 kcal mol −1 stability from the effect [ 16 ] ). The dications are accessible either via oxidation of pagodane or via oxidation of the corresponding bis-seco-dodecahedradiene: [ 17 ]
Reduction to the corresponding six-electron dianions has not been possible so far.
There are many classes of neutral homoaromatic compounds although there is much debate as to whether they truly exhibit homoaromatic character or not.
One class of neutral homoaromatics is the monohomoaromatics, one of which is cycloheptatriene, and numerous complex monohomoaromatics have been synthesized. One particular example is a 60-carbon fulleroid derivative that has a single methylene bridge. UV and NMR analysis have shown that the aromatic character of this modified fulleroid is not disrupted by the addition of a homoconjugate linkage; therefore this compound is definitively homoaromatic. [ 18 ]
Substituted neutral barbaralane derivatives (homoannulenes) have been disclosed as stable ground state homoaromatic molecules in 2023. Evidence for the homoaromatic character in this class of molecules stems from bond length analysis ( X-Ray structural analysis ) as well as shifts in the NMR spectrum. [ 19 ] [ 20 ] The homoannulenes also act as photoswitches by which means a local 6π homoaromaticity can be switched to a global 10π homoaromaticity.
It was long considered that the best examples of neutral homoaromatics are bishomoaromatics such as barrelene and semibullvalene. First synthesized in 1966, [ 21 ] semibullvalene has a structure that should lend itself well to homoaromaticity, although there has been much debate whether semibullvalene derivatives can provide a true delocalized, ground state neutral homoaromatic compound or not. In an effort to further stabilize the delocalized transition structure by substituting semibullvalene with electron donating and accepting groups , it has been found that the activation barrier to this rearrangement can be lowered, but not eliminated. [ 22 ] [ 23 ] However, with the introduction of ring strain into the molecule, aimed at destabilizing the localized ground-state structures through the strategic addition of cyclic annulations, a delocalized homoaromatic ground-state structure can indeed be achieved. [ 24 ]
Of the neutral homoaromatics, the compounds most widely believed to exhibit neutral homoaromaticity are boron-containing compounds: 1,2-diboretane and its derivatives. Substituted diboretanes are shown to have a much greater stabilization in the delocalized state over the localized one, giving strong indications of homoaromaticity. [ 25 ] When electron-donating groups are attached to the two boron atoms, the compound favors a classical model with localized bonds. Homoaromatic character is best seen when electron-withdrawing groups are bonded to the boron atoms, causing the compound to adopt a nonclassical, delocalized structure.
As the name suggests, trishomoaromatics are defined as containing one additional methylene bridge compared to bishomoaromatics, therefore containing three of these homoconjugate bridges in total. Just like semibullvalene, there is still much debate as to the extent of the homoaromatic character of trishomoaromatics. While theoretically they are homoaromatic, these compounds show a stabilization of no more than 5% of benzene due to delocalization. [ 26 ]
Unlike neutral homoaromatic compounds, anionic homoaromatics are widely accepted to exhibit "true" homoaromaticity. These anionic compounds are often prepared from their neutral parent compounds through lithium metal reduction. 1,2-diboretanide derivatives show strong homoaromatic character through their three-atom (boron, boron, carbon), two-electron bond, which contains shorter C-B bonds than in the neutral classical analogue. [ 27 ] These 1,2-diboretanides can be expanded to larger ring sizes with different substituents and all contain some degree of homoaromaticity.
Anionic homoaromaticity can also be seen in dianionic bis-diazene compounds, which contain a four-atom (four nitrogens), six-electron center. Experimental results have shown the shortening of the transannular nitrogen-nitrogen distance, demonstrating that the dianionic bis-diazene is a type of anionic bishomoaromatic compound. A peculiar feature of these systems is that the cyclic electron delocalization takes place in the σ-plane defined by the four nitrogens. These bis-diazene dianions are therefore the first examples of 4-center-6-electron σ-bishomoaromaticity . [ 28 ] [ 29 ] The corresponding two-electron σ-bishomoaromatic systems were realized in the form of pagodane dications (see above).
There are also reports of antihomoaromatic compounds. Just as aromatic compounds exhibit exceptional stability, antiaromatic compounds, which deviate from Huckel's rule and contain a closed loop of 4n π electrons, are relatively unstable. The bridged bicyclo[3.2.1]octa-3,6-dien-2-yl cation contains only 4 π electrons, and is therefore "bishomoantiaromatic." A series of theoretical calculations confirm that it is indeed less stable than the corresponding allyl cation. [ 30 ]
Similarly, a substituted bicyclo[3.2.1]octa-3,6-dien-2-yl cation (the 2-(4'-fluorophenyl)bicyclo[3.2.1]octa-3,6-dien-2-yl cation) was also shown to be antiaromatic when compared to its corresponding allyl cation, as corroborated by theoretical calculations as well as by NMR analysis. [ 30 ]
In acid–base chemistry , homoassociation (an IUPAC term) [ 2 ] is an association between a base and its conjugate acid through a hydrogen bond . [ 1 ]
Most commonly, homoassociation leads to the enhancement of the acidity of an acid by itself. The effect is accentuated at high concentrations , i.e. the ionization of an acid varies nonlinearly with concentration. This effect arises from the stabilization of the conjugate base by its formation of a hydrogen bond to the parent acid. A well known case is hydrofluoric acid , which is a significantly stronger acid when concentrated than when dilute due to the following equilibria:
HF ⇌ H + + F −
F − + HF ⇌ HF 2 −
Overall: 2 HF ⇌ H + + HF 2 −
The bifluoride anion (HF 2 − ) encourages the ionization of HF by stabilizing the F − . Thus, the usual ionization constant for hydrofluoric acid (10 −3.15 ) understates the acidity of concentrated solutions of HF.
The effect of homoassociation is often high in solvents other than water (non- aqueous solutions ), wherein dissociation is often low. Carboxylic acids and phenols exhibit this effect, [ 3 ] for example in sodium diacetate . | https://en.wikipedia.org/wiki/Homoassociation |
Homochirality is a uniformity of chirality , or handedness. Objects are chiral when they cannot be superposed on their mirror images. For example, the left and right hands of a human are approximately mirror images of each other but are not their own mirror images, so they are chiral. In biology , 19 of the 20 natural amino acids are homochiral, being L -chiral (left-handed), with the exception of glycine , which is achiral (its own mirror image), while sugars are D -chiral (right-handed). [ 1 ] Homochirality can also refer to enantiopure substances in which all the constituents are the same enantiomer (a right-handed or left-handed version of an atom or molecule), but some sources discourage this use of the term.
It is unclear whether homochirality has a purpose; however, it appears to be a form of information storage. [ 2 ] One suggestion is that it reduces entropy barriers in the formation of large organized molecules. [ 3 ] It has been experimentally verified that amino acids form large aggregates in larger abundance from enantiopure samples of the amino acid than from racemic (enantiomerically mixed) ones. [ 3 ]
It is not clear whether homochirality emerged before or after life, and many mechanisms for its origin have been proposed. [ 4 ] Some of these models propose three distinct steps: mirror-symmetry breaking creates a minute enantiomeric imbalance, chiral amplification builds on this imbalance, and chiral transmission is the transfer of chirality from one set of molecules to another.
Amino acids are the building blocks of peptides and enzymes while sugar-peptide chains are the backbone of RNA and DNA . [ 5 ] [ 6 ] In biological organisms, amino acids appear almost exclusively in the left-handed form ( L -amino acids) and sugars in the right-handed form ( D -sugars). [ 7 ] [ failed verification ] Since the enzymes catalyze reactions, they enforce homochirality on a great variety of other chemicals, including hormones , toxins, fragrances and food flavors. [ 8 ] : 493–494 Glycine is achiral, as are some other non- proteinogenic amino acids that are either achiral (such as dimethylglycine ) or of the D enantiomeric form.
Biological organisms easily discriminate between molecules with different chiralities. This can affect physiological reactions such as smell and taste. Carvone , a terpenoid found in essential oils , smells like spearmint in its R -form and caraway in its S -form. [ 8 ] : 494 [ failed verification ] Limonene tastes like citrus when right-handed and pine when left-handed. [ 9 ] : 168
Homochirality also affects the response to drugs. Thalidomide , in its left-handed form, alleviates morning sickness ; in its right-handed form, it causes birth defects. [ 9 ] : 168 Unfortunately, even if a pure left-handed version is administered, some of it can convert to the right-handed form in the patient. [ 10 ] Many drugs are available as both a racemic mixture (equal amounts of both chiralities) and an enantiopure drug (only one chirality). Depending on the manufacturing process, enantiopure forms can be more expensive to produce than stereochemical mixtures. [ 9 ] : 168
Chiral preferences can also be found at a macroscopic level. Snail shells can be right-turning or left-turning helices, but one form or the other is strongly preferred in a given species. In the edible snail Helix pomatia , only one out of 20,000 is left-helical. [ 11 ] : 61–62 The coiling of plants can have a preferred chirality and even the chewing motion of cows has a 10% excess in one direction. [ 12 ]
Theories for the origin of homochirality in the molecules of life can be classified as deterministic or based on chance depending on their proposed mechanism. If there is a relationship between cause and effect — that is, a specific chiral field or influence causing the mirror symmetry breaking — the theory is classified as deterministic; otherwise it is classified as a theory based on chance (in the sense of randomness) mechanisms. [ 13 ]
Another classification for the different theories of the origin of biological homochirality could be made depending on whether life emerged before the enantiodiscrimination step (biotic theories) or afterwards (abiotic theories). Biotic theories claim that homochirality is simply a result of the natural autoamplification process of life: either the emergence of life with a preference for one chirality or the other was a rare chance event that happened to occur with the chiralities we observe, or life of all chiralities emerged rapidly but, due to catastrophic events and strong competition, the other unobserved chiral preferences were wiped out by the preponderance and metabolic, enantiomeric enrichment of the 'winning' chirality choices. [ citation needed ] If this were the case, remains of the extinct chirality sign should be found. Since this is not the case, biotic theories are nowadays no longer supported.
The emergence of chirality consensus as a natural autoamplification process has also been associated with the 2nd law of thermodynamics . [ 14 ]
Deterministic theories can be divided into two subgroups: if the initial chiral influence took place in a specific space or time location (averaging zero over large enough areas of observation or periods of time), the theory is classified as local deterministic; if the chiral influence is permanent at the time the chiral selection occurred, then it is classified as universal deterministic. The classification groups for local deterministic theories and theories based on chance mechanisms can overlap. Even if an external chiral influence produced the initial chiral imbalance in a deterministic way, the outcome sign could be random since the external chiral influence has its enantiomeric counterpart elsewhere.
In deterministic theories, the enantiomeric imbalance is created due to an external chiral field or influence, and the ultimate sign imprinted in biomolecules will be due to it. Deterministic mechanisms for the production of non-racemic mixtures from racemic starting materials include: asymmetric physical laws, such as the electroweak interaction (via cosmic rays [ 15 ] ) or asymmetric environments, such as those caused by circularly polarized light, quartz crystals , or the Earth's rotation, β-Radiolysis or the magnetochiral effect. [ 16 ] [ 17 ] The most accepted universal deterministic theory is the electroweak interaction. Once established, chirality would be selected for. [ 18 ]
One supposition is that the discovery of an enantiomeric imbalance in molecules in the Murchison meteorite supports an extraterrestrial origin of homochirality: there is evidence for the existence of circularly polarized light originating from Mie scattering on aligned interstellar dust particles which may trigger the formation of an enantiomeric excess within chiral material in space. [ 11 ] : 123–124 Interstellar and near-stellar magnetic fields can align dust particles in this fashion. [ 19 ] Another speculation (the Vester-Ulbricht hypothesis) suggests that fundamental chirality of physical processes such as that of the beta decay (see Parity violation ) leads to slightly different half-lives of biologically relevant molecules.
Chance theories are based on the assumption that " Absolute asymmetric synthesis, i.e., the formation of enantiomerically enriched products from achiral precursors without the intervention of chiral chemical reagents or catalysts, is in practice unavoidable on statistical grounds alone ". [ 20 ]
Consider the racemic state as a macroscopic property described by a binomial distribution; the experiment of tossing a coin, where the two possible outcomes are the two enantiomers is a good analogy. The discrete probability distribution P p ( n , N ) {\displaystyle P_{p}(n,N)} of obtaining n successes out of N {\displaystyle N} Bernoulli trials, where the result of each Bernoulli trial occurs with probability p {\displaystyle p} and the opposite occurs with probability q = ( 1 − p ) {\displaystyle q=(1-p)} is given by:
P p ( n , N ) = ( N n ) p n ( 1 − p ) N − n {\displaystyle P_{p}(n,N)={\binom {N}{n}}p^{n}(1-p)^{N-n}} .
The discrete probability distribution P ( N / 2 , N ) {\displaystyle P(N/2,N)} of having exactly N / 2 {\displaystyle N/2} molecules of one chirality and N / 2 {\displaystyle N/2} of the other, is given by:
P 1 / 2 ( N / 2 , N ) = ( N N / 2 ) ( 1 2 ) N / 2 ( 1 2 ) N / 2 ≈ 2 π N {\displaystyle P_{1/2}(N/2,N)={\binom {N}{N/2}}\left({\frac {1}{2}}\right)^{N/2}\left({\frac {1}{2}}\right)^{N/2}\approx {\sqrt {\frac {2}{\pi N}}}} .
As in the experiment of tossing a coin, in this case, we assume both events ( L {\displaystyle L} or D {\displaystyle D} ) to be equiprobable, p = q = 1 / 2 {\displaystyle p=q=1/2} . The probability of having exactly the same amount of both enantiomers is inversely proportional to the square root of the total number of molecules N {\displaystyle N} . For one mol of a racemic compound, N = N A ≈ 6.022 ⋅ 10 23 {\displaystyle N=N_{A}\approx 6.022\cdot 10^{23}} molecules, this probability becomes P 1 / 2 ( N A / 2 , N A ) ≈ 10 − 12 {\displaystyle P_{1/2}(N_{A}/2,N_{A})\approx 10^{-12}} . The probability of finding the racemic state is so small that we can consider it negligible.
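A short numerical check of this estimate is sketched below: the exact binomial term is evaluated for a small N, and the large-N approximation √(2/πN) derived above is used where the factorials become impractical. The crossover value of 1,000 is an arbitrary choice for the sketch.

```python
import math

def prob_exact_racemic(N: int) -> float:
    """P_1/2(N/2, N): probability of exactly N/2 molecules of each handedness."""
    if N <= 1_000:                              # exact binomial for small N
        return math.comb(N, N // 2) * 0.5 ** N
    return math.sqrt(2.0 / (math.pi * N))       # large-N approximation from the text

N_A = 6_022 * 10**20                            # roughly Avogadro's number
print(prob_exact_racemic(100))                  # ~0.0796 for 100 molecules
print(prob_exact_racemic(N_A))                  # ~1e-12 for one mole, as stated above
```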
In this scenario, there is a need to amplify the initial stochastic enantiomeric excess through any efficient mechanism of amplification. [ 4 ] The most likely path for this amplification step is by asymmetric autocatalysis . An autocatalytic chemical reaction is one in which the reaction product is itself a reactant; in other words, a chemical reaction is autocatalytic if the reaction product is itself the catalyst of the reaction. In asymmetric autocatalysis, the catalyst is a chiral molecule, which means that a chiral molecule is catalysing its own production. An initial enantiomeric excess, such as can be produced by polarized light, then allows the more abundant enantiomer to outcompete the other.
In 1953, Charles Frank proposed a model to demonstrate that homochirality is a consequence of autocatalysis . [ 21 ] [ 22 ] In his model the L and D enantiomers of a chiral molecule are autocatalytically produced from an achiral molecule A
A + L → k a 2 L , A + D → k a 2 D {\displaystyle {\begin{aligned}A+L&\xrightarrow {k_{a}} 2L,\\A+D&\xrightarrow {k_{a}} 2D\end{aligned}}}
while suppressing each other through a reaction that he called mutual antagonism
L + D → k d ∅ . {\displaystyle {\begin{aligned}L+D\xrightarrow {k_{d}} \varnothing .\\\end{aligned}}}
In this model the racemic state is unstable in the sense that the slightest enantiomeric excess will be amplified to a completely homochiral state. This can be shown by computing the reaction rates from the law of mass action :
d [ L ] d t = k a [ A ] [ L ] − k d [ L ] [ D ] , d [ D ] d t = k a [ A ] [ D ] − k d [ L ] [ D ] {\displaystyle {\frac {d[L]}{dt}}=k_{a}[A][L]-k_{d}[L][D],\qquad {\frac {d[D]}{dt}}=k_{a}[A][D]-k_{d}[L][D]}
where k a {\displaystyle k_{a}} is the rate constant for the autocatalytic reactions, k d {\displaystyle k_{d}} is the rate constant for mutual antagonism reaction, and the concentration of A is kept constant for simplicity.
The analytical solution for the ratio of the two concentrations is found to be [ L ] / [ D ] = [ L ] 0 / [ D ] 0 e k d ( [ L ] 0 − [ D ] 0 ) ( e k a t − 1 ) {\displaystyle [L]/[D]=[L]_{0}/[D]_{0}\,e^{kd([L]_{0}-[D]_{0})(e^{k_{a}t}-1)}} . The ratio [ L ] / [ D ] {\displaystyle [L]/[D]} increases at a more than exponential rate if ( [ L ] 0 − [ D ] 0 ) {\displaystyle ([L]_{0}-[D]_{0})} is positive (and vice versa). Every starting condition different from
[ L ] 0 = [ D ] 0 {\displaystyle [L]_{0}=[D]_{0}} leads to one of the asymptotes [ L ] = 0 {\displaystyle [L]=0} or [ D ] = 0 {\displaystyle [D]=0} . Thus the equality of [ L ] 0 {\displaystyle [L]_{0}} and [ D ] 0 {\displaystyle [D]_{0}} , and so of [ L ] {\displaystyle [L]} and [ D ] {\displaystyle [D]} , represents a condition of unstable equilibrium, this result depending on the presence of the term representing mutual antagonism.
By defining the enantiomeric excess e e {\displaystyle ee} as
e e = [ L ] − [ D ] [ L ] + [ D ] , {\displaystyle ee={\frac {[L]-[D]}{[L]+[D]}},}
the rate of change of enantiomeric excess can be calculated using the chain rule from the rate of change of the concentrations of enantiomers L and D .
Linear stability analysis of this equation shows that the racemic state e e = 0 {\displaystyle ee=0} is unstable. Starting from almost everywhere in the concentration space, the system evolves to a homochiral state.
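A small numerical sketch of Frank's scheme shows this instability directly: integrating the two rate equations above from a tiny initial imbalance drives the enantiomeric excess toward ±1. The rate constants, concentrations, time step and integration length below are arbitrary illustrative choices, not values taken from the literature.

```python
# Forward-Euler integration of Frank's model with [A] held constant:
#   d[L]/dt = ka*[A]*[L] - kd*[L]*[D]
#   d[D]/dt = ka*[A]*[D] - kd*[L]*[D]
ka, kd, A = 1.0, 1.0, 1.0          # illustrative rate constants and concentration of A
L, D = 1.001, 1.000                # tiny initial enantiomeric excess (~0.0005)
dt, steps = 1e-4, 150_000          # integrate up to t = 15

for _ in range(steps):
    dL = ka * A * L - kd * L * D
    dD = ka * A * D - kd * L * D
    L = max(L + dL * dt, 0.0)      # clamp so Euler overshoot cannot go negative
    D = max(D + dD * dt, 0.0)

print("ee =", (L - D) / (L + D))   # driven from ~0.0005 toward +1
```

Starting with D slightly in excess instead drives the same quantity toward −1, reflecting the symmetry of the model.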
It is generally understood that autocatalysis alone does not lead to homochirality, and that the presence of the mutually antagonistic relationship between the two enantiomers is necessary for the instability of the racemic mixture. However, recent studies show that homochirality could be achieved from autocatalysis in the absence of the mutually antagonistic relationship, but the underlying mechanism for symmetry-breaking is different. [ 4 ] [ 23 ]
There are several laboratory experiments that demonstrate how a small amount of one enantiomer at the start of a reaction can lead to a large excess of a single enantiomer as the product. For example, the Soai reaction is autocatalytic . [ 24 ] [ 25 ] If the reaction is started with some of one of the product enantiomers already present, the product acts as an enantioselective catalyst for production of more of that same enantiomer. [ 26 ] The initial presence of just 0.2 equivalents of one enantiomer can lead to up to 93% enantiomeric excess of the product.
Another study [ 27 ] concerns the proline catalyzed aminoxylation of propionaldehyde by nitrosobenzene . In this system, a small enantiomeric excess of catalyst leads to a large enantiomeric excess of product.
Serine octamer clusters [ 28 ] [ 29 ] are also contenders. These clusters of 8 serine molecules appear in mass spectrometry with an unusual homochiral preference, however there is no evidence that such clusters exist under non-ionizing conditions and amino acid phase behavior is far more prebiotically relevant. [ 30 ] The recent observation that partial sublimation of a 10% enantioenriched sample of leucine results in up to 82% enrichment in the sublimate shows that enantioenrichment of amino acids could occur in space. [ 31 ] Partial sublimation processes can take place on the surface of meteors where large variations in temperature exist. This finding may have consequences for the development of the Mars Organic Detector scheduled for launch in 2013 which aims to recover trace amounts of amino acids from the Mars surface exactly by a sublimation technique.
A high asymmetric amplification of the enantiomeric excess of sugars is also present in the amino acid-catalyzed asymmetric formation of carbohydrates [ 32 ]
One classic study involves an experiment that takes place in the laboratory. [ 33 ] When sodium chlorate is allowed to crystallize from water and the collected crystals examined in a polarimeter , each crystal turns out to be chiral and either the L form or the D form. When allowed to sit undisturbed, the amount of L crystals collected equals the amount of D crystals (corrected for statistical effects). However, when the sodium chlorate solution is stirred during the crystallization process, the crystals form with a single chirality, though any given experiment could give either exclusively L or exclusively D . The explanation for this symmetry breaking is unclear but is related to autocatalysis taking place in the nucleation process.
In a related experiment, a crystal suspension of a racemic amino acid derivative continuously stirred, results in a 100% crystal phase of one of the enantiomers because the enantiomeric pair is able to equilibrate in solution (compare with dynamic kinetic resolution ). [ 34 ]
Once a significant enantiomeric enrichment has been produced in a system, the transference of chirality through the entire system is customary. This last step is known as the chiral transmission step. Many strategies in asymmetric synthesis are built on chiral transmission. Especially important is the so-called organocatalysis of organic reactions by proline for example in Mannich reactions .
Some proposed models for the transmission of chiral asymmetry are polymerization, [ 35 ] [ 36 ] [ 37 ] [ 38 ] [ 39 ] [ 40 ] epimerization [ 41 ] [ 42 ] or copolymerization. [ 43 ] [ 44 ]
There exists no theory elucidating correlations among L -amino acids. If one takes, for example, alanine , which has a small methyl group, and phenylalanine , which has a larger benzyl group, a simple question is in what aspect, L -alanine resembles L -phenylalanine more than D -phenylalanine, and what kind of mechanism causes the selection of all L -amino acids, because it might be possible that alanine was L and phenylalanine was D .
It was reported [ 45 ] in 2004 that excess racemic D , L -asparagine (Asn), which spontaneously forms crystals of either isomer during recrystallization, induces asymmetric resolution of a co-existing racemic amino acid such as arginine (Arg), aspartic acid (Asp), glutamine (Gln), histidine (His), leucine (Leu), methionine (Met), phenylalanine (Phe), serine (Ser), valine (Val), tyrosine (Tyr), and tryptophan (Trp). The enantiomeric excess ee = 100 ×( L - D )/( L + D ) of these amino acids was correlated almost linearly with that of the inducer, i.e., Asn. When recrystallizations from a mixture of 12 D , L -amino acids (Ala, Asp, Arg, Glu, Gln, His, Leu, Met, Ser, Val, Phe, and Tyr) and excess D , L -Asn were made, all amino acids with the same configuration with Asn were preferentially co-crystallized. [ 45 ] It was incidental whether the enrichment took place in L - or D -Asn, however, once the selection was made, the co-existing amino acid with the same configuration at the α-carbon was preferentially involved because of thermodynamic stability in the crystal formation. The maximal ee was reported to be 100%. Based on these results, it is proposed that a mixture of racemic amino acids causes spontaneous and effective optical resolution, even if asymmetric synthesis of a single amino acid does not occur without an aid of an optically active molecule.
This was the first study to reasonably elucidate, with experimental evidence, the formation of chirality from racemic amino acids.
This term was introduced by Lord Kelvin in 1904, the year that he published his Baltimore Lecture of 1884. Kelvin used the term homochirality as a relationship between two molecules, i.e. two molecules are homochiral if they have the same chirality. [ 32 ] [ 46 ] Recently, however, homochiral has been used in the same sense as enantiomerically pure. This is permitted in some journals (but not encouraged); [ 47 ] : 342 [ 48 ] in these journals its meaning shifts to the preference of a process or system for a single optical isomer in a pair of isomers.
Homocitric acid is an organic compound with the formula HOC(CO 2 H)(CH 2 CO 2 H)(C 2 H 4 CO 2 H). This tricarboxylic acid occurs naturally as a component of the iron-molybdenum cofactor of certain nitrogenase proteins. [ 1 ] Biochemists often refer to this cofactor as homocitrate, which is the conjugate base that predominates in neutral aqueous solutions of this species.
The molecule is related to citric acid by the addition of one methylene unit , hence the prefix "homo." Unlike citric acid, homocitric acid is chiral . The acid exists in equilibrium with the lactone .
| https://en.wikipedia.org/wiki/Homocitric_acid
In dynamical systems , a branch of mathematics , a homoclinic connection is a structure formed by the stable manifold and unstable manifold of a fixed point .
Let f : M → M {\displaystyle f:M\to M} be a map defined on a manifold M {\displaystyle M} , with a fixed point p {\displaystyle p} .
Let W s ( f , p ) {\displaystyle W^{s}(f,p)} and W u ( f , p ) {\displaystyle W^{u}(f,p)} be the stable manifold and the unstable manifold of the fixed point p {\displaystyle p} , respectively. Let V {\displaystyle V} be a connected invariant manifold such that
V ⊂ W s ( f , p ) ∩ W u ( f , p ) . {\displaystyle V\subset W^{s}(f,p)\cap W^{u}(f,p).}
Then V {\displaystyle V} is called a homoclinic connection .
It is a similar notion, but it refers to two fixed points, p {\displaystyle p} and q {\displaystyle q} . The condition satisfied by V {\displaystyle V} is replaced with:
V ⊂ W u ( f , p ) ∩ W s ( f , q ) . {\displaystyle V\subset W^{u}(f,p)\cap W^{s}(f,q).}
This notion is not symmetric with respect to p {\displaystyle p} and q {\displaystyle q} .
When the invariant manifolds W s ( f , p ) {\displaystyle W^{s}(f,p)} and W u ( f , q ) {\displaystyle W^{u}(f,q)} , possibly with p = q {\displaystyle p=q} , intersect but there is no homoclinic/heteroclinic connection, a different structure is formed by the two manifolds, sometimes referred to as the homoclinic/heteroclinic tangle . The figure has a conceptual drawing illustrating their complicated structure. The theoretical result supporting the drawing is the lambda-lemma . Homoclinic tangles are always accompanied by a Smale horseshoe .
For continuous flows , the definition is essentially the same.
When a dynamical system is perturbed, a homoclinic connection splits . It becomes a disconnected invariant set. Near it, there will be a chaotic set called Smale's horseshoe . Thus, the existence of a homoclinic connection can potentially lead to chaos . For example, when a pendulum is placed in a box, and the box is subjected to small horizontal oscillations, the pendulum may exhibit chaotic behavior. | https://en.wikipedia.org/wiki/Homoclinic_connection |
In the study of dynamical systems , a homoclinic orbit is a path through phase space which joins a saddle equilibrium point to itself. More precisely, a homoclinic orbit lies in the intersection of the stable manifold and the unstable manifold of an equilibrium. It is a heteroclinic orbit (a path between two equilibrium points) in which the endpoints are one and the same.
Consider the continuous dynamical system described by the ordinary differential equation x ˙ = f ( x ) . {\displaystyle {\dot {x}}=f(x).}
Suppose there is an equilibrium at x = x 0 {\displaystyle x=x_{0}} ; then a solution Φ ( t ) {\displaystyle \Phi (t)} is a homoclinic orbit if Φ ( t ) → x 0 as t → ± ∞ . {\displaystyle \Phi (t)\rightarrow x_{0}\quad {\text{as}}\quad t\rightarrow \pm \infty .}
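As a concrete illustration of this definition (my own example, not taken from the article), the undamped Duffing oscillator x'' = x - x^3 has a saddle at the origin, and x(t) = sqrt(2) sech(t) is a classical homoclinic orbit for it. The short numerical check below verifies that this trajectory satisfies the equation and decays back toward the equilibrium in both time directions:

```python
import numpy as np

# x'' = x - x**3 has a saddle at the origin; x(t) = sqrt(2)*sech(t) is a
# homoclinic orbit: it leaves the saddle as t -> -inf and returns as t -> +inf.
t = np.linspace(-10.0, 10.0, 4001)
x = np.sqrt(2) / np.cosh(t)                      # analytic homoclinic orbit
dt = t[1] - t[0]
x_dd = np.gradient(np.gradient(x, dt), dt)       # second derivative by finite differences

residual = np.abs(x_dd - (x - x**3))[5:-5].max() # skip the one-sided edge stencils
print(f"max ODE residual: {residual:.1e}")       # small (finite-difference error only)
print(f"x at t = +/-10:   {x[0]:.1e}")           # ~1e-4: back near the equilibrium x = 0
```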
If the phase space has three or more dimensions , then it is important to consider the topology of the unstable manifold of the saddle point. The figures show two cases. First, when the stable manifold is topologically a cylinder , and secondly, when the unstable manifold is topologically a Möbius strip ; in this case the homoclinic orbit is called twisted .
Homoclinic orbits and homoclinic points are defined in the same way for iterated functions , as the intersection of the stable set and unstable set of some fixed point or periodic point of the system.
We also have the notion of homoclinic orbit when considering discrete dynamical systems. In such a case, if f : M → M {\displaystyle f:M\rightarrow M} is a diffeomorphism of a manifold M {\displaystyle M} , we say that x {\displaystyle x} is a homoclinic point if it has the same past and future; more specifically, if there exists a fixed (or periodic) point p {\displaystyle p} such that lim n → ± ∞ f n ( x ) = p . {\displaystyle \lim _{n\to \pm \infty }f^{n}(x)=p.}
The existence of one homoclinic point implies the existence of an infinite number of them. [ 1 ] This comes from its definition: the intersection of a stable and unstable set. Both sets are invariant by definition, which means that the forward iteration of the homoclinic point is both on the stable and unstable set. By iterating N times, the map approaches the equilibrium point by the stable set, but in every iteration it is on the unstable manifold too, which shows this property.
This property suggests that complicated dynamics arise from the existence of a homoclinic point. Indeed, Smale (1967) [ 2 ] showed that these points lead to horseshoe map -like dynamics, which is associated with chaos.
By using the Markov partition , the long-time behaviour of a hyperbolic system can be studied using the techniques of symbolic dynamics . In this case, a homoclinic orbit has a particularly simple and clear representation. Suppose that S = { 1 , 2 , … , M } {\displaystyle S=\{1,2,\ldots ,M\}} is a finite set of M symbols. The dynamics of a point x is then represented by a bi-infinite string of symbols σ = ( … , s − 1 , s 0 , s 1 , … ) {\displaystyle \sigma =(\ldots ,s_{-1},s_{0},s_{1},\ldots )} with s k ∈ S {\displaystyle s_{k}\in S} .
A periodic point of the system is simply a recurring sequence of letters. A heteroclinic orbit is then the joining of two distinct periodic orbits. It may be written as p ω s 1 s 2 ⋯ s n q ω {\displaystyle p^{\omega }s_{1}s_{2}\cdots s_{n}q^{\omega }}
where p = t 1 t 2 ⋯ t k {\displaystyle p=t_{1}t_{2}\cdots t_{k}} is a sequence of symbols of length k , (of course, t i ∈ S {\displaystyle t_{i}\in S} ), and q = r 1 r 2 ⋯ r m {\displaystyle q=r_{1}r_{2}\cdots r_{m}} is another sequence of symbols, of length m (likewise, r i ∈ S {\displaystyle r_{i}\in S} ). The notation p ω {\displaystyle p^{\omega }} simply denotes the repetition of p an infinite number of times. Thus, a heteroclinic orbit can be understood as the transition from one periodic orbit to another. By contrast, a homoclinic orbit can be written as p ω s 1 s 2 ⋯ s n p ω {\displaystyle p^{\omega }s_{1}s_{2}\cdots s_{n}p^{\omega }}
with the intermediate sequence s 1 s 2 ⋯ s n {\displaystyle s_{1}s_{2}\cdots s_{n}} being non-empty, and, of course, not being p , as otherwise, the orbit would simply be p ω {\displaystyle p^{\omega }} . | https://en.wikipedia.org/wiki/Homoclinic_orbit |
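The sketch below (an illustration of the symbolic strings just described; the particular blocks are arbitrary) builds finite windows of the bi-infinite sequences p^ω s q^ω and p^ω s p^ω:

```python
def symbolic_orbit(p, s, q=None, reps=3):
    """Finite window of p^w s q^w; with q = None it is the homoclinic case p^w s p^w."""
    q = p if q is None else q
    return p * reps + s + q * reps

print(symbolic_orbit("ab", "c"))         # 'abababcababab'  (homoclinic: returns to p)
print(symbolic_orbit("ab", "c", "dd"))   # 'abababcdddddd'  (heteroclinic: ends on q)
```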
Homoenolates are a type of functional group that have been used in synthetic organic chemistry since the 1980s. They are related to enolates , but represent an umpolung of their reactivity . Homoenolates can be formed with a variety of different metal counterions , including lithium , iron , silver , lead , titanium , tin , tellurium , zirconium , niobium , mercury , zinc , antimony , bismuth , nickel , palladium , and copper . The stability and reactivity of homoenolates vary with the identity of the counterion and with other nearby functional groups. Common pathways of decomposition include proto-demetalation and β-hydride elimination . Multiple reviews on the topic of homoenolates and their reactivity have been published. [ 1 ] [ 2 ] [ 3 ] [ 4 ] [ 5 ] [ 6 ]
Homoenolates are typically derived from the ring-opening reaction of cyclopropanol derivatives in the presence of metal and base. The ring opening reaction of a substituted cyclopropanol typically results in the carbon-metal bond being formed on the less substituted position (except if the cyclopropane is substituted with electron withdrawing substituents, in which case the carbon-metal bond forms at the alpha position to the electron withdrawing group.) [ 7 ]
This organic chemistry article is a stub . You can help Wikipedia by expanding it . | https://en.wikipedia.org/wiki/Homoenolates |
A homoeoid or homeoid is a shell (a bounded region) bounded by two concentric , similar ellipses (in 2D) or ellipsoids (in 3D). [ 1 ] [ 2 ] When the thickness of the shell becomes negligible, it is called a thin homoeoid . The name homoeoid was coined by Lord Kelvin and Peter Tait . [ 3 ] Closely related is the focaloid , a shell between concentric , confocal ellipses or ellipsoids. [ 4 ]
If the outer shell is given by x 2 a 2 + y 2 b 2 + z 2 c 2 = 1 {\displaystyle {\frac {x^{2}}{a^{2}}}+{\frac {y^{2}}{b^{2}}}+{\frac {z^{2}}{c^{2}}}=1}
with semiaxes a , b , c {\displaystyle a,b,c} , the inner shell of a homoeoid is given for 0 ≤ m ≤ 1 {\displaystyle 0\leq m\leq 1} by x 2 a 2 + y 2 b 2 + z 2 c 2 = m 2 {\displaystyle {\frac {x^{2}}{a^{2}}}+{\frac {y^{2}}{b^{2}}}+{\frac {z^{2}}{c^{2}}}=m^{2}}
and a focaloid is defined for λ ≥ 0 {\displaystyle \lambda \geq 0} by x 2 a 2 + λ + y 2 b 2 + λ + z 2 c 2 + λ = 1. {\displaystyle {\frac {x^{2}}{a^{2}+\lambda }}+{\frac {y^{2}}{b^{2}+\lambda }}+{\frac {z^{2}}{c^{2}+\lambda }}=1.}
The thin homoeoid is then given by the limit m → 1 {\displaystyle m\to 1} , and the thin focaloid is the limit λ → 0 {\displaystyle \lambda \to 0} . [ 3 ]
Thin focaloids and homoeoids can be used as elements of an ellipsoidal matter or charge distribution that generalize the shell theorem for spherical shells. The gravitational or electromagnetic potential of a homoeoid homogeneously filled with matter or charge is constant inside the shell, so there is no force on a test particle inside of it. [ 5 ] Meanwhile, two uniform, concentric focaloids with the same mass or charge exert the same potential on a test particle outside of both. [ 4 ] [ 1 ]
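A rough Monte Carlo check of the first statement is sketched below (my own illustration; the semiaxes, thickness parameter and test point are assumed values): a homogeneous shell between two similar ellipsoids exerts essentially no net gravitational force on a point inside the cavity, up to sampling noise.

```python
import numpy as np

rng = np.random.default_rng(0)
a, b, c, m = 3.0, 2.0, 1.5, 0.6          # outer semiaxes and homoeoid parameter m

# uniform samples in the outer ellipsoid, keeping only the shell between m and 1
pts = rng.uniform(-1.0, 1.0, size=(500_000, 3)) * [a, b, c]
r2 = (pts[:, 0] / a)**2 + (pts[:, 1] / b)**2 + (pts[:, 2] / c)**2
shell = pts[(r2 <= 1.0) & (r2 >= m**2)]

test = np.array([0.3, -0.2, 0.1])        # a point well inside the cavity
d = shell - test
force = (d / np.linalg.norm(d, axis=1, keepdims=True)**3).mean(axis=0)

print(force)                             # ~ (0, 0, 0) up to Monte Carlo noise
```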
This mathematical physics -related article is a stub . You can help Wikipedia by expanding it . | https://en.wikipedia.org/wiki/Homoeoid_and_focaloid |
Homoeriodictyol is a bitter-masking flavanone extracted from Yerba Santa ( Eriodictyon californicum ), a plant native to North America. [ 1 ]
Homoeriodictyol (3′-methoxy-4′,5,7-trihydroxyflavanone) is one of the four flavanones identified by Symrise in this plant that elicit taste-modifying properties: homoeriodictyol sodium salt, eriodictyol and sterubin . The homoeriodictyol sodium salt elicited the most potent bitter-masking activity, reducing by 10 to 40% the bitterness of salicin , amarogentin , paracetamol and quinine . However, no bitter-masking activity was detected with bitter linoleic acid emulsions. According to Symrise's scientists, homoeriodictyol sodium salt appears to be a taste modifier with large potential in food applications and pharmaceuticals . [ 2 ]
An investigation of structural relatives based on eriodictyol and homoeriodictyol found that 2,4-dihydroxybenzoic acid vanillylamide also elicits bitter-masking activity. At 0.1 g/L, this vanillin derivative was able to reduce the bitterness of a 0.5 g/L caffeine solution by about 30%. [ 3 ] | https://en.wikipedia.org/wiki/Homoeriodictyol |
Homogamy is used in biology in four separate senses:
As opposed to outcrossing or outbreeding, inbreeding is the process by which organisms with common descent come together to mate and eventually procreate. [ 4 ] An archetype of inbreeding is self-pollination . When a plant has both anthers and a stigma, inbreeding can occur. Another word for this self-fertilization is autogamy , which is when an anther releases pollen that attaches to the stigma of the same plant. Self-pollination is promoted by homogamy, which is when the anthers and the stigma of a flower mature at the same time. [ 5 ] Self-pollination guides the plant toward homozygosity : a specific gene is received from each of the parents, leading the offspring to carry two identical copies of that gene. [ 6 ]
Assortative mating is the choosing of a mate to breed with based on physical characteristics, i.e., phenotypic traits. [ 7 ] Social factors such as religion, physical traits, and culture influence this choice. For instance, sociologists have found that men and women tend to look for partners of a similar educational level. [ 8 ] The homogamy theory holds that when organisms look for a potential partner, they search for organisms with traits similar to their own. The idea of sexual imprinting plays a role in this theory: [ 3 ] depending on whether an individual is male or female, the individual tends to be attracted to people whose characteristics most resemble those of their opposite-sex parent. This is a form of positive assortative mating , where people choose a mate with attributes that correlate with their own. According to Kalmijn and Flap, there are five settings in which individuals typically become acquainted: work, school, the neighborhood, common family networks, and voluntary associations. They also studied the five criteria usually considered when deciding whether to mate: age, education, class destination, class origin, and religious background. [ 3 ]
An evolutionary theory holds that two specific qualities are looked for: male dominance and the attractiveness of the female. [ 9 ] According to the evolutionary perspective, the purpose of mating is to procreate for the purpose of survival. It is the individuals with the best features and traits that survive, a principle known as survival of the fittest . If a couple is unable to become fertile, or has a child with a disease or a handicap, the risk of the couple divorcing rises greatly. [ 10 ] When a spouse is found to have unfavorable traits, the homogamy in the relationship decreases, and a need for greater homogamy arises for better production of children. | https://en.wikipedia.org/wiki/Homogamy_(biology) |
In physics, a homogeneous material or system has the same properties at every point; it is uniform without irregularities. [ 1 ] [ 2 ] A uniform electric field (which has the same strength and the same direction at each point) would be compatible with homogeneity (all points experience the same physics). A material constructed with different constituents can be described as effectively homogeneous in the electromagnetic materials domain, when interacting with a directed radiation field (light, microwave frequencies, etc.). [ 3 ] [ 4 ]
Mathematically, homogeneity has the connotation of invariance : all components of the equation have the same degree, whether or not those components are scaled to different values, for example by multiplication or addition. A cumulative distribution fits this description: "the state of having identical cumulative distribution function or values". [ 3 ] [ 4 ]
The definition of homogeneous strongly depends on the context used. For example, a composite material is made up of different individual materials, known as " constituents " of the material, but may be defined as a homogeneous material when assigned a function. For example, asphalt paves our roads, but is a composite material consisting of asphalt binder and mineral aggregate, and then laid down in layers and compacted. However, homogeneity of materials does not necessarily mean isotropy . In the previous example, a composite material may not be isotropic.
In another context, a material is not homogeneous in so far as it is composed of atoms and molecules . However, at the normal level of our everyday world, a pane of glass, or a sheet of metal is described as glass, or stainless steel. In other words, these are each described as a homogeneous material.
A few other instances of context are: dimensional homogeneity (see below) is the quality of an equation having quantities of same units on both sides; homogeneity (in space) implies conservation of momentum ; and homogeneity in time implies conservation of energy .
In the context of composite materials, a metallic example is an alloy. A blend of a metal with one or more metallic or nonmetallic materials is an alloy. The components of an alloy do not combine chemically but, rather, are very finely mixed. An alloy might be homogeneous or might contain small particles of components that can be viewed with a microscope. Brass is an example of an alloy, being a homogeneous mixture of copper and zinc. Another example is steel, which is an alloy of iron with carbon and possibly other metals. The purpose of alloying is to produce desired properties in a metal that naturally lacks them. Brass, for example, is harder than copper and has a more gold-like color. Steel is harder than iron and can even be made rustproof (stainless steel). [ 5 ]
Homogeneity, in another context plays a role in cosmology . From the perspective of 19th-century cosmology (and before), the universe was infinite , unchanging, homogeneous, and therefore filled with stars . However, German astronomer Heinrich Olbers asserted that if this were true, then the entire night sky would be filled with light and bright as day; this is known as Olbers' paradox . Olbers presented a technical paper in 1826 that attempted to answer this conundrum. The faulty premise, unknown in Olbers' time, was that the universe is not infinite, static, and homogeneous. The Big Bang cosmology replaced this model (expanding, finite, and inhomogeneous universe ). However, modern astronomers supply reasonable explanations to answer this question. One of at least several explanations is that distant stars and galaxies are red shifted , which weakens their apparent light and makes the night sky dark. [ 6 ] However, the weakening is not sufficient to actually explain Olbers' paradox. Many cosmologists think that the fact that the Universe is finite in time, that is that the Universe has not been around forever, is the solution to the paradox. [ 7 ] The fact that the night sky is dark is thus an indication for the Big Bang.
By translation invariance, one means independence of (absolute) position, especially when referring to a law of physics, or to the evolution of a physical system.
Fundamental laws of physics should not (explicitly) depend on position in space. That would make them quite useless. In some sense, this is also linked to the requirement that experiments should be reproducible .
This principle is true for all laws of mechanics ( Newton's laws , etc.), electrodynamics, quantum mechanics, etc.
In practice, this principle is usually violated, since one studies only a small subsystem of the universe, which of course "feels" the influence of the rest of the universe. This situation gives rise to "external fields" (electric, magnetic, gravitational, etc.) which make the description of the evolution of the system depend upon its position ( potential wells , etc.). This only stems from the fact that the objects creating these external fields are not considered as (a "dynamical") part of the system.
Translational invariance as described above is equivalent to shift invariance in system analysis , although here it is most commonly used in linear systems, whereas in physics the distinction is not usually made.
The notion of isotropy , for properties independent of direction, is not a consequence of homogeneity. For example, a uniform electric field (i.e., which has the same strength and the same direction at each point) would be compatible with homogeneity (at each point physics will be the same), but not with isotropy , since the field singles out one "preferred" direction.
In the Lagrangian formalism, homogeneity in space implies conservation of momentum , and homogeneity in time implies conservation of energy . This is shown, using variational calculus , in standard textbooks like the classical reference text of Landau & Lifshitz. [ 8 ] This is a particular application of Noether's theorem .
As said in the introduction, dimensional homogeneity is the quality of an equation having quantities of the same units on both sides. A valid equation in physics must be homogeneous, since equality cannot apply between quantities of different nature. This can be used to spot errors in formulas or calculations. For example, if one is calculating a speed , units must always combine to [length]/[time]; if one is calculating an energy , units must always combine to [mass][length] 2 /[time] 2 , etc. For example, the following formulae could be valid expressions for some energy: E = m v 2 , E = m c 2 , E = p v , E = h c / λ {\displaystyle E=mv^{2},\quad E=mc^{2},\quad E=pv,\quad E=hc/\lambda }
if m is a mass, v and c are velocities , p is a momentum , h is the Planck constant , λ a length. On the other hand, if the units of the right hand side do not combine to [mass][length] 2 /[time] 2 , it cannot be a valid expression for some energy .
Being homogeneous does not necessarily mean the equation will be true, since it does not take into account numerical factors. For example, E = mv 2 could be or could not be the correct formula for the energy of a particle of mass m traveling at speed v , and one cannot know if hc / λ should be divided or multiplied by 2 π .
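A minimal sketch of such a dimensional check is given below, representing each quantity by its exponents of mass, length and time; the quantity names are illustrative.

```python
from collections import namedtuple

Dim = namedtuple("Dim", ["mass", "length", "time"])

def mul(a, b):   return Dim(a.mass + b.mass, a.length + b.length, a.time + b.time)
def power(a, n): return Dim(a.mass * n, a.length * n, a.time * n)

MASS     = Dim(1, 0, 0)
VELOCITY = Dim(0, 1, -1)
MOMENTUM = mul(MASS, VELOCITY)
ENERGY   = mul(MASS, power(VELOCITY, 2))   # [mass][length]^2/[time]^2

print(mul(MASS, power(VELOCITY, 2)) == ENERGY)   # True:  m*v^2 has energy dimensions
print(mul(MOMENTUM, VELOCITY) == ENERGY)         # True:  p*v   has energy dimensions
print(mul(MASS, VELOCITY) == ENERGY)             # False: m*v   is a momentum, not an energy
```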
Nevertheless, this is a very powerful tool in finding characteristic units of a given problem, see dimensional analysis . | https://en.wikipedia.org/wiki/Homogeneity_(physics) |
Homogeneity and heterogeneity are concepts relating to the uniformity of a substance , process or image. A homogeneous feature is uniform in composition or character (i.e., color, shape, size, weight, height, distribution, texture, language, income, disease, temperature, radioactivity, architectural design, etc.); one that is heterogeneous is distinctly nonuniform in at least one of these qualities. [ 1 ] [ 2 ]
The words homogeneous and heterogeneous come from Medieval Latin homogeneus and heterogeneus , from Ancient Greek ὁμογενής ( homogenēs ) and ἑτερογενής ( heterogenēs ), from ὁμός ( homos , "same") and ἕτερος ( heteros , "other, another, different") respectively, followed by γένος ( genos , "kind"); -ous is an adjectival suffix. [ 3 ]
Alternate spellings omitting the last -e- (and the associated pronunciations) are common, but mistaken: [ 4 ] homogenous is strictly a biological/pathological term which has largely been replaced by homologous . But use of homogenous to mean homogeneous has seen a rise since 2000, enough for it to now be considered an "established variant". [ 5 ] Similarly, heterogenous is a spelling traditionally reserved to biology and pathology , referring to the property of an object in the body having its origin outside the body. [ 6 ]
The concepts apply at every level of complexity. From atoms to galaxies , plants , animals , humans , and other living organisms , all share both common and unique sets of complexities.
Hence, an element may be homogeneous on a larger scale, compared to being heterogeneous on a smaller scale. This is known as an effective medium approximation . [ 7 ] [ 8 ]
Various disciplines understand heterogeneity , or being heterogeneous , in different ways. [ 2 ]
Environmental heterogeneity is a hypernym for the various environmental factors that contribute to the diversity of species, like climate, topography, and land cover. [ 9 ] Biodiversity is correlated with geodiversity on a global scale. Heterogeneity in geodiversity features and environmental variables is an indicator of environmental heterogeneity. Such heterogeneity drives biodiversity at local and regional scales.
The scientific literature in ecology contains a large number of different terms for environmental heterogeneity, often undefined or conflicting in their meaning. [ 10 ] Habitat diversity and habitat heterogeneity are synonyms of environmental heterogeneity. [ 10 ]
In chemistry , a heterogeneous mixture consists of either or both of 1) multiple states of matter or 2) hydrophilic and hydrophobic substances in one mixture; an example of the latter would be a mixture of water, octane , and silicone grease . Heterogeneous solids, liquids, and gases may be made homogeneous by melting, stirring, or by allowing time to pass for diffusion to distribute the molecules evenly. For example, adding dye to water will create a heterogeneous solution at first, but will become homogeneous over time. Entropy allows for heterogeneous substances to become homogeneous over time. [ 11 ]
A heterogeneous mixture is a mixture of two or more compounds . Examples are: mixtures of sand and water or sand and iron filings, a conglomerate rock, water and oil, a salad, trail mix , and concrete (not cement). [ 12 ] A mixture can be judged homogeneous when it has settled and is uniform throughout, i.e., the liquid, gas, or object has a single color and the same form everywhere. Various models have been proposed to model the concentrations in different phases. The phenomena to be considered are mass transfer rates and reaction. [ citation needed ]
Homogeneous reactions are chemical reactions in which the reactants and products are in the same phase , while heterogeneous reactions have reactants in two or more phases. Reactions that take place on the surface of a catalyst of a different phase are also heterogeneous. A reaction between two gases or two miscible liquids is homogeneous. A reaction between a gas and a liquid, a gas and a solid or a liquid and a solid is heterogeneous. [ citation needed ]
Earth is a heterogeneous substance in many aspects; for instance, rocks (geology) are inherently heterogeneous, usually occurring at the micro-scale and mini-scale. [ 7 ]
In formal semantics , homogeneity is the phenomenon in which plural expressions imply "all" when asserted but "none" when negated . For example, the English sentence "Robin read the books" means that Robin read all the books, while "Robin didn't read the books" means that she read none of them. Neither sentence can be asserted if Robin read exactly half of the books. This is a puzzle because the negative sentence does not appear to be the classical negation of the sentence. A variety of explanations have been proposed including that natural language operates on a trivalent logic . [ 13 ]
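A toy sketch of this trivalent treatment is given below, with True, False and None standing for "all", "none" and the undefined middle case; the predicate name is just an illustration of the example sentence.

```python
def read_the_books(read_flags):
    """Truth value of 'Robin read the books' given which individual books were read."""
    if all(read_flags):
        return True          # assertable: she read all of them
    if not any(read_flags):
        return False         # its negation holds: she read none of them
    return None              # homogeneity gap: neither sentence is appropriate

print(read_the_books([True, True, True]))    # True
print(read_the_books([False, False]))        # False
print(read_the_books([True, False, True]))   # None
```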
With information technology , heterogeneous computing occurs in a network comprising different types of computers, potentially with vastly differing memory sizes, processing power and even basic underlying architecture. [ citation needed ]
In algebra, a homogeneous polynomial is one whose nonzero terms all have the same total degree.
In the study of binary relations , a homogeneous relation R is on a single set ( R ⊆ X × X ) while a heterogeneous relation concerns possibly distinct sets ( R ⊆ X × Y , X = Y or X ≠ Y ). [ 14 ]
In statistical meta-analysis , study heterogeneity is when multiple studies on an effect are measuring somewhat different effects due to differences in subject population, intervention, choice of analysis, experimental design, etc.; this can cause problems in attempts to summarize the meaning of the studies.
In medicine and genetics , a genetic or allelic heterogeneous condition is one where the same disease or condition can be caused, or contributed to, by several factors, or in genetic terms, by varying or different genes or alleles .
In cancer research , cancer cell heterogeneity is thought to be one of the underlying reasons that make treatment of cancer difficult. [ 15 ]
In physics , "heterogeneous" is understood to mean "having physical properties that vary within the medium".
In sociology , "heterogeneous" may refer to a society or group that includes individuals of differing ethnicities, cultural backgrounds, sexes, or ages. Diverse is the more common synonym in the context. [ 16 ]
In landscape ecology , heterogeneity refers to the different elements of a system. [ 17 ] Heterogeneous systems support higher biodiversity and are a target for many landscape restoration efforts. [ 18 ] | https://en.wikipedia.org/wiki/Homogeneity_and_heterogeneity |
Homogeneous broadening is a type of emission spectrum broadening in which all atoms radiating from a specific level under consideration radiate with equal opportunity. [ 1 ] If an optical emitter (e.g. an atom) shows homogeneous broadening, its spectral linewidth is its natural linewidth, with a Lorentzian profile .
Broadening in laser physics is a physical phenomenon that affects the spectroscopic line shape of the laser emission profile. The laser emission is due to the (excitation and subsequent) relaxation of a quantum system (atom, molecule , ion, etc.) between an excited state (higher in energy) and a lower one. These states can be thought of as the eigenstates of the energy operator. The difference in energy between these states is proportional to the frequency/wavelength of the photon emitted. Since this energy difference fluctuates, the frequency/wavelength of the "macroscopic emission" (the beam) will have a certain width (i.e. it will be "broadened" with respect to the "ideal" perfectly monochromatic emission).
Depending on the nature of the fluctuation, there can be two types of broadening. If the fluctuation in the frequency/wavelength is due to a phenomenon that is the same for each quantum emitter, there is homogeneous broadening, while if each quantum emitter has a different type of fluctuation, the broadening is inhomogeneous .
Examples of situations where the fluctuation is the same for each system (homogeneous broadening) are natural or lifetime broadening , and collisional or pressure broadening . In these cases each system is affected "on average" in the same way (e.g. by the collisions due to the pressure).
In solid-state systems, the most frequent situation in which the fluctuation differs from emitter to emitter ( inhomogeneous broadening ) is when, because of the presence of dopants , the local electric field is different for each emitter, so that the Stark effect changes the energy levels in an inhomogeneous way. The homogeneously broadened emission line will have a Lorentzian profile (i.e. will be best fitted by a Lorentzian function), while the inhomogeneously broadened emission will have a Gaussian profile . One or more phenomena may be present at the same time, but if one has a wider fluctuation, it will be the one responsible for the character of the broadening.
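The two profiles can be compared with the short sketch below; the width parameters gamma and sigma are illustrative, not values from the text.

```python
import numpy as np

def lorentzian(w, w0, gamma):
    """Homogeneous (e.g. lifetime/collisional) line shape, half-width gamma."""
    return (gamma / np.pi) / ((w - w0)**2 + gamma**2)

def gaussian(w, w0, sigma):
    """Inhomogeneous line shape, standard deviation sigma."""
    return np.exp(-(w - w0)**2 / (2 * sigma**2)) / (sigma * np.sqrt(2 * np.pi))

w = np.linspace(-5.0, 5.0, 2001)
dw = w[1] - w[0]
print((lorentzian(w, 0.0, 0.5) * dw).sum())   # ~0.94: the Lorentzian wings decay slowly
print((gaussian(w, 0.0, 0.5) * dw).sum())     # ~1.00: the Gaussian is well contained
```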
These effects are not limited to laser systems, or even to optical spectroscopy. They are relevant in magnetic resonance as well, where the frequency range is in the radiofrequency region for NMR , and one can also refer to these effects in EPR where the lineshape is observed at fixed ( microwave ) frequency and in a magnetic field range.
In semiconductors , if all oscillations have the same eigenfrequency ω 0 {\displaystyle \omega _{0}} and the broadening in the imaginary part of the dielectric function ε 2 ( ω ) {\displaystyle \varepsilon _{2}(\omega )} results only from a finite damping γ {\displaystyle \gamma } , the system is said to be homogeneously broadened , and ε 2 ( ω ) {\displaystyle \varepsilon _{2}(\omega )} has a Lorentzian profile . If the system contains many oscillators with slightly different frequencies about ω 0 {\displaystyle \omega _{0}} however, then the system is inhomogeneously broadened . [ 2 ] | https://en.wikipedia.org/wiki/Homogeneous_broadening |
In chemistry, homogeneous catalysis is catalysis in which the catalyst is in the same phase as the reactants, principally as a soluble catalyst in a solution. In contrast, heterogeneous catalysis describes processes where the catalysts and substrate are in distinct phases, typically solid and gas, respectively. [ 1 ] The term is used almost exclusively to describe solutions and implies catalysis by organometallic compounds . Homogeneous catalysis is an established technology that continues to evolve. An illustrative major application is the production of acetic acid . Enzymes are examples of homogeneous catalysts. [ 2 ]
The proton is a pervasive homogeneous catalyst [ 4 ] because water is the most common solvent. Water forms protons by the process of self-ionization of water . In an illustrative case, acids accelerate (catalyze) the hydrolysis of esters : RCO 2 R′ + H 2 O → RCO 2 H + R′OH
At neutral pH, aqueous solutions of most esters do not hydrolyze at practical rates.
A prominent class of reductive transformations are hydrogenations . In this process, H 2 is added to unsaturated substrates. A related methodology, transfer hydrogenation , involves the transfer of hydrogen from one substrate (the hydrogen donor) to another (the hydrogen acceptor). Related reactions entail "HX additions" where X = silyl ( hydrosilylation ) and CN ( hydrocyanation ). Most large-scale industrial hydrogenations – margarine, ammonia, benzene-to-cyclohexane – are conducted with heterogeneous catalysts. Fine chemical syntheses, however, often rely on homogeneous catalysts.
Hydroformylation , a prominent form of carbonylation , involves the addition of H and "C(O)H" across a double bond. This process is almost exclusively conducted with soluble rhodium - and cobalt -containing complexes. [ 5 ]
A related carbonylation is the conversion of alcohols to carboxylic acids. MeOH and CO react in the presence of homogeneous catalysts to give acetic acid , as practiced in the Monsanto process and Cativa processes . Related reactions include hydrocarboxylation and hydroesterifications .
A number of polyolefins, e.g. polyethylene and polypropylene, are produced from ethylene and propylene by Ziegler-Natta catalysis . Heterogeneous catalysts dominate, but many soluble catalysts are employed especially for stereospecific polymers. [ 6 ] Olefin metathesis is usually catalyzed heterogeneously in industry, but homogeneous variants are valuable in fine chemical synthesis. [ 7 ]
Homogeneous catalysts are also used in a variety of oxidations. In the Wacker process , acetaldehyde is produced from ethene and oxygen . Many non-organometallic complexes are also widely used in catalysis, e.g. for the production of terephthalic acid from xylene . Alkenes are epoxidized and dihydroxylated by metal complexes, as illustrated by the Halcon process and the Sharpless dihydroxylation .
Enzymes are homogeneous catalysts that are essential for life but are also harnessed for industrial processes. A well-studied example is carbonic anhydrase , which catalyzes the release of CO 2 into the lungs from the bloodstream. Enzymes possess properties of both homogeneous and heterogeneous catalysts. As such, they are usually regarded as a third, separate category of catalyst. Water is a common reagent in enzymatic catalysis. Esters and amides are slow to hydrolyze in neutral water, but the rates are sharply affected by metalloenzymes , which can be viewed as large coordination complexes. Acrylamide is prepared by the enzyme-catalyzed hydrolysis of acrylonitrile . [ 8 ] US demand for acrylamide was 253,000,000 pounds (115,000,000 kg) as of 2007. | https://en.wikipedia.org/wiki/Homogeneous_catalysis |
Homogeneous charge compression ignition ( HCCI ) is a form of internal combustion in which well-mixed fuel and oxidizer (typically air) are compressed to the point of auto-ignition. As in other forms of combustion , this exothermic reaction produces heat that can be transformed into work in a heat engine .
HCCI combines characteristics of conventional gasoline engine and diesel engines . Gasoline engines combine homogeneous charge (HC) with spark ignition (SI), abbreviated as HCSI. Modern direct injection diesel engines combine stratified charge (SC) with compression ignition (CI), abbreviated as SCCI.
As in HCSI, HCCI injects fuel during the intake stroke. However, rather than using an electric discharge (spark) to ignite a portion of the mixture, HCCI raises density and temperature by compression until the entire mixture reacts spontaneously.
Stratified charge compression ignition also relies on temperature and density increase resulting from compression. However, it injects fuel later, during the compression stroke. Combustion occurs at the boundary of the fuel and air, producing higher emissions, but allowing a leaner and higher compression burn, producing greater efficiency.
Controlling HCCI requires microprocessor control and physical understanding of the ignition process. HCCI designs achieve gasoline engine-like emissions with diesel engine-like efficiency.
HCCI engines achieve extremely low levels of oxides of nitrogen emissions ( NO x ) without a catalytic converter . Hydrocarbons (unburnt fuels and oils) and carbon monoxide emissions still require treatment to meet automobile emissions control regulations.
Recent research has shown that the hybrid fuels combining different reactivities (such as gasoline and diesel) can help in controlling HCCI ignition and burn rates. RCCI, or reactivity controlled compression ignition , has been demonstrated to provide highly efficient, low emissions operation over wide load and speed ranges. [ 1 ]
HCCI engines have a long history, even though HCCI has not been as widely implemented as spark ignition or diesel injection. It is essentially an Otto combustion cycle . HCCI was popular before electronic spark ignition was used. One example is the hot-bulb engine which used a hot vaporization chamber to help mix fuel with air. The extra heat combined with compression induced the conditions for combustion. Another example is the "diesel" model aircraft engine .
A mixture of fuel and air ignites when the concentration and temperature of the reactants are sufficiently high. The concentration and/or temperature can be increased in several different ways: by increasing the compression ratio, by pre-heating the induction gases, by forced induction, or by retaining or re-inducting exhaust gases.
Once ignited, combustion occurs very quickly. When auto-ignition occurs too early or with too much chemical energy, combustion is too fast and high in-cylinder pressures can destroy an engine. For this reason, HCCI is typically operated at lean overall fuel mixtures.
HCCI is more difficult to control than other combustion engines, such as SI and diesel. In a typical gasoline engine , a spark is used to ignite the pre-mixed fuel and air. In Diesel engines , combustion begins when the fuel is injected into pre-compressed air. In both cases, combustion timing is explicitly controlled. In an HCCI engine, however, the homogeneous mixture of fuel and air is compressed and combustion begins whenever sufficient pressure and temperature are reached. This means that no well-defined combustion initiator provides direct control. Engines must be designed so that ignition conditions occur at the desired timing. To achieve dynamic operation, the control system must manage the conditions that induce combustion. Options include the compression ratio, inducted gas temperature, inducted gas pressure, fuel-air ratio, or quantity of retained or re-inducted exhaust. Several control approaches are discussed below.
Two compression ratios are significant. The geometric compression ratio can be changed with a movable plunger at the top of the cylinder head . This system is used in diesel model aircraft engines . The effective compression ratio can be reduced from the geometric ratio by closing the intake valve either very late or very early with variable valve actuation ( variable valve timing that enables the Miller cycle ). Both approaches require energy to achieve fast response. Additionally, implementation is expensive, but is effective. [ 9 ] The effect of compression ratio on HCCI combustion has also been studied extensively. [ 10 ]
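A back-of-the-envelope sketch of why the (effective) compression ratio is such a strong lever is given below; the intake temperature, ratio of specific heats and compression ratios are assumed values, and the ideal-gas adiabatic relation T2 = T1 * CR^(gamma - 1) is only an estimate.

```python
def end_of_compression_temperature(t_intake_k, compression_ratio, gamma=1.35):
    """Ideal-gas adiabatic estimate: T2 = T1 * CR**(gamma - 1)."""
    return t_intake_k * compression_ratio ** (gamma - 1)

for cr in (10, 14, 18):
    t2 = end_of_compression_temperature(400.0, cr)
    print(f"CR {cr:2d}: ~{t2:.0f} K")   # hotter charge at higher CR -> earlier autoignition
```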
HCCI's autoignition event is highly sensitive to temperature. The simplest temperature control method uses resistance heaters to vary the inlet temperature, but this approach is too slow to change on a cycle-to-cycle frequency. [ 11 ] Another technique is fast thermal management (FTM). It is accomplished by varying the intake charge temperature by mixing hot and cold air streams. It is fast enough to allow cycle-to-cycle control. [ 12 ] It is also expensive to implement and has limited bandwidth associated with actuator energy.
Exhaust gas is very hot if retained or re-inducted from the previous combustion cycle or cool if recirculated through the intake as in conventional EGR systems. The exhaust has dual effects on HCCI combustion. It dilutes the fresh charge, delaying ignition and reducing the chemical energy and engine output. Hot combustion products conversely increase gas temperature in the cylinder and advance ignition. Control of combustion timing HCCI engines using EGR has been shown experimentally. [ 13 ]
Variable valve actuation (VVA) extends the HCCI operating region by giving finer control over the temperature-pressure-time envelope within the combustion chamber. VVA can achieve this via either:
While electro-hydraulic and camless VVA systems offer control over the valve event, the componentry for such systems is currently complicated and expensive. Mechanical variable lift and duration systems, however, although more complex than a standard valvetrain, are cheaper and less complicated. It is relatively simple to configure such systems to achieve the necessary control over the valve lift curve.
Another means to extend the operating range is to control the onset of ignition and the heat release rate [ 14 ] [ 15 ] by manipulating the fuel itself. This is usually carried out by blending multiple fuels "on the fly" for the same engine. [ 16 ] Examples include blending of commercial gasoline and diesel fuels, [ 17 ] adopting natural gas [ 18 ] or ethanol. [ 19 ] This can be achieved in a number of ways:
Compression Ignition Direct Injection (CIDI) combustion is a well-established means of controlling ignition timing and heat release rate and is adopted in diesel engine combustion. Partially Pre-mixed Charge Compression Ignition (PPCI) also known as Premixed Charge Compression Ignition (PCCI) is a compromise offering the control of CIDI combustion with the reduced exhaust gas emissions of HCCI, specifically lower soot . [ 20 ] The heat release rate is controlled by preparing the combustible mixture in such a way that combustion occurs over a longer time duration making it less prone to knocking . This is done by timing the injection event such that a range of air/fuel ratios spread across the combustion cylinder when ignition begins. Ignition occurs in different regions of the combustion chamber at different times - slowing the heat release rate. This mixture is designed to minimize the number of fuel-rich pockets, reducing soot formation. [ 21 ] The adoption of high EGR and diesel fuels with a greater resistance to ignition (more "gasoline like") enable longer mixing times before ignition and thus fewer rich pockets that produce soot and NO x [ 20 ] [ 21 ]
In a typical ICE, combustion occurs via a flame. Hence at any point in time, only a fraction of the total fuel is burning. This results in low peak pressures and low energy release rates. In HCCI however, the entire fuel/air mixture ignites and burns over a much smaller time interval, resulting in high peak pressures and high energy release rates. To withstand the higher pressures, the engine has to be structurally stronger. Several strategies have been proposed to lower the rate of combustion and peak pressure. Mixing fuels, with different autoignition properties, can lower the combustion speed. [ 22 ] However, this requires significant infrastructure to implement. Another approach uses dilution (i.e. with exhaust gases) to reduce the pressure and combustion rates (and output). [ 23 ]
In the divided combustion chamber approach [1] , there are two cooperating combustion chambers: a small auxiliary one and a big main one. A high compression ratio is used in the auxiliary combustion chamber. A moderate compression ratio is used in the main combustion chamber, wherein a homogeneous air-fuel mixture is compressed / heated near, yet below, the auto-ignition threshold. The high compression ratio in the auxiliary combustion chamber causes the auto-ignition of the homogeneous lean air-fuel mixture therein (no spark plug required); the burnt gas bursts through some "transfer ports", just before the TDC, into the main combustion chamber, triggering its auto-ignition. The engine need not be structurally stronger.
In ICEs, power can be increased by introducing more fuel into the combustion chamber. These engines can withstand a boost in power because the heat release rate in these engines is slow. However, in HCCI engines increasing the fuel/air ratio results in higher peak pressures and heat release rates. In addition, many viable HCCI control strategies require thermal preheating of the fuel, which reduces the density and hence the mass of the air/fuel charge in the combustion chamber, reducing power. These factors make increasing the power in HCCI engines challenging.
One technique is to use fuels with different autoignition properties. This lowers the heat release rate and peak pressures and makes it possible to increase the equivalence ratio. Another way is to thermally stratify the charge so that different points in the compressed charge have different temperatures and burn at different times, lowering the heat release rate and making it possible to increase power. [ 24 ] A third way is to run the engine in HCCI mode only at part load conditions and run it as a diesel or SI engine at higher load conditions. [ 25 ]
Because HCCI operates on lean mixtures, the peak temperature is much lower than that encountered in SI and diesel engines. This low peak temperature reduces the formation of NO x , but it also leads to incomplete burning of fuel, especially near combustion chamber walls. This produces relatively high carbon monoxide and hydrocarbon emissions. An oxidizing catalyst can remove the regulated species, because the exhaust is still oxygen-rich.
Engine knock or pinging occurs when some of the unburnt gases ahead of the flame in an SI engine spontaneously ignite. This gas is compressed as the flame propagates and the pressure in the combustion chamber rises. The high pressure and corresponding high temperature of unburnt reactants can cause them to spontaneously ignite. This causes a pressure wave to traverse from the end gas region and an expansion wave to traverse into the end gas region. The two waves reflect off the boundaries of the combustion chamber and interact to produce high amplitude standing waves , thus forming a primitive thermoacoustic device where the resonance is amplified by the increased heat release during the wave travel similar to a Rijke tube .
A similar ignition process occurs in HCCI. However, rather than part of the reactant mixture igniting by compression ahead of a flame front, ignition in HCCI engines occurs due to piston compression more or less simultaneously in the bulk of the compressed charge. Little or no pressure differences occur between the different regions of the gas, eliminating any shock wave and knocking, but the rapid pressure rise is still present and desirable from the point of seeking maximum efficiency from near-ideal isochoric heat addition.
Computational models for simulating combustion and heat release rates of HCCI engines require detailed chemistry models. [ 17 ] [ 26 ] [ 27 ] This is largely because ignition is more sensitive to chemical kinetics than to turbulence/spray or spark processes as are typical in SI and diesel engines. Computational models have demonstrated the importance of accounting for the fact that the in-cylinder mixture is actually in-homogeneous, particularly in terms of temperature. This in-homogeneity is driven by turbulent mixing and heat transfer from the combustion chamber walls. The amount of temperature stratification dictates the rate of heat release and thus tendency to knock. [ 28 ] This limits the usefulness of considering the in-cylinder mixture as a single zone, resulting in the integration of 3D computational fluid dynamics codes such as Los Alamos National Laboratory's KIVA CFD code and faster solving probability density function modelling codes. [ 29 ] [ 30 ]
Several car manufacturers have functioning HCCI prototypes.
To date, few prototype engines run in HCCI mode, but HCCI research has resulted in advancements in fuel and engine development. Examples include: | https://en.wikipedia.org/wiki/Homogeneous_charge_compression_ignition |
In mathematics , homogeneous coordinates or projective coordinates , introduced by August Ferdinand Möbius in his 1827 work Der barycentrische Calcul , [ 1 ] [ 2 ] [ 3 ] are a system of coordinates used in projective geometry , just as Cartesian coordinates are used in Euclidean geometry . They have the advantage that the coordinates of points, including points at infinity , can be represented using finite coordinates. Formulas involving homogeneous coordinates are often simpler and more symmetric than their Cartesian counterparts. Homogeneous coordinates have a range of applications, including computer graphics and 3D computer vision , where they allow affine transformations and, in general, projective transformations to be easily represented by a matrix . They are also used in fundamental elliptic curve cryptography algorithms. [ 4 ]
If homogeneous coordinates of a point are multiplied by a non-zero scalar then the resulting coordinates represent the same point. Since homogeneous coordinates are also given to points at infinity, the number of coordinates required to allow this extension is one more than the dimension of the projective space being considered. For example, two homogeneous coordinates are required to specify a point on the projective line and three homogeneous coordinates are required to specify a point in the projective plane.
The real projective plane can be thought of as the Euclidean plane with additional points added, which are called points at infinity , and are considered to lie on a new line, the line at infinity . There is a point at infinity corresponding to each direction (numerically given by the slope of a line), informally defined as the limit of a point that moves in that direction away from the origin. Parallel lines in the Euclidean plane are said to intersect at a point at infinity corresponding to their common direction. Given a point ( x , y ) {\displaystyle (x,y)} on the Euclidean plane, for any non-zero real number Z {\displaystyle Z} , the triple ( x Z , y Z , Z ) {\displaystyle (xZ,yZ,Z)} is called a set of homogeneous coordinates for the point. By this definition, multiplying the three homogeneous coordinates by a common, non-zero factor gives a new set of homogeneous coordinates for the same point. In particular, ( x , y , 1 ) {\displaystyle (x,y,1)} is such a system of homogeneous coordinates for the point ( x , y ) {\displaystyle (x,y)} .
For example, the Cartesian point ( 1 , 2 ) {\displaystyle (1,2)} can be represented in homogeneous coordinates as ( 1 , 2 , 1 ) {\displaystyle (1,2,1)} or ( 2 , 4 , 2 ) {\displaystyle (2,4,2)} . The original Cartesian coordinates are recovered by dividing the first two positions by the third. Thus unlike Cartesian coordinates, a single point can be represented by infinitely many homogeneous coordinates.
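A minimal sketch of this conversion (with illustrative helper names) is:

```python
def to_homogeneous(x, y, scale=1.0):
    """(x, y) -> (xZ, yZ, Z) for any non-zero Z."""
    return (x * scale, y * scale, scale)

def to_cartesian(X, Y, Z):
    if Z == 0:
        raise ValueError("a point at infinity has no Cartesian counterpart")
    return (X / Z, Y / Z)

print(to_homogeneous(1, 2))            # (1.0, 2.0, 1.0)
print(to_homogeneous(1, 2, scale=2))   # (2, 4, 2)   -- same projective point
print(to_cartesian(2, 4, 2))           # (1.0, 2.0)
```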
The equation of a line through the origin ( 0 , 0 ) {\displaystyle (0,0)} may be written n x + m y = 0 {\displaystyle nx+my=0} where n {\displaystyle n} and m {\displaystyle m} are not both 0 {\displaystyle 0} . In parametric form this can be written x = m t , y = − n t {\displaystyle x=mt,y=-nt} . Let Z = 1 / t {\displaystyle Z=1/t} , so the coordinates of a point on the line may be written ( m / Z , − n / Z ) {\displaystyle (m/Z,-n/Z)} . In homogeneous coordinates this becomes ( m , − n , Z ) {\displaystyle (m,-n,Z)} . In the limit, as t {\displaystyle t} approaches infinity, in other words, as the point moves away from the origin, Z {\displaystyle Z} approaches 0 {\displaystyle 0} and the homogeneous coordinates of the point become ( m , − n , 0 ) {\displaystyle (m,-n,0)} . Thus we define ( m , − n , 0 ) {\displaystyle (m,-n,0)} as the homogeneous coordinates of the point at infinity corresponding to the direction of the line n x + m y = 0 {\displaystyle nx+my=0} . As any line of the Euclidean plane is parallel to a line passing through the origin, and since parallel lines have the same point at infinity, the infinite point on every line of the Euclidean plane has been given homogeneous coordinates.
To summarize:
The triple ( 0 , 0 , 0 ) {\displaystyle (0,0,0)} is omitted and does not represent any point. The origin of the Euclidean plane is represented by ( 0 , 0 , 1 ) {\displaystyle (0,0,1)} . [ 5 ]
Some authors use different notations for homogeneous coordinates which help distinguish them from Cartesian coordinates. The use of colons instead of commas, for example ( x : y : z ) {\displaystyle (x:y:z)} instead of ( x , y , z ) {\displaystyle (x,y,z)} , emphasizes that the coordinates are to be considered ratios. [ 6 ] Square brackets, as in [ x , y , z ] {\displaystyle [x,y,z]} emphasize that multiple sets of coordinates are associated with a single point. [ 7 ] Some authors use a combination of colons and square brackets, as in [ x : y : z ] {\displaystyle [x:y:z]} . [ 8 ]
The discussion in the preceding section applies analogously to projective spaces other than the plane. So the points on the projective line may be represented by pairs of coordinates ( x , y ) {\displaystyle (x,y)} , not both zero. In this case, the point at infinity is ( 1 , 0 ) {\displaystyle (1,0)} . Similarly the points in projective n {\displaystyle n} -space are represented by ( n + 1 ) {\displaystyle (n+1)} -tuples. [ 9 ]
The use of real numbers gives homogeneous coordinates of points in the classical case of the real projective spaces, however any field may be used, in particular, the complex numbers may be used for complex projective space . For example, the complex projective line uses two homogeneous complex coordinates and is known as the Riemann sphere . Other fields, including finite fields , can be used.
Homogeneous coordinates for projective spaces can also be created with elements from a division ring (a skew field). However, in this case, care must be taken to account for the fact that multiplication may not be commutative . [ 10 ]
For the general ring A , a projective line over A can be defined with homogeneous factors acting on the left and the projective linear group acting on the right.
Another definition of the real projective plane can be given in terms of equivalence classes . For non-zero elements
of R 3 {\displaystyle \mathbb {R} ^{3}} , define ( x 1 , y 1 , z 1 ) ∼ ( x 2 , y 2 , z 2 ) {\displaystyle (x_{1},y_{1},z_{1})\sim (x_{2},y_{2},z_{2})} to mean there is a
non-zero λ {\displaystyle \lambda } so that ( x 1 , y 1 , z 1 ) = ( λ x 2 , λ y 2 , λ z 2 ) {\displaystyle (x_{1},y_{1},z_{1})=(\lambda x_{2},\lambda y_{2},\lambda z_{2})} . Then ∼ {\displaystyle \sim } is an equivalence relation and the projective plane can be defined as the
equivalence classes of R 3 ∖ { 0 } . {\displaystyle \mathbb {R} ^{3}\setminus \left\{0\right\}.} If ( x , y , z ) {\displaystyle (x,y,z)} is
one of the elements of the equivalence class p {\displaystyle p} then these are taken to be homogeneous coordinates of p {\displaystyle p} .
Lines in this space are defined to be sets of solutions of equations of the form a x + b y + c z = 0 {\displaystyle ax+by+cz=0} where not all of a {\displaystyle a} , b {\displaystyle b} and c {\displaystyle c} are zero. Satisfaction of the condition a x + b y + c z = 0 {\displaystyle ax+by+cz=0} depends only on the equivalence class of ( x , y , z ) , {\displaystyle (x,y,z),} so the equation defines a set
of points in the projective plane. The mapping ( x , y ) → ( x , y , 1 ) {\displaystyle (x,y)\rightarrow (x,y,1)} defines an inclusion from the
Euclidean plane to the projective plane and the complement of the image is the set of points with z = 0 {\displaystyle z=0} . The equation z = 0 {\displaystyle z=0} is an equation of a line in the projective plane
( see definition of a line in the projective plane ), and is
called the line at infinity.
The equivalence classes, p {\displaystyle p} , are the lines through the origin with the origin removed. The origin does not really play an
essential part in the previous discussion so it can be added back in without changing the properties of the projective
plane. This produces a variation on the definition, namely the projective plane is defined as the set of lines in R 3 {\displaystyle \mathbb {R} ^{3}} that pass through the origin and the coordinates of a non-zero element ( x , y , z ) {\displaystyle (x,y,z)} of a line are taken to be homogeneous coordinates of the line. These lines are now interpreted as points in the
projective plane.
Again, this discussion applies analogously to other dimensions. So the projective space of dimension n can be defined as
the set of lines through the origin in R n + 1 {\displaystyle \mathbb {R} ^{n+1}} . [ 11 ]
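A small sketch of testing this equivalence relation numerically (illustrative code, not part of the article): two non-zero triples represent the same projective point exactly when one is a scalar multiple of the other, i.e. when their cross product vanishes.

```python
import numpy as np

def same_projective_point(p, q, tol=1e-12):
    p, q = np.asarray(p, dtype=float), np.asarray(q, dtype=float)
    if not p.any() or not q.any():
        raise ValueError("(0, 0, 0) is not a valid homogeneous coordinate")
    return bool(np.allclose(np.cross(p, q), 0.0, atol=tol))

print(same_projective_point((1, 2, 1), (2, 4, 2)))   # True: proportional triples
print(same_projective_point((1, 2, 1), (1, 2, 0)))   # False: finite point vs point at infinity
```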
Homogeneous coordinates are not uniquely determined by a point, so a function defined on the coordinates, say f ( x , y , z ) {\displaystyle f(x,y,z)} , does not determine a function defined on points as with Cartesian coordinates.
But a condition f ( x , y , z ) = 0 {\displaystyle f(x,y,z)=0} defined on the coordinates, as might be used to describe a
curve, determines a condition on points if the function is homogeneous . Specifically, suppose
there is a k {\displaystyle k} such that
f ( λ x , λ y , λ z ) = λ k f ( x , y , z ) . {\displaystyle f(\lambda x,\lambda y,\lambda z)=\lambda ^{k}f(x,y,z).}
If a set of coordinates represents the same point as ( x , y , z ) {\displaystyle (x,y,z)} then it can be written ( λ x , λ y , λ z ) {\displaystyle (\lambda x,\lambda y,\lambda z)} for some non-zero value of λ {\displaystyle \lambda } . Then
f ( x , y , z ) = 0 ⟺ f ( λ x , λ y , λ z ) = λ k f ( x , y , z ) = 0. {\displaystyle f(x,y,z)=0\iff f(\lambda x,\lambda y,\lambda z)=\lambda ^{k}f(x,y,z)=0.}
A polynomial g ( x , y ) {\displaystyle g(x,y)} of degree k {\displaystyle k} can be turned into a homogeneous polynomial by
replacing x {\displaystyle x} with x / z {\displaystyle x/z} , y {\displaystyle y} with y / z {\displaystyle y/z} and multiplying by z k {\displaystyle z^{k}} , in other words by
defining
f ( x , y , z ) = z k g ( x / z , y / z ) . {\displaystyle f(x,y,z)=z^{k}g(x/z,y/z).}
The resulting function f {\displaystyle f} is a polynomial, so it makes sense to extend its domain to triples where z = 0 {\displaystyle z=0} . The process can be reversed by setting z = 1 {\displaystyle z=1} , or
g ( x , y ) = f ( x , y , 1 ) . {\displaystyle g(x,y)=f(x,y,1).}
The equation f ( x , y , z ) = 0 {\displaystyle f(x,y,z)=0} can then be thought of as the homogeneous form of g ( x , y ) = 0 {\displaystyle g(x,y)=0} and it defines the same curve when restricted to the Euclidean plane. For example,
the homogeneous form of the equation of the line a x + b y + c = 0 {\displaystyle ax+by+c=0} is a x + b y + c z = 0. {\displaystyle ax+by+cz=0.} [ 12 ]
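The homogenisation recipe f(x, y, z) = z^k g(x/z, y/z) can be sketched with sympy as below; the circle is just an illustrative choice of g.

```python
import sympy as sp

x, y, z = sp.symbols("x y z")

def homogenize(g):
    """Return f(x, y, z) = z**k * g(x/z, y/z), where k is the total degree of g."""
    k = sp.Poly(g, x, y).total_degree()
    return sp.expand(z**k * g.subs({x: x / z, y: y / z}))

g = x**2 + y**2 - 1                    # the unit circle g(x, y) = 0
f = homogenize(g)
print(f)                               # x**2 + y**2 - z**2
print(sp.simplify(f.subs(z, 1) - g))   # 0: setting z = 1 recovers g
```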
The equation of a line in the projective plane may be given as s x + t y + u z = 0 {\displaystyle sx+ty+uz=0} where s {\displaystyle s} , t {\displaystyle t} and u {\displaystyle u} are constants. Each triple ( s , t , u ) {\displaystyle (s,t,u)} determines a line, the line determined is
unchanged if it is multiplied by a non-zero scalar, and at least one of s {\displaystyle s} , t {\displaystyle t} and u {\displaystyle u} must be non-zero. So the
triple ( s , t , u ) {\displaystyle (s,t,u)} may be taken to be homogeneous coordinates of a line in the projective plane,
that is line coordinates as opposed to point coordinates. If in s x + t y + u z = 0 {\displaystyle sx+ty+uz=0} the letters s {\displaystyle s} , t {\displaystyle t} and u {\displaystyle u} are taken as variables and x {\displaystyle x} , y {\displaystyle y} and z {\displaystyle z} are taken as constants then the equation becomes an equation of a set of lines in the space of
all lines in the plane. Geometrically it represents the set of lines that pass through the point ( x , y , z ) {\displaystyle (x,y,z)} and may be interpreted as the equation of the point in line-coordinates. In the same way, planes in 3-space may
be given sets of four homogeneous coordinates, and so on for higher dimensions. [ 13 ]
The same relation, s x + t y + u z = 0 {\displaystyle sx+ty+uz=0} , may be regarded as either the equation of a line or the
equation of a point. In general, there is no difference either algebraically or logically between homogeneous
coordinates of points and lines. So plane geometry with points as the fundamental elements and plane geometry with lines
as the fundamental elements are equivalent except for interpretation. This leads to the concept of duality in
projective geometry, the principle that the roles of points and lines can be interchanged in a theorem in projective
geometry and the result will also be a theorem. Analogously, the theory of points in projective 3-space is dual to the
theory of planes in projective 3-space, and so on for higher dimensions. [ 14 ]
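In the plane this duality also has a convenient computational face: one and the same cross-product formula gives the line joining two points and, read dually, the point where two lines meet. A sketch (assuming NumPy; the function name join_or_meet is illustrative, not from the text):

```python
import numpy as np

def join_or_meet(a, b):
    """Homogeneous coordinates of the line through two points, or dually,
    of the intersection point of two lines."""
    return np.cross(a, b)

p = np.array([1, 0, 1])                      # the Euclidean point (1, 0)
q = np.array([0, 1, 1])                      # the Euclidean point (0, 1)
line = join_or_meet(p, q)                    # [-1, -1, 1], i.e. the line x + y = 1
point = join_or_meet([1, 0, 0], [0, 1, 0])   # [0, 0, 1]: the lines x = 0 and y = 0 meet at the origin
```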
Assigning coordinates to lines in projective 3-space is more complicated since it would seem that a total of 8
coordinates, either the coordinates of two points which lie on the line or two planes whose intersection is the line,
are required. A useful method, due to Julius Plücker , creates a set of six coordinates as the determinants x i y j − x j y i ( 1 ≤ i < j ≤ 4 ) {\displaystyle x_{i}y_{j}-x_{j}y_{i}(1\leq i<j\leq 4)} from the homogeneous coordinates of two points ( x 1 , x 2 , x 3 , x 4 ) {\displaystyle (x_{1},x_{2},x_{3},x_{4})} and ( y 1 , y 2 , y 3 , y 4 ) {\displaystyle (y_{1},y_{2},y_{3},y_{4})} on the line. The Plücker embedding is the generalization of this to create homogeneous coordinates of elements of any dimension m {\displaystyle m} in a projective space of dimension n {\displaystyle n} . [ 15 ] [ 16 ]
The homogeneous form for the equation of a circle in the real or complex projective plane is x 2 + y 2 + 2 a x z + 2 b y z + c z 2 = 0 {\displaystyle x^{2}+y^{2}+2axz+2byz+cz^{2}=0} . The intersection of
this curve with the line at infinity can be found by setting z = 0 {\displaystyle z=0} . This produces the equation x 2 + y 2 = 0 {\displaystyle x^{2}+y^{2}=0} which has two solutions over the complex numbers, giving rise to
the points with homogeneous coordinates ( 1 , i , 0 ) {\displaystyle (1,i,0)} and ( 1 , − i , 0 ) {\displaystyle (1,-i,0)} in the complex projective
plane. These points are called the circular points at infinity and can be regarded as the common points of
intersection of all circles. This can be generalized to curves of higher order as circular algebraic curves . [ 17 ]
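The intersection with the line at infinity can also be verified symbolically; a sketch, assuming SymPy:

```python
import sympy as sp

x, y = sp.symbols('x y')
# On the line at infinity z = 0 the circle equation reduces to x² + y² = 0.
sp.solve(x**2 + y**2, y)   # [-I*x, I*x], giving the circular points (1, ±i, 0)
```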
Just as the selection of axes in the Cartesian coordinate system is somewhat arbitrary, the selection of a single system of homogeneous coordinates out of all possible systems is somewhat arbitrary. Therefore, it is useful to know how the different systems are related to each other.
Let ( x , y , z ) {\displaystyle (x,y,z)} be homogeneous coordinates of a point in the projective plane. A fixed matrix A = ( a b c d e f g h i ) , {\displaystyle A={\begin{pmatrix}a&b&c\\d&e&f\\g&h&i\end{pmatrix}},} with nonzero determinant , defines a new system of coordinates ( X , Y , Z ) {\displaystyle (X,Y,Z)} by the equation ( X Y Z ) = A ( x y z ) . {\displaystyle {\begin{pmatrix}X\\Y\\Z\end{pmatrix}}=A{\begin{pmatrix}x\\y\\z\end{pmatrix}}.} Multiplication of ( x , y , z ) {\displaystyle (x,y,z)} by a scalar results in the multiplication of ( X , Y , Z ) {\displaystyle (X,Y,Z)} by the same scalar, and X {\displaystyle X} , Y {\displaystyle Y} and Z {\displaystyle Z} cannot be all 0 {\displaystyle 0} unless x {\displaystyle x} , y {\displaystyle y} and z {\displaystyle z} are all zero since A {\displaystyle A} is nonsingular. So ( X , Y , Z ) {\displaystyle (X,Y,Z)} are a new system of homogeneous coordinates for the same point of the projective plane.
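A numerical illustration of such a change of coordinates (a sketch assuming NumPy; the particular matrix is an arbitrary example):

```python
import numpy as np

A = np.array([[1.0, 1.0, 0.0],
              [0.0, 1.0, 1.0],
              [1.0, 0.0, 1.0]])        # any matrix with nonzero determinant
p = np.array([2.0, 4.0, 2.0])          # homogeneous coordinates of a point

P = A @ p                              # new coordinates (X, Y, Z)
P_scaled = A @ (3 * p)                 # scaling the input scales the output
assert np.allclose(P_scaled, 3 * P)    # so (X, Y, Z) is again a homogeneous system
```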
Möbius's original formulation of homogeneous coordinates specified the position of a point as the center of mass (or barycenter) of a system of three point masses placed at the vertices of a fixed triangle. Points within the triangle are represented by positive masses and points outside the triangle are represented by allowing negative masses. Multiplying the masses in the system by a scalar does not affect the center of mass, so this is a special case of a system of homogeneous coordinates.
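Möbius's construction can be made concrete: the point named by masses (m₁, m₂, m₃) is the weighted average of the triangle's vertices, and rescaling all the masses leaves it unchanged. A sketch (assuming NumPy; the triangle is an arbitrary example):

```python
import numpy as np

A, B, C = np.array([0.0, 0.0]), np.array([1.0, 0.0]), np.array([0.0, 1.0])

def barycenter(m1, m2, m3):
    """Center of mass of masses m1, m2, m3 placed at the vertices A, B, C."""
    return (m1 * A + m2 * B + m3 * C) / (m1 + m2 + m3)

p = barycenter(1, 2, 3)        # the same point as
q = barycenter(10, 20, 30)     # the masses scaled by a common factor
assert np.allclose(p, q)
```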
Let l {\displaystyle l} , m {\displaystyle m} and n {\displaystyle n} be three lines in the plane and define a set of coordinates X {\displaystyle X} , Y {\displaystyle Y} and Z {\displaystyle Z} of a point p {\displaystyle p} as the signed distances from p {\displaystyle p} to these three lines. These are called the trilinear coordinates of p {\displaystyle p} with respect to the triangle whose vertices are the pairwise intersections of the lines. Strictly speaking these are not
homogeneous, since the values of X {\displaystyle X} , Y {\displaystyle Y} and Z {\displaystyle Z} are determined exactly, not just up to proportionality. There is
a linear relationship between them however, so these coordinates can be made homogeneous by allowing multiples of ( X , Y , Z ) {\displaystyle (X,Y,Z)} to represent the same point. More generally, X {\displaystyle X} , Y {\displaystyle Y} and Z {\displaystyle Z} can be defined as
constants p {\displaystyle p} , r {\displaystyle r} and q {\displaystyle q} times the distances to l {\displaystyle l} , m {\displaystyle m} and n {\displaystyle n} , resulting in a different system of
homogeneous coordinates with the same triangle of reference. This is, in fact, the most general type of system of
homogeneous coordinates for points in the plane if none of the lines is the line at infinity. [ 18 ]
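As a sketch of how such coordinates could be computed (assuming NumPy; the three lines are an arbitrary example), the signed distance from a point (x₀, y₀) to the line ax + by + c = 0 is (ax₀ + by₀ + c)/√(a² + b²):

```python
import numpy as np

def signed_distance(line, p):
    """Signed distance from the point p = (x0, y0) to the line a*x + b*y + c = 0."""
    a, b, c = line
    return (a * p[0] + b * p[1] + c) / np.hypot(a, b)

# The lines x = 0, y = 0 and x + y = 1 bound the reference triangle.
lines = [(1, 0, 0), (0, 1, 0), (1, 1, -1)]
p = (0.25, 0.25)                                     # a sample point
trilinear = [signed_distance(l, p) for l in lines]   # one signed distance per line
```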
Homogeneous coordinates are ubiquitous in computer graphics because they allow common vector operations such as translation , rotation , scaling and perspective projection to be represented as a matrix by which the vector is multiplied. Because matrix multiplication is associative, any sequence of such operations can be multiplied out into a single matrix, allowing simple and efficient processing. By contrast, using Cartesian coordinates, translations and perspective projection cannot be expressed as matrix multiplications, though other operations can. Modern OpenGL and Direct3D graphics cards take advantage of homogeneous coordinates to implement a vertex shader efficiently using vector processors with 4-element registers. [ 19 ] [ 20 ]
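For instance, in the plane a translation and a rotation can each be written as a 3 × 3 matrix acting on homogeneous coordinates, and their composition collapses to a single matrix; a sketch, assuming NumPy:

```python
import numpy as np

theta, tx, ty = np.pi / 2, 3.0, 1.0

T = np.array([[1, 0, tx],                            # translation by (tx, ty)
              [0, 1, ty],
              [0, 0,  1]])
R = np.array([[np.cos(theta), -np.sin(theta), 0],    # rotation about the origin
              [np.sin(theta),  np.cos(theta), 0],
              [0,              0,             1]])

M = T @ R                          # rotate first, then translate, as one matrix
p = np.array([1.0, 0.0, 1.0])      # the point (1, 0) in homogeneous coordinates
M @ p                              # ≈ [3, 2, 1], i.e. the point (3, 2)
```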
For example, in perspective projection, a position in space is associated with the line from it to a fixed point called
the center of projection . The point is then mapped to a plane by finding the point of intersection of that plane and
the line. This produces an accurate representation of how a three-dimensional object appears to the eye. In the simplest
situation, the center of projection is the origin and points are mapped to the plane z = 1 {\displaystyle z=1} , working for
the moment in Cartesian coordinates. For a given point in space, ( x , y , z ) {\displaystyle (x,y,z)} , the point where the
line and the plane intersect is ( x / z , y / z , 1 ) {\displaystyle (x/z,y/z,1)} . Dropping the now superfluous z {\displaystyle z} coordinate,
this becomes ( x / z , y / z ) {\displaystyle (x/z,y/z)} . In homogeneous coordinates, the point ( x , y , z ) {\displaystyle (x,y,z)} is represented by ( x w , y w , z w , w ) {\displaystyle (xw,yw,zw,w)} and the point it maps to on the plane is
represented by ( x w , y w , z w ) {\displaystyle (xw,yw,zw)} , so projection can be represented in matrix form as ( 1 0 0 0 0 1 0 0 0 0 1 0 ) {\displaystyle {\begin{pmatrix}1&0&0&0\\0&1&0&0\\0&0&1&0\end{pmatrix}}} Matrices representing other geometric transformations can be combined with this and each other by matrix multiplication.
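A numerical sketch of the projection just described (assuming NumPy):

```python
import numpy as np

P = np.array([[1, 0, 0, 0],                # the 3x4 projection matrix above
              [0, 1, 0, 0],
              [0, 0, 1, 0]])

point = np.array([2.0, 4.0, 2.0, 1.0])     # (x, y, z) = (2, 4, 2) with w = 1
xw, yw, zw = P @ point                     # homogeneous image on the plane z = 1
xw / zw, yw / zw                           # -> (1.0, 2.0), the projected point (x/z, y/z)
```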
As a result, any perspective projection of space can be represented as a single matrix. [ 21 ] [ 22 ] | https://en.wikipedia.org/wiki/Homogeneous_coordinates |
Within the field of fluid dynamics , homogeneous isotropic turbulence is an idealized version of realistic turbulence that is amenable to analytical study. The concept of isotropic turbulence was first introduced by G.I. Taylor in 1935. [ 1 ] The turbulence is called homogeneous when its statistical properties are invariant under spatial translations, and isotropic when they are in addition invariant under rotations and reflections of the coordinate axes. [ 2 ] [ 3 ] [ 4 ]
G.I. Taylor also suggested a way of obtaining almost homogeneous isotropic turbulence by passing fluid over a uniform grid. The theory was further developed by Theodore von Kármán and Leslie Howarth ( Kármán–Howarth equation ) under dynamical considerations. Kolmogorov's theory of 1941 was developed using Taylor's idea as a platform.
This fluid dynamics –related article is a stub . You can help Wikipedia by expanding it . | https://en.wikipedia.org/wiki/Homogeneous_isotropic_turbulence |