In mathematics, a number of concepts employ the word harmonic. The similarity of this terminology to that of music is not accidental: the equations of motion of vibrating strings, drums and columns of air are given by formulas involving Laplacians; their solutions are eigenfunctions whose eigenvalues correspond to the modes of vibration. Thus, the term "harmonic" is applied when one is considering functions with sinusoidal variations, or solutions of Laplace's equation and related concepts. Mathematical terms whose names include "harmonic" include the concepts treated in the entries below.
https://en.wikipedia.org/wiki/Harmonic_(mathematics)
Harmonic balance is a method used to calculate the steady-state response of nonlinear differential equations, [1] and is mostly applied to nonlinear electrical circuits. [2][3][4] It is a frequency-domain method for calculating the steady state, as opposed to the various time-domain steady-state methods. The name "harmonic balance" is descriptive of the method, which starts with Kirchhoff's current law written in the frequency domain and a chosen number of harmonics. A sinusoidal signal applied to a nonlinear component in a system will generate harmonics of the fundamental frequency. Effectively, the method assumes that the solution can be represented by a linear combination of sinusoids, then balances current and voltage sinusoids to satisfy Kirchhoff's law. The method is commonly used to simulate circuits which include nonlinear elements, [5] and is most applicable to systems with feedback in which limit cycles occur.

Microwave circuits were the original application for harmonic balance methods in electrical engineering. They were well suited because, historically, microwave circuits consisted of many linear components, which can be represented directly in the frequency domain, plus a few nonlinear components; system sizes were typically small. For more general circuits, the method was considered impractical for all but very small circuits until the mid-1990s, when Krylov subspace methods were applied to the problem. [6][7] The application of preconditioned Krylov subspace methods allowed much larger systems to be solved, both in the size of the circuit and in the number of harmonics. This made practical the present-day use of harmonic balance methods to analyze radio-frequency integrated circuits (RFICs). [8]

Consider the differential equation $\ddot{x} + x^3 = 0$. Using the ansatz $x = A\cos(\omega t)$ and substituting, we obtain
$$-A\omega^2\cos(\omega t) + \tfrac{1}{4}A^3\bigl(\cos(3\omega t) + 3\cos(\omega t)\bigr) = 0.$$
Matching the $\cos(\omega t)$ terms gives $\omega = \sqrt{\tfrac{3}{4}}\,A$, which yields the approximate period $T = 2\pi/\omega \approx 7.2552/A$.

For a more exact approximation, we use the ansatz $x = A_1\cos(\omega t) + A_3\cos(3\omega t)$. Substituting and matching the $\cos(\omega t)$ and $\cos(3\omega t)$ terms, we obtain after routine algebra
$$\omega = \sqrt{\tfrac{3}{4}}\,A_1\sqrt{1 + y + 2y^2}, \qquad y = A_3/A_1, \qquad 51y^3 + 27y^2 + 21y - 1 = 0.$$
The cubic equation for $y$ has only one real root, $y \approx 0.0448$. With that, we obtain the approximate period
$$T = \frac{2\pi(1+y)}{\sqrt{\tfrac{3}{4}}\,A\sqrt{1 + y + 2y^2}} \approx \frac{7.402}{A},$$
which approaches the exact value $T = 7.4163\cdots/A$. The harmonic balance algorithm is a special version of Galerkin's method.
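The two-term calculation above is easy to reproduce numerically. Below is a sketch in Python (NumPy/SciPy); the exact period follows from energy conservation for $\ddot{x} + x^3 = 0$, namely $T = 4\int_0^A dx/\sqrt{(A^4 - x^4)/2}$, which is standard but not stated in the text above.

```python
import numpy as np
from scipy.integrate import quad

A = 1.0                                         # oscillation amplitude

# One-harmonic approximation: omega = sqrt(3/4) * A.
T1 = 2 * np.pi / (np.sqrt(3.0 / 4.0) * A)       # ~7.2552 for A = 1

# Two-harmonic approximation: real root of 51 y^3 + 27 y^2 + 21 y - 1 = 0.
roots = np.roots([51.0, 27.0, 21.0, -1.0])
y = roots[np.abs(roots.imag) < 1e-9].real[0]    # ~0.0448
T2 = 2 * np.pi * (1 + y) / (np.sqrt(3.0 / 4.0) * A * np.sqrt(1 + y + 2 * y**2))

# Exact period from energy conservation:
#   T = 4 * integral_0^A dx / sqrt((A^4 - x^4) / 2).
T_exact = 4 * quad(lambda x: 1.0 / np.sqrt((A**4 - x**4) / 2.0), 0.0, A)[0]

print(T1, T2, T_exact)                          # 7.2552  7.4018  7.4163
```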
The harmonic balance algorithm is used for the calculation of periodic solutions of autonomous and non-autonomous differential-algebraic systems of equations. The treatment of non-autonomous systems is slightly simpler than the treatment of autonomous ones. A non-autonomous DAE system has the representation
$$0 = F(t, x, \dot{x})$$
with a sufficiently smooth function $F: \mathbb{R}\times\mathbb{C}^n\times\mathbb{C}^n \to \mathbb{C}^n$, where $n$ is the number of equations and $t, x, \dot{x}$ are placeholders for time, the vector of unknowns, and the vector of time derivatives. The system is non-autonomous if the function $t \in \mathbb{R} \mapsto F(t, x, \dot{x})$ is not constant for (some) fixed $x$ and $\dot{x}$. Nevertheless, we require that there is a known excitation period $T > 0$ such that $t \mapsto F(t, x, \dot{x})$ is $T$-periodic.

A natural candidate set for the $T$-periodic solutions of the system equations is the Sobolev space $H^1_{\mathrm{per}}((0,T), \mathbb{C}^n)$ of weakly differentiable functions on the interval $[0, T]$ with periodic boundary conditions $x(0) = x(T)$. We assume that the smoothness and the structure of $F$ ensure that $F(t, x(t), \dot{x}(t))$ is square-integrable for all $x \in H^1_{\mathrm{per}}((0,T), \mathbb{C}^n)$.

The system $B := \{\psi_k \mid k \in \mathbb{Z}\}$ of harmonic functions $\psi_k := \exp\left(ik\frac{2\pi t}{T}\right)$ is a Schauder basis of $H^1_{\mathrm{per}}((0,T), \mathbb{C}^n)$ and forms a Hilbert basis of the Hilbert space $H := L^2([0,T], \mathbb{C})$ of square-integrable functions. Therefore, each solution candidate $x \in H^1_{\mathrm{per}}((0,T), \mathbb{C}^n)$ can be represented by a Fourier series
$$x(t) = \sum_{k=-\infty}^{\infty} \hat{x}_k \exp\left(ik\frac{2\pi t}{T}\right)$$
with Fourier coefficients $\hat{x}_k := \frac{1}{T}\int_0^T \psi_k^*(t)\, x(t)\, dt$, and the system equation is satisfied in the weak sense if for every basis function $\psi \in B$ the variational equation
$$\frac{1}{T}\int_0^T \psi^*(t)\, F(t, x(t), \dot{x}(t))\, dt = 0$$
is fulfilled. This variational equation represents an infinite sequence of scalar equations, since it has to be tested for the infinite number of basis functions $\psi$ in $B$.

The Galerkin approach to the harmonic balance is to project the candidate set, as well as the test space for the variational equation, onto the finite-dimensional subspace spanned by the finite basis $B_N := \{\psi_k \mid k \in \mathbb{Z} \text{ with } -N \le k \le N\}$. This gives the finite-dimensional solution
$$x(t) = \sum_{k=-N}^{N} \hat{x}_k \psi_k(t) = \sum_{k=-N}^{N} \hat{x}_k \exp\left(ik\frac{2\pi t}{T}\right)$$
and the finite set of equations
$$\frac{1}{T}\int_0^T \psi_k^*(t)\, F(t, x(t), \dot{x}(t))\, dt = 0 \qquad \text{for } -N \le k \le N,$$
which can be solved numerically.

In the special context of electronics, the algorithm starts with Kirchhoff's current law written in the frequency domain. To increase the efficiency of the procedure, the circuit may be partitioned into its linear and nonlinear parts, since the linear part is readily described and calculated using nodal analysis directly in the frequency domain. First, an initial guess is made for the solution; then an iterative process continues. Convergence is reached when the residual error $\epsilon$ is acceptably small, at which point all voltages and currents of the steady-state solution are known, most often represented as Fourier coefficients.
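To make the projection concrete, here is a minimal sketch of the Galerkin harmonic balance for an assumed scalar test problem $\ddot{x} + x + x^3 = \cos t$ with $T = 2\pi$ (the test equation, the sample count, and all names are our own choices, not from the article). The unknowns are the real Fourier coefficients; the residual is evaluated in the time domain and projected back onto the kept harmonics.

```python
import numpy as np
from scipy.optimize import fsolve

# Harmonic-balance (Galerkin) sketch for an assumed scalar test problem
#   x'' + x + x^3 = cos(t),  period T = 2*pi,
# keeping N harmonics; unknowns are real Fourier coefficients a_0, a_k, b_k.
N, M, omega = 7, 256, 1.0
t = np.arange(M) * 2 * np.pi / (M * omega)
kk = np.arange(1, N + 1)[:, None]              # harmonic indices 1..N
C = np.cos(kk * omega * t)                     # (N, M) cosine basis samples
S = np.sin(kk * omega * t)                     # (N, M) sine basis samples

def residual(v):
    a0, a, b = v[0], v[1:N + 1][:, None], v[N + 1:][:, None]
    x = a0 + (a * C + b * S).sum(axis=0)
    xdd = (-(kk * omega) ** 2 * (a * C + b * S)).sum(axis=0)
    r = xdd + x + x**3 - np.cos(omega * t)     # time-domain residual F
    # Galerkin projection of the residual onto the kept basis functions:
    return np.concatenate(([r.mean()],
                           (C * r).sum(axis=1) * 2 / M,
                           (S * r).sum(axis=1) * 2 / M))

v0 = np.zeros(2 * N + 1)
v0[1] = 0.5                                    # initial guess: 0.5*cos(t)
sol = fsolve(residual, v0)
print("fundamental cosine amplitude a_1 =", sol[1])
```

The root finder here plays the role of the iterative process described above: it drives all $2N{+}1$ projected residuals to zero simultaneously.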
https://en.wikipedia.org/wiki/Harmonic_balance
In mathematics, a real differential one-form $\omega$ on a surface is called a harmonic differential if $\omega$ and its conjugate one-form, written as $\omega^*$, are both closed.

Consider the case of real one-forms defined on a two-dimensional real manifold. Moreover, consider real one-forms that are the real parts of complex differentials. Let $\omega = A\,dx + B\,dy$, and formally define the conjugate one-form to be $\omega^* = A\,dy - B\,dx$.

There is a clear connection with complex analysis. Let us write a complex number $z$ in terms of its real and imaginary parts, say $x$ and $y$ respectively, i.e. $z = x + iy$. Since $\omega + i\omega^* = (A - iB)(dx + i\,dy)$, from the point of view of complex analysis, the quotient $(\omega + i\omega^*)/dz$ tends to a limit as $dz$ tends to 0. In other words, the definition of $\omega^*$ was chosen for its connection with the concept of a derivative (analyticity). Another connection with the complex unit is that $(\omega^*)^* = -\omega$ (just as $i^2 = -1$).

For a given function $f$, let us write $\omega = df$, i.e. $\omega = \frac{\partial f}{\partial x}dx + \frac{\partial f}{\partial y}dy$, where $\partial$ denotes the partial derivative. Then $(df)^* = \frac{\partial f}{\partial x}dy - \frac{\partial f}{\partial y}dx$. Now $d((df)^*)$ is not always zero; indeed
$$d((df)^*) = \Delta f\,dx\,dy, \qquad \text{where } \Delta f = \frac{\partial^2 f}{\partial x^2} + \frac{\partial^2 f}{\partial y^2}.$$

As we have seen above, we call the one-form $\omega$ harmonic if both $\omega$ and $\omega^*$ are closed. This means that $\frac{\partial A}{\partial y} = \frac{\partial B}{\partial x}$ ($\omega$ is closed) and $\frac{\partial B}{\partial y} = -\frac{\partial A}{\partial x}$ ($\omega^*$ is closed). These are called the Cauchy–Riemann equations on $A - iB$. Usually they are expressed in terms of $u(x,y) + iv(x,y)$ as $\frac{\partial u}{\partial x} = \frac{\partial v}{\partial y}$ and $\frac{\partial v}{\partial x} = -\frac{\partial u}{\partial y}$.
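The identity $d((df)^*) = \Delta f\,dx\,dy$ can be checked symbolically. Below is a SymPy sketch using the harmonic test function $f = x^2 - y^2$, which is our own arbitrary choice.

```python
import sympy as sp

x, y = sp.symbols('x y', real=True)
f = x**2 - y**2                       # a harmonic test function (assumption)

# omega = df = A dx + B dy  with  A = f_x, B = f_y.
A, B = sp.diff(f, x), sp.diff(f, y)

# Conjugate form (df)^* = A dy - B dx; its exterior derivative has the single
# coefficient d((df)^*) = (A_x + B_y) dx^dy, which should equal Laplacian(f).
d_star = sp.diff(A, x) + sp.diff(B, y)
laplacian = sp.diff(f, x, 2) + sp.diff(f, y, 2)
print(sp.simplify(d_star - laplacian))   # 0: the identity holds
print(laplacian)                         # 0: f is harmonic, so (df)^* is closed
```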
https://en.wikipedia.org/wiki/Harmonic_differential
In mathematics, a harmonic divisor number or Ore number is a positive integer whose divisors have a harmonic mean that is an integer. The first few harmonic divisor numbers are 1, 6, 28, 140, 270, 496, 672, ... Harmonic divisor numbers were introduced by Øystein Ore, who showed that every perfect number is a harmonic divisor number and conjectured that there are no odd harmonic divisor numbers other than 1.

The number 6 has the four divisors 1, 2, 3, and 6. Their harmonic mean is an integer:
$$\frac{4}{\frac{1}{1} + \frac{1}{2} + \frac{1}{3} + \frac{1}{6}} = 2.$$
Thus 6 is a harmonic divisor number. Similarly, the number 140 has divisors 1, 2, 4, 5, 7, 10, 14, 20, 28, 35, 70, and 140. Their harmonic mean is
$$\frac{12}{\frac{1}{1} + \frac{1}{2} + \frac{1}{4} + \frac{1}{5} + \frac{1}{7} + \frac{1}{10} + \frac{1}{14} + \frac{1}{20} + \frac{1}{28} + \frac{1}{35} + \frac{1}{70} + \frac{1}{140}} = 5.$$
Since 5 is an integer, 140 is a harmonic divisor number.

The harmonic mean $H(n)$ of the divisors of any number $n$ can be expressed by the formula
$$H(n) = \frac{n\,\sigma_0(n)}{\sigma_1(n)},$$
where $\sigma_i(n)$ is the sum of the $i$th powers of the divisors of $n$: $\sigma_0$ is the number of divisors, and $\sigma_1$ is the sum of divisors (Cohen 1997). All of the terms in this formula are multiplicative but not completely multiplicative. Therefore, the harmonic mean $H(n)$ is also multiplicative. This means that, for any positive integer $n$, the harmonic mean $H(n)$ can be expressed as the product of the harmonic means of the prime powers in the factorization of $n$. For instance, we have
$$H(4) = \frac{3}{1 + \frac{1}{2} + \frac{1}{4}} = \frac{12}{7}, \qquad H(5) = \frac{2}{1 + \frac{1}{5}} = \frac{5}{3}, \qquad H(7) = \frac{2}{1 + \frac{1}{7}} = \frac{7}{4},$$
and
$$H(140) = H(4\cdot5\cdot7) = H(4)\cdot H(5)\cdot H(7) = \frac{12}{7}\cdot\frac{5}{3}\cdot\frac{7}{4} = 5.$$

For any integer $M$, as Ore observed, the product of the harmonic mean and the arithmetic mean of its divisors equals $M$ itself, as can be seen from the definitions. Therefore, $M$ is harmonic, with harmonic mean of divisors $k$, if and only if the average of its divisors is the product of $M$ with a unit fraction $1/k$.

Ore showed that every perfect number is harmonic. To see this, observe that the sum of the divisors of a perfect number $M$ is exactly $2M$; therefore, the average of the divisors is $M(2/\tau(M))$, where $\tau(M)$ denotes the number of divisors of $M$. For any $M$, $\tau(M)$ is odd if and only if $M$ is a square number, for otherwise each divisor $d$ of $M$ can be paired with a different divisor $M/d$. But no perfect number can be a square: this follows from the known form of even perfect numbers and from the fact that odd perfect numbers (if they exist) must have a factor of the form $q^\alpha$ where $\alpha \equiv 1 \pmod 4$. Therefore, for a perfect number $M$, $\tau(M)$ is even and the average of the divisors is the product of $M$ with the unit fraction $2/\tau(M)$; thus, $M$ is a harmonic divisor number.

Ore conjectured that no odd harmonic divisor numbers exist other than 1. If the conjecture is true, this would imply the nonexistence of odd perfect numbers. W. H. Mills (unpublished; see Muskat) showed that any odd harmonic divisor number above 1 must have a prime power factor greater than $10^7$, and Cohen showed that any such number must have at least three different prime factors. Cohen & Sorli (2010) showed that there are no odd harmonic divisor numbers smaller than $10^{24}$.

Cohen, Goto, and others starting with Ore himself have performed computer searches listing all small harmonic divisor numbers. From these results, lists are known of all harmonic divisor numbers up to $2\times10^9$, and of all harmonic divisor numbers for which the harmonic mean of the divisors is at most 300.
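Ore's formula $H(n) = n\,\sigma_0(n)/\sigma_1(n)$ translates directly into a short search. A sketch in plain Python (the helper names are our own):

```python
def divisor_count_and_sum(n):
    # Returns (sigma_0(n), sigma_1(n)) by trial division up to sqrt(n).
    count, total, d = 0, 0, 1
    while d * d <= n:
        if n % d == 0:
            count += 1
            total += d
            if d != n // d:          # avoid double-counting a square root
                count += 1
                total += n // d
        d += 1
    return count, total

def is_harmonic_divisor_number(n):
    # Ore's condition: H(n) = n * sigma_0(n) / sigma_1(n) is an integer.
    count, total = divisor_count_and_sum(n)
    return (n * count) % total == 0

print([n for n in range(1, 10001) if is_harmonic_divisor_number(n)])
# [1, 6, 28, 140, 270, 496, 672, 1638, 2970, 6200, 8128, 8190]
```

Note that the perfect numbers 6, 28, 496, and 8128 all appear in the output, as Ore's theorem requires.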
https://en.wikipedia.org/wiki/Harmonic_divisor_number
The harmonic mixer and subharmonic mixer are types of frequency mixer, which is a circuit that changes one signal frequency to another. The ordinary mixer has two input signals and one output signal. If the two input signals are sine waves at frequencies $f_1$ and $f_2$, then the output signal consists of frequency components at the sum $f_1 + f_2$ and difference $f_1 - f_2$ frequencies. In contrast, the harmonic and subharmonic mixers form sum and difference frequencies at a harmonic multiple of one of the inputs. The output signal then contains frequencies such as $f_1 + kf_2$ and $f_1 - kf_2$, where $k$ is an integer.

The classic frequency mixer is a multiplier. Multiplying two sine waves produces just the sum and difference frequencies; the input frequencies are suppressed, and, in theory, there are no other heterodyne products. In practice, the multiplier is not perfect, and the input frequencies and other heterodyne products will be present. An actual multiplier is not needed. The significant requirement is a nonlinearity, and at microwave frequencies it is easier to use a nonlinearity rather than an ideal multiplier. A Taylor series expansion of a nonlinearity will show multiplications that give rise to the desired higher-order products (illustrated numerically after this article). Design goals for mixers seek to select the desired heterodyne products and suppress the undesired ones. Common realizations are diode mixers and overdriven diode-bridge mixers, whose drive signal looks like an odd-harmonic waveform (essentially a square wave).

One classic design for a harmonic mixer uses a step recovery diode (SRD). [1] The mixer's subharmonic input is first amplified to a power level that might be around 1 watt. That signal then drives a step recovery diode impulse generator circuit that turns the sine wave into something approximating an impulse train. The resulting impulse train has the harmonics of the input sine wave present to a high frequency (such as 18 GHz). The impulse train can then be used with a diode mixer (also called a sampler). [2] The SRD usually has a very high frequency multiplication ratio, and can be used as the basis of a comb receiver, monitoring several harmonically related frequencies at once. This forms the basis of many simple 'bug detectors', where the intention is to detect transmission on any frequency, even if not known in advance. (This is not the same as a 'rake' receiver, which is a correlation device.)

When the required frequency multiple is lower, such as doubling, tripling or quadrupling, Schottky diode circuits are more common. The conduction angle can be adjusted by changing drive level or temperature, and determines which part of the I/V curve is used and therefore the relative strengths of the different harmonically related outputs. If an even multiple is desired, then an anti-parallel pair of diodes will suppress the odd local-oscillator contribution, to the extent that the diodes can be made identical and experience the same source impedance. Unlike a normal mixer, there is a fairly clear optimum drive level, above which the conversion loss increases. A harmonic mixer can be used to avoid the complexity of generating a microwave local oscillator, and is common as a simple and reliable frequency extender to a low-frequency design. [3][4][5]

Subharmonic mixers (a particular form of harmonic mixer where the LO is provided at a submultiple of the frequency to be mixed with the incoming signal) are often used in direct-digital, or zero-IF, communications systems in order to eliminate the unwanted effects of LO self-mixing which occur in many fundamental-frequency mixers. They are also used in frequency synthesizers and network analyzers.

A variation on the subharmonic mixer that has two switching stages is used to improve mixer gain in a direct-downconversion receiver. The first switching stage mixes a received RF signal to an intermediate frequency that is one-half the received RF signal frequency. The second switching stage mixes the intermediate frequency to baseband. By connecting the two switching stages in series, current is reused and harmonic content from the first stage is fed into the second stage, thereby improving the mixer gain.

[Figure: synthesizer using harmonic mixing]
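The Taylor-series argument can be seen numerically: pass two tones through a simple memoryless nonlinearity and inspect the output spectrum, which contains components at $f_1 \pm kf_2$. The tone frequencies and the cubic polynomial model below are illustrative assumptions, not a model of any particular device.

```python
import numpy as np

fs = 10_000.0                       # sample rate (Hz), illustrative
t = np.arange(0, 1.0, 1.0 / fs)     # one-second window: 1 Hz frequency bins
f1, f2 = 900.0, 100.0               # RF input and LO subharmonic (assumed)
v = np.cos(2 * np.pi * f1 * t) + np.cos(2 * np.pi * f2 * t)

# A memoryless nonlinearity (truncated Taylor series of a diode I/V curve):
i = v + 0.5 * v**2 + 0.2 * v**3

spectrum = np.abs(np.fft.rfft(i)) / len(t)
freqs = np.fft.rfftfreq(len(t), 1.0 / fs)
print(sorted(freqs[spectrum > 0.01]))
# Output contains f1 +/- f2 (800, 1000) from the square-law term and
# f1 +/- 2*f2 (700, 1100) from the cubic term: harmonic mixing products.
```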
https://en.wikipedia.org/wiki/Harmonic_mixer
In mathematics, the $n$-th harmonic number is the sum of the reciprocals of the first $n$ natural numbers: [1]
$$H_n = 1 + \frac{1}{2} + \frac{1}{3} + \cdots + \frac{1}{n} = \sum_{k=1}^{n}\frac{1}{k}.$$
Starting from $n = 1$, the sequence of harmonic numbers begins:
$$1,\ \frac{3}{2},\ \frac{11}{6},\ \frac{25}{12},\ \frac{137}{60},\ \dots$$
Harmonic numbers are related to the harmonic mean in that the $n$-th harmonic number is also $n$ times the reciprocal of the harmonic mean of the first $n$ positive integers.

Harmonic numbers have been studied since antiquity and are important in various branches of number theory. They are sometimes loosely termed harmonic series, are closely related to the Riemann zeta function, and appear in the expressions of various special functions. The harmonic numbers roughly approximate the natural logarithm function [2]: 143 and thus the associated harmonic series grows without limit, albeit slowly. In 1737, Leonhard Euler used the divergence of the harmonic series to provide a new proof of the infinity of prime numbers. His work was extended into the complex plane by Bernhard Riemann in 1859, leading directly to the celebrated Riemann hypothesis about the distribution of prime numbers.

When the value of a large quantity of items has a Zipf's law distribution, the total value of the $n$ most-valuable items is proportional to the $n$-th harmonic number. This leads to a variety of surprising conclusions regarding the long tail and the theory of network value. The Bertrand–Chebyshev theorem implies that, except for the case $n = 1$, the harmonic numbers are never integers. [3]

By definition, the harmonic numbers satisfy the recurrence relation
$$H_{n+1} = H_n + \frac{1}{n+1}.$$
The harmonic numbers are connected to the Stirling numbers of the first kind by the relation
$$H_n = \frac{1}{n!}\left[{n+1 \atop 2}\right].$$
The harmonic numbers satisfy the series identities
$$\sum_{k=1}^{n} H_k = (n+1)H_n - n$$
and
$$\sum_{k=1}^{n} H_k^2 = (n+1)H_n^2 - (2n+1)H_n + 2n.$$
These two results are closely analogous to the corresponding integral results
$$\int_0^x \log y\,dy = x\log x - x$$
and
$$\int_0^x (\log y)^2\,dy = x(\log x)^2 - 2x\log x + 2x.$$

There are several infinite summations involving harmonic numbers and powers of $\pi$: [4]
$$\sum_{n=1}^{\infty}\frac{H_n}{n\cdot 2^n} = \frac{\pi^2}{12}, \qquad \sum_{n=1}^{\infty}\frac{H_n^2}{n^2} = \frac{17}{360}\pi^4, \qquad \sum_{n=1}^{\infty}\frac{H_n^2}{(n+1)^2} = \frac{11}{360}\pi^4, \qquad \sum_{n=1}^{\infty}\frac{H_n}{n^3} = \frac{\pi^4}{72}.$$

An integral representation given by Euler [5] is
$$H_n = \int_0^1 \frac{1-x^n}{1-x}\,dx.$$
The equality above is straightforward from the simple algebraic identity
$$\frac{1-x^n}{1-x} = 1 + x + \cdots + x^{n-1}.$$
Using the substitution $x = 1-u$, another expression for $H_n$ is
$$H_n = \int_0^1 \frac{1-x^n}{1-x}\,dx = \int_0^1 \frac{1-(1-u)^n}{u}\,du = \int_0^1\left[\sum_{k=1}^{n}\binom{n}{k}(-u)^{k-1}\right]du = \sum_{k=1}^{n}\binom{n}{k}\int_0^1 (-u)^{k-1}\,du = \sum_{k=1}^{n}\binom{n}{k}\frac{(-1)^{k-1}}{k}.$$

The $n$th harmonic number is about as large as the natural logarithm of $n$. The reason is that the sum is approximated by the integral
$$\int_1^n \frac{1}{x}\,dx,$$
whose value is $\ln n$. The values of the sequence $H_n - \ln n$ decrease monotonically towards the limit
$$\lim_{n\to\infty}\left(H_n - \ln n\right) = \gamma,$$
where $\gamma \approx 0.5772156649$ is the Euler–Mascheroni constant. The corresponding asymptotic expansion is
$$H_n \sim \ln n + \gamma + \frac{1}{2n} - \sum_{k=1}^{\infty}\frac{B_{2k}}{2kn^{2k}} = \ln n + \gamma + \frac{1}{2n} - \frac{1}{12n^2} + \frac{1}{120n^4} - \cdots,$$
where the $B_{2k}$ are the Bernoulli numbers.

A generating function for the harmonic numbers is
$$\sum_{n=1}^{\infty} z^n H_n = \frac{-\ln(1-z)}{1-z},$$
where $\ln(z)$ is the natural logarithm. An exponential generating function is
$$\sum_{n=1}^{\infty}\frac{z^n}{n!}H_n = e^z\sum_{k=1}^{\infty}\frac{(-1)^{k-1}}{k}\frac{z^k}{k!} = e^z\operatorname{Ein}(z),$$
where $\operatorname{Ein}(z)$ is the entire exponential integral. The exponential integral may also be expressed as
$$\operatorname{Ein}(z) = E_1(z) + \gamma + \ln z = \Gamma(0,z) + \gamma + \ln z,$$
where $\Gamma(0,z)$ is the incomplete gamma function.

The harmonic numbers have several interesting arithmetic properties. It is well known that $H_n$ is an integer if and only if $n = 1$, a result often attributed to Taeisinger. [6] Indeed, using 2-adic valuation, it is not difficult to prove that for $n \ge 2$ the numerator of $H_n$ is an odd number while the denominator of $H_n$ is an even number. More precisely,
$$H_n = \frac{1}{2^{\lfloor\log_2(n)\rfloor}}\frac{a_n}{b_n}$$
with some odd integers $a_n$ and $b_n$. As a consequence of Wolstenholme's theorem, for any prime number $p \ge 5$ the numerator of $H_{p-1}$ is divisible by $p^2$.
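As a quick numerical illustration of the asymptotic expansion (a sketch; truncating at the $1/(120n^4)$ term is our own choice):

```python
import math

def H(n):
    # Direct summation of the n-th harmonic number.
    return sum(1.0 / k for k in range(1, n + 1))

gamma = 0.5772156649015329
for n in (10, 100, 1000):
    approx = math.log(n) + gamma + 1/(2*n) - 1/(12*n**2) + 1/(120*n**4)
    print(n, H(n), approx, abs(H(n) - approx))   # error shrinks rapidly with n
```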
Furthermore, Eisenstein [7] proved that for every odd prime number $p$ it holds that
$$H_{(p-1)/2} \equiv -2q_p(2) \pmod{p},$$
where $q_p(2) = (2^{p-1}-1)/p$ is a Fermat quotient, with the consequence that $p$ divides the numerator of $H_{(p-1)/2}$ if and only if $p$ is a Wieferich prime.

In 1991, Eswarathasan and Levine [8] defined $J_p$ as the set of all positive integers $n$ such that the numerator of $H_n$ is divisible by a prime number $p$. They proved that
$$\{p-1,\ p^2-p,\ p^2-1\} \subseteq J_p$$
for all prime numbers $p \ge 5$, and they defined harmonic primes to be the primes $p$ such that $J_p$ has exactly 3 elements. Eswarathasan and Levine also conjectured that $J_p$ is a finite set for all primes $p$, and that there are infinitely many harmonic primes. Boyd [9] verified that $J_p$ is finite for all prime numbers up to $p = 547$ except 83, 127, and 397; and he gave a heuristic suggesting that the density of the harmonic primes in the set of all primes should be $1/e$. Sanna [10] showed that $J_p$ has zero asymptotic density, while Bing-Ling Wu and Yong-Gao Chen [11] proved that the number of elements of $J_p$ not exceeding $x$ is at most $3x^{\frac{2}{3}+\frac{1}{25\log p}}$, for all $x \ge 1$.

The harmonic numbers appear in several calculation formulas, such as the digamma function:
$$\psi(n) = H_{n-1} - \gamma.$$
This relation is also frequently used to define the extension of the harmonic numbers to non-integer $n$. The harmonic numbers are also frequently used to define $\gamma$ using the limit introduced earlier:
$$\gamma = \lim_{n\to\infty}\left(H_n - \ln(n)\right),$$
although
$$\gamma = \lim_{n\to\infty}\left(H_n - \ln\left(n + \tfrac{1}{2}\right)\right)$$
converges more quickly.

In 2002, Jeffrey Lagarias proved [12] that the Riemann hypothesis is equivalent to the statement that
$$\sigma(n) \le H_n + (\log H_n)e^{H_n}$$
is true for every integer $n \ge 1$, with strict inequality if $n > 1$; here $\sigma(n)$ denotes the sum of the divisors of $n$.

The eigenvalues of the nonlocal problem on $L^2([-1,1])$,
$$\lambda\varphi(x) = \int_{-1}^{1}\frac{\varphi(x)-\varphi(y)}{|x-y|}\,dy,$$
are given by $\lambda = 2H_n$, where by convention $H_0 = 0$, and the corresponding eigenfunctions are given by the Legendre polynomials $\varphi(x) = P_n(x)$. [13]

The $n$th generalized harmonic number of order $m$ is given by
$$H_{n,m} = \sum_{k=1}^{n}\frac{1}{k^m}.$$
(In some sources, this may also be denoted by $H_n^{(m)}$ or $H_m(n)$.) The special case $m = 0$ gives $H_{n,0} = n$. The special case $m = 1$ reduces to the usual harmonic number:
$$H_{n,1} = H_n = \sum_{k=1}^{n}\frac{1}{k}.$$
The limit of $H_{n,m}$ as $n \to \infty$ is finite if $m > 1$, with the generalized harmonic number bounded by and converging to the Riemann zeta function:
$$\lim_{n\to\infty} H_{n,m} = \zeta(m).$$
The smallest natural number $k$ such that $k^n$ does not divide the denominator of the generalized harmonic number $H(k, n)$ nor the denominator of the alternating generalized harmonic number $H'(k, n)$ has been tabulated for $n = 1, 2, \dots$

The related sum $\sum_{k=1}^{n} k^m$ occurs in the study of Bernoulli numbers; the harmonic numbers also appear in the study of Stirling numbers.

Some integrals of generalized harmonic numbers are
$$\int_0^a H_{x,2}\,dx = a\frac{\pi^2}{6} - H_a$$
and
$$\int_0^a H_{x,3}\,dx = aA - \frac{1}{2}H_{a,2},$$
where $A$ is Apéry's constant $\zeta(3)$, and
$$\sum_{k=1}^{n} H_{k,m} = (n+1)H_{n,m} - H_{n,m-1} \qquad \text{for } m \ge 0.$$
Every generalized harmonic number of order $m$ can be written as a function of harmonic numbers of order $m-1$ using
$$H_{n,m} = \sum_{k=1}^{n-1}\frac{H_{k,m-1}}{k(k+1)} + \frac{H_{n,m-1}}{n},$$
for example:
$$H_{4,3} = \frac{H_{1,2}}{1\cdot2} + \frac{H_{2,2}}{2\cdot3} + \frac{H_{3,2}}{3\cdot4} + \frac{H_{4,2}}{4}.$$

A generating function for the generalized harmonic numbers is
$$\sum_{n=1}^{\infty} z^n H_{n,m} = \frac{\operatorname{Li}_m(z)}{1-z},$$
where $\operatorname{Li}_m(z)$ is the polylogarithm and $|z| < 1$. The generating function given above for $m = 1$ is a special case of this formula.

A fractional argument for generalized harmonic numbers can be introduced as follows: for every integer $p, q > 0$ and $m > 1$ (integer or not), we have from polygamma functions
$$H_{q/p,m} = \zeta(m) - p^m\sum_{k=1}^{\infty}\frac{1}{(q+pk)^m},$$
where $\zeta(m)$ is the Riemann zeta function. The relevant recurrence relation is
$$H_{a,m} = H_{a-1,m} + \frac{1}{a^m}.$$
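A small sketch of the order-$m$ generalization, its convergence to $\zeta(m)$, and the rank-reduction identity stated above:

```python
from math import pi

def H(n, m=1):
    # Generalized harmonic number H_{n,m} = sum_{k=1}^{n} 1/k^m.
    return sum(1.0 / k**m for k in range(1, n + 1))

print(H(4, 0))               # 4.0  (m = 0 gives H_{n,0} = n)
print(H(10**6, 2))           # approaches zeta(2) = pi^2/6 ~ 1.644934
print(pi**2 / 6)

# Identity: H_{n,m} = sum_{k<n} H_{k,m-1}/(k(k+1)) + H_{n,m-1}/n, for n=4, m=3.
lhs = H(4, 3)
rhs = sum(H(k, 2) / (k * (k + 1)) for k in range(1, 4)) + H(4, 2) / 4
print(abs(lhs - rhs) < 1e-12)   # True
```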
Some special values are
$$H_{\frac{1}{4},2} = 16 - \tfrac{5}{6}\pi^2 - 8G, \qquad H_{\frac{1}{2},2} = 4 - \frac{\pi^2}{3}, \qquad H_{\frac{3}{4},2} = \frac{16}{9} - \frac{5}{6}\pi^2 + 8G,$$
$$H_{\frac{1}{4},3} = 64 - \pi^3 - 27\zeta(3), \qquad H_{\frac{1}{2},3} = 8 - 6\zeta(3), \qquad H_{\frac{3}{4},3} = \left(\frac{4}{3}\right)^3 + \pi^3 - 27\zeta(3),$$
where $G$ is Catalan's constant. In the special case that $p = 1$, we get
$$H_{n,m} = \zeta(m,1) - \zeta(m,n+1),$$
where $\zeta(m,n)$ is the Hurwitz zeta function. This relationship is used to calculate harmonic numbers numerically.

The multiplication theorem applies to harmonic numbers. Using polygamma functions, we obtain
$$H_{2x} = \frac{1}{2}\left(H_x + H_{x-\frac{1}{2}}\right) + \ln 2,$$
$$H_{3x} = \frac{1}{3}\left(H_x + H_{x-\frac{1}{3}} + H_{x-\frac{2}{3}}\right) + \ln 3,$$
or, more generally,
$$H_{nx} = \frac{1}{n}\left(H_x + H_{x-\frac{1}{n}} + H_{x-\frac{2}{n}} + \cdots + H_{x-\frac{n-1}{n}}\right) + \ln n.$$
For generalized harmonic numbers, we have
$$H_{2x,2} = \frac{1}{2}\left(\zeta(2) + \frac{1}{2}\left(H_{x,2} + H_{x-\frac{1}{2},2}\right)\right),$$
$$H_{3x,2} = \frac{1}{9}\left(6\zeta(2) + H_{x,2} + H_{x-\frac{1}{3},2} + H_{x-\frac{2}{3},2}\right),$$
where $\zeta(n)$ is the Riemann zeta function.

The next generalization was discussed by J. H. Conway and R. K. Guy in their 1995 book The Book of Numbers. [2]: 258 Let
$$H_n^{(0)} = \frac{1}{n}.$$
Then the $n$th hyperharmonic number of order $r$ ($r > 0$) is defined recursively as
$$H_n^{(r)} = \sum_{k=1}^{n} H_k^{(r-1)}.$$
In particular, $H_n^{(1)}$ is the ordinary harmonic number $H_n$.

The Roman harmonic numbers, [14] named after Steven Roman, were introduced by Daniel Loeb and Gian-Carlo Rota in the context of a generalization of umbral calculus with logarithms. [15] There are many possible definitions, but one of them, for $n, k \ge 0$, is
$$c_n^{(0)} = 1$$
and
$$c_n^{(k+1)} = \sum_{i=1}^{n}\frac{c_i^{(k)}}{i}.$$
Of course, $c_n^{(1)} = H_n$. If $n \ne 0$, they satisfy
$$c_n^{(k+1)} - \frac{c_n^{(k)}}{n} = c_{n-1}^{(k+1)}.$$
Closed-form formulas are
$$c_n^{(k)} = n!\,(-1)^k s(-n,k),$$
where $s(-n,k)$ is the Stirling number of the first kind generalized to negative first argument, and
$$c_n^{(k)} = \sum_{j=1}^{n}\binom{n}{j}\frac{(-1)^{j-1}}{j^k},$$
which was found by Donald Knuth. In fact, these numbers were defined in a more general manner using Roman numbers and Roman factorials, which include negative values for $n$. This generalization was useful in their study for defining harmonic logarithms.

The formulae given above,
$$H_x = \int_0^1 \frac{1-t^x}{1-t}\,dt = \sum_{k=1}^{\infty}\binom{x}{k}\frac{(-1)^{k-1}}{k},$$
are an integral and a series representation for a function that interpolates the harmonic numbers and, via analytic continuation, extends the definition to the complex plane other than the negative integers $x$. The interpolating function is in fact closely related to the digamma function:
$$H_x = \psi(x+1) + \gamma,$$
where $\psi(x)$ is the digamma function and $\gamma$ is the Euler–Mascheroni constant. The integration process may be repeated to obtain
$$H_{x,2} = \sum_{k=1}^{\infty}\frac{(-1)^{k-1}}{k}\binom{x}{k}H_k.$$
The Taylor series for the harmonic numbers is
$$H_x = \sum_{k=2}^{\infty}(-1)^k\zeta(k)\,x^{k-1} \qquad \text{for } |x| < 1,$$
which comes from the Taylor series for the digamma function ($\zeta$ is the Riemann zeta function).

There is an asymptotic formulation that gives the same result as the analytic continuation of the integral just described. When seeking to approximate $H_x$ for a complex number $x$, it is effective to first compute $H_m$ for some large integer $m$. Use that as an approximation for the value of $H_{m+x}$. Then use the recursion relation $H_n = H_{n-1} + 1/n$ backwards $m$ times, to unwind it to an approximation for $H_x$. Furthermore, this approximation is exact in the limit as $m$ goes to infinity.

Specifically, for a fixed integer $n$, it is the case that
$$\lim_{m\to\infty}\left[H_{m+n} - H_m\right] = 0.$$
If $n$ is not an integer, then it is not possible to say whether this equation is true, because we have not yet (in this section) defined harmonic numbers for non-integers. However, we do get a unique extension of the harmonic numbers to the non-integers by insisting that this equation continue to hold when the arbitrary integer $n$ is replaced by an arbitrary complex number $x$:
$$\lim_{m\to\infty}\left[H_{m+x} - H_m\right] = 0.$$
Swapping the order of the two sides of this equation and then subtracting them from $H_x$ gives
$$H_x = \lim_{m\to\infty}\left[H_m - (H_{m+x} - H_x)\right] = \lim_{m\to\infty}\left[\left(\sum_{k=1}^{m}\frac{1}{k}\right) - \left(\sum_{k=1}^{m}\frac{1}{x+k}\right)\right] = \lim_{m\to\infty}\sum_{k=1}^{m}\left(\frac{1}{k} - \frac{1}{x+k}\right) = x\sum_{k=1}^{\infty}\frac{1}{k(x+k)}.$$
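A numerical sanity check of the interpolation $H_x = \psi(x+1) + \gamma$, using SciPy's digamma (the test points are arbitrary):

```python
import numpy as np
from scipy.special import digamma

gamma = 0.5772156649015329

def H_interp(x):
    # Interpolated harmonic number via the digamma function.
    return digamma(x + 1) + gamma

print(H_interp(5))                        # 2.2833... = 1 + 1/2 + ... + 1/5
print(sum(1 / k for k in range(1, 6)))    # agrees at integer arguments
print(H_interp(0.5), 2 - 2 * np.log(2))   # special value H_{1/2} = 2 - 2 ln 2
```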
This infinite series converges for all complex numbers $x$ except the negative integers, which fail because trying to use the recursion relation $H_n = H_{n-1} + 1/n$ backwards through the value $n = 0$ involves a division by zero. By this construction, the function that defines the harmonic number for complex values is the unique function that simultaneously satisfies (1) $H_0 = 0$, (2) $H_x = H_{x-1} + 1/x$ for all complex numbers $x$ except the non-positive integers, and (3) $\lim_{m\to+\infty}(H_{m+x} - H_m) = 0$ for all complex values $x$.

This last formula can be used to show that
$$\int_0^1 H_x\,dx = \gamma,$$
where $\gamma$ is the Euler–Mascheroni constant, or, more generally, for every $n$ we have
$$\int_0^n H_x\,dx = n\gamma + \ln(n!).$$

There are the following special analytic values for fractional arguments between 0 and 1, given by the integral
$$H_\alpha = \int_0^1 \frac{1-x^\alpha}{1-x}\,dx.$$
More values may be generated from the recurrence relation
$$H_\alpha = H_{\alpha-1} + \frac{1}{\alpha},$$
or from the reflection relation
$$H_{-\alpha} - H_{\alpha-1} = \pi\cot(\pi\alpha).$$
For example:
$$H_{\frac{1}{2}} = 2 - 2\ln 2$$
$$H_{\frac{1}{3}} = 3 - \frac{\pi}{2\sqrt{3}} - \frac{3}{2}\ln 3$$
$$H_{\frac{2}{3}} = \frac{3}{2} + \frac{\pi}{2\sqrt{3}} - \frac{3}{2}\ln 3$$
$$H_{\frac{1}{4}} = 4 - \frac{\pi}{2} - 3\ln 2$$
$$H_{\frac{1}{5}} = 5 - \frac{\pi}{2}\sqrt{1+\frac{2}{\sqrt{5}}} - \frac{5}{4}\ln 5 - \frac{\sqrt{5}}{4}\ln\left(\frac{3+\sqrt{5}}{2}\right)$$
$$H_{\frac{3}{4}} = \frac{4}{3} + \frac{\pi}{2} - 3\ln 2$$
$$H_{\frac{1}{6}} = 6 - \frac{\sqrt{3}}{2}\pi - 2\ln 2 - \frac{3}{2}\ln 3$$
$$H_{\frac{1}{8}} = 8 - \frac{1+\sqrt{2}}{2}\pi - 4\ln 2 - \frac{1}{\sqrt{2}}\left(\ln\left(2+\sqrt{2}\right) - \ln\left(2-\sqrt{2}\right)\right)$$
$$H_{\frac{1}{12}} = 12 - \left(1+\frac{\sqrt{3}}{2}\right)\pi - 3\ln 2 - \frac{3}{2}\ln 3 + \sqrt{3}\ln\left(2-\sqrt{3}\right)$$
These are computed via Gauss's digamma theorem, which essentially states that for positive integers $p$ and $q$ with $p < q$,
$$H_{\frac{p}{q}} = \frac{q}{p} + 2\sum_{k=1}^{\lfloor\frac{q-1}{2}\rfloor}\cos\left(\frac{2\pi pk}{q}\right)\ln\left(\sin\left(\frac{\pi k}{q}\right)\right) - \frac{\pi}{2}\cot\left(\frac{\pi p}{q}\right) - \ln(2q).$$

Some derivatives of fractional harmonic numbers are given by
$$\frac{d^n H_x}{dx^n} = (-1)^{n+1}n!\left[\zeta(n+1) - H_{x,n+1}\right],$$
$$\frac{d^n H_{x,2}}{dx^n} = (-1)^{n+1}(n+1)!\left[\zeta(n+2) - H_{x,n+2}\right],$$
$$\frac{d^n H_{x,3}}{dx^n} = (-1)^{n+1}\frac{1}{2}(n+2)!\left[\zeta(n+3) - H_{x,n+3}\right].$$
And using the Maclaurin series, we have for $x < 1$ that
$$H_x = \sum_{n=1}^{\infty}(-1)^{n+1}x^n\zeta(n+1),$$
$$H_{x,2} = \sum_{n=1}^{\infty}(-1)^{n+1}(n+1)x^n\zeta(n+2),$$
$$H_{x,3} = \frac{1}{2}\sum_{n=1}^{\infty}(-1)^{n+1}(n+1)(n+2)x^n\zeta(n+3).$$
For fractional arguments between 0 and 1 and for $a > 1$,
$$H_{1/a} = \frac{1}{a}\left(\zeta(2) - \frac{1}{a}\zeta(3) + \frac{1}{a^2}\zeta(4) - \frac{1}{a^3}\zeta(5) + \cdots\right),$$
$$H_{1/a,2} = \frac{1}{a}\left(2\zeta(3) - \frac{3}{a}\zeta(4) + \frac{4}{a^2}\zeta(5) - \frac{5}{a^3}\zeta(6) + \cdots\right),$$
$$H_{1/a,3} = \frac{1}{2a}\left(2\cdot3\zeta(4) - \frac{3\cdot4}{a}\zeta(5) + \frac{4\cdot5}{a^2}\zeta(6) - \frac{5\cdot6}{a^3}\zeta(7) + \cdots\right).$$

This article incorporates material from Harmonic number on PlanetMath, which is licensed under the Creative Commons Attribution/Share-Alike License.
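Gauss's digamma theorem makes the special values above easy to check numerically; a sketch:

```python
import math

def H_frac(p, q):
    # Gauss's digamma theorem for H_{p/q}, valid for integers 0 < p < q.
    s = sum(math.cos(2 * math.pi * p * k / q) * math.log(math.sin(math.pi * k / q))
            for k in range(1, (q - 1) // 2 + 1))
    return q / p + 2 * s - (math.pi / 2) / math.tan(math.pi * p / q) - math.log(2 * q)

print(H_frac(1, 2), 2 - 2 * math.log(2))                    # H_{1/2}
print(H_frac(1, 4), 4 - math.pi / 2 - 3 * math.log(2))      # H_{1/4}
print(H_frac(2, 3),
      1.5 + math.pi / (2 * math.sqrt(3)) - 1.5 * math.log(3))  # H_{2/3}
```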
https://en.wikipedia.org/wiki/Harmonic_number
In mathematics, a polynomial $p$ whose Laplacian is zero is termed a harmonic polynomial. [1][2] The harmonic polynomials form a subspace of the vector space of polynomials over the given field. In fact, they form a graded subspace. [3] For the real field $\mathbb{R}$, the harmonic polynomials are important in mathematical physics. [4][5][6]

The Laplacian is the sum of second-order partial derivatives with respect to each of the variables, and is an invariant differential operator under the action of the orthogonal group via the group of rotations.

The standard separation of variables theorem states that every multivariate polynomial over a field can be decomposed as a finite sum of products of a radial polynomial and a harmonic polynomial. This is equivalent to the statement that the polynomial ring is a free module over the ring of radial polynomials. [7]

Consider a degree-$d$ univariate polynomial $p(x) := \sum_{k=0}^{d} a_k x^k$. In order to be harmonic, this polynomial must satisfy
$$0 = \frac{\partial^2}{\partial x^2}p(x) = \sum_{k=2}^{d} k(k-1)a_k x^{k-2}$$
at all points $x \in \mathbb{R}$. In particular, when $d = 2$, we have a polynomial $p(x) = a_0 + a_1 x + a_2 x^2$, which must satisfy the condition $a_2 = 0$. Hence, the only harmonic polynomials of one (real) variable are affine functions $x \mapsto a_0 + a_1 x$.

In the multivariable case, one finds nontrivial spaces of harmonic polynomials. Consider for instance the bivariate quadratic polynomial
$$p(x,y) := a_{0,0} + a_{1,0}x + a_{0,1}y + a_{1,1}xy + a_{2,0}x^2 + a_{0,2}y^2,$$
where $a_{0,0}, a_{1,0}, a_{0,1}, a_{1,1}, a_{2,0}, a_{0,2}$ are real coefficients. The Laplacian of this polynomial is given by
$$\Delta p(x,y) = \frac{\partial^2}{\partial x^2}p(x,y) + \frac{\partial^2}{\partial y^2}p(x,y) = 2(a_{2,0} + a_{0,2}).$$
Hence, in order for $p(x,y)$ to be harmonic, its coefficients need only satisfy the relationship $a_{2,0} = -a_{0,2}$. Equivalently, all (real) quadratic bivariate harmonic polynomials are linear combinations of the polynomials
$$1, \quad x, \quad y, \quad xy, \quad x^2 - y^2.$$
Note that, as in any vector space, there are other choices of basis for this same space of polynomials.
A basis for the real bivariate harmonic polynomials up to degree 6 is given as follows:
$$\begin{aligned}
\phi_0(x,y) &= 1\\
\phi_{1,1}(x,y) &= x & \phi_{1,2}(x,y) &= y\\
\phi_{2,1}(x,y) &= xy & \phi_{2,2}(x,y) &= x^2 - y^2\\
\phi_{3,1}(x,y) &= y^3 - 3x^2y & \phi_{3,2}(x,y) &= x^3 - 3xy^2\\
\phi_{4,1}(x,y) &= x^3y - xy^3 & \phi_{4,2}(x,y) &= -x^4 + 6x^2y^2 - y^4\\
\phi_{5,1}(x,y) &= 5x^4y - 10x^2y^3 + y^5 & \phi_{5,2}(x,y) &= x^5 - 10x^3y^2 + 5xy^4\\
\phi_{6,1}(x,y) &= 3x^5y - 10x^3y^3 + 3xy^5 & \phi_{6,2}(x,y) &= -x^6 + 15x^4y^2 - 15x^2y^4 + y^6
\end{aligned}$$
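The basis can be verified mechanically; a SymPy sketch that checks each $\phi$ above is annihilated by the Laplacian:

```python
import sympy as sp

x, y = sp.symbols('x y')
basis = [
    1, x, y, x*y, x**2 - y**2,
    y**3 - 3*x**2*y, x**3 - 3*x*y**2,
    x**3*y - x*y**3, -x**4 + 6*x**2*y**2 - y**4,
    5*x**4*y - 10*x**2*y**3 + y**5, x**5 - 10*x**3*y**2 + 5*x*y**4,
    3*x**5*y - 10*x**3*y**3 + 3*x*y**5,
    -x**6 + 15*x**4*y**2 - 15*x**2*y**4 + y**6,
]
laplacian = lambda p: sp.diff(p, x, 2) + sp.diff(p, y, 2)
print(all(sp.simplify(laplacian(p)) == 0 for p in basis))  # True
```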
https://en.wikipedia.org/wiki/Harmonic_polynomial
In mathematics, a harmonic progression (or harmonic sequence) is a progression formed by taking the reciprocals of an arithmetic progression, which is also known as an arithmetic sequence. Equivalently, a sequence is a harmonic progression when each term is the harmonic mean of the neighboring terms. As a third equivalent characterization, it is an infinite sequence of the form
$$\frac{1}{a},\ \frac{1}{a+d},\ \frac{1}{a+2d},\ \frac{1}{a+3d},\ \dots,$$
where $a$ is not zero and $-a/d$ is not a natural number, or a finite sequence of the form
$$\frac{1}{a},\ \frac{1}{a+d},\ \frac{1}{a+2d},\ \dots,\ \frac{1}{a+kd},$$
where $a$ is not zero, $k$ is a natural number, and $-a/d$ is not a natural number or is greater than $k$. For example, taking $a = d = 1$ and letting $n = 1, 2, 3, 4, \dots$ run through the natural numbers gives the harmonic sequence $1, \tfrac{1}{2}, \tfrac{1}{3}, \tfrac{1}{4}, \dots$

Infinite harmonic progressions are not summable; their partial sums grow without bound. It is not possible for a harmonic progression of distinct unit fractions (other than the trivial case where $a = 1$ and $k = 0$) to sum to an integer. The reason is that, necessarily, at least one denominator of the progression will be divisible by a prime number that does not divide any other denominator. [1]

If collinear points A, B, C, and D are such that D is the harmonic conjugate of C with respect to A and B, then the distances from any one of these points to the three remaining points form a harmonic progression. [2][3] Specifically, each of the sequences AC, AB, AD; BC, BA, BD; CA, CD, CB; and DA, DC, DB is a harmonic progression, where each of the distances is signed according to a fixed orientation of the line.

In a triangle, if the altitudes are in arithmetic progression, then the sides are in harmonic progression.

An excellent example of a harmonic progression is the Leaning Tower of Lire. In it, uniform blocks are stacked on top of each other to achieve the maximum sideways or lateral distance covered. The blocks are stacked 1/2, 1/4, 1/6, 1/8, 1/10, ... of a block length sideways below the original block. This ensures that the center of gravity is just at the center of the structure, so that it does not collapse. A slight increase in weight on the structure causes it to become unstable and fall.
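The Leaning Tower of Lire arithmetic is a one-liner to check: the total overhang of $k$ stacked blocks is the partial sum $\tfrac{1}{2}\sum_{i=1}^{k}\tfrac{1}{i}$ block lengths, which grows without bound (a sketch; the loop bounds are arbitrary):

```python
def overhang(k):
    # Total sideways reach of k stacked unit blocks: (1/2) * H_k block lengths.
    return 0.5 * sum(1.0 / i for i in range(1, k + 1))

for k in (4, 10, 100, 10_000):
    print(k, overhang(k))    # unbounded, but growing only logarithmically

# Four blocks already overhang by more than one full block length:
print(overhang(4) > 1.0)     # True: 0.5 * (1 + 1/2 + 1/3 + 1/4) ~ 1.042
```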
https://en.wikipedia.org/wiki/Harmonic_progression_(mathematics)
In Euclidean geometry, a harmonic quadrilateral is a quadrilateral whose four vertices lie on a circle, and whose pairs of opposite edges have equal products of lengths. Harmonic quadrilaterals have also been called harmonic quadrangles. They are the images of squares under MΓΆbius transformations. Every triangle can be extended to a harmonic quadrilateral by adding another vertex, in three ways. The notion of Brocard points of triangles can be generalized to these quadrilaterals.

A harmonic quadrilateral is a quadrilateral that can be inscribed in a circle (a cyclic quadrilateral) and in which the products of the lengths of opposite sides are equal (an Apollonius quadrilateral). Equivalently, it is a quadrilateral that can be obtained as a MΓΆbius transformation of the vertices of a square, as these transformations preserve both the inscribability of a square and the cross ratio of its vertices. [1] Four points in the complex plane define a harmonic quadrilateral when their complex cross ratio is $-1$; this is only possible for points inscribed in a circle, and in this case it equals the real cross ratio. [2] For any point $p$ in the plane, the four lines connecting $p$ to each vertex of the square cut the circumcircle of the square in the four points of a harmonic quadrilateral. [1]

Every triangle can be extended to a harmonic quadrilateral in three different ways, by adding a fourth vertex to the triangle at the point where one of the three symmedians of the triangle crosses its circumcircle. Each symmedian is the line through one vertex of the triangle and through the crossing point of the two tangent lines to the circumcircle at the other two vertices. [3]

The definition of the Brocard points of a triangle can be extended to harmonic quadrilaterals. A Brocard point of a polygon has the property that the line segments connecting the Brocard point to the polygon vertices all form equal angles with the adjacent polygon sides. Each triangle has two Brocard points, one that forms equal angles with the polygon sides adjacent in the clockwise direction from each vertex, and another for the counterclockwise direction. The same property is true for the harmonic quadrilaterals, uniquely among cyclic quadrilaterals. [4]
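The cross-ratio characterization is easy to test numerically: take the four vertices of a square on the unit circle, apply a MΓΆbius transformation, and confirm that the complex cross ratio (with opposite vertices paired) stays $-1$. The particular transformation below is an arbitrary choice.

```python
def cross_ratio(z1, z2, z3, z4):
    # (z1, z2; z3, z4) = ((z1-z3)(z2-z4)) / ((z1-z4)(z2-z3))
    return ((z1 - z3) * (z2 - z4)) / ((z1 - z4) * (z2 - z3))

a, b, c, d = 1, 1j, -1, -1j         # square vertices in cyclic order
print(cross_ratio(a, c, b, d))      # (-1+0j): opposite vertices paired

def moebius(z, p=1 + 2j, q=0.5j, r=0.3, s=1.0):  # arbitrary map, p*s != q*r
    return (p * z + q) / (r * z + s)

A, B, C, D = (moebius(z) for z in (a, b, c, d))
print(cross_ratio(A, C, B, D))      # still -1 (up to rounding): harmonic
```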
https://en.wikipedia.org/wiki/Harmonic_quadrilateral
In this article spherical functions are replaced by polynomials that have been well known in electrostatics since the time of Maxwell and associated with multipole moments. [1][2][3][4][5][6][7][8] In physics, dipole and quadrupole moments typically appear because fundamental concepts of physics are associated precisely with them. [9][10] The dipole and quadrupole moments are
$$d_i = \int x_i\,\rho(\mathbf{x})\,dV, \qquad Q_{ik} = \int\left(3x_ix_k - r^2\delta_{ik}\right)\rho(\mathbf{x})\,dV,$$
where $\rho(\mathbf{x})$ is a density of charges (or of another quantity). The octupole moment is used rather seldom. As a rule, high-rank moments are calculated with the help of spherical functions. Spherical functions are convenient in scattering problems; polynomials are preferable in calculations with differential operators. Here, those properties of tensors, including high-rank moments as well, are considered that basically repeat the features of solid spherical functions while having their own specifics. The use of invariant polynomial tensors in Cartesian coordinates, as shown in a number of recent studies, is preferable and simplifies the fundamental scheme of calculations. [11][12][13][14] Spherical coordinates are not involved here. The rules for using harmonic symmetric tensors, which follow directly from their properties, are demonstrated. These rules are naturally reflected in the theory of special functions, but are not always obvious, even though the group properties are general. [15] At any rate, let us recall the main property of harmonic tensors: the trace over any pair of indices vanishes. [9][16] The properties selected here not only make analytic calculations more compact and reduce 'the number of factorials', but also allow some fundamental questions of theoretical physics to be formulated correctly. [9][14]

Four properties of a symmetric tensor $\mathbf{M}_{i\ldots k}$ lead to its use in physics.

A. The tensor is a homogeneous polynomial:
$$\mathbf{M}_{i\ldots k}(\lambda\mathbf{r}) = \lambda^l\,\mathbf{M}_{i\ldots k}(\mathbf{r}),$$
where $l$ is the number of indices, i.e., the tensor rank;

B. The tensor is symmetric with respect to its indices;

C. The tensor is harmonic, i.e., it is a solution of the Laplace equation:
$$\Delta\,\mathbf{M}_{i\ldots k}(\mathbf{r}) = 0;$$

D. The trace over any two indices vanishes:
$$\mathbf{M}_{ii[\ldots]}(\mathbf{r}) = 0,$$
where the symbol $[\ldots]$ denotes the remaining $(l-2)$ indices after equating $i = i$.

The components of the tensor are solid spherical functions. The tensor can be divided by the factor $r^l$ to acquire components in the form of spherical functions.

The multipole potentials arise when the potential of a point charge is expanded in powers of the coordinates $x_{0i}$ of the radius vector $\mathbf{r}_0$ ('Maxwell poles'). [4][1] For the potential there is the well-known formula
$$\frac{1}{|\mathbf{r}-\mathbf{r}_0|} = \sum_{l=0}^{\infty}\frac{1}{l!}\,x_{0i}\cdots x_{0k}\,\frac{\mathbf{M}^{(l)}_{i\ldots k}(\mathbf{r})}{r^{2l+1}},$$
where $\mathbf{r}_0^{\otimes l}$ denotes the $l$th tensor power of the radius vector, and
$$\mathbf{M}^{(l)}_{i\ldots k}(\mathbf{r}) = (-1)^l\,r^{2l+1}\,\nabla_i\cdots\nabla_k\,\frac{1}{r}$$
is a symmetric harmonic tensor of rank $l$. The tensor is a homogeneous harmonic polynomial with the general properties described above. Contraction over any two indices (when the two gradients become the $\Delta$ operator) is null. If the tensor is divided by $r^{2l+1}$, then a multipole harmonic tensor arises, which is also a homogeneous harmonic function with homogeneity degree $-(l+1)$. From the formula for the potential it follows that
$$\mathbf{M}^{(l+1)}_{i\ldots kj}(\mathbf{r}) = -r^{2l+3}\,\nabla_j\,\frac{\mathbf{M}^{(l)}_{i\ldots k}(\mathbf{r})}{r^{2l+1}},$$
which allows a ladder operator to be constructed.

There is an obvious property of contraction that gives rise to a theorem which essentially simplifies the calculation of moments in theoretical physics. Let $\rho(\mathbf{x})$ be a distribution of charge. When calculating a multipole potential, power-law moments can be used instead of harmonic tensors (or instead of spherical functions): since $\mathbf{M}^{(l)}(\mathbf{r})$ is trace-free, contraction with it annihilates all trace terms, so that
$$\mathbf{M}^{(l)}_{i\ldots k}(\mathbf{r})\int \mathbf{M}^{(l)}_{i\ldots k}(\mathbf{x})\,\rho(\mathbf{x})\,dV = (2l-1)!!\;\mathbf{M}^{(l)}_{i\ldots k}(\mathbf{r})\int x_i\cdots x_k\,\rho(\mathbf{x})\,dV.$$
This is an advantage compared with the use of spherical functions.

Example 1. For the quadrupole moment, instead of the integral
$$Q_{ik} = \int\left(3x_ix_k - r^2\delta_{ik}\right)\rho(\mathbf{x})\,dV$$
one can use the 'short' integral
$$\int 3x_ix_k\,\rho(\mathbf{x})\,dV.$$
The moments are different, but the resulting potentials are equal to each other.

The formula for the tensor $\mathbf{M}^{(l)}$ was considered in [11][12] using a ladder operator. It can also be derived using the Laplace operator, [14] and a similar approach is known in the theory of special functions. [15] The first term in the formula, as is easy to see from the expansion of a point-charge potential, is equal to $(2l-1)!!\,\mathbf{r}^{\otimes l}$. The remaining terms can be obtained by repeatedly applying the Laplace operator and multiplying by an even power of the modulus $r$. The coefficients are easy to determine by substituting the expansion into the Laplace equation. As a result, the formula is the following:
$$\mathbf{M}^{(l)}_{[i]}(\mathbf{r}) = (2l-1)!!\,\mathbf{r}^{\otimes l} - \frac{(2l-3)!!}{1!\,2^1}\,r^2\Delta\mathbf{r}^{\otimes l} + \frac{(2l-5)!!}{2!\,2^2}\,r^4\Delta^2\mathbf{r}^{\otimes l} - \frac{(2l-7)!!}{3!\,2^3}\,r^6\Delta^3\mathbf{r}^{\otimes l} + \cdots$$
This form is useful for applying differential operators of quantum mechanics and electrostatics to it. Differentiation generates products of Kronecker symbols.

Example 2. The trace-free property can be verified by contraction with $i = k$.

It is convenient to write the differentiation formula in terms of the symmetrization operation. A symbol for it was proposed in [12], with the help of a sum taken over all independent permutations of indices. As a result, the following formula is obtained:
$$\mathbf{M}^{(l)}_{[i]}(\mathbf{r}) = (2l-1)!!\,\mathbf{r}^{\otimes l} - (2l-3)!!\,r^2\left\langle\left\langle\delta^{\otimes 1}_{[..]}\mathbf{r}^{\otimes(l-2)}\right\rangle\right\rangle + (2l-5)!!\,r^4\left\langle\left\langle\delta^{\otimes 2}_{[..]}\mathbf{r}^{\otimes(l-4)}\right\rangle\right\rangle - \cdots,$$
where the symbol $\otimes k$ is used for a tensor power of the Kronecker symbol $\delta_{im}$ and the conventional symbol $[..]$ is used for the two subscripts that are being exchanged under symmetrization.

Following [11], one can find the relation between the tensor and solid spherical functions. Two unit vectors are needed: the vector $\mathbf{n}_z$ directed along the $z$-axis and the complex vectors $\mathbf{n}_\pm = \mathbf{n}_x \pm i\mathbf{n}_y$. Contraction with their powers gives the required relation, in which $P_l(t)$ is a Legendre polynomial. In perturbation theory, it is necessary to expand the source in terms of spherical functions.
If the source is a polynomial, for example, when calculating the Stark effect , then the integrals are standard but cumbersome. When calculating with the help of invariant tensors, the expansion coefficients are simplified, and there is then no need for integrals. It suffices, as shown in [ 14 ] , to calculate contractions that lower the rank of the tensors under consideration. Instead of integrals, the operation of calculating the trace T ^ r {\displaystyle {\hat {T}}r} of a tensor over two indices is used. The following rank-reduction formula is useful: where the symbol [m] denotes the remaining ( l − 2 ) indices. If the brackets contain several factors with the Kronecker delta, the following relation holds: Calculating the trace reduces the number of Kronecker symbols by one, and the rank of the harmonic tensor on the right-hand side of the equation decreases by two. Repeating the calculation of the trace k times eliminates all the Kronecker symbols: The Laplace equation in four-dimensional (4D) space has its own specifics. The potential of a point charge in 4D space is equal to 1 r 2 {\displaystyle {\frac {1}{r^{2}}}} . [ 17 ] From the expansion of the point-charge potential 1 ( r − r 0 ) 2 {\displaystyle {\frac {1}{{(\mathbf {r} -\mathbf {r} _{0})}^{2}}}} with respect to powers r 0 ⊗ n {\displaystyle \mathbf {r} _{0}^{\otimes n}} , the multipole 4D potential arises: The harmonic tensor in the numerator has a structure similar to that of the 3D harmonic tensor. Its contraction with respect to any two indices must vanish. The dipole and quadrupole 4D tensors, as follows from here, are expressed as The leading term of the expansion, as can be seen, is equal to The method described for the 3D tensor gives the relations M [ i ] ( n ) ( r ) = ( 2 n ) ! ! r ⊗ n − ( 2 n − 2 ) ! ! 1 ! 2 1 r 2 Δ r ⊗ n + ( 2 n − 4 ) ! ! 2 ! 2 2 r 4 Δ 2 r ⊗ n − ( 2 n − 6 ) ! ! 3 ! 2 3 r 6 Δ 3 r ⊗ n + . . . {\displaystyle {\mathfrak {M}}_{[i]}^{(n)}(\mathbf {r} )=(2n)!!\mathbf {r} ^{\otimes n}-{\frac {(2n-2)!!}{1!2^{1}}}r^{2}\Delta \mathbf {r} ^{\otimes n}+{\frac {(2n-4)!!}{2!2^{2}}}r^{4}\Delta ^{2}\mathbf {r} ^{\otimes n}-{\frac {(2n-6)!!}{3!2^{3}}}r^{6}\Delta ^{3}\mathbf {r} ^{\otimes n}+...} , M [ i ] ( n ) ( r ) = ( 2 n ) ! ! r ⊗ n − ( 2 n − 2 ) ! ! r 2 ⟨ ⟨ δ [ . . ] ⊗ 1 r ⊗ ( n − 2 ) ⟩ ⟩ + ( 2 n − 4 ) ! ! r 4 ⟨ ⟨ δ [ . . ] ⊗ 2 r ⊗ ( n − 4 ) ⟩ ⟩ − . . . {\displaystyle {\mathfrak {M}}_{[i]}^{(n)}(\mathbf {r} )=(2n)!!\mathbf {r} ^{\otimes n}-(2n-2)!!r^{2}\left\langle \left\langle \delta _{[..]}^{\otimes 1}\mathbf {r} ^{\otimes (n-2)}\right\rangle \right\rangle +(2n-4)!!r^{4}\left\langle \left\langle \delta _{[..]}^{\otimes 2}\mathbf {r} ^{\otimes (n-4)}\right\rangle \right\rangle -...} . Four-dimensional tensors are structurally simpler than 3D tensors. Applying the contraction rules allows decomposing a tensor power with respect to the harmonic tensors. In perturbation theory, even the third approximation is often considered good. Here, the decomposition of the tensor power up to rank l = 6 is presented: To derive the formulas, it is useful to calculate the contraction with respect to two indices, i.e., the trace . The formula for l = 6 {\displaystyle l=6} then implies the formula for l = 4 {\displaystyle l=4} . When applying the trace, it is convenient to use the rules of the previous section. In particular, the last term of the relations for even values of l {\displaystyle l} has the form Also useful is the frequently occurring contraction over all indices, which arises when normalizing states.
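Two of the 4D statements are easy to verify symbolically: that 1/r² is harmonic away from the origin in four dimensions, and that the n = 2 term of the 4D series reduces to 8 y_i y_k − 2 r² δ_ik, which is harmonic and traceless. The sketch below is an added illustration; the n = 2 coefficients are read off from the series above under the stated double-factorial convention, which is an interpretation on my part rather than a formula quoted from the sources.

```python
# Two quick sympy checks of the 4D statements (illustration only).
import sympy as sp

ys = sp.symbols('y1:5', real=True)
r2 = sum(c**2 for c in ys)
lap4 = lambda f: sum(sp.diff(f, c, 2) for c in ys)

# The 4D point-charge potential 1/r^2 is harmonic away from the origin.
assert sp.simplify(lap4(1/r2)) == 0

# Assumed n = 2 case of the 4D series: 8 y_i y_k - 2 r^2 delta_ik,
# harmonic and traceless over the four index values.
M = lambda i, k: 8*ys[i]*ys[k] - 2*r2*(1 if i == k else 0)
assert all(sp.simplify(lap4(M(i, k))) == 0 for i in range(4) for k in range(4))
assert sp.simplify(sum(M(i, i) for i in range(4))) == 0
```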
The decomposition of tensor powers of a vector is also compact in four dimensions: When using the tensor notation with indices suppressed, the last equality becomes The decomposition of higher powers is no more difficult, using contractions over two indices. Ladder operators are useful for representing eigenfunctions in a compact form. [ 18 ] [ 19 ] They are a basis for constructing coherent states [ 20 ] . [ 21 ] The operators considered here are in many respects close to the 'creation' and 'annihilation' operators of an oscillator. Efimov's operator D ^ {\displaystyle \mathbf {\hat {D}} } , which increases the rank by one, was introduced in [ 11 ] . It can be obtained from the expansion of the point-charge potential: Straightforward differentiation on the left-hand side of the equation yields a vector operator acting on a harmonic tensor: D ^ = ( 2 l ^ − 1 ) r − r 2 ∇ , {\displaystyle \mathbf {\hat {D}} =(2{\hat {l}}-1)\mathbf {r} -r^{2}\mathbf {\nabla } ,} where the operator l ^ multiplies a homogeneous polynomial by its degree of homogeneity l {\displaystyle l} . In particular, As a result of an l {\displaystyle l} -fold application to unity, the harmonic tensor arises, written here in different forms: The relation of this tensor to the angular momentum operator L ^ {\displaystyle {\hat {\mathbf {L} }}} ( ℏ = 1 ) {\displaystyle (\hbar =1)} is as follows: Some useful properties of the operator in vector form are given below. The scalar product yields a vanishing trace over any two indices. The scalar product of the vectors D ^ {\displaystyle {\hat {\mathbf {D} }}} and x {\displaystyle \mathbf {x} } is and, hence, the contraction of the tensor with the vector x {\displaystyle \mathbf {x} } can be expressed as where l {\displaystyle l} is a number. The commutator in the scalar product on the sphere is equal to unity: To calculate the divergence of a tensor, a useful formula is whence ( l {\displaystyle l} on the right-hand side is a number). The raising operator in 4D space has largely similar properties. The main formula for it is where y i {\displaystyle y_{i}} is a 4D vector, i = 1 , 2 , 3 , 4 {\displaystyle i=1,2,3,4} , and the n ^ {\displaystyle {\hat {n}}} operator multiplies a homogeneous polynomial by its degree. Separating the τ {\displaystyle \tau } variable is convenient for physical problems: In particular, The scalar product of the ladder operator D ^ {\displaystyle {\hat {\mathfrak {D}}}} and y {\displaystyle y} is as simple as in 3D space: The scalar product of D ^ {\displaystyle {\hat {\mathfrak {D}}}} and ∇ {\displaystyle \mathbf {\nabla } } is The ladder operator is now associated with the angular momentum operator and an additional operator of rotations in 4D space, A ^ {\displaystyle {\hat {\mathbf {A} }}} . [ 18 ] Together they form the same Lie algebra as the angular momentum and Laplace–Runge–Lenz operators . The operator A ^ {\displaystyle {\hat {\mathbf {A} }}} has the simple form Separate formulas hold for the 3D r {\displaystyle \mathbf {r} } -component and the fourth coordinate τ {\displaystyle \tau } of the raising operator.
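The stated action of the raising operator can also be checked symbolically. The sketch below is an added illustration that reads the operator as D_i f = (2 deg(x_i f) − 1) x_i f − r² ∂_i f, with deg the degree of homogeneity; this reading of the operator ordering is an assumption. A two-fold application to unity should then reproduce the rank-2 harmonic tensor with leading coefficient (2l − 1)!! = 3.

```python
# Sketch of the Efimov-type raising operator D = (2*l - 1) r - r^2 grad,
# with l read off as the degree of homogeneity (assumed interpretation).
import sympy as sp

x, y, z = sp.symbols('x y z', real=True)
coords = (x, y, z)
r2 = x**2 + y**2 + z**2

def D(i, f):
    """Apply the i-th component of the raising operator to a homogeneous polynomial f."""
    g = sp.expand(coords[i]*f)
    deg = sp.Poly(g, x, y, z).total_degree()
    return sp.expand((2*deg - 1)*g - r2*sp.diff(f, coords[i]))

# Two applications to unity give M_ik = 3 x_i x_k - r^2 delta_ik.
for i in range(3):
    for k in range(3):
        expected = 3*coords[i]*coords[k] - r2*(1 if i == k else 0)
        assert sp.expand(D(k, D(i, 1)) - expected) == 0
```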
https://en.wikipedia.org/wiki/Harmonic_tensors
Harmonization is the process of minimizing redundant or conflicting standards which may have evolved independently. [ 1 ] [ 2 ] The name is an analogy to the process of harmonizing discordant music. Harmonization is different from standardization . Harmonization involves a reduction in the variation of standards, while standardization entails moving towards the eradication of any variation with the adoption of a single standard. [ 3 ] The goal of standard harmonization is to find commonalities, identify critical requirements that need to be retained, and provide a common framework for standards setting organizations (SSOs) to adopt. In some instances, businesses come together forming alliances or coalitions, [ 4 ] also referred to as multi-stakeholder initiatives (MSIs), with a belief that harmonization could reduce compliance costs and simplify the process of meeting requirements, with the potential to reduce complexity for those tasked with testing and auditing standards for compliance. A harmonised standard is a European standard developed by a recognised European Standards Organisation: European Committee for Standardization (CEN), European Committee for Electrotechnical Standardization (CENELEC), or European Telecommunications Standards Institute (ETSI). [ 5 ] It is created following a request from the European Commission to one of these organisations. Harmonised standards must be published in the Official Journal of the European Union (OJEU). In the information and communication technologies (ICT) sector, companies initially formed closed groups to develop private standards , for reasons which included competitive advantage. An example is the phrase " embrace, extend, and extinguish " used internally by Microsoft , which led to legal action by the United States Department of Justice . [ 6 ] In response, governments and intergovernmental organizations (IGOs) recommended the use of international standards, which resulted in standard harmonization. Examples include the Linux operating system, Adobe portable document format ( PDF ) and the OASIS open document format (ODF) being converted into ISO and IEC international standards. In 2022, EU legislation was passed requiring all mobile phones, tablets and cameras sold in the EU to have a USB-C charging port by 2024. [ 7 ] The USB Type-C Specification is an IEC international standard, IEC 62680-1-3. This was reaffirmed at the G7 Hiroshima Summit 2023, where leaders committed to cooperating on international standards setting and to collectively supporting the development of open, voluntary and consensus-based standards that will shape the next generation of technology. [ 8 ] Harmonization of regulatory standards is seen by economists as a key component in reducing trade costs and increasing interstate trade. [ 9 ] Where importing-market standards are harmonized with international standards, such as those from ISO or IEC, the negative effect on developing-country exporters is substantially lessened, or even reversed. [ 10 ] The US Government Office of Management and Budget published Circular A-119 [ 11 ] instructing its agencies to adopt voluntary consensus standards before relying upon private standards . The circular mandates standard harmonization by eliminating or reducing US agency use of private and government standards. The priority for governments to adopt voluntary consensus standards is supported by international standards bodies such as ISO, which support public policy initiatives. [ 12 ]
An example is regulators creating the International Medical Devices Regulatory Forum (IMDRF) [ 13 ] and promoting the Medical Devices Single Audit Program (MDSAP). This uses an international standard , ISO 13485 Medical devices – Quality management systems – Requirements for regulatory purposes. The World Bank Group explains that private standards cannot be used in technical regulation and have to be moved into the public standardization system before they can be used as the basis for technical regulations. [ 14 ] In comparison to the public sector , where governments, IGOs and regulators work towards a harmonised standard , there are instances where the private sector promotes harmonization of multiple standards. An example is the private organization ISEAL Alliance accepting multiple schemes using private standards as community members [ 15 ] who commit to its code of good practice. [ 16 ] Another example is the Global Food Safety Initiative , a private organization that promotes harmonization using a benchmarking process [ 17 ] that results in recognition [ 18 ] of multiple scheme owners using private standards . The harmonization approach for multiple private standards has led to criticism from various organizations, including the Institute for Multi-Stakeholder Initiative Integrity [ 19 ] and The International Food and Agribusiness Management Review. [ 20 ] For food safety, a single international standard , ISO 22000 , was proposed in 2007 [ 21 ] and 2020 [ 22 ] as a harmonized standard approach used by the public sector. On both occasions, the Global Food Safety Initiative rejected the proposal, because promoting ISO 22000 would mean reducing the power of global retailers in terms of control over standards. [ 23 ] Private corporations are not allowed to be members of, or have voting rights over, international standards , because these are consensus-based; by contrast, it is possible to have a controlling interest and exert influence by promoting private standards, because these are non-consensus. In the environmental sector, for "net zero", corporations continue to promote private standards over international standards . This allows the creation of new terms that are non-consensus and do not follow terms defined in international standards such as ISO 14050 Environmental management vocabulary. An example is the term "insetting", introduced by the private sector despite not being part of IWA 42 Net Zero Guidelines. [ 24 ] This approach is an obstacle to standard harmonization and has received criticism from the New Climate Institute (NCI), which found that companies are successfully lobbying the standards setting organizations (SSOs) who use private standards to rubber-stamp the inclusion of insetting claims within their net zero pledges. [ 25 ] Another example of corporate lobbying of a standards setter relates to the Science Based Targets initiative (SBTi). One of its funders, the Bezos Earth Fund, exerted influence on SBTi to relax its position on carbon offsets. This resulted in an open letter from SBTi staff to the Board of Trustees disagreeing with the decision. Standards setting organizations who do not follow a consensus model or the WTO principles for international standards development are vulnerable to corporate lobbying, especially when they are receiving funding from the private sector. [ 26 ]
In the sustainability sector, the ITC created a Standard Map [ 27 ] as an informational tool in an attempt to harmonize and group together voluntary sustainability standards (VSS) . With over 300 sustainability standards mapped, and the financial opportunities from fees associated with private standards , this may have created a perverse incentive . The unintended consequence is a proliferation of private standards, some of which could be primarily seeking monetary gain and may have sabotaged sustainability standards and certification . [ 28 ] To avoid harmonization failures like plugs and sockets, video cassettes and keyboard layouts, [ 29 ] the ambition is to achieve a single international standard, as outlined by the European Union, [ 30 ] supported by regional or regulatory addendums where necessary, rather than multiple harmonized private standards all competing against each other to achieve the same goal. International standards organizations state that standardization plays a crucial role in the realization of the UN SDGs in their strategies and activities for sustainability. [ 31 ] [ 32 ] Similar to reducing and preventing the proliferation of private standards in the information and communication technologies (ICT) sector, governments and IGOs recommend international standards in the food sector. This includes the World Health Organization , [ 33 ] the International Trade Centre , [ 34 ] UNIDO , [ 35 ] the World Trade Organization and the Food and Agriculture Organization . [ 36 ] With the public sector recommending standardization over private-sector attempts at harmonization, IGOs are encouraging corporation-led coalitions to surrender the control they have over private standards . By promoting international standards and standardization instead of harmonization, [ 37 ] the private sector can avoid fragmentation and accusations of undue influence and lobbying in the standards setting and multistakeholder governance process. [ 38 ] [ 39 ]
https://en.wikipedia.org/wiki/Harmonization_(standards)
The Harmonized Commodity Description and Coding System , also known as the Harmonized System ( HS ) of tariff nomenclature, is an internationally standardized system of names and numbers to classify traded products. It came into effect in 1988 and has since been developed and maintained by the World Customs Organization (WCO) (formerly the Customs Co-operation Council), an independent intergovernmental organization based in Brussels , Belgium . It is used by over 200 WCO member countries and economies as a basis for their Customs tariffs and for the collection of international trade statistics as well as many other purposes. [ 1 ] The HS is organized logically by economic activity or component material. For example, animals and animal products are found in one section of the HS, while machinery and mechanical appliances are found in another. The HS is organized into 21 Sections, which are subdivided into 96 Chapters (Chapters 1 to 97, with Chapter 77 reserved for potential future use by the HS). The 96 HS Chapters are further subdivided into 1,228 headings and 5,612 subheadings in the current 2022 edition of the HS. Section and Chapter titles describe broad categories of goods, while headings and subheadings describe products in more detail. Generally, HS Sections and Chapters are arranged in order of a product's degree of manufacture or in terms of its technological complexity. Natural commodities, such as live animals and vegetables, for example, are described in the early Sections of the HS, whereas more evolved goods such as machinery and precision instruments are described in later Sections. Chapters within the individual Sections are also usually organized in order of complexity or degree of manufacture. For example, within Section X ( Pulp of wood or of other fibrous cellulosic material; Recovered (waste and scrap) paper or paperboard; Paper and paperboard and articles thereof ), Chapter 47 provides for pulp of wood or of other fibrous cellulosic materials , whereas Chapter 49 covers printed books, newspapers, and other printed matter . Finally, the headings within individual Chapters follow a similar order. For example, the first heading in Chapter 50 ( Silk ) provides for silk worm cocoons while articles made of silk are covered by the Chapter's later headings. The HS code consists of six digits. The first two digits designate the Chapter in which the headings and subheadings appear. The second two digits designate the position of the heading in the Chapter. The last two digits designate the position of the subheading in the heading. HS code 1006.30, for example, indicates Chapter 10 ( Cereals ), heading 10.06 ( Rice ), and subheading 1006.30 ( Semi-milled or wholly milled rice, whether or not polished or glazed ). In addition to the HS codes and commodity descriptions, each Section and Chapter of the HS is prefaced by Legal Notes, which are designed to clarify the proper classification of goods. To ensure harmonization, the Contracting Parties to the Convention on the Harmonized Commodity Description and Coding System have agreed to base their national tariff schedules on the HS Nomenclature and Legal Notes. Parties are permitted to subdivide the HS Nomenclature beyond 6 digits and add their own Legal Notes according to their own tariff and statistical requirements. Parties often set their customs duties at the 8-digit "tariff code" level. Statistical suffixes are often added to the 8-digit tariff code for a total of 10 digits.
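The layered digit structure is easy to make concrete in code. The helper below is an added illustration (not an official WCO tool): it splits a code into the chapter, heading and subheading layers described above and treats any digits beyond the sixth as national subdivisions.

```python
# Illustrative parser for the layered structure of an HS code.
def parse_hs(code: str) -> dict:
    digits = code.replace(".", "")
    if len(digits) < 6 or not digits.isdigit():
        raise ValueError("an international HS code has 6 digits, e.g. '1006.30'")
    return {
        "chapter": digits[:2],           # e.g. 10 = Cereals
        "heading": digits[:4],           # e.g. 10.06 = Rice
        "subheading": digits[:6],        # e.g. 1006.30 = semi- or wholly milled rice
        "national": digits[6:] or None,  # extra digits are national subdivisions
    }

print(parse_hs("1006.30"))
# {'chapter': '10', 'heading': '1006', 'subheading': '100630', 'national': None}
```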
If the number of digits is more than six, the additional digits are called the national subdivision. Chapter 77 is reserved for future use by the HS. Chapters 98 and 99 are reserved for domestic use by the Contracting Parties to the HS Convention. Since its creation, the HS has undergone several revisions to reflect changes in trade. These revisions eliminate some headings and subheadings describing commodities with a low volume of trade and create new headings and subheadings that address new needs, for example, to reflect technological advancements or monitor goods posing environmental concerns. The current edition of the HS became effective on 1 January 2022. The process of assigning HS codes is known as "HS Classification". All products can be classified in the HS by using the General Rules for the Interpretation of the Harmonized System ("GRI"), which must be applied in strict order. HS codes can be determined by a variety of factors including a product's composition, its form and its function. An example of a product classified according to its state would be whole potatoes , whose classification changes depending on whether the potatoes are fresh or frozen . Fresh potatoes are classified under heading 07.01 ( Potatoes, fresh or chilled ), more specifically under subheading 0701.90 ( Other ), while frozen potatoes are classified under heading 07.10 ( Vegetables (uncooked or cooked by steaming or boiling in water), frozen ), more specifically under subheading 0710.10 ( Potatoes ). An example of a product classified according to its material composition is a picture frame . Picture frames made of tropical wood are classified under heading 44.14 ( Wooden frames for paintings, photographs, mirrors or similar objects ), more specifically under subheading 4414.10 ( Of tropical wood ). Picture frames made of plastic are classified under heading 39.24 ( Tableware, kitchenware, other household articles and hygienic or toilet articles, of plastics ), more specifically under subheading 3924.90 ( Other ). Picture frames made of glass are classified under heading 7020.00 ( Other articles of glass ); the ".00" at the end indicates the heading is not further subdivided. An example of a product classified according to its form is personal hygiene soap . When in the form of a bar, cake or moulded shape, such soap is classified under heading 34.01 ( Soap , among others), then under 1-dash subheading 3401.1 ( Soap and organic surface-active products and preparations, in the form of bars, cakes, moulded pieces or shapes, and paper, wadding, felt and nonwovens, impregnated, coated or covered with soap or detergent ), and under 2-dash subheading 3401.11 ( For toilet use (including medicated products) ). Conversely, liquid personal hygiene soap, depending on what is in the liquid, is classified under either subheading 3401.20 ( Soap in other forms ) or subheading 3401.30 ( Organic surface-active products and preparations for washing the skin, in the form of liquid or cream and put up for retail sale, whether or not containing soap ). An example of a product classified according to its function is a carbon monoxide (CO) detector .
If the CO detector captures and displays gas measurements, then it is properly classified under subheading 9027.10 ( Gas or smoke analysis apparatus ), under heading 90.27 ( Instruments and apparatus for physical or chemical analysis (for example, polarimeters, refractometers, spectrometers, gas or smoke analysis apparatus); instruments and apparatus for measuring or checking viscosity, porosity, expansion, surface tension or the like; instruments and apparatus for measuring or checking quantities of heat, sound or light (including exposure meters); microtomes ). If the CO detector does not capture and display gas measurements, then it is properly classified under subheading 8531.10 ( Burglar or fire alarms and similar apparatus ), under heading 85.31 ( Electric sound or visual signaling apparatus (for example, bells, sirens, indicator panels, burglar or fire alarms), other than those of heading 85.12 or 85.30 ). Although every product and every part of every product is classifiable in the HS, very few are explicitly described in the HS Nomenclature. Any product for which there is no explicit description can be classified under a "residual" or "basket" heading or subheading, which provides for Other goods. Residual codes normally occur last in numerical order under their related headings and subheadings. An example of a product classified under a residual heading is a live dog , which must be classified under heading 01.06, which provides for Other live animals , because dogs are not covered by headings 01.01 through 01.05, which explicitly provide for live equine , live bovine , live swine , live sheep and goats , and live poultry , respectively. As of 2022, there were more than 200 countries or economies applying the Harmonized System worldwide. [ 2 ] HS codes are used by Customs authorities, statistical agencies, and other government regulatory bodies to monitor and control the import and export of commodities. Companies use HS codes to calculate the total landed cost of imported products and parts, and to identify selling and sourcing opportunities abroad. HS classification is not always straightforward. Many automotive parts, for example, are not classified under heading 87.08, which provides for Parts and accessories of the motor vehicles of headings 87.01 to 87.05 . For example, automotive seats are classified as articles of furniture under heading 94.01, which provides for Seats (other than those of heading 94.02), whether or not convertible into beds, and parts thereof , and more specifically under subheading 9401.20, which provides for Seats of a kind used for motor vehicles . In many jurisdictions, traders alone bear the legal responsibility to accurately classify their goods. However, due to a lack of familiarity with the rules of HS classification, traders may inadvertently determine erroneous HS codes for their commodities. Depending on the severity of the infraction, incorrect classification can result in the imposition of non-compliance penalties, border delays or seizures, or denial of import privileges. There are multiple resources available to traders to assist in properly classifying their goods, including global and national or regional references. Traders may sometimes resort to using HS code determination guides and other references to classify their traded commodities. These could include local databases published by authorities in other countries. However, such databases are not valid globally.
Many Customs authorities around the world allow traders to apply for an advance HS classification ruling. Such rulings are legally binding in the countries where they are issued and give certainty to the trader. Provided the information supplied in the request was truthful and valid, they may also provide legal protection to the trader if questions about the classification of the goods arise after the ruling.
https://en.wikipedia.org/wiki/Harmonized_System
Harmony Compiler was written by Peter Samson at the Massachusetts Institute of Technology (MIT). The compiler was designed to encode music for the PDP-1 and built on an earlier program Samson wrote for the TX-0 computer. Jack Dennis had noticed, and mentioned to Samson, that switching the TX-0 's speaker on and off could be enough to play music. [ 1 ] They succeeded in building a WYSIWYG program for one voice by 1960. [ 2 ] For the PDP-1, which arrived at MIT in September 1961, Samson designed the Harmony Compiler, which synthesizes four voices from input in a text-based notation. Although it created music in many genres, it was optimized for baroque music. PDP-1 music is merged from four channels and played back in stereo. Notes are on pitch and each has an undertone. The music does not stop for errors: mistakes are greeted with a message from the typewriter's red ribbon, "To err is human, to forgive divine." [ 3 ] Samson joined the PDP-1 restoration project [ 4 ] at the Computer History Museum in 2004 to recreate the music player.
https://en.wikipedia.org/wiki/Harmony_Compiler
In mathematics, Harnack's inequality is an inequality relating the values of a positive harmonic function at two points, introduced by A. Harnack ( 1887 ). Harnack's inequality is used to prove Harnack's theorem about the convergence of sequences of harmonic functions. J. Serrin ( 1955 ) and J. Moser ( 1961 , 1964 ) generalized Harnack's inequality to solutions of elliptic or parabolic partial differential equations . Such results can be used to show the interior regularity of weak solutions . Perelman 's solution of the Poincaré conjecture uses a version of the Harnack inequality, found by R. Hamilton ( 1993 ), for the Ricci flow . Harnack's inequality applies to a non-negative function f defined on a closed ball in R n with radius R and centre x 0 . It states that, if f is continuous on the closed ball and harmonic on its interior, then for every point x with | x − x 0 | = r < R , In the plane R 2 ( n = 2) the inequality can be written: For general domains Ω {\displaystyle \Omega } in R n {\displaystyle \mathbf {R} ^{n}} the inequality can be stated as follows: If ω {\displaystyle \omega } is a bounded domain with ω ¯ ⊂ Ω {\displaystyle {\bar {\omega }}\subset \Omega } , then there is a constant C {\displaystyle C} such that for every twice differentiable, harmonic and nonnegative function u ( x ) {\displaystyle u(x)} . The constant C {\displaystyle C} is independent of u {\displaystyle u} ; it depends only on the domains Ω {\displaystyle \Omega } and ω {\displaystyle \omega } . By Poisson's formula where ω n − 1 is the area of the unit sphere in R n and r = | x − x 0 |. Since the kernel in the integrand satisfies Harnack's inequality follows by substituting this inequality in the above integral and using the fact that the average of a harmonic function over a sphere equals its value at the center of the sphere: For elliptic partial differential equations , Harnack's inequality states that the supremum of a positive solution in some connected open region is bounded by some constant times the infimum, possibly with an added term containing a functional norm of the data: The constant depends on the ellipticity of the equation and the connected open region. There is a version of Harnack's inequality for linear parabolic PDEs such as the heat equation . Let M {\displaystyle {\mathcal {M}}} be a smooth (bounded) domain in R n {\displaystyle \mathbb {R} ^{n}} and consider the linear elliptic operator with smooth and bounded coefficients and a positive definite matrix ( a i j ) {\displaystyle (a_{ij})} . Suppose that u ( t , x ) ∈ C 2 ( ( 0 , T ) × M ) {\displaystyle u(t,x)\in C^{2}((0,T)\times {\mathcal {M}})} is a solution of such that Let K {\displaystyle K} be compactly contained in M {\displaystyle {\mathcal {M}}} and choose τ ∈ ( 0 , T ) {\displaystyle \tau \in (0,T)} . Then there exists a constant C > 0 (depending only on K , τ {\displaystyle \tau } , t − τ {\displaystyle t-\tau } , and the coefficients of L {\displaystyle {\mathcal {L}}} ) such that the corresponding Harnack estimate holds for each t ∈ ( τ , T ) {\displaystyle t\in (\tau ,T)} .
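The displayed inequalities were lost in this extract. For the ball version stated above, the standard form of the inequality, supplied here from the well-known statement rather than recovered from this text, reads:

```latex
f(x_0)\,\frac{1 - r/R}{\bigl(1 + r/R\bigr)^{\,n-1}}
  \;\le\; f(x) \;\le\;
f(x_0)\,\frac{1 + r/R}{\bigl(1 - r/R\bigr)^{\,n-1}},
\qquad r = |x - x_0| < R .
```

In the plane (n = 2) this reduces to (R − r)/(R + r) f(x_0) ≤ f(x) ≤ (R + r)/(R − r) f(x_0).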
https://en.wikipedia.org/wiki/Harnack's_inequality
In the mathematical field of partial differential equations , Harnack's principle or Harnack's theorem is a corollary of Harnack's inequality which deals with the convergence of sequences of harmonic functions . Given a sequence of harmonic functions u 1 , u 2 , ... on an open connected subset G of the Euclidean space R n , which are pointwise monotonically nondecreasing in the sense that for every point x of G , the limit automatically exists in the extended real number line for every x . Harnack's theorem says that the limit either is infinite at every point of G or it is finite at every point of G . In the latter case, the convergence is uniform on compact sets and the limit is a harmonic function on G . [ 1 ] The theorem is a corollary of Harnack's inequality. If u n ( y ) is a Cauchy sequence for any particular value of y , then the Harnack inequality applied to the harmonic function u m − u n implies, for an arbitrary compact set D containing y , that sup D | u m − u n | is arbitrarily small for sufficiently large m and n . This is exactly the definition of uniform convergence on compact sets. In words, the Harnack inequality is a tool which directly propagates the Cauchy property of a sequence of harmonic functions at a single point to the Cauchy property at all points. Having established uniform convergence on compact sets, the harmonicity of the limit is an immediate corollary of the fact that the mean value property (automatically preserved by uniform convergence) fully characterizes harmonic functions among continuous functions. [ 2 ] The proof of uniform convergence on compact sets holds equally well for any linear second-order elliptic partial differential equation , provided that it is linear so that u m − u n solves the same equation. The only difference is that the more general Harnack inequality holding for solutions of second-order elliptic PDE must be used, rather than that only for harmonic functions. Having established uniform convergence on compact sets, the mean value property is not available in this more general setting, and so the proof of convergence to a new solution must instead make use of other tools, such as the Schauder estimates .
https://en.wikipedia.org/wiki/Harnack's_principle
The Harnack Medal is the highest award presented by the Max Planck Society for services to society; it was first awarded in 1925. The Harnack Medal is named after the theologian Adolf von Harnack , who was the first president of the Kaiser Wilhelm Society, the predecessor organization of the MPG, from 1911 to 1930. [ 1 ] The medal has been awarded only 33 times since 1924, including 10 times by the Kaiser Wilhelm Society (1924–1936) and 23 times by the Max Planck Society (1953–2017). [ 1 ] Past recipients of the Harnack Medal are: [ 2 ]
https://en.wikipedia.org/wiki/Harnack_Medal
The Harold C. Urey Prize is awarded annually by the Division for Planetary Sciences of the American Astronomical Society . The prize recognizes and encourages outstanding achievements in planetary science by a young scientist. The prize is named after Harold C. Urey .
https://en.wikipedia.org/wiki/Harold_C._Urey_Prize
Harold Scott MacDonald " Donald " Coxeter CC FRS FRSC (9 February 1907 – 31 March 2003) [ 2 ] was a British-Canadian geometer and mathematician. He is regarded as one of the greatest geometers of the 20th century. [ 3 ] Coxeter was born in England and educated at the University of Cambridge , with student visits to Princeton University . He worked for 60 years at the University of Toronto in Canada, from 1936 until his retirement in 1996, becoming a full professor there in 1948. His many honours included membership in the Royal Society of Canada , the Royal Society , and the Order of Canada . He was an author of 12 books, including The Fifty-Nine Icosahedra (1938) and Regular Polytopes (1947). Many concepts in geometry and group theory are named after him, including the Coxeter graph , Coxeter groups , Coxeter's loxodromic sequence of tangent circles , Coxeter–Dynkin diagrams , and the Todd–Coxeter algorithm . Coxeter was born in Kensington , England, to Harold Samuel Coxeter and Lucy ( née Gee ). His father had taken over the family business of Coxeter & Son, manufacturers of surgical instruments and compressed gases (including a mechanism for anaesthetising surgical patients with nitrous oxide ), but was able to retire early and focus on sculpting and baritone singing; Lucy Coxeter was a portrait and landscape painter who had attended the Royal Academy of Arts . A maternal cousin was the architect Sir Giles Gilbert Scott . [ 4 ] [ 2 ] In his youth, Coxeter composed music and was an accomplished pianist at the age of 10. [ 5 ] He felt that mathematics and music were intimately related, outlining his ideas in a 1962 article on "Music and Mathematics" in the Canadian Music Journal . [ 5 ] He was educated at King Alfred School, London , and St George's School, Harpenden , where his best friend was John Flinders Petrie, later a mathematician for whom Petrie polygons were named. He was accepted at King's College, Cambridge , in 1925, but decided to spend a year studying in hopes of gaining admittance to Trinity College , where the standard of mathematics was higher. [ 2 ] Coxeter won an entrance scholarship and went to Trinity in 1926 to read mathematics. There he earned his BA (as Senior Wrangler ) in 1928, and his doctorate in 1931. [ 5 ] [ 6 ] In 1932 he went to Princeton University for a year as a Rockefeller Fellow , where he worked with Hermann Weyl , Oswald Veblen , and Solomon Lefschetz . [ 6 ] Returning to Trinity for a year, he attended Ludwig Wittgenstein 's seminars on the philosophy of mathematics . [ 5 ] In 1934 he spent a further year at Princeton as a Procter Fellow. [ 6 ] In 1936 Coxeter moved to the University of Toronto. In 1938 he and P. Du Val , H. T. Flather, and John Flinders Petrie published The Fifty-Nine Icosahedra with University of Toronto Press . In 1940 Coxeter edited the eleventh edition of Mathematical Recreations and Essays , [ 7 ] originally published by W. W. Rouse Ball in 1892. He was elevated to professor in 1948. He was elected a Fellow of the Royal Society of Canada in 1948 and a Fellow of the Royal Society in 1950. He met M. C. Escher in 1954 and the two became lifelong friends; his work on geometric figures helped inspire some of Escher's works, particularly the Circle Limit series based on hyperbolic tessellations . He also inspired some of the innovations of Buckminster Fuller . [ 6 ] Coxeter, M. S. Longuet-Higgins and J. C. P. Miller were the first to publish the full list of uniform polyhedra (1954). [ 8 ]
He worked for 60 years at the University of Toronto and published twelve books. Coxeter was a vegetarian . He attributed his longevity to his vegetarian diet, daily exercise such as fifty press-ups and standing on his head for fifteen minutes each morning, and consuming a nightly cocktail made from Kahlúa (a coffee liqueur), peach schnapps , and soy milk . [ 4 ] Since 1978, the Canadian Mathematical Society has awarded the Coxeter–James Prize in his honor. He was made a Fellow of the Royal Society in 1950 and in 1997 he was awarded their Sylvester Medal . [ 6 ] In 1990, he became a Foreign Member of the American Academy of Arts and Sciences [ 9 ] and in 1997 was made a Companion of the Order of Canada . [ 10 ] In 1973 he received the Jeffery–Williams Prize . [ 6 ] A festschrift in his honour, The Geometric Vein , was published in 1982. It contained 41 essays on geometry, based on a symposium for Coxeter held at Toronto in 1979. [ 11 ] A second such volume, The Coxeter Legacy , was published in 2006, based on a Toronto Coxeter symposium held in 2004. [ 12 ]
https://en.wikipedia.org/wiki/Harold_Scott_MacDonald_Coxeter
A harp trap is a device used to capture bats without exposing them to the disentanglement required by traps like mist nets and hand nets . It capitalizes on a flight characteristic of bats: to pass between obstacles, in this case the trap's strings, they turn perpendicular to the ground, a flight attitude in which they cannot maintain their angle of flight, so they drop unharmed into a collection chamber. [ 1 ] Invented in 1958 by US Public Health Service veterinarian Denny Constantine, [ 2 ] the harp trap has been modified for different applications and efficiencies by users, including Merlin Tuttle 's double harp trap in 1974, [ 3 ] Charles Francis' 4-frame harp trap in 1989, [ 4 ] and other modifications improving collapsibility and portability. [ 5 ] The harp trap is a significant tool for measuring aspects of bat ecology, [ 1 ] [ 6 ] [ 7 ] most notably for obtaining information about bat populations and movement for public health [ 8 ] [ 9 ] and conservation management [ 4 ] [ 10 ] purposes. Even though visually apparent when set out in the open, harp traps are effective if placed where natural features funnel bats toward the trap. [ 7 ] [ 11 ] They can be set across flyways in heavily wooded areas, over small bodies of water, and at roost entrances, [ 11 ] and can be left unattended for periods of time, allowing multiple sites to be worked simultaneously. [ 7 ] [ 11 ] They can be more efficient for surveying bats than mist nets, capturing higher numbers of species and individuals. [ 5 ]
https://en.wikipedia.org/wiki/Harp_trap
A harpoon reaction is a type of chemical reaction , first proposed by Michael Polanyi in 1920. [ 1 ] [ 2 ] Its mechanism (also called the harpooning mechanism ) involves two neutral reactants undergoing an electron transfer over a relatively long distance to form ions that then attract each other closer together. [ 3 ] For example, a metal atom and a halogen might react to form a cation and an anion , respectively, leading to a combined metal halide . The main feature of these redox reactions is that, unlike most reactions, they have steric factors greater than unity; that is, they take place faster than predicted by collision theory . This is explained by the fact that the colliding particles have greater cross sections than the purely geometrical ones calculated from their radii: when the particles are close enough, an electron "jumps" (hence the name) from one of the particles to the other, forming an anion and a cation which subsequently attract each other. Harpoon reactions usually take place in the gas phase, but they are also possible in condensed media. [ 4 ] [ 5 ] The predicted rate constant can be improved by using a better estimation of the steric factor. A rough approximation is that the largest separation R x at which charge transfer can take place on energetic grounds can be estimated from the solution of the following equation, which determines the largest distance at which the Coulombic attraction between the two oppositely charged ions is sufficient to provide the energy Δ E 0 {\displaystyle \Delta E_{0}} . Here Δ E 0 = E i − E e a {\displaystyle \Delta E_{0}=E_{i}-E_{ea}} , where E i {\displaystyle E_{i}} is the ionization potential of the metal and E e a {\displaystyle E_{ea}} is the electron affinity of the halogen.
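The missing equation balances the Coulomb attraction of the ion pair at separation R_x against ΔE₀, giving R_x = e²/(4πε₀ ΔE₀). The snippet below is an added numerical illustration using textbook values for Na + Cl rather than figures from this article.

```python
# Hedged sketch: solve e^2 / (4*pi*eps0*R_x) = E_i - E_ea for R_x.
# The input values are standard textbook figures (assumed, not from this text).
E_i = 5.14            # ionization potential of Na, in eV
E_ea = 3.61           # electron affinity of Cl, in eV
COULOMB_EV_NM = 1.44  # e^2 / (4*pi*eps0), expressed in eV*nm

R_x = COULOMB_EV_NM / (E_i - E_ea)
print(f"harpoon radius ~ {R_x:.2f} nm")  # ~0.94 nm, well beyond a hard-sphere radius
```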
https://en.wikipedia.org/wiki/Harpoon_reaction
The Harrington paradox is a notion in environmental and ecological economics describing firms' compliance with environmental regulations . The paradox was first described in Winston Harrington's 1988 paper and was based on research into the monitoring and enforcement of, and compliance with, environmental regulations in the US from the end of the 1970s to the beginning of the 1980s. According to the paradox, firms in general comply with environmental regulations in spite of the fact that: Compliance at such a level is contrary to the rational crime theory of Gary Becker , [ 1 ] which describes the behavior of profit-maximizing entities: a rational firm will comply with a standard only if the expected fine is higher than the cost of compliance. Several suggestions have been put forward to explain the paradox. Empirical data documenting the paradox are rare. In research conducted by the Norwegian Climate and Pollution Agency [ 3 ] in 2001, no serious violations were revealed, but in the majority of firms (80%) there were minor deviations from standards. Because monitoring in Norway is infrequent and the fine system for minor violations is light, this cannot provide strong evidence for the paradox, as major violations carry very strict punishments, which conforms to the rational crime theory.
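The Becker benchmark that the paradox violates can be stated in one line, as sketched below with purely illustrative numbers.

```python
# Minimal sketch of the rational-crime compliance condition (illustrative only):
# a profit-maximizing firm complies when the expected penalty meets or exceeds
# the cost of compliance.
def rational_firm_complies(p_detect: float, fine: float, compliance_cost: float) -> bool:
    return p_detect * fine >= compliance_cost

# Low monitoring frequency and modest fines predict non-compliance...
print(rational_firm_complies(p_detect=0.05, fine=10_000, compliance_cost=2_000))  # False
# ...yet Harrington observed that most firms comply anyway.
```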
https://en.wikipedia.org/wiki/Harrington_paradox
Harrison's rule is an observation in evolutionary biology by Launcelot Harrison which states that, in comparisons across closely related species, host and parasite body sizes tend to covary positively. Launcelot Harrison , an Australian authority in zoology and parasitology , published a study in 1915 concluding that host and parasite body sizes tend to covary positively, [ 1 ] a covariation later dubbed 'Harrison's rule'. Harrison himself originally proposed it to interpret the variability of congeneric louse species. However, subsequent authors verified it for a wide variety of parasitic organisms including nematodes , [ 2 ] [ 3 ] [ 4 ] [ 5 ] rhizocephalan barnacles , [ 6 ] fleas , lice , ticks , parasitic flies and mites , as well as herbivorous insects associated with specific host plants. [ 3 ] [ 7 ] [ 8 ] Robert Poulin observed that in comparisons across species, the variability of parasite body size also increases with host body size. [ 9 ] Greater variation is expected to accompany greater mean body sizes simply due to an allometric power-law scaling effect. [ 10 ] However, Poulin referred to parasite body-size variability increasing for biological reasons, so an increase greater than that caused by a scaling effect is expected. Recently, Harnos et al. applied phylogenetically controlled statistical methods to test Harrison's rule and Poulin's Increasing Variance Hypothesis in avian lice. [ 11 ] Their results indicate that the three major families of avian lice ( Ricinidae , Menoponidae , Philopteridae ) follow Harrison's rule, and two of them (Menoponidae, Philopteridae) also follow Poulin's supplement to it. The allometry between host and parasite body sizes constitutes an evident aspect of host–parasite coevolution . The slope of this relationship is a taxon-specific character. Parasites' body size is known to covary positively with fecundity [ 12 ] and thus likely affects the virulence of parasitic infections as well.
https://en.wikipedia.org/wiki/Harrison's_rule
The Harris–Benedict equation (also called the Harris–Benedict principle ) is a method used to estimate an individual's basal metabolic rate (BMR). The estimated BMR value may be multiplied by a number that corresponds to the individual's activity level; the resulting number is the approximate daily kilocalorie intake to maintain current body weight . The Harris–Benedict equation may be used to assist weight loss, by reducing the kilocalorie intake number below the estimated maintenance intake of the equation. [ citation needed ] The original Harris–Benedict equations were published in 1918 and 1919. [ 1 ] [ 2 ] The Harris–Benedict equations were revised by Roza and Shizgal in 1984. [ 3 ] The 95% confidence range for men is ±213.0 kcal/day, and ±201.0 kcal/day for women. The Harris–Benedict equations were revised again by Mifflin and St Jeor in 1990. [ 4 ] The Harris–Benedict equation sprang from a study by James Arthur Harris and Francis Gano Benedict , which was published in 1919 by the Carnegie Institution of Washington in the monograph A Biometric Study Of Basal Metabolism In Man . A 1984 revision improved its accuracy. Mifflin et al. published an equation more predictive for modern lifestyles in 1990. [ 4 ] Later work produced BMR estimators that accounted for lean body mass. As the BMR equations do not attempt to take into account body composition, identical results can be calculated for a very muscular person and an overweight person who are both the same height, weight, age and gender. As muscle and fat require differing amounts of calories to maintain, the total energy expenditure (TEE) estimates will not be accurate for such cases. The paper behind the latest update (Mifflin et al.) to the BMR formula states that all participants in their study fell within the 'normal' and 'overweight' body mass index (BMI) categories, and so the results do not necessarily apply to those in the 'underweight' or 'obese' BMI categories.
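As an illustration, the Mifflin–St Jeor revision can be computed directly. The coefficients below are the published 1990 values; the activity multipliers are common rule-of-thumb figures and are an assumption here, not part of the equation itself.

```python
# Sketch of the Mifflin-St Jeor BMR estimate (1990 coefficients).
def bmr_mifflin_st_jeor(weight_kg: float, height_cm: float, age_y: float, male: bool) -> float:
    base = 10.0*weight_kg + 6.25*height_cm - 5.0*age_y
    return base + (5.0 if male else -161.0)

# Assumed activity multipliers (rule of thumb, not from the papers cited above).
ACTIVITY = {"sedentary": 1.2, "light": 1.375, "moderate": 1.55, "heavy": 1.725}

bmr = bmr_mifflin_st_jeor(weight_kg=70, height_cm=175, age_y=30, male=True)
print(round(bmr))                       # ~1649 kcal/day at rest
print(round(bmr * ACTIVITY["light"]))   # rough daily maintenance intake
```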
https://en.wikipedia.org/wiki/Harris–Benedict_equation
Harrogate Spring Water is a private limited company incorporated on 16 August 2000 which manufactures plastic- and glass-bottled spring water in Harrogate , North Yorkshire , England and distributes its bottles all over the world. Spa waters were first discovered in Harrogate in the 16th century, with water bottled in glass in the town from the 1740s. [ 2 ] The main spring is sourced from an aquifer in the millstone grit series, below the Harrogate Pinewoods. [ 3 ] [ 4 ] The Thirsty Planet brand takes water from an aquifer located in sand and gravel. [ 5 ] Founded in August 2000, initially under the name HSW Limited, the product was launched in January 2002. [ 6 ] Harrogate Spring Water manufactures bottled water and sells it locally, nationally and internationally, exporting as far away as Australia. A change in majority ownership was made during 2020, resulting in Danone becoming the majority holder, [ 7 ] [ 8 ] displacing the Cain family from their ownership. [ 9 ] The company was previously owned by Harrogate Water Brands, [ 10 ] which also owned the charity Thirsty Planet, producing its own brand of bottled water. [ 10 ] In 2021, a plan to expand the bottling plant over an area of established woodland and natural habitat, planted previously by volunteers and local primary school children, was criticised by Harrogate residents. [ 11 ] [ 12 ] In the United Kingdom, Harrogate Spring Water sold over 43,000,000 litres (9,500,000 imp gal; 11,000,000 US gal) annually in 2013, a market share of 1.4%. [ 13 ] In 2019, the company achieved sales of £21.6 million. [ 7 ] Airlines including British Airways , Virgin Atlantic , Jet2.com , TUI Airways , and Easyjet provide or sell Harrogate Water on their flights, [ 14 ] and, in the case of British Airways, in their premium lounges at London Heathrow . [ citation needed ] They also supply Cunard for their ships. [ citation needed ] Harrogate Spring Water also supplies water to the Masons Gin distillery in Aiskew , North Yorkshire. [ 15 ] [ 16 ]
https://en.wikipedia.org/wiki/Harrogate_Spring_Water
Emeritus Professor Harry Frederick Recher RZS (NSW) AM (born 27 March 1938, New York City) is an Australian ecologist, ornithologist and advocate for conservation. Recher grew up in the United States of America . He studied at the State University of New York College of Forestry and received his B.S. in 1959 from Syracuse University . At Stanford University , ecologist Paul Ehrlich supervised his PhD on migratory shorebirds, which was awarded in 1964. Ehrlich became a lifelong friend and mentor to Recher, sharing his commitment to a strong sense of the social responsibility of science. [ 1 ] Recher held an NIH postdoctoral fellowship at the University of Pennsylvania and Princeton University . In his early career, Recher worked with the leading American ecologists Eugene Odum and Robert MacArthur. He moved to Australia in 1967. [ 2 ] From 1968 he worked for 20 years at the Australian Museum as a Principal Research Scientist, focussing on conservation issues and the biology of forest and woodland birds. In 1988 he moved to the University of New England . He was also a member of the National Parks and Wildlife Service (NPWS) Scientific Advisory Committee. Recher was co-editor and author of three books, A natural legacy: ecology in Australia (1979), [ 3 ] Birds of eucalypt forests and woodland: ecology, conservation, management (1985) and Woodlands of Australia , all of which were awarded the Whitley Medal by the Royal Zoological Society of New South Wales. As an early Australian ecology textbook, A Natural Legacy , co-edited with Irina Dunn and Dan Lunney and illustrated with David Milledge's hand drawings of the principles of community ecology and succession, influenced a generation in an era of resurgent environmentalism. Recher is heralded for his long-term field studies, especially of bird communities. In the 1980s, Recher and his colleagues applied these studies to identify the conservation requirements for native birds and animals in their specific habitats. In 2003 the statutory management plan, NPWS Nadgee Nature Reserve Plan of Management , acknowledged the value of his work: [ 4 ] [ 5 ] Other significant long term studies which are still ongoing include long term monitoring of heathland bird communities by Harry Recher, and long term study of the impact of fire, drought and flood on forest-dependent mammals by NPWS [...]. In 1990, Recher stood as a NSW candidate for the Australian Senate as an environmental independent with Irina Dunn , who was formerly a member of the House of Representatives for the Nuclear Disarmament Party. [ 6 ] After the election, Recher continued publishing on communication between ecologists, the media, politicians and the wider public. He remained a passionate advocate for conservation and for scientists communicating well about pressing issues of conservation and climate change. [ 7 ] In 1995 he was foundation editor of Pacific Conservation Biology and continued to serve as an associate editor. In 1996 he became the Foundation Professor in Environmental Management at Edith Cowan University in Perth, Western Australia . As an intellectual leader in the field, Recher remained deeply committed to the contribution of science to policy for conservation and to public understanding of ecology. [ 8 ] [ 9 ] In 1994 he was awarded the Royal Australasian Ornithologists Union 's D.L. Serventy Medal for outstanding published work on birds in the Australasian region. [ 10 ] As well as numerous published scientific papers, he has authored and edited several books.
https://en.wikipedia.org/wiki/Harry_Frederick_Recher
Harry Medforth Dawson (11 November 1875 – 9 March 1939) was a professor of physical chemistry at the University of Leeds . He studied chemical kinetics and reaction mechanisms involving complex ions and their equilibria. He was elected a Fellow of the Royal Society in 1933. Dawson was born in Bramley . He studied at Leeds Modern School and went to Yorkshire College with a Baines Scholarship. Under Arthur Smithells he became interested in chemistry. After graduating in 1896 with a BSc he obtained an 1851 Exhibition scholarship and went to Germany to study at Berlin, Giessen and Leipzig, where he studied under Jacobus Henricus van't Hoff , Karl Elbs and Richard Abegg . After receiving his doctorate from the University of Giessen he returned to England in 1899 and joined Yorkshire College as a demonstrator. In 1905 he became a lecturer and received a DSc in 1907. In 1920 he became chair of physical chemistry and worked there until his retirement. Dawson worked on the iodination of ketones and examined the nature of acid catalysis. [ 1 ] [ 2 ] [ 3 ] [ 4 ] Dawson married Phillis Mary Barr in 1907 and they had three sons and two daughters. [ 5 ]
https://en.wikipedia.org/wiki/Harry_Medforth_Dawson
Harry Erwin Nursten (August 1927 – 20 December 2011) was a British food chemist , specialising in flavour chemistry at the Department of Nutrition and Food Sciences at the University of Reading . Harry Erwin Nursten was born in Czechoslovakia in August 1927, son of Sergius Nursten and Helene. The family managed to escape to England shortly before the Second World War . In the 1939 England and Wales Register [ 1 ] the parents ("Nursem") were living at Corringham Court, Golders Green; Sergius was listed as "Dental surgeon (seeking work)." The family settled in Ilkley , Yorkshire, where Harry attended Ilkley Grammar School and gained his Higher School Certificate in 1944. [ 2 ] He went to the University of Leeds where he read colour chemistry and dyeing, followed by a PhD in colour chemistry, awarded in 1949. In the summer of that year Nursten was one of a group of volunteers harvesting at Windlestone Hall . [ 3 ] [ 4 ] Also there was Jean Frobisher, Harry's fellow student and bridge partner at Ilkley Grammar School, and now a welfare worker. They were married on 23 December 1950 at St Paul's Church, Esholt . [ 5 ] After more research at Leeds, Nursten taught dyeing and textile chemistry at Nottingham Technical College . He returned to Leeds in 1955 as a lecturer in the Procter Department of Leather Science. [ 6 ] Following two sabbaticals at the Massachusetts Institute of Technology and UC Davis , he moved into the area of food and flavour science. In 1976, he was appointed Chair of Food Science at the University of Reading . Following the merger of the National College of Food Technology and the Department of Food Science, he became Head of Department of one of the biggest Food Science Departments in the UK. In 1992, the year he retired, [ 7 ] Nursten ensured that the Hugh Macdonald Sinclair endowment was used to set up a new centre for human nutrition research at the University of Reading . [ 8 ] Harry Nursten died in Reading on 20 December 2011. His wife, Jean Patricia Nursten, is a noted Professor of social work, [ 9 ] and author. [ 10 ]
https://en.wikipedia.org/wiki/Harry_Nursten
Harry Nyquist ( / ˈ n aɪ k w ɪ s t / , Swedish: [ˈnŷːkvɪst] ; February 7, 1889 – April 4, 1976) was a Swedish-American physicist and electronic engineer who made important contributions to communication theory . [ 1 ] Nyquist was born in the village of Nilsby in the parish of Stora Kil, Värmland , Sweden . He was the son of Lars Jonsson Nyqvist (1847–1930) and Catarina (or Katrina) Eriksdotter (1857–1920). His parents had eight children: Elin Teresia, Astrid, Selma, Harry Theodor, Amelie, Olga Maria, Axel Martin and Herta Alfrida. [ 2 ] He immigrated to the United States in 1907. He entered the University of North Dakota in 1912 and received B.S. and M.S. degrees in electrical engineering in 1914 and 1915, respectively. He received a Ph.D. in physics at Yale University in 1917. He worked at AT&T 's Department of Development and Research from 1917 to 1934, and continued when it became part of Bell Telephone Laboratories in 1934, until his retirement in 1954. Nyquist received the IRE Medal of Honor in 1960 for "fundamental contributions to a quantitative understanding of thermal noise, data transmission and negative feedback." In October 1960 he was awarded the Stuart Ballantine Medal of the Franklin Institute "for his theoretical analyses and practical inventions in the field of communications systems during the past forty years including, particularly, his original work in the theories of telegraph transmission, thermal noise in electric conductors, and in the history of feedback systems." In 1969 he was awarded the National Academy of Engineering 's fourth Founder's Medal "in recognition of his many fundamental contributions to engineering." In 1975 Nyquist received, together with Hendrik Bode , the Rufus Oldenburger Medal from the American Society of Mechanical Engineers . [ 3 ] As reported in The Idea Factory: Bell Labs and the Great Age of American Innovation , the Bell Labs patent lawyers wanted to know why some people were so much more productive (in terms of patents) than others. After crunching a lot of data, they found that the only thing the productive employees had in common (other than having made it through the Bell Labs hiring process) was that "Workers with the most patents often shared lunch or breakfast with a Bell Labs electrical engineer named Harry Nyquist. It wasn't the case that Nyquist gave them specific ideas. Rather, as one scientist recalled, 'he drew people out, got them thinking'" (p. 135). Nyquist lived in Pharr, Texas after his retirement, and died in Harlingen, Texas on April 4, 1976. As an engineer at Bell Laboratories, Nyquist did important work on thermal noise (" Johnson–Nyquist noise "), [ 4 ] the stability of feedback amplifiers , telegraphy, facsimile , television, and other important communications problems. With Herbert E. Ives , he helped to develop AT&T's first facsimile machines, which were made public in 1924. In 1932, he published a classic paper on the stability of feedback amplifiers. [ 5 ] The Nyquist stability criterion can now be found in many textbooks on feedback control theory. His early theoretical work on determining the bandwidth requirements for transmitting information laid the foundations for later advances by Claude Shannon , which led to the development of information theory .
In particular, Nyquist determined that the number of independent pulses that could be put through a telegraph channel per unit time is limited to twice the bandwidth of the channel, and published his results in the papers Certain factors affecting telegraph speed (1924) [ 6 ] and Certain topics in Telegraph Transmission Theory (1928). [ 7 ] This rule is essentially a dual of what is now known as the Nyquist–Shannon sampling theorem .
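A small numerical illustration of the dual statement: two sinusoids whose frequencies are mirror images about half the sampling rate produce identical samples, so a signal sampled at rate fs cannot carry independent frequency components above fs/2. The sampling rate and tone frequencies below are arbitrary choices for the demonstration:

```python
import numpy as np

fs = 100.0                      # sampling rate (samples per second)
n = np.arange(200)              # sample indices
t = n / fs

f = 30.0                        # a tone below the Nyquist frequency fs/2 = 50 Hz
f_alias = fs - f                # 70 Hz, above fs/2

x_low = np.cos(2 * np.pi * f * t)
x_high = np.cos(2 * np.pi * f_alias * t)

# cos(2*pi*(fs - f)*n/fs) = cos(2*pi*n - 2*pi*f*n/fs) = cos(2*pi*f*n/fs),
# so the two tones are indistinguishable from their samples alone:
print(np.allclose(x_low, x_high))   # True
```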
https://en.wikipedia.org/wiki/Harry_Nyquist
Harry Potter and the Methods of Rationality ( HPMOR ) is a work of Harry Potter fan fiction by Eliezer Yudkowsky published on FanFiction.Net as a serial from February 28, 2010, [ 1 ] to March 14, 2015, [ 2 ] totaling 122 chapters and over 660,000 words. [ 3 ] [ 4 ] It adapts the story of Harry Potter to explain complex concepts in cognitive science , philosophy , and the scientific method . [ 5 ] Yudkowsky's reimagining supposes that Harry's aunt Petunia Evans married an Oxford professor and homeschooled Harry in science and rational thinking , [ 2 ] [ 4 ] allowing Harry to enter the magical world with ideals from the Age of Enlightenment and an experimental spirit. [ 6 ] The fan fiction spans one year, covering Harry's first year in Hogwarts. [ 7 ] HPMOR has inspired other works of fan fiction, art, and poetry. [ 8 ] HPMOR is connected to the contemporary rationalist community and is popular among rationalists and effective altruists . [ 9 ] [ 10 ] [ 11 ] In this fan fiction's alternate universe of the Harry Potter series, Lily Potter magically made her sister Petunia Evans prettier, letting her marry Oxford professor Michael Verres. They adopt their orphaned nephew Harry James Potter as Harry James Potter-Evans-Verres and homeschool him in science and rationality. When Harry turns 11, Petunia and Professor McGonagall inform him and Michael about the wizarding world and Harry's defeat of Lord Voldemort. Harry becomes irritated over wizarding society's inconsistencies and backwardness. When boarding the Hogwarts Express, Harry befriends Draco Malfoy over Ron Weasley and teaches him science. Harry also befriends Hermione Granger over their scientific inclinations. At Hogwarts, the Sorting Hat sends both Harry and Hermione to Ravenclaw and Draco to Slytherin. As school begins, Harry earns the trust of McGonagall, bonds with Professor Quirrell (who strives to resurrect the teaching of battle magic) and tests magic through the scientific method with Hermione. Harry invents partial transfiguration, which transmutes parts of wholes by applying timeless physics . Draco reluctantly accepts Harry's proof against the Malfoys' bigotry against muggle-borns and informs him that Dumbledore burned his innocent mother, Narcissa, alive. After winter break, Quirrell procures a Dementor to teach students the Patronus charm. Though Hermione and Harry initially fail, Harry recognizes Dementors as shadows of death. He invents the True Patronus charm, destroying the Dementor. After Harry teaches him to cast a regular Patronus, Draco discovers Harry can speak Parseltongue. Quirrell reveals himself as a snake Animagus to Harry and convinces him to help spirit a supposedly manipulated Bellatrix Black from Azkaban, exposing Harry to the horrors endured by the prisoners and leading Dumbledore to believe that Voldemort is back. After a confrontation, Dumbledore tells Harry that the Order of the Phoenix made him murder Narcissa to stop Voldemort from taking hostages. Hermione establishes the organization S.P.H.E.W. to protest misogyny in heroism and fight bullies. This causes widespread chaos, and the group's activities are put on pause. Hermione and Draco are manipulated into believing that she attempted to murder him, and Harry pays his fortune to Lucius Malfoy to save Hermione from Azkaban. A surprised Lucius accepts and withdraws Draco from Hogwarts. The wizarding world theorizes that Quirrell is David Monroe, a long-missing opponent of Voldemort. A mountain troll enters Hogwarts and kills Hermione before Harry manages to kill it.
Grieving, Harry vows to resurrect Hermione and preserves her body. Harry absolves the Malfoys of guilt in Hermione's murder in exchange for Lucius returning his money, exonerating Hermione, and returning Draco to Hogwarts. Quirrell starts eating unicorns, supposedly to delay death from a disease. Near the end of the year, he captures Harry, revealing that he is Voldemort's spirit possessing Quirrell and that he framed Hermione and murdered her by proxy. He coerces Harry into helping him steal the Philosopher's Stone, an artifact for performing permanent transmutation (transfiguration is otherwise temporary), by promising to resurrect Hermione. They succeed; Dumbledore appears and tries to seal Voldemort outside time, but Voldemort endangers Harry, forcing Dumbledore to seal himself instead. Voldemort's spirit abandons Quirrell and incarnates using the Stone; he and Harry resurrect Hermione with the power of the Stone and Harry's True Patronus. Voldemort murders Quirrell as a human sacrifice for a ritual to give Hermione a Horcrux and the superpowers of a mountain troll and unicorn, rendering her near-immortal. Knowing Harry is prophesied to destroy the world, Voldemort holds Harry at gunpoint, strips him naked, summons his Death Eaters, forces Harry into a magical oath to never risk destroying the world, and orders his murder. Harry improvises a partial transfiguration into carbon nanotubes that beheads every Death Eater and maims Voldemort. Harry stuns and memory-wipes Voldemort, then transfigures him into the jewel of his ring. Harry claims the Stone and stages a scene to make it look as though "David Monroe" resurrected Hermione and died defeating Voldemort. After the battle, Harry receives Dumbledore's letters, learning that Dumbledore, guided by prophecies, gambled the world's future on him and let Harry inherit his positions and assets. Harry helps a grieving Draco find his mother, Narcissa, and plans with the resurrected Hermione to overhaul wizarding society by destroying Azkaban with the True Patronus and using the Philosopher's Stone to grant everyone immortality. Yudkowsky wrote Harry Potter and the Methods of Rationality to promote the rationalist community and the rationality skills he advocates on his community blog LessWrong . [ 11 ] [ 12 ] [ 13 ] According to him, "I'd been reading a lot of Harry Potter fan fiction at the time the plot of HPMOR spontaneously burped itself into existence inside my mind, so it came out as a Harry Potter story, [...] If I had to rationalize it afterward, I'd say the Potterverse is a very rich environment for a curious thinker, and there's a large number of potential readers who would enter at least moderately familiar with the Harry Potter universe." [ 1 ] Yudkowsky has used HPMOR to solicit donations for the Center for Applied Rationality , which teaches courses based on his work. [ 1 ] [ 14 ] Yudkowsky refused a suggestion from David Whelan to rewrite HPMOR without the Harry Potter setting and sell it as an original story, avoiding copyright infringement as E. L. James did with Fifty Shades (originally a Twilight fan fiction), saying, "That's not possible in this case. HPMOR is fundamentally linked to, and can only be understood against the background of, the original Harry Potter novels. Numerous scenes are meant to be understood in the light of other scenes in the original HP." [ 1 ] After HPMOR concluded in 2015, Yudkowsky's readers held many worldwide wrap parties in celebration.
[ 1 ] Harry Potter and the Methods of Rationality is highly popular on FanFiction.Net , though it has also caused significant polarization among readers. In 2011, Daniel D. Snyder of The Atlantic recorded how HPMOR "caused uproar in the fan fiction community, drawing both condemnations and praise" on online message boards "for its blasphemous—or brilliant—treatment of the canon." [ 15 ] In 2015, David Whelan of Vice described HPMOR as "the most popular Harry Potter book you've never heard of" and claimed, "Most people agree that it's brilliantly written, challenging, and—curiously—mind altering." [ 1 ] HPMOR has received positive mainstream reception. Hugo Award -winning science fiction author David Brin positively reviewed HPMOR for The Atlantic in 2010, saying, "It's a terrific series, subtle and dramatic and stimulating… I wish all Potter fans would go here, and try on a bigger, bolder and more challenging tale." [ 15 ] In 2014, American politician Ben Wikler lauded HPMOR in The Guardian as "the #1 fan fiction series of all time," saying it was "told with enormous gusto, and with emotional insight into that kind of mind," and comparing Harry's skeptical attitude to that of his friend Aaron Swartz . [ 16 ] Writing for The Washington Post , legal scholar William Baude praised HPMOR as "the best Harry Potter book ever written, though it is not written by J.K. Rowling" in 2014 [ 17 ] and "one of my favorite books written this millennium" in 2015. [ 2 ] In 2015, Vakasha Sachdev of Hindustan Times described HPMOR as "a thinking person's story about magic and heroism" and praised how "the conflict between good and evil is represented as a battle between knowledge and ignorance." [ 7 ] In 2017, Carol Pinchefsky of Syfy lauded HPMOR as "something brilliant" and "a platform on which the writer bounces off complex ideas in a way that's accessible and downright fun." [ 8 ] In a 2019 interview for The Sydney Morning Herald , young adult writer Lili Wilkinson said that she adores HPMOR ; according to her, "It not only explains basically all scientific theory, from economics to astrophysics, but it also includes the greatest scene where Malfoy learns about DNA and has to confront his pureblood bigotry." [ 4 ] Rhys McKay hailed HPMOR in a 2019 article for Who as "one of the best fanfics ever written" and "a familiar yet all-new take on the Wizarding world." [ 18 ] James D. Miller, an economics professor at Smith College and one of Yudkowsky's acquaintances, praised HPMOR in his 2012 book Singularity Rising as an "excellent marketing strategy" for Yudkowsky's "pseudoscientific-sounding" beliefs due to its carefully crafted lessons about rationality. Though he criticized Yudkowsky as "profoundly arrogant" for believing that making people more rational would make them more likely to agree with his ideas, he nonetheless agreed that such an effort would gain him more followers. [ 12 ] The HPMOR fan audiobook was a Parsec Awards finalist in 2012 and 2015. [ 19 ] [ 20 ] On July 17, 2018, Mikhail Samin, a former head of the Russian Pastafarian Church who had previously published The Gospel of the Flying Spaghetti Monster in Russian, [ 21 ] launched a non-commercial crowdfunding campaign hosted on Planeta.ru alongside about 200 helpers to print a three-volume edition of the Russian translation [ 22 ] of Harry Potter and the Methods of Rationality . [ 21 ] Lin Lobaryov, the former lead editor of Mir Fantastiki , compiled the books.
[ 23 ] Samin's campaign reached its 1.086 million ₽ (approximately US$17,000) goal within 30 hours; [ 24 ] it ended on September 30 with 11.4 million ₽ collected (approximately US$175,000), having involved 7,278 people, and became the biggest Russian crowdfunding project for a day, before a fundraiser hosted on CrowdRepublic for the Russian translation of Gloomhaven surpassed it. [ 21 ] Though Samin originally planned to print 1000 copies of HPMOR , his campaign's unprecedented success led him to print twenty-one times that number. Yudkowsky supported Samin's efforts and wrote an exclusive introduction for HPMOR 's Russian printing, though the campaign's popularity surprised him. [ 21 ] Samin's HPMOR publication project is the largest-scale effort on record, [ 25 ] surpassing many previous low-circulation fan printings, [ 23 ] and he sent some Russian copies of HPMOR to libraries and others to schools as prizes for Olympiad winners. [ 21 ] [ 25 ] J.K. Rowling and her agents refused Russian publishing house Eksmo 's request for commercial publication of HPMOR . [ 21 ] HPMOR has Czech , [ 26 ] Chinese , [ 27 ] French , [ 28 ] German , [ 29 ] Hebrew , [ 30 ] Indonesian , [ 31 ] Italian , [ 32 ] Japanese , [ 33 ] Norwegian , [ 34 ] Spanish , [ 35 ] Swedish , [ 36 ] and Ukrainian [ 37 ] translations.
https://en.wikipedia.org/wiki/Harry_Potter_and_the_Methods_of_Rationality
In geometry, the Hart circle is derived from three given circles that cross pairwise to form eight circular triangles . For any one of these eight triangles, and its three neighboring triangles, there exists a Hart circle , tangent to the inscribed circles of these four circular triangles. Thus, the three given circles have eight Hart circles associated with them. The Hart circles are named after their discoverer, Andrew Searle Hart . They can be seen as analogous to the nine-point circle of straight-sided triangles. [ 1 ] [ 2 ]
https://en.wikipedia.org/wiki/Hart_circle
The Hartig net is the network of inward-growing hyphae that extends into the plant host root , penetrating between plant cells in the root epidermis and cortex in ectomycorrhizal symbiosis. [ 1 ] [ 2 ] This network is the internal component of fungal morphology in ectomycorrhizal symbiotic structures formed with host plant roots, in addition to a hyphal mantle or sheath on the root surface, and extramatrical mycelium extending from the mantle into the surrounding soil. The Hartig net is the site of mutualistic resource exchange between the fungus and the host plant . Essential nutrients for plant growth are acquired from the soil by exploration and foraging of the extramatrical mycelium, then transported through the hyphal network across the mantle and into the Hartig net, where they are released by the fungi into the root apoplastic space for uptake by the plant. The hyphae in the Hartig net acquire sugars from the plant root, which are transported to the external mycelium to provide a carbon source to sustain fungal growth. [ 3 ] The Hartig net is a lattice-like network of hyphae that grow into the plant root from the hyphal mantle at the plant root surface. The hyphae of ectomycorrhizal fungi do not penetrate the plant cells, but occupy the apoplastic space between cells in the root. This network extends between the epidermal cells near the root surface, and may also extend between cells in the root cortex . [ 2 ] [ 4 ] The hyphae in the Hartig net formed by some ECM fungi are described as having transfer-cell-like structures, with highly folded membranes that increase surface area and facilitate secretion and uptake of resources exchanged in the mutualistic symbiosis. [ 5 ] The initiation of hyphal growth into the intercellular space between root cells often begins 2–4 days after the establishment of the hyphal mantle in contact with the root surface. [ 6 ] [ 7 ] The initial development of the Hartig net likely involves a regulated decrease of plant defense responses, thus allowing fungal infection. Studies carried out with the model ectomycorrhizal fungus Laccaria bicolor have shown that the fungus secretes a small effector protein (MISSP7) that may regulate plant defense mechanisms by controlling plant response to phytohormones . [ 8 ] Unlike some plant root pathogenic fungi , ectomycorrhizal fungi are largely unable to produce many plant cell-wall-degrading enzymes, but increased pectin modification enzymes released by Laccaria bicolor during fungal infection and Hartig net development indicate that pectin degradation may function to loosen the adhesion between neighboring plant cells and allow room for hyphal growth between cells. [ 9 ] [ 10 ] This Hartig net structure is common among ectomycorrhizal fungi, although the depth and thickness of the hyphal network can vary considerably depending on the host species. Fungi associating with plants in the Pinaceae form a robust Hartig net that penetrates between cells deep into the root cortex, while the Hartig net formation in ectomycorrhizal symbioses with many angiosperms may not extend beyond the root epidermis. [ 11 ] It has also been demonstrated that the depth and development of the Hartig net can vary among different fungi, even among isolates of the same species.
Interestingly, an experiment using two isolates of Paxillus involutus , one of which developed only a loose mantle at the root surface and no Hartig net in poplar roots, showed that plant nitrate uptake was still improved by the symbiosis regardless of the presence of internal hyphal structure. [ 12 ] As an additional caveat, some fungal species such as Tuber melanosporum can form arbutoid mycorrhizae, involving some intracellular penetration into plant root cells by fungal hyphae in addition to developing a shallow Hartig-net-like structure between epidermal cells. [ 13 ] The Hartig net supplies the plant root with chemical elements required for plant growth, such as nitrogen and phosphorus , [ 14 ] potassium , [ 15 ] [ 16 ] and micronutrients , [ 17 ] in addition to water supplied to the roots through hyphal transport. [ 18 ] Essential nutrients acquired from surrounding soil by the extramatrical mycelium are transported into the hyphae in the Hartig net, where they are released into the apoplastic space for direct uptake by plant root cells. [ 3 ] [ 19 ] In exchange for the nutrients provided by the fungal partner, the plant provides a portion of its photosynthetically fixed carbon to the fungal partner as sugars. Sugars are released into the apoplastic space and made available for uptake by Hartig net hyphae. Although sucrose was long considered to be an important form of carbon provided by the plant to the fungus, many ectomycorrhizal fungi lack sucrose uptake transporters. Therefore, the fungal symbiont may depend on plant production of invertases to degrade sucrose into useable monosaccharides for fungal uptake. [ 20 ] [ 21 ] In the Hartig net of Amanita muscaria within poplar roots, expression of important fungal enzymes for trehalose biosynthesis was higher than in the extramatrical mycelium, indicating that trehalose production may function as a carbohydrate sink, increasing fungal demand for plant-photosynthesized carbon compounds through the symbiotic exchange. [ 22 ] The plant regulatory mechanisms that influence the nutrient supply by the Hartig net are not fully understood, but the upregulation of plant defense mechanisms in response to decreased nitrogen transport by ECM fungi, rather than a reduction in carbon allocation to ECM roots, suggests that the regulation of symbiotic resource exchange in ECM symbiosis is not a simple reciprocal response. [ 20 ] In addition to the exchange of essential nutrients, the Hartig net may play an important role in plant strategies for tolerance of abiotic stressors, such as regulating bioaccumulation of metals [ 23 ] [ 24 ] or mediating plant stress responses to salinity. [ 12 ] The Hartig net is named after Theodor Hartig , [ 25 ] [ 26 ] a 19th-century German forest biologist and botanist. He reported research in 1842 on the anatomy of the interface between ectomycorrhizal fungi and tree roots.
https://en.wikipedia.org/wiki/Hartig_net
The hartley (symbol Hart ), also called a ban , or a dit (short for "decimal digit"), [ 1 ] [ 2 ] [ 3 ] is a logarithmic unit that measures information or entropy , based on base 10 logarithms and powers of 10. One hartley is the information content of an event if the probability of that event occurring is 1 ⁄ 10 . [ 4 ] It is therefore equal to the information contained in one decimal digit (or dit), assuming a priori equiprobability of each possible value. It is named after Ralph Hartley . If base 2 logarithms and powers of 2 are used instead, then the unit of information is the shannon or bit , which is the information content of an event if the probability of that event occurring is 1 ⁄ 2 . Natural logarithms and powers of e define the nat . One ban corresponds to ln(10) nat = log 2 (10) Sh , or approximately 2.303 nat , or 3.322 bit (3.322 Sh). [ a ] A deciban is one tenth of a ban (or about 0.332 Sh); the name is formed from ban by the SI prefix deci- . Though there is no associated SI unit , information entropy is part of the International System of Quantities , defined by International Standard IEC 80000-13 of the International Electrotechnical Commission . The term hartley is named after Ralph Hartley , who suggested in 1928 to measure information using a logarithmic base equal to the number of distinguishable states in its representation, which would be the base 10 for a decimal digit. [ 5 ] [ 6 ] The ban and the deciban were invented by Alan Turing with Irving John "Jack" Good in 1940, to measure the amount of information that could be deduced by the codebreakers at Bletchley Park using the Banburismus procedure, towards determining each day's unknown setting of the German naval Enigma cipher machine. The name was inspired by the enormous sheets of card, printed in the town of Banbury about 30 miles away, that were used in the process. [ 7 ] Good argued that the sequential summation of decibans to build up a measure of the weight of evidence in favour of a hypothesis, is essentially Bayesian inference . [ 7 ] Donald A. Gillies , however, argued the ban is, in effect, the same as Karl Popper's measure of the severity of a test. [ 8 ] The deciban is a particularly useful unit for log-odds , notably as a measure of information in Bayes factors , odds ratios (ratio of odds, so log is difference of log-odds), or weights of evidence. 10 decibans corresponds to odds of 10:1; 20 decibans to 100:1 odds, etc. According to Good, a change in a weight of evidence of 1 deciban (i.e., a change in the odds from evens to about 5:4) is about as finely as humans can reasonably be expected to quantify their degree of belief in a hypothesis. [ 9 ] Odds corresponding to integer decibans can often be well-approximated by simple integer ratios: for example, 1 deciban corresponds to odds of about 5:4, 3 decibans to about 2:1, 5 decibans to about 3:1, and 7 decibans to about 5:1.
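These unit conversions are simple rescalings of logarithms, which the following short Python sketch illustrates (the function names here are our own, chosen for the example):

```python
import math

def hartleys(p: float) -> float:
    """Information content, in hartleys (bans), of an event with probability p."""
    return -math.log10(p)

# One hartley is the information of an event with probability 1/10 ...
print(hartleys(1 / 10))                  # 1.0 ban
# ... which equals ln(10) nats or log2(10) shannons:
print(math.log(10), math.log2(10))       # ~2.303 nat, ~3.322 Sh

def decibans(odds: float) -> float:
    """Weight of evidence, in decibans, for odds given as a ratio."""
    return 10 * math.log10(odds)

print(decibans(10))      # 10.0 db for 10:1 odds
print(decibans(100))     # 20.0 db for 100:1 odds
print(decibans(5 / 4))   # ~0.97 db: roughly Good's 1 db "just noticeable" step
```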
https://en.wikipedia.org/wiki/Hartley_(unit)
The Hartley function is a measure of uncertainty , introduced by Ralph Hartley in 1928. If a sample is picked uniformly at random from a finite set A , the information revealed after the outcome is known is given by the Hartley function {\displaystyle H_{0}(A):=\log _{b}|A|,} where | A | denotes the cardinality of A . If the base of the logarithm is 2, then the unit of uncertainty is the shannon (more commonly known as bit ). If it is the natural logarithm , then the unit is the nat . Hartley used a base-ten logarithm , and with this base, the unit of information is called the hartley (aka ban or dit ) in his honor. It is also known as the Hartley entropy or max-entropy. The Hartley function coincides with the Shannon entropy (as well as with the Rényi entropies of all orders) in the case of a uniform probability distribution. It is a special case of the Rényi entropy since it is the Rényi entropy of order zero: {\displaystyle H_{0}(A)={\frac {1}{1-0}}\log \sum _{i=1}^{|A|}p_{i}^{0}=\log |A|.} But it can also be viewed as a primitive construction, since, as emphasized by Kolmogorov and Rényi, the Hartley function can be defined without introducing any notions of probability (see Uncertainty and information by George J. Klir, p. 423). The Hartley function only depends on the number of elements in a set, and hence can be viewed as a function on natural numbers. Rényi showed that the Hartley function in base 2 is the only function f mapping natural numbers to real numbers that satisfies (1) additivity: f ( mn ) = f ( m ) + f ( n ); (2) monotonicity: f ( m ) ≤ f ( m + 1); and (3) normalization: f (2) = 1. Condition 1 says that the uncertainty of the Cartesian product of two finite sets A and B is the sum of uncertainties of A and B . Condition 2 says that a larger set has larger uncertainty. We want to show that the Hartley function, log 2 ( n ), is the only function mapping natural numbers to real numbers that satisfies these three properties. Let f be a function on positive integers that satisfies them. From the additive property, we can show that for any integers n and k , {\displaystyle f(n^{k})=kf(n).\qquad (1)} Let a , b , and t be any positive integers. There is a unique integer s determined by {\displaystyle a^{s}\leq b^{t}<a^{s+1}.} Therefore, {\displaystyle s\log _{2}a\leq t\log _{2}b<(s+1)\log _{2}a} and {\displaystyle {\frac {s}{t}}\leq {\frac {\log _{2}b}{\log _{2}a}}<{\frac {s+1}{t}}.} On the other hand, by monotonicity, {\displaystyle f(a^{s})\leq f(b^{t})\leq f(a^{s+1}).} Using equation (1), one gets {\displaystyle sf(a)\leq tf(b)\leq (s+1)f(a),} and {\displaystyle {\frac {s}{t}}\leq {\frac {f(b)}{f(a)}}\leq {\frac {s+1}{t}}.} Hence, {\displaystyle \left|{\frac {f(b)}{f(a)}}-{\frac {\log _{2}b}{\log _{2}a}}\right|\leq {\frac {1}{t}}.} Since t can be arbitrarily large, the difference on the left hand side of the above inequality must be zero, {\displaystyle {\frac {f(b)}{f(a)}}={\frac {\log _{2}b}{\log _{2}a}}.} So, {\displaystyle f(b)=\mu \log _{2}b} for some constant μ , which must be equal to 1 by the normalization property.
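A minimal numerical illustration of the definition and of the additivity property (condition 1) over a Cartesian product, in Python:

```python
import math
from itertools import product

def hartley(A, base=2):
    """Hartley function H0(A) = log_base |A| of a finite set A."""
    return math.log(len(A), base)

A = {"a", "b", "c"}
B = {0, 1}
AxB = set(product(A, B))          # Cartesian product, |AxB| = |A| * |B|

print(hartley(A) + hartley(B))    # ~2.585 bits (log2 3 + log2 2)
print(hartley(AxB))               # ~2.585 bits: H0(AxB) = H0(A) + H0(B)
print(hartley(A, base=10))        # ~0.477 hartleys
```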
https://en.wikipedia.org/wiki/Hartley_function
In mathematics , the Hartley transform ( HT ) is an integral transform closely related to the Fourier transform (FT), but which transforms real-valued functions to real-valued functions. It was proposed as an alternative to the Fourier transform by Ralph V. L. Hartley in 1942, [ 1 ] and is one of many known Fourier-related transforms . Compared to the Fourier transform, the Hartley transform has the advantages of transforming real functions to real functions (as opposed to requiring complex numbers ) and of being its own inverse. The discrete version of the transform, the discrete Hartley transform (DHT), was introduced by Ronald N. Bracewell in 1983. [ 2 ] The two-dimensional Hartley transform can be computed by an analog optical process similar to an optical Fourier transform (OFT), with the proposed advantage that only its amplitude and sign need to be determined rather than its complex phase. [ 3 ] However, optical Hartley transforms do not seem to have seen widespread use. The Hartley transform of a function {\displaystyle f(t)} is defined by: {\displaystyle H(\omega )=\left\{{\mathcal {H}}f\right\}(\omega )={\frac {1}{\sqrt {2\pi }}}\int _{-\infty }^{\infty }f(t)\operatorname {cas} (\omega t)\,\mathrm {d} t\,,} where {\displaystyle \omega } can in applications be an angular frequency and {\displaystyle \operatorname {cas} (t)=\cos(t)+\sin(t)={\sqrt {2}}\sin(t+\pi /4)={\sqrt {2}}\cos(t-\pi /4)} is the cosine-and-sine (cas) or Hartley kernel. In engineering terms, this transform takes a signal (function) from the time-domain to the Hartley spectral domain (frequency domain). The Hartley transform has the convenient property of being its own inverse (an involution ): {\displaystyle f=\{{\mathcal {H}}\{{\mathcal {H}}f\}\}\,.} The above is in accord with Hartley's original definition, but (as with the Fourier transform) various minor details are matters of convention and can be changed without altering the essential properties. This transform differs from the classic Fourier transform {\displaystyle F(\omega )={\mathcal {F}}\{f(t)\}(\omega )} in the choice of the kernel. In the Fourier transform, we have the exponential kernel, {\displaystyle \exp \left({-\mathrm {i} \omega t}\right)=\cos(\omega t)-\mathrm {i} \sin(\omega t)} , where {\displaystyle \mathrm {i} } is the imaginary unit . The two transforms are closely related, however, and the Fourier transform (assuming it uses the same {\displaystyle 1/{\sqrt {2\pi }}} normalization convention) can be computed from the Hartley transform via: {\displaystyle F(\omega )={\frac {H(\omega )+H(-\omega )}{2}}-\mathrm {i} {\frac {H(\omega )-H(-\omega )}{2}}\,.} That is, the real and imaginary parts of the Fourier transform are simply given by the even and odd parts of the Hartley transform, respectively.
Conversely, for real-valued functions {\displaystyle f(t)} , the Hartley transform is given from the Fourier transform's real and imaginary parts: {\displaystyle \{{\mathcal {H}}f\}=\Re \{{\mathcal {F}}f\}-\Im \{{\mathcal {F}}f\}=\Re \{{\mathcal {F}}f\cdot (1+\mathrm {i} )\}\,,} where {\displaystyle \Re } and {\displaystyle \Im } denote the real and imaginary parts. The Hartley transform is a real linear operator , and is symmetric (and Hermitian ). From the symmetric and self-inverse properties, it follows that the transform is a unitary operator (indeed, orthogonal ). The convolution of two functions corresponds, under Hartley transforms, to [ 4 ] {\displaystyle f(x)*g(x)\ \leftrightarrow \ {\frac {F(\omega )G(\omega )+F(-\omega )G(\omega )+F(\omega )G(-\omega )-F(-\omega )G(-\omega )}{2}},} where {\displaystyle F(\omega )=\{{\mathcal {H}}f\}(\omega )} and {\displaystyle G(\omega )=\{{\mathcal {H}}g\}(\omega )} . Similar to the Fourier transform, the Hartley transform of an even/odd function is even/odd, respectively. The properties of the Hartley kernel , for which Hartley introduced the name cas for the function (from cosine and sine ) in 1942, [ 1 ] [ 5 ] follow directly from trigonometry , and its definition as a phase-shifted trigonometric function {\displaystyle \operatorname {cas} (t)={\sqrt {2}}\sin(t+\pi /4)=\sin(t)+\cos(t)} . For example, it has an angle-addition identity of: {\displaystyle 2\operatorname {cas} (a+b)=\operatorname {cas} (a)\operatorname {cas} (b)+\operatorname {cas} (-a)\operatorname {cas} (b)+\operatorname {cas} (a)\operatorname {cas} (-b)-\operatorname {cas} (-a)\operatorname {cas} (-b)\,.} Additionally: {\displaystyle \operatorname {cas} (a+b)={\cos(a)\operatorname {cas} (b)}+{\sin(a)\operatorname {cas} (-b)}=\cos(b)\operatorname {cas} (a)+\sin(b)\operatorname {cas} (-a)\,,} and its derivative is given by: {\displaystyle \operatorname {cas} '(a)={\frac {d}{da}}\operatorname {cas} (a)=\cos(a)-\sin(a)=\operatorname {cas} (-a)\,.}
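The discrete analogue is easy to experiment with: since cas(t) = cos(t) + sin(t) and the FFT kernel is cos(t) − i sin(t), a discrete Hartley transform can be obtained from an FFT as the real part minus the imaginary part, and it is its own inverse up to a factor of N. A small numpy sketch, assuming the unnormalized DHT convention:

```python
import numpy as np

def dht(x):
    """Discrete Hartley transform: H[k] = sum_n x[n] * cas(2*pi*n*k/N).
    Since exp(-i t) = cos(t) - i sin(t), this is Re(FFT) - Im(FFT)."""
    X = np.fft.fft(x)
    return X.real - X.imag

rng = np.random.default_rng(0)
x = rng.standard_normal(64)

H = dht(x)
# Self-inverse up to the factor N in this unnormalized convention:
print(np.allclose(dht(H) / len(x), x))         # True

# Real and imaginary parts of the Fourier transform from the even/odd
# parts of the Hartley transform:
F = np.fft.fft(x)
Hrev = np.roll(H[::-1], 1)                     # H[-k], i.e. H[(N - k) mod N]
print(np.allclose(F.real, (H + Hrev) / 2))     # True
print(np.allclose(F.imag, -(H - Hrev) / 2))    # True
```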
https://en.wikipedia.org/wiki/Hartley_kernel
The Hartman effect describes how the delay time for a quantum tunneling particle is independent of the thickness of the opaque barrier . It is named after Thomas Hartman , who discovered it in 1962. [ 1 ] The Hartman effect is the tunneling effect through a barrier where the tunneling time tends to a constant for thick enough barriers. This was first described by Thomas E. Hartman in 1962. [ 1 ] Although the effect was first predicted for quantum particles governed by the Schrödinger equation , it also exists for classical electromagnetic wave packets tunneling as evanescent waves through electromagnetic barriers. [ 2 ] This is because the Helmholtz equation for electromagnetic waves and the time-independent Schrödinger equation have the same form. Since tunneling is a wave phenomenon, it occurs for all kinds of waves: matter waves, electromagnetic waves, and even sound waves. Hence the Hartman effect should exist for all tunneling waves. There is no unique and universally accepted definition of "tunneling time" in physics. This is because time is not an operator in quantum mechanics, unlike other quantities like position and momentum. Among the many candidates for "tunneling time" are (i) the group delay or phase time, (ii) the dwell time, (iii) the Larmor times, (iv) the Büttiker–Landauer time, and (v) the semiclassical time. [ 3 ] [ 4 ] Three of these tunneling times (group delay, dwell time, and Larmor time) exhibit the Hartman effect, in the sense that they saturate at a constant value as the barrier thickness is increased. If the tunneling time T remains fixed as the barrier thickness L is increased, then the tunneling velocity v = L / T will ultimately become unbounded. The Hartman effect thus leads to predictions of anomalously large, and even superluminal, tunneling velocities in the limit of thick barriers. However, more recent rigorous analysis proves that the process is entirely subluminal. [ 5 ] Tunneling time experiments with quantum particles like electrons are extremely difficult, not only because of the timescales (attoseconds) and length scales (sub-nanometre) involved, but also because of possible confounding interactions with the environment that have nothing to do with the actual tunneling process itself. As a result, the only experimental observations of the Hartman effect have been based on electromagnetic analogs to quantum tunneling. The first experimental verification of the Hartman effect was by Enders and Nimtz, who used a microwave waveguide with a narrowed region that served as a barrier to waves with frequencies below the cutoff frequency in that region. [ 6 ] [ 7 ] They measured the frequency-dependent phase shift of continuous wave (cw) microwaves transmitted by the structure. They found that the frequency-dependent phase shift was independent of the length of the barrier region. Since the group delay (phase time) is the derivative of the phase shift with respect to frequency, this independence of the phase shift means that the group delay is independent of barrier length, a confirmation of the Hartman effect. They also found that the measured group delay was shorter than the transit time L / c for a pulse travelling at the speed of light c over the same barrier distance L in vacuum. From this, it was inferred that the tunneling of evanescent waves is superluminal, although it is now known on rigorous mathematical grounds that relativistic quantum tunneling (modeled using the Dirac equation) is a subluminal process.
[ 5 ] At optical frequencies the electromagnetic analogs to quantum tunneling involve wave propagation in photonic bandgap structures and frustrated total internal reflection at the interface between two prisms in close contact. Spielmann, et al. sent 12 fs (FWHM) laser pulses through the stop band of a multilayer dielectric structure. [ 8 ] They found that the measured group delay was independent of the number of layers, or equivalently, the thickness of the photonic barrier, thus confirming the Hartman effect for tunneling light waves. In another optical experiment, Longhi, et al. sent 380-ps wide laser pulses through the stop band of a fiber Bragg grating (FBG). [ 9 ] They measured the group delay of the transmitted pulses for gratings of length 1.3 cm, 1.6 cm, and 2 cm and found that the delay saturated with length L in a manner described by the function tanh( qL ), where q is the grating coupling constant. This is another confirmation of the Hartman effect. The inferred tunneling group velocity was faster than that of a reference pulse propagating in a fiber without a barrier and also increased with FBG length, or equivalently, the reflectivity. In a different approach to optical tunneling, Balcou and Dutriaux measured the group delay associated with light transport across a small gap between two prisms . [ 10 ] When a light beam travelling through a prism impinges upon the glass-air interface at an angle greater than a certain critical angle, it undergoes total internal reflection and no energy is transmitted into the air. However, when another prism is brought very close (within a wavelength) to the first prism, light can tunnel across the gap and carry energy into the second prism. This phenomenon is known as frustrated total internal reflection (FTIR) and is an optical analog of quantum tunneling. Balcou and Dutriaux obtained the group delay from a measurement of the beam shift (known as the Goos–Hänchen shift ) during FTIR. They found that the group delay saturates with the separation between the prisms, thus confirming the Hartman effect. They also found that the group delays were equal for both transmitted and reflected beams, a result that is predicted for symmetric barriers. The Hartman effect has also been observed with acoustic waves. Yang, et al. propagated ultrasound pulses through 3D phononic crystals made of tungsten carbide beads in water. [ 11 ] For frequencies inside the stop band they found that the group delay saturated with sample thickness. By converting the delay to a velocity through v = L / T , they found a group velocity that increases with sample thickness. In another experiment, Robertson, et al. created a periodic acoustic waveguide structure with an acoustic bandgap for audio frequency pulses. [ 12 ] They found that inside the stop band the acoustic group delay was relatively insensitive to the length of the structure, a verification of the Hartman effect. Furthermore, the group velocity increased with length and was greater than the speed of sound, a phenomenon they refer to as "breaking the sound barrier." Why does the tunneling time of a particle or wave packet become independent of barrier width for thick enough barriers? The origin of the Hartman effect had been a mystery for decades. If the tunneling time becomes independent of barrier width, the implication is that the wave packet speeds up as the barrier is made longer.
Not only does it speed up, but it speeds up by just the right amount to traverse the increased distance in the same amount of time. In 2002 Herbert Winful showed that the group delay for a photonic bandgap structure is identical to the dwell time which is proportional to the stored energy in the barrier. [ 13 ] In fact, the dwell time is the stored energy divided by the input power. In the stop band, the electric field is an exponentially decaying function of distance. The stored energy is proportional to the integral of the square of the field. This integral, the area under a decaying exponential, becomes independent of length for a long enough barrier. The group delay saturates because the stored energy saturates. He redefined the group delay in tunneling as the lifetime of stored energy escaping through both ends. [ 14 ] This interpretation of group delay as a lifetime also explains why the transmission and reflection group delays are equal for a symmetric barrier. He pointed out that the tunnelling time is not a propagation delay and "should not be linked to a velocity since evanescent waves do not propagate". [ 15 ] In other papers Winful extended his analysis to quantum (as opposed to electromagnetic) tunneling and showed that the group delay is equal to the dwell time plus a self-interference delay, both of which are proportional to the integrated probability density and hence saturate with barrier length. [ 16 ]
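The saturation of the group delay is easy to reproduce numerically for the textbook case of a particle tunneling through a rectangular barrier. The sketch below is our own illustration, not taken from the papers cited above; it uses units with ħ = m = 1, and the barrier height and incident energy are arbitrary illustrative values. It computes the transmission phase (with the free-propagation phase removed) and differentiates it with respect to energy to obtain the phase time, which levels off as the barrier widens:

```python
import numpy as np

hbar = m = 1.0
V0 = 2.0                    # barrier height (arbitrary illustrative value)

def phase_time(E, L, dE=1e-6):
    """Group delay ("phase time") for a rectangular barrier of width L:
    tau = hbar * d/dE [arg t(E) + k*L], t being the transmission amplitude."""
    def barrier_phase(E):
        k = np.sqrt(2 * m * E) / hbar
        kappa = np.sqrt(2 * m * (V0 - E)) / hbar
        # For E < V0 the standard transmission amplitude is
        # t = exp(-i k L) / (cosh(kappa L) + i eps sinh(kappa L)),
        # with eps = (kappa^2 - k^2) / (2 k kappa), so
        # arg t + k L = -arctan(eps * tanh(kappa L)).
        eps = (kappa**2 - k**2) / (2 * k * kappa)
        return -np.arctan(eps * np.tanh(kappa * L))
    return hbar * (barrier_phase(E + dE) - barrier_phase(E - dE)) / (2 * dE)

E = 1.0                     # incident energy below the barrier top
for L in [1.0, 2.0, 4.0, 8.0, 16.0, 32.0]:
    tau = phase_time(E, L)
    print(f"L = {L:5.1f}   tau = {tau:.6f}   L/tau = {L/tau:7.2f}")
# tau saturates as L grows, so the inferred "velocity" L/tau increases
# without bound -- the Hartman effect in miniature.
```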
https://en.wikipedia.org/wiki/Hartman_effect
The Hartmann number ( Ha ) is the ratio of electromagnetic force to the viscous force, first introduced by Julius Hartmann (1881–1951) of Denmark. [ 1 ] [ 2 ] It is frequently encountered in fluid flows through magnetic fields. [ 3 ] It is defined by: {\displaystyle \mathrm {Ha} =BL{\sqrt {\frac {\sigma }{\mu }}},} where B is the magnetic field intensity, L is the characteristic length scale, σ is the electrical conductivity of the fluid, and μ is the dynamic viscosity.
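As a quick numerical illustration of the formula (the fluid properties below are rough, illustrative values for mercury, not precise reference data):

```python
import math

def hartmann_number(B, L, sigma, mu):
    """Ha = B * L * sqrt(sigma / mu): electromagnetic vs. viscous forces."""
    return B * L * math.sqrt(sigma / mu)

# Rough illustrative values: mercury in a 1 T field in a 1 cm duct.
B = 1.0          # magnetic flux density, T
L = 0.01         # characteristic length, m
sigma = 1.0e6    # electrical conductivity, S/m
mu = 1.5e-3      # dynamic viscosity, Pa*s

print(f"Ha = {hartmann_number(B, L, sigma, mu):.0f}")   # a few hundred
```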
https://en.wikipedia.org/wiki/Hartmann_number
In mathematics , in the study of dynamical systems , the Hartman–Grobman theorem or linearisation theorem is a theorem about the local behaviour of dynamical systems in the neighbourhood of a hyperbolic equilibrium point . It asserts that linearisation (a natural simplification of the system) is effective in predicting qualitative patterns of behaviour. The theorem owes its name to Philip Hartman and David M. Grobman . The theorem states that the behaviour of a dynamical system in a domain near a hyperbolic equilibrium point is qualitatively the same as the behaviour of its linearization near this equilibrium point, where hyperbolicity means that no eigenvalue of the linearization has real part equal to zero. Therefore, when dealing with such dynamical systems one can use the simpler linearization of the system to analyse its behaviour around equilibria. [ 1 ] Consider a system evolving in time with state {\displaystyle u(t)\in \mathbb {R} ^{n}} that satisfies the differential equation {\displaystyle du/dt=f(u)} for some smooth map {\displaystyle f\colon \mathbb {R} ^{n}\to \mathbb {R} ^{n}} . Now suppose the map has a hyperbolic equilibrium state {\displaystyle u^{*}\in \mathbb {R} ^{n}} : that is, {\displaystyle f(u^{*})=0} and the Jacobian matrix {\displaystyle A=[\partial f_{i}/\partial x_{j}]} of {\displaystyle f} at state {\displaystyle u^{*}} has no eigenvalue with real part equal to zero. Then there exists a neighbourhood {\displaystyle N} of the equilibrium {\displaystyle u^{*}} and a homeomorphism {\displaystyle h\colon N\to \mathbb {R} ^{n}} , such that {\displaystyle h(u^{*})=0} and such that in the neighbourhood {\displaystyle N} the flow of {\displaystyle du/dt=f(u)} is topologically conjugate by the continuous map {\displaystyle U=h(u)} to the flow of its linearisation {\displaystyle dU/dt=AU} . [ 2 ] [ 3 ] [ 4 ] [ 5 ] An analogous result holds for iterated maps, and for fixed points of flows or maps on manifolds. A mere topological conjugacy does not provide geometric information about the behavior near the equilibrium. Indeed, neighborhoods of any two equilibria are topologically conjugate so long as the dimensions of the contracting directions (negative eigenvalues) match and the dimensions of the expanding directions (positive eigenvalues) match. [ 6 ] But the topological conjugacy in this context does provide the full geometric picture. In effect, the nonlinear phase portrait near the equilibrium is a thumbnail of the phase portrait of the linearized system. This is the meaning of the following regularity results, and it is illustrated by the saddle equilibrium in the example below. Even for infinitely differentiable maps {\displaystyle f} , the homeomorphism {\displaystyle h} need not be smooth, nor even locally Lipschitz. However, it turns out to be Hölder continuous , with exponent arbitrarily close to 1. [ 7 ] [ 8 ] [ 9 ] [ 10 ] Moreover, on a surface, i.e., in dimension 2, the linearizing homeomorphism and its inverse are continuously differentiable (with, as in the example below, the differential at the equilibrium being the identity) [ 4 ] but need not be {\displaystyle C^{2}} .
[ 11 ] And in any dimension, if {\displaystyle f} has Hölder continuous derivative, then the linearizing homeomorphism is differentiable at the equilibrium and its differential at the equilibrium is the identity. [ 12 ] [ 13 ] The Hartman–Grobman theorem has been extended to infinite-dimensional Banach spaces, non-autonomous systems {\displaystyle du/dt=f(u,t)} (potentially stochastic), and to cater for the topological differences that occur when there are eigenvalues with zero or near-zero real part. [ 10 ] [ 8 ] [ 14 ] [ 15 ] [ 16 ] [ 17 ] The algebra necessary for this example is easily carried out by a web service that computes normal form coordinate transforms of systems of differential equations, autonomous or non-autonomous, deterministic or stochastic . [ 18 ] Consider the 2D system in variables {\displaystyle u=(y,z)} evolving according to the pair of coupled differential equations {\displaystyle {\frac {dy}{dt}}=-3y+yz\quad {\text{and}}\quad {\frac {dz}{dt}}=z+y^{2}.} By direct computation it can be seen that the only equilibrium of this system lies at the origin, that is {\displaystyle u^{*}=0} . The coordinate transform, {\displaystyle u=h^{-1}(U)} where {\displaystyle U=(Y,Z)} , given by {\displaystyle {\begin{aligned}y&\approx Y+YZ+{\tfrac {1}{42}}Y^{3}+{\tfrac {1}{2}}YZ^{2}\\[5pt]z&\approx Z-{\tfrac {1}{7}}Y^{2}-{\tfrac {1}{3}}Y^{2}Z\end{aligned}}} is a smooth map between the original {\displaystyle u=(y,z)} and new {\displaystyle U=(Y,Z)} coordinates, at least near the equilibrium at the origin. In the new coordinates the dynamical system transforms to its linearisation {\displaystyle {\frac {dY}{dt}}=-3Y\quad {\text{and}}\quad {\frac {dZ}{dt}}=Z.} That is, a distorted version of the linearisation gives the original dynamics in some finite neighbourhood.
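The transform can be checked numerically: push the exactly solvable linear flow through the coordinate map and measure how well the result satisfies the nonlinear equations. The numpy sketch below is our own illustration; since the transform above is accurate through cubic terms, the residual should shrink by roughly a factor of 16 each time the initial condition is halved:

```python
import numpy as np

def h_inverse(Y, Z):
    """Approximate coordinate map u = h^{-1}(U) from the example above."""
    y = Y + Y * Z + Y**3 / 42 + Y * Z**2 / 2
    z = Z - Y**2 / 7 - Y**2 * Z / 3
    return y, z

t = np.linspace(0.0, 1.0, 4001)
for scale in [0.2, 0.1, 0.05]:
    Y = scale * np.exp(-3 * t)          # exact solution of dY/dt = -3Y
    Z = scale * np.exp(t)               # exact solution of dZ/dt = Z
    y, z = h_inverse(Y, Z)
    # residuals of the nonlinear system dy/dt = -3y + yz, dz/dt = z + y^2
    ry = np.gradient(y, t, edge_order=2) - (-3 * y + y * z)
    rz = np.gradient(z, t, edge_order=2) - (z + y**2)
    err = max(np.abs(ry[1:-1]).max(), np.abs(rz[1:-1]).max())
    print(f"initial scale {scale:5.2f}: max residual {err:.2e}")
```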
https://en.wikipedia.org/wiki/Hartman–Grobman_theorem
Hartmut Bärnighausen (16 February 1933 in Chemnitz – 30 March 2025 in Ettlingen ) was a German chemist and crystallographer . He was known for establishing the Bärnighausen trees which describe group-subgroup relationships of crystal structures. [ 1 ] Bärnighausen studied chemistry at Leipzig University and received his diploma after a diploma thesis with Leopold Wolf in 1955. [ 1 ] In May 1958, he fled from East Germany to the University of Freiburg , where he worked with Georg Brauer . [ 1 ] He finished his doctorate in the group of Georg Brauer in 1959. [ 1 ] In 1967, he received his habilitation. [ 1 ] From 1967 to 1998, he was a professor of inorganic chemistry at the University of Karlsruhe . [ 1 ] His research focused on crystal chemistry, in particular the group-subgroup relationships of crystal structures. He was awarded the Carl Hermann Medal of the German Crystallographic Society in 1997. [ 2 ]
https://en.wikipedia.org/wiki/Hartmut_BΓ€rnighausen
In 1927, a year after the publication of the Schrödinger equation , Hartree formulated what are now known as the Hartree equations for atoms, using the concept of self-consistency that Lindsay had introduced in his study of many electron systems in the context of Bohr theory . [ 1 ] Hartree assumed that the nucleus together with the electrons formed a spherically symmetric field. The charge distribution of each electron was the solution of the Schrödinger equation for an electron in a potential {\displaystyle v(r)} , derived from the field. Self-consistency required that the final field, computed from the solutions, was self-consistent with the initial field, and he thus called his method the self-consistent field method. In order to solve the equation of an electron in a spherical potential, Hartree first introduced atomic units to eliminate physical constants. Then he converted the Laplacian from Cartesian to spherical coordinates to show that the solution was a product of a radial function {\displaystyle P(r)/r} and a spherical harmonic with an angular quantum number {\displaystyle \ell } , namely {\displaystyle \psi =(1/r)P(r)S_{\ell }(\theta ,\phi )} . The equation for the radial function was [ 2 ] [ 3 ] [ 4 ] {\displaystyle {\frac {d^{2}P(r)}{dr^{2}}}+2\left[E-v(r)\right]P(r)-{\frac {\ell (\ell +1)}{r^{2}}}P(r)=0.} In mathematics, the Hartree equation , named after Douglas Hartree , is {\displaystyle i\,\partial _{t}u+\Delta u=F(u)} in {\displaystyle \mathbb {R} ^{d+1}} , where {\displaystyle F(u)=\pm \left(|x|^{-\gamma }*|u|^{2}\right)u} and {\displaystyle 0<\gamma <d} . The non-linear Schrödinger equation is in some sense a limiting case . The wavefunction which describes all of the electrons, {\displaystyle \Psi } , is almost always too complex to calculate directly. Hartree's original method was to first calculate the solutions to Schrödinger's equation for individual electrons 1, 2, 3, ..., p , in the states {\displaystyle \alpha ,\beta ,\gamma ,\ldots ,\pi } , which yields individual solutions: {\displaystyle \psi _{\alpha }(\mathbf {x} _{1}),\psi _{\beta }(\mathbf {x} _{2}),\psi _{\gamma }(\mathbf {x} _{3}),\ldots ,\psi _{\pi }(\mathbf {x} _{p})} . Since each {\displaystyle \psi } is a solution to the Schrödinger equation by itself, their product should at least approximate a solution. This simple method of combining the wavefunctions of the individual electrons is known as the Hartree product : [ 5 ] {\displaystyle \Psi (\mathbf {x} _{1},\mathbf {x} _{2},\ldots ,\mathbf {x} _{p})=\psi _{\alpha }(\mathbf {x} _{1})\,\psi _{\beta }(\mathbf {x} _{2})\cdots \psi _{\pi }(\mathbf {x} _{p}).} This Hartree product gives us the wavefunction of a system (many-particle) as a combination of wavefunctions of the individual particles. It is inherently mean-field (assumes the particles are independent) and is the unsymmetrized version of the Slater determinant ansatz in the Hartree–Fock method . Although it has the advantage of simplicity, the Hartree product is not satisfactory for fermions , such as electrons, because the resulting wave function is not antisymmetric. An antisymmetric wave function can be mathematically described using the Slater determinant . Let us start from the Hamiltonian of one atom with Z electrons; the same method, with some modifications, can be expanded to a monoatomic crystal using the Born–von Karman boundary condition and to a crystal with a basis: {\displaystyle {\hat {H}}=\sum _{i=1}^{Z}\left(-{\frac {\hbar ^{2}}{2m}}\nabla _{i}^{2}-{\frac {Ze^{2}}{4\pi \varepsilon _{0}\,|\mathbf {r} _{i}|}}\right)+{\frac {1}{2}}\sum _{i\neq j}{\frac {e^{2}}{4\pi \varepsilon _{0}\,|\mathbf {r} _{i}-\mathbf {r} _{j}|}}.} The expectation value is given by {\displaystyle \langle {\hat {H}}\rangle =\sum _{s_{1},\ldots ,s_{Z}}\int d^{3}r_{1}\cdots d^{3}r_{Z}\;\Psi ^{*}(\mathbf {r} _{1}s_{1},\ldots ,\mathbf {r} _{Z}s_{Z})\,{\hat {H}}\,\Psi (\mathbf {r} _{1}s_{1},\ldots ,\mathbf {r} _{Z}s_{Z}),} where the {\displaystyle s_{i}} are the spins of the different particles. In general we approximate this potential with a mean field, which is itself unknown and needs to be found together with the eigenfunctions of the problem. We will also neglect all relativistic effects like spin-orbit and spin-spin interactions.
At the time of Hartree the full Pauli exclusion principle had not yet been formulated: the exclusion rule in terms of quantum numbers was known, but it was not yet clear that the wave function of the electrons must be antisymmetric. If we start from the assumption that the wave functions of each electron are independent, we can assume that the total wave function is the product of the single wave functions and that the total charge density at position {\displaystyle \mathbf {r} } due to all electrons except i is {\displaystyle \rho _{i}(\mathbf {r} )=-e\sum _{j\neq i}|\psi _{j}(\mathbf {r} )|^{2},} where we neglect the spin here for simplicity. This charge density creates an extra mean potential, which satisfies Poisson's equation: {\displaystyle \nabla ^{2}\varphi _{i}(\mathbf {r} )=-{\frac {\rho _{i}(\mathbf {r} )}{\varepsilon _{0}}}.} The solution can be written as the Coulomb integral {\displaystyle V_{i}(\mathbf {r} )=-e\,\varphi _{i}(\mathbf {r} )={\frac {e^{2}}{4\pi \varepsilon _{0}}}\sum _{j\neq i}\int {\frac {|\psi _{j}(\mathbf {r} ')|^{2}}{|\mathbf {r} -\mathbf {r} '|}}\,d^{3}r'.} If we now consider the electron i , it will also satisfy the time independent Schrödinger equation {\displaystyle \left[-{\frac {\hbar ^{2}}{2m}}\nabla ^{2}-{\frac {Ze^{2}}{4\pi \varepsilon _{0}\,|\mathbf {r} |}}+V_{i}(\mathbf {r} )\right]\psi _{i}(\mathbf {r} )=\epsilon _{i}\,\psi _{i}(\mathbf {r} ).} This is interesting on its own because it can be compared with a single particle problem in a continuous medium, writing the total potential as a screened Coulomb interaction with an effective dielectric function: {\displaystyle V(\mathbf {r} )=-{\frac {Ze^{2}}{4\pi \varepsilon (\mathbf {r} )\,|\mathbf {r} |}},} where {\displaystyle V(\mathbf {r} )<0} and {\displaystyle \varepsilon (\mathbf {r} )>\epsilon _{0}} . Finally, we have the system of Hartree equations {\displaystyle \left[-{\frac {\hbar ^{2}}{2m}}\nabla ^{2}-{\frac {Ze^{2}}{4\pi \varepsilon _{0}\,|\mathbf {r} |}}+{\frac {e^{2}}{4\pi \varepsilon _{0}}}\sum _{j\neq i}\int {\frac {|\psi _{j}(\mathbf {r} ')|^{2}}{|\mathbf {r} -\mathbf {r} '|}}\,d^{3}r'\right]\psi _{i}(\mathbf {r} )=\epsilon _{i}\,\psi _{i}(\mathbf {r} ),\qquad i=1,\ldots ,Z.} This is a non-linear system of integro-differential equations, but it is interesting in a computational setting because we can solve it iteratively. Namely, we start from a set of known eigenfunctions (which in this simplified mono-atomic example can be the ones of the hydrogen atom) and, starting initially from the potential {\displaystyle V(\mathbf {r} )=0} , compute at each iteration a new version of the potential from the charge density above and then a new version of the eigenfunctions; ideally these iterations converge. From the convergence of the potential we can say that we have a "self consistent" mean field, i.e. a continuous variation from a known potential with known solutions to an averaged mean field potential. In that sense the potential is consistent and not so different from the originally used one as ansatz . In 1928 J. C. Slater and J. A. Gaunt independently showed that, given the Hartree product approximation {\displaystyle \Psi (\mathbf {x} _{1},\ldots ,\mathbf {x} _{Z})=\psi _{1}(\mathbf {x} _{1})\,\psi _{2}(\mathbf {x} _{2})\cdots \psi _{Z}(\mathbf {x} _{Z}),} the Hartree equations follow from the variational principle. They started from the following variational condition {\displaystyle \delta \left[\langle \psi |{\hat {H}}|\psi \rangle -\sum _{i}\epsilon _{i}{\big (}\langle \psi _{i}|\psi _{i}\rangle -1{\big )}\right]=0,} where the {\displaystyle \epsilon _{i}} are the Lagrange multipliers needed in order to minimize the functional of the mean energy {\displaystyle \langle \psi |{\hat {H}}|\psi \rangle } . The orthonormality conditions act as constraints in the scope of the Lagrange multipliers. From here they managed to derive the Hartree equations. In 1930 Fock and Slater independently then used the Slater determinant instead of the Hartree product for the wave function {\displaystyle \Psi (\mathbf {x} _{1},\ldots ,\mathbf {x} _{Z})={\frac {1}{\sqrt {Z!}}}{\begin{vmatrix}\phi _{n_{1}}(\mathbf {x} _{1})&\phi _{n_{1}}(\mathbf {x} _{2})&\cdots &\phi _{n_{1}}(\mathbf {x} _{Z})\\\phi _{n_{2}}(\mathbf {x} _{1})&\phi _{n_{2}}(\mathbf {x} _{2})&\cdots &\phi _{n_{2}}(\mathbf {x} _{Z})\\\vdots &\vdots &\ddots &\vdots \\\phi _{n_{Z}}(\mathbf {x} _{1})&\phi _{n_{Z}}(\mathbf {x} _{2})&\cdots &\phi _{n_{Z}}(\mathbf {x} _{Z})\end{vmatrix}}.} This determinant guarantees the exchange antisymmetry (if two columns are swapped the determinant changes sign) and the Pauli principle: if two electronic states are identical there are two identical rows and therefore the determinant is zero. They then applied the same variational condition as above, where now the {\displaystyle \phi _{n_{i}}} are a generic orthonormal set of eigenfunctions, {\displaystyle \langle \phi _{n_{i}}(\mathbf {r} ,s_{i})|\phi _{n_{j}}(\mathbf {r} ,s_{j})\rangle =\delta _{ij}} , from which the wave function is built. The orthonormality conditions act as constraints in the scope of the Lagrange multipliers. From this they derived the Hartree–Fock method .
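The iterative scheme described above can be made concrete with a small radial-grid calculation for helium in atomic units. The sketch below is an illustration written for this article (not Hartree's original numerical procedure, and the grid sizes and tolerances are arbitrary choices): each 1s electron moves in the nuclear potential −2/r plus the Coulomb potential generated by the other electron's charge density, and the orbital and potential are recycled until the orbital energy stops changing:

```python
import numpy as np
from scipy.linalg import eigh_tridiagonal

# Radial grid; we solve for P(r) = r*R(r) with P(0) = P(rmax) = 0.
N, rmax = 2000, 20.0
r = np.linspace(rmax / N, rmax, N)
h = r[1] - r[0]

def lowest_state(v):
    """Ground state of -(1/2) P'' + v(r) P = eps P by finite differences."""
    eps, vec = eigh_tridiagonal(1.0 / h**2 + v, -0.5 / h**2 * np.ones(N - 1),
                                select='i', select_range=(0, 0))
    P = vec[:, 0]
    P /= np.sqrt((P**2).sum() * h)      # normalize: integral of P^2 dr = 1
    return eps[0], P

def coulomb_potential(P):
    """Potential at r of a unit charge with radial density P(r)^2:
    V(r) = (1/r) * int_0^r P^2 dr' + int_r^inf P^2 / r' dr'."""
    rho = P**2
    inner = np.cumsum(rho) * h / r                 # charge enclosed, over r
    outer = np.cumsum((rho / r)[::-1])[::-1] * h   # shells outside r
    return inner + outer

v_nuc = -2.0 / r                        # helium nucleus, Z = 2
eps, P = lowest_state(v_nuc)            # initial guess: ignore e-e repulsion
for it in range(100):
    eps_new, P = lowest_state(v_nuc + coulomb_potential(P))
    if abs(eps_new - eps) < 1e-8:       # self-consistency reached
        break
    eps = eps_new
print(f"self-consistent 1s orbital energy: {eps_new:.4f} hartree")  # ~ -0.92
```

The orbital energy settles in a handful of iterations, illustrating how the "self-consistent field" emerges from the iteration rather than being assumed in advance.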
https://en.wikipedia.org/wiki/Hartree_equation
In computational physics and chemistry , the Hartree–Fock ( HF ) method is a method of approximation for the determination of the wave function and the energy of a quantum many-body system in a stationary state . The method is named after Douglas Hartree and Vladimir Fock . The Hartree–Fock method often assumes that the exact N -body wave function of the system can be approximated by a single Slater determinant (in the case where the particles are fermions ) or by a single permanent (in the case of bosons ) of N spin-orbitals . By invoking the variational method , one can derive a set of N -coupled equations for the N spin orbitals. A solution of these equations yields the Hartree–Fock wave function and energy of the system. The Hartree–Fock approximation is an instance of mean-field theory , [ 1 ] in which neglecting higher-order fluctuations in the order parameter allows interaction terms to be replaced with quadratic terms, obtaining exactly solvable Hamiltonians. Especially in the older literature, the Hartree–Fock method is also called the self-consistent field method ( SCF ). In deriving what is now called the Hartree equation as an approximate solution of the Schrödinger equation , Hartree required the final field as computed from the charge distribution to be "self-consistent" with the assumed initial field. Thus, self-consistency was a requirement of the solution. The solutions to the non-linear Hartree–Fock equations also behave as if each particle is subjected to the mean field created by all other particles (see the Fock operator below), and hence the terminology continued. The equations are almost universally solved by means of an iterative method , although the fixed-point iteration algorithm does not always converge. [ 2 ] This solution scheme is not the only one possible and is not an essential feature of the Hartree–Fock method. The Hartree–Fock method finds its typical application in the solution of the Schrödinger equation for atoms, molecules, nanostructures [ 3 ] and solids, but it has also found widespread use in nuclear physics . (See Hartree–Fock–Bogoliubov method for a discussion of its application in nuclear structure theory). In atomic structure theory, calculations may be for a spectrum with many excited energy levels, and consequently, the Hartree–Fock method for atoms assumes the wave function is a single configuration state function with well-defined quantum numbers and that the energy level is not necessarily the ground state . For both atoms and molecules, the Hartree–Fock solution is the central starting point for most methods that describe the many-electron system more accurately. The rest of this article will focus on applications in electronic structure theory suitable for molecules with the atom as a special case. The discussion here is only for the restricted Hartree–Fock method, where the atom or molecule is a closed-shell system with all orbitals (atomic or molecular) doubly occupied. Open-shell systems, where some of the electrons are not paired, can be dealt with by either the restricted open-shell or the unrestricted Hartree–Fock methods. The origin of the Hartree–Fock method dates back to the end of the 1920s, soon after the discovery of the Schrödinger equation in 1926. Douglas Hartree's methods were guided by some earlier, semi-empirical methods of the early 1920s (by E. Fues, R. B. Lindsay , and himself) set in the old quantum theory of Bohr.
In the Bohr model of the atom, the energy of a state with principal quantum number n is given in atomic units as {\displaystyle E=-1/n^{2}} . It was observed from atomic spectra that the energy levels of many-electron atoms are well described by applying a modified version of Bohr's formula. By introducing the quantum defect d as an empirical parameter, the energy levels of a generic atom were well approximated by the formula {\displaystyle E=-1/(n+d)^{2}} , in the sense that one could reproduce fairly well the observed transition levels in the X-ray region (for example, see the empirical discussion and derivation in Moseley's law ). The existence of a non-zero quantum defect was attributed to electron–electron repulsion, which clearly does not exist in the isolated hydrogen atom. This repulsion resulted in partial screening of the bare nuclear charge. These early researchers later introduced other potentials containing additional empirical parameters with the hope of better reproducing the experimental data. In 1927, D. R. Hartree introduced a procedure, which he called the self-consistent field method, to calculate approximate wave functions and energies for atoms and ions. [ 4 ] Hartree sought to do away with empirical parameters and solve the many-body time-independent Schrödinger equation from fundamental physical principles, i.e., ab initio . His first proposed method of solution became known as the Hartree method , or Hartree product . However, many of Hartree's contemporaries did not understand the physical reasoning behind the Hartree method: it appeared to many people to contain empirical elements, and its connection to the solution of the many-body Schrödinger equation was unclear. However, in 1928 J. C. Slater and J. A. Gaunt independently showed that the Hartree method could be couched on a sounder theoretical basis by applying the variational principle to an ansatz (trial wave function) as a product of single-particle functions. [ 5 ] [ 6 ] In 1930, Slater and V. A. Fock independently pointed out that the Hartree method did not respect the principle of antisymmetry of the wave function. [ 7 ] [ 8 ] The Hartree method used the Pauli exclusion principle in its older formulation, forbidding the presence of two electrons in the same quantum state. However, this was shown to be fundamentally incomplete in its neglect of quantum statistics . A solution to the lack of anti-symmetry in the Hartree method came when it was shown that a Slater determinant , a determinant of one-particle orbitals first used by Heisenberg and Dirac in 1926, trivially satisfies the antisymmetric property of the exact solution and hence is a suitable ansatz for applying the variational principle . The original Hartree method can then be viewed as an approximation to the Hartree–Fock method by neglecting exchange . Fock's original method relied heavily on group theory and was too abstract for contemporary physicists to understand and implement. In 1935, Hartree reformulated the method to be more suitable for the purposes of calculation. [ 9 ] The Hartree–Fock method, despite its physically more accurate picture, was little used until the advent of electronic computers in the 1950s due to the much greater computational demands over the early Hartree method and empirical models.
[ 10 ] Initially, both the Hartree method and the Hartree–Fock method were applied exclusively to atoms, where the spherical symmetry of the system allowed one to greatly simplify the problem. These approximate methods were (and are) often used together with the central field approximation to impose the condition that electrons in the same shell have the same radial part and to restrict the variational solution to be a spin eigenfunction. Even so, calculating a solution by hand using the Hartree–Fock equations for a medium-sized atom was laborious; small molecules required computational resources far beyond what was available before 1950. The Hartree–Fock method is typically used to solve the time-independent SchrΓΆdinger equation for a multi-electron atom or molecule as described in the Born–Oppenheimer approximation. Since there are no known analytic solutions for many-electron systems (there are solutions for one-electron systems such as hydrogenic atoms and the diatomic hydrogen cation), the problem is solved numerically. Due to the nonlinearities introduced by the Hartree–Fock approximation, the equations are solved using a nonlinear method such as iteration, which gives rise to the name "self-consistent field method." The Hartree–Fock method makes five major simplifications to deal with this task: the Born–Oppenheimer approximation is inherently assumed; relativistic effects are neglected; the variational solution is assumed to be a linear combination of a finite number of basis functions; each energy eigenfunction is assumed to be describable by a single Slater determinant; and the mean-field approximation is implied, so that effects arising from deviations from this assumption (electron correlation) are neglected. Relaxation of the last two approximations gives rise to many so-called post-Hartree–Fock methods. The variational theorem states that, for a time-independent Hamiltonian operator, any trial wave function will have an energy expectation value that is greater than or equal to the energy of the true ground-state wave function corresponding to the given Hamiltonian. Because of this, the Hartree–Fock energy is an upper bound to the true ground-state energy of a given molecule. In the context of the Hartree–Fock method, the best possible solution is at the Hartree–Fock limit; i.e., the limit of the Hartree–Fock energy as the basis set approaches completeness. (The other is the full-CI limit, where the last two approximations of the Hartree–Fock theory as described above are completely undone. It is only when both limits are attained that the exact solution, up to the Born–Oppenheimer approximation, is obtained.) The Hartree–Fock energy is the minimal energy for a single Slater determinant. The starting point for the Hartree–Fock method is a set of approximate one-electron wave functions known as spin-orbitals. For an atomic orbital calculation, these are typically the orbitals for a hydrogen-like atom (an atom with only one electron, but the appropriate nuclear charge). For a molecular orbital or crystalline calculation, the initial approximate one-electron wave functions are typically a linear combination of atomic orbitals (LCAO). The orbitals above only account for the presence of other electrons in an average manner. In the Hartree–Fock method, the effect of other electrons is accounted for in a mean-field theory context. The orbitals are optimized by requiring them to minimize the energy of the respective Slater determinant. The resultant variational conditions on the orbitals lead to a new one-electron operator, the Fock operator. At the minimum, the occupied orbitals can be chosen, via a unitary transformation among themselves, to be eigensolutions of the Fock operator. The Fock operator is an effective one-electron Hamiltonian operator that is the sum of two terms.
The first is a sum of kinetic-energy operators for each electron, the internuclear repulsion energy, and a sum of nuclear–electronic Coulombic attraction terms. The second is a sum of Coulombic repulsion terms between electrons in a mean-field theory description: a net repulsion energy for each electron in the system, which is calculated by treating all of the other electrons within the molecule as a smooth distribution of negative charge. This is the major simplification inherent in the Hartree–Fock method and is equivalent to the fifth simplification in the above list. Since the Fock operator depends on the orbitals used to construct the corresponding Fock matrix, the eigenfunctions of the Fock operator are in turn new orbitals, which can be used to construct a new Fock operator. In this way, the Hartree–Fock orbitals are optimized iteratively until the change in total electronic energy falls below a predefined threshold, yielding a set of self-consistent one-electron orbitals. The Hartree–Fock electronic wave function is then the Slater determinant constructed from these orbitals. Following the basic postulates of quantum mechanics, the Hartree–Fock wave function can then be used to compute any desired chemical or physical property within the framework of the Hartree–Fock method and the approximations employed. According to the Slater–Condon rules, the energy expectation value of the molecular electronic Hamiltonian {\displaystyle {\hat {H}}^{e}} for a Slater determinant is {\displaystyle E=\sum _{i=1}^{N}\langle \phi _{i}|{\hat {h}}|\phi _{i}\rangle +{\frac {1}{2}}\sum _{i,j=1}^{N}{\big (}\langle \phi _{i}\phi _{j}|r_{12}^{-1}|\phi _{i}\phi _{j}\rangle -\langle \phi _{i}\phi _{j}|r_{12}^{-1}|\phi _{j}\phi _{i}\rangle {\big )},} where {\displaystyle {\hat {h}}} is the one-electron operator including the electronic kinetic energy and the electron–nucleus Coulombic interaction, and {\displaystyle r_{12}} is the distance between electrons 1 and 2. To derive the Hartree–Fock equations, we minimize this energy functional for N electrons with orthonormality constraints. We choose a basis set {\displaystyle \phi _{i}(\mathbf {x} _{i})} in which the Lagrange multiplier matrix {\displaystyle \lambda _{ij}} becomes diagonal, i.e. {\displaystyle \lambda _{ij}=\epsilon _{i}\delta _{ij}}. Performing the variation, we obtain {\displaystyle {\big [}{\hat {h}}(\mathbf {x} _{k})+{\hat {J}}(\mathbf {x} _{k})-{\hat {K}}(\mathbf {x} _{k}){\big ]}\phi _{k}=\epsilon _{k}\phi _{k}.} The factor 1/2 before the double integrals in the molecular Hamiltonian drops out due to symmetry and the product rule. We may define the Fock operator {\displaystyle {\hat {F}}(\mathbf {x} _{k})={\hat {h}}(\mathbf {x} _{k})+{\hat {J}}(\mathbf {x} _{k})-{\hat {K}}(\mathbf {x} _{k})} to rewrite the equation as {\displaystyle {\hat {F}}(\mathbf {x} _{k})\phi _{k}=\epsilon _{k}\phi _{k},} where the Coulomb operator {\displaystyle {\hat {J}}(\mathbf {x} _{k})} and the exchange operator {\displaystyle {\hat {K}}(\mathbf {x} _{k})} are defined as follows: {\displaystyle {\hat {J}}(\mathbf {x} _{1})\phi _{k}(\mathbf {x} _{1})=\sum _{j=1}^{N}\int {\frac {|\phi _{j}(\mathbf {x} _{2})|^{2}}{r_{12}}}\,d\mathbf {x} _{2}\,\phi _{k}(\mathbf {x} _{1}),\qquad {\hat {K}}(\mathbf {x} _{1})\phi _{k}(\mathbf {x} _{1})=\sum _{j=1}^{N}\int {\frac {\phi _{j}^{*}(\mathbf {x} _{2})\phi _{k}(\mathbf {x} _{2})}{r_{12}}}\,d\mathbf {x} _{2}\,\phi _{j}(\mathbf {x} _{1}).} The exchange operator has no classical analogue and can only be defined as an integral operator. The solutions {\displaystyle \phi _{k}} and {\displaystyle \epsilon _{k}} are called molecular orbitals and orbital energies, respectively. Although the Hartree–Fock equation appears in the form of an eigenvalue problem, the Fock operator itself depends on {\displaystyle \phi }, and the equation must be solved by a self-consistent iterative technique. The optimal total energy {\displaystyle E_{\mathrm {HF} }} can be written in terms of molecular orbitals as {\displaystyle E_{\mathrm {HF} }=\sum _{k=1}^{N}\epsilon _{k}-{\frac {1}{2}}\sum _{k,j=1}^{N}{\big (}{\hat {J}}_{kj}-{\hat {K}}_{kj}{\big )}+V_{\text{nucl}},} where {\displaystyle {\hat {J}}_{ij}} and {\displaystyle {\hat {K}}_{ij}} are matrix elements of the Coulomb and exchange operators respectively, and {\displaystyle V_{\text{nucl}}} is the total electrostatic repulsion between all the nuclei in the molecule. The total energy is not equal to the sum of orbital energies. If the atom or molecule is closed shell, the total energy according to the Hartree–Fock method is {\displaystyle E_{\mathrm {HF} }=2\sum _{k=1}^{N/2}h_{kk}+\sum _{k,j=1}^{N/2}{\big (}2{\hat {J}}_{kj}-{\hat {K}}_{kj}{\big )}+V_{\text{nucl}},} where the sums run over the N/2 doubly occupied spatial orbitals. Typically, in modern Hartree–Fock calculations, the one-electron wave functions are approximated by a linear combination of atomic orbitals.
These atomic orbitals are called Slater-type orbitals. Furthermore, it is very common for the "atomic orbitals" in use to actually be composed of a linear combination of one or more Gaussian-type orbitals, rather than Slater-type orbitals, in the interests of saving large amounts of computation time. Various basis sets are used in practice, most of which are composed of Gaussian functions. In some applications, an orthogonalization method such as the Gram–Schmidt process is performed in order to produce a set of orthogonal basis functions. This can in principle save computational time when the computer is solving the Roothaan–Hall equations, by effectively converting the overlap matrix to an identity matrix. However, in most modern computer programs for molecular Hartree–Fock calculations this procedure is not followed, due to the high numerical cost of orthogonalization and the advent of more efficient, often sparse, algorithms for solving the generalized eigenvalue problem, of which the Roothaan–Hall equations are an example. Numerical stability can be a problem with this procedure, and there are various ways of combating this instability. One of the most basic and generally applicable is called F-mixing or damping. With F-mixing, once a single-electron wave function is calculated, it is not used directly. Instead, some combination of that calculated wave function and the previous wave functions for that electron is used, the most common being a simple linear combination of the calculated and immediately preceding wave functions. A clever dodge, employed by Hartree for atomic calculations, was to increase the nuclear charge, thus pulling all the electrons closer together. As the system stabilised, this was gradually reduced to the correct charge. In molecular calculations a similar approach is sometimes used by first calculating the wave function for a positive ion and then using these orbitals as the starting point for the neutral molecule. Modern molecular Hartree–Fock computer programs use a variety of methods to ensure convergence of the Roothaan–Hall equations; a minimal numerical sketch of one such scheme is given below. Of the five simplifications outlined in the section "Hartree–Fock algorithm", the fifth is typically the most important. Neglect of electron correlation can lead to large deviations from experimental results. A number of approaches to this weakness, collectively called post-Hartree–Fock methods, have been devised to include electron correlation in the multi-electron wave function. One of these approaches, MΓΈller–Plesset perturbation theory, treats correlation as a perturbation of the Fock operator. Others expand the true multi-electron wave function in terms of a linear combination of Slater determinants, such as multi-configurational self-consistent field, configuration interaction, quadratic configuration interaction, and complete active space SCF (CASSCF). Still others (such as variational quantum Monte Carlo) modify the Hartree–Fock wave function by multiplying it by a correlation function ("Jastrow" factor), a term which is explicitly a function of multiple electrons that cannot be decomposed into independent single-particle functions. An alternative to Hartree–Fock calculations used in some cases is density functional theory, which treats both exchange and correlation energies, albeit approximately. Indeed, it is common to use calculations that are a hybrid of the two methods; the popular B3LYP scheme is one such hybrid functional method. Another option is to use modern valence bond methods.
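As a concrete illustration of the damped (F-mixed) self-consistent iteration described above, the following Python sketch solves the Roothaan–Hall generalized eigenvalue problem FC = SCΞ΅ in a fixed basis for a closed-shell system. It is a minimal sketch under stated assumptions, not a production algorithm: the integral arrays h_core, S and eri are assumed to be precomputed inputs, all names are illustrative, and real programs add better initial guesses and convergence acceleration such as DIIS.

```python
import numpy as np
from scipy.linalg import eigh

def scf(h_core, S, eri, n_occ, alpha=0.5, tol=1e-8, max_iter=200):
    """Damped Roothaan-Hall SCF sketch for a closed-shell system.

    h_core : (n, n) core Hamiltonian matrix (assumed precomputed)
    S      : (n, n) overlap matrix
    eri    : (n, n, n, n) two-electron integrals, chemists' notation (pq|rs)
    n_occ  : number of doubly occupied spatial orbitals
    alpha  : F-mixing (damping) factor for the new Fock matrix
    """
    n = h_core.shape[0]
    D = np.zeros((n, n))              # density from occupied orbitals
    F_old = h_core.copy()             # core-Hamiltonian initial guess
    E_old = 0.0
    for _ in range(max_iter):
        J = np.einsum('pqrs,rs->pq', eri, D)        # Coulomb matrix
        K = np.einsum('prqs,rs->pq', eri, D)        # exchange matrix
        F_new = h_core + 2.0 * J - K                # closed-shell Fock matrix
        F = alpha * F_new + (1.0 - alpha) * F_old   # F-mixing / damping
        eps, C = eigh(F, S)           # generalized eigenproblem FC = SCe
        C_occ = C[:, :n_occ]
        D = C_occ @ C_occ.T           # rebuild the density matrix
        E = np.einsum('pq,pq->', D, h_core + F_new) # electronic energy
        if abs(E - E_old) < tol:      # self-consistency reached
            return E, eps, C          # nuclear repulsion not included here
        E_old, F_old = E, F
    raise RuntimeError("SCF did not converge")
```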
For a list of software packages known to handle Hartree–Fock calculations, particularly for molecules and solids, see the list of quantum chemistry and solid state physics software.
https://en.wikipedia.org/wiki/Hartree–Fock_method
In mathematics, a Hartshorne ellipse is an ellipse in the unit ball bounded by the 4-sphere S⁴ such that the ellipse and the circle given by the intersection of its plane with S⁴ satisfy the Poncelet condition that there is a triangle with vertices on the circle and edges tangent to the ellipse. They were introduced by Hartshorne (1978), who showed that they correspond to k = 2 instantons on S⁴.
https://en.wikipedia.org/wiki/Hartshorne_ellipse
The Hart–Tipler conjecture is the idea that an absence of detectable von Neumann probes is contrapositive evidence that no intelligent life exists outside of the Solar System. [ 1 ] [ 2 ] This idea was first proposed in opposition to the Drake equation in a 1975 paper by Michael H. Hart titled "Explanation for the Absence of Extraterrestrials on Earth". [ 3 ] Assuming that the probes traveled at 1/10 the speed of light and that no time was lost in building new ships upon arriving at the destination, Hart surmised that a wave of von Neumann probes could cross the galaxy in approximately 650,000 years, a comparatively minimal span of time relative to the estimated age of the universe at 13.7 billion years. Hart's argument was extended by cosmologist Frank Tipler in his 1981 paper entitled "Extraterrestrial intelligent beings do not exist". [ 4 ] The conjecture is the first of many proposed solutions to the Fermi paradox (the conflict between the lack of obvious evidence for alien life and various high-probability estimates for its existence). [ 5 ] [ 6 ] In this case, the solution is that there is no other intelligent life because such estimates are incorrect. [ 7 ] The conjecture is named after astrophysicist Michael H. Hart and mathematical physicist and cosmologist Frank Tipler. [ 8 ] There is no reliable or reproducible evidence that aliens have visited Earth. [ 9 ] [ 10 ] No transmissions or other evidence of intelligent extraterrestrial life have been detected or observed anywhere other than Earth in the Universe. If intelligent life existed elsewhere, it would have produced enough self-replicating spacecraft, known as von Neumann probes, to cover the universe by now. [ 11 ] This runs counter to the knowledge that the Universe is filled with a very large number of planets, some of which likely hold conditions hospitable for life, and that life typically expands until it fills all available niches. [ 12 ] These contradictory facts form the basis for the Fermi paradox, of which the Hart–Tipler conjecture is one proposed solution. The firstborn hypothesis is a special case of the Hart–Tipler conjecture which states that no other intelligent life has been discovered because humanity is the first intelligent life in the universe. [ 13 ] According to the Berserker hypothesis, the absence of interstellar probes is not evidence of life's absence, since such probes could "go berserk" and destroy other civilizations before self-destructing. [ 14 ]
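Hart's 650,000-year figure is a simple distance-over-speed estimate. The check below is a hedged back-of-the-envelope calculation (not taken from the cited papers): at one tenth of the speed of light, that crossing time corresponds to a distance of 65,000 light-years, on the order of the Milky Way's disk diameter.

```python
# Back-of-the-envelope check of Hart's crossing-time estimate.
# At a speed of 0.1 c, covering D light-years takes D / 0.1 years
# (ignoring any time spent building new probes at each stop).
speed_c = 0.1                      # probe speed as a fraction of c
crossing_time_years = 650_000      # Hart's estimate
implied_distance_ly = crossing_time_years * speed_c
print(implied_distance_ly)         # 65000.0 light-years
```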
https://en.wikipedia.org/wiki/Hart–Tipler_conjecture
Haruki's Theorem says that, given three circles each pair of which intersects at two points, the lines connecting the inner intersection points to the outer ones satisfy {\displaystyle {\frac {s_{1}\,s_{3}\,s_{5}}{s_{2}\,s_{4}\,s_{6}}}=1,} where {\displaystyle s_{1},s_{2},s_{3},s_{4},s_{5},s_{6}} are the measures of the segments connecting the inner and outer intersection points. [ 1 ] [ 2 ] [ 3 ] [ 4 ] The theorem is named after the Japanese mathematician Hiroshi Haruki.
https://en.wikipedia.org/wiki/Haruki's_Theorem
Haruo Hosoya (Japanese: 細矒 治倫, Hepburn: Hosoya Haruo, born 1936) is a Japanese chemist and emeritus professor of Ochanomizu University, Tokyo, Japan. He is the namesake of the Hosoya index used in discrete mathematics and computational chemistry. [ 1 ] Hosoya was born in Kamakura, Japan, into the family of an office worker. From 1955 to 1959 he studied at the University of Tokyo. In 1964 he wrote his Ph.D. thesis, "Study on the Structure of Reactive Intermediates and Reaction Mechanism". After postdoctoral work abroad (Ann Arbor, Michigan, with Prof. John Platt), in 1969 he became associate professor at Ochanomizu University, where he worked for 33 years until his retirement in 2002. After retirement he has continued working in computational chemistry. [ 1 ] In 1971, Hosoya defined the topological index (a graph invariant) now known as the Hosoya index as the total number of matchings of a graph plus 1. [ 2 ] The Hosoya index is often used in mathematical chemistry investigations of organic compounds. In 2002–2003 the Internet Electronic Journal of Molecular Design dedicated a series of issues to commemorate the 65th birthday of Professor Hosoya. [ 3 ] Hosoya's article "The Topological Index Z Before and After 1971" describes the history of the notion and the associated inside stories, and details Hosoya's other achievements. [ 4 ] Hosoya also introduced the triangle of numbers known as Hosoya's triangle (originally the "Fibonacci triangle", but that name can be ambiguous). [ 5 ]
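As a concrete illustration of the index's definition, the sketch below counts all matchings of a small graph, including the empty matching, which accounts for the "plus 1" in the definition above. The function name and edge-list representation are illustrative assumptions.

```python
def hosoya_index(edges):
    """Hosoya index Z: the number of matchings of a simple graph,
    counting the empty matching (the '+1' over non-empty matchings)."""
    if not edges:
        return 1
    (u, v), rest = edges[0], edges[1:]
    # Matchings that do not use the edge (u, v)
    total = hosoya_index(rest)
    # Matchings that do use (u, v): remove all edges touching u or v
    total += hosoya_index([(a, b) for (a, b) in rest
                           if u not in (a, b) and v not in (a, b)])
    return total

# The path graph on 4 vertices has Z = 5, a Fibonacci number, reflecting
# the known link between the Hosoya index of paths and Fibonacci numbers.
print(hosoya_index([(1, 2), (2, 3), (3, 4)]))  # -> 5
```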
https://en.wikipedia.org/wiki/Haruo_Hosoya
The Harvard Computers were a team of women working as skilled workers to process astronomical data at the Harvard College Observatory in Cambridge, Massachusetts, United States. The team was directed by Edward Charles Pickering (from 1877 to 1919) and, following his death in 1919, by Annie Jump Cannon. [ 1 ] The women were challenged to make sense of the patterns in the observatory's stellar spectra by devising a scheme for sorting the stars into categories. Annie Jump Cannon's success at this activity made her famous in her own lifetime, and she produced a stellar classification system that is still in use today. Antonia Maury discerned in the spectra a way to assess the relative sizes of stars, and Henrietta Leavitt showed how the cyclic changes of certain variable stars could serve as distance markers in space. [ 2 ] Other computers on the team included Mary Anna Draper, Williamina Fleming, Anna Winlock, and Florence Cushman. [ 3 ] Although these women started primarily as calculators, they made significant contributions to astronomy, much of which they published in research articles. In the 19th century, the Harvard College Observatory faced the challenge of working through an overwhelming amount of astronomical data due to improvements in photographic technology. [ 4 ] Harvard Observatory's director, Edward Charles Pickering, hired a group of women to analyze the astronomical data. [ 4 ] While Pickering was the director of the Harvard Observatory, he hired over eighty women. [ 5 ] These women were known as computers. [ 6 ] Although Pickering believed that gathering data at astronomical observatories was not the most appropriate work for women, several factors seem to have contributed to his decision to hire women instead of men. [ 3 ] Among them was that men were paid much more than women, so he could employ more staff with the same budget. [ 7 ] This was relevant at a time when the amount of astronomical data was surpassing the capacity of observatories to process it. [ 8 ] Although some of Pickering's female staff were astronomy graduates, their wages were similar to those of unskilled workers. They usually earned between 25 and 50 cents per hour (between $8 and $16 in 2024 [ 7 ]), more than a factory worker but less than a clerical worker. [ 9 ] Most of the women depended financially on their friends and family members and lived with coworkers to combat the low wages. [ 10 ] Although the wages Pickering provided were low, it was common to pay women less than men during the 20th century, and this does not discount his advocacy for women in astronomy. [ 10 ] In describing the dedication and efficiency with which the Harvard Computers, including Cushman, undertook this effort, Edward Pickering said, "a loss of one minute in the reduction of each estimate would delay the publication of the entire work by the equivalent of the time of one assistant for two years." [ 11 ] Another reason why Pickering decided to hire women over men was that he thought allowing women to conduct astronomical research would show the general public that women were capable of higher thinking and worthy of higher education. [ 12 ] The first female computer to be hired at the Harvard Observatory was Anna Winlock. [ 13 ] Pickering's first hire was Williamina Fleming, six years later in 1881. [ 13 ] Together, Fleming and Pickering continued to hire female computers into the twentieth century. [ 13 ] At times women offered to work at the observatory for free in order to gain experience in a field that was difficult to get into.
[ 14 ] The computer position was one of the lower-status positions at the observatory, due to the low pay and little chance of promotion. [ 15 ] Under the Henry Draper Memorial project, the women were often tasked with measuring the brightness, position, and color of stars. [ 16 ] The goal of the project was to photograph the stars and classify their spectra. [ 17 ] Their work was often segregated from the men's, so teams of male astronomers would take photographs of the stars in the evening and send them to the women at Harvard for analysis. [ 18 ] The work included such tasks as classifying stars by calculating their exact position and movement, predicting the return of comets, comparing the photographs to known catalogs, and reducing the photographs while accounting for things like atmospheric refraction, parallax, and error in various instruments in order to render the clearest possible image. [ 18 ] While the work was repetitive, it still required attention and accuracy. [ 19 ] Fleming herself described the work as "so nearly alike that there will be little to describe outside ordinary routine work of measurement, examination of photographs, and of work involved in the reduction of these observations". [ 16 ] The work would not have been possible without photographic plate technology. [ 20 ] With such technology, dry, color-sensitive plates are used to capture photovisual and photo-red magnitudes. [ 21 ] The dry plates allowed for longer exposure over longer time intervals, increasing the accuracy of the photographs and the range of stars capable of being photographed. [ 21 ] The plate technology allowed the women to classify stars more accurately than before. [ 21 ] The observatory, with the help of the computers, made several breakthroughs in classifying and cataloging the stars. One such accomplishment was the Henry Draper Catalogue. [ 23 ] Following the death of Henry Draper (1882), Mary Anna Palmer Draper funded the Mount Wilson Observatory. The work on the catalogue was led by Williamina Fleming. [ 24 ] Following the initial classifications done by Fleming (1890), Antonia Maury helped place stars in their correct positions and did further research on the spectra of the stars with Pickering (1901). [ 22 ] Henrietta Leavitt discovered a relationship between a Cepheid variable's brightness and its pulsation period (1908). [ 25 ] Annie Jump Cannon and her team classified an average of 5,000 stars per month from 1912 to 1915. [ 26 ] Florence Cushman helped organize and process the data. [ 27 ] The catalog was published between 1918 and 1924. Following the death of Pickering (1919), Cannon took control of the projects. [ 28 ] An extension to the original work was published between 1925 and 1936, in which over 46,850 stars were classified. [ 29 ] In the later years of the program, following the publication of the catalog, several women joined and continued to make contributions. Margaret Walton Mayall contributed to the classification of stellar spectra. She later went on to lead the American Association of Variable Star Observers. [ 30 ] Helen Sawyer Hogg specialized in cataloging variable stars within globular clusters. Her work helped lay the foundation for understanding stellar evolution and the structure of the universe. [ 31 ] Cecilia Payne-Gaposchkin proved that stars are composed primarily of hydrogen and helium. [ 32 ] Muriel Mussells Seyfert discovered three new ring nebulae on photographic plates, expanding the catalog of known planetary nebulae.
[ 33 ] Mary Anna Palmer Draper (1839–1914) was an American astronomer who helped found the Mount Wilson Observatory. [ 34 ] Draper was the widow of Dr. Henry Draper, an astronomer who died before completing his work on the chemical composition of stars. [ 3 ] She was very involved in her husband's work and wanted to finish his classification of stars after he died. [ 3 ] Mary Draper quickly realized the task facing her was far too daunting for one person. She had received correspondence from Pickering, a close friend of hers and her husband's. Pickering offered to help finish her husband's work and encouraged her to publish his findings up to the time of his death. [ 3 ] Draper agreed to give Pickering the plates her husband had been working on, but took them to Harvard University herself, since the plates were very small. [ 35 ] While at the university, Draper met the Harvard Observatory's computers and was able to observe some of the observatory's projects. [ 35 ] After much deliberation, Draper decided in 1886 to donate money and one of her husband's telescopes to the Harvard Observatory in order to photograph the spectra of stars. She had decided this would be the best way to continue her husband's work and cement his legacy in astronomy. [ 3 ] She was very insistent on funding the memorial project with her own inheritance, as it would carry on her husband's legacy. She was a dedicated follower of the observatory and a great friend of Pickering's. In 1900, she funded an expedition to see the total solar eclipse occurring that year. [ 3 ] Williamina Fleming (1857–1911) was a Scottish immigrant astronomer who helped with the photographic classification of stellar spectra. [ 36 ] Fleming had no prior relation to Harvard; she was a Scottish immigrant [ 3 ] working as Pickering's housemaid. Her first assignment was to improve an existing catalog of stellar spectra, which later led to her appointment as head of the Henry Draper Catalogue project. Fleming went on to help develop a classification of stars based on their hydrogen content, as well as play a major role in discovering the strange nature of white dwarf stars. [ 16 ] Williamina continued her career in astronomy when she was appointed Harvard's Curator of Astronomical Photographs in 1899, also known as Curator of the Photographic Plates. At the age of 42, Fleming became the first woman at the observatory to hold a title of such nature. [ 37 ] She remained the only woman curator until the 1950s. [ 38 ] Her work also led to her becoming the first female American citizen to be elected to the Royal Astronomical Society, in 1907. [ 39 ] Throughout her career, Fleming was able to classify 10,000 spectra and found over 50 nebulae and over 300 stars. [ 40 ] Fleming did not retire from working at the observatory; she died at age 54 from pneumonia. [ 40 ] Antonia Maury (1866–1952) was an American astronomer who worked on calculating the orbit of a spectroscopic binary. [ 41 ] Maury was the niece of Henry Draper and, on the recommendation of Mrs. Draper, was hired as a computer at the age of 22. [ 3 ] [ 42 ] She was a graduate of Vassar College with honors in physics, astronomy, and philosophy. [ 43 ] Pickering was uncomfortable paying the average computer salary to someone with Antonia Maury's achievements, but ultimately ended up hiring her. Maury was first tasked with the spectral measurement of some of the brightest stars.
Pickering then tasked Maury with reclassifying some of the stars after the publication of the Henry Draper Catalog. In 1889, Maury studied images of Mizar and found that it was actually two stars, based on two K-lines that became visible for the star every few weeks. [ 44 ] Maury took it upon herself to improve and redesign the system of classification, which was later adopted by the International Astronomical Union. Maury left the observatory in 1891 to begin teaching at the Gilman School in Cambridge, Massachusetts. She returned to the observatory in 1893 and 1895 to publish many of her observations of stellar spectra. Her work was finished with the help of Pickering and the computing staff and was published in 1897. [ 3 ] Maury returned to Harvard College Observatory in 1918 as an adjunct professor. [ 45 ] During this time, Maury's work began to be published under her own name, due in part to the director Harlow Shapley. She remained at the observatory until she retired in 1948. [ 45 ] Anna Winlock (1857–1904) was an American astronomer who helped catalog stars for the Henry Draper Catalogue. [ 46 ] Some of the first women who were hired to work as computers had familial connections to the Harvard Observatory's male staff. For instance, Winlock, one of the first of the Harvard Computers, was the daughter of Joseph Winlock, the third director of the observatory and Pickering's immediate predecessor. [ 47 ] Anna Winlock joined the observatory in 1875 to help support her family after her father's unexpected death. She tackled her father's unfinished data analysis, performing the arduous work of mathematically reducing meridian circle observations, which rescued a decade's worth of numbers that had been left in a useless state. Winlock also worked on a stellar cataloging section called the "Cambridge Zone". Working over twenty years on the project, her team's work on the Cambridge Zone contributed significantly to the Astronomische Gesellschaft Katalog, which contains information on more than one hundred thousand stars and is used worldwide by many observatories and their researchers. Within a year of Anna Winlock's hiring, three other women joined the staff: Selina Bond, Rhoda Sauders, and a third, who was likely a relative of an assistant astronomer. [ 48 ] In 1886, Anna's younger sister, Louisa Winlock, joined her in the computing room. [ 11 ] Annie Jump Cannon (1863–1941) was an American astronomer who made a catalog of the stars, classifying and recording them. Following the death of Pickering in 1919 she took control of the classification work at the observatory. [ 49 ] [ 50 ] Pickering hired Cannon, a graduate of Wellesley College, to classify the southern stars. While at Wellesley, she took astronomy courses from one of Pickering's star students, Sarah Frances Whiting. [ 3 ] She became the first female assistant to study variable stars at night. [ 3 ] She studied the light curves of variable stars, which could help suggest the type and cause of variation. [ 3 ] Cannon, adding to work done by fellow computer Antonia Maury, greatly simplified [Pickering and Fleming's star classification based on temperature] system, and in 1922, the International Astronomical Union adopted [Cannon's] as the official classification system for stars.... During Pickering's 42-year tenure at the Harvard Observatory, which ended only a year before he died, in 1919, he received many awards, including the Bruce Medal, the Astronomical Society of the Pacific's highest honor.
Craters on the moon and on Mars are named after him. And Annie Jump Cannon's enduring achievement was dubbed the Harvard, not the Cannon, system of spectral classification. [ 51 ] Cannon's Harvard Classification Scheme is the basis of today's familiar O B A F G K M system. She also categorized the variable stars into tables so they could be identified and compared more easily. [ 3 ] These systems connect the color of stars to their temperature. According to Rebecca Dinerstein Knight, Cannon was able to classify the spectra of 300 stars an hour, and was therefore able to classify over 350,000 stars in her lifetime. [ 52 ] Cannon was the first female scientist to be recognized with many awards and titles in her field of study. She was the first woman to receive an honorary doctorate from the University of Oxford and the Henry Draper Medal from the National Academy of Sciences, and the first female officer in the American Astronomical Society. [ 53 ] Cannon went on to establish her own Annie Jump Cannon Award for women in postdoctoral work. [ 54 ] In 1934, Cannon awarded the first Annie Jump Cannon Award to Cecilia Payne-Gaposchkin for her contributions in analyzing stars and the stellar spectrum. The award was given out at an American Astronomical Society meeting, and as the winner, Payne-Gaposchkin received $50 and a gold pin from Cannon. [ 55 ] Henrietta Swan Leavitt (1868–1921) was an American astronomer whose work made it possible to measure the distances to other galaxies and helped determine the scale of the universe. [ 56 ] Leavitt arrived at the observatory in 1893. She had experience through her college studies, traveling abroad, and teaching. In academia, Leavitt excelled in mathematics courses at Cambridge. [ 3 ] When she began working at the observatory she was tasked with measuring star brightness through photometry. [ 3 ] She found hundreds of new variable stars after starting to analyze the Great Nebula in Orion, and her work was expanded to study the variables of the entire sky with Annie Jump Cannon and Evelyn Leland. [ 3 ] With skills gained in photometry, Leavitt compared stars in different exposures. Studying Cepheid variables in the Small Magellanic Cloud, she discovered that their apparent brightness was dependent on their period. Since all those stars were approximately the same distance from Earth, that meant their absolute brightness must depend on their period as well, allowing the use of Cepheid variables as a standard candle for determining cosmic distances. [ 57 ] That, in turn, led directly to the modern understanding of the true size of the universe, and Cepheid variables are still an essential rung in the cosmic distance ladder. Pickering published her work with his name as co-author. The legacy she left allowed future scientists to make further discoveries in space. Astronomer Edwin Hubble used Leavitt's method to calculate the distance from the Earth to the Andromeda Galaxy, the nearest spiral galaxy. This led to the realization that there are far more galaxies than previously thought. Florence Cushman (1860–1940) was an American astronomer at the Harvard College Observatory who worked on the Henry Draper Catalogue. Cushman was born in Boston, Massachusetts in 1860 and received her early education at Charlestown High School, where she graduated in 1877. In 1888, she began work at the Harvard College Observatory as an employee of Edward Pickering. Her classifications of stellar spectra contributed to the Henry Draper Catalogue between 1918 and 1934.
[ 58 ] She stayed on as an astronomer at the Observatory until 1937 and died in 1940 at the age of 80. [ 59 ] Cushman worked at the Harvard College Observatory from 1888 to 1937. Over the course of her nearly fifty-year career, she employed the objective prism method to analyze, classify, and catalog the optical spectra of hundreds of thousands of stars. In the 19th century, the photographic revolution enabled more detailed analysis of the night sky than had been possible with solely eye-based observations. In order to obtain optical spectra for measurement, male astronomers at the Harvard College Observatory exposed glass plates on which the astronomical images were captured at night. During the daytime, female assistants like Cushman analyzed the resultant spectra by reducing values, computing magnitudes, and cataloging their findings. [ 60 ] She is credited with determining the positions and magnitudes of the stars listed in the 1918 edition of the Henry Draper Catalogue, [ 61 ] which featured the spectra of roughly 222,000 stars.
https://en.wikipedia.org/wiki/Harvard_Computers
Harvard biphase is a magnetic run-length code for encoding data on magnetic tape. [ 1 ] It is one of the formats employed in forming the digital bits of logic one and logic zero, along with the non-return-to-zero (NRZ) and bipolar return-to-zero (RZ) formats. [ 2 ] Each bit in the Harvard biphase format undergoes a change at its trailing edge, either from high to zero or from zero to high, independently of its value. [ 2 ] Harvard biphase has previously been used for digital flight data recorders (FDRs), where 12-bit words are recorded onto magnetic tape each second using the Harvard biphase code. [ 3 ] The data are encoded in frames, and each of these contains a snapshot of the avionics system in the aircraft. [ 4 ] For Harvard biphase, a phase transition in the middle of the bit cell indicates that the bit is 1; no transition indicates that the bit is 0. There is also a phase transition at the start of each bit cell. [ 5 ] ARINC 573 serves as a standard for FDRs that feature a continuous data stream encoded in Harvard biphase. [ 6 ]
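The rule just described (a transition at every bit-cell boundary, plus a mid-cell transition for a 1) can be sketched in a few lines. The function below is a minimal illustration, not a reference implementation; it emits two half-cell level samples per bit, and the function name and level values are assumptions.

```python
def encode_biphase(bits, low=0, high=1):
    """Encode bits using the biphase rule described above: the level
    transitions at the start of every bit cell, and a '1' adds a second
    transition in the middle of the cell; a '0' holds the level for the
    whole cell. Returns two half-cell samples per bit."""
    level = low
    samples = []
    for bit in bits:
        level = high if level == low else low      # transition at cell start
        samples.append(level)
        if bit == 1:
            level = high if level == low else low  # mid-cell transition
        samples.append(level)
    return samples

# Example: 1,0,1,1 -> [1, 0, 1, 1, 0, 1, 0, 1]
print(encode_biphase([1, 0, 1, 1]))
```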
https://en.wikipedia.org/wiki/Harvard_biphase
A harvester is a type of heavy forestry vehicle employed in cut-to-length logging operations for felling, delimbing and bucking trees. A forest harvester is typically employed together with a skidder that hauls the logs to a roadside landing, or a forwarder to pick them up and haul them away. Forest harvesters were mainly developed in Sweden and Finland, and today they do practically all of the commercial felling in these countries. The first fully mobile timber "harvester", the PIKA model 75, was introduced in 1973 [ 1 ] by Finnish systems engineer Sakari PinomΓ€ki and his company PIKA Forest Machines. The first single-grip harvester head was introduced in the early 1980s by the Swedish company SP Maskiner. Their use has become widespread throughout the rest of Northern Europe, particularly in the harvesting of plantation forests. Before modern harvesters were developed in Finland and Sweden, two inventors from Texas developed a crude tracked unit in the US, called The Mammoth Tree Shears, that sheared off trees up to 0.76 meters (30 in) in diameter at the base. After shearing off the tree, the operator could use his controls to cause the tree to fall either to the right or left. Unlike a harvester, it did not delimb the tree after felling it. [ 2 ] Harvesters are employed effectively in level to moderately steep terrain for clearcutting areas of forest. For very steep hills or for removing individual trees, ground crews working with chain saws are still preferred in some countries. In northern Europe small and manoeuvrable harvesters are used for thinning operations; manual felling is typically only used in extreme conditions, where tree size exceeds the capacity of the harvester head, or by small woodlot owners. The principle aimed for in mechanised logging is "no feet on the forest floor", and the harvester and forwarder allow this to be achieved. Keeping workers inside the driving cab of the machine provides a safer and more comfortable working environment for industrial-scale logging. Harvesters are built on a robust all-terrain vehicle, either wheeled, tracked, or on a walking excavator. The vehicle may be articulated to provide tight turning capability around obstacles. A diesel engine provides power for both the vehicle and the harvesting mechanism through hydraulic drive. An extensible, articulated boom, similar to that on an excavator, reaches out from the vehicle to carry the harvester head. Some harvesters are adaptations of excavators with a new harvester head, while others are purpose-built vehicles. "Combi" machines are available which combine the felling capability of a harvester with the load-carrying capability of a forwarder, allowing a single operator and machine to fell, process and transport trees. This novel type of vehicle is only competitive in operations with short distances to the landing. A typical harvester head consists of (from bottom to top, with the head in vertical position) a chainsaw for felling and bucking, gripping (feed) rollers that drive the stem through the head, delimbing knives, and a measuring wheel. One operator in the vehicle's cab can control all of these functions. A control computer can simplify mechanical movements and can keep records of the length and diameter of trees cut. Length is computed by either counting the rotations of the gripping wheels or, more commonly, using the measuring wheel. Diameter is computed from the pivot angle of the gripping wheels or delimbing knives when hugging the tree. Length measurement also can be used for automated cutting of the tree into predefined lengths.
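The length and diameter computations just described amount to simple geometry. The sketch below is illustrative only: the wheel circumference, knife-arm length, and the angle-to-diameter mapping are hypothetical stand-ins for the calibrated sensor curves used in real harvester heads.

```python
import math

def stem_length_m(wheel_rotations, wheel_circumference_m=0.5):
    """Stem length fed through the head: measuring-wheel rotations
    times the wheel circumference (0.5 m is an illustrative value)."""
    return wheel_rotations * wheel_circumference_m

def stem_diameter_m(knife_pivot_angle_rad, arm_length_m=0.4):
    """Hypothetical mapping from delimbing-knife pivot angle to stem
    diameter; real heads use calibrated per-machine sensor curves."""
    return 2 * arm_length_m * math.sin(knife_pivot_angle_rad)

print(stem_length_m(37))                  # ~18.5 m of stem fed through
print(stem_diameter_m(math.radians(25)))  # ~0.34 m diameter estimate
```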
Computer software can predict the volume of each stem based on analysing stems harvested previously. This information, when used in conjunction with price lists for each specific log specification, enables the optimisation of log recovery from the stem (see the sketch below). Harvesters are routinely available for cutting trees up to 900 millimetres (35 in) in diameter, built on vehicles weighing up to 20 metric tons (20 long tons; 22 short tons), with a boom reaching up to 10 metres (33 ft) radius. Larger, heavier vehicles do more damage to the forest floor, but a longer reach helps by allowing harvesting of more trees with fewer vehicle movements. The approximate equivalent type of vehicle in full-tree logging systems is the feller-buncher.
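In simplified form, choosing where to cut a stem so as to maximise its value against a price list is the classic rod-cutting dynamic-programming problem. The sketch below is a hedged illustration under that simplification: the price list is hypothetical, and real bucking optimisers also price logs by diameter class and quality along the predicted stem profile.

```python
def optimal_bucking(stem_length_dm, prices):
    """Choose log lengths to maximise the total value of one stem.

    stem_length_dm -- usable stem length in decimetres
    prices         -- dict: allowed log length (dm) -> price (hypothetical)
    Returns (best value, list of chosen log lengths); leftover is waste.
    """
    best = [0.0] * (stem_length_dm + 1)   # best[l] = max value of first l dm
    choice = [0] * (stem_length_dm + 1)   # last log length used for best[l]
    for l in range(1, stem_length_dm + 1):
        best[l] = best[l - 1]             # allow leaving a decimetre uncut
        for log_len, price in prices.items():
            if log_len <= l and best[l - log_len] + price > best[l]:
                best[l] = best[l - log_len] + price
                choice[l] = log_len
    cuts, l = [], stem_length_dm          # recover the chosen cut lengths
    while l > 0:
        if choice[l] == 0:
            l -= 1                        # wasted decimetre
        else:
            cuts.append(choice[l])
            l -= choice[l]
    return best[stem_length_dm], cuts

# Example with a hypothetical price list (lengths in decimetres)
print(optimal_bucking(122, {37: 12.0, 43: 15.5, 49: 18.0}))
```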
https://en.wikipedia.org/wiki/Harvester_(forestry)
Harvestmen (Opiliones) are an order of arachnids often confused with spiders, though the two orders are not closely related. Research on harvestman phylogeny (that is, the phylogenetic tree) is in a state of flux. While some families are clearly monophyletic, that is, share a common ancestor, others are not, and the relationships between families are often not well understood. The relationship of harvestmen with other arachnid orders is still not sufficiently resolved. [Cladograms: two competing hypotheses for the relationships among Scorpiones, Opiliones, Pseudoscorpiones and Solifugae.] Up until the 1980s they were thought to be closely related to mites (Acari). In 1990, Shultz proposed grouping them with scorpions, pseudoscorpions and Solifugae ("camel spiders"); he named this clade Dromopoda. [ 2 ] This view is currently widely accepted. However, the relationships of the orders within Dromopoda are not yet sufficiently resolved. Analyses of recent taxa suggested the harvestmen to be the sister group of the three others, collectively called Novogenuata. [ 2 ] An analysis also considering fossil taxa [ 1 ] concluded that the harvestmen are sister to Haplocnemata (Pseudoscorpions and Solifugae), with Scorpions being the sister group of those three combined. [ 3 ] Recent analyses have also recovered the Opiliones as sister group to the extinct Phalangiotarbids, [ 4 ] [ 5 ] although this has low support, or as sister group to a pseudoscorpion and scorpion clade. [ 6 ] [ 7 ] In 1796, Pierre AndrΓ© Latreille erected the family "Phalangida" [ sic ] for the then known harvestmen, but included the genus Galeodes (Solifugae). Tord Tamerlan Teodor Thorell (1892) recognized the suborders Palpatores, Laniatores, Cyphophthalmi (called Anepignathi), but also included the Ricinulei as a harvestman suborder. The latter were removed from the Opiliones by Hansen and William SΓΈrensen (1904), rendering the harvestmen monophyletic. [Cladograms: two competing hypotheses for the relationships among the harvestman suborders Cyphophthalmi, Eupnoi, Dyspnoi and Laniatores.] According to more recent theories, Cyphophthalmi, the most basal suborder, are a sister group to all other harvestmen, which are according to this system called Phalangida. The Phalangida consist of three suborders, the Eupnoi, Dyspnoi and Laniatores. While these three are each monophyletic, it is not clear how exactly they are related. In 2002, Giribet et al. came to the conclusion that Dyspnoi and Laniatores are sister groups, and called them Dyspnolaniatores, which are sister to Eupnoi. [ 1 ] This is in contrast to the classical hypothesis that Dyspnoi and Eupnoi form a clade called Palpatores. [ 3 ] Dyspnolaniatores was also recovered in a 2011 study. [ 9 ] In 2014, a new analysis by Garwood et al. examined 158 morphological traits across 272 species. In Garwood's phylogenetic tree, the basal Opiliones split into the Phalangida and stem Cyphophthalmi. The Cyphophthalmi stem then diversified into Cyphophthalmi proper and the newly identified Tetrophthalmi, while the Phalangida split into Laniatores and the "Palpatores". Finally, the Palpatores diversified into Eupnoi and Dyspnoi. The analysis moves the divergence of the extant suborders from the Devonian Period to the Carboniferous. Opiliones' own divergence is dated to 414 million years ago, while arachnids are estimated to have originated during the late Cambrian to early Ordovician.
[ 10 ] Genetic analysis performed on a modern Phalangium opilio specimen found a suppressed gene that, if active, would generate a second pair of eyes at the lateral location, providing independent evidence of four eyes being the ancestral condition. Garwood et al. also argue that the Carboniferous harvestmen diversification is more consistent with changes observed in other terrestrial arthropods, which have been linked to high oxygen levels during that period. [ 10 ] The Cyphophthalmi have been divided into two infraorders, Temperophthalmi (including the superfamily Sironoidea, with the families Sironidae, Troglosironidae and Pettalidae) and Tropicophthalmi (with the superfamilies Stylocelloidea, and its single family Stylocellidae, and Ogoveoidea, including Ogoveidae and Neogoveidae); however, recent studies suggest that the Sironidae, Neogoveidae and Ogoveidae are not monophyletic, while the Pettalidae and Stylocellidae are. The division into Temperophthalmi and Tropicophthalmi is not supported, with Troglosironidae and Neogoveidae probably forming a monophyletic group. The Pettalidae are possibly the sister group to all other Cyphophthalmi. While most Cyphophthalmi are blind, eyes do occur in several groups. Many Stylocellidae and some Pettalidae bear eyes near or at the base of the ozophores, as opposed to most harvestmen, which have eyes located on top. The eyes of Stylocellidae could have evolved from the lateral eyes of other arachnids, which have been lost in all other harvestmen. Regardless of their origin, it is thought that eyes were lost several times in Cyphophthalmi. Spermatophores, which normally occur not in harvestmen but in several other arachnids, are present in some Sironidae and Stylocellidae. [ 3 ] The Eupnoi are divided into two superfamilies, the Caddoidea and Phalangioidea. The Phalangioidea are assumed to be monophyletic, although only the families Phalangiidae and Sclerosomatidae have been studied; the Caddoidea have not been studied at all in this regard. The limits of families and subfamilies in Eupnoi are uncertain in many cases, and are in urgent need of further study. [ 3 ] [Cladogram: relationships among Dyspnoi families, including Nipponopsalididae, Nemastomatidae, Dicranolasmatidae and Trogulidae.] The Dyspnoi are probably the best studied harvestman group regarding phylogeny. They are clearly monophyletic, and divided into two superfamilies. The relationship of the superfamily Ischyropsalidoidea, comprising the families Ceratolasmatidae, Ischyropsalididae and Sabaconidae, has been investigated in detail. It is not clear whether Ceratolasmatidae and Sabaconidae are each monophyletic, as the ceratolasmatid Hesperonemastoma groups with the sabaconid Taracus in molecular analyses. All other families are grouped under Troguloidea. [ 3 ] There is not yet a proposed phylogeny for the whole group of Laniatores, although some families have been researched in this regard. The Laniatores are divided into two infraorders, the "Insidiatores" Loman, 1900 and the Grassatores Kury, 2002. However, Insidiatores is probably paraphyletic. It consists of the two superfamilies Travunioidea and Triaenonychoidea, with the latter closer to the Grassatores. Alternatively, the Pentanychidae, which reside in Travunioidea, could be the sister group to all other Laniatores. The Grassatores are traditionally divided into the Samooidea, Assamioidea, Gonyleptoidea, Phalangodoidea and Zalmoxoidea. Several of these groups are not monophyletic.
Molecular analyses relying on nuclear ribosomal genes support monophyly of Gonyleptidae , Cosmetidae (both Gonyleptoidea), Stygnopsidae (currently Assamioidea) and Phalangodidae . The Phalangodidae and Oncopodidae may not form a monophyletic group, thus rendering the Phalangodoidea obsolete. The families of the obsolete Assamioidea have been moved to other groups: Assamiidae and Stygnopsidae are now Gonyleptoidea, Epedanidae reside within their own superfamily Epedanoidea , and the " Pyramidopidae " are possibly related to Phalangodidae. [ 3 ]
https://en.wikipedia.org/wiki/Harvestman_phylogeny
A hash calendar is a data structure that is used to measure the passage of time by adding hash values to an append-only database, with one hash value per elapsed second. It can be thought of as a special kind of Merkle or hash tree, with the property that at any given moment the tree contains a leaf node for each second since 1970‑01‑01 00:00:00 UTC. The leaves are numbered left to right starting from zero, and new leaves are always added to the right. By periodically publishing the root of the hash tree, it is possible to use a hash calendar as the basis of a hash-linking based digital timestamping scheme. The hash calendar construct was invented by Estonian cryptographers Ahto Buldas and Mart Saarepera, based on their research on the security properties of cryptographic hash functions and hash-linking based digital timestamping. [ 1 ] Their design goal was to remove the need for a trusted third party, i.e., the time of the timestamp should be verifiable independently of the issuer of the timestamp. [ 2 ] There are different algorithms that can be used to build a hash calendar and extract a relevant hash chain for each second. The easiest is to imagine the calendar being built in two phases. In the first phase, the leaves are collected into complete binary trees, starting from the left and making each tree as large as possible. In the second phase, the multiple unconnected trees are turned into a single tree by merging the roots of the initial trees, this time starting from the right and adding new parent nodes as needed. The hash chains can then be extracted as from any hash tree. Since the hash calendar is built in a deterministic manner, the shape of the tree for any moment can be reconstructed knowing just the number of leaf nodes in the tree at that moment, which is one more than the number of seconds from 1970‑01‑01 00:00:00 UTC to that moment. Therefore, given the time when the calendar tree was created and a hash chain extracted from it, the time value corresponding to each leaf node can be computed. A distributed hash calendar is a network of hash calendar nodes. In order to ensure a high-availability service, it is possible to have multiple calendars in different physical locations, all of which communicate with each other to ensure that each calendar contains identical hash values. Ensuring that the calendars remain in agreement is a form of Byzantine fault tolerance. For example, a five-node calendar cluster can be arranged so that each node communicates with every other node in the cluster and there is no single point of failure. Although each node has a clock, the clock is not used for setting the time directly but as a metronome to ensure that the nodes "beat" at the same time. A five-node hash calendar cluster is a component of Keyless Signature Infrastructure (KSI), each leaf in the hash calendar being the aggregate hash value of a globally distributed hash tree.
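The deterministic two-phase construction can be sketched directly. In the fragment below, the hash function (SHA-256 over the concatenation of the two child hashes) and all names are illustrative assumptions rather than the specification of any deployed calendar service.

```python
import hashlib

def _node(left, right):
    # Parent hash of two child hashes (illustrative encoding assumption)
    return hashlib.sha256(left + right).digest()

def calendar_root(leaves):
    """Sketch of the two-phase hash calendar construction described above.

    leaves: list of leaf hashes, one per elapsed second (left to right).
    Phase 1: greedily group leaves into the largest possible complete
             binary trees, working from the left.
    Phase 2: merge the resulting roots into a single tree, starting
             from the right.
    """
    # Phase 1: one complete tree per set bit of len(leaves)
    roots = []
    i, n = 0, len(leaves)
    for bit in reversed(range(n.bit_length())):
        size = 1 << bit
        if n & size:
            block = leaves[i:i + size]
            while len(block) > 1:        # reduce the block to a single root
                block = [_node(block[j], block[j + 1])
                         for j in range(0, len(block), 2)]
            roots.append(block[0])
            i += size
    # Phase 2: merge the roots right-to-left into one calendar root
    root = roots[-1]
    for r in reversed(roots[:-1]):
        root = _node(r, root)
    return root

# Example: a calendar covering 5 seconds
print(calendar_root([hashlib.sha256(bytes([s])).digest()
                     for s in range(5)]).hex())
```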
https://en.wikipedia.org/wiki/Hash_calendar
Hash oil or cannabis oil is an oleoresin obtained by the extraction of cannabis or hashish. [ 1 ] It is a cannabis concentrate containing many of the plant's resins and terpenes – in particular, tetrahydrocannabinol (THC), cannabidiol (CBD), and other cannabinoids. Hash oil is usually consumed by smoking, vaporizing or eating. [ 2 ] Preparations of hash oil may be solid or semi-liquid colloids, depending on both production method and temperature, and are usually identified by their appearance or characteristics. Color most commonly ranges from transparent golden or light brown to tan or black. There are various extraction methods, most involving a solvent such as butane or ethanol. [ 2 ] Hash oil is an extracted cannabis product that may use any part of the plant, with minimal or no residual solvent. It is generally thought to be indistinct from traditional hashish, at least according to the 1961 UN Single Convention on Narcotic Drugs, which defines these products as "the separated resin, whether crude or purified, obtained from the cannabis plant". Hash oil may be sold in cartridges used with pen vaporizers. Cannabis retailers in California have reported that about 40% of their sales are from smokeable cannabis oils. [ 3 ] The tetrahydrocannabinol (THC) content of hash oil varies tremendously, since manufacturers use a varying assortment of marijuana plants and preparation techniques. Dealers sometimes cut hash oils with other oils. [ 4 ] [ 5 ] The form of the extract varies depending on the extraction process used; it may be liquid, a clear amber solid (called "shatter"), a sticky semisolid substance (called "wax"), or a brittle honeycombed solid (called "honeycomb wax"). [ 6 ] Hash oil seized in the 1970s had a THC content ranging from 10% to 30%. The oil available on the U.S. West Coast in 1974 averaged about 15% THC. [ 4 ] Samples seized across the United States by the Drug Enforcement Administration over an 18-year period (1980–1997) showed that THC content in hashish and hashish oil, averaging 12.9% and 17.4% respectively, did not increase over time. [ 7 ] The highest THC concentrations measured were 52.9% in hashish and 47.0% in hash oil. [ 8 ] Hash oils in use in the 2010s had THC concentrations as high as 90%, [ 9 ] [ 10 ] with other products achieving even higher concentrations. [ 11 ] Following an outbreak of vaping-related pulmonary illnesses and deaths in 2019, NBC News conducted tests on different black-market THC vape cartridges and found cartridges containing up to 30% vitamin E acetate, and trace amounts of fungicides and pesticides that may be harmful. [ 12 ] The following compounds were found in naphtha extracts of Bedrocan Dutch medical cannabis: [ 13 ] The hash oils made in the 19th century were made from hand-collected hashish called charas and kief. The term hash oil [ 14 ] then referred to hashish that had been dissolved or infused into a vegetable oil for use in preparing foods for oral administration. Efforts to isolate the active ingredient in cannabis were well documented in the 19th century, and cannabis extracts and tinctures of cannabis were included in the British Pharmacopoeia and the United States Pharmacopoeia. These solvent extracts were termed cannabin (1845), cannabindon, cannabinine, crude cannabinol and cannabinol. [ 14 ] So-called "butane honey oil" was available briefly in the 1970s. [ 3 ] [ 15 ] This product was made in Kabul, Afghanistan, and smuggled into the United States by The Brotherhood of Eternal Love.
Production is thought to have ceased when the facility was destroyed in an explosion. [ 16 ] Traditional ice-water-separated hashish production uses water and filter bags to separate plant material from resin, though this method still leaves much residual plant matter and is therefore poorly suited for full vaporization. Gold described the use of alcohol and activated charcoal in honey oil production by 1989, [ 17 ] and Michael Starks further detailed procedures and various solvents by 1990. [ 18 ] Large cannabis vaporizers gained popularity in the twentieth century for their ability to vaporize the cannabinoids in cannabis and extracts without burning plant material, using temperature-controlled vaporization. Colorado and Washington began licensing hash oil extraction operations in 2014. [ 3 ] Small portable vape pens saw a dramatic increase in popularity in 2017. Hash oil is usually consumed by ingestion, smoking or vaporization. [ 6 ] Smoking or vaporizing hash oil is known colloquially as "dabbing", [ 6 ] from the English verb to daub (Dutch dabben, French dauber), "to smear with something adhesive". [ 19 ] Dabbing devices include special kinds of water pipes ("dab rigs"), vaporizers and vape pens similar in design to electronic cigarettes. [ 6 ] Oil rigs include a glass water pipe and a quartz bucket, which is often covered with a glass bubble or directional cap to direct the airflow and disperse the oil amongst the hot areas of the quartz "nail" (a nail is also referred to as a banger). [ 6 ] The pipe is often heated with a butane blowtorch rather than a cigarette lighter. [ 6 ] The oil can also be sold in prefilled atomizer cartridges. The cartridge is used by connecting it to a battery and inhaling the vaporized oil from the cartridge's mouthpiece. [ 20 ] Hash oil is produced by solvent extraction (maceration, infusion or percolation) of marijuana or hashish. After filtering and evaporating the solvent, a sticky resinous liquid with a strong herbal odor (remarkably different from the odor of hemp) remains. [ 4 ] [ 21 ] Fresh, undried plant material is less suited for hash oil production, because much of the THC and CBD will be present in their carboxylic acid forms (THCA and CBDA), which may not be highly soluble in some solvents. [ 4 ] The acids are decarboxylated during drying and heating (smoking). A wide variety of solvents can be used for extraction, such as chloroform, dichloromethane, petroleum ether, naphtha, benzene, butane, methanol, ethanol, isopropanol, and olive oil. [ 4 ] [ 13 ] Currently, resinoids are often obtained by extraction with supercritical carbon dioxide. The alcohols extract undesirable water-soluble substances such as chlorophyll and sugars (which can be removed later by washing with water). Non-polar solvents such as benzene, chloroform and petroleum ether will not extract the water-soluble constituents of marijuana or hashish while still producing hash oil. In general, non-polar cannabis extracts taste much better than polar extracts. Alkali washing further improves the odor and taste. The oil may be further refined by 1) alkali washing, or removing the heavy aromatic carboxylic acids with antibiotic properties, which may cause heartburn, gallbladder and pancreas irritation, and resistance to hemp antibiotics; or 2) conversion of CBD to THC.
Process 1) consists of dissolving the oil in a non-polar solvent such as petroleum ether , repeatedly washing ( saponifying ) with a base such as sodium carbonate solution until the yellow residue disappears from the watery phase, decanting, washing with water to remove the base and the saponified components, and evaporating the solvents. This process reduces the oil yield, but the resulting oil is less acidic, more easily digestible and much more potent (almost pure THC). Process 2) consists of dissolving the oil in a suitable solvent such as absolute ethanol containing 0.05% hydrochloric acid , and boiling the mixture for 2 hours. [ 22 ] The majority of ready-to-consume extract products are produced via "closed loop systems". [ 23 ] These systems typically entail: a vessel that holds the solvent; material columns to hold the plant material; a flow meter to measure the volume of solvent entering the plant material; a recovery vessel (where heat is applied via an external jacket) to convert the liquid solvent into a vapor and separate it from the THC, CBD, or other cannabinoids and byproducts; and some form of heat exchanger to convert the hydrocarbon vapors back into liquid form before they return to the original vessel. Such a process can be carried out using a Soxhlet extractor . Ten grams of marijuana yields one to two grams of hash oil. [ 21 ] The oil may retain considerable residual solvent: oil extracted with longer-chain volatile hydrocarbons (such as naphtha) is less viscous (thinner) than oil extracted with short-chain hydrocarbons (such as butane). [ 13 ] Colored impurities can be removed from the oil by adding activated charcoal at about one third to one half the weight or volume of the solvent containing the dissolved oil, mixing well, filtering, and evaporating the solvent. [ 4 ] When decolorizing fatty oils , oil retention can be up to 50 wt% on bleaching earths and nearly 100 wt% on activated charcoal. [ 24 ] The many different textures/types of hydrocarbon extracts include: [ 25 ] Hash rosin has recently become a top-quality, highly prized product in the cannabis market. [ 26 ] For dabbing, it is considered to be the cleanest form of concentrating cannabis, [ 27 ] as it requires only ice, water (instead of organic solvents like butane), heat, pressure, and collection tools. Cannabis flower material is washed with ice water and strained through filters of sequential micron sizes to isolate intact trichomes and their heads into ice water hash. [ 28 ] The micron grades held in the highest regard are 73 μm and 90 μm, as this is where the resin heads reside. [ 29 ] These are sometimes isolated and sold as one of the highest-quality, most expensive cannabis products in the market today, known as "full melt" [ 30 ] because it will dab fine without having to be pressed. "Full spectrum" hash rosin will normally come from the 45–159 μm range, as smaller and larger particles are likely to be too unrefined or to consist of broken trichome stalks. This hash is then pressed at the appropriate temperature and pressure to squeeze the oils out of the hash, and is collected with metal tools and parchment paper. Just like hydrocarbon extraction, the quality of the final product depends greatly on the quality of the starting material. This is emphasized even more with hash rosin due to its lower yield percentages compared to solvent-derived concentrates (0.3–8% for rosin vs. 10–20% for hydrocarbon).
Hash rosin producers often note that growing cannabis for hash production differs from growing for flower production, as some strains are deceptive in their looks regarding yields. In Canada, hash oil – defined as a chemically concentrated extract having up to 90% THC potency – was approved for commerce in October 2018. [ 31 ] In the United States, regulations specifically for hash oil had not been issued as of 2019, but hemp seed oil – along with hulled hemp seeds and hemp seed protein – was approved as generally recognized as safe (GRAS) in December 2018, indicating that "these products can be legally marketed in human foods for these uses without food additive approval, provided they comply with all other requirements and do not make disease treatment claims ". [ 32 ] In Germany, the KCanG (cannabis law), in force since 1 April 2024, allows adults to possess a certain amount of cannabis products. However, the extraction of cannabinoids from the plant, and thus hash oil, generally remains illegal. [ 33 ] The term extraction is not defined in the law, but in most definitions it refers to the use of an extraction solvent such as butane or ethanol. As no solvent is used in the production of rosin, it is uncertain whether rosin counts as a legal cannabis resin under the KCanG, given its mechanical process of production, or as an illegal extract. On 5 September 2019, the United States Food and Drug Administration (US FDA) announced that 10 out of 18 (56%) of the samples of vape liquids sent in by states, linked to the recent vaping-related lung disease outbreak in the United States , tested positive for vitamin E acetate , [ 34 ] which had been used as a thickening agent by illicit THC vape cartridge manufacturers. [ 35 ] On 8 November 2019, the Centers for Disease Control and Prevention (CDC) identified vitamin E acetate as a chemical of strong concern in the vaping-related illnesses, but did not rule out other chemicals or toxicants as possible causes. [ 36 ] The CDC's findings were based on fluid samples from the lungs of 29 patients with vaping-associated pulmonary injury , which provided direct evidence of vitamin E acetate at the primary site of injury in all 29 lung fluid samples tested. [ 36 ] Research suggests that when vitamin E acetate is inhaled, it may interfere with normal lung functioning. [ 37 ] According to industry insiders, vitamin E oil might be present in 60–70% of street carts. [ 38 ] Counterfeit THC oil has been found to contain synthetic cannabinoids . Several school children in Greater Manchester collapsed after vaping Spice mis-sold as "natural cannabis". [ 39 ] [ 40 ] [ 41 ] [ 42 ] [ 43 ] [ 44 ] [ 45 ] [ 46 ] [ 47 ] As of 2015, the health effects of using hash oil were poorly documented. Cannabis extracts have less plant matter and create less harmful smoke. However, trace amounts of impurities are not generally regarded as safe (GRAS). [ 6 ] In 2019, following an outbreak of illnesses, additives in vape pen mixtures were found to be causing breathing problems, lung damage, and deaths. [ 48 ] Most of the solvents employed vaporize quickly and are flammable, making the extraction process dangerous. Several explosion and fire incidents related to hash oil manufacturing attempts in homes have been reported. [ 21 ] Solvents used to extract THC are flammable or combustible and have resulted in explosions, fires, severe injuries, and deaths.
[ 49 ] [ 50 ] [ 10 ] [ 51 ] [ 52 ] [ 53 ] Hash oil can contain up to 80% THC, though up to 99% is possible with other methods of extraction. While health issues of the lungs may be exacerbated by use of hash oil, it is not known to cause side effects not already found in other preparations of cannabis . When exposed to air, warmth and light (especially without antioxidants ), the oil loses its taste and psychoactivity due to aging. Cannabinoid carboxylic acids ( THCA , CBDA , and maybe others) have an antibiotic effect on gram-positive bacteria such as ( penicillin -resistant) Staphylococcus aureus , but gram-negative bacteria such as Escherichia coli are unaffected. [ 54 ]
https://en.wikipedia.org/wiki/Hash_oil
Thomas Haslem v. William A. Lockwood , [ 1 ] Connecticut, (1871) is an important United States case in property , tort , conversion , trover and nuisance law. The plaintiff directed his servants to rake into heaps abandoned horse manure that had accumulated in a public street, intending to carry it away the next day. Before he could do so, the defendant, who had no knowledge of the plaintiff's actions, found the heaps and hauled them off to his own land. The plaintiff sued the defendant in trover , demanding payment for the price of the manure. The trial court held for the defendant, stating he owed nothing to the plaintiff. The plaintiff appealed, and the appellate court held for the plaintiff, remanding the case for a new trial. The manure originally belonged to the owners of the horses that dropped it. But when the owners abandoned it on the road, it became the property of the man who was first to claim it. The Court found that the best owner after the act of abandonment was the borough of Stamford, Connecticut , where the manure was found. In the absence of a claim to the manure by the officials of Stamford, the plaintiff was entitled to it by reason of trover . The plaintiff was entitled to damages because the defendant had committed a conversion . The manure had not become a part of the real estate, as the defendant had argued. It remained separate and unattached to the land, and hence was not part of the fee of the estate. Comparing manure to seaweed, and drawing on 19th-century laws governing the scraping into piles of natural things of this sort, the court held that 24 hours was a reasonable time for the plaintiff to be allowed to remove the manure before another could take it. By this standard, and in view of the fruits of his labor in raking the manure into piles, the plaintiff was granted a new trial on the issue of damages. A case in trover for a quantity of manure, brought before a justice of the peace and appealed by the defendant to the Court of Common Pleas for the county of Fairfield, and tried in that court, on the general issue concerning the matter of ownership of the manure, before Justice Brewer. At trial it was proved that the plaintiff employed two men to gather into heaps, on the evening of April 6, 1869, some manure that lay scattered on the ground along the side of a public highway. Most of this manure was from horses passing by. The men continued their efforts through the town of Stamford, Connecticut . They started at 6 PM and by 8 PM their efforts had resulted in eighteen heaps, enough to fill six cart-loads. While the heaps consisted largely of manure, there were also traces of soil, gravel and straw, which are commonly seen along roadways. The defendant saw the piles the next morning. He inquired of the town warden to whom they belonged, and whether he had given permission to anyone for their removal. The town warden did not know to whom the manure belonged and had not given permission to anyone for the removal. Learning this, the defendant removed the manure to his own land, where it was scattered on a field. The plaintiff and defendant both averred that they had received permission from the warden to claim the manure, but testimony revealed that neither had any authority from any town official in Stamford for the removal. Neither the plaintiff while gathering, nor the defendant while removing the heaps, was interfered with or opposed by anyone. The removal of the manure was calculated to improve the appearance and health of the borough.
The manure was worth one dollar per cartload, six dollars in all. The plaintiff, upon learning that the defendant had taken the manure, demanded he pay six dollars. The defendant refused the demand. Neither litigant owned any of the land adjacent to the road. On the above facts, the plaintiff prayed the court to rule that the manure was the personal property of the owners of the horses, and had been abandoned. By piling the manure into heaps, the plaintiff claimed ownership in trover . The only person who could reasonably have a greater claim to the manure would be the owner of the land in fee, and, barring any claim by the landowner, the plaintiff was the rightful owner. The defendant claimed that the manure, being dropped and spread out over the surface of the earth, was a part of the real estate, and belonged to the owner of the fee, subject to a public easement; that the fee was either in the borough of Stamford or the town of Stamford, or in the parties who owned the adjacent lands; that the scraping up of the manure, mixed with the soil, if real estate, therefore did not change its nature to that of personal estate unless it was removed, whether the plaintiff had the consent of the owner of the fee or not; and that unless the heaps became personal property, the plaintiff could not maintain his action. The defendant further claimed that the plaintiff may, indeed, have turned the manure into personal estate by the act of piling it up, but had abandoned his claim to the manure by leaving it unattended overnight and into the next day. This inattention was an abandonment of all rights to ownership of the manure. The trial court ruled adversely and found for the defendant: the plaintiff had no property rights in the piles of manure. The plaintiff appealed this ruling to this court, seeking a new trial. Curtis and Hoyt (counsel for the plaintiff-appellant) offered the following arguments in their brief: (1) The manure in question was personal property abandoned by its owners (the owners of the horses). [ 2 ] [ 3 ] (2) It never became a part of the real estate on which it was abandoned. [ 4 ] [ 5 ] [ 6 ] [ 7 ] [ 8 ] (3) It being personal property abandoned by its owners, and lying upon the highway, and neither the owners of the fee nor the proper authorities of the town and borough having by any act of theirs shown any intention to appropriate the same, it became lawful for the plaintiff to gather it up and remove it from the highway, provided he did not commit a trespass, and removed it without objection from the owners of the land. [ 9 ] No trespass was in fact committed. No person interfered with the plaintiff or made any objection. The court cannot presume a trespass to have been committed. [ 10 ] [ 11 ] (4) But if the manure had become a part of the real estate, yet when it was gathered into heaps by the plaintiff it was severed from the realty and became personal estate. [ 12 ] [ 13 ] And being gathered without molestation from any person owning or claiming to own the land, it is to be considered as having been taken by tacit consent of such owner. [ 14 ] (5) The plaintiff therefore acquired not only a valid possession, but a title by occupancy, and by having expended labor and money upon the property. Such a title is a good legal title against every person but the true owner. (6) If the plaintiff had a legal title then he had the constructive possession.
If he had legal possession, and only left the property for a short time intending to return and take it away, then he might maintain an action against a wrong doer for taking it away. [ 15 ] [ 16 ] [ 17 ] [ 18 ] [ 19 ] [ 20 ] The leaving of property for a short time, intending to return, does not constitute an abandonment. The property is still to be considered as in the possession of the plaintiff. Olmstead (Counsel for the defendant-respondent), contra. (1) The manure mixed with the dirt and ordinary scrapings of the highway, being spread out over the surface of the highway, was a part of the real estate, and belonged to the owner of the fee, subject to the public easement. [ 21 ] [ 22 ] [ 23 ] [ 24 ] [ 25 ] [ 26 ] [ 27 ] (2) The scraping up of the manure and dirt into piles, if the same was a part of the real estate, did not change its nature to that of personal property, unless there was a severance of it from the realty by removal, (which there was not), whether the plaintiff had the consent of the owner of the fee or not, which consent it is conceded the plaintiff did not have. (3) Unless the scraping up of the heaps made their substance personal property, the plaintiff could not maintain his action either for trespass or trespass on the case. (4) In trespass de bonis asportatis , or trover , the plaintiff must have had the actual possession, or a right to the immediate possession, in order to recover. [ 28 ] (5) If the manure was always personal estate, it being spread upon the surface of the earth, it was in possession of the owner of the fee, who was not the plaintiff. [ 29 ] [ 30 ] The scraping of it into heaps, unless it was removed, would not change the possession from the owner of the fee to the plaintiff. The plaintiff therefore never had the possession. (6) If the heaps were personal property the plaintiff never had any right in the property, but only mere possession, if anything, which he abandoned by leaving the same upon the public highway from 8 o'clock in the evening until 12 o'clock the next day, without leaving any notice on or about the property, or any one to exercise control over the same in his behalf. [ 31 ] [ 32 ] [ 33 ] Opinion delivered by Judge Park. We think the manure scattered upon the ground, under the circumstances of this case, was personal property. The cases referred to by the defendant to show that it was real estate are not on point. The principle of those cases is, that manure made in the usual course of husbandry upon a farm is so attached to and connected with the realty that, in the absence of any express stipulation to the contrary, it becomes appurtenant to it. The principle was established for the benefit of agriculture. It found its origin in the fact that it is essential to the successful cultivation of a farm that the manure, produced from the droppings of cattle and swine fed upon the products of the farm from the land should be used to supply the drain made upon the soil in the production of crops, which otherwise would become impoverished and barren; and in the fact the manure so produced is generally regarded by farmers in this country as a part of the realty and has been so treated by landlords and tenants from time immemorial. [ 34 ] [ 35 ] [ 36 ] [ 37 ] [ 38 ] [ 39 ] [ 40 ] [ 41 ] But this principle does not apply to the droppings of animals driven by travelers on the highway. The highway is not used, and cannot be used, for the purpose of agriculture. 
The manure is of no benefit whatsoever to it, but on the contrary is a detriment; and in cities and large villages it becomes a nuisance, and is removed by public officers at public expense. The finding in this case is, "that the removal of the manure and scrapings was calculated to improve the appearance and health of the borough." It is therefore evident that the cases relied upon by the defendant have no application to the case. But it is said that if the manure was personal property, it was in the possession of the owner of the fee, and the scraping of it into heaps by the plaintiff did not change the possession, but it continued as before, and that therefore the plaintiff cannot recover, for he had neither possession nor the right to immediate possession. The manure originally belonged to the travelers whose animals dropped it, but being worthless to them it was immediately abandoned; and whether it then became the property of the borough of Stamford, which owned the fee of the land on which the manure lay, is unnecessary to determine; for if it did, the case finds that the removal of the filth would be an improvement to the borough, and no objection was made by anyone to the use that the plaintiff attempted to make of it. Considering the character of such accumulations upon highways in cities and villages, and the light in which they are everywhere regarded in closely settled communities, we cannot believe that the borough in this instance would have had any objection to the act of the plaintiff in removing a nuisance that affected the public health and the appearance of the streets. At all events, we think the facts of the case show a sufficient right in the plaintiff to the immediate possession of the property as against a mere wrong doer. The defendant appears before the court in no enviable light. He does not pretend that he had a right to the manure, even when scattered upon the highway, superior to that of the plaintiff; but after the plaintiff had changed its original condition and greatly enhanced its value by his labor, he seized and appropriated to his own use the fruits of the plaintiff's outlay, and now seeks immunity from responsibility on the ground that the plaintiff was a wrong doer as well as himself. The conduct of the defendant is in keeping with his claim, and neither commends itself to the favorable consideration of the court. The plaintiff had the peaceable and quiet possession of the property; and we deem this sufficient until the borough of Stamford shall make complaint. It is further claimed that if the plaintiff had a right to the property by virtue of occupancy, he lost the right when he ceased to retain actual possession of the manure after scraping it into heaps. We do not question the general doctrine that, where the right by occupancy exists, it exists no longer than the party retains the actual possession of the property, or till he appropriates it to his own use by removing it to some other place. If he leaves the property at the place where it was discovered, and does nothing whatsoever to enhance its value or change its nature, his right by occupancy is unquestionably gone. But the question is, if a party finds property comparatively worthless, as the plaintiff found the property in question, owing to its scattered condition upon the highway, and greatly increases its value by his labor and expense, does he lose his right if he leaves it a reasonable time to procure the means to take it away, when the means are necessary for its removal?
Suppose a teamster with a load of grain, while traveling the highway, discovers a rent in one of his bags, and finds that his grain is scattered upon the road for the distance of a mile. He considers the labor of collecting his corn of more value than the property itself, and he therefore abandons it, and pursues his way. A afterwards finds the grain in this condition and gathers it kernel by kernel into heaps by the side of the road, and leaves it a reasonable time to procure the means necessary for its removal. While he is gone for his bag, B discovers the grain thus conveniently collected into heaps and appropriates it to his own use. Has A any remedy? If he has not, the law in this instance is open to just reproach. We think under such circumstances A would have a reasonable time to remove the property, and during such a reasonable time his right to it would be protected. If this is so, then the principle applies to the case under consideration. A reasonable time for the removal of this manure had not elapsed when the defendant seized and converted it to his own use. The statute regulating the rights of parties in the gathering of sea-weed gives the party who heaps it upon a public beach twenty-four hours in which to remove it, and that length of time for the removal of the property we think would not be unreasonable in most cases like the present one. We therefore advise the Court of Common Pleas to grant a new trial. In this opinion the other judges concurred. The Connecticut court found the argument of the defendant-respondent to be exceptionally weak in terms of the law. The idea that horse droppings abandoned along the road became a part of the real estate in fee is an interesting argument, but it was soundly rejected by the court. Even following this theory, the borough of Stamford, Connecticut would have been the best owner of the manure. When the plaintiff-appellant began to rake the manure into neat piles for reclamation, he did it in clear sight of one or more of the officials of Stamford. Also, presumably, any citizen of the town could have observed him. No one objected to his activity, or came forward to claim superior rights to the manure. The plaintiff had "improved" what was otherwise a nuisance to the town. In this act, he also gained some legal standing to claim an ownership superior to anyone else's. The existing law allowing persons who piled up seaweed a legitimate claim of possession for 24 hours was invoked. The court had nothing good to say about the defendant-respondent, stating he had not placed himself in an enviable light.
https://en.wikipedia.org/wiki/Haslem_v._Lockwood
As minor planet discoveries are confirmed, they are given a permanent number by the IAU 's Minor Planet Center (MPC), and the discoverers can then submit names for them, following the IAU's naming conventions . The list below concerns those minor planets in the specified number-range that have received names, and explains the meanings of those names. Official naming citations of newly named small Solar System bodies are approved and published in a bulletin by the IAU's Working Group for Small Bodies Nomenclature (WGSBN). [ 1 ] Before May 2021, citations were published in the MPC's Minor Planet Circulars for many decades. [ 2 ] Recent citations can also be found on the JPL Small-Body Database (SBDB). [ 3 ] Until his death in 2016, German astronomer Lutz D. Schmadel compiled these citations into the Dictionary of Minor Planet Names (DMP) and regularly updated the collection. [ 4 ] [ 5 ] Building on Paul Herget 's The Names of the Minor Planets , [ 6 ] Schmadel also researched the unclear origin of numerous asteroids, most of which had been named prior to World War II. New namings may only be added to this list after official publication, as the preannouncement of names is condemned. [ 7 ] The WGSBN publishes a comprehensive guideline for the naming rules of non-cometary small Solar System bodies. [ 8 ]
https://en.wikipedia.org/wiki/Hasnaa_Chennaoui-Aoudjehane
Hassan Naim is a Lebanese-Swiss biochemist. He currently holds the position of Director of the "Institut für Physiologische Chemie" (Institute for Physiological Chemistry/Biochemistry) at the University of Veterinary Medicine Hanover , while collaborating regularly with the University of Hannover . [ 1 ] Hassan Naim received his Ph.D. degree in biochemistry from the University of Bern, Switzerland. Following appointments at the Biochemistry Department, University of Lausanne (membrane transport in T cells) and the University Children's Hospital Bern (structure and function of brush border membrane proteins), he moved in 1989 to the Biochemistry Department, University of Texas Southwestern Medical Center at Dallas, USA, to continue his work on structure-function relationships of brush border proteins. In 1991 he was recruited as a group leader and faculty member at the University of Düsseldorf, Germany. In 1997 he was appointed Professor and Chair of the Department of Biochemistry at the University of Veterinary Medicine in Hannover, Germany. [ 1 ] Current research interests in the Naim laboratory focus on the molecular mechanisms underlying protein trafficking, particularly polarized protein sorting in epithelial cells, in health and disease. [ 1 ]
https://en.wikipedia.org/wiki/Hassan_Naim
In mathematics, the Hasse derivative is a generalisation of the derivative which allows the formulation of Taylor's theorem in coordinate rings of algebraic varieties . Let k [ X ] be a polynomial ring over a field k . The r -th Hasse derivative of X n is {\displaystyle D^{(r)}X^{n}={\binom {n}{r}}X^{n-r}} if n ≥ r , and zero otherwise. [ 1 ] In characteristic zero we have {\displaystyle D^{(r)}={\frac {1}{r!}}\left({\frac {d}{dX}}\right)^{r}.} The Hasse derivative is a generalized derivation on k [ X ] and extends to a generalized derivation on the function field k ( X ), [ 1 ] satisfying an analogue of the product rule, {\displaystyle D^{(r)}(fg)=\sum _{i+j=r}D^{(i)}(f)\,D^{(j)}(g),} and an analogue of the chain rule. [ 2 ] Note that the {\displaystyle D^{(r)}} are not themselves derivations in general, but are closely related. A form of Taylor's theorem holds for a function f defined in terms of a local parameter t on an algebraic variety: [ 3 ] {\displaystyle f=\sum _{r\geq 0}D^{(r)}(f)\,t^{r}.} This algebra -related article is a stub . You can help Wikipedia by expanding it .
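To make the definition concrete, here is a minimal Python sketch (the function name hasse_derivative and the coefficient-list representation are illustrative choices, not a standard API). It applies D^(r) X^n = C(n, r) X^(n-r) termwise, and shows the point of the construction: over a field of characteristic p the ordinary p-th derivative of X^p vanishes, while its p-th Hasse derivative does not.

```python
from math import comb

def hasse_derivative(coeffs, r):
    """r-th Hasse derivative of sum_n coeffs[n] * X**n.

    Applies D^(r) X^n = C(n, r) * X^(n - r); terms with n < r vanish.
    Returns the coefficient list of the resulting polynomial.
    """
    return [comb(n, r) * coeffs[n] for n in range(r, len(coeffs))]

# X^5: the 2nd Hasse derivative is C(5,2) X^3 = 10 X^3,
# i.e. 1/2! times the ordinary second derivative 20 X^3.
print(hasse_derivative([0, 0, 0, 0, 0, 1], 2))  # [0, 0, 0, 10]

# Over F_p, the ordinary p-th derivative of X^p is p!, which is 0 mod p,
# but the p-th Hasse derivative is C(p, p) = 1, which survives mod p.
p = 5
print([c % p for c in hasse_derivative([0] * p + [1], p)])  # [1]
```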
https://en.wikipedia.org/wiki/Hasse_derivative
In order theory , a Hasse diagram ( /ˈhæsə/ ; German: [ˈhasə] ) is a type of mathematical diagram used to represent a finite partially ordered set , in the form of a drawing of its transitive reduction . Concretely, for a partially ordered set ( S , ≤ ) one represents each element of S as a vertex in the plane and draws a line segment or curve that goes upward from one vertex x to another vertex y whenever y covers x (that is, whenever x ≠ y , x ≤ y and there is no z distinct from x and y with x ≤ z ≤ y ). These curves may cross each other but must not touch any vertices other than their endpoints. Such a diagram, with labeled vertices, uniquely determines its partial order. Hasse diagrams are named after Helmut Hasse (1898–1979); according to Garrett Birkhoff , they are so called because of the effective use Hasse made of them. [ 1 ] However, Hasse was not the first to use these diagrams. One example that predates Hasse can be found in an 1895 work by Henri Gustave Vogt. [ 2 ] [ 3 ] Although Hasse diagrams were originally devised as a technique for making drawings of partially ordered sets by hand, they have more recently been created automatically using graph drawing techniques. [ 4 ] In some sources, the phrase "Hasse diagram" has a different meaning: the directed acyclic graph obtained from the covering relation of a partially ordered set, independently of any drawing of that graph. [ 5 ] Although Hasse diagrams are simple, as well as intuitive, tools for dealing with finite posets , it turns out to be rather difficult to draw "good" diagrams. The reason is that, in general, there are many different possible ways to draw a Hasse diagram for a given poset. The simple technique of just starting with the minimal elements of an order and then drawing greater elements incrementally often produces quite poor results: symmetries and internal structure of the order are easily lost. The following example demonstrates the issue. Consider the power set of a 4-element set ordered by inclusion ⊆ . Below are four different Hasse diagrams for this partial order. Each subset has a node labelled with a binary encoding that shows whether a certain element is in the subset (1) or not (0): The first diagram makes clear that the power set is a graded poset . The second diagram has the same graded structure, but by making some edges longer than others, it emphasizes that the 4-dimensional cube is a combinatorial union of two 3-dimensional cubes, and that a tetrahedron ( abstract 3-polytope ) likewise merges two triangles ( abstract 2-polytopes ). The third diagram shows some of the internal symmetry of the structure. In the fourth diagram the vertices are arranged in a 4×4 grid. If a partial order can be drawn as a Hasse diagram in which no two edges cross, its covering graph is said to be upward planar . A number of results on upward planarity and on crossing-free Hasse diagram construction are known: In software engineering / object-oriented design , the classes of a software system and the inheritance relation between these classes are often depicted using a class diagram , a form of Hasse diagram in which the edges connecting classes are drawn as solid line segments with an open triangle at the superclass end.
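Since a Hasse diagram is just a drawing of the covering relation, its edge set is easy to compute for small posets. Below is a short Python sketch (the function name covering_pairs is illustrative): y covers x when x is strictly below y and no element lies strictly between them. For the divisors of 12 ordered by divisibility it returns exactly the edges one would draw.

```python
def covering_pairs(elements, le):
    """Edges of the Hasse diagram: pairs (x, y) with y covering x.

    le(a, b) is the partial order; y covers x iff x != y, x <= y,
    and no z distinct from both satisfies x <= z <= y.
    """
    return [
        (x, y)
        for x in elements
        for y in elements
        if x != y and le(x, y)
        and not any(z not in (x, y) and le(x, z) and le(z, y) for z in elements)
    ]

divisors = [d for d in range(1, 13) if 12 % d == 0]  # [1, 2, 3, 4, 6, 12]
print(covering_pairs(divisors, lambda a, b: b % a == 0))
# [(1, 2), (1, 3), (2, 4), (2, 6), (3, 6), (4, 12), (6, 12)]
```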
https://en.wikipedia.org/wiki/Hasse_diagram
The Hasse–Minkowski theorem is a fundamental result in number theory which states that two quadratic forms over a number field are equivalent if and only if they are equivalent locally at all places , i.e. equivalent over every topological completion of the field (which may be real , complex , or p-adic ). A related result is that a quadratic space over a number field is isotropic if and only if it is isotropic locally everywhere, or equivalently, that a quadratic form over a number field nontrivially represents zero if and only if this holds for all completions of the field. The theorem was proved in the case of the field of rational numbers by Hermann Minkowski and generalized to number fields by Helmut Hasse . The same statement holds even more generally for all global fields . The importance of the Hasse–Minkowski theorem lies in the novel paradigm it presented for answering arithmetical questions: in order to determine whether an equation of a certain type has a solution in rational numbers, it is sufficient to test whether it has solutions over complete fields of real and p -adic numbers, where one can apply analytic techniques such as Newton's method and its p -adic analogue, Hensel's lemma . This is the first significant example of a local-global principle , one of the most fundamental techniques in arithmetic geometry . The Hasse–Minkowski theorem reduces the problem of classifying quadratic forms over a number field K up to equivalence to a set of analogous but much simpler questions over local fields . Basic invariants of a nonsingular quadratic form are its dimension , which is a positive integer, and its discriminant modulo the squares in K , which is an element of the multiplicative group K * / K *2 . In addition, for every place v of K , there is an invariant coming from the completion K v . Depending on the choice of v , this completion may be the real numbers R , the complex numbers C , or a p-adic number field, each of which has its own kinds of invariants. These invariants must satisfy some compatibility conditions: a parity relation (the sign of the discriminant must match the negative index of inertia) and a product formula (a local–global relation). Conversely, for every set of invariants satisfying these relations, there is a quadratic form over K with these invariants.
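As a small worked illustration of the local-global principle (a sketch; the specific form is our choice of example, not drawn from the article): the form x² + y² − 3z² represents zero nontrivially over the rationals if and only if it does so over R and over every Q_p. Over Q₃ it fails: any nontrivial 3-adic zero could be scaled to a primitive one, whose reduction modulo 9 would be a primitive solution there, and an exhaustive search shows none exists.

```python
# Search for a primitive solution of x^2 + y^2 = 3*z^2 modulo 9,
# i.e. one with x, y, z not all divisible by 3. A nontrivial zero
# over the 3-adic numbers would reduce to such a triple.
solutions = [
    (x, y, z)
    for x in range(9) for y in range(9) for z in range(9)
    if (x * x + y * y - 3 * z * z) % 9 == 0
    and not (x % 3 == 0 and y % 3 == 0 and z % 3 == 0)
]
print(solutions)  # [] -- no local zero at p = 3, hence no rational zero
```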
https://en.wikipedia.org/wiki/Hasse–Minkowski_theorem
In mathematics, a Hasse–Schmidt derivation is an extension of the notion of a derivation . The concept was introduced by Schmidt & Hasse (1937) . For a (not necessarily commutative nor associative) ring B and a B - algebra A , a Hasse–Schmidt derivation is a map of B -algebras {\displaystyle D:A\to A[[t]]} taking values in the ring of formal power series with coefficients in A . This definition is found in several places, such as Gatto & Salehyan (2016 , §3.4), which also contains the following example: for A the ring of infinitely differentiable functions (defined on, say, R n ) and B = R , the map {\displaystyle f\mapsto \sum _{k\geq 0}{\frac {f^{(k)}}{k!}}t^{k}} (with f ( k ) the k -th derivative) is a Hasse–Schmidt derivation, as follows from applying the Leibniz rule iteratively. Hazewinkel (2012) shows that a Hasse–Schmidt derivation is equivalent to an action of the bialgebra of noncommutative symmetric functions in countably many variables Z 1 , Z 2 , ...: the part {\displaystyle D_{i}:A\to A} of D which picks out the coefficient of {\displaystyle t^{i}} is the action of the indeterminate Z i . Hasse–Schmidt derivations on the exterior algebra {\displaystyle A=\bigwedge M} of some B -module M have been studied by Gatto & Salehyan (2016 , §4). Basic properties of derivations in this context lead to a conceptual proof of the Cayley–Hamilton theorem . See also Gatto & Scherbak (2015) .
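The defining condition is that D is multiplicative: D(fg) = D(f) D(g) in A[[t]]. For the smooth-function example this is just the Leibniz rule collected into a power series, which can be sanity-checked with SymPy (a sketch; the truncation order N and the name D are our choices, and we work in one variable):

```python
import sympy as sp

x, t = sp.symbols('x t')
N = 5  # truncation order standing in for the formal power series

def D(f):
    """Truncated Hasse-Schmidt derivation: f -> sum_k f^(k)/k! * t^k."""
    return sum(sp.diff(f, x, k) / sp.factorial(k) * t**k for k in range(N))

f, g = sp.sin(x), x**3 + 1
lhs, rhs = sp.expand(D(f * g)), sp.expand(D(f) * D(g))
# The coefficients of t^k must agree for all k < N, i.e. modulo t^N.
assert all(sp.simplify(lhs.coeff(t, k) - rhs.coeff(t, k)) == 0 for k in range(N))
print("D(fg) == D(f) D(g) up to order", N)
```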
https://en.wikipedia.org/wiki/Hasse–Schmidt_derivation
In organic chemistry , the Hass–Bender oxidation (also called the Hass–Bender carbonyl synthesis [ 1 ] ) is an organic oxidation reaction that converts benzyl halides into benzaldehydes using the sodium salt of 2-nitropropane as the oxidant. [ 2 ] The reaction is named for Henry B. Hass and Myron L. Bender , who first reported it in 1949. [ 3 ] The process begins with the deprotonation of 2-nitropropane at the α carbon to form a nitronate . This nitronate then displaces the halide of the benzyl halide in an S N 2 reaction. Unlike in the nitroaldol reaction , where the deprotonated carbon of the nitroalkyl group is the nucleophilic atom, here it is an oxygen of the nitro group itself that attacks the benzylic carbon. [ 4 ] The O -benzyl intermediate then undergoes a pericyclic reaction to produce a benzaldehyde, with acetone oxime as a byproduct. Although originally developed for benzyl compounds, the reaction also works for allyl halides, giving the respective α,β- enones and enals . [ 5 ] This organic chemistry article is a stub . You can help Wikipedia by expanding it .
https://en.wikipedia.org/wiki/Hass–Bender_oxidation
A "hat" ( circumflex (Λ†)), placed over a symbol is a mathematical notation with various uses. In statistics , a circumflex (Λ†), called a "hat", is used to denote an estimator or an estimated value. [ 1 ] For example, in the context of errors and residuals , the "hat" over the letter Ξ΅ ^ {\displaystyle {\hat {\varepsilon }}} indicates an observable estimate (the residuals) of an unobservable quantity called Ξ΅ {\displaystyle \varepsilon } (the statistical errors). Another example of the hat operator denoting an estimator occurs in simple linear regression . Assuming a model of y i = Ξ² 0 + Ξ² 1 x i + Ξ΅ i {\displaystyle y_{i}=\beta _{0}+\beta _{1}x_{i}+\varepsilon _{i}} , with observations of independent variable data x i {\displaystyle x_{i}} and dependent variable data y i {\displaystyle y_{i}} , the estimated model is of the form y ^ i = Ξ² ^ 0 + Ξ² ^ 1 x i {\displaystyle {\hat {y}}_{i}={\hat {\beta }}_{0}+{\hat {\beta }}_{1}x_{i}} where βˆ‘ i ( y i βˆ’ y ^ i ) 2 {\displaystyle \sum _{i}(y_{i}-{\hat {y}}_{i})^{2}} is commonly minimized via least squares by finding optimal values of Ξ² ^ 0 {\displaystyle {\hat {\beta }}_{0}} and Ξ² ^ 1 {\displaystyle {\hat {\beta }}_{1}} for the observed data. In statistics, the hat matrix H projects the observed values y of response variable to the predicted values Ε· : In screw theory , one use of the hat operator is to represent the cross product operation. Since the cross product is a linear transformation , it can be represented as a matrix . The hat operator takes a vector and transforms it into its equivalent matrix. For example, in three dimensions, In mathematics, a unit vector in a normed vector space is a vector (often a spatial vector) of length 1. A unit vector is often denoted by a lowercase letter with a circumflex, or "hat", as in v ^ {\displaystyle {\hat {\mathbf {v} }}} (pronounced "v-hat"). [ 2 ] [ 1 ] This is especially common in physics context. The Fourier transform of a function f {\displaystyle f} is traditionally denoted by f ^ {\displaystyle {\hat {f}}} . In quantum mechanics, operators are denoted with hat notation. For instance, see the time-independent SchrΓΆdinger equation, where the Hamiltonian operator is denoted H ^ {\displaystyle {\hat {H}}} . H ^ ψ = E ψ {\displaystyle {\hat {H}}\psi =E\psi } This algebra -related article is a stub . You can help Wikipedia by expanding it .
https://en.wikipedia.org/wiki/Hat_notation
C 4 carbon fixation or the Hatch–Slack pathway is one of three known photosynthetic processes of carbon fixation in plants. It owes its name to its discovery in the 1960s by Marshall Davidson Hatch and Charles Roger Slack . [ 1 ] C 4 fixation is an addition to the ancestral and more common C 3 carbon fixation . The main carboxylating enzyme in C 3 photosynthesis, called RuBisCO , catalyses two distinct reactions using either CO 2 (carboxylation) or oxygen (oxygenation) as a substrate. RuBisCO oxygenation gives rise to phosphoglycolate , which is toxic and requires the expenditure of energy to recycle through photorespiration . C 4 photosynthesis reduces photorespiration by concentrating CO 2 around RuBisCO. To enable RuBisCO to work in a cellular environment where there is a lot of carbon dioxide and very little oxygen, C 4 leaves generally contain two partially isolated compartments called mesophyll cells and bundle-sheath cells. CO 2 is initially fixed in the mesophyll cells in a reaction catalysed by the enzyme PEP carboxylase , in which the three-carbon phosphoenolpyruvate (PEP) reacts with CO 2 to form the four-carbon oxaloacetic acid (OAA). OAA can then be reduced to malate or transaminated to aspartate . These intermediates diffuse to the bundle sheath cells, where they are decarboxylated, creating a CO 2 -rich environment around RuBisCO and thereby suppressing photorespiration. The resulting pyruvate (PYR), together with about half of the phosphoglycerate (PGA) produced by RuBisCO, diffuses back to the mesophyll. PGA is then chemically reduced and diffuses back to the bundle sheath to complete the reductive pentose phosphate cycle (RPP). This exchange of metabolites is essential for C 4 photosynthesis to work. The additional biochemical steps require more energy in the form of ATP to regenerate PEP, but concentrating CO 2 allows high rates of photosynthesis at higher temperatures. The higher CO 2 concentration overcomes the reduction of gas solubility with temperature ( Henry's law ). The CO 2 concentrating mechanism also maintains high gradients of CO 2 concentration across the stomatal pores. This means that C 4 plants generally have lower stomatal conductance , reduced water losses and higher water-use efficiency . [ 2 ] C 4 plants are also more efficient in using nitrogen, since PEP carboxylase is cheaper to make than RuBisCO. [ 3 ] However, since the C 3 pathway does not require extra energy for the regeneration of PEP, it is more efficient in conditions where photorespiration is limited, typically at low temperatures and in the shade. [ 4 ] The first experiments indicating that some plants do not use C 3 carbon fixation but instead produce malate and aspartate in the first step of carbon fixation were done in the 1950s and early 1960s by Hugo Peter Kortschak and Yuri Karpilov . [ 5 ] [ 6 ] The C 4 pathway was elucidated by Marshall Davidson Hatch and Charles Roger Slack , in Australia, in 1966. [ 1 ] While Hatch and Slack originally referred to the pathway as the "C 4 dicarboxylic acid pathway", it is sometimes called the Hatch–Slack pathway. [ 6 ] C 4 plants often possess a characteristic leaf anatomy called kranz anatomy , from the German word for wreath . Their vascular bundles are surrounded by two rings of cells; the inner ring, called bundle sheath cells , contains starch -rich chloroplasts lacking grana , which differ from those in the mesophyll cells present as the outer ring. Hence, the chloroplasts are called dimorphic.
The primary function of kranz anatomy is to provide a site in which CO 2 can be concentrated around RuBisCO, thereby avoiding photorespiration . Mesophyll and bundle sheath cells are connected through numerous cytoplasmic sleeves called plasmodesmata , whose permeability at leaf level is called bundle sheath conductance. A layer of suberin [ 7 ] is often deposited at the level of the middle lamella (the tangential interface between mesophyll and bundle sheath) in order to reduce the apoplastic diffusion of CO 2 (called leakage). The carbon concentration mechanism in C 4 plants distinguishes their isotopic signature from other photosynthetic organisms. Although most C 4 plants exhibit kranz anatomy, there are, however, a few species that operate a limited C 4 cycle without any distinct bundle sheath tissue. Suaeda aralocaspica , Bienertia cycloptera , Bienertia sinuspersici and Bienertia kavirense (all chenopods ) are terrestrial plants that inhabit dry, salty depressions in the deserts of the Middle East . These plants have been shown to operate single-cell C 4 CO 2 -concentrating mechanisms, which are unique among the known C 4 mechanisms. [ 8 ] [ 9 ] [ 10 ] [ 11 ] Although the cytology of both genera differs slightly, the basic principle is that fluid-filled vacuoles are employed to divide the cell into two separate areas. Carboxylation enzymes in the cytosol are separated from decarboxylase enzymes and RuBisCO in the chloroplasts, and a diffusive barrier lies between the chloroplasts (which contain RuBisCO) and the cytosol. This enables a bundle-sheath-type area and a mesophyll-type area to be established within a single cell. Although this does allow a limited C 4 cycle to operate, it is relatively inefficient, with much leakage of CO 2 from around RuBisCO. There is also evidence of inducible C 4 photosynthesis by the non-kranz aquatic macrophyte Hydrilla verticillata under warm conditions, although the mechanism by which CO 2 leakage from around RuBisCO is minimised is currently uncertain. [ 12 ] In C 3 plants , the first step in the light-independent reactions of photosynthesis is the fixation of CO 2 by the enzyme RuBisCO to form 3-phosphoglycerate . However, RuBisCO has a dual carboxylase and oxygenase activity. Oxygenation results in part of the substrate being oxidized rather than carboxylated , resulting in loss of substrate and consumption of energy, in what is known as photorespiration . Oxygenation and carboxylation are competitive , meaning that the rate of the reactions depends on the relative concentrations of oxygen and CO 2 . In order to reduce the rate of photorespiration , C 4 plants increase the concentration of CO 2 around RuBisCO. To do so, two partially isolated compartments differentiate within the leaves, the mesophyll and the bundle sheath . Instead of direct fixation by RuBisCO, CO 2 is initially incorporated into a four-carbon organic acid (either malate or aspartate ) in the mesophyll. The organic acids then diffuse through plasmodesmata into the bundle sheath cells. There, they are decarboxylated, creating a CO 2 -rich environment. The chloroplasts of the bundle sheath cells convert this CO 2 into carbohydrates by the conventional C 3 pathway . There is large variability in the biochemical features of C 4 assimilation, and it is generally grouped in three subtypes, differentiated by the main enzyme used for decarboxylation ( NADP-malic enzyme , NADP-ME; NAD-malic enzyme , NAD-ME; and PEP carboxykinase , PEPCK).
Since PEPCK is often recruited atop NADP-ME or NAD-ME, it has been proposed to classify the biochemical variability in two subtypes. For instance, maize and sugarcane use a combination of NADP-ME and PEPCK, millet uses preferentially NAD-ME, and Megathyrsus maximus uses preferentially PEPCK. The first step in the NADP-ME type C 4 pathway is the conversion of pyruvate (Pyr) to phosphoenolpyruvate (PEP) by the enzyme pyruvate phosphate dikinase (PPDK). This reaction requires inorganic phosphate and ATP plus pyruvate, producing PEP, AMP , and inorganic pyrophosphate (PP i ). The next step is the carboxylation of PEP by the PEP carboxylase enzyme (PEPC), producing oxaloacetate . Both of these steps occur in the mesophyll cells: PEPC has a low K M for HCO 3 − , and hence a high affinity, and is not confounded by O 2 , so it will work even at low concentrations of CO 2 . The product is usually converted to malate (M), which diffuses to the bundle-sheath cells surrounding a nearby vein . Here, it is decarboxylated by the NADP-malic enzyme (NADP-ME) to produce CO 2 and pyruvate . The CO 2 is fixed by RuBisCO to produce phosphoglycerate (PGA), while the pyruvate is transported back to the mesophyll cell, together with about half of the phosphoglycerate (PGA). This PGA is chemically reduced in the mesophyll and diffuses back to the bundle sheath, where it enters the conversion phase of the Calvin cycle . For each CO 2 molecule exported to the bundle sheath, the malate shuttle transfers two electrons, and therefore reduces the demand for reducing power in the bundle sheath. In the NAD-ME type, the OAA produced by PEPC is transaminated by aspartate aminotransferase to aspartate (ASP), which is the metabolite diffusing to the bundle sheath. In the bundle sheath, ASP is transaminated back to OAA, which then undergoes a futile reduction and oxidative decarboxylation to release CO 2 . The resulting pyruvate is transaminated to alanine, which diffuses to the mesophyll. Alanine is finally transaminated to pyruvate (PYR), which can be regenerated to PEP by PPDK in the mesophyll chloroplasts. This cycle bypasses the reaction of malate dehydrogenase in the mesophyll and therefore does not transfer reducing equivalents to the bundle sheath. In the PEPCK type, the OAA produced by aspartate aminotransferase in the bundle sheath is decarboxylated to PEP by PEPCK. The fate of PEP is still debated. The simplest explanation is that PEP would diffuse back to the mesophyll to serve as a substrate for PEPC. Because PEPCK uses only one ATP molecule, the regeneration of PEP through PEPCK would theoretically increase the photosynthetic efficiency of this subtype; however, this has never been measured. An increase in the relative expression of PEPCK has been observed under low light, and it has been proposed to play a role in balancing energy requirements between mesophyll and bundle sheath. While in C 3 photosynthesis each chloroplast is capable of completing light reactions and dark reactions , C 4 chloroplasts differentiate into two populations, contained in the mesophyll and bundle sheath cells. The division of the photosynthetic work between two types of chloroplasts results inevitably in a prolific exchange of intermediates between them. The fluxes are large and can be up to ten times the rate of gross assimilation. [ 13 ] The type of metabolite exchanged and the overall rate will depend on the subtype. To reduce product inhibition of photosynthetic enzymes (for instance PEPC), concentration gradients need to be as low as possible.
This requires increasing the conductance of metabolites between mesophyll and bundle sheath, but this would also increase the retro-diffusion of CO 2 out of the bundle sheath, resulting in an inherent and inevitable trade-off in the optimisation of the CO 2 concentrating mechanism. To meet the NADPH and ATP demands in the mesophyll and bundle sheath, light needs to be harvested and shared between two distinct electron transfer chains. ATP may be produced in the bundle sheath mainly through cyclic electron flow around Photosystem I , or in the mesophyll mainly through linear electron flow, depending on the light available in the bundle sheath or in the mesophyll. The relative requirement of ATP and NADPH in each type of cell will depend on the photosynthetic subtype. [ 13 ] The apportioning of excitation energy between the two cell types will influence the availability of ATP and NADPH in the mesophyll and bundle sheath. For instance, green light is not strongly absorbed by mesophyll cells and can preferentially excite bundle sheath cells, or vice versa for blue light. [ 14 ] Because bundle sheaths are surrounded by mesophyll, light harvesting in the mesophyll will reduce the light available to reach bundle sheath cells. Also, the bundle sheath size limits the amount of light that can be harvested. [ 15 ] Different formulations of efficiency are possible depending on which outputs and inputs are considered. For instance, average quantum efficiency is the ratio between gross assimilation and either absorbed or incident light intensity. Large variability of measured quantum efficiency is reported in the literature between plants grown in different conditions and classified in different subtypes, but the underpinnings are still unclear. One of the components of quantum efficiency is the efficiency of dark reactions, biochemical efficiency, which is generally expressed in reciprocal terms as the ATP cost of gross assimilation (ATP/GA). In C 3 photosynthesis, ATP/GA depends mainly on CO 2 and O 2 concentration at the carboxylating sites of RuBisCO. When CO 2 concentration is high and O 2 concentration is low, photorespiration is suppressed and C 3 assimilation is fast and efficient, with ATP/GA approaching the theoretical minimum of 3. In C 4 photosynthesis, CO 2 concentration at the RuBisCO carboxylating sites is mainly the result of the operation of the CO 2 concentrating mechanisms, which cost circa an additional 2 ATP/GA but make efficiency relatively insensitive to external CO 2 concentration in a broad range of conditions. Biochemical efficiency depends mainly on the speed of CO 2 delivery to the bundle sheath, and will generally decrease under low light when the PEP carboxylation rate decreases, lowering the ratio of CO 2 /O 2 concentration at the carboxylating sites of RuBisCO. The key parameter defining how much efficiency will decrease under low light is bundle sheath conductance. Plants with higher bundle sheath conductance will be facilitated in the exchange of metabolites between the mesophyll and bundle sheath and will be capable of high rates of assimilation under high light. However, they will also have high rates of CO 2 retro-diffusion from the bundle sheath (called leakage), which will increase photorespiration and decrease biochemical efficiency under dim light. This represents an inherent and inevitable trade-off in the operation of C 4 photosynthesis. C 4 plants have an outstanding capacity to attune bundle sheath conductance.
Interestingly, bundle sheath conductance is downregulated in plants grown under low light [ 16 ] and in plants grown under high light subsequently transferred to low light, as occurs in crop canopies where older leaves are shaded by new growth. [ 17 ] C 4 plants have a competitive advantage over plants possessing the more common C 3 carbon fixation pathway under conditions of drought , high temperatures , and nitrogen or CO 2 limitation. When grown in the same environment, at 30 °C, C 3 grasses lose approximately 833 molecules of water per CO 2 molecule that is fixed, whereas C 4 grasses lose only 277. This increased water use efficiency of C 4 grasses means that soil moisture is conserved, allowing them to grow for longer in arid environments. [ 18 ] C 4 carbon fixation has evolved on at least 62 independent occasions in 19 different families of plants, making it a prime example of convergent evolution . [ 19 ] [ 20 ] This convergence may have been facilitated by the fact that many potential evolutionary pathways to a C 4 phenotype exist, many of which involve initial evolutionary steps not directly related to photosynthesis. [ 21 ] C 4 plants arose around 35 million years ago [ 20 ] during the Oligocene (precisely when is difficult to determine) and became ecologically significant in the early Miocene , around 21 million years ago . [ 22 ] C 4 metabolism in grasses originated when their habitat migrated from the shady forest undercanopy to more open environments, [ 23 ] where the high sunlight gave it an advantage over the C 3 pathway. [ 24 ] Drought was not necessary for its innovation; rather, the increased parsimony in water use was a byproduct of the pathway and allowed C 4 plants to more readily colonize arid environments. [ 24 ] Today, C 4 plants represent about 5% of Earth's plant biomass and 3% of its known plant species. [ 18 ] [ 25 ] Despite this scarcity, they account for about 23% of terrestrial carbon fixation. [ 26 ] [ 27 ] Increasing the proportion of C 4 plants on earth could assist biosequestration of CO 2 and represent an important climate change avoidance strategy. Present-day C 4 plants are concentrated in the tropics and subtropics (below latitudes of 45 degrees), where the high air temperature increases rates of photorespiration in C 3 plants. About 8,100 plant species use C 4 carbon fixation, which represents about 3% of all terrestrial species of plants. [ 27 ] [ 28 ] All these 8,100 species are angiosperms . C 4 carbon fixation is more common in monocots than in dicots , with 40% of monocots using the C 4 pathway [ clarification needed ] , compared with only 4.5% of dicots. Despite this, only three families of monocots use C 4 carbon fixation, compared to 15 dicot families. Of the monocot clades containing C 4 plants, the grass ( Poaceae ) species use the C 4 photosynthetic pathway most. 46% of grasses are C 4 and together account for 61% of C 4 species. C 4 has arisen independently in the grass family some twenty or more times, in various subfamilies, tribes, and genera, [ 29 ] including the Andropogoneae tribe which contains the food crops maize , sugar cane , and sorghum . Various kinds of millet are also C 4 . [ 30 ] [ 31 ] Of the dicot clades containing C 4 species, the order Caryophyllales contains the most species. Of the families in the Caryophyllales, the Chenopodiaceae use C 4 carbon fixation the most, with 550 out of 1,400 species using it. About 250 of the 1,000 species of the related Amaranthaceae also use C 4 .
[ 18 ] [ 32 ] Members of the sedge family Cyperaceae , and members of numerous families of eudicots – including Asteraceae (the daisy family), Brassicaceae (the cabbage family), and Euphorbiaceae (the spurge family) – also use C 4 . No large trees (above 15 m in height) use C 4 ; [ 33 ] however, a number of smaller trees and shrubs under 10 m do: six species of Euphorbiaceae , all native to Hawaii, and two species of Amaranthaceae growing in deserts of the Middle East and Asia. [ 34 ] Given the advantages of C 4 , a group of scientists from institutions around the world is working on the C 4 Rice Project to produce a strain of rice , naturally a C 3 plant, that uses the C 4 pathway, by studying the C 4 plants maize and Brachypodium . [ 35 ] As rice is the world's most important human food – it is the staple food for more than half the planet – having rice that is more efficient at converting sunlight into grain could have significant global benefits towards improving food security . The team claims C 4 rice could produce up to 50% more grain, and be able to do so with less water and nutrients. [ 36 ] [ 37 ] [ 38 ] The researchers have already identified genes needed for C 4 photosynthesis in rice and are now looking towards developing a prototype C 4 rice plant. In 2012, the Government of the United Kingdom along with the Bill & Melinda Gates Foundation provided US$14 million over three years towards the C 4 Rice Project at the International Rice Research Institute . [ 39 ] In 2019, the Bill & Melinda Gates Foundation granted another US$15 million to the Oxford-University-led C 4 Rice Project. The goal of the five-year project is to have experimental field plots up and running in Taiwan by 2024. [ 40 ] C 2 photosynthesis, an intermediate step between C 3 and kranz C 4 , may be preferred over C 4 for rice conversion. The simpler system is less optimized for high light and high temperature conditions than C 4 , but has the advantage of requiring fewer steps of genetic engineering and performing better than C 3 under all temperatures and light levels. [ 41 ] In 2021, the UK Government provided £1.2 million for studying C 2 engineering. [ 42 ]
https://en.wikipedia.org/wiki/Hatch-Slack_pathway
Hatch marks (also called hash marks or tick marks) are a form of mathematical notation. They are used in three ways: as abbreviations for common units of measurement, as unit-and-value marks, and as congruence notation. Hatch marks are frequently used as an abbreviation of some common units of measurement. In regard to distance, a single hatch mark indicates feet, and two hatch marks indicate inches. In regard to time, a single hatch mark indicates minutes, and two hatch marks indicate seconds. In geometry and trigonometry, such marks are used following an elevated circle to indicate degrees, minutes, and seconds – (°) (′) (″). Hatch marks can probably be traced to hatching in art works, where the pattern of the hatch marks represents a unique tone or hue. Different patterns indicate different tones. Unit-and-value hatch marks are short vertical line segments which mark distances. They are seen on rulers and number lines. The marks are parallel to each other in an evenly spaced manner, and the distance between adjacent marks is one unit. Longer line segments are used for integers and natural numbers; shorter line segments are used for fractions. Hatch marks provide a visual clue as to the value of specific points on the number line, even if some hatch marks are not labeled with a number. Hatch marks are typically seen in number theory and geometry. In geometry, hatch marks are used to denote equal measures of angles, arcs, line segments, or other elements. [1][2] Hatch marks for congruence notation are in the style of tally marks or of Roman numerals – with some qualifications. These marks are without serifs, and some patterns are not used. For example, the numbers I, II, III, V, and X are used, but IV and VI are not, since a rotation of 180 degrees can make a 4 easily confused with a 6. For example, if two triangles are drawn, the first pair of congruent sides can be marked with a single hatch mark on each, and the second pair of congruent sides with two hatch marks each. The patterns are not alike: one pair uses one mark while the other pair uses two marks (Figure 1). This use of pattern makes it clear which sides are the same length, even if the sides cannot be measured. Even if two sides do not appear to be congruent, as long as they are marked with the same number of hatch marks, they are to be taken as congruent. Note that the inverse situation should not be assumed. That is, while sides that are hatch marked identically must be assumed to be congruent, it does not follow that sides hatch marked differently must be incongruent; the different hatch marks simply signal that the length measurements may (in this case) be considered to be independent of each other. So, for example, while we are not allowed to conclude that the triangles in the accompanying figure must be isosceles (or even perhaps equilateral) triangles, we remain obliged to allow that they could be either of those things. Line charts may sometimes use hatch marks as graphed points. In the early days of computers, monitors and printers could only make charts using the characters available on a common typewriter. To graph a line chart of sales over time, symbols such as *, x, or | were used to mark points, and various characters were used to mark the lines connecting them. While computers have advanced considerably, it is still not unusual to see x or | used as the points of interest (or points of change) on a graph.
https://en.wikipedia.org/wiki/Hatch_mark
Hatley–Pirbhai modeling is a system modeling technique based on the input–process–output model (IPO model), which extends the IPO model by adding user interface processing and maintenance and self-testing processing. [1] The five components – inputs, outputs, user interface, maintenance, and processing – are added to a system model template, allowing the system to be modeled and its parts properly assigned to the processing regions. [1] This modeling technique allows for the creation of a hierarchy of detail, of which the top level should consist of a context diagram. [1] The context diagram serves the purpose of "establish[ing] the information boundary between the system being implemented and the environment in which the system is to operate." [1] Further refinement of the context diagram requires analysis of the system designated by the shaded rectangle through the development of a system functional flow block diagram. [1] The flows within the model represent material, energy, data, or information. [2]
https://en.wikipedia.org/wiki/Hatley–Pirbhai_modeling
The Hatta number (Ha) was developed by Shirôji Hatta (1895–1973 [1]) in 1932, [2][3] who taught at Tohoku University from 1925 to 1958. [1][2] It is a dimensionless parameter that compares the rate of reaction in a liquid film to the rate of diffusion through the film. [4] It is related to one of the many Damköhler numbers, Hatta being the square root of a Damköhler number of the second type. Conceptually, the Hatta number bears a strong resemblance to the Thiele modulus for diffusion limitations in porous catalysts, which is also the square root of a Damköhler number. For a second-order reaction (r_A = k₂C_BC_A), Hatta is defined via {\displaystyle Ha^{2}={{k_{2}C_{A,i}C_{B,bulk}\delta _{L}} \over {{\frac {D_{A}}{\delta _{L}}}\ C_{A,i}}}={{k_{2}C_{B,bulk}D_{A}} \over ({\frac {D_{A}}{\delta _{L}}})^{2}}={{k_{2}C_{B,bulk}D_{A}} \over {{k_{L}}^{2}}}} For a reaction of order m in A and order n in B: {\displaystyle Ha={{\sqrt {{\frac {2}{{m}+1}}k_{m,n}{C_{A,i}}^{m-1}C_{B,bulk}^{n}{D}_{A}}} \over {{k}_{L}}}} For gas–liquid absorption with chemical reaction, a high Hatta number indicates that the reaction is much faster than diffusion, a situation usually referred to as the "fast reaction" or "chemically enhanced" regime. In this case, the reaction occurs within a thin (hypothetical) film, and the surface area and the Hatta number itself limit the overall rate. [5] For Ha > 2, with a large excess of B, the maximum rate of reaction assumes that the liquid film is saturated with gas at the interfacial concentration C_{A,i} and that the bulk concentration of A remains zero; the flux, and hence the rate of reaction, becomes proportional to the mass transfer coefficient k_L and the Hatta number: k_L C_{A,i} Ha. Conversely, a Hatta number smaller than unity suggests that the reaction is the limiting factor and takes place in the bulk fluid; the concentration of A then needs to be calculated taking the mass transfer limitation, without enhancement, into account. [5]
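To make the definition concrete, here is a minimal Python sketch (with illustrative, made-up property values, not taken from the article) that evaluates Ha for a second-order reaction and applies the regime thresholds quoted above:

```python
from math import sqrt

# Hatta number for a second-order reaction A + B -> products:
# Ha^2 = k2 * C_B_bulk * D_A / kL^2. All values below are illustrative.
k2 = 5.0e3        # second-order rate constant, m^3/(mol*s)
C_B_bulk = 1.0e2  # bulk concentration of B, mol/m^3
D_A = 1.5e-9      # diffusivity of A in the liquid, m^2/s
kL = 1.0e-4       # liquid-side mass transfer coefficient, m/s

Ha = sqrt(k2 * C_B_bulk * D_A) / kL
if Ha > 2:
    regime = "fast reaction: A is consumed within the liquid film (chemically enhanced)"
elif Ha < 1:
    regime = "slow reaction: reaction takes place in the bulk liquid"
else:
    regime = "intermediate regime"
print(f"Ha = {Ha:.1f} -> {regime}")  # Ha = 273.9 -> fast reaction ...
```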
https://en.wikipedia.org/wiki/Hatta_number
The Hattersley loom was developed by George Hattersley and Sons of Keighley, West Yorkshire, England. The company had been started by Richard Hattersley after 1784, with his son, George Hattersley, later entering the business alongside him. The company developed a number of innovative looms, of which the Hattersley Standard Loom – developed in 1921 – was a great success. The Hattersley Standard Loom was designed and built in 1921. Thousands of models were expected to be sold, which would bring considerable financial success to the company. [1] After the recapitalisation boom of 1919, cotton yarn production peaked in 1926 and further investment was sparse. Rayon, an artificial silk, was invented in the 1930s in nearby Silsden, and the Hattersley Silk Loom was adapted to weave this new fabric. The plain Hattersley Domestic Loom was specially developed for cottage or home use and designed to replace the wooden handloom; the Domestic is similar in construction to a power loom. It was introduced c. 1900, and the makers claimed that a speed of 160 picks per minute could easily be attained with 2 to 8 shafts, weaving a variety of fabrics. Because foot pedals, or treadles, operate the loom, it is still classed as a handloom [according to whom?], but it is much easier and faster to weave with, as all the motions of the loom are connected via crankshaft and gear wheels. The cast metal chair, manufactured along with the loom, can be raised or lowered to suit, and the seat rocks forward and back as the weaver treadles the loom. [1] There is an example in the Bradford Industrial Museum. There are only two known examples [2] of the Hattersley Domestic Weaving System in operation today. One belongs to the South African homeware textile producer Mungo, whose domestic Hattersley Loom can be found in use at the Mungo Mill, weaving runs of natural fibre textiles. The other is in New Zealand, in use by Roderick McLean of McLean and Company [3] in Oamaru. Artworks could be replicated en masse by use of the Hattersley Jacquard (Tapestry) Loom. For example, Sir Edwin Henry Landseer's painting Bolton Abbey in Ye Olden Times was produced in tapestry form by a Jacquard Loom at a Franco-British exhibition in 1908. [4] There is a Hattersley Jacquard (tapestry) loom located at Queen Street Mill in Burnley. [4]
https://en.wikipedia.org/wiki/Hattersley_loom
Haulpak was a very successful line of off-highway mining trucks. The name was used from 1953 until around 1999; the line continues under the Komatsu name. The name was adopted as Wabco Haulpak when R. G. LeTourneau's business was bought by Wabco, and the Haulpak name continued through Wabco's purchase by American Standard, the operation's purchase by Dresser Industries, the merger into Komatsu-Dresser, and for a time after Komatsu took complete ownership from Dresser. The origins of the Haulpak line began with the purchase of R. G. LeTourneau's construction machinery business in 1953 by Westinghouse Air Brake Company. Wabco had traditionally been a manufacturer of railway air brake systems, but ventured into construction machinery with the purchase of LeRoi air tools and industrial drills in 1952. The subsequent purchase of R. G. LeTourneau's construction machinery line gave Wabco a comprehensive range of machinery, including scrapers, rubber-tyred dozers and other attachments. Wabco subsequently added motor graders to its product line by purchasing J.D. Adams in 1955, and thereafter front end loaders, with the purchase of Scoopmobile. Wabco recognised the importance of the off-highway truck market and hired Ralph H. Kress to design a line of haul trucks in-house. Kress incorporated many new design features which were trend-setting, and eventually Caterpillar was to offer him a position designing their range of haulers. [1] The Haulpak line of mining and quarry trucks was the best-performing sector for Wabco for the entire time they owned it, and eventually the scrapers, wheel dozers, graders and front end loaders would be discontinued from the Wabco catalogue. In 1968 Wabco became part of American Standard Company (known for bathroom fittings), and in 1984 it became part of Dresser Industries. After the Komatsu Limited–Dresser joint venture (KDC) was formed in 1988, the Haulpak truck line was again (partly) under new ownership, although by 1994 Komatsu had purchased all remaining shares of KDC, making it a wholly owned subsidiary. The Haulpak name was quietly discontinued around 1998–1999, and the new trucks were then known as Komatsu machines. [1] While the smaller Komatsu haul trucks are distinctly Japanese in design, the current line of larger trucks can trace their heritage back to American Haulpak design roots. [1]
https://en.wikipedia.org/wiki/Haulpak
In mathematics, a Hausdorff gap consists roughly of two collections of sequences of integers such that there is no sequence lying between the two collections. The first example was found by Hausdorff (1909). The existence of Hausdorff gaps shows that the partially ordered set of possible growth rates of sequences is not complete. Let ω^ω be the set of all sequences of non-negative integers, and define f < g to mean lim(g(n) − f(n)) = +∞. If X is a poset and κ and λ are cardinals, then a (κ, λ)-pregap in X is a set of elements f_α for α ∈ κ and a set of elements g_β for β ∈ λ such that: the f_α are increasing in α, the g_β are decreasing in β, and f_α < g_β for all α and β. A pregap is called a gap if it satisfies the additional condition: there is no element h of X with f_α < h < g_β for all α and β. A Hausdorff gap is an (ω₁, ω₁)-gap in ω^ω such that for every countable ordinal α and every natural number n there are only a finite number of β less than α such that for all k > n we have f_α(k) < g_β(k). There are some variations of these definitions, with the ordered set ω^ω replaced by a similar set. For example, one can redefine f < g to mean f(n) < g(n) for all but finitely many n. Another variation introduced by Hausdorff (1936) is to replace ω^ω by the set of all subsets of ω, with the order given by A < B if A has only finitely many elements not in B but B has infinitely many elements not in A. It is possible to prove in ZFC that there exist Hausdorff gaps and (b, ω)-gaps, where b is the cardinality of the smallest unbounded set in ω^ω, and that there are no (ω, ω)-gaps. The stronger open coloring axiom can rule out all types of gaps except Hausdorff gaps and those of type (κ, ω) with κ ≥ ω₂.
https://en.wikipedia.org/wiki/Hausdorff_gap
In mathematics, the Hausdorff maximal principle is an alternate and earlier formulation of Zorn's lemma proved by Felix Hausdorff in 1914 (Moore 1982:168). It states that in any partially ordered set, every totally ordered subset is contained in a maximal totally ordered subset, where "maximal" is with respect to set inclusion. In a partially ordered set, a totally ordered subset is also called a chain. Thus, the maximal principle says every chain in the set extends to a maximal chain. The Hausdorff maximal principle is one of many statements equivalent to the axiom of choice over ZF (Zermelo–Fraenkel set theory without the axiom of choice). The principle is also called the Hausdorff maximality theorem or the Kuratowski lemma (Kelley 1955:33). The Hausdorff maximal principle states that, in any partially ordered set P, every chain C₀ (i.e., a totally ordered subset) is contained in a maximal chain C (i.e., a chain that is not contained in a strictly larger chain in P). In general, there may be several maximal chains containing a given chain. An equivalent form of the Hausdorff maximal principle is that in every partially ordered set there exists a maximal chain. (Note that if the set is empty, the empty subset is a maximal chain.) This form follows from the original form, since the empty set is a chain. Conversely, to deduce the original form from this form, consider the set P′ of all chains in P containing a given chain C₀ in P. Then P′ is partially ordered by set inclusion. Thus, by the maximal principle in the above form, P′ contains a maximal chain C′. Let C be the union of C′, which is a chain in P since a union of a totally ordered set of chains is a chain. Since C contains C₀, it is an element of P′. Also, since any chain containing C is contained in C (as C is a union), C is in fact a maximal element of P′; i.e., a maximal chain in P. The proof that the Hausdorff maximal principle is equivalent to Zorn's lemma is somewhat similar to this proof. Indeed, first assume Zorn's lemma. Since a union of a totally ordered set of chains is a chain, the hypothesis of Zorn's lemma (every chain has an upper bound) is satisfied for P′, and thus P′ contains a maximal element, i.e., a maximal chain in P. Conversely, if the maximal principle holds, then P contains a maximal chain C. By the hypothesis of Zorn's lemma, C has an upper bound x in P. If y ≥ x, then C̃ = C ∪ {y} is a chain containing C, and so by maximality C̃ = C; i.e., y ∈ C, and so y = x. ◻ If A is any collection of sets, the relation "is a proper subset of" is a strict partial order on A. Suppose that A is the collection of all circular regions (interiors of circles) in the plane.
One maximal totally ordered sub-collection of A consists of all circular regions with centers at the origin. Another maximal totally ordered sub-collection consists of all circular regions bounded by circles tangent from the right to the y-axis at the origin. If (x₀, y₀) and (x₁, y₁) are two points of the plane ℝ², define (x₀, y₀) < (x₁, y₁) if y₀ = y₁ and x₀ < x₁. This is a partial ordering of ℝ² under which two points are comparable only if they lie on the same horizontal line. The maximal totally ordered sets are horizontal lines in ℝ². By the Hausdorff maximal principle, we can show every Hilbert space H contains a maximal orthonormal subset A as follows. [1] (This fact can be stated as saying that H ≃ ℓ²(A) as Hilbert spaces.) Let P be the set of all orthonormal subsets of the given Hilbert space H, which is partially ordered by set inclusion. It is nonempty, as it contains the empty set, and thus by the maximal principle it contains a maximal chain Q. Let A be the union of Q. We shall show it is a maximal orthonormal subset. First, if S, T are in Q, then either S ⊂ T or T ⊂ S. That is, any given two distinct elements in A are contained in some S in Q, and so they are orthogonal to each other (and of course, A is a subset of the unit sphere in H). Second, if B ⊋ A for some B in P, then B cannot be in Q, and so Q ∪ {B} is a chain strictly larger than Q, a contradiction. ◻ For the purpose of comparison, here is a proof of the same fact by Zorn's lemma. As above, let P be the set of all orthonormal subsets of H. If Q is a chain in P, then the union of Q is also orthonormal by the same argument as above, and so is an upper bound of Q. Thus, by Zorn's lemma, P contains a maximal element A. (So, the difference is that the maximal principle gives a maximal chain while Zorn's lemma gives a maximal element directly.) The idea of the proof is essentially due to Zermelo and is to prove the following weak form of Zorn's lemma, from the axiom of choice: [2][3] if F is a nonempty set of subsets of a set P such that the union of every subset of F that is totally ordered by inclusion again belongs to F, then F contains a maximal element with respect to inclusion. (Zorn's lemma itself also follows from this weak form.) The maximal principle follows from the above, since the set of all chains in P satisfies the above conditions. By the axiom of choice, we have a function f : 𝔓(P) − {∅} → P such that f(S) ∈ S for every nonempty S, where 𝔓(P) is the power set of P. For each C ∈ F, let C* be the set of all x ∈ P − C such that C ∪ {x} is in F.
If C* = ∅, then let C̃ = C. Otherwise, let C̃ = C ∪ {f(C*)}. Note that C is a maximal element if and only if C̃ = C. Thus, we are done if we can find a C such that C̃ = C. Fix a C₀ in F. We call a subset T ⊂ F a tower (over C₀) if: 1. C₀ is in T; 2. the union of every subset of T that is totally ordered by inclusion is in T; and 3. if C is in T, then C̃ is in T. There exists at least one tower; indeed, the set of all sets in F containing C₀ is a tower. Let T₀ be the intersection of all towers, which is again a tower. Now, we shall show T₀ is totally ordered. We say a set C is comparable in T₀ if for each A in T₀, either A ⊂ C or C ⊂ A. Let Γ be the set of all sets in T₀ that are comparable in T₀. We claim Γ is a tower. The conditions 1. and 2. are straightforward to check. For 3., let C in Γ be given, and then let U be the set of all A in T₀ such that either A ⊂ C or C̃ ⊂ A. We claim U is a tower. The conditions 1. and 2. are again straightforward to check. For 3., let A be in U. If A ⊂ C, then since C is comparable in T₀, either Ã ⊂ C or C ⊂ Ã. In the first case, Ã is in U. In the second case, we have A ⊂ C ⊂ Ã, which implies either A = C or C = Ã. (This is the moment we needed to collapse a set to an element by the axiom of choice, to define Ã.) Either way, we have that Ã is in U. Similarly, if C ⊂ A, we see that Ã is in U. Hence, U is a tower. Now, since U ⊂ T₀ and T₀ is the intersection of all towers, U = T₀, which implies C̃ is comparable in T₀; i.e., it is in Γ. This completes the proof of the claim that Γ is a tower. Finally, since Γ is a tower contained in T₀, we have T₀ = Γ, which means T₀ is totally ordered. Let C be the union of T₀. By 2., C is in T₀, and then by 3., C̃ is in T₀. Since C is the union of T₀, C̃ ⊂ C, and thus C̃ = C.
◻ The Bourbaki–Witt theorem, together with the axiom of choice, can be used to prove the Hausdorff maximal principle. Indeed, let P be a nonempty poset and let X := {C ⊆ P : C is a chain} be the set of all totally ordered subsets of P. Notice that X ≠ ∅, since P ≠ ∅ and {x} ∈ X for any x ∈ P. Also, equipped with the inclusion ⊆, X is a poset. We claim that every chain 𝒞 ⊆ X has a supremum. In order to check this, let S be the union of all C ∈ 𝒞. Clearly, C ⊆ S for all C ∈ 𝒞. Also, if U is any upper bound of 𝒞, then S ⊆ U, since by definition C ⊆ U for all C ∈ 𝒞. Now, consider the map f : X → X given by {\displaystyle f(C)\mathrel {\mathop {:} } ={\begin{cases}C,&{\text{if}}\ C\ {\text{is maximal}}\\C\cup \{g(P\setminus C)\},&{\text{if}}\ C\ {\text{is not maximal}}\end{cases}}} where g is a choice function on the nonempty subsets of P, whose existence is ensured by the axiom of choice, and the fact that P ∖ C ≠ ∅ is an immediate consequence of the non-maximality of C. Thus, C ⊆ f(C) for each C ∈ X. In view of the Bourbaki–Witt theorem, there exists an element C₀ ∈ X such that f(C₀) = C₀, and therefore C₀ is a maximal chain of P. In the case P = ∅, the empty set is trivially a maximal chain of P, as already mentioned above. ◻
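In the finite case no choice principle is needed, and the statement can be made concrete: the following small Python sketch (our own illustration, using divisibility on {1, ..., 12} as the partial order) greedily extends a given chain to a maximal one:

```python
def maximal_chain(elements, leq, start):
    """Greedily extend the chain `start` in the finite poset (elements, leq);
    in a finite poset every chain extends to a maximal chain."""
    chain = list(start)
    comparable = lambda a, b: leq(a, b) or leq(b, a)
    grew = True
    while grew:
        grew = False
        for x in elements:
            if x not in chain and all(comparable(x, c) for c in chain):
                chain.append(x)
                grew = True
    # sort the chain into increasing order for display
    return sorted(chain, key=lambda a: sum(leq(b, a) for b in elements))

divides = lambda a, b: b % a == 0  # the partial order: a <= b iff a divides b
print(maximal_chain(range(1, 13), divides, [2]))  # [1, 2, 4, 8]
```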
https://en.wikipedia.org/wiki/Hausdorff_maximal_principle
In mathematics, the Hausdorff moment problem, named after Felix Hausdorff, asks for necessary and sufficient conditions that a given sequence (m₀, m₁, m₂, ...) be the sequence of moments of some Borel measure μ supported on the closed unit interval [0, 1]. In the case m₀ = 1, this is equivalent to the existence of a random variable X supported on [0, 1] such that E[Xⁿ] = mₙ. The essential difference between this and other well-known moment problems is that this is on a bounded interval, whereas in the Stieltjes moment problem one considers a half-line [0, ∞), and in the Hamburger moment problem one considers the whole line (−∞, ∞). The Stieltjes and Hamburger moment problems, if they are solvable, may have infinitely many solutions (indeterminate moment problem), whereas a Hausdorff moment problem always has a unique solution if it is solvable (determinate moment problem). In the indeterminate case, there are infinitely many measures corresponding to the same prescribed moments, and they form a convex set. In the indeterminate case the set of polynomials may or may not be dense in the associated Hilbert space, depending on whether the measure is extremal or not; in the determinate case, the set of polynomials is dense in the associated Hilbert space. In 1921, Hausdorff showed that (m₀, m₁, m₂, ...) is such a moment sequence if and only if the sequence is completely monotonic, that is, its difference sequences satisfy {\displaystyle (-1)^{k}(\Delta ^{k}m)_{n}\geq 0} for all n, k ≥ 0. Here, Δ is the difference operator given by {\displaystyle (\Delta m)_{n}=m_{n+1}-m_{n}.} The necessity of this condition is easily seen by the identity {\displaystyle (-1)^{k}(\Delta ^{k}m)_{n}=\int _{0}^{1}x^{n}(1-x)^{k}\,d\mu (x),} which is non-negative since it is the integral of a non-negative function. For example, it is necessary to have {\displaystyle \Delta ^{4}m_{6}=m_{6}-4m_{7}+6m_{8}-4m_{9}+m_{10}\geq 0.}
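As a quick numerical illustration (our own, not part of the article), one can verify Hausdorff's complete-monotonicity criterion for the moments mₙ = 1/(n+1) of the Lebesgue measure on [0, 1]:

```python
import numpy as np

def kth_difference(m, k):
    """Apply the forward difference (Δm)_n = m_{n+1} - m_n  k times."""
    d = np.asarray(m, dtype=float)
    for _ in range(k):
        d = d[1:] - d[:-1]
    return d

# Moments of Lebesgue measure on [0, 1]: m_n = ∫ x^n dx = 1/(n + 1).
m = 1.0 / np.arange(1, 21)

# Hausdorff's criterion: (-1)^k (Δ^k m)_n >= 0 for all n and k.
print(all(((-1) ** k * kth_difference(m, k) >= 0).all() for k in range(15)))  # True
```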
https://en.wikipedia.org/wiki/Hausdorff_moment_problem
The Hausdorff paradox is a paradox in mathematics named after Felix Hausdorff. It involves the sphere S² (the surface of a 3-dimensional ball in ℝ³). It states that if a certain countable subset is removed from S², then the remainder can be divided into three disjoint subsets A, B and C such that A, B, C and B ∪ C are all congruent. In particular, it follows that on S² there is no finitely additive measure defined on all subsets such that the measure of congruent sets is equal (because this would imply that the measure of B ∪ C is simultaneously 1/3, 1/2, and 2/3 of the non-zero measure of the whole sphere). The paradox was published in Mathematische Annalen in 1914 and also in Hausdorff's book, Grundzüge der Mengenlehre, the same year. The proof of the much more famous Banach–Tarski paradox uses Hausdorff's ideas. The proof of this paradox relies on the axiom of choice. This paradox shows that there is no finitely additive measure on a sphere defined on all subsets which is equal on congruent pieces. (Hausdorff first showed in the same paper the easier result that there is no countably additive measure defined on all subsets.) The structure of the group of rotations on the sphere plays a crucial role here – the statement is not true on the plane or the line. In fact, as was later shown by Banach, [1] it is possible to define an "area" for all bounded subsets in the Euclidean plane (as well as "length" on the real line) in such a way that congruent sets will have equal "area". (This Banach measure, however, is only finitely additive, so it is not a measure in the full sense, but it equals the Lebesgue measure on sets for which the latter exists.) This implies that if two open subsets of the plane (or the real line) are equi-decomposable then they have equal area.
https://en.wikipedia.org/wiki/Hausdorff_paradox
The Hausdorff–Young inequality is a foundational result in the mathematical field of Fourier analysis. As a statement about Fourier series, it was discovered by William Henry Young (1913) and extended by Hausdorff (1923). It is now typically understood as a rather direct corollary of the Plancherel theorem, found in 1910, in combination with the Riesz–Thorin theorem, originally discovered by Marcel Riesz in 1927. With this machinery, it readily admits several generalizations, including to multidimensional Fourier series and to the Fourier transform on the real line, Euclidean spaces, as well as more general spaces. With these extensions, it is one of the best-known results of Fourier analysis, appearing in nearly every introductory graduate-level textbook on the subject. The nature of the Hausdorff–Young inequality can be understood with only Riemann integration and infinite series as prerequisites. Given a continuous function f : (0,1) → ℝ, define its "Fourier coefficients" by {\displaystyle c_{n}=\int _{0}^{1}e^{-2\pi inx}f(x)\,dx} for each integer n. The Hausdorff–Young inequality can be used to show, for example, that {\displaystyle {\Bigl (}\sum _{n=-\infty }^{\infty }|c_{n}|^{3}{\Bigr )}^{1/3}\leq {\Bigl (}\int _{0}^{1}|f(x)|^{3/2}\,dx{\Bigr )}^{2/3}.} Loosely speaking, this can be interpreted as saying that the "size" of the function f, as represented by the right-hand side of the above inequality, controls the "size" of its sequence of Fourier coefficients, as represented by the left-hand side. However, this is only a very specific case of the general theorem. The usual formulations of the theorem are given below, with use of the machinery of L^p spaces and Lebesgue integration. Given a nonzero real number p, define the real number p′ (the "conjugate exponent" of p) by the equation {\displaystyle {\frac {1}{p}}+{\frac {1}{p'}}=1.} If p is equal to one, this equation has no solution, but it is interpreted to mean that p′ is infinite, as an element of the extended real number line. Likewise, if p is infinite, as an element of the extended real number line, then this is interpreted to mean that p′ is equal to one. The commonly understood features of the conjugate exponent are simple: the conjugate exponent of a number in the range [1, 2] lies in the range [2, ∞] and vice versa; the conjugate exponent of 2 is 2; and taking the conjugate exponent twice returns the original number. Given a function f : (0,1) → ℂ, one defines its "Fourier coefficients" as a function c : ℤ → ℂ by {\displaystyle c(n)=\int _{0}^{1}e^{-2\pi inx}f(x)\,dx,} although for an arbitrary function f, these integrals may not exist. Hölder's inequality shows that if f is in L^p((0,1)) for some number p ∈ [1, ∞], then each Fourier coefficient is well-defined. [1] The Hausdorff–Young inequality says that, for any number p in the interval (1, 2], one has {\displaystyle {\Bigl (}\sum _{n=-\infty }^{\infty }|c(n)|^{p'}{\Bigr )}^{1/p'}\leq {\Bigl (}\int _{0}^{1}|f(x)|^{p}\,dx{\Bigr )}^{1/p}} for all f in L^p((0,1)). Conversely, still supposing p ∈ (1, 2], if c : ℤ → ℂ is a mapping for which {\displaystyle \sum _{n=-\infty }^{\infty }|c(n)|^{p}<\infty ,} then there exists f ∈ L^{p′}((0,1)) whose Fourier coefficients obey {\displaystyle {\Bigl (}\int _{0}^{1}|f(x)|^{p'}\,dx{\Bigr )}^{1/p'}\leq {\Bigl (}\sum _{n=-\infty }^{\infty }|c(n)|^{p}{\Bigr )}^{1/p}.} [1] The case of Fourier series generalizes to the multidimensional case.
Given a function f : (0,1)^k → ℂ, define its Fourier coefficients c : ℤ^k → ℂ by {\displaystyle c(n_{1},\ldots ,n_{k})=\int _{(0,1)^{k}}e^{-2\pi i(n_{1}x_{1}+\cdots +n_{k}x_{k})}f(x)\,dx.} As in the case of Fourier series, the assumption that f is in L^p for some value of p in [1, ∞] ensures, via the Hölder inequality, the existence of the Fourier coefficients. Now, the Hausdorff–Young inequality says that if p is in the range [1, 2], then {\displaystyle {\Bigl (}\sum _{n\in \mathbb {Z} ^{k}}|c(n)|^{p'}{\Bigr )}^{1/p'}\leq {\Bigl (}\int _{(0,1)^{k}}|f(x)|^{p}\,dx{\Bigr )}^{1/p}} for any f in L^p((0,1)^k). [2] One defines the multidimensional Fourier transform by {\displaystyle {\widehat {f}}(\xi )=\int _{\mathbb {R} ^{m}}e^{-2\pi i\langle \xi ,x\rangle }f(x)\,dx.} The Hausdorff–Young inequality, in this setting, says that if p is a number in the interval [1, 2], then one has {\displaystyle \|{\widehat {f}}\|_{L^{p'}(\mathbb {R} ^{m})}\leq \|f\|_{L^{p}(\mathbb {R} ^{m})}} for any f ∈ L^p(ℝ^m). [3] The above results can be rephrased succinctly as: for 1 ≤ p ≤ 2, the map sending a function (0,1)^k → ℂ to its Fourier coefficients defines a bounded complex-linear map L^p((0,1)^k) → ℓ^{p/(p−1)}(ℤ^k), and the map sending a function ℝ^m → ℂ to its Fourier transform defines a bounded complex-linear map L^p(ℝ^m) → L^{p/(p−1)}(ℝ^m). Here we use the language of normed vector spaces and bounded linear maps, as is convenient for application of the Riesz–Thorin theorem. There are two ingredients in the proof: the Plancherel theorem, which gives the boundedness of these maps at the endpoint p = 2, and the elementary estimate bounding the supremum of the coefficients (respectively, of the Fourier transform) by the L¹ norm of f, which gives boundedness at the endpoint p = 1. The operator norm of either linear map is less than or equal to one, as one can directly verify. One can then apply the Riesz–Thorin theorem. Equality is achieved in the Hausdorff–Young inequality for (multidimensional) Fourier series by taking {\displaystyle f(x_{1},\ldots ,x_{k})=e^{2\pi i(m_{1}x_{1}+\cdots +m_{k}x_{k})}} for any particular choice of integers m₁, …, m_k. In the above terminology of "normed vector spaces", this asserts that the operator norm of the corresponding bounded linear map is exactly equal to one. Since the Fourier transform is closely analogous to the Fourier series, and the above Hausdorff–Young inequality for the Fourier transform is proved by exactly the same means as the Hausdorff–Young inequality for Fourier series, it may be surprising that equality is not achieved for the above Hausdorff–Young inequality for the Fourier transform, aside from the special case p = 2, for which the Plancherel theorem asserts that the Hausdorff–Young inequality is an exact equality. In fact, Beckner (1975), following a special case appearing in Babenko (1961), showed that if p is a number in the interval [1, 2], then {\displaystyle \|{\widehat {f}}\|_{L^{p'}(\mathbb {R} ^{n})}\leq {\Bigl (}{\frac {p^{1/p}}{p'^{1/p'}}}{\Bigr )}^{n/2}\|f\|_{L^{p}(\mathbb {R} ^{n})}} for any f in L^p(ℝ^n). This is an improvement of the standard Hausdorff–Young inequality, as the context p ≤ 2 and p′ ≥ 2 ensures that the number appearing on the right-hand side of this "Babenko–Beckner inequality" is less than or equal to 1. Moreover, this number cannot be replaced by a smaller one, since equality is achieved in the case of Gaussian functions. In this sense, Beckner's paper gives an optimal ("sharp") version of the Hausdorff–Young inequality. In the language of normed vector spaces, it says that the operator norm of the bounded linear map L^p(ℝ^n) → L^{p/(p−1)}(ℝ^n), as defined by the Fourier transform, is exactly equal to {\displaystyle {\Bigl (}{\frac {p^{1/p}}{p'^{1/p'}}}{\Bigr )}^{n/2}.} The condition p ∈ [1, 2] is essential. If p > 2, then the fact that a function belongs to L^p does not give any additional information on the order of growth of its Fourier series beyond the fact that it is in ℓ².
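A discretized sanity check of the Fourier-series form is easy to run; the following Python sketch (our own, approximating the coefficients by a Riemann sum via the FFT, so it is only approximate) tests the case p = 3/2, p′ = 3:

```python
import numpy as np

p = 1.5
pp = p / (p - 1.0)       # conjugate exponent p' = 3
N = 4096                 # grid points on (0, 1)
x = np.arange(N) / N
f = x * (1.0 - x)        # a smooth test function on (0, 1)

# np.fft.fft(f)[k]/N is a Riemann-sum approximation of the coefficient
# c(n) = ∫ e^{-2πinx} f(x) dx, with bin k standing for n = k (small k)
# or n = k - N (k near N), i.e. the frequencies |n| < N/2.
c = np.fft.fft(f) / N

lhs = (np.abs(c) ** pp).sum() ** (1.0 / pp)  # ℓ^{p'} norm of the coefficients
rhs = (np.abs(f) ** p).mean() ** (1.0 / p)   # L^p norm of f on (0, 1)
print(lhs <= rhs)  # True (lhs ≈ 0.170, rhs ≈ 0.176)
```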
https://en.wikipedia.org/wiki/Hausdorff–Young_inequality
The Robinson annulation is a chemical reaction used in organic chemistry for ring formation. It was discovered by Robert Robinson in 1935 as a method to create a six membered ring by forming three new carbon–carbon bonds. [ 1 ] The method uses a ketone and a methyl vinyl ketone to form an Ξ±,Ξ²-unsaturated ketone in a cyclohexane ring by a Michael addition followed by an aldol condensation . This procedure is one of the key methods to form fused ring systems. Formation of cyclohexenone and derivatives are important in chemistry for their application to the synthesis of many natural products and other interesting organic compounds such as antibiotics and steroids . [ 2 ] Specifically, the synthesis of cortisone is completed through the use of the Robinson annulation. [ 3 ] The initial paper on the Robinson annulation was published by William Rapson and Robert Robinson while Rapson studied at Oxford with professor Robinson. Before their work, cyclohexenone syntheses were not derived from the Ξ±,Ξ²-unsaturated ketone component. Initial approaches coupled the methyl vinyl ketone with a naphthol to give a naphtholoxide, but this procedure was not sufficient to form the desired cyclohexenone. This was attributed to unsuitable conditions of the reaction. [ 1 ] Robinson and Rapson found in 1935 that the interaction between cyclohexanone and Ξ±,Ξ²-unsaturated ketone afforded the desired cyclohexenone. It remains one of the key methods for the construction of six membered ring compounds. Since it is so widely used, there are many aspects of the reaction that have been investigated such as variations of the substrates and reaction conditions as discussed in the scope and variations section. [ 4 ] Robert Robinson won the Nobel Prize for Chemistry in 1947 for his contribution to the study of alkaloids. [ 5 ] The original procedure of the Robinson annulation begins with the nucleophilic attack of a ketone in a Michael reaction on a vinyl ketone to produce the intermediate Michael adduct. Subsequent aldol type ring closure leads to the keto alcohol, which is then followed by dehydration to produce the annulation product. In the Michael reaction, the ketone is deprotonated by a base to form an enolate nucleophile which attacks the electron acceptor (in red). This acceptor is generally an Ξ±,Ξ²-unsaturated ketone, although aldehydes , acid derivatives and similar compounds can work as well (see scope). In the example shown here, regioselectivity is dictated by the formation of the thermodynamic enolate. Alternatively, the regioselectivity is often controlled by using a Ξ²-diketone or Ξ²-ketoester as the enolate component, since deprotonation at the carbon flanked by the carbonyl groups is strongly favored. The intramolecular aldol condensation then takes place in such a way that installs the six-membered ring. In the final product, the three carbon atoms of the Ξ±,Ξ²-unsaturated system and the carbon Ξ± to its carbonyl group make up the four-carbon bridge of the newly installed ring. In order to avoid a reaction between the original enolate and the cyclohexenone product, the initial Michael adduct is often isolated first and then cyclized to give the desired octalone in a separate step. [ 6 ] Studies have been completed on the formation of the hydroxy ketones in the Robinson annulation reaction scheme. The trans compound is favored due to antiperiplanar effects of the final aldol condensation in kinetically controlled reactions. 
It has also been found though that the cyclization can proceed in synclinal orientation. The figure below shows the three possible stereochemical pathways, assuming a chair transition state. [ 7 ] It has been postulated that the difference in the formation of these transition states and their corresponding products is due to solvent interactions. Scanio found that changing the solvent of the reaction from dioxane to DMSO gives different stereochemistry in step D above. This suggests that the presence of protic or aprotic solvents gives rise to different transition states. [ 8 ] Robinson annulation is one notable example of a wider class of chemical transformations termed Tandem Michael-aldol reactions, that sequentially combine Michael addition and aldol reaction into a single reaction. As is the case with Robinson annulation, Michael addition usually happens first to tether the two reactants together, then aldol reaction proceeds intramolecularly to generate the ring system in the product. Usually five- or six-membered rings are generated. Although the Robinson annulation is generally conducted under basic conditions, reactions have been conducted under a variety of conditions. Heathcock and Ellis report similar results to the base-catalyzed method using sulfuric acid . [ 2 ] The Michael reaction can occur under neutral conditions through an enamine . A Mannich base can be heated in the presence of the ketone to produce the Michael adduct. [ 6 ] Successful preparation of compounds using the Robinson annulation methods have been reported. [ 9 ] A typical Michael acceptor is an Ξ±,Ξ²-unsaturated ketone, although aldehydes and acid derivatives work as well. In addition, Bergmann et al. reports that donors such as nitriles , nitro compounds, sulfones and certain hydrocarbons can be used as acceptors. [ 10 ] Overall, Michael acceptors are generally activated olefins such as those shown below where EWG refers to an electron withdrawing group such as cyano, keto, or ester as shown. The Wichterle reaction is a variant of the Robinson annulation that replaces methyl vinyl ketone with 1,3-dichloro- cis -2-butene. This gives an example of using a different Michael acceptor from the typical Ξ±,Ξ²-unsaturated ketone. The 1,3-dichloro- cis -2-butene is employed to avoid undesirable polymerization or condensation during the Michael addition. [ 11 ] The reaction sequence in the related Hauser annulation is a Michael addition followed by a Dieckmann condensation and finally an elimination. The Dieckmann condensation is a similar ring closing intramolecular chemical reaction of diesters with base to give Ξ²-ketoesters. The Hauser donor is an aromatic sulfone or methylene sulfoxide with a carboxylic ester group in the ortho position. The Hauser acceptor is a Michael acceptor . In the original Hauser publication ethyl 2-carboxybenzyl phenyl sulfoxide reacts with pent-3-ene-2-one with LDA as a base in THF at βˆ’78Β Β°C. [ 12 ] Asymmetric synthesis of Robinson annulation products most often involve the use of a proline catalyst . Studies report the use of L-proline as well as several other chiral amines for use as catalysts during both steps of the Robinson annulation reaction. [ 13 ] The advantages of using the optically active proline catalysis is that they are stereoselective with enantiomeric excesses of 60–70%. [ 14 ] Wang, et al. reported the one-pot synthesis of chiral thiochromenes by such an organocatalytic Robinson annulation. 
[ 15 ] The Wieland–Miescher ketone is the Robinson annulation product of 2-methyl-cyclohexane-1,3-dione and methyl vinyl ketone. This compound is used in the syntheses of many steroids possessing important biological properties and can be made enantiopure using proline catalysis. [ 14 ] F. Dean Toste and co-workers [ 16 ] have used Robinson annulation in the total synthesis of (+)-fawcettimine, a tetracyclic Lycopodium alkaloid that has potential application to inhibiting the acetylcholine esterase . Scientists at Merck discovered platensimycin , a novel antibiotic lead compound with potential medicinal applications as seen in the adjacent picture. [ 17 ] Initial synthesis gave a racemic form of the compound using an intramolecular etherification reaction of the alcohol motifs and the double bond. Yamamoto and coworkers report the use of an alternative intramolecular Robinson annulation to provide a straightforward enantioselective synthesis of tetracyclic core of platensimycin. The key Robinson annulation step was reported to be accomplished in one pot using L-proline for chiral control. The reaction conditions can be seen below. [ 18 ]
https://en.wikipedia.org/wiki/Hauser_annulation
The Haute Qualité Environnementale or HQE (High Quality Environmental standard) is a standard for green building in France, based on the principles of sustainable development first set out at the 1992 Earth Summit. The standard is controlled by the Paris-based Association pour la Haute Qualité Environnementale (ASSOHQE). The standard specifies criteria for managing the impacts on the outdoor environment and for creating a pleasant indoor environment. [1] On 16 June 2009, it was announced that the CSTB (Centre Scientifique et Technique du Bâtiment) and its subsidiary CertiVéA had signed a memorandum of understanding to work together with the global arm of the United Kingdom's Building Research Establishment (BRE) to develop a pan-European building environmental assessment method. The BRE developed and markets BREEAM (the BRE Environmental Assessment Method), which has similarities to the French HQE. Unfortunately, BREEAM and HQE are still disseminating their own standards round the world, leaving little doubt that no pan-European method will emerge in the near future, at least stemming from these two organisations. [2] Since 2013, the HQE brand has been available for buildings and districts worldwide. [3] As of 2016, HQE is present in 24 countries. [4]
https://en.wikipedia.org/wiki/Haute_QualitΓ©_Environnementale
In control theory, and in particular when studying the properties of a linear time-invariant system in state space form, the Hautus lemma (after Malo L. J. Hautus), also commonly known as the Popov–Belevitch–Hautus test or PBH test, [1][2] can prove to be a powerful tool. A special case of this result appeared first in 1963 in a paper by Elmer G. Gilbert, [1] and was later expanded to the current PBH test with contributions by Vasile M. Popov in 1966, [3][4] Vitold Belevitch in 1968, [5] and Malo Hautus in 1969, [5] who emphasized its applicability in proving results for linear time-invariant systems. There exist multiple forms of the lemma. The Hautus lemma for controllability says that given a square matrix A ∈ M_n(ℝ) and a B ∈ M_{n×m}(ℝ), the following are equivalent: (1) the pair (A, B) is controllable; (2) the matrix [λI − A, B] has full row rank n for all λ ∈ ℂ (it suffices to check this at the eigenvalues of A, since for every other λ the block λI − A is already invertible). The Hautus lemma for stabilizability says that, with A and B as above, the following are equivalent: (1) the pair (A, B) is stabilizable; (2) the matrix [λI − A, B] has full row rank n for all λ ∈ ℂ with non-negative real part. The Hautus lemma for observability says that given a square matrix A ∈ M_n(ℝ) and a C ∈ M_{m×n}(ℝ), the following are equivalent: (1) the pair (A, C) is observable; (2) the stacked matrix [λI − A; C] has full column rank n for all λ ∈ ℂ. The Hautus lemma for detectability says that, with A and C as above, the following are equivalent: (1) the pair (A, C) is detectable; (2) the stacked matrix [λI − A; C] has full column rank n for all λ ∈ ℂ with non-negative real part.
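For finite-dimensional systems the controllability form of the test is straightforward to implement; a minimal NumPy sketch (the function name and the rank tolerance are our own choices):

```python
import numpy as np

def pbh_controllable(A, B, tol=1e-9):
    """PBH test: (A, B) is controllable iff rank [λI - A, B] = n
    for every eigenvalue λ of A."""
    A, B = np.atleast_2d(A), np.atleast_2d(B)
    n = A.shape[0]
    return all(
        np.linalg.matrix_rank(np.hstack([lam * np.eye(n) - A, B]), tol=tol) == n
        for lam in np.linalg.eigvals(A)
    )

# A double integrator driven through the second state is controllable...
A = np.array([[0.0, 1.0], [0.0, 0.0]])
print(pbh_controllable(A, np.array([[0.0], [1.0]])))  # True
# ...but driving only the first state fails the rank test at λ = 0.
print(pbh_controllable(A, np.array([[1.0], [0.0]])))  # False
```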
https://en.wikipedia.org/wiki/Hautus_lemma
The Havenga Prize (Havengaprys in Afrikaans) is a prize awarded annually since 1945 by the Suid-Afrikaanse Akademie vir Wetenskap en Kuns (South African Academy for Science and Arts) to a candidate for original research in the sciences. Candidates are judged on the quality of research publications and evidence of the promotion of Afrikaans. The Havenga Prize can only be awarded to a person once, but can be awarded posthumously. The prize is named after Finance Minister Nicolaas Christiaan Havenga, who donated £50 annually to the academy for the prize from 1946. A bequest of R4 000 was received from Havenga's estate and R14 000 from the estate of his wife, Olive. Since 1979 the prize has been awarded in the form of a gold medal. Mostly compiled from akademie.co.za (in Afrikaans), archived on the Wayback Machine.
https://en.wikipedia.org/wiki/Havenga_prize
The versine or versed sine is a trigonometric function found in some of the earliest ( Sanskrit Aryabhatia , [ 1 ] Section I) trigonometric tables . The versine of an angle is 1 minus its cosine . There are several related functions, most notably the coversine and haversine . The latter, half a versine, is of particular importance in the haversine formula of navigation. The versine [ 3 ] [ 4 ] [ 5 ] [ 6 ] [ 7 ] or versed sine [ 8 ] [ 9 ] [ 10 ] [ 11 ] [ 12 ] is a trigonometric function already appearing in some of the earliest trigonometric tables. It is symbolized in formulas using the abbreviations versin , sinver , [ 13 ] [ 14 ] vers , or siv . [ 15 ] [ 16 ] In Latin , it is known as the sinus versus (flipped sine), versinus , versus , or sagitta (arrow). [ 17 ] Expressed in terms of common trigonometric functions sine, cosine, and tangent, the versine is equal to versin ⁑ ΞΈ = 1 βˆ’ cos ⁑ ΞΈ = 2 sin 2 ⁑ ΞΈ 2 = sin ⁑ ΞΈ tan ⁑ ΞΈ 2 {\displaystyle \operatorname {versin} \theta =1-\cos \theta =2\sin ^{2}{\frac {\theta }{2}}=\sin \theta \,\tan {\frac {\theta }{2}}} There are several related functions corresponding to the versine: Special tables were also made of half of the versed sine, because of its particular use in the haversine formula used historically in navigation . hav ΞΈ = sin 2 ⁑ ( ΞΈ 2 ) = 1 βˆ’ cos ⁑ ΞΈ 2 {\displaystyle {\text{hav}}\ \theta =\sin ^{2}\left({\frac {\theta }{2}}\right)={\frac {1-\cos \theta }{2}}} The ordinary sine function ( see note on etymology ) was sometimes historically called the sinus rectus ("straight sine"), to contrast it with the versed sine ( sinus versus ). [ 31 ] The meaning of these terms is apparent if one looks at the functions in the original context for their definition, a unit circle : For a vertical chord AB of the unit circle, the sine of the angle ΞΈ (representing half of the subtended angle Ξ” ) is the distance AC (half of the chord). On the other hand, the versed sine of ΞΈ is the distance CD from the center of the chord to the center of the arc. Thus, the sum of cos( ΞΈ ) (equal to the length of line OC ) and versin( ΞΈ ) (equal to the length of line CD ) is the radius OD (with length 1). Illustrated this way, the sine is vertical ( rectus , literally "straight") while the versine is horizontal ( versus , literally "turned against, out-of-place"); both are distances from C to the circle. This figure also illustrates the reason why the versine was sometimes called the sagitta , Latin for arrow . [ 17 ] [ 30 ] If the arc ADB of the double-angle Ξ” = 2 ΞΈ is viewed as a " bow " and the chord AB as its "string", then the versine CD is clearly the "arrow shaft". In further keeping with the interpretation of the sine as "vertical" and the versed sine as "horizontal", sagitta is also an obsolete synonym for the abscissa (the horizontal axis of a graph). [ 30 ] In 1821, Cauchy used the terms sinus versus ( siv ) for the versine and cosinus versus ( cosiv ) for the coversine. [ 15 ] [ 16 ] [ nb 1 ] As ΞΈ goes to zero, versin( ΞΈ ) is the difference between two nearly equal quantities, so a user of a trigonometric table for the cosine alone would need a very high accuracy to obtain the versine in order to avoid catastrophic cancellation , making separate tables for the latter convenient. [ 12 ] Even with a calculator or computer, round-off errors make it advisable to use the sin 2 formula for small ΞΈ . 
Another historical advantage of the versine is that it is always non-negative, so its logarithm is defined everywhere except for the single angle (θ = 0, 2π, …) where it is zero – thus, one could use logarithmic tables for multiplications in formulas involving versines. In fact, the earliest surviving table of sine (half-chord) values (as opposed to the chords tabulated by Ptolemy and other Greek authors), calculated from the Surya Siddhanta of India dated back to the 3rd century BC, was a table of values for the sine and versed sine (in 3.75° increments from 0 to 90°). [31] The versine appears as an intermediate step in the application of the half-angle formula sin²(θ/2) = 1/2 versin(θ), derived by Ptolemy, that was used to construct such tables. The haversine, in particular, was important in navigation because it appears in the haversine formula, which is used to reasonably accurately compute distances on an astronomic spheroid (see issues with the Earth's radius vs. sphere) given angular positions (e.g., longitude and latitude). One could also use sin²(θ/2) directly, but having a table of the haversine removed the need to compute squares and square roots. [12] An early utilization by José de Mendoza y Ríos of what later would be called haversines is documented in 1801. [14][32] The first known English equivalent to a table of haversines was published by James Andrew in 1805, under the name "Squares of Natural Semi-Chords". [33][34][17] In 1835, the term haversine (notated naturally as hav. or base-10 logarithmically as log. haversine or log. havers.) was coined [35] by James Inman [14][36][37] in the third edition of his work Navigation and Nautical Astronomy: For the Use of British Seamen, to simplify the calculation of distances between two points on the surface of the Earth using spherical trigonometry for applications in navigation. [3][35] Inman also used the terms nat. versine and nat. vers. for versines. [3] Other highly regarded tables of haversines were those of Richard Farley in 1856 [33][38] and John Caulfield Hannyngton in 1876. [33][39] The haversine continues to be used in navigation and has found new applications in recent decades, as in Bruce D. Stark's method for clearing lunar distances utilizing Gaussian logarithms since 1995 [40][41] or in a more compact method for sight reduction since 2014. [29] While the usage of the versine, coversine and haversine as well as their inverse functions can be traced back centuries, the names for the other five cofunctions appear to be of much younger origin. One period (0 < θ < 2π) of a versine or, more commonly, a haversine waveform is also commonly used in signal processing and control theory as the shape of a pulse or a window function (including Hann, Hann–Poisson and Tukey windows), because it smoothly (continuous in value and slope) "turns on" from zero to one (for haversine) and back to zero. [nb 2] In these applications, it is named the Hann function or raised-cosine filter. The functions are circular rotations of each other.
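To illustrate the navigational use just described, here is a minimal Python sketch of the haversine formula for great-circle distance (assuming a perfectly spherical Earth of radius 6371 km; the coordinates are our own example):

```python
from math import asin, cos, radians, sin, sqrt

def haversine_distance(lat1, lon1, lat2, lon2, r=6371.0):
    """Great-circle distance via hav(d/r) = hav(Δφ) + cos φ1 · cos φ2 · hav(Δλ),
    where hav(θ) = sin²(θ/2); returns kilometres for r in kilometres."""
    phi1, phi2 = radians(lat1), radians(lat2)
    h = (sin(radians(lat2 - lat1) / 2) ** 2
         + cos(phi1) * cos(phi2) * sin(radians(lon2 - lon1) / 2) ** 2)
    return 2 * r * asin(sqrt(h))  # archaversine: d = 2r · arcsin(√h)

# Paris to New York City: roughly 5,840 km.
print(round(haversine_distance(48.857, 2.351, 40.713, -74.006)))
```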
Inverse functions like arcversine (arcversin, arcvers, [ 8 ] avers, [ 43 ] [ 44 ] aver), arcvercosine (arcvercosin, arcvercos, avercos, avcs), arccoversine (arccoversin, arccovers, [ 8 ] acovers, [ 43 ] [ 44 ] acvs), arccovercosine (arccovercosin, arccovercos, acovercos, acvc), archaversine (archaversin, archav, haversin βˆ’1 , [ 45 ] invhav, [ 46 ] [ 47 ] [ 48 ] ahav, [ 43 ] [ 44 ] ahvs, ahv, hav βˆ’1 [ 49 ] [ 50 ] ), archavercosine (archavercosin, archavercos, ahvc), archacoversine (archacoversin, ahcv) or archacovercosine (archacovercosin, archacovercos, ahcc) exist as well: These functions can be extended into the complex plane . [ 42 ] [ 19 ] [ 24 ] Maclaurin series : [ 24 ] When the versine v is small in comparison to the radius r , it may be approximated from the half-chord length L (the distance AC shown above) by the formula [ 51 ] v β‰ˆ L 2 2 r . {\displaystyle v\approx {\frac {L^{2}}{2r}}.} Alternatively, if the versine is small and the versine, radius, and half-chord length are known, they may be used to estimate the arc length s ( AD in the figure above) by the formula s β‰ˆ L + v 2 r {\displaystyle s\approx L+{\frac {v^{2}}{r}}} This formula was known to the Chinese mathematician Shen Kuo , and a more accurate formula also involving the sagitta was developed two centuries later by Guo Shoujing . [ 52 ] A more accurate approximation used in engineering [ 53 ] is v β‰ˆ s 3 2 L 1 2 8 r {\displaystyle v\approx {\frac {s^{\frac {3}{2}}L^{\frac {1}{2}}}{8r}}} The term versine is also sometimes used to describe deviations from straightness in an arbitrary planar curve, of which the above circle is a special case. Given a chord between two points in a curve, the perpendicular distance v from the chord to the curve (usually at the chord midpoint) is called a versine measurement. For a straight line, the versine of any chord is zero, so this measurement characterizes the straightness of the curve. In the limit as the chord length L goes to zero, the ratio ⁠ 8 v / L 2 ⁠ goes to the instantaneous curvature . This usage is especially common in rail transport , where it describes measurements of the straightness of the rail tracks [ 54 ] and it is the basis of the Hallade method for rail surveying . The term sagitta (often abbreviated sag ) is used similarly in optics , for describing the surfaces of lenses and mirrors .
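The two small-versine approximations above are easy to check against exact circle geometry. A short Python sketch, with illustrative values of our own choosing ( ΞΈ here is half the subtended angle, matching the unit-circle picture above):

```python
import numpy as np

r = 500.0      # circle radius, arbitrary units; illustrative value
theta = 0.1    # half the subtended angle, in radians

L = r * np.sin(theta)          # half-chord length (AC above)
v = r * (1.0 - np.cos(theta))  # exact sagitta / versine distance (CD above)
s = r * theta                  # exact arc length (AD above)

v_approx = L**2 / (2.0 * r)    # v ~ L^2 / (2 r)
s_approx = L + v**2 / r        # s ~ L + v^2 / r

print(v, v_approx)   # 2.4979...  vs  2.4917...
print(s, s_approx)   # 50.0       vs  49.929...
```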
https://en.wikipedia.org/wiki/Havercosine
The haversine formula determines the great-circle distance between two points on a sphere given their longitudes and latitudes . Important in navigation , it is a special case of a more general formula in spherical trigonometry , the law of haversines , that relates the sides and angles of spherical triangles. The first table of haversines in English was published by James Andrew in 1805, [ 1 ] but Florian Cajori credits an earlier use by JosΓ© de Mendoza y RΓ­os in 1801. [ 2 ] [ 3 ] The term haversine was coined in 1835 by James Inman . [ 4 ] [ 5 ] These names follow from the fact that they are customarily written in terms of the haversine function, given by hav ΞΈ = sin 2 ( ⁠ ΞΈ / 2 ⁠ ) . The formulas could equally be written in terms of any multiple of the haversine, such as the older versine function (twice the haversine). Prior to the advent of computers, the elimination of division and multiplication by factors of two proved convenient enough that tables of haversine values and logarithms were included in 19th- and early 20th-century navigation and trigonometric texts. [ 6 ] [ 7 ] [ 8 ] These days, the haversine form is also convenient in that it has no coefficient in front of the sin 2 function. Let the central angle ΞΈ between any two points on a sphere be: where The haversine formula allows the haversine of ΞΈ to be computed directly from the latitude (represented by Ο† ) and longitude (represented by Ξ» ) of the two points: where Finally, the haversine function hav( ΞΈ ) , applied above to both the central angle ΞΈ and the differences in latitude and longitude, is The haversine function computes half a versine of the angle ΞΈ , or the squares of half chord of the angle on a unit circle (sphere). To solve for the distance d , apply the archaversine ( inverse haversine ) to hav( ΞΈ ) or use the arcsine (inverse sine) function: or more explicitly: where Ο† m = Ο† 2 + Ο† 1 2 {\displaystyle \varphi _{\text{m}}={\frac {\varphi _{2}+\varphi _{1}}{2}}} . When using these formulae, one must ensure that h = hav( ΞΈ ) does not exceed 1 due to a floating point error ( d is real only for 0 ≀ h ≀ 1 ). h only approaches 1 for antipodal points (on opposite sides of the sphere)β€”in this region, relatively large numerical errors tend to arise in the formula when finite precision is used. Because d is then large (approaching Ο€ R , half the circumference) a small error is often not a major concern in this unusual case (although there are other great-circle distance formulas that avoid this problem). (The formula above is sometimes written in terms of the arctangent function, but this suffers from similar numerical problems near h = 1 .) As described below, a similar formula can be written using cosines (sometimes called the spherical law of cosines , not to be confused with the law of cosines for plane geometry) instead of haversines, but if the two points are close together (e.g. a kilometer apart, on the Earth) one might end up with cos( ⁠ d / R ⁠ ) = 0.99999999 , leading to an inaccurate answer. Since the haversine formula uses sines, it avoids that problem. Either formula is only an approximation when applied to the Earth , which is not a perfect sphere: the " Earth radius " R varies from 6356.752Β km at the poles to 6378.137Β km at the equator. 
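A hedged Python sketch of the calculation just described (the function name and argument conventions are ours): the haversine of the central angle is assembled from the latitude and longitude differences, h is clamped to [0, 1] to guard against the floating-point issue noted above, and the distance follows from the arcsine form of the archaversine:

```python
from math import radians, sin, cos, sqrt, asin

def haversine_distance(lat1, lon1, lat2, lon2, R=6371.0):
    """Great-circle distance (in km for the default R) between two
    points given in decimal degrees, via the haversine formula."""
    hav = lambda x: sin(x / 2.0) ** 2
    phi1, phi2 = radians(lat1), radians(lat2)
    h = hav(radians(lat2 - lat1)) \
        + cos(phi1) * cos(phi2) * hav(radians(lon2 - lon1))
    h = min(1.0, max(0.0, h))        # clamp: d is real only for 0 <= h <= 1
    return 2.0 * R * asin(sqrt(h))   # archaversine written with arcsine
```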
More importantly, the radius of curvature of a north–south line on the Earth's surface is 1% greater at the poles (β‰ˆ6399.594Β km) than at the equator (β‰ˆ6335.439Β km), so the haversine formula and law of cosines cannot be guaranteed correct to better than 0.5%. [ citation needed ] More accurate methods that consider the Earth's ellipticity are given by Vincenty's formulae and the other formulas in the geographical distance article. Given a unit sphere, a "triangle" on the surface of the sphere is defined by the great circles connecting three points u , v , and w on the sphere. If the lengths of these three sides are a (from u to v ), b (from u to w ), and c (from v to w ), and the angle of the corner opposite c is C , then the law of haversines states: [ 10 ] Since this is a unit sphere, the lengths a , b , and c are simply equal to the angles (in radians ) subtended by those sides from the center of the sphere (for a non-unit sphere, each of these arc lengths is equal to its central angle multiplied by the radius R of the sphere). In order to obtain the haversine formula of the previous section from this law, one simply considers the special case where u is the north pole , while v and w are the two points whose separation d is to be determined. In that case, a and b are ⁠ Ο€ / 2 ⁠ βˆ’ Ο† 1,2 (that is, the co-latitudes), C is the longitude separation Ξ» 2 βˆ’ Ξ» 1 , and c is the desired ⁠ d / R ⁠ . Noting that sin( ⁠ Ο€ / 2 ⁠ βˆ’ Ο† ) = cos( Ο† ) , the haversine formula immediately follows. To derive the law of haversines, one starts with the spherical law of cosines : As mentioned above, this formula is an ill-conditioned way of solving for c when c is small. Instead, we substitute the identity that cos( ΞΈ ) = 1 βˆ’ 2 hav( ΞΈ ) , and also employ the addition identity cos( a βˆ’ b ) = cos( a ) cos( b ) + sin( a ) sin( b ) , to obtain the law of haversines, above. One can prove the formula: by transforming the points given by their latitude and longitude into Cartesian coordinates , then taking their dot product . Consider two points p 1 , p 2 {\displaystyle {\bf {p_{1},p_{2}}}} on the unit sphere , given by their latitude Ο† {\displaystyle \varphi } and longitude Ξ» {\displaystyle \lambda } : These representations are very similar to spherical coordinates ; however, latitude is measured as the angle from the equator and not from the north pole. These points have the following representations in Cartesian coordinates: From here we could directly attempt to calculate the dot product and proceed, but the formulas become significantly simpler when we consider the following fact: the distance between the two points will not change if we rotate the sphere about the z-axis. This will in effect add a constant to Ξ» 1 , Ξ» 2 {\displaystyle \lambda _{1},\lambda _{2}} . Note that similar considerations do not apply to transforming the latitudes: adding a constant to the latitudes may change the distance between the points. By choosing our constant to be βˆ’ Ξ» 1 {\displaystyle -\lambda _{1}} , and setting Ξ» β€² = Ξ” Ξ» {\displaystyle \lambda '=\Delta \lambda } , our new points become: With ΞΈ {\displaystyle \theta } denoting the angle between p 1 {\displaystyle {\bf {p_{1}}}} and p 2 {\displaystyle {\bf {p_{2}}}} , we now have that: The haversine formula can be used to find the approximate distance between the White House in Washington, D.C. (latitude 38.898Β° N, longitude 77.037Β° W) and the Eiffel Tower in Paris (latitude 48.858Β° N, longitude 2.294Β° E).
The difference in latitudes is Ξ” Ο† = {\displaystyle \Delta \varphi ={}} 9.96Β° and the difference in longitudes is Ξ” Ξ» = {\displaystyle \Delta \lambda ={}} 79.331Β°. Inputting these into the haversine formula, hav ⁑ ( ΞΈ ) = hav ⁑ ( Ξ” Ο† ) + cos ⁑ ( Ο† 1 ) cos ⁑ ( Ο† 2 ) hav ⁑ ( Ξ” Ξ» ) = hav ⁑ ( 9.96 ∘ ) + cos ⁑ ( 38.898 ∘ ) cos ⁑ ( 48.858 ∘ ) hav ⁑ ( 79.331 ∘ ) β‰ˆ 0.0075356 + 0.77827 Γ— 0.65793 Γ— 0.40743 β‰ˆ 0.21616 ΞΈ β‰ˆ 55.411 ∘ . {\displaystyle {\begin{aligned}\operatorname {hav} \left(\theta \right)&=\operatorname {hav} (\Delta \varphi )+\cos(\varphi _{1})\cos(\varphi _{2})\operatorname {hav} (\Delta \lambda )\\[5mu]&=\operatorname {hav} (9.96^{\circ })+\cos(38.898^{\circ })\cos(48.858^{\circ })\operatorname {hav} (79.331^{\circ })\\[5mu]&\approx 0.0075356+0.77827\times 0.65793\times 0.40743\\[5mu]&\approx 0.21616\\[5mu]\theta &\approx 55.411^{\circ }.\end{aligned}}} The great-circle distance is this central angle, in radians (55.411 degrees is 0.96710 radians), multiplied by the average radius of the Earth , 0.96710 Γ— 6371.2 km β‰ˆ 6161.6 km . {\displaystyle 0.96710\times 6371.2\ {\text{km}}\approx 6161.6\ {\text{km}}.} By comparison, using a more accurate ellipsoidal model of the earth, the geodesic distance between these landmarks can be computed as approximately 6177.45 km. [ 11 ]
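The worked example can be reproduced in a few self-contained lines of Python, using the same inputs and the same mean Earth radius of 6371.2 km:

```python
from math import radians, sin, cos, sqrt, asin

hav = lambda x: sin(x / 2.0) ** 2

phi1, lmb1 = radians(38.898), radians(-77.037)  # White House
phi2, lmb2 = radians(48.858), radians(2.294)    # Eiffel Tower

h = hav(phi2 - phi1) + cos(phi1) * cos(phi2) * hav(lmb2 - lmb1)
theta = 2.0 * asin(sqrt(h))   # central angle in radians

print(round(h, 5))            # 0.21616
print(round(theta, 5))        # 0.96710 rad (~55.411 degrees)
print(round(theta * 6371.2))  # ~6162 km, matching the text
```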
https://en.wikipedia.org/wiki/Haversine_formula
The Havriliak–Negami relaxation is an empirical modification of the Debye relaxation model in electromagnetism. Unlike the Debye model, the Havriliak–Negami relaxation accounts for the asymmetry and broadness of the dielectric dispersion curve. The model was first used to describe the dielectric relaxation of some polymers , [ 1 ] by adding two exponential parameters to the Debye equation: where Ξ΅ ∞ {\displaystyle \varepsilon _{\infty }} is the permittivity at the high frequency limit, Ξ” Ξ΅ = Ξ΅ s βˆ’ Ξ΅ ∞ {\displaystyle \Delta \varepsilon =\varepsilon _{s}-\varepsilon _{\infty }} where Ξ΅ s {\displaystyle \varepsilon _{s}} is the static, low frequency permittivity, and Ο„ {\displaystyle \tau } is the characteristic relaxation time of the medium. The exponents Ξ± {\displaystyle \alpha } and Ξ² {\displaystyle \beta } describe the asymmetry and broadness of the corresponding spectra. Depending on application, the Fourier transform of the stretched exponential function can be a viable alternative that has one parameter less. For Ξ² = 1 {\displaystyle \beta =1} the Havriliak–Negami equation reduces to the Cole–Cole equation , for Ξ± = 1 {\displaystyle \alpha =1} to the Cole–Davidson equation . The storage part Ξ΅ β€² {\displaystyle \varepsilon '} and the loss part Ξ΅ β€³ {\displaystyle \varepsilon ''} of the permittivity (here: Ξ΅ ^ ( Ο‰ ) = Ξ΅ β€² ( Ο‰ ) βˆ’ i Ξ΅ β€³ ( Ο‰ ) {\displaystyle {\hat {\varepsilon }}(\omega )=\varepsilon '(\omega )-i\varepsilon ''(\omega )} with ( Β± i ) 2 = βˆ’ 1 {\displaystyle (\pm i)^{2}=-1} ) can be calculated as and with The maximum of the loss part lies at The Havriliak–Negami relaxation can be expressed as a superposition of individual Debye relaxations with the real valued distribution function where if the argument of the arctangent is positive, else [ 2 ] Noteworthy, g ( ln ⁑ Ο„ ) {\displaystyle g(\ln \tau )} becomes imaginary valued for and complex valued for The first logarithmic moment of this distribution, the average logarithmic relaxation time is where Ξ¨ {\displaystyle \Psi } is the digamma function and E u {\displaystyle {\rm {Eu}}} the Euler constant . [ 3 ] The inverse Fourier transform of the Havriliak-Negami function (the corresponding time-domain relaxation function) can be numerically calculated. [ 4 ] It can be shown that the series expansions involved are special cases of the Fox–Wright function . [ 5 ] In particular, in the time-domain the corresponding of Ξ΅ ^ ( Ο‰ ) {\displaystyle {\hat {\varepsilon }}(\omega )} can be represented as where Ξ΄ ( t ) {\displaystyle \delta (t)} is the Dirac delta function and is a special instance of the Fox–Wright function and, precisely, it is the three parameters Mittag-Leffler function [ 6 ] also known as the Prabhakar function. The function E Ξ± , Ξ² Ξ³ ( z ) {\displaystyle E_{\alpha ,\beta }^{\gamma }(z)} can be numerically evaluated, for instance, by means of a Matlab code . [ 7 ]
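As a minimal numerical sketch of the Havriliak–Negami expression (all parameter values below are arbitrary choices for demonstration, not fitted data), the complex permittivity and the limiting cases named above can be evaluated as follows:

```python
import numpy as np

def hn_permittivity(omega, eps_inf, d_eps, tau, alpha, beta):
    # Havriliak-Negami: eps(w) = eps_inf + d_eps / (1 + (i w tau)^alpha)^beta
    return eps_inf + d_eps / (1.0 + (1j * omega * tau) ** alpha) ** beta

omega = np.logspace(-3, 3, 601)  # angular frequencies, with tau = 1
eps = hn_permittivity(omega, eps_inf=2.0, d_eps=8.0, tau=1.0,
                      alpha=0.8, beta=0.6)
storage, loss = eps.real, -eps.imag  # convention eps = eps' - i eps''

# beta = 1 reduces to Cole-Cole; alpha = 1 to Cole-Davidson;
# alpha = beta = 1 recovers the Debye model:
debye = hn_permittivity(omega, 2.0, 8.0, 1.0, alpha=1.0, beta=1.0)
assert np.allclose(debye, 2.0 + 8.0 / (1.0 + 1j * omega))
```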
https://en.wikipedia.org/wiki/Havriliak–Negami_relaxation
The Hawaii Ocean Time-series ( HOT ) program is a long-term oceanographic study based at the University of Hawaii at Manoa . In 2015, the American Society for Microbiology designated the HOT Program's field site Station ALOHA (A Long-Term Oligotrophic Habitat Assessment; ( 22Β°45β€²N 158Β°00β€²W ο»Ώ / ο»Ώ 22.750Β°N 158.000Β°W ο»Ώ / 22.750; -158.000 )) a "Milestone in Microbiology ", for playing "a key role in defining the discipline of microbial oceanography and educating the public about the vital role of marine microbes in global ecosystems." [ 1 ] Scientists working on the Hawaii Ocean Time-series (HOT) program have been making repeated observations of the hydrography , chemistry and biology of the water column at a station north of Oahu, Hawaii since October 1988. [ 2 ] The objective of this research is to provide a comprehensive description of the ocean at a site representative of the North Pacific Subtropical Gyre . [ 3 ] Cruises are made approximately once per month to the deep-water Station ALOHA located 100Β km north of Oahu, Hawaii. Measurements of the thermohaline structure, water column chemistry, currents , optical properties, primary production , plankton community structure, and rates of particle export are made on each cruise. The HOT program also uses autonomous underwater vehicles , including floats and gliders , to collect data at Station ALOHA between cruises. [ 4 ] HOT was founded to understand the processes controlling the fluxes of carbon and associated bioelements in the ocean and to document changes in the physical structure of the water column. To achieve this, the HOT program has several specific goals: The dissolved inorganic carbon data set that has been accumulated over the course of the HOT program shows the increase of carbon dioxide in the surface waters of the Pacific and subsequent acidification of the ocean . [ 6 ] The data collected by these cruises are available online. The 200th cruise of the HOT program was in 2008. [ 7 ] HOT recently celebrated its 25th year in operation, with the 250th research cruise occurring in March 2013. Station ALOHA is a deep water (~4,800 m) location approximately 100Β km north of the Hawaiian Island of Oahu. Thus, the region is far enough from land to be free of coastal ocean dynamics and terrestrial inputs, but close enough to a major port (Honolulu) to make relatively short duration (less than five days) near-monthly cruises logistically and financially feasible. Sampling at this site occurs within a 10Β km radius around the center of the station. Each HOT cruise begins with a stop at a coastal station south of the island of Oahu, approximately 10Β km off Kahe Point (21Β° 20.6'N, 158Β° 16.4'W) in 1500 m of water. Station Kahe (termed Station 1) is used to test equipment and train new personnel before departing for Station ALOHA. Since August 2004, Station ALOHA has also been home to a surface mooring outfitted for meteorological and upper ocean measurements; this mooring, named WHOTS (also termed Station 50), is a collaborative project between Woods Hole Oceanographic Institution and HOT. WHOTS provides long-term, high-quality air-sea fluxes as a coordinated part of HOT, contributing to the program’s goals of observing heat, fresh water and chemical fluxes. In 2011, the ALOHA Cabled Observatory (ACO) became operational. This instrumented fiber optic cabled observatory provides power and communications to the seabed (4728 m). 
The ACO is currently configured with an array of thermistors , current meters , conductivity sensors, two hydrophones , and a video camera . [ 5 ] A core suite of environmental variables was selected at the start of the program that is expected to display detectable change on time scales of several days to one decade. Since 1988, the interdisciplinary station work has included physical, chemical, biological and sedimentological observations and rate measurements. The initial phase of the HOT program (October 1988 – February 1991) was entirely supported by research vessels, with the exception of the availability of existing satellite and ocean buoy sea surface data. In February 1991, an array of inverted echosounders (IES) was deployed around Station ALOHA and in June 1992, a sequencing sediment trap mooring was deployed a few km north of it. In 1993, the IES network was replaced with two strategically positioned instruments: one at Station ALOHA and the other at the coastal station Kaena. A physical-biogeochemical mooring (known as HALE-ALOHA) was deployed from January 1997 to June 2000 for high frequency atmospheric and oceanic observations. [ 8 ] HOT relies on the University-National Oceanographic Laboratory System research vessel Kilo Moana operated by the University of Hawaii for most of the near-monthly sampling expeditions. When at Station ALOHA, a variety of sampling strategies is used to capture the range of physical and biogeochemical dynamics natural to the NPSG ecosystem. These strategies include high resolution conductivity-temperature-depth ( CTD ) profiles, biogeochemical analyses of discrete water samples, in situ vertically profiling bio-optical instrumentation, free-drifting arrays for determinations of primary production and particle fluxes, deep ocean sediment traps, and oblique plankton net tows. The suite of core measurements conducted by HOT has remained largely unchanged over the program’s lifetime. On each HOT cruise, samples are collected from the surface ocean to near the sea bed (~4,800 m), with the most intensive sampling occurring in the upper 1,000 m. HOT utilizes a β€œburst” vertical profiling strategy where physical and biogeochemical properties are measured at 3 hour intervals over a 36-hour period, covering 3 semi-diurnal tidal cycles and 1 inertial period (~31 hours). This approach captures variability in ocean dynamics due to internal tides around Station ALOHA. It is designed to assess variability on time scales of a few hours to a few years. High frequency variability (less than 6 hours) and variability on time scales of between 3–60 days are not adequately sampled at the present time. [ 5 ] [ 9 ] The 25 year record of ocean carbon measurements at Station ALOHA document that the partial pressure of CO 2 ( p CO 2 ) in the mixed layer is increasing at a rate slightly greater than the trend observed in the atmosphere. This has been accompanied by progressive decreases in seawater pH. Although the effect of anthropogenic CO 2 is evidenced by long-term decreases in seawater pH throughout the upper 600 m, the rate of acidification at Station ALOHA varies with depth. For example, in the upper mesopelagic waters (~160–310 m) pH is decreasing at nearly twice the rate observed in the surface waters. Such depth-dependent differences in acidification are due to a combination of regional differences in time-varying climate signatures, mixing, and changes in biological activity. [ 10 ] [ 11 ]
https://en.wikipedia.org/wiki/Hawaii_Ocean_Time-series
Hawaiian ethnobiology is the study of how people in Hawaii, particularly those living before Western contact, interacted with the plants around them. This includes the practices of agroforestry , horticulture , religious plants, medicinal plants, agriculture , and aquaculture. Often in conservation , "Hawaiian ethnobiology" describes the state of ecology in the Hawaiian Islands prior to human contact. However, since "ethno" refers to people, "Hawaiian ethnobiology" is the study of how people, past and present, interact with the living world around them. The concept of conservation was, like many things in pre-contact ancient Hawaii , decentralized. At the ahupuaΚ»a level, a konohiki managed the natural resource wealth. He would gather information from people's observations and make decisions as to what was kapu (strictly forbidden) at what times. Also, the concept of kuleana (responsibility) fueled conservation. Families were delegated a fishing area. It was their responsibility not to take more than they needed during fishing months, and to feed the fish kalo ( Colocasia esculenta ) and breadfruit ( Artocarpus altilis ) during a certain season. The same idea of not collecting more than what was needed, and of tending to the care of "wild" harvested products, extended up into the forest. In modern times, this role is institutionalized within a central state government. This causes animosity between natural resource collectors (subsistence fishermen) and the state legislature (the local Department of Fish and Wildlife ). Managing the forest resources around you is agroforestry . This includes timber and non-timber forest crops. Hawaiian agroforestry practices: If a religious belief system influences a culture's practices in how they perceive and manage their environment, then those plants are part of a "sacred ecology". [ 1 ] Hawaiian sacred plants include Κ»awa ( Piper methysticum ), which was used both religiously as a sacrament, and by the common people as a relaxant/sedative. Other religious plants that have shaped ecology are ki ( Cordyline fruticosa ) and kalo. Ki is a sterile plant, so the wide distribution of the plant across the main Hawaiian islands indicates human activity, if not through direct planting then through gravitational fragmentation. [ 2 ] Kalo was the staple starch crop of the Hawaiian diet. In Hawaiian genealogy, Haloa was the first born of Papa (Earth Mother) and Wakea (Sky Father). He was stillborn, so Papa went out and buried him. Haloa then sprouted into the first kalo plant. Their second son they also named Haloa. He was charged with the kuleana to always care for his older brother. The Hawaiian people historically drew their direct lineage from Haloa, and many assumed, and some still assume, his responsibility to care for kalo. This responsibility, and the need for food, drove the building of huge kalo-growing complexes called loΚ»i. [ 1 ] [ 2 ]
https://en.wikipedia.org/wiki/Hawaiian_ethnobiology
Hawkes Ocean Technologies is a marine engineering firm that specializes in consumer submarines , founded by Graham Hawkes . [ 1 ] It is headquartered in San Francisco , US. [ 2 ] Hawkes Remotes is a subsidiary that builds ROVs (remotely operated vehicles), unmanned robotic submarines. [ 3 ] Hawkes builds the DeepFlight range of submersibles , which use hydrodynamic forces for diving instead of ballast. [ 4 ] The subs are all-electric. [ 5 ] Some or all of the craft carry two pairs of wings arranged like an airplane's, one pair at the front and one at the rear; the wings are shorter than an airplane's and inverted, so that forward motion pushes the submarine down.
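For a rough sense of the mechanism (the numbers below are hypothetical placeholders, not Hawkes specifications), the downward hydrodynamic force from inverted wings follows the standard lift equation:

```python
# Illustrative estimate of "negative lift" from inverted wings.
rho = 1025.0  # seawater density, kg/m^3
v = 2.0       # forward speed, m/s (hypothetical)
S = 0.5       # combined wing area, m^2 (hypothetical)
C_L = 0.6     # lift coefficient (hypothetical)

downforce = 0.5 * rho * v**2 * S * C_L  # L = (1/2) rho v^2 S C_L, in newtons
print(f"{downforce:.0f} N")             # ~615 N, directed downward
```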
https://en.wikipedia.org/wiki/Hawkes_Ocean_Technologies
The Professor Stephen Hawking Fellowship is a prestigious annual fellowship of the Cambridge Union Society in the University of Cambridge . Awarded to an individual who has made an exceptional contribution to the STEM fields and social discourse, [ 1 ] it is unique amongst comparable accolades in that it is conferred by the students of the University (through the Union), rather than the University itself. [ 2 ] The fellowship was established to celebrate Hawking's achievements and the close relationship between him and the students of Cambridge; Professor Hawking accepted the inaugural fellowship and delivered the lecture in his last public appearance before his passing. [ 3 ] [ 4 ] Each honouree visits the Union to commence their tenure as fellow, delivering what is known as β€˜The Hawking Lecture’.
https://en.wikipedia.org/wiki/Hawking_Fellowship
The Friedel–Crafts reactions are a set of reactions developed by Charles Friedel and James Crafts in 1877 to attach substituents to an aromatic ring . [ 1 ] Friedel–Crafts reactions are of two main types: alkylation reactions and acylation reactions. Both proceed by electrophilic aromatic substitution . [ 2 ] [ 3 ] [ 4 ] [ 5 ] In commercial applications, the alkylating agents are generally alkenes , used in some of the largest-scale reactions practiced in industry. Such alkylations are of major industrial importance, e.g. for the production of ethylbenzene , the precursor to polystyrene, from benzene and ethylene, and for the production of cumene from benzene and propene in the cumene process : Industrial production typically uses solid acids derived from a zeolite as the catalyst. Friedel–Crafts alkylation involves the alkylation of an aromatic ring . Traditionally, the alkylating agents are alkyl halides . Many alkylating agents can be used instead of alkyl halides. For example, enones and epoxides can be used in the presence of protons. The reaction typically employs a strong Lewis acid , such as aluminium chloride , as catalyst, to increase the electrophilicity of the alkylating agent. [ 6 ] This reaction suffers from the disadvantage that the product is more nucleophilic than the reactant, because alkyl groups are activators for the Friedel–Crafts reaction . Consequently, overalkylation can occur. However, steric hindrance can be exploited to limit the number of successive alkylation cycles that occur, as in the t -butylation of 1,4-dimethoxybenzene, which gives only the product of two alkylation cycles, and only one of its three possible isomers: [ 7 ] Furthermore, the reaction is only useful for primary alkyl halides in an intramolecular sense, when a 5- or 6-membered ring is formed. For the intermolecular case, the reaction is limited to tertiary alkylating agents, some secondary alkylating agents (ones for which carbocation rearrangement is degenerate), or alkylating agents that yield stabilized carbocations (e.g., benzylic or allylic ones). In the case of primary alkyl halides, the carbocation-like complex (R (+) ---X---Al (-) Cl 3 ) will undergo a carbocation rearrangement reaction to give almost exclusively the rearranged product derived from a secondary or tertiary carbocation. [ 8 ] Protonation of alkenes generates carbocations , the electrophiles. A laboratory-scale example is the synthesis of neophyl chloride from benzene and methallyl chloride using a sulfuric acid catalyst. [ 9 ] The general mechanism for primary alkyl halides is shown in the figure below. [ 8 ] Friedel–Crafts alkylations can be reversible . Although this is usually undesirable, it can be exploited, for instance to facilitate transalkylation reactions. [ 10 ] It also allows alkyl chains to be added reversibly as protecting groups . This approach is used industrially in the synthesis of 4,4'-biphenol via the oxidative coupling and subsequent dealkylation of 2,6-di-tert-butylphenol . [ 11 ] [ 12 ] Friedel–Crafts acylation involves the acylation of aromatic rings. Typical acylating agents are acyl chlorides . Acid anhydrides as well as carboxylic acids are also viable. A typical Lewis acid catalyst is aluminium trichloride . Because, however, the product ketone forms a rather stable complex with Lewis acids such as AlCl 3 , a stoichiometric amount or more of the "catalyst" must generally be employed, unlike the case of the Friedel–Crafts alkylation, in which the catalyst is constantly regenerated. [ 13 ]
Reaction conditions are similar to those of the Friedel–Crafts alkylation. This reaction has several advantages over the alkylation reaction. Due to the electron-withdrawing effect of the carbonyl group, the ketone product is always less reactive than the original molecule, so multiple acylations do not occur. Also, there are no carbocation rearrangements, as the acylium ion is stabilized by a resonance structure in which the positive charge is on the oxygen. The viability of the Friedel–Crafts acylation depends on the stability of the acyl chloride reagent. Formyl chloride, for example, is too unstable to be isolated. Thus, synthesis of benzaldehyde through the Friedel–Crafts pathway requires that formyl chloride be synthesized in situ . This is accomplished via the Gattermann–Koch reaction , in which benzene is treated with carbon monoxide and hydrogen chloride under high pressure, catalyzed by a mixture of aluminium chloride and cuprous chloride . In industry, simple ketones that could be obtained by Friedel–Crafts acylation are instead produced by alternative methods, e.g., oxidation. The reaction proceeds through generation of an acylium ion. The reaction is completed by deprotonation of the arenium ion by AlCl 4 βˆ’ , regenerating the AlCl 3 catalyst. However, in contrast to the truly catalytic alkylation reaction, the formed ketone is a moderate Lewis base, which forms a complex with the strong Lewis acid aluminium trichloride. The formation of this complex is typically irreversible under reaction conditions. Thus, a stoichiometric quantity of AlCl 3 is needed. The complex is destroyed upon aqueous workup to give the desired ketone. For example, the classical synthesis of deoxybenzoin calls for 1.1 equivalents of AlCl 3 with respect to the limiting reagent, phenylacetyl chloride. [ 14 ] In certain cases, generally when the benzene ring is activated, Friedel–Crafts acylation can also be carried out with catalytic amounts of a milder Lewis acid (e.g. Zn(II) salts) or a BrΓΈnsted acid catalyst, using the anhydride or even the carboxylic acid itself as the acylation agent. If desired, the resulting ketone can be subsequently reduced to the corresponding alkane substituent by either Wolff–Kishner reduction or Clemmensen reduction . The net result is the same as the Friedel–Crafts alkylation except that rearrangement is not possible. [ 15 ] Arenes react with certain aldehydes and ketones to form the hydroxyalkylated products, for example in the reaction of the mesityl derivative of glyoxal with benzene: [ 16 ] As usual, the aldehyde group is a more reactive electrophile than the phenone . This reaction is related to several classic named reactions: Friedel–Crafts reactions have been used in the synthesis of several triarylmethane and xanthene dyes . [ 26 ] Examples are the synthesis of thymolphthalein (a pH indicator) from two equivalents of thymol and phthalic anhydride : A reaction of phthalic anhydride with resorcinol in the presence of zinc chloride gives the fluorophore fluorescein . Replacing resorcinol by N,N-diethylaminophenol in this reaction gives rhodamine B : The Haworth synthesis is a classic method for the synthesis of polycyclic aromatic hydrocarbons. In this reaction, an arene is reacted with succinic anhydride ; the resulting product is then reduced in either a Clemmensen reduction or a Wolff–Kishner reduction . Lastly, a second Friedel–Crafts acylation takes place with addition of acid. [ 27 ]
The product formed in this reaction is then analogously reduced, followed by a dehydrogenation reaction (with the reagent SeO 2 , for example) to extend the aromatic ring system. [ 28 ] Reaction of chloroform with aromatic compounds using an aluminium chloride catalyst gives triarylmethanes, which are often brightly colored, as is the case in triarylmethane dyes. This is a bench test for aromatic compounds. [ 29 ]
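To make the stoichiometry point concrete, the following is a small Python sketch of the deoxybenzoin case quoted above (molar masses are computed from standard atomic weights; the helper function is our own, not from any cited procedure):

```python
MW_ALCL3 = 133.34                  # g/mol, AlCl3
MW_PHENYLACETYL_CHLORIDE = 154.59  # g/mol, C8H7ClO

def alcl3_charge(mass_acyl_chloride_g, equivalents=1.1):
    """Grams of AlCl3 to charge for a given mass of phenylacetyl
    chloride; >= 1 equivalent is needed because the ketone product
    sequesters the Lewis acid, as explained above."""
    mol = mass_acyl_chloride_g / MW_PHENYLACETYL_CHLORIDE
    return equivalents * mol * MW_ALCL3

print(round(alcl3_charge(10.0), 2))  # ~9.49 g of AlCl3 per 10 g acyl chloride
```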
https://en.wikipedia.org/wiki/Haworth_Phenanthrene_synthesis
In chemistry , a Haworth projection is a common way of writing a structural formula to represent the cyclic structure of monosaccharides with a simple three-dimensional perspective. A Haworth projection approximates the shapes of the actual molecules better for furanoses (which are in reality nearly planar) than for pyranoses , which exist in solution in the chair conformation. [ 1 ] Organic chemistry and especially biochemistry are the areas of chemistry that use the Haworth projection the most. The Haworth projection was named after the British chemist Sir Norman Haworth . [ 2 ] A Haworth projection has the following characteristics: [ 3 ]
https://en.wikipedia.org/wiki/Haworth_projection
The Haworth synthesis is a multistep preparation of alkyl-substituted polycyclic aromatic hydrocarbons developed by the British chemist Robert Downs Haworth [ 1 ] [ 2 ] in 1932. [ 3 ]
https://en.wikipedia.org/wiki/Haworth_synthesis
Hay's test , also known as Hay's sulphur powder test , is a chemical test used for detecting the presence of bile salts in urine . [ 1 ] Sulphur powder is sprinkled into a test tube with three millilitres of urine, and if the test is positive, the sulphur powder sinks to the bottom of the test tube. Sulphur powder sinks because bile salts decrease the surface tension of urine. [ 2 ] [ 3 ] [ 4 ] [ 5 ]
https://en.wikipedia.org/wiki/Hay's_test
The Hayashi limit is a theoretical constraint upon the maximum radius of a star for a given mass . When a star is fully within hydrostatic equilibrium , a condition where the inward force of gravity is matched by the outward pressure of the gas, the star cannot exceed the radius defined by the Hayashi limit. This has important implications for the evolution of a star, both during the formative contraction period and later when the star has consumed most of its hydrogen supply through nuclear fusion . [ 1 ] A Hertzsprung–Russell diagram displays a plot of a star's surface temperature against the luminosity . On this diagram, the Hayashi limit forms a nearly vertical line at about 2,500 K. The outer layers of low temperature stars are always convective, and models of stellar structure for fully convective stars do not provide a solution to the right of this line. Thus in theory, stars are constrained to remain to the left of this limit during all periods when they are in hydrostatic equilibrium, and the region to the right of the line forms a type of "forbidden zone". Note, however, that there are exceptions to the Hayashi limit. These include collapsing protostars , as well as stars with magnetic fields that interfere with the internal transport of energy through convection. [ 2 ] Red giants are stars that have expanded their outer envelope in order to support the nuclear fusion of helium. This moves them up and to the right on the H-R diagram. However, they are constrained by the Hayashi limit not to expand beyond a certain radius. Stars that find themselves to the right of the Hayashi limit have large convection currents in their interior driven by massive temperature gradients. Additionally, such states are unstable, so the stars rapidly adjust, moving in the Hertzsprung–Russell diagram until they reach the Hayashi limit. [ 3 ] When lower-mass main-sequence stars start expanding into red giants, they revisit the Hayashi track . The Hayashi limit constrains the asymptotic giant branch evolution of stars, which is important in the late evolution of stars and can be observed, for example, in the ascending branches of the Hertzsprung–Russell diagrams of globular clusters, which have stars of approximately the same age and composition. [ 4 ] The Hayashi limit is named after ChΕ«shirō Hayashi , a Japanese astrophysicist. [ 5 ] Despite its importance to protostars and late-stage main-sequence stars, the Hayashi limit was only recognized in Hayashi's paper in 1961. This late recognition may be because the properties of the Hayashi track required numerical calculations that were not fully developed before then. [ 4 ] We can derive the relation between the luminosity, temperature and pressure for a simple model of a fully convective star, and from the form of this relation we can infer the Hayashi limit. This is an extremely crude model of what occurs in convective stars, but it has good qualitative agreement with the full model with fewer complications. We follow the derivation in Kippenhahn, Weigert, and Weiss in Stellar Structure and Evolution. [ 4 ] Nearly all of the interior part of convective stars has an adiabatic stratification (corrections to this are small for fully convective regions), such that d ln T / d ln P = βˆ‡ a d i a b a t i c = 0.4 {\displaystyle {\frac {d\ln T}{d\ln P}}=\nabla _{\mathrm {adiabatic} }=0.4} , which holds for an adiabatic expansion of an ideal gas .
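The value 0.4 can be checked in one line; this small sketch (ours) also confirms that the polytropic relation with n = 3/2 used in the next step is consistent with it:

```python
# For an adiabat of an ideal monatomic gas, P ~ T^(gamma/(gamma-1)),
# so dlnT/dlnP = (gamma - 1)/gamma.
gamma = 5.0 / 3.0
print((gamma - 1.0) / gamma)  # 0.4 exactly (= 2/5)

# The polytropic relation P = C T^(1+n) with n = 3/2 gives the same
# gradient, dlnT/dlnP = 1/(1 + n):
n = 1.5
print(1.0 / (1.0 + n))        # 0.4
```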
We assume that this relation holds from the interior to the surface of the star, which is called the photosphere. We assume βˆ‡ a d i a b a t i c {\displaystyle \nabla _{\mathrm {adiabatic} }} to be constant throughout the interior of the star, with value 0.4. With this we still obtain the correct distinctive behavior. For the interior we consider a simple polytropic relation between P and T: P = C T 1 + n {\displaystyle P=CT^{1+n}} with the index n = 3 / 2 {\displaystyle n=3/2} . We assume the relation above to hold up to the photosphere, where we assume a simple absorption law ΞΊ = ΞΊ 0 P a T b {\displaystyle \kappa =\kappa _{0}P^{a}T^{b}} . Then, we use the hydrostatic equilibrium equation and integrate it with respect to the radius to obtain P 0 = c o n s t β‹… ( M R 2 T e f f βˆ’ b ) 1 1 + a {\displaystyle P_{0}=\mathrm {const} \cdot \left({\frac {M}{R^{2}}}T_{\mathrm {eff} }^{-b}\right)^{\frac {1}{1+a}}} . For the solution in the interior we set P = P 0 {\displaystyle P=P_{0}} and T = T e f f {\displaystyle T=T_{\mathrm {eff} }} in the P–T relation and then eliminate the pressure from this equation. Luminosity is given by the Stefan–Boltzmann law applied to a perfect black body : L = 4 Ο€ R 2 Οƒ T e f f 4 {\displaystyle L=4\pi R^{2}\sigma \,T_{\mathrm {eff} }^{4}} . Thus, any value of R corresponds to a certain point in the Hertzsprung–Russell diagram. Finally, after some algebra this is the equation for the Hayashi limit in the Hertzsprung–Russell diagram: log ⁑ ( T e f f ) = A log ⁑ ( L ) + B log ⁑ ( M ) + c o n s t {\displaystyle \log(T_{\mathrm {eff} })=A\log(L)+B\log(M)+\mathrm {const} } [ 4 ] with coefficients A = 0.75 a βˆ’ 0.25 b βˆ’ 5.5 a + 1.5 {\displaystyle A={\frac {0.75a-0.25}{b-5.5a+1.5}}} , B = 0.5 a βˆ’ 1.5 b βˆ’ 5.5 a + 1.5 {\displaystyle B={\frac {0.5a-1.5}{b-5.5a+1.5}}} . Plugging in a β‰ˆ 1 {\displaystyle a\approx 1} and b β‰ˆ 3 {\displaystyle b\approx 3} for a cool, hydrogen-ion-dominated atmospheric opacity model ( T < 5000 K {\displaystyle T<5000\,K} ) yields the takeaways: These predictions are supported by numerical simulations of stars. [ 4 ] Until now we have made no claims on the stability of models to the left of, to the right of, or at the Hayashi limit in the Hertzsprung–Russell diagram. To the left of the Hayashi limit, we have βˆ‡ < βˆ‡ a d i a b a t i c {\displaystyle \nabla <\nabla _{\mathrm {adiabatic} }} and some part of the model is radiative. The model is fully convective at the Hayashi limit, with βˆ‡ = βˆ‡ a d i a b a t i c {\displaystyle \nabla =\nabla _{\mathrm {adiabatic} }} . Models to the right of the Hayashi limit should have βˆ‡ > βˆ‡ a d i a b a t i c {\displaystyle \nabla >\nabla _{\mathrm {adiabatic} }} . If a star is formed such that some region in its deep interior has βˆ‡ βˆ’ βˆ‡ a d i a b a t i c > 0 {\displaystyle \nabla -\nabla _{\mathrm {adiabatic} }>0} , it develops large convective fluxes with velocities v c o n v e c t i v e β‰ˆ ( βˆ‡ βˆ’ βˆ‡ a d i a b a t i c ) 1 / 2 {\displaystyle v_{\mathrm {convective} }\approx (\nabla -\nabla _{\mathrm {adiabatic} })^{1/2}} . The convective fluxes of energy cool down the interior rapidly until βˆ‡ = βˆ‡ a d i a b a t i c {\displaystyle \nabla =\nabla _{\mathrm {adiabatic} }} and the star has moved to the Hayashi limit. In fact, it can be shown from the mixing length model that even a small excess can transport energy from the deep interior to the surface by convective fluxes. This happens within the short timescale for the adjustment of convection, which is still larger than the timescales for non-equilibrium processes in the star, such as the hydrodynamic adjustment associated with the thermal time scale .
Hence, the Hayashi limit is the boundary between an β€œallowed” stable region (left) and a β€œforbidden” unstable region (right) for stars of given M and composition that are in hydrostatic equilibrium and have fully adjusted convection. [ 4 ]
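As a quick numerical check of the coefficients just derived, the following sketch evaluates A and B for the Hβˆ’ opacity exponents a β‰ˆ 1 and b β‰ˆ 3 quoted above (Python; a minimal illustration of the algebra, not a stellar-structure calculation):

```python
# Evaluate the Hayashi-limit slope coefficients in
# log(T_eff) = A*log(L) + B*log(M) + const, as derived above.

def hayashi_coefficients(a, b):
    """Coefficients A and B for an opacity law kappa = kappa0 * P**a * T**b."""
    denom = 5.5 * a + b + 1.5
    A = (0.75 * a - 0.25) / denom
    B = (0.5 * a + 1.5) / denom
    return A, B

# H- (negative hydrogen ion) opacity, valid for T < 5000 K: a ~ 1, b ~ 3.
A, B = hayashi_coefficients(a=1.0, b=3.0)
print(f"A = {A:.3f}, B = {B:.3f}")  # A = 0.050, B = 0.200
# A << 1: the Hayashi line is nearly vertical in the HR diagram.
# B > 0: the line sits at higher T_eff for larger stellar mass.
```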
https://en.wikipedia.org/wiki/Hayashi_limit
The Hayashi rearrangement is the chemical reaction of ortho -benzoylbenzoic acids catalyzed by sulfuric acid or phosphorus pentoxide . [ 1 ] [ 2 ] This reaction proceeds through electrophilic acylium ion attack with a spiro intermediate. [ 3 ] [ 4 ]
https://en.wikipedia.org/wiki/Hayashi_rearrangement
The Hayashi track is a luminosity–temperature relationship obeyed by infant stars of less than 3 M β˜‰ in the pre-main-sequence phase (PMS phase) of stellar evolution. It is named after Japanese astrophysicist Chushiro Hayashi . On the Hertzsprung–Russell diagram , which plots luminosity against temperature, the track is a nearly vertical curve. After a protostar ends its phase of rapid contraction and becomes a T Tauri star , it is extremely luminous. The star continues to contract, but much more slowly. While slowly contracting, the star follows the Hayashi track downwards, becoming several times less luminous but staying at roughly the same surface temperature, until either a radiative zone develops, at which point the star starts following the Henyey track , or nuclear fusion begins, marking its entry onto the main sequence . The shape and position of the Hayashi track on the Hertzsprung–Russell diagram depends on the star's mass and chemical composition. For solar-mass stars, the track lies at a temperature of roughly 4000 K. Stars on the track are nearly fully convective and have their opacity dominated by H βˆ’ ions. Stars less than 0.5 M β˜‰ are fully convective even on the main sequence, but their opacity begins to be dominated by Kramers' opacity law after nuclear fusion begins, thus moving them off the Hayashi track. Stars between 0.5 and 3 M β˜‰ develop a radiative zone prior to reaching the main sequence. Stars between 3 and 10 M β˜‰ are fully radiative at the beginning of the pre-main-sequence. Even heavier stars are born onto the main sequence, with no PMS evolution. [ 1 ] At the end of a low- or intermediate-mass star's life, the star follows an analogue of the Hayashi track, but in reverseβ€”it increases in luminosity, expands, and stays at roughly the same temperature, eventually becoming a red giant . In 1961, Professor Chushiro Hayashi published two papers [ 2 ] [ 3 ] that led to the concept of the pre-main-sequence and form the basis of the modern understanding of early stellar evolution. Hayashi realized that the existing model, in which stars are assumed to be in radiative equilibrium with no substantial convection zone, cannot explain the shape of the red-giant branch . [ 4 ] He therefore replaced the model by including the effects of thick convection zones on a star's interior. A few years prior, Osterbrock proposed deep convection zones with efficient convection, analyzing them using the opacity of H βˆ’ ions (the dominant opacity source in cool atmospheres) at temperatures below 5000Β K. However, the earliest numerical models of Sun-like stars did not follow up on this work and continued to assume radiative equilibrium. [ 1 ] In his 1961 papers, Hayashi showed that the convective envelope of a star is determined by E = 4 Ο€ G 3 / 2 ( ΞΌ H k ) 5 / 2 M 1 / 2 R 3 / 2 P T 5 / 2 , {\displaystyle E=4\pi G^{3/2}\left({\frac {\mu H}{k}}\right)^{5/2}{\frac {M^{1/2}R^{3/2}P}{T^{5/2}}},} where E is unitless, and not the energy . Modelling stars as polytropes with index 3/2 (in other words, assuming they follow a pressure-density relationship of P = K ρ 5 / 3 {\displaystyle P=K\rho ^{5/3}} ), he found that E = 45 is the maximum for a quasistatic star. If a star is not contracting rapidly, E = 45 defines a curve on the HR diagram, to the right of which the star cannot exist.
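The article notes that E is unitless. One way to make that concrete is to track the SI exponents of every factor in the formula and confirm that they cancel. The sketch below does this bookkeeping by hand (Python; an illustrative check, not part of Hayashi's papers):

```python
from collections import Counter

def dim(**exps):
    """An SI dimension as a map from base unit to exponent."""
    return Counter(exps)

def combine(*terms):
    """Total dimension of a product of powers, given (dimension, power) pairs."""
    total = Counter()
    for d, p in terms:
        for unit, e in d.items():
            total[unit] += e * p
    return {u: e for u, e in total.items() if e != 0}

G   = dim(m=3, kg=-1, s=-2)           # gravitational constant
H   = dim(kg=1)                       # hydrogen atom mass (mu is dimensionless)
k_B = dim(kg=1, m=2, s=-2, K=-1)      # Boltzmann constant (J/K)
M, R = dim(kg=1), dim(m=1)
P, T = dim(kg=1, m=-1, s=-2), dim(K=1)

# E = 4*pi * G**(3/2) * (mu*H/k)**(5/2) * M**(1/2) * R**(3/2) * P / T**(5/2)
print(combine((G, 1.5), (H, 2.5), (k_B, -2.5),
              (M, 0.5), (R, 1.5), (P, 1.0), (T, -2.5)))  # {} -> dimensionless
```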
He then computed the evolutionary tracks and isochrones (luminosity–temperature distributions of stars at a given age) for a variety of stellar masses and noted that NGC2264 , a very young star cluster, fits the isochrones well. In particular, he calculated much lower ages for solar-type stars in NGC2264 and predicted that these stars were rapidly contracting T Tauri stars . In 1962, Hayashi published a 183-page review of stellar evolution, discussing the evolution of stars born in the forbidden region. These stars rapidly contract due to gravity before settling to a quasistatic, fully convective state on the Hayashi tracks. In 1965, numerical models by Iben and Ezer & Cameron realistically simulated pre-main-sequence evolution, including the Henyey track that stars follow after leaving the Hayashi track. These standard PMS tracks can still be found in textbooks on stellar evolution. The forbidden zone is the region on the HR diagram to the right of the Hayashi track where no star can be in hydrostatic equilibrium , even those that are partially or fully radiative. Newborn protostars start out in this zone, but are not in hydrostatic equilibrium and will rapidly move towards the Hayashi track. Because stars emit light via black-body radiation , the power per unit surface area they emit is given by the Stefan–Boltzmann law : j ⋆ = Οƒ T 4 . {\displaystyle j^{\star }=\sigma T^{4}.} The star's luminosity is therefore given by L = 4 Ο€ R 2 Οƒ T 4 . {\displaystyle L=4\pi R^{2}\sigma T^{4}.} For a given L , a lower temperature implies a larger radius, and vice versa. Thus, the Hayashi track separates the HR diagram into two regions: the allowed region to the left, with high temperatures and smaller radii for each luminosity, and the forbidden region to the right, with lower temperatures and correspondingly larger radii. The Hayashi limit can refer to either the lower bound in temperature or the upper bound on radius defined by the Hayashi track. The region to the right is forbidden because it can be shown that a star in the region must have a temperature gradient of d ln ⁑ T d ln ⁑ P > 0.4 , {\displaystyle {\frac {d\ln T}{d\ln P}}>0.4,} where d ln ⁑ T / d ln ⁑ P = 0.4 {\displaystyle d\ln T/d\ln P=0.4} for a monatomic ideal gas undergoing adiabatic expansion or contraction. A temperature gradient greater than 0.4 is therefore called superadiabatic. Consider a star with a superadiabatic gradient. Imagine a parcel of gas that starts at radial position r , but moves upwards to r + dr in a sufficiently short time that it exchanges negligible heat with its surroundingsβ€”in other words, the process is adiabatic. The pressure of the surroundings, as well as that of the parcel, decreases by some amount dP . The parcel's temperature changes by d T = 0.4 T d P / P {\displaystyle dT=0.4\,T\,dP/P} . The temperature of the surroundings also decreases, but by some amount dTβ€² that is greater than dT . The parcel therefore ends up being hotter than its surroundings. Since the ideal gas law can be written P = ρ R T / ΞΌ {\displaystyle P=\rho RT/\mu } , a higher temperature implies a lower density at the same pressure. The parcel is therefore also less dense than its surroundings. This will cause it to rise even more, and the parcel will become even less dense than its new surroundings. Clearly, this situation is not stable. In fact, a superadiabatic gradient causes convection . 
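The parcel argument above condenses into a few lines of code. A minimal sketch (Python; the displacement size is an arbitrary small number used only for the comparison):

```python
def parcel_is_buoyant(grad_ambient, grad_ad=0.4, dlnP=-0.01):
    """Displace a gas parcel upward adiabatically and test for instability.

    grad_ambient: the surroundings' d(ln T)/d(ln P).
    grad_ad: the adiabatic gradient (0.4 for a monatomic ideal gas).
    dlnP < 0: the pressure drop over a small upward displacement.
    """
    dlnT_parcel = grad_ad * dlnP         # parcel cools adiabatically
    dlnT_ambient = grad_ambient * dlnP   # surroundings cool per their gradient
    # At equal pressure, ideal gas: warmer means less dense, hence buoyant.
    return dlnT_parcel > dlnT_ambient    # parcel ends up warmer => keeps rising

print(parcel_is_buoyant(0.39))  # False: subadiabatic, the parcel sinks back
print(parcel_is_buoyant(0.41))  # True: superadiabatic, convection sets in
```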
Convection tends to lower the temperature gradient because the rising parcel of gas will eventually be dispersed, dumping its excess thermal and kinetic energy into its surroundings and heating up said surroundings. In stars, the convection process is known to be highly efficient, with a typical d ln ⁑ T / d ln ⁑ P {\displaystyle d\ln T/d\ln P} that only exceeds the adiabatic gradient by 1 part in 10 million. [ 5 ] If a star is placed in the forbidden zone, with a temperature gradient much greater than 0.4, it will experience rapid convection that brings the gradient down. Since this convection will drastically change the star's pressure and temperature distribution, the star is not in hydrostatic equilibrium , and will contract until it is. A star far to the left of the Hayashi track has a temperature gradient smaller than adiabatic. This means that if a parcel of gas rises a tiny bit, it will be more dense than its surroundings and sink back to where it came from. Convection therefore does not occur, and almost all energy output is carried radiatively. Stars form when small regions of a giant molecular cloud collapse under their own gravity, becoming protostars . The collapse releases gravitational energy, which heats up the protostar. This process occurs on the free fall timescale , which is roughly 100,000 years for solar-mass protostars, and ends when the protostar reaches approximately 4000 K. This is known as the Hayashi boundary, and at this point, the protostar is on the Hayashi track. At this point, they are known as T Tauri stars and continue to contract, but much more slowly. As they contract, they decrease in luminosity because less surface area becomes available for emitting light. The Hayashi track gives the resulting change in temperature, which will be minimal compared to the change in luminosity because the Hayashi track is nearly vertical. In other words, on the HR diagram, a T Tauri star starts out on the Hayashi track with a high luminosity and moves downward along the track as time passes. The Hayashi track describes a fully convective star. This is a good approximation for very young pre-main-sequence stars because they are still cool and highly opaque , so that radiative transport is insufficient to carry away the generated energy and convection must occur. Stars less massive than 0.5 M β˜‰ remain fully convective, and therefore remain on the Hayashi track, throughout their pre-main-sequence stage, joining the main sequence at the bottom of the Hayashi track. Stars heavier than 0.5 M β˜‰ have higher interior temperatures, which decreases their central opacity and allows radiation to carry away large amounts of energy. This allows a radiative zone to develop around the star's core. The star is then no longer on the Hayashi track, and experiences a period of rapidly increasing temperature at nearly constant luminosity. This is called the Henyey track , and ends when temperatures are high enough to ignite hydrogen fusion in the core. The star is then on the main sequence . Lower-mass stars follow the Hayashi track until the track intersects with the main sequence, at which point hydrogen fusion begins and the star follows the main sequence. Even lower-mass 'stars' never achieve the conditions necessary to fuse hydrogen and become brown dwarfs . The exact shape and position of the Hayashi track can only be computed numerically using computer models. Nevertheless, we can make an extremely crude analytical argument that captures most of the track's properties. 
The following derivation loosely follows that of Kippenhahn, Weigert, and Weiss in Stellar Structure and Evolution . [ 5 ] In our simple model, a star is assumed to consist of a fully convective interior inside of a fully radiative atmosphere. The convective interior is assumed to be an ideal monatomic gas with a perfectly adiabatic temperature gradient: d ln ⁑ T d ln ⁑ P = 0.4. {\displaystyle {\frac {d\ln T}{d\ln P}}=0.4.} This quantity is sometimes labelled βˆ‡ {\displaystyle \nabla } . The following adiabatic equation therefore holds true for the entire interior: P 1 βˆ’ Ξ³ T Ξ³ = C , {\displaystyle P^{1-\gamma }T^{\gamma }=C,} where Ξ³ {\displaystyle \gamma } is the adiabatic gamma , which is 5/3 for an ideal monatomic gas. The ideal gas law says: P = N k T / V = ρ k T ΞΌ H = ( k ρ ΞΌ H ) Ξ³ C , {\displaystyle {\begin{aligned}P&=NkT/V\\[1ex]&={\frac {\rho kT}{\mu H}}\\[1ex]&=\left({\frac {k\rho }{\mu H}}\right)^{\gamma }C,\end{aligned}}} where ΞΌ {\displaystyle \mu } is the molecular weight per particle and H is (to a very good approximation) the mass of a hydrogen atom; the last equality uses the adiabatic relation to eliminate T. This equation represents a polytrope of index 1.5, since a polytrope is defined by P = K ρ 1 + 1 / n {\displaystyle P=K\rho ^{1+1/n}} , where n = 1.5 {\displaystyle n=1.5} is the polytropic index. Applying the equation to the center of the star gives P c = ( k ρ c ΞΌ H ) Ξ³ C . {\displaystyle P_{c}=\left({\frac {k\rho _{c}}{\mu H}}\right)^{\gamma }C.} We can solve for C : C = ( ΞΌ H ρ c k ) Ξ³ P c . {\displaystyle C=\left({\frac {\mu H}{\rho _{c}k}}\right)^{\gamma }P_{c}.} But for any polytrope, P c = W n G M 2 / R 4 {\displaystyle P_{c}=W_{n}GM^{2}/R^{4}} and ρ c = K n ρ avg {\displaystyle \rho _{c}=K_{n}\rho _{\text{avg}}} , where W n , K n {\displaystyle W_{n},K_{n}} are constants independent of pressure and density, and the average density is defined as ρ avg ≑ M 4 3 Ο€ R 3 . {\displaystyle \rho _{\text{avg}}\equiv {\frac {M}{{\frac {4}{3}}\pi R^{3}}}.} Plugging these two equations into the equation for C , we have C ∼ M 2 βˆ’ Ξ³ R 3 Ξ³ βˆ’ 4 , {\displaystyle C\sim M^{2-\gamma }R^{3\gamma -4},} where all multiplicative constants have been ignored. Recall that our original definition of C was P 1 βˆ’ Ξ³ T Ξ³ = C . {\displaystyle P^{1-\gamma }T^{\gamma }=C.} Therefore, for any star of mass M and radius R , we have P 1 βˆ’ Ξ³ T Ξ³ ∼ M 2 βˆ’ Ξ³ R 3 Ξ³ βˆ’ 4 . {\displaystyle P^{1-\gamma }T^{\gamma }\sim M^{2-\gamma }R^{3\gamma -4}.} (1) We need another relationship between P , T , M , and R in order to eliminate P . This relationship will come from the atmosphere model. The atmosphere is assumed to be thin, with average opacity k . Opacity relates the growth of optical depth to the density along a path: by definition, d Ο„ d r = k ρ , {\displaystyle {\frac {d\tau }{dr}}=k\rho ,} so the optical depth of the stellar surface, also called the photosphere , is Ο„ = ∫ R ∞ k ρ d r = k ∫ R ∞ ρ d r , {\displaystyle \tau =\int _{R}^{\infty }k\rho \,dr=k\int _{R}^{\infty }\rho \,dr,} where R is the stellar radius, also known as the position of the photosphere. The pressure at the surface is P 0 = ∫ R ∞ g ρ d r = G M R 2 ∫ R ∞ ρ d r = G M Ο„ k R 2 . {\displaystyle {\begin{aligned}P_{0}&=\int _{R}^{\infty }g\rho \,dr\\&={\frac {GM}{R^{2}}}\int _{R}^{\infty }\rho \,dr\\&={\frac {GM\tau }{kR^{2}}}.\end{aligned}}} The optical depth at the photosphere turns out to be Ο„ = 2 / 3 {\displaystyle \tau =2/3} .
By definition, the temperature of the photosphere is T = T eff {\displaystyle T=T_{\text{eff}}} , where the effective temperature is given by L = 4 Ο€ R 2 Οƒ T eff 4 {\displaystyle L=4\pi R^{2}\sigma T_{\text{eff}}^{4}} . Therefore, the pressure is P 0 = G M R 2 2 3 k . {\displaystyle P_{0}={\frac {GM}{R^{2}}}{\frac {2}{3k}}.} We can approximate the opacity to be k = k 0 P a T b , {\displaystyle k=k_{0}P^{a}T^{b},} where a and b are constants of the opacity law. Plugging this into the pressure equation, we get P 0 ∝ ( M R 2 T eff b ) 1 a + 1 . {\displaystyle P_{0}\propto \left({\frac {M}{R^{2}T_{\text{eff}}^{b}}}\right)^{\frac {1}{a+1}}.} (2) Finally, we need to eliminate R and introduce L , the luminosity. This can be done with the equation L = 4 Ο€ R 2 Οƒ T eff 4 . {\displaystyle L=4\pi R^{2}\sigma T_{\text{eff}}^{4}.} (3) Equations 1 and 2 can now be combined by setting T = T eff {\displaystyle T=T_{\text{eff}}} and P = P 0 {\displaystyle P=P_{0}} in equationΒ 1, then eliminating P 0 {\displaystyle P_{0}} . R can be eliminated using equation 3. After some algebra, and setting Ξ³ = 5 / 3 {\displaystyle \gamma =5/3} , we get ln ⁑ T eff = A ln ⁑ L + B ln ⁑ M + const , {\displaystyle \ln T_{\text{eff}}=A\ln L+B\ln M+{\text{const}},} where A = 0.75 a βˆ’ 0.25 5.5 a + b + 1.5 , B = 0.5 a + 1.5 5.5 a + b + 1.5 . {\displaystyle {\begin{aligned}A&={\frac {0.75a-0.25}{5.5a+b+1.5}},\\B&={\frac {0.5a+1.5}{5.5a+b+1.5}}.\end{aligned}}} In cool stellar atmospheres ( T < 5000 K ), like those of newborn stars, the dominant source of opacity is the H βˆ’ ion, for which a β‰ˆ 1 {\displaystyle a\approx 1} and b β‰ˆ 3 {\displaystyle b\approx 3} , giving A = 0.05 {\displaystyle A=0.05} and B = 0.2 {\displaystyle B=0.2} . Since A is much smaller than 1, the Hayashi track is extremely steep: if the luminosity changes by a factor of 2, the temperature only changes by 4%. The fact that B is positive indicates that the Hayashi track shifts left on the HR diagram, towards higher temperatures, as mass increases. Although this model is extremely crude, these qualitative observations are fully supported by numerical simulations. At high temperatures, the atmosphere's opacity begins to be dominated by Kramers' opacity law instead of the H βˆ’ ion, with a = 1 and b = βˆ’4.5. In that case, A = 0.2 in our crude model, far higher than 0.05, and the star is no longer on the Hayashi track. In Stellar Interiors , [ 6 ] Hansen, Kawaler, and Trimble go through a similar derivation without neglecting multiplicative constants and arrive at T eff = ( 2600 K ) ΞΌ 13 / 51 ( M M βŠ™ ) 7 / 51 ( L L βŠ™ ) 1 / 102 , {\displaystyle T_{\text{eff}}=(2600~{\text{K}})\mu ^{13/51}\left({\frac {M}{M_{\odot }}}\right)^{7/51}\left({\frac {L}{L_{\odot }}}\right)^{1/102},} where ΞΌ {\displaystyle \mu } is the molecular weight per particle. The authors note that the coefficient of 2600Β K is too lowβ€”it should be around 4000Β Kβ€”but this equation nevertheless shows that temperature is nearly independent of luminosity. The diagram at the top of this article shows numerically computed stellar evolution tracks for various masses. The vertical portion of each track is the Hayashi track. The endpoints of each track lie on the main sequence. The horizontal segments for higher-mass stars show the Henyey track . It is approximately true that βˆ‚ ln ⁑ T eff βˆ‚ ln ⁑ M β‰ˆ 0.1. {\displaystyle {\frac {\partial \ln T_{\text{eff}}}{\partial \ln M}}\approx 0.1.} The diagram to the right shows how Hayashi tracks change with changes in chemical composition.
Z is the star's metallicity , the mass fraction not accounted for by hydrogen or helium. For any given hydrogen mass fraction, increasing Z leads to increasing molecular weight. The dependence of temperature on molecular weight is extremely steepβ€”it is approximately βˆ‚ ln ⁑ T eff βˆ‚ ln ⁑ ΞΌ β‰ˆ βˆ’ 26. {\displaystyle {\frac {\partial \ln T_{\text{eff}}}{\partial \ln \mu }}\approx -26.} Decreasing Z by a factor of 10 shifts the track right, changing ln ⁑ T eff {\displaystyle \ln T_{\text{eff}}} by about 0.05. Chemical composition affects the Hayashi track in a few ways. The track depends strongly on the atmosphere's opacity, and this opacity is dominated by the H βˆ’ ion. The abundance of the H βˆ’ ion is proportional to the density of free electrons, which, in turn, is higher if there are more metals because metals are easier to ionize than hydrogen or helium. Observational evidence of the Hayashi track comes from color–magnitude plotsβ€”the observational equivalent of HR diagramsβ€”of young star clusters. [ 1 ] For Hayashi, NGCΒ 2264 provided the first evidence of a population of contracting stars. In 2012, data from NGCΒ 2264 were re-analyzed to account for dust reddening and extinction. The resulting color–magnitude plot is shown at right. In the upper diagram, the isochrones are curves along which stars of a certain age are expected to lie, assuming that all stars evolve along the Hayashi track. An isochrone is created by taking stars of every conceivable mass, evolving them forwards to the same age, and plotting all of them on the color–magnitude diagram. Most of the stars in NGCΒ 2264 are already on the main sequence (black line), but a substantial population lies between the isochrones for 3.2Β million and 5Β million years, indicating that the cluster is 3.2–5Β million years old and that a large population of T Tauri stars is still on their respective Hayashi tracks. Similar results have been obtained for NGCΒ 6530, ICΒ 5146, and NGCΒ 6611. [ 1 ] The lower diagram shows Hayashi tracks for various masses, along with T Tauri observations collected from a variety of sources. Note the bold curve to the right, representing a stellar birthline . Even though some Hayashi tracks theoretically extend above the birthline, few stars are above it. In effect, stars are "born" onto the birthline before evolving downwards along their respective Hayashi tracks. The birthline exists because stars form from the overdense cores of giant molecular clouds in an inside-out manner. [ 4 ] That is, a small central region first collapses in on itself while the outer shell is still nearly static. The outer envelope then accretes onto the central protostar. Before the accretion is over, the protostar is hidden from view, and therefore not plotted on the color-magnitude diagram. When the envelope finishes accreting, the star is revealed and appears on the birthline.
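To make "nearly independent of luminosity" concrete, the sketch below evaluates the Hansen–Kawaler–Trimble relation quoted above over two decades of luminosity (Python; ΞΌ β‰ˆ 0.6 is an assumed, illustrative molecular weight for ionized solar-composition gas):

```python
def hayashi_teff(mu, m_over_msun, l_over_lsun):
    """Hansen, Kawaler & Trimble's crude Hayashi-track temperature (K)."""
    return 2600.0 * mu**(13 / 51) * m_over_msun**(7 / 51) * l_over_lsun**(1 / 102)

# One solar mass, luminosities spanning a factor of 100:
for L in (0.1, 1.0, 10.0):
    print(f"L = {L:5.1f} L_sun -> T_eff = {hayashi_teff(0.6, 1.0, L):.0f} K")
# The three temperatures agree to within a few percent, so the track is
# almost vertical; as the authors note, the 2600 K coefficient is too low.
```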
https://en.wikipedia.org/wiki/Hayashi_track
In quantum information , the Hayden–Preskill thought experiment (also known as the Hayden–Preskill protocol ) is a thought experiment that investigates the black hole information paradox by asking how long it takes to decode information thrown into a black hole from its Hawking radiation . [ 1 ] The thought experiment concerning Alice and Bob is as follows: Alice throws a k -qubit quantum state into a black hole that is itself entangled with Bob's quantum computer . Bob collects the Hawking radiation emitted by the black hole and feeds it into his quantum computer, where he applies the appropriate quantum gates to decode Alice's state. Bob needs to collect only slightly more than k qubits of the black hole's Hawking radiation to decode Alice's quantum state. [ 2 ] The black hole can be thought of as a quantum information mirror, because it returns scrambled information almost instantly, with a delay that can be accounted for by the scrambling time and the time it takes for the black hole to radiate the qubits. [ 3 ] This decoding method, known as the Yoshida–Kitaev decoding scheme, can theoretically be applied to a small system thermalized with a large system. This opens up the possibility of testing the Hayden–Preskill thought experiment in real life. [ 4 ] Outlined below are models used to explore the Hayden–Preskill thought experiment. Non-symmetric modes with low energy are called soft, while modes with high energy are called heavy. Using energy conservation and a toy model , it becomes clear that Hawking radiation corresponds to heavy modes classically. Only soft modes correspond to the Hayden–Preskill protocol. The toy model relies on a clear distinction between heavy and soft modes based on thermodynamic properties, energy, and charge. [ 5 ] In order to physically represent the Hayden–Preskill protocol, Dicke models can be used. [ 6 ] Using a system of two Dicke models , it was found that when data is thrown into a black hole, the initial spin information can be read after it has been scrambled into the cavity. In a single system, information scrambling prevents the ability to decode the information; however, if a thermofield double state is used, the scrambling of information allows the initial state information to be read. Therefore, decoding efficiency is at its maximum when scrambling is fastest and the system is most chaotic. [ 6 ] If decoding fidelity is a constant, the black hole will act similarly to a mirror and reflect back any information that falls into it almost immediately. However, in any real experiment, the Hayden–Preskill protocol would result in some information loss. Recall that decoding information from the black hole requires both the early radiation, which will be called B', and the late radiation, which will be called D, to reconstruct the original state A. One source of error emerges from storing the early radiation B': qubits may be randomly lost while being stored. Additionally, the early radiation and the black hole are initially maximally entangled, but decoherence emerges over time. Ultimately, the information loss due to erasure in storage is much more impactful than the decoherence, because information loss from decoherence can be partially recovered with an understanding of entanglement . [ 7 ] The Hayden–Preskill thought experiment implies that information that falls into a black hole can be recovered via the Hawking radiation , which raises the question: does the information that falls into a black hole fall in or radiate out?
One approach to this is the concept of black hole complementarity , which claims that an observer orbiting a black hole observes the information radiating out as Hawking radiation, while an observer that falls into the black hole observes the information falling inward. This does not seem to violate the no-cloning principle of quantum mechanics, since only one or the other can be measured: an observer who falls into a black hole and measures a qubit cannot leave and then measure the Hawking radiation. Black hole complementarity has four basic postulates: (1) the formation and evaporation of a black hole is a unitary process, so the Hawking radiation carries out the information that fell in; (2) outside the event horizon, physics is well described by semiclassical effective field theory; (3) to a distant observer, a black hole appears as a quantum system with a discrete spectrum of states counted by its Bekenstein–Hawking entropy; and (4) a freely falling observer notices nothing unusual when crossing the event horizon. According to Almheiri, Marolf, Polchinski, and Sully, postulates 1, 2, and 4 feature a contradiction. Say we divide the Hawking radiation leaving the black hole into two time frames: one "early," and one "late." Because the Hawking radiation is a pure state based on the quantum wave function of the original mass, the late Hawking radiation must be entangled with the early Hawking radiation. However, black hole complementarity also implies that the outgoing Hawking radiation is entangled with the information inside the black hole. This violates what is known as " monogamy of entanglement ": a quantum system cannot be maximally entangled with two other systems at once. To fix this problem, either postulate 2 or postulate 4 must be false: if postulate 2 is false, then there must be some exotic dynamics extending beyond the event horizon that resolve this conflict; if postulate 4 is false, then the entanglement of the inner and outer information must be broken, leading to the creation of high-energy modes. These high-energy modes create a " firewall " that burns up anything that enters the black hole. [ 8 ]
https://en.wikipedia.org/wiki/Hayden–Preskill_thought_experiment
The Hayes-Wheelwright Matrix , also known as the product-process matrix , is a tool used to analyze the fit between a chosen product positioning and the appropriate manufacturing process . It was developed by, and named for, Robert H. Hayes and Steven C. Wheelwright , who published articles entitled " Link Manufacturing Process and Product Life Cycles " and " The Dynamics of Process-Product Life Cycles " in the Harvard Business Review in 1979. [ 1 ] The first dimension of the matrix, the product lifecycle , is a measure of the maturity of the product or market. It ranges from highly customized products with low volumes, to highly standardized products with high volume. The second dimension, the process lifecycle, is a measure of the maturity of the manufacturing process. It ranges from highly manual processes with high unit costs ( job shop ) to highly automated processes with low unit costs ( continuous flow ). Companies can occupy any position in the matrix. However, according to the framework, they can only be successful if their product lifecycle stage is consistent with their process lifecycle stage. A company's place on the matrix depends on two dimensions – the process structure/process lifecycle and the product structure/product lifecycle. [ 2 ] The process structure/process lifecycle is composed of the process choice ( job shop , batch , assembly line , and continuous flow ) and the process structure (jumbled flow, disconnected line flow, connected line flow and continuous flow). [ 2 ] The product structure/product lifecycle refers to the four stages of the product lifecycle from low volume to high volume and the product structure from low standardization to high standardization: unique product, multiple products, standardized product, and commodity product. [ 3 ] Each process choice on the diagonal of the matrix comprises a different set of characteristics in terms of worker skill level, worker flexibility, and labour intensity. The upper-left modules (project, job shop, batch processes) tend to have higher-skilled workers with a larger range of skills for better flexibility, and are more labor-intensive. It is rare for the upper-left modules to work at full capacity, and they use general-purpose equipment. They usually cater to local and/or niche markets . The lower-right manufacturing processes ( mass production ; assembly line and continuous processes) require only unskilled or semi-skilled workers to monitor and maintain the equipment, as they are far more capital-intensive processes. The production facilities are also interrelated and require specialized machinery unique to the specific product. They often cater to national markets and can be vertically integrated . The matrix highlights the difficult trade-off between efficiency and flexibility of operations, with the upper-left modules favoring flexibility with high-cost production and the lower-right modules favoring efficiency with the ability to spread their large fixed costs over a wider base, reducing cost per unit. [ 2 ] The product-process matrix affects three aspects of the business. Distinctive competence is a characteristic or aspect of the company that gives it a comparative advantage over its competitors, usually categorized by cost/price, quality, flexibility and service/time. The matrix can be used as a framework to identify and analyze a company's distinctive competence to better inform decisions on process alternatives and marketing alternatives.
[ 2 ] The wide range of skilled labor and use of general-purpose equipment allows upper-left processes to have distinctive competence in flexibility in their product/service provided, specifically in unique product designs. [ 2 ] Lower-right processes do not have that aspect of flexibility, since they rely on specialized machinery with unskilled or semi-skilled workers. However, they have better flexibility when it comes to quantity. [ 2 ] Upper-left processes excel in quality when it comes to unique designs based on the customers' specifications or if the product is considered artisan. While upper-left processes cater products to specific customers, lower-right processes can take advantage of consistently producing homogeneous products to eliminate flaws and improve designs over time for greater reliability to the end user. [ 2 ] Upper-left processes can claim distinctive competence through face-to-face interaction and personal attention, while lower-right processes are more time-efficient. [ 2 ] Businesses that use the upper-left processes are likely able to charge higher prices because of their ability to cater to individual customers and to compensate for the skilled labor. [ 2 ] Lower-right processes are more cost-efficient because their large volumes allow them to take advantage of economies of scale . [ 2 ] Firms operating along the diagonal of the matrix are assumed to perform better than those too far from the diagonal, because distance from the diagonal impairs a firm's ability to compete effectively. For example, a commodity produced by a job shop would be economically impractical. [ 2 ] There are niche players that do not operate exactly on the diagonal but near it; for example, Rolls-Royce manufactures automobiles using a job shop process. Management must consider the disadvantages and implications of doing so. [ 2 ] Management can also consider the strategic implications of their position on the matrix compared to their competitors. A firm's position on the matrix can change over time, and the matrix can be used to predict the consequences of future product or process changes. [ 2 ] The nature of a product can be identified using the matrix. Hayes and Wheelwright illustrate this with a specialized manufacturer of printed circuit boards that produced customized products in low volumes using an interrelated assembly-line process, placing the business in the undesirable lower-left corner of the matrix. Knowing this, the company concluded that its real product lay in its design capability rather than in the circuit boards themselves, which placed it nearer the diagonal. [ 2 ] Another diagnostic use of the matrix is to organize individual operating units according to the suitable process choice while maintaining the overall coordination of the manufacturing procedure. Most firms use more than one process for a product. For example, batch processing may be more suitable for individual components because of their nature or because the volume needed is not sufficient for the line process, while the product itself is constructed on an assembly line. Firms may need separate facilities for the parts or products. [ 2 ] Firms can also produce similar products using different process options. Fender Musical Instruments mass-produces electric guitars using the line process while also producing custom guitars using a job shop (Fender Custom Shop). [ 2 ] The Hayes-Wheelwright matrix is a four-stage model; each stage is characterized by the management strategy implemented to exploit the manufacturing potential.
In stage 1, the production process is flexible and high-cost; it then becomes increasingly standardized, mechanized, and automated through the later stages, resulting in an inflexible but cost-efficient process. A company can move between stages. Chase and Hayes (1991) expanded on the model to include service firms. Cruz and Rodriguez (2008) also used the theoretical framework to assess the effectiveness of operations strategy. Bhurchand et al . report that the model is "widely accepted" in the relevant academic literature. [ 4 ] Job shops are semi-custom manufacturing processes with small-to-medium volume. Products are either unique to the order or have inconsistent demand with long gaps between orders. Because each output is different, efficient production is difficult. Each order requires a varying structure, materials, form and possibly processing in accordance with the customer's design and specification, resulting in a jumbled flow with no repetitive pattern. This usually requires a process layout in which the machines are grouped in different areas of the shop according to purpose or function. This manufacturing process also requires highly skilled and experienced labor. Besides manufacturing operations like tool, machine and die manufacturing, it can also apply to service operations such as law offices, medical practices, automobile repair and tailor shops. [ 2 ] Batch processes produce similar items on a repeated basis, often in higher volumes than job shops. Management might accumulate products so they can be processed together. The larger volume and repetition of requirements allow management to take a more effective manufacturing route as they optimize capacity and significantly reduce costs. There is a disconnected line flow or intermittent flow, since the work-in-process moves between different machine groupings in the shop in a jumbled fashion. It is smoother than job shop processing because the higher volume and similarity of items allow the manufacturer to take advantage of the repetition. Printing and machine shops that have contracts for higher volumes of products are examples of the batch process in manufacturing. Examples of service operations could include some offices, some operations in hospitals, university and school classes and food preparation. [ 2 ] Where demand for the product is consistent and large enough, the business can employ processes referred to as mass production, such as the assembly line and continuous manufacturing. [ 2 ] In the assembly line process, operations do not change, with a standard and uninterrupted flow and a homogeneous output. This process is heavily automated with special-purpose equipment. Unlike the previous processes, there is no variation in production. Managers have a larger span of control, and less-skilled workers are needed, because the standardization of the product means individual units do not have to be as closely monitored and controlled, easing routing, scheduling and control. The assembly line process also means machinery is organized according to sequence and is usually connected by an automated conveyor system, forming a connected line flow. This is called a product layout. The set of inputs and outputs is often fixed and consistent with a continuous flow of work. An example of assembly-line manufacturing is automobile manufacturing. Car washes, class registration in universities and many fast food operations are services that employ assembly lines.
[ 2 ] Continuous production involves raw materials undergoing successive operations, such as refining and processing, into a narrow range of extremely standardized products, characterized as commodities, in very high volumes. Continuous manufacturing requires substantial capital investment, so demand for the product must be exceptionally high. The cost of starting or stopping the process can be detrimental to the business. Thus, the processes often run non-stop with minimum downtime. High production levels also minimize the average fixed cost per unit. The process is self-monitoring with a fixed and automated route, which limits labor requirements to monitoring and maintaining the machinery. Industries that use this process include gas, chemicals, electricity, ores, rubber, petroleum, cement, paper, wood, and certain foods like milk, water, wheat, flour, sugar and spirits. [ 2 ] A project is a process choice added by some writers and placed at the extreme upper-left corner of the matrix (for example, movie production). Projects are large-scale unique products. They are unique to the customer and are often too big to move, so the project is the process of choice. [ 2 ] The matrix facilitates broader thinking about organizational competence and competitive advantage by bringing the stages of the product lifecycle and the choice of production process(es) for different products into the strategic planning process. It allows manufacturing managers to be more involved in the planning process so that their decisions can more effectively coincide with those of marketing and of the corporation itself, resulting in more informed predictions about changes in the industry and appropriate strategic responses. [ 2 ] In addition, the matrix can be used to identify the business opportunities available given the company's manufacturing capabilities. It can aid in major decision-making about changes in the production process and guide investment decisions to stay in line with product and process plans. It helps to choose the best process and product structure when entering a new market, as well as the suitable manufacturing facilities. It also helps identify and monitor the progress of important manufacturing objectives at a corporate level. [ 2 ] The matrix does not account for combinations of the product lifecycle and process lifecycle that do not follow the above-mentioned characteristics. "Some 60 per cent of the firms studied did not fall on the diagonal". [ according to whom? ] [ 2 ] Evolving management styles and technology are diminishing some of the inherent trade-offs found on the matrix, resulting in low predictive validity. [ 6 ] Ahmad and Schroeder, however, suggest developing the matrix to include three axes rather than two. Besides the x-axis (product lifecycle stages) and the y-axis (process lifecycle stages), they propose adding a z-axis to represent the company's inclusion of innovative initiatives. [ 2 ] The product variety considered in the matrix is also limited. Koth and Orne (1989) propose that the complexity of products and organizational characteristics, like the extent of vertical integration and the size and geographical scope of the operations, should affect the appropriate process design.
Das and Narasimhan (2001) suggest that advanced manufacturing technology for modular product structures can influence the contingency effect of product variety, increasing output and improving capabilities for job and batch shops in areas conventionally associated with assembly lines and flow lines. [ 6 ] The matrix is static and its dimensions are too simple. The matrix is based on current products but does not account for the dynamic nature of firms' operating environments. Processes should be designed with the evolution of product offerings and projected future product offerings in mind. [ 6 ]
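For readers who think in code, the diagonal-fit logic is easy to capture. The sketch below is a loose Python illustration; the stage orderings follow the matrix described above, while the distance scoring is an assumption of this sketch, not part of Hayes and Wheelwright's formulation.

```python
# Toy encoding of the product-process matrix: both lifecycles are ordered
# stages, and a good fit keeps a firm close to the diagonal.
PRODUCT_STAGES = ["unique product", "multiple products",
                  "standardized product", "commodity product"]
PROCESS_STAGES = ["job shop", "batch", "assembly line", "continuous flow"]

def diagonal_distance(product, process):
    """0 = on the diagonal; larger values indicate a poorer product-process fit."""
    return abs(PRODUCT_STAGES.index(product) - PROCESS_STAGES.index(process))

print(diagonal_distance("commodity product", "continuous flow"))  # 0: good fit
print(diagonal_distance("unique product", "assembly line"))       # 2: poor fit
# A commodity made in a job shop scores 3: economically impractical, as the
# article notes; niche players such as Rolls-Royce sit near, not on, the diagonal.
```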
https://en.wikipedia.org/wiki/Hayes-Wheelwright_matrix
The Hayflick limit , or Hayflick phenomenon , is the number of times a normal somatic , differentiated human cell population will divide before cell division stops. [ 1 ] [ 2 ] The concept of the Hayflick limit was advanced by American anatomist Leonard Hayflick in 1961, [ 3 ] at the Wistar Institute in Philadelphia , Pennsylvania. Hayflick demonstrated that a normal human fetal cell population will divide between 40 and 60 times in cell culture before entering a senescence phase. This finding refuted the contention by Alexis Carrel that normal cells are immortal . Hayflick interpreted his discovery to be aging at the cellular level. The aging of cell populations appears to correlate with the overall physical aging of an organism. [ 3 ] [ 4 ] Macfarlane Burnet coined the name "Hayflick limit" in his book Intrinsic Mutagenesis: A Genetic Approach to Ageing , published in 1974. [ 5 ] Prior to Leonard Hayflick's discovery, it was believed that vertebrate cells had an unlimited potential to replicate. Alexis Carrel , a Nobel Prize -winning surgeon, had stated "that all cells explanted in tissue culture are immortal, and that the lack of continuous cell replication was due to ignorance on how best to cultivate the cells". [ 5 ] He claimed to have cultivated fibroblasts from the hearts of chickens (which typically live 5 to 10 years) and to have kept the culture growing for 34 years. [ 6 ] However, other scientists have been unable to replicate Carrel's results, [ 5 ] and they are suspected to be due to an error in experimental procedure. To provide required nutrients, embryonic stem cells of chickens may have been re-added to the culture daily. This would have easily allowed the cultivation of new, fresh cells in the culture, so there was not an infinite reproduction of the original cells. [ 3 ] It has been speculated that Carrel knew about this error, but he never admitted it. [ 7 ] [ 8 ] Also, it has been theorized [ by whom? ] that the cells Carrel used were young enough to contain pluripotent stem cells , which, if supplied with a supporting telomerase -activation nutrient, would have been capable of staving off replicative senescence, or even possibly reversing it. Cultures not containing telomerase-active pluripotent stem cells would have been populated with telomerase-inactive cells, which would have been subject to the 50 Β± 10 mitosis event limit until cellular senescence occurs as described in Hayflick's findings. [ 4 ] Hayflick first became suspicious of Carrel's claims while working in a lab at the Wistar Institute. Hayflick noticed that one of his cultures of embryonic human fibroblasts had developed an unusual appearance and that cell division had slowed. Initially, he brushed this aside as an anomaly caused by contamination or technical error. However, he later observed other cell cultures exhibiting similar manifestations. Hayflick checked his research notebook and was surprised to find that the atypical cell cultures had all been cultured to approximately their 40th doubling while younger cultures never exhibited the same problems. Furthermore, conditions were similar between the younger and older cultures he observedβ€”same culture medium, culture containers, and technician. This led him to doubt that the manifestations were due to contamination or technical error. [ 9 ] Hayflick next set out to prove that the cessation of normal cell replicative capacity that he observed was not the result of viral contamination, poor culture conditions or some unknown artifact. 
Hayflick teamed with Paul Moorhead for the definitive experiment to eliminate these as causative factors. As a skilled cytogeneticist , Moorhead was able to distinguish between male and female cells in culture. The experiment proceeded as follows: Hayflick mixed equal numbers of normal human male fibroblasts that had divided many times (cells at the 40th population doubling) with female fibroblasts that had divided fewer times (cells at the 15th population doubling). Unmixed cell populations were kept as controls. After 20 doublings of the mixed culture, only female cells remained. Cell division ceased in the unmixed control cultures at the anticipated times; when the male control culture stopped dividing, only female cells remained in the mixed culture. This suggested that technical errors or contaminating viruses were unlikely explanations as to why cell division ceased in the older cells, and proved that unless the virus or artifact could distinguish between male and female cells (which it could not) then the cessation of normal cell replication was governed by an internal counting mechanism. [ 3 ] [ 5 ] [ 9 ] These results disproved Carrel's immortality claims and established the Hayflick limit as a credible biological theory. Unlike Carrel's experiment, Hayflick's have been successfully repeated by other scientists. [ citation needed ] Hayflick describes three phases in the life of normal cultured cells. At the start of his experiment he named the primary culture "phase one". Phase two is defined as the period when cells are proliferating; Hayflick called this the time of "luxuriant growth". After months of doubling the cells eventually reach phase three, a phenomenon he named " senescence ", where cell replication rate slows before halting altogether. [ citation needed ] The Hayflick limit has been found to correlate with the length of the telomeric region at the end of chromosomes. During the process of DNA replication of a chromosome, small segments of DNA within each telomere are unable to be copied and are lost. [ 10 ] This occurs due to the uneven nature of DNA replication, where leading and lagging strands are not replicated symmetrically. [ 11 ] The telomeric region of DNA does not code for any protein; it is simply a repeated code on the end region of linear eukaryotic chromosomes. After many divisions, the telomeres reach a critical length and the cell becomes senescent. It is at this point that a cell has reached its Hayflick limit. [ 12 ] [ 13 ] Hayflick was the first to report that only cancer cells are immortal. This could not have been demonstrated until he had demonstrated that normal cells are mortal. [ 3 ] [ 4 ] Cellular senescence does not occur in most cancer cells due to expression of an enzyme called telomerase . This enzyme extends telomeres, preventing the telomeres of cancer cells from shortening and giving them infinite replicative potential. [ 14 ] A proposed treatment for cancer is the usage of telomerase inhibitors that would prevent the restoration of the telomere, allowing the cell to die like other body cells. [ 15 ] Hayflick suggested that his results in which normal cells have a limited replicative capacity may have significance for understanding human aging at the cellular level. [ 4 ] It has been reported that the limited replicative capability of human fibroblasts observed in cell culture is far greater than the number of replication events experienced by non-stem cells in vivo during a normal postnatal lifespan. 
[ 16 ] In addition, it has been suggested that, contrary to previous arguments, no inverse correlation exists between the replicative capacity of normal human cell strains and the age of the human donor from which the cells were derived. It is now clear that at least some of these variable results are attributable to the mosaicism of cell replication numbers at the different body sites where cells were taken. [ 16 ] Comparisons of different species indicate that cellular replicative capacity may correlate primarily with species body mass, but more likely correlates with species lifespan. [ clarification needed ] Thus, the limited capacity of cells to replicate in culture may be directly relevant to the overall physical aging of an organism. [ 3 ] [ 4 ]
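The telomere-counting mechanism described earlier lends itself to a toy calculation. The following sketch uses round, assumed base-pair figures purely for illustration (Python), showing how a fixed loss per division yields a division limit of the same order as Hayflick's 40–60 doublings:

```python
def divisions_until_senescence(telomere_bp, loss_per_division_bp, critical_bp):
    """Count divisions until the telomere would shrink below the critical length."""
    divisions = 0
    while telomere_bp - loss_per_division_bp >= critical_bp:
        telomere_bp -= loss_per_division_bp  # end-replication loss per division
        divisions += 1
    return divisions

# Illustrative values: ~10 kb starting telomere, ~100 bp lost per division,
# senescence near ~5 kb.
print(divisions_until_senescence(10_000, 100, 5_000))  # -> 50 divisions
```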
https://en.wikipedia.org/wiki/Hayflick_limit
In mathematics, the Haynsworth inertia additivity formula , discovered by Emilie Virginia Haynsworth (1916–1985), concerns the number of positive, negative, and zero eigenvalues of a Hermitian matrix and of block matrices into which it is partitioned . [ 1 ] The inertia of a Hermitian matrix H is defined as the ordered triple In ⁑ ( H ) = ( Ο€ ( H ) , Ξ½ ( H ) , Ξ΄ ( H ) ) {\displaystyle \operatorname {In} (H)=(\pi (H),\nu (H),\delta (H))} whose components are respectively the numbers of positive, negative, and zero eigenvalues of H . Haynsworth considered a partitioned Hermitian matrix H = [ H 11 H 12 H 12 βˆ— H 22 ] {\displaystyle H={\begin{bmatrix}H_{11}&H_{12}\\H_{12}^{*}&H_{22}\end{bmatrix}}} where H 11 is nonsingular and H 12 * is the conjugate transpose of H 12 . The formula states: [ 2 ] [ 3 ] In ⁑ ( H ) = In ⁑ ( H 11 ) + In ⁑ ( H / H 11 ) {\displaystyle \operatorname {In} (H)=\operatorname {In} (H_{11})+\operatorname {In} (H/H_{11})} where H / H 11 is the Schur complement of H 11 in H : H / H 11 = H 22 βˆ’ H 12 βˆ— H 11 βˆ’ 1 H 12 . {\displaystyle H/H_{11}=H_{22}-H_{12}^{*}H_{11}^{-1}H_{12}.} If H 11 is singular , we can still define the generalized Schur complement, using the Moore–Penrose inverse H 11 + {\displaystyle H_{11}^{+}} instead of H 11 βˆ’ 1 {\displaystyle H_{11}^{-1}} . The formula does not hold if H 11 is singular. However, a generalization was proven in 1974 by Carlson, Haynsworth and Markham, [ 4 ] to the effect that Ο€ ( H ) β‰₯ Ο€ ( H 11 ) + Ο€ ( H / H 11 ) {\displaystyle \pi (H)\geq \pi (H_{11})+\pi (H/H_{11})} and Ξ½ ( H ) β‰₯ Ξ½ ( H 11 ) + Ξ½ ( H / H 11 ) {\displaystyle \nu (H)\geq \nu (H_{11})+\nu (H/H_{11})} . Carlson, Haynsworth and Markham also gave necessary and sufficient conditions for equality to hold.
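Because the statement involves only finite-dimensional linear algebra, it can be checked numerically in a few lines. A minimal sketch (Python with NumPy; the random symmetric matrix is an arbitrary illustration, and the leading block is assumed nonsingular, which holds almost surely here):

```python
import numpy as np

def inertia(H, tol=1e-10):
    """Return (#positive, #negative, #zero) eigenvalues of a Hermitian matrix."""
    w = np.linalg.eigvalsh(H)
    return (int(np.sum(w > tol)), int(np.sum(w < -tol)),
            int(np.sum(np.abs(w) <= tol)))

rng = np.random.default_rng(0)
A = rng.standard_normal((5, 5))
H = (A + A.T) / 2                       # a random real symmetric (Hermitian) matrix
H11, H12, H22 = H[:2, :2], H[:2, 2:], H[2:, 2:]
schur = H22 - H12.T @ np.linalg.inv(H11) @ H12   # H/H11, since H11 is nonsingular

print(inertia(H))                                   # In(H)
print(tuple(np.add(inertia(H11), inertia(schur))))  # In(H11) + In(H/H11): equal
```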
https://en.wikipedia.org/wiki/Haynsworth_inertia_additivity_formula
Haystack is a project at the Massachusetts Institute of Technology to research and develop several applications around personal information management and the Semantic Web . The most notable of those applications is the Haystack client, a research personal information manager (PIM) and one of the first to be based on semantic desktop technologies. [ 1 ] The Haystack client is published as open source software under the BSD license . Similar to the Chandler PIM, the Haystack system unifies the handling of different types of unstructured information . This information has a common representation in RDF that is presented to users in a configurable, human-readable way. Haystack was developed in the RDF -aware dynamic language Adenine, which was created for the project. [ 2 ] The language was named after the nucleobase adenine and is a cross-platform scripting language. It is perhaps the earliest example of a homoiconic general-graph (rather than list/tree) programming language. [ 3 ] A substantial characteristic of Adenine is its native support for the Resource Description Framework (RDF). The language constructs of Adenine are derived from Python and Lisp . Adenine is written in RDF and thus can also be represented and written with RDF-based syntaxes such as Notation3 (N3).
https://en.wikipedia.org/wiki/Haystack_(MIT_project)
The Hazard Communication Standard (HCS) requires employers to disclose toxic and hazardous substances in workplaces. It is related to the Worker Protection Standard . Specifically, it requires unrestricted employee access to the Material Safety Data Sheet (MSDS), the Globally Harmonized System of Classification and Labeling of Chemicals (GHS) or equivalent, and appropriate training to understand health and safety risks. This requirement exists to ensure that workers understand the risks posed by chemicals and the measures and methods for dealing with each hazard safely. In addition, the classification of each chemical and mixture is also required. [ 1 ] Before the GHS, the MSDS was primarily used in the United States, and it was often translated differently in other countries. Increased international trade created conflict and confusion between the different methods of classifying and labeling the same chemical from one country to the next. Therefore, the GHS was created to aid in a universal process of classifying and labeling all substances. Because no such system is ever completely perfect, the GHS is updated about every two years. The ninth revision is the most current, released in December 2021. [ 2 ] The European Union (EU) began to adopt the GHS into its standards in 2009, aligning the EU Classification, Labelling and Packaging (CLP) regulation with the GHS before putting it into full force. The United States followed, adopting the GHS in 2012; when referenced for enforcement, it is now known as OSHA's HCS 2012. Canada adopted the GHS in 2015, changing the federal Hazardous Products Act (HPA) and making a new regulation. The Hazardous Products Regulations (HPR) were created under the HPA to embody the GHS as the new standard. [ 3 ] As the world continues to trade and to understand more of the effects of chemicals, the HCS will change; however, the GHS has already made communication regarding hazards much more straightforward and is well adopted. Therefore, the GHS is expected to remain part of the HCS in the future as a common standard used to provide the same chemical information to the end user. Workplace safety in the USA began long before Dr. Alice Hamilton in Chicago, [ citation needed ] who began working for the state of Illinois in 1910 to deal with workplace safety. [ 4 ] The Occupational Safety and Health Administration was established in 1970 to standardize safety for nearly all workers in the United States, and hazard communication for toxic substance exposure was included during the 1980s. The Globally Harmonized System of Classification and Labeling of Chemicals (GHS) is currently being pursued to standardize workplace hazard protection internationally. [ 5 ] The GHS has been adopted as the hazard communication standard in a number of countries. [ 6 ] * The countries covered by the EU/ European Economic Area (EEA): Austria, Belgium, Bulgaria, Croatia, Cyprus, Czech Republic, Denmark, Estonia, Finland, France, Germany, Greece, Hungary, Ireland, Italy, Latvia, Lithuania, Luxembourg, Malta, Poland, Portugal, Romania, Slovakia, Slovenia, Spain, Sweden, Netherlands and the United Kingdom. OSHA's Hazard Communication Standard (HAZCOM) was first adopted in 1983 in the United States with limited scope (48 FR 53280; November 25, 1983). In 1987, the scope was expanded to cover all industries where employees are potentially exposed to hazardous chemicals (52 FR 31852; August 24, 1987).
The standard is managed by the Occupational Safety and Health Administration or, in states that have an approved plan , by the state. [ 7 ] The standard is identified in 29 C.F.R. 1910.1200 . [ 8 ] The summary is as follows. "This occupational safety and health standard is intended to address comprehensively the issue of classifying the potential hazards of chemicals, and communicating information concerning hazards and appropriate protective measures to employees, and to preempt any legislative or regulatory enactments of a state, or political subdivision of a state, pertaining to this subject. Classifying the potential hazards of chemicals and communicating information concerning hazards and appropriate protective measures to employees, may include, for example, but is not limited to, provisions for: developing and maintaining a written hazard communication program for the workplace, including lists of hazardous chemicals present; labeling of containers of chemicals in the workplace, as well as of containers of chemicals being shipped to other workplaces; preparation and distribution of safety data sheets to employees and downstream employers; and development and implementation of employee training programs regarding hazards of chemicals and protective measures. Under section 18 of the Act, no state or political subdivision of a state may adopt or enforce any requirement relating to the issue addressed by this Federal standard, except pursuant to a Federally-approved state plan." The United States Department of Defense does not manage hazards in accordance with public law. The purpose is identified in 29 C.F.R. 1910.1200 , and is defined as follows: "The purpose of this section is to ensure that the hazards of all chemicals produced or imported are classified, and that information concerning the classified hazards is transmitted to employers and employees. The requirements of this section are intended to be consistent with the provisions of the United Nations Globally Harmonized System of Classification and Labelling of Chemicals (GHS), Revision 3. The transmittal of information is to be accomplished by means of comprehensive hazard communication programs, which are to include container labeling and other forms of warning, safety data sheets and employee training." Employee access to hazard information is one of the prerequisites required for access to competent medical diagnosis and treatment. Environmental illnesses share characteristics with common diseases. Cyanide exposure symptoms include weakness, headache, nausea, confusion, dizziness, seizures, cardiac arrest, and unconsciousness. [ 9 ] [ 10 ] Influenza and heart disease involve the same symptoms. Failure to obtain proper disclosure is likely to lead to improper or ineffective medical diagnosis and treatment. The Hazard Communication Standard requires the Safety Data Sheet to be made readily available for workplace exposure in the United States, because this information is required by physicians so they can do their job. [ 11 ] Physicians also require epidemiological data maintained by local government agencies responsible for maintaining pesticide application data for use outside buildings (environmental exposure). [ 12 ] This is part of the Right to know .
https://en.wikipedia.org/wiki/Hazard_Communication_Standard
A hazard analysis is one of many methods that may be used to assess risk. At its core, the process entails describing a system object (such as a person or machine) that intends to conduct some activity. During the performance of that activity, an adverse event (referred to as a "factor") may be encountered that could cause or contribute to an occurrence (mishap, incident, accident). Finally, that occurrence will result in some outcome that may be measured in terms of the degree of loss or harm. This outcome may be measured on a continuous scale, such as an amount of monetary loss, or the outcomes may be categorized into various levels of severity.

The first step in hazard analysis is to identify the hazards. If an automobile is an object performing an activity such as driving over a bridge, and that bridge may become icy, then an icy bridge might be identified as a hazard. If this hazard is encountered, it could cause or contribute to the occurrence of an automobile accident, and the outcome of that occurrence could range in severity from a minor fender-bender to a fatal accident. [ citation needed ]

A hazard analysis may be used to inform decisions regarding the mitigation of risk. For instance, the probability of encountering an icy bridge may be reduced by adding salt so that the ice melts. Alternatively, risk mitigation strategies may target the occurrence: putting tire chains on a vehicle does nothing to change the probability of a bridge becoming icy, but if an icy bridge is encountered, the chains improve traction and reduce the chance of sliding into another vehicle. Finally, risk may be managed by influencing the severity of outcomes. Seatbelts and airbags do nothing to prevent bridges from becoming icy, nor do they prevent accidents caused by that ice; in the event of an accident, however, these devices lower the probability of the accident resulting in fatal or serious injuries. [ citation needed ]

IEEE STD-1228-1994, Software Safety Plans, prescribes industry best practices for conducting software safety hazard analyses to help ensure that safety requirements and attributes are defined and specified for inclusion in software that commands, controls or monitors critical functions. When software is involved in a system, the development and design assurance of that software is often governed by DO-178C. The severity of consequence identified by the hazard analysis establishes the criticality level of the software. Software criticality levels range from A to E, corresponding to severities from Catastrophic to No Safety Effect. Higher levels of rigor are required for level A and B software, and the corresponding functional tasks and work products in the system safety domain are used as objective evidence of meeting safety criteria and requirements. [ citation needed ]

In 2009, [ 1 ] a leading-edge commercial standard was promulgated based on decades of proven system safety processes in DoD and NASA. ANSI/GEIA-STD-0010-2009 (Standard Best Practices for System Safety Program Development and Execution) is a demilitarized commercial best practice that uses proven holistic, comprehensive and tailored approaches for hazard prevention, elimination and control. It is centered on the hazard analysis and functional-based safety process.

When used as part of an aviation hazard analysis, "Severity" describes the outcome (the degree of loss or harm) that results from an occurrence (an aircraft accident or incident).
When categorized, severity categories must be mutually exclusive, such that every occurrence has one, and only one, severity category associated with it. The definitions must also be collectively exhaustive, such that all occurrences fall into one of the categories. In the US, the FAA includes five severity categories as part of its safety risk management policy. [ 2 ]

When used as part of an aviation hazard analysis, a "Likelihood" is a specific probability. It is the joint probability of a hazard occurring, of that hazard causing or contributing to an aircraft accident or incident, and of the resulting degree of loss or harm falling within one of the defined severity categories. Thus, if there are five severity categories, each hazard will have five likelihoods. In the US, the FAA provides a continuous probability scale for measuring likelihood, but also includes seven likelihood categories as part of its safety risk management policy. [ 2 ]

FAA (September 29, 2023). "Safety Risk Management Policy (FAA Order 8040.4C)" (PDF). Retrieved May 6, 2024.
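Because the likelihood defined above is a joint probability, it can be computed directly once its three factors are estimated. The Python sketch below is a minimal illustration of that bookkeeping using invented numbers for the icy-bridge example from this entry; it is not FAA policy, and the function name, category labels, and inputs are assumptions.

```python
# A minimal, illustrative sketch of per-severity likelihoods for one hazard.
# Numbers and names are hypothetical, not drawn from FAA Order 8040.4C.

SEVERITIES = ["Minimal", "Minor", "Major", "Hazardous", "Catastrophic"]

def hazard_likelihoods(p_hazard, p_occurrence_given_hazard, severity_distribution):
    """Return one likelihood per severity category for a single hazard.

    severity_distribution holds P(outcome in each severity bin | occurrence);
    the values must sum to 1 so the categories are mutually exclusive and
    collectively exhaustive, as the text above requires.
    """
    assert abs(sum(severity_distribution) - 1.0) < 1e-9
    return [p_hazard * p_occurrence_given_hazard * p for p in severity_distribution]

# Hypothetical icy-bridge example: the bridge ices over with probability 0.05
# per crossing, an accident follows icing with probability 0.01, and most
# accidents that do occur are low-severity.
likelihoods = hazard_likelihoods(0.05, 0.01, [0.70, 0.20, 0.07, 0.02, 0.01])
for severity, likelihood in zip(SEVERITIES, likelihoods):
    print(f"{severity:13s} {likelihood:.2e}")
```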
https://en.wikipedia.org/wiki/Hazard_analysis
Hazard elimination is a hazard control strategy based on completely removing a material or process causing a hazard. Elimination is the most effective of the five members of the hierarchy of hazard controls in protecting workers, and where possible it should be implemented before all other control methods. [ 1 ] [ 2 ] [ 3 ] Many jurisdictions require that an employer eliminate hazards if it is possible before considering other types of hazard control. [ 4 ] [ 5 ] Elimination is most effective early in the design process, when it may be inexpensive and simple to implement. It is more difficult to implement for an existing process, when major changes in equipment and procedures may be required. [ 2 ] Elimination can fail as a strategy if the hazardous process or material is reintroduced at a later stage in the design or production phases. [ 6 ]

The complete elimination of hazards is a major component of the philosophy of Prevention through Design, which promotes the practice of eliminating hazards at the earliest design stages of a project. [ 7 ] Complete elimination of a hazard is often the most difficult control to achieve, but addressing it at the start of a project allows designers and planners to make large changes much more easily, without the need to retrofit or redo work.

Understanding the five main hazard areas is a major part of assessing risks on a jobsite: materials, environmental hazards, equipment hazards, people hazards, and system hazards. Materials can pose hazards of inhalation, absorption, and ingestion. Equipment hazards relate to taking proper precautions with machinery and tools. People can create hazards by becoming distracted, taking shortcuts, using machinery while impaired, or through general fatigue. System hazards concern ensuring that employees are properly trained for their jobs and that proper safety precautions are in place. [ 8 ]

Removing the use of a hazardous chemical is an example of elimination. [ 1 ] Some substances are difficult or impossible to eliminate because they have unique properties necessary to the process, but it may be possible to substitute less hazardous versions of the substance instead. [ 9 ] Elimination also applies to equipment. For example, noisy equipment can be removed from a room used for other purposes, [ 10 ] or an unnecessary blade can be removed from a machine. [ 5 ] Prompt repair of damaged equipment eliminates hazards stemming from its malfunction. [ 10 ] Elimination also applies to processes. For example, the risk of falls can be removed by eliminating work in high areas, by using extended tools from the ground instead of climbing, [ 11 ] or by moving the piece to be worked on to ground level. [ 1 ] The need for workers to enter a hazardous area such as a grain elevator can be eliminated by installing equipment that performs the task automatically. [ 12 ] Eliminating an inspection that requires opening a package containing a hazardous material reduces the inhalation hazard to the inspector. [ 9 ]

Understanding the risks of a workplace environment is one of the most important ways to remain safe on a worksite, and hazard elimination is the safest way to avoid serious injuries or fatalities. [ 13 ]
Assessing the risks of a workplace environment should be done at the design or development stage of the project, because taking an entire risk out of a project can change its whole trajectory. [ 12 ] For example, removing hazardous materials before any work happens in a workplace environment is the ideal case, because the hazard is completely removed from the situation before anyone has to work around it. Working backwards to fix the problem after work has begun can create challenges; for example, construction may start on a site before anyone realizes that hazardous material needs to be removed, forcing costly rework. [ 14 ]

Deciding whether hazard elimination is the right solution for a project may require weighing multiple factors, such as whether elimination is appropriate for the severity of the hazard and whether the approach is effective, reliable, and lasting (a toy sketch of this "most effective feasible control" rule appears below). Determining whether the elimination of the hazard can be done in a timely and economically beneficial manner is one of the most important parts of the decision, because that is the motivation behind many projects. [ 15 ]

Eliminating hazards around highways is a major issue due to the level of traffic. The Highway Safety Programs and Projects address major traffic concerns and give special priority to the safety of everyone on the road. Removing potential safety issues and addressing safety concerns is a costly undertaking: the average price of hazard elimination is around $400,000 to $1,000,000. [ 16 ]
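As noted above, elimination is preferred and other controls are considered only when it is not feasible. The following toy Python sketch makes that priority rule concrete; the hierarchy ordering follows the standard five-member hierarchy of controls named in this entry, while the function and its inputs are invented for illustration.

```python
# A toy sketch (not from any standard) of the "elimination first" rule:
# walk the hierarchy of controls from most to least effective and pick the
# first control that is feasible for the hazard in question.

HIERARCHY = ["elimination", "substitution", "engineering controls",
             "administrative controls", "personal protective equipment"]

def select_control(feasible: set[str]) -> str:
    """Return the most effective feasible control for a hazard."""
    for control in HIERARCHY:
        if control in feasible:
            return control
    raise ValueError("no feasible control identified; reassess the hazard")

# Example: the hazardous process cannot be removed or substituted, so an
# engineering control is the best remaining option.
print(select_control({"engineering controls", "personal protective equipment"}))
```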
https://en.wikipedia.org/wiki/Hazard_elimination
Hazard substitution is a hazard control strategy in which a material or process is replaced with another that is less hazardous. Substitution is the second most effective of the five members of the hierarchy of hazard controls in protecting workers, after elimination. [ 1 ] [ 2 ] [ 3 ] Substitution and elimination are most effective early in the design process, when they may be inexpensive and simple to implement, while for an existing process they may require major changes in equipment and procedures. [ 1 ] The concept of prevention through design emphasizes integrating the more effective control methods, such as elimination and substitution, early in the design phase. [ 4 ]

Hazard substitutions can involve not only changing one chemical for another, but also using the same chemical in a less hazardous form. Substitutions can also be made to processes and equipment. In making a substitution, the hazards of the new material should be considered and monitored so that a new hazard is not unwittingly introduced, [ 3 ] causing "regrettable substitutions". [ 5 ] Substitution can also fail as a strategy if the hazardous process or material is reintroduced at a later stage in the design or production phases, [ 6 ] or if cost or quality concerns cause a substitution not to be adopted. [ 7 ]

A common substitution is to replace a toxic chemical with a less toxic one. [ 8 ] Examples include replacing the solvent benzene, a carcinogen, with toluene; switching from organic solvents to water-based detergents; and replacing paints containing lead with those containing non-leaded pigments. [ 3 ] Dry cleaning can avoid the use of toxic perchloroethylene by using petroleum-based solvents, supercritical carbon dioxide, or wet cleaning techniques. [ 9 ] Chemical substitutions are an example of green chemistry. [ 5 ]

Chemicals can also be substituted with a different form of the same chemical. In general, inhalation exposure to dusty powders can be reduced by using a slurry or suspension of particles in a liquid solvent instead of a dry powder, [ 10 ] or by substituting larger particles such as pellets or ingots. [ 3 ] Some chemicals, such as nanomaterials, often cannot be eliminated or substituted with conventional materials because their unique properties are necessary to the desired product or process. [ 10 ] However, it may be possible to choose properties of the nanoparticle, such as size, shape, functionalization, surface charge, solubility, agglomeration, and aggregation state, to improve their toxicological properties while retaining the desired functionality. [ 11 ]

In 2014, the U.S. National Academies released a recommended decision-making framework for chemical substitutions. The framework maintained the health-related metrics used by previous frameworks, including carcinogenicity, mutagenicity, reproductive and developmental toxicity, endocrine disruption, acute and chronic toxicity, dermal and eye irritation, dermal and respiratory sensitization, and ecotoxicity. It added an emphasis on assessing actual exposure rather than only the inherent hazards of the chemical itself, decision rules for resolving trade-offs among hazards, and consideration of novel data sources on hazards, such as simulations. The assessment framework has 13 steps, many of which are unique, such as dedicated steps for scoping and problem formulation, assessing physicochemical properties, broader life-cycle assessment, and research and innovation.
The framework also provides guidance on tools and sources for scientific information. [ 12 ]

Hazards to workers can be reduced by limiting or replacing procedures that may aerosolize toxic materials contained in an item. Examples include limiting agitation procedures such as sonication, or using a lower-temperature process in chemical reactors to minimize the release of materials in exhaust. [ 13 ] Substituting a water-jet cutting process for mechanical sawing of a solid item also creates less dust. [ 14 ] Equipment can also be substituted, for example by using a self-retracting lifeline instead of a fixed rope for fall protection, [ 15 ] or by packaging materials in smaller containers to prevent lifting injuries. [ 16 ] Health effects from noise can be controlled by purchasing or renting less noisy equipment. This topic has been the subject of several Buy Quiet campaigns, and the NIOSH Power Tools Database contains data on sound power, pressure, and vibration levels of many power tools. [ 17 ] [ 18 ]

A regrettable substitution occurs when a material or process believed to be less hazardous turns out to have an unexpected hazard. One well-known example occurred when dichloromethane was phased out as a brake cleaner due to its environmental effects, but its replacement n-hexane was subsequently found to be neurotoxic. [ 5 ] [ 12 ] Often the substances being replaced have well-studied hazards while the alternatives have little or no toxicity data, making alternatives assessments difficult. [ 5 ] Chemicals with no toxicity data are often treated as preferable simply because they do not prompt concerns such as a California Proposition 65 warning. [ 19 ] Another type of regrettable substitution involves shifting the burden of a hazard to another party. For example, the potent neurotoxin acrylamide can be replaced with the safer N-vinyl formamide, but the synthesis of the latter requires the use of highly toxic hydrogen cyanide, increasing the hazards to workers in the manufacturing firm. Considering the effects over the entire product lifecycle as part of a life-cycle assessment when performing an alternatives assessment can mitigate this. [ 12 ]
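An alternatives assessment of the kind described above compares candidate substitutes across hazard endpoints and treats missing data as a red flag rather than a pass, guarding against regrettable substitutions. The Python sketch below is a deliberately simplified, hypothetical illustration: the endpoint list is a small subset of the metrics named above, and the scoring scheme (lower is better, None meaning no data) is invented, not part of the 2014 framework.

```python
# A hypothetical, simplified alternatives-screening sketch. Real assessments
# (e.g., the 13-step 2014 framework above) are far more involved.

ENDPOINTS = ["carcinogenicity", "acute_toxicity", "dermal_irritation"]

def screen(candidates: dict[str, dict[str, int | None]]) -> None:
    for name, scores in candidates.items():
        missing = [e for e in ENDPOINTS if scores.get(e) is None]
        if missing:
            # Missing data is a warning sign, not a green light: a chemical
            # with no toxicity data may still be a regrettable substitution.
            print(f"{name}: insufficient data for {missing}; flag for testing")
        else:
            total = sum(scores[e] for e in ENDPOINTS)
            print(f"{name}: total hazard score {total} (lower is better)")

screen({
    "current solvent": {"carcinogenicity": 4, "acute_toxicity": 2, "dermal_irritation": 2},
    "candidate A":     {"carcinogenicity": 1, "acute_toxicity": 2, "dermal_irritation": 1},
    "candidate B":     {"carcinogenicity": None, "acute_toxicity": 1, "dermal_irritation": 1},
})
```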
https://en.wikipedia.org/wiki/Hazard_substitution
Hazard symbols are universally recognized symbols designed to alert individuals to the presence of hazardous or dangerous materials, locations, or conditions. These include risks associated with electromagnetic fields, electric currents, toxic chemicals, explosive substances, and radioactive materials. Their design and use are often governed by laws and standards organizations to ensure clarity and consistency. Hazard symbols may vary in color, background, borders, or accompanying text to indicate specific dangers and levels of risk, such as toxicity classes. These symbols provide a quick, universally understandable visual warning that transcends language barriers, making them more effective than text-based warnings in many situations.

Tape with yellow and black diagonal stripes is commonly used as a generic hazard warning. This can be in the form of barricade tape, or of a self-adhesive tape for marking floor areas and the like. In some regions (for instance the UK), [ 1 ] yellow tape is buried a certain distance above buried electrical cables to warn future groundworkers of the hazard.

On roadside warning signs, an exclamation mark is often used to draw attention to a generic warning of danger, hazards, and the unexpected. In Europe and elsewhere in the world (except North America and Australia), this type of sign is used if there is no more specific sign to denote a particular hazard. [ 2 ] [ 3 ] When used for traffic signs, it is accompanied by a supplementary sign describing the hazard, usually mounted under the exclamation mark. This symbol has also been widely adopted for generic use in many other contexts not associated with road traffic. It often appears on hazardous equipment, in instruction manuals to draw attention to a precaution, on tram and train blind-spot warning stickers, and on natural disaster (earthquake, tsunami, hurricane, volcanic eruption) preparedness posters and brochures, as an alternative when a more specific warning symbol is not available.

The skull-and-crossbones symbol, consisting of a human skull and two bones crossed together behind the skull, is today generally used as a warning of danger of death, particularly in regard to poisonous substances. The symbol, or some variation thereof, specifically with the bones (or swords) below the skull, was also featured on the Jolly Roger, the traditional flag of European and American seagoing pirates. It is also part of the Canadian WHMIS and home hazard symbols placed on containers to warn that the contents are poisonous. In the United States, due to concerns that the skull-and-crossbones symbol's association with pirates might encourage children to play with toxic materials, the Mr. Yuk symbol is also used to denote poison. The skull and crossbones has likewise been adopted for generic use in contexts not associated with poisonous materials; for example, it is used on event infographics to denote the number of deaths caused by natural disasters (e.g., earthquakes) or armed conflicts.

The international radiation symbol is a trefoil around a small central circle, representing radiation from an atom. It first appeared in 1946 at the University of California, Berkeley Radiation Laboratory. [ 4 ] At the time, it was rendered as magenta and was set on a blue background. The shade of magenta used (Martin Senour Roman Violet No. 2225) was chosen because it was expensive and less likely to be used on other signs. [ 5 ] However, blue backgrounds came to be used extensively on other signs.
Blue was typically used on information signs, and the color tended to fade with weathering, so the background of the radiation hazard sign was changed. [ 6 ] The original version used in the United States is magenta against a yellow background. It is drawn with a central circle of radius R and three blades with an internal radius of 1.5R and an external radius of 5R, separated from each other by 60° (a drawing sketch based on these proportions appears near the end of this entry). The trefoil is black in the international version, which is also used in the United States. [ 7 ]

The symbol was adopted as a standard in the US by ANSI in 1969. [ 6 ] [ 8 ] It was first documented as an international symbol in 1963, in International Organization for Standardization (ISO) recommendation R.361. [ 9 ] In 1974, after approval by national standards bodies, the symbol became an international standard as ISO 361, Basic ionizing radiation symbol. [ 10 ] The standard specifies the shape, proportions, application and restrictions on the use of the symbol. It may be used to signify the actual or potential presence of ionizing radiation. It is not used for non-ionizing electromagnetic waves or sound waves. The standard does not specify the radiation levels at which it is to be used. [ 10 ]

The sign is commonly referred to as a radioactivity warning sign, but it is actually a warning sign for ionizing radiation. Ionizing radiation is a much broader category than radioactivity alone, as many non-radioactive sources also emit potentially dangerous levels of ionizing radiation; these include x-ray apparatus, radiotherapy linear accelerators, and particle accelerators. Non-ionizing radiation can also reach potentially dangerous levels, but it has its own warning signs, distinct from the trefoil ionizing radiation symbol. [ 11 ] The sign is not to be confused with the fallout shelter identification sign introduced by the Office of Civil Defense in 1961. That sign was originally intended to be the same as the radiation hazard symbol, but it was changed to a slightly different symbol because shelters are a place of safety, not of hazard. [ 6 ] [ 12 ]

On February 15, 2007, two groups, the International Atomic Energy Agency (IAEA) and the International Organization for Standardization (ISO), jointly announced the adoption of a new ionizing radiation warning symbol to supplement the traditional trefoil symbol. The new symbol, to be used on sealed radiation sources, is aimed at alerting anyone, anywhere, to the danger of being close to a strong source of ionizing radiation. [ 13 ] It depicts, on a red background, a black trefoil with waves of radiation streaming from it, along with a black skull and crossbones and a running figure with an arrow pointing away from the scene. The radiating trefoil suggests the presence of radiation, while the red background and the skull and crossbones warn of danger. The figure running away from the scene is meant to suggest taking action to avoid the labeled material. The new symbol is not intended to be generally visible, but rather to appear on internal components of devices that house radiation sources, so that anybody who attempts to disassemble such devices will see an explicit warning not to proceed any further. [ 14 ] [ 15 ]

The biohazard symbol is used in the labeling of biological materials that carry a significant health risk, such as viral and bacteriological samples, infected dressings and used hypodermic needles (see sharps waste). [ 16 ]
The biohazard symbol was developed in 1966 by Charles Baldwin, an environmental-health engineer working for the Dow Chemical Company on their containment products. [ 17 ] According to Baldwin, who was assigned by Dow to its development: "We wanted something that was memorable but meaningless, so we could educate people as to what it means." In an article in Science in 1967, the symbol was presented as the new standard for all biological hazards ("biohazards"). The article explained that over 40 symbols were drawn up by Dow's artists, and all of the symbols investigated had to meet a number of criteria: "(i) striking in form in order to draw immediate attention; (ii) unique and unambiguous, in order not to be confused with symbols used for other purposes; (iii) quickly recognizable and easily recalled; (iv) easily stenciled; (v) symmetrical, in order to appear identical from all angles of approach; and (vi) acceptable to groups of varying ethnic backgrounds." The chosen symbol scored best in nationwide testing for uniqueness and memorability. [ 16 ]

All parts of the biohazard sign can be drawn with a compass and straightedge. The basic outline of the symbol is a plain trefoil: three circles overlapping each other equally, as in a triple Venn diagram, with the overlapping parts erased. The diameter of the overlapping part is equal to half the radius of the three circles. Three inner circles are then drawn with two-thirds of the radius of the original circles, so that each is tangent to the three outside overlapping circles. A small circle in the center has a diameter half the radius of the three inner circles, and arcs are erased at 90°, 210°, and 330°. The arcs of the inner circles and the small central circle are connected by lines. Finally, the lower ring is drawn at a set distance from the perimeter of the equilateral triangle formed by the centers of the three intersecting circles; an outer circle of the ring is drawn and then enclosed with arcs drawn from the centers of the inner circles with a shorter radius than the inner circles. [ 7 ]

A chemical hazard symbol is a pictogram applied to containers and storage areas of dangerous chemical compounds to indicate the specific hazard, and thus the required precautions. There are several systems of labels, depending on the purpose, such as labels on containers for transportation, on containers for end use, or on a vehicle during transportation. The United Nations has designed GHS hazard pictograms and GHS hazard statements to internationally harmonize chemical hazard warnings under the Globally Harmonized System of Classification and Labelling of Chemicals. These symbols have gradually replaced nation- and region-specific systems such as the European Union's Directive 67/548/EEC symbols [ 24 ] and Canada's Workplace Hazardous Materials Information System. [ 25 ] The GHS has also been adopted in the United States for materials being sold and shipped by manufacturers, distributors and importers. [ 26 ] The USA previously did not mandate a specific system, instead allowing any system provided it met certain requirements. [ 27 ] The European Union aligned its regulations with the GHS standards in 2008 with the adoption of the CLP Regulation, replacing its existing Directive 67/548/EEC symbols during the mid-2010s and requiring the use of GHS symbols after 1 June 2017. [ 28 ] [ 29 ]
The Workplace Hazardous Materials Information System, or WHMIS, is Canada's national workplace hazard communication standard; it was first introduced in 1988 and included eight chemical hazard symbols. [ 30 ] The system was brought into alignment with the GHS in 2015, with a gradual phase-in of GHS symbols and label designs through 15 December 2025. [ 25 ] WHMIS does deviate from the GHS by retaining the former WHMIS symbol for Class 3, Division 3, biohazardous infectious materials, as the GHS lacks a biological hazard symbol. [ 25 ]

The US-based National Fire Protection Association (NFPA) has a standard, NFPA 704, that uses a diamond with four colored sections, each with a number indicating severity from 0 (no hazard) to 4 (severe hazard). [ 31 ] The system was developed in the early 1960s as a means of warning firefighters of possible dangers posed by storage tanks filled with chemicals. The red section denotes flammability. The blue section denotes health risks. Yellow represents reactivity (tendency to explode). The white section denotes special hazard information not properly covered by the other categories, such as water reactivity, oxidizers, and asphyxiant gases. [ 31 ]

A large number of warning symbols with non-standard designs are in use around the world. Some warning symbols have been redesigned to be more comprehensible to children, such as the Mr. Ouch design (depicting an electricity danger as a snarling, spiky creature) and the Mr. Yuk design (a green frowny face sticking its tongue out, to represent poison) in the United States.
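The trefoil proportions given earlier for the radiation symbol (a central circle of radius R, blades spanning 1.5R to 5R, each 60° wide and 60° apart) can be verified by drawing them. The following Python/matplotlib sketch renders that basic geometry; it is an illustration of the stated proportions, not an official rendering of ISO 361, and the blade orientation chosen (one blade pointing up) is an assumption.

```python
# A minimal sketch of the ionizing-radiation trefoil geometry described above:
# central disc of radius R; three annular blades from 1.5R to 5R, each 60
# degrees wide, with 60-degree gaps between them.
import matplotlib.pyplot as plt
from matplotlib.patches import Circle, Wedge

R = 1.0
fig, ax = plt.subplots(figsize=(4, 4))
ax.add_patch(Circle((0, 0), R, color="black"))           # central disc
for center_deg in (90, 210, 330):                        # blade centerlines
    ax.add_patch(Wedge((0, 0), 5 * R,                    # outer radius 5R
                       center_deg - 30, center_deg + 30, # 60-degree blade
                       width=3.5 * R,                    # inner radius 1.5R
                       color="black"))
ax.set_xlim(-5.5, 5.5)
ax.set_ylim(-5.5, 5.5)
ax.set_aspect("equal")
ax.axis("off")
plt.show()
```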
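The NFPA 704 ratings described above also lend themselves to a small validated record: three numeric sections rated 0 to 4 plus a free-form special-notice section. The Python sketch below is a hypothetical data model; the class name, field names, and the example special-notice string are illustrative assumptions, not text from the NFPA standard.

```python
# A hypothetical data model for an NFPA 704 "fire diamond" rating, following
# the description above. Field names are invented for illustration.
from dataclasses import dataclass

@dataclass(frozen=True)
class NFPA704:
    health: int        # blue section, 0 (no hazard) to 4 (severe hazard)
    flammability: int  # red section, 0 to 4
    instability: int   # yellow section (reactivity), 0 to 4
    special: str = ""  # white section, e.g. "OX" for an oxidizer

    def __post_init__(self):
        for name in ("health", "flammability", "instability"):
            value = getattr(self, name)
            if not 0 <= value <= 4:
                raise ValueError(f"{name} rating must be 0-4, got {value}")

# Example: a flammable, moderately toxic, mildly unstable oxidizing material.
print(NFPA704(health=2, flammability=3, instability=1, special="OX"))
```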
https://en.wikipedia.org/wiki/Hazard_symbol