In mathematics and computer science, in the field of coding theory, the Hamming bound is a limit on the parameters of an arbitrary block code: it is also known as the sphere-packing bound or the volume bound from an interpretation in terms of packing balls in the Hamming metric into the space of all possible words. It gives an important limitation on the efficiency with which any error-correcting code can utilize the space in which its code words are embedded. A code that attains the Hamming bound is said to be a perfect code.

An original message and an encoded version are both composed in an alphabet of q letters. Each code word contains n letters. The original message (of length m) is shorter than n letters. The message is converted into an n-letter codeword by an encoding algorithm, transmitted over a noisy channel, and finally decoded by the receiver. The decoding process interprets a garbled codeword, referred to as simply a word, as the valid codeword "nearest" the n-letter received string. Mathematically, there are exactly q^m possible messages of length m, and each message can be regarded as a vector of length m. The encoding scheme converts an m-dimensional vector into an n-dimensional vector. Exactly q^m valid codewords are possible, but any one of q^n words can be received, because the noisy channel might distort one or more of the n letters while a codeword is transmitted.

An alphabet set $\mathcal{A}_q$ is a set of symbols with q elements. The set of strings of length n on the alphabet set $\mathcal{A}_q$ is denoted $\mathcal{A}_q^n$. (There are q^n distinct strings in this set.) A q-ary block code of length n is a subset of the strings of $\mathcal{A}_q^n$, where the alphabet set $\mathcal{A}_q$ is any alphabet set having q elements. (The choice of alphabet set makes no difference to the result, provided the alphabet is of size q.)

Let $A_q(n,d)$ denote the maximum possible size of a q-ary block code C of length n and minimum Hamming distance d between elements of the block code (necessarily positive for q^n > 1). Then the Hamming bound is

$$A_q(n,d) \le \frac{q^n}{\sum_{k=0}^{t} \binom{n}{k}(q-1)^k},$$

where

$$t = \left\lfloor \frac{d-1}{2} \right\rfloor.$$

It follows from the definition of d that if at most t errors are made during transmission of a codeword then minimum distance decoding will decode it correctly (i.e., it decodes the received word as the codeword that was sent). Thus the code is said to be capable of correcting t errors.

For each codeword $c \in C$, consider a ball of fixed radius t around c. Every pair of these balls (Hamming balls) is non-intersecting by the t-error-correcting property. Let m be the number of words in each ball (in other words, the volume of the ball). A word that is in such a ball can deviate in at most t components from those of the ball's centre, which is a codeword.
The number of such words is then obtained by choosing up to t of the n components of a codeword to deviate to one of (q − 1) possible other values (recall, the code is q-ary: it takes values in $\mathcal{A}_q^n$), so the volume of each ball is

$$m = \sum_{k=0}^{t} \binom{n}{k}(q-1)^k.$$

$A_q(n,d)$ is the (maximum) total number of codewords in C, and so, by the definition of t, the greatest number of balls with no two balls having a word in common. Taking the union of the words in these balls centered at codewords results in a set of words, each counted precisely once, that is a subset of $\mathcal{A}_q^n$ (where $|\mathcal{A}_q^n| = q^n$), and so:

$$A_q(n,d) \cdot \sum_{k=0}^{t} \binom{n}{k}(q-1)^k \le q^n.$$

Whence:

$$A_q(n,d) \le \frac{q^n}{\sum_{k=0}^{t} \binom{n}{k}(q-1)^k}.$$

For an $A_q(n,d)$ code C (a subset of $\mathcal{A}_q^n$), the covering radius of C is the smallest value of r such that every element of $\mathcal{A}_q^n$ is contained in at least one ball of radius r centered at some codeword of C. The packing radius of C is the largest value of s such that the set of balls of radius s centered at the codewords of C are mutually disjoint. From the proof of the Hamming bound, it can be seen that for $t = \left\lfloor \tfrac{1}{2}(d-1) \right\rfloor$ we have s = t (the balls of radius t are pairwise disjoint, while balls of radius t + 1 around two codewords at the minimum distance d overlap) and r ≥ t (the disjoint balls of radius t can cover all of $\mathcal{A}_q^n$ only if the bound is attained). Therefore s ≤ r, and if equality holds then s = r = t. The case of equality means that the Hamming bound is attained.

Codes that attain the Hamming bound are called perfect codes. Examples include codes that have only one codeword, and codes that are the whole of $\mathcal{A}_q^n$. Another example is given by the repeat codes with q = 2, where each symbol of the message is repeated an odd fixed number of times to obtain a codeword. All of these examples are often called the trivial perfect codes. In 1973, Tietäväinen proved [ 1 ] that any non-trivial perfect code over a prime-power alphabet has the parameters of a Hamming code or a Golay code. A perfect code may be interpreted as one in which the balls of Hamming radius t centered on codewords exactly fill out the space (t is the covering radius = packing radius). A quasi-perfect code is one in which the balls of Hamming radius t centered on codewords are disjoint and the balls of radius t + 1 cover the space, possibly with some overlaps. [ 2 ] Another way to say this is that a code is quasi-perfect if its covering radius is one greater than its packing radius. [ 3 ]
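As a quick numerical check of the bound derived above, here is a minimal Python sketch (written for this text; the function name hamming_bound is an illustrative choice, not a standard library routine) that evaluates the right-hand side and confirms that the binary [7,4] Hamming code and the binary [23,12] Golay code, both mentioned above as perfect codes, meet it exactly.

```python
from math import comb

def hamming_bound(q: int, n: int, d: int) -> float:
    """Upper bound on A_q(n, d): q^n divided by the volume of a Hamming ball of radius t."""
    t = (d - 1) // 2
    ball_volume = sum(comb(n, k) * (q - 1) ** k for k in range(t + 1))
    return q ** n / ball_volume

# The binary [7,4] Hamming code has 2^4 = 16 codewords and minimum distance 3,
# so it attains the bound (a perfect code):
print(hamming_bound(2, 7, 3))   # 16.0
# The binary [23,12] Golay code (d = 7) is also perfect:
print(hamming_bound(2, 23, 7))  # 4096.0
```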
https://en.wikipedia.org/wiki/Hamming_bound
In computer science and telecommunications, Hamming codes are a family of linear error-correcting codes. Hamming codes can detect one-bit and two-bit errors, or correct one-bit errors without detection of uncorrected errors. By contrast, the simple parity code cannot correct errors, and can detect only an odd number of bits in error. Hamming codes are perfect codes, that is, they achieve the highest possible rate for codes with their block length and minimum distance of three. [ 1 ] Richard W. Hamming invented Hamming codes in 1950 as a way of automatically correcting errors introduced by punched card readers. In his original paper, Hamming elaborated his general idea, but specifically focused on the Hamming(7,4) code which adds three parity bits to four bits of data. [ 2 ]

In mathematical terms, Hamming codes are a class of binary linear code. For each integer r ≥ 2 there is a code with block length n = 2^r − 1 and message length k = 2^r − r − 1. Hence the rate of Hamming codes is R = k/n = 1 − r/(2^r − 1), which is the highest possible for codes with minimum distance of three (i.e., the minimal number of bit changes needed to go from any code word to any other code word is three) and block length 2^r − 1. The parity-check matrix of a Hamming code is constructed by listing all columns of length r that are non-zero, which means that the dual code of the Hamming code is the shortened Hadamard code, also known as a Simplex code. The parity-check matrix has the property that any two columns are pairwise linearly independent.

Due to the limited redundancy that Hamming codes add to the data, they can only detect and correct errors when the error rate is low. This is the case in computer memory (usually RAM), where bit errors are extremely rare and Hamming codes are widely used, and a RAM with this correction system is an ECC RAM (ECC memory). In this context, an extended Hamming code having one extra parity bit is often used. Extended Hamming codes achieve a Hamming distance of four, which allows the decoder to distinguish between when at most one one-bit error occurs and when any two-bit errors occur. In this sense, extended Hamming codes are single-error correcting and double-error detecting, abbreviated as SECDED.

Richard Hamming, the inventor of Hamming codes, worked at Bell Labs in the late 1940s on the Bell Model V computer, an electromechanical relay-based machine with cycle times in seconds. Input was fed in on punched paper tape, seven-eighths of an inch wide, which had up to six holes per row. During weekdays, when errors in the relays were detected, the machine would stop and flash lights so that the operators could correct the problem. During after-hours periods and on weekends, when there were no operators, the machine simply moved on to the next job. Hamming worked on weekends, and grew increasingly frustrated with having to restart his programs from scratch due to detected errors. In a taped interview, Hamming said, "And so I said, 'Damn it, if the machine can detect an error, why can't it locate the position of the error and correct it?'". [ 3 ] Over the next few years, he worked on the problem of error-correction, developing an increasingly powerful array of algorithms. In 1950, he published what is now known as Hamming code, which remains in use today in applications such as ECC memory. A number of simple error-detecting codes were used before Hamming codes, but none were as effective as Hamming codes at the same overhead of space.
Parity adds a single bit that indicates whether the number of ones (bit-positions with values of one) in the preceding data was even or odd. If an odd number of bits is changed in transmission, the message will change parity and the error can be detected at this point; however, the bit that changed may have been the parity bit itself. The most common convention is that a parity value of one indicates that there is an odd number of ones in the data, and a parity value of zero indicates that there is an even number of ones. If the number of bits changed is even, the check bit will be valid and the error will not be detected. Moreover, parity does not indicate which bit contained the error, even when it can detect it. The data must be discarded entirely and re-transmitted from scratch. On a noisy transmission medium, a successful transmission could take a long time or may never occur. However, while the quality of parity checking is poor, since it uses only a single bit, this method results in the least overhead.

A two-out-of-five code is an encoding scheme which uses five bits consisting of exactly three 0s and two 1s. This provides $\binom{5}{3} = 10$ possible combinations, enough to represent the digits 0–9. This scheme can detect all single-bit errors, all odd-numbered bit errors and some even-numbered bit errors (for example the flipping of both 1-bits). However it still cannot correct any of these errors.

Another code in use at the time repeated every data bit multiple times in order to ensure that it was sent correctly. For instance, if the data bit to be sent is a 1, an n = 3 repetition code will send 111. If the three bits received are not identical, an error occurred during transmission. If the channel is clean enough, most of the time only one bit will change in each triple. Therefore, 001, 010, and 100 each correspond to a 0 bit, while 110, 101, and 011 correspond to a 1 bit, with the majority value ('0' or '1') indicating what the data bit should be. A code with this ability to reconstruct the original message in the presence of errors is known as an error-correcting code. This triple repetition code is a Hamming code with m = 2, since there are two parity bits, and 2^2 − 2 − 1 = 1 data bit.

Such codes cannot correctly repair all errors, however. In our example, if the channel flips two bits and the receiver gets 001, the system will detect the error, but conclude that the original bit is 0, which is incorrect. If we increase the size of the bit string to four, we can detect all two-bit errors but cannot correct them (the quantity of parity bits is even); at five bits, we can both detect and correct all two-bit errors, but not all three-bit errors. Moreover, increasing the size of the parity bit string is inefficient, reducing throughput to one third in our original case, and the efficiency drops drastically as we increase the number of times each bit is duplicated in order to detect and correct more errors.

If more error-correcting bits are included with a message, and if those bits can be arranged such that different incorrect bits produce different error results, then bad bits could be identified. In a seven-bit message, there are seven possible single-bit errors, so three error control bits could potentially specify not only that an error occurred but also which bit caused the error. Hamming studied the existing coding schemes, including two-of-five, and generalized their concepts.
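To make the repetition scheme described above concrete, here is a minimal Python sketch (illustrative code written for this text, with assumed function names) of an n = 3 repetition code with majority-vote decoding; it corrects any single flipped bit within a triple but, as noted above, miscorrects a double error.

```python
def encode_repetition(bits, n=3):
    """Repeat each data bit n times."""
    return [b for bit in bits for b in [bit] * n]

def decode_repetition(received, n=3):
    """Majority vote within each group of n received bits."""
    groups = [received[i:i + n] for i in range(0, len(received), n)]
    return [1 if sum(g) > n // 2 else 0 for g in groups]

codeword = encode_repetition([1, 0])         # [1, 1, 1, 0, 0, 0]
corrupted = [1, 0, 1, 0, 0, 0]               # one bit flipped in the first triple
print(decode_repetition(corrupted))          # [1, 0] -- single error corrected
print(decode_repetition([0, 0, 1, 0, 0, 0])) # [0, 0] -- if 111 was sent and two bits flipped, the error is miscorrected
```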
To start with, he developed a nomenclature to describe the system, including the number of data bits and error-correction bits in a block. For instance, parity includes a single bit for any data word, so assuming ASCII words with seven bits, Hamming described this as an (8,7) code, with eight bits in total, of which seven are data. The repetition example would be (3,1), following the same logic. The code rate is the second number divided by the first; for our repetition example, 1/3. Hamming also noticed the problems with flipping two or more bits, and described this as the "distance" (it is now called the Hamming distance, after him). Parity has a distance of 2, so one bit flip can be detected but not corrected, and any two bit flips will be invisible. The (3,1) repetition has a distance of 3, as three bits need to be flipped in the same triple to obtain another code word with no visible errors. It can correct one-bit errors or it can detect (but not correct) two-bit errors. A (4,1) repetition (each bit is repeated four times) has a distance of 4, so flipping three bits can be detected, but not corrected. When three bits flip in the same group there can be situations where attempting to correct will produce the wrong code word. In general, a code with distance k can detect but not correct k − 1 errors.

Hamming was interested in two problems at once: increasing the distance as much as possible, while at the same time increasing the code rate as much as possible. During the 1940s he developed several encoding schemes that were dramatic improvements on existing codes. The key to all of his systems was to have the parity bits overlap, such that they managed to check each other as well as the data.

The following general algorithm generates a single-error correcting (SEC) code for any number of bits. The main idea is to choose the error-correcting bits such that the index-XOR (the XOR of all the bit positions containing a 1) is 0. We use positions 1, 10, 100, etc. (in binary) as the error-correcting bits, which guarantees it is possible to set the error-correcting bits so that the index-XOR of the whole message is 0. If the receiver receives a string with index-XOR 0, they can conclude there were no corruptions, and otherwise, the index-XOR indicates the index of the corrupted bit. An algorithm can be deduced from the following description: number the bit positions starting from 1; the positions that are powers of two (1, 2, 4, 8, ...) hold parity bits and every other position holds a data bit; each parity bit is then set so that the index-XOR of all positions holding a 1 is zero. If a byte of data to be encoded is 10011010, then the data word (using _ to represent the parity bits) would be __1_001_1010, and the code word is 011100101010. The choice of the parity, even or odd, is irrelevant but the same choice must be used for both encoding and decoding.

This general rule can be shown visually as a table of parity-bit coverage: the parity bit in position 2^i checks exactly those positions whose binary index has bit i set. Only the first 20 encoded bits (5 parity, 15 data) need be shown, but the pattern continues indefinitely. The key thing about Hamming codes that can be seen from visual inspection is that any given bit is included in a unique set of parity bits.

To check for errors, check all of the parity bits. The pattern of errors, called the error syndrome, identifies the bit in error. If all parity bits are correct, there is no error. Otherwise, the sum of the positions of the erroneous parity bits identifies the erroneous bit. For example, if the parity bits in positions 1, 2 and 8 indicate an error, then bit 1+2+8=11 is in error. If only one parity bit indicates an error, the parity bit itself is in error. With m parity bits, bits from 1 up to 2^m − 1 can be covered.
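The index-XOR construction just described can be sketched in a few lines of Python. This is an illustration written for this text (the function names are assumptions, not a published API): it places data bits in the non-power-of-two positions, sets the parity bits at positions 1, 2, 4, 8, ... so that the XOR of the positions of all 1-bits is zero, and reads a nonzero syndrome as the position of a single corrupted bit. It reproduces the example above, encoding the byte 10011010 as 011100101010.

```python
from functools import reduce

def hamming_encode(data_bits):
    """Place data in non-power-of-two positions (1-indexed), then set parity bits
    at positions 1, 2, 4, ... so the XOR of the positions of all 1-bits is 0."""
    r = 0
    while (1 << r) < r + len(data_bits) + 1:   # number of parity bits needed
        r += 1
    n = r + len(data_bits)
    code = [0] * (n + 1)                       # index 0 unused; positions 1..n
    it = iter(data_bits)
    for pos in range(1, n + 1):
        if pos & (pos - 1):                    # not a power of two -> data position
            code[pos] = next(it)
    syndrome = reduce(lambda a, p: a ^ p, (p for p in range(1, n + 1) if code[p]), 0)
    for i in range(r):                         # set parity bits to cancel the syndrome
        code[1 << i] = (syndrome >> i) & 1
    return code[1:]

def hamming_syndrome(codeword):
    """XOR of the 1-based positions of all 1-bits: 0 means no detected error,
    otherwise it is the position of the single corrupted bit."""
    return reduce(lambda a, p: a ^ p, (i + 1 for i, b in enumerate(codeword) if b), 0)

word = hamming_encode([1, 0, 0, 1, 1, 0, 1, 0])
print(''.join(map(str, word)))    # 011100101010, as in the example above
word[10] ^= 1                     # flip the bit at position 11 (1-indexed)
print(hamming_syndrome(word))     # 11 -- the corrupted position
```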
After discounting the parity bits, 2^m − m − 1 bits remain for use as data. As m varies, we get all the possible Hamming codes: the (3,1), (7,4), (15,11), (31,26), (63,57), ... codes, whose rate approaches 1 as m grows.

Hamming codes have a minimum distance of 3, which means that the decoder can detect and correct a single error, but it cannot distinguish a double bit error of some codeword from a single bit error of a different codeword. Thus, some double-bit errors will be incorrectly decoded as if they were single bit errors and therefore go undetected, unless no correction is attempted. To remedy this shortcoming, Hamming codes can be extended by an extra parity bit. This way, it is possible to increase the minimum distance of the Hamming code to 4, which allows the decoder to distinguish between single bit errors and two-bit errors. Thus the decoder can detect and correct a single error and at the same time detect (but not correct) a double error. If the decoder does not attempt to correct errors, it can reliably detect triple bit errors. If the decoder does correct errors, some triple errors will be mistaken for single errors and "corrected" to the wrong value. Error correction is therefore a trade-off between certainty (the ability to reliably detect triple bit errors) and resiliency (the ability to keep functioning in the face of single bit errors).

This extended Hamming code was popular in computer memory systems, starting with IBM 7030 Stretch in 1961, [ 4 ] where it is known as SECDED (or SEC-DED, abbreviated from single error correction, double error detection). [ 5 ] Server computers in the 21st century, while typically keeping the SECDED level of protection, no longer use Hamming's method, relying instead on designs with longer codewords (128 to 256 bits of data) and modified balanced parity-check trees. [ 4 ] The (72,64) Hamming code is still popular in some hardware designs, including Xilinx FPGA families. [ 4 ]

In 1950, Hamming introduced the [7,4] Hamming code. It encodes four data bits into seven bits by adding three parity bits. As explained earlier, it can either detect and correct single-bit errors or it can detect (but not correct) both single and double-bit errors. With the addition of an overall parity bit, it becomes the [8,4] extended Hamming code and can both detect and correct single-bit errors and detect (but not correct) double-bit errors.

The matrix $\mathbf{G} := (\, I_k \mid -A^{\text{T}} \,)$ is called a (canonical) generator matrix of a linear (n, k) code, and $\mathbf{H} := (\, A \mid I_{n-k} \,)$ is called a parity-check matrix. This is the construction of G and H in standard (or systematic) form. Regardless of form, G and H for linear block codes must satisfy $\mathbf{H}\,\mathbf{G}^{\text{T}} = \mathbf{0}$, an all-zeros matrix. [ 6 ]

The Hamming code parameters satisfy [7, 4, 3] = [n, k, d] = [2^m − 1, 2^m − 1 − m, 3] with m = 3. The parity-check matrix H of a Hamming code is constructed by listing all columns of length m that are pair-wise independent. Thus H is a matrix whose left side consists of all the nonzero columns of length m that are not unit vectors (the order of these columns does not matter), and whose right-hand side is just the (n − k)-identity matrix. So G can be obtained from H by taking the transpose of the left-hand side of H and placing it to the right of the k × k identity matrix on the left-hand side of G.
The code generator matrix $\mathbf{G}$ and the parity-check matrix $\mathbf{H}$ are:

$$\mathbf{G} := \begin{pmatrix}1&0&0&0&1&1&0\\0&1&0&0&1&0&1\\0&0&1&0&0&1&1\\0&0&0&1&1&1&1\end{pmatrix}_{4,7}$$

and

$$\mathbf{H} := \begin{pmatrix}1&1&0&1&1&0&0\\1&0&1&1&0&1&0\\0&1&1&1&0&0&1\end{pmatrix}_{3,7}.$$

Finally, these matrices can be mutated into equivalent non-systematic codes by column permutations and elementary row operations. [ 6 ]

From the above matrix we have 2^k = 2^4 = 16 codewords. Let $\vec{a}$ be a row vector of binary data bits, $\vec{a} = [a_1, a_2, a_3, a_4]$, $a_i \in \{0,1\}$. The codeword $\vec{x}$ for any of the 16 possible data vectors $\vec{a}$ is given by the standard matrix product $\vec{x} = \vec{a}G$, where the summing operation is done modulo 2. For example, let $\vec{a} = [1, 0, 1, 1]$. Using the generator matrix G from above, we have (after applying modulo 2 to the sum)

$$\vec{x} = \vec{a}G = \begin{pmatrix}1&0&1&1\end{pmatrix}\begin{pmatrix}1&0&0&0&1&1&0\\0&1&0&0&1&0&1\\0&0&1&0&0&1&1\\0&0&0&1&1&1&1\end{pmatrix} = \begin{pmatrix}1&0&1&1&2&3&2\end{pmatrix} \equiv \begin{pmatrix}1&0&1&1&0&1&0\end{pmatrix} \pmod 2.$$

The [7,4] Hamming code can easily be extended to an [8,4] code by adding an extra parity bit on top of the (7,4) encoded word (see Hamming(7,4)). This can be summed up with revised generator and parity-check matrices for the [8,4] code. Note that the revised H is not in standard form. To obtain G, elementary row operations can be used to obtain an equivalent matrix to H in systematic form: for example, the first row in this matrix is the sum of the second and third rows of H in non-systematic form. Using the systematic construction for Hamming codes from above, the matrix A is apparent, and the systematic form of G can then be written down. The non-systematic form of G can be row reduced (using elementary row operations) to match this matrix. The addition of the fourth row effectively computes the sum of all the codeword bits (data and parity) as the fourth parity bit.

For example, 1011 is encoded (using the non-systematic form of G at the start of this section) into 01100110, where the digits in positions 3, 5, 6 and 7 are the data bits, the digits in positions 1, 2 and 4 are the parity bits from the [7,4] Hamming code, and the final digit is the parity bit added by the [8,4] code; this last bit makes the parity of the [7,4] codeword even. Finally, it can be shown that the minimum distance has increased from 3, in the [7,4] code, to 4 in the [8,4] code. Therefore, the code can be defined as the [8,4] Hamming code.

To decode the [8,4] Hamming code, first check the parity bit. If the parity bit indicates an error, single error correction (the [7,4] Hamming code) will indicate the error location, with "no error" indicating the parity bit. If the parity bit is correct, then single error correction will indicate the (bitwise) exclusive-or of two error locations. If the locations are equal ("no error") then a double bit error either has not occurred, or has cancelled itself out. Otherwise, a double bit error has occurred.
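To make the matrix description concrete, the following Python sketch (an illustration for this text, assuming NumPy is available; not the article's own code) encodes with the systematic generator matrix G above, computes the syndrome H·xᵀ mod 2, and corrects a single-bit error by matching the syndrome against the columns of H.

```python
import numpy as np

# Systematic [7,4] Hamming code over GF(2): G = [I_4 | A^T], H = [A | I_3].
G = np.array([[1,0,0,0,1,1,0],
              [0,1,0,0,1,0,1],
              [0,0,1,0,0,1,1],
              [0,0,0,1,1,1,1]])
H = np.array([[1,1,0,1,1,0,0],
              [1,0,1,1,0,1,0],
              [0,1,1,1,0,0,1]])

def encode(a):
    """Codeword x = a G (mod 2) for a length-4 data vector a."""
    return np.mod(np.array(a) @ G, 2)

def decode(x):
    """Correct at most one flipped bit using the syndrome s = H x^T (mod 2)."""
    s = np.mod(H @ x, 2)
    if s.any():                                   # nonzero syndrome: locate the error
        for col in range(H.shape[1]):
            if np.array_equal(H[:, col], s):
                x = x.copy()
                x[col] ^= 1                       # flip the bit whose column matches
                break
    return x[:4]                                  # systematic form: data is the first 4 bits

x = encode([1, 0, 1, 1])
print(x)                  # [1 0 1 1 0 1 0], as in the worked example above
x[5] ^= 1                 # introduce a single-bit error
print(decode(x))          # [1 0 1 1] -- the original data is recovered
```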
https://en.wikipedia.org/wiki/Hamming_code
The Hamming scheme, named after Richard Hamming, is also known as the hyper-cubic association scheme, and it is the most important example for coding theory. [ 1 ] [ 2 ] [ 3 ] In this scheme $X = \mathcal{F}^n$, the set of binary vectors of length n, and two vectors $x, y \in \mathcal{F}^n$ are i-th associates if they are Hamming distance i apart.

Recall that an association scheme is visualized as a complete graph with labeled edges. The graph has v vertices, one for each point of X, and the edge joining vertices x and y is labeled i if x and y are i-th associates. Each edge has a unique label, and the number of triangles with a fixed base labeled k having the other edges labeled i and j is a constant $c_{ijk}$, depending on i, j, k but not on the choice of the base. In particular, each vertex is incident with exactly $c_{ii0} = v_i$ edges labeled i; $v_i$ is the valency of the relation $R_i$. The $c_{ijk}$ in a Hamming scheme are given by

$$c_{ijk} = \binom{k}{\tfrac{i-j+k}{2}}\binom{n-k}{\tfrac{i+j-k}{2}},$$

where the value is taken to be zero if i + j + k is odd or if either lower index falls outside its range. Here, $v = |X| = 2^n$ and $v_i = \binom{n}{i}$. The matrices in the Bose–Mesner algebra are $2^n \times 2^n$ matrices, with rows and columns labeled by vectors $x \in \mathcal{F}^n$. In particular the (x, y)-th entry of $D_k$ is 1 if and only if $d_H(x,y) = k$.
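For small n the intersection numbers given above can be verified by brute force. The Python sketch below (illustrative, with assumed function names) counts, for binary vectors x and y at Hamming distance k, how many vectors z lie at distance i from x and distance j from y, and compares the count with the closed form.

```python
from itertools import product
from math import comb

def c(i, j, k, n):
    """Closed form for the intersection numbers of the binary Hamming scheme H(n, 2)."""
    if (i + j + k) % 2:
        return 0
    a, b = (i - j + k) // 2, (i + j - k) // 2
    if not (0 <= a <= k and 0 <= b <= n - k):
        return 0
    return comb(k, a) * comb(n - k, b)

n, i, j, k = 6, 3, 2, 3
x = (0,) * n                        # by symmetry we may fix x = 00...0
y = (1,) * k + (0,) * (n - k)       # and take y at distance k from x
count = sum(1 for z in product((0, 1), repeat=n)
            if sum(z) == i                               # d(x, z) = i
            and sum(a != b for a, b in zip(y, z)) == j)  # d(y, z) = j
print(count, c(i, j, k, n))         # both are 9
```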
https://en.wikipedia.org/wiki/Hamming_scheme
The Hamming weight of a string is the number of symbols that are different from the zero-symbol of the alphabet used. It is thus equivalent to the Hamming distance from the all-zero string of the same length. For the most typical case, a given set of bits , this is the number of bits set to 1, or the digit sum of the binary representation of a given number and the ℓ ₁ norm of a bit vector. In this binary case, it is also called the population count , [ 1 ] popcount , sideways sum , [ 2 ] or bit summation . [ 3 ] The Hamming weight is named after the American mathematician Richard Hamming , although he did not originate the notion. [ 5 ] The Hamming weight of binary numbers was already used in 1899 by James W. L. Glaisher to give a formula for the number of odd binomial coefficients in a single row of Pascal's triangle . [ 6 ] Irving S. Reed introduced a concept, equivalent to Hamming weight in the binary case, in 1954. [ 7 ] Hamming weight is used in several disciplines including information theory , coding theory , and cryptography . Examples of applications of the Hamming weight include: The population count of a bitstring is often needed in cryptography and other applications. The Hamming distance of two words A and B can be calculated as the Hamming weight of A xor B . [ 1 ] The problem of how to implement it efficiently has been widely studied. A single operation for the calculation, or parallel operations on bit vectors are available on some processors . For processors lacking those features, the best solutions known are based on adding counts in a tree pattern. For example, to count the number of 1 bits in the 16-bit binary number a = 0110 1100 1011 1010, these operations can be done: Here, the operations are as in C programming language , so X >> Y means to shift X right by Y bits, X & Y means the bitwise AND of X and Y, and + is ordinary addition. The best algorithms known for this problem are based on the concept illustrated above and are given here: [ 1 ] The above implementations have the best worst-case behavior of any known algorithm. However, when a value is expected to have few nonzero bits, it may instead be more efficient to use algorithms that count these bits one at a time. As Wegner described in 1960, [ 14 ] the bitwise AND of x with x − 1 differs from x only in zeroing out the least significant nonzero bit: subtracting 1 changes the rightmost string of 0s to 1s, and changes the rightmost 1 to a 0. If x originally had n bits that were 1, then after only n iterations of this operation, x will be reduced to zero. The following implementation is based on this principle. If greater memory usage is allowed, we can calculate the Hamming weight faster than the above methods. With unlimited memory, we could simply create a large lookup table of the Hamming weight of every 64 bit integer. If we can store a lookup table of the hamming function of every 16 bit integer, we can do the following to compute the Hamming weight of every 32 bit integer. A recursive algorithm is given in Donovan & Kernighan [ 15 ] Muła et al. [ 16 ] have shown that a vectorized version of popcount64b can run faster than dedicated instructions (e.g., popcnt on x64 processors). In error-correcting coding , the minimum Hamming weight, commonly referred to as the minimum weight w min of a code is the weight of the lowest-weight non-zero code word. The weight w of a code word is the number of 1s in the word. For example, the word 11001010 has a weight of 4. 
In a linear block code the minimum weight is also the minimum Hamming distance ( d min ) and defines the error correction capability of the code. If w min = n , then d min = n and the code will correct up to d min /2 errors. [ 17 ] Some C compilers provide intrinsic functions that provide bit counting facilities. For example, GCC (since version 3.4 in April 2004) includes a builtin function __builtin_popcount that will use a processor instruction if available or an efficient library implementation otherwise. [ 18 ] LLVM-GCC has included this function since version 1.5 in June 2005. [ 19 ] In the C++ Standard Library , the bit-array data structure bitset has a count() method that counts the number of bits that are set. In C++20 , a new header <bit> was added, containing functions std::popcount and std::has_single_bit , taking arguments of unsigned integer types. In Java, the growable bit-array data structure BitSet has a BitSet.cardinality() method that counts the number of bits that are set. In addition, there are Integer.bitCount(int) and Long.bitCount(long) functions to count bits in primitive 32-bit and 64-bit integers, respectively. Also, the BigInteger arbitrary-precision integer class also has a BigInteger.bitCount() method that counts bits. In Python , the int type has a bit_count() method to count the number of bits set. This functionality was introduced in Python 3.10, released in October 2021. [ 20 ] In Common Lisp , the function logcount , given a non-negative integer, returns the number of 1 bits. (For negative integers it returns the number of 0 bits in 2's complement notation.) In either case the integer can be a BIGNUM. Starting in GHC 7.4, the Haskell base package has a popCount function available on all types that are instances of the Bits class (available from the Data.Bits module). [ 21 ] MySQL version of SQL language provides BIT_COUNT() as a standard function. [ 22 ] Fortran 2008 has the standard, intrinsic, elemental function popcnt returning the number of nonzero bits within an integer (or integer array). [ 23 ] Some programmable scientific pocket calculators feature special commands to calculate the number of set bits, e.g. #B on the HP-16C [ 3 ] [ 24 ] and WP 43S , [ 25 ] [ 26 ] #BITS [ 27 ] [ 28 ] or BITSUM [ 29 ] [ 30 ] on HP-16C emulators, and nBITS on the WP 34S . [ 31 ] [ 32 ] FreePascal implements popcnt since version 3.0. [ 33 ]
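The counting strategies described earlier in this article, the tree (divide-and-conquer) pattern and Wegner's trick of clearing the lowest set bit, can be sketched in a few lines. The Python below is an illustration written for this summary, not code from the article; it mirrors the C-style shift-and-mask operations mentioned in the text and, for comparison, ends with the built-in int.bit_count() method noted above for Python 3.10+.

```python
def popcount16_tree(x: int) -> int:
    """Tree-pattern ("SWAR") count for a 16-bit value: sum adjacent 1-bit
    fields, then 2-bit fields, then 4-bit fields, then the two bytes."""
    x = (x & 0x5555) + ((x >> 1) & 0x5555)
    x = (x & 0x3333) + ((x >> 2) & 0x3333)
    x = (x & 0x0F0F) + ((x >> 4) & 0x0F0F)
    return (x & 0x00FF) + (x >> 8)

def popcount_wegner(x: int) -> int:
    """Wegner's method: x & (x - 1) clears the lowest set bit, so the loop
    runs once per 1-bit -- efficient when few bits are set."""
    count = 0
    while x:
        x &= x - 1
        count += 1
    return count

a = 0b0110_1100_1011_1010                       # the 16-bit example from the text
print(popcount16_tree(a), popcount_wegner(a))   # 9 9
print(a.bit_count())                            # 9, built-in method (Python 3.10+)
```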
https://en.wikipedia.org/wiki/Hamming_weight
Hammond's postulate (or alternatively the Hammond–Leffler postulate ), is a hypothesis in physical organic chemistry which describes the geometric structure of the transition state in an organic chemical reaction . [ 1 ] First proposed by George Hammond in 1955, the postulate states that: [ 2 ] If two states, as, for example, a transition state and an unstable intermediate, occur consecutively during a reaction process and have nearly the same energy content, their interconversion will involve only a small reorganization of the molecular structures. Therefore, the geometric structure of a state can be predicted by comparing its energy to the species neighboring it along the reaction coordinate . For example, in an exothermic reaction the transition state is closer in energy to the reactants than to the products. Therefore, the transition state will be more geometrically similar to the reactants than to the products. In contrast, however, in an endothermic reaction the transition state is closer in energy to the products than to the reactants. So, according to Hammond’s postulate the structure of the transition state would resemble the products more than the reactants. [ 3 ] This type of comparison is especially useful because most transition states cannot be characterized experimentally. [ 4 ] Hammond's postulate also helps to explain and rationalize the Bell–Evans–Polanyi principle . Namely, this principle describes the experimental observation that the rate of a reaction , and therefore its activation energy , is affected by the enthalpy of that reaction. Hammond's postulate explains this observation by describing how varying the enthalpy of a reaction would also change the structure of the transition state. In turn, this change in geometric structure would alter the energy of the transition state, and therefore the activation energy and reaction rate as well. [ 5 ] The postulate has also been used to predict the shape of reaction coordinate diagrams. For example, electrophilic aromatic substitution involves a distinct intermediate and two less well defined states. By measuring the effects of aromatic substituents and applying Hammond's postulate it was concluded that the rate-determining step involves formation of a transition state that should resemble the intermediate complex. [ 6 ] During the 1940s and 1950s, chemists had trouble explaining why even slight changes in the reactants caused significant differences in the rate and product distributions of a reaction. In 1955 George Hammond , a young professor at Iowa State University , postulated that transition-state theory could be used to qualitatively explain the observed structure-reactivity relationships. [ 7 ] Notably, John E. Leffler of Florida State University proposed a similar idea in 1953. [ 8 ] However, Hammond's version has received more attention since its qualitative nature was easier to understand and employ than Leffler's complex mathematical equations. Hammond's postulate is sometimes called the Hammond–Leffler postulate to give credit to both scientists. [ 7 ] Effectively, the postulate states that the structure of a transition state resembles that of the species nearest to it in free energy . This can be explained with reference to potential energy diagrams: In case (a), which is an exothermic reaction, the energy of the transition state is closer in energy to that of the reactant than that of the intermediate or the product. 
Therefore, from the postulate, the structure of the transition state also more closely resembles that of the reactant. In case (b), the energy of the transition state is close to neither the reactant nor the product, making none of them a good structural model for the transition state. Further information would be needed in order to predict the structure or characteristics of the transition state. Case (c) depicts the potential diagram for an endothermic reaction, in which, according to the postulate, the transition state should more closely resemble that of the intermediate or the product. Another significance of Hammond’s postulate is that it permits us to discuss the structure of the transition state in terms of the reactants, intermediates, or products. In the case where the transition state closely resembles the reactants, the transition state is called “early” while a “late” transition state is the one that closely resembles the intermediate or the product. [ 9 ] An example of the “early” transition state is chlorination. Chlorination favors the products because it is an exothermic reaction, which means that the products are lower in energy than the reactants. [ 10 ] When looking at the adjacent diagram (representation of an "early" transition state), one must focus on the transition state, which is not able to be observed during an experiment. To understand what is meant by an “early” transition state, the Hammond postulate represents a curve that shows the kinetics of this reaction. Since the reactants are higher in energy, the transition state appears to be right after the reaction starts. An example of the “late” transition state is bromination. Bromination favors the reactants because it is an endothermic reaction, which means that the reactants are lower in energy than the products. [ 11 ] Since the transition state is hard to observe, the postulate of bromination helps to picture the “late” transition state (see the representation of the "late" transition state). Since the products are higher in energy, the transition state appears to be right before the reaction is complete. One other useful interpretation of the postulate often found in textbooks of organic chemistry is the following: This interpretation ignores extremely exothermic and endothermic reactions which are relatively unusual and relates the transition state to the intermediates which are usually the most unstable. Hammond's postulate can be used to examine the structure of the transition states of a SN1 reaction . In particular, the dissociation of the leaving group is the first transition state in a S N 1 reaction. The stabilities of the carbocations formed by this dissociation are known to follow the trend tertiary > secondary > primary > methyl. Therefore, since the tertiary carbocation is relatively stable and therefore close in energy to the R-X reactant, then the tertiary transition state will have a structure that is fairly similar to the R-X reactant. In terms of the graph of reaction coordinate versus energy, this is shown by the fact that the tertiary transition state is further to the left than the other transition states. In contrast, the energy of a methyl carbocation is very high, and therefore the structure of the transition state is more similar to the intermediate carbocation than to the R-X reactant. Accordingly, the methyl transition state is very far to the right. 
Bimolecular nucleophilic substitution (SN2) reactions are concerted reactions in which both the nucleophile and the substrate are involved in the rate-limiting step. Since this reaction is concerted, it occurs in a single step in which bonds are broken while new bonds are formed. [ 12 ] Therefore, to interpret this reaction, it is important to look at the transition state, which resembles the concerted rate-limiting step. In the "Depiction of SN2 Reaction" figure, the nucleophile forms a new bond to the carbon, while the halide (L) bond is broken. [ 13 ]

An E1 reaction is a unimolecular elimination, in which the rate-determining step of the mechanism depends on the removal of a single molecular species. This is a two-step mechanism. The more stable the carbocation intermediate is, the faster the reaction will proceed, favoring the products. Stabilization of the carbocation intermediate lowers the activation energy. The reactivity order is (CH3)3C− > (CH3)2CH− > CH3CH2− > CH3−. [ 14 ] Furthermore, studies describe a typical kinetic resolution process that starts out with two enantiomers that are energetically equivalent and, in the end, forms two energy-inequivalent intermediates, referred to as diastereomers. According to Hammond's postulate, the more stable diastereomer is formed faster. [ 15 ]

Bimolecular elimination (E2) reactions are one-step, concerted reactions in which both the base and the substrate participate in the rate-limiting step. In an E2 mechanism, a base takes a proton near the leaving group, forcing the electrons down to make a double bond, and forcing off the leaving group, all in one concerted step. The rate law is first order in each of the two reactants, making it a second-order (bimolecular) elimination reaction. Factors that affect the rate-determining step are stereochemistry, leaving groups, and base strength. A theory for the E2 reaction by Joseph Bunnett suggests that the lowest pass through the energy barrier between reactants and products is gained by an adjustment between the degrees of Cβ−H and Cα−X rupture at the transition state. The adjustment involves much breaking of the bond that is more easily broken, and a small amount of breaking of the bond which requires more energy. [ 16 ] This conclusion of Bunnett's appears to contradict the Hammond postulate, which predicts the opposite: in the transition state of a bond-breaking step there is little breaking when the bond is easily broken and much breaking when it is difficult to break. [ 16 ] Despite these differences, the two postulates are not in conflict, since they are concerned with different sorts of processes. Hammond's postulate concerns reaction steps in which one bond is made or broken, or in which the breaking of two or more bonds occurs simultaneously. The E2 transition state of Bunnett's theory concerns a process in which bond formation and breaking are not simultaneous. [ 16 ]

Technically, Hammond's postulate only describes the geometric structure of a chemical reaction. However, Hammond's postulate indirectly gives information about the rate, kinetics, and activation energy of reactions. Hence, it gives a theoretical basis for understanding the Bell–Evans–Polanyi principle, which describes the experimental observation that the enthalpies and rates of similar reactions are usually correlated. The relationship between Hammond's postulate and the BEP principle can be understood by considering an SN1 reaction.
Although two transition states occur during a S N 1 reaction (dissociation of the leaving group and then attack by the nucleophile), the dissociation of the leaving group is almost always the rate-determining step . Hence, the activation energy and therefore rate of the reaction will depend only upon the dissociation step. First, consider the reaction at secondary and tertiary carbons. As the BEP principle notes, experimentally S N 1 reactions at tertiary carbons are faster than at secondary carbons. Therefore, by definition, the transition state for tertiary reactions will be at a lower energy than for secondary reactions. However, the BEP principle cannot justify why the energy is lower. Using Hammond's postulate, the lower energy of the tertiary transition state means that its structure is relatively closer to its reactants R(tertiary)-X than to the carbocation product when compared to the secondary case. Thus, the tertiary transition state will be more geometrically similar to the R(tertiary)-X reactants than the secondary transition state is to its R(secondary)-X reactants. Hence, if the tertiary transition state is close in structure to the (low energy) reactants, then it will also be lower in energy because structure determines energy. Likewise, if the secondary transition state is more similar to the (high energy) carbocation product, then it will be higher in energy. Hammond's postulate is useful for understanding the relationship between the rate of a reaction and the stability of the products. While the rate of a reaction depends just on the activation energy (often represented in organic chemistry as ΔG ‡ “delta G double dagger”), the final ratios of products in chemical equilibrium depends only on the standard free-energy change ΔG (“delta G”). The ratio of the final products at equilibrium corresponds directly with the stability of those products. Hammond's postulate connects the rate of a reaction process with the structural features of those states that form part of it, by saying that the molecular reorganizations have to be small in those steps that involve two states that are very close in energy. This gave birth to the structural comparison between the starting materials, products, and the possible "stable intermediates" that led to the understanding that the most stable product is not always the one that is favored in a reaction process. Hammond's postulate is especially important when looking at the rate-limiting step of a reaction. However, one must be cautious when examining a multistep reaction or one with the possibility of rearrangements during an intermediate stage. In some cases, the final products appear in skewed ratios in favor of a more unstable product (called the kinetic product ) rather than the more stable product (the thermodynamic product ). In this case one must examine the rate-limiting step and the intermediates. Often, the rate-limiting step is the initial formation of an unstable species such as a carbocation . Then, once the carbocation is formed, subsequent rearrangements can occur. In these kinds of reactions, especially when run at lower temperatures, the reactants simply react before the rearrangements necessary to form a more stable intermediate have time to occur. At higher temperatures when microscopic reversal is easier, the more stable thermodynamic product is favored because these intermediates have time to rearrange. 
Whether run at high or low temperature, the mixture of kinetic and thermodynamic products eventually reaches the same ratio, one in favor of the more stable thermodynamic product, when given time to equilibrate due to microscopic reversal.
https://en.wikipedia.org/wiki/Hammond's_postulate
Rev. Hamnet Holditch , also spelled Hamnett Holditch (1800 – 12 December 1867), was an English mathematician who was President of Gonville and Caius College, Cambridge . In 1858, he introduced the result in geometry now known as Holditch's theorem . Hamnet Holditch was born in 1800 in King's Lynn , the son of George Holditch, pilot and harbour-master. Educated at King's Lynn Grammar School under Rev. Martin Coulcher, [ 1 ] he matriculated at Gonville and Caius College, Cambridge in 1818, and graduated B.A. in 1822 ( Senior Wrangler and 1st Smith's Prize ), M.A. in 1825. [ 2 ] At Gonville and Caius College, Holditch was a junior fellow from 1821 and a senior fellow from 1823, and held the college posts of lecturer in Hebrew and Greek, registrar, steward, salarist (1823–28), bursar (1828–31), and President (1835–67). [ 3 ] [ 4 ] He died at Gonville and Caius College on 12 December 1867, aged 67, [ 5 ] and was buried at North Wootton . [ 1 ] [ 3 ] Although Holditch produced ten mathematical papers, he was extremely idle as a tutor. [ 6 ] John Venn , an undergraduate at Caius in the 1850s then a Caius Fellow from 1857, noted that Holditch, despite his succession of college offices, "beyond a few private pupils, never took part in educational work": [ 3 ] He was a very ingenious mathematician, and would probably have distinguished himself had he been compelled to work. Remarkable for his extreme shyness. On account of some ancient slight he for many years entirely absented himself from Hall and Chapel, and few members of the college knew him even by sight:— an undergraduate once showed him round the college, taking him for a stranger. The whole summer he spent fishing in Scotland or Wales. It is curious to see Holditch coming out of his den, which he does once in ten years, with something about rolling curves or caustics. He was senior wrangler the year before Airy , and what has made a man of such decided talent shut himself up I never heard.
https://en.wikipedia.org/wiki/Hamnet_Holditch
The Hampson–Linde cycle is a process for the liquefaction of gases, especially for air separation. William Hampson and Carl von Linde independently filed for patents of the cycle in 1895: Hampson on 23 May 1895 and Linde on 5 June 1895. [ 1 ] [ 2 ] [ 3 ] [ 4 ] The Hampson–Linde cycle introduced regenerative cooling, a positive-feedback cooling system. [ 5 ] The heat exchanger arrangement permits an absolute temperature difference (e.g. 0.27 °C/atm J–T cooling for air) to go beyond a single stage of cooling and can reach the low temperatures required to liquefy "fixed" gases. The Hampson–Linde cycle differs from the Siemens cycle only in the expansion step. Whereas the Siemens cycle has the gas do external work to reduce its temperature, the Hampson–Linde cycle relies solely on the Joule–Thomson effect; this has the advantage that the cold side of the cooling apparatus needs no moving parts. [ 1 ] The cooling cycle proceeds in several steps: the gas is compressed and the heat of compression is removed by ambient cooling; the compressed gas then passes through a counter-flow heat exchanger, where it is pre-cooled by cold gas returning from the expansion step; the pre-cooled gas expands through a throttle valve, where the Joule–Thomson effect cools it further and, once the temperature is low enough, liquefies a fraction of it; and the remaining cold gas flows back through the heat exchanger to pre-cool the incoming compressed gas before being recompressed. In each cycle the net cooling is more than the heat added at the beginning of the cycle. As the gas passes more cycles and becomes cooler, reaching lower temperatures at the expansion valve becomes more difficult.
https://en.wikipedia.org/wiki/Hampson–Linde_cycle
A hamus or hamulus is a structure functioning as, or in the form of, hooks or hooklets. The terms are directly from Latin , in which hamus means " hook ". The plural is hami . Hamulus is the diminutive – hooklet or little hook. The plural is hamuli . Adjectives are hamate and hamulate , as in "a hamulate wing-coupling", in which the wings of certain insects in flight are joined by hooking hamuli on one wing into folds on a matching wing. Hamulate can also mean "having hamuli". The terms hamose , hamular , hamous and hamiform also have been used to mean "hooked", or "hook-shaped". Terms such as hamate [ 1 ] that do not indicate a diminutive usually refer particularly to a hook at the tip, whereas diminutive terms such as hamulose tend to imply that something is beset with small hooks. [ 2 ] [ 3 ] In vertebrate anatomy, a hamulus is a small, hook-shaped portion of a bone, or possibly of other hard tissue. In human anatomy, examples include: [ 4 ] In arthropod morphology, hamuli are hooklets, usually in the form of projections of the surface of the exoskeleton . Hami might be actual evaginations of the whole thickness of the exoskeleton. The best-known examples are probably the row of hamuli on the anterior edge of the metathoracic (rear) wings of Hymenoptera such as the honeybee. The hooks attach to a fold on the posterior edge of the mesothoracic (front) wings. It is less widely realised that similar hamuli, though usually fewer, are used in wing coupling in the Sternorrhyncha , the suborder of aphids and scale insects . In the Sternorrhyncha such wing coupling occurs particularly in the males of some species. The rear wings of that suborder frequently are reduced or absent, and in many species the last vestige of the rear wing to persist is a futile little strap holding the hamuli, still hooking into the fold of the large front wings. In those springtails (Collembola) that have a functional furcula , the underside of the third abdominal segment bears a hooked structure, variously called the retinaculum or hamula. It holds the furcula ready for release in times of emergency. [ 5 ] The terms also are used in descriptive anatomy of some insect genitalia, such as hamuli in various Odonata and "hamus" for the hooked part of the uncus in male Lepidoptera . [ 5 ] In botany , such words largely refer to hooked bristles such as the hooks on the rachilla of Uncinia , which attach the fruit to passing animals, or the similarly functioning hooks on Burdocks , well known as the alleged inspiration for Velcro .
https://en.wikipedia.org/wiki/Hamulus
Archaea , one of the three domains of life, are a highly diverse group of prokaryotes that include a number of extremophiles. [ 1 ] One of these extremophiles has given rise to a highly complex new appendage known as the hamus ( pl. : hami ). In contrast to the well-studied prokaryotic appendages pili and fimbriae, much is yet to be discovered about archaeal appendages such as hami. [ 2 ] Appendages serve multiple functions for cells and are often involved in attachment, horizontal conjugation , and movement. The unique appendage was discovered at the same time as the unique community of archaea that produces them. Research into the structure of hami suggests their main function aids in attachment and biofilm formation. This is accomplished due to their evenly placed prickles, helical structure, and barbed end. [ 3 ] These appendages are heat and acid resistant, aiding in the cell's ability to live in extreme environments. [ 4 ] In 1977, archaea, then known as archaebacteria, were first discovered when Carl Woese and George Fox published their findings in the Proceedings of the National Academy of Sciences, stating that these organisms were distantly related to bacteria . This revolutionized biology into the three domains of life known today; Bacteria, Eukarya, and Archaea. [ 5 ] By checking the ratios of biogenic isotopes that are unique to different metabolisms, scientists have dated archaea as far back as 2,500 million years. Due to oxygen being a trace element in the atmosphere at this time, archaea anaerobes methanotrophy is believed to have preceded bacterial aerobic methanotrophy. When studying phylogenetic trees , Bacteria are evolved from the last universal common ancestor or LUCA , while Archaea and Eukarya are considered sister lineages because they share a last common ancestor that is more recent than LUCA. [ 6 ] Archaea, much like other microorganisms, possess a variety of extracellular appendages to facilitate important functions such as motility, cell adhesion , and DNA transfer . Unlike fimbriae and pili, whose composition and function(s) are well defined among bacterial species, hami belong to a relatively new class of filamentous cell appendages unique to archaea. [ 7 ] Archaeal cells may have as many as 100 hami, which are largely composed of 120 kDa subunits. Each hamus (hami plural), is helical in shape with many hook-like projections at the distal end, which are hypothesized to aid in attachment to surfaces within the environment, or in the formation of biofilms. Archaeal cells possessing hami appear to grow only in relatively cold aquatic environments around 10 degrees Celsius, which could be suggestive of a particular function that has not yet been defined. [ 9 ] One possible explanation for this observation could be the relationship archaeal cells, SM1 euryarchaeon , possessing hami have with Thiothrix, a type of sulfur-oxidizing bacterium typically found within similar conditions. Hamus-bearing archaeal cells sometimes form macroscopically visible communities with Thiothrix or IMB1 ε- proteobacterium, called a string-of-pearls. [ 9 ] Thiothrix and IMB1 ε- proteobacterium are filamentous bacteria that appear to form the outer shell of the pearl as well as the strings that connect these pearls together. Within the pearls, it appears the archaea SM1 euryarchaeon forms the majority of the core. [ 4 ] Research has shown the SM1 euryarchaeon use the hamus to aid in biofilm formation. 
[ 2 ] The formation of string-of-pearls communities suggests a mutual dependency for nutrient exchange, though the entirety of this unique relationship has yet to be established. [ 10 ] Another hami producing biofilm was discovered that was dissimilar from the string pearl formation. This biofilm consists almost entirely of SM1 archaea making it the first biofilm found of this nature as no other biofilm with a nearly pure composition of archaea has been found. [ 4 ] This biofilm has a highly organized structure with distances between cells being exceptionally consistent. Scientists speculate the hami are not only responsible for the strong attachments found in the biofilm formation but also this highly intricate and specific structure. [ 4 ] It is possible that other archaeal cells possessing hami have not yet been discovered or cultured. Archaeal appendages serve a variety of purposes and provide the archaeal cells with multiple unique and essential abilities. Hami play a large role in cellular attachment. These appendages allow the cells to adhere to each other, as well as their surroundings. When the hami filaments of one cell come into contact with a neighboring cell, the hami are able to entangle and produce a web like structure between the cells. [ 11 ] This helps to form and maintain the biofilm. Hami are also used by the cells in biofilms or individually to adhere to external environmental surfaces. They have been proven to attach to substances with varying chemical compositions including those of an inorganic nature. [ 2 ] Hami are also capable of contributing to the EPS of the cell as part of the main protein component of the EPS. [ 12 ] One interesting facet of these hami is that their 120 kDa protein allows them to remain stable over a broad range of temperatures. One research experiment found hami to be stable at 70 degrees C and noted the finding curious as the only currently known hami producing cells live in 10 degrees C. These hami were also noted to be stable over a significant pH range of 0.5-11.5. [ 4 ] Archaea are known as extremophiles and live in extreme environments, but this capacity to remain stable over a large range of both pH and temperature makes hami very unique structures. Similarly, this lends to the possibility that archaeal hami may exist in other yet to be discovered biofilms outside of the 10 degree C temperature range and in various pH ranges.
https://en.wikipedia.org/wiki/Hamus_(archaea)
Han Zhong (韓終 or 韓眾) was a Qin dynasty (221 BCE-206 BCE) herbalist fangshi ("Method Master") and Daoist xian ("Transcendent; 'Immortal'"). In Chinese history , Qin Shi Huang , the first emperor of China, commissioned Han in 215 BCE to lead a maritime expedition in search of the elixir of life , yet he never returned, which subsequently led to the infamous burning of books and burying of scholars . In Daoist tradition, after Han Zhong consumed the psychoactive drug changpu (菖蒲, " Acorus calamus , sweet flag") for thirteen years, he grew thick body hair that protected him from cold, acquired a photographic memory , and achieved transcendence . He is iconographically represented as riding a white deer and having pendulous ears. The present Chinese name Han Zhong combines the common surname Hán ( 韓 ) and given name Zhōng ( 終 ) or Zhòng ( 眾 ). Hán (韓 or 韩) has English translation equivalents of: " 1. name of one of the 7 major states in Warring States period , comprising the area of present-day southeast Shanxi and central Henan , originally part of the Jin state . 2. a surname." (Kroll 2017: 151). In modern Standard Chinese usage, the word commonly translates "Korea", such as Hánguó (韓國, "Korea") or Běihán (北韓, "North Korea") (Bishop 2016: n.p.). Zhōng (終 or 终) has English translations of: " 1. end, finish, conclude … come to the end of life; death, demise. 2. all the way to the end, through to the finish; all of, the whole, complete(ly) … 3. in the end, finally, after all, in conclusion. …" (Kroll 2017: 611). Zhòng (眾 or 众) can be translated as: " 1. multitude, throng; manifold; numerous, legion; throng(ing); sundry, diverse … the crowd, common run, mass of; average, normal … 2. in everyone's presence, public(ly)" (Kroll 2017: 613). Although scholars generally believe Han Zhong (韓眾) and Han Zhong (韓終) were variant writings of one person's name, some suggest they were two individuals; the Warring States period (475–221 BCE) Transcendent Han Zhong (韓眾) and the Western Han dynasty (202 BCE-9 CE) fangshi Han Zhong (韓終) (Zhang and Unschuld 2014: 173–174). Two honorific names below for Han Zhong are Huolin xianren (霍林仙人, Transcendent of Huolin ) and Bailuxian (白鹿仙, White Deer Transcendent). Chāng (菖) is usually limited to the Acorus name: 1. ~蒲 chāngpú , sweet-flag ( Acorus calamus ), sweetly scented wetland grass, used apotropaically; sometimes ref[erring] to cattail ( Typha latifolia ) or bulrush ( Typha minima )." (Kroll 2017: 41). Pú (蒲) occurs within several Chinese plant names: " 1. sweet-flag ( Acorus calamus ), also 菖~ chāngpú , a wetland grass, used apotropaically; the latter also cattail ( Typha latifolia ) or bulrush ( Typha minima ). 2. ~柳 [with "willow"] púliǔ , purple willow, purple osier ( Salix sinopurpurea ), deciduous shrub that produces small purple catkins in early spring. 3. ~葵 [with "sunflower"] púkuí , Chinese fan palm, fountain palm ( Livistona chinensis ). 4. ~桃 [with "peach"] pútáo , rose-apple ( Syzygium jambos ). ..." (Kroll 2017: 350). A modern dictionary of Chinese botanical nomenclature lists five Acorus terms: changpu (菖蒲, Acorus calamus ), riben baichang (日本白菖, with "Japanese", A. calamus var. angustatus ), shi changpu (石菖蒲, with "stone/rock", Acorus gramineus "), xiye changpu (細葉菖蒲, with "thin leaf", A. gramineus var. pusillus "), and jinxian shi changpu (金線石菖蒲, with "gold thread", A. gramineus var. variegatus ") (Fèvre and Métaillé 2005: 737) The Chinese classics present Han Zhong as both a historical personage and a legendary persona. 
The textual examples below are roughly arranged chronologically. The 3rd-2nd centuries BCE Chuci "Songs of Chu " mentions Han Zhong (韓眾) in two poems about shamanistic spirit journeys. The Yuan You (遠遊, Far Roaming) compares him with Fu Yue , a legendary minister under the Shang dynasty king Wu Ding (r. c. 1250–c. 1200 BCE), I marveled how Fu Yue lived on in a star; I admired Han Zhong for attaining Oneness. Their bodies grew dim and faded in the distance; They left the crowded world behind and withdrew themselves. (Hawkes 1985: 194). The Zibei (自悲, Oppressed by Grief) poem says, I heard South Land music and wanted to go there, And coming to Kuaiji I rested there awhile. There I met Han Zhong, who gave me lodging. I asked him wherein lay the secret of heaven's Tao. Borrowing a floating cloud to take me on my journey, With the pale woman-rainbow as a banner to fly over it, I harnessed the Green Dragon to it for my swift conveyance. And off in a flash we flew, at a speed that made the eyes dim. … (Hawkes 1985: 254). In addition, the Chuci' s enigmatic Tianwen (Heavenly Questions) section refers to marijuana and perhaps calamus. "Where is the nine-branched weed [靡蓱九衢]? Where is the flower of the Great Hemp [枲華]?" (Hawkes 1985: 128); alternatively, "The nine-jointed calamus, And xi blossoms, where do they grow?" (Field 1986: 49). Two Chuci poems mention white deer: "Green cyprus grass grows in between, And the rush-grass rustles and sways. White deer, roebuck and horned deer Now leap and now stand poised." (" Seven Remonstrances ", Hawkes 1985: 245); "Floating on cloud and mist, we enter the dim height of heaven; Riding on white deer we sport and take our pleasure." (" Alas That My Lot Was Not Cast ", Hawkes 1985: 266). Sima Qian 's 1st century BCE Records of the Grand Historian mentions Han Zhong as one of five fangshi Method Masters who the first Chinese emperor Qin Shi Huang (r. 221-210 BCE) selected to lead maritime expeditions seeking legendary Daoist Transcendents and elixirs of longevity . In 219 BCE, Xu Fu (徐福) or Xu Shi (徐巿) from Qi and others submitted a memorial to the throne requesting to search for the xian Transcendents who reportedly lived on three hidden islands in the East Sea , Penglai (蓬萊), Fangzhang (方丈), and Yingzhou (瀛洲). The emperor ordered him to take a flotilla with "several thousand young men and maidens" and locate these supernatural islands (Needham 1976: 17). In 215 BCE, during Qin Shi Huang's fourth imperial inspection tour of northeast China, he commissioned more naval expeditions searching for Transcendental drugs. First, when the emperor was visiting Mount Jieshi (碣石山, in Hebei ) he commanded Scholar Lu (盧生), from Yan , to find the Transcendent Xianmen Gao (羨門高) (Needham 1976: 18). When Lu came back from his unsuccessful mission overseas, he reported on "matters concerning ghosts and gods" to the emperor, and presented prophetic writings, one of which read: "Qin will be destroyed by hu [亡秦者胡也]." (Dawson 1994: 72). Understanding hu (胡, "foreign; barbarian") in its usual meaning, the emperor ordered General Meng Tian to lead 300,000 troops on a campaign against the Xiongnu barbarians—however, hu was eventually understood as a reference to the emperor's youngest son and hapless successor, Prince Huhai (胡亥, r. 210-207 BCE), whose name was written with the same character (Dawson 1994: 154). Second, the emperor directed Han Zhong (written 韓終, cf. 
below), Master Hou (侯公), and Scholar Shi (石生) to search for legendary Transcendents and their "drug of deathlessness" (僊人不死之藥) (Nienhauser 2018: 145). Their explorations never returned to China and were presumed lost. In 213 BCE, Qin Shi Huang approved his Chancellor Li Si 's proposal to suppress intellectual dissent by burning most existing books, except those on divination, agriculture, medicine, and history of the state of Qin. The rules were draconian, "Anyone who ventures to discuss [the Classic of Poetry or the Book of Documents ] will be executed in the marketplace. Those who use the ancient (system) to criticize the present, will be executed together with their families. … Thirty days after the ordinance has been issued, anyone who has not burned his books will be tattooed and sentenced to hard labor." (Nienhauser 2018: 147). In 212 BCE, the emperor became resentful that Han Zhong and the other Method Masters had repeatedly lied about being able to obtain longevity elixirs, which culminated with the mass execution of 460 scholars in the infamous burning of books and burying of scholars . First, Scholar Lu blamed evil spirits [惡鬼] for causing the failures to find "magic mushrooms, elixirs of long life, and immortals" [芝奇藥僊], and suggested, "I hope that Your Highness will not let anyone know of the residence wherein you stay; then the elixir of long life may be obtained" (Nienhauser 2018: 149). Then, Scholars Lu and Hou secretly met and concluded that since Qin Shi Huang had never been informed of his mistakes, and was becoming more arrogant daily, his obsession with power was so extreme that they could never seek the elixir of longevity for him (Nienhauser 2018: 149). Therefore, they absconded, and when the First Emperor learned of it, he was enraged. "I have eliminated those books which I earlier confiscated from the world and judged useless, and recruited only the literary men and practitioners of [magic] methods and techniques in great number, with the desire to bring about the great peace, and, with the practitioners of [magic] methods, to seek wonderous drugs by means of alchemy [方士欲練以求奇藥]. Now I have heard that Han Chung [written with the variant 韓眾, cf. 韓終 above] has never reported back after he left, and Hsu Fu and his associates have spent cash countable only in myriads, but the elixir is yet to be found. I am only told every day that they accused each other of embezzlement. I respected and treated lavishly Scholar Lu and his like. Now they have slandered me to substantiate my lack of virtue, I will have someone investigate all the masters in Hsien-yang, to see if any of them has spread phantom rumors to confuse the black-haired [i.e., Chinese people]." (Nienhauser 2018: 150). He had the Imperial Scribes interrogate the various Masters, who accused and implicated one another to save themselves. The emperor selected 460 of those who had violated prohibitions, and had them trapped and executed. Finally, in 210 BCE, the last year of Qin Shi Huang's life, Xu Fu and the remaining fangshi worried that the emperor would punish them for their failures to find longevity drugs, and made up a fish story. Xu told the emperor, "The elixir from Penglai was obtainable, but we were constantly troubled by large sharks [大鮫魚], and therefore were unable to get there. We would ask for someone skilled at archery to accompany us, so that, upon seeing the sharks, we could shoot them with automatic crossbows ." (Nienhauser 2018: 153–154). Xu set sail on a final expedition but never returned. 
Later tradition has it that he settled in Japan (Twitchett and Loewe 1986: 78). Following the Records of the Grand Historian , subsequent official Chinese dynastic Twenty-Four Histories retell Han Zhong's story. For instance, 111 CE Book of Han mentions him in the "Treatise on Sacrifices" . "When Ch'in Shih Huang first unified the empire, he indulged in the cult of hsien immortality. Thereupon, he sent people like Hsu Fu 徐福 and Han Chung 韓終 to sea, with unmarried boys and girls, in search of hsien as well as drugs. But (these people) took the opportunity to run away and never came back. Such efforts aroused the resentment and hatred of all under heaven." (Yu 1965: 95). Although the 2nd century CE Liexian Zhuan (Collected Biographies of Transcendents) does not mention Han Zhong, it records calamus as one of 29 psychoactive plants consumed by Daoist adepts (Chen 2021: 18–19). Two hagiographies describe calamus-root subsistence diets. Shangqiu Zixu (商丘子胥) was a master of grain-avoidance fasting who was fond of blowing the thirty-six-pipe mouth organ ( yu ) while he herded pigs. At age seventy he had neither married a wife nor grown old. When people asked about the essentials of his way of life he would say, "I only eat old thistles ( shu 朮) and calamus ( ch'ang-p'u ) roots [菖蒲根], and drink water. In this way I don't get hungry or old, that's all." When the noble and wealthy heard of it and tried eating this diet, they could never last through a year before quitting, and claimed there must be some secret formula. (Mather 1976: 434-435) Wu Guang (務光 or 瞀光) was a Xia dynasty loyalist who supposedly refused to serve two Shang dynasty kings four centuries apart. After refusing to work for King Tang of Shang (r. c. 1617?-1588? BCE), he committed suicide by drowning, yet miraculously reappeared to deliver a similar refusal to King Wuding of Shang (r. c. 1254-1197 BCE) (Bokenkamp 2015: 293). According to traditions, Wu Guang's ears were seven cun long (comparable with Han Zhong's pendulous ears), loved playing the jin (琴, "zither"), and subsisted on calamus roots (服蒲韭根). This uncommon term pujiu (蒲韭, "calamus"; with jiu "leek; chives"), is related with literary Chinese Yaojiu (堯韭, " Yao's leek; calamus") (Bokenkamp 2015: 295). Ge Hong 's c. 318 Baopuzi (Master Who Embraces Simplicity) mentions Han Zhong (韓終) twice and medical changpu (菖蒲) five times. The "Gold and Cinnabar" chapter lists famous elixirs of longevity , including the Han Zhong dan (韓終丹, Han Zhong's elixir), in which calamus is not an ingredient. " Varnish honey and cinnabar [漆蜜]. Fry. When taken, it can protract your years and confer everlasting vision. In full sun you will cast no shadow." (4, Ware 1966: 89). "The Genie's Pharmacopeia" chapter says, "Han Chung took sweet flag for thirteen years and his body developed hairs. He intoned ten thousand words of text each day. He felt no cold in winter, though his gown was open. To be effective, sweet flag must have grown an inch above the surrounding stones and have nine or more nodules [ jie 節]. That with purple flowers is best." (11, Ware 1966: 195). This chapter quotes the apocryphal Yuanshen qi (援神契, Key to the Sacred Foundation) for the Classic of Filial Piety , "Pepper and ginger protect against the effects of dampness, sweet flag sharpens the hearing [菖蒲益聰], sesame protracts the years, and resin puts weapons to flight." (11, Ware 1966: 177). 
It also describes magical rouzhi (肉芝, "flesh excrescences"), notably the fengli , a mythical flying animal, "resembling a sable, blue in color and the size of a fox" found in southern forests. It is almost impossible to slay, except for changpu suffocation, and cannot be killed by burning, chopping with an ax, or beating with an iron mace, but "It dies at once, however, if its nose is stuffed with reeds from the surface of a rock [石上菖蒲]" (11, Ware 1966: 185). "The Ultimate System" chapter mentions ancient herbal cures about which people are skeptical, including, "sweet flag and dried ginger [菖蒲乾姜] check rheumatism" (5, Ware 1966: 103). A lost fragment of the original Baopuzi text, which was preserved in the 624 Yiwen Leiju (Collection of Literature Arranged by Categories), connects Han Zhong (韓終) with the shanzhi (山芝, Mountain Excrescence): "This is what Han Zhong consumed in order to merge with heaven and earth, prolong life, and communicate with the spirits." (Chen 2021: 22). Ge Hong also compiled the c. 4th century Daoist Shenxian zhuan (Biographies of Divine Transcendents), which mentions Han Zhong in cases of two calamus-eaters. First, in the hagiography of the Han-dynasty Transcendent Liu Gen (劉根), Han Zhong presented Liu a scripture about expelling the Three Corpses , supernatural parasites that live inside the human body and seek to hasten the death of their host. Liu explained to his student Wang Zhen (王珍) how he met Han Zhong and became his disciple. I once entered the mountains, and in my meditations there was no state I did not reach. Later, I entered Mount Huayin . There I saw a personage riding a carriage drawn by a white deer, followed by several dozen attendants, including four jade maidens each of whom was holding a staff hung with a colored flag and was fifteen or sixteen years old. I prostrated myself repeatedly, then bowed my head and begged a word. The divine personage said to me, "Have you heard of someone called Han Zhong 韓眾?" "Truly I have heard that there is such a person, yes," I answered. The divine personage said, "I am he." (Campany 2002: 242-243) Liu recounted his multiple failures at studying the Dao without an enlightened teacher, and pleaded for help. The divine personage then said, "Sit, and I will tell you something. You must have the bones of a transcendent; that is why you were able to see me. But at present your marrow is not full, your blood is not warm, your breath is slight, your brains are weak, your sinews are slack, and your flesh is damp. This is why, when you ingest medicinals and circulate pneumas, you do not obtain their benefits. If you wish to achieve long life, you must first cure your illnesses; only after twelve years have passed can you ingest the drug of transcendence." Campany 2002: 243-244) Han Zhong then summarized various methods of achieving xian Transcendence, the best of which will allow one to live for several hundred years, and said, "If you desire long life, the first thing you must do is to expel the three corpses. Once the three corpses are expelled, you must fix your aim and your thought, eliminating sensual desires." Han presented Liu with a manuscript of the Shenfang wupian (神方五篇, Divine Methods in Five Sections), which says, The ambushing corpses always ascend to Heaven to report on people's sins on the first, fifteenth, and last days of each month. The Director of Allotted Life Spans ( Siming 司命) deducts from people's accounts and shortens their life spans accordingly. 
The gods within people's bodies want to make people live, but the corpses want to make them die. When people die, their gods disperse; the corpses, once in this bodiless state, become ghosts, and when people sacrifice to [the dead] these ghosts obtain the offering foods. This is why the corpses want people to die. When you dream of fighting with an evil person, this is [caused by] the corpses and the gods at war [inside you]. (Campany 2002 245–246) Following the book's instructions, Liu synthesized the elixir, ingested it, and thereby attained Transcendence. Second, in an exception to the usual Chinese master-disciple secret teachings about consuming calamus to achieve xian Transcendence, Wang Xing (王興) was a peasant who happened to be on Mount Song and overheard the giant spirit of Han Zhong telling Emperor Wu of Han (r. 141-87 BCE) where to find the best plants. Wang Xing was a native of Yangcheng who lived in Gourd Valley. He was a commoner who was illiterate and who had no intention of practicing the Way. When Han Emperor Wu ascended Mount Song [in 110 BCE], he climbed to the Cave of Great Stupidity, where he erected a palace for the Dao [a temporary meditation chamber and altar] and had Dong Zhongshu , Dongfang Shuo , and others fast and meditate on the gods. That night, the emperor suddenly saw a transcendent twenty feet [2 丈 ] tall, with ears hanging down to his shoulders. The emperor greeted him respectfully and inquired who he was. The transcendent replied, "I am the spirit of Mount Jiuyi [九疑, Nine Doubts Mountain]. I have heard that the sweet flag that grows atop rocks on the Central Marchmount here, the variety with nine joints [節] per inch, will bring one long life if ingested. So I have come to gather some." Then the spirit suddenly vanished. The emperor said to his attendants, "That was not merely someone who studies the Way and practices macrobiotics. It was surely the spirit of the Central Marchmount, saying that to instruct me." And so they all gathered sweet flag for him to ingest. After two years the emperor began feeling depressed and unhappy, so he stopped taking it. At that time many of his attendant officials were also taking it, but none could sustain the practice for very long. Only Wang Xing, who had overheard the transcendent instructing Emperor Wu to take sweet flag, harvested and ingested it without ceasing and so attained long life. Those who lived near his village, both old and young, said that he was seen there over many generations. It is not known how he ended up. (Campany 2002: 341-342) The eminent Tang poet Li Bai (701–762) wrote "The Calamus-Gatherer of Mount Song" (嵩山采菖蒲者)", which refers to this Shenxian zhuan story about Emperor Wu of Han. A divine person of ancient visage, Both ears hanging down to his shoulders, Came upon, on Song Marchmount , the Marshal One of Han, Who considered him a Transcendent of Mount Jiuyi. "I have come to gather calamus. Ingesting it, one can extend one’s years." Thus saying, suddenly he disappeared; Obliterating his shadow, he entered the clouds and mist. At his injunction the Thearch was not, after all, enlightened – His final return was to the fields of Lush Mound. (Bokenkamp 2015: 296) "Lush Mound" translates Maoling (茂陵), the mausoleum of Emperor Wu of Han. Shangqing School tradition links Han Zhong with the provenance of several scriptures, such as the Taishang Lingbao wufu (太上靈寶五符, Five Talismans of the Numinous Treasure). 
The manuscript originated with Donghai Xiaotong (東海小童, Young Lad of the Eastern Sea) who gave it to his student Zhang Daoling (Campany 2002: 243, 356). He transmitted it to Han Zhong (韓終), honorifically called the Huolin xianren (霍林仙人, Transcendent of Huolin ), who gave manuscript, partially written in ancient tadpole script , to the Transcendent Yue Zichang (樂子長) (Bokenkamp 2008: 1167). Several classics record that King Wen of Zhou (r. 1152–1050 BCE) loved to eat pungent calamus. The 239 BCE Lüshi Chunqiu says, " King Wen enjoyed pickled calamus. When Confucius learned this, he wrinkled his nose and tried them. It took him three years to be able to endure them." (Knoblock and Riegel 2000: 329). The 3rd-century BCE Legalist classic Hanfeizi compares King Wen eating calamus pickles and Chu Dao (屈到), a minister of King Kang of Chu (r. 559–545 BCE), eating water-chestnuts, neither of which is considered tasty; thus, what a person eats is not necessarily delicious (Liao 1939: 438–439). The c. 139 BCE Huainanzi uses calamus to exemplify things that bring small benefits but great harm, "Calamus deters fleas and lice, but people do not make mats out of it because it attracts centipedes." (Major et al. 2010: 838). Some early accounts of Han Zhong, such as this Music Bureau poem, refer to his iconographic white deer, which he usually rode or sometimes hitched to a flying chariot. The Transcendent One, astride a white deer: His hair is short, but his ears so long! He leads me up Grand Floriate Mountain, Where we pluck divine herbs, gathering Redflag. Arriving at the Master’s gate, We present the drugs – a jade cask full. The master ingests these drugs: His body is healthy in but one day. It strengthens his hair, changing white to black; It extends his years, lengthening his fated span. . . (Bokenkamp 2015: 296) The early 11th-century Song dynasty Taishang lingbao zhicao pin (太上靈寶芝草品, Uppermost Numinous Treasure Catalog of Excrescence Plants) says that Han Zhong (韓眾) achieved spiritual Transcendence after eating jinqingzhi (金精芝, Metal Essence Excrescence) (Kohn 2000: 248). Jinqingzhi ( Metal Essence Excrescence) grows on Mount Hua . It has a white cap , and white birdlike clouds growing above the stem. Its flavor is sweet and pungent. It should be picked on a ren (壬, "9th of the 10 Heavenly Stems ") day in October, and dried in the shade for 100 days. Anyone who eats it will live for 8,000 years. After eating this excrescence Han Zhong became a Transcendent. [金精芝生於華山白蓋莖上有白雲狀如雀雞其味甘辛十月壬日採之陰乾百日食之八千歲韓眾食之仙矣]. Later hagiographies add information that Han Zhong was a native of Deyang district (modern Chengdu , Sichuan ), studied the Dao with Tianzhen huangren (天真皇人, August One of Heavenly Perfection), transmitted the Shangqing jinshu yuzi (上清金書玉字, Golden Scripture with Jade Characters), and at the end of his life ascended to heaven in broad daylight (Campany 2002: 243). Changpu (菖蒲, " Acorus calamus , sweet flag") is a versatile plant. It is used as an ingredient in Traditional Chinese medicine , an insect repellent, an apotropaic tradition in the Dragon Boat Festival , and an ingredient in "herbal regimes designed to lead to the longevous state of Transcendent being 仙人" (Bokenkamp 2015: 293). In Chinese herbology , the calamus is considered a "potent herb" (Campany 2002: 341). It is believed to have stimulant, tonic, antispasmodic, sedative, stomachic , and diaphoretic properties. 
Preparations, including calamus powder, juice, and tincture, are used to treat hemoptysis , colic, menorrhagia , carbuncles, buboes , deaf ears, and sore eyes (Stuart and Smith 1911: 14). The 3rd or 4th century Lingbao santian fang (靈寶三天方, Prescriptions of the Three Lingbao Heavens) has an early description of ingesting the calamus rhizome. Calamus grows near marshes, in damp depressions, on riverbanks, in ditches or on the banks of lakes. It also grows in the mountains on stone. The knotty root [rhizome] with nine nodes per inch is called the Numinous Body. It is foremost in making fast its attainments and contains the vapor of the 10,000 eons. As a result, it is life-giving and nurtures seminal essence and spirit . It repels water, guards against damps, represses demons, and dissolves [spirit-incurred] calamities, [so that] Guimei 鬼魅 and Wangliang 魍魎 demons are driven into the murky dark and the spirits of the unburied dead and violently murdered dare not approach. If you ingest it without ceasing, your life span will reach a thousand thousands. (Bokenkamp 2015: 298) Praised as a lingcao (靈草, "celestial herb") in Daoist texts, the calamus was believed to increase longevity, improve memory, and "heal a thousand diseases" (Junqueira 2022: 458–459). Acorus calamus ("sweet flag") was commonly confused with varieties of iris . Both have sword-shaped leaves with parallel veins. Iris pseudacorus ("yellow iris") provides a good example: its specific name pseudacorus means "false acorus" and refers to its pointed leaves. In Chinese, the changpu ("calamus") was confused with the huachangpu (花菖蒲, lit. "flowering calamus", Iris ensata , "Japanese water iris") (Bokenkamp 2015: 294). The above Baopuzi description of Han Zhong eating changpu for thirteen years states that the best variety has zihua (紫花, "purple flowers"), which is obviously not the sweet flag. Acorus calamus has tiny greenish-yellow flowers on a spadix ; Iris ensata has large bluish-purple copigmented flowers. Furthermore, Chinese zihua yuanwei (紫花鳶尾, "purple flower iris") is an old name for I. ensata . The Daoist physician and pharmacologist Tao Hongjing (456–536) was apparently the first to differentiate the Japanese water iris from the calamus: The true calamus plant has leaves with [central] ridges, like the blade of a sword. Also, in the fourth and fifth months it produces minuscule flowers. In the marshy spots near the eastern mountain streams [of Mao Shan ] there is a plant called the Brook Iris 溪蓀, which, in root shape and appearance, is exceedingly similar to the calamus which grows on stones. It is commonly called the 'calamus which grows on stones' 石上菖蒲 by the unknowledgeable. This is mistaken. This plant may only be used as an expectorant and to repel fleas and lice and may not be ingested. (Bokenkamp 2015: 294) Japanese kanji characters can have different readings , categorized as either on'yomi (音読み, "pronunciation reading" from Chinese) or kun'yomi (訓読み, "semantic reading" from native Japanese), and most characters have at least two readings. Japanese clarifies the ambiguity of Chinese changpu (菖蒲), meaning both "calamus" and "iris", with the Sino-Japanese on'yomi shōbu (菖蒲, "calamus") and the native kun'yomi ayame (菖蒲, "iris"). The Dragon Boat Festival , or Duanwu jie (端午節), is a traditional Chinese holiday celebrated on the fifth day of the fifth month in the lunisolar Chinese calendar , generally corresponding with late May to late June in the Gregorian calendar .
This folk festival is celebrated by holding dragon boat races , praying for good luck, warding off evil demons, eating zongzi glutinous rice dumplings, and drinking realgar wine ( huangjiu ["yellow wine"] dosed with arsenic sulfide ). Calamus, traditionally considered an apotropaic (averting evil) and a demonifuge (chasing away demons), is closely associated with this festival for several reasons (Bokenkamp 2015: 298–299). The Duanwu was considered the hottest day of the year, when poisonous insects, malevolent spirits, and epidemic diseases were most active. During the Dragon Boat Festival, a ubiquitous ritual in traditional Chinese households consisted of hanging up mugwort twigs and calamus leaves, tied with a red thread, above the front door, in order to ward off evil. Sometimes the root was cut into the shape of a man and worn on the person (Junqueira 2022: 458). In addition to realgar wine, Chinese people would drink wine infused with calamus root "to ward off the damp vapors." Both calamus and iris are insect repellents, particularly useful against mosquitoes and fleas, which may have helped reduce the spread of disease. A sharply pointed calamus leaf, called a pujian (蒲劍, "calamus sword"), was considered an effective apotropaic weapon. Mugwort tigers combined with calamus swords could allegedly drive off ghostly and poisonous beings. Another reason is the resemblance of calamus leaves to swords. The Dragon Boat Festival doucao (闘草, "battle with herbs") observance may well have originated as a demon-quelling sword in some sort of religious drama (Junqueira 2022: 459). The baicaotang (百草湯, "bath of a hundred herbs") is another Duanwu ritual. Although the herbs for this decoction varied from region to region, it often included calamus, mugwort, and mulberry leaves, and occasionally chrysanthemum flowers and peach twigs. The herbs should be picked in the early morning of the Duanwu , boiled down, and used as an herbal bath later in the afternoon (Junqueira 2022: 458). In Japan, the Chinese Dragon Boat Festival or Double Fifth Festival is celebrated on May 5 as Tango no sekku (端午の節句). Until recently, Tango no sekku was known as Boys' Day, with a Shinto holiday counterpart of Hinamatsuri Dolls' Day or Girls' Day celebrated on March 3. In 1948, the government changed May 5 to be a national holiday called Kodomo no hi or Children's Day that includes both boys and girls (Nussbaum and Roth 2002: 948). Japanese shōbu (菖蒲, "calamus") and ayame (菖蒲, "iris") are written with the same kanji characters. While the calamus is the traditional apotropaic plant in the Chinese Dragon Boat Festival, it was changed to the iris in the Japanese Children's Day Festival. The Chinese baicaotang calamus bath corresponds to the Japanese shōbuyu (菖蒲湯, "iris bath"). Glutinous rice dumplings, both Chinese zongzi and Japanese chimaki , are usually wrapped in bamboo leaves and steamed, but calamus leaves are specially used for the Dragon Boat Festival and iris leaves for Children's Day. While realgar wine is not a Japanese tradition, drinking iris-infused sake corresponds to calamus-infused Chinese wine in order to ward off evil during the Double Fifth Festival.
https://en.wikipedia.org/wiki/Han_Zhong_(Daoist)
Han unification is an effort by the authors of Unicode and the Universal Character Set to map multiple character sets of the Han characters of the so-called CJK languages into a single set of unified characters . Han characters are a feature shared in common by written Chinese ( hanzi ), Japanese ( kanji ), Korean ( hanja ) and Vietnamese ( chữ Hán ). Modern Chinese, Japanese and Korean typefaces typically use regional or historical variants of a given Han character . In the formulation of Unicode, an attempt was made to unify these variants by considering them as allographs – different glyphs representing the same "grapheme" or orthographic unit – hence, "Han unification", with the resulting character repertoire sometimes contracted to Unihan . [ 1 ] [ a ] Nevertheless, many characters have regional variants assigned to different code points , such as Traditional 個 (U+500B) versus Simplified 个 (U+4E2A). The Unicode Standard details the principles of Han unification. [ 5 ] [ 6 ] The Ideographic Research Group (IRG), made up of experts from the Chinese-speaking countries, North and South Korea, Japan, Vietnam, and other countries, is responsible for the process. [ 7 ] One rationale was the desire to limit the size of the full Unicode character set, where CJK characters as represented by discrete ideograms may approach or exceed 100,000 [ b ] characters. Version 1 of Unicode was designed to fit into 16 bits and only 20,940 characters (32%) out of the possible 65,536 were reserved for these CJK Unified Ideographs . Unicode was later extended to 21 bits allowing many more CJK characters (97,680 are assigned, with room for more). An article hosted by IBM attempts to illustrate part of the motivation for Han unification: [ 8 ] The problem stems from the fact that Unicode encodes characters rather than "glyphs," which are the visual representations of the characters. There are four basic traditions for East Asian character shapes: traditional Chinese, simplified Chinese, Japanese, and Korean. While the Han root character may be the same for CJK languages, the glyphs in common use for the same characters may not be. For example, the traditional Chinese glyph for "grass" uses four strokes for the "grass" radical [ ⺿ ], whereas the simplified Chinese, Japanese, and Korean glyphs [ ⺾ ] use three. But there is only one Unicode point for the grass character (U+8349) [ 草 ] regardless of writing system. Another example is the ideograph for "one," which is different in Chinese, Japanese, and Korean. Many people think that the three versions should be encoded differently. In fact, the three ideographs for "one" ( 一 , 壹 , or 壱 ) are encoded separately in Unicode, as they are not considered national variants. The first is the common form in all three countries, while the second and third are used on financial instruments to prevent tampering (they may be considered variants). However, Han unification has also caused considerable controversy, particularly among the Japanese public, who, with the nation's literati, have a history of protesting the culling of historically and culturally significant variants. [ 9 ] [ 10 ] (See Kanji § Orthographic reform and lists of kanji . Today, the list of characters officially recognized for use in proper names continues to expand at a modest pace.) 
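The contrast described in the quoted example above can be seen directly in the code point values. The following Python sketch is a minimal illustration using only characters already mentioned: "grass" occupies a single code point regardless of locale, while the three forms of "one" and the Traditional/Simplified pair 個/个 are encoded separately.

```python
# Print the code point behind each character discussed above.
examples = [
    ("grass (all locales)", "草"),            # U+8349, one unified code point
    ("one (common form)", "一"),
    ("one (financial form)", "壹"),
    ("one (financial form, variant)", "壱"),
    ("Traditional 'ge'", "個"),                # U+500B
    ("Simplified 'ge'", "个"),                 # U+4E2A
]

for label, ch in examples:
    print(f"{label:30s} {ch}  U+{ord(ch):04X}")
```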
In 1993, the Japan Electronic Industries Development Association (JEIDA) published a pamphlet titled " 未来の文字コード体系に私達は不安をもっています " ("We Are Anxious about the Future Character Encoding System"; JPNO 20985671 ), summarizing the major criticisms of the Han unification approach adopted by Unicode. A grapheme is the smallest abstract unit of meaning in a writing system. Any grapheme has many possible glyph expressions, but all are recognized as the same grapheme by those with reading and writing knowledge of a particular writing system. Although Unicode typically assigns characters to code points to express the graphemes within a system of writing, the Unicode Standard ( section 3.4 D7 ) cautions: An abstract character does not necessarily correspond to what a user thinks of as a "character" and should not be confused with a grapheme . However, this quote refers to the fact that some graphemes are composed of several graphic elements or "characters". So, for example, the character U+0061 a LATIN SMALL LETTER A combined with U+030A ◌̊ COMBINING RING ABOVE (generating the combination "å") might be understood by a user as a single grapheme while being composed of multiple Unicode abstract characters. In addition, Unicode assigns a small number of code points (other than for compatibility reasons) to formatting characters, whitespace characters, and other abstract characters that are not graphemes but are instead used to control the breaks between lines, words, graphemes, and grapheme clusters. With the unified Han ideographs, the Unicode Standard makes a departure from prior practice by assigning abstract characters not as graphemes, but according to the underlying meaning of the grapheme: what linguists sometimes call sememes . This departure is therefore not simply explained by the oft-quoted distinction between an abstract character and a glyph, but is more rooted in the difference between an abstract character assigned as a grapheme and an abstract character assigned as a sememe. In contrast, consider ASCII 's unification of punctuation and diacritics , where graphemes with widely different meanings (for example, an apostrophe and a single quotation mark) are unified because the glyphs are the same. For Unihan, characters are not unified by their appearance but by their definition or meaning. For a grapheme to be represented by various glyphs means that the grapheme has glyph variations that are usually determined by selecting one font or another, or by using glyph-substitution features in which multiple glyphs are included in a single font. Unicode considers such glyph variations a feature of rich-text protocols, not something to be handled by the plain-text goals of Unicode. However, when the change from one glyph to another constitutes a change from one grapheme to another (where a glyph cannot possibly still, for example, mean the same grapheme understood as the small letter "a"), Unicode separates these into separate code points. For Unihan the same thing is done whenever the abstract meaning changes; however, rather than speaking of the abstract meaning of a grapheme (the letter "a"), the unification of Han ideographs assigns a new code point for each different meaning, even if that meaning is expressed by distinct graphemes in different languages.
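The "å" example above can be made concrete with a short Python sketch using the standard unicodedata module: the single user-perceived grapheme can be spelled either as one precomposed abstract character or as two, and canonical normalization relates the two spellings.

```python
import unicodedata

# One user-perceived grapheme, two possible encodings:
precomposed = "\u00E5"        # å  LATIN SMALL LETTER A WITH RING ABOVE
combining = "\u0061\u030A"    # a  + COMBINING RING ABOVE

print(len(precomposed), len(combining))   # 1 2 -> different numbers of abstract characters
print(precomposed == combining)           # False -> different code point sequences
# Canonical composition (NFC) maps the two-character spelling to the precomposed one.
print(unicodedata.normalize("NFC", combining) == precomposed)   # True
```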
Although a grapheme such as "ö" might mean something different in English (as used in the word "coördinated") than it does in German (as used in the word "schön"), it is still the same grapheme and can be easily unified so that English and German can share a common abstract Latin writing system (along with Latin itself). This example also points to another reason that "abstract character" and grapheme as an abstract unit in a written language do not necessarily map one-to-one. In English the combining diaeresis , "¨", and the "o" it modifies may be seen as two separate graphemes, whereas in languages such as Swedish, the letter "ö" may be seen as a single grapheme. Similarly, in English the dot on an "i" is understood as a part of the "i" grapheme, whereas in other languages, such as Turkish, the dot may be seen as a separate grapheme added to the dotless "ı" . To deal with the use of different graphemes for the same Unihan sememe, Unicode has relied on several mechanisms, especially as they relate to rendering text. One has been to treat it as simply a font issue, so that different fonts might be used to render Chinese, Japanese or Korean. Also, font formats such as OpenType allow for the mapping of alternate glyphs according to language, so that a text-rendering system can look to the user's environmental settings to determine which glyph to use. The problem with these approaches is that they fail to meet the goal of Unicode to define a consistent way of encoding multilingual text. [ 11 ] So rather than treat the issue as a rich-text problem of glyph alternates, Unicode added the concept of variation selectors , first introduced in version 3.2 and supplemented in version 4.0. [ 12 ] While variation selectors are treated as combining characters, they have no associated diacritic or mark. Instead, by combining with a base character, they signal that the two-character sequence selects a variation (typically in terms of grapheme, but also in terms of underlying meaning, as in the case of a location name or other proper noun) of the base character. This is then not the selection of an alternate glyph, but the selection of a grapheme variation or a variation of the base abstract character. Such a two-character sequence, however, can easily be mapped to a separate single glyph in modern fonts. Since Unicode has assigned 256 separate variation selectors, it is capable of assigning 256 variations for any Han ideograph. Such variations can be specific to one language or another and enable the encoding of plain text that includes such grapheme variations. Since the Unihan standard encodes "abstract characters", not "glyphs", the graphical artifacts produced by Unicode have been considered temporary technical hurdles and, at most, cosmetic. However, particularly in Japan, due in part to the way in which Chinese characters were incorporated into Japanese writing systems historically, the inability to specify a particular variant was considered a significant obstacle to the use of Unicode in scholarly work. For example, the unification of "grass" (explained above) means that a historical text cannot be encoded so as to preserve its peculiar orthography. Instead, the scholar would be required to locate the desired glyph in a specific typeface in order to convey the text as written, defeating the purpose of a unified character set.
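A variation selector is simply a second code point appended to the base character; whether a visually distinct glyph then appears depends entirely on font support and, for ideographs, on whether the sequence is registered. The Python sketch below builds such two-code-point sequences, using 葛 (U+845B) as the base purely for illustration (a character often cited in discussions of ideographic variation sequences); the specific selectors shown are assumptions for demonstration, not a statement of what is actually registered.

```python
# Build ideographic variation sequences: base character + variation selector.
# VS1-VS16 occupy U+FE00..U+FE0F; VS17-VS256, the range used for registered
# ideographic variation sequences, occupies U+E0100..U+E01EF. The base
# character and the particular selectors below are illustrative only.
base = "\u845B"          # 葛
vs17 = "\U000E0100"      # VARIATION SELECTOR-17
vs18 = "\U000E0101"      # VARIATION SELECTOR-18

for name, seq in (("base", base), ("base+VS17", base + vs17), ("base+VS18", base + vs18)):
    codepoints = " ".join(f"U+{ord(c):04X}" for c in seq)
    print(f"{name:10s} {codepoints}")
```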
Unicode has responded to these needs by assigning variation selectors so that authors can select grapheme variations of particular ideographs (or even other characters). [ 12 ] Small differences in graphical representation are also problematic when they affect legibility or belong to the wrong cultural tradition. Besides making some Unicode fonts unusable for texts involving multiple "Unihan languages", names or other orthographically sensitive terminology might be displayed incorrectly. (Proper names tend to be especially orthographically conservative; compare this to changing the spelling of one's name to suit a language reform in the US or UK.) While this may be considered primarily a graphical-representation or rendering problem to be overcome by more artful fonts, the widespread use of Unicode would make it difficult to preserve such distinctions. The problem of one character representing semantically different concepts is also present in the Latin part of Unicode. The Unicode character for a curved apostrophe is the same as the character for a right single quote (’). On the other hand, the capital Latin letter A is not unified with the Greek letter Α or the Cyrillic letter А . This is, of course, desirable for reasons of compatibility, and deals with a much smaller alphabetic character set. While the unification aspect of Unicode is controversial in some quarters for the reasons given above, Unicode itself does now encode a vast number of seldom-used characters of a more-or-less antiquarian nature. Some of the controversy stems from the fact that the very decision to perform Han unification was made by the initial Unicode Consortium, which at the time was a consortium of North American companies and organizations (most of them in California), [ 13 ] but included no East Asian government representatives. The initial design goal was to create a 16-bit standard, [ 14 ] and Han unification was therefore a critical step for avoiding tens of thousands of character duplications. This 16-bit requirement was later abandoned, making the size of the character set less of an issue today. The controversy later extended to the internationally representative ISO: the initial CJK Joint Research Group (CJK-JRG) favored a proposal (DIS 10646) for a non-unified character set, "which was thrown out in favor of unification with the Unicode Consortium's unified character set by the votes of American and European ISO members" (even though the Japanese position was unclear). [ 15 ] Endorsing the Unicode Han unification was a necessary step for the heated ISO 10646/Unicode merger. Much of the controversy surrounding Han unification is based on the distinction between glyphs , as defined in Unicode, and the related but distinct idea of graphemes. [ citation needed ] Unicode assigns abstract characters (graphemes), as opposed to glyphs, which are particular visual representations of a character in a specific typeface . One character may be represented by many distinct glyphs, for example a "g" or an "a", both of which may have one loop ( ɑ , ɡ ) or two ( a , g ). Yet for a reader of languages written in the Latin script, the two variations of the "a" character are both recognized as the same grapheme. Graphemes present in national character code standards have been added to Unicode, as required by Unicode's Source Separation rule, even where they can be composed of characters already available.
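Before turning to the CJK-specific standards, the non-unification of look-alike letters noted above is easy to check programmatically. The following minimal sketch prints the distinct code points behind the Latin, Greek, and Cyrillic capital "A", together with the single code point that serves as both the curved apostrophe and the right single quotation mark.

```python
import unicodedata

# Visually identical capital letters are deliberately not unified,
# while the apostrophe and the right single quote share one code point.
for ch in ("A", "\u0391", "\u0410", "\u2019"):
    print(f"U+{ord(ch):04X}  {ch}  {unicodedata.name(ch)}")
```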
The national character code standards for CJK languages are considerably more involved, given the technological limitations under which they evolved, and so the official CJK participants in Han unification may well have been amenable to reform. Unlike fonts for European scripts, CJK Unicode fonts, due to Han unification, have large but irregular patterns of overlap, requiring language-specific fonts. Unfortunately, language-specific fonts also make it difficult to access a variant which, as with the "grass" example, happens to appear more typically in another language's style. (That is to say, it would be difficult to access "grass" with the four-stroke radical more typical of Traditional Chinese in a Japanese environment, whose fonts would typically depict the three-stroke radical.) Unihan proponents tend to favor markup languages for defining language strings, but this would not ensure the use of a specific variant in the case given, only the language-specific font more likely to depict a character as that variant. (At this point, merely stylistic differences do enter in, as a selection of Japanese and Chinese fonts is not likely to be visually compatible.) Chinese users seem to have fewer objections to Han unification, [ citation needed ] largely because Unicode did not attempt to unify Simplified Chinese characters with Traditional Chinese characters . (Simplified Chinese characters are used among Chinese speakers in the People's Republic of China , Singapore , and Malaysia . Traditional Chinese characters are used in Hong Kong and Taiwan ( Big5 ) and they are, with some differences, more familiar to Korean and Japanese users.) Unicode is seen as neutral [ by whom? ] with regard to this politically charged issue, and has encoded Simplified and Traditional Chinese glyphs separately (e.g. the ideograph for "discard" is 丟 U+4E1F for Traditional Chinese Big5 #A5E1 and 丢 U+4E22 for Simplified Chinese GB #2210). It is also noted that Traditional and Simplified characters should be encoded separately according to Unicode Han unification rules, because they are distinguished in pre-existing PRC character sets. Furthermore, as with other variants, the relationship between Traditional and Simplified characters is not one-to-one. Several alternative character sets are not encoded according to the principle of Han unification and are thus free from its restrictions, and certain region-dependent character sets are likewise seen as unaffected by Han unification because of their region-specific nature. However, none of these alternative standards has been as widely adopted as Unicode , which is now the base character set for many new standards and protocols, internationally adopted, and is built into the architecture of operating systems ( Microsoft Windows , Apple macOS , and many Unix-like systems), programming languages ( Perl , Python , C# , Java , Common Lisp , APL , C , C++ ), libraries (IBM International Components for Unicode (ICU) along with the Pango , Graphite , Scribe , Uniscribe , and ATSUI rendering engines), font formats ( TrueType and OpenType ), and so on. In March 1989, a (B)TRON -based system was adopted by the Center for Educational Computing, a Japanese government organization, as the system of choice for school education, including compulsory education . [ 16 ] However, in April, a report titled "1989 National Trade Estimate Report on Foreign Trade Barriers" from the Office of the United States Trade Representative specifically listed the system as a trade barrier in Japan.
The report claimed that the adoption of the TRON-based system by the Japanese government was advantageous to Japanese manufacturers and would thus exclude US operating systems from the huge new market; the report specifically listed MS-DOS, OS/2 and UNIX as examples. The Office of the USTR was allegedly under Microsoft's influence, as its former officer Tom Robertson was then offered a lucrative position by Microsoft. [ 17 ] While the TRON system itself was subsequently removed from the list of sanctions under Section 301 of the Trade Act of 1974 after protests by the organization in May 1989, the trade dispute caused the Ministry of International Trade and Industry to accept a request from Masayoshi Son to cancel the Center for Educational Computing's selection of the TRON-based system for use in educational computers. [ 18 ] The incident is regarded as a symbolic event in the loss of momentum and eventual demise of the BTRON system, which led to the widespread adoption of MS-DOS in Japan and, through its successor Windows, the eventual adoption of Unicode. There has not been any push for full semantic unification of all semantically linked characters, though the idea would treat the respective users of East Asian languages the same, whether they write in Korean, Simplified Chinese, Traditional Chinese, Kyūjitai Japanese, Shinjitai Japanese or Vietnamese. Instead of some variants getting distinct code points while other groups of variants have to share single code points, all variants could be reliably expressed only with metadata tags (e.g., CSS formatting in webpages). The burden would be on all those who use differing versions of 直 , 別 , 兩 , 兔 , whether that difference be due to simplification, international variance or intra-national variance. However, on some platforms (e.g., smartphones), a device may come with only one font pre-installed. The system font must settle on a default glyph for each code point, and these glyphs can differ greatly, reflecting different underlying graphemes. Consequently, relying on language markup across the board as an approach is beset with two major issues. First, there are contexts where language markup is not available (code commits, plain text). Second, any solution would require every operating system to come pre-installed with many glyphs for semantically identical characters that have many variants. In addition to the standard character sets in Simplified Chinese, Traditional Chinese, Korean, Vietnamese, Kyūjitai Japanese and Shinjitai Japanese, there also exist "ancient" forms of characters that are of interest to historians, linguists and philologists. Unicode's Unihan database has already drawn connections between many characters, and it already catalogs the connections between variant characters that have distinct code points. However, for characters with a shared code point, the reference glyph image is usually biased toward the Traditional Chinese version. Also, the decision of whether to classify pairs as semantic variants or z-variants is not always consistent or clear, despite rationalizations in the handbook. [ 19 ] For instance, Unicode gives the so-called semantic variants 丟 (U+4E1F) and 丢 (U+4E22) as an example of characters that differ in a significant way in their abstract shapes, while it lists 佛 and 仏 as z-variants, differing only in font styling. Paradoxically, Unicode considers 兩 and 両 to be near-identical z-variants while at the same time classifying them as significantly different semantic variants.
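These variant relations are published in machine-readable form as part of the Unihan database, whose Unihan_Variants.txt file lists one tab-separated record per code point and field (the field syntax is described in UAX #38). The sketch below assumes a local copy of that file has been downloaded; the parsing is deliberately simplified and ignores the source annotations attached to some values.

```python
# Minimal lookup of variant fields in the Unihan database.
# Assumes Unihan_Variants.txt (from the Unicode Character Database's Unihan.zip)
# is present locally; records have the form "U+XXXX<TAB>kField<TAB>value(s)".
from collections import defaultdict

def load_variants(path="Unihan_Variants.txt"):
    table = defaultdict(dict)
    with open(path, encoding="utf-8") as f:
        for line in f:
            if not line.strip() or line.startswith("#"):
                continue  # skip blank lines and comments
            codepoint, field, value = line.rstrip("\n").split("\t", 2)
            table[codepoint][field] = value
    return table

variants = load_variants()
for ch in ("丟", "丢", "佛", "兩", "両"):
    key = f"U+{ord(ch):04X}"
    print(key, ch, variants.get(key, {}))
```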
There are also cases of pairs of characters that are simultaneously semantic variants, specialized semantic variants, and simplified variants: 個 (U+500B) and 个 (U+4E2A). There are also cases of non-mutual equivalence. For example, the Unihan database entry for 亀 (U+4E80) considers 龜 (U+9F9C) to be its z-variant, but the entry for 龜 does not list 亀 as a z-variant, even though 龜 was obviously already in the database at the time that the entry for 亀 was written. Some clerical errors led to the doubling of completely identical characters, such as 﨣 (U+FA23) and 𧺯 (U+27EAF). If a font maps glyphs to both code points, so that one font is used for both, they should appear identical. These cases are listed as z-variants despite having no variance at all. Intentionally duplicated characters were added to facilitate bit-for-bit round-trip conversion . Because round-trip conversion was an early selling point of Unicode, this meant that if a national standard in use unnecessarily duplicated a character, Unicode had to do the same. Unicode calls these intentional duplications " compatibility variants ", as with 漢 (U+FA9A), whose entry lists 漢 (U+6F22) as its compatibility variant. As long as an application uses the same font for both, they should appear identical. Sometimes, as in the case of 車 with U+8ECA and U+F902, the added compatibility character lists the already present version of 車 as both its compatibility variant and its z-variant. The compatibility variant field overrides the z-variant field, forcing normalization under all forms, including canonical equivalence. Despite the name, compatibility variants are actually canonically equivalent and are united in any Unicode normalization scheme, not only under compatibility normalization. This is similar to how U+212B Å ANGSTROM SIGN is canonically equivalent to a pre-composed U+00C5 Å LATIN CAPITAL LETTER A WITH RING ABOVE . Much software (such as the MediaWiki software that hosts Wikipedia) will replace all canonically equivalent characters that are discouraged (e.g. the angstrom symbol) with the recommended equivalent. Despite the name, CJK "compatibility variants" are canonically equivalent characters and not compatibility characters. 漢 (U+FA9A) was added to the database later than 漢 (U+6F22), and its entry records the compatibility information; the entry for 漢 (U+6F22), on the other hand, does not list this equivalence. Unicode requires that entries, once admitted, never change their compatibility or equivalence, so that normalization rules for already existing characters do not change. Some pairs of Traditional and Simplified characters are also considered to be semantic variants. According to Unicode's definitions, it makes sense that all simplifications (that do not result in wholly different characters being merged for their homophony) would be a form of semantic variant. Unicode classifies 丟 and 丢 as each other's respective traditional and simplified variants and also as each other's semantic variants. However, while Unicode classifies 億 (U+5104) and 亿 (U+4EBF) as each other's respective traditional and simplified variants, it does not consider 億 and 亿 to be semantic variants of each other. Unicode claims that "Ideally, there would be no pairs of z-variants in the Unicode Standard." [ 19 ] This would make it seem that the goal is to at least unify all minor variants, compatibility redundancies and accidental redundancies, leaving the differentiation to fonts and to language tags.
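Because the CJK compatibility ideographs are canonically (not merely compatibility) equivalent to their unified counterparts, every normalization form folds them together, exactly as it folds the angstrom sign into the precomposed Å. A minimal Python sketch of this behaviour:

```python
import unicodedata

# Canonical equivalence: NFC replaces the CJK compatibility ideograph U+FA9A
# with the unified ideograph U+6F22, just as it replaces the angstrom sign
# U+212B with the precomposed letter U+00C5.
pairs = [
    ("\uFA9A", "\u6F22"),   # CJK COMPATIBILITY IDEOGRAPH-FA9A -> 漢 (U+6F22)
    ("\u212B", "\u00C5"),   # ANGSTROM SIGN -> Å (U+00C5)
]

for source, expected in pairs:
    result = unicodedata.normalize("NFC", source)
    print(f"U+{ord(source):04X} -> U+{ord(result):04X}", result == expected)
```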
Leaving the differentiation to fonts and to language tags, however, conflicts with the stated goal of Unicode to take away that overhead and to allow any number of the world's scripts to be on the same document with one encoding system. [ improper synthesis? ] Chapter One of the handbook states that "With Unicode, the information technology industry has replaced proliferating character sets with data stability, global interoperability and data interchange, simplified software, and reduced development costs. While taking the ASCII character set as its starting point, the Unicode Standard goes far beyond ASCII's limited ability to encode only the upper- and lowercase letters A through Z. It provides the capacity to encode all characters used for the written languages of the world – more than 1 million characters can be encoded. No escape sequence or control code is required to specify any character in any language. The Unicode character encoding treats alphabetic characters, ideographic characters, and symbols equivalently, which means they can be used in any mixture and with equal facility." [ 11 ] This leaves the option to settle on one unified reference grapheme for all z-variants, which is contentious, since few outside of Japan would recognize 佛 and 仏 as equivalent. Even within Japan, the variants are on different sides of a major simplification called Shinjitai. Unicode would effectively make the PRC's simplification of 侣 (U+4FA3) and 侶 (U+4FB6) a monumental difference by comparison. Such a plan would also eliminate the very visually distinct variations for characters like 直 (U+76F4) and 雇 (U+96C7). One would expect that all simplified characters would simultaneously also be z-variants or semantic variants of their traditional counterparts, but many are neither. The strange case of pairs that are simultaneously semantic variants and specialized semantic variants is easier to explain given Unicode's definition that specialized semantic variants have the same meaning only in certain contexts. Languages use them differently. A pair whose characters are 100% drop-in replacements for each other in Japanese may not be so flexible in Chinese. Thus, any comprehensive merger of recommended code points would have to maintain some variants that differ only slightly in appearance, even if the meaning is 100% the same for all contexts in one language, because in another language the two characters may not be 100% drop-in replacements. In each row of the following table, the same character is repeated in all six columns. However, each column is marked (by the lang attribute) as being in a different language: Chinese ( simplified and two types of traditional ), Japanese , Korean , or Vietnamese . The browser should select, for each character, a glyph (from a font) suitable to the specified language. (Besides actual character variation, visible in differences of stroke order, stroke number, or stroke direction, the typefaces may also reflect different typographical styles, as with serif and sans-serif alphabets.) This only works for fallback glyph selection if CJK fonts are installed on the system and the font selected to display the article does not include glyphs for these characters. No character variant that is exclusive to Korean or Vietnamese has received its own code point, whereas almost all Shinjitai Japanese variants or Simplified Chinese variants each have distinct code points and unambiguous reference glyphs in the Unicode standard. In the twentieth century, East Asian countries made their own respective encoding standards.
Within each standard, there coexisted variants with distinct code points, hence the distinct code points in Unicode for certain sets of variants. Taking Simplified Chinese as an example, the two character variants of 內 (U+5167) and 内 (U+5185) differ in exactly the same way as do the Korean and non-Korean variants of 全 (U+5168). Each respective variant of the first character has either 入 (U+5165) or 人 (U+4EBA). Each respective variant of the second character has either 入 (U+5165) or 人 (U+4EBA). Both variants of the first character got their own distinct code points. However, the two variants of the second character had to share the same code point. The justification Unicode gives is that the national standards body in the PRC made distinct code points for the two variations of the first character 內 / 内 , whereas Korea never made separate code points for the different variants of 全 . There is a reason for this that has nothing to do with how the domestic bodies view the characters themselves. China went through a process in the twentieth century that changed (if not simplified) several characters. During this transition, there was a need to be able to encode both variants within the same document. Korean has always used the variant of 全 with the 入 (U+5165) radical on top. Therefore, it had no reason to encode both variants. Korean language documents made in the twentieth century had little reason to represent both versions in the same document. Almost all of the variants that the PRC developed or standardized got distinct code points owing simply to the fortune of the Simplified Chinese transition carrying through into the computing age. This privilege however, seems to apply inconsistently, whereas most simplifications performed in Japan and mainland China with code points in national standards, including characters simplified differently in each country, did make it into Unicode as distinct code points. Sixty-two Shinjitai "simplified" characters with distinct code points in Japan got merged with their Kyūjitai traditional equivalents, like 海 . [ citation needed ] This can cause problems for the language tagging strategy. There is no universal tag for the traditional and "simplified" versions of Japanese as there are for Chinese. Thus, any Japanese writer wanting to display the Kyūjitai form of 海 may have to tag the character as "Traditional Chinese" or trust that the recipient's Japanese font uses only the Kyūjitai glyphs, but tags of Traditional Chinese and Simplified Chinese may be necessary to show the two forms side by side in a Japanese textbook. This would preclude one from using the same font for an entire document, however. There are two distinct code points for 海 in Unicode, but only for "compatibility reasons". Any Unicode-conformant font must display the Kyūjitai and Shinjitai versions' equivalent code points in Unicode as the same. Unofficially, a font may display 海 differently with 海 (U+6D77) as the Shinjitai version and 海 (U+FA45) as the Kyūjitai version (which is identical to the traditional version in written Chinese and Korean). The radical 糸 (U+7CF8) is used in characters like 紅 / 红 , with two variants, the second form being simply the cursive form. The radical components of 紅 (U+7D05) and 红 (U+7EA2) are semantically identical and the glyphs differ only in the latter using a cursive version of the 糸 component. However, in mainland China, the standards bodies wanted to standardize the cursive form when used in characters like 红 . 
Because this change happened relatively recently, there was a transition period. Both 紅 (U+7D05) and 红 (U+7EA2) got separate code points in the PRC's text encoding standards bodies so Chinese-language documents could use both versions. The two variants received distinct code points in Unicode as well. The case of the radical 艸 (U+8278) shows how arbitrary the state of affairs is. When used to compose characters like 草 (U+8349), the radical was placed at the top, but had two different forms. Traditional Chinese and Korean use a four-stroke version. At the top of 草 should be something that looks like two plus signs ( ⺿ ). Simplified Chinese, Kyūjitai Japanese and Shinjitai Japanese use a three-stroke version, like two plus signs sharing their horizontal strokes ( ⺾ , i.e. 草 ). The PRC's text encoding bodies did not encode the two variants differently. The fact that almost every other change brought about by the PRC, no matter how minor, did warrant its own code point suggests that this exception may have been unintentional. Unicode copied the existing standards as is, preserving such irregularities. The Unicode Consortium has recognized errors in other instances. The myriad Unicode blocks for CJK Han ideographs have redundancies in the original standards, redundancies brought about by flawed importation of the original standards, as well as accidental mergers that are later corrected, providing precedent for dis-unifying characters. For native speakers, variants can be unintelligible or unacceptable in educated contexts. English speakers may understand a handwritten note saying "4P5 kg" as "495 kg", but writing the nine backwards (so it looks like a "P") can be jarring and would be considered incorrect in any school. Likewise, to users of one CJK language reading a document with "foreign" glyphs: variants of 骨 can appear as mirror images, 者 can be missing a stroke or have an extraneous stroke, and 令 may be unreadable to non-Japanese people. (In Japan, both variants are accepted.) In some cases, often where the changes are the most striking, Unicode has encoded variant characters, making it unnecessary to switch between fonts or lang attributes. However, some variants with arguably minimal differences get distinct code points, and not every variant with arguably substantial changes gets a unique code point. As an example, take a character such as 入 (U+5165), for which the only way to display the variants is to change font (or lang attribute) as described in the previous table. On the other hand, for 內 (U+5167), the variant 内 (U+5185) gets a unique code point. For some characters, like 兌 / 兑 (U+514C/U+5151), either method can be used to display the different glyphs. In the following table, each row compares variants that have been assigned different code points. For brevity, note that shinjitai variants with different components will usually (and unsurprisingly) take unique code points (e.g., 氣/気 ). They will not appear here, nor will the simplified Chinese characters that take consistently simplified radical components (e.g., 紅 / 红 , 語 / 语 ). [ 3 ] This list is not exhaustive. To resolve issues raised by Han unification, a Unicode Technical Standard known as the Unicode Ideographic Variation Database has been created to address the problem of specifying a specific glyph in a plain-text environment.
[ 20 ] By registering glyph collections into the Ideographic Variation Database (IVD), it is possible to use Ideographic Variation Selectors to form an Ideographic Variation Sequence (IVS) that specifies or restricts the appropriate glyph during text processing in a Unicode environment. The ideographic characters assigned by Unicode, the blocks for CJKV radicals, strokes, punctuation, marks and symbols, and the additional compatibility (discouraged use) characters are summarized in the table below. The compatibility characters (excluding the twelve unified ideographs in the CJK Compatibility Ideographs block) are included for compatibility with legacy text handling systems and other legacy character sets. They include forms of characters for vertical text layout and rich text characters that Unicode recommends handling through other means. The International Ideographs Core (IICore) is a subset of 9810 ideographs derived from the CJK Unified Ideographs tables, designed to be implemented in devices with limited memory, input/output capability, and/or applications where the use of the complete ISO 10646 ideograph repertoire is not feasible. There are 9810 characters in the current standard. [ 22 ] The Unihan project has always made an effort to make its build database available. [ 2 ] The libUnihan project provides a normalized SQLite Unihan database and a corresponding C library. [ 23 ] All tables in this database are in fifth normal form . libUnihan is released under the LGPL , while its database, UnihanDb, is released under the MIT License . The last version was released in October 2008.

Block | Plane | Range | Characters | Unified? | Script(s)
CJK Unified Ideographs | 0 (BMP) | U+4E00–9FFF | 20,992 | Unified | Han
CJK Unified Ideographs Extension A | 0 (BMP) | U+3400–4DBF | 6,592 | Unified | Han
CJK Unified Ideographs Extension B | 2 (SIP) | U+20000–2A6DF | 42,720 | Unified | Han
CJK Unified Ideographs Extension C | 2 (SIP) | U+2A700–2B73F | 4,154 | Unified | Han
CJK Unified Ideographs Extension D | 2 (SIP) | U+2B740–2B81F | 222 | Unified | Han
CJK Unified Ideographs Extension E | 2 (SIP) | U+2B820–2CEAF | 5,762 | Unified | Han
CJK Unified Ideographs Extension F | 2 (SIP) | U+2CEB0–2EBEF | 7,473 | Unified | Han
CJK Unified Ideographs Extension G | 3 (TIP) | U+30000–3134F | 4,939 | Unified | Han
CJK Unified Ideographs Extension H | 3 (TIP) | U+31350–323AF | 4,192 | Unified | Han
CJK Unified Ideographs Extension I | 2 (SIP) | U+2EBF0–2EE5F | 622 | Unified | Han
CJK Radicals Supplement | 0 (BMP) | U+2E80–2EFF | 115 | Not unified | Han
Kangxi Radicals | 0 (BMP) | U+2F00–2FDF | 214 | Not unified | Han
Ideographic Description Characters | 0 (BMP) | U+2FF0–2FFF | 16 | Not unified | Common
CJK Symbols and Punctuation | 0 (BMP) | U+3000–303F | 64 | Not unified | Han, Hangul, Common, Inherited
CJK Strokes | 0 (BMP) | U+31C0–31EF | 39 | Not unified | Common
Enclosed CJK Letters and Months | 0 (BMP) | U+3200–32FF | 255 | Not unified | Hangul, Katakana, Common
CJK Compatibility | 0 (BMP) | U+3300–33FF | 256 | Not unified | Katakana, Common
CJK Compatibility Ideographs | 0 (BMP) | U+F900–FAFF | 472 | 12 are unified | Han
CJK Compatibility Forms | 0 (BMP) | U+FE30–FE4F | 32 | Not unified | Common
Enclosed Ideographic Supplement | 1 (SMP) | U+1F200–1F2FF | 64 | Not unified | Hiragana, Common
CJK Compatibility Ideographs Supplement | 2 (SIP) | U+2F800–2FA1F | 542 | Not unified | Han
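To make the character-identity behaviour discussed in this article concrete, here is a small Python sketch using the standard-library unicodedata module. It shows that compatibility ideographs (and the angstrom sign mentioned earlier) are canonically equivalent to their ordinary counterparts under every normalization form, and how an Ideographic Variation Sequence is built instead; whether the particular base/selector pair shown is registered in the IVD (for example in the Adobe-Japan1 collection) is an assumption made purely for illustration.

```python
import unicodedata

# Compatibility ideographs such as U+FA9A and U+FA45, like U+212B ANGSTROM SIGN,
# carry canonical singleton decompositions, so every normalization form identifies
# them with the ordinary code point; they cannot reliably carry a glyph choice.
pairs = (("\uFA9A", "\u6F22"), ("\uFA45", "\u6D77"), ("\u212B", "\u00C5"))
for compat, ordinary in pairs:
    same = all(
        unicodedata.normalize(f, compat) == unicodedata.normalize(f, ordinary)
        for f in ("NFC", "NFD", "NFKC", "NFKD")
    )
    print(f"U+{ord(compat):04X} ~ U+{ord(ordinary):04X}: {same}")  # True for each pair

# An Ideographic Variation Sequence appends a variation selector from the
# supplement (U+E0100..U+E01EF) to a unified ideograph; the selector survives
# normalization, so the glyph request is preserved in plain text.
ivs = "\u845B\U000E0100"  # 葛 followed by VARIATION SELECTOR-17 (illustrative pair)
print([hex(ord(c)) for c in unicodedata.normalize("NFC", ivs)])  # both code points kept
```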
https://en.wikipedia.org/wiki/Han_unification
Hand-rearing , artificial-rearing , human-rearing or hand-raising is the process of caring for and feeding juvenile animals by humans during a stage when they would normally be fed by their parents . [ 1 ] For the hand-rearing of mammals , a bottle with milk from a female of their species , milk from another closely related species, or an appropriate milk formula can be used. [ 1 ] [ 2 ] In the case of birds , in some instances, hand-rearing with puppets that mimic the mother's head with key features to stimulate the chick 's beak opening and food ingestion may be necessary. [ 3 ] Hand-rearing can lead to habituation or imprinting of these animals towards humans, with the risk that adults may not exhibit normal behavior towards their species' companions, especially in animals raised for reintroduction into the wild. Potential difficulties include integration into groups of conspecifics, learning natural behaviors such as hunting, choosing a mate, as well as raising their own offspring. [ 1 ] [ 4 ] However, in livestock farming and domestic animal breeding , habituating animals to humans can be of great utility. [ 4 ] [ 5 ]
https://en.wikipedia.org/wiki/Hand-rearing
Hand-waving (with various spellings) is a pejorative label for attempting to be seen as effective – in word, reasoning, or deed – while actually doing nothing effective or substantial. [ 1 ] It is often applied to debating techniques that involve fallacies , misdirection and the glossing over of details. [ 2 ] It is also used academically to indicate unproven claims and skipped steps in proofs (sometimes intentionally, as in lectures and instructional materials), with some specific meanings in particular fields, including literary criticism, speculative fiction, mathematics, logic, science and engineering. The term can additionally be used in work situations, when attempts are made to display productivity or assure accountability without actually resulting in them. The term can also be used as a self-admission of, and suggestion to defer discussion about, an allegedly unimportant weakness in one's own argument's evidence, to forestall an opponent dwelling on it. In debate competition , certain cases of this form of hand-waving may be explicitly permitted. Hand-waving is an idiomatic metaphor , derived in part from the use of excessive gesticulation , perceived as unproductive, distracting or nervous, in communication or other effort. [ citation needed ] The term also evokes the sleight-of-hand distraction techniques of stage magic , [ 2 ] and suggests that the speaker or writer seems to believe that if they, figuratively speaking, simply wave their hands, no one will notice or speak up about the holes in the reasoning. [ 2 ] This implication of misleading intent has been reinforced by the pop-culture influence of the Star Wars franchise, in which mystically powerful hand-waving is fictionally used for mind control , and some uses of the term in public discourse are explicit Star Wars references. [ 3 ] Actual hand-waving motions may be used either by a speaker to indicate a desire to avoid going into details, [ 1 ] or by critics to indicate that they believe the proponent of an argument is engaging in a verbal hand-wave inappropriately. [ 2 ] The spelling of the compound varies (both with regard to this idiom and the everyday human communication gesture of waving ). While hand-waving is the most common spelling of the unitary present participle and gerund in this usage, and hand-wave of the simple present verb, hand wave dominates as the noun-phrase form. Handwaving and handwave may be preferred in some circles, and are well attested. [ 4 ] "Hand waving" is mostly used otherwise, e.g. "she had one hand waving, the other on the rail", but is found in some dictionaries in this form. [ 1 ] A more arch, mock-antiquarian construction is waving of [the] hands . Superlative constructions such as "vigorous hand-waving", "waved their hand[s] furiously", "lots of waving of hands", etc., are used to imply that the hand-waver lacks confidence in the information being conveyed, cannot convincingly express or defend the core of the argument being advanced. The descriptive epithet hand-waver has been applied to those engaging in hand-waving, but is not common. The opposite of hand-waving is sometimes called nose-following in mathematics ( see § In mathematics , below ). However it is spelled, the expression is also used in the original literal meaning of gesturing in a greeting, departing, excited, or attention-seeking manner by waving the hands, as in "friendly were the hand-waving crowds ..." 
(— Sinclair Lewis ), [ 1 ] which dates to the mid-17th century as a hyphenated verb [ 5 ] and the early 19th century United States as a fully compounded verb. [ 6 ] It is unclear when the figurative usage arose. The Oxford Dictionary of English lists it as "extended use", [ 5 ] and it appears primarily in modern American dictionaries, some of which label it as "informal". [ 1 ] Handwaving is frequently used in low-quality debate , including political campaigning and commentary , issue-based advocacy , advertising and public relations , tabloid journalism , opinion pieces , Internet memes , and informal discussion and writing. If the opponent in a debate or a commentator on an argument alleges hand-waving, it suggests that the proponent of the argument, position or message has engaged in one or more fallacies of logic , [ 2 ] usually informal , and/or glossed over non-trivial details, [ 2 ] and is attempting to wave away challenges and deflect questions, as if swatting at flies. The distraction inherent in the sense of the term has become a key part of the meaning. [ 2 ] The fallacies in question vary, but often include one of the many variants of argument to emotion , and in political discourse frequently involve unjustified assignment or transference of blame. Hand-waving is not itself a fallacy; the proponent's argument may incidentally be correct despite their failure to properly support it. [ 2 ] A tertiary meaning refers to use of poorly-reasoned arguments specifically to impress [ 7 ] or to persuade. [ 1 ] [ 7 ] The New Hacker's Dictionary (a.k.a. The Jargon File ) observes: If someone starts a sentence with "Clearly..." or "Obviously..." or "It is self-evident that...", it is a good bet he is about to handwave (alternatively, use of these constructions in a sarcastic tone before a paraphrase of someone else's argument suggests that it is a handwave). The theory behind this term is that if you wave your hands at the right moment, the listener may be sufficiently distracted to not notice that what you have said is bogus [i.e., incorrect]. Failing that, if a listener does object, you might try to dismiss the objection with a wave of your hand. [ 2 ] The implication that hand-waving is done with the specific intent to mislead has long been attached to the term, due to the use of literal waving of a hand – either natural-looking or showy, but never desperate – by illusionists to distract audiences and misdirect their attention from the mechanisms of the sleight-of-hand , gimmicked props or other trick being used in the performance. This meaning has become reinforced in recent decades by the influence of Star Wars (1977) and its sequels, in which the fictional Jedi mind trick involves a subtle hand wave with mystical powers – that only work on the weak-minded – to disguise reality and compel compliance. Consequently, there is an implication in current usage that a hand-waver may be craftily intending to deceive, and has a low opinion of the intelligence of the opponent or (especially) an audience or the general public. The labels "Jedi hand wave" and "Jedi mind trick" themselves are sometimes applied, in a tongue-in-cheek way, to this manipulation technique in public discourse; [ 3 ] US Congressman Luke Messer 's use of it in reference to President Barack Obama 's 2016 State of the Union address generated headlines. [ 8 ] [ 9 ] In an unplanned debate or presentation, an off-the-cuff essay, or an informal discussion, the proponent may have little or no time for preparation. 
Participants in such exchanges may use the term in reference to their own arguments, in the same sense as an author admitting a minor plot flaw ( see § In literary criticism ). When the proponents use the term, they are conceding that they know an ancillary point of or intermediate step in their arguments is poorly supported; they are suggesting that such details are not important and do not affect their key arguments or conclusions, and that the hand-waved details should be excluded from current consideration. Examples include when they believe a statement is true but cannot prove it at that time, and when the sources upon which they are relying conflict in minor ways: "I'm hand-waving over the exact statistics here, but they all show at least a 20% increase, so let's move on". In formal debate competition , certain forms of hand-waving may sometimes be explicitly permitted. In policy debate , the concept of fiat allows a team to pursue a line of reasoning based on a scenario that is not presently true, if a judge is satisfied that the case has been made that it could become true. By extension, handwaving is used in literary, film and other media criticism of speculative fiction to refer to a plot device (e.g., a scientific discovery, a political development, or rules governing the behavior of a fictional creature) that is left unexplained or sloppily explained because it is convenient to the story, with the implication that the writer is aware of the logical weakness but hopes the audience will not notice or will suspend disbelief regarding such a MacGuffin , deus ex machina , continuity error or plot hole . The fictional material " handwavium " (a.k.a. "unobtainium", among other humorous names) is sometimes referred to in situations where the plot requires access to a substance of great value and properties that cannot be explained by real-world science, but is convenient to solving, or central to creating, a problem for the characters in the story. Perhaps the best known example is the spice melange , a fictional drug with supernatural properties, in Frank Herbert 's far-future science-fantasy epic, Dune . Hand-waving has come to be used in role-playing games to describe actions and conversations that are quickly glossed over, rather than acted out in full according to the rules. This may be done to keep from bogging down the play of the game with time-consuming but minor details. In mathematics, and disciplines in which mathematics plays a major role, hand-waving refers to either the absence of formal proof or methods that do not meet the standards of mathematical rigor . In practice, it often involves the use of unrepresentative examples, unjustified assumptions, key omissions and faulty logic, and while these may be useful in expository papers and seminar presentations, they ultimately fall short of the standard of proof needed to establish a result. Proof by intimidation is one form of hand-waving. The mathematical profession tends to be receptive to informed critiques from any listener, and a claimant to a new result is expected to be able to answer any such question with a logical argument, up to a full proof. Should a speaker apparently fail to give such an answer, anyone in the audience who can supply the needed demonstration may sometimes upstage the speaker. The objector in such a case might receive some measure of credit for the theorem the hand-waver presented.
[ citation needed ] The opposite of hand-waving in mathematics (and related fields) is sometimes called nose-following , which refers to the unimaginative development of a narrow line of reasoning that—while correct—can also end up making the subject dry and uninteresting. [ 10 ] [ better source needed ] The rationale for this culture of hyper-critical scrutiny is suggested by a quote of G. H. Hardy : "[A mathematician's] subject is the most curious of all—there is none in which truth plays such odd pranks. It has the most elaborate and the most fascinating technique, and gives unrivalled openings for the display of sheer professional skill." [ 11 ] Hand-waving arguments in engineering and other applied sciences often include order-of-magnitude estimates and dimensional analysis , [ 12 ] especially in the use of Fermi problems in physics and engineering education. However, competent, well-intentioned researchers and professors also rely on explicitly declared hand-waving when, given a limited time, a large result must be shown and minor technical details cannot be given much attention—e.g., "it can be shown that z is an even number", as an intermediary step in reaching a conclusion. Another example of hand-waving can be found in the oversimplifications of the geologic representations commonly used in groundwater models created in support of land-development applications, especially those involving metal mining and aggregate extraction. Back-of-the-envelope calculations are approximate ways to get an answer by over-simplification, and are comparable to hand-waving in this sense. Hand-waving has been used to describe work-related situations where productivity is seemingly displayed, but deliverables are not produced, especially in the context of intentional engagement in busy work or pretend-work, vague claims of overwork or complications, impenetrably buzzword -laden rationalizations for delays or otherwise poor performance, and plausible-sounding but weak excuse-making and attention-deflecting tactics. In employment situations, as in political discourse, a hand-waving effort may seek to shift blame to other parties. Another use is in reference to fiscal problems, such as an inability to adequately explain accounting discrepancies or an avoidance of accountability for missing funds.
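As an illustration of the explicitly declared, back-of-the-envelope estimating described earlier in this section, here is a classic Fermi-style calculation in Python; every input is a stated rough assumption rather than data, which is precisely the point of the exercise.

```python
# Fermi-style order-of-magnitude estimate: piano tuners in a city of three million.
population = 3_000_000
people_per_household = 2.5
households_with_piano = 1 / 20        # assumed
tunings_per_piano_per_year = 1        # assumed
tunings_per_tuner_per_day = 4         # assumed
working_days_per_year = 250

pianos = population / people_per_household * households_with_piano
tuner_capacity = tunings_per_tuner_per_day * working_days_per_year
print(round(pianos * tunings_per_piano_per_year / tuner_capacity))  # on the order of tens
```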
https://en.wikipedia.org/wiki/Hand-waving
In occupational safety and health , hand arm vibrations ( HAVs ) are a specific type of occupational hazard which can lead to hand–arm vibration syndrome ( HAVS ). HAVS, also known as vibration white finger ( VWF ) or dead finger , [ 1 ] is a secondary form of Raynaud's syndrome , an industrial injury triggered by continuous use of vibrating hand-held machinery. Use of the term vibration white finger has generally been superseded in professional usage by the broader concept of HAVS, although it is still used by the general public. The symptoms of vibration white finger are the vascular component of HAVS. HAVS is a widespread, recognized industrial disease affecting tens of thousands of workers. It is a disorder that affects the blood vessels , nerves , muscles , and joints of the hand , wrist , and arm . Its best known effect is vibration-induced white finger (VWF), a term introduced by the Industrial Injury Advisory Council in 1970. Injury can occur at frequencies between 5 and 2000 Hz but the greatest risk for fingers is between 50 and 300 Hz. The total risk exposure for hand and arm is calculated by the use of ISO 5349-1, which stipulates maximum damage between 8 and 16 Hz and a rapidly declining risk at higher frequencies. The ISO 5349-1 frequency risk assessment has been criticized as corresponding poorly to observational data; more recent research suggests that medium and high frequency vibrations also increase HAVS risk. [ 2 ] [ 3 ] Excessive exposure to hand arm vibrations can result in various patterns of disease collectively known as HAVS or VWF. This can affect nerves, joints, muscles, blood vessels or connective tissues of the hand and forearm: [ citation needed ] In extreme cases, the affected person may lose fingers. The effects are cumulative. When symptoms first appear, they may disappear after a short time. If exposure to vibration continues over months or years, the symptoms can worsen and become permanent. [ 4 ] Exposure to hand arm vibrations is a relatively new occupational hazard in the workplace. While hand arm vibrations have been occurring ever since the first use of the power tool, concern over the damage they cause has lagged behind that for other hazards such as noise and chemical exposure . While safety engineers worldwide are collaboratively working on establishing both an Exposure Action Value and an Exposure Limit Value similar to the occupational noise standards, the Occupational Safety and Health Administration , the only regulatory public safety administration in the United States, has yet to adopt official values for either in the U.S. [ 5 ] Occupations at risk of hand–arm vibration syndrome include mining and foundry work, with the highest exposure found in construction. [ 6 ] One unexpected occupation that is associated with HAVs is dentistry. [ 6 ] Dentistry is mainly associated with musculoskeletal disorders (MSD). [ 6 ] Consequently, HAVS is also linked to this field's ergonomic health issues due to the frequent use of dentistry hand-piece tools. [ 7 ] As reported by the Vibration Directive of European Legislation , real-time or one-time use of the dental tools does not surpass the exposure limit. [ 7 ] However, a long history of frequent handling of these tools has since been associated with dental workers experiencing HAVS, particularly in combination with outside factors such as a high body mass index ( BMI ). [ 7 ] While these workplace industries more prominently affect men in the working population, there are still a significant number of women who also experience HAVS.
[ 8 ] According to a study conducted in Sweden, about 2% of all women and 14% of all men utilize vibrating tools for work. [ 8 ] Women experience the symptoms of HAVS at a higher prevalence than men. [ 8 ] While OSHA has yet to supply these values, other countries' agencies have. The Health and Safety Executive of the British Government suggests using an Exposure Action Value of 2.5 m/s² and an Exposure Limit Value of 5.0 m/s², [ 9 ] which is based on the EU directive from 2002. [ 10 ] However, it has been shown that those exposure levels are still not safe, as 10% of a population would get sensorineural injuries after 5 years of exposure at the action level. [ 11 ] The Canadian Centre for Occupational Health and Safety promotes the ACGIH Threshold Limit Values shown by the adjacent table. [ 12 ] When the time-weighted acceleration data exceeds these numbers for the duration, damage from HAVS begins. [ 13 ] There are additional recommendations from the National Institute for Occupational Safety and Health (NIOSH) to minimize exposure to vibrating tools. [ 14 ] Workplaces and physicians' offices should not only view HAVS as a serious condition but should also look into implementing change. Such measures include engineering controls, medical surveillance, and personal protective equipment (PPE) to mitigate vibration exposure. [ 15 ] Another measure is administrative controls, an example being limiting the number of hours or days a worker uses vibrating tools. Furthermore, companies could provide adequate training to workers on the hazards and protocols of handling vibrating tools, along with supplying tools that generate the least amount of vibration while still completing the assignment. [ 14 ] Good practice in industrial health and safety management requires that worker vibration exposure is assessed in terms of acceleration, amplitude , and duration. Using a tool that vibrates slightly for a long time can be as damaging as using a heavily vibrating tool for a short time. The duration of use of the tool is measured as trigger time , the period when the worker actually has their finger on the trigger to make the tool run, and is typically quoted in hours per day. Vibration amplitude is quoted in metres per second squared, and is measured by an accelerometer on the tool or given by the manufacturer. Amplitudes can vary significantly with tool design, condition and style of use, even for the same type of tool. [ citation needed ] Anti-vibration gloves are traditionally made with a thick and soft palm material to insulate from the vibrations. The protection is highly dependent on frequency range; most gloves provide no protection in the palm and wrist below ~50 Hz and in the fingers below ~400 Hz. Factors such as high grip force, cold hands or vibration forces in the shear direction can reduce the protection and/or increase damage to the hands and arms. Gloves do help to keep hands warm, but to get the desired effect, the frequency output from the tool must match the properties of the vibration glove that is selected. Anti-vibration gloves in many cases amplify the vibrations at frequencies lower than those mentioned in the text above. [ citation needed ] The effect of legislation in various countries on worker vibration limits has been to oblige equipment providers to develop better-designed, better-maintained tools, and employers to train workers appropriately. It also drives tool designers to innovate to reduce vibration.
Some examples are the easily manipulated mechanical arm (EMMA) [ 16 ] and the suspension mechanism designed into chainsaws . [ citation needed ] The Control of Vibration at Work Regulations 2005, created under the Health and Safety at Work etc. Act 1974 , [ 17 ] is the legislation in the UK that governs exposure to vibration and assists with preventing HAVS occurring. In the UK, Health and Safety Executive gives the example of a hammer drill which can vary from 6 m/s² to 25 m/s². HSE publishes a list of typically observed vibration levels for various tools, and graphs of how long each day a worker can be exposed to particular vibration levels. This makes managing the risk relatively straightforward. Tools are given an Exposure Action Value (EAV, the time which a tool can be used before action needs to be taken to reduce vibration exposure) and an Exposure Limit Value (ELV, the time after which a tool may not be used). [ citation needed ] There are only a few ways to lower the severity and risk of damage from HAVS without complete engineering redesign on the operation of the tools. A few examples could be increasing the dampening through thicker gloves and increasing the trigger size of the tool to decrease the stress concentration of the vibrations on the contact area, but the best course of action would be to buy safer tools that vibrate less. These Exposure Action Values and Exposure Limit Values seem rather low, when compared to lab tested data, shown by the National Institute for Occupational Safety and Health Power Tools Database . Just an example out of the database, the reciprocating saws look to have extremely violent vibrations with one of the saws vibrations reaching 50 m/s 2 in one hand and over 35 m/s 2 in the other. [ 18 ] There are various occupational standards of vibration measurement for HAV in use in the United States. They are ANSI S3.34, ACGIH-HAV standard, and NIOSH #89-106. Internationally, European Union Directive 2002/44/EC and ISO5349 are the vibration measurement standards for HAV. [ 19 ] Hand arm vibrations can affect anyone that uses them for a prolonged period of time. There are many types of tools that use hand arm vibrations including chainsaws, engineering controls, and power tools. [ 20 ] [ 21 ] Many industrial workers use these power tools, for example, when working with construction. Some of the side effects of using hand arm vibrations are discomfort in the head and jaw, chest and abdomen pains, and changing speech. [ 22 ] Depending on the way the hand arm vibration instruments are held, it can influence the vibration effects. This includes the grip force that the worker uses on the tool, the density of the material the tool is being used on, and the texture of the material the tool is used on. [ 23 ] If the material is harder, the vibrations would shake more vigorously compared to a softer material.  Hand arm vibrations can also affect people daily with the pain of using these tools such as disturbing sleep, inability to work in certain conditions, and having a hard time doing daily tasks. [ 24 ] Hand arm vibrations can affect the daily lives of workers that use these tools. While there are different tools used to monitor HAV, a simple system can be used in organizations highlighting excess use of grinding disks when using a hand held angle grinder. This is re-active monitoring and it was introduced by Carl West at a fabrication workshop in Rotherham, England in 2009. 
[ 25 ] In this re-active monitoring scheme, the rate at which consumable items are used stands in for direct measurement. The vibration levels of the angle grinding tools in use were measured, as was the average life of a grinding disk; thus, by recording the number of grinding disks used, vibration exposure may be calculated. [ 26 ] The symptoms were first described by Professor Giovanni Loriga in Italy in 1911, although the link was not made between the symptoms and vibrating hand tools until a study undertaken by Alice Hamilton MD in 1918. She formed her theory by following the symptoms reported by quarry cutters and carvers in Bedford, Indiana. She also discovered the link between an increase in HAV symptoms and cold weather, as 1918 was a particularly harsh winter. [ citation needed ] The first scale for assessing the condition, the Taylor-Pelmear scale, was published in 1975, but it was not listed as a prescribed disease in the United Kingdom until 1985, and the Stockholm scale was introduced in 1987. In 1997, the UK High Court awarded £127,000 in compensation to seven coal miners for vibration white finger. A UK government fund set up to cover subsequent claims by ex-coalminers had exceeded £100 million in payments by 2005.
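A rough Python sketch of how the figures discussed in this article fit together: the daily exposure value A(8) used with the EAV/ELV limits quoted earlier scales a tool's vibration magnitude by the square root of trigger time over an eight-hour day, and the consumable-counting scheme described above can supply that trigger time from the number of grinding disks used. All tool figures here are assumed for illustration and are not measurements from the cited workshop.

```python
from math import sqrt

def a8(magnitude_m_s2: float, trigger_time_hours: float) -> float:
    """Daily vibration exposure A(8) = a_hv * sqrt(T / 8 h)."""
    return magnitude_m_s2 * sqrt(trigger_time_hours / 8.0)

# Trigger time inferred from consumables (re-active monitoring), assumed figures:
disks_used_today = 6
grinding_minutes_per_disk = 10   # assumed average disk life under load
grinder_magnitude = 4.0          # m/s^2, assumed vibration magnitude for this grinder

trigger_time_h = disks_used_today * grinding_minutes_per_disk / 60
exposure = a8(grinder_magnitude, trigger_time_h)
print(f"trigger time {trigger_time_h:.1f} h, A(8) = {exposure:.2f} m/s^2")
# 1.0 h and 1.41 m/s^2: below the 2.5 m/s^2 exposure action value quoted earlier.
```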
https://en.wikipedia.org/wiki/Hand_arm_vibrations
A hand boiler or (less commonly) love meter is a glass sculpture used as an experimental tool to demonstrate vapour-liquid equilibrium , or as a collector's item to whimsically "measure love." It consists of a lower bulb containing a volatile liquid and a mixture of gases that is connected usually by a twisting glass tube that connects to an upper or "receiving" glass bulb. A hand boiler functions similar to the " drinking bird " toy: [ 1 ] The upper and lower bulbs of the device are at different temperatures, and therefore the vapor pressure in the two bulbs is different. Since the lower bulb is warmer, the vapor pressure in it is higher. The difference in vapor pressure forces the liquid from the lower bulb to the upper bulb. Thus: Δ h = Δ p ρ g {\displaystyle \Delta h={\frac {\Delta p}{\rho g}}} where: Δ h {\displaystyle \Delta h} = the height of the column of fluid above the fluid's level in the lower bulb Δ p {\displaystyle \Delta p} = the difference in vapor pressure between the two bulbs (which can be determined via the Antoine equation ) ρ {\displaystyle \rho } = the density of the liquid g {\displaystyle g} = the acceleration of gravity at the Earth's surface The boiling is caused by the relationship between the temperature and pressure of a gas. As the temperature of a gas in a closed container rises, the pressure also rises. There must be a temperature (and pressure) difference between the two large chambers for the liquid to move. When held upright (with the smaller bulb on top), the liquid will move from the bulb with the higher pressure to the bulb with lower pressure. As the gas continues to expand, the gas will then bubble through the liquid, boiling. The fact that the liquid is volatile (easily vaporized) makes the hand boiler more effective. Adding heat to the liquid produces more gas, also increasing pressure in the closed container. [ 2 ] Sometimes a hand boiler is used to show properties of distillation . Since the liquid both evaporates and condenses at relatively cool temperatures while in an enclosed system, the boiler can be turned upside down, and the top end can be placed in ice water. The gaseous form of the liquid will condense in the cooled chamber. Since the liquid is often colored with dye, but the dye does not evaporate or condense at the same temperature, the liquid that condenses in the cooled chamber is colorless, leaving the pigment behind. In popular culture, hand boilers used to be sometimes known as "love meters" because the tube that separates the upper and lower bulbs is twisted into a heart shape and the volatile liquid is colored red. Love meters were a common collector's item or a souvenir. Depending on how the item was packaged, one would grasp the lower bulb to "prove" how passionate one was, or a couple would each grasp one end to see who would force the liquid into the other's bulb. Hand boilers are much more commonly used as a scientific novelty today. Hand boilers date back at least as early as 1767, when the American polymath Benjamin Franklin encountered them in Germany. He developed an improved version in 1768, [ 3 ] after which they were called Franklin's pulse glass or palm glass or pulse hammer (German: Pulshammer ) or water hammer (German: Wasserhammer ). [ 4 ]
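A short worked example of the column-height relation given above, Δh = Δp/(ρg); the pressure difference and liquid density are assumed, order-of-magnitude values rather than properties of any particular hand boiler.

```python
# dh = dp / (rho * g), the relation quoted above.
dp = 500.0    # Pa, assumed vapour-pressure difference between the two bulbs
rho = 790.0   # kg/m^3, assumed density of the volatile liquid
g = 9.81      # m/s^2, acceleration due to gravity

dh = dp / (rho * g)
print(f"supported liquid column: about {100 * dh:.1f} cm")  # about 6.5 cm
```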
https://en.wikipedia.org/wiki/Hand_boiler
A hand compass (also hand bearing compass or sighting compass ) is a compact magnetic compass capable of one-hand use and fitted with a sighting device to record a precise bearing or azimuth to a given target or to determine a location. [ 1 ] [ 2 ] Hand or sighting compasses include instruments with simple notch-and-post alignment ("gunsights"), prismatic sights , direct or lensatic sights, [ 3 ] and mirror/vee (reflected-image) sights. With the additional precision offered by the sighting arrangement, and depending upon construction, sighting compasses provide increased accuracy when measuring precise bearings to an objective. [ 4 ] The term hand compass is used by some in the forestry and surveying professions to refer to a certain type of hand compass optimized for use in those fields, also known as a forester or cruiser compass. [ 5 ] [ 6 ] A hand compass may also include the various one-hand or 'pocket' versions of the surveyor's or geologist's transit . While small portable compasses fitted with mechanical sighting devices have existed for a few hundred years, the first one-hand compass with a sighting device appeared around 1885. [ 7 ] These soon evolved into more elaborate and specialized models such as the Brunton Pocket Transit patented in 1894. [ 8 ] Hand compasses were soon widely employed in the practice of forestry , geology , archaeology , speleology , preliminary cartography and land surveying . In the United States, the hand compass became very popular among foresters seeking a compass to plot and estimate stands of timber. While the Pocket Transit was more than adequate for such work, it was relatively expensive. Consequently, a new type of hand compass was introduced: the forester or cruiser compass . Traditionally, cruiser compasses featured a sighting notch, a mechanically-damped [ 9 ] or "dry" needle, adjustable declination and a large dial marked in individual degrees using counterclockwise calibration (reversed east and west positions). A screw base for a tripod or jacob staff ( monopod ) was often fitted as well. [ 10 ] By the late 1960s many foresters had begun using more modern liquid-damped compass designs, including mirror-sight protractor models such as the Silva Type 15 Ranger or the Suunto MC-1 (later, the MC-2 ). These compasses were fast to use, particularly along straight cruise lines and were sufficiently accurate for most forestry applications. [ 11 ] On the other hand, geologists, speleologists , archaeologists , ornithologists , and foresters engaged in precision survey work often used direct-reading models such as the Suunto KB-14 , prismatic compasses such as Suunto KB-77 or the traditional Brunton Pocket Transit . [ 12 ] [ 13 ] Many models featured an optional quadrant (0-90-0 degree) scale instead of an azimuthal (0-360 degree) system. [ 14 ] By using a hand compass in combination with aerial photographs and maps a person can determine his/her location in the field, determine direction to landmarks or destinations, estimate distance, estimate area, and find points of interest (marked boundary lines, USGS marker, plot centers). For increased accuracy, many professional hand compasses continue to be fitted with tripod mounts. [ 15 ] While the hand compass continues to be widely employed in such work, it has been increasingly supplanted in recent years by use of the GPS, or Global Positioning System receiver. 
The marine hand compass, or hand-bearing compass as it is termed in nautical use, has been used by small-boat or inshore sailors since at least the 1920s to keep a running course or to record precise bearings to landmarks on shore in order to determine position via the resection technique. [ 16 ] [ 17 ] Instead of a magnetized needle or disc, most hand bearing compasses feature liquid damping with a floating card design (a magnetized, degreed float or dial atop a jeweled pivot bearing). [ 18 ] Equipped with a viewing prism, the hand bearing compass allows instant reading of forward bearings from the user to an object or vessel, and some provide the reciprocal bearing as well. [ 19 ] [ 20 ] Modern examples of marine hand bearing compasses include the Suunto KB-14 and KB-77 , and the Plastimo Iris 50 . [ 21 ] [ 22 ] These compasses frequently have battery-illuminated or photoluminescent degree dials for use in low light or darkness. [ 23 ] [ 24 ]
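A rough sketch of the resection technique mentioned above, worked on a flat local east/north grid; the landmark coordinates and the two bearings are invented for illustration, and the simple intersection below ignores chart projection, magnetic variation and the degenerate case of parallel bearings.

```python
from math import sin, cos, radians

def resect(p1, b1, p2, b2):
    """Fix a position from two compass bearings.

    p1, p2: (east, north) of two charted landmarks in metres.
    b1, b2: bearings in degrees clockwise from north, taken from the unknown
    position toward each landmark. Returns the observer's (east, north).
    """
    u1 = (sin(radians(b1)), cos(radians(b1)))  # unit vector: observer -> landmark 1
    u2 = (sin(radians(b2)), cos(radians(b2)))  # unit vector: observer -> landmark 2
    # Observer O satisfies O + t1*u1 = p1 and O + t2*u2 = p2 for some t1, t2 > 0.
    # Eliminating O gives a 2x2 linear system in t1, t2; solve by Cramer's rule.
    dx, dy = p1[0] - p2[0], p1[1] - p2[1]
    det = -u1[0] * u2[1] + u2[0] * u1[1]
    t1 = (-dx * u2[1] + u2[0] * dy) / det
    return (p1[0] - t1 * u1[0], p1[1] - t1 * u1[1])

# Bearings of 020 and 110 degrees to two landmarks should fix us near (500, 500).
print(resect((842.0, 1439.7), 20.0, (1251.8, 226.4), 110.0))
```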
https://en.wikipedia.org/wiki/Hand_compass
The Handbook of Automated Reasoning ( ISBN 0444508139 , 2128 pages) is a collection of survey articles on the field of automated reasoning . Published in June 2001 by MIT Press , it is edited by John Alan Robinson and Andrei Voronkov . Volume 1 describes methods for classical logic , first-order logic with equality and other theories, and induction . Volume 2 covers higher-order , non-classical and other kinds of logic.
https://en.wikipedia.org/wiki/Handbook_of_Automated_Reasoning
In human biology , handedness is an individual's preferential use of one hand , known as the dominant hand , due to and causing it to be stronger, faster or more dextrous . The other hand, comparatively often the weaker, less dextrous or simply less subjectively preferred, is called the non-dominant hand . [ 2 ] [ 3 ] [ 4 ] In a study from 1975 on 7,688 children in US grades 1–6, left handers comprised 9.6% of the sample, with 10.5% of male children and 8.7% of female children being left-handed. [ 5 ] [ 6 ] [ 7 ] Overall, around 90% of people are right-handed. [ 8 ] Handedness is often defined by one's writing hand. It is fairly common for people to prefer to do a particular task with a particular hand. Mixed-handed people change hand preference depending on the task. Not to be confused with handedness, ambidexterity describes having equal ability in both hands. Those who learn it still tend to favor their originally dominant hand. Natural ambidexterity (equal preference of either hand) does exist, but it is rare—most people prefer using one hand for most purposes. Most research suggests that left-handedness has an epigenetic marker—a combination of genetics, biology and the environment. In some cultures, the use of the left hand can be considered disrespectful. Because the vast majority of the population is right-handed, many devices are designed for use by right-handed people, making their use by left-handed people more difficult. [ 9 ] In many countries, left-handed people are or were required to write with their right hands. However, left-handed people have an advantage in sports that involve aiming at a target in an area of an opponent's control, as their opponents are more accustomed to the right-handed majority. As a result, they are over-represented in baseball , tennis , fencing , [ 10 ] cricket , boxing , [ 11 ] [ 12 ] and mixed martial arts . [ 13 ] Handedness may be measured behaviourally (performance measures) or through questionnaires (preference measures). The Edinburgh Handedness Inventory has been used since 1971 but contains some dated questions and is hard to score. Revisions have been published by Veale [ 19 ] and by Williams. [ 20 ] The longer Waterloo Handedness Questionnaire is not widely accessible. More recently, the Flinders Handedness Survey (FLANDERS) has been developed. [ 21 ] Some non-human primates have a preferred hand for tasks, but they do not display a strong right-biased preference like modern humans, with individuals equally split between right-handed and left-handed preferences. When exactly a right handed preference developed in the human lineage is unknown, although it is known through various means that Neanderthals had a right-handedness bias like modern humans. Attempts to determine handedness of early humans by analysing the morphology of lithic artefacts have been found to be unreliable. [ 22 ] There are several theories of how handedness develops. Handedness displays a complex inheritance pattern. For example, if both parents of a child are left-handed, there is a 26% chance of their child being left-handed. [ 23 ] A large study of twins from 25,732 families by Medland et al. (2006) indicates that the heritability of handedness is roughly 24%. [ 24 ] Two theoretical single-gene models have been proposed to explain the patterns of inheritance of handedness, by Marian Annett [ 25 ] of the University of Leicester , and by Chris McManus [ 23 ] of UCL . 
However, growing evidence from linkage and genome-wide association studies suggests that genetic variance in handedness cannot be explained by a single genetic locus . [ 26 ] [ 27 ] [ 28 ] [ 29 ] [ 30 ] [ 31 ] [ 32 ] [ excessive citations ] From these studies, McManus et al. now conclude that handedness is polygenic and estimate that at least 40 loci contribute to the trait. [ 33 ] Brandler et al. performed a genome-wide association study for a measure of relative hand skill and found that genes involved in the determination of left-right asymmetry in the body play a key role in handedness. [ 34 ] Brandler and Paracchini suggest the same mechanisms that determine left-right asymmetry in the body (e.g. nodal signaling and ciliogenesis ) also play a role in the development of brain asymmetry (handedness being a reflection of brain asymmetry for motor function). [ 35 ] In 2019, Wiberg et al. performed a genome-wide association study and found that handedness was significantly associated with four loci, three of them in genes encoding proteins involved in brain development. [ 36 ] Four studies have indicated that individuals who have had in-utero exposure to diethylstilbestrol (a synthetic estrogen -based medication used between 1940 and 1971) were more likely to be left-handed over the clinical control group. Diethylstilbestrol animal studies "suggest that estrogen affects the developing brain, including the part that governs sexual behavior and right and left dominance". [ 37 ] [ 38 ] [ 39 ] [ 40 ] Another theory is that ultrasound may sometimes affect the brains of unborn children, causing higher rates of left-handedness in children whose mothers receive ultrasound during pregnancy. Research suggests there may be a weak association between ultrasound screening (sonography used to check the healthy development of the fetus and mother) and left-handedness. [ 41 ] Twin studies indicate that genetic factors explain 25% of the variance in handedness, and environmental factors the remaining 75%. [ 42 ] While the molecular basis of handedness epigenetics is largely unclear, Ocklenburg et al. (2017) found that asymmetric methylation of CpG sites plays a key role for gene expression asymmetries related to handedness. [ 43 ] [ 44 ] One common handedness theory is the brain hemisphere division of labor. In most people, the left side of the brain controls speaking. The theory suggests it is more efficient for the brain to divide major tasks between the hemispheres—thus most people may use the non-speaking (right) hemisphere for perception and gross motor skills. As speech is a very complex motor control task, the specialised fine motor areas controlling speech are most efficiently used to also control fine motor movement in the dominant hand. As the right hand is controlled by the left hemisphere (and the left hand is controlled by the right hemisphere) most people are, therefore right-handed. The theory depends on left-handed people having a reversed organisation. [ 45 ] However, the majority of left-handers have been found to have left-hemisphere language dominance—just like right-handers. [ 46 ] [ 47 ] Only around 30% of left-handers are not left-hemisphere dominant for language. Some of those have reversed brain organisation, where the verbal processing takes place in the right-hemisphere and visuospatial processing is dominant to the left hemisphere. [ 48 ] Others have more ambiguous bilateral organisation, where both hemispheres do parts of typically lateralised functions. 
When tasks designed to investigate lateralisation (preference for handedness) are averaged across a group of left-handers, the overall effect is that left-handers show the same pattern of data as right-handers, but with a reduced asymmetry. [ 49 ] The majority of the evidence comes from literature assessing oral language production and comprehension. When it comes to writing, findings from recent studies were inconclusive for a difference in lateralization for writing between left-handers and right-handers. [ 50 ] Researchers studied fetuses in utero and determined that handedness in the womb was a very accurate predictor of handedness after birth. [ 51 ] In a 2013 study, 39% of infants (6 to 14 months) and 97% of toddlers (18 to 24 months) demonstrated a hand preference. [ 52 ] Infants have been observed to fluctuate heavily when choosing a hand to lead in grasping and object manipulation tasks, especially in one- versus two-handed grasping. Between 36 and 48 months, there is a significant decline in variability between handedness in one-handed grasping; it can be seen earlier in two-handed manipulation. Children of 18–36 months showed more hand preference when performing bi-manipulation tasks than with simple grasping. [ 53 ] The decrease in handedness variability in children of 36–48 months may be attributable to preschool or kindergarten attendance due to increased single-hand activities such as writing and coloring. [ 53 ] Scharoun and Bryden noted that right-handed preference increases with age up to the teenage years. [ 6 ] The modern turn in handedness research has been towards emphasizing degree rather than direction of handedness as a critical variable. [ 54 ] In his book Right-Hand, Left-Hand , Chris McManus of University College London argues that the proportion of left-handers is increasing, and that an above-average quota of high achievers have been left-handed. He says that left-handers' brains are structured in a way that increases their range of abilities, and that the genes that determine left-handedness also govern development of the brain's language centers. [ 55 ] Writing in Scientific American , he states: Studies in the U.K., U.S. and Australia have revealed that left-handed people differ from right-handers by only one IQ point, which is not noteworthy ... Left-handers' brains are structured differently from right-handers' in ways that can allow them to process language, spatial relations and emotions in more diverse and potentially creative ways. Also, a slightly larger number of left-handers than right-handers are especially gifted in music and math. A study of musicians in professional orchestras found a significantly greater proportion of talented left-handers, even among those who played instruments that seem designed for right-handers, such as violins. Similarly, studies of adolescents who took tests to assess mathematical giftedness found many more left-handers in the population. [ 56 ] Left-handers are overrepresented among those with lower cognitive skills and mental impairments, with those with intellectual disability being roughly twice as likely to be left-handed, as well as generally lower cognitive and non-cognitive abilities amongst left-handed children. [ 57 ] Conversely, left-handers are also overrepresented in high IQ societies, such as Mensa . A 2005 study found that "approximately 20% of the members of Mensa are lefthanded, double the proportion in most general populations". 
[ 58 ] Ghayas & Adil (2007) found that left-handers were significantly more likely to perform better on intelligence tests than right-handers and that right-handers also took more time to complete the tests. [ 59 ] In a systematic review and meta-analysis, Ntolka & Papadatou-Pastou (2018) found that right-handers had higher IQ scores, but that difference was negligible (about 1.5 points). [ 60 ] The prevalence of difficulties in left-right discrimination was investigated in a cohort of 2,720 adult members of Mensa and Intertel by Storfer. [ 61 ] According to the study, 7.2% of the men and 18.8% of the women evaluated their left-right directional sense as poor or below average; moreover participants who were relatively ambidextrous experienced problems more frequently than did those who were more strongly left- or right-handed. [ 61 ] The study also revealed an effect of age, with younger participants reporting more problems. [ 61 ] Nelson, Campbell, and Michel studied infants and whether developing handedness during infancy correlated with language abilities in toddlers. In the article they assessed 38 infants and followed them through to 12 months and then again once they became toddlers from 18 to 24 months. They discovered that when a child developed a consistent use of their right or left hand during infancy (such as using the right hand to put the pacifier back in, or grasping random objects with the left hand), they were more likely to have superior language skills as a toddler. Children who became lateral later than infancy (i.e., when they were toddlers) showed normal development of language and had typical language scores. The researchers used Bayley scales of infant and toddler development to assess the subjects. [ 62 ] In two studies, Diana Deutsch found that left-handers, particularly those with mixed-hand preference, performed significantly better than right-handers in musical memory tasks. [ 63 ] [ 64 ] There are also handedness differences in perception of musical patterns. Left-handers as a group differ from right-handers, and are more heterogeneous than right-handers, in perception of certain stereo illusions, such as the octave illusion , the scale illusion , and the glissando illusion . [ 65 ] Studies have found a positive correlation between left-handedness and several specific physical and mental disorders and health problems, including: As handedness is a highly heritable trait associated with various medical conditions, and because many of these conditions could have presented a Darwinian fitness challenge in ancestral populations, this indicates left-handedness may have previously been rarer than it currently is, due to natural selection. However, on average, left-handers have been found to have an advantage in fighting and competitive, interactive sports, which could have increased their reproductive success in ancestral populations. [ 78 ] In 2006, researchers from Lafayette College and Johns Hopkins University concluded that there was no statistically significant correlation between handedness and earnings for the general population, but among college-educated people, left-handers earned 10 to 15% more than their right-handed counterparts. [ 79 ] In a 2014 study published by the National Bureau of Economic Research , Harvard economist Joshua Goodman finds that left-handed people earn 10 to 12 percent less over the course of their lives than right-handed people. 
Goodman attributes this disparity to higher rates of emotional and behavioral problems in left-handed people. [ 57 ] Interactive sports such as table tennis, badminton and cricket have an overrepresentation of left-handedness, while non-interactive sports such as swimming show no overrepresentation. Smaller physical distance between participants increases the overrepresentation. In fencing , about half the participants are left-handed. [ 80 ] In tennis, 40% of the seeded players are left-handed. [ 81 ] The term southpaw is sometimes used to refer to a left-handed individual, especially in baseball and boxing . [ 82 ] Some studies suggest that right handed male athletes tend to be statistically taller and heavier than left handed ones. [ 83 ] Other, sports-specific factors may increase or decrease the advantage left-handers usually hold in one-on-one situations: One advantage is a left-handed catcher's ability to frame a right-handed pitcher's breaking balls. A right-handed catcher catches a right-hander's breaking ball across his body, with his glove moving out of the strike zone. A left-handed catcher would be able to catch the pitch moving into the strike zone and create a better target for the umpire. According to a meta-analysis of 144 studies, totaling 1,787,629 participants, the best estimate for the male to female odds ratio was 1.23, indicating that men are 23% more likely to be left-handed. For example, if the incidence of female left-handedness was 10%, then the incidence of male left-handedness would be approximately 12% (10% incidence of left-handedness among women multiplied by an odds ratio of 1:1.23 for women:men results in a 12.3% incidence of left-handedness among men). [ 93 ] [ clarification needed ] Some studies examining the relationship between handedness and sexual orientation have reported that a disproportionate minority of homosexual people exhibit left-handedness, [ 94 ] though findings are mixed. [ 95 ] [ 96 ] [ 97 ] A 2001 study also found that people assigned male at birth whose gender identity did not align with their assigned sex, were more than twice as likely to be left-handed than a clinical control group (19.5% vs. 8.3%, respectively). [ 98 ] Paraphilias (atypical sexual interests) have also been linked to higher rates of left-handedness. A 2008 study analyzing the sexual fantasies of 200 males found "elevated paraphilic interests were correlated with elevated non-right handedness". [ 99 ] Greater rates of left-handedness have also been documented among pedophiles . [ 100 ] [ 101 ] [ 102 ] [ 103 ] A 2014 study attempting to analyze the biological markers of asexuality asserts that non-sexual men and women were 2.4 and 2.5 times, respectively, more likely to be left-handed than their heterosexual counterparts. [ 104 ] A study at Durham University —which examined mortality data for cricketers whose handedness was a matter of public record—found that left-handed men were almost twice as likely to die in war as their right-handed contemporaries. [ 105 ] The study theorised that this was because weapons and other equipment was designed for the right-handed. "I can sympathise with all those left-handed cricketers who have gone to an early grave trying desperately to shoot straight with a right-handed Lee Enfield .303", wrote a journalist reviewing the study in the cricket press. 
[ 106 ] The findings echo those of previous American studies, which found that left-handed US sailors were 34% more likely to have a serious accident than their right-handed counterparts. [ 107 ] A high level of handedness (whether strongly favoring right or left) is associated with poorer episodic memory , [ 108 ] [ 109 ] and with poorer communication between brain hemispheres, [ 110 ] which may give poorer emotional processing, although bilateral stimulation may reduce such effects. [ 111 ] [ 112 ] A high level of handedness is associated with a smaller corpus callosum whereas low handedness with a larger one. [ 113 ] Left-handedness is associated with better divergent thinking . [ 114 ] Many tools and procedures are designed to facilitate use by right-handed people, often without realizing the difficulties incurred by the left-handed. John W. Santrock has written, "For centuries, left-handers have suffered unfair discrimination in a world designed for right-handers." [ 9 ] Many products for left-handed use are made by specialist producers, although not available from normal suppliers. Items as simple as a knife ground for use with the right hand are less convenient for left-handers. There is a multitude of examples: kitchen tools such as knives, corkscrews and scissors, garden tools , and so on. While not requiring a purpose-designed product, there are more appropriate ways for left-handers to tie shoelaces. [ 115 ] There are companies that supply products designed specifically for left-handed use. One such is Anything Left-Handed, which in 1967 opened a shop in Soho, London; the shop closed in 2006, but the company continues to supply left-handed products worldwide by mail order. [ 116 ] Writing from left to right as in many languages, in particular, with the left hand covers and tends to smear (depending upon ink drying) what was just written. Left-handed writers have developed various ways of holding a pen for best results. [ 117 ] For using a fountain pen , preferred by many left-handers, nibs ground to optimise left-handed use (pushing rather than pulling across the paper) without scratching are available. McManus noted that, as the Industrial Revolution spread across Western Europe and the United States in the 19th century, workers needed to operate complex machines that were designed with right-handers in mind. This would have made left-handers more visible and at the same time appear less capable and more clumsy. Writing left-handed with a dip pen, in particular, was prone to blots and smearing. Moreover, apart from inconvenience, left-handed people have historically been considered unlucky or even malicious for their difference by the right-handed majority. In many languages, including English, the word for the direction "right" also means "correct" or "proper". Throughout history, being left-handed was considered negative, or evil. [ 118 ] Black magic is sometimes referred to as the " left-hand path ". [ 119 ] Before the development of fountain pens and other writing instruments, children were taught to write with a dip pen . While a right-hander could smoothly drag the pen across paper from left to right, a dip pen could not easily be pushed across by the left hand without digging into the paper and making blots and stains. [ 120 ] Even with more modern pens, writing from left to right, as in many languages, with the left hand covers and can smear what was just written when moving across the line. 
Into the 20th and even the 21st century, left-handed children in Uganda were beaten by schoolteachers or parents for writing with their left hand, [ 121 ] or had their left hands tied behind their backs to force them to write with their right hand. [ 122 ] As a child, the future British king George VI (1895–1952) was naturally left-handed. He was forced to write with his right hand, as was common practice at the time. He was not expected to become king, so that was not a factor. [ 123 ] Depending on the position and inclination of the writing paper, and the writing method, the left-handed writer can write as neatly and efficiently or as messily and slowly as right-handed writers. Usually the left-handed child needs to be taught how to write correctly with the left hand, since discovering a comfortable left-handed writing method on one's own may not be straightforward. [ 124 ] [ 125 ] In the Soviet school system , all left-handed children were forced to write with their right hand. [ 126 ] [ 127 ] International Left-Handers Day is held annually every August 13. [ 128 ] It was founded by the Left-Handers Club in 1992, with the club itself having been founded in 1990. [ 128 ] International Left-Handers Day is, according to the club, "an annual event when left-handers everywhere can celebrate their sinistrality (left-handedness) and increase public awareness of the advantages and disadvantages of being left-handed". [ 128 ] It celebrates their uniqueness and differences, who are from seven to ten percent of the world's population. Thousands of left-handed people in today's society have to adapt to use right-handed tools and objects. Again according to the club, "in the U.K. alone there were over 20 regional events to mark the day in 2001—including left-v-right sports matches, a left-handed tea party, pubs using left-handed corkscrews where patrons drank and played pub games with the left hand only, and nationwide 'Lefty Zones' where left-handers' creativity, adaptability and sporting prowess were celebrated, whilst right-handers were encouraged to try out everyday left-handed objects to see just how awkward it can feel using the wrong equipment." [ 128 ] Kangaroos and other macropod marsupials show a left-hand preference for everyday tasks in the wild. 'True' handedness is unexpected in marsupials however, because unlike placental mammals , they lack a corpus callosum . Left-handedness was particularly apparent in the red kangaroo ( Macropus rufus ) and the eastern gray kangaroo ( Macropus giganteus ). Red-necked (Bennett's) wallabies ( Macropus rufogriseus ) preferentially use their left hand for behaviours that involve fine manipulation, but the right for behaviours that require more physical strength. There was less evidence for handedness in arboreal species. [ 129 ] Studies of dogs, horses, and domestic cats have shown that females of those species tend to be right-handed, while males tend to be left-handed. [ 130 ]
https://en.wikipedia.org/wiki/Handedness
A Personal Navigation Assistant ( PNA ) also known as Personal Navigation Device or Portable Navigation Device ( PND ) is a portable electronic product which combines a positioning capability (such as GPS ) and navigation functions. Some PNA devices are PDAs with limited features and can be unlocked. [ 1 ] The earliest PNAs were hand-held GPS units (circa mid-1980s) which were capable of displaying the user's location on an electronic map . These units included simple navigation functions such as course-to-steer and course-made-good. This first generation of PNAs were used by the US military. According to the analyst firm Berg Insight , there were more than 150 million turn-by-turn navigation systems worldwide in mid-2009, including about 35 million factory installed and aftermarket in-dash navigation systems, over 90 million Personal Navigation Devices (PNDs) and an estimated 28 million navigation-enabled mobile handsets with GPS. [ 2 ] The term PNA has come into widespread use with the growing popularity of automobile navigation systems . Original PNAs provided users with a map layer, real-time-traffic, and a routing engine with audio/visual cues for turn-by-turn guidance. The latest generation of PNA have sophisticated navigation functions such as parking assistance and personalization engines that enhance the user experience. To reduce total cost of ownership and time to market, most modern PNA devices such as those made by Garmin Ltd. , Mio Technology Ltd. or TomTom International BV. are running an off-the-shelf embedded operating system such as Windows CE or Embedded Linux on commodity hardware with OEM versions of popular PDA Navigation software packages such as TomTom Navigator, I-GO 2006, Netropa IntelliNav iGuidance, or Destinator. Other manufacturers such as Garmin and Magellan prefer to bundle their own software developed in-house. Because many of these devices use an embedded OS, many technically inclined users find it easy to modify PNAs to run third party software and use them for things other than navigation, such as a low-cost audio-video player or PDA replacement. GPS equipped mobile phones have now eclipsed the sale of dedicated GPS units. Nokia , Samsung Electronics , Motorola and other handset makers were predicted to sell 162 million GPS equipped phones in 2007, dwarfing the 20 million units Garmin and TomTom have forecast they will sell combined, according to iSuppli, a leading market researcher in California. The inclusion of Google Maps Navigation in Android devices such as Motorola Droid and Nokia 's announcement of free Ovi Maps has led to many people using their smartphones instead of having a separate PNA for trip navigation. Systems designed for automobiles are able to calculate routes taking into account the road network, and sometimes in real time: their popularity has led to the wide spread of navigation assistants. On some devices, the user can define the place of arrival by his postal address (and no longer only by his geographical coordinates ), and sometimes with the name of the place. Instructions are often given step by step, with directional pictograms commented on by a voice synthesis system. The navigator then gives route suggestions that the driver can follow when they are relevant. Sometimes, these navigation systems use incorrect data (Map not adapted to the vehicle or not updated, canyon effect, etc.) 
generating erroneous guidance; a driver who follows it blindly can cause an accident, potentially a fatal one, particularly with heavy vehicles such as coaches and other heavy goods vehicles . For this reason, the systems display alerts warning the user of these possible errors. Some navigators are specialized for heavy goods vehicles and take into account not only the size of the vehicle but also its mass and dimensions, in order to offer only itineraries that use suitable roads. By contrast, applications aimed at the general public, such as Waze or Coyote, are unable to give truck drivers a route that respects all the constraints (including mass and height) that this type of vehicle must follow. [ 3 ] Some versions are very comprehensive and can offer a range of additional functions.
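The constrained routing described above — finding an itinerary over the road network while respecting limits such as a vehicle's mass and height — can be illustrated with a small graph search. The following Python sketch is only an illustration under invented data (the road names, travel times and limits are hypothetical, and no particular navigation product's algorithm is implied):

import heapq

# Hypothetical road network: each edge carries a travel time (minutes) plus
# the maximum vehicle height (m) and mass (t) it can accommodate.
ROADS = {
    "depot": [("ring_road", 8, 4.5, 44.0), ("low_bridge", 3, 3.2, 44.0)],
    "low_bridge": [("city_centre", 4, 3.2, 44.0)],
    "ring_road": [("city_centre", 9, 4.5, 44.0)],
    "city_centre": [],
}

def route(start, goal, vehicle_height, vehicle_mass):
    """Dijkstra shortest path, skipping edges the vehicle cannot legally use."""
    queue = [(0, start, [start])]
    seen = set()
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == goal:
            return cost, path
        if node in seen:
            continue
        seen.add(node)
        for nxt, minutes, max_height, max_mass in ROADS.get(node, []):
            if vehicle_height <= max_height and vehicle_mass <= max_mass:
                heapq.heappush(queue, (cost + minutes, nxt, path + [nxt]))
    return None  # no admissible route for this vehicle

# A car takes the short way under the low bridge; a 4 m coach is routed around it.
print(route("depot", "city_centre", vehicle_height=1.5, vehicle_mass=2.0))
print(route("depot", "city_centre", vehicle_height=4.0, vehicle_mass=18.0))

A general-purpose application that ignores the height and mass fields would send both vehicles under the low bridge, which is precisely the failure mode described above for heavy goods vehicles.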
https://en.wikipedia.org/wiki/Handheld_GPS
The Handheld Isothermal Silver Standard Sensor (HISSS) project was sponsored by DARPA in the 2000s to develop a hand-held sensor capable of identifying biological weapon threats across the entire spectrum, including bacteria, viruses and toxins. [ 1 ] The program began in the early part of the 21st century. Its final goal was to give field units the ability to detect threat agents across the complete spectrum of biological warfare weapons. [ citation needed ] The main contractor for the project was Northrop Grumman , with subcontractors Ionian Technologies and Ribomed .
https://en.wikipedia.org/wiki/Handheld_Isothermal_Silver_Standard_Sensor
A handheld television is a portable device for watching television that usually uses a TFT LCD , OLED or CRT color display. Many of these devices resemble handheld transistor radios . In 1970, Panasonic released the first TV small enough to fit in a large pocket, the Panasonic IC TV MODEL TR-001 , [ 1 ] and Sinclair Research later released the second pocket television, the MTV-1 . Since LCD technology was not yet mature at the time, the TV used a minuscule CRT, which set the record for the smallest CRT in a commercially marketed product. In 1982, Sony released its first model, the FD-200, introduced as the “Flat TV” and later renamed the Watchman , a play on the word Walkman . [ 2 ] It had grayscale video at first; several years later, a color model with an active-matrix LCD was released. Some smartphones integrate a television receiver, although Internet broadband video is far more common. Since the switch-over to digital broadcasting, handheld TVs have reduced in size and improved in quality. [ dubious – discuss ] [ citation needed ] Portable television was eventually brought to digital broadcasting with DVB-H , although it did not see much success. These devices often have stereo 1⁄8 inch (3.5 mm) jacks carrying composite video and analog mono audio, which lets them serve as composite monitors ; some models also have mono 3.5 mm jacks for the broadcast signal, which on standard television sets is usually relayed via an F connector or Belling-Lee connector . Some include HDMI , USB and SD ports. Screen sizes vary from 1.3 to 5 inches (33 to 127 mm). Some handheld televisions also double as portable DVD players and USB personal video recorders . Portable televisions cannot fit in a pocket, but often run on batteries and include a cigarette lighter receptacle plug; pocket televisions fit in a pocket; wearable televisions are sometimes made in the form of a wristwatch .
https://en.wikipedia.org/wiki/Handheld_television
A handle is a part of, or an attachment to, an object that allows it to be grasped and manipulated by hand . The design of each type of handle involves substantial ergonomic issues, even where these are dealt with intuitively or by following tradition. Handles for tools are an important part of their function, enabling the user to exploit the tool to maximum effect. Package handles allow for convenient carrying of packages. The three nearly universal requirements of handles are: Other requirements may apply to specific handles: One major category of handles is pull handles, where one or more hands grip the handle or handles and exert force to shorten the distance between the hands and their corresponding shoulders. The three criteria stated above are universal for pull handles. Many pull handles are for lifting, mostly on objects to be carried. Horizontal pull handles are widespread, including drawer pulls , handles on latchless doors and the outside of car doors. The controls for opening car doors from the inside are usually pull handles, although their function of permitting the door to be pushed open is accomplished by an internal unlatching linkage. Pull handles are also frequent sites for the transfer of pathogens found on door handles, such as E. coli bacteria and agents of fungal or viral infections. [ 4 ] Two kinds of pull handles may involve motion in addition to the hand-focused motions described: Another category of hand-operated device requires grasping (but not pulling) and rotating the hand, together with either the lower arm or the whole arm, about its axis. When the grip required is a fist grip, as with a door handle that has an arm rather than a knob to twist, the term "handle" unambiguously applies. Another clear case is a rarer device seen on mechanically complicated doors like those of airliners, where (instead of the whole hand moving down as it also rotates, as on the door handles just described) the axis of rotation is between the thumb and the outermost fingers, so the thumb moves up if the outer fingers move down. Bicycle grips and the handles of club-style weapons , shovels and spades , axes , hammers , mallets and hatchets , baseball bats , rackets , golf clubs and croquet mallets involve a greater range of ergonomic issues.
https://en.wikipedia.org/wiki/Handle
The Handle-o-Meter is a testing machine developed by Johnson & Johnson and now manufactured by Thwing-Albert that measures the "handle" of sheeted materials: a combination of their surface friction and flexibility. Originally, it was used to test the durability and flexibility of toilet paper and paper towels . [ 1 ] It is also used to measure the stiffness of packaging film. [ 2 ] The test sample is placed over an adjustable slot, and the machine measures the resistance encountered by a penetrator blade as a pivoting arm moves the blade into the slot. [ 3 ] The data collected when nonwovens, tissue, toweling, film and textiles are tested have been shown to correlate well with the performance of these materials as finished products. [ 4 ] In operation, the material is placed over a slot that extends across the instrument platform and the operator starts the test. A beam then protrudes through the slot, and a strain gauge measures the force that the material exerts on the beam; stiffer materials offer greater resistance to the beam's movement. [ 3 ] Machine-direction and transverse stiffness are measured separately. [ 5 ] Three test modes can be applied to the material: single, double, and quadruple. The average is automatically calculated for double or quadruple tests. [ citation needed ] Restrictions on measuring the friction between the platform and the material limit the instrument's accuracy. [ 6 ]
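As the passage above notes, the instrument measures machine-direction and transverse stiffness separately and reports the average of repeated readings in the double and quadruple test modes. A minimal sketch of that bookkeeping, with hypothetical readings and units rather than values from any standard:

from statistics import mean

def handle_value(readings):
    """Average the force readings from one test; a 'single' test supplies one
    reading, a 'double' test two and a 'quadruple' test four."""
    return mean(readings)

# Hypothetical quadruple test of a packaging film, in grams-force,
# measured separately in the machine direction (MD) and cross direction (CD).
md = handle_value([62.1, 60.8, 61.5, 61.9])
cd = handle_value([48.3, 47.9, 49.0, 48.6])
print(f"MD stiffness ~ {md:.1f} gf, CD stiffness ~ {cd:.1f} gf")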
https://en.wikipedia.org/wiki/Handle-o-Meter
In mathematics, a handle decomposition of a 3-manifold allows simplification of the original 3-manifold into pieces which are easier to study. An important method used to decompose into handlebodies is the Heegaard splitting , which gives a decomposition in two handlebodies of equal genus. [ 1 ] As an example: lens spaces are orientable 3-spaces and allow decomposition into two solid tori , which are genus-one-handlebodies. The genus one non-orientable space is a space which is the union of two solid Klein bottles and corresponds to the twisted product of the 2-sphere and the 1-sphere: S 2 × ~ S 1 {\displaystyle \scriptstyle S^{2}{\tilde {\times }}S^{1}} . Each orientable 3-manifold is the union of exactly two orientable handlebodies; meanwhile, each non-orientable one needs three orientable handlebodies. The minimal genus of the glueing boundary determines what is known as the Heegaard genus . For non-orientable spaces an interesting invariant is the tri-genus .
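In standard notation, the Heegaard splitting described above can be written as M = H g ∪ φ H g ′ {\displaystyle M=H_{g}\cup _{\varphi }H'_{g}} , where H g {\displaystyle H_{g}} and H g ′ {\displaystyle H'_{g}} are handlebodies of genus g {\displaystyle g} and φ : ∂ H g → ∂ H g ′ {\displaystyle \varphi \colon \partial H_{g}\to \partial H'_{g}} is a homeomorphism identifying their boundary surfaces. The Heegaard genus mentioned above is the smallest g {\displaystyle g} for which such a splitting of M {\displaystyle M} exists; the decomposition of a lens space into two solid tori is a splitting with g = 1 {\displaystyle g=1} .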
https://en.wikipedia.org/wiki/Handle_decompositions_of_3-manifolds
A loom is a device used to weave cloth and tapestry . The basic purpose of any loom is to hold the warp threads under tension to facilitate the interweaving of the weft threads. The precise shape of the loom and its mechanics may vary, but the basic function is the same. The word "loom" derives from the Old English geloma , formed from ge- (perfective prefix) and loma , a root of unknown origin; the whole word geloma meant a utensil, tool, or machine of any kind. In 1404 "lome" was used to mean a machine to enable weaving thread into cloth. [ 1 ] [ 2 ] [ failed verification ] By 1838 "loom" had gained the additional meaning of a machine for interlacing thread. [ citation needed ] Weaving is done on two sets of threads or yarns, which cross one another. The warp threads are the ones stretched on the loom (from the Proto-Indo-European * werp , "to bend" [ 3 ] ). Each thread of the weft (i.e. "that which is woven") is inserted so that it passes over and under the warp threads. The ends of the warp threads are usually fastened to beams. One end is fastened to one beam, the other end to a second beam, so that the warp threads all lie parallel and are all the same length. The beams are held apart to keep the warp threads taut. The textile is woven starting at one end of the warp threads, and progressing towards the other end. The beam on the finished-fabric end is called the cloth beam . The other beam is called the warp beam . Beams may be used as rollers to allow the weaver to weave a piece of cloth longer than the loom. As the cloth is woven, the warp threads are gradually unrolled from the warp beam, and the woven portion of the cloth is rolled up onto the cloth beam (which is also called the takeup roll ). The portion of the fabric that has already been formed but not yet rolled up on the takeup roll is called the fell . Not all looms have two beams. For instance, warp-weighted looms have only one beam; the warp yarns hang from this beam. The bottom ends of the warp yarns are tied to dangling loom weights. A loom has to perform three principal motions : shedding, picking, and battening. There are also usually two secondary motions , because the newly constructed fabric must be wound onto cloth beam. This process is called taking up. At the same time, the warp yarns must be let off or released from the warp beam, unwinding from it. To become fully automatic, a loom needs a tertiary motion , the filling stop motion. This will brake the loom if the weft thread breaks. [ 4 ] An automatic loom requires 0.125 hp to 0.5 hp to operate (100W to 400W). A loom, then, usually needs two beams, and some way to hold them apart. It generally has additional components to make shedding, picking, and battening faster and easier. There are also often components to help take up the fell. The nature of the loom frame and the shedding, picking, and battening devices vary. Looms come in a wide variety of types, many of them specialized for specific types of weaving. They are also specialized for the lifestyle of the weaver. For instance, nomadic weavers tend to use lighter, more portable looms, while weavers living in cramped city dwellings are more likely to use a tall upright loom, or a loom that folds into a narrow space when not in use. It is possible to weave by manually threading the weft over and under the warp threads, but this is slow. Some tapestry techniques use manual shedding. Pin looms and peg looms also generally have no shedding devices. 
Pile carpets generally do not use shedding for the pile, because each pile thread is individually knotted onto the warps, but there may be shedding for the weft holding the carpet together. Usually weaving uses shedding devices. These devices pull some of the warp threads to each side, so that a shed is formed between them, and the weft is passed through the shed. There are a variety of methods for forming the shed. At least two sheds must be formed, the shed and the countershed. Two sheds is enough for tabby weave ; more complex weaves, such as twill weaves , satin weaves , diaper weaves , and figured (picture-forming) weaves, require more sheds. Heddle-rods and shedding-sticks are not the fastest way to weave, but they are very simple to make, needing only sticks and yarn. They are often used on vertical [ 5 ] and backstrap looms. [ 6 ] They allow the creation of elaborate supplementary-weft brocades . [ 6 ] They are also used on modern tapestry looms; the frequent changing of weft colour in tapestry makes weaving tapestry slow, so using faster, more complex shedding systems isn't worthwhile. The same is true of looms for handmade knotted-pile carpet ; hand-knotting each pile thread to the warp takes far more time than weaving a couple of weft threads to hold the pile in place. At its simplest, a heddle-bar is simply a stick placed across the warp and tied to individual warp threads. It is not tied to all of the warp threads; for a plain tabby weave , it is tied to every other thread. The little loops of string used to tie the wraps to the heddle bar are called heddles or leashes . When the heddle-bar is pulled perpendicular to the warp, it pulls the warp threads it is tied to out of position, creating a shed. A warp-weighted loom (see diagram) typically uses a heddle-bar, or several. It has two upright posts (C); they support a horizontal beam (D), which is cylindrical so that the finished cloth can be rolled around it, allowing the loom to be used to weave a piece of cloth taller than the loom, and preserving an ergonomic working height. The warp threads (F, and A and B) hang from the beam and rest against the shed rod (E). The heddle-bar (G) is tied to some of the warp threads (A, but not B), using loops of string called leashes (H). So when the heddle rod is pulled out and placed in the forked sticks protruding from the posts (not lettered, no technical term given in citation), the shed (1) is replaced by the counter-shed (2). By passing the weft through the shed and the counter-shed, alternately, cloth is woven. [ 7 ] Several heddle-bars can be used side-by-side; three or more can be used to weave twill weaves , for instance. There are also other ways to create counter-sheds. A shed-rod is simpler and easier to set up than a heddle-bar, and can make a counter-shed. A shed-rod (shedding stick, shed roll) is simply a stick woven through the warp threads. When pulled perpendicular to the threads (or rotated to stand on edge, for wide, flat shedding rods), it creates a counter shed. The combination of a heddle-bar and a shedding-stick can create the shed and countershed needed for a plain tabby weave, as in the video. There are also slitted heddle-rods, which are sawn partway through, with evenly-placed slits. Each warp thread goes in a slit. The odd-numbered slits are at 90 degrees to the even slits. The rod is rotated back and forth to create the shed and countershed, [ 8 ] so it is often large-diameter. [ 9 ] Tablet weaving uses cards punched with holes. 
The warp threads pass through the holes, and the cards are twisted and shifted to created varied sheds. This shedding technique is used for narrow work . It is also used to finish edges, weaving decorative selvage bands instead of hemming. There are heddles made of flip-flopping rotating hooks, which raise and lower the warp, creating sheds . The hooks, when vertical, have the weft threads looped around them horizontally. If the hooks are flopped over on side or another, the loop of weft twists, raising one or the other side of the loop, which creates the shed and countershed. [ 10 ] Rigid heddles are generally used on single-shaft looms. Odd warp threads go through the slots, and even ones through the circular holes, or vice versa. The shed is formed by lifting the heddle, and the countershed by depressing it. The warp threads in the slots stay where they are, and the ones in the circular holes are pulled back and forth. A single rigid heddle can hold all the warp threads, though sometimes multiple rigid heddles are used. Treadles may be used to drive the rigid heddle up and down. Rigid heddles or (above) are called "rigid" to distinguish them from string and wire heddles. Rigid heddles are one-piece, by non-rigid ones are multi-piece. Each warp thread has its own heald (also, confusingly, called a heddle). The heald has an eyelet at each end (for the staves, also called shafts) and one in the middle, called the mail, (for the warp thread). A row of these healds is slid onto two staves, the upper and lower staves; the staves together, or the staves together with the healds, may be called a heald frame , which is, confusingly, also called a shaft and a harness. [ 11 ] Replaceable, interchangeable healds can be smaller, allowing finer weaves. Unlike a rigid heddle, a flexible heddle cannot push the warp thread. This means that two heald frames are needed even for a plain tabby weave . Twill weaves require three or more heald frames (depending on the type of twill), and more complex figured weaves require still more frames. The different heald frames must be controlled by some mechanism, and the mechanism must be able to pull them in both directions. They are mostly controlled by treadles; creating the shed with the feet leaves the hands free to ply the shuttle. However in some tabletop looms, heald frames are also controlled by levers. [ 12 ] [ better source needed ] In treadle looms, the weaver controls the shedding with their feet, by treading on treadles . Different treadles and combinations of treadles produce different sheds. The weaver must remember the sequence of treadling needed to produce the pattern. The precise mechanism by which the treadles control the heddles varies. Rigid-heddle treadle looms do exist, but the heddles are usually flexible. Sometimes, the treadles are tied directly to the staves (with a Y-shaped bridle so they stay level). Alternately, they may be tied to a stick called a lamm , which in turn is tied to the stave, to make the motion more controlled and regular. The lamm may pivot or slide. Counterbalance looms are the most common type of treadle loom globally, as they are simple and give a smooth, quiet, quick motion. [ 13 ] The heald frames are joined together in pairs, by a cord running over heddle pulleys or a heddle roller. When one heald frame rises, the other falls. It takes a pair of treadles to control a pair of frames. Counterbalance looms are usually used with two or four frames, though some have as many as ten. 
[ 13 ] In theory each pair of heald frames has to have an equal number to warps pulled by each frame, so the patterns that can be made on them are limited. [ 14 ] In practice, fairly unbalanced tie-ups just make the shed a bit smaller, and as the shed on a counterbalance loom is adjustable in size and quite large to start with (compared to other types of loom), so it is entirely possible to weave good cloth on a counterbalance loom with unbalanced heald frames, [ 15 ] [ 13 ] unless the loom is extremely shallow (that is, the length of warp being pulled on is short, less than 1 meter or 3 feet), which exacerbates the slightly uneven tension. [ 13 ] Limited patterns are not, of course, a disadvantage when weaving plainer patterns, such as tabbies and twills. Jack looms (also called single-tieup-looms and rising-shed looms [ 16 ] ), have their treadles connected to jacks, levers that push or pull the heald frames up; the harnesses are weighted to fall back into place by gravity. Several frames can be connected to a single treadle. Frames can also be raised by more than one treadle. This allows treadles to control arbitrary combinations of frames, which vastly increases the number of different sheds that can be created from the same number of frames. Any number of treadles can also be engaged at once, meaning that the number of different sheds that can be selected is two to the power of the number of treadles. Eight is a large but reasonable number of treadles, giving a maximum of 2 8 =256 sheds (some of which will probably not have enough threads on one side to be useful). [ citation needed ] Having more possible sheds allows more complex patterns, [ 14 ] [ 16 ] such as diaper weaves . [ citation needed ] Jack looms are easy to make and to tie up (if not quite as easy as counterbalance looms). The gravity return makes jack looms heavy to operate. The shed of a jack loom is smaller for a given length of warp being pulled aside by the heddles (loom depth). The warp threads being pulled up by the jacks are also tauter than the other warp threads (unlike a counter balance loom, where the threads are pulled an equal amount in opposite directions). Uneven tension makes weaving evenly harder. It also lowers the maximum tension at which one can practically weave. [ 14 ] [ 16 ] If the threads are rough, closely-spaced, very long or numerous, it can be hard to open the sheds on the jack loom. [ 16 ] Jack looms without castles (the superstructure above the weft) have to lift the heald frames from below, and are noiser due to the impact of wood on wood; elastomer pads can reduce the noise. [ 13 ] In countermarch looms , the treadles are tied to lamms, [ 17 ] [ 14 ] which may pivot at one end or slide up and down. [ 18 ] Half of the lamms in turn connect to jacks, which also pivot, and push or pull the staves up or down. [ 17 ] Some countermarches have two horizontal jacks per shaft, others a single vertical jack. [ 13 ] Each treadle is tied to all of the heald frames, moving some of them up and the rest of them down. [ 13 ] This allows the complex combinatorial treadles of a jack loom, with the large shed and balanced, even tension of a counterbalance loom, with its quiet, light operation. Unfortunately, countermarch looms are more complex, harder to build, slower to tie up, [ 17 ] [ 14 ] [ 13 ] and more prone to malfunction. [ 17 ] [ 19 ] A drawloom is for weaving figured cloth. In a drawloom, a "figure harness" is used to control each warp thread separately, [ 20 ] allowing very complex patterns. 
A drawloom requires two operators, the weaver, and an assistant called a "drawboy" to manage the figure harness. The earliest confirmed drawloom fabrics come from the State of Chu and date c. 400 BC. [ 21 ] Some scholars speculate an independent invention in ancient Syria , since drawloom fabrics found in Dura-Europas are thought to date before 256 AD. [ 21 ] [ 22 ] The draw loom was invented in China during the Han dynasty ( State of Liu ?); [ contradictory ] [ 23 ] foot-powered multi-harness looms and jacquard looms were used for silk weaving and embroidery, both of which were cottage industries with imperial workshops. [ 24 ] The drawloom enhanced and sped up the production of silk and played a significant role in Chinese silk weaving. The loom was introduced to Persia, India, and Europe. [ 23 ] A dobby head is a device that replaces the drawboy, the weaver's helper who used to control the warp threads by pulling on draw threads. "Dobby" is a corruption of "draw boy". Mechanical dobbies pull on the draw threads using pegs in bars to lift a set of levers. The placement of the pegs determines which levers are lifted. The sequence of bars (they are strung together) effectively remembers the sequence for the weaver. Computer-controlled dobbies use solenoids instead of pegs. The Jacquard loom is a mechanical loom, invented by Joseph Marie Jacquard in 1801, which simplifies the process of manufacturing figured textiles with complex patterns such as brocade , damask , and matelasse . [ 25 ] [ 26 ] The loom is controlled by punched cards with punched holes, each row of which corresponds to one row of the design. Multiple rows of holes are punched on each card and the many cards that compose the design of the textile are strung together in order. It is based on earlier inventions by the Frenchmen Basile Bouchon (1725), Jean Baptiste Falcon (1728), and Jacques Vaucanson (1740). [ 27 ] To call it a loom is a misnomer. A Jacquard head could be attached to a power loom or a handloom, the head controlling which warp thread was raised during shedding. Multiple shuttles could be used to control the colour of the weft during picking. The Jacquard loom is the predecessor to the computer punched card readers of the 19th and 20th centuries. [ 28 ] The weft may be passed across the shed as a ball of yarn, but usually this is too bulky and unergonomic. Shuttles are designed to be slim, so they pass through the shed; to carry a lot of yarn, so the weaver does not need to refill them too often; and to be an ergonomic size and shape for the particular weaver, loom, and yarn. They may also be designed for low friction. At their simplest, these are just sticks wrapped with yarn. They may be specially shaped, as with the bobbins and bones used in tapestry-making (bobbins are used on vertical warps, and bones on horizontal ones). [ 29 ] [ 30 ] Boat shuttles may be closed (central hollow with a solid bottom) or open (central hole goes right through). The yarn may be side-feed or end-feed. [ 34 ] [ 35 ] They are commonly made for 10-cm (4-inch) and 15-cm (6-inch) bobbin lengths. [ 36 ] Hand weavers who threw a shuttle could only weave a cloth as wide as their armspan . If cloth needed to be wider, two people would do the task (often this would be an adult with a child). John Kay (1704–1779) patented the flying shuttle in 1733. The weaver held a picking stick that was attached by cords to a device at both ends of the shed. 
With a flick of the wrist, one cord was pulled and the shuttle was propelled through the shed to the other end with considerable force, speed and efficiency. A flick in the opposite direction and the shuttle was propelled back. A single weaver had control of this motion but the flying shuttle could weave much wider fabric than an arm's length at much greater speeds than had been achieved with the hand thrown shuttle. The flying shuttle was one of the key developments in weaving that helped fuel the Industrial Revolution . The whole picking motion no longer relied on manual skill and it was just a matter of time before it could be powered by something other than a human. Different types of power looms are most often defined by the way that the weft, or pick, is inserted into the warp. Many advances in weft insertion have been made in order to make manufactured cloth more cost effective. Weft insertion rate is a limiting factor in production speed. As of 2010 [update] , industrial looms can weave at 2,000 weft insertions per minute. [ 37 ] There are five main types of weft insertion and they are as follows: The newest weft thread must be beaten against the fell. Battening can be done with a long stick placed in the shed parallel to the weft (a sword batten), a shorter stick threaded between the warp threads perpendicular to warp and weft (a pin batten), a comb, or a reed (a comb with both ends closed, so that it has to be sleyed, that is have the warp threads threaded through it, when the loom is warped). For rigid-heddle looms, the heddle may be used as a reed. Patented in 1802, dandy looms automatically rolled up the finished cloth, keeping the fell always the same length. They significantly speeded up hand weaving (still a major part of the textile industry in the 1800s). Similar mechanisms were used in power looms. The temples act to keep the cloth from shrinking sideways as it is woven. Some warp-weighted looms had temples made of loom weights , suspended by strings so that they pulled the cloth breadthwise. [ 7 ] Other looms may have temples tied to the frame, or temples that are hooks with an adjustable shaft between them. Power looms may use temple cylinders. Pins can leave a series of holes in the selvages (these may be from stenter pins used in post-processing). Loom frames can be roughly divided, by the orientation of the warp threads, into horizontal looms and vertical looms. There are many finer divisions. Most handloom frame designs can be constructed fairly simply. [ 39 ] The back-strap loom (also known as belt loom) [ 40 ] is a simple loom with ancient roots, still used in many cultures around the world (as in the weaving of Andean textiles , and in Central, East and South Asia). [ 41 ] It consists of two sticks or bars between which the warps are stretched. One bar is attached to a fixed object and the other to the weaver, usually by means of a strap around the weaver's back. [ 42 ] The weaver leans back and uses their body weight to tension the loom. Both simple and complex textiles can be woven on backstrap looms. They produce narrowcloth : width is limited to the weaver's armspan. They can readily produce warp-faced textiles, often decorated with intricate pick-up patterns woven in complementary and supplementary warp techniques, and brocading. Balanced weaves are also possible on the backstrap loom. The warp-weighted loom is a vertical loom that may have originated in the Neolithic period. 
Its defining characteristic is hanging weights (loom weights) which keep bundles of the warp threads taut. Frequently, extra warp thread is wound around the weights. When a weaver has woven far enough down, the completed section (fell) can be rolled around the top beam, and additional lengths of warp threads can be unwound from the weights to continue. This frees the weaver from vertical size constraint. Horizontally, breadth is limited by armspan; making broadwoven cloth requires two weavers, standing side by side at the loom. Simple weaves, and complex weaves that need more than two different sheds, can both be woven on a warp-weighted loom. They can also be used to produce tapestries. In pegged looms, the beams can be simply held apart by hooking them behind pegs driven into the ground, with wedges or lashings used to adjust the tension. Pegged looms may, however, also have horizontal sidepieces holding the beams apart. Such looms are easy to set up and dismantle, and are easy to transport, so they are popular with nomadic weavers. They are generally only used for comparatively small woven articles. [ 45 ] Urbanites are unlikely to use horizontal floor looms as they take up a lot of floor space, and full-time professional weavers are unlikely to use them as they are unergonomic. Their cheapness and portability is less valuable to urban professional weavers. [ 46 ] In a treadle loom, the shedding is controlled by the feet, which tread on the treadles . The earliest evidence of a horizontal loom is found on a pottery dish in ancient Egypt , dated to 4400 BC. It was a frame loom, equipped with treadles to lift the warp threads, leaving the weaver's hands free to pass and beat the weft thread. [ 47 ] A pit loom has a pit for the treadles, reducing the stress transmitted through the much shorter frame. [ 48 ] In a wooden vertical-shaft loom, the heddles are fixed in place in the shaft. The warp threads pass alternately through a heddle, and through a space between the heddles (the shed ), so that raising the shaft raises half the threads (those passing through the heddles), and lowering the shaft lowers the same threads — the threads passing through the spaces between the heddles remain in place. A treadle loom for figured weaving may have a large number of harnesses or a control head. It can, for instance, have a Jacquard machine attached to it [ 49 ] (see Loom#Shedding methods) . Tapestry can have extremely complex wefts, as different strands of wefts of different colours are used to form the pattern. Speed is lower, and shedding and picking devices may be simpler. Looms used for weaving traditional tapestry are called not as "vertical-warp" and "horizontal-warp", but as "high-warp" or "low-warp" (the French terms haute-lisse and basse-lisse are also used in English). [ 50 ] Inkle looms are narrow looms used for narrow work . They are used to make narrow warp-faced strips such as ribbons, bands, or tape. They are often quite small; some are used on a tabletop. others are backstraps looms with a rigid heddle , and very portable. There exist very small hand-held looms known as darning looms. They are made to fit under the fabric being mended, and are often held in place by an elastic band on one side of the cloth and a groove around the loom's darning-egg portion on the other. They may have heddles made of flip-flopping rotating hooks (see Loom#Rotating-hook heddles ) . 
[ 51 ] Other devices sold as darning looms are just a darning egg and a separate comb-like piece with teeth to hook the warp over; these are used for repairing knitted garments and are like a linear knitting spool . [ 52 ] Darning looms were sold during World War Two clothing rationing in the United Kingdom [ 53 ] and Canada, [ 54 ] and some are homemade. [ 55 ] [ 56 ] Circular looms are used to create seamless tubes of fabric for products such as hosiery, sacks, clothing, fabric hoses (such as fire hoses) and the like. Tablet weaving can be used to knit tubes, including tubes that split and join. Small jigs also used for circular knitting are also sometimes called circular looms, [ 57 ] but they are used for knitting, not weaving. A power loom is a loom powered by a source of energy other than the weaver's muscles. When power looms were developed, other looms came to be referred to as handlooms . Most cloth is now woven on power looms, but some is still woven on handlooms. [ 48 ] The development of power looms was gradual. The capabilities of power looms gradually expanded, but handlooms remained the most cost-effective way to make some types of textiles for most of the 1800s. Many improvements in loom mechanisms were first applied to hand looms (like the dandy loom ), and only later integrated into power looms. Edmund Cartwright built and patented a power loom in 1785, and it was this that was adopted by the nascent cotton industry in England. The silk loom made by Jacques Vaucanson in 1745 operated on the same principles but was not developed further. The invention of the flying shuttle by John Kay allowed a hand weaver to weave broadwoven cloth without an assistant, and was also critical to the development of a commercially successful power loom. [ 58 ] Cartwright's loom was impractical but the ideas behind it were developed by numerous inventors in the Manchester area of England. By 1818, there were 32 factories containing 5,732 looms in the region. [ 59 ] The Horrocks loom was viable, but it was the Roberts Loom in 1830 that marked the turning point. [ 60 ] [ clarification needed ] Incremental changes to the three motions continued to be made. The problems of sizing, stop-motions, consistent take-up, and a temple to maintain the width remained. In 1841, Kenworthy and Bullough produced the Lancashire Loom [ 61 ] which was self-acting or semi-automatic. This enabled a youngster to run six looms at the same time. Thus, for simple calicos, the power loom became more economical to run than the handloom – with complex patterning that used a dobby or Jacquard head, jobs were still put out to handloom weavers until the 1870s. Incremental changes were made such as the Dickinson Loom , culminating in the fully automatic Northrop Loom , developed by the Keighley -born inventor Northrop, who was working for the Draper Corporation in Hopedale . This loom recharged the shuttle when the pirn was empty. The Draper E and X models became the leading products from 1909. They were challenged by synthetic fibres such as rayon . [ 62 ] By 1942, faster, more efficient, and shuttleless Sulzer and rapier looms had been introduced. [ 63 ] The loom is a symbol of cosmic creation and the structure upon which individual destiny is woven. This symbolism is encapsulated in the classical myth of Arachne who was changed into a spider by the goddess Athena , who was jealous of her skill at the godlike craft of weaving. 
[ 64 ] In Maya civilization the goddess Ixchel taught the first woman how to weave at the beginning of time. [ 65 ]
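The shedding machinery described earlier in this article — warp ends threaded onto shafts, treadles tied to shafts, and a treadling sequence worked pick by pick — can be summarised as the same "drawdown" bookkeeping weavers do on paper drafts. The sketch below assumes a jack-style rising shed and an invented 2/2 twill tie-up; it is an illustration, not a model of any particular loom:

# Threading: the shaft carrying each warp end (a straight draw on four shafts).
threading = [1, 2, 3, 4] * 4

# Tie-up for a 2/2 twill on a rising-shed (jack) loom: each treadle lifts two shafts.
tie_up = {1: {1, 2}, 2: {2, 3}, 3: {3, 4}, 4: {4, 1}}

# Treadling: which treadle is pressed for each pick (row of weft).
treadling = [1, 2, 3, 4, 1, 2, 3, 4]

for treadle in treadling:
    lifted = tie_up[treadle]
    # '#' marks a raised warp end (warp shows on the face); '.' marks the weft showing.
    print("".join("#" if shaft in lifted else "." for shaft in threading))

Pressing treadles in combination, as a jack loom allows, simply takes the union of the corresponding shaft sets, which is where the earlier count of two to the power of the number of treadles for the possible sheds comes from.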
https://en.wikipedia.org/wiki/Handloom
In cellular telecommunications , handover , or handoff , is the process of transferring an ongoing call or data session from one channel connected to the core network to another channel. In satellite communications it is the process of transferring satellite control responsibility from one earth station to another without loss or interruption of service. American English uses the term handoff , and this is most commonly used within some American organizations such as 3GPP2 and in American originated technologies such as CDMA2000 . In British English the term handover is more common, and is used within international and European organisations such as ITU-T , IETF , ETSI and 3GPP , and standardised within European originated standards such as GSM and UMTS . The term handover is more common in academic research publications and literature, while handoff is slightly more common within the IEEE and ANSI organisations. [ original research? ] In telecommunications there may be different reasons why a handover might be conducted: [ 1 ] The most basic form of handover is when a phone call in progress is redirected from its current cell (called source ) to a new cell (called target ). [ 1 ] In terrestrial networks the source and the target cells may be served from two different cell sites or from one and the same cell site (in the latter case the two cells are usually referred to as two sectors on that cell site). Such a handover, in which the source and the target are different cells (even if they are on the same cell site) is called inter-cell handover. The purpose of inter-cell handover is to maintain the call as the subscriber is moving out of the area covered by the source cell and entering the area of the target cell. A special case is possible, in which the source and the target are one and the same cell and only the used channel is changed during the handover. Such a handover, in which the cell is not changed, is called intra-cell handover. The purpose of intra-cell handover is to change one channel, which may be interfered or fading with a new clearer or less fading channel. In addition to the above classification of inter-cell and intra-cell classification of handovers, they also can be divided into hard and soft handovers: [ 1 ] Handover can also be classified on the basis of handover techniques used. Broadly they can be classified into three types: An advantage of the hard handover is that at any moment in time one call uses only one channel. The hard handover event is indeed very short and usually is not perceptible by the user. In the old analog systems it could be heard as a click or a very short beep; in digital systems it is unnoticeable. Another advantage of the hard handover is that the phone's hardware does not need to be capable of receiving two or more channels in parallel, which makes it cheaper and simpler. A disadvantage is that if a handover fails the call may be temporarily disrupted or even terminated abnormally. Technologies which use hard handovers, usually have procedures which can re-establish the connection to the source cell if the connection to the target cell cannot be made. However re-establishing this connection may not always be possible (in which case the call will be terminated) and even when possible the procedure may cause a temporary interruption to the call. 
One advantage of the soft handovers is that the connection to the source cell is broken only when a reliable connection to the target cell has been established and therefore the chances that the call will be terminated abnormally due to failed handovers are lower. However, by far a bigger advantage comes from the mere fact that simultaneously channels in multiple cells are maintained and the call could only fail if all of the channels are interfered or fade at the same time. Fading and interference in different channels are unrelated and therefore the probability of them taking place at the same moment in all channels is very low. Thus the reliability of the connection becomes higher when the call is in a soft handover. Because in a cellular network the majority of the handovers occur in places of poor coverage, where calls would frequently become unreliable when their channel is interfered or fading, soft handovers bring a significant improvement to the reliability of the calls in these places by making the interference or the fading in a single channel not critical. This advantage comes at the cost of more complex hardware in the phone, which must be capable of processing several channels in parallel. Another price to pay for soft handovers is use of several channels in the network to support just a single call. This reduces the number of remaining free channels and thus reduces the capacity of the network. By adjusting the duration of soft handovers and the size of the areas in which they occur, the network engineers can balance the benefit of extra call reliability against the price of reduced capacity. While theoretically speaking soft handovers are possible in any technology, analog or digital, the cost of implementing them for analog technologies is prohibitively high and none of the technologies that were commercially successful in the past (e.g. AMPS , TACS , NMT , etc.) had this feature. Of the digital technologies, those based on FDMA also face a higher cost for the phones (due to the need to have multiple parallel radio-frequency modules) and those based on TDMA or a combination of TDMA/FDMA, in principle, allow not so expensive implementation of soft handovers. However, none of the 2G (second-generation) technologies have this feature (e.g. GSM, D-AMPS / IS-136 , etc.). On the other hand, all CDMA based technologies, 2G and 3G (third-generation), have soft handovers. On one hand, this is facilitated by the possibility to design not so expensive phone hardware supporting soft handovers for CDMA and on the other hand, this is necessitated by the fact that without soft handovers CDMA networks may suffer from substantial interference arising due to the so-called near–far effect. In all current commercial technologies based on FDMA or on a combination of TDMA/FDMA (e.g. GSM, AMPS, IS-136/DAMPS, etc.) changing the channel during a hard handover is realised by changing the pair of used transmit/receive frequencies . For the practical realisation of handovers in a cellular network each cell is assigned a list of potential target cells, which can be used for handing over calls from this source cell to them. These potential target cells are called neighbors and the list is called neighbor list . Creating such a list for a given cell is not trivial and specialized computer tools are used. They implement different algorithms and may use for input data from field measurements or computer predictions of radio wave propagation in the areas covered by the cells. 
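The reliability argument above — that a soft-handover call drops only if every maintained channel fades or is interfered with at the same moment — amounts to multiplying per-channel outage probabilities, assuming the fades are independent. A toy illustration with made-up numbers:

def outage_probability(per_leg_fade_probs):
    """Probability that every leg of the connection fails at once,
    assuming independent fading on each leg."""
    p_all = 1.0
    for p in per_leg_fade_probs:
        p_all *= p
    return p_all

print(outage_probability([0.05]))              # one channel (hard handover): 0.05
print(outage_probability([0.05, 0.05, 0.05]))  # three-cell soft handover: 0.000125

Under these assumptions the three-leg case is 400 times less likely to drop, which is the benefit the text describes, bought at the cost of occupying three channels for a single call.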
During a call one or more parameters of the signal in the channel in the source cell are monitored and assessed in order to decide when a handover may be necessary. The downlink (forward link) and/or uplink (reverse link) directions may be monitored. The handover may be requested by the phone or by the base station (BTS) of its source cell and, in some systems, by a BTS of a neighboring cell. The phone and the BTSes of the neighboring cells monitor each other's signals and the best target candidates are selected among the neighboring cells. In some systems, mainly based on CDMA, a target candidate may be selected among the cells which are not in the neighbor list. This is done in an effort to reduce the probability of interference due to the aforementioned near–far effect. In analog systems the parameters used as criteria for requesting a hard handover are usually the received signal power and the received signal-to-noise ratio (the latter may be estimated in an analog system by inserting additional tones, with frequencies just outside the captured voice-frequency band at the transmitter and assessing the form of these tones at the receiver). In non-CDMA 2G digital systems the criteria for requesting hard handover may be based on estimates of the received signal power, bit error rate (BER) and block error/erasure rate (BLER), received quality of speech ( RxQual ), distance between the phone and the BTS (estimated from the radio signal propagation delay) and others. In CDMA systems, 2G and 3G, the most common criterion for requesting a handover is Ec/Io ratio measured in the pilot channel ( CPICH ) and/or RSCP . In CDMA systems, when the phone in soft or softer handover is connected to several cells simultaneously, it processes the received in parallel signals using a rake receiver . Each signal is processed by a module called rake finger . A usual design of a rake receiver in mobile phones includes three or more rake fingers used in soft handover state for processing signals from as many cells and one additional finger used to search for signals from other cells. The set of cells, whose signals are used during a soft handover, is referred to as the active set . If the search finger finds a sufficiently-strong signal (in terms of high Ec/Io or RSCP) from a new cell this cell is added to the active set. The cells in the neighbour list (called in CDMA neighbouring set ) are checked more frequently than the rest and thus a handover with a neighbouring cell is more likely, however a handover with others cells outside the neighbor list is also allowed (unlike in GSM, IS-136/DAMPS, AMPS, NMT, etc.). There are occurrences where a handoff is unsuccessful. Much research has been dedicated to this problem. [ example needed ] The source of the problem was discovered in the late 1980s. Because frequencies cannot be reused in adjacent cells, when a user moves from one cell to another, a new frequency must be allocated for the call. If a user moves into a cell when all available channels are in use, the user's call must be terminated. Also, there is the problem of signal interference where adjacent cells overpower each other resulting in receiver desensitization. There are also inter-technology handovers where a call's connection is transferred from one access technology to another, e.g. a call being transferred from GSM to UMTS or from CDMA IS-95 to CDMA2000 . The 3GPP UMA/GAN standard enables GSM/UMTS handoff to Wi-Fi and vice versa. 
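In simplified form, the measurement-driven decision described above compares the quality of the serving cell and a candidate cell against a hysteresis margin, and requires the condition to hold for a time-to-trigger period before a handover is requested. The sketch below is a generic illustration with invented threshold values, not the procedure of any particular standard:

def should_hand_over(serving_dbm, neighbour_dbm, seconds_better,
                     hysteresis_db=3.0, time_to_trigger_s=0.64):
    """Request a handover only if the neighbour has exceeded the serving cell
    by the hysteresis margin for at least the time-to-trigger period."""
    return (neighbour_dbm > serving_dbm + hysteresis_db
            and seconds_better >= time_to_trigger_s)

print(should_hand_over(-95.0, -90.0, seconds_better=0.8))  # True: 5 dB better, long enough
print(should_hand_over(-95.0, -93.0, seconds_better=0.8))  # False: only 2 dB better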
Different systems have different methods for handling and managing handoff requests. Some systems handle a handoff request in the same way as a newly originating call; in such systems the probability that the handoff will not be served is equal to the blocking probability of a new call. However, a call that is terminated abruptly in the middle of a conversation is more annoying to the user than a new call being blocked, so to avoid such abrupt terminations of ongoing calls, handoff requests can be given priority over new calls. This is called handoff prioritization. There are two common techniques for this: reserving a number of guard channels exclusively for handoffs, and queueing handoff requests until a channel becomes free.
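The capacity-versus-reliability trade-off can be made concrete with a small queueing sketch. The following Python snippet is not from the article; it assumes a single cell modelled as a simple birth-death (M/M/C/C-style) loss system in which the last few channels admit only handoff traffic, and compares blocking probabilities with and without that guard-channel reservation. All numerical values are made-up examples.

```python
# Illustrative sketch: blocking probabilities in a single cell with "guard
# channels" reserved for handoffs, using a simple birth-death loss model.

def blocking_probs(channels, guard, lam_new, lam_ho, mu):
    """Return (P_block_new, P_block_handoff) for a cell with `channels` total
    channels, the last `guard` of which accept only handoff traffic."""
    cutoff = channels - guard           # new calls admitted only below this occupancy
    # Unnormalised state probabilities p[n] for n busy channels.
    p = [1.0]
    for n in range(channels):
        arrival = lam_new + lam_ho if n < cutoff else lam_ho
        p.append(p[-1] * arrival / ((n + 1) * mu))
    total = sum(p)
    p = [x / total for x in p]
    p_block_new = sum(p[cutoff:])       # new call blocked when >= cutoff channels busy
    p_block_ho = p[channels]            # handoff blocked only when all channels busy
    return p_block_new, p_block_ho

# Example: 30 channels, 2 reserved for handoffs, 18 Erlangs of new traffic and
# 4 Erlangs of handoff traffic (mu = 1 call completion per unit time).
print(blocking_probs(30, 2, 18.0, 4.0, 1.0))
print(blocking_probs(30, 0, 18.0, 4.0, 1.0))  # no prioritisation, for comparison
```

With the guard channels in place, handoff requests see a noticeably lower blocking probability at the cost of slightly higher blocking for newly originating calls, which is exactly the prioritization trade-off described above.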
https://en.wikipedia.org/wiki/Handover
In wireless technology, handover keying (Hokey) refers to maintaining a secure connection seamlessly while migrating from one wireless network to another. This article about wireless technology is a stub . You can help Wikipedia by expanding it .
https://en.wikipedia.org/wiki/Handover_keying
In computing , a hang or freeze occurs when either a process or system ceases to respond to inputs . A typical example is when computer's graphical user interface (such as Microsoft Windows [ a ] ) no longer responds to the user typing on the keyboard or moving the mouse. The term covers a wide range of behaviors in both clients and servers , and is not limited to graphical user interface issues. Hangs have varied causes and symptoms, including software or hardware defects, such as an infinite loop or long-running uninterruptible computation, resource exhaustion ( thrashing ), under-performing hardware ( throttling ), external events such as a slow computer network , misconfiguration, and compatibility problems. The fundamental reason is typically resource exhaustion: resources necessary for some part of the system to run are not available, due to being in use by other processes or simply insufficient. Often the cause is an interaction of multiple factors, making "hang" a loose umbrella term rather than a technical one. A hang may be temporary if caused by a condition that resolves itself, such as slow hardware, or it may be permanent and require manual intervention, as in the case of a hardware or software logic error. Many modern operating systems provide the user with a means to forcibly terminate a hung program without rebooting or logging out ; some operating systems, such as those designed for mobile devices, may even do this automatically. In more severe hangs affecting the whole system, the only solution might be to reboot the machine, usually by power cycling with an off/on or reset button. A hang differs from a crash , in which the failure is immediate and unrelated to the responsiveness of inputs. [ citation needed ] In a multitasking operating system, it is possible for an individual process or thread to get stuck, such as blocking on a resource or getting into an infinite loop, though the effect on the overall system varies significantly. In a cooperative multitasking system, any thread that gets stuck without yielding will hang the system, as it will wedge itself as the running thread and prevent other threads from running. By contrast, modern operating systems primarily use pre-emptive multitasking , such as Windows 2000 and its successors, as well as Linux and Apple Inc. 's macOS . In these cases, a single thread getting stuck will not necessarily hang the system, as the operating system will preempt it when its time slice expires, allowing another thread to run. If a thread does hang, the scheduler may switch to another group of interdependent tasks so that all processes will not hang. [ 1 ] However, a stuck thread will still consume resources: at least an entry in scheduling, and if it is running (for instance, stuck in an infinite loop), it will consume processor cycles and power when it is scheduled, slowing the system though it does not hang it. However, even with preemptive multitasking, a system can hang, and a misbehaved or malicious task can hang the system, primarily by monopolizing some other resource, such as IO or memory, even though processor time cannot be monopolized. For example, a process that blocks the file system will often hang the system. Moving around a window on top of a hanging program during a hang may cause a window trail from redrawing. [ 2 ] Hardware can cause a computer to hang, either because it is intermittent or because it is mismatched with other hardware in the computer [ 3 ] (this can occur when one makes an upgrade ). 
Hardware can also become defective over time due to dirt or heat damage. A hang can also occur due to the fact that the programmer has incorrect termination conditions for a loop , or, in a co-operative multitasking operating system , forgetting to yield to other tasks. Said differently, many software -related hangs are caused by threads waiting for an event to occur which will never occur. [ 4 ] This is also known as an infinite loop . Another cause of hangs is a race condition in communication between processes . One process may send a signal to a second process then stop execution until it receives a response. If the second process is busy the signal will be forced to wait until the process can get to it. However, if the second process was busy sending a signal to the first process then both processes would wait forever for the other to respond to signals and never see the other’s signal (this event is known as a deadlock ). If the processes are uninterruptible they will hang and have to be shut down. If at least one of the processes is a critical kernel process the whole system may hang and have to be restarted. A computer may seem to hang when in fact it is simply processing very slowly. This can be caused by too many programs running at once, not enough memory ( RAM ), or memory fragmentation , slow hardware access (especially to remote devices), slow system APIs, etc. It can also be caused by hidden programs which were installed surreptitiously, such as spyware . In many cases programs may appear to be hung, but are making slow progress, and waiting a few minutes will allow the task to complete. Modern operating systems provide a mechanism for terminating hung processes, for instance, with the Unix kill command, or through a graphical means such as the Task Manager 's "end task" button in Windows (select the particular process in the list and press "end task"). Older systems, such as those running MS-DOS , early versions of Windows, or Classic Mac OS often needed to be completely restarted in the event of a hang. On embedded devices where human interaction is limited, a watchdog timer can reboot the computer in the event of a hang.
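The circular-wait situation described above can be reproduced in a few lines. The following Python sketch (not from the article) has two threads acquire two locks in opposite orders; a timeout is used only so that the demonstration terminates instead of genuinely hanging the interpreter.

```python
# Illustrative deadlock: each thread holds one lock and waits for the other's.
import threading, time

lock_a = threading.Lock()
lock_b = threading.Lock()

def worker(first, second, name):
    with first:
        time.sleep(0.1)                   # ensure both threads hold their first lock
        if second.acquire(timeout=2):     # wait for the lock the other thread holds
            second.release()
            print(f"{name}: finished normally")
        else:
            print(f"{name}: stuck waiting for the other thread (deadlock)")

t1 = threading.Thread(target=worker, args=(lock_a, lock_b, "thread 1"))
t2 = threading.Thread(target=worker, args=(lock_b, lock_a, "thread 2"))
t1.start(); t2.start(); t1.join(); t2.join()
```

Without the timeout, both threads would wait on each other forever; in a cooperative multitasking system, or if the stuck code were a critical kernel process, this is the kind of condition that hangs the whole machine.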
https://en.wikipedia.org/wiki/Hang_(computing)
Hangprinter is an open-source fused deposition modeling delta 3D printer notable for its unique frameless design. It was created by Torbjørn Ludvigsen residing in Sweden. [ 1 ] The Hangprinter uses relatively low cost parts and can be constructed for around US$250. [ 2 ] [ 3 ] [ 4 ] [ 5 ] The printer is part of the RepRap project , meaning many of the parts of the printer are able to be produced on the printer itself (partially self replicating). The design files for the printer are available on GitHub for download, modification and redistribution. [ 6 ] The Hangprinter v0, also called the Slideprinter, is a 2D plotter. It was designed solely to test if a 3D version could realistically be created. [ 7 ] The Hangprinter v1 uses counter weights to stay elevated. [ 8 ] All parts of the Hangprinter Version 2 are contained within a single unit which uses cables to suspend the printer within a room, allowing it to create extremely large objects over 4 meters tall. [ 2 ] [ 3 ] [ 4 ] [ 5 ] Version 3 of the Hangprinter has the motors and gears attached to the ceiling, making the carriage lighter. [ 9 ] Version 4 includes upgrades from version 3 including flex compensation, better calibration and automatic homing. [ 10 ] To enable 3D printers to economically use recycled plastic feedstocks to enable distributed recycling and additive manufacturing (DRAM) [ 11 ] [ 12 ] several types of fused granular fabrication (FGF)/ fused particle fabrication (FPF) [ 13 ] -based 3D printers have been designed and released with open source licenses. First, a large-scale printer was demonstrated [ 14 ] with a GigabotX extruder [ 15 ] based on the open source cable driven hangprinter concept. Then detailed plans using recyclebot auger techniques were released in HardwareX to build such a printer for under $1700. [ 16 ] This approach would further reduce the cost of using hangprinters to make large scale products as the cost of recycled shredded plastic is ~$1–5/kg while filament is generally around $20/kg. Makers that have built open source granulators [ 17 ] or have access to other types of waste plastic shredders (e.g. from Precious Plastic [ 18 ] ) can generate feedstock for hanging waste printers for under $1/kg, which makes large scale production with a hangprinter competitive with any conventional manufacturing process. In 2022, a patent describing the “Sky Big Area Additive Manufacturing” (SkyBAAM) system was granted to UT-Battelle, LLC, a nonprofit corporation that operates the Oak Ridge National Laboratory (ORNL). The patent describes the core features already featured in HangPrinter, causing controversy in the open source community. The RepRap project established a GoFundMe campaign to cover the legal costs in their upcoming action to challenge the patent. [ 19 ] [ 20 ] [ 21 ] In May 2023 it was announced that the US Patent Office rejected the wide claims of the SkyBAAM patent and would be settling on a much narrower patent instead. Per a post on Torbjørn Ludvigsen's blog "They largely agreed with our analysis. They rejected all the patent's original claims. They accepted a narrower version of them." [ 22 ] Per the interpretation provided in that post the narrower patent would only cover cases where every detail provided is included in the design, instead of those designs with any of the described details.
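As a rough illustration of the material-cost argument above, the arithmetic is straightforward. The per-kilogram figures below are the ones quoted in the text; the print mass is an arbitrary example value.

```python
# Rough material-cost comparison for a single large print, using the per-kg
# figures quoted above. The 12 kg print mass is a made-up example value.
mass_kg = 12.0
cost_filament = mass_kg * 20.0        # ~ $20/kg commercial filament
cost_shredded = mass_kg * 3.0         # ~ $1-5/kg recycled shredded plastic
cost_self_granulated = mass_kg * 1.0  # < $1/kg with an open source granulator
print(cost_filament, cost_shredded, cost_self_granulated)  # 240.0 36.0 12.0
```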
https://en.wikipedia.org/wiki/Hangprinter
Hani Abdulaziz Al Hussein is a Kuwaiti engineer and politician. He served as chief executive officer of the Kuwait Petroleum Corporation from 2004 to 2007 and as oil minister from February 2012 to May 2013. Hussein received a bachelor's degree in chemical engineering from the University of Tulsa in 1971. [ 1 ] After graduation Hussein joined Kuwait National Petroleum Company in February 1972 and worked there until April 1980. [ 1 ] He began at the Shuaiba refinery (1972-1974), [ 1 ] then joined the planning department, where he worked until 1977, [ 1 ] and from 1977 to 1980 he worked in the international marketing department. [ 1 ] He served as the board chairman and managing director of the Petrochemical Industries Company (PIC) from 1990 to 1995. [ 2 ] He held different posts at the Kuwait Petroleum Corporation (KPC), including managing director for oil refining and local marketing and managing director for marketing. [ 3 ] From 1998 to 2004 he was also board chairman and managing director of Kuwait National Petroleum Company (KNPC). [ 2 ] Hussein was made chief executive officer of the KPC in 2004, [ 4 ] [ 5 ] replacing Nader Sultan in the post. [ citation needed ] Hussein resigned from office in April 2007. [ citation needed ] In June 2007, then Prime Minister Nasser Al-Mohammad Al-Sabah appointed Hussein as his chief petroleum advisor. [ 6 ] Hussein was appointed oil minister in the cabinet led by Prime Minister Jaber Al Sabah on 24 February 2012, replacing Mohammad Busairi in the post. [ 4 ] [ 7 ] In a December 2012 cabinet reshuffle Hussein was reappointed to the post. [ 8 ] However, he resigned from office on 26 May 2013 due to tensions with members of the Kuwaiti parliament. [ 9 ] Finance Minister Mustafa Al Shamali was appointed as acting oil minister to succeed him, [ 10 ] and on 4 August Shamali was given the full portfolio in a cabinet reshuffle. [ 11 ] [ 12 ]
https://en.wikipedia.org/wiki/Hani_Abdulaziz_Al_Hussein
Hankinson's equation (also called Hankinson's formula or Hankinson's criterion ) [ 1 ] is a mathematical relationship for predicting the off-axis uniaxial compressive strength of wood. The formula can also be used to compute the fiber stress or the stress wave velocity at the elastic limit as a function of grain angle in wood . For a wood that has uniaxial compressive strengths of σ 0 {\displaystyle \sigma _{0}} parallel to the grain and σ 90 {\displaystyle \sigma _{90}} perpendicular to the grain, Hankinson's equation predicts that the uniaxial compressive strength of the wood in a direction at an angle α {\displaystyle \alpha } to the grain is given by Even though the original relation was based on studies of spruce , Hankinson's equation has been found to be remarkably accurate for many other types of wood. A generalized form of the Hankinson formula has also been used for predicting the uniaxial tensile strength of wood at an angle to the grain. This formula has the form [ 2 ] where the exponent n {\displaystyle n} can take values between 1.5 and 2. The stress wave velocity at angle α {\displaystyle \alpha } to the grain at the elastic limit can similarly be obtained from the Hankinson formula where V 0 {\displaystyle V_{0}} is the velocity parallel to the grain, V 90 {\displaystyle V_{90}} is the velocity perpendicular to the grain and α {\displaystyle \alpha } is the grain angle.
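A minimal numerical sketch of the relation described above, assuming the usual form of Hankinson's formula, σ_α = σ₀σ₉₀ / (σ₀ sinⁿα + σ₉₀ cosⁿα), with n = 2 in the original relation (the functional form and the strength values below are supplied here for illustration, not taken verbatim from the text):

```python
# Hankinson's formula in its usual form (assumed here): off-axis strength from
# the parallel- and perpendicular-to-grain strengths. Example numbers only.
import math

def hankinson(sigma_0, sigma_90, alpha_deg, n=2.0):
    """Uniaxial strength at angle alpha (degrees) to the grain."""
    a = math.radians(alpha_deg)
    return (sigma_0 * sigma_90) / (sigma_0 * math.sin(a) ** n +
                                   sigma_90 * math.cos(a) ** n)

# e.g. spruce-like values: 40 MPa parallel, 5 MPa perpendicular to the grain
for angle in (0, 15, 30, 45, 60, 90):
    print(angle, round(hankinson(40.0, 5.0, angle), 2))
```

At 0° the formula returns the parallel-to-grain strength and at 90° the perpendicular strength, with a smooth, rapidly falling interpolation in between; changing the exponent n toward 1.5 reproduces the generalized tensile-strength form mentioned above.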
https://en.wikipedia.org/wiki/Hankinson's_equation
The Hanle effect , [ 1 ] also known as zero-field level crossing , [ 2 ] is a reduction in the polarization of light when the atoms emitting the light are subject to a magnetic field in a particular direction, and when they have themselves been excited by polarized light. Experiments which utilize the Hanle effect include measuring the lifetime of excited states , [ 3 ] and detecting the presence of magnetic fields. [ 4 ] The first experimental evidence for the effect came from Robert W. Wood , [ 5 ] [ 6 ] and Lord Rayleigh . [ 7 ] The effect is named after Wilhelm Hanle , who was the first to explain the effect, in terms of classical physics , in Zeitschrift für Physik in 1924. [ 8 ] [ 9 ] Initially, the causes of the effect were controversial, and many theorists mistakenly thought it was a version of the Faraday effect . Attempts to understand the phenomenon were important in the subsequent development of quantum physics . [ 10 ] An early theoretical treatment of level crossing effect was given by Gregory Breit . [ 11 ] The classical explanation for this effect involves the Lorentz oscillator model , which treats the electron bound to the nucleus as a classical oscillator. When light interacts with this oscillator, it sets the electron in motion in the direction of its polarization. Consequently, the radiation emitted by this moving electron is polarized in the same direction as the incident light, as explained by classical electrodynamics. Consider light propagating along the y-axis, linearly polarized with the electric field along the x-axis, incident on a single atom in a vapor cell. The vapor cell is placed in a uniform magnetic field along the z-axis. A detector is placed to monitor the fluorescent light emitted along the z-axis from the vapor cell and measure its polarization in the x-y plane. The oscillating electric field of the incident light induces oscillations in the electron along the x axis. The electric field generated by the induced dipole along the z-axis is given by {\displaystyle \mathbf {E} (z,t)={\frac {\omega _{0}^{2}}{4\pi \epsilon _{0}c^{2}}}\,\mathbf {p} '(t-z/c)\,{\frac {\cos[kz-\omega _{0}(t-t_{0})]}{z}}} where ω_0 {\displaystyle \omega _{0}} is the angular frequency of the incident light and t_0 {\displaystyle t_{0}} is the time at which the electron is excited. The component of the fluorescent light polarized at angle α {\displaystyle \alpha } with respect to the x axis at the detector has intensity given by {\displaystyle I(\alpha ,t)={\frac {\omega _{0}^{4}p_{0}^{2}}{64\pi ^{2}\epsilon _{0}c^{3}}}\,e^{-(t-t_{0})/\tau }\left\{1+\cos \left[2\omega _{L}(t-t_{0})-2\alpha \right]\right\}} where ω_L = g_J eB/(2m_0) {\displaystyle \omega _{L}=g_{J}eB/(2m_{0})} is the Larmor frequency and 1/τ {\displaystyle 1/\tau } is the damping rate of the fluorescence radiation. Consider N {\displaystyle N} atoms in the vapor cell, excited at a constant rate R {\displaystyle R} .
The steady-state intensity at the detector can be obtained by integrating over the excitation time t_0 {\displaystyle t_{0}} from −∞ {\displaystyle -\infty } to t {\displaystyle t} , which gives (writing γ = 1/τ {\displaystyle \gamma =1/\tau } for the damping rate) {\displaystyle I(\alpha )=NR\,{\frac {p_{0}^{2}\omega _{0}^{4}}{128\pi ^{2}\epsilon _{0}c^{3}}}\left[{\frac {1}{\gamma }}+{\frac {\gamma \cos(2\alpha )}{\gamma ^{2}+4\omega _{L}^{2}}}+{\frac {2\omega _{L}\sin(2\alpha )}{\gamma ^{2}+4\omega _{L}^{2}}}\right]} The polarization degree is {\displaystyle P={\frac {I_{x}-I_{y}}{I_{x}+I_{y}}}={\frac {1}{1+(g_{J}e\tau /m_{0}c)^{2}B^{2}}}} where I_x {\displaystyle I_{x}} and I_y {\displaystyle I_{y}} are the intensities of the x-polarized component ( α = 0 {\displaystyle \alpha =0} ) and the y-polarized component ( α = π/2 {\displaystyle \alpha =\pi /2} ). This has a Lorentzian shape as a function of the magnetic field strength B {\displaystyle B} . Observation of the Hanle effect on the light emitted by the Sun is used to indirectly measure the magnetic fields within the Sun. The effect was initially considered in the context of gases, followed by applications to solid state physics . [ 12 ] It has been used to measure both the states of localized electrons [ 13 ] and free electrons . [ 14 ] For spin-polarized electrical currents, the Hanle effect provides a way to measure the effective spin lifetime in a particular device. [ 15 ] The zero-field Hanle level crossings involve magnetic fields, in which the states which are degenerate at zero magnetic field are split due to the Zeeman effect . There are also the closely analogous zero-field Stark level crossings with electric fields, in which the states which are degenerate at zero electric field are split due to the Stark effect . Tests of zero-field Stark level crossings came after the Hanle-type measurements, and are generally less common, due to the increased complexity of the experiments. [ 16 ]
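A minimal numerical sketch of this Lorentzian depolarization curve, written in SI form in terms of the Larmor frequency, P(B) = 1/(1 + (2ω_Lτ)²). The Landé factor and excited-state lifetime below are illustrative values, not numbers from the article.

```python
# Hanle depolarization curve P(B) = 1 / (1 + (2*omega_L*tau)^2), with
# omega_L = g_J * e * B / (2 * m_e). Values of g_J and tau are illustrative.
import math

E_CHARGE = 1.602e-19   # C
M_E = 9.109e-31        # kg

def polarization_degree(B, g_J=1.0, tau=16e-9):
    omega_L = g_J * E_CHARGE * B / (2 * M_E)   # Larmor frequency (rad/s)
    return 1.0 / (1.0 + (2 * omega_L * tau) ** 2)

# Half of the polarization is lost when 2*omega_L*tau = 1, i.e. at
# B_half = m_e / (g_J * e * tau) -- measuring that half-width gives the lifetime tau.
B_half = M_E / (1.0 * E_CHARGE * 16e-9)
for B in (0.0, 0.5 * B_half, B_half, 2 * B_half):
    print(f"B = {B:.3e} T  ->  P = {polarization_degree(B):.3f}")
```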
https://en.wikipedia.org/wiki/Hanle_effect
In the mathematical subject of group theory , the Hanna Neumann conjecture is a statement about the rank of the intersection of two finitely generated subgroups of a free group . The conjecture was posed by Hanna Neumann in 1957. [ 1 ] In 2011, a strengthened version of the conjecture (see below ) was proved independently by Joel Friedman [ 2 ] and by Igor Mineyev. [ 3 ] In 2017, a third proof of the Strengthened Hanna Neumann conjecture, based on homological arguments inspired by pro-p-group considerations, was published by Andrei Jaikin-Zapirain. [ 4 ] The subject of the conjecture was originally motivated by a 1954 theorem of Howson [ 5 ] who proved that the intersection of any two finitely generated subgroups of a free group is always finitely generated, that is, has finite rank . In this paper Howson proved that if H and K are subgroups of a free group F ( X ) of finite ranks n ≥ 1 and m ≥ 1 then the rank s of H ∩ K satisfies: In a 1956 paper [ 6 ] Hanna Neumann improved this bound by showing that: In a 1957 addendum, [ 1 ] Hanna Neumann further improved this bound to show that under the above assumptions She also conjectured that the factor of 2 in the above inequality is not necessary and that one always has This statement became known as the Hanna Neumann conjecture . Let H , K ≤ F ( X ) be two nontrivial finitely generated subgroups of a free group F ( X ) and let L = H ∩ K be the intersection of H and K . The conjecture says that in this case Here for a group G the quantity rank( G ) is the rank of G , that is, the smallest size of a generating set for G . Every subgroup of a free group is known to be free itself and the rank of a free group is equal to the size of any free basis of that free group. If H , K ≤ G are two subgroups of a group G and if a , b ∈ G define the same double coset HaK = HbK then the subgroups H ∩ aKa −1 and H ∩ bKb −1 are conjugate in G and thus have the same rank . It is known that if H , K ≤ F ( X ) are finitely generated subgroups of a finitely generated free group F ( X ) then there exist at most finitely many double coset classes HaK in F ( X ) such that H ∩ aKa −1 ≠ {1}. Suppose that at least one such double coset exists and let a 1 ,..., a n be all the distinct representatives of such double cosets. The strengthened Hanna Neumann conjecture , formulated by her son Walter Neumann (1990), [ 7 ] states that in this situation The strengthened Hanna Neumann conjecture was proved in 2011 by Joel Friedman. [ 2 ] Shortly after, another proof was given by Igor Mineyev. [ 3 ]
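For reference, the bounds discussed above are commonly written as follows, in the notation of the passage (n = rank(H), m = rank(K), s = rank(H ∩ K)); these are the standard forms of the statements rather than quotations from the article:

```latex
% Standard forms of the bounds and conjectures discussed above.
\begin{aligned}
  &\text{Hanna Neumann (1957):} && s - 1 \le 2\,(m-1)(n-1),\\[2pt]
  &\text{Hanna Neumann conjecture:} && s - 1 \le (m-1)(n-1),\\[2pt]
  &\text{Strengthened conjecture:} &&
    \sum_{i}\Bigl(\operatorname{rank}\bigl(H \cap a_i K a_i^{-1}\bigr)-1\Bigr)
      \le (m-1)(n-1),
\end{aligned}
% where the sum runs over the double coset representatives a_i described above.
```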
https://en.wikipedia.org/wiki/Hanna_Neumann_conjecture
Hanna Reisler (née Bregman, Hebrew : הנר רייסלר ) is an Israeli-American Professor of Chemistry at the University of Southern California . [ 1 ] She is interested in the reaction dynamics of molecules and free radicals, as well as the photodissociation in the gas phase. Reisler established the University of Southern California Women In Science and Engineering (WISE) program. [ 2 ] Reisler grew up in Israel. [ 3 ] She studied at the Hebrew University of Jerusalem , earning her undergraduate degree in 1964. [ 4 ] [ 5 ] She moved to the Weizmann Institute of Science for her graduate studies, completing her PhD in physical chemistry in 1972. [ 6 ] Reisler worked as a postdoctoral fellow with John Doering at Johns Hopkins University . [ 7 ] [ 8 ] Here she studied the inelastic scattering of ions . [ 9 ] [ 10 ] Reisler was a researcher at the Soreq Nuclear Research Center . [ 11 ] In 1977, she joined the University of Southern California as a research associate with Curt Wittig , before being appointed Associate Professor in 1987. [ 12 ] She was a member of the Center for the Study of Fast Transient Processes, which was supported by the United States Army Research Laboratory . [ 13 ] Reisler and Wittig worked on gas-surface and solid-state interactions. [ 14 ] The first paper she published while at USC was included in James T. Yardley's book on energy transfer. [ 12 ] During her tenure at USC, Reisler has worked in the Department of Electrical Engineering , Physics , and Chemistry. [ 15 ] She was made a full Professor at the University of Southern California in 1991. In his biography, Wittig described Reisler as "one of the most important faculty members of the College of Letters, Arts, and Sciences, if not the entire University". [ 12 ] She is interested in the molecular mechanisms of chemical reactions. Reisler has looked at molecular transport and guest-host interactions in thin films. [ 5 ] Her group at USC has evaluated vibrational pre-dissociation dynamics of hydrogen-bonded dimers and large clusters. [ 16 ] She also works on the reactions of diradicals and amorphous solid water. [ 16 ] In particular, she has studied chloromethyl radicals, hydroxyl radicals , and NO dimers. [ 14 ] In 2000, there were only three women members of the faculty across the USC Viterbi School of Engineering . [ 17 ] Reisler founded the Women In Science and Engineering (WISE) program at the University of Southern California. [ 2 ] [ 3 ] [ 18 ] The program was launched with an anonymous $20 million donation, and continues to receive a $1 million per year endowment. [ 3 ] [ 17 ] She advocated for more comprehensive support for scientists with families. [ 19 ] It has since provided fellowships and childcare support for students and postdocs . [ 3 ] She created a networking group that meets once a month to share information and resources. [ 20 ] She was appointed the Lloyd Armstrong Jr. Chair in Science and Engineering, which looks to advance the careers of women scientists.
[ 14 ] She is involved with the mentorship of early-career scientists. [ 21 ] Her commitment to mentoring has been recognized by the University of Southern California, which launched a mentorship award in her honour. [ 22 ] She was honoured by Johns Hopkins University and nominated to their Society of Scholars in 2018. [ 7 ]
https://en.wikipedia.org/wiki/Hanna_Reisler
In classical mechanics , the Hannay angle is a mechanics analogue of the geometric phase (or Berry phase). It was named after John Hannay of the University of Bristol , UK. Hannay first described the angle in 1985, extending the ideas of the recently formalized Berry phase to classical mechanics. [ 1 ] Consider a one-dimensional system moving in a cycle, like a pendulum. Now slowly vary a slow parameter λ {\displaystyle \lambda } , like pulling and pushing on the string of a pendulum. We can picture the motion of the system as having a fast oscillation and a slow oscillation. The fast oscillation is the motion of the pendulum, and the slow oscillation is the motion of our pulling on its string. If we picture the system in phase space, its motion sweeps out a torus. The adiabatic theorem in classical mechanics states that the action variable, which corresponds to the phase space area enclosed by the system's orbit, remains approximately constant. Thus, after one slow oscillation period, the fast oscillation is back to the same cycle, but its phase on the cycle has changed during the time. The phase change has two leading orders. The first order is the "dynamical angle", which is simply ∫ 0 T ω ( λ ) λ ˙ d t {\displaystyle \int _{0}^{T}\omega (\lambda ){\dot {\lambda }}dt} . This angle depends on the precise details of the motion, and it is of order O ( T ) {\displaystyle O(T)} . The second order is Hannay's angle, which surprisingly is independent of the precise details of λ ˙ {\displaystyle {\dot {\lambda }}} . It depends on the trajectory of λ {\displaystyle \lambda } , but not how fast or slow it traverses the trajectory. It is of order O ( 1 ) {\displaystyle O(1)} . [ 2 ] The Hannay angle is defined in the context of action-angle coordinates . In an initially time-invariant system, an action variable I α {\displaystyle I_{\alpha }} is a constant. After introducing a periodic perturbation λ ( t ) {\displaystyle \lambda (t)} , the action variable I α {\displaystyle I_{\alpha }} becomes an adiabatic invariant, and the Hannay angle θ α H {\displaystyle \theta _{\alpha }^{H}} for its corresponding angle variable can be calculated according to the path integral that represents an evolution in which the perturbation λ ( t ) {\displaystyle \lambda (t)} gets back to the original value [ 3 ] θ α H = − ∂ ∂ I α ∮ p ⋅ ∂ q ∂ λ d λ = − ∂ I α ∬ ω {\displaystyle \theta _{\alpha }^{H}=-{\frac {\partial }{\partial I_{\alpha }}}\oint \!{\boldsymbol {p}}\cdot {\frac {\partial {\boldsymbol {q}}}{\partial \lambda }}\mathrm {d} \lambda =-\partial _{I_{\alpha }}\iint \omega } where p {\displaystyle {\boldsymbol {p}}} and q {\displaystyle {\boldsymbol {q}}} are canonical variables of the Hamiltonian , and ω {\displaystyle \omega } is the symplectic Hamiltonian 2-form. The Foucault pendulum is an example from classical mechanics that is sometimes also used to illustrate the Berry phase. Below we study the Foucault pendulum using action-angle variables. For simplicity, we will avoid using the Hamilton–Jacobi equation , which is employed in the general protocol. [ 4 ] We consider a plane pendulum with frequency ω {\displaystyle \omega } under the effect of Earth's rotation whose angular velocity is Ω → = ( Ω x , Ω y , Ω z ) {\displaystyle {\vec {\Omega }}=(\Omega _{x},\Omega _{y},\Omega _{z})} with amplitude denoted as Ω = | Ω → | {\displaystyle \Omega =|{\vec {\Omega }}|} . Here, the z {\displaystyle z} direction points from the center of the Earth to the pendulum. 
The Lagrangian for the pendulum is L = 1 2 m ( x ˙ 2 + y ˙ 2 ) − 1 2 m ω 2 ( x 2 + y 2 ) + m Ω z ( x y ˙ − y x ˙ ) {\displaystyle L={\frac {1}{2}}m({\dot {x}}^{2}+{\dot {y}}^{2})-{\frac {1}{2}}m\omega ^{2}(x^{2}+y^{2})+m\Omega _{z}(x{\dot {y}}-y{\dot {x}})} The corresponding motion equation is x ¨ + ω 2 x = 2 Ω z y ˙ {\displaystyle {\ddot {x}}+\omega ^{2}x=2\Omega _{z}{\dot {y}}} y ¨ + ω 2 y = − 2 Ω z x ˙ {\displaystyle {\ddot {y}}+\omega ^{2}y=-2\Omega _{z}{\dot {x}}} We then introduce an auxiliary variable ϖ = x + i y {\displaystyle \varpi =x+iy} that is in fact an angle variable. We now have an equation for ϖ {\displaystyle \varpi } : ϖ ¨ + ω 2 ϖ = − 2 i Ω z ϖ ˙ {\displaystyle {\ddot {\varpi }}+\omega ^{2}\varpi =-2i\Omega _{z}{\dot {\varpi }}} From its characteristic equation λ 2 + ω 2 = − 2 i Ω z λ {\displaystyle \lambda ^{2}+\omega ^{2}=-2i\Omega _{z}\lambda } we obtain its characteristic root (we note that Ω ≪ ω {\displaystyle \Omega \ll \omega } ) λ = − i Ω z ± i Ω z 2 + ω 2 ≈ − i Ω z ± i ω {\displaystyle \lambda =-i\Omega _{z}\pm i{\sqrt {\Omega _{z}^{2}+\omega ^{2}}}\approx -i\Omega _{z}\pm i\omega } The solution is then ϖ = e − i Ω z t ( A e i ω t + B e − i ω t ) {\displaystyle \varpi =e^{-i\Omega _{z}t}(Ae^{i\omega t}+Be^{-i\omega t})} After the Earth rotates one full rotation that is T = 2 π / Ω ≈ 24 h {\displaystyle T=2\pi /\Omega \approx 24h} , we have the phase change for ϖ {\displaystyle \varpi } Δ φ = 2 π ω Ω + 2 π Ω z Ω {\displaystyle \Delta \varphi =2\pi {\frac {\omega }{\Omega }}+2\pi {\frac {\Omega _{z}}{\Omega }}} The first term is due to dynamic effect of the pendulum and is termed as the dynamic phase, while the second term representing a geometric phase that is essentially the Hannay angle θ H = 2 π Ω z Ω {\displaystyle \theta ^{H}=2\pi {\frac {\Omega _{z}}{\Omega }}} A free rigid body tumbling in free space has two conserved quantities: energy and angular momentum vector E , L → {\displaystyle E,{\vec {L}}} . Viewed from within the rigid body's frame, the angular momentum direction is moving about, but its length is preserved. After a certain time T {\displaystyle T} , the angular momentum direction would return to its starting point. Viewed in the inertial frame, the body has undergone a rotation (since all elements in SO(3) are rotations). A classical result states that during time T {\displaystyle T} , the body has rotated by angle 2 E T / ‖ L → ‖ − Ω {\displaystyle 2ET/\|{\vec {L}}\|-\Omega } where Ω {\displaystyle \Omega } is the solid angle swept by the angular momentum direction as viewed from within the rigid body's frame. [ 5 ] The heavy top. [ 6 ] The orbit of earth, periodically perturbed by the orbit of Jupiter. [ 7 ] The rotational transform associated with the magnetic surfaces of a toroidal magnetic field with a nonplanar axis. [ 8 ] This classical mechanics –related article is a stub . You can help Wikipedia by expanding it .
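For the Foucault pendulum example above, the Hannay angle θ_H = 2πΩ_z/Ω can be evaluated numerically once Ω_z is known; at geographic latitude φ one commonly takes Ω_z = Ω sin φ (that latitude relation is an assumption supplied here, since the passage leaves Ω_z general). A minimal sketch:

```python
# Hannay angle of a Foucault pendulum after one full rotation of the Earth,
# theta_H = 2*pi*Omega_z/Omega, assuming Omega_z = Omega*sin(latitude).
import math

def hannay_angle(latitude_deg):
    """Geometric precession accumulated per day, in radians."""
    return 2 * math.pi * math.sin(math.radians(latitude_deg))

for lat in (90, 48.85, 30, 0):   # pole, Paris, 30 degrees, equator
    print(f"latitude {lat:6.2f} deg: precession {math.degrees(hannay_angle(lat)):7.2f} deg/day")
```

At the pole the geometric phase is a full 360° per day, at the equator it vanishes, and at intermediate latitudes it reproduces the familiar Foucault precession rate, the dynamical term 2πω/Ω being common to all cases.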
https://en.wikipedia.org/wiki/Hannay_angle
In mathematics , Hanner's inequalities are results in the theory of L p spaces . Their proof was published in 1956 by Olof Hanner . They provide a simpler way of proving the uniform convexity of L p spaces for p ∈ (1, +∞) than the approach proposed by James A. Clarkson in 1936. Let f , g ∈ L p ( E ), where E is any measure space . If p ∈ [1, 2], then The substitutions F = f + g and G = f − g yield the second of Hanner's inequalities: For p ∈ [2, +∞) the inequalities are reversed (they remain non-strict). Note that for p = 2 {\displaystyle p=2} the inequalities become equalities which are both the parallelogram rule .
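For reference, the two inequalities referred to above are usually stated as follows (standard forms for 1 ≤ p ≤ 2; both are reversed for p ≥ 2, with equality at p = 2):

```latex
% First Hanner inequality (1 <= p <= 2):
\|f+g\|_p^p + \|f-g\|_p^p
  \;\ge\; \bigl(\|f\|_p + \|g\|_p\bigr)^p + \bigl|\,\|f\|_p - \|g\|_p\,\bigr|^p ,
% and, after the substitutions F = f + g, G = f - g, the second inequality:
2^p\bigl(\|F\|_p^p + \|G\|_p^p\bigr)
  \;\ge\; \bigl(\|F+G\|_p + \|F-G\|_p\bigr)^p + \bigl|\,\|F+G\|_p - \|F-G\|_p\,\bigr|^p .
```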
https://en.wikipedia.org/wiki/Hanner's_inequalities
Hanoch Gutfreund ( Hebrew : חנוך גוטפרוינד ; born 1935 in Kraków, Poland) is the Andre Aisenstadt Chair in theoretical physics and was the president at the Hebrew University of Jerusalem . Prior to his presidency, he was a professor at the university. Gutfreund received a Ph.D. in theoretical physics from the Hebrew University of Jerusalem in 1966. [ 1 ] [ 2 ] Gutfreund is the Andre Aisenstadt Chair in Theoretical Physics and has been a professor at the university since 1985. [ 2 ] [ 3 ] Gutfreund was earlier the Head of the Physics Institute, Head of the Advanced Studies Institute, Rector, and President of the university from 1992 to 1997 (following Yoram Ben-Porat , and succeeded by Menachem Magidor ). [ 2 ] [ 4 ] [ 5 ] [ 6 ] Gutfreund is the Director of the Einstein Center and is Hebrew University's appointee responsible for Albert Einstein 's intellectual property . [ 2 ] [ 7 ] [ 8 ] [ 9 ] He heads the executive committee of the Israel Science Foundation . [ 2 ] His writings include The Formative Years of Relativity: The History and Meaning of Einstein's Princeton Lectures (with Jürgen Renn , Princeton University Press , 2017) and The Road to Relativity: The History and Meaning of Einstein's "The Foundation of General Relativity", Featuring the Original Manuscript of Einstein's Masterpiece (with Jürgen Renn, Princeton University Press, 2017), and Einstein on Einstein: Autobiographical and Scientific Reflections (with Jürgen Renn, Princeton University Press, 2020). [ 10 ] [ 11 ] [ 12 ] Gutfreund lives in Jerusalem. [ 13 ]
https://en.wikipedia.org/wiki/Hanoch_Gutfreund
Hans Primas (June 18, 1928, Zurich - October 6, 2014) was a Swiss theoretical chemist. From 1948 to 1951 Primas studied chemistry at the Zurich University of Applied Sciences (Technikum Winterthur). In 1961, after his habilitation, he became associate professor and in 1966 full professor of physical and theoretical chemistry at the ETH Zurich. [ 1 ] In 1967-68 and from 1976 to 1978 he was the head of the Department of Chemistry at the ETH Zurich. Primas was interested in the interpretation of quantum mechanics and in the philosophy of science, especially in connection with theoretical chemistry, and wrote a treatise on the matter.
https://en.wikipedia.org/wiki/Hans_Primas
Hans Reichenbach ( / ˈ r aɪ x ən b ɑː x / ; [ 4 ] German: [ˈʁaɪçənbax] ; September 26, 1891 – April 9, 1953) was a leading philosopher of science , educator , and proponent of logical empiricism . He was influential in the areas of science , education , and of logical empiricism. He founded the Gesellschaft für empirische Philosophie (Society for Empirical Philosophy) in Berlin in 1928, also known as the " Berlin Circle ". Carl Gustav Hempel , Richard von Mises , David Hilbert and Kurt Grelling all became members of the Berlin Circle. In 1930, Reichenbach and Rudolf Carnap became editors of the journal Erkenntnis . He also made lasting contributions to the study of empiricism based on a theory of probability ; the logic and the philosophy of mathematics ; space , time , and relativity theory ; analysis of probabilistic reasoning ; and quantum mechanics . [ 5 ] In 1951, he authored The Rise of Scientific Philosophy , his most popular book. [ 6 ] [ 7 ] Hans was the second son of a Jewish merchant, Bruno Reichenbach, who had converted to Protestantism . He married Selma Menzel, a school mistress, who came from a long line of Protestant professionals which went back to the Reformation . [ 8 ] His elder brother Bernard played a significant role in the left communist movement . His younger brother, Herman was a music educator. After completing secondary school in Hamburg , Hans Reichenbach studied civil engineering at the Hochschule für Technik Stuttgart , and physics , mathematics and philosophy at various universities, including Berlin , Erlangen , Göttingen and Munich . Among his teachers were Ernst Cassirer , David Hilbert , Max Planck , Max Born , Edmund Husserl , and Arnold Sommerfeld . Reichenbach was active in youth movements and student organizations. He joined the Freistudentenschaft in 1910. [ 9 ] He attended the founding conference of the Freideutsche Jugend umbrella group at Hoher Meissner in 1913. He published articles about the university reform, the freedom of research, and against anti-Semitic infiltrations in student organizations. His older brother Bernard shared in this activism and went on to become a member of the Communist Workers' Party of Germany , representing this organisation on the Executive Committee of the Communist International . Hans wrote the Platform of the Socialist Student Party, Berlin which was published in 1918. [ 10 ] The party had remained clandestine until the November Revolution when it was formally founded with him as chairman. He also worked with Karl Wittfogel , Alexander Schwab and his other brother Herman at this time. [ 11 ] In 1919 his text Student und Sozialismus: mit einem Anhang: Programm der Sozialistischen Studentenpartei was published by Hermann Schüller , an activist with the League for Proletarian Culture . However following his attending lectures by Albert Einstein in 1919, he stopped participating in political groups. [ 12 ] Reichenbach received a degree in philosophy from the University of Erlangen in 1915 and his PhD dissertation on the theory of probability , titled Der Begriff der Wahrscheinlichkeit für die mathematische Darstellung der Wirklichkeit ( The Concept of Probability for the Mathematical Representation of Reality ) and supervised by Paul Hensel and Max Noether , was published in 1916. Reichenbach served during World War I on the Russian front, in the German army radio troops. In 1917 he was removed from active duty, due to an illness, and returned to Berlin . 
While working as a physicist and engineer, Reichenbach attended Albert Einstein 's lectures on the theory of relativity in Berlin from 1917 to 1920. In 1920 Reichenbach began teaching at the Technische Hochschule Stuttgart as Privatdozent . In the same year, he published his first book (which was accepted as his habilitation in physics at the Technische Hochschule Stuttgart) on the philosophical implications of the theory of relativity , The Theory of Relativity and A Priori Knowledge ( Relativitätstheorie und Erkenntnis Apriori ), which criticized the Kantian notion of synthetic a priori . He subsequently published Axiomatization of the Theory of Relativity (1924), From Copernicus to Einstein (1927) and The Philosophy of Space and Time (1928), the last stating the logical positivist view on the theory of relativity. Reichenbach distinguishes between axioms of connection and of coordination. Axioms of connection are those scientific laws which specify specific relations between specific physical things, like Maxwell’s equations . They describe empirical laws. Axioms of coordination are those laws which describe all things and are a priori , like Euclidean geometry and are “general rules according to which the connections take place”. For example the axioms of connection of gravitational equations are based upon the axioms of coordination of arithmetic . [ 13 ] Another distinction of his was between the 'context of discovery' and 'context of justification'. The way scientists come up with ideas is not always the same as the way they justify them, and so as separate objects of study Reichenbach distinguished between them. [ 14 ] In 1926, with the help of Albert Einstein, Max Planck and Max von Laue , Reichenbach became assistant professor in the physics department of the University of Berlin. He gained notice for his methods of teaching, as he was easily approached and his courses were open to discussion and debate. This was highly unusual at the time, although the practice is nowadays a common one. In 1928, Reichenbach founded the so-called " Berlin Circle " ( German : Die Gesellschaft für empirische Philosophie ; English: Society for Empirical Philosophy ). Among its members were Carl Gustav Hempel , Richard von Mises , David Hilbert and Kurt Grelling . The Vienna Circle manifesto lists 30 of Reichenbach's publications in a bibliography of closely related authors. In 1930 he and Rudolf Carnap began editing the journal Erkenntnis . When Adolf Hitler became Chancellor of Germany in 1933, Reichenbach was immediately dismissed from his appointment at the University of Berlin under the government's so called "Race Laws" due to his Jewish ancestry. Reichenbach himself did not practise Judaism, and his mother was a German Protestant, but he nevertheless suffered problems. He thereupon emigrated to Turkey , where he headed the department of philosophy at Istanbul University . He introduced interdisciplinary seminars and courses on scientific subjects, and in 1935 he published The Theory of Probability . In 1938, with the help of Charles W. Morris , Reichenbach moved to the United States to take up a professorship at the University of California, Los Angeles in its Philosophy Department . Reichenbach helped establish UCLA as a leading philosophy department in the United States in the post-war period. Carl Hempel , Hilary Putnam , and Wesley Salmon were perhaps his most prominent students. 
During his time there, he published several of his most notable books, including Philosophic Foundations of Quantum Mechanics in 1944, Elements of Symbolic Logic in 1947, and The Rise of Scientific Philosophy (his most popular book) in 1951. [ 6 ] [ 7 ] Reichenbach died unexpectedly of a heart attack on April 9, 1953. He was living in Los Angeles at the time, and had been working on problems in the philosophy of time and on the nature of scientific laws . As part of this he proposed a three part model of time in language, involving speech time, event time and — critically — reference time, which has been used by linguists since for describing tenses . [ 15 ] This work resulted in two books published posthumously: The Direction of Time and Nomological Statements and Admissible Operations . Hans Reichenbach manuscripts, photographs, lectures, correspondence, drawings and other related materials are maintained by the Archives of Scientific Philosophy, Special Collections, University Library System, University of Pittsburgh. [ 5 ] Much of the content has been digitized. Some more notable content includes:
https://en.wikipedia.org/wiki/Hans_Reichenbach
Hans D. Sluga ( German: [ˈsluːga] ; born 24 April 1937) is a German philosopher who spent most of his career as professor of philosophy at the University of California, Berkeley . Sluga teaches and writes on topics in the history of analytic philosophy , the history of continental philosophy , as well as on political theory, and ancient philosophy in Greece and China . He has been particularly influenced by the thought of Gottlob Frege , Ludwig Wittgenstein , Martin Heidegger , Friedrich Nietzsche , and Michel Foucault . Hans Sluga studied at the University of Bonn and the University of Munich . He subsequently obtained a BPhil [ 1 ] at Oxford , where he studied under R. M. Hare , Isaiah Berlin , Gilbert Ryle and Michael Dummett . [ 2 ] Since 1970, Sluga has been a professor of philosophy at the University of California, Berkeley , serving from 2009 as the William and Trudy Ausfahl Professor of Philosophy until his retirement in 2020. [ 3 ] He previously served as a lecturer in philosophy at University College London . Sluga describes his philosophical orientation as follows: "My overall philosophical outlook is radically historicist. I believe that we can understand ourselves only as beings with a particular evolution and history." [ 1 ] He has worked extensively on the early history of analytic philosophy. In his writings on Gottlob Frege he has sought to establish the influence of Immanuel Kant , Hermann Lotze , and of neo-Kantians like Kuno Fischer and Wilhelm Windelband on Frege's views on the foundations of mathematics and in the theory of meaning. This historically oriented approach to Frege's thought brought him into sharp conflict with Michael Dummett 's "realist" interpretation of Frege. Sluga's work in analytic philosophy has been influenced substantially by Wittgenstein to whose early and late writings he has devoted a number of studies. His writings on both Frege and Wittgenstein have contributed to the development of the study of the history of analytic philosophy as a field within analytic philosophy. [ citation needed ] Since the early 1990s Sluga has become increasingly concerned with political philosophy. In Heidegger's Crisis he set out to explore the question why philosophers from Plato till the present get so often entangled in dangerous political affairs. Sluga analyzes Heidegger's political engagement by putting it into the larger context of the development of German philosophy in the Nazi period. He seeks to show thereby that many diagnoses of Heidegger's politics are misdirected because of their overly narrow focus on the person and work of Heidegger. He challenges, in particular, the claim that Heidegger's critique of reason is to blame for his political errors by pointing out that committed "rationalists" among the German philosophers were prone to the same errors. Sluga's book seeks to show that the willingness of not only Heidegger, but also of Neo-Kantians like Bruno Bauch, Neo-Fichteans like Max Wundt, and Nietzscheans like Alfred Baeumler to involve themselves politically was ultimately due to a misconceived belief that they were living through a moment of world-historical crisis in which they were particularly called upon to intervene. His book Politics and the Search for the Common Good seeks to re-think politics in substantially new terms. 
Sluga distinguishes in it between a long tradition of "normative political theorizing" that ranges from Plato and Aristotle through Kant to contemporary writers like John Rawls and a more recent form of "diagnostic practice" that emerged in the 19th century and whose first practitioners were Karl Marx and Friedrich Nietzsche. Diagnostic political philosophy, Sluga argues, does not seek to establish political norms through a process of abstract philosophical reasoning but seeks to reach practical conclusions through a careful diagnosis of the political realities. Identifying himself with this strand of political philosophizing, Sluga proceeds to examine the thinking of Carl Schmitt , Hannah Arendt , and Michel Foucault as 20th century exemplars of the diagnostic approach. The book seeks to highlight the promise and the achievements of the diagnostic method as well as its shortcomings so far and its inherent limitations. In doing so, Sluga maps out an understanding of politics that makes use of some of Wittgenstein's methodological concepts. He characterizes politics as a family resemblance phenomenon and argues that the concept of politics does not identify a natural kind . It is therefore also mistaken to assume that there is a single common good at which all politics aims. Similarly, we must forgo the belief that there is a best form of government (as, e.g., democracy). Politics must, rather, be conceived as a continuous search for a common good which can have no final, conclusive answer. It is a sphere of uncertainty in which we operate always with a radically incomplete and unreliable picture of where we are and with only shifting ideas of where we want to go. The institutional forms that this search takes will change over time. Sluga agrees with other diagnostic thinkers that the classical institution of the modern state is now giving way to a new form of political order which he calls "the corporāte," [ 4 ] whose challenges are defined by the growth of human populations, rapid technological changes, and an ever more pressing environmental crisis. Sluga is a noted interpreter of Wittgenstein and has contributed significantly to Wittgenstein scholarship, including editing the 1996 volume The Cambridge Companion to Wittgenstein with David G. Stern. [ 5 ] He has argued against the relevance of increasingly more detailed and sophisticated analyses of Wittgenstein's work, even claiming that Wittgenstein himself would not have regarded this exegetical excess as a legitimate concern for philosophy. [ 6 ] In recent years, he has endorsed Rupert Read 's "post- therapeutic " or "liberatory" interpretation of Wittgenstein. [ 7 ] [ 8 ]
https://en.wikipedia.org/wiki/Hans_Sluga
Hans Thacher Clarke (27 December 1887 – 21 October 1972) was a prominent biochemist during the first half of the twentieth century. He was born in England where he received his university training, but also studied in Germany and Ireland. He spent the remainder of his life in the United States. Clarke was born in Harrow in England . His father was Joseph Thacher Clarke, an archeologist. His older sister was the composer and violist Rebecca Clarke . [ 1 ] Hans Clarke attended University College London School, and went on to enter the university as a student of chemistry, where he studied under William Ramsay , J. Norman Collie , and Samuel Smiles . He received a degree (Bachelor of Science) in 1908, and continued performing research at the university directed by Smiles and Stewart. In 1911 he was awarded an 1851 Exhibition Scholarship , which allowed him to study for three semesters in Berlin under Emil Fischer , and one semester with A. W. Stewart at Queen's College, Belfast . On his return he was granted the D.Sc. from London University in 1913. Clarke's father had been the European representative of US photographic pioneer company Kodak for several years, and was a personal friend of founder George Eastman . After Hans graduated in Chemistry, Eastman consulted with him a few times regarding chemistry-related processes. When World War I erupted, Eastman was forced to look for other sources of the chemicals that he had been obtaining from Germany, and he turned to Hans Clarke for assistance. At Eastman's request, Clarke moved to Rochester, New York in 1914 to assist what he assumed to be the company's considerable chemical engineering department. He was shocked to discover that he was the sole organic chemist there. Clarke stayed with Kodak until 1928, when he was invited to become the Professor of Biological Chemistry in the Columbia University College of Physicians and Surgeons . [ 2 ] His administrative skills and ability to recognize talent contributed to the growth of Columbia's biochemistry department, which by the 1940s had become one of the largest and most influential in the United States. [ 3 ] As the dark events foreshadowing World War II pushed eminent Jewish scientists out of Europe, Clarke opened his laboratory to refugee biochemists, among them E. Brand, Erwin Chargaff , Zacharias Dische , K. Meyer, David Nachmansohn , Rudolph Schoenheimer , and Heinrich Waelsch. [ 4 ] As head of Columbia's Biochemistry Department, Clarke took a personal interest in graduate students, of whom he demanded rigorous qualifications prior to admission. As time went on he devoted less time to his own research, becoming inundated with departmental and professional responsibilities. [ 5 ] Clarke's time at Kodak resulted in few publications in the chemical literature, but he aided the preparation of 26 substances to the Organic Syntheses series, and checked some 65 others. He stayed associated with Kodak for the rest of his life, only retiring as a consultant in 1969. Among other researches, he was involved in the production of penicillin in the United States. Clarke retired from Columbia in 1956 due to its mandatory retirement policy, but was able to move to Yale University , where he spent eight years in full-time research. When Yale required the space that he was occupying [ 6 ] he moved again, and did another seven years' work at the Children's Cancer Relief Foundation in Boston , Massachusetts. 
Clarke was elected to the National Academy of Sciences in 1942, and served on the boards of the Journal of the American Chemical Society and of the Journal of Biological Chemistry . He was a member of the American Philosophical Society , the American Chemical Society , the American Otological Society , and the American Society of Biological Chemists . He is probably best known for his work on the eponymously named Eschweiler-Clarke reaction . In 1973 his widow donated his voluminous personal and research papers to the American Philosophical Society. Clarke was named Assistant Director of the Office of Scientific Research and Development in 1944, which placed him in charge of coordinating penicillin production in the United States. Clarke served as science attaché to the US Embassy in London (1951–52). He was able to work closely with Sir Robert Robinson , with whom he had edited a major book on research in penicillin (issued in 1949). [ 7 ] Clarke was chairman of the Rochester section of the American Chemical Society (1921), the New York section (1946) and the Organic Chemistry Division (1924–25). He worked on the Committee on Professional Training, and the Garvin Award Committee. He was a president of the American Society of Biological Chemists (1947). He served on several grant-allocating committees. As a member of the Otological Society he served on a grants committee from 1956 to 1962. He was Chairman of the Merck Fellowship Board of the National Academy of Sciences in 1957. Clarke was much in demand for his talents as a lucid writer and was called on to serve as editor or referee throughout his career. He sat on the editorial board of Organic Syntheses (1921–32), and on the editorial board of the Journal of Biological Chemistry (1937–51), and was associate editor of the Journal of the American Chemical Society (1928–38) [ 8 ] Clarke was an expert clarinet player, and received numerous requests to perform. His donated papers include one notebook dedicated to clarinet performance. [ 9 ]
https://en.wikipedia.org/wiki/Hans_Thacher_Clarke
Hans von Storch (born 13 August 1949) is a German climate scientist. He is a professor at the Meteorological Institute of the University of Hamburg , and (since 2001) Director of the Institute for Coastal Research at the Helmholtz Research Centre (previously: GKSS Research Center) in Geesthacht , Germany . He is a member of the advisory boards of the journal Journal of Climate . He worked at the Max Planck Institute for Meteorology from 1986 to 1995 and headed the Statistical Analysis and Modelling research group there. Storch said in testimony to the U.S. House of Representatives in 2006 that anthropogenic climate change exists: He is also known for an article in Der Spiegel he co-wrote with Nico Stehr , which states that: In December 2009, he expressed concern about the credibility of science and criticized some publicly visible scientists for simplifying and dramatizing their communications. He pointed to the German Waldsterben ( Forest dieback ) hype of the 1980s: [ 5 ] On 20 June 2013 Storch stated "So far, no one has been able to provide a compelling answer to why climate change seems to be taking a break. We're facing a puzzle. Recent CO 2 emissions have actually risen even more steeply than we feared. As a result, according to most climate models, we should have seen temperatures rise by around 0.25 degrees Celsius (0.45 degrees Fahrenheit) over the past 10 years. That hasn't happened. In fact, the increase over the last 15 years was just 0.06 degrees Celsius (0.11 degrees Fahrenheit) -- a value very close to zero. This is a serious scientific problem that the Intergovernmental Panel on Climate Change (IPCC) will have to confront when it presents its next Assessment Report late next year." [ 6 ] Hans von Storch, who also concurs with the mainstream view on global warming, [ 7 ] said that the University of East Anglia (UEA) had "violated a fundamental principle of science" by refusing to share data with other researchers. "They play science as a power game," he said. [ 8 ] In 2003, with effect from 1 August, Hans von Storch was appointed as editor-in-chief of the journal Climate Research , after having been on its editorial board since 1994. A few months before a controversial article ( Soon and Baliunas 2003 [ 9 ] ) had raised questions about the journal's decentralised review process, with no editor-in-chief, and about the editorial policy of one editor, Chris de Freitas . [ 10 ] Storch drafted and circulated an editorial on the new regime, reserving the right as editor-in-chief to reject articles proposed for acceptance by one of the editors. Following the publisher's refusal to publish the editorial unless all editors serving on the board endorsed the new policy, Storch resigned four days before he was due to take up his new position. [ 11 ] Four other editors later left the journal. Storch later told the Chronicle of Higher Education that " climate science skeptics " “had identified Climate Research as a journal where some editors were not as rigorous in the review process as is otherwise common.” [ 12 ] In late 2004, Storch's team published an article in the journal Science which tested multiproxy methods such as those used by Mann, Bradley, and Hughes, 1998, often called MBH98, [ 13 ] or Mann and Jones , [ 14 ] to obtain the global temperature variations in the past 1000 years . The test suggested that the method used in MBH98 would inherently underestimate large variations had they occurred; but this was subsequently challenged: see hockey stick graph for more detail. 
To reach this conclusion, Storch et al. used a climate model to generate a series of annual temperature maps for the world over the past several centuries. They then derived synthetic proxy records from these maps by adding white noise, applied the methods used in MBH98, a variation of principal component analysis , to the resulting data, and found that the amount of variation was considerably reduced. [ citation needed ] In April 2006, Science published a comment authored by Wahl and collaborators, asserting errors in the 2004 paper and stating that "their conclusion was based on incorrect implementation of the reconstruction procedure"; [ 15 ] it appeared together with a reply in which von Storch and his team argued that the caveats raised in the Wahl comment did not invalidate their original conclusion. The inadequacy of the MBH98 methodology for climate reconstructions was later independently confirmed in other publications, for instance by Lee, Zwiers and Tsao, 2008 [ 16 ] or by Christiansen et al., 2009. [ 17 ] In 2010, Storch received the IMSC achievement award at the International Meetings on Statistical Climatology in Edinburgh , to "recognize his key contributions to statistical downscaling, reconstruction of temperature series, analyses of climatic variability, and detection and attribution of climate change". [ 18 ] In 1977, Hans von Storch co-founded a 100-member Donald Duck Club, defending Donald Duck against accusations of indecent behavior. Between 1976 and 1985 he was publisher of a magazine on Donald Duck, Der Hamburger Donaldist . [ 19 ]
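The pseudo-proxy idea described above can be illustrated with a toy calculation. The sketch below is only a schematic illustration of the general approach, not the climate model or the MBH98 algorithm actually used: a synthetic "true" temperature history is degraded into noisy proxies, a simple regression is calibrated on a short recent window, and the variance of the resulting reconstruction is compared with that of the truth. All series, noise levels and window lengths are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
n_years, n_proxies = 1000, 10

# Synthetic "true" temperature history: slow random walk plus year-to-year noise.
truth = np.cumsum(0.02 * rng.standard_normal(n_years)) + 0.1 * rng.standard_normal(n_years)

# Pseudo-proxies: the truth degraded by additive white noise.
proxies = truth[:, None] + 0.5 * rng.standard_normal((n_years, n_proxies))

# Calibrate a simple regression on a short "instrumental" window, then reconstruct the whole period.
calib = slice(n_years - 150, n_years)
slope, intercept = np.polyfit(proxies[calib].mean(axis=1), truth[calib], 1)
recon = slope * proxies.mean(axis=1) + intercept

print("variance of truth:         ", float(truth.var()))
print("variance of reconstruction:", float(recon.var()))  # noticeably smaller
```

The loss of variance seen here is a generic property of regression-style calibration with noisy predictors, which is the kind of effect the pseudo-proxy test was designed to expose.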
https://en.wikipedia.org/wiki/Hans_von_Storch
In trigonometry , Hansen's problem is a problem in planar surveying , named after the astronomer Peter Andreas Hansen (1795–1874), who worked on the geodetic survey of Denmark. There are two known points A, B , and two unknown points P 1 , P 2 . From P 1 and P 2 an observer measures the angles made by the lines of sight to each of the other three points. The problem is to find the positions of P 1 and P 2 . See figure; the angles measured are ( α 1 , β 1 , α 2 , β 2 ) . Since it involves observations of angles made at unknown points, the problem is an example of resection (as opposed to intersection). Define the following angles: γ = ∠ P 1 A P 2 , δ = ∠ P 1 B P 2 , ϕ = ∠ P 2 A B , ψ = ∠ P 1 B A . {\displaystyle {\begin{alignedat}{5}\gamma &=\angle P_{1}AP_{2},&\quad \delta &=\angle P_{1}BP_{2},\\[4pt]\phi &=\angle P_{2}AB,&\quad \psi &=\angle P_{1}BA.\end{alignedat}}} As a first step we will solve for φ and ψ . The sum of these two unknown angles is equal to the sum of β 1 and β 2 , yielding the equation ϕ + ψ = β 1 + β 2 . {\displaystyle \phi +\psi =\beta _{1}+\beta _{2}.} A second equation can be found more laboriously, as follows. The law of sines yields A B ¯ P 2 B ¯ = sin ⁡ α 2 sin ⁡ ϕ , P 2 B ¯ P 1 P 2 ¯ = sin ⁡ β 1 sin ⁡ δ . {\displaystyle {\frac {\overline {AB}}{\overline {P_{2}B}}}={\frac {\sin \alpha _{2}}{\sin \phi }},\qquad {\frac {\overline {P_{2}B}}{\overline {P_{1}P_{2}}}}={\frac {\sin \beta _{1}}{\sin \delta }}.} Combining these, we get A B ¯ P 1 P 2 ¯ = sin ⁡ α 2 sin ⁡ β 1 sin ⁡ ϕ sin ⁡ δ . {\displaystyle {\frac {\overline {AB}}{\overline {P_{1}P_{2}}}}={\frac {\sin \alpha _{2}\sin \beta _{1}}{\sin \phi \sin \delta }}.} Entirely analogous reasoning on the other side yields A B ¯ P 1 P 2 ¯ = sin ⁡ α 1 sin ⁡ β 2 sin ⁡ ψ sin ⁡ γ . {\displaystyle {\frac {\overline {AB}}{\overline {P_{1}P_{2}}}}={\frac {\sin \alpha _{1}\sin \beta _{2}}{\sin \psi \sin \gamma }}.} Setting these two equal gives sin ⁡ ϕ sin ⁡ ψ = sin ⁡ γ sin ⁡ α 2 sin ⁡ β 1 sin ⁡ δ sin ⁡ α 1 sin ⁡ β 2 = k . {\displaystyle {\frac {\sin \phi }{\sin \psi }}={\frac {\sin \gamma \sin \alpha _{2}\sin \beta _{1}}{\sin \delta \sin \alpha _{1}\sin \beta _{2}}}=k.} Using a known trigonometric identity this ratio of sines can be expressed as the tangent of an angle difference: Where k = sin ⁡ ϕ sin ⁡ ψ . {\displaystyle k={\frac {\sin \phi }{\sin \psi }}.} This is the second equation we need. Once we solve the two equations for the two unknowns φ, ψ , we can use either of the two expressions above for A B ¯ P 1 P 2 ¯ {\displaystyle {\tfrac {\overline {AB}}{\overline {P_{1}P_{2}}}}} to find ⁠ P 1 P 2 ¯ {\displaystyle {\overline {P_{1}P_{2}}}} ⁠ since AB is known. We can then find all the other segments using the law of sines. [ 1 ] We are given four angles ( α 1 , β 1 , α 2 , β 2 ) and the distance AB . The calculation proceeds as follows: Calculate P 1 P 2 ¯ = A B ¯ sin ⁡ ϕ sin ⁡ δ sin ⁡ α 2 sin ⁡ β 1 {\displaystyle {\overline {P_{1}P_{2}}}={\overline {AB}}\ {\frac {\sin \phi \sin \delta }{\sin \alpha _{2}\sin \beta _{1}}}} or equivalently P 1 P 2 ¯ = A B ¯ sin ⁡ ψ sin ⁡ γ sin ⁡ α 1 sin ⁡ β 2 . {\displaystyle {\overline {P_{1}P_{2}}}={\overline {AB}}\ {\frac {\sin \psi \sin \gamma }{\sin \alpha _{1}\sin \beta _{2}}}.} If one of these fractions has a denominator close to zero, use the other one.
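The procedure above can also be checked numerically. The sketch below does not implement the closed-form trigonometric solution; it recovers P1 and P2 directly from the four observed angles by nonlinear least squares. The coordinates are hypothetical, and the way the four angles are labelled is an assumed reading of the figure (each unknown station measures the angle subtended by A and B, plus the angle between one known point and the other unknown station); the code is self-consistent under that convention.

```python
import numpy as np
from scipy.optimize import least_squares

def angle(at, p, q):
    """Unsigned angle at vertex `at` between the sight lines to p and q."""
    u, v = np.asarray(p) - np.asarray(at), np.asarray(q) - np.asarray(at)
    cosang = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    return np.arccos(np.clip(cosang, -1.0, 1.0))

def observed_angles(A, B, P1, P2):
    # Assumed labelling: alpha_i spans A..B seen from P_i; beta_1 spans B..P2 from P1;
    # beta_2 spans A..P1 from P2.
    return np.array([angle(P1, A, B), angle(P1, B, P2),
                     angle(P2, A, B), angle(P2, A, P1)])

# Hypothetical ground truth, used only to simulate the field measurements.
A, B = np.array([0.0, 0.0]), np.array([100.0, 0.0])
P1_true, P2_true = np.array([30.0, -80.0]), np.array([75.0, -60.0])
measured = observed_angles(A, B, P1_true, P2_true)

def residuals(x):
    return observed_angles(A, B, x[:2], x[2:]) - measured

# An initial guess on the same side of AB as the stations avoids the mirror-image solution.
solution = least_squares(residuals, x0=[20.0, -50.0, 80.0, -50.0]).x
print("P1 ≈", solution[:2], " P2 ≈", solution[2:])
```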
https://en.wikipedia.org/wiki/Hansen's_problem
Hansen solubility parameters were developed by Charles M. Hansen in his Ph.D thesis in 1967 [ 1 ] [ 2 ] as a way of predicting if one material will dissolve in another and form a solution . [ 3 ] They are based on the idea that like dissolves like where one molecule is defined as being 'like' another if it bonds to itself in a similar way. Specifically, each molecule is given three Hansen parameters, each generally measured in MPa 0.5 : These three parameters can be treated as co-ordinates for a point in three dimensions also known as the Hansen space. The nearer two molecules are in this three-dimensional space, the more likely they are to dissolve into each other. To determine if the parameters of two molecules (usually a solvent and a polymer) are within range, a value called interaction radius ( R 0 {\displaystyle R_{\mathrm {0} }} ) is given to the substance being dissolved. This value determines the radius of the sphere in Hansen space and its center is the three Hansen parameters. To calculate the distance ( R a {\displaystyle \ Ra} ) between Hansen parameters in Hansen space the following formula is used: Combining this with the interaction radius R 0 {\displaystyle R_{\mathrm {0} }} gives the relative energy difference (RED) of the system: Historically Hansen solubility parameters (HSP) have been used in industries such as paints and coatings where understanding and controlling solvent–polymer interactions was vital. Over the years their use has been extended widely to applications such as: HSP have been criticized for lacking the formal theoretical derivation of Hildebrand solubility parameters . All practical correlations of phase equilibrium involve certain assumptions that may or may not apply to a given system. In particular, all solubility parameter-based theories have a fundamental limitation that they apply only to associated solutions (i.e., they can only predict positive deviations from Raoult's law ): they cannot account for negative deviations from Raoult's law that result from effects such as solvation (often important in water-soluble polymers) or the formation of electron donor acceptor complexes. Like any simple predictive theory, HSP are best used for screening with data used to validate the predictions. Hansen parameters have been used to estimate Flory-Huggins Chi parameters, often with reasonable accuracy. The factor of 4 in front of the dispersion term in the calculation of Ra has been the subject of debate. There is some theoretical basis for the factor of four (see Ch 2 of Ref 1 and also. [ 6 ] However, there are clearly systems (e.g. Bottino et al. , "Solubility parameters of poly(vinylidene fluoride)" J. Polym. Sci. Part B: Polymer Physics 26 (4), 785-79, 1988) where the regions of solubility are far more eccentric than predicted by the standard Hansen theory. HSP effects can be over-ridden by size effects (small molecules such as methanol can give "anomalous results"). [ This quote needs a citation ] It has been shown that it is possible to calculate HSP via molecular dynamics techniques, [ 7 ] though currently [ when? ] the polar and hydrogen bonding parameters cannot reliably be partitioned in a manner that is compatible with Hansen's values. The following are limitations according to Hansen:
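A small worked example may help. The sketch below uses the conventional form of Hansen's equations (including the factor of 4 on the dispersion term discussed later in this article), namely Ra² = 4(δD1 − δD2)² + (δP1 − δP2)² + (δH1 − δH2)² and RED = Ra / R0; the specific parameter values in the example are invented placeholders rather than measured data.

```python
from math import sqrt

def hansen_distance(s1, s2):
    """Ra between two (dD, dP, dH) triples, each in MPa**0.5."""
    dD1, dP1, dH1 = s1
    dD2, dP2, dH2 = s2
    return sqrt(4 * (dD1 - dD2) ** 2 + (dP1 - dP2) ** 2 + (dH1 - dH2) ** 2)

def red(solvent, solute, R0):
    """Relative energy difference: < 1 suggests the solvent lies inside the solute's sphere."""
    return hansen_distance(solvent, solute) / R0

polymer = (17.0, 9.0, 6.0)    # hypothetical (dD, dP, dH) of the substance being dissolved
solvent = (16.0, 10.0, 8.0)   # hypothetical solvent parameters
print(red(solvent, polymer, R0=7.0))  # 3/7 ≈ 0.43, i.e. predicted to dissolve
```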
https://en.wikipedia.org/wiki/Hansen_solubility_parameter
Hantz reactions are a class of pattern-forming precipitation reactions in gels implementing a reaction–diffusion system . The precipitation patterns are forming as a reaction of two electrolytes : a highly concentrated "outer" one diffuses into a hydrogel , while the "inner" one is dissolved in the gel itself. The colloidal precipitate which builds up the patterns is trapped by the gel and kept at the location where it is formed, [ 1 ] similar to Liesegang rings . The first representative of this class of reactions was the NaOH (outer electrolyte)+CuCl 2 (inner electrolyte). [ 2 ] Later the NaOH+ AgNO 3 , [ 3 ] the CuCl 2 +K 3 [Fe(CN) 6 ], [ 4 ] the NaOH+ AlCl 3 , [ 5 ] and the NH 3 +AgNO 3 [ 6 ] reactions in several hydrogels have also proved to show similar behavior. Precipitate patterns forming in these reactions are exceptionally rich. Besides the macroscopic shapes like layered structures, helices and cardioids , regular sheets of colloidal precipitate may also emerge with a periodicity even less than 20 micrometers (microscopic patterns). The arrangement that best shows the sequence of events leading to the formation of macroscopic patters is the one in which the outer electrolyte penetrates in a thin gel sheet located between two glass plates. In this case, the diffusion front has a quasi-one-dimensional shape. [ 3 ] [ 7 ] If there are some impurities or obstacles in the gel, the precipitation may cease at these points, and the traveling precipitation front following the diffusion front will split. As the broken precipitation front advances, its active segments are getting shorter, resulting in triangle-like regions free of precipitate behind the front. The reason why the precipitation temporarily or permanently stops in these regions is that the oblique, passive edges of the precipitate act as a semipermeable membrane, blocking the diffusion of the outer electrolyte. [ 8 ] The mechanism behind the regression of the active front segments is not fully understood. It is believed that a diffusive intermediate compound forms at the active segments having reduced concentration at the sides, and a critical concentration is required for the precipitation to occur. When the outer electrolyte is poured onto the top of a gel column in a glass tube, the diffusion front takes roughly the form of a disk. In this case, the precipitation fronts involved in pattern formation can perform more complicated motions, leading to more complex patterns that depend on the outer and inner electrolyte concentration. These include the formation of multi-armed helices, intermingled cardioids, Voronoi tessellations , so-called target patterns and other, even more complex shapes. In certain conditions, for example when the cation of the inner electrolyte is Cu 2+ or Ag 2+ , regular sheets consisting of colloidal grains are formed. [ 9 ] [ 6 ] This phenomenon is especially striking when the reactions run in poly(vinyl)alcohol gels, and the speed of the precipitation front falls below about 0.3 μm/s. The finest microscopic patterns have been observed in the NaOH+AgNO 3 reactions, where the periodicity dropped below 10 μm. The chemical mechanism of this pattern formation is not fully understood, but computer simulations based on phase separation described by the Cahn–Hilliard equation with a moving source front exhibit the most important properties of the building of the microscopic patterns. [ 10 ] Defects may also be present in the regular microscopic sheets, which can even interact during the front propagation. 
These microscopic patterns have also attracted interest in various fields of micro- and nanotechnology. [ 11 ]
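The Cahn–Hilliard picture mentioned above can be illustrated with a deliberately simplified one-dimensional sketch. This is not the published model: it merely couples Cahn–Hilliard phase-separation dynamics to a localized source term that travels at constant speed, loosely mimicking a reaction front that deposits colloidal material as it passes; every parameter value is invented for illustration.

```python
import numpy as np

N, dx, dt = 256, 1.0, 0.02
gamma, D = 1.0, 1.0                  # interface-energy and mobility coefficients
v, width, rate = 0.1, 2.0, 0.03      # front speed, source width, deposition rate

x = np.arange(N) * dx
rng = np.random.default_rng(0)
c = -1.0 + 0.01 * rng.standard_normal(N)   # start in the stable "no precipitate" phase

def lap(f):                          # periodic second difference
    return (np.roll(f, 1) - 2.0 * f + np.roll(f, -1)) / dx**2

for n in range(40000):
    front = v * n * dt                               # current position of the source
    source = rate * np.exp(-((x - front) / width) ** 2)
    mu = c**3 - c - gamma * lap(c)                   # chemical potential
    c += dt * (D * lap(mu) + source)

# Behind the front (small x), c settles into alternating high/low bands,
# a crude analogue of the periodic precipitate sheets described above.
print(np.round(c[:80], 1))
```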
https://en.wikipedia.org/wiki/Hantz_reactions
The Hantzsch pyridine synthesis or Hantzsch dihydropyridine synthesis is a multi-component organic reaction between an aldehyde such as formaldehyde , 2 equivalents of a β-keto ester such as ethyl acetoacetate and a nitrogen donor such as ammonium acetate or ammonia . [ 1 ] [ 2 ] The initial reaction product is a dihydropyridine which can be oxidized in a subsequent step to a pyridine . [ 3 ] The driving force for this second reaction step is aromatization . This reaction was reported in 1881 by Arthur Rudolf Hantzsch . A 1,4-dihydropyridine dicarboxylate is also called a 1,4-DHP compound or a Hantzsch ester . These compounds are an important class of calcium channel blockers [ 2 ] and as such are commercialized as, for instance, nifedipine , amlodipine or nimodipine . The reaction has been demonstrated to proceed in water as reaction solvent and with direct aromatization by ferric chloride , manganese dioxide or potassium permanganate in a one-pot synthesis . [ 4 ] The Hantzsch dihydropyridine synthesis has also been carried out using microwave chemistry . [ 5 ] At least five significant pathways have been proposed for the Hantzsch synthesis of 1,4-dihydropyridines. Low yields and unexpected products may arise under varying reactants and reaction conditions. Previous studies have tested the reactions of preformed intermediates to determine the most likely mechanism and design successful syntheses. [ 6 ] An early study into the mechanism using 13 C and 15 N NMR indicated the intermediacy of the chalcone 6 and enamine 3 . These data suggested the following route for the reaction. [ 7 ] Later research using mass spectrometry monitoring with charge-tagged reactants supported intermediate pathway A as a likely route and showed evidence that the reaction followed two additional intermediate pathways which converge to precursor 7 . [ 6 ] Reagents likely influence the route taken, as when the methyl group of 1 is replaced by an electron-withdrawing group, the reaction instead proceeds through a diketone intermediate. [ 8 ] The classical method for the synthesis of Hantzsch 1,4-dihydropyridines, which involves a one-pot condensation of aldehydes with ethyl acetoacetate and ammonia, has several drawbacks such as harsh reaction conditions, long reaction times, and generally low yields of products. An alternative is the synthesis of 1,4-dihydropyridines in aqueous micelles catalyzed by PTSA under ultrasonic irradiation. Using the condensation of benzaldehyde , ethyl acetoacetate and ammonium acetate as a model, experiments have shown that when catalyzed by p-toluenesulfonic acid (PTSA) under ultrasonic irradiation, the reaction can give a product yield of 96% in aqueous micellar solution (SDS, 0.1 M). The reaction has also been carried out in various solvent systems, and it was discovered that ultrasonic irradiation in aqueous micelles gave better yields than solvents such as methanol, ethanol, or THF. Using the optimized reaction conditions, a series of 1,4-dihydropyridines was synthesized, all with yields above 90%. [ 9 ] Oxidation of 1,4-DHPs accounts for one of the easiest ways of accessing pyridine derivatives. [ 10 ] Common oxidants used to promote aromatization of 1,4-DHPs are CrO 3 , KMnO 4 , and HNO 3 . [ 11 ] However, aromatization is often accompanied by low chemical yields, strong oxidative conditions, burdensome workups, the formation of side products, or the need for excess oxidant.
[ 11 ] [ 12 ] As such, particular attention has been paid to developing methods of aromatization that yield pyridine derivatives under milder and more efficient conditions. Such conditions include, but are not limited to: iodine in refluxing methanol, [ 11 ] chromium dioxide (CrO 2 ), [ 12 ] sodium chlorite , [ 13 ] and metal-free, photochemical conditions using both UV and visible light. [ 14 ] Upon metabolism, 1,4-DHP-based antihypertensive drugs undergo oxidation by way of cytochrome P-450 in the liver and are thus converted to their pyridine derivatives. [ 11 ] As a result, particular attention has been paid to the aromatization of 1,4-DHPs as a means of understanding biological systems and of developing new methods of accessing pyridines. [ 13 ] As a multi-component reaction , the Hantzsch pyridine synthesis is much more atom-efficient and requires fewer reaction steps than a linear synthesis strategy. In recent years, research has looked to make this an even more environmentally friendly reaction by investigating "greener" solvents and reaction conditions. [ 15 ] One line of study has experimented with using ionic liquids as catalysts for room-temperature reactions. Ionic liquids are an easy-to-handle and non-toxic option to replace traditional catalysts. Additionally, these catalysts led to high yields at room temperature, reducing the impact of heating the reaction for an extended time. A second study used ceric ammonium nitrate (CAN) as an alternative catalyst and achieved a solvent-free room-temperature reaction. [ 16 ] The Knoevenagel–Fries modification allows for the synthesis of unsymmetrical pyridine compounds. [ 17 ]
https://en.wikipedia.org/wiki/Hantzsch_pyridine_synthesis
The Hantzsch Pyrrole Synthesis , named for Arthur Rudolf Hantzsch , is the chemical reaction of β-ketoesters ( 1 ) with ammonia (or primary amines ) and α-halo ketones ( 2 ) to give substituted pyrroles ( 3 ). [ 1 ] [ 2 ] Pyrroles are found in a variety of natural products with biological activity, so the synthesis of substituted pyrroles has important applications in medicinal chemistry. [ 3 ] [ 4 ] Alternative methods for synthesizing pyrroles exist, such as the Knorr Pyrrole Synthesis and Paal-Knorr Synthesis . Below is one published mechanism for the reaction: [ 5 ] The mechanism starts with the amine ( 1 ) attacking the β carbon of the β-ketoesters ( 2 ), and eventually forming an enamine ( 3 ). The enamine then attacks the carbonyl carbon of the α-haloketone ( 4 ). This is followed by the loss of H 2 O, giving an imine ( 5 ). This intermediate undergoes an intramolecular nucleophilic attack, forming a 5-membered ring ( 6 ). Finally, a hydrogen is eliminated and the pi-bonds are rearranged in the ring, yielding the final product ( 7 ). An alternative mechanism has been proposed in which the enamine ( 3 ) attacks the α-carbon of the α-haloketone ( 4 ) as part of a nucleophilic substitution, instead of attacking the carbonyl carbon. [ 6 ] A generalization of the Hantzsch pyrrole synthesis was developed by Estevez, et al. [ 7 ] In this reaction highly substituted pyrroles can be synthesized in a one-pot reaction, with relatively high yields (60% - 97%). This reaction involves the high-speed vibration milling (HSVM) of ketones with N -iodosuccinimide (NIS) and p -toluenesulfonic acid , to form an α-iodoketone in situ . This is followed by addition of a primary amine, a β-dicarbonyl compound, cerium(IV) ammonium nitrate (CAN) and silver nitrate , as shown in the scheme below: 2,3-dicarbonylated pyrroles can be synthesized by a version of the Hantzsch Pyrrole Synthesis. [ 8 ] These pyrroles are particularly useful for total synthesis because the carbonyl groups can be converted into a variety of other functional groups. The reaction can also occur between an enamine and an α-haloketone to synthesize substituted indoles , which also have biological significance. [ 6 ] [ 9 ] A library of substituted pyrrole analogs can be quickly produced by using continuous flow chemistry (reaction times of around 8 min.). [ 10 ] The advantage of using this method, as opposed to the in-flask synthesis, is that this one does not require the work-up and purification of several intermediates, and could therefore lead to a higher percent yield.
https://en.wikipedia.org/wiki/Hantzsch_pyrrole_synthesis
In organic chemistry , Hantzsch–Widman nomenclature , also called the extended Hantzsch–Widman system (named for Arthur Rudolf Hantzsch and Karl Oskar Widman [ sv ; de ] ), is a type of systematic chemical nomenclature used for naming heterocyclic parent hydrides having no more than ten ring members. [ 1 ] Some common heterocyclic compounds have retained names that do not follow the Hantzsch–Widman pattern. [ 2 ] [ 3 ] A Hantzsch–Widman name will always contain a prefix, which indicates the type of heteroatom present in the ring, and a stem, which indicates both the total number of atoms and the presence or absence of double bonds . The name may include more than a one prefix, if more than one type of heteroatom is present; a multiplicative prefix if there are several heteroatoms of the same type; and locants to indicate the relative positions of the different atoms. Hantzsch–Widman names may be combined with other aspects of organic nomenclature, to indicate substitution or fused-ring systems. The Hantzsch–Widman prefixes indicate the type of heteroatom(s) present in the ring. They form a priority series: If there is more than one type of heteroatom in the ring, the prefix that is higher on the list comes before the prefix that is lower on the list. For example, "oxa" (for oxygen ) always comes before "aza" (for nitrogen ) in a name. The priority order is the same as that used in substitutive nomenclature , but Hantzsch–Widman nomenclature is recommended only for use with a more restricted set of heteroatoms (see also below). [ 3 ] [ note 1 ] All of the prefixes end in "a": In Hantzsch–Widman nomenclature (but not in some other methods of naming heterocycles), the final "a" is elided when the prefix comes before a vowel. The heteroatom is assumed to have its standard bonding number for organic chemistry while the name is being constructed. The halogens have a standard bonding number of one, and so a heterocyclic ring containing a halogen as a heteroatom should have a formal positive charge. [ 4 ] In principle, lambda nomenclature could be used to specify a non-standard valence state for a heteroatom [ 3 ] but, in practice, this is rare. The choice of stem is quite complicated, and not completely standardised. The main criteria are: Notes on table: Hantzsch–Widman nomenclature is named after the German chemist Arthur Hantzsch and the Swedish chemist Oskar Widman , who independently proposed similar methods for the systematic naming of heterocyclic compounds in 1887 and 1888 respectively. [ 5 ] [ 6 ] It forms the basis for many common chemical names, such as dioxin and benzodiazepine .
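The mechanics of assembling such a name can be sketched in a few lines of code. The toy example below covers only a handful of prefixes and a simplified subset of stems (real Hantzsch–Widman usage has more ring sizes, stems and exceptions than shown here); its purpose is just to illustrate the prefix priority ordering and the elision of a terminal "a" before a vowel.

```python
# Partial priority list, highest priority first (halogens, then O, S, Se, Te, then N, P, ...).
PRIORITY = ["fluora", "chlora", "broma", "ioda",
            "oxa", "thia", "selena", "tellura",
            "aza", "phospha", "arsa",
            "sila", "germa", "bora"]

# (ring size, unsaturated?, contains nitrogen?) -> stem; a simplified subset only.
STEMS = {(3, True,  False): "irene", (3, True,  True): "irine",
         (3, False, False): "irane", (3, False, True): "iridine",
         (5, True,  False): "ole",   (5, True,  True): "ole",
         (5, False, False): "olane", (5, False, True): "olidine"}

def hw_name(prefixes, ring_size, unsaturated):
    ordered = sorted(prefixes, key=PRIORITY.index)          # higher-priority prefix first
    parts = ordered + [STEMS[(ring_size, unsaturated, "aza" in prefixes)]]
    name = ""
    for part, nxt in zip(parts, parts[1:]):
        # Elide the final "a" of a prefix when the next component starts with a vowel.
        name += part[:-1] if part.endswith("a") and nxt[0] in "aeiou" else part
    return name + parts[-1]

print(hw_name(["oxa"], 3, False))        # oxirane
print(hw_name(["aza"], 3, False))        # aziridine
print(hw_name(["aza", "oxa"], 5, True))  # oxazole (oxa is cited before aza, with elision)
```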
https://en.wikipedia.org/wiki/Hantzsch–Widman_nomenclature
Hao Wang ( Chinese : 王浩 ; pinyin : Wáng Hào ; 20 May 1921 – 13 May 1995) was a Chinese-American logician , philosopher, mathematician, and commentator on Kurt Gödel . Born in Jinan , Shandong, in the Republic of China (today in the People's Republic of China), Wang received his early education in China. He obtained a BSc degree in mathematics from the National Southwestern Associated University in 1943 and an M.A. in philosophy from Tsinghua University in 1945, where his teachers included Feng Youlan and Jin Yuelin , after which he moved to the United States for further graduate studies. He studied logic under W. V. O. Quine at Harvard University , culminating in a Ph.D. in 1948. He was appointed to an assistant professorship at Harvard the same year. During the early 1950s, Wang studied with Paul Bernays in Zürich . In 1956, he was appointed Reader in the Philosophy of Mathematics at the University of Oxford . In 1959, Wang wrote on an IBM 704 computer a program that in only 9 minutes mechanically proved several hundred mathematical logic theorems in Whitehead and Russell 's Principia Mathematica . [ 1 ] In 1961, he was appointed Gordon McKay Professor of Mathematical Logic and Applied Mathematics at Harvard. [ 2 ] From 1967 until 1991, he headed the logic research group at Rockefeller University in New York City, where he was professor of logic. In 1972, Wang joined in a group of Chinese American scientists led by Chih-Kung Jen as the first such delegation from the U.S. to the People's Republic of China. One of Wang's most important contributions was the Wang tile . [ 3 ] He showed that any Turing machine can be turned into a set of Wang tiles. The domino problem is to find an algorithm that uses a set of Wang tiles to tile the plane. The first noted example of aperiodic tiling is a set of Wang tiles, whose nonexistence Wang had once conjectured, discovered by his student Robert Berger in 1966. Wang also had a significant influence on theory of computational complexity. [ 4 ] A philosopher in his own right, [ 5 ] Wang also developed a penetrating interpretation of Ludwig Wittgenstein 's later philosophy of mathematics, which he called "anthropologism." Later he broadened this reading in the foundations of mathematics. He chronicled Kurt Gödel 's philosophical ideas and authored several books on the subject, [ 6 ] thereby providing contemporary scholars many insights elucidating Gödel's later philosophical thought. He saw his own philosophy of "substantial factualism" as a middle ground that includes both abstract theoretical formulations and the ordinary language of everyday discourse. In 1983 he was presented with the first Milestone Prize for Automated Theorem-Proving , sponsored by the International Joint Conference on Artificial Intelligence . [ 7 ] On 13 May 1995, Wang died at New York Hospital one week from his 74th birthday. According to his wife Hanne Tierney, Wang's cause of death was from lymphoma . [ 8 ] [ 9 ] In addition to Tierney, Wang was survived by a daughter and two sons. [ 8 ]
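The flavour of the domino problem can be conveyed by a small finite search. The sketch below asks only whether a given, made-up set of Wang tiles can fill an n-by-n square with matching edge colours; it is a toy illustration, since Berger's undecidability result concerns tiling the infinite plane, which no brute-force search of finite patches can settle.

```python
# Each tile is a (top, right, bottom, left) tuple of edge colours; a made-up example set.
TILES = [(0, 1, 0, 0), (1, 0, 1, 1), (0, 0, 1, 0), (1, 1, 0, 1)]

def can_tile(n):
    grid = [[None] * n for _ in range(n)]
    def place(k):
        if k == n * n:
            return True
        r, c = divmod(k, n)
        for t in TILES:
            if r > 0 and grid[r - 1][c][2] != t[0]:   # bottom of tile above must match our top
                continue
            if c > 0 and grid[r][c - 1][1] != t[3]:   # right of tile to the left must match our left
                continue
            grid[r][c] = t
            if place(k + 1):
                return True
            grid[r][c] = None
        return False
    return place(0)

print(can_tile(4))  # True for this particular set (rows alternating the first two tiles work)
```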
https://en.wikipedia.org/wiki/Hao_Wang_(academic)
Parental (paternal and maternal) haplarithms are the outputs of the haplarithmisis process. For instance, the paternal haplarithm is a chromosome-specific profile showing the paternal haplotypes of that chromosome (including homologous recombinations between the two paternal homologous chromosomes) and the copy number of those haplotypes. Importantly, haplarithm signatures allow a genomic aberration to be traced back to meiosis and/or mitosis . [ 1 ]
https://en.wikipedia.org/wiki/Haplarithm
Haplarithmisis (Greek for haplotype numbering) is a conceptual process in Genetics that enables simultaneous haplotyping and copy-number profiling of DNA samples derived from cells. Haplarithmisis also reveals parental, segregation, and mechanistic origins of genomic anomalies. [ 1 ] [ 2 ] The resulting profiles of haplarithmisis are called parental haplarithms (i.e. paternal haplarithm and maternal haplarithm ). Haplarithmisis enabled a new form of preimplantation genetic diagnosis , by which segmental and full chromosome anomalies could not only be detected but also traced back to meiosis or mitosis . [ 3 ] [ 4 ] In its first application in basic genome research, haplarithmisis led to discovery of parental genome segregation, a phenomenon that causes the segregation of entire parental genomes in distinct blastomere lineages causing cleavage-stage chimerism and mixoploidy. [ 5 ] [ 6 ]
https://en.wikipedia.org/wiki/Haplarithmisis
A haplotype is a group of alleles in an organism that are inherited together from a single parent, [ 1 ] [ 2 ] and a haplogroup ( haploid from the Greek : ἁπλοῦς , haploûs , "onefold, simple" and English: group ) is a group of similar haplotypes that share a common ancestor with a single-nucleotide polymorphism mutation . [ 3 ] More specifically, a haplotype is a combination of alleles at different chromosomal regions that are closely linked and tend to be inherited together. As a haplogroup consists of similar haplotypes, it is usually possible to predict a haplogroup from haplotypes. Haplogroups pertain to a single line of descent . As such, membership of a haplogroup, by any individual, relies on a relatively small proportion of the genetic material possessed by that individual. Each haplogroup originates from, and remains part of, a preceding single haplogroup (or paragroup ). As such, any related group of haplogroups may be precisely modelled as a nested hierarchy , in which each set (haplogroup) is also a subset of a single broader set (as opposed, that is, to biparental models, such as human family trees). Haplogroups can be further divided into subclades. Haplogroups are normally identified by an initial letter of the alphabet, and refinements consist of additional number and letter combinations, such as (for example) A → A1 → A1a . The alphabetical nomenclature was published in 2002 by the Y Chromosome Consortium . [ 4 ] In human genetics , the haplogroups most commonly studied are Y-chromosome (Y-DNA) haplogroups and mitochondrial DNA (mtDNA) haplogroups , each of which can be used to define genetic populations . Y-DNA is passed solely along the patrilineal line, from father to son, while mtDNA is passed down the matrilineal line, from mother to offspring of both sexes. Neither recombines , and thus Y-DNA and mtDNA change only by chance mutation at each generation with no intermixture between parents' genetic material. Mitochondria are small organelles that lie in the cytoplasm of eukaryotic cells , such as those of humans. Their primary function is to provide energy to the cell. Mitochondria are thought to be reduced descendants of symbiotic bacteria that were once free living. One indication that mitochondria were once free living is that each contains a circular DNA , called mitochondrial DNA (mtDNA), whose structure is more similar to bacteria than eukaryotic organisms (see endosymbiotic theory ). The overwhelming majority of a human's DNA is contained in the chromosomes in the nucleus of the cell, but mtDNA is an exception. An individual inherits their cytoplasm and the organelles contained by that cytoplasm exclusively from the maternal ovum (egg cell); sperm only pass on the chromosomal DNA, all paternal mitochondria are digested in the oocyte . When a mutation arises in a mtDNA molecule, the mutation is therefore passed down in a direct female line of descent. Mutations are changes in the nitrogen bases of the DNA sequence . Single changes from the original sequence are called single nucleotide polymorphisms (SNPs). [ dubious – discuss ] Human Y chromosomes are male-specific sex chromosomes ; nearly all humans that possess a Y chromosome will be morphologically male. Although Y chromosomes are situated in the cell nucleus and paired with X chromosomes , they only recombine with the X chromosome at the ends of the Y chromosome ; the remaining 95% of the Y chromosome does not recombine. 
Therefore, the Y chromosome and any mutations that arise in it are passed down in a direct male line of descent. Other chromosomes, autosomes and X chromosomes (when another X chromosome is available to pair with it), share their genetic material during meiosis , the process of cell division which produces gametes . Effectively this means that the genetic material from these chromosomes gets mixed up in every generation, and so any new mutations are passed down randomly from parents to offspring. The special feature that both Y chromosomes and mtDNA display is that mutations can accrue along a certain segment of both molecules and these mutations remain fixed in place on the DNA. Furthermore, the historical sequence of these mutations can also be inferred. For example, if a set of ten Y chromosomes (derived from ten different individuals) contains a mutation, A, but only five of these chromosomes contain a second mutation, B, then it is overwhelmingly likely that mutation B occurred after mutation A. Furthermore, all ten individuals who carry the chromosome with mutation A are the direct male line descendants of the same man who was the first person to carry this mutation. The first man to carry mutation B was also a direct male line descendant of this man, but is also the direct male line ancestor of all men carrying mutation B. Series of mutations such as this form molecular lineages. Furthermore, each mutation defines a set of specific Y chromosomes called a haplogroup. All humans carrying mutation A form a single haplogroup, and all humans carrying mutation B are part of this haplogroup, but mutation B also defines a more recent haplogroup (which is a subgroup or subclade ) of its own to which humans carrying only mutation A do not belong. Both mtDNA and Y chromosomes are grouped into lineages and haplogroups; these are often presented as tree-like diagrams. Human Y chromosome DNA (Y-DNA) haplogroups are named from A to T, and are further subdivided using numbers and lower case letters. Y chromosome haplogroup designations are established by the Y Chromosome Consortium. [ 5 ] Y-chromosomal Adam is the name given by researchers to the male who is the most recent common patrilineal (male-lineage) ancestor of all living humans. Major Y-chromosome haplogroups, and their geographical regions of occurrence (prior to the recent European colonization), include: (mutation M168 occurred ~50,000 bp) (mutation M89 occurred ~45,000 bp) (mutation M9 occurred ~40,000 bp ) Human mtDNA haplogroups are lettered: A , B , C , CZ , D , E , F , G , H , HV , I , J , pre- JT , JT , K , L0 , L1 , L2 , L3 , L4 , L5 , L6 , M , N , O , P , Q , R , R0 , S , T , U , V , W , X , Y , and Z . The versions of the mtDNA tree was maintained by Mannis van Oven on the PhyloTree website up to 2016. [ 9 ] When the number of new mtDNA tests started to heavily increase, other companies started to develop the mtDNA halpotree. First the company YFull introduced their MTree [ 10 ] . In 2025 FamilyTreeDNA introduced their MitoTree(Beta). [ 11 ] Phylogenetic tree of human mitochondrial DNA (mtDNA) haplogroups Mitochondrial Eve is the name given by researchers to the woman who is the most recent common matrilineal (female-lineage) ancestor of all living humans. Haplogroups can be used to define genetic populations and are often geographically oriented. 
For example, the following are common divisions for mtDNA haplogroups: The mitochondrial haplogroups are divided into three main groups, which are designated by the sequential letters L, M, N. Humanity first split within the L group between L0 and L1-6. L1-6 gave rise to other L groups, one of which, L3, split into the M and N group. The M group comprises the first wave of human migration which is thought to have evolved outside of Africa, following an eastward route along southern coastal areas. Descendant lineages of haplogroup M are now found throughout Asia, the Americas, and Melanesia, as well as in parts of the Horn of Africa and North Africa; almost none have been found in Europe. The N haplogroup may represent another macrolineage that evolved outside of Africa, heading northward instead of eastward. Shortly after the migration, the large R group split off from the N. Haplogroup R consists of two subgroups defined on the basis of their geographical distributions, one found in southeastern Asia and Oceania and the other containing almost all of the modern European populations. Haplogroup N(xR), i.e. mtDNA that belongs to the N group but not to its R subgroup, is typical of Australian aboriginal populations, while also being present at low frequencies among many populations of Eurasia and the Americas. The L type consists of nearly all Africans. The M type consists of: M1 – Ethiopian, Somali and Indian populations. Likely due to much gene flow between the Horn of Africa and the Arabian Peninsula (Saudi Arabia, Yemen, Oman), separated only by a narrow strait between the Red Sea and the Gulf of Aden. CZ – Many Siberians; branch C – Some Amerindian; branch Z – Many Saami, some Korean, some North Chinese, some Central Asian populations. D – Some Amerindians, many Siberians and northern East Asians E – Malay, Borneo, Philippines, Taiwanese aborigines , Papua New Guinea G – Many Northeast Siberians, northern East Asians, and Central Asians Q – Melanesian, Polynesian, New Guinean populations The N type consists of: A – Found in many Amerindians and some East Asians and Siberians I – 10% frequency in Northern, Eastern Europe S – Some Indigenous Australian (First Nations People of Australia) W – Some Eastern Europeans, South Asians, and southern East Asians X – Some Amerindians, Southern Siberians, Southwest Asians, and Southern Europeans Y – Most Nivkhs and people of Nias ; many Ainus, Tungusic people , and Austronesians ; also found with low frequency in some other populations of Siberia, East Asia, and Central Asia R – Large group found within the N type. Populations contained therein can be divided geographically into West Eurasia and East Eurasia. Almost all European populations and a large number of Middle-Eastern population today are contained within this branch. A smaller percentage is contained in other N type groups (See above). Below are subclades of R : B – Some Chinese, Tibetans, Mongolians, Central Asians, Koreans, Amerindians, South Siberians, Japanese, Austronesians F – Mainly found in southeastern Asia, especially Vietnam ; 8.3% in Hvar Island in Croatia. 
[ 13 ] R0 – Found in Arabia and among Ethiopians and Somalis; branch HV (branch H; branch V) – Europe, Western Asia, North Africa; Pre-JT – Arose in the Levant (modern Lebanon area), found in 25% frequency in Bedouin populations; branch JT (branch J; branch T) – North, Eastern Europe, Indus, Mediterranean U – High frequency in West Eurasia, Indian sub-continent, and Algeria, found from India to the Mediterranean and to the rest of Europe; U5 in particular shows high frequency in Scandinavia and Baltic countries with the highest frequency in the Sami people . Here is a list of Y-chromosome and MtDNA geographic haplogroup assignment proposed by Bekada et al. 2013. [ 14 ] According to SNPS haplogroups which are the age of the first extinction event tend to be around 45–50 kya. Haplogroups of the second extinction event seemed to diverge 32–35 kya according to Mal'ta . The ground zero extinction event appears to be Toba during which haplogroup CDEF* appeared to diverge into C, DE and F. C and F have almost nothing in common while D and E have plenty in common. Extinction event #1 according to current estimates occurred after Toba, although older ancient DNA could push the ground zero extinction event to long before Toba, and push the first extinction event here back to Toba. Haplogroups with extinction event notes by them have a dubious origin and this is because extinction events lead to severe bottlenecks, so all notes by these groups are just guesses. Note that the SNP counting of ancient DNA can be highly variable meaning that even though all these groups diverged around the same time no one knows when. [ 15 ] [ 16 ] Y-Chromosome
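The nested-haplogroup reasoning described earlier (every chromosome carrying mutation B also carries mutation A, so B defines a subclade within the haplogroup defined by A) can be made concrete with a toy script. The sample data below are invented; the logic simply checks subset relations between the sets of carriers of each mutation.

```python
samples = {                       # invented derived-mutation sets for ten Y chromosomes
    "s1": {"A"}, "s2": {"A"}, "s3": {"A"}, "s4": {"A"}, "s5": {"A"},
    "s6": {"A", "B"}, "s7": {"A", "B"}, "s8": {"A", "B"},
    "s9": {"A", "B"}, "s10": {"A", "B"},
}

def carriers(mutation):
    return {name for name, muts in samples.items() if mutation in muts}

mutations = {m for muts in samples.values() for m in muts}
for m_old in sorted(mutations):
    for m_new in sorted(mutations):
        if m_old != m_new and carriers(m_new) < carriers(m_old):  # strict subset of carriers
            print(f"{m_new} carriers are a subset of {m_old} carriers:",
                  f"{m_new} most likely arose later, on a chromosome already carrying {m_old}")
```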
https://en.wikipedia.org/wiki/Haplogroup
In human mitochondrial genetics , Haplogroup M8 is a human mitochondrial DNA (mtDNA) haplogroup . [ 2 ] [ 3 ] Haplogroup M8 is a descendant of haplogroup M and is divided into the subclades M8a , C and Z . It is an East Asian haplogroup. Today, haplogroup M8 is found at its highest frequency in indigenous populations of East Siberia such as the Evenk and Yukaghir , and it is one of the most common mtDNA haplogroups among the Yakut and Tuvan . Haplogroup C , the largest of the three subclades, is widely distributed among the Amerindians and the Indigenous peoples of East Siberia . Haplogroup Z , another of the three subclades, is common among the Even from Kamchatka (8/39 Z1a2a, 3/39 Z1a3, 11/39 = 28.2% Z total). Haplogroup M8a, the least well known of the three subclades, is found among Northern Han Chinese from Liaoning (16/317 = 5.0%). This phylogenetic tree of haplogroup M8 subclades is based on the paper by Mannis van Oven and Manfred Kayser Updated comprehensive phylogenetic tree of global human mitochondrial DNA variation [ 1 ] and subsequent published research. The American figure skater Kristi Yamaguchi is a member of haplogroup M8a. [ 4 ] Phylogenetic tree of human mitochondrial DNA (mtDNA) haplogroups
https://en.wikipedia.org/wiki/Haplogroup_M8
In human mitochondrial genetics , Haplogroup Z is a human mitochondrial DNA (mtDNA) haplogroup . Haplogroup Z is believed to have arisen in Central Asia, and is a descendant of haplogroup CZ . The greatest clade diversity of haplogroup Z is found in East Asia and Central Asia . However, its greatest frequency appears in some peoples of Russia , such as Evens from Kamchatka (8/39 Z1a2a, 3/39 Z1a3, 11/39 = 28.2% Z total) and from Berezovka, Srednekolymsky District, Sakha Republic (3/15 Z1a3, 1/15 Z1a2a, 4/15 = 26.7% Z total), and among the Saami people of northern Fennoscandia . With the exception of three Khakasses who belong to Z4, [ 5 ] two Yakut who belong to Z3a1, [ 5 ] two Yakut, a Yakutian Evenk, a Buryat, and an Altai Kizhi who belong to Z3(xZ3a, Z3c), [ 5 ] and the presence of the Z3c clade among populations of Altai Republic, [ 5 ] nearly all members of haplogroup Z in North Asia and Europe belong to subclades of Z1. The TMRCA of Z1 is 20,400 [95% CI 7,400 <-> 34,000] ybp according to Sukernik et al. 2012, [ 2 ] 20,400 [95% CI 7,800 <-> 33,800] ybp according to Fedorova et al. 2013, [ 5 ] or 19,600 [95% CI 12,500 <-> 29,300] ybp according to YFull. [ 3 ] Among the members (Z1, Z2, Z3, Z4, and Z7) of haplogroup Z, Nepalese populations were characterized by rare clades Z3a1a and Z7, of which Z3a1a was the most frequent sub-clade in Newar, with a frequency of 16.5%. [ 6 ] Z3, found in East Asia, North Asia, and MSEA, is the oldest member of haplogroup Z with an estimated age of ~ 25.4 Kya. [ 6 ] Haplogroup Z3a1a is also detected in other Nepalese populations, such as Magar (5.4%), Tharu, Kathmandu (mixed population) and Nepali-other (mixed population from Kathmandu and Eastern Nepal). [ 6 ] S6). Z3a1a1 detected in Tibet, Myanmar, Nepal, India, Thai-Laos and Vietnam trace their ancestral roots to China with a coalescent age of ~ 8.4 Kya [ 6 ] Fedorova et al. 2013 have reported finding Z* (xZ1a, Z3, Z4) in 1/388 Turks and 1/491 Kazakhs. These individuals should belong to Z1* (elsewhere observed in a Tofalar), Z2 (observed in Japanese), Z7 (observed in the Himalaya), Z5 (observed in Japanese), or basal Z* (observed in a Blang individual in Northern Thailand ). [ 5 ] This phylogenetic tree of haplogroup Z subclades is based on the paper by Mannis van Oven and Manfred Kayser Updated comprehensive phylogenetic tree of global human mitochondrial DNA variation [ 4 ] and subsequent published research. Phylogenetic tree of human mitochondrial DNA (mtDNA) haplogroups
https://en.wikipedia.org/wiki/Haplogroup_Z
Haploidisation is the process of halving the chromosomal content of a cell, producing a haploid cell. Within the normal reproductive cycle, haploidisation is one of the major functional consequences of meiosis , the other being a process of chromosomal crossover that mingles the genetic content of the parental chromosomes. [ 1 ] Usually, haploidisation creates a monoploid cell from a diploid progenitor, or it can involve halving of a polyploid cell, for example to make a diploid potato plant from a tetraploid lineage of potato plants. If haploidisation is not followed by fertilisation , the result is a haploid lineage of cells. For example, experimental haploidisation may be used to recover a strain of haploid Dictyostelium from a diploid strain. [ 2 ] It sometimes occurs naturally in plants when meiotically reduced cells (usually egg cells) develop by parthenogenesis . Haploidisation was one of the procedures used by Japanese researchers to produce Kaguya , a mouse which had same-sex parents; two haploids were then combined to make the diploid mouse. Haploidisation commitment is a checkpoint in meiosis which follows the successful completion of premeiotic DNA replication and recombination commitment. [ 3 ]
https://en.wikipedia.org/wiki/Haploidisation
Haploinsufficiency in genetics describes a model of dominant gene action in diploid organisms, in which a single copy of the wild-type allele at a locus in heterozygous combination with a variant allele is insufficient to produce the wild-type phenotype . Haploinsufficiency may arise from a de novo or inherited loss-of-function mutation in the variant allele, such that it yields little or no gene product (often a protein ). Although the other, standard allele still produces the standard amount of product, the total product is insufficient to produce the standard phenotype. This heterozygous genotype may result in a non- or sub-standard, deleterious, and (or) disease phenotype. Haploinsufficiency is the standard explanation for dominant deleterious alleles. [ clarification needed ] In the alternative case of haplosufficiency , the loss-of-function allele behaves as above, but the single standard allele in the heterozygous genotype produces sufficient gene product to produce the same, standard phenotype as seen in the homozygote . Haplosufficiency accounts for the typical dominance of the "standard" allele over variant alleles, where the phenotypic identity of genotypes heterozygous and homozygous for the allele defines it as dominant, versus a variant phenotype produced only by the genotype homozygous for the alternative allele, which defines it as recessive. The alteration in the gene dosage , which is caused by the loss of a functional allele, is also called allelic insufficiency. About 3,000 human genes cannot tolerate loss of one of the two alleles. [ 1 ] An example of this is seen in the case of Williams syndrome , a neurodevelopmental disorder caused by the haploinsufficiency of genes at 7q11.23. The haploinsufficiency is caused by the copy-number variation (CNV) of 28 genes led by the deletion of ~1.6 Mb. These dosage-sensitive genes are vital for human language and constructive cognition. [ 2 ] Another example is the haploinsufficiency of telomerase reverse transcriptase which leads to anticipation in autosomal dominant dyskeratosis congenita . It is a rare inherited disorder characterized by abnormal skin manifestations, which results in bone marrow failure , pulmonary fibrosis and an increased predisposition to cancer. A null mutation in motif D of the reverse transcriptase domain of the telomerase protein, hTERT, leads to this phenotype. Thus telomerase dosage is important for maintaining tissue proliferation. [ 3 ] A variation of haploinsufficiency exists for mutations in the gene PRPF31 , a known cause of autosomal dominant retinitis pigmentosa . There are two wild-type alleles of this gene—a high- expressivity allele and a low-expressivity allele. When the mutant gene is inherited with a high-expressivity allele, there is no disease phenotype. However, if a mutant allele and a low-expressivity allele are inherited, the residual protein levels falls below that required for normal function, and disease phenotype is present. [ 4 ] Copy number variation (CNV) refers to the differences in the number of copies of a particular region of the genome. This leads to too many or too few of the dosage sensitive genes. The genomic rearrangements, that is, deletions or duplications, are caused by the mechanism of non-allelic homologous recombination (NAHR). In the case of the Williams Syndrome, the microdeletion includes the ELN gene. The hemizygosity of the elastin is responsible for supravalvular aortic stenosis , the obstruction in the left ventricular outflow of blood in the heart. 
[ 5 ] [ 6 ] The most direct method to detect haploinsufficiency is the heterozygous deletion of one allele in a model organism. This can be done in tissue culture cells or in single-celled organisms such as yeast ( Saccharomyces cerevisiae ). [ 11 ]
https://en.wikipedia.org/wiki/Haploinsufficiency
The haplotype-relative-risk ( HRR ) method is a family-based method for determining gene allele association to a disease in the presence of actual genetic linkage . Nuclear families with one affected child are sampled, and the parental haplotypes not transmitted to the child are used as a control. While similar to the genotype relative risk (RR), the HRR provides a solution to the problem of population stratification by only sampling within family trios. The HRR method was first proposed by Rubinstein in 1981, then detailed in 1987 by Rubinstein and Falk, [ 1 ] and is an important tool in genetic association studies. The original method proposed by Falk and Rubinstein came under scrutiny in 1989, when Ott showed the equivalence of HRR to the classical RR method, [ 2 ] demonstrating that the HRR holds only when there is zero chance of recombination between a disease locus and its markers. [ 3 ] Yet, even when the recombination factor for a locus and its genetic markers is >0, HRR estimates are still more conservative than RR estimates. [ 4 ] While the HRR method has proven an effective means of avoiding population stratification biases, another family-based association test known as the transmission disequilibrium test , [ 5 ] or TDT, is more commonly used. Some research uses both HRR and TDT for their ability to complement each other, since one may indicate no association while the other does. A positive association result from both TDT and HRR means there is strong evidence that a link exists, and vice versa. For example, both HRR and TDT methods were used in a study looking for polymorphisms in the D2 and D3 dopamine receptors in association with schizophrenia , and neither found any evidence for linkage, [ 6 ] making an actual role of those genes in the etiology of the mental disorder all the more unlikely. This model represents a case in which there is a single locus where all genotypes may lead to expression of the allele, in its most simplified definition. Under these parameters a linkage disequilibrium of more than 50% means there is a possible link between the gene allele and inheritance. H R R = P 1 1 − P 1 ∗ 1 − P 2 P 2 {\displaystyle HRR={\frac {P_{1}}{1-P_{1}}}*{\frac {1-P_{2}}{P_{2}}}} gives the HRR, which can be estimated by H R R = a ′ b ′ ∗ d ′ c ′ {\displaystyle HRR={\frac {a^{'}}{b^{'}}}*{\frac {d^{'}}{c^{'}}}} a ' denotes the observed frequency of children who are positive for the gene allele H. b ' denotes the observed frequency of children who are negative for the gene allele H. c ' is the observed frequency of families with at least one nontransmitted parental marker allele H. d ' is the observed frequency of families with no nontransmitted parental marker allele H. P 1 is the probability that this child is positive for the allele of interest H. P 2 is the probability that at least one of the nontransmitted parental marker alleles equals the allele of interest H. H is the allele of interest. [ 7 ]
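A minimal numerical sketch of the estimate above, using the a'/b'/c'/d' counts from family trios; the counts in the example are invented for illustration, not taken from any study.

```python
def hrr(children_with_H, children_without_H,
        families_nontransmitted_with_H, families_nontransmitted_without_H):
    a, b = children_with_H, children_without_H                                  # "case" side
    c, d = families_nontransmitted_with_H, families_nontransmitted_without_H    # "control" side
    return (a / b) * (d / c)

# e.g. 60 of 100 affected children carry H, while only 35 of 100 families show H
# among the non-transmitted parental alleles:
print(hrr(60, 40, 35, 65))  # (60/40)*(65/35) ≈ 2.79, suggesting association of H with the disease
```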
https://en.wikipedia.org/wiki/Haplotype-relative-risk
In genetics , a haplotype block is a region of an organism's genome in which there is little evidence of a history of genetic recombination , and which contains only a small number of distinct haplotypes . [ 1 ] According to the haplotype-block model, such blocks should show high levels of linkage disequilibrium and be separated from one another by numerous recombination events. [ 2 ] The boundaries of haplotype blocks cannot be directly observed; they must instead be inferred indirectly through the use of algorithms . However, some evidence suggests that different algorithms for identifying haplotype blocks give very different results when used on the same data, [ 3 ] though another study suggests that their results are generally consistent. [ 4 ] The National Institutes of Health funded the HapMap project to catalog haplotype blocks throughout the human genome . [ 5 ] There are two main ways that the term "haplotype block" is defined: one based on whether a given genomic sequence displays higher linkage disequilibrium than a predetermined threshold, and one based on whether the sequence consists of a minimum number of single nucleotide polymorphisms (SNPs) that explain a majority of the common haplotypes in the sequence (or a lower-than-usual number of unique haplotypes). [ 6 ] In 2001, Patil et al. [ 7 ] proposed the following definition of the term: "Suppose we have a number of haplotypes consisting of a set of consecutive SNPs. A segment of consecutive SNPs is a block if at least α percent of haplotypes are represented more than once". [ 8 ]
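The Patil et al. criterion quoted above is easy to state as code. In the toy sketch below, a run of consecutive SNPs counts as a block if at least a fraction α of the sampled haplotypes, restricted to those SNPs, occur more than once; the haplotype strings and the threshold are invented for illustration.

```python
from collections import Counter

haplotypes = ["00110", "00110", "00111", "10110", "00110", "10111", "00110", "10110"]

def is_block(haps, start, end, alpha=0.8):
    """True if the SNPs in [start, end) satisfy the common-haplotype criterion."""
    segments = [h[start:end] for h in haps]
    counts = Counter(segments)
    represented_more_than_once = sum(1 for s in segments if counts[s] > 1)
    return represented_more_than_once / len(segments) >= alpha

print(is_block(haplotypes, 0, 3))  # True: the first three SNPs carry only two, repeated, haplotypes
print(is_block(haplotypes, 0, 5))  # False: over all five SNPs, too many haplotypes are singletons
```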
https://en.wikipedia.org/wiki/Haplotype_block
Haplotype convergence is the independent appearance of identical haplotypes in separate populations, through either convergent evolution or random chance. Haplotype convergence is rare, because the probability of two unrelated individuals independently evolving exactly the same genetic sequence at the site of interest is very low. Thus, haplotypes are shared mainly between very closely related individuals, as the genetic information in two related individuals will be much more similar than that between unrelated individuals. [ 1 ] Substitution bias further increases the likelihood of haplotype convergence, as it increases the probability of mutations occurring at the same site. [ 2 ] Sequences may also diverge from the same original sequence and then revert, converging in this manner. [ 3 ] Convergence through convergent evolution in two unrelated groups is much less common, as derived traits may arise through dramatically different pathways. [ 4 ] [ 5 ] Erroneously determining two individuals to be identical due to haplotype convergence becomes much less likely when more genetic markers are tested, since that would require a larger number of extremely rare coincidences. [ 6 ] With modern high-throughput sequencing approaches, sequencing a large set of markers, or even the entire genome, is much more feasible and greatly minimizes these issues. [ 7 ] In some regions, due to low diversity in Y-STR markers (often used to study surname origin), haplotype convergence may confound analyses, leading unrelated individuals to be classed as very closely related. [ 8 ] Similarly, a study of New World mitochondrial DNA haplogroups observed that similarities in haplotypes between Native Americans and Asians were a result of the hypervariability of the HVSI region in mitochondrial DNA, rather than common ancestry. [ 2 ] As an example of haplotype convergence due to convergent evolution in more distantly related groups, threespine stickleback in blackwater environments similar to those of the ancient bluefin killifish and black bream independently evolved the same haplotype in the SWS2 gene, which promotes better eyesight in those conditions. [ 9 ]
https://en.wikipedia.org/wiki/Haplotype_convergence
In genetics , haplotype estimation (also known as "phasing") refers to the process of statistical estimation of haplotypes from genotype data. The most common situation arises when genotypes are collected at a set of polymorphic sites from a group of individuals. For example, in human genetics, genome-wide association studies collect genotypes in thousands of individuals at between 200,000 and 5,000,000 SNPs using microarrays. Haplotype estimation methods are used in the analysis of these datasets and allow genotype imputation [ 1 ] [ 2 ] of alleles from reference databases such as the HapMap Project and the 1000 Genomes Project . Genotypes measure the unordered combination of alleles at each locus, whereas haplotypes represent the genetic information on multiple loci that have been inherited together from an individual's parents. Theoretically, the number of possible haplotypes equals the product of the numbers of alleles at each locus under consideration. In particular, most SNPs are bi-allelic; therefore, when considering N {\displaystyle N} heterozygous bi-allelic loci, there are 2 N {\displaystyle 2^{N}} possible haplotypes, and hence 2 N − 1 {\displaystyle 2^{N-1}} distinct pairs of haplotypes that could underlie the genotypes. For example, when considering two bi-allelic loci A and B ( N = 2 {\displaystyle N=2} ), of which the genotypes are a 1 and a 2 , b 1 and b 2 , respectively, we will have the following haplotypes: a 1 _b 1 , a 1 _b 2 , a 2 _b 1 , and a 2 _b 2 ( "_" indicates that the alleles are on the same chromosome). Many statistical methods have been proposed for estimation of haplotypes. Some of the earliest approaches used a simple multinomial model in which each possible haplotype consistent with the sample was given an unknown frequency parameter and these parameters were estimated with an Expectation–maximization algorithm . These approaches were only able to handle small numbers of sites at once, although sequential versions were later developed, specifically the SNPHAP method. The most accurate and widely used methods for haplotype estimation utilize some form of hidden Markov model (HMM) to carry out inference. For a long time PHASE [ 3 ] was the most accurate method. PHASE was the first method to utilize ideas from coalescent theory concerning the joint distribution of haplotypes. This method used a Gibbs sampling approach in which each individual's haplotypes were updated conditional upon the current estimates of haplotypes from all other samples. Approximations to the distribution of a haplotype conditional upon a set of other haplotypes were used for the conditional distributions of the Gibbs sampler. PHASE was used to estimate the haplotypes from the HapMap Project . PHASE was limited by its speed and was not applicable to datasets from genome-wide association studies. The fastPHASE [ 4 ] and BEAGLE methods [ 5 ] introduced haplotype cluster models applicable to GWAS -sized datasets. Subsequently, the IMPUTE2 [ 6 ] and MaCH [ 7 ] methods were introduced that were similar to the PHASE approach but much faster. These methods iteratively update the haplotype estimates of each sample conditional upon a subset of K haplotype estimates of other samples. IMPUTE2 introduced the idea of carefully choosing which subset of haplotypes to condition on to improve accuracy. Accuracy increases with K but with quadratic O ( K 2 ) {\displaystyle O(K^{2})} computational complexity.
The SHAPEIT1 method made a major advance by introducing a linear O ( K ) {\displaystyle O(K)} complexity method that operates only on the space of haplotypes consistent with an individual’s genotypes. [ 8 ] The HAPI-UR method subsequently proposed a very similar method. [ 9 ] SHAPEIT2 [ 10 ] combines the best features of SHAPEIT1 and IMPUTE2 to improve efficiency and accuracy.
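The earliest multinomial/EM approach described above can be made concrete with a short sketch. The code below is an illustrative toy implementation, not any of the cited methods: the function names and the tiny example data are invented for this sketch. It enumerates the haplotype pairs consistent with each unphased genotype and iterates the classic expectation and maximization steps over population haplotype frequencies.

```python
from itertools import product
from collections import defaultdict

def consistent_pairs(genotype):
    """Enumerate haplotype pairs compatible with one multi-locus genotype.

    `genotype` is a tuple of per-locus alternate-allele counts (0, 1 or 2)
    at bi-allelic SNPs.  Homozygous loci are fixed; each heterozygous locus
    can be phased two ways, giving 2**(k-1) distinct unordered pairs for
    k heterozygous loci.
    """
    het = [i for i, g in enumerate(genotype) if g == 1]
    k = len(het)
    # Fix the first heterozygous site on haplotype 1 so each unordered
    # pair is produced exactly once.
    assignments = [(0,) + rest for rest in product((0, 1), repeat=k - 1)] if k else [()]
    pairs = []
    for assign in assignments:
        it = iter(assign)
        h1, h2 = [], []
        for g in genotype:
            if g in (0, 2):
                h1.append(g // 2); h2.append(g // 2)
            else:
                a = next(it)
                h1.append(a); h2.append(1 - a)
        pairs.append((tuple(h1), tuple(h2)))
    return pairs

def em_haplotype_frequencies(genotypes, n_iter=50):
    """Toy multinomial EM for population haplotype frequencies."""
    per_sample = [consistent_pairs(g) for g in genotypes]
    haps = {h for pairs in per_sample for pair in pairs for h in pair}
    freq = {h: 1.0 / len(haps) for h in haps}
    for _ in range(n_iter):
        expected = defaultdict(float)
        for pairs in per_sample:
            weights = [freq[a] * freq[b] for a, b in pairs]
            total = sum(weights) or 1.0
            for (a, b), w in zip(pairs, weights):
                expected[a] += w / total   # E-step: posterior weight of each pair
                expected[b] += w / total
        freq = {h: expected[h] / (2 * len(genotypes)) for h in haps}  # M-step
    return freq

# Three individuals typed at two SNPs: the doubly heterozygous genotype (1, 1)
# is phase-ambiguous, and EM resolves it using the two unambiguous individuals.
print(em_haplotype_frequencies([(1, 1), (2, 2), (0, 0)]))
```

In this toy run the frequencies concentrate on the two haplotypes seen unambiguously in the homozygous individuals, which is the behaviour the multinomial EM model is designed to produce; real tools add the HMM machinery described above to scale to many sites.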
https://en.wikipedia.org/wiki/Haplotype_estimation
In mathematics , the " happy ending problem " (so named by Paul Erdős because it led to the marriage of George Szekeres and Esther Klein [ 1 ] ) is the following statement: Theorem — any set of five points in the plane in general position [ 2 ] has a subset of four points that form the vertices of a convex quadrilateral . This was one of the original results that led to the development of Ramsey theory . The happy ending theorem can be proven by a simple case analysis: if four or more points are vertices of the convex hull , any four such points can be chosen. If, on the other hand, the convex hull has the form of a triangle with two points inside it, the two inner points and one of the triangle sides can be chosen. See Peterson (2000) for an illustrated explanation of this proof, and Morris & Soltan (2000) for a more detailed survey of the problem. The Erdős–Szekeres conjecture states precisely a more general relationship between the number of points in a general-position point set and its largest subset forming a convex polygon , namely that the smallest number of points for which any general position arrangement contains a convex subset of n {\displaystyle n} points is 2 n − 2 + 1 {\displaystyle 2^{n-2}+1} . It remains unproven, but less precise bounds are known. Erdős & Szekeres (1935) proved the following generalisation: Theorem — for any positive integer N , any sufficiently large finite set of points in the plane in general position has a subset of N points that form the vertices of a convex polygon. The proof appeared in the same paper that proves the Erdős–Szekeres theorem on monotonic subsequences in sequences of numbers. Let f ( N ) denote the minimum M for which any set of M points in general position must contain a convex N -gon. It is known that f (3) = 3, f (4) = 5, f (5) = 9, and f (6) = 17. On the basis of the known values of f ( N ) for N = 3, 4 and 5, Erdős and Szekeres conjectured in their original paper that f ( N ) = 1 + 2 N − 2 for all N ≥ 3. {\displaystyle f(N)=1+2^{N-2}\quad {\text{for all }}N\geq 3.} They proved later, by constructing explicit examples, that [ 6 ] f ( N ) ≥ 1 + 2 N − 2 . {\displaystyle f(N)\geq 1+2^{N-2}.} In 2016 Andrew Suk [ 7 ] showed that for N ≥ 7, f ( N ) ≤ 2 N + o ( N ) . {\displaystyle f(N)\leq 2^{N+o(N)}.} Suk actually proves, for N sufficiently large, f ( N ) ≤ 2 N + 6 N 2 / 3 log N . {\displaystyle f(N)\leq 2^{N+6N^{2/3}\log N}.} This was subsequently improved to: [ 8 ] f ( N ) ≤ 2 N + O ( N log N ) . {\displaystyle f(N)\leq 2^{N+O({\sqrt {N\log N}})}.} There is also the question of whether any sufficiently large set of points in general position has an "empty" convex quadrilateral, pentagon, etc., that is, one that contains no other input point. The original solution to the happy ending problem can be adapted to show that any five points in general position have an empty convex quadrilateral, and any ten points in general position have an empty convex pentagon. [ 9 ] However, there exist arbitrarily large sets of points in general position that contain no empty convex heptagon . [ 10 ] For a long time the question of the existence of empty hexagons remained open, but Nicolás (2007) and Gerken (2008) proved that every sufficiently large point set in general position contains a convex empty hexagon. More specifically, Gerken showed that the number of points needed is no more than f (9) for the same function f defined above, while Nicolás showed that the number of points needed is no more than f (25).
Valtr (2008) supplies a simplification of Gerken's proof that however requires more points, f (15) instead of f (9). At least 30 points are needed; there exists a set of 29 points in general position with no empty convex hexagon. [ 11 ] The question was finally answered by Heule & Scheucher (2024) , who showed, using a SAT solving approach, that indeed every set of 30 points in general position contains an empty hexagon. The problem of finding sets of n points minimizing the number of convex quadrilaterals is equivalent to minimizing the crossing number in a straight-line drawing of a complete graph . The number of quadrilaterals must be proportional to the fourth power of n , but the precise constant is not known. [ 12 ] It is straightforward to show that, in higher-dimensional Euclidean spaces , sufficiently large sets of points will have a subset of k points that forms the vertices of a convex polytope , for any k greater than the dimension: this follows immediately from existence of convex k -gons in sufficiently large planar point sets, by projecting the higher-dimensional point set into an arbitrary two-dimensional subspace. However, the number of points necessary to find k points in convex position may be smaller in higher dimensions than it is in the plane, and it is possible to find subsets that are more highly constrained. In particular, in d dimensions, every d + 3 points in general position have a subset of d + 2 points that form the vertices of a cyclic polytope . [ 13 ] More generally, for every d and k > d there exists a number m ( d , k ) such that every set of m ( d , k ) points in general position has a subset of k points that form the vertices of a neighborly polytope . [ 14 ]
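The five-point statement at the start of this article lends itself to a brute-force check. The following sketch is illustrative only (the point set at the end is made up, chosen as a triangle with two interior points, the harder case of the case analysis above): it tests whether some four of five points in general position are in convex position, which is exactly what the happy ending theorem guarantees.

```python
from itertools import combinations

def cross(o, a, b):
    """Signed area of triangle o-a-b: positive for a counter-clockwise turn."""
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def in_convex_position(points):
    """True if every point is a vertex of the convex hull of `points`
    (assumes general position: no three points collinear)."""
    pts = list(points)
    for p in pts:
        others = [q for q in pts if q != p]
        # p fails to be a hull vertex iff it lies strictly inside some
        # triangle formed by the other points.
        for a, b, c in combinations(others, 3):
            d1, d2, d3 = cross(a, b, p), cross(b, c, p), cross(c, a, p)
            if (d1 > 0 and d2 > 0 and d3 > 0) or (d1 < 0 and d2 < 0 and d3 < 0):
                return False
    return True

def has_convex_quadrilateral(five_points):
    """Check the happy ending theorem for one 5-point configuration."""
    return any(in_convex_position(quad) for quad in combinations(five_points, 4))

# A triangle (0,0), (10,0), (5,9) with two interior points (5,3) and (4,4):
# the theorem promises a convex quadrilateral among some four of the five.
pts = [(0, 0), (10, 0), (5, 9), (5, 3), (4, 4)]
assert has_convex_quadrilateral(pts)
```

Exhaustive checks of this kind only verify individual configurations; they do not replace the case analysis, but they match it on the triangle-with-two-interior-points case by finding the quadrilateral formed by the two inner points and one triangle side.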
https://en.wikipedia.org/wiki/Happy_ending_problem
Haptens (derived from the Greek haptein , meaning “to fasten”) [ 1 ] are small molecules that elicit an immune response only when attached to a large carrier such as a protein ; the carrier may be one that also does not elicit an immune response by itself. The mechanisms of absence of immune response may vary and involve complex immunological interactions, but can include absent or insufficient co-stimulatory signals from antigen-presenting cells . Haptens have been used to study allergic contact dermatitis (ACD) and the mechanisms of inflammatory bowel disease (IBD) to induce autoimmune-like responses. [ 2 ] The concept of haptens emerged from the work of Austrian immunologist Karl Landsteiner , [ 3 ] [ 4 ] who also pioneered the use of synthetic haptens to study immunochemical phenomena. [ 5 ] Haptens applied to the skin, when conjugated with a carrier, can induce contact hypersensitivity, which is a type IV delayed hypersensitivity reaction mediated by T cells and dendritic cells . It consists of two phases: sensitization and elicitation. The sensitization phase, in which the hapten is applied to the skin for the first time, is characterized by the activation of innate immune responses, including migration of dendritic cells to the lymph nodes, priming of antigen-specific naive T cells , and the generation of antigen-specific effector or memory T cells and B cells and antibody-secreting plasma cells . The second, elicitation phase, in which the hapten is applied to a different skin area, starts with activation of effector T cells followed by T cell-mediated tissue damage and antibody-mediated immune responses. Haptens initially activate innate immune responses by complex mechanisms involving inflammatory cytokines , damage-associated molecular patterns (DAMP), or the inflammasome . [ 6 ] Once the body has generated antibodies to a hapten-carrier adduct , the small-molecule hapten may also be able to bind to the antibody, but it will usually not initiate an immune response; usually only the hapten-carrier adduct can do this. Sometimes the small-molecule hapten can even block the immune response to the hapten-carrier adduct by preventing the adduct from binding to the antibody, a process called hapten inhibition . A well-known example of a hapten is urushiol , which is the toxin found in poison ivy . When absorbed through the skin from a poison ivy plant, urushiol undergoes oxidation in the skin cells to generate the actual hapten, a reactive quinone -type molecule, which then reacts with skin proteins to form hapten adducts. After a second exposure, the proliferated T-cells become activated, generating an immune reaction that produces the typical blisters of a urushiol-induced contact dermatitis . [ 7 ] Another example of a hapten-mediated contact dermatitis is nickel allergy , which is caused by nickel metal ions penetrating the skin and binding to skin proteins. Many haptens are found in various kinds of drugs, pesticides, hormones, food toxins, etc. The most important factor is molecular mass, which is <1000 Da . [ 8 ] The first researched haptens were aniline and its carboxyl derivatives ( o- , m- , and p-aminobenzoic acid ). [ 9 ] Some haptens can induce autoimmune disease. An example is hydralazine , a blood pressure-lowering drug that can occasionally produce drug-induced lupus erythematosus in certain individuals.
This also appears to be the mechanism by which the anesthetic gas halothane can cause a life-threatening hepatitis , as well as the mechanism by which penicillin -class drugs cause autoimmune hemolytic anemia . [ 10 ] Other haptens that are commonly used in molecular biology applications include fluorescein , biotin , digoxigenin , and dinitrophenol . Antibodies have successfully been raised against endogenous and unreactive small molecules such as some neurotransmitters (e.g. serotonin (5HT), glutamate , dopamine , GABA , tryptamine , glycine , noradrenaline ) and amino acids (e.g. tryptophan , 5-hydroxytryptophan , 5-methoxytryptophan ), by using glutaraldehyde to crosslink these molecules to carrier proteins suitable for immune recognition. Notably, detection of such small molecules in tissues requires the tissue to be glutaraldehyde-fixed, as the glutaraldehyde covalent linkage on the molecule of interest often forms a portion of the antibody-recognized epitope . [ 11 ] [ 12 ] Due to their nature and properties, hapten-carrier adducts have been essential in immunology . They have been used to evaluate the properties of specific epitopes and antibodies. They are important in the purification and production of monoclonal antibodies . They are also vital in the development of sensitive quantitative and qualitative immunoassays . [ 13 ] However, to achieve the best and most desirable results, many factors need to be taken into account in the design of hapten conjugates. These include the method of hapten conjugation, the type of carrier used and the hapten density. Variations in these factors could lead to different strengths of immune response toward the newly formed antigenic determinant. [ 14 ] In general, carrier proteins should be immunogenic and contain enough amino acid residues in the reactive side chains to conjugate with the haptens. For protein haptenation to occur, the hapten must be electron-deficient ( electrophilic ), either by itself, or it can be converted to a protein-reactive species, for example by air oxidation or cutaneous metabolism. [ 15 ] Haptens become fastened to a carrier molecule by a covalent bond. Depending on the haptens being used, other factors in considering the carrier proteins could include their in vivo toxicity, commercial availability and cost. [ 13 ] The most common carriers include serum globulin , albumins , ovalbumin and many others. Human serum albumin (HSA) is often the model protein of choice for protein-binding assays. This is a well-characterized protein, and the role of albumin in blood and tissues in vivo is often to bind to xenobiotics via its substrate-binding pockets and remove the invading chemical from the circulation or tissue, thus acting as a detoxification mechanism. Although proteins are mostly employed for hapten conjugation, synthetic polypeptides such as Poly-L-glutamic acid , polysaccharides and liposomes could also be used. [ 13 ] The most common reaction mechanisms forming covalent bonds, and those predicted to be involved in sensitization, are nucleophilic substitution on a saturated centre, nucleophilic substitution on an unsaturated centre and nucleophilic addition. Other reactions are also possible, such as electrophilic substitution (diazonium salts), radical reactions, and ionic reactions. [ 15 ] When selecting a suitable method for hapten conjugation, the functional groups on the hapten and its carrier must be identified.
Depending on the groups present, one of two main conjugation strategies can be employed. Hapten inhibition, or "semi-hapten", is the inhibition of a type III hypersensitivity response. In inhibition, free hapten molecules bind with antibodies against that molecule without causing an immune response, leaving fewer antibodies available to bind to the immunogenic hapten-protein adduct. An example of a hapten inhibitor is dextran 1 , a small fraction (1 kilodalton ) of the entire dextran complex that is enough to bind anti-dextran antibodies, but insufficient to result in the formation of immune complexes and resultant immune responses. [ 20 ] Haptens are widely used in immunology and related fields. Sensitizing chemicals can cause different forms of allergy, allergic contact dermatitis, or sensitization of the respiratory tract. Interestingly, discrete types of chemicals induce divergent immune responses: contact allergens provoke preferential type I hypersensitivity responses, whereas respiratory allergens stimulate selective type II responses , which makes them well suited for modeling how the immune response is polarized towards different types of antigens. [ 21 ] In allergology, in vitro / in silico tests for skin sensitization, hazard identification, and potency evaluation on different drug and cosmetic components are highly preferred in early product development. The ability of a drug to act as a hapten is a clear indication of potential immunogenicity. [ 22 ] Hapten-specific antibodies are used in a broad range of immunoassays, immunobiosensor technologies and immunoaffinity chromatography purification columns; such antibodies can be used to detect small environmental contaminants, drugs of abuse, vitamins, hormones, metabolites, food toxins and environmental pollutants. [ 23 ]
https://en.wikipedia.org/wiki/Hapten
In coordination chemistry , hapticity is the coordination of a ligand to a metal center via an uninterrupted and contiguous series of atoms . [ 1 ] The hapticity of a ligand is described with the Greek letter η ('eta'). For example, η 2 describes a ligand that coordinates through 2 contiguous atoms. In general the η-notation only applies when multiple atoms are coordinated (otherwise the κ-notation is used). In addition, if the ligand coordinates through multiple atoms that are not contiguous then this is considered denticity [ 2 ] (not hapticity), and the κ-notation is used once again. [ 3 ] When naming complexes care should be taken not to confuse η with μ ('mu'), which relates to bridging ligands . [ 4 ] [ 5 ] The need for additional nomenclature for organometallic compounds became apparent in the mid-1950s when Dunitz, Orgel , and Rich described the structure of the " sandwich complex " ferrocene by X-ray crystallography [ 6 ] where an iron atom is "sandwiched" between two parallel cyclopentadienyl rings. Cotton later proposed the term hapticity derived from the adjectival prefix hapto (from the Greek haptein , to fasten, denoting contact or combination) placed before the name of the olefin, [ 7 ] where the Greek letter η (eta) is used to denote the number of contiguous atoms of a ligand that bind to a metal center. The term is usually employed to refer to ligands containing extended π-systems or where agostic bonding is not obvious from the formula. The η-notation is encountered in many coordination compounds: The hapticity of a ligand can change in the course of a reaction. [ 12 ] E.g. in a redox reaction: Here one of the η 6 -benzene rings changes to a η 4 -benzene. Similarly hapticity can change during a substitution reaction: Here the η 5 -cyclopentadienyl changes to an η 3 -cyclopentadienyl, giving room on the metal for an extra 2-electron donating ligand 'L'. Removal of one molecule of CO and again donation of two more electrons by the cyclopentadienyl ligand restores the η 5 -cyclopentadienyl. The so-called indenyl effect also describes changes in hapticity in a substitution reaction. Hapticity must be distinguished from denticity . Polydentate ligands coordinate via multiple coordination sites within the ligand. In this case the coordinating atoms are identified using the κ-notation, as for example seen in coordination of 1,2-bis(diphenylphosphino)ethane (Ph 2 PCH 2 CH 2 PPh 2 ), to NiCl 2 as dichloro[ethane-1,2-diylbis(diphenylphosphane)-κ 2 P]nickel(II). If the coordinating atoms are contiguous (connected to each other), the η-notation is used, as e.g. in titanocene dichloride : dichlorobis(η 5 -2,4-cyclopentadien-1-yl)titanium. [ 13 ] Molecules with polyhapto ligands are often fluxional , also known as stereochemically non-rigid. Two classes of fluxionality are prevalent for organometallic complexes of polyhapto ligands:
https://en.wikipedia.org/wiki/Hapticity
Haptik is an Indian enterprise conversational AI platform founded in August 2013, [ 1 ] [ 2 ] and acquired by Reliance Industries Limited in 2019. [ 3 ] Haptik was a chatbot pioneer and one of the first modern conversational AI and generative AI companies. [ 4 ] The company develops technology to enable enterprises to build conversational AI systems that allow users to converse with applications and electronic devices in free-format, natural language, using speech or text. [ 5 ] [ 6 ] The company has been accorded numerous accolades including the Frost & Sullivan Award , NASSCOM 's AI Game Changer Award, [ 7 ] and serves Fortune 500 brands globally in industries such as financial services , insurance , healthcare , technology and communications . [ 8 ] [ 9 ] Haptik was founded by Aakrit Vaish [ 10 ] and Swapan Rajdev, in August 2013. [ 11 ] [ 12 ] The company launched its first product, the Haptik app, in March 2014: a chat-based personal assistant for Android and iOS platforms in India that lets users get things done. [ 13 ] [ 14 ] By September 2014, the platform added 125 chat experts who helped users with their queries. [ citation needed ] Over time the company upgraded it into a complete conversational commerce app. [ 15 ] The app received 2 million downloads and 15 million installations. [ 16 ] In August 2015, Dan Roth joined Haptik's board of advisers and helped scale the platform's natural language processing (NLP). [ 17 ] In the same year, Haptik was appointed as the official personal assistant of Mumbai City FC . [ 18 ] It also provided a customer support chatbot to SwipeTelecom. [ 19 ] In November 2017, the company launched a full-scale enterprise-level bot management platform including an analytics dashboard. [ 20 ] In 2019, Haptik launched a voice bot for one of the largest food chains in the world, allowing its customers to place orders using Alexa. It also helps users find the nearest outlet of the food chain and provides information on product availability on a real-time basis. [ 21 ] In March 2019, the Government of Maharashtra signed a partnership pact with Haptik to develop a chatbot for its Aaple Sarkar platform. The bot provides conversational access to information regarding 1,400 services managed by the state government. [ 22 ] [ 23 ] In April 2019, Reliance Jio Infocomm Ltd bought an 87% stake in the company in a $100 million deal, [ 26 ] [ 27 ] and the company was renamed Jio Haptik Technologies Ltd. [ 28 ] This was followed by the acquisition of the Mumbai -based start-up Buzzo.ai, which develops customizable artificial intelligence software for e-commerce . [ 24 ] [ 25 ] Jio is a $65 billion Internet conglomerate, with businesses across Telecom, E-commerce, Media & Entertainment, Healthcare, and more. [ 29 ] [ 30 ] In July 2019, Haptik acquired the Los Angeles -based startup Convrg to expand its technical expertise and business outreach in North America . [ 31 ] The combined reach of Jio Platforms is close to 500 million customers [ 32 ] [ 33 ] and it counts marquee names such as Google , Facebook , Silver Lake , and KKR among its notable investors. [ 33 ] [ 29 ] [ 30 ] In November 2019, the company appointed Saumil Shah as the Vice President of Engineering.
[ 34 ] In December 2019, Haptik developed a chatbot for Tata Mutual Fund called 'Prof. Simply Simple' that helps with resolving routine, repetitive queries, and frees the customer support team to solve complex queries. [ 35 ] In March 2020, the Government of India launched a WhatsApp chatbot called MyGov Corona Helpdesk to create awareness about coronavirus . The bot was built by Haptik. [ 36 ] [ 37 ] [ 38 ] In October 2021, Haptik launched the Interakt app for MSMEs. The launch came amid the growth of Direct To Consumer (D2C) brands on the internet that were looking to leverage personalised interactions with customers through chat platforms such as WhatsApp. [ 39 ] In December 2021, Haptik was awarded a special honour by the Minister of State for External Affairs, Meenakshi Lekhi, for partnering with MyGov on the Corona Helpdesk. [ citation needed ] In 2022, the company also announced that it would build a chatbot for RedDoorz, a hotel management and ticket booking platform in Indonesia . [ 40 ] Haptik built the world's largest WhatsApp chatbot for COVID-19 . [ 45 ] This was the official helpline for the Government of India, which was utilized by over 21 million users across the country. [ 38 ] The MyGov Corona Helpdesk was engineered to fight rumors, educate the masses and bring a sense of calm to the pandemic situation. Haptik built the Helpdesk from the ground up using official data shared by the Government. [ 36 ] Kotak Life partnered with Haptik to develop an AI-driven conversational assistant called KAYA which provides 24x7 assistance to consumers. [ 46 ] The company partnered with Amazon Pay , [ 47 ] HDFC Life , [ 48 ] Ola Cabs , [ 49 ] Uber , [ 50 ] Times Internet , Mumbai City FC , [ 17 ] Coca-Cola , [ 51 ] Ziman, [ 52 ] Zomato , [ 53 ] BookMyShow, [ 53 ] [ 50 ] Cleartrip , Goibibo , Zoop, [ 54 ] UrbanClap, Via.com , Dineout, Flipkart , and Kotak Life to run campaigns on the Haptik app. [ 55 ] In March 2018, the company partnered with Amazon Web Services (AWS) to provide AI-enabled conversational solutions [ buzzword ] to customers in India . [ 56 ] Haptik has entered into a strategic partnership with Y Combinator -backed Leena AI to provide enterprises with all types of bot solutions. [ buzzword ] [ 57 ] Haptik's repertoire of chatbot customers in India includes Samsung , [ 2 ] Future Group , [ 58 ] KFC , Dream11 , Sharekhan , Edelweiss , Tokio, Club Mahindra and IIFL among others. [ 59 ] Haptik has also built assistants for TOI, [ 60 ] Samsung , Ziman [ 52 ] and Akancha Against Harassment, an online cyber safety initiative. [ 61 ] Haptik is one of the world's largest conversational AI platforms. [ 58 ] In October 2017, The Times of India app incorporated Haptik's virtual personal assistant service with Sprite as the exclusive brand partner. [ 60 ] Samsung was the second partner, using Haptik to power its 'My Assistant' service that is pre-installed on the Samsung Galaxy S7 and Galaxy S7 Edge in India . [ 2 ] Haptik built a scalable support bot for Dream11 which helped the platform handle 8x its volume without a large support staff during IPL 2018. [ 62 ] Haptik raised $11.2M in its Series B funding round from Times Internet in April 2016. [ 63 ] [ 64 ] [ 2 ] Times Internet thus acquired a majority stake in Haptik. Earlier, the company had received funding of $1 million from Kalaari Capital in September 2014. [ 65 ] [ 66 ] Haptik is a part of Reliance Industries Limited , which acquired a majority stake in the company in a $100 million deal in April 2019.
[ 67 ] [ 68 ] Haptik builds Conversational AI that understands context. [ 69 ] [ 70 ] [ 71 ] [ 72 ] Haptik has open sourced its proprietary Named Entity Recognition system that powers the chatbots behind Haptik app at the Chatbot Summit held in Berlin on 26 June 2017. [ 73 ] [ 74 ]
https://en.wikipedia.org/wiki/Haptik
Hara hachi bun me ( 腹八分目 ) (also spelled hara hachi bu , and sometimes misspelled hari hachi bu ) is a Confucian [ 1 ] teaching that instructs people to eat until they are 80 percent full. [ 2 ] The Japanese phrase translates to "Eat until you are eight parts (out of ten) full", [ 2 ] or "belly 80 percent full". [ 3 ] There is evidence that following this practice leads to a lower body mass index and increased longevity, and it might even help to prevent dementia in the elderly. Biochemist Clive McCay , a professor at Cornell University in the 1930s, reported that significant calorie restriction prolonged life in laboratory animals. [ 4 ] [ 5 ] Authors Bradley and Craig Wilcox along with Makoto Suzuki believe that hara hachi bun me may act as a form of calorie restriction, and therefore extend the life expectancy for those who practice this philosophy. They take the case of Okinawa , whose population ranks at the top in terms of life expectancy: they believe that hara hachi bun me assists in keeping the average Okinawan's BMI low, and this is thought to be due to the delay in the stomach stretch receptors that help signal satiety . The result of not practising hara hachi bun me is a constant stretching of the stomach, which in turn increases the amount of food needed to feel full. [ 2 ] Yoshida Iwase and colleagues have investigated the reasons why some centenarians are able to reach such old ages without signs of dementia, and among other factors, they found that following the hara hachi bun me philosophy might contribute to healthier neurological markers for the elderly. [ 6 ] Okinawans are a minority culture who, although part of Japan , are descendants of the Ryukyuan Kingdom and who had influences from mainland China. Okinawa has the world's highest proportion of centenarians , at approximately 50 per 100,000 people. [ 7 ] They are known to practise hara hachi bun me , [ 2 ] and as a result they typically consume about 1,800 [ 3 ] to 1,900 kilocalories per day. [ 8 ] The typical body mass index (BMI) of their elders is about 18 to 22, compared to a typical BMI of 26 or 27 for adults over 60 years of age in the United States . [ 9 ] The philosophy of hara hachi bun me is also found in other cultures. From the teachings of Confucius , [ 1 ] philosophies dating back to the 5th century BCE in China, a proverb found in Traditional Chinese Medicine states: "Chīfàn qī fēn bǎo, sān fēn han" ( 吃飯七分飽、三分寒 ) or "only eat 7 parts full, and wear 3 parts less." [ 10 ] The principle of avoiding surfeit also appears in Islam, dating back to the 6th century and attributed to the prophet Muhammad, embodied in the proverb stating: "you should fill one third of the stomach with liquid, another third with food, and leave the rest empty." [ 10 ] The practice of a Confucian teaching that cautioned against eating too much, so as not to overburden the spleen, stomach or heart, [ 11 ] evolved into a Japanese proverb: "Hara hachi bun ni yamai nashi, hara juuni bun ni isha tarazu" (腹八分に病なし、腹十二分に医者足らず), literally "stomach 80% in, no illness, stomach 120% in, doctor shortage", which is usually translated into English as "eight parts of a full stomach sustain the man; the other two sustain the doctor". [ 11 ] In the 1965 book Three Pillars of Zen , the author quotes Hakuun Yasutani who, in his lecture for zazen beginners discussing the book Zazen Yojinki ( Precautions to Observe in Zazen ), written circa 1300, advised his students to eat no more than eighty percent of their capacity, reinforcing this with the proverb above.
[ 11 ] Hara hachi bun me was popularised in the United States by a variety of modern books on diet and longevity. [ 12 ] [ 13 ]
https://en.wikipedia.org/wiki/Hara_hachi_bun_me
Harald Ganzinger (31 October 1950, Werneck – 3 June 2004, Saarbrücken ) was a German computer scientist who, together with Leo Bachmair , developed the superposition calculus , which is (as of 2007) used in most of the state-of-the-art automated theorem provers for first-order logic . He received his Ph.D. from the Technical University of Munich in 1978. Before 1991 he was a Professor of Computer Science at the University of Dortmund . He then joined the Max Planck Institute for Computer Science in Saarbrücken shortly after it was founded in 1991. Until 2004 he was the Director of the Programming Logics department of the Max Planck Institute for Computer Science and honorary professor at Saarland University . His research group created the SPASS automated theorem prover . He received the Herbrand Award in 2004 ( posthumously ) for his important contributions to automated theorem proving .
https://en.wikipedia.org/wiki/Harald_Ganzinger
Harald Schäfer (10 February 1913, Jena – 21 December 1992, Münster ) was a professor of inorganic chemistry at the University of Münster in Germany. He is recognized for popularizing the use of chemical vapor transport and the discovery of many new inorganic compounds . Schäfer began his studies in 1937 and was awarded a doctorate in 1940. His dissertation was on the analytical chemistry of boron. He conducted his habilitation at the Technical University of Stuttgart on iron oxychlorides , during which he discovered the phenomenon of chemical vapor transport (migration of solid material via the gas phase). [ 1 ] [ 2 ] In recognition of his research achievements, he was awarded the Alfred Stock Memorial Prize in 1967. He was also elected to the Leopoldina Academy .
https://en.wikipedia.org/wiki/Harald_Schäfer
In mathematics , the Haran diamond theorem gives a general sufficient condition for a separable extension of a Hilbertian field to be Hilbertian. Let K be a Hilbertian field and L a separable extension of K . Assume there exist two Galois extensions N and M of K such that L is contained in the compositum NM , but is contained in neither N nor M . Then L is Hilbertian. The name of the theorem comes from the diamond-shaped diagram formed by the fields K , N , M , NM and L , and was coined by Jarden. A notable special case is Weissauer's theorem, which was first proved using non-standard methods by Weissauer. It was reproved by Fried using standard methods. The latter proof led Haran to his diamond theorem. Weissauer's theorem states: Let K be a Hilbertian field, N a Galois extension of K , and L a finite proper extension of N . Then L is Hilbertian. It can be deduced from the diamond theorem as follows. If L is finite over K , it is Hilbertian; hence we assume that L/K is infinite. Let x be a primitive element for L/N , i.e., L = N ( x ). Let M be the Galois closure of K ( x ). Then all the assumptions of the diamond theorem are satisfied, hence L is Hilbertian. Another sufficient permanence condition, preceding the diamond theorem, was given by Haran–Jarden: Theorem. Let K be a Hilbertian field and N , M two Galois extensions of K . Assume that neither contains the other. Then their compositum NM is Hilbertian. This theorem has a nice consequence: since the field of rational numbers Q is Hilbertian ( Hilbert's irreducibility theorem ), we get that the algebraic closure of Q is not the compositum of two proper Galois extensions.
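Since the diagram the theorem is named after is not reproduced here, the following is a rough LaTeX sketch of the intended picture; the layout is an assumption based only on the statement above, not a copy of the original figure.

```latex
\documentclass{article}
\begin{document}
% The "diamond" of fields: the compositum NM at the top, the Galois
% extensions N and M of K on the sides, the Hilbertian base field K at the
% bottom, and L sitting inside NM but contained in neither N nor M.
\[
\begin{array}{ccccc}
    &            & NM &            &   \\
    & /          & |  & \backslash &   \\
  N &            & L  &            & M \\
    & \backslash & |  & /          &   \\
    &            & K  &            &
\end{array}
\]
\end{document}
```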
https://en.wikipedia.org/wiki/Haran's_diamond_theorem
HarbisonWalker International is a refractory solutions provider headquartered in Pittsburgh , Pennsylvania, USA. HarbisonWalker International was created by the merger of Harbison-Walker , A.P. Green, and North American Refractories, all considered among the Big 5 of the firebrick industry during the first half of the 20th century. [ n 1 ] [ 1 ] The Harbison and Walker Company was founded in Pennsylvania in 1875 when Hay Walker Sr. and Samuel P. Harbison bought the Star Fire Brick Company, founded 10 years earlier. Harbison-Walker Refractories was acquired by Dresser Industries in 1967 [ 2 ] but demerged from Dresser in 1992. [ 3 ] [ 4 ] In 1999, Harbison-Walker managed 4,000 employees and 25 locations in 7 countries (15 production sites in the USA, 3 in Canada) when it was acquired by the Austrian conglomerate RHI Refractories . [ 5 ] Missouri-born Allen Percival Green (1875-1956) worked temporarily for Harbison-Walker, and then bought the Mexico Brick and Fire Clay Company in 1910. The A.P. Green Fire Brick Company grew strongly during World War I and World War II . A.P. Green's company went public on the New York Stock Exchange in 1966, and was taken over by U.S. Gypsum ( USG ) in 1967. [ 1 ] It was then acquired by RHI AG. North American Refractories Co. (NARCO) was created in 1929 in Ohio through the merger of seven manufacturers of refractory solutions. NARCO was acquired by Eltra Corp. in 1965. Eltra Corp. was acquired by Allied Corp. in 1979, making NARCO part of Allied's chemical division. It was acquired by RHI AG in 1995. [ 6 ] In 2002, many US refractory companies - including Harbison-Walker, A.P. Green, and North American Refractories - were embroiled in asbestos claims and filed for bankruptcy. [ 7 ] RHI AG filed for Chapter 11 . The reorganization of RHI AG led to the creation of ANH International in 2007, which adopted the name HarbisonWalker International (HWI) in 2015. [ 8 ] In 2023, a US private investment firm acquired HWI and combined its operations with its other refractory company, Calderys . [ 9 ] HarbisonWalker International (HWI) manufactures and distributes high-temperature solutions, from refractories to casting fluxes and molding solutions. [ 10 ]
https://en.wikipedia.org/wiki/HarbisonWalker_International
The Hard Rock Miner's Handbook is a reference book that deals with the underground hard-rock mining industry. It was written by engineer Jack de la Vergne as a non-profit publication. [ 1 ] The first edition was published in 2000 by McIntosh Engineering, a mining engineering consulting company. [ 2 ] It is currently in its third printing and is used by thousands of people in the mining industry, including students, professors, miners, engineers and mining executives, as a source of practical mining information; [ 3 ] its " Rules of Thumb " and "Tricks of the Trade" are widely used in the mining industry. [ 4 ] Copies of the handbook have been distributed to more than 113 countries around the world. [ 3 ]
https://en.wikipedia.org/wiki/Hard_Rock_Miner's_Handbook
Hard X-ray Modulation Telescope ( HXMT ), also known as Insight ( Chinese : 慧眼 ), [ 3 ] is a Chinese X-ray space observatory, launched on June 15, 2017 [ 2 ] to observe black holes , neutron stars , active galactic nuclei and other phenomena based on their X-ray and gamma-ray emissions. [ 4 ] It is based on the JianBing 3 imagery reconnaissance satellite series platform. The project, a joint collaboration of the Ministry of Science and Technology of China , the Chinese Academy of Sciences , and Tsinghua University , has been under development since 2000. The main scientific instrument is an array of 18 NaI(Tl)/CsI(Na) slat-collimated "phoswich" scintillation detectors, collimated to 5.7°×1° overlapping fields of view. [ 5 ] The main NaI detectors have an area of 286 cm 2 each, and cover the 20–200 keV energy range. Data analysis is planned to use a direct algebraic method, "direct demodulation", [ 6 ] which has shown promise in de-convolving the raw data into images while preserving excellent angular and energy resolution. The satellite has three payloads: the high-energy X-ray telescope (20–250 keV), the medium-energy X-ray telescope (5–30 keV), and the low-energy X-ray telescope (1–15 keV). [ 2 ]
https://en.wikipedia.org/wiki/Hard_X-ray_Modulation_Telescope
Hard coding (also hard-coding or hardcoding ) is the software development practice of embedding data directly into the source code of a program or other executable object, as opposed to obtaining the data from external sources or generating it at runtime . Hard-coded data typically can be modified only by editing the source code and recompiling the executable, although it can be changed in memory or on disk using a debugger or hex editor . Data that is hard-coded is best suited for unchanging pieces of information, such as physical constants , version numbers , and static text elements. Softcoded data, on the other hand, obtains arbitrary information from user input , text files , INI files , HTTP server responses, configuration files, preprocessor macros, external constants, databases, or command-line arguments , and is determined at runtime. Hard coding requires the program's source code to be changed any time the input data or desired format changes, when it might be more convenient for the end user to change the detail by some means outside the program. [ 1 ] Hard coding is often required, but can also be considered an anti-pattern . [ 2 ] Programmers may not have a dynamic user interface solution for the end user worked out but must still deliver the feature or release the program. This is usually temporary but does resolve, in a short-term sense, the pressure to deliver the code. Later, softcoding is done to allow a user to pass on parameters that give the end user a way to modify the results or outcome. The term "hard-coded" was initially used as an analogy to hardwiring circuits, and was meant to convey the inflexibility that results from its usage within software design and implementation. In the context of run-time extensible collaborative development environments such as MUDs , hardcoding also refers to developing the core engine of the system responsible for low-level tasks and executing scripts , as opposed to softcoding , which is developing the high-level scripts that get interpreted by the system at runtime , with values from external sources, such as text files , INI files , preprocessor macros , external constants, databases , command-line arguments , HTTP server responses, configuration files , and user input . In this case, the term is not pejorative and refers to general development, rather than specifically embedding output data. Hardcoding credentials is a popular way of creating a backdoor . Hardcoded credentials are usually not visible in configuration files or the output of account-enumeration commands and cannot be easily changed or bypassed by users. If discovered, a user might be able to disable such a backdoor by modifying and rebuilding the program from its source code ( if source is publicly available ), decompiling , or reverse-engineering software , directly editing the program's binary code , or instituting an integrity check (such as digital signatures, anti-tamper, and anti-cheat ) to prevent unexpected access, but such actions are often prohibited by an end-user license agreement . As a digital rights management measure, software developers may hardcode a unique serial number directly into a program. It is also common to hardcode a public key , creating DRM for which it is infeasible to create a keygen.
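As a minimal, hypothetical sketch of the distinction drawn at the start of this article, compare a value compiled into the program with the same value obtained from external sources at run time. The file name settings.json, the environment-variable names, and the values are all invented for illustration.

```python
import json
import os

# Hard-coded: changing the retry limit or the server address means editing
# the source and shipping a new build.
MAX_RETRIES = 3
SERVER_URL = "https://example.invalid/api"

def connect_hardcoded():
    return f"connecting to {SERVER_URL} (up to {MAX_RETRIES} retries)"

# Soft-coded: the same values are read at run time from environment
# variables, falling back to a JSON configuration file, then to defaults.
def connect_softcoded(config_path="settings.json"):
    defaults = {"server_url": "https://example.invalid/api", "max_retries": 3}
    try:
        with open(config_path) as fh:
            defaults.update(json.load(fh))
    except FileNotFoundError:
        pass  # no config file present: keep the built-in defaults
    url = os.environ.get("APP_SERVER_URL", defaults["server_url"])
    retries = int(os.environ.get("APP_MAX_RETRIES", defaults["max_retries"]))
    return f"connecting to {url} (up to {retries} retries)"

print(connect_hardcoded())
print(connect_softcoded())
```

The soft-coded variant can be reconfigured by an end user or administrator without touching the source, which is exactly the trade-off the surrounding text describes; whether that flexibility is worth the extra code is the anti-pattern question discussed below.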
On the opposite case, a software cracker may hard-code a valid serial number to the program or even prevent the executable from asking the user for it, allowing unauthorized copies to be redistributed without the need of entering a valid number, thus sharing the same key for every copy, if one has been hard-coded. If a Windows program is programmed to assume it is always installed to C:\Program Files\Appname and someone tries to install it to a different drive for space or organizational reasons, it may fail to install or to run after installation. This problem might not be identified in the testing process, since the average user installs to the default drive and directory and testing might not include the option of changing the installation directory. However, it is advisable for programmers and developers not to fix the installation path of a program, since the default installation path depends on the operating system, OS version, and sysadmin decisions. For example, many installations of Microsoft Windows use drive C: as their primary hard disk , but this is not guaranteed. There was a similar issue with microprocessors in early computers, which started execution at a fixed address in memory. Some " copy-protected " programs look for a particular file on a floppy disk or flash drive on startup to verify that they are not unauthorized copies. If the computer is replaced by a newer machine, which doesn't have a floppy drive, the program that requires it now can't be run since the floppy disk can't be inserted. This last example shows why hard coding may turn out to be impractical even when it seems at the time that it would work completely. In the 1980s and 1990s, the great majority of PCs were fitted with at least one floppy drive, but floppy drives later fell out of use. A program hard-coded in that manner 15 years ago could face problems if not updated. Some Windows operating systems have so called Special Folders which organize files logically on the hard disk. There are problems that can arise involving hard coding: Some Windows programs hard code the profile path to developer-defined locations such as C:\Documents and Settings\ Username . This is the path for the vast majority of Windows 2000 or above, but this would cause an error if the profile is stored on a network or otherwise relocated. The proper way to get it is to call the GetUserProfileDirectory function or to resolve the %userprofile% environment variable. Another assumption that developers often make is assuming that the profile is located on a local hard disk. Some Windows programs hardcode the path to My Documents as ProfilePath \My Documents . These programs would work on machines running the English version, but on localized versions of Windows this folder normally has a different name. For example, in Italian versions the My Documents folder is named Documenti . My Documents may also have been relocated using Folder Redirection in Group Policy in Windows 2000 or above. The proper way to get it is to call the SHGetFolderPath function. An indirect reference, such as a variable inside the program called "FileName", could be expanded by accessing a "browse for file" dialogue window, and the program code would not have to be changed if the file moved. Hard coding is especially problematic in preparing the software for translation to other languages. In many cases, a single hard-coded value, such as an array size, may appear several times within the source code of a program. This would be a magic number . 
This may commonly cause a program bug if some of the appearances of the value are modified, but not all of them. Such a bug is hard to find and may remain in the program for a long time. A similar problem may occur if the same hard-coded value is used for more than one parameter value, e.g. an array of 6 elements and a minimum input string length of 6. A programmer may mistakenly change all instances of the value (often using an editor's search-and-replace facility) without checking the code to see how each instance is used. Both situations are avoided by defining constants , which associate names with the values, and using the names of the constants for each appearance within the code. One important case of hard coding is when strings are placed directly into the file, which forces translators to edit the source code to translate a program. (There is a tool called gettext that permits strings to be left in files, but lets translators translate them without changing the source code; it effectively de-hard codes the strings.) In computing competitions such as the International Olympiad in Informatics , contestants are required to write a program with specific input-output pattern according to the requirement of the questions. In rare cases where the possible number of inputs is small enough, a contestant might consider using an approach that maps all possible inputs to their correct outputs. This program would be considered a hard-coded solution as opposed to an algorithmic one (even though the hard-coded program might be the output of an algorithmic program). Softcoding is a computer coding term that refers to obtaining a value or function from some external resource, such as text files , INI files , preprocessor macros , external constants, configuration files , command-line arguments , databases, user input, HTTP server responses. It is the opposite of hardcoding, which refers to coding values and functions in the source code. Avoiding hard coding of commonly altered values is good programming practice. Users of the software should be able to customize it to their needs, within reason, without having to edit the program's source code. Similarly, careful programmers avoid magic numbers in their code, to improve its readability, and assist maintenance. These practices are generally not referred to as softcoding . The term is generally used where softcoding becomes an anti-pattern . Abstracting too many values and features can introduce more complexity and maintenance issues than would be experienced with changing the code when required. Softcoding, in this sense, was featured in an article on The Daily WTF . [ 3 ] At the extreme end, soft-coded programs develop their own poorly designed and implemented scripting languages, and configuration files that require advanced programming skills to edit. This can lead to the production of utilities to assist in configuring the original program, and these utilities often end up being 'softcoded' themselves. The boundary between proper configurability and problematic soft-coding changes with the style and nature of a program. Closed-source programs must be very configurable, as the end user does not have access to the source to make any changes. In-house software and software with limited distribution can be less configurable, as distributing altered copies is simpler. Custom-built web applications are often best with limited configurability, as altering the scripts is seldom any harder than altering a configuration file. 
To avoid softcoding, consider the value to the end user of any additional flexibility you provide, and compare it with the increased complexity and related ongoing maintenance costs the added configurability involves. Several legitimate design patterns exist for achieving the flexibility that softcoding attempts to provide. An application requiring more flexibility than is appropriate for a configuration file may benefit from the incorporation of a scripting language . In many cases, the appropriate design is a domain-specific language integrated into an established scripting language. Another approach is to move most of an application's functionality into a library, providing an API for writing-related applications quickly.
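Returning to the magic-number example discussed earlier (an array of 6 elements and a minimum input string length of 6), a small hypothetical sketch shows why naming the two unrelated values separately avoids the search-and-replace trap; the function and constant names are invented for illustration.

```python
# Magic numbers: the two 6s below are unrelated, so a blanket
# search-and-replace "fix" of one requirement can silently break the other.
def accepted_codes_magic(lines):
    return [ln for ln in lines if len(ln) >= 6][:6]

# Named constants: each value is defined once, with a self-describing name,
# so changing the list capacity cannot accidentally change the length check.
MIN_CODE_LENGTH = 6   # minimum accepted input string length
MAX_CODES = 6         # capacity of the result list ("array of 6 elements")

def accepted_codes(lines):
    return [ln for ln in lines if len(ln) >= MIN_CODE_LENGTH][:MAX_CODES]

sample = ["abc", "abcdef", "1234567", "qwertyu", "zxcvbn"]
assert accepted_codes(sample) == accepted_codes_magic(sample) == [
    "abcdef", "1234567", "qwertyu", "zxcvbn"]
```

Defining the constants is plain good practice rather than softcoding in the anti-pattern sense: the values still live in the source, but each appears exactly once and carries its meaning in its name.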
https://en.wikipedia.org/wiki/Hard_coding
In information handling , the U.S. Federal Standard 1037C (Glossary of Telecommunication Terms) defines a hard copy as a permanent reproduction, or copy, in the form of a physical object, of any media suitable for direct use by a person (in particular paper ), of displayed or transmitted data . Examples of hard copies include teleprinter pages, continuous printed tapes, computer printouts, and radio photo prints. On the other hand, physical objects such as magnetic tapes , floppy disks , or non-printed punched paper tapes are not defined as hard copies by 1037C. [ 1 ] A file that can be viewed on a screen without being printed is sometimes called a soft copy . [ 2 ] [ 3 ] The U.S. Federal Standard 1037C defines "soft copy" as "a nonpermanent display image, for example, a cathode ray tube display ." [ 4 ] The term "hard copy" predates the digital computer. In the book and newspaper printing process, "hard copy" refers to a manuscript or typewritten document that has been edited and proofread and is ready for typesetting or being read on-air in a radio or television broadcast. The old meaning of hard copy was mostly discarded after the information revolution . [ 5 ] One often-overlooked use for printers is in the field of IT security . Copies of various system and server activity logs are typically stored on the local filesystem , where a remote attacker – having achieved their primary goals – can then alter or delete the contents of the logs in an attempt to "cover their tracks" or otherwise thwart the efforts of system administrators and security experts. However, if the log entries are simultaneously given to a printer, line-by-line, a local hard-copy record of system activity is created – which cannot be remotely altered or otherwise manipulated. Dot matrix printers are ideal for this task, as they can sequentially print each log entry, one at a time, as they are added to the log. The usual dot-matrix printer support for continuous stationery also prevents incriminating pages from being surreptitiously removed or altered without evidence of tampering. The hacker's Jargon File defines a dead-tree version to be a paper version of an online document, where the phrase "dead trees" refers to paper . A saying from the Jargon File is that "You can't grep dead trees" , which comes from the Unix command grep , which searches the contents of text files. This means that there is an advantage to keeping documents in digital form, rather than on paper, so that they can be more easily searched for specific contents. A similar entry in the Jargon File is "tree-killer", which may refer to either a printer or a person who wastes paper. Dead-tree edition refers to a printed paper version of a written work, as opposed to digital alternatives such as a web page . [ 6 ] This article incorporates public domain material from Federal Standard 1037C . General Services Administration . Archived from the original on 22 January 2022. (in support of MIL-STD-188 ).
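The log-to-printer idea described above can be sketched with Python's standard logging module. This is a hypothetical illustration only: the device path /dev/usb/lp0 is an assumption (it varies by system, and the printer must accept plain text line by line), and in practice the records would come from the real system loggers rather than a demo call.

```python
import logging

# Send every log record both to a normal on-disk log file and, line by line,
# to a line printer character device, producing a hard copy that a remote
# attacker cannot silently rewrite afterwards.
LOG_FILE = "activity.log"
PRINTER_DEVICE = "/dev/usb/lp0"   # assumed device path; system-dependent

logger = logging.getLogger("audit")
logger.setLevel(logging.INFO)
logger.addHandler(logging.FileHandler(LOG_FILE))
try:
    # FileHandler flushes after every record, so each line reaches the
    # printer (and the paper) as soon as it is logged.
    logger.addHandler(logging.FileHandler(PRINTER_DEVICE))
except OSError:
    pass  # no printer attached: fall back to the disk log only

logger.info("sshd: accepted login for user alice from 192.0.2.10")
```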
https://en.wikipedia.org/wiki/Hard_copy
A hard disk drive failure occurs when a hard disk drive malfunctions and the stored information cannot be accessed with a properly configured computer. A hard disk failure may occur in the course of normal operation, or due to an external factor such as exposure to fire or water or high magnetic fields , or suffering a sharp impact or environmental contamination, which can lead to a head crash . The stored information on a hard drive may also be rendered inaccessible as a result of data corruption , disruption or destruction of the hard drive's master boot record , or by malware deliberately destroying the disk's contents. There are a number of causes for hard drives to fail including: human error, hardware failure, firmware corruption, media damage, heat, water damage, power issues and mishaps. [ 1 ] Drive manufacturers typically specify a mean time between failures (MTBF) or an annualized failure rate (AFR) which are population statistics that can't predict the behavior of an individual unit. [ 2 ] These are calculated by constantly running samples of the drive for a short period of time, analyzing the resultant wear and tear upon the physical components of the drive, and extrapolating to provide a reasonable estimate of its lifespan. Hard disk drive failures tend to follow the concept of the bathtub curve . [ 3 ] Drives typically fail within a short time if there is a defect present from manufacturing. If a drive proves reliable for a period of a few months after installation, the drive has a significantly greater chance of remaining reliable. Therefore, even if a drive is subjected to several years of heavy daily use, it may not show any notable signs of wear unless closely inspected. On the other hand, a drive can fail at any time in many different situations. The most notorious cause of drive failure is a head crash , where the internal read-and-write head of the device, usually just hovering above the surface, touches a platter , or scratches the magnetic data-storage surface. A head crash usually incurs severe data loss , and data recovery attempts may cause further damage if not done by a specialist with proper equipment. Drive platters are coated with an extremely thin layer of non- electrostatic lubricant, so that the read-and-write head will likely simply glance off the surface of the platter should a collision occur. However, this head hovers mere nanometers from the platter's surface which makes a collision an acknowledged risk. Another cause of failure is a faulty air filter . The air filters on today's drives equalize the atmospheric pressure and moisture between the drive enclosure and its outside environment. If the filter fails to capture a dust particle, the particle can land on the platter, causing a head crash if the head happens to sweep over it. After a head crash, particles from the damaged platter and head media can cause one or more bad sectors . These, in addition to platter damage, will quickly render a drive useless. A drive also includes controller electronics, which occasionally fail. In such cases, it may be possible to recover all data by replacing the controller board. Failure of a hard disk drive can be catastrophic or gradual. The former typically presents as a drive that can no longer be detected by CMOS setup , or that fails to pass BIOS POST so that the operating system never sees it. 
Gradual hard-drive failure can be harder to diagnose, because its symptoms, such as corrupted data and slowing down of the PC (caused by gradually failing areas of the hard drive requiring repeated read attempts before successful access), can be caused by many other computer issues, such as malware . A rising number of bad sectors can be a sign of a failing hard drive, but because the hard drive automatically adds them to its own growth defect table, [ 4 ] they may not become evident to utilities such as ScanDisk unless the utility can catch them before the hard drive's defect management system does, or the backup sectors held in reserve by the internal hard-drive defect management system run out (by which point the drive is on the point of failing outright). A cyclical repetitive pattern of seek activity such as rapid or slower seek-to-end noises ( click of death ) can be indicative of hard drive problems. [ 5 ] During normal operation, heads in HDDs fly above the data recorded on the disks. Modern HDDs prevent power interruptions or other malfunctions from landing its heads in the data zone by either physically moving ( parking ) the heads to a special landing zone on the platters that is not used for data storage, or by physically locking the heads in a suspended ( unloaded ) position raised off the platters. Some early PC HDDs did not park the heads automatically when power was prematurely disconnected and the heads would land on data. In some other early units the user would run a program to manually park the heads. A landing zone is an area of the platter usually near its inner diameter (ID), where no data is stored. This area is called the Contact Start/Stop (CSS) zone, or the landing zone. Disks are designed such that either a spring or, more recently, rotational inertia in the platters is used to park the heads in the case of unexpected power loss. In this case, the spindle motor temporarily acts as a generator , providing power to the actuator. Spring tension from the head mounting constantly pushes the heads towards the platter. While the disk is spinning, the heads are supported by an air bearing and experience no physical contact or wear. In CSS drives the sliders carrying the head sensors (often also just called heads ) are designed to survive a number of landings and takeoffs from the media surface, though wear and tear on these microscopic components eventually takes its toll. Most manufacturers design the sliders to survive 50,000 contact cycles before the chance of damage on startup rises above 50%. However, the decay rate is not linear: when a disk is younger and has had fewer start-stop cycles, it has a better chance of surviving the next startup than an older, higher-mileage disk (as the head literally drags along the disk's surface until the air bearing is established). For example, the Seagate Barracuda 7200.10 series of desktop hard disk drives are rated to 50,000 start–stop cycles; in other words, no failures attributed to the head–platter interface were seen before at least 50,000 start–stop cycles during testing. [ 6 ] Around 1995 IBM pioneered a technology where a landing zone on the disk is made by a precision laser process ( Laser Zone Texture = LZT) producing an array of smooth nanometer-scale "bumps" in a landing zone, [ 7 ] thus vastly improving stiction and wear performance. 
This technology is still in use today, predominantly in older and newer lower-capacity Seagate desktop drives including the 7200.12 series, 7200.14 series models that are 500GB and under and is still used in the latest 16th generation in one model (BarraCuda Compute 1TB ST1000DM010), [ 8 ] but has been phased out in all 2.5" drives, as well as higher-capacity desktop, NAS, and enterprise drives in favor of load/unload ramps. Western Digital and Toshiba also fully phased out CSS in all their hard drives including the cheapest models. Some early adopters included IBM and Hitachi. In general, CSS technology can be prone to increased stiction (the tendency for the heads to stick to the platter surface), e.g. as a consequence of increased humidity. Excessive stiction can cause physical damage to the platter and slider or spindle motor. Load/unload technology relies on the heads being lifted off the platters into a safe location, thus eliminating the risks of wear and stiction altogether. The first HDD RAMAC and most early disk drives used complex mechanisms to load and unload the heads. Nearly all modern HDDs use ramp loading, first introduced by Memorex in 1967, [ 9 ] to load/unload onto plastic "ramps" near the outer disk edge. Laptop drives adopted this due to the need for increased shock resistance, and then ultimately it was adopted on most desktop drives. Addressing shock robustness, IBM also created a technology for their ThinkPad line of laptop computers called the Active Protection System. When a sudden, sharp movement is detected by the built-in accelerometer in the ThinkPad, internal hard disk heads automatically unload themselves to reduce the risk of any potential data loss or scratch defects. Apple later also utilized this technology in their PowerBook , iBook , MacBook Pro , and MacBook line, known as the Sudden Motion Sensor . Sony , [ 10 ] HP with their HP 3D DriveGuard, [ 11 ] and Toshiba [ 12 ] have released similar technology in their notebook computers. Hard drives may fail in a number of ways. Failure may be immediate and total, progressive, or limited. Data may be totally destroyed, or partially or totally recoverable. Earlier drives had a tendency toward developing bad sectors with use and wear; these bad sectors could be "mapped out" so they were not used and did not affect operation of a drive, and this was considered normal unless many bad sectors developed in a short period of time. Some early drives even had a table attached to a drive's case on which bad sectors were to be listed as they appeared. [ 13 ] Later drives map out bad sectors automatically, in a way invisible to the user; a drive with remapped sectors may continue to be used, though performance may decrease as the drive must physically move the heads to the remapped sector. Statistics and logs available through S.M.A.R.T. (Self-Monitoring, Analysis, and Reporting Technology) provide information about the remapping. In modern HDDs, each drive ships with zero user-visible bad sectors, and any bad/reallocated sectors may predict the impending failure of a drive. [ citation needed ] Other failures, which may be either progressive or limited, are usually considered to be a reason to replace a drive; the value of data potentially at risk usually far outweighs the cost saved by continuing to use a drive which may be failing. Repeated but recoverable read or write errors, unusual noises, excessive and unusual heating, and other abnormalities, are warning signs. 
Most major hard disk and motherboard vendors support S.M.A.R.T., which measures drive characteristics such as operating temperature, spin-up time, data error rates, etc. Certain trends and sudden changes in these parameters are thought to be associated with increased likelihood of drive failure and data loss. However, S.M.A.R.T. parameters alone may not be useful for predicting individual drive failures. [ 16 ] While several S.M.A.R.T. parameters affect failure probability, a large fraction of failed drives do not produce predictive S.M.A.R.T. parameters. [ 16 ] Unpredictable breakdown may occur at any time in normal use, with potential loss of all data. Recovery of some or even all data from a damaged drive is sometimes, but not always, possible, and is normally costly. A 2007 study published by Google suggested very little correlation between failure rates and either high temperature or activity level. Indeed, the Google study indicated that "one of our key findings has been the lack of a consistent pattern of higher failure rates for higher temperature drives or for those drives at higher utilization levels". [ 17 ] Hard drives with S.M.A.R.T.-reported average temperatures below 27 °C (81 °F) had higher failure rates than hard drives with the highest reported average temperature of 50 °C (122 °F), with failure rates at least twice as high as those in the optimum S.M.A.R.T.-reported temperature range of 36 °C (97 °F) to 47 °C (117 °F). [ 16 ] The correlation between manufacturers, models and the failure rate was relatively strong. Statistics in this matter are kept highly secret by most entities; Google did not relate manufacturers' names with failure rates, [ 16 ] though it has been revealed that Google uses Hitachi Deskstar drives in some of its servers. [ 18 ] Google's 2007 study found, based on a large field sample of drives, that actual annualized failure rates ( AFRs ) for individual drives ranged from 1.7% for first-year drives to over 8.6% for three-year-old drives. [ 19 ] A similar 2007 study at CMU on enterprise drives showed that measured MTBF was 3–4 times lower than the manufacturer's specification, with an estimated 3% mean AFR over 1–5 years based on replacement logs for a large sample of drives, and that hard drive failures were highly correlated in time. [ 20 ] A 2007 study of latent sector errors (as opposed to the above studies of complete disk failures) showed that 3.45% of 1.5 million disks developed latent sector errors over 32 months (3.15% of nearline disks and 1.46% of enterprise-class disks developed at least one latent sector error within twelve months of their ship date), with the annual sector error rate increasing between the first and second years. Enterprise drives showed fewer sector errors than consumer drives. Background scrubbing was found to be effective in correcting these errors. [ 21 ] SCSI , SAS , and FC drives are more expensive than consumer-grade SATA drives, and are usually used in servers and disk arrays , whereas SATA drives were sold to the home computer, desktop, and near-line storage markets and were perceived to be less reliable. This distinction is now becoming blurred. The mean time between failures (MTBF) of SATA drives is usually specified to be about 1 million hours. Some drives, such as the Western Digital Raptor, have a rated MTBF of 1.4 million hours, [ 22 ] while SAS/FC drives are rated for upwards of 1.6 million hours.
[ 23 ] Modern helium-filled drives are completely sealed without a breather port, thus eliminating the risk of debris ingression, resulting in a typical MTBF of 2.5 million hours. However, independent research indicates that MTBF is not a reliable estimate of a drive's longevity ( service life ). [ 24 ] MTBF testing is conducted in laboratory environments in test chambers and is an important metric for determining the quality of a disk drive, but it is designed to measure only the relatively constant failure rate over the service life of the drive (the middle of the " bathtub curve ") before the final wear-out phase. [ 20 ] [ 25 ] [ 26 ] A more interpretable, but equivalent, metric to MTBF is annualized failure rate (AFR). AFR is the percentage of drive failures expected per year. Both AFR and MTBF tend to measure reliability only in the initial part of the life of a hard disk drive, thereby understating the real probability of failure of a used drive. [ 27 ] Server and industrial drives usually have higher MTBF and lower AFR. The cloud storage company Backblaze produces an annual report into hard drive reliability. However, the company states that it mainly uses commodity consumer drives, which are deployed in enterprise conditions rather than in the conditions and for the use for which they were intended. Consumer drives are also not tested to work with enterprise RAID cards of the kind used in a datacenter, and may not respond in the time a RAID controller expects; such drives may be identified as having failed when they have not. [ 28 ] The results of tests of this kind may be relevant or irrelevant to different users, since they accurately represent the performance of consumer drives in the enterprise or under extreme stress, but may not accurately represent their performance in normal or intended use. [ 29 ] Several common precautions are used to avoid the loss of data due to disk failure. Data from a failed drive can sometimes be partially or totally recovered if the platters' magnetic coating is not totally destroyed. Specialized companies carry out data recovery, at significant cost. It may be possible to recover data by opening the drives in a clean room and using appropriate equipment to replace or revitalize failed components. [ 35 ] If the electronics have failed, it is sometimes possible to replace the electronics board, though often drives of nominally exactly the same model manufactured at different times have different circuit boards that are incompatible. Moreover, electronics boards of modern drives usually contain drive-specific adaptation data required for accessing their system areas , so the related componentry needs to be either reprogrammed (if possible) or unsoldered and transferred between two electronics boards. [ 36 ] [ 37 ] [ 38 ] Sometimes operation can be restored for long enough to recover data, perhaps requiring reconstruction techniques such as file carving . Risky techniques may be justifiable if the drive is otherwise dead. A failing drive may start up once and continue to run for some time but never start again, so as much data as possible should be recovered as soon as the drive starts.
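For a rough sense of how the MTBF and AFR figures quoted above relate to each other, the following Python sketch assumes a constant failure rate (an exponential failure model) and a drive powered on around the clock; both are simplifying assumptions, and the function names and sample figures are illustrative rather than taken from any manufacturer's specification.

```python
import math

HOURS_PER_YEAR = 8766  # average year length in hours (365.25 days)

def afr_from_mtbf(mtbf_hours: float, power_on_hours: float = HOURS_PER_YEAR) -> float:
    """Annualized failure rate implied by an MTBF figure, assuming a
    constant failure rate (exponential failure model)."""
    return 1.0 - math.exp(-power_on_hours / mtbf_hours)

def mtbf_from_afr(afr: float, power_on_hours: float = HOURS_PER_YEAR) -> float:
    """MTBF implied by an annualized failure rate, under the same assumption."""
    return -power_on_hours / math.log(1.0 - afr)

if __name__ == "__main__":
    for mtbf in (1.0e6, 1.4e6, 2.5e6):  # figures quoted above, in hours
        print(f"MTBF {mtbf:>9,.0f} h  ->  AFR ~ {afr_from_mtbf(mtbf):.2%}")
    # A field-observed AFR of 8.6% corresponds to a far lower effective MTBF:
    print(f"AFR 8.6%  ->  effective MTBF ~ {mtbf_from_afr(0.086):,.0f} h")
```

The contrast between the specification-derived AFR of roughly 1% and field-observed AFRs of several percent illustrates why, as noted above, MTBF and AFR figures should not be read as predictions of an individual drive's service life.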
https://en.wikipedia.org/wiki/Hard_disk_drive_failure
In statistical mechanics , the hard hexagon model is a 2-dimensional lattice model of a gas, where particles are allowed to be on the vertices of a triangular lattice but no two particles may be adjacent. The model was solved by Rodney Baxter ( 1980 ), who found that it was related to the Rogers–Ramanujan identities . The hard hexagon model occurs within the framework of the grand canonical ensemble , where the total number of particles (the "hexagons") is allowed to vary naturally, and is fixed by a chemical potential . In the hard hexagon model, all valid states have zero energy, and so the only important thermodynamic control variable is the ratio of chemical potential to temperature μ /( kT ). The exponential of this ratio, z = exp( μ /( kT )) is called the activity and larger values correspond roughly to denser configurations. For a triangular lattice with N sites, the grand partition function is where g ( n , N ) is the number of ways of placing n particles on distinct lattice sites such that no 2 are adjacent. The function κ is defined by so that log(κ) is the free energy per unit site. Solving the hard hexagon model means (roughly) finding an exact expression for κ as a function of z . The mean density ρ is given for small z by The vertices of the lattice fall into 3 classes numbered 1, 2, and 3, given by the 3 different ways to fill space with hard hexagons. There are 3 local densities ρ 1 , ρ 2 , ρ 3 , corresponding to the 3 classes of sites. When the activity is large the system approximates one of these 3 packings, so the local densities differ, but when the activity is below a critical point the three local densities are the same. The critical point separating the low-activity homogeneous phase from the high-activity ordered phase is z c = ( 11 + 5 5 ) / 2 = ϕ 5 = 11.09017.... {\displaystyle z_{c}=(11+5{\sqrt {5}})/2=\phi ^{5}=11.09017....} with golden ratio φ . Above the critical point the local densities differ and in the phase where most hexagons are on sites of type 1 can be expanded as The solution is given for small values of z < z c by where For large z > z c the solution (in the phase where most occupied sites have type 1) is given by The functions G and H turn up in the Rogers–Ramanujan identities , and the function Q is the Euler function , which is closely related to the Dedekind eta function . If x = e 2πiτ , then x −1/60 G ( x ), x 11/60 H ( x ), x −1/24 P ( x ), z , κ, ρ, ρ 1 , ρ 2 , and ρ 3 are modular functions of τ, while x 1/24 Q ( x ) is a modular form of weight 1/2. Since any two modular functions are related by an algebraic relation, this implies that the functions κ , z , R , ρ are all algebraic functions of each other (of quite high degree) ( Joyce 1988 ). In particular, the value of κ (1), which Eric Weisstein dubbed the hard hexagon entropy constant ( Weisstein ), is an algebraic number of degree 24 equal to 1.395485972... ( OEIS : A085851 ). The hard hexagon model can be defined similarly on the square and honeycomb lattices. No exact solution is known for either of these models, but the critical point z c is near 3.7962 ± 0.0001 for the square lattice and 7.92 ± 0.08 for the honeycomb lattice; κ (1) is approximately 1.503048082... ( OEIS : A085850 ) for the square lattice and 1.546440708... for the honeycomb lattice ( Baxter 1999 ).
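As an illustration of the definitions above, the following Python sketch brute-forces the grand partition function on a small periodic triangular lattice and estimates κ(z) = Z^(1/N). The lattice size, function name and printed comparison are illustrative assumptions; the finite-size estimate only approaches Baxter's exact result as the lattice grows.

```python
import itertools, math

def hard_hexagon_kappa(z: float, L: int = 4) -> float:
    """Brute-force estimate of kappa(z) = Z**(1/N) on an L x L triangular
    lattice with periodic boundaries (exact only in the limit L -> infinity)."""
    N = L * L
    sites = [(i, j) for i in range(L) for j in range(L)]
    index = {s: k for k, s in enumerate(sites)}
    # Each site of a triangular lattice has six neighbours.
    neighbour_offsets = [(1, 0), (-1, 0), (0, 1), (0, -1), (1, -1), (-1, 1)]
    neighbours = [
        [index[((i + di) % L, (j + dj) % L)] for di, dj in neighbour_offsets]
        for (i, j) in sites
    ]
    Z = 0.0
    for occ in itertools.product((0, 1), repeat=N):  # all particle configurations
        # keep only configurations in which no two occupied sites are adjacent
        if any(occ[k] and occ[m] for k in range(N) for m in neighbours[k] if m > k):
            continue
        Z += z ** sum(occ)
    return Z ** (1.0 / N)

if __name__ == "__main__":
    z_c = (11 + 5 * math.sqrt(5)) / 2          # critical activity, equal to phi**5
    print(f"z_c = {z_c:.5f}")                  # 11.09017...
    print(f"kappa(1) estimate on a 4x4 lattice: {hard_hexagon_kappa(1.0):.4f}")
    # Baxter's exact value in the thermodynamic limit is 1.395485972...
```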
https://en.wikipedia.org/wiki/Hard_hexagon_model
Hard infrastructure , also known as tangible or built infrastructure , is the physical infrastructure of roads, bridges, tunnels, railways, ports, and harbors, among others, as opposed to the soft infrastructure or "intangible infrastructure of human capital in the form of education, research, health and social services and "institutional infrastructure" in the form of legal, economic and social systems. [ 1 ] [ 2 ] This article delineates both the capital goods , or fixed assets , and the control systems , software required to operate, manage and monitor the systems, as well as any accessory buildings, plants, or vehicles that are an essential part of the system. Also included are fleets of vehicles operating according to schedules such as public transit buses and garbage collection, as well as basic energy or communications facilities that are not usually part of a physical network, such as oil refineries , radio, and television broadcasting facilities. Hard infrastructure in general usually has the following attributes: [ according to whom? ] These are physical assets that provide services. The people employed in the hard infrastructure sector generally maintain, monitor, and operate the assets, but do not offer services to the clients or users of the infrastructure. Interactions between workers and clients are generally limited to administrative tasks concerning ordering, scheduling, or billing of services. [ citation needed ] These are large networks constructed over generations and are not often replaced as a whole system. The network provides services to a geographically defined area, and has a long life because its service capacity is maintained by continual refurbishment or replacement of components as they wear out. [ citation needed ] The system or network tends to evolve over time as it is continuously modified, improved, enlarged, and as various components are rebuilt, decommissioned or adapted to other uses. The system components are interdependent and not usually capable of subdivision or separate disposal, and consequently are not readily disposable within the commercial marketplace. The system interdependency may limit a component life to a lesser period than the expected life of the component itself. [ citation needed ] The systems tend to be natural monopolies , insofar that economies of scale means that multiple agencies providing a service are less efficient than would be the case if a single agency provided the service. This is because the assets have a high initial cost and a value that is difficult to determine. Once most of the system is built, the marginal cost of servicing additional clients or users tends to be relatively inexpensive, and may be negligible if there is no need to increase the peak capacity or the geographical extent of the network. [ citation needed ] In public economics theory, infrastructure assets such as highways and railways tend to be public goods , in that they carry a high degree of non-excludability , where no household can be excluded from using it, and non-rivalry , where no household can reduce another from enjoying it. These properties lead to externality , free ridership , and spillover effects that distort perfect competition and market efficiency. Hence, government becomes the best actor to supply the public goods. 
[ 3 ] Transportation infrastructure includes canals, railways, highways, airways and pipelines, among other assets. [ 4 ] The OECD classifies coal mines, oil wells and natural gas wells as part of the mining sector, and power generation as part of the industrial sector of the economy, not as part of infrastructure. [ 5 ] The OECD lists communications under its economic infrastructure Common Reporting Standard codes. [ 5 ]
https://en.wikipedia.org/wiki/Hard_infrastructure
Hard spheres are widely used as model particles in the statistical mechanical theory of fluids and solids. They are defined simply as impenetrable spheres that cannot overlap in space. They mimic the extremely strong ("infinitely elastic bouncing") repulsion that atoms and spherical molecules experience at very close distances. Hard spheres systems are studied by analytical means, by molecular dynamics simulations, and by the experimental study of certain colloidal model systems. Beside being a model of theoretical significance, the hard-sphere system is used as a basis in the formulation of several modern, predictive Equations of State for real fluids through the SAFT approach, and models for transport properties in gases through Chapman-Enskog Theory . Hard spheres of diameter σ {\displaystyle \sigma } are particles with the following pairwise interaction potential: V ( r 1 , r 2 ) = { 0 if | r 1 − r 2 | ≥ σ ∞ if | r 1 − r 2 | < σ {\displaystyle V(\mathbf {r} _{1},\mathbf {r} _{2})={\begin{cases}0&{\text{if}}\quad |\mathbf {r} _{1}-\mathbf {r} _{2}|\geq \sigma \\\infty &{\text{if}}\quad |\mathbf {r} _{1}-\mathbf {r} _{2}|<\sigma \end{cases}}} where r 1 {\displaystyle \mathbf {r} _{1}} and r 2 {\displaystyle \mathbf {r} _{2}} are the positions of the two particles. The first three virial coefficients for hard spheres can be determined analytically B 2 v 0 = 4 B 3 v 0 2 = 10 B 4 v 0 3 = − 712 35 + 219 2 35 π + 4131 35 π arccos ⁡ 1 3 ≈ 18.365 {\displaystyle {\begin{aligned}{\frac {B_{2}}{v_{0}}}&=4\\{\frac {B_{3}}{{v_{0}}^{2}}}&=10\\{\frac {B_{4}}{{v_{0}}^{3}}}&=-{\frac {712}{35}}+{\frac {219{\sqrt {2}}}{35\pi }}+{\frac {4131}{35\pi }}\arccos {\frac {1}{\sqrt {3}}}\approx 18.365\end{aligned}}} Higher-order ones can be determined numerically using Monte Carlo integration . We list B 5 v 0 4 = 28.24 ± 0.08 B 6 v 0 5 = 39.5 ± 0.4 B 7 v 0 6 = 56.5 ± 1.6 {\displaystyle {\begin{aligned}{\frac {B_{5}}{{v_{0}}^{4}}}&=28.24\pm 0.08\\{\frac {B_{6}}{{v_{0}}^{5}}}&=39.5\pm 0.4\\{\frac {B_{7}}{{v_{0}}^{6}}}&=56.5\pm 1.6\end{aligned}}} A table of virial coefficients for up to eight dimensions can be found on the page Hard sphere: virial coefficients . [ 1 ] The hard sphere system exhibits a fluid-solid phase transition between the volume fractions of freezing η f ≈ 0.494 {\displaystyle \eta _{\mathrm {f} }\approx 0.494} and melting η m ≈ 0.545 {\displaystyle \eta _{\mathrm {m} }\approx 0.545} . The pressure diverges at random close packing η r c p ≈ 0.644 {\displaystyle \eta _{\mathrm {rcp} }\approx 0.644} for the metastable liquid branch and at close packing η c p = 2 π / 6 ≈ 0.74048 {\displaystyle \eta _{\mathrm {cp} }={\sqrt {2}}\pi /6\approx 0.74048} for the stable solid branch. The static structure factor of the hard-spheres liquid can be calculated using the Percus–Yevick approximation . A simple, yet popular equation of state describing systems of pure hard spheres was developed in 1969 by N. F. Carnahan and K. E. Starling . 
[ 2 ] By expressing the compressibility of a hard-sphere system as a geometric series, the expression Z = p V n R T = 1 + η + η 2 − η 3 ( 1 − η ) 3 {\displaystyle Z={\frac {pV}{nRT}}={\frac {1+\eta +\eta ^{2}-\eta ^{3}}{{\left(1-\eta \right)}^{3}}}} is obtained, where η {\displaystyle \eta } is the packing fraction , given by η = N A π n σ 3 6 V {\displaystyle \eta ={\frac {N_{A}\pi n\sigma ^{3}}{6V}}} where N A {\displaystyle N_{A}} is Avogadro's number , n / V {\displaystyle n/V} is the molar density of the fluid, and σ {\displaystyle \sigma } is the diameter of the hard-spheres. From this Equation of State, one can obtain the residual Helmholtz energy , [ 3 ] A res n R T = 4 η − 3 η 2 ( 1 − η ) 2 , {\displaystyle {\frac {A_{\text{res}}}{nRT}}={\frac {4\eta -3\eta ^{2}}{{\left(1-\eta \right)}^{2}}},} which yields the residual chemical potential μ res R T = 8 η − 9 η 2 + 3 η 3 ( 1 − η ) 3 . {\displaystyle {\frac {\mu _{\text{res}}}{RT}}={\frac {8\eta -9\eta ^{2}+3\eta ^{3}}{{\left(1-\eta \right)}^{3}}}.} One can also obtain the value of the radial distribution function , g ( r ) {\displaystyle g(r)} , evaluated at the surface of a sphere, [ 3 ] g ( σ ) = 1 − 1 2 η ( 1 − η ) 3 . {\displaystyle g(\sigma )={\frac {1-{\frac {1}{2}}\eta }{{\left(1-\eta \right)}^{3}}}.} The latter is of significant importance to accurate descriptions of more advanced intermolecular potentials based on perturbation theory , such as SAFT , where a system of hard spheres is taken as a reference system, and the complete pair-potential is described by perturbations to the underlying hard-sphere system. Computation of the transport properties of hard-sphere gases at moderate densities using Revised Enskog Theory also relies on an accurate value for g ( σ ) {\displaystyle g(\sigma )} , and the Carnahan-Starling Equation of State has been used for this purpose to large success. [ 4 ]
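The Carnahan–Starling expressions quoted above are straightforward to evaluate. The following Python sketch implements them together with the truncated virial series built from the reduced coefficients listed earlier, so the two can be compared at moderate packing fractions; the function names and the sample packing fractions are illustrative choices, not part of the cited sources.

```python
def carnahan_starling_Z(eta: float) -> float:
    """Compressibility factor Z = pV/(nRT) of the pure hard-sphere fluid."""
    return (1 + eta + eta**2 - eta**3) / (1 - eta) ** 3

def residual_helmholtz(eta: float) -> float:
    """Residual Helmholtz energy A_res/(nRT) from the Carnahan-Starling EOS."""
    return (4 * eta - 3 * eta**2) / (1 - eta) ** 2

def residual_chemical_potential(eta: float) -> float:
    """Residual chemical potential mu_res/(RT)."""
    return (8 * eta - 9 * eta**2 + 3 * eta**3) / (1 - eta) ** 3

def contact_rdf(eta: float) -> float:
    """Radial distribution function at contact, g(sigma)."""
    return (1 - 0.5 * eta) / (1 - eta) ** 3

def virial_Z(eta: float) -> float:
    """Truncated virial series Z = 1 + 4*eta + 10*eta**2 + ..., using the
    reduced virial coefficients B_n listed above."""
    coeffs = [4.0, 10.0, 18.365, 28.24, 39.5, 56.5]
    return 1.0 + sum(c * eta ** (k + 1) for k, c in enumerate(coeffs))

if __name__ == "__main__":
    for eta in (0.1, 0.2, 0.3, 0.4, 0.494):  # up to the freezing fraction
        print(f"eta={eta:<5}  Z_CS={carnahan_starling_Z(eta):6.3f}  "
              f"Z_virial={virial_Z(eta):6.3f}  g(sigma)={contact_rdf(eta):6.3f}")
```

As a consistency check, the residual chemical potential above equals A_res/(nRT) + Z − 1, as required by thermodynamics.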
https://en.wikipedia.org/wiki/Hard_spheres
Hard water is water that has a high mineral content (in contrast with "soft water"). Hard water is formed when water percolates through deposits of limestone , chalk or gypsum , [ 1 ] which are largely made up of calcium and magnesium carbonates , bicarbonates and sulfates . Drinking hard water may have moderate health benefits. It can pose critical problems in industrial settings, where water hardness is monitored to avoid costly breakdowns in boilers , cooling towers , and other equipment that handles water. In domestic settings, hard water is often indicated by a lack of foam formation when soap is agitated in water, and by the formation of limescale in kettles and water heaters. [ 2 ] Wherever water hardness is a concern, water softening is commonly used to reduce hard water's adverse effects. Natural rainwater, snow and other forms of precipitation typically have low concentrations of divalent cations such as calcium and magnesium. They may have small concentrations of ions such as sodium , chloride and sulfate derived from wind action over the sea. Where precipitation falls in drainage basins formed of hard, impervious and calcium-poor rocks, only very low concentrations of divalent cations are found and the water is termed soft water . [ 3 ] Examples include Snowdonia in Wales and the Western Highlands in Scotland. Areas with complex geology can produce varying degrees of hardness of water over short distances. [ 4 ] [ 5 ] The permanent hardness of water is determined by the water's concentration of cations with charges greater than or equal to 2+. Usually, the cations have a charge of 2+, i.e., they are divalent . Common cations found in hard water include Ca 2+ and Mg 2+ , which frequently enter water supplies by leaching from minerals within aquifers . Common calcium -containing minerals are calcite and gypsum . A common magnesium mineral is dolomite (which also contains calcium). Rainwater and distilled water are soft , because they contain few of these ions . [ 3 ] The following equilibrium reaction describes the dissolving and formation of calcium carbonate and calcium bicarbonate (on the right): The reaction can go in either direction. Rain containing dissolved carbon dioxide can react with calcium carbonate and carry calcium ions away with it. The calcium carbonate may be re-deposited as calcite as the carbon dioxide is lost to the atmosphere, sometimes forming stalactites and stalagmites . Calcium and magnesium ions can sometimes be removed by water softeners. [ 6 ] Permanent hardness (mineral content) is generally difficult to remove by boiling . [ 7 ] If this occurs, it is usually caused by the presence of calcium sulfate / calcium chloride and/or magnesium sulfate / magnesium chloride in the water, which do not precipitate out as the temperature increases. Ions causing the permanent hardness of water can be removed using a water softener, or ion-exchange column. Temporary hardness is caused by the presence of dissolved bicarbonate minerals ( calcium bicarbonate and magnesium bicarbonate ). When dissolved, these types of minerals yield calcium and magnesium cations (Ca 2+ , Mg 2+ ) and carbonate and bicarbonate anions ( CO 2− 3 and HCO − 3 ). The presence of the metal cations makes the water hard. However, unlike the permanent hardness caused by sulfate and chloride compounds, this "temporary" hardness can be reduced either by boiling the water or by the addition of lime ( calcium hydroxide ) through the process of lime softening . 
[ 8 ] Boiling promotes the formation of carbonate from the bicarbonate and precipitates calcium carbonate out of solution, leaving water that is softer upon cooling. With hard water, soap solutions form a white precipitate ( soap scum ) instead of producing lather , because the 2+ ions destroy the surfactant properties of the soap by forming a solid precipitate (the soap scum). A major component of such scum is calcium stearate , which arises from sodium stearate , the main component of soap : Hardness can thus be defined as the soap-consuming capacity of a water sample, or the capacity of precipitation of soap as a characteristic property of water that prevents the lathering of soap. Synthetic detergents do not form such scums. Because soft water has few calcium ions, there is no inhibition of the lathering action of soaps and no soap scum is formed in normal washing. Similarly, soft water produces no calcium deposits in water heating systems. Hard water also forms deposits that clog plumbing. These deposits, called " scale ", are composed mainly of calcium carbonate (CaCO 3 ), magnesium hydroxide (Mg(OH) 2 ), and calcium sulfate (CaSO 4 ). [ 3 ] Calcium and magnesium carbonates tend to be deposited as off-white solids on the inside surfaces of pipes and heat exchangers . This precipitation (formation of an insoluble solid) is principally caused by thermal decomposition of bicarbonate ions but also happens in cases where the carbonate ion is at saturation concentration. [ 9 ] The resulting build-up of scale restricts the flow of water in pipes. In boilers, the deposits impair the flow of heat into water, reducing the heating efficiency and allowing the metal boiler components to overheat. In a pressurized system, this overheating can lead to the failure of the boiler. [ 10 ] The damage caused by calcium carbonate deposits varies according to the crystalline form, for example, calcite or aragonite . [ 11 ] The presence of ions in an electrolyte , in this case, hard water, can also lead to galvanic corrosion , in which one metal will preferentially corrode when in contact with another type of metal when both are in contact with an electrolyte. The softening of hard water by ion exchange does not increase its corrosivity per se . Similarly, where lead plumbing is in use, softened water does not substantially increase plumbo -solvency. [ 12 ] In swimming pools, hard water is manifested by a turbid , or cloudy (milky), appearance to the water. Calcium and magnesium hydroxides are both soluble in water. The solubility of the hydroxides of the alkaline-earth metals to which calcium and magnesium belong ( group 2 of the periodic table ) increases moving down the column. Aqueous solutions of these metal hydroxides absorb carbon dioxide from the air, forming insoluble carbonates, and giving rise to turbidity. This often results from the pH being excessively high (pH > 7.6). Hence, a common solution to the problem is, while maintaining the chlorine concentration at the proper level, to lower the pH by the addition of hydrochloric acid , the optimum value is in the range of 7.2 to 7.6. In some cases it is desirable to soften hard water. Most detergents contain ingredients that counteract the effects of hard water on the surfactants. For this reason, water softening is often unnecessary. Where softening is practised, it is often recommended to soften only the water sent to domestic hot water systems to prevent or delay inefficiencies and damage due to scale formation in water heaters. 
A common method for water softening involves the use of ion-exchange resins , which replace ions like Ca 2+ by twice the number of mono cations such as sodium or potassium ions. Washing soda ( sodium carbonate , Na 2 CO 3 ) is easily obtained and has long been used as a water softener for domestic laundry, in conjunction with the usual soap or detergent. Water that has been treated by a water softening may be termed softened water . In these cases, the water may also contain elevated levels of sodium or potassium and bicarbonate or chloride ions. The World Health Organization says that "there does not appear to be any convincing evidence that water hardness causes adverse health effects in humans". [ 2 ] In fact, the United States National Research Council has found that hard water serves as a dietary supplement for calcium and magnesium. [ 13 ] Some studies have shown a weak inverse relationship between water hardness and cardiovascular disease in men, up to a level of 170 mg calcium carbonate per litre of water. The World Health Organization has reviewed the evidence and concluded the data was inadequate to recommend a level of hardness. [ 2 ] Recommendations have been made for the minimum and maximum levels of calcium (40–80 ppm ) and magnesium (20–30 ppm) in drinking water, and a total hardness expressed as the sum of the calcium and magnesium concentrations of 2–4 mmol/L. [ 14 ] Other studies have shown weak correlations between cardiovascular health and water hardness. [ 15 ] [ 16 ] [ 17 ] The prevalence of atopic dermatitis (eczema) in children may be increased by hard drinking water. [ 18 ] [ 19 ] Living in areas with hard water may also play a part in the development of AD in early life. However, when AD is already established, using water softeners at home does not reduce the severity of the symptoms. [ 19 ] Hardness can be quantified by instrumental analysis . The total water hardness is the sum of the molar concentrations of Ca 2+ and Mg 2+ , in mol/L or mmol/L units. Although water hardness usually measures only the total concentrations of calcium and magnesium (the two most prevalent divalent metal ions), iron , aluminium , and manganese are also present at elevated levels in some locations. The presence of iron characteristically confers a brownish ( rust -like) colour to the calcification, instead of white (the colour of most of the other compounds). Water hardness is often not expressed as a molar concentration, but rather in various units, such as degrees of general hardness ( dGH ), German degrees (°dH), parts per million (ppm, mg/L, or American degrees), grains per gallon (gpg), English degrees (°e, e, or °Clark ), or French degrees (°fH, °f or °HF; lowercase f is used to prevent confusion with degrees Fahrenheit ). The table below shows conversion factors between the various units. The various alternative units represent an equivalent mass of calcium oxide (CaO) or calcium carbonate (CaCO 3 ) that, when dissolved in a unit volume of pure water, would result in the same total molar concentration of Mg 2+ and Ca 2+ . The different conversion factors arise from the fact that equivalent masses of calcium oxide and calcium carbonates differ and that different mass and volume units are used. The units are as follows: As it is the precise mixture of minerals dissolved in the water, together with water's pH and temperature, that determine the behaviour of the hardness, a single-number scale does not adequately describe hardness. 
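The conversion table referred to above is not reproduced here, but the commonly quoted factors can be collected in a short sketch. The values below are the usual textbook equivalents expressed as mg/L of CaCO3 and should be checked against an authoritative table before being relied upon; the dictionary keys and function name are illustrative.

```python
# Approximate equivalents of one unit of each hardness scale, expressed as
# mg/L (ppm) of CaCO3.  These are the commonly quoted factors and should be
# verified against an authoritative table before critical use.
MG_CACO3_PER_UNIT = {
    "ppm":    1.0,      # 1 mg CaCO3 per litre
    "dGH":    17.848,   # degree of general hardness / German degree (10 mg/L CaO)
    "gpg":    17.118,   # grain of CaCO3 per US gallon
    "clark":  14.254,   # English (Clark) degree, 1 grain per imperial gallon
    "french": 10.0,     # French degree
    "mmol/L": 100.09,   # millimole of Ca2+ plus Mg2+ per litre (molar mass of CaCO3)
}

def convert_hardness(value: float, from_unit: str, to_unit: str) -> float:
    """Convert a water-hardness reading between the scales listed above."""
    mg_per_litre = value * MG_CACO3_PER_UNIT[from_unit]
    return mg_per_litre / MG_CACO3_PER_UNIT[to_unit]

if __name__ == "__main__":
    print(convert_hardness(15, "dGH", "ppm"))     # about 268 ppm: hard water
    print(convert_hardness(120, "ppm", "french")) # 12 French degrees
```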
However, the United States Geological Survey uses the following classification for hard and soft water: [ 5 ] Seawater is considered to be very hard due to various dissolved salts. Typically seawater's hardness is in the area of 6,570 ppm (6.57 grams per litre). [ 21 ] In contrast, fresh water has a hardness in the range of 15 to 375 ppm, generally around 600 mg/L. [ 22 ] Several indices are used to describe the behaviour of calcium carbonate in water, oil, or gas mixtures. [ 23 ] [ better source needed ] The Langelier saturation index [ 24 ] (sometimes Langelier stability index) is a calculated number used to predict the calcium carbonate stability of water. [ 25 ] It indicates whether the water will precipitate, dissolve, or be in equilibrium with calcium carbonate. In 1936, Wilfred Langelier developed a method for predicting the pH at which water is saturated in calcium carbonate (called pH s ). [ 26 ] The LSI is expressed as the difference between the actual system pH and the saturation pH s : [ 27 ] If the actual pH of the water is below the calculated saturation pH s , the LSI is negative and the water has a very limited scaling potential. If the actual pH exceeds pHs, the LSI is positive, and being supersaturated with CaCO 3 , the water tends to form scale. At increasing positive index values, the scaling potential increases. In practice, water with an LSI between −0.5 and +0.5 will not display enhanced mineral dissolving or scale-forming properties. Water with an LSI below −0.5 tends to exhibit noticeably increased dissolving abilities while water with an LSI above +0.5 tends to exhibit noticeably increased scale-forming properties. The LSI is temperature-sensitive. The LSI becomes more positive as the water temperature increases. This has particular implications in situations where well water is used. The temperature of the water when it first exits the well is often significantly lower than the temperature inside the building served by the well or at the laboratory where the LSI measurement is made. This increase in temperature can cause scaling, especially in cases such as water heaters. Conversely, systems that reduce water temperature will have less scaling. The Ryznar stability index (RSI) [ 24 ] : 525 uses a database of scale thickness measurements in municipal water systems to predict the effect of water chemistry. [ 25 ] : 72 [ 28 ] It was developed from empirical observations of corrosion rates and film formation in steel mains. This index is defined as: [ 29 ] The Puckorius scaling index (PSI) uses slightly different parameters to quantify the relationship between the saturation state of the water and the amount of limescale deposited. Other indices include the Larson-Skold Index, [ 30 ] the Stiff-Davis Index, [ 31 ] and the Oddo-Tomson Index. [ 32 ] The hardness of local water supplies depends on the source of water. Water in streams flowing over volcanic (igneous) rocks will be soft, while water from boreholes drilled into porous rock is normally very hard. Analysis of water hardness in major Australian cities by the Australian Water Association shows a range from very soft (Melbourne) to hard (Adelaide). Total hardness levels of calcium carbonate in ppm are: Prairie provinces (mainly Saskatchewan and Manitoba ) contain high quantities of calcium and magnesium, often as dolomite , which are readily soluble in the groundwater that contains high concentrations of trapped carbon dioxide from the last glaciation . 
In these parts of Canada, the total hardness in ppm of calcium carbonate equivalent frequently exceeds 200 ppm, if groundwater is the only source of potable water. The west coast, by contrast, has unusually soft water, derived mainly from mountain lakes fed by glaciers and snowmelt. Some typical values are: Information from the Drinking Water Inspectorate shows that drinking water in England is generally considered to be 'very hard', with most areas of England, particularly east of a line between the Severn and Tees estuaries, exhibiting above 200 ppm for the calcium carbonate equivalent. Water in London, for example, is mostly obtained from the River Thames and River Lea , both of which derive a significant proportion of their dry weather flow from springs in limestone and chalk aquifers. Wales , Devon , Cornwall , and parts of northwest England are softer water areas and range from 0 to 200 ppm. [ 58 ] In the brewing industry in England and Wales, water is often deliberately hardened with gypsum in the process of Burtonisation . Generally, water is mostly hard in urban areas of England where soft water sources are unavailable. Several cities built water supply sources in the 18th century as the Industrial Revolution and urban population burgeoned. Manchester was a notable such city in North West England and its wealthy corporation built several reservoirs at Thirlmere and Haweswater in the Lake District to the north. There is no exposure to limestone or chalk in their headwaters and consequently the water in Manchester is rated as 'very soft'. [ 52 ] Similarly, tap water in Birmingham is also soft as it is sourced from the Elan Valley Reservoirs in Wales, even though groundwater in the area is hard. The EPA has published a standards handbook for the interpretation of water quality in Ireland in which definitions of water hardness are given. [ 59 ] Section 36 discusses hardness. Reference to original EU documentation is given, which sets out no limit for hardness. The handbook also gives no "Recommended or Mandatory Limit Values" for hardness. The handbook does indicate that above the midpoint of the ranges defined as "Moderately Hard", effects are seen increasingly: "The chief disadvantages of hard waters are that they neutralise the lathering power of soap [...] and, more important, that they can cause blockage of pipes and severely reduced boiler efficiency because of scale formation. These effects will increase as the hardness rises to and beyond 200 mg/L CaCO 3 ." A collection of data from the United States found that about half the water stations tested had hardness over 120 mg per litre of calcium carbonate equivalent, placing them in the categories "hard" or "very hard". [ 5 ] The other half were classified as soft or moderately hard. More than 85% of American homes have hard water. [ citation needed ] The softest waters occur in parts of the New England , South Atlantic–Gulf, Pacific Northwest , and Hawaii regions. Moderately hard waters are common in many of the rivers of the Tennessee , Great Lakes , and Alaska regions. Hard and very hard waters are found in some of the streams in most of the regions throughout the country. The hardest waters (greater than 1,000 ppm) are in streams in Texas , New Mexico , Kansas , Arizona , Utah , parts of Colorado , southern Nevada , and southern California . [ 60 ] [ 61 ]
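The Langelier and Ryznar indices described above can be illustrated with a short sketch. The formula for the saturation pH used here is one widely circulated empirical approximation based on total dissolved solids, temperature, calcium hardness and alkalinity; it is not necessarily the exact form used in the cited references, and the sample water analysis is hypothetical.

```python
import math

def langelier_pHs(tds_mg_l: float, temp_c: float,
                  calcium_mg_l_caco3: float, alkalinity_mg_l_caco3: float) -> float:
    """Saturation pH (pHs) from one commonly quoted empirical approximation of
    the Langelier method; calcium hardness and alkalinity are given as mg/L CaCO3."""
    A = (math.log10(tds_mg_l) - 1) / 10
    B = -13.12 * math.log10(temp_c + 273) + 34.55
    C = math.log10(calcium_mg_l_caco3) - 0.4
    D = math.log10(alkalinity_mg_l_caco3)
    return (9.3 + A + B) - (C + D)

def langelier_index(ph: float, pHs: float) -> float:
    """LSI = pH - pHs: positive values suggest scale formation, negative values dissolution."""
    return ph - pHs

def ryznar_index(ph: float, pHs: float) -> float:
    """Ryznar stability index, commonly given as 2*pHs - pH."""
    return 2 * pHs - ph

if __name__ == "__main__":
    pHs = langelier_pHs(tds_mg_l=300, temp_c=25,
                        calcium_mg_l_caco3=150, alkalinity_mg_l_caco3=120)
    print(f"pHs = {pHs:.2f}, LSI = {langelier_index(7.8, pHs):+.2f}, "
          f"RSI = {ryznar_index(7.8, pHs):.2f}")
```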
https://en.wikipedia.org/wiki/Hard_water
Hardenability is the depth to which a steel is hardened when it is put through a heat treatment process. It should not be confused with hardness , which is a measure of a sample's resistance to indentation or scratching. [ 1 ] It is an important property for welding , since it is inversely proportional to weldability , that is, the ease of welding a material. When a hot steel work-piece is quenched , the area in contact with the water immediately cools and its temperature equilibrates with the quenching medium. The inner depths of the material, however, do not cool so rapidly, and in large work-pieces the cooling rate may be slow enough to allow the austenite to transform fully into a structure other than martensite or bainite . This results in a work-piece that does not have the same crystal structure throughout its entire depth, with a softer core and a harder "shell". [ 2 ] The softer core is some combination of ferrite and cementite , such as pearlite . The hardenability of ferrous alloys, i.e. steels, is a function of the carbon content, the other alloying elements, and the grain size of the austenite. [ 1 ] The relative importance of the various alloying elements is calculated by finding the equivalent carbon content of the material. The fluid used for quenching the material influences the cooling rate due to varying thermal conductivities and specific heats . Substances like brine and water cool the steel much more quickly than oil or air. If the fluid is agitated, cooling occurs even more quickly. The geometry of the part also affects the cooling rate: of two samples of equal volume, the one with higher surface area will cool faster. [ 3 ] The hardenability of a ferrous alloy is measured by a Jominy test: a round metal bar of standard size is transformed to 100% austenite through heat treatment, and is then quenched on one end with room-temperature water. The cooling rate will be highest at the end being quenched, and will decrease as distance from the end increases. After cooling, a flat surface is ground on the test piece and the hardenability is then found by measuring the hardness along the bar. The farther away from the quenched end that the hardness extends, the higher the hardenability. This information is plotted on a hardenability graph. [ 4 ] [ 5 ] [ 6 ] The Jominy end-quench test was invented by Walter E. Jominy (1893-1976) and A.L. Boegehold, [ 7 ] metallurgists in the Research Laboratories Division of General Motors Corp., in 1937. For his pioneering work in heat treating, Jominy was recognized by the American Society for Metals (ASM) with its Albert Sauveur Achievement Award in 1944. Jominy served as president of ASM in 1951.
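As noted above, the combined effect of alloying elements on hardenability is often summarized as an equivalent carbon content. The sketch below uses one widely used carbon-equivalent formula (the IIW-style expression); several variants exist, and the composition in the example is hypothetical.

```python
def carbon_equivalent_iiw(c, mn=0.0, cr=0.0, mo=0.0, v=0.0, ni=0.0, cu=0.0):
    """Carbon equivalent (CE) using the common IIW-style formula; all inputs
    are alloying contents in weight percent.  Other CE formulas exist and may
    be preferred for particular steel families."""
    return c + mn / 6 + (cr + mo + v) / 5 + (ni + cu) / 15

if __name__ == "__main__":
    # Illustrative (hypothetical) composition for a low-alloy steel.
    ce = carbon_equivalent_iiw(c=0.30, mn=1.20, cr=0.20, mo=0.05, ni=0.10)
    print(f"CE = {ce:.2f} wt%")  # higher CE: higher hardenability, lower weldability
```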
https://en.wikipedia.org/wiki/Hardenability
A hardmask is a material used in semiconductor processing as an etch mask instead of a polymer or other organic "soft" resist material. Hardmasks are necessary when the material being etched is itself an organic polymer . Anything used to etch this material will also etch the photoresist being used to define its patterning, since the photoresist is also an organic polymer. This arises, for instance, in the patterning of low-κ dielectric insulation layers used in VLSI fabrication. [ 1 ] Polymers tend to be etched easily by oxygen , fluorine , chlorine and other reactive gases used in plasma etching . Use of a hardmask involves an additional deposition process, and hence additional cost. First, the hardmask material is deposited and etched into the required pattern using a standard photoresist process. Following that, the underlying material can be etched through the hardmask. Finally, the hardmask is removed with a further etching process. [ 2 ] Hardmask materials can be metal or dielectric. Silicon-based masks such as silicon dioxide or silicon carbide are usually used for etching low-κ dielectrics. [ 3 ] However, SiOCH ( carbon doped hydrogenated silicon oxide ), a material used to insulate copper interconnects, [ 4 ] requires an etchant that attacks silicon compounds. For this material, metal or amorphous carbon hardmasks are used. The most common metal for hardmasks is titanium nitride , but tantalum nitride has also been used. [ 5 ]
https://en.wikipedia.org/wiki/Hardmask
In materials science , hardness (antonym: softness ) is a measure of the resistance to localized plastic deformation , such as an indentation (over an area) or a scratch (linear), induced mechanically either by pressing or abrasion . In general, different materials differ in their hardness; for example hard metals such as titanium and beryllium are harder than soft metals such as sodium and metallic tin , or wood and common plastics . Macroscopic hardness is generally characterized by strong intermolecular bonds , but the behavior of solid materials under force is complex; therefore, hardness can be measured in different ways, such as scratch hardness , indentation hardness , and rebound hardness. Hardness is dependent on ductility , elastic stiffness , plasticity , strain , strength , toughness , viscoelasticity , and viscosity . Common examples of hard matter are ceramics , concrete , certain metals , and superhard materials , which can be contrasted with soft matter . There are three main types of hardness measurements: scratch, indentation, and rebound. Within each of these classes of measurement there are individual measurement scales. For practical reasons conversion tables are used to convert between one scale and another. Scratch hardness is the measure of how resistant a sample is to fracture or permanent plastic deformation due to friction from a sharp object. [ 1 ] The principle is that an object made of a harder material will scratch an object made of a softer material. When testing coatings, scratch hardness refers to the force necessary to cut through the film to the substrate. The most common test is Mohs scale , which is used in mineralogy . One tool to make this measurement is the sclerometer . Another tool used to make these tests is the pocket hardness tester . This tool consists of a scale arm with graduated markings attached to a four-wheeled carriage. A scratch tool with a sharp rim is mounted at a predetermined angle to the testing surface. In order to use it a weight of known mass is added to the scale arm at one of the graduated markings, the tool is then drawn across the test surface. The use of the weight and markings allows a known pressure to be applied without the need for complicated machinery. [ 2 ] Indentation hardness measures the resistance of a sample to material deformation due to a constant compression load from a sharp object. Tests for indentation hardness are primarily used in engineering and metallurgy . The tests work on the basic premise of measuring the critical dimensions of an indentation left by a specifically dimensioned and loaded indenter. Common indentation hardness scales are Rockwell , Vickers , Shore , and Brinell , amongst others. Rebound hardness , also known as dynamic hardness , measures the height of the "bounce" of a diamond-tipped hammer dropped from a fixed height onto a material. This type of hardness is related to elasticity . The device used to take this measurement is known as a scleroscope . [ 3 ] Two scales that measures rebound hardness are the Leeb rebound hardness test and Bennett hardness scale. Ultrasonic Contact Impedance (UCI) method determines hardness by measuring the frequency of an oscillating rod. The rod consists of a metal shaft with vibrating element and a pyramid-shaped diamond mounted on one end. [ 4 ] There are five hardening processes: Hall-Petch strengthening , work hardening , solid solution strengthening , precipitation hardening , and martensitic transformation . 
In solid mechanics , solids generally have three responses to force , depending on the amount of force and the type of material: they may deform elastically (reversibly), deform plastically (permanently), or fracture. Strength is a measure of the extent of a material's elastic range, or elastic and plastic ranges together. This is quantified as compressive strength , shear strength , or tensile strength , depending on the direction of the forces involved. Ultimate strength is an engineering measure of the maximum load a part of a specific material and geometry can withstand. Brittleness , in technical usage, is the tendency of a material to fracture with very little or no detectable plastic deformation beforehand. Thus in technical terms, a material can be both brittle and strong. In everyday usage "brittleness" usually refers to the tendency to fracture under a small amount of force, which exhibits both brittleness and a lack of strength (in the technical sense). For perfectly brittle materials, yield strength and ultimate strength are the same, because they do not experience detectable plastic deformation. The opposite of brittleness is ductility . The toughness of a material is the maximum amount of energy it can absorb before fracturing, which is different from the amount of force that can be applied. Toughness tends to be small for brittle materials: because it is elastic and plastic deformation that allows a material to absorb large amounts of energy, a material that fractures after little deformation absorbs little energy. Hardness increases with decreasing particle size . This is known as the Hall-Petch relationship. However, below a critical grain size, hardness decreases with decreasing grain size. This is known as the inverse Hall-Petch effect. The hardness of a material to deformation is dependent on its microdurability or small-scale shear modulus in any direction, not on any rigidity or stiffness properties such as its bulk modulus or Young's modulus . Stiffness is often confused with hardness. [ 5 ] [ 6 ] Some materials are stiffer than diamond (e.g. osmium) but are not harder, and are prone to spalling and flaking in squamose or acicular habits. The key to understanding the mechanism behind hardness is understanding the metallic microstructure , or the structure and arrangement of the atoms at the atomic level. In fact, most important metallic properties critical to the manufacturing of today's goods are determined by the microstructure of a material. [ 7 ] At the atomic level, the atoms in a metal are arranged in an orderly three-dimensional array called a crystal lattice . In reality, however, a given specimen of a metal likely never contains a consistent single crystal lattice. A given sample of metal will contain many grains, with each grain having a fairly consistent array pattern. At an even smaller scale, each grain contains irregularities. There are two types of irregularities at the grain level of the microstructure that are responsible for the hardness of the material. These irregularities are point defects and line defects. A point defect is an irregularity located at a single lattice site inside of the overall three-dimensional lattice of the grain. There are three main types of point defect. If there is an atom missing from the array, a vacancy defect is formed. If there is a different type of atom at the lattice site that should normally be occupied by a metal atom, a substitutional defect is formed. If an atom exists in a site where there should normally be none, an interstitial defect is formed. This is possible because space exists between atoms in a crystal lattice.
While point defects are irregularities at a single site in the crystal lattice, line defects are irregularities on a plane of atoms. Dislocations are a type of line defect involving the misalignment of these planes. In the case of an edge dislocation, a half plane of atoms is wedged between two planes of atoms. In the case of a screw dislocation two planes of atoms are offset with a helical array running between them. [ 8 ] In glasses, hardness seems to depend linearly on the number of topological constraints acting between the atoms of the network. [ 9 ] Hence, the rigidity theory has allowed predicting hardness values with respect to composition. Dislocations provide a mechanism for planes of atoms to slip and thus a method for plastic or permanent deformation. [ 7 ] Planes of atoms can flip from one side of the dislocation to the other effectively allowing the dislocation to traverse through the material and the material to deform permanently. The movement allowed by these dislocations causes a decrease in the material's hardness. The way to inhibit the movement of planes of atoms, and thus make them harder, involves the interaction of dislocations with each other and interstitial atoms. When a dislocation intersects with a second dislocation, it can no longer traverse through the crystal lattice. The intersection of dislocations creates an anchor point and does not allow the planes of atoms to continue to slip over one another [ 10 ] A dislocation can also be anchored by the interaction with interstitial atoms. If a dislocation comes in contact with two or more interstitial atoms, the slip of the planes will again be disrupted. The interstitial atoms create anchor points, or pinning points, in the same manner as intersecting dislocations. By varying the presence of interstitial atoms and the density of dislocations, a particular metal's hardness can be controlled. Although seemingly counter-intuitive, as the density of dislocations increases, there are more intersections created and consequently more anchor points. Similarly, as more interstitial atoms are added, more pinning points that impede the movements of dislocations are formed. As a result, the more anchor points added, the harder the material will become. Careful note should be taken of the relationship between a hardness number and the stress-strain curve exhibited by the material. The latter, which is conventionally obtained via tensile testing , captures the full plasticity response of the material (which is in most cases a metal). It is in fact a dependence of the (true) von Mises plastic strain on the (true) von Mises stress , but this is readily obtained from a nominal stress – nominal strain curve (in the pre- necking regime), which is the immediate outcome of a tensile test. This relationship can be used to describe how the material will respond to almost any loading situation, often by using the Finite Element Method (FEM). This applies to the outcome of an indentation test (with a given size and shape of indenter, and a given applied load). However, while a hardness number thus depends on the stress-strain relationship, inferring the latter from the former is far from simple and is not attempted in any rigorous way during conventional hardness testing. (In fact, the Indentation Plastometry technique, which involves iterative FEM modelling of an indentation test, does allow a stress-strain curve to be obtained via indentation, but this is outside the scope of conventional hardness testing.) 
A hardness number is just a semi-quantitative indicator of the resistance to plastic deformation. Although hardness is defined in a similar way for most types of test – usually as the load divided by the contact area – the numbers obtained for a particular material are different for different types of test, and even for the same test with different applied loads. Attempts are sometimes made [ 11 ] [ 12 ] [ 13 ] [ 14 ] [ 15 ] to identify simple analytical expressions that allow features of the stress-strain curve, particularly the yield stress and Ultimate Tensile Stress (UTS), to be obtained from a particular type of hardness number. However, these are all based on empirical correlations, often specific to particular types of alloy: even with such a limitation, the values obtained are often quite unreliable. The underlying problem is that metals with a range of combinations of yield stress and work hardening characteristics can exhibit the same hardness number. The use of hardness numbers for any quantitative purpose should, at best, be approached with considerable caution.
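As an example of the "load divided by the contact area" definition mentioned above, the Vickers scale divides the applied load by the sloping contact area of its 136° pyramidal indenter. The following sketch is illustrative; the sample load and indentation size are hypothetical.

```python
import math

def vickers_hardness(load_kgf: float, mean_diagonal_mm: float) -> float:
    """Vickers hardness number (HV): applied load divided by the sloping
    contact area of the square-based 136-degree diamond indenter."""
    # contact area = d**2 / (2 * sin(136 deg / 2)), so HV ~ 1.8544 * F / d**2
    return 2 * load_kgf * math.sin(math.radians(136 / 2)) / mean_diagonal_mm ** 2

if __name__ == "__main__":
    # Hypothetical measurement: a 30 kgf load leaving a 0.42 mm mean diagonal.
    print(f"HV ~ {vickers_hardness(30, 0.42):.0f}")  # about 315 HV30
```

Because the number depends on the indenter geometry and load, a value quoted on one scale (here HV) cannot be compared directly with a value from another scale without conversion tables, which is one reason the caution urged above applies.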
https://en.wikipedia.org/wiki/Hardness
Hardscape refers to hard landscape materials in the built environment, that is, the structures that are incorporated into a landscape. [ 1 ] This can include paved areas, driveways, retaining walls , sleeper walls , stairs , walkways , and any other landscaping made up of hard-wearing materials such as wood , stone , and concrete , as opposed to softscape , the horticultural elements of a landscape . Hard landscaping involves projects that cover the entirety of the yard and that are necessary before soft landscaping features come into play. Hard landscaping alters the foundation of the yard, the "bricks and mortar" so to speak; only when this is completed can the landscaper begin to focus on the softscape features of the yard, such as lawn, floral plantings, trees and shrubs. One key aspect of hard landscaping concerns the absorption of water, something of particular importance in climates with heavy rain or snowfall. Hard landscaping ensures that water after heavy rain or snowfall does not become a problem. The right water absorption and irrigation system installed through hard landscaping, coupled with hard materials that safely move water away from the property, can ensure that soil movement is never a problem and that the yard stays a drier, enjoyable living space, rather than a wet and muddy bog. There are soft landscaping options that can help to achieve this, but the bulk of this is achieved through hard landscaping. From an urban planning perspective, hardscapes can include very large features, such as paved roads, driveways or fountains, and even small pools or ponds that do not exceed a certain safe height. Most water features are hardscapes because they require a barrier to retain the water, instead of letting it drain into the surrounding soil. Hardscaping allows the erection of man-made landscaping features that would otherwise be impossible due to soil erosion , including some that compensate for large amounts of human traffic that would cause wear on bare earth or grass. For example, sheer vertical features are possible. Without nearby bare soil, or natural drainage channels, swales or culverts, hardscape with an impervious surface requires artificial methods of drainage or surface runoff to carry off the water that would normally be absorbed into the ground as groundwater , and to prevent premature wear to the hardscape itself. Lack of capacity, or poorly planned or executed drainage or grading of the surface, can cause problems after severe storms or extended periods of heavy rainfall, such as flooding, washout, mud flows, sink holes, accelerated erosion, wet rot to wood elements, drowning of plants, trees and shrubs, and even foundation problems in an adjacent home, such as cracking of the foundation, basement flooding due to water infiltration, and pest infiltration, such as ants and other insects entering through damaged areas. In Queensland, Australia, [ 2 ] hardscape landscaping is a licensed qualification called Structural Landscaping, which is divided into two classes of license: Trade Contractor Structural Landscaper, and Builder restricted to Structural Landscaping, referred to as the "jack of all trades" due to its large scope of works. These Structural Landscaping licenses include the erection and fabrication of decking, fences, carports, pergolas, paving and the construction of retaining walls. In New South Wales, [ 3 ] a contractor must hold a relevant trade qualification such as a Certificate III in Landscape Construction or a Certificate III in Horticulture.
Once the tradesperson has obtained their formal qualification, they can apply for a NSW contractor's license, which enables them to carry out works over $5,000 including GST. A NSW contractor's license covers works such as retaining walls, pergolas, fencing, driveways and decks.
https://en.wikipedia.org/wiki/Hardscape
A hardstand (also hard standing and hardstanding in British English) is a paved or hard-surfaced area on which vehicles, such as cars or aircraft, may be parked. [ 1 ] [ 2 ] The term may also be used informally to refer to an area of compacted hard surface such as macadam . Hardstands are found at airports , military facilities , freight terminals, and other facilities where heavy vehicles need to be parked for significant periods of time. They also exist, paved or unpaved, at places where road vehicles are parked. [ citation needed ] At airports, hardstands enable airliners to board or offload passengers using stair trucks or mobile ramps, and (on smaller aircraft) built-in airstairs, without needing dedicated jet bridges . [ citation needed ] The purpose of a hardstand is to provide a strong surface for stationary vehicles, including where the vehicles may otherwise sink into the ground if left for extended periods of time. A hardstand is configured with a slope for drainage, which with unpaved surfaces serves to slow deterioration. [ 3 ] Hardstands are paved with materials including concrete heavy-duty pavers , which give maintenance flexibility over other products as well as strength for the life of the project; or asphalt ; or macadam . To support the weight of heavy vehicles such as large airplanes , tanks , or heavy trucks , the paving is usually thicker and more durable than in automobile parking lots . [ citation needed ]
https://en.wikipedia.org/wiki/Hardstand
Hardware-dependent software ( HDS or HdS ) is the part of an operating system that varies across microprocessor boards and consists notably of device drivers and of boot code that performs hardware initialization. HDS does not include code that is specific only to a processor family and can therefore run unchanged on its various members. HDS is alternatively called the BSP , for Board Support Package , especially in the world of commercial operating systems, where the processor-family code is distributed in binary form only. Software that runs on operating systems is often hardware dependent at first, but emulators can reduce dependencies on specific hardware. [ 1 ] [ 2 ]
https://en.wikipedia.org/wiki/Hardware-dependent_software
In engineering, hardware architecture refers to the identification of a system's physical components and their interrelationships. This description, often called a hardware design model , allows hardware designers to understand how their components fit into a system architecture and provides to software component designers important information needed for software development and integration. Clear definition of a hardware architecture allows the various traditional engineering disciplines (e.g., electrical and mechanical engineering) to work more effectively together to develop and manufacture new machines, devices and components. [ 1 ] Hardware is also an expression used within the computer engineering industry to explicitly distinguish the ( electronic computer ) hardware from the software that runs on it. But hardware, within the automation and software engineering disciplines, need not simply be a computer of some sort. A modern automobile runs vastly more software than the Apollo spacecraft. Also, modern aircraft cannot function without running tens of millions of computer instructions embedded and distributed throughout the aircraft and resident in both standard computer hardware and in specialized hardware components such as IC wired logic gates, analog and hybrid devices, and other digital components. The need to effectively model how separate physical components combine to form complex systems is important over a wide range of applications, including computers, personal digital assistants (PDAs), cell phones, surgical instrumentation, satellites, and submarines. Hardware architecture is the representation of an engineered (or to be engineered ) electronic or electromechanical hardware system, and the process and discipline for effectively implementing the design (s) for such a system. It is generally part of a larger integrated system encompassing information , software , and device prototyping . [ 2 ] It is a representation because it is used to convey information about the related elements comprising a hardware system, the relationships among those elements, and the rules governing those relationships. It is a process because a sequence of steps is prescribed to produce or change the architecture, and/or a design from that architecture, of a hardware system within a set of constraints. It is a discipline because a body of knowledge is used to inform practitioners as to the most effective way to design the system within a set of constraints. A hardware architecture is primarily concerned with the internal electrical (and, more rarely, the mechanical ) interfaces among the system's components or subsystems , and the interface between the system and its external environment, especially the devices operated by or the electronic displays viewed by a user . (This latter, special interface, is known as the computer human interface , AKA human computer interface, or HCI ; formerly called the man-machine interface.) [ 3 ] Integrated circuit (IC) designers are driving current technologies into innovative approaches for new products. Hence, multiple layers of active devices are being proposed as single chip, opening up opportunities for disruptive microelectronic, optoelectronic, and new microelectromechanical hardware implementation. [ 4 ] [ 5 ] Prior to the advent of digital computers, the electronics and other engineering disciplines used the terms system and hardware as they are still commonly used today. 
However, with the arrival of digital computers on the scene and the development of software engineering as a separate discipline, it was often necessary to distinguish among engineered hardware artifacts, software artifacts, and the combined artifacts. A programmable hardware artifact, or machine, that lacks its computer program is impotent; likewise, a software artifact, or program, is equally impotent unless it can be used to alter the sequential states of a suitable (hardware) machine. However, a hardware machine and its programming can be designed to perform an almost illimitable number of abstract and physical tasks. Within the computer and software engineering disciplines (and, often, other engineering disciplines, such as communications), then, the terms hardware, software, and system came to distinguish between the hardware that runs a computer program , the software, and the hardware device complete with its program. Hardware can be controlled from software with the help of an intermediary device called a hardware controller, which can be used to perform various automated tasks on the hardware; a hardware controller generally consists of GPIO (general-purpose input and output) pins whose behaviour is controlled by a piece of code. [ 6 ] The hardware engineer or architect deals (more or less) exclusively with the hardware device; the software engineer or architect deals (more or less) exclusively with the program; and the systems engineer or systems architect is responsible for seeing that the programming is capable of properly running within the hardware device, and that the system composed of the two entities is capable of properly interacting with its external environment, especially the user, and performing its intended function. A hardware architecture, then, is an abstract representation of an electronic or an electromechanical device capable of running a fixed or changeable program. [ 7 ] [ 8 ] A hardware architecture generally includes some form of analog, digital, or hybrid electronic computer , along with electronic and mechanical sensors and actuators. Hardware design may be viewed as a 'partitioning scheme,' or algorithm , which considers all of the system's present and foreseeable requirements and arranges the necessary hardware components into a workable set of cleanly bounded subsystems with no more parts than are required. That is, it is a partitioning scheme that is exclusive, inclusive, and exhaustive . A major purpose of the partitioning is to arrange the elements in the hardware subsystems so that there is a minimum of electrical connections and electronic communications needed among them. In both software and hardware, a good subsystem tends to be seen as a meaningful " object ." Moreover, a clear allocation of user requirements to the architecture (hardware and software) provides an effective basis for validation tests of the user's requirements in the as-built system.
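As a rough illustration of the GPIO-style software control described above, the following sketch models a memory-mapped GPIO block in Python; the class name, register offsets and bit layout are invented for the example and do not correspond to any particular board or controller.

```python
# Illustrative sketch only: a simulated memory-mapped GPIO block driven from code.
# Register offsets and bit layout are assumptions made for this example.

class GpioController:
    DIRECTION_REG = 0x00  # one bit per pin: 1 = output, 0 = input (assumed layout)
    OUTPUT_REG = 0x04     # one bit per pin: output level

    def __init__(self) -> None:
        # In real firmware these would be reads and writes to hardware addresses;
        # here they are plain integers so the sketch runs anywhere.
        self.regs = {self.DIRECTION_REG: 0, self.OUTPUT_REG: 0}

    def configure_output(self, pin: int) -> None:
        self.regs[self.DIRECTION_REG] |= (1 << pin)

    def write(self, pin: int, level: bool) -> None:
        if level:
            self.regs[self.OUTPUT_REG] |= (1 << pin)
        else:
            self.regs[self.OUTPUT_REG] &= ~(1 << pin)


gpio = GpioController()
gpio.configure_output(3)  # make pin 3 an output
gpio.write(3, True)       # drive pin 3 high
print(bin(gpio.regs[GpioController.OUTPUT_REG]))  # 0b1000
```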
https://en.wikipedia.org/wiki/Hardware_architecture
Hardware interface design ( HID ) is a cross-disciplinary design field that shapes the physical connection between people and technology in order to create new hardware interfaces that transform purely digital processes into analog methods of interaction. It employs a combination of filmmaking tools, software prototyping, and electronics breadboarding. Through this parallel visualization and development, hardware interface designers are able to shape a cohesive vision alongside business and engineering that more deeply embeds design throughout every stage of the product. The development of hardware interfaces as a field continues to mature as more things connect to the internet. Hardware interface designers draw upon industrial design , interaction design and electrical engineering . Interface elements include touchscreens , knobs, buttons, sliders and switches as well as input sensors such as microphones, cameras, and accelerometers. In the last decade a trend had evolved in the area of human-machine-communication, taking the user experience from haptic, tactile and acoustic interfaces to a more digitally graphical approach. Important tasks that had been assigned to the industrial designers so far, had instead been moved into fields like UI and UX design and usability engineering. The creation of good user interaction was more a question of software than hardware. Things like having to push two buttons on the tape recorder to have them pop back out again and the cradle of some older telephones remain mechanical haptic relicts that have long found their digital nemesis and are waiting to disappear. However, this excessive use of GUIs in today’s world has led to a worsening impairment of the human cognitive capabilities. [ citation needed ] Visual interfaces are at the maximum of their upgradability. Even though the resolution of new screens is constantly rising, you can see a change of direction away from the descriptive intuitive design to natural interface strategies, based on learnable habits (Google’s Material Design , Apple’s iOS flat design, Microsoft’s Metro Design Language ). Several of the more important commands are not shown directly but can be accessed through dragging, holding and swiping across the screen; gestures which have to be learned once but feel very natural afterwards and are easy to remember. In the area of controlling these systems, there is a need to move away from GUIs and instead find other means of interaction which use the full capabilities of all our senses. Hardware interface design solves this by taking physical forms and objects and connecting them with digital information to have the user control virtual data flow through grasping, moving and manipulating the used physical forms. If you see the classic industrial hardware interface design as an “analog” method, it finds its digital counterpart in the HID approach. Instead of translating analog methods of control into a virtual form via a GUI, one can see the TUI as an approach to do the exact opposite: transmitting purely digital processes into analog methods of interaction. [ 1 ] [ unreliable source ] Example hardware interfaces include a computer mouse , TV remote control, kitchen timer, control panel for a nuclear power plant [ 2 ] and an aircraft cockpit. [ 3 ]
https://en.wikipedia.org/wiki/Hardware_interface_design
In a computer or data transmission system, a reset clears any pending errors or events and brings a system to normal condition or an initial state, usually in a controlled manner. It is usually done in response to an error condition when it is impossible or undesirable for a processing activity to proceed and all error recovery mechanisms fail. A computer storage program would normally perform a "reset" if a command times out and error recovery schemes like retry or abort also fail. [ 1 ] A software reset (or soft reset) is initiated by the software, for example when the Control-Alt-Delete key combination is pressed or a restart is executed in Microsoft Windows . Most computers have a reset line that brings the device into the startup state and is active for a short time after powering on. For example, in the x86 architecture, asserting the RESET line halts the CPU; this is done after the system is switched on and before the power supply has asserted "power good" to indicate that it is ready to supply stable voltages at sufficient power levels. [ 2 ] Reset places less stress on the hardware than power cycling , as the power is not removed. Many computers, especially older models, have user-accessible "reset" buttons that assert the reset line to force a system reboot in a way that cannot be trapped (i.e. prevented) by the operating system; on some mobile devices the same is achieved by holding a combination of buttons. [ 3 ] [ 4 ] Devices without a dedicated reset button may instead require the user to hold the power button to cut power, after which the user can turn the computer back on. [ 5 ] Out-of-band management also frequently provides the possibility to reset a remote system in this way. Many memory-capable digital circuits ( flip-flops , registers, counters and so on) accept a reset signal that sets them to a pre-determined state. This signal is often applied after powering on but may also be applied under other circumstances. After a hard reset, the register state of much of the hardware has been cleared. The ability of an electronic device to reset itself in case of error or abnormal power loss is an important aspect of embedded system design and programming . This ability can be observed in everyday electronics such as a television , audio equipment or the electronics of a car , which are able to function as intended again even after having lost power suddenly. A sudden and strange error in a device can sometimes be fixed by removing and restoring power, making the device reset. Some devices, such as portable media players , very often have a dedicated reset button because they are prone to freezing or locking up. The lack of a proper reset ability could otherwise render the device useless after a power loss or malfunction. User-initiated hard resets can be used to reset the device if the software hangs, crashes, or is otherwise unresponsive; however, data may become corrupted if this occurs. [ 6 ] Generally, a hard reset is initiated by pressing a dedicated reset button. On some systems (e.g. the PlayStation 2 video game console), pressing and releasing the power button initiates a hard reset, and holding the button turns the system off. The 8086 microprocessor provides a RESET pin that is used to perform a hardware reset. When a HIGH is applied to the pin, the CPU immediately stops and sets its major registers to predefined startup values (in particular, CS = 0xFFFF and IP = 0x0000). The CPU uses the values of the CS and IP registers to find the location of the next instruction to execute.
The location of the next instruction is calculated as: location of next instruction = (CS << 4) + IP. This implies that after a hardware reset, the CPU starts execution at the physical address 0xFFFF0. In IBM PC compatible computers , this address maps to the BIOS ROM . The memory word at 0xFFFF0 usually contains a JMP instruction that redirects the CPU to the initialization code of the BIOS; this JMP is the very first instruction executed after the reset. [ 7 ] Later x86 processors reset the CS and IP registers similarly; see Reset vector . Apple Mac computers allow various levels of resetting, [ 8 ] including (CTL, CMD, EJECT), analogous to the three-finger salute (CTL, ALT, DEL) on Windows computers.
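The address calculation above is simple enough to check directly. A minimal Python sketch, assuming the conventional 8086 reset values CS = 0xFFFF and IP = 0x0000 implied by the 0xFFFF0 start address:

```python
# Real-mode physical address on the 8086: (CS << 4) + IP, truncated to the
# 20-bit address bus.  With the reset values CS = 0xFFFF and IP = 0x0000 this
# reproduces the 0xFFFF0 start address mentioned above.

def physical_address(cs: int, ip: int) -> int:
    return ((cs << 4) + ip) & 0xFFFFF

print(hex(physical_address(0xFFFF, 0x0000)))  # 0xffff0
```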
https://en.wikipedia.org/wiki/Hardware_reset
In digital computing, hardware security bugs are hardware bugs or flaws that create vulnerabilities affecting computer central processing units (CPUs) or other devices that incorporate programmable processors or logic and have direct memory access ; such flaws allow data to be read by a rogue process when that reading is not authorized. These vulnerabilities are considered "catastrophic" by security analysts. [ 1 ] [ 2 ] [ 3 ] Starting in 2017, a series of security vulnerabilities were found in the implementations of speculative execution on common processor architectures which effectively enabled an elevation of privileges ; the most widely known of these are the Meltdown and Spectre families of vulnerabilities. In 2019 researchers discovered VISA, an undocumented manufacturer debugging mode on Intel Platform Controller Hubs, the chipsets included on most Intel-based motherboards, which have direct memory access; the mode was accessible from a normal motherboard, possibly leading to a security vulnerability. [ 4 ]
https://en.wikipedia.org/wiki/Hardware_security_bug
Hardware watermarking , also known as IP core watermarking , is the process of embedding covert marks as design attributes inside a hardware or IP core design itself. Hardware watermarking can refer to watermarking of either DSP cores (widely used in consumer electronics devices) or combinational/sequential circuits; both forms are widely used. In DSP core watermarking a secret mark is embedded within the logic elements of the DSP core itself, usually implanted in the form of a robust signature either in the RTL design or during High Level Synthesis (HLS) design. The watermarking process of a DSP core leverages the High Level Synthesis framework and implants a secret mark in one (or more) of the high-level synthesis phases such as scheduling, allocation and binding. DSP core watermarking is performed to protect a DSP core from hardware threats such as IP piracy, forgery and false claims of ownership. [ 1 ] [ 2 ] [ 3 ] [ 4 ] Some examples of DSP cores are the FIR filter, IIR filter, FFT, DFT, JPEG and HWT. A few of the most important properties of a DSP core watermarking process are: (a) low embedding cost, (b) a secret mark, (c) low creation time, (d) strong tamper tolerance and (e) fault tolerance. [ 5 ] [ 6 ] Hardware or IP core watermarking in the context of DSP/multimedia cores is significantly different from watermarking of images or other digital content. IP cores are usually complex in size and nature and thus require highly sophisticated mechanisms to implant signatures within their design without disturbing the functionality; any small change in the functionality of the IP core renders the hardware watermarking process futile. Hardware watermarking [ 7 ] [ 8 ] [ 9 ] can be performed in two ways: (a) single-phase watermarking and (b) multi-phase watermarking. As the name suggests, in the single-phase watermarking process the secret marks, in the form of additional constraints, are inserted in a particular phase of one design abstraction level. Among all the design abstraction levels of the Electronic design automation process, inserting watermarking constraints at High-level synthesis is particularly beneficial, especially where the applications have complex algorithms (such as a Digital signal processor or Media processor ). The register allocation phase of High-level synthesis is one of the popular locations where single-phase watermarking constraints are inserted. Most often the secret marks are inserted in the register allocation phase using the concept of Graph coloring , where each additional constraint is added as an additional edge of the graph. Moreover, the additional constraints are mapped to an encoded variable to provide another layer of security. In the binary encoding process [ 2 ] the signature is provided in the form of 0s and 1s, where each digit indicates a decoded constraint. In the multi-variable encoding process [ 3 ] [ 7 ] the signature is provided in the form of four different variables, viz. 'i', 'T', 'I', '!'. As the name suggests, in the multi-phase watermarking process the additional constraints are inserted in multiple phases of a particular design abstraction level. For example, in High-level synthesis the scheduling, hardware allocation and register allocation phases are all used to insert a watermark. The main challenge of the multi-phase watermarking process is the dependence between the additional constraints of the multiple phases.
In an ideal scenario, the watermarking constraints of each phase should not depend on each other. In other words, if the watermarking constraints of a particular phase are somehow tampered with or removed, this should not impact the constraints of the other phases. In the multi-phase encoding process [ 1 ] [ 4 ] the signature is provided in the form of seven different variables, viz. 'α', 'β', 'γ', 'i', 'T', 'I', '!', where γ inserts a watermark in the scheduling phase, α and β insert a watermark in the hardware allocation phase, and i, T, I, and ! insert a watermark in the register allocation phase.
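As a toy illustration of the single-phase idea, in which signature bits become extra edges in the register-interference graph before colouring, consider the sketch below. It is not any published watermarking scheme: the graph, the signature encoding, the candidate non-edges and the greedy colouring are all invented for the example.

```python
# Toy illustration of embedding a signature as extra constraint edges in a
# register-allocation (graph colouring) problem.

def greedy_coloring(n, edges):
    """Colour vertices 0..n-1 greedily so that adjacent vertices differ."""
    adj = {v: set() for v in range(n)}
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    color = {}
    for v in range(n):
        used = {color[u] for u in adj[v] if u in color}
        color[v] = next(c for c in range(n) if c not in used)
    return color

variables = 6
interference = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 5)]  # toy interference graph

# Encode a binary signature as additional constraint edges: each '1' bit
# consumes one candidate non-edge of the graph (an invented encoding).
signature = [1, 0, 1]
candidates = [(0, 2), (1, 3), (2, 4)]
watermark_edges = [e for bit, e in zip(signature, candidates) if bit == 1]

plain = greedy_coloring(variables, interference)
marked = greedy_coloring(variables, interference + watermark_edges)
print(plain)   # colouring without watermark constraints
print(marked)  # colouring that additionally satisfies the embedded signature
```

A verifier who knows the signature and the candidate list can later check whether a suspect design's register assignment satisfies every watermark constraint, which is the basic detection step such schemes rely on.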
https://en.wikipedia.org/wiki/Hardware_watermarking
Hardy's inequality is an inequality in mathematics , named after G. H. Hardy . Its discrete version states that if a 1 , a 2 , a 3 , … {\displaystyle a_{1},a_{2},a_{3},\dots } is a sequence of non-negative real numbers , then for every real number p > 1 one has If the right-hand side is finite, equality holds if and only if a n = 0 {\displaystyle a_{n}=0} for all n . An integral version of Hardy's inequality states the following: if f is a measurable function with non-negative values, then If the right-hand side is finite, equality holds if and only if f ( x ) = 0 almost everywhere . Hardy's inequality was first published and proved (at least the discrete version with a worse constant) in 1920 in a note by Hardy. [ 1 ] The original formulation was in an integral form slightly different from the above. The general weighted one dimensional version reads as follows: [ 2 ] : §332 if a n ≥ 0 {\displaystyle a_{n}\geq 0} , λ n > 0 {\displaystyle \lambda _{n}>0} and p > 1 {\displaystyle p>1} , The general weighted one dimensional version reads as follows: [ 2 ] : §330 In the multidimensional case, Hardy's inequality can be extended to L p {\displaystyle L^{p}} -spaces, taking the form [ 3 ] where f ∈ C c ∞ ( R n ) {\displaystyle f\in C_{c}^{\infty }(\mathbb {R} ^{n})} , and where the constant p n − p {\displaystyle {\frac {p}{n-p}}} is known to be sharp; by density it extends then to the Sobolev space W 1 , p ( R n ) {\displaystyle W^{1,p}(\mathbb {R} ^{n})} . Similarly, if p > n ≥ 2 {\displaystyle p>n\geq 2} , then one has for every f ∈ C c ∞ ( R n ) {\displaystyle f\in C_{c}^{\infty }(\mathbb {R} ^{n})} If Ω ⊊ R n {\displaystyle \Omega \subsetneq \mathbb {R} ^{n}} is an nonempty convex open set, then for every f ∈ W 1 , p ( Ω ) {\displaystyle f\in W^{1,p}(\Omega )} , and the constant cannot be improved. [ 4 ] If 1 ≤ p < ∞ {\displaystyle 1\leq p<\infty } and 0 < λ < ∞ {\displaystyle 0<\lambda <\infty } , λ ≠ 1 {\displaystyle \lambda \neq 1} , there exists a constant C {\displaystyle C} such that for every f : ( 0 , ∞ ) → R {\displaystyle f:(0,\infty )\to \mathbb {R} } satisfying ∫ 0 ∞ | f ( x ) | p / x λ d x < ∞ {\displaystyle \int _{0}^{\infty }\vert f(x)\vert ^{p}/x^{\lambda }\,dx<\infty } , one has [ 5 ] : Lemma 2 Hardy’s original proof [ 1 ] [ 2 ] : §327 (ii) begins with an integration by parts to get Then, by Hölder's inequality , and the conclusion follows. A change of variables gives which is less or equal than ∫ 0 1 ( ∫ 0 ∞ f ( s x ) p d x ) 1 / p d s {\displaystyle \int _{0}^{1}\left(\int _{0}^{\infty }f(sx)^{p}\,dx\right)^{1/p}\,ds} by Minkowski's integral inequality . Finally, by another change of variables, the last expression equals Assuming the right-hand side to be finite, we must have a n → 0 {\displaystyle a_{n}\to 0} as n → ∞ {\displaystyle n\to \infty } . Hence, for any positive integer j , there are only finitely many terms bigger than 2 − j {\displaystyle 2^{-j}} . This allows us to construct a decreasing sequence b 1 ≥ b 2 ≥ ⋯ {\displaystyle b_{1}\geq b_{2}\geq \dotsb } containing the same positive terms as the original sequence (but possibly no zero terms). Since a 1 + a 2 + ⋯ + a n ≤ b 1 + b 2 + ⋯ + b n {\displaystyle a_{1}+a_{2}+\dotsb +a_{n}\leq b_{1}+b_{2}+\dotsb +b_{n}} for every n , it suffices to show the inequality for the new sequence. This follows directly from the integral form, defining f ( x ) = b n {\displaystyle f(x)=b_{n}} if n − 1 < x < n {\displaystyle n-1<x<n} and f ( x ) = 0 {\displaystyle f(x)=0} otherwise. 
Indeed, one has and, for n − 1 < x < n {\displaystyle n-1<x<n} , there holds (the last inequality is equivalent to ( n − x ) ( b 1 + ⋯ + b n − 1 ) ≥ ( n − 1 ) ( n − x ) b n {\displaystyle (n-x)(b_{1}+\dots +b_{n-1})\geq (n-1)(n-x)b_{n}} , which is true as the new sequence is decreasing) and thus Let p > 1 {\displaystyle p>1} and let b 1 , … , b n {\displaystyle b_{1},\dots ,b_{n}} be positive real numbers. Set S k = ∑ i = 1 k b i {\displaystyle S_{k}=\sum _{i=1}^{k}b_{i}} . First we prove the inequality Let T n = S n n {\displaystyle T_{n}={\frac {S_{n}}{n}}} and let Δ n {\displaystyle \Delta _{n}} be the difference between the n {\displaystyle n} -th terms in the right-hand side and left-hand side of * , that is, Δ n := T n p − p p − 1 b n T n p − 1 {\displaystyle \Delta _{n}:=T_{n}^{p}-{\frac {p}{p-1}}b_{n}T_{n}^{p-1}} . We have: or According to Young's inequality we have: from which it follows that: By telescoping we have: proving * . Applying Hölder's inequality to the right-hand side of * we have: from which we immediately obtain: Letting N → ∞ {\displaystyle N\rightarrow \infty } we obtain Hardy's inequality.
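The discrete inequality, in its standard form with the constant (p/(p−1))^p on the right-hand side, can also be spot-checked numerically. The following sketch evaluates a finite truncation of both sides for p = 2 and the sample sequence a_n = 1/n²; it is a sanity check, not a proof.

```python
# Finite-truncation check of the discrete Hardy inequality for p = 2:
#   sum_n ((a_1 + ... + a_n)/n)^p  <=  (p/(p-1))^p * sum_n a_n^p

p = 2.0
a = [1.0 / (n * n) for n in range(1, 2001)]  # sample non-negative sequence a_n = 1/n^2

partial = 0.0
lhs = 0.0
for n, a_n in enumerate(a, start=1):
    partial += a_n                 # partial = a_1 + ... + a_n
    lhs += (partial / n) ** p      # accumulate ((a_1 + ... + a_n)/n)^p

rhs = (p / (p - 1)) ** p * sum(a_n ** p for a_n in a)
print(lhs, rhs, lhs <= rhs)        # the last value should be True
```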
https://en.wikipedia.org/wiki/Hardy's_inequality
In mathematics , a Hardy field is a field consisting of germs of real-valued functions at infinity that are closed under differentiation . They are named after the English mathematician G. H. Hardy . Initially at least, Hardy fields were defined in terms of germs of real functions at infinity. Specifically we consider a collection H of functions that are defined for all large real numbers, that is functions f that map ( u ,∞) to the real numbers R , for some real number u depending on f . Here and in the rest of the article we say a function has a property " eventually " if it has the property for all sufficiently large x , so for example we say a function f in H is eventually zero if there is some real number U such that f ( x ) = 0 for all x ≥ U . We can form an equivalence relation on H by saying f is equivalent to g if and only if f − g is eventually zero. The equivalence classes of this relation are called germs at infinity. If H forms a field under the usual addition and multiplication of functions then so will H modulo this equivalence relation under the induced addition and multiplication operations. Moreover, if every function in H is eventually differentiable and the derivative of any function in H is also in H then H modulo the above equivalence relation is called a Hardy field. [ 1 ] Elements of a Hardy field are thus equivalence classes and should be denoted, say, [ f ] ∞ to denote the class of functions that are eventually equal to the representative function f . However, in practice the elements are normally just denoted by the representatives themselves, so instead of [ f ] ∞ one would just write f . If F is a subfield of R then we can consider it as a Hardy field by considering the elements of F as constant functions, that is by considering the number α in F as the constant function f α that maps every x in R to α. This is a field since F is, and since the derivative of every function in this field is 0 which must be in F it is a Hardy field. A less trivial example of a Hardy field is the field of rational functions on R , denoted R ( x ). This is the set of functions of the form P ( x )/ Q ( x ) where P and Q are polynomials with real coefficients. Since the polynomial Q can have only finitely many zeros by the fundamental theorem of algebra , such a rational function will be defined for all sufficiently large x , specifically for all x larger than the largest real root of Q . Adding and multiplying rational functions gives more rational functions, and the quotient rule shows that the derivative of rational function is again a rational function, so R ( x ) forms a Hardy field. Another example is the field of functions that can be expressed using the standard arithmetic operations, exponents, and logarithms, and are well-defined on some interval of the form ( x , ∞ ) {\displaystyle (x,\infty )} . [ 2 ] Such functions are sometimes called Hardy L-functions . Much bigger Hardy fields (that contain Hardy L-functions as a subfield) can be defined using transseries . Every element of a Hardy field is eventually either strictly positive, strictly negative, or zero. This follows fairly immediately from the facts that the elements in a Hardy field are eventually differentiable and hence continuous and eventually either have a multiplicative inverse or are zero. This means periodic functions such as the sine and cosine functions cannot exist in Hardy fields. 
This avoidance of periodic functions also means that every element in a Hardy field has a (possibly infinite) limit at infinity, so if f is an element of H , then exists in R ∪ {−∞,+∞}. [ 3 ] It also means we can place an ordering on H by saying f < g if g − f is eventually strictly positive. Note that this is not the same as stating that f < g if the limit of f is less than the limit of g . For example, if we consider the germs of the identity function f ( x ) = x and the exponential function g ( x ) = e x then since g ( x ) − f ( x ) > 0 for all x we have that g > f . But they both tend to infinity. In this sense the ordering tells us how quickly all the unbounded functions diverge to infinity. Even finite limits being equal is not enough: consider f ( x ) = 1/ x and g ( x ) = 0. The modern theory of Hardy fields doesn't restrict to real functions but to those defined in certain structures expanding real closed fields . Indeed, if R is an o-minimal expansion of a field, then the set of unary definable functions in R that are defined for all sufficiently large elements forms a Hardy field denoted H ( R ). [ 4 ] The properties of Hardy fields in the real setting still hold in this more general setting.
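The ordering of germs by the eventual sign of g − f, and the fact that limits at infinity alone do not determine it, can be illustrated with a small sketch (assuming the Python library sympy; the limit and sample-point checks below are only a heuristic illustration, not a decision procedure for Hardy fields).

```python
# Comparing germs at infinity: f < g when g - f is eventually strictly positive.
from sympy import symbols, exp, limit, oo

x = symbols('x', positive=True)

# x and exp(x): both tend to infinity, yet x < exp(x) as germs.
f, g = x, exp(x)
print(limit(g - f, x, oo))                 # oo, so g - f is eventually positive
print(limit(f, x, oo), limit(g, x, oo))    # both oo: limits alone do not decide

# 1/x and 0: both have limit 0, yet 1/x > 0 as germs.
f, g = 1 / x, 0 * x
print(limit(g - f, x, oo))   # 0: the limits agree ...
print((g - f).subs(x, 100))  # ... but g - f = -1/x is eventually negative, so g < f
```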
https://en.wikipedia.org/wiki/Hardy_field
In computability theory , computational complexity theory and proof theory , the Hardy hierarchy , named after G. H. Hardy , is a hierarchy of sets of numerical functions generated from an ordinal-indexed family of functions h α : N → N (where N is the set of natural numbers , {0, 1, ...}) called Hardy functions . It is related to the fast-growing hierarchy and slow-growing hierarchy . The Hardy hierarchy was introduced by Stanley S. Wainer in 1972, [ 1 ] [ 2 ] but the idea of its definition comes from Hardy's 1904 paper, [ 2 ] [ 3 ] in which Hardy exhibits a set of reals with cardinality ℵ 1 {\displaystyle \aleph _{1}} . Let μ be a large countable ordinal such that a fundamental sequence is assigned to every limit ordinal less than μ . The Hardy functions h α : N → N , for α < μ , are then defined as follows: h 0 ( n ) = n ; h α+1 ( n ) = h α ( n + 1) for successor ordinals; and h α ( n ) = h α[ n ] ( n ) for limit ordinals α . Here α [ n ] denotes the n th element of the fundamental sequence assigned to the limit ordinal α . A standardized choice of fundamental sequence for all α ≤ ε 0 is described in the article on the fast-growing hierarchy . The Hardy hierarchy { H α } α < μ {\displaystyle \{{\mathcal {H}}_{\alpha }\}_{\alpha <\mu }} is a family of numerical functions. For each ordinal α , a set H α {\displaystyle {\mathcal {H}}_{\alpha }} is defined as the smallest class of functions containing H α , zero, successor and projection functions, and closed under limited primitive recursion and limited substitution [ 2 ] (similar to the Grzegorczyk hierarchy ). Caicedo (2007) defines a modified Hardy hierarchy of functions H α {\displaystyle H_{\alpha }} by using the standard fundamental sequences, but with α [ n +1] (instead of α [ n ]) in the third clause of the above definition. The Wainer hierarchy of functions f α and the Hardy hierarchy of functions H α are related by f α = H ω α for all α < ε 0 . Thus, for any α < ε 0 , H α grows much more slowly than does f α . However, the Hardy hierarchy "catches up" to the Wainer hierarchy at α = ε 0 , such that f ε 0 and H ε 0 have the same growth rate, in the sense that f ε 0 ( n -1) ≤ H ε 0 ( n ) ≤ f ε 0 ( n +1) for all n ≥ 1. ( Gallier 1991 )
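For ordinals below ω² the recursion can be traced directly. The sketch below is a minimal illustration, assuming ordinals written as ω·a + b and the fundamental sequence (ω·(a+1))[n] = ω·a + n; with these choices it reproduces H_ω(n) = 2n and H_{ω·2}(n) = 4n.

```python
# Hardy functions H_alpha for ordinals alpha = omega*a + b (a, b natural numbers),
# using H_0(n) = n, H_{alpha+1}(n) = H_alpha(n + 1), and, for limit ordinals,
# H_lambda(n) = H_{lambda[n]}(n) with (omega*(a+1))[n] = omega*a + n.

def hardy(a: int, b: int, n: int) -> int:
    if a == 0 and b == 0:
        return n                       # H_0(n) = n
    if b > 0:
        return hardy(a, b - 1, n + 1)  # successor case
    return hardy(a - 1, n, n)          # limit case: alpha[n] = omega*(a-1) + n

print([hardy(1, 0, n) for n in range(1, 6)])  # H_omega(n)     = 2n -> [2, 4, 6, 8, 10]
print([hardy(2, 0, n) for n in range(1, 6)])  # H_{omega*2}(n) = 4n -> [4, 8, 12, 16, 20]
```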
https://en.wikipedia.org/wiki/Hardy_hierarchy
Big O notation is a mathematical notation that describes the limiting behavior of a function when the argument tends towards a particular value or infinity. Big O is a member of a family of notations invented by German mathematicians Paul Bachmann , [ 1 ] Edmund Landau , [ 2 ] and others, collectively called Bachmann–Landau notation or asymptotic notation . The letter O was chosen by Bachmann to stand for Ordnung , meaning the order of approximation . In computer science , big O notation is used to classify algorithms according to how their run time or space requirements [ a ] grow as the input size grows. [ 3 ] In analytic number theory , big O notation is often used to express a bound on the difference between an arithmetical function and a better understood approximation; a famous example of such a difference is the remainder term in the prime number theorem . Big O notation is also used in many other fields to provide similar estimates. Big O notation characterizes functions according to their growth rates: different functions with the same asymptotic growth rate may be represented using the same O notation. The letter O is used because the growth rate of a function is also referred to as the order of the function . A description of a function in terms of big O notation usually only provides an upper bound on the growth rate of the function. Associated with big O notation are several related notations, using the symbols o {\displaystyle o} , Ω {\displaystyle \Omega } , ω {\displaystyle \omega } , and Θ {\displaystyle \Theta } to describe other kinds of bounds on asymptotic growth rates. [ 3 ] Let f , {\displaystyle f,} the function to be estimated, be a real or complex valued function, and let g , {\displaystyle g,} the comparison function, be a real valued function. Let both functions be defined on some unbounded subset of the positive real numbers , and g ( x ) {\displaystyle g(x)} be non-zero (often, but not necessarily, strictly positive) for all large enough values of x . {\displaystyle x.} [ 4 ] One writes f ( x ) = O ( g ( x ) ) as x → ∞ {\displaystyle f(x)=O{\bigl (}g(x){\bigr )}\quad {\text{ as }}x\to \infty } and it is read " f ( x ) {\displaystyle f(x)} is big O of g ( x ) {\displaystyle g(x)} " or more often " f ( x ) {\displaystyle f(x)} is of the order of g ( x ) {\displaystyle g(x)} " if the absolute value of f ( x ) {\displaystyle f(x)} is at most a positive constant multiple of the absolute value of g ( x ) {\displaystyle g(x)} for all sufficiently large values of x . {\displaystyle x.} That is, f ( x ) = O ( g ( x ) ) {\displaystyle f(x)=O{\bigl (}g(x){\bigr )}} if there exists a positive real number M {\displaystyle M} and a real number x 0 {\displaystyle x_{0}} such that | f ( x ) | ≤ M | g ( x ) | for all x ≥ x 0 . {\displaystyle |f(x)|\leq M\ |g(x)|\quad {\text{ for all }}x\geq x_{0}~.} In many contexts, the assumption that we are interested in the growth rate as the variable x {\displaystyle \ x\ } goes to infinity or to zero is left unstated, and one writes more simply that f ( x ) = O ( g ( x ) ) . 
{\displaystyle f(x)=O{\bigl (}g(x){\bigr )}.} The notation can also be used to describe the behavior of f {\displaystyle f} near some real number a {\displaystyle a} (often, a = 0 {\displaystyle a=0} ): we say f ( x ) = O ( g ( x ) ) as x → a {\displaystyle f(x)=O{\bigl (}g(x){\bigr )}\quad {\text{ as }}\ x\to a} if there exist positive numbers δ {\displaystyle \delta } and M {\displaystyle M} such that for all defined x {\displaystyle x} with 0 < | x − a | < δ , {\displaystyle 0<|x-a|<\delta ,} | f ( x ) | ≤ M | g ( x ) | . {\displaystyle |f(x)|\leq M|g(x)|.} As g ( x ) {\displaystyle g(x)} is non-zero for adequately large (or small) values of x , {\displaystyle x,} both of these definitions can be unified using the limit superior : f ( x ) = O ( g ( x ) ) as x → a {\displaystyle f(x)=O{\bigl (}g(x){\bigr )}\quad {\text{ as }}\ x\to a} if lim sup x → a | f ( x ) | | g ( x ) | < ∞ . {\displaystyle \limsup _{x\to a}{\frac {\left|f(x)\right|}{\left|g(x)\right|}}<\infty .} And in both of these definitions the limit point a {\displaystyle a} (whether ∞ {\displaystyle \infty } or not) is a cluster point of the domains of f {\displaystyle f} and g , {\displaystyle g,} i. e., in every neighbourhood of a {\displaystyle a} there have to be infinitely many points in common. Moreover, as pointed out in the article about the limit inferior and limit superior , the lim sup x → a {\displaystyle \textstyle \limsup _{x\to a}} (at least on the extended real number line ) always exists. In computer science, a slightly more restrictive definition is common: f {\displaystyle f} and g {\displaystyle g} are both required to be functions from some unbounded subset of the positive integers to the nonnegative real numbers; then f ( x ) = O ( g ( x ) ) {\displaystyle f(x)=O{\bigl (}g(x){\bigr )}} if there exist positive integer numbers M {\displaystyle M} and n 0 {\displaystyle n_{0}} such that | f ( n ) | ≤ M | g ( n ) | {\displaystyle |f(n)|\leq M|g(n)|} for all n ≥ n 0 . {\displaystyle n\geq n_{0}.} [ 5 ] In typical usage the O {\displaystyle O} notation is asymptotical, that is, it refers to very large x {\displaystyle x} . In this setting, the contribution of the terms that grow "most quickly" will eventually make the other ones irrelevant. As a result, the following simplification rules can be applied: For example, let f ( x ) = 6 x 4 − 2 x 3 + 5 {\displaystyle f(x)=6x^{4}-2x^{3}+5} , and suppose we wish to simplify this function, using O {\displaystyle O} notation, to describe its growth rate as x → ∞ {\displaystyle x\rightarrow \infty } . This function is the sum of three terms: 6 x 4 {\displaystyle 6x^{4}} , − 2 x 3 {\displaystyle -2x^{3}} , and 5 {\displaystyle 5} . Of these three terms, the one with the highest growth rate is the one with the largest exponent as a function of x {\displaystyle x} , namely 6 x 4 {\displaystyle 6x^{4}} . Now one may apply the second rule: 6 x 4 {\displaystyle 6x^{4}} is a product of 6 {\displaystyle 6} and x 4 {\displaystyle x^{4}} in which the first factor does not depend on x {\displaystyle x} . Omitting this factor results in the simplified form x 4 {\displaystyle x^{4}} . Thus, we say that f ( x ) {\displaystyle f(x)} is a "big O" of x 4 {\displaystyle x^{4}} . Mathematically, we can write f ( x ) = O ( x 4 ) {\displaystyle f(x)=O(x^{4})} . One may confirm this calculation using the formal definition: let f ( x ) = 6 x 4 − 2 x 3 + 5 {\displaystyle f(x)=6x^{4}-2x^{3}+5} and g ( x ) = x 4 {\displaystyle g(x)=x^{4}} . 
Applying the formal definition from above, the statement that f ( x ) = O ( x 4 ) {\displaystyle f(x)=O(x^{4})} is equivalent to its expansion, | f ( x ) | ≤ M x 4 {\displaystyle |f(x)|\leq Mx^{4}} for some suitable choice of a real number x 0 {\displaystyle x_{0}} and a positive real number M {\displaystyle M} and for all x > x 0 {\displaystyle x>x_{0}} . To prove this, let x 0 = 1 {\displaystyle x_{0}=1} and M = 13 {\displaystyle M=13} . Then, for all x > x 0 {\displaystyle x>x_{0}} : | 6 x 4 − 2 x 3 + 5 | ≤ 6 x 4 + | 2 x 3 | + 5 ≤ 6 x 4 + 2 x 4 + 5 x 4 = 13 x 4 {\displaystyle {\begin{aligned}|6x^{4}-2x^{3}+5|&\leq 6x^{4}+|2x^{3}|+5\\&\leq 6x^{4}+2x^{4}+5x^{4}\\&=13x^{4}\end{aligned}}} so | 6 x 4 − 2 x 3 + 5 | ≤ 13 x 4 . {\displaystyle |6x^{4}-2x^{3}+5|\leq 13x^{4}.} Big O notation has two main areas of application: In both applications, the function g ( x ) {\displaystyle g(x)} appearing within the O ( ⋅ ) {\displaystyle O(\cdot )} is typically chosen to be as simple as possible, omitting constant factors and lower order terms. There are two formally close, but noticeably different, usages of this notation: [ citation needed ] This distinction is only in application and not in principle, however—the formal definition for the "big O" is the same for both cases, only with different limits for the function argument. [ original research? ] Big O notation is useful when analyzing algorithms for efficiency. For example, the time (or the number of steps) it takes to complete a problem of size n {\displaystyle n} might be found to be T ( n ) = 4 n 2 − 2 n + 2 {\displaystyle T(n)=4n^{2}-2n+2} . As n {\displaystyle n} grows large, the n 2 {\displaystyle n^{2}} term will come to dominate, so that all other terms can be neglected—for instance when n = 500 {\displaystyle n=500} , the term 4 n 2 {\displaystyle 4n^{2}} is 1000 times as large as the 2 n {\displaystyle 2n} term. Ignoring the latter would have negligible effect on the expression's value for most purposes. Further, the coefficients become irrelevant if we compare to any other order of expression, such as an expression containing a term n 3 {\displaystyle n^{3}} or n 4 {\displaystyle n^{4}} . Even if T ( n ) = 1000000 n 2 {\displaystyle T(n)=1000000n^{2}} , if U ( n ) = n 3 {\displaystyle U(n)=n^{3}} , the latter will always exceed the former once n grows larger than 1000000 {\displaystyle 1000000} , viz. T ( 1000000 ) = 1000000 3 = U ( 1000000 ) {\displaystyle T(1000000)=1000000^{3}=U(1000000)} . Additionally, the number of steps depends on the details of the machine model on which the algorithm runs, but different types of machines typically vary by only a constant factor in the number of steps needed to execute an algorithm. So the big O notation captures what remains: we write either or and say that the algorithm has order of n 2 time complexity. The sign " = " is not meant to express "is equal to" in its normal mathematical sense, but rather a more colloquial "is", so the second expression is sometimes considered more accurate (see the " Equals sign " discussion below) while the first is considered by some as an abuse of notation . [ 6 ] Big O can also be used to describe the error term in an approximation to a mathematical function. The most significant terms are written explicitly, and then the least-significant terms are summarized in a single big O term. Consider, for example, the exponential series and two expressions of it that are valid when x is small: e x = 1 + x + x 2 2 ! + x 3 3 ! + x 4 4 ! 
+ ⋯ for all finite x = 1 + x + x 2 2 + O ( x 3 ) as x → 0 = 1 + x + O ( x 2 ) as x → 0 {\displaystyle {\begin{aligned}e^{x}&=1+x+{\frac {x^{2}}{2!}}+{\frac {x^{3}}{3!}}+{\frac {x^{4}}{4!}}+\dotsb &&{\text{for all finite }}x\\[4pt]&=1+x+{\frac {x^{2}}{2}}+O(x^{3})&&{\text{as }}x\to 0\\[4pt]&=1+x+O(x^{2})&&{\text{as }}x\to 0\end{aligned}}} The middle expression (the one with O ( x 3 ) {\displaystyle O(x^{3})} ) means the absolute-value of the error e x − ( 1 + x + x 2 2 ) {\displaystyle e^{x}-(1+x+{\frac {x^{2}}{2}})} is at most some constant times | x 3 ! | {\displaystyle |x^{3}!|} when x {\displaystyle x} is close enough to 0 {\displaystyle 0} . If the function f can be written as a finite sum of other functions, then the fastest growing one determines the order of f ( n ) . For example, In particular, if a function may be bounded by a polynomial in n , then as n tends to infinity , one may disregard lower-order terms of the polynomial. The sets O ( n c ) and O ( c n ) are very different. If c is greater than one, then the latter grows much faster. A function that grows faster than n c for any c is called superpolynomial . One that grows more slowly than any exponential function of the form c n is called subexponential . An algorithm can require time that is both superpolynomial and subexponential; examples of this include the fastest known algorithms for integer factorization and the function n log n . We may ignore any powers of n inside of the logarithms. The set O (log n ) is exactly the same as O (log( n c )) . The logarithms differ only by a constant factor (since log( n c ) = c log n ) and thus the big O notation ignores that. Similarly, logs with different constant bases are equivalent. On the other hand, exponentials with different bases are not of the same order. For example, 2 n and 3 n are not of the same order. Changing units may or may not affect the order of the resulting algorithm. Changing units is equivalent to multiplying the appropriate variable by a constant wherever it appears. For example, if an algorithm runs in the order of n 2 , replacing n by cn means the algorithm runs in the order of c 2 n 2 , and the big O notation ignores the constant c 2 . This can be written as c 2 n 2 = O( n 2 ) . If, however, an algorithm runs in the order of 2 n , replacing n with cn gives 2 cn = (2 c ) n . This is not equivalent to 2 n in general. Changing variables may also affect the order of the resulting algorithm. For example, if an algorithm's run time is O ( n ) when measured in terms of the number n of digits of an input number x , then its run time is O (log x ) when measured as a function of the input number x itself, because n = O (log x ) . If f 1 = O ( g 1 ) {\displaystyle f_{1}=O(g_{1})} and f 2 = O ( g 2 ) {\displaystyle f_{2}=O(g_{2})} then f 1 + f 2 = O ( max ( | g 1 | , | g 2 | ) ) {\displaystyle f_{1}+f_{2}=O(\max(|g_{1}|,|g_{2}|))} . It follows that if f 1 = O ( g ) {\displaystyle f_{1}=O(g)} and f 2 = O ( g ) {\displaystyle f_{2}=O(g)} then f 1 + f 2 ∈ O ( g ) {\displaystyle f_{1}+f_{2}\in O(g)} . Let k be a nonzero constant. Then O ( | k | ⋅ g ) = O ( g ) {\displaystyle O(|k|\cdot g)=O(g)} . In other words, if f = O ( g ) {\displaystyle f=O(g)} , then k ⋅ f = O ( g ) . {\displaystyle k\cdot f=O(g).} Big O (and little o, Ω, etc.) can also be used with multiple variables. To define big O formally for multiple variables, suppose f {\displaystyle f} and g {\displaystyle g} are two functions defined on some subset of R n {\displaystyle \mathbb {R} ^{n}} . 
We say if and only if there exist constants M {\displaystyle M} and C > 0 {\displaystyle C>0} such that | f ( x ) | ≤ C | g ( x ) | {\displaystyle |f(\mathbf {x} )|\leq C|g(\mathbf {x} )|} for all x {\displaystyle \mathbf {x} } with x i ≥ M {\displaystyle x_{i}\geq M} for some i . {\displaystyle i.} [ 7 ] Equivalently, the condition that x i ≥ M {\displaystyle x_{i}\geq M} for some i {\displaystyle i} can be written ‖ x ‖ ∞ ≥ M {\displaystyle \|\mathbf {x} \|_{\infty }\geq M} , where ‖ x ‖ ∞ {\displaystyle \|\mathbf {x} \|_{\infty }} denotes the Chebyshev norm . For example, the statement asserts that there exist constants C and M such that whenever either m ≥ M {\displaystyle m\geq M} or n ≥ M {\displaystyle n\geq M} holds. This definition allows all of the coordinates of x {\displaystyle \mathbf {x} } to increase to infinity. In particular, the statement (i.e., ∃ C ∃ M ∀ n ∀ m ⋯ {\displaystyle \exists C\,\exists M\,\forall n\,\forall m\,\cdots } ) is quite different from (i.e., ∀ m ∃ C ∃ M ∀ n ⋯ {\displaystyle \forall m\,\exists C\,\exists M\,\forall n\,\cdots } ). Under this definition, the subset on which a function is defined is significant when generalizing statements from the univariate setting to the multivariate setting. For example, if f ( n , m ) = 1 {\displaystyle f(n,m)=1} and g ( n , m ) = n {\displaystyle g(n,m)=n} , then f ( n , m ) = O ( g ( n , m ) ) {\displaystyle f(n,m)=O(g(n,m))} if we restrict f {\displaystyle f} and g {\displaystyle g} to [ 1 , ∞ ) 2 {\displaystyle [1,\infty )^{2}} , but not if they are defined on [ 0 , ∞ ) 2 {\displaystyle [0,\infty )^{2}} . This is not the only generalization of big O to multivariate functions, and in practice, there is some inconsistency in the choice of definition. [ 8 ] The statement " f ( x ) is O [ g ( x )] " as defined above is usually written as f ( x ) = O [ g ( x )] . Some consider this to be an abuse of notation , since the use of the equals sign could be misleading as it suggests a symmetry that this statement does not have. As de Bruijn says, O [ x ] = O [ x 2 ] is true but O [ x 2 ] = O [ x ] is not. [ 9 ] Knuth describes such statements as "one-way equalities", since if the sides could be reversed, "we could deduce ridiculous things like n = n 2 from the identities n = O [ n 2 ] and n 2 = O [ n 2 ] ". [ 10 ] In another letter, Knuth also pointed out that [ 11 ] the equality sign is not symmetric with respect to such notations [as, in this notation,] mathematicians customarily use the '=' sign as they use the word 'is' in English: Aristotle is a man, but a man isn't necessarily Aristotle. For these reasons, it would be more precise to use set notation and write f ( x ) ∈ O [ g ( x )] – read as: " f ( x ) is an element of O [ g ( x )] ", or " f ( x ) is in the set O [ g ( x )] " – thinking of O [ g ( x )] as the class of all functions h ( x ) such that | h ( x ) | ≤ C | g ( x ) | for some positive real number C . [ 10 ] However, the use of the equals sign is customary. [ 9 ] [ 10 ] Big O notation can also be used in conjunction with other arithmetic operators in more complicated equations. For example, h ( x ) + O ( f ( x )) denotes the collection of functions having the growth of h ( x ) plus a part whose growth is limited to that of f ( x ) . Thus, g ( x ) = h ( x ) + O ( f ( x ) ) {\displaystyle g(x)=h(x)+O(f(x))} expresses the same as g ( x ) − h ( x ) = O ( f ( x ) ) . {\displaystyle g(x)-h(x)=O(f(x)).} Suppose an algorithm is being developed to operate on a set of n elements. 
Its developers are interested in finding a function T ( n ) that will express how long the algorithm will take to run (in some arbitrary measurement of time) in terms of the number of elements in the input set. The algorithm works by first calling a subroutine to sort the elements in the set and then perform its own operations. The sort has a known time complexity of O ( n 2 ) , and after the subroutine runs the algorithm must take an additional 55 n 3 + 2 n + 10 steps before it terminates. Thus the overall time complexity of the algorithm can be expressed as T ( n ) = 55 n 3 + O ( n 2 ) . Here the terms 2 n + 10 are subsumed within the faster-growing O ( n 2 ) . Again, this usage disregards some of the formal meaning of the " = " symbol, but it does allow one to use the big O notation as a kind of convenient placeholder. In more complicated usage, O (·) can appear in different places in an equation, even several times on each side. For example, the following are true for n → ∞ {\displaystyle n\to \infty } : ( n + 1 ) 2 = n 2 + O ( n ) , ( n + O ( n 1 / 2 ) ) ⋅ ( n + O ( log ⁡ n ) ) 2 = n 3 + O ( n 5 / 2 ) , n O ( 1 ) = O ( e n ) . {\displaystyle {\begin{aligned}(n+1)^{2}&=n^{2}+O(n),\\(n+O(n^{1/2}))\cdot (n+O(\log n))^{2}&=n^{3}+O(n^{5/2}),\\n^{O(1)}&=O(e^{n}).\end{aligned}}} The meaning of such statements is as follows: for any functions which satisfy each O (·) on the left side, there are some functions satisfying each O (·) on the right side, such that substituting all these functions into the equation makes the two sides equal. For example, the third equation above means: "For any function f ( n ) = O (1) , there is some function g ( n ) = O ( e n ) such that n f ( n ) = g ( n ) ". In terms of the "set notation" above, the meaning is that the class of functions represented by the left side is a subset of the class of functions represented by the right side. In this use the " = " is a formal symbol that unlike the usual use of " = " is not a symmetric relation . Thus for example n O (1) = O ( e n ) does not imply the false statement O ( e n ) = n O (1) . Big O is typeset as an italicized uppercase " O " , as in the following example: O ( n 2 ) {\displaystyle O(n^{2})} . [ 12 ] [ 13 ] In TeX , it is produced by simply typing 'O' inside math mode. Unlike Greek-named Bachmann–Landau notations, it needs no special symbol. However, some authors use the calligraphic variant O {\displaystyle {\mathcal {O}}} instead. [ 14 ] [ 15 ] Here is a list of classes of functions that are commonly encountered when analyzing the running time of an algorithm. In each case, c is a positive constant and n increases without bound. The slower-growing functions are generally listed first. The statement f ( n ) = O ( n ! ) {\displaystyle f(n)=O(n!)} is sometimes weakened to f ( n ) = O ( n n ) {\displaystyle f(n)=O\left(n^{n}\right)} to derive simpler formulas for asymptotic complexity. For any k > 0 {\displaystyle k>0} and c > 0 {\displaystyle c>0} , O ( n c ( log ⁡ n ) k ) {\displaystyle O(n^{c}(\log n)^{k})} is a subset of O ( n c + ε ) {\displaystyle O(n^{c+\varepsilon })} for any ε > 0 {\displaystyle \varepsilon >0} , so may be considered as a polynomial with some bigger order. Big O is widely used in computer science. Together with some other related notations, it forms the family of Bachmann–Landau notations. 
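The explicit witness constants x₀ = 1 and M = 13 from the worked example earlier in this section can be spot-checked numerically; the sketch below only samples a few points and is a sanity check rather than a proof.

```python
# Spot-check of |6x^4 - 2x^3 + 5| <= 13 * x^4 for sample values x > 1,
# the constants used above to show 6x^4 - 2x^3 + 5 = O(x^4).

f = lambda x: 6 * x**4 - 2 * x**3 + 5
for x in (1.001, 2.0, 10.0, 1e3, 1e6):
    assert abs(f(x)) <= 13 * x**4
print("bound holds at all sampled points")
```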
[ citation needed ] Intuitively, the assertion " f ( x ) is o ( g ( x )) " (read " f ( x ) is little-o of g ( x ) " or " f ( x ) is of inferior order to g ( x ) ") means that g ( x ) grows much faster than f ( x ) , or equivalently f ( x ) grows much slower than g ( x ) . As before, let f be a real or complex valued function and g a real valued function, both defined on some unbounded subset of the positive real numbers , such that g ( x ) {\displaystyle g(x)} is strictly positive for all large enough values of x . One writes if for every positive constant ε there exists a constant x 0 {\displaystyle x_{0}} such that For example, one has The difference between the definition of the big-O notation and the definition of little-o is that while the former has to be true for at least one constant M , the latter must hold for every positive constant ε , however small. [ 18 ] In this way, little-o notation makes a stronger statement than the corresponding big-O notation: every function that is little-o of g is also big-O of g , but not every function that is big-O of g is little-o of g . For example, 2 x 2 = O ( x 2 ) {\displaystyle 2x^{2}=O(x^{2})} but 2 x 2 ≠ o ( x 2 ) {\displaystyle 2x^{2}\neq o(x^{2})} . If g ( x ) {\displaystyle g(x)} is nonzero, or at least becomes nonzero beyond a certain point, the relation f ( x ) = o ( g ( x ) ) {\displaystyle f(x)=o(g(x))} is equivalent to Little-o respects a number of arithmetic operations. For example, It also satisfies a transitivity relation: Little-o can also be generalized to the finite case: [ 19 ] f ( x ) = o ( g ( x ) ) as x → x 0 {\displaystyle f(x)=o(g(x))\quad {\text{ as }}x\to x_{0}} if f ( x ) = α ( x ) g ( x ) {\displaystyle f(x)=\alpha (x)g(x)} for some α ( x ) {\displaystyle \alpha (x)} with lim x → x 0 α ( x ) = 0 {\displaystyle \lim _{x\to x_{0}}\alpha (x)=0} . Or, if g ( x ) {\displaystyle g(x)} is nonzero in a neighbourhood around x 0 {\displaystyle x_{0}} : f ( x ) = o ( g ( x ) ) as x → x 0 {\displaystyle f(x)=o(g(x))\quad {\text{ as }}x\to x_{0}} if lim x → x 0 f ( x ) g ( x ) = 0 {\displaystyle \lim _{x\to x_{0}}{\frac {f(x)}{g(x)}}=0} . This definition especially useful in the computation of limits using Taylor series . For example: sin ⁡ x = x − x 3 3 ! + … = x + o ( x 2 ) as x → 0 {\displaystyle \sin x=x-{\frac {x^{3}}{3!}}+\ldots =x+o(x^{2}){\text{ as }}x\to 0} , so lim x → 0 sin ⁡ x x = lim x → 0 x + o ( x 2 ) x = lim x → 0 1 + o ( x ) = 1 {\displaystyle \lim _{x\to 0}{\frac {\sin x}{x}}=\lim _{x\to 0}{\frac {x+o(x^{2})}{x}}=\lim _{x\to 0}1+o(x)=1} Another asymptotic notation is Ω {\displaystyle \Omega } , read "big omega". [ 20 ] There are two widespread and incompatible definitions of the statement where a is some real number, ∞ {\displaystyle \infty } , or − ∞ {\displaystyle -\infty } , where f and g are real functions defined in a neighbourhood of a , and where g is positive in this neighbourhood. The Hardy–Littlewood definition is used mainly in analytic number theory , and the Knuth definition mainly in computational complexity theory ; the definitions are not equivalent. In 1914 G.H. Hardy and J.E. Littlewood introduced the new symbol Ω , {\displaystyle \ \Omega \ ,} [ 21 ] which is defined as follows: Thus f ( x ) = Ω ( g ( x ) ) {\displaystyle ~f(x)=\Omega {\bigl (}\ g(x)\ {\bigr )}~} is the negation of f ( x ) = o ( g ( x ) ) . 
{\displaystyle ~f(x)=o{\bigl (}\ g(x)\ {\bigr )}~.} In 1916 the same authors introduced the two new symbols Ω R {\displaystyle \ \Omega _{R}\ } and Ω L , {\displaystyle \ \Omega _{L}\ ,} defined as: [ 22 ] These symbols were used by E. Landau , with the same meanings, in 1924. [ 23 ] Authors that followed Landau, however, use a different notation for the same definitions: [ citation needed ] The symbol Ω R {\displaystyle \ \Omega _{R}\ } has been replaced by the current notation Ω + {\displaystyle \ \Omega _{+}\ } with the same definition, and Ω L {\displaystyle \ \Omega _{L}\ } became Ω − . {\displaystyle \ \Omega _{-}~.} These three symbols Ω , Ω + , Ω − , {\displaystyle \ \Omega \ ,\Omega _{+}\ ,\Omega _{-}\ ,} as well as f ( x ) = Ω ± ( g ( x ) ) {\displaystyle \ f(x)=\Omega _{\pm }{\bigl (}\ g(x)\ {\bigr )}\ } (meaning that f ( x ) = Ω + ( g ( x ) ) {\displaystyle \ f(x)=\Omega _{+}{\bigl (}\ g(x)\ {\bigr )}\ } and f ( x ) = Ω − ( g ( x ) ) {\displaystyle \ f(x)=\Omega _{-}{\bigl (}\ g(x)\ {\bigr )}\ } are both satisfied), are now currently used in analytic number theory . [ 24 ] [ 25 ] We have and more precisely We have and more precisely however In 1976 Donald Knuth published a paper to justify his use of the Ω {\displaystyle \Omega } -symbol to describe a stronger property. [ 26 ] Knuth wrote: "For all the applications I have seen so far in computer science, a stronger requirement ... is much more appropriate". He defined with the comment: "Although I have changed Hardy and Littlewood's definition of Ω {\displaystyle \Omega } , I feel justified in doing so because their definition is by no means in wide use, and because there are other ways to say what they want to say in the comparatively rare cases when their definition applies." [ 26 ] The limit definitions assume g ( n ) > 0 {\displaystyle g(n)>0} for sufficiently large n {\displaystyle n} . The table is (partly) sorted from smallest to largest, in the sense that o , O , Θ , ∼ , {\displaystyle o,O,\Theta ,\sim ,} (Knuth's version of) Ω , ω {\displaystyle \Omega ,\omega } on functions correspond to < , ≤ , ≈ , = , {\displaystyle <,\leq ,\approx ,=,} ≥ , > {\displaystyle \geq ,>} on the real line [ 29 ] (the Hardy–Littlewood version of Ω {\displaystyle \Omega } , however, doesn't correspond to any such description). Computer science uses the big O {\displaystyle O} , big Theta Θ {\displaystyle \Theta } , little o {\displaystyle o} , little omega ω {\displaystyle \omega } and Knuth's big Omega Ω {\displaystyle \Omega } notations. [ 30 ] Analytic number theory often uses the big O {\displaystyle O} , small o {\displaystyle o} , Hardy's ≍ {\displaystyle \asymp } , [ 31 ] Hardy–Littlewood's big Omega Ω {\displaystyle \Omega } (with or without the +, − or ± subscripts) and ∼ {\displaystyle \sim } notations. [ 24 ] The small omega ω {\displaystyle \omega } notation is not used as often in analysis. [ 32 ] Informally, especially in computer science, the big O notation often can be used somewhat differently to describe an asymptotic tight bound where using big Theta Θ notation might be more factually appropriate in a given context. [ 33 ] For example, when considering a function T ( n ) = 73 n 3 + 22 n 2 + 58, all of the following are generally acceptable, but tighter bounds (such as numbers 2 and 3 below) are usually strongly preferred over looser bounds (such as number 1 below). The equivalent English statements are respectively: So while all three statements are true, progressively more information is contained in each. 
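Returning to T ( n ) = 73 n 3 + 22 n 2 + 58, the tighter statements can be certified with explicit witness constants in the style of the formal definitions: 73 n 3 ≤ T ( n ) for every n ≥ 1 (a lower bound in Knuth's Ω sense), and T ( n ) ≤ 74 n 3 once n ≥ 23, because 22 n 2 + 58 ≤ n 3 from that point on (an O ( n 3 ) upper bound); together these give Θ( n 3 ), matching the remark that the tighter bounds carry more information. The Python sketch below is a finite-range spot check only, not a proof; the range limit N is an arbitrary assumption.

```python
def holds_on_range(predicate, n0, N=10_000):
    """Spot-check a pointwise inequality for n0 <= n <= N (finite sample only)."""
    return all(predicate(n) for n in range(n0, N + 1))

T = lambda n: 73 * n**3 + 22 * n**2 + 58

print(holds_on_range(lambda n: T(n) <= 74 * n**3, n0=23))  # True:  O(n^3) with witnesses c = 74, n0 = 23
print(holds_on_range(lambda n: T(n) <= 74 * n**3, n0=1))   # False: that bound fails for small n (e.g. n = 1)
print(holds_on_range(lambda n: T(n) >= 73 * n**3, n0=1))   # True:  lower bound in Knuth's Omega sense
```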
In some fields, however, the big O notation (number 2 in the lists above) would be used more commonly than the big Theta notation (items numbered 3 in the lists above). For example, if T ( n ) represents the running time of a newly developed algorithm for input size n , the inventors and users of the algorithm might be more inclined to put an upper asymptotic bound on how long it will take to run without making an explicit statement about the lower asymptotic bound. In their book Introduction to Algorithms , Cormen , Leiserson , Rivest and Stein consider the set of functions f which satisfy In a correct notation this set can, for instance, be called O ( g ), where O ( g ) = { f : there exist positive constants c and n 0 such that 0 ≤ f ( n ) ≤ c g ( n ) for all n ≥ n 0 } . {\displaystyle O(g)=\{f:{\text{there exist positive constants}}~c~{\text{and}}~n_{0}~{\text{such that}}~0\leq f(n)\leq cg(n){\text{ for all }}n\geq n_{0}\}.} [ 34 ] The authors state that the use of equality operator (=) to denote set membership rather than the set membership operator (∈) is an abuse of notation, but that doing so has advantages. [ 6 ] Inside an equation or inequality, the use of asymptotic notation stands for an anonymous function in the set O ( g ), which eliminates lower-order terms, and helps to reduce inessential clutter in equations, for example: [ 35 ] Another notation sometimes used in computer science is Õ (read soft-O ), which hides polylogarithmic factors. There are two definitions in use: some authors use f ( n ) = Õ ( g ( n )) as shorthand for f ( n ) = O ( g ( n ) log k n ) for some k , while others use it as shorthand for f ( n ) = O ( g ( n ) log k g ( n )) . [ 36 ] When g ( n ) is polynomial in n , there is no difference; however, the latter definition allows one to say, e.g. that n 2 n = O ~ ( 2 n ) {\displaystyle n2^{n}={\tilde {O}}(2^{n})} while the former definition allows for log k ⁡ n = O ~ ( 1 ) {\displaystyle \log ^{k}n={\tilde {O}}(1)} for any constant k . Some authors write O * for the same purpose as the latter definition. [ 37 ] Essentially, it is big O notation, ignoring logarithmic factors because the growth-rate effects of some other super-logarithmic function indicate a growth-rate explosion for large-sized input parameters that is more important to predicting bad run-time performance than the finer-point effects contributed by the logarithmic-growth factor(s). This notation is often used to obviate the "nitpicking" within growth-rates that are stated as too tightly bounded for the matters at hand (since log k n is always o ( n ε ) for any constant k and any ε > 0 ). Also, the L notation , defined as is convenient for functions that are between polynomial and exponential in terms of ln ⁡ n {\displaystyle \ln n} . The generalization to functions taking values in any normed vector space is straightforward (replacing absolute values by norms), where f and g need not take their values in the same space. A generalization to functions g taking values in any topological group is also possible [ citation needed ] . The "limiting process" x → x o can also be generalized by introducing an arbitrary filter base , i.e. to directed nets f and g . The o notation can be used to define derivatives and differentiability in quite general spaces, and also (asymptotical) equivalence of functions, which is an equivalence relation and a more restrictive notion than the relationship " f is Θ( g )" from above. (It reduces to lim f / g = 1 if f and g are positive real valued functions.) 
For example, 2 x is Θ( x ), but 2 x − x is not o ( x ). The symbol O was first introduced by number theorist Paul Bachmann in 1894, in the second volume of his book Analytische Zahlentheorie (" analytic number theory "). [ 1 ] The number theorist Edmund Landau adopted it, and was thus inspired to introduce in 1909 the notation o; [ 2 ] hence both are now called Landau symbols. These notations were used in applied mathematics during the 1950s for asymptotic analysis. [ 38 ] The symbol Ω {\displaystyle \Omega } (in the sense "is not an o of") was introduced in 1914 by Hardy and Littlewood. [ 21 ] Hardy and Littlewood also introduced in 1916 the symbols Ω R {\displaystyle \Omega _{R}} ("right") and Ω L {\displaystyle \Omega _{L}} ("left"), [ 22 ] precursors of the modern symbols Ω + {\displaystyle \Omega _{+}} ("is not smaller than a small o of") and Ω − {\displaystyle \Omega _{-}} ("is not larger than a small o of"). Thus the Omega symbols (with their original meanings) are sometimes also referred to as "Landau symbols". This notation Ω {\displaystyle \Omega } became commonly used in number theory at least since the 1950s. [ 39 ] The symbol ∼ {\displaystyle \sim } , although it had been used before with different meanings, [ 29 ] was given its modern definition by Landau in 1909 [ 40 ] and by Hardy in 1910. [ 41 ] Just above on the same page of his tract Hardy defined the symbol ≍ {\displaystyle \asymp } , where f ( x ) ≍ g ( x ) {\displaystyle f(x)\asymp g(x)} means that both f ( x ) = O ( g ( x ) ) {\displaystyle f(x)=O(g(x))} and g ( x ) = O ( f ( x ) ) {\displaystyle g(x)=O(f(x))} are satisfied. The notation is still currently used in analytic number theory. [ 42 ] [ 31 ] In his tract Hardy also proposed the symbol ≍ − {\displaystyle \mathbin {\,\asymp \;\;\;\;\!\!\!\!\!\!\!\!\!\!\!\!\!-} } , where f ≍ − g {\displaystyle f\mathbin {\,\asymp \;\;\;\;\!\!\!\!\!\!\!\!\!\!\!\!\!-} g} means that f ∼ K g {\displaystyle f\sim Kg} for some constant K ≠ 0 {\displaystyle K\not =0} . In the 1970s the big O was popularized in computer science by Donald Knuth , who proposed the different notation f ( x ) = Θ ( g ( x ) ) {\displaystyle f(x)=\Theta (g(x))} for Hardy's f ( x ) ≍ g ( x ) {\displaystyle f(x)\asymp g(x)} , and proposed a different definition for the Hardy and Littlewood Omega notation. [ 26 ] Two other symbols coined by Hardy were (in terms of the modern O notation) (Hardy however never defined or used the notation ≺ ≺ {\displaystyle \prec \!\!\prec } , nor ≪ {\displaystyle \ll } , as it has been sometimes reported). Hardy introduced the symbols ≼ {\displaystyle \preccurlyeq } and ≺ {\displaystyle \prec } (as well as the already mentioned other symbols) in his 1910 tract "Orders of Infinity", and made use of them only in three papers (1910–1913). In his nearly 400 remaining papers and books he consistently used the Landau symbols O and o. Hardy's symbols ≼ {\displaystyle \preccurlyeq } and ≺ {\displaystyle \prec } (as well as ≍ − {\displaystyle \mathbin {\,\asymp \;\;\;\;\!\!\!\!\!\!\!\!\!\!\!\!\!-} } ) are not used anymore. On the other hand, in the 1930s, [ 43 ] the Russian number theorist Ivan Matveyevich Vinogradov introduced his notation ≪ {\displaystyle \ll } , which has been increasingly used in number theory instead of the O {\displaystyle O} notation. We have and frequently both notations are used in the same paper. The big-O originally stands for "order of" ("Ordnung", Bachmann 1894), and is thus a Latin letter. Neither Bachmann nor Landau ever call it "Omicron". 
The symbol was much later on (1976) viewed by Knuth as a capital omicron , [ 26 ] probably in reference to his definition of the symbol Omega . The digit zero should not be used.
https://en.wikipedia.org/wiki/Hardy_notation
In mathematical analysis , the Hardy–Littlewood inequality , named after G. H. Hardy and John Edensor Littlewood , states that if f {\displaystyle f} and g {\displaystyle g} are nonnegative measurable real functions vanishing at infinity that are defined on n {\displaystyle n} - dimensional Euclidean space R n {\displaystyle \mathbb {R} ^{n}} , then where f ∗ {\displaystyle f^{*}} and g ∗ {\displaystyle g^{*}} are the symmetric decreasing rearrangements of f {\displaystyle f} and g {\displaystyle g} , respectively. [ 1 ] [ 2 ] The decreasing rearrangement f ∗ {\displaystyle f^{*}} of f {\displaystyle f} is defined via the property that for all r > 0 {\displaystyle r>0} the two super-level sets have the same volume ( n {\displaystyle n} -dimensional Lebesgue measure) and E f ∗ ( r ) {\displaystyle E_{f^{*}}(r)} is a ball in R n {\displaystyle \mathbb {R} ^{n}} centered at x = 0 {\displaystyle x=0} , i.e. it has maximal symmetry. The layer cake representation [ 1 ] [ 2 ] allows us to write the general functions f {\displaystyle f} and g {\displaystyle g} in the form f ( x ) = ∫ 0 ∞ χ f ( x ) > r d r {\displaystyle f(x)=\int _{0}^{\infty }\chi _{f(x)>r}\,dr\quad } and g ( x ) = ∫ 0 ∞ χ g ( x ) > s d s {\displaystyle \quad g(x)=\int _{0}^{\infty }\chi _{g(x)>s}\,ds} where r ↦ χ f ( x ) > r {\displaystyle r\mapsto \chi _{f(x)>r}} equals 1 {\displaystyle 1} for r < f ( x ) {\displaystyle r<f(x)} and 0 {\displaystyle 0} otherwise. Analogously, s ↦ χ g ( x ) > s {\displaystyle s\mapsto \chi _{g(x)>s}} equals 1 {\displaystyle 1} for s < g ( x ) {\displaystyle s<g(x)} and 0 {\displaystyle 0} otherwise. Now the proof can be obtained by first using Fubini's theorem to interchange the order of integration. When integrating with respect to x ∈ R n {\displaystyle x\in \mathbb {R} ^{n}} the conditions f ( x ) > r {\displaystyle f(x)>r} and g ( x ) > s {\displaystyle g(x)>s} the indicator functions x ↦ χ E f ( r ) ( x ) {\displaystyle x\mapsto \chi _{E_{f}(r)}(x)} and x ↦ χ E g ( s ) ( x ) {\displaystyle x\mapsto \chi _{E_{g}(s)}(x)} appear with the superlevel sets E f ( r ) {\displaystyle E_{f}(r)} and E g ( s ) {\displaystyle E_{g}(s)} as introduced above: Denoting by μ {\displaystyle \mu } the n {\displaystyle n} -dimensional Lebesgue measure we continue by estimating the volume of the intersection by the minimum of the volumes of the two sets. Then, we can use the equality of the volumes of the superlevel sets for the rearrangements: Now, we use that the superlevel sets E f ∗ ( r ) {\displaystyle E_{f^{*}}(r)} and E g ∗ ( s ) {\displaystyle E_{g^{*}}(s)} are balls in R n {\displaystyle \mathbb {R} ^{n}} centered at x = 0 {\displaystyle x=0} , which implies that E f ∗ ( r ) ∩ E g ∗ ( s ) {\displaystyle E_{f^{*}}(r)\,\cap \,E_{g^{*}}(s)} is exactly the smaller one of the two balls: The last identity follows by reversing the initial five steps that even work for general functions. This finishes the proof. Let X {\displaystyle X} be a normally-distributed random variable with mean μ {\displaystyle \mu } and finite non-zero variance σ 2 {\displaystyle \sigma ^{2}} . Using the Hardy–Littlewood inequality, it can be proved that for 0 < δ < 1 {\displaystyle 0<\delta <1} the δ th {\displaystyle \delta ^{\text{th}}} reciprocal moment for the absolute value of X {\displaystyle X} is bounded above as The technique used to obtain the above property of the normal distribution can be applied to other unimodal distributions.
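Written out in LaTeX, the inequality in question and the chain of steps sketched in the proof above read as follows; this is the standard formulation of the Hardy–Littlewood inequality, with the notation matching the superlevel sets E_f(r), E_g(s) and the measure μ used above.

```latex
\[
  \int_{\mathbb{R}^n} f(x)\,g(x)\,dx \;\le\; \int_{\mathbb{R}^n} f^{*}(x)\,g^{*}(x)\,dx .
\]
% Proof sketch via the layer cake representation and Fubini's theorem:
\begin{align*}
  \int_{\mathbb{R}^n} f\,g\,dx
    &= \int_{\mathbb{R}^n}\!\int_{0}^{\infty}\!\int_{0}^{\infty}
       \chi_{f(x)>r}\,\chi_{g(x)>s}\,dr\,ds\,dx
     = \int_{0}^{\infty}\!\int_{0}^{\infty}
       \mu\bigl(E_{f}(r)\cap E_{g}(s)\bigr)\,dr\,ds \\
    &\le \int_{0}^{\infty}\!\int_{0}^{\infty}
       \min\bigl\{\mu(E_{f}(r)),\,\mu(E_{g}(s))\bigr\}\,dr\,ds
     = \int_{0}^{\infty}\!\int_{0}^{\infty}
       \min\bigl\{\mu(E_{f^{*}}(r)),\,\mu(E_{g^{*}}(s))\bigr\}\,dr\,ds \\
    &= \int_{0}^{\infty}\!\int_{0}^{\infty}
       \mu\bigl(E_{f^{*}}(r)\cap E_{g^{*}}(s)\bigr)\,dr\,ds
     = \int_{\mathbb{R}^n} f^{*}\,g^{*}\,dx .
\end{align*}
```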
https://en.wikipedia.org/wiki/Hardy–Littlewood_inequality
A harem is an animal group consisting of one or two males, a number of females, and their offspring. The dominant male drives off other males and maintains the unity of the group. If present, the second male is subservient to the dominant male. As juvenile males grow, they leave the group and roam as solitary individuals or join bachelor herds . Females in the group may be inter-related. The dominant male mates with the females as they become sexually active and drives off competitors, until he is displaced by another male. In some species, incoming males that achieve dominant status may commit infanticide . For the male, the primary benefit of the harem system is obtaining exclusive access to a group of mature females. The females benefit from being in a stable social group and the associated benefits of grooming, predator avoidance and cooperative defense of territory. The disadvantages for the male are the energetic costs of gaining or defending a harem which may leave him with reduced reproductive success . The females are disadvantaged if their offspring are killed during dominance battles or by incoming males. The term harem is used in zoology to describe a social organization consisting of a group of females, their offspring, and one to two males. [ 1 ] The single male, called the dominant male , may be accompanied by another young male, called a "follower" male. Females that closely associate with the dominant male are called "central females," while females who associate less frequently with the dominant male are called "peripheral females." [ 2 ] Juvenile male offspring leave the harem and live either solitarily or with other young males in groups known as bachelor herds . [ 3 ] Sexually mature female offspring may stay within their natal harem, or may join another harem. [ 4 ] The females in a harem may be, but are not exclusively, genetically related. [ 1 ] [ 5 ] [ 6 ] For instance, the females in hamadryas baboon harems are not usually genetically related because their harems are formed by "kidnapping" females from other harems and subsequent herding . [ 1 ] In contrast, gelada harems are based on kinship ties to genetically related females. [ 7 ] Multiple harems may assemble into larger groups known as "clans" or "teams". [ 8 ] Harem cohesiveness is mediated by the dominant male who fights off invading males to keep claim over the harem. [ 9 ] [ 10 ] [ 11 ] In some harem-forming species, when a dominant male vacates his harem (due to death, defection to another harem, or usurpation) the incoming male sometimes commits infanticide of the offspring. [ 12 ] Because time and resources are no longer being devoted to the offspring, infanticide often stimulates the female to return to sexual receptivity and fertility sooner than if the offspring were to survive. Furthermore, while lactating , females do not ovulate and consequently are not fertile. Infanticide therefore has the potential to increase the incoming male's reproductive success . [ 12 ] [ 13 ] Harems are a beneficial social structure for the dominant male, as it allows him access to several reproductively available females at a time. [ 10 ] Harems provide protection for the females within a particular harem, as dominant males will fiercely ward off potential invaders. [ 11 ] This level of protection may also, such as in the case of the common pheasant , reduce the energy expended by females on remaining alert to, or fleeing from, invading males. 
[ 11 ] Harems allow bonding and socialization among the female members, which can result in greater control over access to females as determined by the females' preferences. Harems also facilitate socialized behavior such as grooming and cooperative defense of territory. [ 1 ] [ 14 ] Harems can prove energetically costly for both males and females. Males spend substantial amounts of energy engaging in battles to invade a harem, or to keep hold of a harem once dominance has been established. [ 9 ] Such energy expenditure can result in reduced reproductive success such as in the case of red deer . [ 9 ] This is especially true when there are high turnover rates of dominant males, as frequent intense fighting can result in great expenditure of energy. [ 9 ] A high turnover rate of dominant males can also be energetically costly for the females as their offspring are frequently killed in harems where infanticide occurs. Harems can also negatively affect females if there is intra-harem competition among females for resources. [ 15 ] A lower-cost alternative mating strategy , useful to bachelors without a harem, is kleptogyny (from Greek klepto- "stealing" and -gyny "female"), where a male sneaks in to mate while the harem owner is distracted: in the case of red deer , when the harem stag is involved in a fight with another older stag. [ 16 ] [ 17 ] The strategy is also recorded in the elephant seal . [ 18 ] Animals that form harems include:
https://en.wikipedia.org/wiki/Harem_(zoology)
Hari Punja , OF , OBE (born 1936) is an Indo-Fijian businessman and Chairman of Hari Punja Group of Companies . Hari Punja and Sons Limited is a very diversified (and probably the largest) company in Fiji . [ 1 ] Hari Punja was born in Fiji and received his education in Fiji and Australia . He trained as a chemical engineer . Punja joined the business in 1960. He has served as a mayor of Lautoka [ 2 ] and on a number of prestigious boards such as Fiji Broadcasting Commission and Fiji Sugar Corporation . [ 2 ] He served as a Senator [ 2 ] from 1996 to 1999. Following the passage of the Media Industry Development Decree 2010 by the military regime, Punja resigned from the board of Fiji Television [ 3 ] and sold his stake in radio company Communications Fiji Limited . [ 4 ] Punja has been bestowed with many credentials and honors. Some of these include:
https://en.wikipedia.org/wiki/Hari_Punja
In mathematics, Harish-Chandra's class is a class of Lie groups used in representation theory . Harish-Chandra's class contains all semisimple connected linear Lie groups and is closed under natural operations, most importantly, the passage to Levi subgroups . This closure property is crucial for many inductive arguments in representation theory of Lie groups, whereas the classes of semisimple or connected semisimple Lie groups are not closed in this sense. A Lie group G with the Lie algebra g is said to be in Harish-Chandra's class if it satisfies the following conditions:
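The list of conditions does not survive in the text above. As a hedged reconstruction from standard references (for example Knapp's treatment of the Harish-Chandra class), rather than a quotation of the original article, the conditions usually required are the following:

```latex
% Reconstruction from memory of standard references; treat as an assumption,
% not as the article's own wording.
\begin{enumerate}
  \item the Lie algebra $\mathfrak{g}$ is reductive;
  \item $G$ has only finitely many connected components;
  \item $\operatorname{Ad}(g)$ is an inner automorphism of the complexification
        $\mathfrak{g}_{\mathbb{C}}$ for every $g \in G$, i.e.\
        $\operatorname{Ad}(G) \subseteq \operatorname{Int}(\mathfrak{g}_{\mathbb{C}})$;
  \item the analytic subgroup of $G$ with Lie algebra $[\mathfrak{g},\mathfrak{g}]$
        has finite center.
\end{enumerate}
```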
https://en.wikipedia.org/wiki/Harish-Chandra_class
In mathematics , the Harish-Chandra isomorphism , introduced by Harish-Chandra ( 1951 ), is an isomorphism of commutative rings constructed in the theory of Lie algebras . The isomorphism maps the center Z ( U ( g ) ) {\displaystyle {\mathcal {Z}}(U({\mathfrak {g}}))} of the universal enveloping algebra U ( g ) {\displaystyle U({\mathfrak {g}})} of a reductive Lie algebra g {\displaystyle {\mathfrak {g}}} to the elements S ( h ) W {\displaystyle S({\mathfrak {h}})^{W}} of the symmetric algebra S ( h ) {\displaystyle S({\mathfrak {h}})} of a Cartan subalgebra h {\displaystyle {\mathfrak {h}}} that are invariant under the Weyl group W {\displaystyle W} . Let g {\displaystyle {\mathfrak {g}}} be a semisimple Lie algebra , h {\displaystyle {\mathfrak {h}}} its Cartan subalgebra and λ , μ ∈ h ∗ {\displaystyle \lambda ,\mu \in {\mathfrak {h}}^{*}} be two elements of the weight space (where h ∗ {\displaystyle {\mathfrak {h}}^{*}} is the dual of h {\displaystyle {\mathfrak {h}}} ) and assume that a set of positive roots Φ + {\displaystyle \Phi _{+}} have been fixed. Let V λ {\displaystyle V_{\lambda }} and V μ {\displaystyle V_{\mu }} be highest weight modules with highest weights λ {\displaystyle \lambda } and μ {\displaystyle \mu } respectively. The g {\displaystyle {\mathfrak {g}}} -modules V λ {\displaystyle V_{\lambda }} and V μ {\displaystyle V_{\mu }} are representations of the universal enveloping algebra U ( g ) {\displaystyle U({\mathfrak {g}})} and its center acts on the modules by scalar multiplication (this follows from the fact that the modules are generated by a highest weight vector). So, for v ∈ V λ {\displaystyle v\in V_{\lambda }} and x ∈ Z ( U ( g ) ) {\displaystyle x\in {\mathcal {Z}}(U({\mathfrak {g}}))} , x ⋅ v := χ λ ( x ) v {\displaystyle x\cdot v:=\chi _{\lambda }(x)v} and similarly for V μ {\displaystyle V_{\mu }} , where the functions χ λ , χ μ {\displaystyle \chi _{\lambda },\,\chi _{\mu }} are homomorphisms from Z ( U ( g ) ) {\displaystyle {\mathcal {Z}}(U({\mathfrak {g}}))} to scalars called central characters . For any λ , μ ∈ h ∗ {\displaystyle \lambda ,\mu \in {\mathfrak {h}}^{*}} , the characters χ λ = χ μ {\displaystyle \chi _{\lambda }=\chi _{\mu }} if and only if λ + δ {\displaystyle \lambda +\delta } and μ + δ {\displaystyle \mu +\delta } are on the same orbit of the Weyl group of h ∗ {\displaystyle {\mathfrak {h}}^{*}} , where δ {\displaystyle \delta } is the half-sum of the positive roots , sometimes known as the Weyl vector . [ 1 ] Another closely related formulation is that the Harish-Chandra homomorphism from the center of the universal enveloping algebra Z ( U ( g ) ) {\displaystyle {\mathcal {Z}}(U({\mathfrak {g}}))} to S ( h ) W {\displaystyle S({\mathfrak {h}})^{W}} (the elements of the symmetric algebra of the Cartan subalgebra fixed by the Weyl group) is an isomorphism . More explicitly, the isomorphism can be constructed as the composition of two maps, one from Z = Z ( U ( g ) ) {\displaystyle {\mathfrak {Z}}={\mathcal {Z}}(U({\mathfrak {g}}))} to U ( h ) = S ( h ) , {\displaystyle U({\mathfrak {h}})=S({\mathfrak {h}}),} and another from S ( h ) {\displaystyle S({\mathfrak {h}})} to itself. The first is a projection γ : Z → S ( h ) {\displaystyle \gamma :{\mathfrak {Z}}\rightarrow S({\mathfrak {h}})} . 
For a choice of positive roots Φ + {\displaystyle \Phi _{+}} , defining n + = ⨁ α ∈ Φ + g α , n − = ⨁ α ∈ Φ − g α {\displaystyle n^{+}=\bigoplus _{\alpha \in \Phi _{+}}{\mathfrak {g}}_{\alpha },n^{-}=\bigoplus _{\alpha \in \Phi _{-}}{\mathfrak {g}}_{\alpha }} as the corresponding positive nilpotent subalgebra and negative nilpotent subalgebra respectively, due to the Poincaré–Birkhoff–Witt theorem there is a decomposition U ( g ) = U ( h ) ⊕ ( U ( g ) n + + n − U ( g ) ) . {\displaystyle U({\mathfrak {g}})=U({\mathfrak {h}})\oplus (U({\mathfrak {g}}){\mathfrak {n}}^{+}+{\mathfrak {n}}^{-}U({\mathfrak {g}})).} If z ∈ Z {\displaystyle z\in {\mathfrak {Z}}} is central, then in fact z ∈ U ( h ) ⊕ ( U ( g ) n + ∩ n − U ( g ) ) . {\displaystyle z\in U({\mathfrak {h}})\oplus (U({\mathfrak {g}}){\mathfrak {n}}^{+}\cap {\mathfrak {n}}^{-}U({\mathfrak {g}})).} The restriction of the projection U ( g ) → U ( h ) {\displaystyle U({\mathfrak {g}})\rightarrow U({\mathfrak {h}})} to the centre is γ : Z → S ( h ) {\displaystyle \gamma :{\mathfrak {Z}}\rightarrow S({\mathfrak {h}})} , and is a homomorphism of algebras. This is related to the central characters by χ λ ( x ) = γ ( x ) ( λ ) {\displaystyle \chi _{\lambda }(x)=\gamma (x)(\lambda )} The second map is the twist map τ : S ( h ) → S ( h ) {\displaystyle \tau :S({\mathfrak {h}})\rightarrow S({\mathfrak {h}})} . On h {\displaystyle {\mathfrak {h}}} viewed as a subspace of U ( h ) {\displaystyle U({\mathfrak {h}})} it is defined τ ( h ) = h − δ ( h ) 1 {\displaystyle \tau (h)=h-\delta (h)1} with δ {\displaystyle \delta } the Weyl vector. Then γ ~ = τ ∘ γ : Z → S ( h ) {\displaystyle {\tilde {\gamma }}=\tau \circ \gamma :{\mathfrak {Z}}\rightarrow S({\mathfrak {h}})} is the isomorphism. The reason this twist is introduced is that χ λ {\displaystyle \chi _{\lambda }} is not actually Weyl-invariant, but it can be proven that the twisted character χ ~ λ = χ λ − δ {\displaystyle {\tilde {\chi }}_{\lambda }=\chi _{\lambda -\delta }} is. The theorem has been used to obtain a simple Lie algebraic proof of Weyl's character formula for finite-dimensional irreducible representations. [ 2 ] The proof has been further simplified by Victor Kac , so that only the quadratic Casimir operator is required; there is a corresponding streamlined treatment proof of the character formula in the second edition of Humphreys (1978 , pp. 143–144). Further, it is a necessary condition for the existence of a non-zero homomorphism of some highest weight modules (a homomorphism of such modules preserves central character). A simple consequence is that for Verma modules or generalized Verma modules V λ {\displaystyle V_{\lambda }} with highest weight λ {\displaystyle \lambda } , there exist only finitely many weights μ {\displaystyle \mu } for which a non-zero homomorphism V λ → V μ {\displaystyle V_{\lambda }\rightarrow V_{\mu }} exists. For g {\displaystyle {\mathfrak {g}}} a simple Lie algebra, let r {\displaystyle r} be its rank , that is, the dimension of any Cartan subalgebra h {\displaystyle {\mathfrak {h}}} of g {\displaystyle {\mathfrak {g}}} . H. S. M. Coxeter observed that S ( h ) W {\displaystyle S({\mathfrak {h}})^{W}} is isomorphic to a polynomial algebra in r {\displaystyle r} variables (see Chevalley–Shephard–Todd theorem for a more general statement). Therefore, the center of the universal enveloping algebra of a simple Lie algebra is isomorphic to a polynomial algebra. 
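As a worked illustration (a standard sl₂ computation supplied here as an example, not drawn from the text above), take g = sl₂ with basis e, f, h, relations [h, e] = 2e, [h, f] = −2f, [e, f] = h, and the quadratic Casimir C = ef + fe + ½h², which generates the center in this case. Rewriting ef = fe + h exhibits the decomposition used above, after which the projection γ and the twist τ produce a Weyl-invariant polynomial:

```latex
% g = sl_2: positive root alpha(h) = 2, so the Weyl vector satisfies delta(h) = 1.
\[
  C \;=\; ef + fe + \tfrac{1}{2}h^{2} \;=\; 2fe + h + \tfrac{1}{2}h^{2},
  \qquad 2fe \in \mathfrak{n}^{-}U(\mathfrak{g}),
\]
\[
  \gamma(C) \;=\; h + \tfrac{1}{2}h^{2},
  \qquad
  \tilde{\gamma}(C) \;=\; \tau\bigl(\gamma(C)\bigr)
  \;=\; (h-1) + \tfrac{1}{2}(h-1)^{2}
  \;=\; \tfrac{1}{2}\bigl(h^{2}-1\bigr).
\]
% The result is invariant under the Weyl group (h -> -h), as the theorem asserts.
% Consistently with chi_lambda(x) = gamma(x)(lambda), the Casimir acts on a highest
% weight module of weight lambda with lambda(h) = m by the scalar m + m^2/2 = m(m+2)/2.
```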
The degrees of the generators of the algebra are the degrees of the fundamental invariants given in the following table. The number of the fundamental invariants of a Lie group is equal to its rank. Fundamental invariants are also related to the cohomology ring of a Lie group. In particular, if the fundamental invariants have degrees d 1 , ⋯ , d r {\displaystyle d_{1},\cdots ,d_{r}} , then the generators of the cohomology ring have degrees 2 d 1 − 1 , ⋯ , 2 d r − 1 {\displaystyle 2d_{1}-1,\cdots ,2d_{r}-1} . Due to this, the degrees of the fundamental invariants can be calculated from the Betti numbers of the Lie group and vice versa. In another direction, fundamental invariants are related to cohomology of the classifying space . The cohomology ring H ∗ ( B G , R ) {\displaystyle H^{*}(BG,\mathbb {R} )} is isomorphic to a polynomial algebra on generators with degrees 2 d 1 , ⋯ , 2 d r {\displaystyle 2d_{1},\cdots ,2d_{r}} . [ 3 ] The above result holds for reductive , and in particular semisimple Lie algebras . There is a generalization to affine Lie algebras shown by Feigin and Frenkel showing that an algebra known as the Feigin–Frenkel center is isomorphic to a W-algebra associated to the Langlands dual Lie algebra L g {\displaystyle ^{L}{\mathfrak {g}}} . [ 4 ] [ 5 ] The Feigin–Frenkel center of an affine Lie algebra g ^ {\displaystyle {\hat {\mathfrak {g}}}} is not exactly the center of the universal enveloping algebra Z ( U ( g ^ ) ) {\displaystyle {\mathcal {Z}}(U({\hat {\mathfrak {g}}}))} . They are elements S {\displaystyle S} of the vacuum affine vertex algebra at critical level k = − h ∨ {\displaystyle k=-h^{\vee }} , where h ∨ {\displaystyle h^{\vee }} is the dual Coxeter number for g {\displaystyle {\mathfrak {g}}} which are annihilated by the positive loop algebra g [ t ] {\displaystyle {\mathfrak {g}}[t]} part of g ^ {\displaystyle {\hat {\mathfrak {g}}}} , that is, Z ( g ^ ) := { S ∈ V cri ( g ) | g [ t ] S = 0 } {\displaystyle {\mathfrak {Z}}({\hat {\mathfrak {g}}}):=\{S\in V_{\text{cri}}({\mathfrak {g}})|{\mathfrak {g}}[t]S=0\}} where V cri ( g ) {\displaystyle V_{\text{cri}}({\mathfrak {g}})} is the affine vertex algebra at the critical level. Elements of this center are also known as singular vectors or Segal–Sugawara vectors . The isomorphism in this case is an isomorphism between the Feigin–Frenkel center and the W-algebra constructed associated to the Langlands dual Lie algebra by Drinfeld–Sokolov reduction : Z ( g ^ ) ≅ W ( L g ) . {\displaystyle {\mathfrak {Z}}({\hat {\mathfrak {g}}})\cong {\mathcal {W}}(^{L}{\mathfrak {g}}).} There is also a description of Z ( g ^ ) {\displaystyle {\mathfrak {Z}}({\hat {\mathfrak {g}}})} as a polynomial algebra in a finite number of countably infinite families of generators, ∂ n S i , i = 1 , ⋯ , l , n ≥ 0 {\displaystyle \partial ^{n}S_{i},i=1,\cdots ,l,n\geq 0} , where S i , i = 1 , ⋯ , l {\displaystyle S_{i},i=1,\cdots ,l} have degrees d i + 1 , i = 1 , ⋯ , l {\displaystyle d_{i}+1,i=1,\cdots ,l} and ∂ {\displaystyle \partial } is the (negative of) the natural derivative operator on the loop algebra. Notes on the Harish-Chandra isomorphism
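For a concrete instance of these degree relations (an illustrative example, not part of the table referred to above): the rank-one case g = sl₂, with compact form SU(2), has a single fundamental invariant, the quadratic Casimir, of degree d₁ = 2, and the corresponding cohomological degrees are:

```latex
% SU(2): rank r = 1, single fundamental invariant of degree d_1 = 2.
\[
  H^{*}\bigl(SU(2);\mathbb{R}\bigr) \;\cong\; \Lambda(x_{3}),
  \qquad \deg x_{3} \;=\; 2d_{1} - 1 \;=\; 3
  \quad\text{(consistent with } SU(2)\cong S^{3}),
\]
\[
  H^{*}\bigl(BSU(2);\mathbb{R}\bigr) \;\cong\; \mathbb{R}[c_{2}],
  \qquad \deg c_{2} \;=\; 2d_{1} \;=\; 4 .
\]
```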
https://en.wikipedia.org/wiki/Harish-Chandra_isomorphism
A harmful algal bloom ( HAB ), or excessive algae growth , sometimes called a red tide in marine environments, is an algal bloom that causes negative impacts to other organisms by production of natural algae-produced toxins , water deoxygenation , mechanical damage to other organisms, or by other means. HABs are sometimes defined as only those algal blooms that produce toxins, and sometimes as any algal bloom that can result in severely lower oxygen levels in natural waters, killing organisms in marine or fresh waters . [ 1 ] Blooms can last from a few days to many months. After the bloom dies, the microbes that decompose the dead algae use up more of the oxygen, generating a " dead zone " which can cause fish die-offs . When these zones cover a large area for an extended period of time, neither fish nor plants are able to survive. It is sometimes unclear what causes specific HABs as their occurrence in some locations appears to be entirely natural, [ 2 ] while in others they appear to be a result of human activities. [ 3 ] In certain locations there are links to particular drivers like nutrients, but HABs have also been occurring since before humans started to affect the environment. HABs are induced by eutrophication , which is an overabundance of nutrients in the water. The two most common nutrients are fixed nitrogen ( nitrates , ammonia , and urea ) and phosphate . [ 4 ] The excess nutrients are emitted by agriculture , industrial pollution, excessive fertilizer use in urban/suburban areas, and associated urban runoff . Higher water temperature and low circulation also contribute. [ citation needed ] HABs can cause significant harm to animals, the environment and economies. They have been increasing in size and frequency worldwide, a fact that many experts attribute to global climate change . The U.S. National Oceanic and Atmospheric Administration (NOAA) predicts more harmful blooms in the Pacific Ocean . [ 5 ] Potential remedies include chemical treatment, additional reservoirs, sensors and monitoring devices, reducing nutrient runoff, research and management as well as monitoring and reporting. [ 6 ] Terrestrial runoff, containing fertilizer, sewage and livestock wastes, transports abundant nutrients to the seawater and stimulates bloom events. Natural causes, such as river floods or upwelling of nutrients from the sea floor , often following massive storms, provide nutrients and trigger bloom events as well. Increasing coastal developments and aquaculture also contribute to the occurrence of coastal HABs. [ 7 ] [ 8 ] Effects of HABs can worsen locally due to wind driven Langmuir circulation and their biological effects . HABs from cyanobacteria (blue-green algae) can appear as a foam, scum, or mat on or just below the surface of water and can take on various colors depending on their pigments. [ 4 ] Cyanobacteria blooms in freshwater lakes or rivers may appear bright green, often with surface streaks that look like floating paint. [ 9 ] Cyanobacterial blooms are a global problem. [ 10 ] Most blooms occur in warm waters with excessive nutrients. [ 4 ] The harmful effects from such blooms are due to the toxins they produce or from using up oxygen in the water which can lead to fish die-offs. [ 11 ] Not all algal blooms produce toxins, however, with some only discoloring water, producing a smelly odor, or adding a bad taste to the water. Unfortunately, it is not possible to tell if a bloom is harmful from just appearances, since sampling and microscopic examination is required. 
[ 4 ] In many cases microscopy is not sufficient to tell the difference between toxic and non-toxic populations. In these cases, tools can be employed to measure the toxin level or to determine if the toxin-production genes are present. [ 12 ] In a narrow definition, harmful algal blooms are only those blooms that release toxins that affect other species. On the other hand, any algal bloom can cause dead zones due to low oxygen levels , and could therefore be called "harmful" in that sense. The usage of the term "harmful algal blooms" in the media and scientific literature is varied. In a broader definition, all "organisms and events are considered to be HABs if they negatively impact human health or socioeconomic interests or are detrimental to aquatic systems". [ 13 ] A harmful algal bloom is "a societal concept rather than a scientific definition". [ 13 ] A similarly broad definition of HABs was adopted by the US Environmental Protection Agency in 2008, which stated that HABs include "potentially toxic (auxotrophic, heterotrophic) species and high-biomass producers that can cause hypoxia and anoxia and indiscriminate mortalities of marine life after reaching dense concentrations, whether or not toxins are produced". [ 1 ] Harmful algal blooms in coastal areas are also often referred to as "red tides". [ 13 ] The term "red tide" is derived from blooms of any of several species of dinoflagellate , such as Karenia brevis . [ 14 ] However, the term is misleading since algal blooms can widely vary in color, and growth of algae is unrelated to the tides . Not all red tides are produced by dinoflagellates. The mixotrophic ciliate Mesodinium rubrum produces non-toxic blooms coloured deep red by chloroplasts it obtains from the algae it eats. [ 15 ] As a technical term, it is being replaced in favor of more precise terminology, including the generic term "harmful algal bloom" for harmful species, and " algal bloom " for benign species. [ citation needed ] There are three main types of phytoplankton which can form into harmful algal blooms: cyanobacteria , dinoflagellates , and diatoms . All three are made up of microscopic floating organisms which, like plants, can create their own food from sunlight by means of photosynthesis . That ability makes the majority of them an essential part of the food web for small fish and other organisms. [ 16 ] : 246 Harmful algal blooms in freshwater lakes and rivers, or at estuaries , where rivers flow into the ocean, are caused by cyanobacteria, which are commonly referred to as "blue-green algae", [ 17 ] but are in fact prokaryotic bacteria, [ 18 ] as opposed to algae which are eukaryotes . [ 19 ] Some cyanobacteria, including the widespread genus Microcystis , can produce hazardous cyanotoxins such as microcystins , [ 20 ] which are hepatotoxins that harm the liver of mammals. [ 21 ] Other types of cyanobacteria can also produce hepatotoxins, as well as neurotoxins, cytotoxins, and endotoxins. [ 22 ] Water purification plants may be unable to remove these toxins, leading to increasingly common localised advisories against drinking tap water, as happened in Toledo, Ohio in August 2014. [ 23 ] In August 2021, there were 47 lakes confirmed to have algal blooms in New York State alone. 
[ 24 ] [ 25 ] In September 2021, Spokane County's Environmental Programs issued a HAB alert for Newman Lake following tests showing potentially harmful toxicity levels for cyanobacteria, [ 26 ] while in the same month record-high levels of microcystins were reported leading to an extended 'Do Not Drink' advisory for 280 households at Clear Lake , California's second-largest freshwater lake. [ 27 ] Water conditions in Florida, meanwhile, continue to deteriorate under increasing nutrient inflows, causing severe HAB events in both freshwater and marine areas. [ 28 ] HABs also cause harm by blocking the sunlight used by plants and algae to photosynthesise, or by depleting the dissolved oxygen needed by fish and other aquatic animals, which can lead to fish die-offs. [ 11 ] When such oxygen-depleted water covers a large area for an extended period of time, it can become hypoxic or even anoxic; these areas are commonly called dead zones . These dead zones can be the result of numerous different factors ranging from natural phenomenon to deliberate human intervention, and are not just limited to large bodies of fresh water as found in the great lakes, but are also prone to bodies of salt water as well. [ 29 ] Many of the species that form harmful algae blooms will undergo a dual-stage life system. These species will alternate between a benthic resting stage and a pelagic vegetative state. The benthic resting stage corresponds to when these species are resting near the ocean floor. In this stage, the species cells are waiting for optimal conditions so that they can move towards the surface. These species will then transition from the benthic resting stage into the pelagic vegetative state - where they are more active and found near the water body surface. In the pelagic vegetative state, these cells are able to grow and multiply. It is within the pelagic vegetative state that a bloom is able to occur - as the cells rapidly reproduce and take over the upper regions of the body of water. The transition between these two life stages can have multiple effects on the algae bloom (such as rapid termination of the HAB as cells convert from the pelagic state to the benthic state). Many of the algal species that undergo this dual-stage life cycle are capable of rapid vertical migration. This migration is required for the movement from the benthic area of bodies of water to the pelagic zone. These species require immense amounts of energy as they pass through the various thermoclines , haloclines , and pycnoclines that are associated with the bodies of water in which these cells exist. [ 30 ] The other types of algae are diatoms and dinoflagellates , found primarily in marine environments, such as ocean coastlines or bays, where they can also form algal blooms. Coastal HABs are a natural phenomenon, [ 31 ] [ 32 ] although in many instances, particularly when they form close to coastlines or in estuaries, it has been shown that they are exacerbated by human-induced eutrophication and / or climate change. [ 33 ] [ 34 ] [ 35 ] [ 36 ] They can occur when warmer water, lower salinity, and nutrients reach certain levels, which then stimulates their growth. [ 37 ] Most HAB algae are dinoflagellates. [ 38 ] They are visible in water at a concentration of 1,000 algae cells/ml, while in dense blooms they can measure over 200,000/ml. [ 39 ] Diatoms produce domoic acid , another neurotoxin, which can cause seizures in higher vertebrates and birds as it concentrates up the food chain. 
Domoic acid readily accumulates in the bodies of shellfish , sardines , and anchovies , which if then eaten by sea lions , otters , cetaceans , birds or people, can affect the nervous system causing serious injury or death. [ 40 ] In the summer of 2015, the state governments closed important shellfish fisheries in Washington , Oregon , and California because of high concentrations of domoic acid in shellfish. [ 41 ] In the marine environment, single-celled, microscopic, plant-like organisms naturally occur in the well-lit surface layer of any body of water. These organisms, referred to as phytoplankton or microalgae , form the base of the food web upon which nearly all other marine organisms depend. Of the 5000+ species of marine phytoplankton that exist worldwide, about 2% are known to be harmful or toxic. [ 42 ] Blooms of harmful algae can have large and varied impacts on marine ecosystems, depending on the species involved, the environment where they are found, and the mechanism by which they exert negative effects. [ 43 ] It is sometimes unclear what causes specific HABs as their occurrence in some locations appears to be entirely natural, [ 2 ] while in others they appear to be a result of human activities. [ 3 ] Furthermore, there are many different species of algae that can form HABs, each with different environmental requirements for optimal growth. The frequency and severity of HABs in some parts of the world have been linked to increased nutrient loading from human activities. In other areas, HABs are a predictable seasonal occurrence resulting from coastal upwelling, a natural result of the movement of certain ocean currents. [ 45 ] The growth of marine phytoplankton (both non-toxic and toxic) is generally limited by the availability of nitrates and phosphates, which can be abundant in coastal upwelling zones as well as in agricultural run-off. The type of nitrates and phosphates available in the system are also a factor, since phytoplankton can grow at different rates depending on the relative abundance of these substances (e.g. ammonia , urea , nitrate ion). [ 46 ] A variety of other nutrient sources can also play an important role in affecting algal bloom formation, including iron, silica or carbon. Coastal water pollution produced by humans (including iron fertilization) and systematic increase in sea water temperature have also been suggested as possible contributing factors in HABs. [ 46 ] Among the causes of algal blooms are: [ 47 ] Nutrients enter freshwater or marine environments as surface runoff from agricultural pollution and urban runoff from fertilized lawns, golf courses and other landscaped properties; and from sewage treatment plants that lack nutrient control systems. [ 52 ] Additional nutrients are introduced from atmospheric pollution. [ 53 ] Coastal areas worldwide, especially wetlands and estuaries, coral reefs and swamps, are prone to being overloaded with those nutrients. [ 53 ] Most of the large cities along the Mediterranean Sea , for example, discharge all of their sewage into the sea untreated. [ 53 ] The same is true for most coastal developing countries, while in parts of the developing world, as much as 70% of wastewater from large cities may re-enter water systems without being treated. [ 54 ] Residual nutrients in treated wastewater can also accumulate in downstream source water areas [ 55 ] and fuel eutrophication, which leads progressively to a cyanobacteria-dominated system characterized by seasonal HABs. 
As more wastewater treatment infrastructure is built, more treated wastewater is returned to the natural water system, leading to a significant increase in these residual nutrients. [ citation needed ] Residual nutrients combine with nutrients from other sources to increase the sediment nutrient stockpile that is the driving force behind phase shifts to entrenched eutrophic conditions. [ citation needed ] This contributes to the ongoing degradation of dams, lakes, rivers, and reservoirs - source water areas that are starting to become known as ecological infrastructure, [ 56 ] placing increasing pressure on wastewater treatment works and water purification plants. Such pressures, in turn, intensify seasonal HABs. [ citation needed ] Climate change contributes to warmer waters, which make conditions more favorable for algae growth in more regions and farther north. [ 57 ] [ 48 ] In general, still, warm, shallow water, combined with high-nutrient conditions in lakes or rivers, increases the risk of harmful algal blooms. [ 50 ] Warming of summer surface temperatures of lakes, which rose by 0.34 °C per decade between 1985 and 2009 due to global warming, also will likely increase algal blooming by 20% over the next century. [ 58 ] Although the drivers of harmful algal blooms are poorly understood, they do appear to have expanded in range and increased in frequency in coastal areas since the 1980s. [ 59 ] : 16 This is the result of human-induced factors such as increased nutrient inputs ( nutrient pollution ) and climate change (in particular the warming of water temperatures). [ 59 ] : 16 The parameters that affect the formation of HABs are ocean warming , marine heatwaves, oxygen loss , eutrophication and water pollution . [ 60 ] : 582 HABs contain dense concentrations of organisms and appear as discolored water, often reddish-brown in color. They are a natural phenomenon, but the exact cause or combination of factors that result in a HAB event is not necessarily known. [ 61 ] However, three key natural factors are thought to play an important role in a bloom - salinity, temperature, and wind. HABs cause economic harm, so outbreaks are carefully monitored. For example, the Florida Fish and Wildlife Conservation Commission provides an up-to-date status report on HABs in Florida. [ 62 ] The Texas Parks and Wildlife Department also provides a status report. [ 63 ] While no particular cause of HABs has been found, many different factors can contribute to their presence. These factors can include water pollution , which originates from sources such as human sewage and agricultural runoff . [ 64 ] The occurrence of HABs in some locations appears to be entirely natural (algal blooms are a seasonal occurrence resulting from coastal upwelling, a natural result of the movement of certain ocean currents) [ 65 ] [ 66 ] while in others they appear to be a result of increased nutrient pollution from human activities. [ 67 ] The growth of marine phytoplankton is generally limited by the availability of nitrates and phosphates , which can be abundant in agricultural run-off as well as coastal upwelling zones. Other factors such as iron-rich dust influx from large desert areas such as the Sahara Desert are thought to play a major role in causing HAB events. [ 68 ] Some algal blooms on the Pacific Coast have also been linked to occurrences of large-scale climatic oscillations such as El Niño events. 
[ citation needed ] HABs are also linked to heavy rainfall. [ 70 ] Although HABs in the Gulf of Mexico were witnessed in the early 1500s by explorer Cabeza de Vaca , [ 71 ] it is unclear what initiates these blooms and how large a role anthropogenic and natural factors play in their development. [ citation needed ] The number of reported harmful algal blooms (cyanobacterial) has been increasing throughout the world. [ 72 ] It is unclear whether the apparent increase in frequency and severity of HABs in various parts of the world is in fact a real increase or is due to increased observation effort and advances in species identification technology. [ 73 ] [ 74 ] In 2008, the U.S. government prepared a report on the problem, "Harmful Algal Bloom Management and Response: Assessment and Plan". [ 75 ] The report recognized the seriousness of the problem: It is widely believed that the frequency and geographic distribution of HABs have been increasing worldwide. All U.S. coastal states have experienced HABs over the last decade, and new species have emerged in some locations that were not previously known to cause problems. HAB frequency is also thought to be increasing in freshwater systems. [ 75 ] Researchers have reported the growth of HABs in Europe, Africa and Australia. Those have included blooms on some of the African Great Lakes , such as Lake Victoria , the second largest freshwater lake in the world. [ 76 ] India has been reporting an increase in the number of blooms each year. [ 77 ] In 1977 Hong Kong reported its first coastal HAB. By 1987 it was recording an average of 35 per year. [ 78 ] Additionally, there have been reports of harmful algal blooms throughout popular Canadian lakes such as Beaver Lake and Quamichan Lake. These blooms were responsible for the deaths of a few animals and led to swimming advisories. [ 79 ] Global warming and pollution are causing algal blooms to form in places previously considered "impossible" or rare for them to exist, such as under the ice sheets in the Arctic , [ 80 ] in Antarctica , [ 81 ] the Himalayan Mountains , [ 82 ] the Rocky Mountains , [ 83 ] and in the Sierra Nevada Mountains . [ 84 ] In the U.S., every coastal state has had harmful algal blooms over the last decade and new species have emerged in locations where they were not previously known to cause problems. Inland, major rivers have seen an increase in the size and frequency of blooms. In 2015 the Ohio River had a bloom which stretched an "unprecedented" 650 miles (1,050 km) into adjoining states and tested positive for toxins, which created drinking water and recreation problems. [ 85 ] A portion of Utah's Jordan River was closed due to a toxic algal bloom in 2016. [ 86 ] Off the west coast of South Africa , HABs caused by Alexandrium catenella occur every spring. These blooms of organisms cause severe disruptions in fisheries of these waters as the toxins in the phytoplankton cause filter-feeding shellfish in affected waters to become poisonous for human consumption. [ 87 ] As algal blooms grow, they deplete the oxygen in the water and block sunlight from reaching fish and plants. Such blooms can last from a few days to many months. 
[ 86 ] With less light, plants beneath the bloom can die and fish can starve. Furthermore, the dense population of a bloom reduces oxygen saturation during the night by respiration. And when the algae eventually die off, the microbes which decompose the dead algae use up even more oxygen, which in turn causes more fish to die or leave the area. When oxygen continues to be depleted by blooms it can lead to hypoxic dead zones , where neither fish nor plants are able to survive. [ 88 ] These dead zones in the case of the Chesapeake Bay, where they are a normal occurrence, are also suspected of being a major source of methane . [ 89 ] Scientists have found that HABs were a prominent feature of previous mass extinction events , including the End-Permian Extinction . [ 90 ] Tests have shown some toxins near blooms can be in the air and thereby be inhaled, which could affect health. [ 91 ] This occurs due to the aerosolization of the toxins which are then transported by the wind. Not all algal blooms release harmful toxins into the atmosphere, because it is dependent on both the species and environmental conditions. Some species of microalgae/bacteria release toxins through cell lysis induced by physical stresses such as wave action. Other species release toxins through lysis induced by cellular processes, viral stresses, or chemical stresses in the water column. The toxins are mainly aerosolized into spray aerosols caused by wave action. The surface movement of water creates bubbles in the water, and when these bubbles reach the surface they pop. When the bubbles pop they eject water droplets which contain lysed cell material from harmful algal blooms (HAB's). Once ejected the aerosolized toxins are transported by the wind. [ 92 ] A 2017 study found that sea spray aerosol can be transported up to 1000 km by the wind. [ 93 ] It is important to stay informed of local policies and response measures that are in place for HAB's even if you do not live directly on the coast because of the distances that these toxins can be transported. Eating fish or shellfish from lakes with a bloom nearby is not recommended. [ 9 ] Potent toxins are accumulated in shellfish that feed on the algae. If the shellfish are consumed, various types of poisoning may result. These include amnesic shellfish poisoning (ASP), diarrhetic shellfish poisoning , neurotoxic shellfish poisoning , and paralytic shellfish poisoning . [ 94 ] A 2002 study has shown that algal toxins may be the cause for as many as 60,000 intoxication cases in the world each year. [ 94 ] In 1987 a new illness emerged: amnesic shellfish poisoning (ASP). People who had eaten mussels from Prince Edward Island were found to have ASP. The illness was caused by domoic acid , produced by a diatom found in the area where the mussels were cultivated. [ 95 ] A 2013 study found that toxic paralytic shellfish poisoning in the Philippines during HABs has caused at least 120 deaths over a few decades. [ 96 ] After a 2014 HAB incident in Monterey Bay , California, health officials warned people not to eat certain parts of anchovy, sardines, or crab caught in the bay. [ 97 ] In 2015 most shellfish fisheries in Washington, Oregon and California were shut down because of high concentrations of toxic domoic acid in shellfish. [ 41 ] People have been warned that inhaling vapors from waves or wind during a HAB event may cause asthma attacks or lead to other respiratory ailments. 
[ 98 ] In 2018 agricultural officials in Utah worried that even crops could become contaminated if irrigated with toxic water, although they admit that they can't measure contamination accurately because of so many variables in farming. They issued warnings to residents, however, out of caution. [ 99 ] Persons are generally warned not to enter or drink water from algal blooms, or let their pets swim in the water since many pets have died from algal blooms. [ 50 ] In at least one case, people began getting sick before warnings were issued. [ 100 ] There is no treatment available for animals, including livestock cattle, if they drink from algal blooms where such toxins are present. Pets are advised to be kept away from algal blooms to avoid contact. [ 101 ] In some locations visitors have been warned not to even touch the water. [ 9 ] Boaters have been told that toxins in the water can be inhaled from the spray from wind or waves. [ 17 ] [ 9 ] Ocean beaches, [ 102 ] lakes [ 21 ] and rivers have been closed due to algal blooms. [ 86 ] After a dog died in 2015 from swimming in a bloom in California's Russian River , officials likewise posted warnings for parts of the river. [ 103 ] Boiling the water at home before drinking does not remove the toxins. [ 9 ] In August 2014 the city of Toledo, Ohio , advised its 500,000 residents to not drink tap water as the high toxin level from an algal bloom in western Lake Erie had affected their water treatment plant's ability to treat the water to a safe level. [ 23 ] The emergency required using bottled water for all normal uses except showering, which seriously affected public services and commercial businesses. The bloom returned in 2015 [ 104 ] and was forecast again for the summer of 2016. [ 105 ] In 2004, a bloom in Kisumu Bay, which is the drinking water source for 500,000 people in Kisumu , Kenya , suffered from similar water contamination. [ 76 ] In China, water was cut off to residents in 2007 due to an algal bloom in its third largest lake, which forced 2 million people to use bottled water. [ 106 ] [ 107 ] A smaller water shut-down in China affected 15,000 residents two years later at a different location. [ 108 ] Australia in 2016 also had to cut off water to farmers. [ 109 ] Alan Steinman of Grand Valley State University has explained that among the major causes for the algal blooms in general, and Lake Erie specifically, is because blue-green algae thrive with high nutrients, along with warm and calm water. Lake Erie is more prone to blooms because it has a high nutrient level and is shallow, which causes it to warm up more quickly during the summer. [ 110 ] Symptoms from drinking toxic water can show up within a few hours after exposure. They can include nausea, vomiting, and diarrhea, or trigger headaches and gastrointestinal problems. [ 21 ] Although rare, liver toxicity can cause death. [ 21 ] Those symptoms can then lead to dehydration, another major concern. In high concentrations, the toxins in the algal waters when simply touched can cause skin rashes, irritate the eyes, nose, mouth or throat. [ 9 ] Those with suspected symptoms are told to call a doctor if symptoms persist or they can't hold down fluids after 24 hours. [ citation needed ] In studies at the population level bloom coverage has been significantly related to the risk of non-alcoholic liver disease death. 
[ 111 ] Toxic algae blooms are thought to play a role in humans developing degenerative neurological disorders such as amyotrophic lateral sclerosis and Parkinson's disease. [ 112 ] Less than one percent of algal blooms produce hazardous toxins, such as microcystins. [ 20 ] Although blue-green or other algae do not usually pose a direct threat to health, the toxins (poisons) they produce are considered dangerous to humans, land animals, sea mammals, birds [ 86 ] and fish when ingested. [ 20 ] Many of these toxins are neurotoxins that destroy nerve tissue; they can affect the nervous system, brain, and liver, and can lead to death. [ 21 ] Humans are affected by HAB species through ingestion of improperly harvested shellfish, inhalation of aerosolized brevetoxins (i.e. PbTx or Ptychodiscus toxins) and, in some cases, skin contact. [ 113 ] The brevetoxins bind to voltage-gated sodium channels, important structures of cell membranes. Binding results in persistent activation of nerve cells, which interferes with neural transmission and leads to health problems. These toxins are created within the unicellular organism, or as a metabolic product. [ 114 ] The two major types of brevetoxin compounds have similar but distinct backbone structures. PbTx-2 is the primary intracellular brevetoxin produced by K. brevis blooms; over time, however, it can be converted to PbTx-3 through metabolic changes. [ 114 ] [ 115 ] In the U.S., the seafood consumed by humans is tested regularly for toxins by the USDA to ensure safe consumption, and such testing is common in other nations. However, improper harvesting of shellfish can cause paralytic shellfish poisoning and neurotoxic shellfish poisoning in humans. [ 116 ] [ 117 ] Symptoms include drowsiness, diarrhea, nausea, loss of motor control, tingling, numbing or aching of extremities, incoherence, and respiratory paralysis. [ 118 ] Reports of skin irritation after swimming in the ocean during a HAB are common. [ 119 ] When the HAB cells rupture, they release extracellular brevetoxins into the environment. Some of those stay in the ocean, while other particles become aerosolized. During onshore winds, brevetoxins can become aerosolized by bubble-mediated transport, causing respiratory irritation, bronchoconstriction, coughing, and wheezing, among other symptoms. [ 119 ] Avoiding contact with wind-blown aerosolized toxins is recommended. Some individuals report a decrease in respiratory function after only one hour of exposure to a K. brevis red-tide beach, and these symptoms may last for days. [ 120 ] People with severe or persistent respiratory conditions (such as chronic lung disease or asthma) may experience stronger adverse reactions. [ medical citation needed ] The National Oceanic and Atmospheric Administration's National Ocean Service provides a public conditions report identifying possible respiratory irritation impacts in areas affected by HABs. [ 121 ] The hazards that accompany harmful algal blooms have hindered visitors' enjoyment of beaches and lakes in U.S. states such as Florida, [ 102 ] California, [ 9 ] Vermont, [ 122 ] and Utah. [ 86 ] People hoping to enjoy their vacations or days off have been kept away, to the detriment of local economies. Lakes and rivers in North Dakota, Minnesota, Utah, California and Ohio have had signs posted warning about potential health risks.
[ 123 ] Similar blooms have become more common in Europe, with France among the countries reporting them. In the summer of 2009, beaches in northern Brittany became covered by tonnes of potentially lethal rotting green algae, and a horse being ridden along the beach collapsed and died from the fumes given off by the rotting algae. [ 124 ] The economic damage resulting from lost business has become a serious concern. According to one report in 2016, the four main economic impacts from harmful algal blooms are damage to human health, damage to fisheries, damage to tourism and recreation, and the cost of monitoring and managing areas where blooms appear. [ 125 ] The EPA estimates that algal blooms affect 65 percent of the country's major estuaries, at an annual cost of $2.2 billion. [ 99 ] In the U.S. there are an estimated 166 coastal dead zones. [ 99 ] Because data collection has been more difficult and limited outside the U.S., most estimates as of 2016 have been primarily for the U.S. [ 125 ] In port cities in the Shandong Province of eastern China, residents are no longer surprised when massive algal blooms arrive each year and inundate beaches. Prior to the Beijing Olympics in 2008, over 10,000 people worked to clear 20,000 tons of dead algae from beaches. [ 126 ] In 2013 another bloom in China, thought to be its largest ever, [ 127 ] covered an area of 7,500 square miles, [ 126 ] and was followed by another in 2015 which blanketed an even greater 13,500 square miles. The blooms in China are thought to be caused by pollution from untreated agricultural and industrial discharges into rivers leading to the ocean. [ 128 ] As early as 1976, a short-term, relatively small dead zone off the coasts of New York and New Jersey cost commercial and recreational fisheries over $500 million. [ 129 ] In 1998 a HAB in Hong Kong killed over $10 million in high-value fish. [ 78 ] In 2009, the economic impact for the state of Washington's coastal counties dependent on the fishing industry was estimated to be $22 million. [ 130 ] In 2016, the U.S. seafood industry expected that future lost revenue could amount to $900 million annually. [ 125 ] NOAA has provided a few cost estimates for various blooms over the past few years: [ 131 ] $10.3 million in 2011 due to a HAB at Texas oyster landings; $2.4 million in lost income to tribal commerce from 2015 fishery closures in the Pacific Northwest; and $40 million from Washington state's loss of tourism from the same fishery closure. Along with damage to businesses, the toll from human sickness results in lost wages and damaged health. The costs of medical treatment, of investigation by health agencies through water sampling and testing, and of posting warning signs at affected locations are also significant. [ 132 ] Closures applied to areas where blooms occur have a strongly negative impact on the fishing industry; added to this are the high fish mortality that follows, price increases caused by the shortage of available fish, and a drop in demand for seafood due to fear of contamination by toxins. [ 133 ] Together these cause large economic losses for the industry, and those costs are expected to rise. In June 2015, for instance, the largest known toxic HAB forced the shutdown of the west coast shellfish industry, the first time that had ever happened. One Seattle NOAA expert commented, "This is unprecedented in terms of the extent and magnitude of this harmful algal bloom and the warm water conditions we're seeing offshore...."
[ 134 ] The bloom covered a range from Santa Barbara, California, northward to Alaska. [ 135 ] The negative impact on fish can be even more severe when they are confined to pens, as they are in fish farms. In 2007 a fish farm in British Columbia lost 260 tons of salmon as a result of blooms, [ 136 ] and in 2016 a farm in Chile lost 23 million salmon after an algal bloom. [ 137 ] The presence of harmful algal blooms can lead to hypoxia or anoxia in a body of water, and the resulting oxygen depletion can create a dead zone, an area that has become unsuitable for most organisms to survive in. HABs contribute to dead zones because, when the bloom dies, the cells sink to the bottom of the body of water and are decomposed by bacteria, a process that consumes much of the remaining oxygen and leaves minimal oxygen available to other marine organisms. Once oxygen levels fall low enough the water becomes hypoxic, and these low oxygen levels force mobile marine organisms to seek out locations better suited to their survival. [ 138 ] Blooms can harm the environment even without producing toxins, by depleting oxygen from the water while growing and while decaying after they die. Blooms can also block sunlight from reaching organisms living beneath them. A record-breaking number and size of blooms have formed along the Pacific coast, in Lake Erie, in the Chesapeake Bay and in the Gulf of Mexico, where a number of dead zones were created as a result. [ 139 ] In the 1960s the number of dead zones worldwide was 49; the number rose to over 400 by 2008. [ 129 ] Among the largest dead zones are those in northern Europe's Baltic Sea and in the Gulf of Mexico, the latter of which affects a $2.8 billion U.S. fishing industry. [ 76 ] Dead zones rarely recover and usually grow in size. [ 129 ] One of the few dead zones ever to recover was in the Black Sea, which returned to normal fairly quickly after the collapse of the Soviet Union in the 1990s because of the resulting reduction in fertilizer use. [ 129 ] Massive fish die-offs have been caused by HABs. [ 140 ] In 2016, 23 million salmon being farmed in Chile died from a toxic algal bloom. [ 141 ] To dispose of the dead fish, the ones fit for consumption were made into fishmeal and the rest were dumped 60 miles offshore to avoid risks to human health. [ 141 ] The economic cost of that die-off is estimated to have been $800 million. [ 141 ] Environmental expert Lester Brown has written that the farming of salmon and shrimp in offshore ponds concentrates waste, which contributes to eutrophication and the creation of dead zones. [ 142 ] Other countries have reported similar impacts, with cities such as Rio de Janeiro, Brazil, seeing major fish die-offs from blooms become a common occurrence. [ 143 ] In early 2015, Rio collected an estimated 50 tons of dead fish from the lagoon where water events in the 2016 Olympics were planned to take place. [ 143 ] Monterey Bay has suffered from harmful algal blooms, most recently in 2015: "Periodic blooms of toxin-producing Pseudo-nitzschia diatoms have been documented for over 25 years in Monterey Bay and elsewhere along the U.S. west coast. During large blooms, the toxin accumulates in shellfish and small fish such as anchovies and sardines that feed on algae, forcing the closure of some fisheries and poisoning marine mammals and birds that feed on contaminated fish."
[ 144 ] Similar fish die-offs from toxic algae or lack of oxygen have been seen in Russia, [ 145 ] Colombia, [ 146 ] Vietnam, [ 147 ] China, [ 148 ] Canada, [ 149 ] Turkey, [ 150 ] Indonesia, [ 151 ] and France. [ 152 ] Land animals, including livestock and pets, have been affected. Dogs have died from the toxins after swimming in algal blooms. [ 153 ] Warnings have come from government agencies in the state of Ohio, which noted that many dog and livestock deaths have resulted from HAB exposure in the U.S. and other countries. They also noted in a 2003 report that harmful algal blooms had become more frequent and longer-lasting over the previous 30 years. [ 154 ] In 50 countries and 27 states that year there were reports of human and animal illnesses linked to algal toxins. [ 154 ] In Australia, the department of agriculture warned farmers that the toxins from a HAB had the "potential to kill large numbers of livestock very quickly." [ 155 ] Marine mammals have also been seriously harmed, as over 50 percent of unusual marine mammal deaths are caused by harmful algal blooms. [ 156 ] In 1999, over 65 bottlenose dolphins died during a coastal HAB in Florida. [ 157 ] In 2013 a HAB in southwest Florida killed a record number of manatees. [ 158 ] Whales have also died in large numbers. During the period from 2005 to 2014, Argentina reported an average of 65 baby whale deaths per year, which experts have linked to algal blooms; a whale expert there expects the whale population to be reduced significantly. [ 159 ] In 2003, off Cape Cod in the North Atlantic, at least 12 humpback whales died from toxic algae from a HAB. [ 160 ] In 2015 Alaska and British Columbia reported that many humpback whales had likely died from HAB toxins, with 30 having washed ashore in Alaska. "Our leading theory at this point is that the harmful algal bloom has contributed to the deaths," said a NOAA spokesperson. [ 161 ] [ 162 ] Birds have died after eating dead fish contaminated with toxic algae. Rotting and decaying fish are eaten by birds such as pelicans, seagulls, and cormorants, and possibly by marine or land mammals, which then become poisoned. [ 140 ] Examination of dead birds' nervous systems showed that they had failed from the toxins' effects. [ 97 ] On the Oregon and Washington coasts, a thousand scoters, or sea ducks, were also killed in 2009. "This is huge," said a university professor. [ 163 ] As dying or dead birds washed up on the shore, wildlife agencies went into "an emergency crisis mode." [ 163 ] It has even been suggested that harmful algal blooms are responsible for the deaths of animals found in fossil troves, [ 164 ] such as the dozens of cetacean skeletons found at Cerro Ballena. [ 165 ] Harmful algal blooms in marine ecosystems have been observed to cause adverse effects in a wide variety of aquatic organisms, most notably marine mammals, sea turtles, seabirds and finfish. The impacts of HAB toxins on these groups can include harmful changes to their developmental, immunological, neurological, or reproductive capacities. The most conspicuous effects of HABs on marine wildlife are large-scale mortality events associated with toxin-producing blooms. For example, a mass mortality event of 107 bottlenose dolphins occurred along the Florida panhandle in the spring of 2004 due to ingestion of menhaden contaminated with high levels of brevetoxin.
[ 166 ] Manatee mortalities have also been attributed to brevetoxin, but unlike with dolphins, the main toxin vector was an endemic seagrass species ( Thalassia testudinum ) in which high concentrations of brevetoxins were detected and which was subsequently found to be a main component of the stomach contents of manatees. [ 166 ] Additional marine mammal species, like the highly endangered North Atlantic right whale, have been exposed to neurotoxins by preying on highly contaminated zooplankton. [ 167 ] Because the summertime habitat of this species overlaps with seasonal blooms of the toxic dinoflagellate Alexandrium fundyense, which copepods graze upon, foraging right whales ingest large concentrations of these contaminated copepods. Ingestion of such contaminated prey can affect respiratory capabilities, feeding behavior, and ultimately the reproductive condition of the population. [ 167 ] Immune system responses have been affected by brevetoxin exposure in another critically endangered species, the loggerhead sea turtle. Brevetoxin exposure, through inhalation of aerosolized toxins and ingestion of contaminated prey, can produce clinical signs of increased lethargy and muscle weakness in loggerhead sea turtles, causing these animals to wash ashore in a depressed metabolic state; blood analysis of stranded animals shows heightened immune system responses. [ 168 ] HABs occur naturally off coasts all over the world. Marine dinoflagellates produce ichthyotoxins. Where HABs occur, dead fish wash up on shore for up to two weeks after a HAB has passed through the area. In addition to killing fish, the toxic algae contaminate shellfish. Some mollusks are not susceptible to the toxin and store it in their fatty tissues. By consuming the organisms responsible for HABs, shellfish can accumulate and retain saxitoxin produced by these organisms. Saxitoxin blocks sodium channels, and ingestion can cause paralysis within 30 minutes. [ 117 ] In addition to directly harming marine animals and causing vegetation loss, harmful algal blooms can also lead to ocean acidification, which occurs when the amount of carbon dioxide in the water is increased to unnatural levels. Ocean acidification slows the growth of certain species of fish and shellfish, and even prevents shell formation in certain species of mollusks. These subtle, small changes can add up over time to cause chain reactions and devastating effects on whole marine ecosystems. [ 170 ] Other animals that eat exposed shellfish are susceptible to the neurotoxin, which may lead to neurotoxic shellfish poisoning [ 116 ] and sometimes even death. Most mollusks and clams are filter feeders, which results in them accumulating higher concentrations of the toxin than they would from simply drinking the water. [ 171 ] Scaup, for example, are diving ducks whose diet consists mainly of mollusks. When scaup eat filter-feeding shellfish that have accumulated high levels of HAB toxin, their population becomes a prime target for poisoning. However, even birds that do not eat mollusks can be affected by simply eating dead fish on the beach or drinking the water. [ 172 ] The toxins released by the blooms can kill marine animals including dolphins, sea turtles, birds, and manatees. [ 173 ] [ 174 ] The Florida manatee, a subspecies of the West Indian manatee, is a species often impacted by red tide blooms. Florida manatees are often exposed to the poisonous red-tide toxins either by consumption or by inhalation.
There are many small barnacles, crustaceans, and other epiphytes that grow on the blades of seagrass. These tiny creatures filter particles from the water around them and use these particles as their main food source. During red tide blooms, they also filter the toxic red tide cells from the water, which then become concentrated inside them. Although these toxins do not harm the epiphytes, they are extremely poisonous to marine creatures that consume (or accidentally consume) the exposed epiphytes, such as manatees. When manatees unknowingly consume exposed epiphytes while grazing on seagrass, the toxins are released from the epiphytes and ingested by the manatees. In addition to consumption, manatees may also be exposed to airborne brevetoxins released from harmful red-tide cells when passing through algal blooms. [ 175 ] Manatees also mount an immune response to HABs and their toxins that can make them even more susceptible to other stressors; because of this susceptibility, manatees can die from either the immediate effects or the after-effects of a HAB. [ 176 ] In addition to causing manatee mortalities, red-tide exposure also causes severe sublethal health problems among Florida manatee populations. Studies have shown that red-tide exposure among free-ranging Florida manatees negatively affects immune function by causing increased inflammation, reduced lymphocyte proliferation responses, and oxidative stress. [ 177 ] In one experiment, fish such as Atlantic herring, American pollock, winter flounder, Atlantic salmon, and cod were dosed orally with these toxins; within minutes the subjects began to lose equilibrium and swim in an irregular, jerking pattern, followed by paralysis and shallow, arrhythmic breathing, and eventually death after about an hour. [ 178 ] HABs have also been shown to have a negative effect on the memory functions of sea lions. [ 179 ] Since many algal blooms are caused by a major influx of nutrient-rich runoff into a water body, programs to treat wastewater, reduce the overuse of fertilizers in agriculture, and reduce the bulk flow of runoff can be effective in reducing severe algal blooms at river mouths, in estuaries, and in the ocean directly in front of the river's mouth. The nitrates and phosphorus in fertilizers cause algal blooms when they run off into lakes and rivers after heavy rains. Modifications in farming methods have been suggested, such as applying fertilizer only in a targeted way, at the appropriate time and exactly where it can do the most good for crops, to reduce potential runoff. [ 180 ] One method used successfully is drip irrigation, which, instead of dispersing fertilizer widely across fields, delivers water and nutrients to plant roots through a network of tubes and emitters, leaving little fertilizer to be washed away. [ 181 ] Drip irrigation also prevents the formation of algal blooms in reservoirs for drinking water while saving up to 50% of the water typically used by agriculture. [ 182 ] [ 183 ] There have also been proposals to create buffer zones of foliage and wetlands to help filter out phosphorus before it reaches the water. [ 180 ] Other experts have suggested using conservation tillage, changing crop rotations, and restoring wetlands. [ 180 ] It is possible for some dead zones to shrink within a year under proper management. [ 184 ] There have been a few success stories in controlling these chemicals.
After Norway's lobster fishery collapsed in 1986 due to low oxygen levels, for instance, the government in neighboring Denmark took action and reduced phosphorus output by 80 percent, which brought oxygen levels closer to normal. [ 184 ] Similarly, dead zones in the Black Sea and along the Danube River recovered after phosphorus applications by farmers were reduced by 60%. [ 184 ] Nutrients can be permanently removed from wetlands by harvesting wetland plants, reducing the nutrient influx into surrounding bodies of water. [ 185 ] [ 186 ] Research is ongoing to determine the efficacy of floating mats of cattails in removing nutrients from surface waters too deep to sustain the growth of wetland plants. [ 187 ] In the U.S., surface runoff is the largest source of nutrients added to rivers and lakes, but it is mostly unregulated under the federal Clean Water Act. [ 188 ] : 10 [ 189 ] [ 190 ] Locally developed initiatives to reduce nutrient pollution are underway in various areas of the country, such as the Great Lakes region and the Chesapeake Bay. [ 191 ] [ 192 ] To help reduce algal blooms in Lake Erie, the State of Ohio presented a plan in 2016 to reduce phosphorus runoff. [ 193 ] Although a number of algaecides have been effective in killing algae, they have been used mostly in small bodies of water. For large algal blooms, however, adding algaecides such as silver nitrate or copper sulfate can have worse effects, such as killing fish outright and harming other wildlife. [ 194 ] Cyanobacteria can also develop resistance to copper-containing algaecides, requiring a larger quantity of the chemical to be effective for HAB management and introducing a greater risk to other species in the region. [ 195 ] The negative effects can therefore be worse than letting the algae die off naturally. [ 194 ] [ 196 ] In 2019, Chippewa Lake in Northeast Ohio became the first lake in the U.S. to successfully test a new chemical treatment; the formulation killed all of the toxic algae in the lake within a single day and had already been used in China, South Africa and Israel. [ 198 ] In February 2020, Roodeplaat Dam in Gauteng Province, South Africa, was treated with a new algicide formulation against a severe bloom of Microcystis sp. The granular product floats on the water surface and slowly releases its active ingredient, sodium percarbonate, which in turn releases hydrogen peroxide (H₂O₂). Consequently, the effective concentrations are limited vertically to the surface of the water and spatially to areas where cyanobacteria are abundant. This provides aquatic organisms with a "safe haven" in untreated areas and avoids the adverse effects associated with the use of standard algicides. [ 199 ] Bioactive compounds isolated from terrestrial and aquatic plants, particularly seaweeds, have shown promise as a more environmentally friendly control for HABs. Molecules found in seaweeds such as Corallina, Sargassum, and Saccharina japonica have been shown to inhibit some bloom-forming microalgae. In addition to their anti-microalgal effects, the bioactive molecules found in these seaweeds also have antibacterial, antifungal, and antioxidant properties. [ 195 ] Other chemicals are being tested for their efficacy in removing cyanobacteria during blooms.
Modified clays, such as aluminum chloride modified clay (AC-MC), aluminum sulfate modified clay (AS-MC) and polyaluminum chloride modified clay (PAC-MC), have shown positive results in vitro for the removal of Aureococcus by trapping the microalgae in the clay sediment, removing them from the top layer of water where harmful blooms can occur. [ 197 ] Many efforts have been made to control HABs so that the harm they cause can be kept to a minimum, and studies of the use of clay suggest that it may be an effective way to reduce their negative effects. The addition of aluminum chloride, aluminum sulfate, or polyaluminum chloride to clay modifies the clay surface and increases its efficiency in removing HABs from a body of water. The aluminum-containing compounds give the clay particles a positive charge, and the particles then flocculate with the harmful algal cells; the algal cells group together, settling out as sediment instead of remaining in suspension. This flocculation limits the bloom's growth and reduces the impact the bloom can have on an area. [ 200 ] In the Netherlands, successful algae and phosphate removal from surface water has been achieved by pumping affected water through a hydrodynamic separator. The treated water is then free of algae and contains a significantly lower amount of phosphate, since the removed algal cells contain a large amount of phosphate; the treated water also has lower turbidity. Future projects will study the effects on ecology and aquatic life, as it is expected that plant life will be restored and that a reduction in bottom-dwelling fish will further reduce the turbidity of the cleaned water. The removed algae and phosphate may be used not as waste but as feedstock for biodigesters. Other experts have proposed building reservoirs to prevent the movement of algae downstream. However, that can lead to the growth of algae within the reservoir, which can act as a sediment trap with a resulting buildup of nutrients. [ 194 ] Some researchers have found that intensive blooms in reservoirs were the primary source of toxic algae observed downstream, but the downstream movement of algae has so far been less studied, although it is considered a likely transport pathway. [ 196 ] [ 201 ] The decline of filter-feeding shellfish populations, such as oysters, likely contributes to HAB occurrence. [ 202 ] As such, numerous research projects are assessing the potential of restored shellfish populations to reduce HAB occurrence. [ 203 ] [ 204 ] [ 205 ] Other remedies include using improved monitoring methods, trying to improve predictability, and testing new potential methods of controlling HABs. [ 75 ] Some countries surrounding the Baltic Sea, which has the world's largest dead zone, have considered massive geoengineering options, such as forcing air into bottom layers to aerate them. [ 129 ] Mathematical models are useful for predicting future algal blooms. [ 47 ] A growing number of scientists agree that there is an urgent need to protect the public by being able to forecast harmful algal blooms. [ 206 ] One way they hope to do that is with sophisticated sensors that can help warn about potential blooms. [ 207 ] The same types of sensors can also be used by water treatment facilities to help them prepare for higher toxin levels. [ 206 ] [ 208 ] The only sensors now in use are located in the Gulf of Mexico.
In 2008 similar sensors in the Gulf forewarned of an increased level of toxins, which led to a shutdown of shellfish harvesting in Texas along with a recall of mussels, clams, and oysters, possibly saving many lives. With the increase in the size and frequency of HABs, experts say significantly more sensors are needed around the country. [ 206 ] The same kinds of sensors can also be used to detect threats to drinking water from intentional contamination. [ 209 ] Satellite and remote sensing technologies are growing in importance for monitoring, tracking, and detecting HABs. [ 210 ] [ 211 ] [ 212 ] [ 213 ] Four U.S. federal agencies, the EPA, the National Aeronautics and Space Administration (NASA), NOAA, and the U.S. Geological Survey (USGS), are working on ways to detect and measure cyanobacteria blooms using satellite data. [ 214 ] The data may help develop early-warning indicators of cyanobacteria blooms by monitoring both local and national coverage. [ 215 ] In 2016 automated early-warning monitoring systems were successfully tested and, for the first time, proved able to identify the rapid growth of algae and the subsequent depletion of oxygen in the water. [ 216 ] In July 2016 Florida declared a state of emergency for four counties as a result of blooms. The blooms were said to be "destroying" a number of businesses and affecting local economies, with many businesses needing to shut down entirely. [ 259 ] Some beaches were closed, and hotels and restaurants suffered a drop in business. Tourist sporting activities such as fishing and boating were also affected. [ 260 ] [ 261 ] In 2019, the biggest Sargassum bloom ever seen created a crisis for the tourism industry in North America. This event was likely caused by climate change and nutrient pollution from fertilizers. [ 262 ] Several Caribbean countries considered declaring a state of emergency due to the impact on tourism from environmental damage and potentially toxic and harmful health effects. [ 263 ] The Gulf of Maine frequently experiences blooms of the dinoflagellate Alexandrium fundyense, an organism that produces saxitoxin, the neurotoxin responsible for paralytic shellfish poisoning. The well-known "Florida red tide" that occurs in the Gulf of Mexico is a HAB caused by Karenia brevis, another dinoflagellate, which produces brevetoxin, the neurotoxin responsible for neurotoxic shellfish poisoning. California coastal waters also experience seasonal blooms of Pseudo-nitzschia, a diatom known to produce domoic acid, the neurotoxin responsible for amnesic shellfish poisoning. The term red tide is most often used in the US to refer to Karenia brevis blooms in the eastern Gulf of Mexico, also called the Florida red tide. K. brevis is one of many species of the genus Karenia found in the world's oceans. [ 264 ] Major advances have occurred in the study of dinoflagellates and their genomics, including identification of the toxin-producing genes ( PKS genes), exploration of the effects that environmental changes (temperature, light/dark cycles, etc.) have on gene expression, and an appreciation of the complexity of the Karenia genome. [ 264 ] These blooms have been documented since the 1800s and occur almost annually along Florida's coasts. [ 264 ] Research activity on harmful algal blooms (HABs) increased in the 1980s and 1990s, driven primarily by media attention following the discovery of new HAB organisms and the potential adverse health effects of exposure on animals and humans.
[ 265 ] [ full citation needed ] The Florida red tides have been observed to spread as far as the eastern coast of Mexico. [ 264 ] The density of these organisms during a bloom can exceed tens of millions of cells per litre of seawater, and the cells often discolor the water a deep reddish-brown hue. Red tide is also sometimes used to describe harmful algal blooms on the northeast coast of the United States, particularly in the Gulf of Maine. This type of bloom is caused by another species of dinoflagellate known as Alexandrium fundyense. These blooms cause severe disruptions to fisheries in these waters, as the saxitoxin produced by the organisms causes filter-feeding shellfish in affected waters to become poisonous for human consumption. [ 266 ] The related Alexandrium monilatum is found in subtropical or tropical shallow seas and estuaries in the western Atlantic Ocean, the Caribbean Sea, the Gulf of Mexico, and the eastern Pacific Ocean. Natural water reservoirs in Texas have been threatened by anthropogenic activities such as large petroleum refineries and oil wells (emissions and wastewater discharge), intensive agriculture (pesticide release) and mining (toxic wastewater), as well as by natural phenomena, including frequent HAB events. In 1985 the state of Texas documented, for the first time, a bloom of P. parvum (golden alga) along the Pecos River. This phenomenon has since affected 33 reservoirs in Texas along major river systems, including the Brazos, Canadian, Rio Grande, Colorado, and Red River, has killed more than 27 million fish, and has caused tens of millions of dollars in damage. [ 267 ] The Chesapeake Bay, the largest estuary in the U.S., has suffered from repeated large algal blooms for decades due to chemical runoff from multiple sources, [ 268 ] including 9 large rivers and 141 smaller streams and creeks in parts of six states. In addition, the water is quite shallow and only 1% of the waste entering it gets flushed into the ocean. [ 53 ] By weight, 60% of the phosphates entering the bay in 2003 came from sewage treatment plants, while 60% of its nitrates came from fertilizer runoff, farm animal waste, and the atmosphere. [ 53 ] About 300 million pounds (140 Gg) of nitrates are added to the bay each year. [ 269 ] The population increase in the bay watershed, from 3.7 million people in 1940 to 18 million in 2015, is also a major factor, [ 53 ] as economic growth leads to increased use of fertilizers and rising emissions of industrial waste. [ 270 ] [ 271 ] As of 2015, the six states and the local governments in the Chesapeake watershed have upgraded their sewage treatment plants to control nutrient discharges. The U.S. Environmental Protection Agency (EPA) estimates that sewage treatment plant improvements in the Chesapeake region between 1985 and 2015 have prevented the discharge of 900 million pounds (410 Gg) of nutrients, with nitrogen discharges reduced by 57% and phosphorus by 75%. [ 272 ] Agricultural and urban runoff pollution continue to be major sources of nutrients in the bay, and efforts to manage those problems are continuing throughout the 64,000 square mile (170,000 km²) watershed. [ 273 ] Recent algal blooms in Lake Erie have been fed primarily by agricultural runoff and have led to warnings for some people in Canada and Ohio not to drink their water.
[ 274 ] [ 275 ] The International Joint Commission has called on the United States and Canada to drastically reduce phosphorus loads into Lake Erie to address the threat. [ 276 ] [ 277 ] [ 278 ] Green Bay has a dead zone caused by phosphorus pollution that appears to be getting worse. [ 279 ] Lake Okeechobee is an ideal habitat for cyanobacteria because it is shallow, sunny, and laden with nutrients from Florida's agriculture. [ 280 ] The Okeechobee Waterway connects the lake to the Atlantic Ocean and the Gulf of Mexico through the St. Lucie River and the Caloosahatchee respectively, which means that harmful algal blooms are carried down the estuaries as water is released during the wet summer months. In July 2018 up to 90% of Lake Okeechobee was covered in algae. [ 281 ] [ 282 ] Water draining from the lake filled the region with a noxious odor and caused respiratory problems in some people during the following month. [ 283 ] In addition, harmful red tide blooms are historically common on Florida's coasts during these same summer months. [ 284 ] Cyanobacteria in the rivers die as they reach saltwater, but their nitrogen fixation feeds the red tide on the coast. [ 284 ] Areas at the mouths of the estuaries, such as Cape Coral and Port St. Lucie, therefore experience the compounded effects of both types of harmful algal bloom. Cleanup crews hired by authorities in Lee County, where the Caloosahatchee meets the Gulf of Mexico, removed more than 1,700 tons of dead marine life in August 2018. [ 285 ] In 2020, a large harmful algal bloom brought on by a combination of fertilizer runoff and extreme heat closed beaches in Poland and Finland and posed a risk to flounder and mussel beds. [ 286 ] [ 287 ] The Baltic Sea Action Group sees this as a threat to biodiversity and regional fishing stocks. [ 288 ] Open defecation is common in South Asia, but human waste is an often overlooked source of nutrient pollution in marine pollution modeling. When the nitrogen (N) and phosphorus (P) contributed by human waste were included in models for Bangladesh, India, and Pakistan, the estimated N and P inputs to bodies of water increased by one to two orders of magnitude compared to previous models. [ 49 ] River export of nutrients to coastal seas increases coastal eutrophication potential (ICEP); the ICEP of the Godavari River is three times higher when N and P inputs from human waste are included.
https://en.wikipedia.org/wiki/Harmful_algal_bloom
In physics , acoustics , and telecommunications , a harmonic is a sinusoidal wave with a frequency that is a positive integer multiple of the fundamental frequency of a periodic signal . The fundamental frequency is also called the 1st harmonic ; the other harmonics are known as higher harmonics . As all harmonics are periodic at the fundamental frequency, the sum of harmonics is also periodic at that frequency. The set of harmonics forms a harmonic series . The term is employed in various disciplines, including music, physics, acoustics , electronic power transmission, radio technology, and other fields. For example, if the fundamental frequency is 50 Hz , a common AC power supply frequency, the frequencies of the first three higher harmonics are 100 Hz (2nd harmonic), 150 Hz (3rd harmonic), and 200 Hz (4th harmonic), and any addition of waves with these frequencies is periodic at 50 Hz. An n {\displaystyle \ n} th characteristic mode, for n > 1 , {\displaystyle \ n>1\ ,} will have nodes that are not vibrating. For example, the 3rd characteristic mode will have nodes at 1 3 L {\displaystyle \ {\tfrac {1}{3}}\ L\ } and 2 3 L , {\displaystyle \ {\tfrac {2}{3}}\ L\ ,} where L {\displaystyle \ L\ } is the length of the string. In fact, each n {\displaystyle \ n} th characteristic mode, for n {\displaystyle \ n\ } not a multiple of 3, will not have nodes at these points. These other characteristic modes will be vibrating at the positions 1 3 L {\displaystyle \ {\tfrac {1}{3}}\ L\ } and 2 3 L . {\displaystyle \ {\tfrac {2}{3}}\ L~.} If the player gently touches one of these positions, then these other characteristic modes will be suppressed. The tonal harmonics from these other characteristic modes will then also be suppressed. Consequently, the tonal harmonics from the n {\displaystyle \ n} th characteristic modes, where n {\displaystyle \ n\ } is a multiple of 3, will be made relatively more prominent. [ 1 ] In music, harmonics are used on string instruments and wind instruments as a way of producing sound on the instrument, particularly to play higher notes and, with strings, to obtain notes that have a unique sound quality or "tone colour". On strings, bowed harmonics have a "glassy", pure tone. On stringed instruments, harmonics are played by touching (but not fully pressing down) the string at an exact point while sounding the string (plucking, bowing, etc.); this allows the harmonic to sound, a pitch which is always higher than the fundamental frequency of the string. Harmonics may be called "overtones", "partials", or "upper partials", and in some music contexts the terms "harmonic", "overtone" and "partial" are used fairly interchangeably. More precisely, however, the term "harmonic" includes all pitches in a harmonic series (including the fundamental frequency), while the term "overtone" only includes pitches above the fundamental. A whizzing, whistling tonal character distinguishes all the harmonics, both natural and artificial, from the firmly stopped intervals; therefore their application in connection with the latter must always be carefully considered. [ citation needed ] Most acoustic instruments emit complex tones containing many individual partials (component simple tones or sinusoidal waves), but the untrained human ear typically does not perceive those partials as separate phenomena. Rather, a musical note is perceived as one sound, the quality or timbre of that sound being a result of the relative strengths of the individual partials.
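As a minimal numeric sketch of the relationships just described, and not taken from the source, the following Python snippet lists the integer-multiple harmonics of an assumed 50 Hz fundamental and the interior node positions of the 3rd characteristic mode of a string of length L; all values are illustrative only.

# Sketch: harmonics of a 50 Hz fundamental and the non-vibrating node
# positions of the 3rd characteristic mode on a string of length L.
fundamental_hz = 50.0                       # assumed AC supply fundamental
harmonics = {n: n * fundamental_hz for n in range(1, 5)}
print(harmonics)                            # {1: 50.0, 2: 100.0, 3: 150.0, 4: 200.0}

L = 1.0                                     # string length in arbitrary units
n = 3                                       # 3rd characteristic mode
interior_nodes = [k * L / n for k in range(1, n)]
print(interior_nodes)                       # [0.333..., 0.666...], i.e. L/3 and 2L/3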
Many acoustic oscillators , such as the human voice or a bowed violin string, produce complex tones that are more or less periodic , and thus are composed of partials that are nearly matched to the integer multiples of the fundamental frequency and therefore resemble the ideal harmonics; they are called "harmonic partials" or simply "harmonics" for convenience (although it is not strictly accurate to call a partial a harmonic, the first being actual and the second being theoretical). Oscillators that produce harmonic partials behave somewhat like one-dimensional resonators , and are often long and thin, such as a guitar string or a column of air open at both ends (as with the metallic modern orchestral transverse flute ). Wind instruments whose air column is open at only one end, such as trumpets and clarinets , also produce partials resembling harmonics. However, in theory they only produce partials matching the odd harmonics. In practice, no real acoustic instrument behaves as perfectly as the simplified physical models predict; for example, instruments made of non-linearly elastic wood, instead of metal, or strung with gut instead of brass or steel strings , tend to have not-quite-integer partials. Partials whose frequencies are not integer multiples of the fundamental are referred to as inharmonic partials . Some acoustic instruments emit a mix of harmonic and inharmonic partials but still produce an effect on the ear of having a definite fundamental pitch, such as pianos , strings plucked pizzicato , vibraphones, marimbas, and certain pure-sounding bells or chimes. Antique singing bowls are known for producing multiple harmonic partials or multiphonics . [ 3 ] [ 4 ] Other oscillators, such as cymbals , drum heads, and most percussion instruments, naturally produce an abundance of inharmonic partials and do not imply any particular pitch, and therefore cannot be used melodically or harmonically in the same way other instruments can. Building on the work of Sethares (2004), [ 5 ] dynamic tonality introduces the notion of pseudo-harmonic partials, in which the frequency of each partial is aligned to match the pitch of a corresponding note in a pseudo-just tuning, thereby maximizing the consonance of that pseudo-harmonic timbre with notes of that pseudo-just tuning. [ 6 ] [ 7 ] [ 8 ] [ 9 ] An overtone is any partial higher than the lowest partial in a compound tone. The relative strengths and frequency relationships of the component partials determine the timbre of an instrument. The similarity between the terms overtone and partial sometimes leads to their being loosely used interchangeably in a musical context, but they are counted differently, leading to some possible confusion. In the special case of instrumental timbres whose component partials closely match a harmonic series (such as with most strings and winds) rather than being inharmonic partials (such as with most pitched percussion instruments), it is also convenient to call the component partials "harmonics", though this is not strictly correct, because harmonics are numbered the same even when missing, while partials and overtones are only counted when present. The three types of names are counted as follows (assuming that the harmonics are present): the 1st partial is the fundamental and the 1st harmonic; the 2nd partial is the 1st overtone and the 2nd harmonic; the 3rd partial is the 2nd overtone and the 3rd harmonic; and so on, with the overtone number always one less than the corresponding partial or harmonic number. In many musical instruments , it is possible to play the upper harmonics without the fundamental note being present.
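To make the distinction between harmonic and inharmonic partials concrete, here is a small sketch that is not from the source: it compares ideal integer-multiple partials with the slightly sharpened partials predicted by the commonly used stiff-string model f_n = n · f1 · sqrt(1 + B·n²); the fundamental f1 and the inharmonicity coefficient B are assumed, illustrative values only.

import math

# Sketch: ideal harmonic partials vs. the slightly "stretched" partials of
# a stiff string, using the common model f_n = n * f1 * sqrt(1 + B * n^2).
# f1 and B are assumed illustrative values, not taken from the article.
f1 = 220.0       # fundamental frequency in Hz
B = 0.0004       # inharmonicity coefficient (typical order of magnitude for a piano string)

for n in range(1, 9):
    ideal = n * f1
    stretched = n * f1 * math.sqrt(1.0 + B * n * n)
    cents_sharp = 1200.0 * math.log2(stretched / ideal)
    print(f"partial {n}: ideal {ideal:7.1f} Hz, stiff string {stretched:7.1f} Hz "
          f"(+{cents_sharp:.1f} cents)")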
In a simple case (e.g., a recorder ), playing an upper harmonic without the fundamental has the effect of making the note go up in pitch by an octave , but in more complex cases many other pitch variations are obtained. In some cases it also changes the timbre of the note. This is part of the normal method of obtaining higher notes in wind instruments , where it is called overblowing . The extended technique of playing multiphonics also produces harmonics. On string instruments it is possible to produce very pure sounding notes, called harmonics or flageolets by string players, which have an eerie quality, as well as being high in pitch. Harmonics may be used to check, at a unison, the tuning of strings that are not themselves tuned to the unison. For example, lightly fingering the node found halfway down the highest string of a cello produces the same pitch as lightly fingering the node 1/3 of the way down the second highest string. For the human voice see Overtone singing , which uses harmonics. While it is true that electronically produced periodic tones (e.g. square waves or other non-sinusoidal waves) have "harmonics" that are whole number multiples of the fundamental frequency, practical instruments do not all have this characteristic. For example, higher "harmonics" of piano notes are not true harmonics but are "overtones" and can be very sharp, i.e. a higher frequency than given by a pure harmonic series . This is especially true of instruments other than strings , brass , or woodwinds . Examples of these "other" instruments are xylophones, drums, bells, chimes, etc.; not all of their overtone frequencies make a simple whole number ratio with the fundamental frequency. (The fundamental frequency is the reciprocal of the longest time period of the collection of vibrations in some single periodic phenomenon. [ 10 ] ) Harmonics may be singly produced [on stringed instruments] (1) by varying the point of contact with the bow, or (2) by slightly pressing the string at the nodes, or divisions of its aliquot parts ( 1 2 {\displaystyle {\tfrac {1}{2}}} , 1 3 {\displaystyle {\tfrac {1}{3}}} , 1 4 {\displaystyle {\tfrac {1}{4}}} , etc.). (1) In the first case, advancing the bow from the usual place where the fundamental note is produced towards the bridge, the whole scale of harmonics may be produced in succession on an old and highly resonant instrument. The employment of this means produces the effect called ' sul ponticello .' (2) The production of harmonics by the slight pressure of the finger on the open string is more useful. When produced by pressing slightly on the various nodes of the open strings they are called 'natural harmonics'. ... Violinists are well aware that the longer the string in proportion to its thickness, the greater the number of upper harmonics it can be made to yield. Gently touching a string at a fraction k/n of its length (where k and n share no common factor) forces it into its nth harmonic mode when it is vibrated: touching at 1/2 gives the 2nd harmonic, touching at 1/3 or 2/3 gives the 3rd, touching at 1/4 or 3/4 gives the 4th, and so on. String harmonics (flageolet tones) are described as having a "flutelike, silvery quality" that can be highly effective as a special color or tone color ( timbre ) when used and heard in orchestration . [ 12 ] It is unusual to encounter natural harmonics higher than the fifth partial on any stringed instrument except the double bass, on account of its much longer strings. [ 12 ] Occasionally a score will call for an artificial harmonic , produced by playing an overtone on an already stopped string.
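A small sketch, not from the source, of those touch points and the pitch each harmonic sounds, expressed in cents above the open string's fundamental:

from fractions import Fraction
from math import gcd, log2

# Sketch: the points at which lightly touching an open string selects each
# harmonic, and how far above the open-string fundamental that harmonic sounds.
def touch_points(n):
    """Fractions k/n of the string length that force the nth harmonic."""
    return [Fraction(k, n) for k in range(1, n) if gcd(k, n) == 1]

for n in range(2, 7):
    cents_above_fundamental = 1200 * log2(n)
    points = ", ".join(str(p) for p in touch_points(n))
    print(f"harmonic {n}: touch at {points} "
          f"({cents_above_fundamental:.0f} cents above the open string)")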
As a performance technique, it is accomplished by using two fingers on the fingerboard, the first to shorten the string to the desired fundamental, with the second touching the node corresponding to the appropriate harmonic. Harmonics may be either used in or considered as the basis of just intonation systems. Composer Arnold Dreyblatt is able to bring out different harmonics on the single string of his modified double bass by slightly altering his unique bowing technique halfway between hitting and bowing the strings. Composer Lawrence Ball uses harmonics to generate music electronically.
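As a worked example of the technique just described, and as a sketch with assumed values rather than anything stated in the source: if the first finger stops the string so that its fundamental is 440 Hz, lightly touching the node one quarter of the way along the stopped length sounds the 4th harmonic of that fundamental, two octaves higher.

from math import log2

# Sketch: sounding pitch of an artificial harmonic. The first finger stops the
# string, fixing a new fundamental; the second finger lightly touches a node at
# 1/n of the stopped length, which makes the nth harmonic of that fundamental sound.
def artificial_harmonic_hz(stopped_fundamental_hz, n=4):
    return n * stopped_fundamental_hz

stopped = 440.0                                    # assumed: string stopped at A4
sounding = artificial_harmonic_hz(stopped, n=4)    # the common "touch fourth" harmonic
print(sounding)                                    # 1760.0 Hz
print(12 * log2(sounding / stopped))               # 24.0 semitones, i.e. two octaves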
https://en.wikipedia.org/wiki/Harmonic