In mathematics , a system of linear equations (or linear system ) is a collection of two or more linear equations involving the same variables . [ 1 ] [ 2 ] For example,
is a system of three equations in the three variables x , y , z . A solution to a linear system is an assignment of values to the variables such that all the equations are simultaneously satisfied. In the example above, a solution is given by the ordered triple ( x , y , z ) = (1, −2, −2), since it makes all three equations valid.
Linear systems are a fundamental part of linear algebra , a subject used in most modern mathematics. Computational algorithms for finding the solutions are an important part of numerical linear algebra , and play a prominent role in engineering , physics , chemistry , computer science , and economics . A system of non-linear equations can often be approximated by a linear system (see linearization ), a helpful technique when making a mathematical model or computer simulation of a relatively complex system .
Very often, and in this article, the coefficients and solutions of the equations are constrained to be real or complex numbers , but the theory and algorithms apply to coefficients and solutions in any field . For other algebraic structures , other theories have been developed. For coefficients and solutions in an integral domain , such as the ring of integers , see Linear equation over a ring . For coefficients and solutions that are polynomials, see Gröbner basis . For finding the "best" integer solutions among many, see Integer linear programming . For an example of a more exotic structure to which linear algebra can be applied, see Tropical geometry .
The system of one equation in one unknown
has the solution
However, most interesting linear systems have at least two equations.
The simplest kind of nontrivial linear system involves two equations and two variables:
One method for solving such a system is as follows. First, solve the top equation for x {\displaystyle x} in terms of y {\displaystyle y} :
Now substitute this expression for x into the bottom equation:
This results in a single equation involving only the variable y . Solving gives y = 1, and substituting this back into the equation for x yields x = 3/2. This method generalizes to systems with additional variables (see "elimination of variables" below, or the article on elementary algebra ).
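The two-variable substitution procedure above can be sketched in code. The displayed equations did not survive extraction, so the concrete system below (2x + 3y = 6 and 4x + 9y = 15) is an assumption chosen to reproduce the stated solution y = 1, x = 3/2, and `solve_2x2` is a hypothetical helper name.

```python
from fractions import Fraction

def solve_2x2(a1, b1, c1, a2, b2, c2):
    """Solve a1*x + b1*y = c1 and a2*x + b2*y = c2 by substitution."""
    a1, b1, c1, a2, b2, c2 = map(Fraction, (a1, b1, c1, a2, b2, c2))
    # Solve the top equation for x:  x = (c1 - b1*y) / a1,
    # then substitute into the bottom equation and solve for y.
    # Assumes a1 != 0 and a unique solution (nonzero denominator).
    y = (c2 - a2 * c1 / a1) / (b2 - a2 * b1 / a1)
    x = (c1 - b1 * y) / a1
    return x, y

# 2x + 3y = 6, 4x + 9y = 15:
print(solve_2x2(2, 3, 6, 4, 9, 15))  # (Fraction(3, 2), Fraction(1, 1))
```

Exact rational arithmetic with `Fraction` avoids floating-point rounding, which keeps the worked values identical to the hand computation.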
A general system of m linear equations with n unknowns and coefficients can be written as
where x 1 , x 2 , …, x n are the unknowns, a 11 , a 12 , …, a mn are the coefficients of the system, and b 1 , b 2 , …, b m are the constant terms. [ 3 ]
Often the coefficients and unknowns are real or complex numbers , but integers and rational numbers are also seen, as are polynomials and elements of an abstract algebraic structure .
One extremely helpful view is that each unknown is a weight for a column vector in a linear combination .
This allows all the language and theory of vector spaces (or more generally, modules ) to be brought to bear. For example, the collection of all possible linear combinations of the vectors on the left-hand side (LHS) is called their span , and the equations have a solution just when the right-hand vector is within that span. If every vector within that span has exactly one expression as a linear combination of the given left-hand vectors, then any solution is unique. In any event, the span has a basis of linearly independent vectors that guarantee exactly one expression, and the number of vectors in that basis (its dimension ) cannot be larger than m or n , but it can be smaller. This is important because if we have m independent vectors, a solution is guaranteed regardless of the right-hand side (RHS), and otherwise not guaranteed.
The vector equation is equivalent to a matrix equation of the form A x = b {\displaystyle A\mathbf {x} =\mathbf {b} } where A is an m × n matrix, x is a column vector with n entries, and b is a column vector with m entries. [ 4 ]
{\displaystyle A={\begin{bmatrix}a_{11}&a_{12}&\cdots &a_{1n}\\a_{21}&a_{22}&\cdots &a_{2n}\\\vdots &\vdots &\ddots &\vdots \\a_{m1}&a_{m2}&\cdots &a_{mn}\end{bmatrix}},\quad \mathbf {x} ={\begin{bmatrix}x_{1}\\x_{2}\\\vdots \\x_{n}\end{bmatrix}},\quad \mathbf {b} ={\begin{bmatrix}b_{1}\\b_{2}\\\vdots \\b_{m}\end{bmatrix}}.} The number of vectors in a basis for the span is now expressed as the rank of the matrix.
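The matrix formulation can be verified directly: multiplying A by a candidate x must reproduce b. The concrete numbers below are an assumption (the article's displayed system was lost in extraction), chosen so that the ordered triple (1, −2, −2) quoted earlier satisfies it, and `matvec` is a hypothetical helper.

```python
from fractions import Fraction

def matvec(A, x):
    """Multiply an m-by-n matrix (list of rows) by a length-n vector."""
    return [sum(Fraction(aij) * Fraction(xj) for aij, xj in zip(row, x))
            for row in A]

# Hypothetical 3x3 system with solution (x, y, z) = (1, -2, -2):
A = [[3, 2, -1],
     [2, -2, 4],
     [-1, Fraction(1, 2), -1]]
b = [1, -2, 0]
assert matvec(A, [1, -2, -2]) == b  # A x = b holds for this triple
```

Each unknown weights one column of A, so this check is exactly the "linear combination of column vectors" view described above.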
A solution of a linear system is an assignment of values to the variables x 1 , x 2 , … , x n {\displaystyle x_{1},x_{2},\dots ,x_{n}} such that each of the equations is satisfied. The set of all possible solutions is called the solution set . [ 5 ]
A linear system may behave in any one of three possible ways:
For a system involving two variables ( x and y ), each linear equation determines a line on the xy - plane . Because a solution to a linear system must satisfy all of the equations, the solution set is the intersection of these lines, and is hence either a line, a single point, or the empty set .
For three variables, each linear equation determines a plane in three-dimensional space , and the solution set is the intersection of these planes. Thus the solution set may be a plane, a line, a single point, or the empty set. For example, as three parallel planes do not have a common point, the solution set of their equations is empty; the solution set of the equations of three planes intersecting at a point is a single point; if three planes pass through two points, their equations have at least two common solutions; in fact the solution set is infinite and consists of the whole line passing through these points. [ 6 ]
For n variables, each linear equation determines a hyperplane in n -dimensional space . The solution set is the intersection of these hyperplanes, and is a flat , which may have any dimension lower than n .
In general, the behavior of a linear system is determined by the relationship between the number of equations and the number of unknowns. Here, "in general" means that a different behavior may occur for specific values of the coefficients of the equations.
In the first case, the dimension of the solution set is, in general, equal to n − m , where n is the number of variables and m is the number of equations.
The following pictures illustrate this trichotomy in the case of two variables:
The first system has infinitely many solutions, namely all of the points on the blue line. The second system has a single unique solution, namely the intersection of the two lines. The third system has no solutions, since the three lines share no common point.
It must be kept in mind that the pictures above show only the most common case (the general case). It is possible for a system of two equations and two unknowns to have no solution (if the two lines are parallel), or for a system of three equations and two unknowns to be solvable (if the three lines intersect at a single point).
A system of linear equations behaves differently from the general case if the equations are linearly dependent , or if it is inconsistent and has no more equations than unknowns.
The equations of a linear system are independent if none of the equations can be derived algebraically from the others. When the equations are independent, each equation contains new information about the variables, and removing any of the equations increases the size of the solution set. For linear equations, logical independence is the same as linear independence .
For example, the equations
are not independent — they are the same equation when scaled by a factor of two, and they would produce identical graphs. This is an example of equivalence in a system of linear equations.
For a more complicated example, the equations
are not independent, because the third equation is the sum of the other two. Indeed, any one of these equations can be derived from the other two, and any one of the equations can be removed without affecting the solution set. The graphs of these equations are three lines that intersect at a single point.
A linear system is inconsistent if it has no solution, and otherwise it is said to be consistent . [ 7 ] When the system is inconsistent, it is possible to derive a contradiction from the equations, which can always be rewritten as the statement 0 = 1 .
For example, the equations
are inconsistent. In fact, by subtracting the first equation from the second one and multiplying both sides of the result by 1/6, we get 0 = 1 . The graphs of these equations on the xy -plane are a pair of parallel lines.
It is possible for three linear equations to be inconsistent, even though any two of them are consistent together. For example, the equations
are inconsistent. Adding the first two equations together gives 3 x + 2 y = 2 , which can be subtracted from the third equation to yield 0 = 1 . Any two of these equations have a common solution. The same phenomenon can occur for any number of equations.
In general, inconsistencies occur if the left-hand sides of the equations in a system are linearly dependent, and the constant terms do not satisfy the dependence relation. A system of equations whose left-hand sides are linearly independent is always consistent.
Putting it another way, according to the Rouché–Capelli theorem , any system of equations (overdetermined or otherwise) is inconsistent if the rank of the augmented matrix is greater than the rank of the coefficient matrix . If, on the other hand, the ranks of these two matrices are equal, the system must have at least one solution. The solution is unique if and only if the rank equals the number of variables. Otherwise the general solution has k free parameters where k is the difference between the number of variables and the rank; hence in such a case there is an infinitude of solutions. The rank of a system of equations (that is, the rank of the augmented matrix) can never be higher than [the number of variables] + 1, which means that a system with any number of equations can always be reduced to a system that has a number of independent equations that is at most equal to [the number of variables] + 1.
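The rank comparison in the Rouché–Capelli theorem is easy to carry out mechanically. This is a minimal sketch using exact `Fraction` arithmetic; the two-equation system below is a hypothetical example, not one from the article.

```python
from fractions import Fraction

def rank(M):
    """Rank of a matrix, computed by exact Gaussian elimination."""
    M = [[Fraction(v) for v in row] for row in M]
    r = 0
    for c in range(len(M[0]) if M else 0):
        pivot = next((i for i in range(r, len(M)) if M[i][c] != 0), None)
        if pivot is None:
            continue                      # no pivot in this column
        M[r], M[pivot] = M[pivot], M[r]   # swap the pivot row into place
        M[r] = [v / M[r][c] for v in M[r]]
        for i in range(len(M)):
            if i != r and M[i][c] != 0:   # clear the rest of the column
                M[i] = [a - M[i][c] * p for a, p in zip(M[i], M[r])]
        r += 1
    return r

# Inconsistent pair x + y = 1, x + y = 2 (hypothetical coefficients):
coeff = [[1, 1], [1, 1]]
aug = [[1, 1, 1], [1, 1, 2]]
assert rank(coeff) == 1 and rank(aug) == 2  # ranks differ, so no solution
```

When the two ranks agree, the number of free parameters of the general solution is the number of variables minus the common rank, matching the statement above.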
Two linear systems using the same set of variables are equivalent if each of the equations in the second system can be derived algebraically from the equations in the first system, and vice versa. Two systems are equivalent if either both are inconsistent or each equation of each of them is a linear combination of the equations of the other one. It follows that two linear systems are equivalent if and only if they have the same solution set.
There are several algorithms for solving a system of linear equations.
When the solution set is finite, it is reduced to a single element. In this case, the unique solution is described by a sequence of equations whose left-hand sides are the names of the unknowns and right-hand sides are the corresponding values, for example ( x = 3 , y = −2 , z = 6 ). When an order on the unknowns has been fixed, for example the alphabetical order, the solution may be described as a vector of values, like (3, −2, 6) for the previous example.
To describe a set with an infinite number of solutions, typically some of the variables are designated as free (or independent , or as parameters ), meaning that they are allowed to take any value, while the remaining variables are dependent on the values of the free variables.
For example, consider the following system:
The solution set to this system can be described by the following equations:
Here z is the free variable, while x and y are dependent on z . Any point in the solution set can be obtained by first choosing a value for z , and then computing the corresponding values for x and y .
Each free variable gives the solution space one degree of freedom , the number of which is equal to the dimension of the solution set. For example, the solution set for the above equation is a line, since a point in the solution set can be chosen by specifying the value of the parameter z . A solution set with more free variables may describe a plane, or a higher-dimensional set.
Different choices for the free variables may lead to different descriptions of the same solution set. For example, the solution to the above equations can alternatively be described as follows:
Here x is the free variable, and y and z are dependent.
The simplest method for solving a system of linear equations is to repeatedly eliminate variables. This method can be described as follows:
For example, consider the following system:
Solving the first equation for x gives x = 5 + 2 z − 3 y , and plugging this into the second and third equations yields
Since the left-hand sides of both of these equations equal y , we can equate their right-hand sides. We now have:
Substituting z = 2 into the second or third equation gives y = 8, and substituting the values of y and z into the first equation yields x = −15. Therefore, the solution is the ordered triple ( x , y , z ) = ( −15 , 8 , 2 ).
In row reduction (also known as Gaussian elimination ), the linear system is represented as an augmented matrix [ 8 ]
This matrix is then modified using elementary row operations until it reaches reduced row echelon form . There are three types of elementary row operations: [ 8 ]
Because these operations are reversible, the augmented matrix produced always represents a linear system that is equivalent to the original.
There are several specific algorithms to row-reduce an augmented matrix, the simplest of which are Gaussian elimination and Gauss–Jordan elimination . The following computation shows Gauss–Jordan elimination applied to the matrix above:
The last matrix is in reduced row echelon form, and represents the system x = −15 , y = 8 , z = 2 . A comparison with the example in the previous section on the algebraic elimination of variables shows that these two methods are in fact the same; the difference lies in how the computations are written down.
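The Gauss–Jordan procedure can be sketched as follows. The augmented matrix is reconstructed from the surrounding text: the first row must encode x + 3y − 2z = 5 (since solving it for x gives x = 5 + 2z − 3y), and the other two rows are an assumption chosen so that the reduced form yields the stated solution (−15, 8, 2).

```python
from fractions import Fraction

def rref(M):
    """Reduce an augmented matrix to reduced row echelon form, exactly."""
    M = [[Fraction(v) for v in row] for row in M]
    r = 0
    for c in range(len(M[0]) - 1):          # never pivot on the constants column
        pivot = next((i for i in range(r, len(M)) if M[i][c] != 0), None)
        if pivot is None:
            continue
        M[r], M[pivot] = M[pivot], M[r]     # row swap
        M[r] = [v / M[r][c] for v in M[r]]  # scale the pivot to 1
        for i in range(len(M)):
            if i != r and M[i][c] != 0:     # eliminate the rest of the column
                M[i] = [a - M[i][c] * p for a, p in zip(M[i], M[r])]
        r += 1
    return M

aug = [[1, 3, -2, 5],
       [3, 5, 6, 7],
       [2, 4, 3, 8]]
print([row[-1] for row in rref(aug)])  # [Fraction(-15, 1), Fraction(8, 1), Fraction(2, 1)]
```

The three row operations used here (swap, scale, add a multiple of one row to another) are exactly the elementary operations listed above, which is why the reduced matrix represents an equivalent system.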
Cramer's rule is an explicit formula for the solution of a system of linear equations, with each variable given by a quotient of two determinants . [ 9 ] For example, the solution to the system
is given by
For each variable, the denominator is the determinant of the matrix of coefficients , while the numerator is the determinant of a matrix in which one column has been replaced by the vector of constant terms.
Though Cramer's rule is important theoretically, it has little practical value for large matrices, since the computation of large determinants is somewhat cumbersome. (Indeed, large determinants are most easily computed using row reduction.)
Further, Cramer's rule has very poor numerical properties, making it unsuitable for solving even small systems reliably, unless the operations are performed in rational arithmetic with unbounded precision. [ citation needed ]
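Despite those caveats, Cramer's rule is easy to state in code for small systems. This sketch computes determinants by cofactor expansion, which is fine for 2×2 or 3×3 but hopeless for large matrices, as the text notes; the example coefficients are hypothetical, chosen to have solution (−15, 8, 2).

```python
from fractions import Fraction

def det(M):
    """Determinant by cofactor expansion along the first row."""
    if len(M) == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j] *
               det([row[:j] + row[j + 1:] for row in M[1:]])
               for j in range(len(M)))

def cramer(A, b):
    """Solve A x = b by Cramer's rule (square A with nonzero determinant)."""
    d = Fraction(det(A))
    if d == 0:
        raise ValueError("coefficient determinant is zero; rule not applicable")
    # For each variable, replace column j of A by b and take the determinant.
    return [Fraction(det([row[:j] + [bi] + row[j + 1:]
                          for row, bi in zip(A, b)])) / d
            for j in range(len(A))]

A = [[1, 3, -2], [3, 5, 6], [2, 4, 3]]
b = [5, 7, 8]
print(cramer(A, b))  # [Fraction(-15, 1), Fraction(8, 1), Fraction(2, 1)]
```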
If the equation system is expressed in the matrix form A x = b {\displaystyle A\mathbf {x} =\mathbf {b} } , the entire solution set can also be expressed in matrix form. If the matrix A is square (has m rows and n = m columns) and has full rank (all m rows are independent), then the system has a unique solution given by
where A −1 is the inverse of A . More generally, regardless of whether m = n or not and regardless of the rank of A , all solutions (if any exist) are given using the Moore–Penrose inverse of A , denoted A + , as follows:
where w is a vector of free parameters that ranges over all possible n × 1 vectors. A necessary and sufficient condition for any solution(s) to exist is that the potential solution obtained using w = 0 satisfy A x = b ; that is, that A A + b = b . If this condition does not hold, the equation system is inconsistent and has no solution. If the condition holds, the system is consistent and at least one solution exists. For example, in the above-mentioned case in which A is square and of full rank, A + simply equals A −1 and the general solution equation simplifies to
as previously stated, where w has completely dropped out of the solution, leaving only a single solution. In other cases, though, w remains and hence an infinitude of potential values of the free parameter vector w give an infinitude of solutions of the equation.
While systems of three or four equations can be readily solved by hand (see Cracovian ), computers are often used for larger systems. The standard algorithm for solving a system of linear equations is based on Gaussian elimination with some modifications. Firstly, it is essential to avoid division by small numbers, which may lead to inaccurate results. This can be done by reordering the equations if necessary, a process known as pivoting . Secondly, the algorithm does not exactly do Gaussian elimination, but it computes the LU decomposition of the matrix A . This is mostly an organizational tool, but it is much quicker if one has to solve several systems with the same matrix A but different vectors b .
If the matrix A has some special structure, this can be exploited to obtain faster or more accurate algorithms. For instance, systems with a symmetric positive definite matrix can be solved twice as fast with the Cholesky decomposition . Levinson recursion is a fast method for Toeplitz matrices . Special methods exist also for matrices with many zero elements (so-called sparse matrices ), which appear often in applications.
A completely different approach is often taken for very large systems, which would otherwise take too much time or memory. The idea is to start with an initial approximation to the solution (which does not have to be accurate at all), and to change this approximation in several steps to bring it closer to the true solution. Once the approximation is sufficiently accurate, this is taken to be the solution to the system. This leads to the class of iterative methods . For some sparse matrices, the introduction of randomness improves the speed of the iterative methods. [ 10 ] One example of an iterative method is the Jacobi method , where the matrix A is split into its diagonal component D and its non-diagonal component L + U . An initial guess x (0) is used at the start of the algorithm. Each subsequent guess is computed using the iterative equation:
When the difference between guesses x ( k ) and x ( k +1) is sufficiently small, the algorithm is said to have converged on the solution. [ 11 ]
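The Jacobi iteration described above can be sketched as follows. The 4×4 system is a standard strictly diagonally dominant example (an assumption, not taken from this article); diagonal dominance guarantees the iteration converges.

```python
def jacobi(A, b, x0=None, tol=1e-10, max_iter=500):
    """Jacobi iteration: x_i <- (b_i - sum_{j != i} a_ij * x_j) / a_ii.
    Converges when A is strictly diagonally dominant."""
    n = len(A)
    x = list(x0) if x0 is not None else [0.0] * n
    for _ in range(max_iter):
        x_new = [(b[i] - sum(A[i][j] * x[j] for j in range(n) if j != i))
                 / A[i][i] for i in range(n)]
        # Stop when successive guesses are sufficiently close:
        if max(abs(u - v) for u, v in zip(x_new, x)) < tol:
            return x_new
        x = x_new
    raise RuntimeError("did not converge")

A = [[10.0, -1.0, 2.0, 0.0],
     [-1.0, 11.0, -1.0, 3.0],
     [2.0, -1.0, 10.0, -1.0],
     [0.0, 3.0, -1.0, 8.0]]
b = [6.0, 25.0, -11.0, 15.0]
x = jacobi(A, b)
print([round(v, 6) for v in x])  # -> [1.0, 2.0, -1.0, 1.0]
```

Only the diagonal entries are divided by, so the splitting into D and L + U from the text appears here implicitly: the sum skips j = i (the L + U part) and the division applies D inverse.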
There is also a quantum algorithm for linear systems of equations . [ 12 ]
A system of linear equations is homogeneous if all of the constant terms are zero:
A homogeneous system is equivalent to a matrix equation of the form
where A is an m × n matrix, x is a column vector with n entries, and 0 is the zero vector with m entries.
Every homogeneous system has at least one solution, known as the zero (or trivial ) solution, which is obtained by assigning the value of zero to each of the variables. If the system has a non-singular matrix ( det( A ) ≠ 0 ) then it is also the only solution. If the system has a singular matrix then there is a solution set with an infinite number of solutions. This solution set has the following additional properties:
These are exactly the properties required for the solution set to be a linear subspace of R n . In particular, the solution set to a homogeneous system is the same as the null space of the corresponding matrix A .
There is a close relationship between the solutions to a linear system and the solutions to the corresponding homogeneous system:
Specifically, if p is any specific solution to the linear system A x = b , then the entire solution set can be described as
Geometrically, this says that the solution set for A x = b is a translation of the solution set for A x = 0 . Specifically, the flat for the first system can be obtained by translating the linear subspace for the homogeneous system by the vector p .
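This translation picture is easy to check numerically: if p solves A x = b and v solves the homogeneous system A x = 0, then p + t v solves A x = b for every scalar t. The small underdetermined system below is a hypothetical example.

```python
from fractions import Fraction

def matvec(A, x):
    return [sum(a * xi for a, xi in zip(row, x)) for row in A]

# Hypothetical 2x3 system A x = b with one-dimensional null space:
A = [[1, 1, 1],
     [1, 2, 3]]
b = [6, 14]
p = [1, 2, 3]     # a particular solution: 1+2+3 = 6, 1+4+9 = 14
v = [1, -2, 1]    # a homogeneous solution: 1-2+1 = 0, 1-4+3 = 0

for t in (0, 1, Fraction(-5, 2)):
    x = [pi + t * vi for pi, vi in zip(p, v)]
    assert matvec(A, x) == b  # p + t v solves A x = b for every t
```

Varying t traces out the whole solution set: the null-space line through the origin, translated by the particular solution p.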
This reasoning only applies if the system A x = b has at least one solution. This occurs if and only if the vector b lies in the image of the linear transformation A .
Source: https://en.wikipedia.org/wiki/Homogeneous_linear_equation
In chemistry , a mixture is a material made up of two or more different chemical substances which can be separated by physical methods. It is an impure substance made up of two or more elements or compounds mechanically mixed together in any proportion. [ 1 ] A mixture is the physical combination of two or more substances in which the identities are retained and are mixed in the form of solutions , suspensions or colloids . [ 2 ] [ 3 ]
Mixtures are one product of mechanically blending or mixing chemical substances such as elements and compounds , without chemical bonding or other chemical change, so that each ingredient substance retains its own chemical properties and makeup. [ 4 ] Despite the fact that there are no chemical changes to its constituents, the physical properties of a mixture, such as its melting point , may differ from those of the components. Some mixtures can be separated into their components by using physical (mechanical or thermal) means. Azeotropes are one kind of mixture that usually poses considerable difficulties regarding the separation processes required to obtain their constituents (physical or chemical processes or, even a blend of them). [ 5 ] [ 6 ] [ 7 ]
All mixtures can be characterized as being separable by mechanical means (e.g. purification , distillation , electrolysis , chromatography , heat , filtration , gravitational sorting, centrifugation ). [ 8 ] [ 9 ] Mixtures differ from chemical compounds in the following ways:
In the example of sand and water, neither of the two substances changes in any way when they are mixed. Although the sand is in the water, it still keeps the same properties that it had when it was outside the water.
The following table shows the main properties and examples for all possible phase combinations of the three "families" of mixtures :
Mixtures can be either homogeneous or heterogeneous : a mixture of uniform composition in which all components are in the same phase, such as salt in water, is called homogeneous, whereas a mixture of non-uniform composition in which the components can be easily identified, such as sand in water, is called heterogeneous.
In addition, " uniform mixture " is another term for homogeneous mixture and " non-uniform mixture " is another term for heterogeneous mixture . These terms are derived from the idea that a homogeneous mixture has a uniform appearance , or only one phase , because the particles are evenly distributed. However, a heterogeneous mixture has constituent substances that are in different phases and easily distinguishable from one another. In addition, a heterogeneous mixture may have a uniform (e.g. a colloid) or non-uniform (e.g. a pencil) composition.
Several solid substances, such as salt and sugar , dissolve in water to form homogeneous mixtures or " solutions ", in which both a solute (dissolved substance) and a solvent (dissolving medium) are present. Air is an example of a solution as well: a homogeneous mixture of gaseous nitrogen solvent, in which oxygen and smaller amounts of other gaseous solutes are dissolved. Mixtures are not limited in either their number of substances or the amounts of those substances, though in most solutions, the solute-to-solvent proportion can only reach a certain point before the mixture separates and becomes heterogeneous.
A homogeneous mixture is characterized by uniform dispersion of its constituent substances throughout; the substances exist in equal proportion everywhere within the mixture. Put differently, a homogeneous mixture will be the same no matter from where in the mixture it is sampled. For example, if a solid-liquid solution is divided into two halves of equal volume , the halves will contain equal amounts of both the liquid medium and the dissolved solid (solvent and solute).
A solution is equivalent to a "homogeneous mixture". In solutions, solutes will not settle out after any period of time and they cannot be removed by physical methods, such as a filter or centrifuge . [ 12 ] As a homogeneous mixture, a solution has one phase (solid, liquid, or gas), although the phase of the solute and solvent may initially have been different (e.g., salt water).
Gases exhibit by far the greatest space (and, consequently, the weakest intermolecular forces) between their atoms or molecules; since intermolecular interactions are minuscule in comparison to those in liquids and solids, dilute gases very easily form solutions with one another. Air is one such example: it can be more specifically described as a gaseous solution of oxygen and other gases dissolved in nitrogen (its major component).
Examples of heterogeneous mixtures are emulsions and foams . In most cases, the mixture consists of two main constituents. For an emulsion, these are immiscible fluids such as water and oil. For a foam, these are a solid and a fluid, or a liquid and a gas. On larger scales both constituents are present in any region of the mixture, and in a well-mixed mixture in the same or only slightly varying concentrations. On a microscopic scale, however, one of the constituents is absent in almost any sufficiently small region. (If such absence is common on macroscopic scales, the combination of the constituents is a dispersed medium , not a mixture.) One can distinguish different characteristics of heterogeneous mixtures by the presence or absence of continuum percolation of their constituents. For a foam, a distinction is made between reticulated foam , in which one constituent forms a connected network through which the other can freely percolate, and closed-cell foam , in which one constituent is trapped in small cells whose walls are formed by the other constituent. A similar distinction is possible for emulsions. In many emulsions, one constituent is present in the form of isolated regions of typically a globular shape, dispersed throughout the other constituent. However, it is also possible that each constituent forms a large, connected network. Such a mixture is then called bicontinuous . [ 13 ]
Making a distinction between homogeneous and heterogeneous mixtures is a matter of the scale of sampling. On a coarse enough scale, any mixture can be said to be homogeneous, if the entire article is allowed to count as a "sample" of it. On a fine enough scale, any mixture can be said to be heterogeneous, because a sample could be as small as a single molecule. In practical terms, if the property of interest of the mixture is the same regardless of which sample of it is taken for the examination used, the mixture is homogeneous.
Gy's sampling theory quantitatively defines the heterogeneity of a particle as: [ 14 ]
where h i , c i , c batch , m i , and m aver are respectively: the heterogeneity of the i th particle of the population, the mass concentration of the property of interest in the i th particle, the mass concentration of the property of interest in the population, the mass of the i th particle, and the average mass of a particle in the population.
During sampling of heterogeneous mixtures of particles, the variance of the sampling error is generally non-zero.
Pierre Gy derived, from the Poisson sampling model, the following formula for the variance of the sampling error in the mass concentration in a sample:
in which V is the variance of the sampling error, N is the number of particles in the population (before the sample was taken), q i is the probability of including the i th particle of the population in the sample (i.e. the first-order inclusion probability of the i th particle), m i is the mass of the i th particle of the population and a i is the mass concentration of the property of interest in the i th particle of the population.
The above equation for the variance of the sampling error is an approximation based on a linearization of the mass concentration in a sample.
In the theory of Gy, correct sampling is defined as a sampling scenario in which all particles have the same probability of being included in the sample. This implies that q i no longer depends on i , and can therefore be replaced by the symbol q . Gy's equation for the variance of the sampling error becomes:
where a batch is that concentration of the property of interest in the population from which the sample is to be drawn and M batch is the mass of the population from which the sample is to be drawn.
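The displayed formula did not survive extraction; the sketch below assumes the standard correct-sampling form V = ((1 − q) / (q · M_batch²)) · Σ m_i² (a_i − a_batch)², built from the quantities named in the text, and `gy_variance` is a hypothetical function name.

```python
def gy_variance(masses, concs, q):
    """Sampling-error variance under correct sampling (every particle has
    the same inclusion probability q), assuming the form
    V = (1 - q) / (q * M_batch**2) * sum(m_i**2 * (a_i - a_batch)**2)."""
    m_batch = sum(masses)
    # Mass-weighted concentration of the property of interest in the batch:
    a_batch = sum(m * a for m, a in zip(masses, concs)) / m_batch
    s = sum(m ** 2 * (a - a_batch) ** 2 for m, a in zip(masses, concs))
    return (1 - q) / (q * m_batch ** 2) * s

# Hypothetical two-particle population, half of it sampled on average:
masses = [1.0, 1.0]
concs = [0.0, 1.0]   # a_batch works out to 0.5
print(gy_variance(masses, concs, 0.5))  # 0.125
```

Note that the variance vanishes when q = 1 (the whole batch is taken, so there is no sampling error) and grows as q shrinks, which matches the intuition behind the formula.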
Air pollution research [ 15 ] [ 16 ] shows that biological and health effects after exposure to mixtures are more potent than effects from exposure to individual components. [ 17 ]
Source: https://en.wikipedia.org/wiki/Homogeneous_mixture
In mathematics , a homogeneous space is, very informally, a space that looks the same everywhere, as you move through it, with movement given by the action of a group . Homogeneous spaces occur in the theories of Lie groups , algebraic groups and topological groups . More precisely, a homogeneous space for a group G is a non-empty manifold or topological space X on which G acts transitively . The elements of G are called the symmetries of X . A special case of this is when the group G in question is the automorphism group of the space X – here "automorphism group" can mean isometry group , diffeomorphism group , or homeomorphism group . In this case, X is homogeneous if intuitively X looks locally the same at each point, either in the sense of isometry (rigid geometry), diffeomorphism ( differential geometry ), or homeomorphism ( topology ). Some authors insist that the action of G be faithful (non-identity elements act non-trivially), although the present article does not. Thus there is a group action of G on X that can be thought of as preserving some "geometric structure" on X , and making X into a single G -orbit .
Let X be a non-empty set and G a group. Then X is called a G -space if it is equipped with an action of G on X . [ 1 ] Note that automatically G acts by automorphisms (bijections) on the set. If X in addition belongs to some category , then the elements of G are assumed to act as automorphisms in the same category. That is, the maps on X coming from elements of G preserve the structure associated with the category (for example, if X is an object in Diff then the action is required to be by diffeomorphisms ). A homogeneous space is a G -space on which G acts transitively.
If X is an object of the category C , then the structure of a G -space is a homomorphism :
into the group of automorphisms of the object X in the category C . The pair ( X , ρ ) defines a homogeneous space provided ρ ( G ) is a transitive group of symmetries of the underlying set of X .
For example, if X is a topological space , then group elements are assumed to act as homeomorphisms on X . The structure of a G -space is a group homomorphism ρ : G → Homeo( X ) into the homeomorphism group of X .
Similarly, if X is a differentiable manifold , then the group elements are diffeomorphisms . The structure of a G -space is a group homomorphism ρ : G → Diffeo( X ) into the diffeomorphism group of X .
Riemannian symmetric spaces are an important class of homogeneous spaces, and include many of the examples listed below.
Concrete examples include:
From the point of view of the Erlangen program , one may understand that "all points are the same", in the geometry of X . This was true of essentially all geometries proposed before Riemannian geometry , in the middle of the nineteenth century.
Thus, for example, Euclidean space , affine space and projective space are all in natural ways homogeneous spaces for their respective symmetry groups . The same is true of the models found of non-Euclidean geometry of constant curvature , such as hyperbolic space .
A further classical example is the space of lines in projective space of three dimensions (equivalently, the space of two-dimensional subspaces of a four-dimensional vector space ). It is simple linear algebra to show that GL(4) acts transitively on those. We can parameterize them by line co-ordinates : these are the 2×2 minors of the 4×2 matrix with columns two basis vectors for the subspace. The geometry of the resulting homogeneous space is the line geometry of Julius Plücker .
In general, if X is a homogeneous space of G , and H o is the stabilizer of some marked point o in X (a choice of origin ), the points of X correspond to the left cosets G / H o , and the marked point o corresponds to the coset of the identity. Conversely, given a coset space G / H , it is a homogeneous space for G with a distinguished point, namely the coset of the identity. Thus a homogeneous space can be thought of as a coset space without a choice of origin.
For example, if H is the identity subgroup { e }, then X is a G -torsor , which explains why G -torsors are often described intuitively as " G with forgotten identity".
In general, a different choice of origin o will lead to a quotient of G by a different subgroup H o′ that is related to H o by an inner automorphism of G . Specifically,
where g is any element of G for which go = o ′ . Note that the inner automorphism (1) does not depend on which such g is selected; it depends only on g modulo H o .
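The coset correspondence and the conjugation of stabilizers described above can be checked concretely in a toy case. The sketch below uses S3 acting on three points as a hypothetical choice of G; all names are illustrative.

```python
from itertools import permutations

# Illustrative sketch: G = S3 acting on X = {0, 1, 2}. Points of X correspond
# to left cosets G/H_o of the stabilizer H_o of a marked point, and changing
# the origin conjugates the stabilizer.
G = list(permutations(range(3)))             # g acts by i -> g[i]
o = 0
H = [g for g in G if g[o] == o]              # stabilizer H_o of the origin

def compose(g, h):                           # (g ∘ h)[i] = g[h[i]]
    return tuple(g[h[i]] for i in range(3))

# The left coset g·H_o is determined by the point g(o): the orbit map
# g -> g(o) identifies X with G/H_o.
cosets = {}
for g in G:
    cosets.setdefault(g[o], set()).add(g)
assert len(cosets) == 3                      # one coset per point of X
assert all(len(c) == len(H) for c in cosets.values())

# A different origin o' gives the conjugate stabilizer H_{o'} = g H_o g^{-1}
# for any g with g(o) = o'.
o2 = 1
g = next(g for g in G if g[o] == o2)
g_inv = tuple(sorted(range(3), key=lambda i: g[i]))   # inverse permutation
H2 = {h for h in G if h[o2] == o2}
assert H2 == {compose(compose(g, h), g_inv) for h in H}
```

As the final assertion shows, the inner automorphism relating the two stabilizers depends on g only through its coset, since any other choice differs by an element of H_o.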
If the action of G on X is continuous and X is Hausdorff , then H is a closed subgroup of G . In particular, if G is a Lie group , then H is a Lie subgroup by Cartan's theorem . Hence G / H is a smooth manifold and so X carries a unique smooth structure compatible with the group action.
One can go further to double coset spaces, notably Clifford–Klein forms Γ\ G / H , where Γ is a discrete subgroup (of G ) acting properly discontinuously .
For example, in the line geometry case, we can identify H as a 12-dimensional subgroup of the 16-dimensional general linear group , GL(4), defined by conditions on the matrix entries
by looking for the stabilizer of the subspace spanned by the first two standard basis vectors. That shows that X has dimension 4.
Since the homogeneous coordinates given by the minors are 6 in number, this means that the latter are not independent of each other. In fact, a single quadratic relation holds between the six minors, as was known to nineteenth-century geometers.
This example was the first known example of a Grassmannian , other than a projective space. There are many further homogeneous spaces of the classical linear groups in common use in mathematics.
The idea of a prehomogeneous vector space was introduced by Mikio Sato .
It is a finite-dimensional vector space V with a group action of an algebraic group G , such that there is an orbit of G that is open for the Zariski topology (and so, dense). An example is GL(1) acting on a one-dimensional space.
The definition is more restrictive than it initially appears: such spaces have remarkable properties, and there is a classification of irreducible prehomogeneous vector spaces, up to a transformation known as "castling".
Given the Poincaré group G and its subgroup the Lorentz group H , the space of cosets G / H is the Minkowski space . [ 3 ] Together with de Sitter space and Anti-de Sitter space these are the maximally symmetric lorentzian spacetimes. There are also homogeneous spaces of relevance in physics that are non-lorentzian, for example Galilean, Carrollian or Aristotelian spacetimes. [ 2 ]
Physical cosmology using the general theory of relativity makes use of the Bianchi classification system. Homogeneous spaces in relativity represent the space part of background metrics for some cosmological models ; for example, the three cases of the Friedmann–Lemaître–Robertson–Walker metric may be represented by subsets of the Bianchi I (flat), V (open), VII (flat or open) and IX (closed) types, while the Mixmaster universe represents an anisotropic example of a Bianchi IX cosmology. [ 4 ]
A homogeneous space of N dimensions admits a set of N ( N + 1)/2 Killing vectors . [ 5 ] For three dimensions, this gives a total of six linearly independent Killing vector fields; homogeneous 3-spaces have the property that one may use linear combinations of these to find three everywhere non-vanishing Killing vector fields ξ_(a)^i ,
where the object C^a_{bc} , the " structure constants ", forms a constant order-three tensor antisymmetric in its lower two indices (on the left-hand side, the brackets denote antisymmetrisation and ";" represents the covariant differential operator ). In the case of a flat isotropic universe , one possibility is C^a_{bc} = 0 (type I), but in the case of a closed FLRW universe, C^a_{bc} = ε^a_{bc} , where ε^a_{bc} is the Levi-Civita symbol . | https://en.wikipedia.org/wiki/Homogeneous_space
Homogenization , in cell biology or molecular biology , is a process whereby different fractions of a biological sample become equal in composition. It can be a disease sign in histopathology , or an intentional process in research: A homogenized sample is equal in composition throughout, so that removing a fraction does not alter the overall molecular make-up of the sample remaining, and is identical to the fraction removed. Induced homogenization in biology is often followed by molecular extraction and various analytical techniques, including ELISA and western blot. [ 1 ]
Homogenization of tissue in solution is often performed simultaneously with cell lysis . To prevent lysis however, the tissue (or collection of cells, e.g. from cell culture ) can be kept at temperatures slightly above zero to prevent autolysis , and in an isotonic solution to prevent osmotic damage. [ 2 ]
If freezing the tissue is possible, cryohomogenization can be performed under "dry" conditions, and is often the method of choice whenever it is desirable to collect several distinct molecular classes (e.g. both protein and RNA ) from a single sample, or combined set of samples, or when long-term storage of part of the sample is desired. Cryohomogenization can be carried out using a supercooled mortar and pestle (classic approach), or the tissue can be homogenized by crushing it into a fine powder inside a clean plastic bag resting against a supercooled solid metal block [ 3 ] (more recently developed and more efficient technique).
High-pressure homogenization is used to isolate the contents of Gram-positive bacteria , since these cells are exceptionally resistant to lysis, and may be combined with high-temperature sterilization. [ 4 ]
Dounce homogenization is a technique suitable for soft mammalian tissues, while lysis of mammalian cells has also been demonstrated via centrifugation. [ 5 ] | https://en.wikipedia.org/wiki/Homogenization_(biology) |
Homogenization or homogenisation is any of several processes used to make a mixture of two mutually non-soluble liquids the same throughout. [ 2 ] This is achieved by turning one of the liquids into a state consisting of extremely small particles distributed uniformly throughout the other liquid. A typical example is the homogenization of milk , wherein the milk fat globules are reduced in size and dispersed [ 3 ] uniformly through the rest of the milk. [ 4 ]
Homogenization (from " homogeneous ;" Greek , homogenes : homos, same + genos, kind) [ 5 ] is the process of converting two immiscible liquids (i.e. liquids that are not soluble, in all proportions, one in another) into an emulsion [ 6 ] (Mixture of two or more liquids that are generally immiscible). Sometimes two types of homogenization are distinguished: primary homogenization, when the emulsion is created directly from separate liquids; and secondary homogenization, when the emulsion is created by the reduction in size of droplets in an existing emulsion. [ 6 ] Homogenization is achieved by a mechanical device called a homogenizer . [ 6 ]
One of the oldest applications of homogenization is in milk processing. [ 7 ] It is normally preceded by "standardization" (the mixing of milk from several different herds or dairies to produce a more consistent raw milk prior to processing). [ 7 ] The fat in milk normally separates from the water and collects at the top. Homogenization breaks the fat into smaller sizes so it no longer separates, allowing the sale of non-separating milk at any fat specification. [ 3 ]
In high-pressure homogenization, a liquid product is forced through a narrow orifice under pressures typically ranging from 1,500 to 35,000 psi. This process reduces particle and droplet size through a combination of shear, turbulence, and cavitation. It is commonly used in the dairy industry to homogenize milk , producing uniform fat distribution and improving product stability. [ 7 ]
High-pressure homogenization is also applied in other beverage categories, such as soft drinks and vegetable-based drinks, to prevent the separation of components during storage. Ultra-high-pressure homogenization (UHPH) systems have been developed to further enhance microbiological stability and shelf life. [ 8 ] [ 9 ]
High-shear homogenization uses a rotor/stator mechanism to apply intense mechanical shear to a product, promoting dispersion and droplet size reduction. This method is widely used in industries such as food, pharmaceuticals, and cosmetics. Rotor/stator mixers typically achieve droplet sizes in the range of 2–5 microns, with finer distributions possible depending on formulation and processing conditions. [ 10 ]
A key advantage of high-shear homogenization is that it can improve emulsion uniformity and stability without altering formulation components. This is especially important for commercial products with fixed or regulated ingredient profiles. In a 2016 study, applying high-shear homogenization at 3600 rpm significantly reduced droplet size, improved viscosity, and eliminated phase separation in oil-in-water emulsions, all while maintaining the original formula. [ 11 ] | https://en.wikipedia.org/wiki/Homogenization_(chemistry) |
Homogentisic acid ( 2,5-dihydroxyphenylacetic acid ) is a phenolic acid usually found in Arbutus unedo (strawberry-tree) honey. [ 1 ] It is also present in the bacterial plant pathogen Xanthomonas campestris pv. phaseoli [ 2 ] as well as in the yeast Yarrowia lipolytica [ 3 ] where it is associated with the production of brown pigments. It is oxidatively dimerised to form hipposudoric acid , one of the main constituents of the 'blood sweat' of hippopotamuses .
It is less commonly known as melanic acid , the name chosen by William Prout .
Accumulation of excess homogentisic acid and its oxide, named alkapton , is a result of the failure of the enzyme homogentisic acid 1,2-dioxygenase (typically due to a mutation) in the degradative pathway of tyrosine , consequently associated with alkaptonuria . [ 4 ]
It is an intermediate in the catabolism of aromatic amino acids such as phenylalanine and tyrosine .
4-Hydroxyphenylpyruvate (produced by transamination of tyrosine) is acted upon by the enzyme 4-hydroxyphenylpyruvate dioxygenase to yield homogentisate. [ 5 ] If active and present, the enzyme homogentisate 1,2-dioxygenase further degrades homogentisic acid to yield 4-maleylacetoacetic acid . [ 6 ] | https://en.wikipedia.org/wiki/Homogentisic_acid |
In projective geometry , a homography is an isomorphism of projective spaces , induced by an isomorphism of the vector spaces from which the projective spaces derive. [ 1 ] It is a bijection that maps lines to lines, and thus a collineation . In general, some collineations are not homographies, but the fundamental theorem of projective geometry asserts that is not so in the case of real projective spaces of dimension at least two. Synonyms include projectivity , projective transformation , and projective collineation .
Historically, homographies (and projective spaces) have been introduced to study perspective and projections in Euclidean geometry , and the term homography , which, etymologically, roughly means "similar drawing", dates from this time. At the end of the 19th century, formal definitions of projective spaces were introduced, which extended Euclidean and affine spaces by the addition of new points called points at infinity . The term "projective transformation" originated in these abstract constructions. These constructions divide into two classes that have been shown to be equivalent. A projective space may be constructed as the set of the lines of a vector space over a given field (the above definition is based on this version); this construction facilitates the definition of projective coordinates and allows using the tools of linear algebra for the study of homographies. The alternative approach consists in defining the projective space through a set of axioms, which do not involve explicitly any field ( incidence geometry , see also synthetic geometry ); in this context, collineations are easier to define than homographies, and homographies are defined as specific collineations, thus called "projective collineations".
For the sake of simplicity, unless otherwise stated, the projective spaces considered in this article are supposed to be defined over a (commutative) field . Equivalently, Pappus's hexagon theorem and Desargues's theorem are supposed to be true. A large part of the results remains true, or may be generalized, for projective geometries in which these theorems do not hold.
Historically, the concept of homography had been introduced to understand, explain and study visual perspective , and, specifically, the difference in appearance of two plane objects viewed from different points of view.
In three-dimensional Euclidean space, a central projection from a point O (the center) onto a plane P that does not contain O is the mapping that sends a point A to the intersection (if it exists) of the line OA and the plane P . The projection is not defined if the point A belongs to the plane passing through O and parallel to P . The notion of projective space was originally introduced by extending the Euclidean space, that is, by adding points at infinity to it, in order to define the projection for every point except O .
Given another plane Q , which does not contain O , the restriction to Q of the above projection is called a perspectivity .
With these definitions, a perspectivity is only a partial function , but it becomes a bijection if extended to projective spaces. Therefore, this notion is normally defined for projective spaces. The notion is also easily generalized to projective spaces of any dimension, over any field , in the following way:
Given two projective spaces P and Q of dimension n , a perspectivity is a bijection from P to Q that may be obtained by embedding P and Q in a projective space R of dimension n + 1 and restricting to P a central projection onto Q .
If f is a perspectivity from P to Q , and g a perspectivity from Q to P , with a different center, then g ⋅ f is a homography from P to itself, which is called a central collineation , when the dimension of P is at least two. (See § Central collineations below and Perspectivity § Perspective collineations .)
Originally, a homography was defined as the composition of a finite number of perspectivities. [ 2 ] It is a part of the fundamental theorem of projective geometry (see below) that this definition coincides with the more algebraic definition sketched in the introduction and detailed below.
A projective space P( V ) of dimension n over a field K may be defined as the set of the lines through the origin in a K -vector space V of dimension n + 1 . If a basis of V has been fixed, a point of V may be represented by a point ( x 0 , ..., x n ) of K n +1 . A point of P( V ), being a line in V , may thus be represented by the coordinates of any nonzero point of this line, which are thus called homogeneous coordinates of the projective point.
Given two projective spaces P( V ) and P( W ) of the same dimension, a homography is a mapping from P( V ) to P( W ), which is induced by an isomorphism of vector spaces f : V → W . Such an isomorphism induces a bijection from P( V ) to P( W ), because of the linearity of f . Two such isomorphisms, f and g , define the same homography if and only if there is a nonzero element a of K such that g = af .
This may be written in terms of homogeneous coordinates in the following way: A homography φ may be defined by a nonsingular ( n +1) × ( n +1) matrix [ a i , j ], called the matrix of the homography . This matrix is defined up to the multiplication by a nonzero element of K . The homogeneous coordinates [ x 0 : ... : x n ] of a point and the coordinates [ y 0 : ... : y n ] of its image by φ are related by
When the projective spaces are defined by adding points at infinity to affine spaces (projective completion) the preceding formulas become, in affine coordinates,
which generalizes the expression of the homographic function of the next section. This defines only a partial function between affine spaces, which is defined only outside the hyperplane where the denominator is zero.
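The action on homogeneous coordinates, and the induced partial map on affine coordinates, can be sketched numerically. The 3×3 matrix below is an arbitrary illustrative choice, not taken from the article.

```python
import numpy as np

# Sketch of a plane homography (n = 2) acting on homogeneous coordinates:
# a nonsingular 3x3 matrix, defined up to a nonzero scalar.
M = np.array([[1., 2., 0.],
              [0., 1., 1.],
              [1., 0., 1.]])
assert not np.isclose(np.linalg.det(M), 0)   # nonsingular

# Scaling the input only scales the output, so the projective point is
# well defined: M(3p) and M(p) are proportional vectors.
p = np.array([1., -1., 2.])
assert np.allclose(np.cross(M @ (3 * p), M @ p), 0)

# In affine coordinates (x, y) ~ [x : y : 1] the induced map is a partial
# function, undefined where the last homogeneous coordinate vanishes.
def affine(x, y):
    u, v, w = M @ np.array([x, y, 1.])
    if np.isclose(w, 0):
        raise ZeroDivisionError("maps to the hyperplane at infinity")
    return u / w, v / w

x, y = affine(1.0, 2.0)
assert (x, y) == (2.5, 1.5)
```

The `ZeroDivisionError` branch corresponds exactly to the hyperplane where the denominator of the affine formula vanishes.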
The projective line over a field K may be identified with the union of K and a point, called the "point at infinity" and denoted by ∞ (see Projective line ). With this representation of the projective line, the homographies are the mappings
which are called homographic functions or linear fractional transformations .
In the case of the complex projective line , which can be identified with the Riemann sphere , the homographies are called Möbius transformations .
These correspond precisely with those bijections of the Riemann sphere that preserve orientation and are conformal. [ 3 ]
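A homographic function on K ∪ {∞} can be modeled directly, with the point at infinity handled as a separate value. The sentinel `INF` and the particular coefficients are illustrative choices.

```python
from fractions import Fraction

# Homographic function x -> (a x + b)/(c x + d) with ad - bc != 0,
# acting on Q ∪ {∞}. INF is a stand-in for the point at infinity.
INF = "inf"

def homography(a, b, c, d):
    assert a * d - b * c != 0                  # nonsingular
    def f(x):
        if x == INF:
            return Fraction(a, c) if c != 0 else INF
        num, den = a * x + b, c * x + d
        return INF if den == 0 else Fraction(num, den)
    return f

f = homography(1, 1, 1, -1)    # x -> (x + 1)/(x - 1)
assert f(0) == -1
assert f(1) == INF             # the pole is sent to infinity
assert f(INF) == 1             # infinity is sent to a/c
assert f(f(3)) == 3            # this particular map is an involution
```

The two special cases in `f` are exactly what the extension to the projective line adds over the plain rational function.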
In the study of collineations, the case of projective lines is special due to the small dimension. When the line is viewed as a projective space in isolation, any permutation of the points of a projective line is a collineation, [ 4 ] since every set of points is collinear. However, if the projective line is embedded in a higher-dimensional projective space, the geometric structure of that space can be used to impose a geometric structure on the line. Thus, in synthetic geometry, the homographies and the collineations of the projective line that are considered are those obtained by restrictions to the line of collineations and homographies of spaces of higher dimension. This means that the fundamental theorem of projective geometry (see below) remains valid in the one-dimensional setting. A homography of a projective line may also be properly defined by insisting that the mapping preserves cross-ratios . [ 5 ]
A projective frame or projective basis of a projective space of dimension n is an ordered set of n + 2 points such that no hyperplane contains n + 1 of them. A projective frame is sometimes called a simplex , [ 6 ] although a simplex in a space of dimension n has at most n + 1 vertices.
Projective spaces over a commutative field K are considered in this section, although most results may be generalized to projective spaces over a division ring .
Let P ( V ) be a projective space of dimension n , where V is a K -vector space of dimension n + 1 , and p : V ∖ {0} → P ( V ) be the canonical projection that maps a nonzero vector to the vector line that contains it.
For every frame of P ( V ) , there exists a basis e 0 , ..., e n of V such that the frame is ( p ( e 0 ), ..., p ( e n ), p ( e 0 + ... + e n )) , and this basis is unique up to the multiplication of all its elements by the same nonzero element of K . Conversely, if e 0 , ..., e n is a basis of V , then ( p ( e 0 ), ..., p ( e n ), p ( e 0 + ... + e n )) is a frame of P ( V ) .
It follows that, given two frames, there is exactly one homography mapping the first one onto the second one. In particular, the only homography fixing the points of a frame is the identity map . This result is much more difficult in synthetic geometry (where projective spaces are defined through axioms). It is sometimes called the first fundamental theorem of projective geometry . [ 7 ]
Every frame ( p ( e 0 ), ..., p ( e n ), p ( e 0 + ... + e n )) allows one to define projective coordinates , also known as homogeneous coordinates : every point may be written as p ( v ) ; the projective coordinates of p ( v ) on this frame are the coordinates of v on the basis ( e 0 , ..., e n ) . It is not difficult to verify that changing the e i and v , without changing the frame nor p ( v ), results in multiplying the projective coordinates by the same nonzero element of K .
The projective space P n ( K ) = P ( K n +1 ) has a canonical frame consisting of the image by p of the canonical basis of K n +1 (consisting of the elements having only one nonzero entry, which is equal to 1), and (1, 1, ..., 1) . On this basis, the homogeneous coordinates of p ( v ) are simply the entries (coefficients) of the tuple v . Given another projective space P ( V ) of the same dimension, and a frame F of it, there is one and only one homography h mapping F onto the canonical frame of P n ( K ) . The projective coordinates of a point a on the frame F are the homogeneous coordinates of h ( a ) on the canonical frame of P n ( K ) .
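Recovering a basis from a frame amounts to scaling representatives of the first n + 1 points so that they sum to a representative of the last point. The sketch below does this for a hypothetical frame of the projective plane; the four points are made up for illustration.

```python
import numpy as np

# From a projective frame of P^2 (four points, no three on a common line),
# recover a basis of K^3 realizing it: solve for scalars making the first
# three representatives sum to the fourth.
frame = [np.array([1., 0., 1.]),
         np.array([0., 1., 1.]),
         np.array([1., 1., 0.]),
         np.array([2., 3., 4.])]       # hypothetical frame points

E = np.column_stack(frame[:3])
lam = np.linalg.solve(E, frame[3])     # unique solution: E is invertible
assert not np.isclose(lam, 0).any()    # all nonzero, by the frame condition
basis = [l * e for l, e in zip(lam, frame[:3])]

# The scaled vectors form the basis whose associated frame is the given one.
assert np.allclose(sum(basis), frame[3])
```

Rescaling the whole basis by a common nonzero scalar gives another valid solution, matching the uniqueness-up-to-scale statement above.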
In the above sections, homographies have been defined through linear algebra. In synthetic geometry , they are traditionally defined as the composition of one or several special homographies called central collineations . It is a part of the fundamental theorem of projective geometry that the two definitions are equivalent.
In a projective space, P , of dimension n ≥ 2 , a collineation of P is a bijection from P onto P that maps lines onto lines. A central collineation (traditionally these were called perspectivities , [ 8 ] but this term may be confusing, having another meaning; see Perspectivity ) is a bijection α from P to P , such that there exists a hyperplane H (called the axis of α ), which is fixed pointwise by α (that is, α ( X ) = X for all points X in H ) and a point O (called the center of α ), which is fixed linewise by α (any line through O is mapped to itself by α , but not necessarily pointwise). [ 9 ] There are two types of central collineations. Elations are the central collineations in which the center is incident with the axis and homologies are those in which the center is not incident with the axis. A central collineation is uniquely defined by its center, its axis, and the image α ( P ) of any given point P that differs from the center O and does not belong to the axis. (The image α ( Q ) of any other point Q is the intersection of the line defined by O and Q and the line passing through α ( P ) and the intersection with the axis of the line defined by P and Q .)
A central collineation is a homography defined by an ( n +1) × ( n +1) matrix that has an eigenspace of dimension n . It is a homology if the matrix has a second eigenvalue and is therefore diagonalizable . It is an elation if all the eigenvalues are equal and the matrix is not diagonalizable.
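The eigen-structure criterion for homologies versus elations can be checked numerically in the plane. The two matrices below are standard illustrative examples, not taken from the article.

```python
import numpy as np

# Plane case (n = 2): a central collineation is a 3x3 matrix with an
# eigenspace of dimension 2 (the pointwise-fixed axis).
homology = np.diag([2., 1., 1.])         # second eigenvalue 2: diagonalizable
elation = np.array([[1., 0., 1.],
                    [0., 1., 0.],
                    [0., 0., 1.]])       # all eigenvalues 1: a shear, not diagonalizable

def eigenspace_dim(M, ev):
    return 3 - np.linalg.matrix_rank(M - ev * np.eye(3))

# Both have a 2-dimensional eigenspace for eigenvalue 1: the fixed axis.
assert eigenspace_dim(homology, 1.0) == 2
assert eigenspace_dim(elation, 1.0) == 2
# The elation has no second eigenvalue, so it cannot be diagonalized.
assert np.allclose(np.linalg.eigvals(elation), 1.0)
```

For the elation above, the center [1 : 0 : 0] lies on the axis x2 = 0, consistent with the incidence characterization of elations.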
The geometric view of a central collineation is easiest to see in a projective plane. Given a central collineation α , consider a line ℓ that does not pass through the center O , and its image under α , ℓ ′ = α (ℓ) . Setting R = ℓ ∩ ℓ ′ , the axis of α is some line M through R . The image of any point A of ℓ under α is the intersection of OA with ℓ ′ . The image B ′ of a point B that does not belong to ℓ may be constructed in the following way: let S = AB ∩ M , then B ′ = SA ′ ∩ OB .
The composition of two central collineations, while still a homography, is in general not a central collineation. In fact, every homography is the composition of a finite number of central collineations. In synthetic geometry, this property, which is a part of the fundamental theorem of projective geometry, is taken as the definition of homographies. [ 10 ]
There are collineations besides the homographies. In particular, any field automorphism σ of a field F induces a collineation of every projective space over F by applying σ to all homogeneous coordinates (over a projective frame) of a point. These collineations are called automorphic collineations .
The fundamental theorem of projective geometry consists of the three following theorems.
If projective spaces are defined by means of axioms ( synthetic geometry ), the third part is simply a definition. On the other hand, if projective spaces are defined by means of linear algebra , the first part is an easy corollary of the definitions. Therefore, the proof of the first part in synthetic geometry, and the proof of the third part in terms of linear algebra both are fundamental steps of the proof of the equivalence of the two ways of defining projective spaces.
As every homography has an inverse mapping and the composition of two homographies is another homography, the homographies of a given projective space form a group . For example, the Möbius group is the homography group of any complex projective line.
As all the projective spaces of the same dimension over the same field are isomorphic, the same is true for their homography groups. They are therefore considered as a single group acting on several spaces, and only the dimension and the field appear in the notation, not the specific projective space.
Homography groups, also called projective linear groups , are denoted PGL( n + 1, F ) when acting on a projective space of dimension n over a field F . The above definition of homographies shows that PGL( n + 1, F ) may be identified with the quotient group GL( n + 1, F ) / F × I , where GL( n + 1, F ) is the general linear group of the invertible matrices , and F × I is the group of the products by a nonzero element of F of the identity matrix of size ( n + 1) × ( n + 1) .
When F is a Galois field GF( q ) then the homography group is written PGL( n , q ) . For example, PGL(2, 7) acts on the eight points in the projective line over the finite field GF(7), while PGL(2, 4) , which is isomorphic to the alternating group A 5 , is the homography group of the projective line with five points. [ 12 ]
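The order of PGL(2, q) can be verified by brute force for small prime fields: count invertible 2×2 matrices and divide by the q − 1 scalar matrices. (The sketch below is restricted to prime q, so GF( q ) is just the integers mod q ; extension fields like GF(4) would need more machinery.)

```python
from itertools import product

# Brute-force order of PGL(2, p) for a prime p: invertible 2x2 matrices
# over GF(p) = Z/pZ, modulo the p - 1 nonzero scalar matrices.
def pgl2_order(p):
    gl = [m for m in product(range(p), repeat=4)
          if (m[0] * m[3] - m[1] * m[2]) % p != 0]
    return len(gl) // (p - 1)

assert pgl2_order(7) == 336   # PGL(2, 7), acting on the 8 points of P^1(GF(7))
assert pgl2_order(5) == 120
assert pgl2_order(2) == 6     # = S3, permuting the 3 points of P^1(GF(2))
```

These counts agree with the closed form |PGL(2, q)| = (q² − 1)(q² − q)/(q − 1) = q(q − 1)(q + 1).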
The homography group PGL( n + 1, F ) is a subgroup of the collineation group PΓL( n + 1, F ) of the collineations of a projective space of dimension n . When the points and lines of the projective space are viewed as a block design , whose blocks are the sets of points contained in a line, it is common to call the collineation group the automorphism group of the design .
The cross-ratio of four collinear points is an invariant under the homography that is fundamental for the study of the homographies of the lines.
Three distinct points a , b and c on a projective line over a field F form a projective frame of this line. There is therefore a unique homography h of this line onto F ∪ {∞} that maps a to ∞ , b to 0, and c to 1. Given a fourth point on the same line, the cross-ratio of the four points a , b , c and d , denoted [ a , b ; c , d ] , is the element h ( d ) of F ∪ {∞} . In other words, if d has homogeneous coordinates [ k : 1] over the projective frame ( a , b , c ) , then [ a , b ; c , d ] = k . [ 13 ]
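For four distinct finite points, the homography h above gives a closed form for the cross-ratio, and its invariance under homographies can be checked with exact rational arithmetic. The sample points and the sample homography below are arbitrary illustrative choices.

```python
from fractions import Fraction

# Cross-ratio via the convention of the text: h sends a -> ∞, b -> 0,
# c -> 1, and [a, b; c, d] = h(d), i.e. ((d - b)(c - a)) / ((d - a)(c - b)).
def cr(a, b, c, d):
    return Fraction((d - b) * (c - a), (d - a) * (c - b))

a, b, c, d = map(Fraction, (0, 1, 3, 4))
k = cr(a, b, c, d)

def f(x):                           # a sample homography x -> (2x + 1)/(x + 3)
    return (2 * x + 1) / (x + 3)    # determinant 2*3 - 1*1 = 5, nonzero

# The cross-ratio is invariant under homographies of the line.
assert cr(f(a), f(b), f(c), f(d)) == k
assert k == Fraction(9, 8)
```

One can check the defining properties directly from the formula: substituting d = b gives 0 and d = c gives 1, matching h(b) = 0 and h(c) = 1.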
Suppose A is a ring and U is its group of units . Homographies act on a projective line over A , written P( A ), consisting of points U [ a, b ] with projective coordinates . The homographies on P( A ) are described by matrix mappings
When A is a commutative ring , the homography may be written
but otherwise the linear fractional transformation is seen as an equivalence:
The homography group of the ring of integers Z is the modular group PSL(2, Z ) . Ring homographies have been used in quaternion analysis , and with dual quaternions to facilitate screw theory . The conformal group of spacetime can be represented with homographies where A is the composition algebra of biquaternions . [ 14 ]
The homography h = \begin{pmatrix}1&1\\0&1\end{pmatrix} is periodic when the ring is Z / n Z (the integers modulo n ), since then h^n = \begin{pmatrix}1&n\\0&1\end{pmatrix} = \begin{pmatrix}1&0\\0&1\end{pmatrix}. Arthur Cayley was interested in periodicity when he calculated iterates in 1879. [ 15 ] In his review of a brute force approach to periodicity of homographies, H. S. M. Coxeter gave this analysis: | https://en.wikipedia.org/wiki/Homography
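The periodicity of the unit shear over Z/ n Z is easy to confirm by repeated matrix multiplication mod n:

```python
import numpy as np

# Over Z/nZ, powers of h = [[1, 1], [0, 1]] satisfy h^k = [[1, k], [0, 1]],
# so h^n reduces to the identity and h has period exactly n.
def mat_pow_mod(m, k, n):
    r = np.eye(2, dtype=int)
    for _ in range(k):
        r = (r @ m) % n
    return r

h = np.array([[1, 1], [0, 1]])
for n in (2, 5, 12):
    assert (mat_pow_mod(h, n, n) == np.eye(2, dtype=int)).all()
    # No smaller power works: h^(n-1) still has a nonzero upper-right entry.
    assert not (mat_pow_mod(h, n - 1, n) == np.eye(2, dtype=int)).all()
```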
In the field of computer vision , any two images of the same planar surface in space are related by a homography (assuming a pinhole camera model ). This has many practical applications, such as image rectification , image registration , or camera motion—rotation and translation—between two images. Once camera resectioning has been done from an estimated homography matrix, this information may be used for navigation, or to insert models of 3D objects into an image or video, so that they are rendered with the correct perspective and appear to have been part of the original scene (see Augmented reality ).
We have two cameras a and b , looking at points P_i in a plane.
Passing from the projection p_i^b = (u_i^b ; v_i^b ; 1) of P_i in b to the projection p_i^a = (u_i^a ; v_i^a ; 1) of P_i in a :
where z_i^a and z_i^b are the z coordinates of P_i in each camera frame and where the homography matrix H_ab is given by
R is the rotation matrix by which b is rotated in relation to a ; t is the translation vector from a to b ; n and d are the normal vector of the plane and the distance from the origin to the plane, respectively. K_a and K_b are the cameras' intrinsic parameter matrices.
The figure shows camera b looking at the plane at distance d .
Note: From the above figure, taking n^T P_i + d = 0 as the plane model, n^T P_i is the projection of the vector P_i along n , and equals − d . So t = t · 1 = t (− n^T P_i / d ) , and we have H_ab P_i = R P_i + t , where H_ab = R − t n^T / d .
This formula is only valid if camera b has no rotation and no translation. In the general case, where R_a , R_b and t_a , t_b are the respective rotations and translations of cameras a and b , R = R_a R_b^T , and the homography matrix H_ab becomes
where d is the distance of the camera b to the plane.
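A numeric sanity check of the planar homography is straightforward: project a plane point in both cameras and confirm the homography maps one image point to the other. All numbers below are made up for illustration, and the sketch uses the sign convention n^T P = d for plane points (the text's minus sign corresponds to the opposite convention n^T P + d = 0).

```python
import numpy as np

# Planar homography H_ab = K_a (R + t n^T / d) K_b^{-1}, with the convention
# that points on the plane satisfy n^T P = d in camera b's frame.
K = np.array([[800., 0., 320.],
              [0., 800., 240.],
              [0., 0., 1.]])            # shared intrinsics K_a = K_b (assumed)

th = 0.1                                # relative pose of camera a w.r.t. b
R = np.array([[np.cos(th), 0., np.sin(th)],
              [0., 1., 0.],
              [-np.sin(th), 0., np.cos(th)]])
t = np.array([0.2, 0., 0.05])

n = np.array([0., 0., 1.])              # plane normal in b's frame
d = 5.0                                 # plane: n^T P = d, i.e. z = 5

H = K @ (R + np.outer(t, n) / d) @ np.linalg.inv(K)

P = np.array([1.0, -0.5, 5.0])          # a point on the plane
pb = K @ P
pb /= pb[2]                             # pixel coordinates in camera b
pa = K @ (R @ P + t)
pa /= pa[2]                             # pixel coordinates in camera a
pa_from_H = H @ pb
pa_from_H /= pa_from_H[2]

# For points on the plane, the homography maps b's image point to a's.
assert np.allclose(pa, pa_from_H)
```

Points off the plane would fail this check, which is exactly why a single homography relates two views only of a planar scene.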
When the image region in which the homography is computed is small or the image has been acquired with a large focal length, an affine homography is a more appropriate model of image displacements. An affine homography is a special type of a general homography whose last row is fixed to | https://en.wikipedia.org/wiki/Homography_(computer_vision) |
Homokaryotic (adj.) is a term used to refer to multinucleate cells in which all nuclei are genetically identical. In multinucleate cells, the nuclei share one common cytoplasm , as is found in the hyphal cells or mycelium of filamentous fungi .
| https://en.wikipedia.org/wiki/Homokaryotic
In inorganic chemistry , a homoleptic chemical compound is a metal compound with all ligands identical. [ 1 ] The term uses the " homo- " prefix to indicate that something is the same for all. Any metal species which has more than one type of ligand is heteroleptic .
Some compounds with names that suggest they are homoleptic are in fact heteroleptic, because they carry ligands that are not featured in the name. For instance, the dialkyl magnesium complexes found in the equilibrium that exists in an ether solution of a Grignard reagent have two ether ligands attached to each magnesium centre. Another example is a solution of trimethyl aluminium in an ether solvent (such as THF ); similar chemistry should be expected for a triaryl or trialkyl borane .
It is possible for some ligands such as DMSO to bind with two or more different coordination modes. It would still be reasonable to consider a complex which has only one type of ligand but with different coordination modes to be homoleptic. For example, the complex dichlorotetrakis(dimethyl sulfoxide)ruthenium(II) features DMSO coordinating via both sulfur and oxygen atoms (though this is not homoleptic since there are also chloride ligands).
| https://en.wikipedia.org/wiki/Homoleptic_and_heteroleptic_compounds
HomoloGene , a tool of the United States National Center for Biotechnology Information (NCBI), is a system for automated detection of homologs (similarity attributable to descent from a common ancestor) among the annotated genes of several completely sequenced eukaryotic genomes. [ 1 ]
HomoloGene processing begins with analysis of the proteins of the input organisms. Sequences are compared using blastp, then matched up and placed into groups using a taxonomic tree built from sequence similarity, in which more closely related organisms are matched first and more distant organisms are then added to the tree. The protein alignments are mapped back to their corresponding DNA sequences, after which distance metrics such as the Jukes and Cantor (1969) molecular distance and the Ka/Ks ratio can be calculated.
The sequences are matched up using a heuristic algorithm that maximizes the score globally, rather than locally, in a bipartite matching (see complete bipartite graph ); the statistical significance of each match is then calculated. Cutoffs per position and on Ks values are set to prevent false "orthologs" from being grouped together. "Paralogs" are identified as sequences that are closer to each other within a species than to sequences in other species.
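The Jukes and Cantor (1969) distance mentioned above has a simple closed form, d = -(3/4) ln(1 - (4/3) p), where p is the observed proportion of differing sites between two aligned sequences. A minimal sketch (the aligned sequences here are invented for illustration):

```python
# Sketch of the Jukes-Cantor (1969) distance for two aligned DNA sequences.
import math

def jukes_cantor(seq_a, seq_b):
    """Estimated substitutions per site under the Jukes-Cantor model."""
    assert len(seq_a) == len(seq_b), "sequences must be aligned"
    diffs = sum(1 for a, b in zip(seq_a, seq_b) if a != b)
    p = diffs / len(seq_a)              # observed proportion of differences
    return -0.75 * math.log(1.0 - (4.0 / 3.0) * p)

# 3 mismatches out of 10 sites -> p = 0.3
print(round(jukes_cantor("ACGTACGTAC", "ACGTTCGGAA"), 4))   # -> 0.3831
```

Note that the estimated distance (0.3831) exceeds the raw difference fraction (0.3) because the model corrects for multiple substitutions at the same site.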
This resource ceased making updates in 2014. [ 2 ]
" Homo sapiens , Pan troglodytes , Mus musculus , Rattus norvegicus , Canis lupus familiaris , Bos taurus , Gallus gallus , Xenopus tropicalis , Danio rerio "
" Drosophila melanogaster , Anopheles gambiae , Caenorhabditis elegans "
" Saccharomyces cerevisiae , Schizosaccharomyces pombe , Kluyveromyces lactis , Eremothecium gossypii , Magnaporthe grisea , Neurospora crassa "
" Arabidopsis thaliana "
" Oryza sativa "
" Plasmodium falciparum ".
HomoloGene is linked to all Entrez databases and is based on the homology and phenotype information from these links:
As a result, HomoloGene displays information about Genes, Proteins, Phenotypes, and Conserved Domains. | https://en.wikipedia.org/wiki/HomoloGene |
Homological mirror symmetry is a mathematical conjecture made by Maxim Kontsevich . It seeks a systematic mathematical explanation for a phenomenon called mirror symmetry first observed by physicists studying string theory .
In an address to the 1994 International Congress of Mathematicians in Zürich , Kontsevich (1994) speculated that mirror symmetry for a pair of Calabi–Yau manifolds X and Y could be explained as an equivalence of a triangulated category constructed from the algebraic geometry of X (the derived category of coherent sheaves on X ) and another triangulated category constructed from the symplectic geometry of Y (the derived Fukaya category ).
Edward Witten originally described the topological twisting of the N=(2,2) supersymmetric field theory into what he called the A and B model topological string theories [ citation needed ] . These models concern maps from Riemann surfaces into a fixed target—usually a Calabi–Yau manifold. Most of the mathematical predictions of mirror symmetry are embedded in the physical equivalence of the A-model on Y with the B-model on its mirror X . When the Riemann surfaces have empty boundary, they represent the worldsheets of closed strings. To cover the case of open strings, one must introduce boundary conditions to preserve the supersymmetry. In the A-model, these boundary conditions come in the form of Lagrangian submanifolds of Y with some additional structure (often called a brane structure). In the B-model, the boundary conditions come in the form of holomorphic (or algebraic) submanifolds of X with holomorphic (or algebraic) vector bundles on them. These are the objects one uses to build the relevant categories [ citation needed ] . They are often called A and B branes respectively. Morphisms in the categories are given by the massless spectrum of open strings stretching between two branes [ citation needed ] .
The closed string A and B models only capture the so-called topological sector—a small portion of the full string theory. Similarly, the branes in these models are only topological approximations to the full dynamical objects that are D-branes . Even so, the mathematics resulting from this small piece of string theory has been both deep and difficult.
The School of Mathematics at the Institute for Advanced Study in Princeton devoted a whole year to Homological Mirror Symmetry during the 2016-17 academic year. Among the participants were Paul Seidel from MIT , Maxim Kontsevich from IHÉS , and Denis Auroux, from UC Berkeley . [ 1 ]
Only in a few examples have mathematicians been able to verify the conjecture. In his seminal address, Kontsevich commented that the conjecture could be proved in the case of elliptic curves using theta functions . Following this route, Alexander Polishchuk and Eric Zaslow provided a proof of a version of the conjecture for elliptic curves. Kenji Fukaya was able to establish elements of the conjecture for abelian varieties . Later, Kontsevich and Yan Soibelman provided a proof of the majority of the conjecture for nonsingular torus bundles over affine manifolds using ideas from the SYZ conjecture . In 2003, Paul Seidel proved the conjecture in the case of the quartic surface . In 2002, Hausel & Thaddeus (2002) explained the SYZ conjecture in the context of the Hitchin system and Langlands duality.
The dimensions h p , q of spaces of harmonic ( p , q )-differential forms (equivalently, the cohomology, i.e., closed forms modulo exact forms) are conventionally arranged in a diamond shape called the Hodge diamond . These (p,q)-Betti numbers can be computed for complete intersections using a generating function described by Friedrich Hirzebruch . [ 2 ] [ 3 ] [ 4 ] For a three-dimensional manifold, for example, the Hodge diamond has p and q ranging from 0 to 3:
Mirror symmetry translates the dimension h p , q of the space of (p, q)-differential forms on the original manifold into h n−p , q on the mirror-pair manifold. Namely, for any Calabi–Yau manifold the Hodge diamond is unchanged by a rotation by π radians, and the Hodge diamonds of mirror Calabi–Yau manifolds are related by a rotation by π/2 radians.
In the case of an elliptic curve , which is viewed as a 1-dimensional Calabi–Yau manifold, the Hodge diamond is especially simple: it is the following figure.
In the case of a K3 surface , which is viewed as 2-dimensional Calabi–Yau manifold, since the Betti numbers are {1, 0, 22, 0, 1}, their Hodge diamond is the following figure.
In the 3-dimensional case, the case usually meant by the term Calabi–Yau manifold , a very interesting thing happens. There sometimes exist mirror pairs, say M and W , whose Hodge diamonds are symmetric to each other along a diagonal line.
M' s diamond:
W' s diamond:
M and W correspond to A- and B-model in string theory. Mirror symmetry not only replaces the homological dimensions but also the symplectic structure and complex structure on the mirror pairs. That is the origin of homological mirror symmetry.
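The rotation rule can be made concrete for the quintic threefold, whose nonzero Hodge numbers are h^{0,0} = h^{3,0} = h^{0,3} = h^{3,3} = h^{1,1} = h^{2,2} = 1 and h^{2,1} = h^{1,2} = 101. A small sketch applying the mirror map h^{p,q} -> h^{n-p,q} for n = 3:

```python
# Sketch: applying the mirror rule h^{p,q}(W) = h^{n-p,q}(M) to the
# Hodge diamond of the quintic threefold (n = 3).

n = 3
hodge_M = {(0, 0): 1, (3, 3): 1, (3, 0): 1, (0, 3): 1,
           (1, 1): 1, (2, 2): 1, (2, 1): 101, (1, 2): 101}

def mirror(hodge, n):
    """Apply h^{p,q} -> h^{n-p,q} to every entry of a Hodge diamond."""
    return {(n - p, q): v for (p, q), v in hodge.items()}

hodge_W = mirror(hodge_M, n)

# The mirror swaps h^{1,1} and h^{2,1}, exchanging the two moduli counts.
print(hodge_W[(1, 1)], hodge_W[(2, 1)])   # -> 101 1
```

The swap of h^{1,1} (Kähler moduli) and h^{2,1} (complex structure moduli) is exactly the exchange of A- and B-model data described above.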
In 1990–1991, the work of Candelas et al. 1991 had a major impact not only on enumerative algebraic geometry but on mathematics as a whole, and motivated Kontsevich (1994) . The mirror pair of two quintic threefolds in this paper have the following Hodge diamonds. | https://en.wikipedia.org/wiki/Homological_mirror_symmetry
Homologous desensitization occurs when a receptor decreases its response to an agonist at high concentration. [ 1 ] It is a process through which, after prolonged agonist exposure, the receptor is uncoupled from its signaling cascade and thus the cellular effect of receptor activation is attenuated. [ 2 ]
Homologous desensitization is distinguished from heterologous desensitization , a process in which repeated stimulation of a receptor by an agonist results in desensitization of the stimulated receptor as well as other, usually inactive, receptors on the same cell. They are sometimes denoted as agonist-dependent and agonist-independent desensitization respectively. While heterologous desensitization occurs rapidly at low agonist concentrations, homologous desensitization shows a dose dependent response and usually begins at significantly higher concentrations. [ 3 ] [ 4 ]
Homologous desensitization serves as a mechanism for tachyphylaxis and helps organisms to maintain homeostasis . The process of homologous desensitization has been extensively studied utilizing G protein–coupled receptors (GPCRs). [ 3 ] [ 5 ] While the different mechanisms for desensitization are still being characterized, there are currently four known mechanisms: uncoupling of receptors from associated G proteins , endocytosis , degradation , and downregulation . The degradation and downregulation of receptors is often also associated with drug tolerance since it has a longer onset, from hours to days. [ 6 ] It has been shown that these mechanisms can happen independently of one another, but that they also influence one another. In addition, the same receptor expressed in different cell types can be desensitized by different mechanisms. [ 5 ]
For GPCRs generally, each mechanism of homologous desensitization begins with receptor phosphorylation by an associated G protein-coupled receptor kinase (GRK). GRKs selectively modify activated receptors such that no heterogeneous desensitization will occur. This phosphorylation then acts to recruit other proteins, such as arrestins , that participate in one or more of the following mechanisms.
Receptor uncoupling/phosphorylation is the most rapid form of desensitization that happens within a cell, as its effects are seen within seconds to minutes of agonist application. [ 5 ] The β 2 adrenergic receptor was the first to have its desensitization studied and characterized. The mechanism of desensitization involves the action of a specific GRK, denoted βARK , and also β-arrestins. The β-arrestins show high affinity for receptors that are both phosphorylated and activated, but are still able to bind non-phosphorylated receptors with a lower affinity. Additionally, β-arrestins are better at inactivating βARK-phosphorylated receptors than protein kinase A -phosphorylated receptors, which suggests that the arrestins preferentially mediate homologous desensitization. [ 6 ]
The mechanism of homologous desensitization for the β 2 receptor is as follows:
In contrast to receptor uncoupling, endocytosis can occur through multiple pathways. GPCR endocytosis has been shown to be either dependent or independent of arrestin activity, depending on the cell type used in the experiment; however, the former is more common. Furthermore, the same receptor expressed in two distinct cell types can be endocytosed through different mechanisms due to differences in GRK and arrestin expression: either through clathrin-coated vesicles or caveolae . [ 4 ] In general, receptor sequestration preferentially affects receptors that are both activated and phosphorylated, but the phosphorylation is not always a necessary component of endocytosis. After being sequestered, the affected receptors can either be degraded by lysosomes or reinserted into the plasma membrane , which is called receptor recycling. [ 5 ]
Post-translational modification also affects receptor endocytosis. For example, different glycosylations on the exterior N-terminus of dopamine receptors D 2 and D 3 were associated with specific endocytotic pathways. Additionally, palmitoylation , which primarily mediates receptor localization within the membrane, can also affect endocytosis. It is required for the endocytosis of thyrotropin-releasing hormone and D 3 receptors, and it is inhibitory for luteinizing hormone and vasopressin 1A receptors. It has been shown to have no effect on adrenergic receptors (specifically β 2 and α 1 ). [ 3 ]
In organic chemistry , a homologous series is a sequence of compounds with the same functional group and similar chemical properties in which the members of the series differ by the number of repeating units they contain. [ 1 ] [ 2 ] This can be the length of a carbon chain , [ 2 ] for example in the straight-chained alkanes (paraffins), or it could be the number of monomers in a homopolymer such as amylose . [ 3 ] A homologue (also spelled as homolog ) is a compound belonging to a homologous series. [ 1 ]
Compounds within a homologous series typically have a fixed set of functional groups that gives them similar chemical and physical properties . (For example, the series of primary straight-chained alcohols has a hydroxyl at the end of the carbon chain .) These properties typically change gradually along the series, and the changes can often be explained by mere differences in molecular size and mass. The name "homologous series" is also often used for any collection of compounds that have similar structures or include the same functional group, such as the general alkanes (straight and branched), the alkenes (olefins), the carbohydrates , etc. However, if the members cannot be arranged in a linear order by a single parameter, the collection may be better called a "chemical family" or "class of homologous compounds" than a "series".
The concept of homologous series was proposed in 1843 by the French chemist Charles Gerhardt . [ 4 ] A homologation reaction is a chemical process that converts one member of a homologous series to the next member.
The homologous series of straight-chained alkanes begins with methane (CH 4 ), ethane (C 2 H 6 ), propane (C 3 H 8 ), butane (C 4 H 10 ), and pentane (C 5 H 12 ). In that series, successive members differ by an extra methylene bridge (-CH 2 - unit) inserted in the chain, so the molecular mass of each member differs from the next by 14 atomic mass units. Adjacent members in such a series, such as methane and ethane, are known as "adjacent homologues". [ 5 ]
Within that series, many physical properties such as boiling point gradually change with increasing mass. For example, ethane (C 2 H 6 ), has a higher boiling point than methane (CH 4 ). This is because the London dispersion forces between ethane molecules are higher than that between methane molecules, resulting in stronger forces of intermolecular attraction, raising the boiling point.
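The constant mass step between successive alkanes can be checked directly from standard atomic masses (C = 12.011 u, H = 1.008 u); a small sketch:

```python
# Sketch: molecular masses of the first straight-chain alkanes C_nH_{2n+2},
# showing the constant ~14 u step corresponding to one -CH2- unit.
C, H = 12.011, 1.008    # standard atomic masses in u

def alkane_mass(n):
    """Molecular mass of the straight-chain alkane with n carbons."""
    return n * C + (2 * n + 2) * H

names = ["methane", "ethane", "propane", "butane", "pentane"]
masses = [alkane_mass(n) for n in range(1, 6)]
for name, m in zip(names, masses):
    print(f"{name}: {m:.3f} u")

# successive members differ by the mass of one CH2 unit (12.011 + 2*1.008)
steps = [round(b - a, 3) for a, b in zip(masses, masses[1:])]
print(steps)   # -> [14.027, 14.027, 14.027, 14.027]
```

The step of 14.027 u is the mass of one methylene unit, matching the "14 atomic mass units" figure quoted above.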
Some important classes of organic molecules are derivatives of alkanes: the primary alcohols , aldehydes , and (mono) carboxylic acids form series analogous to the alkanes. The corresponding homologous series of primary straight-chained alcohols comprises methanol (CH 4 O), ethanol (C 2 H 6 O), 1-propanol (C 3 H 8 O), 1-butanol , and so on. The single-ring cycloalkanes form another such series, starting with cyclopropane .
Biopolymers also form homologous series, for example the polymers of glucose such as cellulose oligomers [ 6 ] starting with cellobiose , or the series of amylose oligomers starting with maltose , which are sometimes called maltooligomers. [ 7 ] Homooligopeptides, oligopeptides made up of repetitions of only one amino acid can also be studied as homologous series. [ 8 ]
Homologous series are not unique to organic chemistry . Titanium , vanadium , and molybdenum oxides all form homologous series (e.g. V n O 2 n − 1 for 2 < n < 10), called Magnéli phases , [ 9 ] as do the silanes , Si n H 2 n + 2 (with n up to 8) that are analogous to the alkanes, C n H 2 n + 2 .
On the periodic table , homologous elements share many electrochemical properties and appear in the same group (column) of the table. For example, all noble gases are colorless, monatomic gases with very low reactivity. These similarities are due to similar structure in their outer shells of valence electrons . [ 10 ] Mendeleev used the prefix eka- for an unknown element below a known one in the same group. | https://en.wikipedia.org/wiki/Homologous_series |
Somatic pairing of homologous chromosomes is similar to pre- and early meiotic pairing (see article: Homologous chromosome#In meiosis ), and has been observed in Diptera [ 1 ] ( Drosophila ), and budding yeast , [ 2 ] for example (whether it evolved multiple times in metazoans is unclear [ 3 ] ). Mammals show little pairing apart from in germline cells, taking place at specific loci, and under the control of developmental signalling (understood as a subset of other long-range interchromosomal interactions such as looping, and organisation into chromosomal territories ). [ 4 ]
While meiotic pairing has been extensively studied, the role of somatic pairing has remained less well understood, and even whether it is mechanistically related to meiotic pairing is unknown. [ 5 ]
The first review of somatic pairing was made by Metz in 1916, [ 1 ] citing the first descriptions of pairing made in 1907 [ 6 ] and 1908 by N. M. Stevens in germline cells, who noted: [ 7 ]
“it may therefore be true that pairing of homologous chromosomes occurs in connection with each mitosis throughout the life history of these insects” (p.215)
Stevens noted the potential for communication and a role in heredity. [ 7 ]
While meiotic homologous pairing subsequently became well studied, somatic pairing remained neglected due to what has been described as "limitations in cytological tools for measuring pairing and genetic tools for perturbing pairing dynamics". [ 5 ]
In 1998 it was determined that homologous pairing in Drosophila occurs through independent initiations (as opposed to a directed, 'processive zippering' motion). [ 4 ] [ 8 ]
The first RNAi screen (based on DNA FISH [ 9 ] ) was carried out to identify genes regulating D. melanogaster somatic pairing in 2012, [ 10 ] described at the time as providing "an extensive “parts list” of mostly novel factors". These comprised 40 pairing promoting genes and 65 'anti-pairing' genes (of which 2 and 1 were already known, respectively), many of which have human orthologs . [ 5 ]
An earlier RNAi screen in 2007 showed the disruption of Topoisomerase II activity impairs somatic pairing within Drosophila tissue culture, [ 11 ] indicating a role for topoisomerase-mediated organisation (or the direct interactions of topoisomerase enzymes) in pairing. [ 4 ] Condensin (despite dependent interactions with Topoisomerase II) is antagonistic to Drosophila homologous pairing. [ 12 ] | https://en.wikipedia.org/wiki/Homologous_somatic_pairing |
Homology in psychology , as in biology , refers to a relationship between characteristics that reflects the characteristics' origins in either evolution or development. Homologous behaviors can theoretically be of at least two different varieties. [ 1 ] As with homologous anatomical characteristics , behaviors present in different species can be considered homologous if they are likely present in those species because the behaviors were present in a common ancestor of the two species. Alternatively, in much the same way as reproductive structures (e.g., the penis and the clitoris) are considered homologous because they share a common origin in embryonic tissues, [ 2 ] behaviors—or the neural substrates associated with those behaviors [ 3 ] —can also be considered homologous if they share common origins in development.
Behavioral homologies have been considered since at least 1958, when Konrad Lorenz studied the evolution of behavior. [ 4 ] More recently, the question of behavioral homologies has been addressed by philosophers of science such as Marc Ereshefsky , [ 5 ] [ 6 ] psychologists such as Drew Rendall, [ 7 ] and neuroscientists such as Georg Striedter and Glenn Northcutt. [ 8 ] It is debatable whether the concept of homology is useful in developmental psychology . [ 9 ] [ 10 ] [ 11 ]
For example, D. W. Rajecki and Randall C. Flanery, using data on humans and on nonhuman primates , argue that patterns of behaviour in dominance hierarchies are homologous across the primates. [ 12 ] | https://en.wikipedia.org/wiki/Homology_(psychology) |
Homology modeling , also known as comparative modeling of protein, refers to constructing an atomic-resolution model of the " target " protein from its amino acid sequence and an experimental three-dimensional structure of a related homologous protein (the " template "). Homology modeling relies on the identification of one or more known protein structures likely to resemble the structure of the query sequence, and on the production of a sequence alignment that maps residues in the query sequence to residues in the template sequence. Protein structures are more conserved than protein sequences amongst homologues, but sequences falling below 20% sequence identity can have very different structures. [ 1 ]
Evolutionarily related proteins have similar sequences and naturally occurring homologous proteins have similar protein structure.
It has been shown that three-dimensional protein structure is evolutionarily more conserved than would be expected on the basis of sequence conservation alone. [ 2 ]
The sequence alignment and template structure are then used to produce a structural model of the target. This works because protein structures are more conserved than DNA sequences, and detectable levels of sequence similarity usually imply significant structural similarity. [ 3 ]
The quality of the homology model is dependent on the quality of the sequence alignment and template structure. The approach can be complicated by the presence of alignment gaps (commonly called indels) that indicate a structural region present in the target but not in the template, and by structure gaps in the template that arise from poor resolution in the experimental procedure (usually X-ray crystallography ) used to solve the structure. Model quality declines with decreasing sequence identity ; a typical model has ~1–2 Å root mean square deviation between the matched C α atoms at 70% sequence identity but only 2–4 Å agreement at 25% sequence identity. However, the errors are significantly higher in the loop regions, where the amino acid sequences of the target and template proteins may be completely different.
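The sequence-identity figures quoted above refer to the fraction of identical residues among aligned, non-gap columns. A minimal sketch, with invented aligned sequences and '-' marking the indels mentioned in the text:

```python
# Sketch: percent sequence identity over a pairwise alignment.
# The aligned sequences below are invented; '-' marks an alignment gap.

def percent_identity(aln_a, aln_b):
    """Identical columns / aligned (non-gap) columns, as a percentage."""
    aligned = matches = 0
    for a, b in zip(aln_a, aln_b):
        if a == '-' or b == '-':
            continue                  # skip indel columns
        aligned += 1
        if a == b:
            matches += 1
    return 100.0 * matches / aligned

print(percent_identity("MKT-AYIAKQR", "MKTLAYLAKQR"))   # -> 90.0
```

A target-template pair scoring near 70% by this measure would be expected to yield a model with roughly 1–2 Å C-alpha RMSD; near 25%, only 2–4 Å.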
Regions of the model that were constructed without a template, usually by loop modeling , are generally much less accurate than the rest of the model. Errors in side chain packing and position also increase with decreasing identity, and variations in these packing configurations have been suggested as a major reason for poor model quality at low identity. [ 4 ] Taken together, these various atomic-position errors are significant and impede the use of homology models for purposes that require atomic-resolution data, such as drug design and protein–protein interaction predictions; even the quaternary structure of a protein may be difficult to predict from homology models of its subunit(s). Nevertheless, homology models can be useful in reaching qualitative conclusions about the biochemistry of the query sequence, especially in formulating hypotheses about why certain residues are conserved, which may in turn lead to experiments to test those hypotheses. For example, the spatial arrangement of conserved residues may suggest whether a particular residue is conserved to stabilize the folding, to participate in binding some small molecule, or to foster association with another protein or nucleic acid. [ 5 ]
Homology modeling can produce high-quality structural models when the target and template are closely related, which has inspired the formation of a structural genomics consortium dedicated to the production of representative experimental structures for all classes of protein folds. [ 6 ] The chief inaccuracies in homology modeling, which worsen with lower sequence identity , derive from errors in the initial sequence alignment and from improper template selection. [ 7 ] Like other methods of structure prediction, current practice in homology modeling is assessed in a biennial large-scale experiment known as the Critical Assessment of Techniques for Protein Structure Prediction, or Critical Assessment of Structure Prediction ( CASP ).
The method of homology modeling is based on the observation that protein tertiary structure is better conserved than amino acid sequence . [ 3 ] Thus, even proteins that have diverged appreciably in sequence but still share detectable similarity will also share common structural properties, particularly the overall fold. Because it is difficult and time-consuming to obtain experimental structures from methods such as X-ray crystallography and protein NMR for every protein of interest, homology modeling can provide useful structural models for generating hypotheses about a protein's function and directing further experimental work.
There are exceptions to the general rule that proteins sharing significant sequence identity will share a fold. For example, a judiciously chosen set of mutations of less than 50% of a protein can cause the protein to adopt a completely different fold. [ 8 ] [ 9 ] However, such a massive structural rearrangement is unlikely to occur in evolution , especially since the protein is usually under the constraint that it must fold properly and carry out its function in the cell. Consequently, the roughly folded structure of a protein (its "topology") is conserved longer than its amino-acid sequence and much longer than the corresponding DNA sequence; in other words, two proteins may share a similar fold even if their evolutionary relationship is so distant that it cannot be discerned reliably. For comparison, the function of a protein is conserved much less than the protein sequence, since relatively few changes in amino-acid sequence are required to take on a related function.
The homology modeling procedure can be broken down into four sequential steps: template selection, target-template alignment, model construction, and model assessment. [ 3 ] The first two steps are often essentially performed together, as the most common methods of identifying templates rely on the production of sequence alignments; however, these alignments may not be of sufficient quality because database search techniques prioritize speed over alignment quality. These processes can be performed iteratively to improve the quality of the final model, although quality assessments that are not dependent on the true target structure are still under development.
Optimizing the speed and accuracy of these steps for use in large-scale automated structure prediction is a key component of structural genomics initiatives, partly because the resulting volume of data will be too large to process manually and partly because the goal of structural genomics requires providing models of reasonable quality to researchers who are not themselves structure prediction experts. [ 3 ]
The critical first step in homology modeling is the identification of the best template structure, if indeed any are available. The simplest method of template identification relies on serial pairwise sequence alignments aided by database search techniques such as FASTA and BLAST . More sensitive methods based on multiple sequence alignment – of which PSI-BLAST is the most common example – iteratively update their position-specific scoring matrix to successively identify more distantly related homologs. This family of methods has been shown to produce a larger number of potential templates and to identify better templates for sequences that have only distant relationships to any solved structure. Protein threading , [ 10 ] also known as fold recognition or 3D-1D alignment, can also be used as a search technique for identifying templates to be used in traditional homology modeling methods. [ 3 ] Recent CASP experiments indicate that some protein threading methods such as RaptorX are more sensitive than purely sequence(profile)-based methods when only distantly-related templates are available for the proteins under prediction. When performing a BLAST search, a reliable first approach is to identify hits with a sufficiently low E -value, which are considered sufficiently close in evolution to make a reliable homology model. Other factors may tip the balance in marginal cases; for example, the template may have a function similar to that of the query sequence, or it may belong to a homologous operon . However, a template with a poor E -value should generally not be chosen, even if it is the only one available, since it may well have a wrong structure, leading to the production of a misguided model. A better approach is to submit the primary sequence to fold-recognition servers [ 10 ] or, better still, consensus meta-servers which improve upon individual fold-recognition servers by identifying similarities (consensus) among independent predictions.
Often several candidate template structures are identified by these approaches. Although some methods can generate hybrid models with better accuracy from multiple templates, [ 10 ] [ 11 ] most methods rely on a single template. Therefore, choosing the best template from among the candidates is a key step, and can affect the final accuracy of the structure significantly. This choice is guided by several factors, such as the similarity of the query and template sequences, of their functions, and of the predicted query and observed template secondary structures . Perhaps most importantly, the coverage of the aligned regions: the fraction of the query sequence structure that can be predicted from the template, and the plausibility of the resulting model. Thus, sometimes several homology models are produced for a single query sequence, with the most likely candidate chosen only in the final step.
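The template-selection heuristic described above (filter by E-value, then weigh coverage of the query) can be sketched as follows; the hit list, identifiers, and cutoff are all fabricated for illustration:

```python
# Sketch: choosing a homology-modeling template from BLAST-style hits.
# Each hit is (template_id, e_value, coverage_fraction); all values invented.

def pick_template(hits, e_cutoff=1e-5):
    """Return the best template id, or None if no hit is trustworthy."""
    usable = [h for h in hits if h[1] <= e_cutoff]
    if not usable:
        return None                 # a poor E-value template risks a wrong fold
    # lower E-value and higher coverage are both better
    return min(usable, key=lambda h: (h[1], -h[2]))[0]

hits = [("1abc_A", 1e-40, 0.95),
        ("2xyz_B", 1e-12, 0.60),
        ("3foo_C", 0.8,   0.99)]    # poor E-value: excluded despite coverage
print(pick_template(hits))           # -> 1abc_A
```

Returning None rather than the best available hit reflects the advice above: a template with a poor E-value should generally not be chosen even if it is the only one.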
It is possible to use the sequence alignment generated by the database search technique as the basis for the subsequent model production; however, more sophisticated approaches have also been explored. One proposal generates an ensemble of stochastically defined pairwise alignments between the target sequence and a single identified template as a means of exploring "alignment space" in regions of sequence with low local similarity. [ 12 ] "Profile-profile" alignment methods first generate a sequence profile of the target and systematically compare it to the sequence profiles of solved structures; the coarse-graining inherent in the profile construction is thought to reduce noise introduced by sequence drift in nonessential regions of the sequence. [ 13 ]
Given a template and an alignment, the information contained therein must be used to generate a three-dimensional structural model of the target, represented as a set of Cartesian coordinates for each atom in the protein. Three major classes of model generation methods have been proposed. [ 14 ] [ 15 ]
The original method of homology modeling relied on the assembly of a complete model from conserved structural fragments identified in closely related solved structures. For example, a modeling study of serine proteases in mammals identified a sharp distinction between "core" structural regions conserved in all experimental structures in the class, and variable regions typically located in the loops where the majority of the sequence differences were localized. Thus unsolved proteins could be modeled by first constructing the conserved core and then substituting variable regions from other proteins in the set of solved structures. [ 16 ] Current implementations of this method differ mainly in the way they deal with regions that are not conserved or that lack a template. [ 17 ] The variable regions are often constructed with the help of a protein fragment library .
The segment-matching method divides the target into a series of short segments, each of which is matched to its own template fitted from the Protein Data Bank . Thus, sequence alignment is done over segments rather than over the entire protein. Selection of the template for each segment is based on sequence similarity, comparisons of alpha carbon coordinates, and predicted steric conflicts arising from the van der Waals radii of the divergent atoms between target and template. [ 18 ]
The most common current homology modeling method takes its inspiration from calculations required to construct a three-dimensional structure from data generated by NMR spectroscopy . One or more target-template alignments are used to construct a set of geometrical criteria that are then converted to probability density functions for each restraint. Restraints applied to the main protein internal coordinates – protein backbone distances and dihedral angles – serve as the basis for a global optimization procedure that originally used conjugate gradient energy minimization to iteratively refine the positions of all heavy atoms in the protein. [ 19 ]
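The restraint idea can be illustrated with a single Gaussian distance restraint whose negative log-density serves as a pseudo-energy to be minimized; the mean distance and width below are made-up values for illustration, not defaults of any modeling package.

```python
import math

# Sketch of one spatial restraint: a template-derived distance becomes a
# Gaussian probability density, and -log(p) acts as a pseudo-energy term.
# The mean (5.0 A) and sigma (0.3 A) are invented example values.

def gaussian_pdf(x, mean, sigma):
    return math.exp(-0.5 * ((x - mean) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def restraint_energy(distance, mean=5.0, sigma=0.3):
    return -math.log(gaussian_pdf(distance, mean, sigma))

# The pseudo-energy is lowest when the model reproduces the restrained distance,
# so minimizing the sum of such terms pulls the model toward the template geometry.
```

In a full method, many such probability density functions on distances and dihedral angles are combined and the whole sum is minimized by a global optimization procedure.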
This method has been dramatically expanded to apply specifically to loop modeling, which can be extremely difficult due to the high flexibility of loops in proteins in aqueous solution. [ 20 ] A more recent expansion applies the spatial-restraint model to electron density maps derived from cryoelectron microscopy studies, which provide low-resolution information that is not usually itself sufficient to generate atomic-resolution structural models. [ 21 ] To address the problem of inaccuracies in initial target-template sequence alignment, an iterative procedure has also been introduced to refine the alignment on the basis of the initial structural fit. [ 22 ] The most commonly used software in spatial restraint-based modeling is MODELLER , and a database called ModBase has been established for reliable models generated with it. [ 23 ]
Regions of the target sequence that are not aligned to a template are modeled by loop modeling ; they are the most susceptible to major modeling errors and occur with higher frequency when the target and template have low sequence identity. The coordinates of unmatched sections determined by loop modeling programs are generally much less accurate than those obtained from simply copying the coordinates of a known structure, particularly if the loop is longer than 10 residues. The first two side-chain dihedral angles (χ 1 and χ 2 ) can usually be estimated within 30° for an accurate backbone structure; however, the later dihedral angles found in longer side chains such as lysine and arginine are notoriously difficult to predict. Moreover, small errors in χ 1 (and, to a lesser extent, in χ 2 ) can cause relatively large errors in the positions of the atoms at the terminus of the side chain; such atoms often have a functional importance, particularly when located near the active site .
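A rough back-of-the-envelope calculation shows why a modest χ1 error displaces terminal atoms substantially: an angular error θ moves an atom at distance r from the rotation axis along an arc of length roughly rθ. The ~5 Å lever arm assumed below for a long side chain such as lysine is an order-of-magnitude guess for illustration, not a measured value.

```python
import math

# Arc-length estimate of terminal-atom displacement caused by a dihedral error.
# The 5 A lever arm is an assumed, order-of-magnitude figure for a long side chain.

def terminal_displacement(angle_error_deg, lever_arm_angstrom):
    """Approximate displacement of an atom at the given radius from the rotation axis."""
    return lever_arm_angstrom * math.radians(angle_error_deg)

# A 30-degree chi-1 error with a ~5 A lever arm moves the terminal atom by ~2.6 A,
# comparable to a bond length and easily enough to disrupt an active-site contact.
shift = terminal_displacement(30.0, 5.0)
```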
A large number of methods have been developed for selecting a native-like structure from a set of models. Scoring functions have been based on both molecular mechanics energy functions (Lazaridis and Karplus 1999; Petrey and Honig 2000; Feig and Brooks 2002; Felts et al. 2002; Lee and Duan 2004), statistical potentials (Sippl 1995; Melo and Feytmans 1998; Samudrala and Moult 1998; Rojnuckarin and Subramaniam 1999; Lu and Skolnick 2001; Wallqvist et al. 2002; Zhou and Zhou 2002), residue environments (Luthy et al. 1992; Eisenberg et al. 1997; Park et al. 1997; Summa et al. 2005), local side-chain and backbone interactions (Fang and Shortle 2005), orientation-dependent properties (Buchete et al. 2004a,b; Hamelryck 2005), packing estimates (Berglund et al. 2004), solvation energy (Petrey and Honig 2000; McConkey et al. 2003; Wallner and Elofsson 2003; Berglund et al. 2004), hydrogen bonding (Kortemme et al. 2003), and geometric properties (Colovos and Yeates 1993; Kleywegt 2000; Lovell et al. 2003; Mihalek et al. 2003). A number of methods combine different potentials into a global score, usually using a linear combination of terms (Kortemme et al. 2003; Tosatto 2005), or with the help of machine learning techniques, such as neural networks (Wallner and Elofsson 2003) and support vector machines (SVM) (Eramian et al. 2006). Comparisons of different global model quality assessment programs can be found in recent papers by Pettitt et al. (2005), Tosatto (2005), and Eramian et al. (2006).
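The linear-combination strategy used by many of these global scores can be sketched minimally as follows; the term names, values, and weights are invented placeholders, and in practice the weights would be fitted (for example by machine learning) on decoy sets.

```python
# Minimal sketch of a composite model-quality score as a linear combination of
# individual potential terms. All names and numbers here are illustrative.

def combined_score(terms, weights):
    return sum(weights[name] * value for name, value in terms.items())

weights = {"statistical_potential": 0.5, "solvation": 0.3, "hbond": 0.2}
model_terms = {"statistical_potential": -1.0, "solvation": -0.5, "hbond": -0.8}

score = combined_score(model_terms, weights)  # lower (more negative) = more native-like here
```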
Less work has been reported on the local quality assessment of models. Local scores are important in the context of modeling because they can give an estimate of the reliability of different regions of a predicted structure. This information can be used in turn to determine which regions should be refined, which should be considered for modeling by multiple templates, and which should be predicted ab initio. Information on local model quality could also be used to reduce the combinatorial problem when considering alternative alignments; for example, by scoring different local models separately, fewer models would have to be built (assuming that the interactions between the separate regions are negligible or can be estimated separately).
One of the most widely used local scoring methods is Verify3D (Luthy et al. 1992; Eisenberg et al. 1997), which combines secondary structure, solvent accessibility, and polarity of residue environments. ProsaII (Sippl 1993), which is based on a combination of a pairwise statistical potential and a solvation term, is also applied extensively in model evaluation. Other methods include the Errat program (Colovos and Yeates 1993), which considers distributions of nonbonded atoms according to atom type and distance, and the energy strain method (Maiorov and Abagyan 1998), which uses differences from average residue energies in different environments to indicate which parts of a protein structure might be problematic. Melo and Feytmans (1998) use an atomic pairwise potential and a surface-based solvation potential (both knowledge-based) to evaluate protein structures. Apart from the energy strain method, which is a semiempirical approach based on the ECEPP3 force field (Nemethy et al. 1992), all of the local methods listed above are based on statistical potentials. A conceptually distinct approach is the ProQres method, which was very recently introduced by Wallner and Elofsson (2006). ProQres is based on a neural network that combines structural features to distinguish correct from incorrect regions. ProQres was shown to outperform earlier methodologies based on statistical approaches (Verify3D, ProsaII, and Errat). The data presented in Wallner and Elofsson's study suggests that their machine-learning approach based on structural features is indeed superior to statistics-based methods. However, the knowledge-based methods examined in their work, Verify3D (Luthy et al. 1992; Eisenberg et al. 1997), Prosa (Sippl 1993), and Errat (Colovos and Yeates 1993), are not based on newer statistical potentials.
Several large-scale benchmarking efforts have been made to assess the relative quality of various current homology modeling methods. Critical Assessment of Structure Prediction ( CASP ) is a community-wide prediction experiment that runs every two years during the summer months and challenges prediction teams to submit structural models for a number of sequences whose structures have recently been solved experimentally but have not yet been published. Its partner Critical Assessment of Fully Automated Structure Prediction ( CAFASP ) has run in parallel with CASP but evaluates only models produced via fully automated servers. Continuously running experiments that do not have prediction 'seasons' focus mainly on benchmarking publicly available webservers. LiveBench and EVA run continuously to assess participating servers' performance in prediction of imminently released structures from the PDB. CASP and CAFASP serve mainly as evaluations of the state of the art in modeling, while the continuous assessments seek to evaluate the model quality that would be obtained by a non-expert user employing publicly available tools.
The accuracy of the structures generated by homology modeling is highly dependent on the sequence identity between target and template. Above 50% sequence identity, models tend to be reliable, with only minor errors in side chain packing and rotameric state, and an overall RMSD between the modeled and the experimental structure falling around 1 Å . This error is comparable to the typical resolution of a structure solved by NMR. In the 30–50% identity range, errors can be more severe and are often located in loops. Below 30% identity, serious errors occur, sometimes resulting in the basic fold being mis-predicted. [ 14 ] This low-identity region is often referred to as the "twilight zone" within which homology modeling is extremely difficult, and to which it is possibly less suited than fold recognition methods. [ 24 ]
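The accuracy bands just described can be encoded directly; the function below merely restates the thresholds from the text as a lookup and is not a quantitative predictor.

```python
# Rule-of-thumb accuracy bands for homology models, encoding the thresholds
# stated above (>50%, 30-50%, <30% target-template sequence identity).

def expected_model_quality(sequence_identity_pct):
    if sequence_identity_pct > 50:
        return "reliable (~1 A RMSD, minor side-chain errors)"
    if sequence_identity_pct >= 30:
        return "moderate (errors more severe, often located in loops)"
    return "twilight zone (serious errors; the fold itself may be mis-predicted)"
```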
At high sequence identities, the primary source of error in homology modeling derives from the choice of the template or templates on which the model is based, while lower identities exhibit serious errors in sequence alignment that inhibit the production of high-quality models. [ 7 ] It has been suggested that the major impediment to quality model production is inadequacies in sequence alignment, since "optimal" structural alignments between two proteins of known structure can be used as input to current modeling methods to produce quite accurate reproductions of the original experimental structure. [ 25 ]
Attempts have been made to improve the accuracy of homology models built with existing methods by subjecting them to molecular dynamics simulation in an effort to improve their RMSD to the experimental structure. However, current force field parameterizations may not be sufficiently accurate for this task, since homology models used as starting structures for molecular dynamics tend to produce slightly worse structures. [ 26 ] Slight improvements have been observed in cases where significant restraints were used during the simulation. [ 27 ]
The two most common and large-scale sources of error in homology modeling are poor template selection and inaccuracies in target-template sequence alignment. [ 7 ] [ 28 ] Controlling for these two factors by using a structural alignment , or a sequence alignment produced on the basis of comparing two solved structures, dramatically reduces the errors in final models; these "gold standard" alignments can be used as input to current modeling methods to produce quite accurate reproductions of the original experimental structure. [ 25 ] Results from the most recent CASP experiment suggest that "consensus" methods collecting the results of multiple fold recognition and multiple alignment searches increase the likelihood of identifying the correct template; similarly, the use of multiple templates in the model-building step may be worse than the use of the single correct template but better than the use of a single suboptimal one. [ 28 ] Alignment errors may be minimized by the use of a multiple alignment even if only one template is used, and by the iterative refinement of local regions of low similarity. [ 3 ] [ 12 ] A lesser source of model errors is the template structure itself: the PDBREPORT database lists several million, mostly very small but occasionally dramatic, errors in experimental (template) structures that have been deposited in the PDB .
Serious local errors can arise in homology models where an insertion or deletion mutation or a gap in a solved structure results in a region of target sequence for which there is no corresponding template. This problem can be minimized by the use of multiple templates, but the method is complicated by the templates' differing local structures around the gap and by the likelihood that a missing region in one experimental structure is also missing in other structures of the same protein family. Missing regions are most common in loops where high local flexibility increases the difficulty of resolving the region by structure-determination methods. Although some guidance is provided even with a single template by the positioning of the ends of the missing region, the longer the gap, the more difficult it is to model. Loops of up to about 9 residues can be modeled with moderate accuracy in some cases if the local alignment is correct. [ 3 ] Larger regions are often modeled individually using ab initio structure prediction techniques, although this approach has met with only isolated success. [ 29 ]
The rotameric states of side chains and their internal packing arrangement also present difficulties in homology modeling, even in targets for which the backbone structure is relatively easy to predict. This is partly due to the fact that many side chains in crystal structures are not in their "optimal" rotameric state as a result of energetic factors in the hydrophobic core and in the packing of the individual molecules in a protein crystal. [ 30 ] One method of addressing this problem requires searching a rotamer library to identify locally low-energy combinations of packing states. [ 31 ] It has been suggested that a major reason that homology modeling is so difficult when target-template sequence identity lies below 30% is that such proteins have broadly similar folds but widely divergent side chain packing arrangements. [ 4 ]
Uses of the structural models include protein–protein interaction prediction , protein–protein docking , molecular docking , and functional annotation of genes identified in an organism's genome . [ 32 ] Even low-accuracy homology models can be useful for these purposes, because their inaccuracies tend to be located in the loops on the protein surface, which are normally more variable even between closely related proteins. The functional regions of the protein, especially its active site , tend to be more highly conserved and thus more accurately modeled. [ 14 ]
Homology models can also be used to identify subtle differences between related proteins that have not all been solved structurally. For example, the method was used to identify cation binding sites on the Na + /K + ATPase and to propose hypotheses about different ATPases' binding affinity. [ 33 ] Used in conjunction with molecular dynamics simulations, homology models can also generate hypotheses about the kinetics and dynamics of a protein, as in studies of the ion selectivity of a potassium channel. [ 34 ] Large-scale automated modeling of all identified protein-coding regions in a genome has been attempted for the yeast Saccharomyces cerevisiae , resulting in nearly 1000 quality models for proteins whose structures had not yet been determined at the time of the study, and identifying novel relationships between 236 yeast proteins and other previously solved structures. [ 35 ]
In chemistry , homolysis (from Greek ὅμοιος (homoios) ' equal ' and λύσις (lusis) ' loosening ' ) or homolytic fission is the dissociation of a molecular bond by a process where each of the fragments (an atom or molecule ) retains one of the originally bonded electrons . During homolytic fission of a neutral molecule with an even number of electrons, two radicals will be generated. [ 1 ] That is, the two electrons involved in the original bond are distributed between the two fragment species. Bond cleavage is also possible by a process called heterolysis .
The energy involved in this process is called bond dissociation energy (BDE). [ 2 ] BDE is defined as the " enthalpy (per mole ) required to break a given bond of some specific molecular entity by homolysis," symbolized as D . [ 3 ] BDE is dependent on the strength of the bond, which is determined by factors relating to the stability of the resulting radical species .
Because of the relatively high energy required to break bonds in this manner, homolysis occurs primarily under certain circumstances, such as when energy is supplied by heat or light, or when the bond is intrinsically weak.
Adenosylcobalamin is the cofactor which creates the deoxyadenosyl radical by homolytic cleavage of a cobalt-carbon bond in reactions catalysed by methylmalonyl-CoA mutase , isobutyryl-CoA mutase and related enzymes. This triggers rearrangement reactions in the carbon framework of the substrates on which the enzymes act. [ 5 ]
Homolytic cleavage is driven by the ability of a molecule to absorb energy from light or heat, and by the bond dissociation energy ( enthalpy ). If the resulting species is better able to stabilize the radical, the energy of the SOMO will be lowered, as will the bond dissociation energy. Bond dissociation energy is determined by multiple factors. [ 4 ]
In chemistry and crystallography , crystal structures that have the same set of interatomic distances are called homometric structures . [ 1 ] Homometric structures need not be congruent (that is, related by a rigid motion or reflection). Homometric crystal structures produce identical diffraction patterns; therefore, they cannot be distinguished by a diffraction experiment.
Recently, a Monte Carlo algorithm was proposed to calculate the number of homometric structures corresponding to any given set of interatomic distances. [ 2 ]
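The defining property is easy to check computationally for small point sets: two structures are homometric exactly when their multisets of pairwise distances coincide. The sketch below does this for a non-congruent one-dimensional pair; the congruence test exploits the fact that both sets share the endpoints 0 and 17, so only identity and reflection need checking.

```python
from itertools import combinations
from collections import Counter

# Two point sets are homometric when their multisets of pairwise distances agree.

def distance_multiset(points):
    return Counter(abs(a - b) for a, b in combinations(points, 2))

# A non-congruent pair of 6-point sets on the integer line sharing all 15 distances.
A = {0, 1, 4, 10, 12, 17}
B = {0, 1, 8, 11, 13, 17}

homometric = distance_multiset(A) == distance_multiset(B)

# In 1D, congruence means a translation and/or reflection maps one set onto the
# other; since both sets already span [0, 17], identity and reflection suffice.
mirrored_A = {17 - p for p in A}
congruent = (A == B) or (mirrored_A == B)
```

Since the two sets are homometric but not congruent, a diffraction experiment could not distinguish them, which is the point made above.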
In graph theory , a branch of mathematics , two graphs G and H are called homomorphically equivalent if there exists a graph homomorphism f : G → H {\displaystyle f\colon G\to H} and a graph homomorphism g : H → G {\displaystyle g\colon H\to G} . An example usage of this notion is that any two cores of a graph are homomorphically equivalent.
Homomorphic equivalence also comes up in the theory of databases . Given a database schema , two instances I and J on it are called homomorphically equivalent if there exists an instance homomorphism f : I → J {\displaystyle f\colon I\to J} and an instance homomorphism g : J → I {\displaystyle g\colon J\to I} .
Deciding whether two graphs are homomorphically equivalent is NP-complete . [ 1 ]
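For very small graphs, homomorphic equivalence can nevertheless be decided by brute force over all vertex maps (exponential time, consistent with the hardness result). A minimal sketch, using undirected graphs given as a vertex list and a set of frozenset edges:

```python
from itertools import product

# Brute-force graph homomorphism test: try every map from G's vertices to H's
# vertices and check that every edge of G lands on an edge of H.

def edge_image(f, e):
    u, v = tuple(e)
    return frozenset((f[u], f[v]))

def exists_hom(G, H):
    gv, ge = G
    hv, he = H
    for assignment in product(hv, repeat=len(gv)):
        f = dict(zip(gv, assignment))
        if all(edge_image(f, e) in he for e in ge):
            return True
    return False

def hom_equivalent(G, H):
    return exists_hom(G, H) and exists_hom(H, G)

def cycle(n):
    vs = list(range(n))
    return vs, {frozenset((i, (i + 1) % n)) for i in vs}

K2 = ([0, 1], {frozenset((0, 1))})
# Every even cycle is 2-colorable, hence homomorphically equivalent to a single
# edge (its core); an odd cycle is not, so it admits no homomorphism to K2.
```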
In fact, for any category C , one can define homomorphic equivalence. It is used in the theory of accessible categories , where "weak universality" is the best one can hope for in terms of injectivity classes. [ 2 ]
In chemistry , homonuclear molecules , or elemental molecules , or homonuclear species , are molecules composed of only one element . Homonuclear molecules may consist of various numbers of atoms . The size of the molecule an element can form depends on the element's properties, and some elements form molecules of more than one size. The most familiar homonuclear molecules are diatomic molecules , which consist of two atoms, although not all diatomic molecules are homonuclear. Homonuclear diatomic molecules include hydrogen ( H 2 ), oxygen ( O 2 ), nitrogen ( N 2 ) and all of the halogens . Ozone ( O 3 ) is a common triatomic homonuclear molecule. Homonuclear tetratomic molecules include arsenic ( As 4 ) and phosphorus ( P 4 ).
Allotropes are different chemical forms of the same element (not containing any other element). In that sense, allotropes are all homonuclear. Many elements have multiple allotropic forms. In addition to the most common form of gaseous oxygen, O 2 , and ozone, there are other allotropes of oxygen . Sulfur forms several allotropes containing different numbers of sulfur atoms, including diatomic , triatomic, hexatomic and octatomic ( S 2 , S 3 , S 6 , S 8 ) forms, though the first three are rare. The element carbon is known to have a number of homonuclear molecules, including diamond and graphite .
Sometimes a cluster of atoms of a single kind of metallic element is considered a single molecule. [ 1 ]
In biology , a homonym is a name for a taxon that is identical in spelling to another such name but belongs to a different taxon .
The rule in the International Code of Zoological Nomenclature is that the first such name to be published is the senior homonym and is to be used (it is " valid "); any others are junior homonyms and must be replaced with new names. It is, however, possible that if a senior homonym is archaic, and not in "prevailing usage," it may be declared a nomen oblitum and rendered unavailable, while the junior homonym is preserved as a nomen protectum .
Similarly, the International Code of Nomenclature for algae, fungi, and plants (ICN) specifies that the first published of two or more homonyms is to be used: a later homonym is " illegitimate " and is not to be used unless conserved (or sanctioned, in the case of fungi). [ 1 ]
Under the zoological code, homonymy can only occur within each of the three nomenclatural ranks (family-rank, genus-rank, and species-rank) but not between them; there are thousands of cases where a species epithet is identical to a genus name but not a homonym (sometimes even occurring in the genus it is identical to, such as Gorilla gorilla , termed a " tautonym "), and there are some rare cases where a family-rank name and a genus-rank name are identical (e.g., the superfamily name Ranoidea and the genus name Ranoidea are not homonyms). The botanical code is generally similar, but prohibits tautonyms.
Under the botanical code, names that are similar enough that they are likely to be confused are also considered to be homonymous (article 53.3). For example, Astrostemma Benth. (1880) is an illegitimate homonym of Asterostemma Decne. (1838). The zoological code considers even a single letter difference to be sufficient to render family-rank and genus-rank names distinct (Article 56.2), though for species names, the ICZN specifies a number of spelling variations (Article 58) that are considered to be identical.
Both codes only consider taxa that are in their respective scope (animals for the ICZN; primarily plants for the ICN). Therefore, if an animal taxon has the same name as a plant taxon, both names are valid. Such names are called hemihomonyms . [ 2 ]
For example, the name Erica has been given to both a genus of spiders, Erica Peckham & Peckham, 1892, and to a genus of heaths, Erica L.
Another example is Cyanea , applied to the lion's mane jellyfish Cyanea Péron and Lesueur and to the Hawaiian lobelioid Cyanea Gaudich.
Hemihomonyms are possible at the species level as well, with organisms in different kingdoms sharing the same binomial nomenclature . For instance, Orestias elegans denotes both a species of fish (kingdom Animalia ) and a species of orchid (kingdom Plantae ). Such duplication of binomials occurs in at least nine instances.
Cell junctions [ 1 ] or junctional complexes are a class of cellular structures consisting of multiprotein complexes that provide contact or adhesion between neighboring cells or between a cell and the extracellular matrix in animals. [ 2 ] They also maintain the paracellular barrier of epithelia and control paracellular transport . Cell junctions are especially abundant in epithelial tissues. Combined with cell adhesion molecules and extracellular matrix , cell junctions help hold animal cells together.
Cell junctions are also especially important in enabling communication between neighboring cells via specialized protein complexes called communicating (gap) junctions . Cell junctions are also important in reducing stress placed upon cells.
In plants, similar communication channels are known as plasmodesmata , and in fungi they are called septal pores . [ 3 ]
In vertebrates, there are three major types of cell junction: anchoring junctions, gap (communicating) junctions, and tight (occluding) junctions. [ 4 ]
Invertebrates have several other types of specific junctions, for example septate junctions (a type of occluding junction) [ 4 ] or the C. elegans apical junction. In multicellular plants , the structural functions of cell junctions are instead provided for by cell walls . The analogues of communicative cell junctions in plants are called plasmodesmata .
Cells within tissues and organs must be anchored to one another and attached to components of the extracellular matrix . Cells have developed several types of junctional complexes to serve these functions, and in each case, anchoring proteins extend through the plasma membrane to link cytoskeletal proteins in one cell to cytoskeletal proteins in neighboring cells as well as to proteins in the extracellular matrix. [ 6 ]
Three types of anchoring junctions are observed, differing from one another in the cytoskeletal protein anchor as well as in the transmembrane linker protein that extends through the membrane: desmosomes, hemidesmosomes, and adherens junctions.
Anchoring-type junctions not only hold cells together but provide tissues with structural cohesion. These junctions are most abundant in tissues that are subject to constant mechanical stress such as skin and heart. [ 6 ]
Desmosomes, also termed as maculae adherentes, can be visualized as rivets through the plasma membrane of adjacent cells. Intermediate filaments composed of keratin or desmin are attached to membrane-associated attachment proteins that form a dense plaque on the cytoplasmic face of the membrane. Cadherin molecules form the actual anchor by attaching to the cytoplasmic plaque, extending through the membrane and binding strongly to cadherins coming through the membrane of the adjacent cell. [ 7 ]
Hemidesmosomes form rivet-like links between cytoskeleton and extracellular matrix components such as the basal laminae that underlie epithelia. Like desmosomes, they tie to intermediate filaments in the cytoplasm, but in contrast to desmosomes, their transmembrane anchors are integrins rather than cadherins. [ 8 ]
Adherens junctions share the characteristic of anchoring cells through their cytoplasmic actin filaments . Similarly to desmosomes and hemidesmosomes, their transmembrane anchors are composed of cadherins in those that anchor to other cells and integrins (focal adhesion) in those that anchor to extracellular matrix. There is considerable morphologic diversity among adherens junctions. Those that tie cells to one another are seen as isolated streaks or spots, or as bands that completely encircle the cell. The band-type of adherens junctions is associated with bundles of actin filaments that also encircle the cell just below the plasma membrane. Spot-like adherens junctions called focal adhesions help cells adhere to extracellular matrix. The cytoskeletal actin filaments that tie into adherens junctions are contractile proteins and in addition to providing an anchoring function, adherens junctions are thought to participate in folding and bending of epithelial cell sheets. Thinking of the bands of actin filaments as being similar to 'drawstrings' allows one to envision how contraction of the bands within a group of cells would distort the sheet into interesting patterns. [ 6 ]
Gap junctions , or communicating junctions, allow for direct chemical communication between adjacent cellular cytoplasm through diffusion without contact with the extracellular fluid. [ 9 ] This is possible because six connexin proteins interact to form a cylinder with a pore in the centre, called a connexon . [ 10 ] The connexon complexes stretch across the cell membrane, and when the connexons of two adjacent cells interact, they form a complete gap junction channel. [ 9 ] [ 10 ] Connexon pores vary in size and polarity, and can therefore be specific depending on the connexin proteins that constitute each individual connexon. [ 9 ] [ 10 ] Whilst variation in gap junction channels does occur, their structure remains relatively standard, and this interaction ensures efficient communication without the escape of molecules or ions to the extracellular fluid. [ 10 ]
Gap junctions play vital roles in the human body, [ 11 ] including their role in the uniform contraction of the heart muscle . [ 11 ] They are also relevant in signal transfers in the brain , and their absence has been associated with decreased cell density in the brain. [ 12 ] Retinal and skin cells are also dependent on gap junctions in cell differentiation and proliferation. [ 11 ] [ 12 ]
Found in vertebrate epithelia , tight junctions act as barriers that regulate the movement of water and solutes between epithelial layers. Tight junctions are classified as a paracellular barrier which is defined as not having directional discrimination; however, movement of the solute is largely dependent upon size and charge. There is evidence to suggest that the structures in which solutes pass through are somewhat like pores.
Physiological pH plays a part in the selectivity of solutes passing through tight junctions with most tight junctions being slightly selective for cations. Tight junctions present in different types of epithelia are selective for solutes of differing size, charge, and polarity.
Approximately 40 proteins have been identified as being involved in tight junctions. These proteins can be classified into four major categories, including signalling proteins.
It is believed that claudin is the protein molecule responsible for the selective permeability between epithelial layers.
A three-dimensional image of the tight junction has yet to be achieved, and as such, specific information about its function remains to be determined.
Tricellular junctions seal epithelia at the corners of three cells. Due to the geometry of three-cell vertices, the sealing of the cells at these sites requires a specific junctional organization, different from that in bicellular junctions. In vertebrates, the components of tricellular junctions are tricellulin and lipolysis-stimulated lipoprotein receptors. In invertebrates, the components are gliotactin and anakonda. [ 13 ]
Tricellular junctions are also implicated in the regulation of cytoskeletal organization and cell divisions. In particular, they ensure that cells divide according to the Hertwig rule . In some Drosophila epithelia, tricellular junctions establish physical contact with the spindle apparatus during cell divisions through astral microtubules. Tricellular junctions exert a pulling force on the spindle apparatus and serve as a geometrical cue to determine the orientation of cell divisions. [ 14 ]
The molecules responsible for creating cell junctions include various cell adhesion molecules . There are four main types: selectins , cadherins , integrins , and the immunoglobulin superfamily . [ 15 ]
Selectins are cell adhesion molecules that play an important role in the initiation of inflammatory processes. [ 16 ] The functional capacity of selectins is limited to leukocyte interactions with vascular endothelium. There are three types of selectins found in humans: L-selectin, P-selectin and E-selectin. L-selectin deals with lymphocytes, monocytes and neutrophils; P-selectin deals with platelets and endothelium; and E-selectin deals only with endothelium. They have extracellular regions made up of an amino-terminal lectin domain, attached to a carbohydrate ligand, a growth factor-like domain, and short repeat units that match the complementary binding protein domains. [ 17 ]
Cadherins are calcium-dependent adhesion molecules. Cadherins are extremely important in the process of morphogenesis – fetal development . Together with an alpha-beta catenin complex, the cadherin can bind to the microfilaments of the cytoskeleton of the cell. This allows for homophilic cell–cell adhesion. [ 18 ] The β-catenin – α-catenin linked complex at the adherens junctions allows for the formation of a dynamic link to the actin cytoskeleton. [ 19 ]
Integrins act as adhesion receptors, transmitting signals across the plasma membrane in multiple directions. These molecules are an invaluable part of cellular communication, as a single ligand can be used by many integrins. Research on these molecules, however, is still at an early stage. [ 20 ]
Immunoglobulin superfamily are a group of calcium independent proteins capable of homophilic and heterophilic adhesion. Homophilic adhesion involves the immunoglobulin-like domains on the cell surface binding to the immunoglobulin-like domains on an opposing cell's surface while heterophilic adhesion refers to the binding of the immunoglobulin-like domains to integrins and carbohydrates instead. [ 21 ]
Cell adhesion is a vital component of the body. Loss of this adhesion affects cell structure, cellular functioning and communication with other cells and the extracellular matrix, and can lead to severe health issues and diseases. | https://en.wikipedia.org/wiki/Homophilic_binding
Homoplasy , in biology and phylogenetics , is the term used to describe a feature that has been gained or lost independently in separate lineages over the course of evolution. This is different from homology , which is the term used to characterize the similarity of features that can be parsimoniously explained by common ancestry . [ 1 ] Homoplasy can arise from both similar selection pressures acting on adapting species, and the effects of genetic drift . [ 2 ] [ 3 ]
Most often, homoplasy is viewed as a similarity in morphological characters. However, homoplasy may also appear in other character types, such as similarity in the genetic sequence, [ 4 ] [ 5 ] life cycle types [ 6 ] or even behavioral traits. [ 7 ] [ 5 ]
The term homoplasy was first used by Ray Lankester in 1870. [ 8 ] The corresponding adjective is either homoplasic or homoplastic .
It is derived from the two Ancient Greek words ὁμός ( homós ), meaning "similar, alike, the same", and πλάσσω ( plássō ), meaning "to shape, to mold". [ 9 ] [ 10 ] [ 11 ] [ 4 ]
Parallel and convergent evolution lead to homoplasy when different species independently evolve or gain apparently identical features, which are different from the feature inferred to have been present in their common ancestor. When the similar features are caused by an equivalent developmental mechanism, the process is referred to as parallel evolution. [ 12 ] [ 13 ] The process is called convergent evolution when the similarity arises from different developmental mechanisms. [ 13 ] [ 14 ] These types of homoplasy may occur when different lineages live in comparable ecological niches that require similar adaptations for an increase in fitness. An interesting example is that of the marsupial moles ( Notoryctidae ), golden moles ( Chrysochloridae ) and northern moles ( Talpidae ). These are mammals from different geographical regions and lineages, and have all independently evolved very similar burrowing characteristics (such as cone-shaped heads and flat frontal claws) to live in a subterranean ecological niche. [ 15 ]
In contrast, reversal (a.k.a. vestigialization) leads to homoplasy through the disappearance of previously gained features. [ 16 ] This process may result from changes in the environment in which certain gained characteristics are no longer relevant, or have even become costly. [ 17 ] [ 3 ] This can be observed in subterranean and cave-dwelling animals by their loss of sight, [ 15 ] [ 18 ] in cave-dwelling animals through their loss of pigmentation, [ 18 ] and in both snakes and legless lizards through their loss of limbs. [ 19 ] [ 20 ]
Homoplasy, especially the type that occurs in more closely related phylogenetic groups, can make phylogenetic analysis more challenging. Phylogenetic trees are often selected by means of parsimony analysis . [ 21 ] [ 22 ] These analyses can be done with phenotypic characters, as well as DNA sequences. [ 23 ] Using parsimony analysis, the hypothesis of relationships that requires the fewest (or least costly) character state transformations is preferred over alternative hypotheses. Evaluation of these trees may become a challenge when clouded by the occurrence of homoplasy in the characters used for the analysis. The most important approach to overcoming these challenges is to increase the number of independent (non- pleiotropic , non- linked ) characteristics used in the phylogenetic analysis. Along with parsimony analysis, one could perform a likelihood analysis , where the most likely tree, given a particular model of evolution, is selected, and branch lengths are inferred.
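To make the parsimony criterion concrete, the following sketch implements the classic Fitch small-parsimony count on a fixed rooted binary tree: it returns the minimum number of character-state changes the tree requires to explain the states at its tips. The tree topologies and character states here are hypothetical illustrations, not data from any study cited above.

```python
# Minimal Fitch small-parsimony sketch: count the fewest character-state
# changes a fixed rooted binary tree needs to explain the tip states.

def fitch(tree, tip_states):
    """tree: nested tuples of tip names, e.g. (("A", "B"), ("C", "D"));
    tip_states: dict mapping tip name -> character state."""
    changes = 0

    def state_set(node):
        nonlocal changes
        if isinstance(node, str):                 # a tip
            return {tip_states[node]}
        left, right = (state_set(child) for child in node)
        common = left & right
        if common:                                # Fitch rule: intersect if possible
            return common
        changes += 1                              # otherwise union and count a change
        return left | right

    state_set(tree)
    return changes

# Two candidate trees for the same data: parsimony prefers the one
# requiring fewer state transformations.
states = {"A": "wings", "B": "wings", "C": "no wings", "D": "no wings"}
tree1 = (("A", "B"), ("C", "D"))   # groups like with like: 1 change suffices
tree2 = (("A", "C"), ("B", "D"))   # splits the similar tips: 2 changes needed
print(fitch(tree1, states), fitch(tree2, states))
```

Under this criterion, shared states that a preferred tree cannot explain with a single change are exactly the homoplasies discussed above.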
According to the cladistic interpretation , homoplasy is invoked when the distribution of a character state cannot be explained parsimoniously (without extra inferred character state transformations between the terminals and their ancestral node) on a preferred phylogenetic hypothesis; that is, the feature in question arises (or disappears) at more than one point on the tree. [ 16 ]
In the case of DNA sequences, homoplasy is very common due to the redundancy of the genetic code. An observed homoplasy may simply be the result of random nucleotide substitutions accumulating over time, and thus may not need an adaptationist evolutionary explanation. [ 5 ]
There are numerous documented examples of homoplasy within the following taxa:
The occurrence of homoplasy can also be used to make predictions about evolution. Recent studies have used homoplasy to predict the possibility and the path of extraterrestrial evolution. For example, Levin et al. (2017) suggest that the development of eye-like structures is highly likely, due to its numerous, independently evolved incidences on earth. [ 16 ] [ 32 ]
In his book Wonderful Life , Stephen Jay Gould claims that repeating the evolutionary process, from any point in time onward, would not produce the same results. [ 33 ] The occurrence of homoplasy is viewed by some biologists as an argument against Gould's theory of evolutionary contingency . Powell & Mariscal (2015) argue that this disagreement is caused by an equivocation and that both the theory of contingency and homoplastic occurrence can be true at the same time. [ 34 ] | https://en.wikipedia.org/wiki/Homoplasy |
Homoserine (also called isothreonine) is an α- amino acid with the chemical formula HO 2 CCH(NH 2 )CH 2 CH 2 OH. L -Homoserine is not one of the common amino acids encoded by DNA. It differs from the proteinogenic amino acid serine by insertion of an additional −CH 2 − unit into the sidechain. Homoserine, or its lactone , is the product of a cyanogen bromide cleavage of a peptide by degradation of methionine . Homoserine is an intermediate in the biosynthesis of three essential amino acids : methionine , threonine (an isomer of homoserine), and isoleucine . [ 1 ]
Commercially, homoserine can serve as precursor to the synthesis of isobutanol and 1,4-butanediol . [ 2 ] Purified homoserine is used in enzyme structural studies. [ 3 ] Also, homoserine has played important roles in studies to elucidate peptide synthesis and synthesis of proteoglycan glycopeptides. [ 4 ] Bacterial cell lines can make copious amounts of this amino acid. [ 5 ] [ 2 ]
Its complete biosynthetic pathway includes glycolysis , the tricarboxylic acid (TCA) or citric acid cycle (Krebs cycle), and the aspartate metabolic pathway. It forms by two reductions of aspartic acid via the intermediacy of aspartate semialdehyde. [ 6 ] Specifically, the enzyme homoserine dehydrogenase , in association with NADPH , catalyzes a reversible reaction that interconverts L -aspartate-4-semialdehyde to L -homoserine. Homoserine kinase and homoserine O-succinyltransferase convert homoserine to phosphohomoserine and O -succinyl homoserine, respectively. [ 5 ] Homoserine is produced from aspartate via the intermediate aspartate-4-semialdehyde, which is produced from β-phosphoaspartate. By the action of homoserine dehydrogenases , the semialdehyde is converted to homoserine. [ 7 ]
L -Homoserine is substrate for homoserine kinase , yielding phosphohomoserine (homoserine-phosphate), which is converted by threonine synthase to L -threonine.
Homoserine is converted to O -succinyl homoserine by homoserine O-succinyltransferase . O -succinyl homoserine is a precursor to L - methionine . [ 8 ]
Homoserine inhibits aspartate kinase and glutamate dehydrogenase . [ 5 ] Glutamate dehydrogenase reversibly converts glutamate to α-ketoglutarate, and α-ketoglutarate converts to oxaloacetate through the citric acid cycle. Threonine acts as another allosteric inhibitor of aspartate kinase and homoserine dehydrogenase, but it is a competitive inhibitor of homoserine kinase. [ 8 ]
| https://en.wikipedia.org/wiki/Homoserine
Homothallic refers to the possession, within a single organism, of the resources to reproduce sexually; [ 1 ] i.e., having male and female reproductive structures on the same thallus . The opposite sexual functions are performed by different cells of a single mycelium. [ 2 ]
It can be contrasted to heterothallic .
It is often used to categorize fungi . In yeast, heterothallic cells have mating types a and α . An experienced mother cell (one that has divided at least once) will switch mating type every cell division cycle because of the HO allele.
Sexual reproduction commonly occurs in two fundamentally different ways in fungi. These are outcrossing (in heterothallic fungi) in which two different individuals contribute nuclei to form a zygote, and self-fertilization or selfing (in homothallic fungi) in which both nuclei are derived from the same individual. Homothallism in fungi can be defined as the capability of an individual spore to produce a sexually reproducing colony when propagated in isolation. [ 3 ] Homothallism occurs in fungi by a wide variety of genetically distinct mechanisms that all result in sexually reproducing cultures from a single cell. [ 3 ]
Among the 250 known species of aspergilli, about 36% have an identified sexual state. [ 4 ] Among those Aspergillus species for which a sexual cycle has been observed, the majority in nature are homothallic (self-fertilizing). [ 4 ] Selfing in the homothallic fungus Aspergillus nidulans involves activation of the same mating pathways characteristic of sex in outcrossing species, i.e. self-fertilization does not bypass required pathways for outcrossing sex but instead requires activation of these pathways within a single individual. [ 5 ] Fusion of haploid nuclei occurs within reproductive structures termed cleistothecia , in which the diploid zygote undergoes meiotic divisions to yield haploid ascospores .
Several ascomycete fungal species of the genus Cochliobolus ( C. luttrellii , C. cymbopogonis , C. kusanoi and C. homomorphus ) are homothallic. [ 6 ] The ascomycete fungus Pneumocystis jirovecii is considered to be primarily homothallic. [ 7 ] The ascomycete fungus Neosartorya fischeri is also homothallic. [ 8 ] Cryptococcus depauperatus , a homothallic basidiomycete fungus, grows as long, branching filaments (hyphae). [ 9 ] C. depauperatus can undergo meiosis and reproduce sexually with itself throughout its life cycle. [ 9 ]
A lichen is a composite organism consisting of a fungus and a photosynthetic partner that are growing together in a symbiotic relationship. The photosynthetic partner is usually either a green alga or a cyanobacterium . Lichens occur in some of the most extreme environments on Earth— arctic tundra , hot deserts , rocky coasts, and toxic slag heaps . Most lichenized fungi produce abundant sexual structures and in many species sexual spores appear to be the only means of dispersal (Murtagh et al., 2000). The lichens Graphis scripta and Ochrolechia parella do not produce symbiotic vegetative propagules. Rather the lichen-forming fungi of these species reproduce sexually by self-fertilization (i.e. they are homothallic), and it was proposed that this breeding system allows successful reproduction in harsh environments (Murtagh et al., 2000). [ 10 ] Homothallism appears to be common in natural populations of fungi. Although self-fertilization employs meiosis, it produces minimal genetic variability. Homothallism is thus a form of sex that is unlikely to be adaptively maintained by a benefit related to producing variability. However, homothallic meiosis may be maintained in fungi as an adaptation for surviving stressful conditions; a proposed benefit of meiosis is the promoted homologous meiotic recombinational repair of DNA damages that are ordinarily caused by a stressful environment. [ 11 ]
Homothallism evolved repeatedly from heterothallism . [ 12 ] | https://en.wikipedia.org/wiki/Homothallism |
In mathematics , a homothety (or homothecy , or homogeneous dilation ) is a transformation of an affine space determined by a point S called its center and a nonzero number k called its ratio , which sends point X to a point X ′ by the rule [ 1 ] S X ′ → = k S X → . {\displaystyle {\overrightarrow {SX'}}=k\,{\overrightarrow {SX}}.}
Using position vectors: x ′ = s + k ( x − s ) . {\displaystyle \mathbf {x} '=\mathbf {s} +k(\mathbf {x} -\mathbf {s} ).}
In case of S = O {\displaystyle S=O} (Origin): x ′ = k x , {\displaystyle \mathbf {x} '=k\mathbf {x} ,}
which is a uniform scaling and shows the meaning of special choices for k {\displaystyle k} :
For 1 / k {\displaystyle 1/k} one gets the inverse mapping defined by k {\displaystyle k} .
In Euclidean geometry homotheties are the similarities that fix a point and either preserve (if k > 0 {\displaystyle k>0} ) or reverse (if k < 0 {\displaystyle k<0} ) the direction of all vectors. Together with the translations , all homotheties of an affine (or Euclidean) space form a group , the group of dilations or homothety-translations . These are precisely the affine transformations with the property that the image of every line g is a line parallel to g .
In projective geometry , a homothetic transformation is a similarity transformation (i.e., fixes a given elliptic involution) that leaves the line at infinity pointwise invariant . [ 2 ]
In Euclidean geometry, a homothety of ratio k {\displaystyle k} multiplies distances between points by | k | {\displaystyle |k|} , areas by k 2 {\displaystyle k^{2}} and volumes by | k | 3 {\displaystyle |k|^{3}} . Here k {\displaystyle k} is the ratio of magnification or dilation factor or scale factor or similitude ratio . Such a transformation can be called an enlargement if the scale factor exceeds 1. The above-mentioned fixed point S is called homothetic center or center of similarity or center of similitude .
The term, coined by French mathematician Michel Chasles , is derived from two Greek elements: the prefix homo- ( ὁμός , 'similar') and thesis ( θέσις , 'position'). It describes the relationship between two figures of the same shape and orientation. For example, two Russian dolls looking in the same direction can be considered homothetic.
Homotheties are used to scale the contents of computer screens; for example, smartphones, notebooks, and laptops.
The following properties hold in any dimension.
A homothety has the following properties:
Both properties show:
Derivation of the properties: In order to make calculations easy it is assumed that the center S {\displaystyle S} is the origin: x → k x {\displaystyle \mathbf {x} \to k\mathbf {x} } . A line g {\displaystyle g} with parametric representation x = p + t v {\displaystyle \mathbf {x} =\mathbf {p} +t\mathbf {v} } is mapped onto the point set g ′ {\displaystyle g'} with equation x = k ( p + t v ) = k p + t k v {\displaystyle \mathbf {x} =k(\mathbf {p} +t\mathbf {v} )=k\mathbf {p} +tk\mathbf {v} } , which is a line parallel to g {\displaystyle g} .
The distance of two points P : p , Q : q {\displaystyle P:\mathbf {p} ,\;Q:\mathbf {q} } is | p − q | {\displaystyle |\mathbf {p} -\mathbf {q} |} and | k p − k q | = | k | | p − q | {\displaystyle |k\mathbf {p} -k\mathbf {q} |=|k||\mathbf {p} -\mathbf {q} |} the distance between their images. Hence, the ratio (quotient) of two line segments remains unchanged.
In case of S ≠ O {\displaystyle S\neq O} the calculation is analogous but a little extensive.
Consequences: A triangle is mapped onto a similar one. The homothetic image of a circle is a circle. The image of an ellipse is a similar one; i.e., the ratio of the two axes is unchanged.
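The scaling and parallelism properties above can be verified numerically. The following sketch applies the rule x → s + k(x − s) with an arbitrarily chosen center, ratio, and pair of points, and checks that distances are multiplied by |k| while directions stay parallel.

```python
# Numeric check of the homothety properties: distances scale by |k| and the
# image of a segment stays parallel to the original. Values are arbitrary.
import math

def homothety(s, k, x):
    """Apply x -> s + k*(x - s) componentwise."""
    return tuple(si + k * (xi - si) for si, xi in zip(s, x))

S, k = (1.0, 2.0), -3.0
P, Q = (4.0, 6.0), (0.0, -2.0)
P1, Q1 = homothety(S, k, P), homothety(S, k, Q)

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

# Distances are multiplied by |k| ...
assert math.isclose(dist(P1, Q1), abs(k) * dist(P, Q))

# ... and direction vectors stay parallel (zero cross product):
v  = (Q[0] - P[0],   Q[1] - P[1])
v1 = (Q1[0] - P1[0], Q1[1] - P1[1])
assert math.isclose(v[0] * v1[1] - v[1] * v1[0], 0.0, abs_tol=1e-12)
print("distance ratio:", dist(P1, Q1) / dist(P, Q))  # |k| = 3.0
```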
If for a homothety with center S {\displaystyle S} the image Q 1 {\displaystyle Q_{1}} of a point P 1 {\displaystyle P_{1}} is given (see diagram), then the image Q 2 {\displaystyle Q_{2}} of a second point P 2 {\displaystyle P_{2}} , which does not lie on line S P 1 {\displaystyle SP_{1}} , can be constructed graphically using the intercept theorem: Q 2 {\displaystyle Q_{2}} is the common point of the line S P 2 ¯ {\displaystyle {\overline {SP_{2}}}} and the parallel to P 1 P 2 ¯ {\displaystyle {\overline {P_{1}P_{2}}}} through Q 1 {\displaystyle Q_{1}} . The image of a point collinear with P 1 , Q 1 {\displaystyle P_{1},Q_{1}} can be determined using P 2 , Q 2 {\displaystyle P_{2},Q_{2}} .
Before computers became ubiquitous, scalings of drawings were done by using a pantograph , a tool similar to a compass .
Construction and geometrical background:
Because of | S Q 0 | / | S P 0 | = | Q 0 Q | / | P P 0 | {\displaystyle |SQ_{0}|/|SP_{0}|=|Q_{0}Q|/|PP_{0}|} (see diagram) one gets from the intercept theorem that the points S , P , Q {\displaystyle S,P,Q} are collinear (lie on a line) and equation | S Q | = k | S P | {\displaystyle |SQ|=k|SP|} holds. That shows: the mapping P → Q {\displaystyle P\to Q} is a homothety with center S {\displaystyle S} and ratio k {\displaystyle k} .
Derivation:
For the composition σ 2 σ 1 {\displaystyle \sigma _{2}\sigma _{1}} of the two homotheties σ 1 , σ 2 {\displaystyle \sigma _{1},\sigma _{2}} with centers S 1 , S 2 {\displaystyle S_{1},S_{2}} with σ 1 : x → s 1 + k 1 ( x − s 1 ) , σ 2 : x → s 2 + k 2 ( x − s 2 ) , {\displaystyle \sigma _{1}:\mathbf {x} \to \mathbf {s} _{1}+k_{1}(\mathbf {x} -\mathbf {s} _{1}),\quad \sigma _{2}:\mathbf {x} \to \mathbf {s} _{2}+k_{2}(\mathbf {x} -\mathbf {s} _{2}),}
one gets by calculation for the image of point X : x {\displaystyle X:\mathbf {x} } : ( σ 2 σ 1 ) ( x ) = k 1 k 2 x + k 2 ( 1 − k 1 ) s 1 + ( 1 − k 2 ) s 2 . {\displaystyle (\sigma _{2}\sigma _{1})(\mathbf {x} )=k_{1}k_{2}\mathbf {x} +k_{2}(1-k_{1})\mathbf {s} _{1}+(1-k_{2})\mathbf {s} _{2}.}
Hence, the composition is a translation if k 1 k 2 = 1 {\displaystyle k_{1}k_{2}=1} . In case of k 1 k 2 ≠ 1 {\displaystyle k_{1}k_{2}\neq 1} , the point S 3 : s 3 = k 2 ( 1 − k 1 ) s 1 + ( 1 − k 2 ) s 2 1 − k 1 k 2 {\displaystyle S_{3}:\;\mathbf {s} _{3}={\frac {k_{2}(1-k_{1})\mathbf {s} _{1}+(1-k_{2})\mathbf {s} _{2}}{1-k_{1}k_{2}}}}
is a fixed point (it is not moved) and the composition
is a homothety with center S 3 {\displaystyle S_{3}} and ratio k 1 k 2 {\displaystyle k_{1}k_{2}} . S 3 {\displaystyle S_{3}} lies on line S 1 S 2 ¯ {\displaystyle {\overline {S_{1}S_{2}}}} .
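This composition law can be checked numerically. In the sketch below the centers and ratios are arbitrary test values, with S1 placed at the origin so the closed form for the fixed point stays short; the assertions confirm the composite ratio is k1·k2 and that its center is collinear with S1 and S2.

```python
# Check: composing two homotheties gives a homothety of ratio k1*k2
# (assumed != 1) whose center S3 lies on the line through S1 and S2.
import math

def homothety(s, k):
    return lambda x: tuple(si + k * (xi - si) for si, xi in zip(s, x))

S1, k1 = (0.0, 0.0), 2.0          # S1 at the origin (a convenience choice)
S2, k2 = (3.0, 1.0), 0.25
k = k1 * k2                       # ratio of the composition

def sigma(x):
    return homothety(S2, k2)(homothety(S1, k1)(x))

# With S1 = O the composition is x -> S2 + k2*(k1*x - S2); its fixed point:
S3 = tuple(s2i * (1 - k2) / (1 - k) for s2i in S2)
assert all(math.isclose(a, b) for a, b in zip(sigma(S3), S3))

# Distances scale by |k1*k2|:
P, Q = (5.0, -2.0), (1.0, 7.0)
assert math.isclose(math.dist(sigma(P), sigma(Q)), abs(k) * math.dist(P, Q))

# S3 is collinear with S1 and S2 (zero cross product of direction vectors):
cross = (S2[0] - S1[0]) * (S3[1] - S1[1]) - (S2[1] - S1[1]) * (S3[0] - S1[0])
assert math.isclose(cross, 0.0, abs_tol=1e-12)
print("center of composition:", S3)
```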
Derivation:
The composition of the homothety σ : x → s + k ( x − s ) , k ≠ 1 , {\displaystyle \sigma :\mathbf {x} \to \mathbf {s} +k(\mathbf {x} -\mathbf {s} ),\ k\neq 1,} and the translation x → x + v {\displaystyle \mathbf {x} \to \mathbf {x} +\mathbf {v} } is the mapping x → s + v + k ( x − s ) , {\displaystyle \mathbf {x} \to \mathbf {s} +\mathbf {v} +k(\mathbf {x} -\mathbf {s} ),}
which is a homothety with center s ′ = s + v 1 − k {\displaystyle \mathbf {s} '=\mathbf {s} +{\frac {\mathbf {v} }{1-k}}} and ratio k {\displaystyle k} .
The homothety σ : x → s + k ( x − s ) {\displaystyle \sigma :\mathbf {x} \to \mathbf {s} +k(\mathbf {x} -\mathbf {s} )} with center S = ( u , v ) {\displaystyle S=(u,v)} can be written as the composition of a homothety with center O {\displaystyle O} and a translation: σ : x → k x + ( 1 − k ) s . {\displaystyle \sigma :\mathbf {x} \to k\mathbf {x} +(1-k)\mathbf {s} .}
Hence σ {\displaystyle \sigma } can be represented in homogeneous coordinates by the matrix: ( k 0 ( 1 − k ) u 0 k ( 1 − k ) v 0 0 1 ) {\displaystyle {\begin{pmatrix}k&0&(1-k)u\\0&k&(1-k)v\\0&0&1\end{pmatrix}}}
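Assuming the standard homogeneous-coordinate convention (points as column vectors (x, y, 1)), the matrix of a homothety has k on the diagonal and (1 − k)·S in the last column. The sketch below builds such a matrix and checks it against the defining rule x → s + k(x − s); the particular numbers are arbitrary.

```python
# Homothety with center S=(u,v) and ratio k as a 3x3 matrix acting on
# homogeneous coordinates (x, y, 1).
def homothety_matrix(u, v, k):
    return [[k,   0.0, (1 - k) * u],
            [0.0, k,   (1 - k) * v],
            [0.0, 0.0, 1.0]]

def apply(m, p):
    """Apply the matrix to point p=(x, y), returning the affine image."""
    x, y = p
    h = (x, y, 1.0)
    return tuple(sum(m[i][j] * h[j] for j in range(3)) for i in range(2))

u, v, k = 1.0, 2.0, -3.0
M = homothety_matrix(u, v, k)

# Must agree with the defining rule x -> s + k*(x - s):
P = (4.0, 6.0)
direct = (u + k * (P[0] - u), v + k * (P[1] - v))
assert apply(M, P) == direct
# The center is fixed:
assert apply(M, (u, v)) == (u, v)
print(apply(M, P))
```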
A pure homothety is also a conformal transformation, because it is the composition of a translation and a uniform scaling. | https://en.wikipedia.org/wiki/Homothety
In mathematics , homotopical algebra is a collection of concepts comprising the nonabelian aspects of homological algebra , and possibly the abelian aspects as special cases. The homotopical nomenclature stems from the fact that a common approach to such generalizations is via abstract homotopy theory , as in nonabelian algebraic topology , and in particular the theory of closed model categories .
This subject has received much attention in recent years due to new foundational work of Vladimir Voevodsky , Eric Friedlander , Andrei Suslin , and others resulting in the A 1 homotopy theory for quasiprojective varieties over a field . Voevodsky has used this new algebraic homotopy theory to prove the Milnor conjecture (for which he was awarded the Fields Medal ) and later, in collaboration with Markus Rost , the full Bloch–Kato conjecture .
| https://en.wikipedia.org/wiki/Homotopical_algebra
In mathematics , the homotopy principle (or h-principle ) is a very general way to solve partial differential equations (PDEs), and more generally partial differential relations (PDRs). The h-principle is good for underdetermined PDEs or PDRs, such as the immersion problem, isometric immersion problem, fluid dynamics, and other areas.
The theory was started by Yakov Eliashberg , Mikhail Gromov and Anthony V. Phillips. It was based on earlier results that reduced partial differential relations to homotopy , particularly for immersions. The first evidence of the h-principle appeared in the Whitney–Graustein theorem . This was followed by the Nash–Kuiper isometric C 1 embedding theorem and the Smale–Hirsch immersion theorem.
Assume we want to find a function f {\displaystyle f} on R m {\displaystyle \mathbb {R} ^{m}} which satisfies a partial differential equation of degree k {\displaystyle k} , in coordinates ( u 1 , u 2 , … , u m ) {\displaystyle (u_{1},u_{2},\dots ,u_{m})} . One can rewrite it as Ψ ( u 1 , u 2 , … , u m , J f k ) = 0 , {\displaystyle \Psi (u_{1},u_{2},\dots ,u_{m},J_{f}^{k})=0,}
where J f k {\displaystyle J_{f}^{k}} stands for all partial derivatives of f {\displaystyle f} up to order k {\displaystyle k} . Exchanging every variable in J f k {\displaystyle J_{f}^{k}} for new independent variables y 1 , y 2 , … , y N {\displaystyle y_{1},y_{2},\dots ,y_{N}} turns our equations into Ψ ( u 1 , … , u m , y 1 , … , y N ) = 0 {\displaystyle \Psi (u_{1},\dots ,u_{m},y_{1},\dots ,y_{N})=0}
and some number of equations of the type y j = ∂ y i ∂ u k . {\displaystyle y_{j}={\frac {\partial y_{i}}{\partial u_{k}}}.}
A solution of Ψ ( u 1 , … , u m , y 1 , … , y N ) = 0 {\displaystyle \Psi (u_{1},\dots ,u_{m},y_{1},\dots ,y_{N})=0}
is called a non-holonomic solution , and a solution of the system which is also solution of our original PDE is called a holonomic solution .
In order to check whether a solution to our original equation exists, one can first check if there is a non-holonomic solution. Usually this is quite easy, and if there is no non-holonomic solution, then our original equation did not have any solutions.
A PDE satisfies the h {\displaystyle h} -principle if any non-holonomic solution can be deformed into a holonomic one in the class of non-holonomic solutions. Thus in the presence of h-principle, a differential topological problem reduces to an algebraic topological problem. More explicitly this means that apart from the topological obstruction there is no other obstruction to the existence of a holonomic solution. The topological problem of finding a non-holonomic solution is much easier to handle and can be addressed with the obstruction theory for topological bundles.
While many underdetermined partial differential equations satisfy the h-principle, the falsity of one is also an interesting statement. Intuitively this means that the objects being studied have non-trivial geometry which cannot be reduced to topology. As an example, embedded Lagrangians in a symplectic manifold do not satisfy an h-principle; to prove this, one can for instance find invariants coming from pseudo-holomorphic curves .
Perhaps the simplest partial differential relation is for the derivative to not vanish: f ′ ( x ) ≠ 0. {\displaystyle f'(x)\neq 0.} Properly, this is an ordinary differential relation, as this is a function in one variable.
A holonomic solution to this relation is a function whose derivative is nowhere vanishing, i.e. a strictly monotone differentiable function, either increasing or decreasing. The space of such functions consists of two disjoint convex sets : the increasing ones and the decreasing ones, and has the homotopy type of two points.
A non-holonomic solution to this relation would consist in the data of two functions, a differentiable function f(x), and a continuous function g(x), with g(x) nowhere vanishing. A holonomic solution gives rise to a non-holonomic solution by taking g(x) = f'(x). The space of non-holonomic solutions again consists of two disjoint convex sets, according as g(x) is positive or negative.
Thus the inclusion of holonomic into non-holonomic solutions satisfies the h-principle.
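The deformation can be written down explicitly: keep g fixed and slide f along a straight line toward an antiderivative of g; the pair stays non-holonomic throughout (g never vanishes) and becomes holonomic at t = 1. The sketch below demonstrates this numerically; the particular f, g, grid, and finite-difference scheme are our own illustrative choices.

```python
# Explicit h-principle homotopy for the relation f'(x) != 0 on [-1, 1]:
# deform a non-holonomic pair (f, g), g nowhere zero, to a holonomic one.
f = lambda x: x**3            # f'(0) = 0, so f alone is not a holonomic solution
g = lambda x: 1.0 + x**2      # nowhere zero
G = lambda x: x + x**3 / 3.0  # antiderivative of g with G(0) = 0

def f_t(t, x):                # straight-line homotopy of the first component
    return (1.0 - t) * f(x) + t * G(x)

def deriv(h, x, eps=1e-6):    # central finite difference
    return (h(x + eps) - h(x - eps)) / (2.0 * eps)

xs = [i / 10.0 - 1.0 for i in range(21)]   # grid on [-1, 1]

# g never vanishes, so (f_t(t, .), g) is non-holonomic for every t:
assert all(g(x) != 0.0 for x in xs)
# At t = 1 the pair is holonomic: the derivative of f_1 matches g everywhere.
assert all(abs(deriv(lambda x: f_t(1.0, x), x) - g(x)) < 1e-4 for x in xs)
# At t = 0 it is the original pair, which is NOT holonomic (f'(0) != g(0)):
assert abs(deriv(f, 0.0) - g(0.0)) > 0.5
print("holonomic at t = 1")
```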
This trivial example has nontrivial generalizations:
extending this to immersions of a circle into itself classifies them by order (or winding number ), by lifting the map to the universal covering space and applying the above analysis to the resulting monotone map – the linear map corresponds to multiplying angle: θ ↦ n θ {\displaystyle \theta \mapsto n\theta } ( z ↦ z n {\displaystyle z\mapsto z^{n}} in complex numbers). Note that here there are no immersions of order 0, as those would need to turn back on themselves. Extending this to circles immersed in the plane – the immersion condition is precisely the condition that the derivative does not vanish – the Whitney–Graustein theorem classified these by turning number by considering the homotopy class of the Gauss map and showing that this satisfies an h-principle; here again order 0 is more complicated.
Smale's classification of immersions of spheres as the homotopy groups of Stiefel manifolds , and Hirsch's generalization of this to immersions of manifolds being classified as homotopy classes of maps of frame bundles are much further-reaching generalizations, and much more involved, but similar in principle – immersion requires the derivative to have rank k, which requires the partial derivatives in each direction to not vanish and to be linearly independent, and the resulting analog of the Gauss map is a map to the Stiefel manifold, or more generally between frame bundles.
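The turning number in the circle case can be computed numerically by tracking the angle of the (nowhere-vanishing) derivative around the loop and counting full turns. The sampling and unwrapping scheme below is an illustrative sketch; for the immersion t ↦ exp(i·n·t) the count is n, and n = 0 would force the derivative to vanish, matching the remark that there are no immersions of order 0.

```python
# Numeric turning number: integrate the angle change of the derivative of a
# closed curve over one period and divide by 2*pi.
import cmath, math

def turning_number(curve_deriv, samples=2000):
    total = 0.0
    prev = cmath.phase(curve_deriv(0.0))
    for i in range(1, samples + 1):
        t = 2.0 * math.pi * i / samples
        ang = cmath.phase(curve_deriv(t))
        d = ang - prev
        # unwrap: keep each increment in (-pi, pi]
        while d <= -math.pi:
            d += 2.0 * math.pi
        while d > math.pi:
            d -= 2.0 * math.pi
        total += d
        prev = ang
    return round(total / (2.0 * math.pi))

# d/dt exp(i*n*t) = i*n*exp(i*n*t); turning number is n.
for n in (1, 2, -3):
    deriv = lambda t, n=n: 1j * n * cmath.exp(1j * n * t)
    assert turning_number(deriv) == n
print("turning numbers match")
```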
As another simple example, consider a car moving in the plane. The position of a car in the plane is determined by three parameters: two coordinates x {\displaystyle x} and y {\displaystyle y} for the location (a good choice is the location of the midpoint between the back wheels) and an angle α {\displaystyle \alpha } which describes the orientation of the car. The motion of the car satisfies the equation x ˙ sin ⁡ α = y ˙ cos ⁡ α , {\displaystyle {\dot {x}}\sin \alpha ={\dot {y}}\cos \alpha ,}
since a non-skidding car must move in the direction of its wheels. In robotics terms, not all paths in the task space are holonomic .
A non-holonomic solution in this case, roughly speaking, corresponds to a motion of the car by sliding in the plane. In this case the non-holonomic solutions are not only homotopic to holonomic ones but also can be arbitrarily well approximated by the holonomic ones (by going back and forth, like parallel parking in a limited space) – note that this approximates both the position and the angle of the car arbitrarily closely. This implies that, theoretically, it is possible to parallel park in any space longer than the length of your car. It also implies that, in a contact 3 manifold, any curve is C 0 {\displaystyle C^{0}} -close to a Legendrian curve.
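The no-skid constraint says the velocity (dx/dt, dy/dt) must be parallel to the heading (cos α, sin α), so its residual dx·sin α − dy·cos α vanishes for admissible motion. The sketch below checks this for a car rolling along a circle (wheels tangent to the path) and shows that a sideways slide, the non-holonomic "solution" above, violates it; the particular paths are illustrative.

```python
# Residual of the no-skid constraint for a car with heading angle a and
# velocity (dx, dy): zero exactly when the car moves along its wheels.
import math

def skid(a, dx, dy):
    return dx * math.sin(a) - dy * math.cos(a)

# Rolling motion: position (cos t, sin t) on the unit circle, heading
# a = t + pi/2 (the tangent direction). The constraint holds everywhere.
for i in range(8):
    t = i * math.pi / 4.0
    dx, dy = -math.sin(t), math.cos(t)   # velocity on the circle
    a = t + math.pi / 2.0                # wheels point along the tangent
    assert abs(skid(a, dx, dy)) < 1e-12

# Sliding sideways into a parking spot: velocity (0, -1), heading a = 0.
assert skid(0.0, 0.0, -1.0) == 1.0       # constraint violated
print("constraint holds for rolling, fails for sliding")
```

Approximating the slide by admissible back-and-forth motions, as in parallel parking, is exactly the C0-dense h-principle described above.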
This last property is stronger than the general h-principle; it is called the C 0 {\displaystyle C^{0}} - dense h-principle .
While this example is simple, compare to the Nash embedding theorem , specifically the Nash–Kuiper theorem , which says that any short smooth ( C ∞ {\displaystyle C^{\infty }} ) embedding or immersion of M m {\displaystyle M^{m}} in R m + 1 {\displaystyle \mathbf {R} ^{m+1}} or larger can be arbitrarily well approximated by an isometric C 1 {\displaystyle C^{1}} -embedding (respectively, immersion). This is also a dense h-principle, and can be proven by an essentially similar "wrinkling" – or rather, circling – technique to the car in the plane, though it is much more involved.
Here we list a few counterintuitive results which can be proved by applying the h-principle: | https://en.wikipedia.org/wiki/Homotopy_principle
In mathematical logic and computer science , homotopy type theory ( HoTT ) refers to various lines of development of intuitionistic type theory , based on the interpretation of types as objects to which the intuition of (abstract) homotopy theory applies.
This includes, among other lines of work, the construction of homotopical and higher-categorical models for such type theories; the use of type theory as a logic (or internal language ) for abstract homotopy theory and higher category theory ; the development of mathematics within a type-theoretic foundation (including both previously existing mathematics and new mathematics that homotopical types make possible); and the formalization of each of these in computer proof assistants .
There is a large overlap between the work referred to as homotopy type theory, and that called the univalent foundations project. Although neither is precisely delineated, and the terms are sometimes used interchangeably, the choice of usage also sometimes corresponds to differences in viewpoint and emphasis. [ 1 ] As such, this article may not represent the views of all researchers in the fields equally. This kind of variability is unavoidable when a field is in rapid flux.
At one time, the idea that types in intensional type theory with their identity types could be regarded as groupoids was mathematical folklore . It was first made precise semantically in the 1994 paper of Martin Hofmann and Thomas Streicher called "The groupoid model refutes uniqueness of identity proofs", [ 2 ] in which they showed that intensional type theory had a model in the category of groupoids . This was the first truly " homotopical " model of type theory, albeit only "1- dimensional " (the traditional models in the category of sets being homotopically 0-dimensional).
Their follow-up paper [ 3 ] foreshadowed several later developments in homotopy type theory. For instance, they noted that the groupoid model satisfies a rule they called "universe extensionality", which is none other than the restriction to 1-types of the univalence axiom that Vladimir Voevodsky proposed ten years later. (The axiom for 1-types is notably simpler to formulate, however, since a coherent notion of "equivalence" is not required.) They also defined "categories with isomorphism as equality" and conjectured that in a model using higher-dimensional groupoids, for such categories one would have "equivalence is equality"; this was later proven by Benedikt Ahrens, Krzysztof Kapulkin, and Michael Shulman . [ 4 ]
The first higher-dimensional models of intensional type theory were constructed by Steve Awodey and his student Michael Warren in 2005 using Quillen model categories . These results were first presented in public at the conference FMCS 2006 [ 5 ] at which Warren gave a talk titled "Homotopy models of intensional type theory", which also served as his thesis prospectus (the dissertation committee present were Awodey, Nicola Gambino and Alex Simpson). A summary is contained in Warren's thesis prospectus abstract. [ 6 ]
At a subsequent workshop about identity types at Uppsala University in 2006 [ 7 ] there were two talks about the relation between intensional type theory and factorization systems: one by Richard Garner, "Factorisation systems for type theory", [ 8 ] and one by Michael Warren, "Model categories and intensional identity types". Related ideas were discussed in the talks by Steve Awodey, "Type theory of higher-dimensional categories", and Thomas Streicher , "Identity types vs. weak omega-groupoids: some ideas, some problems". At the same conference Benno van den Berg gave a talk titled "Types as weak omega-categories" where he outlined the ideas that later became the subject of a joint paper with Richard Garner.
All early constructions of higher dimensional models had to deal with the problem of coherence typical of models of dependent type theory, and various solutions were developed. One such was given in 2009 by Voevodsky, another in 2010 by van den Berg and Garner. [ 9 ] A general solution, building on Voevodsky's construction, was eventually given by Lumsdaine and Warren in 2014. [ 10 ]
At the PSSL86 in 2007 [ 11 ] Awodey gave a talk titled "Homotopy type theory" (this was the first public usage of that term, which was coined by Awodey [ 12 ] ). Awodey and Warren summarized their results in the paper "Homotopy theoretic models of identity types", which was posted on the ArXiv preprint server in 2007 [ 13 ] and published in 2009; a more detailed version appeared in Warren's thesis "Homotopy theoretic aspects of constructive type theory" in 2008.
At about the same time, Vladimir Voevodsky was independently investigating type theory in the context of his search for a language for the practical formalization of mathematics. In September 2006 he posted to the Types mailing list "A very short note on homotopy lambda calculus ", [ 14 ] which sketched the outlines of a type theory with dependent products, sums and universes and of a model of this type theory in Kan simplicial sets . It began by saying "The homotopy λ-calculus is a hypothetical (at the moment) type system" and ended with "At the moment much of what I said above is at the level of conjectures. Even the definition of the model of TS in the homotopy category is non-trivial", referring to the complex coherence issues that were not resolved until 2009. This note included a syntactic definition of "equality types" that were claimed to be interpreted in the model by path-spaces, but did not consider Per Martin-Löf 's rules for identity types. It also stratified the universes by homotopy dimension in addition to size, an idea that was later mostly discarded.
On the syntactic side, Benno van den Berg conjectured in 2006 that the tower of identity types of a type in intensional type theory should have the structure of an ω-category, and indeed an ω-groupoid, in the "globular, algebraic" sense of Michael Batanin. This was later proven independently by van den Berg and Garner in the paper "Types are weak omega-groupoids" (published 2008), [ 15 ] and by Peter Lumsdaine in the paper "Weak ω-Categories from Intensional Type Theory" (published 2009) and as part of his 2010 Ph.D. thesis "Higher Categories from Type Theories". [ 16 ]
The concept of a univalent fibration was introduced by Voevodsky in early 2006. [ 17 ] However, because of the insistence of all presentations of the Martin-Löf type theory on the property that the identity types, in the empty context, may contain only reflexivity, Voevodsky did not recognize until 2009 that these identity types can be used in combination with the univalent universes. In particular, the idea that univalence can be introduced simply by adding an axiom to the existing Martin-Löf type theory appeared only in 2009. [ a ] [ b ]
Also in 2009, Voevodsky worked out more of the details of a model of type theory in Kan complexes , and observed that the existence of a universal Kan fibration could be used to resolve the coherence problems for categorical models of type theory. He also proved, using an idea of A. K. Bousfield, that this universal fibration was univalent: the associated fibration of pairwise homotopy equivalences between the fibers is equivalent to the paths-space fibration of the base.
To formulate univalence as an axiom Voevodsky found a way to define "equivalences" syntactically that had the important property that the type representing the statement "f is an equivalence" was (under the assumption of function extensionality) (-1)-truncated (i.e. contractible if inhabited). This enabled him to give a syntactic statement of univalence, generalizing Hofmann and Streicher's "universe extensionality" to higher dimensions. He was also able to use these definitions of equivalences and contractibility to start developing significant amounts of "synthetic homotopy theory" in the proof assistant Rocq (previously known as Coq ); this formed the basis of the library later called "Foundations" and eventually "UniMath". [ 19 ]
Unification of the various threads began in February 2010 with an informal meeting at Carnegie Mellon University , where Voevodsky presented his model in Kan complexes, and his version of Rocq, to a group including Awodey, Warren, Lumsdaine, Robert Harper , Dan Licata, Michael Shulman , and others. This meeting produced the outlines of a proof (by Warren, Lumsdaine, Licata, and Shulman) that every homotopy equivalence is an equivalence (in Voevodsky's good coherent sense), based on the idea from category theory of improving equivalences to adjoint equivalences. Soon afterwards, Voevodsky proved that the univalence axiom implies function extensionality.
The next pivotal event was a mini-workshop at the Mathematical Research Institute of Oberwolfach in March 2011 organized by Steve Awodey, Richard Garner, Per Martin-Löf, and Vladimir Voevodsky, titled "The homotopy interpretation of constructive type theory". [ 20 ] As part of a Coq tutorial for this workshop, Andrej Bauer wrote a small Coq library [ 21 ] based on Voevodsky's ideas (but not actually using any of his code); this eventually became the kernel of the first version of the "HoTT" Coq library [ 22 ] (the first commit of the latter [ 23 ] by Michael Shulman notes "Development based on Andrej Bauer's files, with many ideas taken from Vladimir Voevodsky's files"). One of the most important things to come out of the Oberwolfach meeting was the basic idea of higher inductive types, due to Lumsdaine, Shulman, Bauer, and Warren. The participants also formulated a list of important open questions, such as whether the univalence axiom satisfies canonicity (still open, although some special cases have been resolved positively [ 24 ] [ 25 ] ), whether the univalence axiom has nonstandard models (since answered positively by Shulman), and how to define (semi)simplicial types (still open in MLTT, although it can be done in Voevodsky's Homotopy Type System (HTS), a type theory with two equality types).
Soon after the Oberwolfach workshop, the Homotopy Type Theory website and blog [ 26 ] were established, and the subject began to be popularized under that name. An idea of some of the important progress during this period can be obtained from the blog history. [ 27 ]
The phrase "univalent foundations" is agreed by all to be closely related to homotopy type theory, but not everyone uses it in the same way. It was originally used by Vladimir Voevodsky to refer to his vision of a foundational system for mathematics in which the basic objects are homotopy types, based on a type theory satisfying § the univalence axiom , and formalized in a computer proof assistant. [ 28 ]
As Voevodsky's work became integrated with the community of other researchers working on homotopy type theory, "univalent foundations" was sometimes used interchangeably with "homotopy type theory", [ 29 ] and other times to refer only to its use as a foundational system (excluding, for example, the study of model-categorical semantics or computational metatheory). [ 30 ] For instance, the subject of the IAS special year was officially given as "univalent foundations", although a lot of the work done there focused on semantics and metatheory in addition to foundations. The book produced by participants in the IAS program was titled "Homotopy type theory: Univalent foundations of mathematics"; although this could refer to either usage, since the book only discusses HoTT as a mathematical foundation. [ 29 ]
In 2012–13 researchers at the Institute for Advanced Study held "A Special Year on Univalent Foundations of Mathematics". [ 31 ] The special year brought together researchers in topology , computer science , category theory , and mathematical logic . The program was organized by Steve Awodey , Thierry Coquand and Vladimir Voevodsky .
During the program Peter Aczel , who was one of the participants, initiated a working group which investigated how to do type theory informally but rigorously, in a style that is analogous to ordinary mathematicians doing set theory. After initial experiments it became clear that this was not only possible but highly beneficial, and that a book (the so-called HoTT Book ) [ 29 ] [ 32 ] could and should be written. Many other participants of the project then joined the effort with technical support, writing, proof reading, and offering ideas. Unusually for a mathematics text, it was developed collaboratively and in the open on GitHub , is released under a Creative Commons license that allows people to fork their own version of the book, and is both purchasable in print and downloadable free of charge. [ 33 ] [ 34 ] [ 35 ]
More generally, the special year was a catalyst for the development of the entire subject; the HoTT Book was only one, albeit the most visible, result.
ACM Computing Reviews listed the book as a notable 2013 publication in the category "mathematics of computing". [ 36 ]
HoTT uses a modified version of the " propositions as types " interpretation of type theory, according to which types can also represent propositions and terms can then represent proofs. In HoTT, however, unlike in standard "propositions as types", a special role is played by 'mere propositions' which, roughly speaking, are those types having at most one term, up to propositional equality . These are more like conventional logical propositions than are general types, in that they are proof-irrelevant.
The fundamental concept of homotopy type theory is the path . In HoTT, the type a = b is the type of all paths from the point a to the point b . (Therefore, a proof that a point a equals a point b is the same thing as a path from the point a to the point b .) For any point a , there exists a path of type a = a , corresponding to the reflexive property of equality. A path of type a = b can be inverted, forming a path of type b = a , corresponding to the symmetric property of equality. Two paths of type a = b and b = c can be concatenated, forming a path of type a = c ; this corresponds to the transitive property of equality.
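As an illustration of these rules (a sketch only, not HoTT itself), path inversion and concatenation can be written in Lean 4, whose built-in identity type `Eq` obeys the same formal rules, even though Lean's type theory is not homotopical:

```lean
-- Sketch: Lean's `Eq` stands in for HoTT's path type a = b.
-- Both operations are defined by path induction (`cases p`).

def inv {A : Type} {a b : A} (p : a = b) : b = a := by
  cases p; rfl      -- symmetry: invert a path

def concat {A : Type} {a b c : A} (p : a = b) (q : b = c) : a = c := by
  cases p; exact q  -- transitivity: concatenate two paths
```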
Most importantly, given a path p : a = b and a proof of some property P(a), the proof can be "transported" along the path p to yield a proof of the property P(b). (Equivalently stated, an object of type P(a) can be turned into an object of type P(b).) This corresponds to the substitution property of equality . Here, an important difference between HoTT and classical mathematics comes in. In classical mathematics, once the equality of two values a and b has been established, a and b may be used interchangeably thereafter, with no regard to any distinction between them. In homotopy type theory, however, there may be multiple different paths a = b, and transporting an object along two different paths will yield two different results. Therefore, in homotopy type theory, when applying the substitution property, it is necessary to state which path is being used.
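In the same Lean 4 sketch style (again using Lean's `Eq` as a stand-in for the path type), transport is definable by path induction:

```lean
-- Transport: a path p : a = b converts a proof of P a into a proof of P b.
def transport {A : Type} (P : A → Type) {a b : A} (p : a = b) : P a → P b := by
  cases p; exact fun h => h
```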
In general, a "proposition" can have multiple different proofs. (For example, the type of all natural numbers, when considered as a proposition, has every natural number as a proof.) Even if a proposition has only one proof a, the space of paths a = a may be non-trivial in some way. A "mere proposition" is any type which either is empty, or contains only one point with a trivial path space .
Note that a = b is written for Id_A(a, b), thereby leaving the type A of a, b implicit. It should not be confused with id_A : A → A, denoting the identity function on A. [ c ]
Two functions f, g : A → B are homotopic when they are identified pointwise, i.e. when there is a term of the type f ∼ g :≡ ∏ (x : A), f(x) = g(x). [ 29 ] : 2.4.1
An equivalence between two types A and B belonging to some universe U is a function f : A → B together with a proof that f has a section and a retraction up to homotopy: isequiv(f) :≡ (∑ (g : B → A), f ∘ g ∼ id_B) × (∑ (h : B → A), h ∘ f ∼ id_A). [ 29 ] : 2.4.11, 2.4.10
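A hypothetical Lean 4 rendering of these two definitions (pointwise homotopy, and equivalence as a function carrying a section and a retraction up to homotopy) might read as follows; the names `Homotopy` and `IsEquiv` are illustrative, not library definitions:

```lean
-- Sketch: homotopy as pointwise identification.
def Homotopy {A B : Type} (f g : A → B) : Prop :=
  ∀ x : A, f x = g x

-- Sketch: the data witnessing that f is an equivalence.
structure IsEquiv {A B : Type} (f : A → B) where
  g    : B → A                           -- a section of f
  sect : Homotopy (fun b => f (g b)) id  -- f ∘ g ∼ id
  h    : B → A                           -- a retraction of f
  retr : Homotopy (fun a => h (f a)) id  -- h ∘ f ∼ id
```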
Together with the univalence axiom below, this yields a non-circular " ∞ -isomorphism" notion of identity. [ 37 ]
Having defined equivalences as above, one can show that there is a canonical way to turn paths into equivalences. In other words, there is a function of the type idtoeqv : (A = B) → (A ≃ B), which expresses that types A, B that are equal are, in particular, also equivalent.
The univalence axiom states that this function is itself an equivalence. [ 29 ] : 115 [ 18 ] : 4–6 Therefore, we have (A = B) ≃ (A ≃ B).
"In other words, identity is equivalent to equivalence. In particular, one may say that 'equivalent types are identical'." [ 29 ] : 4
Martín Hötzel Escardó has shown that the property of univalence is independent of Martin-Löf Type Theory (MLTT). [ 18 ] : 6 [ d ] This is because type equivalence is compatible with all constructions of the type theory [ 29 ] : 2.6-2.15 .
Advocates claim that HoTT allows mathematical proofs to be translated into a computer programming language for computer proof assistants much more easily than before, and argue that this approach increases the potential for computers to check difficult proofs. [ 38 ] However, these claims are not universally accepted, and many research efforts and proof assistants do not make use of HoTT.
HoTT adopts the univalence axiom, which relates the equality of logical-mathematical propositions to homotopy theory. An equation such as a = b is a mathematical proposition in which two different symbols have the same value. In homotopy type theory, this is taken to mean that the two shapes which represent the values of the symbols are topologically equivalent. [ 38 ]
These equivalence relationships, ETH Zürich Institute for Theoretical Studies director Giovanni Felder argues, can be better formulated in homotopy theory because it is more comprehensive: Homotopy theory explains not only why "a equals b" but also how to derive this. In set theory, this information would have to be defined additionally, which, advocates argue, makes the translation of mathematical propositions into programming languages more difficult. [ 38 ]
As of 2015, intense research work was underway to model and formally analyse the computational behavior of the univalence axiom in homotopy type theory. [ 39 ]
Cubical type theory is one attempt to give computational content to homotopy type theory. [ 40 ]
However, it is believed that certain objects, such as semi-simplicial types, cannot be constructed without reference to some notion of exact equality. Therefore, various two-level type theories have been developed which partition their types into fibrant types, which respect paths, and non-fibrant types, which do not. Cartesian cubical computational type theory is the first two-level type theory which gives a full computational interpretation to homotopy type theory. [ 41 ]
Homovanillic acid ( HVA ) is a major catecholamine metabolite that is produced by the consecutive action of monoamine oxidase and catechol-O-methyltransferase on dopamine . [ 1 ] Homovanillic acid is used as a reagent to detect oxidative enzymes , and is associated with dopamine levels in the brain .
In psychiatry and neuroscience , brain and cerebrospinal fluid levels of HVA are measured as a marker of metabolic stress caused by 2-deoxy- D -glucose . [ 2 ] HVA presence supports a diagnosis of neuroblastoma and malignant pheochromocytoma .
Fasting plasma levels of HVA are known to be higher in females than in males. [ citation needed ] This does not seem to be influenced by adult hormonal changes, as the pattern is retained in the elderly and post- menopausal as well as transgender people according to their genetic sex , both before and during cross- sex hormone administration. [ 3 ] Differences in HVA have also been correlated to tobacco usage, with smokers showing significantly lower amounts of plasma HVA.
The Honcheonsigye (meaning armillary clock ) is an astronomical clock made by Song Yi-Yeong ( 송이영 ; 宋以潁 ), a professor of Gwansanggam ( 관상감 ; 觀象監 ) (one of the scientific institutions of Joseon) in 1669. [ 1 ] It was designated as South Korean national treasure number 230 on August 9, 1985.
The clock used the alarm clock technology created by Christiaan Huygens in 1657. [ 2 ] This relic shows that Huygens' technology spread to East Asia in just 12 years. It also demonstrates the astronomy and mechanical engineering technology of the Joseon Dynasty .
The clock has an armillary sphere with a diameter of 40 cm. The sphere is activated by a clockwork mechanism, designed to display the position of the heavens at any given time, as well as displaying the hours and marking their passage with a chiming bell. The device is no longer in working order.
The clock is owned by Korea University Museum . It is the only remaining astronomical clock from the Joseon period.
The clock was purchased from an antiques dealer some time before WWII by Mr Kim Seong-su 김성수 金性洙, the rich businessman and politician who founded Korea University . [ 3 ] The historian of science Jeon Sang-Woon 전상운 全相運, who examined the device in 1962, assumed that it was the clockwork driven sphere known to have been made by Song Yiyeong 송이영 宋以穎 in 1669 for King Hyeonjong of Joseon 현종 顯宗, and the British historian of science Joseph Needham adopted this view, giving a detailed citation of the relevant Korean texts from that period, and a detailed description of the mechanism. [ 4 ]
However, the historian of Korean cartography, Gary Ledyard, argued that this device could not have been made as early as 1669, since the names given on the map of the earth on the terrestrial globe at the centre of the object shows a name for part of the southern continent that could not have been known in Korea at that period. [ 5 ]
More recently, O Sanghag 오상학 has argued that the object may date from as late as the beginning of the 19th century, in the time of Crown Prince Ikjong 익종 翼宗 (1809–1830), before the prince became regent in 1827. [ 6 ]
An image of the clock's sphere is shown on the reverse of the 2007 issued 10,000 won banknotes. [ 7 ]
The Honda Prize is awarded by the Honda Foundation. [ 1 ] [ 2 ] [ 3 ] [ 4 ] [ 5 ] It is awarded for "the efforts of an individual or group who contribute new ideas which may lead the next generation in the field of ecotechnology". It is sometimes referred to as the "Nobel Prize in Technology", since it has spotlighted achievements in a variety of fields from a broad, forward-looking perspective, including two artificial intelligence accomplishments that also received the Turing Award. [ 6 ] [ 7 ] [ 8 ]
The prize consists of a diploma, medal, and a reward of 10 million yen.
The Honey Bee Genome Sequencing Consortium is an international collaborative group of genomics scientists, scientific organisations and universities trying to decipher the genome sequences of the honey bee ( Apis mellifera ). It was formed in 2001 by American scientists. In the US, the project is funded by the National Human Genome Research Institute (a division of the National Institutes of Health (NIH) ), the United States Department of Agriculture (USDA) , the Texas Agricultural Experiment Station, the University of Illinois Sociogenomics Initiative, and various beekeepers' associations and the bee industry.
First scientific findings show that the honey bee genome may have evolved more slowly than the genomes of the fruit fly and malaria mosquito . [ 1 ] The bee genome contains versions of some important mammalian genes .
The complete genome of Apis mellifera has been sequenced and consists of 10,000 genes with approximately 236 million base pairs . The size of the genome is a tenth of the human genome . [ 2 ] The Western honey bee gene sequence showed 163 chemical receptors for smell but only 10 for taste. Besides the discovery of new genes for the use of pollen and nectar, researchers found that, in comparison with other insects, Apis mellifera has fewer genes for immunity, detoxification and the development of the cuticula . [ 3 ] The population genetic analysis showed Africa as the origin and hypothesized that the spread into Europe happened in at least two independent waves. [ 4 ]
Data from the scientific collaboration was made available on BeeBase led by Texas A&M University . [ 5 ]
BeeSpace led by the University of Illinois [ 6 ] is an effort to complete a web navigable catalog of related information.
The honeycomb theorem, formerly the honeycomb conjecture , states that a regular hexagonal grid or honeycomb has the least total perimeter of any subdivision of the plane into regions of equal area. The conjecture was proven in 1999 by mathematician Thomas C. Hales . [ 1 ]
Let Γ be any system of smooth curves in the plane R², subdividing the plane into regions (connected components of the complement of Γ) all of which are bounded and have unit area. Then, averaged over large disks in the plane, the average length of Γ per unit area is at least as large as for the hexagon tiling. The theorem applies even if the complement of Γ has additional components that are unbounded or whose area is not one; allowing these additional components cannot shorten Γ. Formally, let B(0, r) denote the disk of radius r centered at the origin, let L_r denote the total length of Γ ∩ B(0, r), and let A_r denote the total area of B(0, r) covered by bounded unit-area components. (If these are the only components, then A_r = πr².) Then the theorem states that lim sup_{r → ∞} L_r / A_r ≥ ⁴√12. The value on the right-hand side of the inequality is the limiting length per unit area of the hexagonal tiling.
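As a numerical sanity check of the constant (illustrative only, not part of Hales's proof): in the hexagonal tiling each edge is shared by two unit-area cells, so the limiting length per unit area is half the perimeter of a unit-area regular hexagon, which works out to ⁴√12:

```python
import math

# A regular hexagon with side s has area (3*sqrt(3)/2) * s^2 and perimeter 6*s.
s = math.sqrt(2 / (3 * math.sqrt(3)))    # side length giving unit area
area = (3 * math.sqrt(3) / 2) * s ** 2   # equals 1 by construction

# Each edge is shared by two cells, so charge half the perimeter to each cell.
length_per_unit_area = 6 * s / 2

print(round(area, 12))                                 # 1.0
print(math.isclose(length_per_unit_area, 12 ** 0.25))  # True
```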
The first record of the conjecture dates back to 36 BC, from Marcus Terentius Varro , but is often attributed to Pappus of Alexandria ( c. 290 – c. 350 ). [ 2 ] In the 17th century, Jan Brożek used a similar theorem to argue why bees create hexagonal honeycombs . In 1943, László Fejes Tóth published a proof for a special case of the conjecture, in which each cell is required to be a convex polygon . [ 3 ] The full conjecture was proven in 1999 by mathematician Thomas C. Hales , who mentions in his work that there is reason to believe that the conjecture may have been present in the minds of mathematicians before Varro. [ 1 ] [ 2 ]
It is also related to the densest circle packing of the plane, in which every circle is tangent to six other circles, which fill just over 90% of the area of the plane.
The case when the problem is restricted to a square grid was solved in 1989 by Jaigyoung Choe who proved that the optimal figure is an irregular hexagon. [ 4 ] [ 5 ]
Honeyguides ( family Indicatoridae ) are a family of birds in the order Piciformes . They are also known as indicator birds , or honey birds , although the latter term is also used more narrowly to refer to species of the genus Prodotiscus . They have an Old World tropical distribution, with the greatest number of species in Africa and two in Asia . These birds are best known for their interaction with humans. Honeyguides are noted and named for one or two species that will deliberately lead humans (but, contrary to popular claims, most likely not honey badgers [ 1 ] ) directly to bee colonies, so that they can feast on the grubs and beeswax that are left behind.
The Indicatoridae were noted for their barbet-like structure and brood-parasitic behavior and morphologically considered unique among the non-passerines in having nine primaries. [ 2 ] The phylogenetic relationship between the honeyguides and the eight other families that make up the order Piciformes is shown in the cladogram below. [ 3 ] [ 4 ] The number of species in each family is taken from the list maintained by Frank Gill , Pamela C. Rasmussen and David Donsker on behalf of the International Ornithological Committee (IOC). [ 5 ]
Galbulidae – jacamars (18 species)
Bucconidae – puffbirds (38 species)
Indicatoridae – honeyguides (16 species)
Picidae – woodpeckers (240 species)
Megalaimidae – Asian barbets (35 species)
Lybiidae – African barbets (42 species)
Capitonidae – New World barbets (15 species)
Semnornithidae – toucan barbets (2 species)
Ramphastidae – toucans (43 species)
Most honeyguides are dull-colored, though some have bright yellow coloring in the plumage. All have light outer tail feathers, which are white in all the African species. The smallest species by body mass appears to be the green-backed honeyguide , at an average of 10.2 g (0.36 oz), and by length appears to be the Cassin's honeyguide , at an average of 10 cm (3.9 in), while the largest species by weight is the lyre-tailed honeyguide , at 54.2 g (1.91 oz), and by length, is the greater honeyguide , at 19.5 cm (7.7 in). [ 6 ] [ 7 ] [ 8 ]
They are among the few birds that feed regularly on wax — beeswax in most species, and presumably the waxy secretions of scale insects in the genus Prodotiscus and to a lesser extent in Melignomon and the smaller species of Indicator . They also feed on waxworms which are the larvae of the waxmoth Galleria mellonella , on bee colonies, and on flying and crawling insects, spiders , and occasional fruits. Many species join mixed-species feeding flocks .
Honeyguides are named for a remarkable habit seen in one or two species: guiding humans to bee colonies . Once the hive is open and the honey is taken, the bird feeds on larvae and wax. This behavior has been studied in the greater honeyguide ; some authorities (following Friedmann, 1955) state that it also occurs in the scaly-throated honeyguide , while others disagree. [ 6 ] Wild honeyguides understand various types of human calls that attract them to engage in the foraging mutualism. [ 9 ] In northern Tanzania , honeyguides partner with Hadza hunter-gatherers, and the bird assistance has been shown to increase honey-hunters' rates of finding bee colonies by 560%, and led men to significantly higher yielding nests than those found without honeyguides. [ 10 ] Contrary to most depictions of the human-honeyguide relationship, the Hadza did not actively repay honeyguides, but instead, hid, buried, and burned honeycomb, with the intent of keeping the bird hungry and thus more likely to guide again. [ 10 ] Some experts believe that honeyguide co-evolution with humans goes back to the stone-tool making human ancestor Homo erectus , about 1.9 million years ago. [ 11 ] [ 10 ] Despite some assumptions, no evidence indicates that honeyguides guide the honey badger ; though videos about this exist, there have been accusations that they were staged. [ 12 ] [ 13 ]
Sometimes honeyguides lead humans to animals that are not bees, such as snakes. The reason for this behavior is not clear. [ 14 ]
Although most members of the family are not known to recruit "followers" in their quest for wax, they are also referred to as "honeyguides" by linguistic extrapolation.
The breeding behavior of eight species in Indicator and Prodotiscus is known. They are all brood parasites that lay one egg in a nest of another species, laying eggs in series of about five during a period of 5–7 days. Most favor hole-nesting species, often the related barbets and woodpeckers , but Prodotiscus parasitizes cup-nesters such as white-eyes and warblers . Honeyguide nestlings have been known to physically eject their hosts' chicks from the nests and they have needle-sharp hooks on their beaks with which they puncture the hosts' eggs or kill the nestlings. [ 15 ]
African honeyguide birds are known to lay their eggs in underground nests of other bee-eating bird species. The honeyguide chicks kill the hatchlings of the host using their needle-sharp beaks just after hatching, much as cuckoo hatchlings do. The honeyguide mother ensures her chick hatches first by internally incubating the egg for an extra day before laying it, so that it has a head start in development compared to the hosts' offspring. [ 16 ]
Honeywell, Inc. v. Sperry Rand Corp., et al. , 180 U.S.P.Q. 673 ( D. Minn. 1973) (Case 4-67 Civil 138, 180 USPQ 670), was a landmark U.S. federal court case that in October 1973 invalidated the 1964 patent for the ENIAC , the world's first general-purpose electronic digital computer. The decision held, in part: 1. that the ENIAC inventors had derived the subject matter of the electronic digital computer from the Atanasoff–Berry computer (ABC), prototyped in 1939 by John Atanasoff and Clifford Berry ; 2. that Atanasoff should have legal recognition as the inventor of the first electronic digital computer; and 3. that the invention of the electronic digital computer ought to be placed in the public domain .
The case was a combination of two separate lawsuits: one brought by Sperry Rand Corporation and its holding company Illinois Scientific Developments against Honeywell Corporation in Washington, D.C. , charging Honeywell with patent infringement and demanding royalties, and a countersuit filed in Minneapolis, Minnesota by Honeywell charging Sperry Rand with monopoly and fraud and seeking the invalidation of the ENIAC patent, alleged to be infirm. Both suits were filed on May 26, 1967, with Honeywell filing just minutes earlier, a fact that would later have tremendous bearing on the case.
The trial was presided over by U.S. District Court Judge Earl R. Larson between June 1, 1971, and March 13, 1972, in Minneapolis, Minnesota, a jurisdiction decided when D.C. Circuit Chief Judge John Sirica ruled that Honeywell had won the May 26 race to file the suit in court. Attorneys for Sperry Rand wanted the case to be tried in Washington, D.C., a district perceived to be friendlier to the rights of patent holders; by contrast, Honeywell was at the time the largest private employer in Minnesota. The plaintiff's final 500-page brief in the case was filed September 30, 1972.
Chief among the disputes Honeywell v. Sperry Rand was to resolve were:
With 135 days of oral courtroom testimony by 77 witnesses—and the presentation of the deposition of an additional 80 witnesses—for a total trial transcript of 20,667 pages, Honeywell v. Sperry Rand was at that time the longest trial in the history of the federal court system. It was preceded by six years of litigation that produced thousands of pages of under-oath depositions. The court marked 25,686 exhibits for the plaintiff Honeywell; defendants Sperry Rand and its subsidiary Illinois Scientific Developments contributed 6,968 exhibits. The corporations on the two sides spent a combined more than $8 million pursuing the case. The resulting exhibits and testimony constitute a massive evidentiary record describing the invention and development of the electronic digital computer. Materials relevant to the case but not entered into evidence have appeared, but sparsely and infrequently, since the case's conclusion in 1973.
Computers played a major role in the prosecution of the case for plaintiff Honeywell. A computerized record of documents pertaining to the case, known as Electronic Legal Files (or ELF), allowed Honeywell attorneys to store, sort, recall, and print information on hundreds of different subjects.
More than seven months after the end of courtroom testimony, Judge Earl R. Larson's decision was published on October 19, 1973, in a document over 248 pages long titled Findings of Fact, Conclusions of Law, and Order for Judgment . [ 1 ] Its conclusions defy easy summarization, but key findings include:
The publication of the Honeywell v. Sperry Rand decision coincided with the Saturday Night Massacre , one of many events in the ongoing Watergate scandal of Richard Nixon 's presidency. Because the media's focus was on Watergate , news of the decision attracted little public attention at the time. [ citation needed ]
Finding 3 was the most controversial, as it ascribed the invention of the electronic digital computer to John V. Atanasoff :
3.1.2 Eckert and Mauchly did not themselves invent the automatic electronic computer, but instead derived that subject matter from one Dr. John Vincent Atanasoff.
Charges of derivation stemmed from testimony and correspondence describing two meetings between Atanasoff and Mauchly, in December 1940 and June 1941. The first took place at the University of Pennsylvania, where Atanasoff attended a talk given by Mauchly at a meeting of the American Association for the Advancement of Science on the use of Mauchly's harmonic analyzer (a simple analog computer ) to speed the calculation of meteorological data to test for periodicities in precipitation. The second took place in Ames, Iowa, where Mauchly had driven to visit Atanasoff for five days and to examine his progress on a special-purpose computing machine whose construction Atanasoff had described to Mauchly at the prior meeting. (In the discovery process leading up to Honeywell v. Sperry Rand , this device came to be called the Atanasoff–Berry Computer , or ABC; Clifford Berry had been Atanasoff's graduate student assistant in the computer development project in the basement of the physics building at Iowa State College. In 1942 the two of them left Iowa State for positions in war research, Atanasoff in Washington, D.C. , and Berry in Pasadena, California .)
All parties agree that Mauchly had opportunity to see the ABC, which was then in a sufficiently advanced state of construction to demonstrate many if not all of its general principles. There is disagreement about (and no definitive evidence regarding) the extent to which Mauchly understood—or indeed was interested in or capable of understanding—the circuit designs incorporated in the machine. The ABC's inventors considered their invention novel and patentable. The same trip to Philadelphia in December 1940 included a visit to the Patent Office in Washington, D.C., to conduct patent searches—so Dr. Mauchly's contention under oath that the ABC's inventors were deliberately hesitant about revealing all of the machine's details would seem to be credible. All parties agreed that Mauchly took away with him no written technical description of the ABC. However, he was familiar enough with the ABC's basic method of operation, particularly the involvement of its rotating capacitor memory drum, to have described it to J. Presper Eckert in 1943 or 1944, and to have recounted it in some detail in a 1967 deposition, over 26 years after having visited the ABC in June 1941.
Correspondence from Mauchly to Atanasoff following Mauchly's visit was touted by the plaintiff as a smoking gun . Considered to be particularly damning to the Sperry Rand case were the following often-quoted excerpts:
A number of different ideas have come to me recently anent computing circuits—some of which are more or less hybrids, combining your methods with other things, and some of which are nothing like your machine. The question in my mind is this: is there any objection, from your point of view, to my building some sort of computer which incorporates some of the features of your machine? ... Ultimately a second question might come up, of course, and that is, in the event that your present design were to hold the field against all challengers, and I got the Moore School interested in having something of the sort, would the way be open for us to build an " Atanasoff Calculator " (a la Bush analyzer) here?
Taken in context, this and other letters entered into evidence in Honeywell v. Sperry Rand evinced a spirit of cordiality and mutual admiration between Mauchly and Atanasoff, one that would continue into the 1940s, as Atanasoff recommended Mauchly for part-time consulting work at the Naval Ordnance Laboratory in 1943 and Mauchly continued to visit Atanasoff in White Oak, Maryland throughout 1944, where Mauchly served as mentor, guide, and sounding board to some of those on Atanasoff's staff.
Honeywell v. Sperry Rand and the decision in which it culminated emphasized the differences between the ENIAC and the ABC, some of which were:
Following the ruling, some writers perceived that recognition of Atanasoff as "father of the computer" was slow in coming, and wrote books of their own. These included Pulitzer Prize-winning Iowan reporter Clark R. Mollenhoff and the wife-and-husband team of Alice Burks and Arthur Burks . (Arthur had been on the ENIAC 's engineering staff and had requested to be added as a co-inventor following the issuance of the ENIAC patent; Alice had been a computer at the Moore School .)
Since the ruling, the IEEE Annals of the History of Computing has been the principal battleground for articles debating the derivation controversy. There, John Mauchly's widow Kay published her retort to the first Burks article following her husband's death in 1980. An article by Calvin Mooers , a former employee of Atanasoff's at the Naval Ordnance Laboratory , was published posthumously; in it, he questioned Atanasoff's commitment to and capacity for the development of computing machines even when provided with ample financial resources.
COM DEV International was a satellite technology , space sciences , and telecommunications company based in Cambridge, Ontario , Canada . [ 3 ] The company had branches and offices in Ottawa , the United States , the United Kingdom , China and India .
COM DEV developed and manufactured specialized satellite systems, including microwave systems , switches, optical systems , specialized satellite antennas , as well as components for the aviation and aerospace industry . COM DEV also produced custom equipment designs for commercial, military and civilian purposes, as well as providing contract research for the space sciences.
COM DEV International was founded in 1974 and specialized in microwave technology for the aviation and aerospace industry . The company went on to become a leader in space satellite componentry and hardware, specializing in telecommunication systems; [ 3 ] a global designer and builder of telecommunication components and systems for space satellites; and one of Canada's largest sources of spacecraft instrumentation . [ 4 ]
In 2001, its space products division opened an approximately $7-million surface acoustic wave (SAW) development and manufacturing laboratory in its Cambridge facility. [ 5 ]
In 2005, it purchased the EMS Technologies Space Science optical division in Ottawa, formerly CAL Corporation, from MacDonald, Dettwiler and Associates for $5 million. [ 6 ]
In 2007, it purchased a Passive Microwave division in El Segundo, California , for $8.75 million. [ 7 ] In 2010, it purchased Ottawa-based space instrument supplier Routes AstroEngineering for $1.7 million. [ 8 ] Later that year, it established a subsidiary called exactEarth offering global ship tracking data services. [ 9 ] In 2015, it purchased MESL Microwave of Edinburgh, Scotland . [ 10 ] Also that year, it entered the waveguide market with the purchase of Pacific Wave Systems (PWS) of Garden Grove, California . [ 11 ]
On November 15, 2015, Honeywell announced that it would acquire COM DEV, which would become part of Honeywell's Defense and Space business. On February 4, 2016, Honeywell announced that it had completed the acquisition, [ 12 ] and COM DEV has since been renamed Honeywell Cambridge . [ 13 ]
Since the 1990s, the company has manufactured components for satellites including:
The company has developed and built satellite assemblies or components for over 900 satellite missions, including:
Upcoming missions include:
Past projects have also included an Automatic Identification System (AIS) validation nanosatellite launched on an Antrix PSLV-C9 vehicle from the Satish Dhawan Space Centre in Sriharikota , India in April 2008. The AIS experimental spacecraft was built under contract by the University of Toronto Institute for Aerospace Studies (UTIAS) Space Flight Laboratory (SFL), which was also assigned responsibility for its operation. [ 14 ]
COM DEV provided research and development work in aeronautics and space technology . Many of the company's modules are used in well-known space probes and satellites. COM DEV was known for cooperating with major space agencies , including NASA , the European Space Agency (ESA), JAXA , the Indian Space Research Organisation and the Canadian Space Agency (CSA).
Honeywell Primus is a range of Electronic Flight Instrument System (EFIS) glass cockpits manufactured by Honeywell Aerospace .
Each system is composed of multiple display units used as primary flight displays and multi-function displays .
Primus 1000 is used on:
Primus 2000 and Primus 2000XP are used on:
Primus Elite is an upgrade to the older SPZ-8000 series and Primus 1000 and 2000/2000XP flight decks. The upgrade replaces the cathode ray tube (CRT) displays with new lightweight liquid-crystal displays (LCD). The Primus Elite displays also add synthetic vision system ( SVS ) capability, Jeppesen charts, XM weather, and overlays for airports, navaids, TAFs, METARs, geopolitical boundaries, airways, airspace information, NOTAMs, and other features. A cursor control device (CCD) on the multi-function display is used to select among these options.
Primus Apex is based on the Primus Epic and is designed for single-pilot turboprop aircraft and very light jets .
It is installed in:
Primus Epic and Primus Epic 2 are designed for two-crew business or regional jets .
They are used on:
While primarily designed for jet aircraft, the Epic cockpit is also used on the AgustaWestland AW139 medium helicopter, which is certified for single-pilot IFR operations. [ 2 ]
Dassault 's Enhanced Avionics System (EASy) was jointly developed with Honeywell and is based on the Primus Epic .
Gulfstream Aerospace 's PlaneView cockpit is also based on the Primus Epic .
Primus Apex flight deck competes with Garmin G1000 and G3000 and Avidyne Entegra while Primus Epic competes with Rockwell Collins Pro Line and Garmin G3000 and G5000 on larger aircraft.
Honeywell UOP , formerly known as UOP LLC or Universal Oil Products , is an American multinational company that develops and delivers technology to the petroleum refining , gas processing, petrochemical production, and major manufacturing industries.
The company's roots date back to 1914, when the revolutionary Dubbs thermal cracking process created the technological foundation for today's modern refining industry. [ citation needed ] In the ensuing decades, UOP engineers generated thousands of patents, leading to important advances in process technology, profitability consultation, and equipment design. [ 2 ]
UOP was founded in 1914 to exploit the market potential of patents held by inventors Jesse A. Dubbs and his son, Carbon Petroleum (C. P.) Dubbs. Perhaps because he was born in Pennsylvania oil country, Jesse Dubbs was enamored with the oil business. He even named his son Carbon after one of the elemental constituents of oil. Later, Carbon added the P. to make his name "euphonious," he said. People started calling him "Petroleum" for fun, and the name stuck. C. P.'s son and grandson were also named Carbon, but each had a different middle initial. [ 3 ] [ 4 ]
When founded in 1914 it was a privately held firm known as the National Hydrocarbon Company . J. Ogden Armour provided initial seed money and kept the firm going through the early years in which it lost money. [ 5 ] [ 4 ] Most of the losses were incurred during lengthy legal battles with petroleum firms that were using technology patented by Dubbs. [ citation needed ]
In 1919 the firm's name became Universal Oil Products. [ citation needed ]
By 1931, petroleum firms saw a possible competitive advantage in owning UOP, and a consortium of firms banded together to purchase it. These firms were Shell Oil Company, Standard Oil Company of California, Standard Oil Company of Indiana, Standard Oil Company of New Jersey, The Texas Company, and N. V. de Bataafsche Petroleum Maatschappij. This worried oil firms that were not part of the group, and it helped prompt the Justice Department to begin an investigation of the arrangement as a possible violation of antitrust laws. [ citation needed ]
The oil firms placed the assets of UOP into a trust to support the American Chemical Society (ACS). In 1959 UOP went public, and income from that sale still provides funds for ACS to administer grants to universities worldwide. [ 3 ]
In the 1970s UOP was acquired by The Signal Companies, which merged with Allied Corporation in 1985, becoming AlliedSignal . [ 6 ]
In August 1988 Union Carbide Corporation and AlliedSignal formed a joint venture combining the latter's wholly owned subsidiary, UOP Inc., and the Catalyst, Adsorbents and Process Systems (CAPS) business of Union Carbide.
AlliedSignal acquired Honeywell in 1999 and assumed the latter's name. In 2005, what was now known as Honeywell acquired Union Carbide's stake in UOP, making it again a wholly owned subsidiary. The reported payment to Union Carbide was $835 million, valuing UOP at $1.6 billion. [ 7 ]
The UOP Riverside research and development laboratory in McCook, Illinois was conceived in 1921 by Hiram J. Halle , the chief executive officer of Universal Oil Products (now simply UOP), as a focal point where the best and brightest scientists could create new products and provide scientific support for the oil refining industry. Between 1921 and 1955, Riverside research resulted in 8,790 U.S. and foreign patents and provided the foundation on which UOP built its success. [ 3 ]
The company benefited immensely from the addition to its research staff of Professor Vladimir Ipatieff , a famous Russian scientist known internationally for his work in high-pressure catalysis. His contributions in catalytic chemistry gave UOP a position of leadership in the development of catalysis as applied to petroleum processing, the first application being catalytic polymerization . Vladimir Haensel, a student of Ipatieff's, joined UOP and developed Platforming in the 1950s. This process used very small amounts of platinum as a catalyst to obtain high yields of high-octane gasoline from petroleum-based feeds. [ 8 ]
In 1963 Universal Oil Products purchased a chemical plant in East Rutherford, New Jersey . The plant was used for solvent recovery operations from waste chemicals. Operations ended in 1979, and ownership of the site was retained by Honeywell. Some of the chemical operations had contaminated adjacent soils, groundwater and waterways in the New Jersey Meadowlands . The New Jersey Department of Environmental Protection and the U.S. Environmental Protection Agency (EPA) ordered cleanup of the plant site, and in 1983 EPA designated the plant as a Superfund site. Honeywell signed agreements and orders to cooperate with EPA in the cleanup operations. As of 2023, several stages of the cleanup have been completed. Remediation of the adjoining wetlands and plans for long-term site maintenance are pending. [ 6 ]
The Riverside facility was recognized as a National Historic Chemical Landmark by the American Chemical Society in 1995. [ 3 ]
Distillation is the most common way to separate chemicals with different boiling points . The greater the difference in boiling points, the easier the separation. When boiling points are too similar, however, distillation is not feasible, and adsorption separation may be possible instead. In adsorption separation, a mixture of chemicals flows past a porous solid called the adsorbent, and some chemicals tend to linger longer. A useful analogy is a busy street of people walking in the same direction past great places to eat: the hungriest people stop right away, while those who are already full make it far down the street. Now imagine flooding the whole town with water so that everyone washes out, and you can collect them according to how hungry they were. In technical terms, the liquid flush is called the desorbent.
This type of separation was first commonly used in the laboratory to separate small test samples. UOP pioneered a method of separating large volumes of chemicals, and calls its counter-current embodiment the Sorbex family of processes. [ 9 ] These are the major ones designed by UOP:
Parex: separation of para-xylene from a mixture of xylene isomers
MX Sorbex: separation of meta-xylene from a mixture of xylene isomers
Molex: linear paraffins from branched and cyclic hydrocarbons
Olex: olefins from paraffins
Cresex: para-cresol or meta-cresol from other cresol isomers
Cymex: para-cymene or meta-cymene from other cymene isomers
Sarex: fructose from mixed sugars
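The "hungry pedestrians" analogy above can be turned into a toy numerical model. This is a hedged sketch of generic adsorption chromatography, not of UOP's actual Sorbex process (which is a continuous, counter-current simulated-moving-bed operation); the retention factors and bed dimensions below are invented for illustration only.

```python
# Toy model of adsorption separation: each component moves through the
# adsorbent bed at a speed reduced by how strongly it adsorbs, like the
# hungry pedestrians who keep stopping at restaurants.

def migration_time(bed_length_m, carrier_speed_m_per_s, retention_factor):
    """Time for a component to traverse the bed.

    A component with retention factor k spends a fraction k/(1+k) of its
    time held on the adsorbent, so its effective speed is v / (1 + k).
    """
    effective_speed = carrier_speed_m_per_s / (1.0 + retention_factor)
    return bed_length_m / effective_speed

# Hypothetical feed: assume para-xylene adsorbs strongly, the others weakly.
feed = {"para-xylene": 4.0, "meta-xylene": 1.0, "ortho-xylene": 0.8}

times = {name: migration_time(10.0, 0.01, k) for name, k in feed.items()}
for name, t in sorted(times.items(), key=lambda kv: kv[1]):
    print(f"{name:12s} exits the bed after about {t:6.0f} s")
```

Because the strongly adsorbed species exits last, collecting the outlet stream over time separates the mixture, which is the essence of what the Sorbex processes exploit at industrial scale.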
In 2008, UOP revealed its Ecofining process which takes vegetable oils , or lipids , and converts them into replacements for diesel and jet fuels. The resultant fuels from this refining process are indistinguishable from existing fossil-based petro-diesels and jet fuels. [ 10 ]
Most of UOP's work is not known to the general public, since most applications are within refineries and petrochemical plants. However, one technology UOP helped develop is familiar to automobile owners: during the 1970s, UOP worked on pioneering a combined muffler and catalytic converter . To help publicize this work the company sponsored CanAm and Formula One teams; the race cars were developed by Shadow Racing Cars , and many race fans were drawn to the team's innovative designs and underdog status. UOP achieved its goal when California adopted the catalytic converter, after UOP governmental relations representative Donald Gazzaniga helped push legislation through the state Senate and Assembly. [ 11 ]
The Hong Kong Academy of Engineering ( HKAE ), formerly the Hong Kong Academy of Engineering Sciences, is an engineering science institution based in Hong Kong . It aims to encourage and sustain excellence in engineering applied to useful ends, and to promote the development of the science, art and practice of engineering for social well-being.
The Academy was established on 13 September 1994, by Sir S.Y. Chung, Prof. Yau-Kai Cheung, Sir Charles K. Kao and other engineering scholars in Hong Kong. [ 1 ]
The Hong Kong International Convention for the safe and environmentally sound recycling of ships , or Hong Kong Convention , is a multilateral convention adopted in 2009; it enters into force on 26 June 2025. The conference that created the convention was attended by 63 countries and overseen by the International Maritime Organization (IMO).
The convention has been designed to improve the health and safety of current ship breaking practices. Ship breaking is considered to be "amongst the most dangerous of occupations, with unacceptably high levels of fatalities, injuries and work-related diseases" [ 3 ] by the ILO as large ships are often beached and then dismantled by hand by workers with very little personal protective equipment (PPE). This is most common in Asia, with India , Bangladesh, China, and Pakistan holding the largest ship breaking yards . [ 4 ]
The Hong Kong Convention recognised that ship recycling is the most environmentally sound way to dispose of a ship at the end of its life, as most of the ship's materials can be reused. However, it regards current methods as unacceptable: the work causes many injuries and fatalities among workers, who lack the correct safety equipment to dismantle large ships safely, and most vessels contain large amounts of hazardous materials such as asbestos , PCBs , TBT , and CFCs , exposure to which can lead to highly life-threatening diseases such as mesothelioma and lung cancer. [ 5 ]
The Inventory of Hazardous Materials has been designed to minimise the dangers of these hazards. The convention defines a hazardous material as "any material or substance which is liable to create hazards to human health and/or the environment". [ 7 ]
All vessels over 500 gross tons (GT) will have to comply with the convention once it enters into force. Each party must restrict the use of hazardous materials on all ships that fly its flag. [ 7 ]
New ships must all carry an Inventory of Hazardous Materials. The inventory will list all 'hazardous materials' on board the vessel, including their amounts and locations. Existing ships must comply no later than five years after the convention comes into force, or prior to being recycled if this occurs before the five-year period. The inventory will remain with a vessel throughout its lifespan, being updated as all new installations enter the ship, as these may potentially contain hazards. The presence of the inventory will then ensure the safety of crew members during the vessel's working life, and also the safety of workers during the recycling process.
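As a purely illustrative sketch, an inventory of this kind is essentially a living list of material/amount/location records that grows as new installations enter the ship. The class names, field names, and example data below are invented for illustration and are not drawn from the convention text.

```python
# Minimal sketch of an Inventory of Hazardous Materials (IHM) record:
# each entry lists a material, its amount, and its location on board,
# and the inventory is updated whenever a new installation enters the ship.
from dataclasses import dataclass, field

@dataclass
class HazmatEntry:
    material: str
    amount_kg: float
    location: str

@dataclass
class Inventory:
    ship_name: str
    entries: list = field(default_factory=list)

    def record_installation(self, entry: HazmatEntry) -> None:
        """Log a new installation that may contain hazardous material."""
        self.entries.append(entry)

    def total_kg(self, material: str) -> float:
        """Total declared amount of one material across the whole ship."""
        return sum(e.amount_kg for e in self.entries if e.material == material)

# Hypothetical ship and entries, for illustration only.
ihm = Inventory("MV Example")
ihm.record_installation(HazmatEntry("asbestos", 120.0, "engine-room lagging"))
ihm.record_installation(HazmatEntry("PCBs", 4.5, "cable insulation, deck 2"))
ihm.record_installation(HazmatEntry("asbestos", 30.0, "boiler gaskets"))
print(ihm.total_kg("asbestos"))  # 150.0
```

Keeping amounts and locations per entry is what lets a recycling yard plan safe removal, which is the stated purpose of the inventory at end of life.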
The convention was open for signature between 1 September 2009 and 31 August 2010, and remained open for accession afterwards. It will enter into force two years after "15 states, representing 40% of the world merchant shipping by gross tonnage, and on average 3% of recycling tonnage for the previous 10 years, have either signed it without reservation as to ratification , acceptance or approval, or have deposited instruments of ratification, acceptance, approval or accession with the Secretary General". [ 2 ] [ 1 ] The convention will enter into force on 26 June 2025. [ 8 ]
In advance of ratification of the Hong Kong Convention, the Industry Working Group on Ship Recycling in 2009 issued the first edition of Guidelines on Transitional Measures for Shipowners Selling Ships for Recycling . These are supported by maritime organizations: the International Chamber of Shipping (ICS), the Baltic and International Maritime Council (BIMCO), the International Association of Classification Societies (IACS), Intercargo , the International Parcel Tankers Association (IPTA), Intertanko , the Oil Companies International Marine Forum (OCIMF), and the International Transport Workers' Federation (ITF). The Transitional Measures are also supported by the national shipowners' associations of Australia, Bahamas, Belgium, Canada, Chile, Cyprus, Denmark, Faroe Islands, Finland, France, Germany, Greece, Hong Kong, India, Ireland, Italy, Japan, Korea, Kuwait, Liberia, Mexico, Netherlands, Norway, Portugal, Philippines, Russia, Singapore, Spain, Sweden, Switzerland, Turkey, United Kingdom and United States. [ 6 ]
The EU Ship Recycling Regulation (SRR) [ 9 ] entered into force on 30 December 2013. Although this regulation closely follows the Hong Kong Convention, there are important differences. The Regulation sets out a number of requirements for European ships, European ship owners, ship recycling facilities willing to recycle European ships, and the relevant competent authorities or administrations. The European Union developed its own regulation after observing how many EU ships ended up in unsustainable recycling facilities. Europeans own around 40% of the world fleet, around 15,000 ships. Of these, around 10,000 fly an EU member-state flag, but only 7% of EU-flagged ships are dismantled within EU territory; the rest are mostly dismantled in South Asia. [ 10 ]
The SRR aims to address the environmental and health hazards associated with ship dismantling by setting high standards for EU-flagged vessels at the end of their operational lives. One of the key components developed by the EU is the European List of Approved Ship Recycling Facilities , identifying the facilities approved to recycle EU-flagged ships. For a ship recycling yard to be included on the list, its facilities must comply with strict environmental and worker-safety standards, reducing toxic-waste release and promoting safe dismantling practices. Member states report to the Commission on which facilities in their territory comply with the requirements and are thereby included on the list. Shipyards outside the EU can also be included on the European List but must apply to the Commission with proof of the yard's standards. [ 11 ]
To be included on the European List, ship recycling facilities must adhere to specific requirements set by the EU and aligned with the Hong Kong Convention and other international guidelines. Facilities need authorization, robust structural and operational standards, environmental safety protocols, and measures for monitoring health and safety risks to workers and nearby populations. This includes handling hazardous materials on impermeable surfaces, training workers and providing them with protective equipment, implementing emergency plans, and recording incidents. Operators must also submit recycling plans and completion reports, ensuring full compliance and minimizing environmental and health impacts during ship recycling activities. As of November 2024, the list contains 45 shipyards. Because the list works as a guarantee of a yard's safety and validity, shipyards can be added to the list and can also be removed if they cease to comply with the regulation. [ 12 ] [ 13 ] [ 14 ]
In addition to the list of approved facilities, the SRR mandates that each ship hold an Inventory of Hazardous Materials (IHM) listing the hazardous substances used in its construction; a "hazardous material" means any material or substance which is liable to create hazards to human health and/or the environment. New installation of materials such as asbestos and ozone-depleting substances is prohibited, and the presence of materials containing lead, mercury and radioactive substances, among others, must be reported and restricted. This inventory, which must be maintained throughout the ship's life, helps guide shipyards and recyclers on safe waste management and reduces accidental environmental contamination. Ships also report on operationally generated waste, meaning wastewater and residues generated by the normal operation of ships. Under EU rules, any EU ship going for dismantling, all new European ships, and third-country ships stopping in EU ports must carry an inventory of hazardous materials on board. [ 12 ] [ 13 ] [ 14 ]
This list, as of 27 July 2023, contained 48 ship-recycling facilities, including 38 yards in Europe (EU, Norway and the UK), 9 yards in Turkey and 1 yard in the USA. Several yards on the European List are also capable of recycling large vessels. [ 15 ] The list excluded some of the largest ship recycling yards in India and Bangladesh, even though they have obtained Statements of Compliance (SoCs) with the HKC from various classification societies. [ 16 ] This exclusion has led many ship owners to change the flag of their vessel before recycling, or to sell the ship to cash buyers, to evade the regulations. [ 17 ] [ 18 ] Excluded countries strive to bring the HKC into force as the universal regulation, arguing that it would be irrational for international shipping to be regulated by multiple, competing standards. [ 16 ]
The SRR does, however, come into conflict with other maritime law. When a ship receives a recycling certificate under the Hong Kong Convention, it may also be classified as hazardous waste under the Basel Convention . Throughout the certificate's validity, which can last up to three months, the ship's owners may face the risk of arrest in some ports for violating the Basel Convention. Through its Waste Shipment Regulation (WSR), the EU intends to implement the same rules as the Basel Convention. However, when implementing the SRR, the EU opted to unilaterally exclude EU-flagged vessels from the scope of the existing WSR. This decision effectively created an unauthorized exemption from the Basel regime for certain types of hazardous waste, lacking sufficient justification. [ 19 ]
Honing oil is a liquid, solution or emulsion used to aid in the cutting or grinding of metal, typically by abrasive tools or stones, and may or may not contain oil. It can also be called machining oil, tool oil , cutting fluid , and cutting oil .
In the context of hand blade sharpening , honing oil is used on a sharpening stone to protect the stone, carry away the debris ( swarf ), and more efficiently produce a keen edge on a metal blade such as a knife. [ citation needed ] In a machine shop it also carries away excess heat and, depending on composition, may prevent unintentional tearing and welding of the metal. When used with materials such as soft copper, it may have extra additives to prevent stone loading, or metal deactivators to prevent staining of copper-containing alloys. [ citation needed ] To achieve maximum cutting rates and abrasive life with petroleum (mineral) based machining oils when honing difficult materials like stainless steel, a higher level of surface-active lubricity agents is combined with sulfur extreme-pressure additives. Industrial honing oil is typically available in large quantities; honing oils for home knife sharpening come in small bottles.
There are many different kinds of honing oils to suit different needs, and it is important to use the appropriate one for the job. In the case of knife sharpening, motor oil is too thick or heavy and can over-lubricate or clog a sharpening stone, whereas WD-40 is too light an oil and will not carry the swarf (metal filings plus stone dust) away from the stone, clogging it. Not using any oil at all will also clog or glaze the stone, again reducing its cutting power. Historically, sperm whale oil , neatsfoot oil , and other animal fats were popular.
Oils were once used exclusively, in part because the high-carbon steels of the time, such as 1095, could rust when simple water-based solutions were used; the term honing "oil" is used today even for water-based honing solutions.
Commercial honing oil, light sewing machine oil or, in a pinch, heavier oil thinned with paint thinner ( white spirit ) or kerosene is suggested by veteran Swedish wood carver Wille Sundqvist . [ 1 ] He further suggests: " Kerosine alone works well on fine, hard stones."
The two most common classes of honing oil are petroleum based (typically mineral oils) and non-petroleum based (typically water, vegetable oil, or an animal fat such as neatsfoot oil or whale oil). Common additives include chlorine, sulfur, rust inhibitors, and detergents.
Honing oil has just the right consistency for sharpening stones: it will neither gum up nor glaze the stone, and it provides just enough lubrication to avoid wearing the stone out prematurely. Importantly, it also "floats" off generated swarf, preventing clogging of the sharpening stone, which would diminish its future cutting ability.
The backstaff is a navigational instrument that was used to measure the altitude of a celestial body , in particular the Sun or Moon . When observing the Sun, users kept the Sun to their back (hence the name) and observed the shadow cast by the upper vane on a horizon vane. It was invented by the English navigator John Davis , who described it in his book Seaman's Secrets in 1594. [ 1 ]
Backstaff is the name given to any instrument that measures the altitude of the sun by the projection of a shadow. It appears that the idea for measuring the sun's altitude using back observations originated with Thomas Harriot . [ 2 ] Many types of instruments evolved from the cross-staff that can be classified as backstaffs. Only the Davis quadrant remains dominant in the history of navigation instruments. Indeed, the Davis quadrant is essentially synonymous with backstaff. However, Davis was neither the first nor the last to design such an instrument and others are considered here as well.
Captain John Davis invented a version of the backstaff in 1594. Davis was a navigator who was quite familiar with the instruments of the day such as the mariner's astrolabe , the quadrant and the cross-staff . He recognized the inherent drawbacks of each and endeavoured to create a new instrument that could reduce those problems and increase the ease and accuracy of obtaining solar elevations .
One early version of the quadrant staff is shown in Figure 1 . [ 3 ] It had an arc affixed to a staff so that it could slide along the staff (the shape is not critical, though the curved shape was chosen). The arc (A) was placed so that it would cast its shadow on the horizon vane (B). The navigator would look along the staff and observe the horizon through a slit in the horizon vane. By sliding the arc so that the shadow aligned with the horizon, the angle of the sun could be read on the graduated staff. This was a simple quadrant, but it was not as accurate as one might like. The accuracy in the instrument is dependent on the length of the staff, but a long staff made the instrument more unwieldy. The maximum altitude that could be measured with this instrument was 45°.
The next version of his quadrant is shown in Figure 2 . [ 3 ] The arc on the top of the instrument in the previous version was replaced with a shadow vane placed on a transom. This transom could be moved along a graduated scale to indicate the angle of the shadow above the staff. Below the staff, a 30° arc was added. The horizon, seen through the horizon vane on the left, is aligned with the shadow. The sighting vane on the arc is moved until it aligns with the view of the horizon. The angle measured is the sum of the angle indicated by the position of the transom and the angle measured on the scale on the arc.
The instrument that is now identified with Davis is shown in Figure 3 . [ 4 ] This form evolved by the mid-17th century. [ 4 ] The quadrant arc has been split into two parts. The smaller radius arc, with a span of 60°, was mounted above the staff. The longer radius arc, with a span of 30° was mounted below. Both arcs have a common centre. At the common centre, a slotted horizon vane was mounted (B). A moveable shadow vane was placed on the upper arc so that its shadow was cast on the horizon vane. A moveable sight vane was mounted on the lower arc (C).
It is easier for a person to place a vane at a specific location than to read the arc at an arbitrary position. This is due to Vernier acuity , the ability of a person to align two line segments accurately. Thus an arc with a small radius, marked with relatively few graduations, can be used to place the shadow vane accurately at a specific angle. On the other hand, moving the sight vane to the location where the line to the horizon meets the shadow requires a large arc. This is because the position may be at a fraction of a degree and a large arc allows one to read smaller graduations with greater accuracy. The large arc of the instrument, in later years, was marked with transversals to allow the arc to be read to greater accuracy than the main graduations allow. [ 5 ]
Thus Davis was able to optimize the construction of the quadrant to have both a small and a large arc, allowing the effective accuracy of a single arc quadrant of large radius without making the entire instrument so large. This form of the instrument became synonymous with the backstaff. It was one of the most widely used forms of the backstaff. Continental European navigators called it the English Quadrant .
A later modification of the Davis quadrant was to use a Flamsteed glass in place of the shadow vane; this was suggested by John Flamsteed . [ 4 ] This placed a lens on the vane that projected an image of the sun on the horizon vane instead of a shadow. It was useful under conditions where the sky was hazy or lightly overcast; the dim image of the sun was shown more brightly on the horizon vane where a shadow could not be seen. [ 5 ]
In order to use the instrument, the navigator places the shadow vane at a location anticipating the altitude of the sun. Holding the instrument in front of him, with the sun at his back, he positions it so that the shadow cast by the shadow vane falls on the horizon vane at the side of the slit. He then moves the sight vane so that he observes the horizon in a line from the sight vane through the horizon vane's slit while simultaneously maintaining the position of the shadow. This permits him to measure the angle between the horizon and the sun as the sum of the angles read from the two arcs.
Since the shadow's edge represents the limb of the sun, he must correct the value for the semidiameter of the sun.
The Elton's quadrant derived from the Davis quadrant. It added an index arm with spirit levels to provide an artificial horizon.
The demi-cross was an instrument that was contemporary with the Davis quadrant. It was popular outside England. [ 4 ]
The vertical transom was like a half-transom on a cross-staff , hence the name demi-cross . It supported a shadow vane (A in Figure 4 ) that could be set to one of several heights (three according to May, [ 4 ] four according to de Hilster [ 6 ] ). By setting the shadow vane height, the range of angles that could be measured was set. The transom could be slid along the staff and the angle read from one of the graduated scales on the staff.
The sight vane (C) and horizon vane (B) were aligned visually with the horizon. With the shadow vane's shadow cast on the horizon vane and aligned with the horizon, the angle was determined. In practice, the instrument was accurate but more unwieldy than the Davis quadrant. [ 6 ]
The plough was the name given to an unusual instrument that existed for a short time. [ 4 ] It was part cross-staff and part backstaff. In Figure 5 , A is the transom that casts its shadow on the horizon vane at B . It functions in the same manner as the staff in Figure 1 . C is the sighting vane. The navigator uses the sighting vane and the horizon vane to align the instrument horizontally. The sighting vane can be moved left to right along the staff. D is a transom just as one finds on a cross-staff. This transom has two vanes on it that can be moved closer or farther from the staff to emulate different-length transoms. The transom can be moved on the staff and used to measure angles.
The Almucantar staff is a device specifically used for measuring the altitude of the sun at low altitudes.
The cross-staff was normally a direct observation instrument. However, in later years it was modified for use with back observations.
There was a variation of the quadrant – the Back observation quadrant – that was used for measuring the sun's altitude by observing the shadow cast on a horizon vane.
Thomas Hood invented this cross-staff in 1590. [ 4 ] It could be used for surveying, astronomy or other geometric problems.
It consists of two components, a transom and a yard. The transom is the vertical component and is graduated from 0° at the top to 45° at the bottom. At the top of the transom, a vane is mounted to cast a shadow. The yard is horizontal and is graduated from 45° to 90°. The transom and yard are joined by a special fitting (the double socket in Figure 6 ) that permits independent adjustments of the transom vertically and the yard horizontally.
It was possible to construct the instrument with the yard at the top of the transom rather than at the bottom. [ 7 ]
Initially, the transom and yard are set so that the two are joined at their respective 45° settings. The instrument is held so that the yard is horizontal (the navigator can view the horizon along the yard to assist in this). The socket is loosened so that the transom can be moved vertically until the shadow of the vane is cast at the yard's 90° setting. If moving the transom alone can accomplish this, the altitude is given by the transom's graduations. If the sun is too high for this, the horizontal opening in the socket is loosened and the yard is moved to allow the shadow to land on the 90° mark. The yard then yields the altitude.
It was a fairly accurate instrument, as the graduations were well spaced compared to a conventional cross-staff . However, it was a bit unwieldy and difficult to handle in wind.
A late addition to the collection of backstaves in the navigation world, this device was invented by Benjamin Cole in 1748. [ 4 ]
The instrument consists of a staff with a pivoting quadrant on one end. The quadrant has a shadow vane , which can optionally take a lens like the Davis quadrant's Flamsteed glass, at the upper end of the graduated scale (A in Figure 7 ). This casts a shadow or projects an image of the sun on the horizon vane (B). The observer views the horizon through a hole in the sight vane (D) and a slit in the horizon vane to ensure the instrument is level. The quadrant component is rotated until the horizon and the sun's image or shadow are aligned. The altitude can then be read from the quadrant's scale. In order to refine the reading, a circular vernier is mounted on the staff (C).
The fact that such an instrument was introduced in the middle of the 18th century shows that the quadrant was still a viable instrument even in the presence of the octant .
English scientist George Adams created a very similar backstaff at the same time. Adams's version ensured that the distance between the Flamsteed glass and the horizon vane was the same as the distance from the horizon vane to the sight vane. [ 8 ]
Edmund Gunter invented the cross bow quadrant , also called the mariner's bow , around 1623. [ 4 ] It gets its name from the similarity to the archer's crossbow .
This instrument is interesting in that the arc is 120° but is only graduated as a 90° arc. [ 4 ] As such, the angular spacing of a degree on the arc is slightly greater than one degree. Examples of the instrument can be found with a 0° to 90° graduation or with two mirrored 0° to 45° segments centred on the midpoint of the arc. [ 4 ]
The instrument has three vanes, a horizon vane (A in Figure 8 ) which has an opening in it to observe the horizon, a shadow vane (B) to cast a shadow on the horizon vane and a sighting vane (C) that the navigator uses to view the horizon and shadow at the horizon vane. This serves to ensure the instrument is level while simultaneously measuring the altitude of the sun. The altitude is the difference in the angular positions of the shadow and sighting vanes.
With some versions of this instrument, the sun's declination for each day of the year was marked on the arc. This permitted the navigator to set the shadow vane to the date and the instrument would read the altitude directly.
This article incorporates text from a publication now in the public domain : Chambers, Ephraim , ed. (1728). Cyclopædia, or an Universal Dictionary of Arts and Sciences (1st ed.). James and John Knapton, et al.
The hook effect refers to the prozone phenomenon , also known as antibody excess , or the postzone phenomenon , also known as antigen excess . It is an immunologic phenomenon whereby the effectiveness of antibodies to form immune complexes can be impaired when concentrations of an antibody or an antigen are very high. The formation of immune complexes stops increasing with greater concentrations and then decreases at extremely high concentrations, producing a hook shape on a graph of measurements. An important practical relevance of the phenomenon is as a type of interference that plagues certain immunoassays and nephelometric assays , resulting in false negatives or inaccurately low results. Other common forms of interference include antibody interference, cross-reactivity and signal interference. The phenomenon is caused by very high concentrations of a particular analyte or antibody and is most prevalent in one-step (sandwich) immunoassays . [ 2 ] [ 3 ]
In an agglutination test, a person's serum (which contains antibodies ) is added to a test tube , which contains a particular antigen . If the antibodies interact with the antigen to form immune complexes , called agglutination, then the test is interpreted as positive. However, if too many antibodies that can bind to the antigen are present, then the antigenic sites are coated by antibodies, and few or no antibodies directed toward the pathogen are able to bind more than one antigenic particle. [ 4 ] Since the antibodies do not bridge between antigens, no agglutination occurs. Because no agglutination occurs, the test is interpreted as negative. In this case, the result is a false negative. The range of relatively high antibody concentrations within which no reaction occurs is called the prozone . [ 5 ]
The effect can also occur because of antigen excess, when both the capture and detection antibodies become saturated by the high analyte concentration. In this case, no sandwich can be formed by the capture antibody, the antigen and the detection antibody; instead, free antigen competes with captured antigen for detection antibody binding. [ 6 ] Sequential addition of antigen and antibody, paired with stringent washing, can prevent the effect, as can increasing the relative concentration of antibody to antigen, thereby mitigating the effect. [ citation needed ]
Examples include high levels of syphilis antibodies in HIV patients or high levels of cryptococcal antigen leading to false negative tests in undiluted samples. [ 7 ] [ 8 ] This phenomenon is also seen in serological tests for Brucellosis. [ citation needed ] It may be seen in precipitation reactions. The antibody that fails to react is known as the blocking antibody and prevents the precipitating antibody from binding to the antigens. Thus the proper precipitation reaction does not take place. However, when the serum is diluted, the blocking antibody is as well and its concentration decreases enough for the proper precipitation reaction to occur. [ 9 ]
Lewis Thomas described in his memoir a physiologic experiment of 1941 in which he observed the prozone effect in vivo : immunity in rabbits to meningococcus , which was robust, unexpectedly decreased when immunization was used to induce a heightened antibody response. [ 10 ] In other words, getting the rabbits' bodies to produce more antibodies against this bacterium had the counterproductive effect of decreasing their immunity to it. From the viewpoint of an overly simplistic notion of the antibody/antigen relationship, this seems paradoxical , although it is clearly logical from a viewpoint duly informed by present-day molecular biology. Thomas was interested in pursuing this physiologic research further, and remained so for decades afterward, but his career took him in other directions and he was not aware of anyone having pursued it by the time of his memoir. [ 10 ] One kind of relevance that he hypothesized for this in vivo blocking antibody concept was as a driver of human susceptibility to certain infectious diseases. [ 10 ] In the decades since, the concept has also been found to have clinical relevance in allergen immunotherapy , where blocking antibodies can interfere with other antibodies involved in hypersensitivity and thus improve allergy treatment. [ 11 ]
In combinatorial mathematics , the hook length formula is a formula for the number of standard Young tableaux whose shape is a given Young diagram .
It has applications in diverse areas such as representation theory , probability , and algorithm analysis ; for example, the problem of longest increasing subsequences . A related formula gives the number of semi-standard Young tableaux, which is a specialization of a Schur polynomial .
Let λ = ( λ 1 ≥ ⋯ ≥ λ k ) {\displaystyle \lambda =(\lambda _{1}\geq \cdots \geq \lambda _{k})} be a partition of n = λ 1 + ⋯ + λ k {\displaystyle n=\lambda _{1}+\cdots +\lambda _{k}} .
It is customary to interpret λ {\displaystyle \lambda } graphically as a Young diagram , namely a left-justified array of square cells with k {\displaystyle k} rows of lengths λ 1 , … , λ k {\displaystyle \lambda _{1},\ldots ,\lambda _{k}} .
A (standard) Young tableau of shape λ {\displaystyle \lambda } is a filling of the n {\displaystyle n} cells of the Young diagram with all the integers { 1 , … , n } {\displaystyle \{1,\ldots ,n\}} , with no repetition, such that each row and each column form increasing sequences.
For the cell in position ( i , j ) {\displaystyle (i,j)} , in the i {\displaystyle i} th row and j {\displaystyle j} th column, the hook H λ ( i , j ) {\displaystyle H_{\lambda }(i,j)} is the set of cells ( a , b ) {\displaystyle (a,b)} such that a = i {\displaystyle a=i} and b ≥ j {\displaystyle b\geq j} or a ≥ i {\displaystyle a\geq i} and b = j {\displaystyle b=j} .
The hook length h λ ( i , j ) {\displaystyle h_{\lambda }(i,j)} is the number of cells in H λ ( i , j ) {\displaystyle H_{\lambda }(i,j)} .
The hook length formula expresses the number of standard Young tableaux of shape λ {\displaystyle \lambda } , denoted by f λ {\displaystyle f^{\lambda }} or d λ {\displaystyle d_{\lambda }} , as

f λ = n ! ∏ ( i , j ) h λ ( i , j ) {\displaystyle f^{\lambda }={\frac {n!}{\prod _{(i,j)}h_{\lambda }(i,j)}}}
where the product is over all cells ( i , j ) {\displaystyle (i,j)} of the Young diagram.
The figure on the right shows hook lengths for the cells in the Young diagram λ = ( 4 , 3 , 1 , 1 ) {\displaystyle \lambda =(4,3,1,1)} , corresponding to the partition 9 = 4 + 3 + 1 + 1. The hook length formula gives the number of standard Young tableaux as:

f λ = 9 ! 7 ⋅ 5 ⋅ 4 ⋅ 3 ⋅ 2 ⋅ 2 ⋅ 1 ⋅ 1 ⋅ 1 = 216 {\displaystyle f^{\lambda }={\frac {9!}{7\cdot 5\cdot 4\cdot 3\cdot 2\cdot 2\cdot 1\cdot 1\cdot 1}}=216}
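This computation is easy to check numerically. The following Python sketch (the function names are illustrative, not from any particular library) computes the hook lengths of a shape directly from the definition and applies the formula:

```python
from math import factorial

def hook_lengths(shape):
    """Hook length of cell (i, j): 1 + arm (cells to the right) + leg (cells below)."""
    hooks = {}
    for i, row_len in enumerate(shape):
        for j in range(row_len):
            arm = row_len - j - 1
            leg = sum(1 for r in shape[i + 1:] if r > j)
            hooks[(i, j)] = 1 + arm + leg
    return hooks

def num_standard_tableaux(shape):
    """Hook length formula: f^lambda = n! / (product of all hook lengths)."""
    n = sum(shape)
    prod = 1
    for h in hook_lengths(shape).values():
        prod *= h
    return factorial(n) // prod

print(sorted(hook_lengths((4, 3, 1, 1)).values(), reverse=True))  # [7, 5, 4, 3, 2, 2, 1, 1, 1]
print(num_standard_tableaux((4, 3, 1, 1)))                        # 216
```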
A Catalan number C n {\displaystyle C_{n}} counts Dyck paths with n {\displaystyle n} steps going up (U) interspersed with n {\displaystyle n} steps going down (D), such that at each step there are never more preceding D's than U's. These are in bijection with the Young tableaux of shape λ = ( n , n ) {\displaystyle \lambda =(n,n)} : a Dyck path corresponds to the tableau whose first row lists the positions of the U-steps, while the second row lists the positions of the D-steps. For example, UUDDUD corresponds to the tableau with rows 125 and 346.
This shows that C n = f ( n , n ) {\displaystyle C_{n}=f^{(n,n)}} , so the hook formula specializes to the well-known product formula

C n = ( 2 n ) ! ( n + 1 ) ! n ! {\displaystyle C_{n}={\frac {(2n)!}{(n+1)!\,n!}}}
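This specialization can be checked directly: the hooks of the shape (n, n) are n+1, n, ..., 2 in the first row and n, n−1, ..., 1 in the second. A minimal sketch (illustrative function names):

```python
from math import factorial

def f_two_row(n):
    """f^{(n,n)} via the hook length formula, using the explicit hooks
    n+1..2 (first row) and n..1 (second row)."""
    prod = 1
    for h in range(2, n + 2):   # first-row hooks
        prod *= h
    for h in range(1, n + 1):   # second-row hooks
        prod *= h
    return factorial(2 * n) // prod

def catalan(n):
    return factorial(2 * n) // (factorial(n + 1) * factorial(n))

print([f_two_row(n) for n in range(1, 6)])  # [1, 2, 5, 14, 42]
```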
There are other formulas for f λ {\displaystyle f^{\lambda }} , but the hook length formula is particularly simple and elegant.
A less convenient formula expressing f λ {\displaystyle f^{\lambda }} in terms of a determinant was deduced independently by Frobenius and Young in 1900 and 1902 respectively using algebraic methods. [ 1 ] [ 2 ] MacMahon found an alternate proof for the Young–Frobenius formula in 1916 using difference methods. [ 3 ]
The hook length formula itself was discovered in 1953 by Frame, Robinson , and Thrall as an improvement to the Young–Frobenius formula. Sagan [ 4 ] describes the discovery as follows.
One Thursday in May of 1953, Robinson was visiting Frame at Michigan State University. Discussing the work of Staal (a student of Robinson), Frame was led to conjecture the hook formula. At first Robinson could not believe that such a simple formula existed, but after trying some examples he became convinced, and together they proved the identity. On Saturday they went to the University of Michigan, where Frame presented their new result after a lecture by Robinson. This surprised Thrall, who was in the audience, because he had just proved the same result on the same day!
Despite the simplicity of the hook length formula, the Frame–Robinson–Thrall proof is not very insightful and does not provide any intuition for the role of the hooks. The search for a short, intuitive explanation befitting such a simple result gave rise to many alternate proofs. [ 5 ] Hillman and Grassl gave the first proof that illuminates the role of hooks in 1976 by proving a special case of the Stanley hook-content formula, which is known to imply the hook length formula. [ 6 ] Greene , Nijenhuis , and Wilf found a probabilistic proof using the hook walk in which the hook lengths appear naturally in 1979. [ 7 ] Remmel adapted the original Frame–Robinson–Thrall proof into the first bijective proof for the hook length formula in 1982. [ 8 ] A direct bijective proof was first discovered by Franzblau and Zeilberger in 1982. [ 9 ] Zeilberger also converted the Greene–Nijenhuis–Wilf hook walk proof into a bijective proof in 1984. [ 10 ] A simpler direct bijection was announced by Pak and Stoyanovskii in 1992, and its complete proof was presented by the pair and Novelli in 1997. [ 11 ] [ 12 ] [ 4 ]
Meanwhile, the hook length formula has been generalized in several ways.
R. M. Thrall found the analogue of the hook length formula for shifted Young tableaux in 1952. [ 13 ] Sagan gave a shifted hook walk proof for the hook length formula for shifted Young tableaux in 1980. [ 14 ] Sagan and Yeh proved the hook length formula for binary trees using the hook walk in 1989. [ 15 ] Proctor gave a poset generalization (see below).
The hook length formula can be understood intuitively using the following heuristic, but incorrect, argument suggested by D. E. Knuth . [ 16 ] Each entry of a standard Young tableau must be the smallest element of its hook, so if the shape is filled at random, the probability that cell ( i , j ) {\displaystyle (i,j)} contains the minimum element of the corresponding hook is the reciprocal of the hook length. Multiplying these probabilities over all i {\displaystyle i} and j {\displaystyle j} gives the formula. This argument is fallacious since the events are not independent.
Knuth's argument is however correct for the enumeration of labellings on trees satisfying monotonicity properties analogous to those of a Young tableau. In this case, the 'hook' events in question are in fact independent events.
This is a probabilistic proof found by C. Greene , A. Nijenhuis , and H. S. Wilf in 1979. [ 7 ] Define

e λ = n ! ∏ ( i , j ) ∈ λ h λ ( i , j ) {\displaystyle e_{\lambda }={\frac {n!}{\prod _{(i,j)\in \lambda }h_{\lambda }(i,j)}}}
We wish to show that f λ = e λ {\displaystyle f^{\lambda }=e_{\lambda }} . First,

f λ = ∑ μ ↑ λ f μ {\displaystyle f^{\lambda }=\sum _{\mu \uparrow \lambda }f^{\mu }}
where the sum runs over all Young diagrams μ {\displaystyle \mu } obtained from λ {\displaystyle \lambda } by deleting one corner cell. (The maximal entry of the Young tableau of shape λ {\displaystyle \lambda } occurs at one of its corner cells, so deleting it gives a Young tableaux of shape μ {\displaystyle \mu } .)
We define f ∅ = 1 {\displaystyle f^{\emptyset }=1} and e ∅ = 1 {\displaystyle e_{\emptyset }=1} , so it is enough to show the same recurrence

e λ = ∑ μ ↑ λ e μ {\displaystyle e_{\lambda }=\sum _{\mu \uparrow \lambda }e_{\mu }}
which would imply f λ = e λ {\displaystyle f^{\lambda }=e_{\lambda }} by induction. The above sum can be viewed as a sum of probabilities by writing it as

∑ μ ↑ λ e μ e λ = 1 {\displaystyle \sum _{\mu \uparrow \lambda }{\frac {e_{\mu }}{e_{\lambda }}}=1}
We therefore need to show that the numbers e μ e λ {\displaystyle {\frac {e_{\mu }}{e_{\lambda }}}} define a probability measure on the set of Young diagrams μ {\displaystyle \mu } with μ ↑ λ {\displaystyle \mu \uparrow \lambda } . This is done in a constructive way by defining a random walk, called the hook walk , on the cells of the Young diagram λ {\displaystyle \lambda } , which eventually selects one of the corner cells of λ {\displaystyle \lambda } (which are in bijection with diagrams μ {\displaystyle \mu } for which μ ↑ λ {\displaystyle \mu \uparrow \lambda } ). The hook walk is defined by the following rules.
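The rules of the hook walk, as given by Greene, Nijenhuis, and Wilf, are: start at a cell chosen uniformly at random from the diagram; while the current cell is not a corner, jump to a uniformly random *other* cell of the current cell's hook. The sketch below (illustrative names, not from the paper) implements the walk and confirms that it always terminates at a corner cell:

```python
import random

def corner_cells(shape):
    """Corner (removable) cells: cells that end both their row and their column."""
    return [(i, row - 1) for i, row in enumerate(shape)
            if i + 1 == len(shape) or shape[i + 1] < row]

def hook_walk(shape, rng):
    cells = [(i, j) for i, row in enumerate(shape) for j in range(row)]
    i, j = rng.choice(cells)                 # uniformly random starting cell
    while (i, j) not in corner_cells(shape):
        # the rest of the hook: cells strictly to the right or strictly below
        hook = ([(i, b) for b in range(j + 1, shape[i])] +
                [(a, j) for a in range(i + 1, len(shape)) if shape[a] > j])
        i, j = rng.choice(hook)
    return (i, j)

rng = random.Random(0)
ends = [hook_walk((4, 3, 1, 1), rng) for _ in range(1000)]
print(set(ends))  # a subset of the corners {(0, 3), (1, 2), (3, 0)}
```

The proposition below says that the empirical frequencies of these endpoints approach the ratios e μ / e λ.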
Proposition: For a given corner cell ( a , b ) {\displaystyle (a,b)} of λ {\displaystyle \lambda } , we have

p ( a , b ) = e μ e λ {\displaystyle p(a,b)={\frac {e_{\mu }}{e_{\lambda }}}}

where μ = λ ∖ { ( a , b ) } {\displaystyle \mu =\lambda \setminus \{(a,b)\}} and p ( a , b ) {\displaystyle p(a,b)} is the probability that the hook walk terminates at ( a , b ) {\displaystyle (a,b)} .
Given this, summing over all corner cells ( a , b ) {\displaystyle (a,b)} gives ∑ μ ↑ λ e μ e λ = 1 {\displaystyle \sum _{\mu \uparrow \lambda }{\frac {e_{\mu }}{e_{\lambda }}}=1} as claimed.
The hook length formula is of great importance in the representation theory of the symmetric group S n {\displaystyle S_{n}} , where the number f λ {\displaystyle f^{\lambda }} is known to be equal to the dimension of the complex irreducible representation V λ {\displaystyle V_{\lambda }} associated to λ {\displaystyle \lambda } .
The complex irreducible representations V λ {\displaystyle V_{\lambda }} of the symmetric group are indexed by partitions λ {\displaystyle \lambda } of n {\displaystyle n} (see Specht module ) . Their characters are related to the theory of symmetric functions via the Hall inner product:

χ λ ( w ) = ⟨ s λ , p τ ( w ) ⟩ {\displaystyle \chi ^{\lambda }(w)=\langle s_{\lambda },p_{\tau (w)}\rangle }
where s λ {\displaystyle s_{\lambda }} is the Schur function associated to λ {\displaystyle \lambda } and p τ ( w ) {\displaystyle p_{\tau (w)}} is the power-sum symmetric function of the partition τ ( w ) {\displaystyle \tau (w)} associated to the cycle decomposition of w {\displaystyle w} . For example, if w = ( 154 ) ( 238 ) ( 6 ) ( 79 ) {\displaystyle w=(154)(238)(6)(79)} then τ ( w ) = ( 3 , 3 , 2 , 1 ) {\displaystyle \tau (w)=(3,3,2,1)} .
Since the identity permutation e {\displaystyle e} has the form e = ( 1 ) ( 2 ) ⋯ ( n ) {\displaystyle e=(1)(2)\cdots (n)} in cycle notation, τ ( e ) = ( 1 , … , 1 ) = 1 ( n ) {\displaystyle \tau (e)=(1,\ldots ,1)=1^{(n)}} , the formula says

dim V λ = χ λ ( e ) = ⟨ s λ , p 1 ( n ) ⟩ {\displaystyle \dim V_{\lambda }=\chi ^{\lambda }(e)=\langle s_{\lambda },p_{1^{(n)}}\rangle }
The expansion of Schur functions in terms of monomial symmetric functions uses the Kostka numbers :

s λ = ∑ μ K λ μ m μ {\displaystyle s_{\lambda }=\sum _{\mu }K_{\lambda \mu }m_{\mu }}
Then the inner product with p 1 ( n ) = h 1 ( n ) {\displaystyle p_{1^{(n)}}=h_{1^{(n)}}} is K λ 1 ( n ) {\displaystyle K_{\lambda 1^{(n)}}} , because ⟨ m μ , h ν ⟩ = δ μ ν {\displaystyle \langle m_{\mu },h_{\nu }\rangle =\delta _{\mu \nu }} . Note that K λ 1 ( n ) {\displaystyle K_{\lambda 1^{(n)}}} is equal to f λ = dim V λ {\displaystyle f^{\lambda }=\dim V_{\lambda }} , so that ∑ λ ⊢ n ( f λ ) 2 = n ! {\displaystyle \textstyle \sum _{\lambda \vdash n}\left(f^{\lambda }\right)^{2}=n!} from considering the regular representation of S n {\displaystyle S_{n}} , or combinatorially from the Robinson–Schensted–Knuth correspondence .
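The identity ∑ λ⊢n (f^λ)² = n! can be verified for small n with a short sketch; partition enumeration and the hook length formula are coded directly (the names are my own):

```python
from math import factorial

def partitions(n, max_part=None):
    """All partitions of n as weakly decreasing tuples."""
    if max_part is None:
        max_part = n
    if n == 0:
        return [()]
    result = []
    for first in range(min(n, max_part), 0, -1):
        for rest in partitions(n - first, first):
            result.append((first,) + rest)
    return result

def f(shape):
    """Number of standard Young tableaux, by the hook length formula."""
    prod = 1
    for i, row in enumerate(shape):
        for j in range(row):
            arm = row - j - 1
            leg = sum(1 for r in shape[i + 1:] if r > j)
            prod *= 1 + arm + leg
    return factorial(sum(shape)) // prod

for n in range(1, 7):
    print(n, sum(f(lam) ** 2 for lam in partitions(n)))  # equals n!
```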
The computation also shows that:

p 1 ( n ) = ∑ λ ⊢ n f λ s λ {\displaystyle p_{1^{(n)}}=\sum _{\lambda \vdash n}f^{\lambda }s_{\lambda }}
This is the expansion of p 1 ( n ) {\displaystyle p_{1^{(n)}}} in terms of Schur functions using the coefficients given by the inner product, since ⟨ s μ , s ν ⟩ = δ μ ν {\displaystyle \langle s_{\mu },s_{\nu }\rangle =\delta _{\mu \nu }} .
The above equality can also be proven by checking the coefficients of each monomial on both sides and using the Robinson–Schensted–Knuth correspondence or, more conceptually, by looking at the decomposition of V ⊗ n {\displaystyle V^{\otimes n}} into irreducible G L ( V ) {\displaystyle GL(V)} modules and taking characters. See Schur–Weyl duality .
Source: [ 17 ]
By the above considerations

p 1 ( n ) = ∑ λ ⊢ n f λ s λ {\displaystyle p_{1^{(n)}}=\sum _{\lambda \vdash n}f^{\lambda }s_{\lambda }}

so that

Δ ( x ) p 1 ( n ) = ∑ λ ⊢ n f λ Δ ( x ) s λ {\displaystyle \Delta (x)p_{1^{(n)}}=\sum _{\lambda \vdash n}f^{\lambda }\Delta (x)s_{\lambda }}
where Δ ( x ) = ∏ i < j ( x i − x j ) {\displaystyle \Delta (x)=\prod _{i<j}(x_{i}-x_{j})} is the Vandermonde determinant .
For the partition λ = ( λ 1 ≥ ⋯ ≥ λ k ) {\displaystyle \lambda =(\lambda _{1}\geq \cdots \geq \lambda _{k})} , define l i = λ i + k − i {\displaystyle l_{i}=\lambda _{i}+k-i} for i = 1 , … , k {\displaystyle i=1,\ldots ,k} . For the following we need at least as many variables as rows in the partition, so from now on we work with n {\displaystyle n} variables x 1 , ⋯ , x n {\displaystyle x_{1},\cdots ,x_{n}} .
Each term Δ ( x ) s λ {\displaystyle \Delta (x)s_{\lambda }} is equal to
(See Schur function .) Since the vector ( l 1 , … , l k ) {\displaystyle (l_{1},\ldots ,l_{k})} is different for each partition, this means that the coefficient of x 1 l 1 ⋯ x k l k {\displaystyle x_{1}^{l_{1}}\cdots x_{k}^{l_{k}}} in Δ ( x ) p 1 ( n ) {\displaystyle \Delta (x)p_{1^{(n)}}} , denoted [ Δ ( x ) p 1 ( n ) ] l 1 , ⋯ , l k {\displaystyle \left[\Delta (x)p_{1^{(n)}}\right]_{l_{1},\cdots ,l_{k}}} , is equal to f λ {\displaystyle f^{\lambda }} . This is known as the Frobenius Character Formula , which gives one of the earliest proofs. [ 17 ]
It remains only to simplify this coefficient. Multiplying
and
we conclude that our coefficient is
which can be written as
The latter sum is equal to the following determinant
which column-reduces to a Vandermonde determinant , and we obtain the formula
Note that l i {\displaystyle l_{i}} is the hook length of the first box in each row of the Young diagram, and this expression is easily transformed into the desired form n ! ∏ h λ ( i , j ) {\displaystyle {\frac {n!}{\prod h_{\lambda }(i,j)}}} : one shows l i ! = ∏ j > i ( l i − l j ) ⋅ ∏ j ≤ λ i h λ ( i , j ) {\displaystyle \textstyle l_{i}!=\prod _{j>i}(l_{i}-l_{j})\cdot \prod _{j\leq \lambda _{i}}h_{\lambda }(i,j)} , where the latter product runs over the i {\displaystyle i} th row of the Young diagram.
The hook length formula also has important applications to the analysis of longest increasing subsequences in random permutations. If σ n {\displaystyle \sigma _{n}} denotes a uniformly random permutation of order n {\displaystyle n} , L ( σ n ) {\displaystyle L(\sigma _{n})} denotes the maximal length of an increasing subsequence of σ n {\displaystyle \sigma _{n}} , and ℓ n {\displaystyle \ell _{n}} denotes the expected (average) value of L ( σ n ) {\displaystyle L(\sigma _{n})} , then Anatoly Vershik and Sergei Kerov [ 18 ] and independently Benjamin F. Logan and Lawrence A. Shepp [ 19 ] showed that when n {\displaystyle n} is large, ℓ n {\displaystyle \ell _{n}} is approximately equal to 2 n {\displaystyle 2{\sqrt {n}}} . This answers a question originally posed by Stanislaw Ulam . The proof is based on translating the question via the Robinson–Schensted correspondence to a problem about the limiting shape of a random Young tableau chosen according to Plancherel measure . Since the definition of Plancherel measure involves the quantity f λ {\displaystyle f^{\lambda }} , the hook length formula can then be used to perform an asymptotic analysis of the limit shape and thereby also answer the original question.
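The quantity L(σ) itself is cheap to compute via patience sorting, and a small Monte Carlo experiment illustrates the 2√n law. This is an illustrative sketch, not part of the cited proofs:

```python
import random
from bisect import bisect_left

def lis_length(seq):
    """Length of the longest increasing subsequence via patience sorting,
    O(n log n): piles[k] holds the smallest possible tail value of an
    increasing subsequence of length k + 1."""
    piles = []
    for x in seq:
        k = bisect_left(piles, x)
        if k == len(piles):
            piles.append(x)
        else:
            piles[k] = x
    return len(piles)

rng = random.Random(1)
n = 10_000
trials = []
for _ in range(20):
    perm = list(range(n))
    rng.shuffle(perm)
    trials.append(lis_length(perm))
print(sum(trials) / len(trials), 2 * n ** 0.5)  # the average is close to 2*sqrt(n) = 200
```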
The ideas of Vershik–Kerov and Logan–Shepp were later refined by Jinho Baik, Percy Deift and Kurt Johansson, who were able to achieve a much more precise analysis of the limiting behavior of the maximal increasing subsequence length, proving an important result now known as the Baik–Deift–Johansson theorem. Their analysis again makes crucial use of the fact that f λ {\displaystyle f^{\lambda }} has a number of good formulas, although instead of the hook length formula it made use of one of the determinantal expressions.
The formula for the number of Young tableaux of shape λ {\displaystyle \lambda } was originally derived from the Frobenius determinant formula in connection to representation theory: [ 20 ]

f λ = n ! ∏ 1 ≤ i < j ≤ k ( l i − l j ) l 1 ! ⋯ l k ! {\displaystyle f^{\lambda }={\frac {n!\,\prod _{1\leq i<j\leq k}(l_{i}-l_{j})}{l_{1}!\cdots l_{k}!}}\qquad {\text{with }}l_{i}=\lambda _{i}+k-i}
Hook lengths can also be used to give a product representation to the generating function for the number of reverse plane partitions of a given shape. [ 21 ] If λ is a partition of some integer p , a reverse plane partition of n with shape λ is obtained by filling in the boxes in the Young diagram with non-negative integers such that the entries add to n and are non-decreasing along each row and down each column. The hook lengths h 1 , … , h p {\displaystyle h_{1},\dots ,h_{p}} can be defined as with Young tableaux. If π n denotes the number of reverse plane partitions of n with shape λ , then the generating function can be written as
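For the small shape λ = (2,1), whose hook lengths are 3, 1, 1, this product formula for the generating function can be checked against a brute-force count of reverse plane partitions (a sketch with illustrative names):

```python
from itertools import product

HOOKS = [3, 1, 1]  # hook lengths of the shape (2, 1)

def rpp_count(n):
    """Brute-force count of reverse plane partitions of n with shape (2, 1):
    fillings [[a, b], [c]] with non-decreasing rows and columns, a + b + c = n."""
    return sum(1 for a, b, c in product(range(n + 1), repeat=3)
               if a <= b and a <= c and a + b + c == n)

def gf_coeff(n):
    """Coefficient of x^n in prod_i 1 / (1 - x^{h_i}), by a partition-style DP."""
    coeffs = [1] + [0] * n
    for h in HOOKS:
        for k in range(h, n + 1):
            coeffs[k] += coeffs[k - h]
    return coeffs[n]

print([rpp_count(n) for n in range(6)])  # [1, 2, 3, 5, 7, 9]
print([gf_coeff(n) for n in range(6)])   # [1, 2, 3, 5, 7, 9]
```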
Stanley discovered another formula for the same generating function. [ 22 ] In general, if A {\displaystyle A} is any poset with n {\displaystyle n} elements, the generating function for reverse A {\displaystyle A} -partitions is
where P ( x ) {\displaystyle P(x)} is a polynomial such that P ( 1 ) {\displaystyle P(1)} is the number of linear extensions of A {\displaystyle A} .
In the case of a partition λ {\displaystyle \lambda } , we consider the poset on its cells given by the relation
So a linear extension is simply a standard Young tableau, i.e. P ( 1 ) = f λ {\displaystyle P(1)=f^{\lambda }} .
Combining the two formulas for the generating functions we have
Both sides converge inside the disk of radius one and the following expression makes sense for | x | < 1 {\displaystyle |x|<1}
Plugging in x = 1 directly is not valid, but the right-hand side is a continuous function inside the unit disk and a polynomial is continuous everywhere, so at least we can say
Applying L'Hôpital's rule n {\displaystyle n} times yields the hook length formula
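The hook length formula itself states that f^λ = n!/∏ h(i, j), the product running over all cells of the diagram. A small sketch (the function names are our own) verifying this against a direct enumeration of standard Young tableaux:

```python
from itertools import permutations
from math import factorial

def hook_lengths(shape):
    """Hook length of each cell (i, j): arm + leg + 1."""
    conj = [sum(1 for r in shape if r > j) for j in range(shape[0])]
    return [shape[i] - j + conj[j] - i - 1
            for i, r in enumerate(shape) for j in range(r)]

def f_hook(shape):
    """Number of standard Young tableaux via the hook length formula."""
    n = sum(shape)
    prod = 1
    for h in hook_lengths(shape):
        prod *= h
    return factorial(n) // prod

def f_brute(shape):
    """Count standard Young tableaux by trying every filling with 1..n."""
    cells = [(i, j) for i, r in enumerate(shape) for j in range(r)]
    n = len(cells)
    count = 0
    for perm in permutations(range(1, n + 1)):
        t = dict(zip(cells, perm))
        # rows must increase left-to-right, columns top-to-bottom
        if all((j == 0 or t[(i, j - 1)] < t[(i, j)]) and
               (i == 0 or (i - 1, j) not in t or t[(i - 1, j)] < t[(i, j)])
               for (i, j) in cells):
            count += 1
    return count

print(f_hook([3, 2]), f_brute([3, 2]))  # both 5
```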
The Schur polynomial s λ ( x 1 , … , x k ) {\displaystyle s_{\lambda }(x_{1},\ldots ,x_{k})} is the generating function of semistandard Young tableaux with shape λ {\displaystyle \lambda } and entries in { 1 , … , k } {\displaystyle \{1,\ldots ,k\}} . Specializing this to x i = 1 {\displaystyle x_{i}=1} gives the number of semi-standard tableaux, which can be written in terms of hook lengths:
s λ ( 1 , … , 1 ) = ∏ ( i , j ) ∈ Y ( λ ) k − i + j h λ ( i , j ) . {\displaystyle s_{\lambda }(1,\ldots ,1)\ =\ \prod _{(i,j)\in \mathrm {Y} (\lambda )}{\frac {k-i+j}{h_{\lambda }(i,j)}}.}
The Young diagram λ {\displaystyle \lambda } corresponds to an irreducible representation of the special linear group S L k ( C ) {\displaystyle \mathrm {SL} _{k}(\mathbb {C} )} , and the Schur polynomial is also the character of the diagonal matrix d i a g ( x 1 , … , x k ) {\displaystyle \mathrm {diag} (x_{1},\ldots ,x_{k})} acting on this representation. The above specialization is thus the dimension of the irreducible representation, and the formula is an alternative to the more general Weyl dimension formula .
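As an illustrative check of this hook-content product (the shape, the value of k, and the function names below are our own small choices):

```python
from fractions import Fraction
from itertools import product

def ssyt_count_hook(shape, k):
    """Number of SSYT with entries in {1..k}: prod over cells of (k - i + j)/hook."""
    conj = [sum(1 for r in shape if r > j) for j in range(shape[0])]
    total = Fraction(1)
    for i, r in enumerate(shape, start=1):          # 1-based (i, j) as in the formula
        for j in range(1, r + 1):
            hook = (r - j) + (conj[j - 1] - i) + 1  # arm + leg + 1
            total *= Fraction(k - i + j, hook)
    assert total.denominator == 1
    return int(total)

def ssyt_count_brute(shape, k):
    """Count fillings weakly increasing along rows, strictly increasing down columns."""
    cells = [(i, j) for i, r in enumerate(shape) for j in range(r)]
    count = 0
    for vals in product(range(1, k + 1), repeat=len(cells)):
        t = dict(zip(cells, vals))
        if all((j == 0 or t[(i, j - 1)] <= t[(i, j)]) and
               (i == 0 or t[(i - 1, j)] < t[(i, j)])
               for (i, j) in cells):
            count += 1
    return count

print(ssyt_count_hook([2, 1], 3), ssyt_count_brute([2, 1], 3))  # both 8
```

The value 8 for λ = (2, 1) and k = 3 is the dimension of the adjoint representation of SL₃(ℂ), matching the representation-theoretic reading above.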
We may refine this by taking the principal specialization of the Schur function in the variables 1 , t , t 2 , t 3 , … {\displaystyle 1,t,t^{2}\!,t^{3}\!,\ldots } :
where n ( λ ) = ∑ i ( i − 1 ) λ i = ∑ i ( λ i ′ 2 ) {\displaystyle n(\lambda )=\sum _{i}(i{-}1)\lambda _{i}=\sum _{i}{\tbinom {\lambda _{i}'}{2}}} for the conjugate partition λ ′ {\displaystyle \lambda '} .
There is a generalization of this formula for skew shapes, [ 23 ]
where the sum is taken over excited diagrams of shape λ {\displaystyle \lambda } and boxes distributed according to μ {\displaystyle \mu } .
A variation on the same theme is given by Okounkov and Olshanski [ 24 ] of the form
where s μ ∗ {\displaystyle s_{\mu }^{*}} is the so-called shifted Schur function s μ ∗ ( x 1 , … , x n ) = det [ ( x i + n − 1 ) ! / ( μ j + n − j ) ! ] det [ ( x i + n − i ) ! / ( n − j ) ! ] {\displaystyle s_{\mu }^{*}(x_{1},\dots ,x_{n})={\frac {\det[(x_{i}+n-1)!/(\mu _{j}+n-j)!]}{\det[(x_{i}+n-i)!/(n-j)!]}}} .
Young diagrams can be considered as finite order ideals in the poset N × N {\displaystyle \mathbb {N} \times \mathbb {N} } , and standard Young tableaux are their linear extensions . Robert Proctor has given a generalization of the hook length formula to count linear extensions of a larger class of posets generalizing both trees and skew diagrams. [ 25 ] [ 26 ] | https://en.wikipedia.org/wiki/Hook_length_formula |
Hooke's atom , also known as harmonium or hookium , refers to an artificial helium -like atom where the Coulombic electron-nucleus interaction potential is
replaced by a harmonic potential . [ 1 ] [ 2 ] This system is of significance as it is, for certain values of the force constant defining the harmonic containment, an exactly solvable [ 3 ] ground-state many-electron problem that explicitly includes electron correlation . As such it can provide insight into quantum correlation (albeit in the presence of a non-physical nuclear potential) and can act as a test system for judging the accuracy of approximate quantum chemical methods for solving the Schrödinger equation . [ 4 ] [ 5 ] The name "Hooke's atom" arises because the harmonic potential used to describe the electron-nucleus interaction is a consequence of Hooke's law .
Employing atomic units , the Hamiltonian defining the Hooke's atom is
As written, the first two terms are the kinetic energy operators of the two electrons, the third term is the harmonic electron-nucleus potential, and the final term the electron-electron interaction potential. The non-relativistic Hamiltonian of the helium atom differs only in the replacement:
The equation to be solved is the two electron Schrödinger equation:
For arbitrary values of the force constant, k , the Schrödinger equation does not have an analytic solution. However, for a countably infinite number of values, such as k =¼ , simple closed form solutions can be derived. [ 5 ] Given the artificial nature of the system this restriction does not hinder the usefulness of the solution.
To solve, the system is first transformed from the Cartesian electronic coordinates, ( r 1 , r 2 ) , to the center of mass coordinates, ( R , u ) , defined as
Under this transformation, the Hamiltonian becomes separable – that is, the | r 1 - r 2 | term coupling the two electrons is removed (and not replaced by some other form), allowing the general separation of variables technique to be applied to seek a solution for the wave function in the form Ψ ( r 1 , r 2 ) = χ ( R ) Φ ( u ) {\displaystyle \Psi (\mathbf {r} _{1},\mathbf {r} _{2})=\chi (\mathbf {R} )\Phi (\mathbf {u} )} . The original Schrödinger equation is then replaced by:
The first equation for χ ( R ) {\displaystyle \chi (\mathbf {R} )} is the Schrödinger equation for an isotropic quantum harmonic oscillator with ground-state energy E R = ( 3 / 2 ) k E h {\displaystyle E_{\mathbf {R} }=(3/2){\sqrt {k}}E_{\mathrm {h} }} and (unnormalized) wave function
Asymptotically, the second equation again behaves as a harmonic oscillator of the form exp ( − ( k / 4 ) u 2 ) {\displaystyle \exp(-({\sqrt {k}}/4)u^{2})\,} and the rotationally invariant ground state can be expressed, in general, as Φ ( u ) = f ( u ) exp ( − ( k / 4 ) u 2 ) {\displaystyle \Phi (\mathbf {u} )=f(u)\exp(-({\sqrt {k}}/4)u^{2})\,} for some function f ( u ) {\displaystyle f(u)\,} . It had long been noted that f ( u ) is very well approximated by a linear function in u . [ 2 ] Thirty years after the proposal of the model an exact solution was discovered for k =¼ , [ 3 ] and it was seen that f ( u )=1+ u /2 . It was later shown that there are many values of k which lead to an exact solution for the ground state, [ 5 ] as will be shown in the following.
Decomposing Φ ( u ) = R l ( u ) Y l m {\displaystyle \Phi (\mathbf {u} )=R_{l}(u)Y_{lm}} and expressing the Laplacian in spherical coordinates ,
one further decomposes the radial wave function as R l ( u ) = S l ( u ) / u {\displaystyle R_{l}(u)=S_{l}(u)/u\,} which removes the first derivative to yield
The asymptotic behavior S l ( u ) ∼ e − k 4 u 2 {\displaystyle S_{l}(u)\sim e^{-{\frac {\sqrt {k}}{4}}u^{2}}\,} encourages a solution of the form
The differential equation satisfied by T l ( u ) {\displaystyle T_{l}(u)\,} is
This equation lends itself to a solution by way of the Frobenius method . That is, T l ( u ) {\displaystyle T_{l}(u)\,} is expressed as
for some m {\displaystyle m\,} and { a k } k = 0 k = ∞ {\displaystyle \{a_{k}\}_{k=0}^{k=\infty }\,} which satisfy:
The two solutions to the indicial equation are m = l + 1 {\displaystyle m=l+1} and m = − l {\displaystyle m=-l} of which the former is taken as it yields the regular (bounded, normalizable ) wave function. For a simple solution to exist, the infinite series must terminate, and it is here that particular values of k are exploited for an exact closed-form solution. Terminating the polynomial at any particular order can be accomplished with different values of k defining the Hamiltonian. As such there exists an infinite number of systems, differing only in the strength of the harmonic containment, with exact ground-state solutions. Most simply, to impose a k = 0 for k ≥ 2 , two conditions must be satisfied:
These directly force a 2 = 0 and a 3 = 0 respectively, and as a consequence of the three-term recursion, all higher coefficients also vanish. Solving for k {\displaystyle {\sqrt {k}}\,} and E l {\displaystyle E_{l}\,} yields
and the radial wave function
Transforming back to R l ( u ) {\displaystyle R_{l}(u)\,}
the ground-state (with l = 0 {\displaystyle l=0\,} and energy 5 / 4 E h {\displaystyle 5/4E_{\mathrm {h} }\,} ) is finally
Combining, normalizing, and transforming back to the original coordinates yields the ground state wave function:
The corresponding ground-state total energy is then E = E R + E u = 3 4 + 5 4 = 2 E h {\displaystyle E=E_{R}+E_{u}={\frac {3}{4}}+{\frac {5}{4}}=2E_{\mathrm {h} }} .
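As a numerical sanity check of the quoted k = 1/4 solution, one can verify by finite differences that S(u) = u(1 + u/2)e^(−u²/8) satisfies the l = 0 radial equation −S″ + (u²/16 + 1/u)S = (5/4)S in atomic units (the grid parameters below are arbitrary choices):

```python
import math

# Finite-difference residual of the l = 0 radial equation for k = 1/4:
#   -S''(u) + (u^2/16 + 1/u) S(u) = (5/4) S(u),  S(u) = u (1 + u/2) exp(-u^2/8)
h = 0.002
grid = [0.5 + i * h for i in range(2001)]            # avoid u = 0 (1/u pole)
S = [u * (1 + u / 2) * math.exp(-u * u / 8) for u in grid]

max_residual = 0.0
for i in range(1, len(grid) - 1):
    s_pp = (S[i + 1] - 2 * S[i] + S[i - 1]) / h ** 2  # central 2nd difference
    u = grid[i]
    residual = -s_pp + (u * u / 16 + 1 / u - 5 / 4) * S[i]
    max_residual = max(max_residual, abs(residual))

print(max_residual)   # ≈ 0 up to O(h^2) discretization error
```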
The exact ground state electronic density of the Hooke atom for the special case k = 1 / 4 {\displaystyle k=1/4} is [ 4 ]
From this we see that the radial derivative of the density vanishes at the nucleus. This is in stark contrast to the real (non-relativistic) helium atom where the density displays a cusp at the nucleus as a result of the unbounded Coulomb potential. | https://en.wikipedia.org/wiki/Hooke's_atom |
In the Hooker reaction (1936), the alkyl side chain of certain naphthoquinones (a phenomenon first observed in the compound lapachol ) is shortened by one methylene unit, which is lost as carbon dioxide, in each potassium permanganate oxidation. [ 1 ] [ 2 ] | https://en.wikipedia.org/wiki/Hooker_reaction |
In mathematics , Hooley's delta function ( Δ ( n ) {\displaystyle \Delta (n)} ), also called the Erdős–Hooley delta function , is defined as the maximum number of divisors of n {\displaystyle n} in the interval [ u , e u ] {\displaystyle [u,eu]} over all u {\displaystyle u} , where e {\displaystyle e} is Euler's number . The first few terms of this sequence are
The sequence was first introduced by Paul Erdős in 1974, [ 1 ] then studied by Christopher Hooley in 1979. [ 2 ]
In 2023, Dimitris Koukoulopoulos and Terence Tao proved that the sum of the first n {\displaystyle n} terms satisfies ∑ k = 1 n Δ ( k ) ≪ n ( log log n ) 11 / 4 {\displaystyle \textstyle \sum _{k=1}^{n}\Delta (k)\ll n(\log \log n)^{11/4}} , for n ≥ 100 {\displaystyle n\geq 100} . [ 3 ] In particular, the average order of Δ ( n ) {\displaystyle \Delta (n)} is O ( ( log log n ) 11 / 4 ) {\displaystyle O((\log \log n)^{11/4})} . [ 4 ]
Later in 2023 Kevin Ford , Koukoulopoulos , and Tao proved the lower bound ∑ k = 1 n Δ ( k ) ≫ n ( log log n ) 1 + η − ϵ {\displaystyle \textstyle \sum _{k=1}^{n}\Delta (k)\gg n(\log \log n)^{1+\eta -\epsilon }} , where η = 0.3533227 … {\displaystyle \eta =0.3533227\ldots } , fixed ϵ {\displaystyle \epsilon } , and n ≥ 100 {\displaystyle n\geq 100} . [ 5 ]
This function measures the tendency of divisors of a number to cluster.
The growth of this sequence is limited by Δ ( m n ) ≤ Δ ( n ) d ( m ) {\displaystyle \Delta (mn)\leq \Delta (n)d(m)} where d ( n ) {\displaystyle d(n)} is the number of divisors of n {\displaystyle n} . [ 6 ] | https://en.wikipedia.org/wiki/Hooley's_delta_function |
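A brute-force sketch (the function names and search ranges are our own) that makes the definition and the growth bound above concrete:

```python
import math

def divisors(n):
    return [d for d in range(1, n + 1) if n % d == 0]

def hooley_delta(n):
    """Delta(n): the maximum number of divisors of n lying in a window [u, e*u].
    The maximum is attained when u is itself a divisor, so it suffices to
    slide the window's left edge over the divisor list."""
    divs = divisors(n)
    return max(sum(1 for x in divs if d <= x <= math.e * d) for d in divs)

# Spot-check the bound Delta(mn) <= Delta(n) * d(m) on small cases:
for m in range(1, 13):
    for n in range(1, 13):
        assert hooley_delta(m * n) <= hooley_delta(n) * len(divisors(m))
print(hooley_delta(2), hooley_delta(12))   # 2 3
```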
In mechanics , a cylinder stress is a stress distribution with rotational symmetry; that is, which remains unchanged if the stressed object is rotated about some fixed axis.
Cylinder stress patterns include:
These three principal stresses (hoop, longitudinal, and radial) can be calculated analytically using a mutually perpendicular tri-axial stress system. [ 1 ]
The classical example (and namesake) of hoop stress is the tension applied to the iron bands, or hoops, of a wooden barrel . In a straight, closed pipe , any force applied to the cylindrical pipe wall by a pressure differential will ultimately give rise to hoop stresses. Similarly, if this pipe has flat end caps, any force applied to them by static pressure will induce a perpendicular axial stress on the same pipe wall. Thin sections often have negligibly small radial stress , but accurate models of thicker-walled cylindrical shells require such stresses to be considered.
In thick-walled pressure vessels, construction techniques allowing for favorable initial stress patterns can be utilized. These compressive stresses at the inner surface reduce the overall hoop stress in pressurized cylinders. Cylindrical vessels of this nature are generally constructed from concentric cylinders shrunk over (or expanded into) one another, i.e., built-up shrink-fit cylinders, but the same favorable stresses can also be introduced in a single thick cylinder through autofrettage . [ 2 ]
The hoop stress is the force over area exerted circumferentially (perpendicular to the axis and the radius of the object) in both directions on every particle in the cylinder wall. It can be described as:
where:
An alternative to hoop stress in describing circumferential stress is wall stress or wall tension ( T ), which usually is defined as the total circumferential force exerted along the entire radial thickness: [ 3 ]
Along with axial stress and radial stress , circumferential stress is a component of the stress tensor in cylindrical coordinates .
It is usually useful to decompose any force applied to an object with rotational symmetry into components parallel to the cylindrical coordinates r , z , and θ . These components of force induce corresponding stresses: radial stress, axial stress, and hoop stress, respectively.
For the thin-walled assumption to be valid, the vessel must have a wall thickness of no more than about one-tenth (often cited as Diameter / t > 20) of its radius. [ 4 ] This allows for treating the wall as a surface, and subsequently using the Young–Laplace equation for estimating the hoop stress created by an internal pressure on a thin-walled cylindrical pressure vessel:
where
The hoop stress equation for thin shells is also approximately valid for spherical vessels, including plant cells and bacteria in which the internal turgor pressure may reach several atmospheres. In practical engineering applications for cylinders (pipes and tubes), hoop stress is often re-arranged for pressure, and is called Barlow's formula .
Inch-pound-second system (IPS) units for P are pounds-force per square inch (psi). Units for t and d are inches (in).
SI units for P are pascals (Pa), while t and d =2 r are in meters (m).
When the vessel has closed ends, the internal pressure acts on them to develop a force along the axis of the cylinder. This is known as the axial stress and is usually less than the hoop stress.
Though this may be approximated to
There is also a radial stress σ r {\displaystyle \sigma _{r}\ } that is developed perpendicular to the surface and may be estimated in thin walled cylinders as:
In the thin-walled assumption the ratio r t {\displaystyle {\dfrac {r}{t}}\ } is large, so in most cases this component is considered negligible compared to the hoop and axial stresses. [ 5 ]
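A minimal numeric sketch of the thin-wall estimates (hoop stress σθ = Pr/t from the Young–Laplace/Barlow relation, axial stress σz = Pr/2t; the vessel figures below are invented):

```python
# Thin-walled cylinder estimates (a sketch; the vessel figures are invented).
P = 2.0e6    # internal pressure differential, Pa
r = 0.5      # mean radius, m
t = 0.01     # wall thickness, m  ->  r/t = 50, comfortably "thin-walled"

sigma_hoop = P * r / t           # hoop stress, Pa
sigma_axial = P * r / (2 * t)    # axial stress, Pa: half the hoop stress
# Radial stress is of order P itself, negligible next to P*r/t when r/t >> 1:
print(sigma_hoop / 1e6, sigma_axial / 1e6)   # ≈ 100 MPa and 50 MPa
```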
When the cylinder to be studied has a radius / thickness {\displaystyle {\text{radius}}/{\text{thickness}}} ratio of less than 10 (often cited as diameter / thickness < 20 {\displaystyle {\text{diameter}}/{\text{thickness}}<20} ) the thin-walled cylinder equations no longer hold since stresses vary significantly between inside and outside surfaces and shear stress through the cross section can no longer be neglected.
These stresses and strains can be calculated using the Lamé equations , [ 6 ] a set of equations developed by French mathematician Gabriel Lamé .
where:
For cylinder with boundary conditions:
the following constants are obtained:
Using these constants, the following equation for radial stress and hoop stress are obtained, respectively:
Note that when the results of these stresses are positive, it indicates tension, and negative values, compression.
For a solid cylinder: R i = 0 {\displaystyle R_{i}=0} then B = 0 {\displaystyle B=0} and a solid cylinder cannot have an internal pressure so A = P o {\displaystyle A=P_{o}} .
Since, for thick-walled cylinders, the ratio r t {\displaystyle {\dfrac {r}{t}}\ } is less than 10, the radial stress becomes non-negligible in proportion to the other stresses (i.e. P is no longer much, much less than Pr/t and Pr/2t), and so the thickness of the wall becomes a major consideration for design (Harvey, 1974, pp. 57).
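The Lamé solution described above can be sketched numerically, using the standard form σr(r) = A − B/r² and σθ(r) = A + B/r² with A and B fixed by the boundary conditions σr(Ri) = −Pi and σr(Ro) = −Po (the geometry and pressures below are invented):

```python
# Lame thick-cylinder stresses (a sketch; geometry and pressures are invented).
R_i, R_o = 0.10, 0.15      # inner and outer radii, m  (thick-walled: R_i/t = 2)
P_i, P_o = 50e6, 0.0       # internal and external pressure, Pa

# Constants fixed by sigma_r(R_i) = -P_i and sigma_r(R_o) = -P_o:
A = (P_i * R_i**2 - P_o * R_o**2) / (R_o**2 - R_i**2)
B = (P_i - P_o) * R_i**2 * R_o**2 / (R_o**2 - R_i**2)

def sigma_r(r):
    """Radial stress (negative = compression)."""
    return A - B / r**2

def sigma_theta(r):
    """Hoop stress (positive = tension)."""
    return A + B / r**2

print(sigma_r(R_i) / 1e6, sigma_r(R_o) / 1e6)   # ~ -50 and ~0 MPa at the walls
print(sigma_theta(R_i) / 1e6)                    # ~130 MPa: hoop stress peaks at the bore
```

The printed values confirm the sign convention in the text (negative radial stress = compression at the pressurized wall) and that the tensile hoop stress is greatest at the inner surface.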
In pressure vessel theory, any given element of the wall is evaluated in a tri-axial stress system, with the three principal stresses being hoop, longitudinal, and radial. Therefore, by definition, there exist no shear stresses on the transverse, tangential, or radial planes. [ 1 ]
In thick-walled cylinders, the maximum shear stress at any point is given by half of the algebraic difference between the maximum and minimum stresses, which is, therefore, equal to half the difference between the hoop and radial stresses. The shearing stress reaches a maximum at the inner surface, which is significant because it serves as a criterion for failure since it correlates well with actual rupture tests of thick cylinders (Harvey, 1974, p. 57).
Fracture is governed by the hoop stress in the absence of other external loads since it is the largest principal stress. Note that a hoop experiences the greatest stress at its inside (the outside and inside experience the same total strain, which is distributed over different circumferences); hence cracks in pipes should theoretically start from inside the pipe. This is why pipe inspections after earthquakes usually involve sending a camera inside a pipe to inspect for cracks.
Yielding is governed by an equivalent stress that includes the hoop stress together with the longitudinal or radial stress, when present.
In the pathology of vascular or gastrointestinal walls , the wall tension represents the muscular tension on the wall of the vessel. As a result of the Law of Laplace , if an aneurysm forms in a blood vessel wall, the radius of the vessel has increased. This means that the inward force on the vessel decreases, and therefore the aneurysm will continue to expand until it ruptures. A similar logic applies to the formation of diverticuli in the gut . [ 7 ]
The first theoretical analysis of the stress in cylinders was developed by the mid-19th century engineer William Fairbairn , assisted by his mathematical analyst Eaton Hodgkinson . Their first interest was in studying the design and failures of steam boilers . [ 9 ] Fairbairn realized that the hoop stress was twice the longitudinal stress, an important factor in the assembly of boiler shells from rolled sheets joined by riveting . Later work was applied to bridge-building and the invention of the box girder . In the Chepstow Railway Bridge , the cast iron pillars are strengthened by external bands of wrought iron . The vertical, longitudinal force is a compressive force, which cast iron is well able to resist. The hoop stress is tensile, and so wrought iron, a material with better tensile strength than cast iron, is added. | https://en.wikipedia.org/wiki/Hoop_stress |
Hooper's paradox is a falsidical paradox based on an optical illusion. A geometric shape with an area of 32 units is dissected into four parts, which afterwards get assembled into a rectangle with an area of only 30 units.
Upon close inspection one can notice that the triangles of the dissected shape are not identical to the triangles in the rectangle. The length of the shorter side at the right angle measures 2 units in the original shape but only 1.8 units in the rectangle. This means that the real triangles of the original shape overlap in the rectangle. The overlapping area is a parallelogram, the diagonals and sides of which can be computed via the Pythagorean theorem .
The area of this parallelogram can be determined using Heron's formula for triangles. This yields
for the halved circumference of the triangle (half of the parallelogram) and with that for the area of the parallelogram
So the overlapping area of the two triangles accounts exactly for the vanished area of 2 units.
William Hooper published the paradox in 1774 in his book Rational Recreations , calling it "The geometric money". The 1774 edition of his book still contained a false drawing, which was corrected in the 1782 edition. However, Hooper was not the first to publish this geometric fallacy, since Hooper's book was largely an adaptation of Edmé-Gilles Guyot 's Nouvelles récréations physiques et mathématiques , which had been published in France in 1769. The description in this book contains the same false drawing as in Hooper's book, but it was corrected in a later edition as well. | https://en.wikipedia.org/wiki/Hooper's_paradox |
The Hoopes process is a metallurgical process, used to obtain aluminium metal of very high purity (about 99.99% pure). The process was patented by William Hoopes , a chemist of the Aluminum Company of America (ALCOA), in 1925. [ 1 ]
It is a method used to obtain aluminium of very high purity. The metal obtained in the Hall–Héroult process is about 99.5% pure, and for most purposes it is taken as pure metal.
However, further purification of aluminium can be carried out by the Hoopes process. This is an electrolytic process .
The cell used in this process consists of an iron tank lined with carbon at the bottom.
A molten alloy of copper , crude aluminium and silicon is used as the anode. It forms the lowermost layer in the cell.
The middle layer consists of molten mixture of fluorides of sodium , aluminium and barium ( cryolite + BaF 2 ).
The uppermost layer consists of molten aluminium.
A set of graphite rods dipped in molten aluminium serve as the cathode.
During electrolysis , Al 3+ ions from the middle layer migrate to the upper layer, where they are reduced to aluminium by gaining 3 electrons .
Equal numbers of Al 3+ ions are produced in the lower layer. These ions migrate to the middle layer. Pure aluminium is tapped off from time to time. The Hoopes process gives about 99.99% pure aluminium. | https://en.wikipedia.org/wiki/Hoopes_process |
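The article gives no operating figures, but the stoichiometry of the cathode reaction (Al³⁺ + 3e⁻ → Al) fixes the metal yield per unit charge via Faraday's law; a sketch with invented current and duration:

```python
# Faraday's-law yield for the cathode reduction Al3+ + 3 e- -> Al
# (a sketch; the current and duration are invented operating figures).
FARADAY = 96485.0    # C per mol of electrons
M_AL = 26.98         # molar mass of aluminium, g/mol
Z = 3                # electrons gained per Al3+ ion

def aluminium_mass_g(current_a, seconds):
    """Mass of aluminium deposited for a given current and time."""
    moles_al = current_a * seconds / (Z * FARADAY)
    return moles_al * M_AL

print(round(aluminium_mass_g(1000, 3600), 1))   # 1000 A for 1 h -> ~335.6 g
```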
In telecommunications , a hop is a portion of a signal's journey from source to receiver. Examples include:
In computer networks , a hop is the step from one network segment to the next.
This article incorporates public domain material from Federal Standard 1037C . General Services Administration . Archived from the original on 2022-01-22. (in support of MIL-STD-188 ). | https://en.wikipedia.org/wiki/Hop_(telecommunications) |
Hopanoids are a diverse subclass of triterpenoids with the same hydrocarbon skeleton as the compound hopane . This group of pentacyclic molecules therefore refers to simple hopenes, hopanols and hopanes, but also to extensively functionalized derivatives such as bacteriohopanepolyols (BHPs) and hopanoids covalently attached to lipid A . [ 1 ] [ 2 ]
The first known hopanoid, hydroxyhopanone, was isolated by two chemists at The National Gallery, London working on the chemistry of dammar gum , a natural resin used as a varnish for paintings. [ 3 ] While hopanoids are often assumed to be made only in bacteria, their name actually comes from the abundance of hopanoid compounds in the resin of plants from the genus Hopea . In turn, this genus is named after John Hope , the first Regius Keeper of the Royal Botanic Garden, Edinburgh .
Since their initial discovery in an angiosperm , hopanoids have been found in plasma membranes of bacteria , lichens , bryophytes , ferns , tropical trees and fungi . [ 4 ] Hopanoids have stable polycyclic structures that are well-preserved in petroleum reservoirs , rocks and sediment, allowing the diagenetic products of these molecules to be interpreted as biomarkers for the presence of specific microbes and potentially for chemical or physical conditions at the time of deposition . [ 5 ] Hopanoids have not been detected in archaea . [ 6 ] [ 7 ]
About 10% of sequenced bacterial genomes have a putative shc gene encoding a squalene-hopene cyclase and can presumably make hopanoids, which have been shown to play diverse roles in the plasma membrane and may allow some organisms to adapt in extreme environments. [ 8 ] [ 9 ]
Since hopanoids modify plasma membrane properties in bacteria, they are frequently compared to sterols (e.g., cholesterol ), which modulate membrane fluidity and serve other functions in eukaryotes . [ 10 ] Although hopanoids do not rescue sterol deficiency, they are thought to increase membrane rigidity and decrease permeability. [ 9 ] [ 11 ] [ 12 ] Also, gammaproteobacteria and eukaryotic organisms such as lichens and bryophytes have been shown to produce both sterols and hopanoids, suggesting these lipids may have other distinct functions. [ 4 ] [ 13 ] Notably, the way hopanoids pack into the plasma membrane can change depending on what functional groups are attached. The hopanoid bacteriohopanetetrol assumes a transverse orientation in lipid bilayers , but diploptene localizes between the inner and outer leaflet, presumably thickening the membrane to decrease permeability. [ 14 ]
The hopanoid diplopterol orders membranes by interacting with lipid A , a common membrane lipid in bacteria, in ways similar to how cholesterol and sphingolipids interact in eukaryotic plasma membranes. [ 10 ] Diplopterol and cholesterol were demonstrated to promote condensation and inhibit gel-phase formation in both sphingomyelin monolayers and monolayers of glycan-modified lipid A. Furthermore, both diplopterol and cholesterol could rescue pH-dependent phase transitions in glycan-modified lipid A monolayers. [ 10 ] The role of hopanoids in membrane-mediated acid tolerance is further supported by observations of acid-inhibited growth and morphological abnormalities of the plasma membrane in hopanoid-deficient bacteria with mutant squalene-hopene cyclases. [ 15 ] [ 16 ]
Hopanoids are produced in several nitrogen-fixing bacteria . [ 9 ] In the actinomycete Frankia , hopanoids in the membranes of vesicles specialized for nitrogen fixation likely restrict the entry of oxygen by making the lipid bilayer more tight and compact. [ 17 ] In Bradyrhizobium , hopanoids chemically bonded to lipid A increase membrane stability and rigidity, enhancing stress tolerance and intracellular survival in Aeschynomene legumes . [ 18 ] In the cyanobacterium Nostoc punctiforme , large quantities of 2-methylhopanoids localize to the outer membranes of survival structures called akinetes . [ 19 ] In another example of stress tolerance, hopanoids in the aerial hyphae (spore bearing structures) of the prokaryotic soil bacteria Streptomyces are thought to minimize water loss across the membrane to the air. [ 20 ]
Since hopanoids are a C 30 terpenoid, biosynthesis begins with isopentenyl pyrophosphate (IPP) and dimethylallyl pyrophosphate (DMAP), which are combined to form longer chain isoprenoids . [ 2 ] Synthesis of these smaller precursors proceeds either via the mevalonate pathway or the methylerythritol-4-phosphate pathway depending on the bacterial species, although the latter tends to be more common. [ 21 ] DMAP condenses with one molecule of IPP to geranyl pyrophosphate , which in turn condenses with another IPP to generate farnesyl pyrophosphate (FPP) . [ 2 ] Squalene synthase , coded for by the gene sqs , then catalyzes the condensation of two FPP molecules to presqualene pyrophosphate (PSPP) before oxidizing NADPH to release squalene . [ 22 ] However, some hopanoid-producing bacteria lack squalene synthase and instead use the three enzymes HpnC, HpnD and HpnE, which are encoded in the hpn operon with many other hopanoid biosynthesis genes. [ 23 ] In this alternative yet seemingly more widespread squalene synthesis pathway, HpnD releases pyrophosphate as it condenses two molecules of FPP to PSPP, which HpnC converts to hydroxysqualene, consuming a water molecule and releasing another pyrophosphate. Then, hydroxysqualene is reduced to squalene in a dehydration reaction mediated by the FAD -dependent enzyme HpnE. [ 22 ]
Next, a squalene-hopene cyclase catalyzes an elaborate cyclization reaction, engaging squalene in an energetically favorable all-chair conformation before creating five cycles, six covalent bonds, and nine chiral centers on the molecule in a single step. [ 24 ] [ 25 ] This enzyme, coded for by the gene shc ( also called hpnF in some bacteria), has a double ⍺-barrel fold characteristic of terpenoid biosynthesis [ 26 ] and is present in the cell as a monotopic homodimer , meaning pairs of the cyclase are embedded in but do not span the plasma membrane. [ 24 ] [ 27 ] In vitro , this enzyme exhibits promiscuous substrate specificity, also cyclizing 2,3-oxidosqualene . [ 28 ]
Aromatic residues in the active site form several unfavorable carbocations on the substrate which are quenched by a rapid polycyclization. [ 25 ] In the last substep of the cyclization reaction, after electrons comprising the terminal alkene bond on the squalene have attacked the hopenyl carbocation to close the E ring, the C 22 carbocation may be quenched by mechanisms that lead to different hopanoid products. Nucleophilic attack of water will yield diplopterol, while deprotonation at an adjacent carbon will form one of several hopene isomers, often diploptene. [ 4 ]
After cyclization, hopanoids are frequently modified by hopanoid biosynthesis enzymes encoded by genes in the same operon as shc , hpn . [ 29 ] For instance, the radical SAM protein HpnH adds an adenosine group to diploptene, forming the extended C 35 hopanoid adenosylhopane, which can then be further functionalized by other hpn gene products. [ 30 ] HpnG catalyzes the removal of adenine from adenosylhopane to make ribosyl hopane, which reacts to form bacteriohopanetetrol (BHT) in a reaction mediated by an unknown enzyme. [ 31 ] Additional modifications may occurs as HpnO aminates the terminal hydroxyl on BHT, producing amino bacteriohopanetriol, or as the glycosyltransferase HpnI converts BHT to N-acetylglucosaminyl-BHT. [ 32 ] In sequence, the hopanoid biosynthesis associated protein HpnK mediates deacetylation to glucosaminyl-BHT, from which radical SAM protein HpnJ generates a cyclitol ether. [ 32 ]
Importantly, C 30 and C 35 hopanoids alike may be methylated at C 2 and C 3 positions by the radical SAM methyltransferases HpnP and HpnR, respectively. [ 33 ] [ 34 ] These two methylations are particularly geostable compared to side-chain modifications and have interested geobiologists for decades. [ 9 ]
In a biosynthetic pathway exclusive to some bacteria, the enzyme tetrahymanol synthase catalyzes the conversion of the hopanoid diploptene to the pentacyclic triterpenoid tetrahymanol . In eukaryotes like Tetrahymena , tetrahymanol is instead synthesized directly from squalene by a cyclase with no homology to the bacterial tetrahymanol synthase. [ 35 ]
Hopanoids have been estimated to be the most abundant natural products on Earth, remaining in the organic fraction of all sediments, independent of age, origin or nature. The total amount in the Earth was estimated as 10 18 gram (10 12 ton) in 1992. [ 36 ] Biomolecules like DNA and proteins are degraded during diagenesis , but polycyclic lipids persist in the environment over geologic timescales due to their fused, stable structures. [ 37 ] Although hopanoids and sterols are reduced to hopanes and steranes during deposition, these diagenetic products can still be useful biomarkers, or molecular fossils , for studying the coevolution of early life and Earth. [ 37 ] [ 38 ]
Currently, the oldest detected undisputed triterpenoid fossils are Mesoproterozoic okenanes , steranes, and methylhopanes from a 1.64 Ga (billion year) old basin in Australia. [ 39 ] However, molecular clock analyses estimate that the earliest sterols were likely produced around 2.3 Ga ago, around the same time as the Great Oxidation Event , with hopanoid synthesis arising even earlier. [ 40 ]
For several reasons, hopanoids and squalene-hopene cyclases have been hypothesized to be more ancient than sterols and oxidosqualene cyclases. First, diplopterol is synthesized when water quenches the C 22 carbocation formed during polycyclization. This indicates that hopanoids can be made without molecular oxygen and could have served as a sterol surrogate before the atmosphere accumulated oxygen, which reacts with squalene in a reaction catalyzed by squalene monooxygenase during sterol biosynthesis. [ 1 ] Furthermore, squalene binds to squalene-hopene cyclases in a low-energy, all-chair conformation while oxidosqualene is cyclized in a more strained, chair-boat-chair-boat conformation. [ 4 ] [ 41 ] Squalene-hopene cyclases also display more substrate promiscuity in that they cyclize oxidosqualene in vitro , causing some scientists to hypothesize that they are evolutionary predecessors to oxidosqualene cyclases. [ 41 ] Other scientists have proposed that squalene-hopene and oxidosqualene cyclases diverged from a common ancestor, a putative bacterial cyclase that would have made a tricyclic malabaricanoid or tetracyclic dammaranoid product. [ 1 ] [ 42 ]
2-methylhopanes, often quantified as the 2-α-methylhopane index, were first proposed as a biomarker for oxygenic photosynthesis by Roger Summons and colleagues following the discovery of the precursor lipids, 2-methylhopanols, in cyanobacterial cultures and mats. [ 43 ] The subsequent discovery of 2-α-methylhopanes supposedly from photosynthetic cyanobacteria in 2.7 Ga old shales from the Pilbara Craton of Western Australia suggested a 400 Ma (million year) gap between the evolution of oxygenic metabolism and the Great Oxidation Event. [ 44 ] However, these findings were later rejected due to potential contamination by modern hydrocarbons. [ 45 ]
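The index mentioned above is typically computed as a simple abundance ratio. A minimal sketch, assuming the common definition (2α-methylhopane abundance relative to the sum of 2α-methylhopane and its desmethyl counterpart, expressed as a percentage); the function name is illustrative:

```python
def methylhopane_index(two_alpha_methylhopane: float, desmethyl_hopane: float) -> float:
    """2-alpha-methylhopane index as a percentage (assumed common definition:
    2-methylhopane abundance over the sum of 2-methylhopane and desmethylhopane)."""
    total = two_alpha_methylhopane + desmethyl_hopane
    if total == 0:
        raise ValueError("no hopanes measured")
    return 100.0 * two_alpha_methylhopane / total

# Equal abundances of the two compound classes give an index of 50%.
print(methylhopane_index(1.0, 1.0))  # → 50.0
```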
Putative cyanobacterial presence on the basis of abundant 2-methylhopanes has been used to explain black shale deposition during Aptian and Cenomanian–Turonian Ocean Anoxic Events (OAEs) and the associated 15 N isotopic signatures indicative of N 2 -fixation. [ 46 ] In contrast, 2-α-methylhopane index values are relatively low across similar Frasnian and Famennian sediments corresponding to the Kellwasser event(s) , [ 47 ] though higher levels have been reported in later Lower Famennian sections. [ 48 ]
The status of 2-methylhopanoids as a cyanobacterial biomarker was challenged by a number of microbiological discoveries. Geobacter sulfurreducens was demonstrated to synthesize diverse hopanols, although not 2-methylhopanols, when grown under strictly anaerobic conditions. [ 8 ] Furthermore, the anoxygenic phototroph Rhodopseudomonas palustris was found to produce 2-methyl-BHPs only under anoxic conditions. [ 49 ] This latter discovery also led to the identification of the gene encoding the key methyltransferase HpnP. [ 33 ] hpnP was subsequently identified in an acidobacterium and numerous alphaproteobacteria , and phylogenetic analysis of the gene concluded that it originated in the alphaproteobacteria and was acquired by the cyanobacteria and acidobacteriota via horizontal gene transfer . [ 50 ]
Among cyanobacteria, hopanoid production is generally limited to terrestrial cyanobacteria. Among marine cyanobacteria, culture experiments conducted by Helen Talbot and colleagues concluded that only two marine species – Trichodesmium and Crocosphaera – produced bacteriohopanepolyols. [ 51 ] A later gene-based search for hpnP in available cyanobacterial genomes and Metagenome-Assembled Genomes (MAGs) drew similar conclusions, identifying the gene in ~30% of terrestrial and freshwater species, and in only one of the 739 marine cyanobacterial genomes and MAGs. [ 52 ] Additionally, Nostoc punctiforme produces the greatest amount of 2-methylhopanoids when differentiated into akinetes . These cold- and desiccation-resistant cell structures are dormant and therefore not photosynthetically active, further challenging the association between 2-methylhopanes and oxygenic photosynthesis. [ 19 ]
Research demonstrating that the nitrite-oxidizing bacterium (NOB) Nitrobacter vulgaris increases its production of 2-methylhopanoids 33-fold when supplemented with cobalamin has furthered a non-cyanobacterial explanation for the observed abundance of 2-methylhopanes associated with Cretaceous OAEs. Felix Elling and colleagues propose that overturning circulation brought ammonia- and cobalt-rich deep waters to the surface, promoting aerobic nitrite oxidation and cobalamin synthesis, respectively. This model also addresses the conspicuous lack of 2-methylhopanes associated with Mediterranean sapropel events and in modern Black Sea sediments: because both environments feature much less upwelling, 2-methylhopanoid-producing NOB such as N. vulgaris are outcompeted by NOB with higher nitrite affinity and by anammox bacteria. [ 52 ]
An environmental survey by Jessica Ricci and coauthors using metagenomes and clone libraries found significant correlation between plant-associated microbial communities and hpnP presence, based on which they propose that 2-methylhopanoids are a biomarker for sessile microbial communities high in osmolarity and low in oxygen and fixed nitrogen. [ 53 ]
3-methylhopanoids have historically been associated with aerobic methanotrophy based on culture experiments [ 54 ] and co-occurrence with aerobic methanotrophs in the environment. [ 55 ] As such, the presence of 3-methylhopanes, together with 13 C depletion, is considered a marker of ancient aerobic methanotrophy. [ 34 ] However, acetic acid bacteria have been known for decades to also produce 3-methylhopanoids. [ 54 ] Additionally, following their identification of hpnR , the gene responsible for methylating hopanoids at the C 3 position, Paula Welander and Roger Summons identified putative hpnR homologs in members of the alpha -, beta -, and gammaproteobacteria , actinomycetota , nitrospirota , candidate phylum NC10 , and an acidobacterium , as well as in three metagenomes. As such, Welander and Summons concluded that 3-methylhopanoids alone cannot constitute evidence of aerobic methanotrophy. [ 34 ]
The elegant mechanism behind the protonase activity of squalene-hopene cyclase was appreciated and adapted by chemical engineers at the University of Stuttgart, Germany. Active site engineering resulted in loss of the enzyme's ability to form hopanoids, but enabled Brønsted acid catalysis for the stereoselective cyclization of the monoterpenoids geraniol , epoxygeraniol, and citronellal . [ 56 ]
The application of hopanoids and hopanoid-producing nitrogen fixers to soil has been proposed and patented as a biofertilizer technique that increases environmental resistance of plant-associated microbial symbionts, including nitrogen-fixing bacteria that are essential for transforming atmospheric nitrogen to soluble forms available to crops. [ 57 ]
During later studies of interactions between diplopterol and lipid A in Methylobacterium extorquens , multidrug transport was found to be a hopanoid-dependent process. Squalene-hopene cyclase mutants derived from a wild type capable of multidrug efflux , a drug-resistance mechanism mediated by integral transport proteins, lost the ability to perform both multidrug transport and hopanoid synthesis. [ 12 ] Researchers indicate this could be due to direct regulation of transport proteins by hopanoids or indirectly by altering membrane ordering in a way that disrupts the transport system. [ 12 ] | https://en.wikipedia.org/wiki/Hopanoids |
In mathematics, Hopf conjecture may refer to one of several conjectural statements from differential geometry and topology attributed to Heinz Hopf .
The Hopf conjecture is an open problem in global Riemannian geometry. It goes back to questions of Heinz Hopf from 1931. A modern formulation is:
For surfaces , these statements follow from the Gauss–Bonnet theorem . For four-dimensional manifolds , the statements follow from the finiteness of the fundamental group , Poincaré duality , the Euler–Poincaré formula equating the Euler characteristic of a 4-manifold with b 0 − b 1 + b 2 − b 3 + b 4 {\displaystyle b_{0}-b_{1}+b_{2}-b_{3}+b_{4}} , and Synge's theorem , which assures that the orientation cover is simply connected, so that the Betti numbers b 1 = b 3 = 0 {\displaystyle b_{1}=b_{3}=0} vanish. For 4-manifolds, the statement also follows from the Chern–Gauss–Bonnet theorem , as noticed by John Milnor in 1955 (written down by Shiing-Shen Chern in 1955 [ 1 ] ). For manifolds of dimension 6 or higher the conjecture is open. An example of Robert Geroch showed that the Chern–Gauss–Bonnet integrand can become negative for d > 2 {\displaystyle d>2} . [ 2 ] The positive curvature case is, however, known to hold for hypersurfaces in R 2 d + 1 {\displaystyle \mathbb {R} ^{2d+1}} (Hopf) and for codimension-two surfaces embedded in R 2 d + 2 {\displaystyle \mathbb {R} ^{2d+2}} . [ 3 ] For sufficiently pinched positive curvature manifolds, the Hopf conjecture (in the positive curvature case) follows from the sphere theorem , a theorem which had also first been conjectured by Hopf. One line of attack is to look for manifolds with more symmetry. It is notable, for example, that all known manifolds of positive sectional curvature admit an isometric circle action. The corresponding vector field is called a Killing vector field . The conjecture (for the positive curvature case) has also been proved for manifolds of dimension 4 k + 2 {\displaystyle 4k+2} or 4 k + 4 {\displaystyle 4k+4} admitting an isometric action of a k -dimensional torus, and for manifolds M admitting an isometric action of a compact Lie group G with principal isotropy subgroup H and cohomogeneity k such that k − ( rank G − rank H ) ≤ 5 {\displaystyle k-(\operatorname {rank} G-\operatorname {rank} H)\leq 5} . Some references about manifolds with some symmetry are [ 4 ] and [ 5 ] .
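The four-dimensional argument in the positive-curvature case can be spelled out in one line (a sketch, assuming M is closed, connected and oriented, so that b₀ = b₄ = 1; Synge's theorem then gives simple connectedness, hence b₁ = 0 and, by Poincaré duality, b₃ = 0):

```latex
\chi(M) \;=\; b_0 - b_1 + b_2 - b_3 + b_4 \;=\; 1 - 0 + b_2 - 0 + 1 \;=\; 2 + b_2 \;\ge\; 2 \;>\; 0.
```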
On the history of the problem:
The first written explicit appearance of the conjecture is in the proceedings of the German Mathematical Society , [ 6 ] a paper based on talks Heinz Hopf gave in the spring of 1931 in Fribourg , Switzerland, and at Bad Elster in the fall of 1931. Marcel Berger discusses the conjecture in his book [ 7 ] and points to work of Hopf from the 1920s which was influenced by questions of this type. The conjectures are listed as problems 8 (positive curvature case) and 10 (negative curvature case) in "Yau's problems" of 1982. [ 8 ]
There are analogue conjectures if the curvature is allowed to become zero too. The statement should still be attributed to Hopf (for example in a talk given in 1953 in Italy). [ 9 ]
This version was stated as Question 1 in the paper [ 10 ] and later in a paper of Chern. [ 11 ]
An example for which the conjecture is confirmed is for the product M = M 1 × M 2 × ⋯ × M d {\displaystyle M=M_{1}\times M_{2}\times \cdots \times M_{d}} of 2-dimensional manifolds with curvature sign e k ∈ { − 1 , 0 , 1 } {\displaystyle e_{k}\in \{-1,0,1\}} . As the Euler characteristic satisfies χ ( M ) = ∏ k = 1 d χ ( M k ) {\displaystyle \chi (M)=\prod _{k=1}^{d}\chi (M_{k})} which has the sign ∏ k = 1 d e k {\displaystyle \prod _{k=1}^{d}e_{k}} , the sign conjecture is confirmed in that case (if e k > 0 {\displaystyle e_{k}>0} for all k, then χ ( M ) > 0 {\displaystyle \chi (M)>0} and if e k < 0 {\displaystyle e_{k}<0} for all k, then χ ( M ) > 0 {\displaystyle \chi (M)>0} for even d and χ ( M ) < 0 {\displaystyle \chi (M)<0} for odd d, and if one of the e k {\displaystyle e_{k}} is zero, then χ ( M ) = 0 {\displaystyle \chi (M)=0} ).
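The sign bookkeeping in this product example can be checked mechanically. A small sketch (the function name is illustrative; it relies only on the facts quoted above, namely that χ is multiplicative over products and that a closed surface with curvature sign e_k has Euler characteristic of the same sign):

```python
from itertools import product

def euler_char_sign(signs):
    """Sign of the Euler characteristic of M_1 x ... x M_d, where signs[k]
    in {-1, 0, 1} is the curvature sign of the closed surface M_k.
    By Gauss-Bonnet, sign(chi(M_k)) = sign(e_k), and chi is multiplicative."""
    result = 1
    for e in signs:
        result *= e
    return result

# Check the sign conjecture, (-1)^d * chi > 0, on all sign patterns for d = 3.
d = 3
for signs in product([-1, 0, 1], repeat=d):
    s = euler_char_sign(signs)
    if all(e < 0 for e in signs):
        assert (-1) ** d * s > 0   # negative curvature case
    if all(e > 0 for e in signs):
        assert s > 0               # positive curvature case
    if any(e == 0 for e in signs):
        assert s == 0              # a flat factor kills the Euler characteristic
```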
Hopf asked whether every continuous self-map of an oriented closed manifold of degree 1 is necessarily a homotopy equivalence. [ 12 ]
It is easy to see that any map f : M → M {\displaystyle f\colon M\to M} of degree 1 induces a surjection on π 1 {\displaystyle \pi _{1}} ; if not, then f {\displaystyle f} factors through a non-trivial covering space, contradicting the degree-1 assumption.
This implies that the conjecture holds for Hopfian groups , as for them one then gets that f ∗ {\displaystyle f_{*}} is an isomorphism on π 1 {\displaystyle \pi _{1}} .
An argument for the higher homotopy groups remains open; moreover, there are non-Hopfian groups.
Another famous question of Hopf is the Hopf product conjecture:
The conjecture was popularized in the book of Gromoll, Klingenberg and Meyer from 1968, [ 13 ] and was prominently displayed as Problem 1 in Yau's list of problems. [ 8 ] There, Shing-Tung Yau formulated an interesting new observation (which could be reformulated as a conjecture).
At present, the 4-sphere S 4 {\displaystyle \mathbb {S} ^{4}} and the complex projective plane C P 2 {\displaystyle \mathbb {CP} ^{2}} are the only simply connected 4-manifolds known to admit a metric of positive curvature. Wolfgang Ziller once conjectured that this might be the full list and that, in dimension 5, the only simply connected 5-manifold of positive curvature is the 5-sphere S 5 {\displaystyle \mathbb {S} ^{5}} . [ 14 ] Of course, solving the Hopf product conjecture would settle the Yau question. Conversely, the Ziller conjecture that S 4 {\displaystyle \mathbb {S} ^{4}} and C P 2 {\displaystyle \mathbb {CP} ^{2}} are the only simply connected positive curvature 4-manifolds would settle the Hopf product conjecture. Back to the case S 2 × S 2 {\displaystyle \mathbb {S} ^{2}\times \mathbb {S} ^{2}} : it is known from work of Jean-Pierre Bourguignon that in a neighborhood of the product metric there is no metric of positive curvature. [ 15 ] It is also known from work of Alan Weinstein that if a metric of positive curvature exists on S 2 × S 2 {\displaystyle \mathbb {S} ^{2}\times \mathbb {S} ^{2}} , then this Riemannian manifold cannot be embedded in R 6 {\displaystyle \mathbb {R} ^{6}} . [ 16 ] (It follows already from a result of Hopf that an embedding in R 5 {\displaystyle \mathbb {R} ^{5}} is not possible, as the manifold would then have to be a sphere.) General references for manifolds with non-negative sectional curvature giving many examples are [ 17 ] and [ 18 ] . A related conjecture is that
This would also imply that S 2 × S 2 {\displaystyle \mathbb {S} ^{2}\times \mathbb {S} ^{2}} admits no Riemannian metric with positive sectional curvature. Looking at the evidence and the work done so far, it appears that the Hopf question will most likely be answered by the statement "There is no metric of positive curvature on S 2 × S 2 {\displaystyle \mathbb {S} ^{2}\times \mathbb {S} ^{2}} ", because the theorems of Bourguignon (perturbation result near the product metric), Hopf (codimension 1) and Weinstein (codimension 2), as well as the sphere theorem excluding pinched positive curvature metrics, all point towards this outcome. The construction of a positive curvature metric on S 2 × S 2 {\displaystyle \mathbb {S} ^{2}\times \mathbb {S} ^{2}} would certainly be a surprise in global differential geometry, but it is not yet excluded that such a metric exists.
Finally, one can ask why one would be interested in such a special case as the Hopf product conjecture. Hopf himself was motivated by problems from physics. When Hopf started to work in the mid-1920s, the theory of relativity was only ten years old, and it had sparked a great deal of interest in differential geometry, especially in the global structure of 4-manifolds, as such manifolds appear in cosmology as models of the universe.
There is a conjecture which relates to the Hopf sign conjecture but which does not refer to Riemannian geometry at all. Aspherical manifolds are connected manifolds for which all higher homotopy groups vanish. The Euler characteristic should then satisfy the same condition that a negatively curved manifold is conjectured to satisfy in Riemannian geometry:
There cannot be a direct relation to the Riemannian case, as there are aspherical manifolds that are not homeomorphic to a smooth Riemannian manifold with negative sectional curvature.
This topological version of Hopf conjecture is due to William Thurston . Ruth Charney and Michael Davis conjectured that the same inequality holds for a non-positively curved piecewise Euclidean (PE) manifold.
There has been some confusion about the name "Hopf conjecture", since Eberhard Hopf , an unrelated mathematician and contemporary of Heinz Hopf, worked on topics like geodesic flows. ( Eberhard Hopf and Heinz Hopf were unrelated and might never have met, even though they were both students of Erhard Schmidt .) There is a theorem of Eberhard Hopf stating that if the 2-torus T 2 {\displaystyle \mathbb {T} ^{2}} has no conjugate points, then it must be flat (the Gauss curvature is zero everywhere). [ 19 ] The theorem of Eberhard Hopf generalized a theorem of Marston Morse and Gustav Hedlund (a PhD student of Morse) from a year earlier. [ 20 ] The problem of generalizing this to higher dimensions was for some time known as the Hopf conjecture too. In any case, this is now a theorem: a Riemannian metric without conjugate points on the n-dimensional torus is flat. [ 21 ] | https://en.wikipedia.org/wiki/Hopf_conjecture |
In mathematics , the Hopf lemma , named after Eberhard Hopf , states that if a continuous real-valued function in a domain in Euclidean space with sufficiently smooth boundary is harmonic in the interior and the value of the function at a point on the boundary is greater than the values at nearby points inside the domain, then the derivative of the function in the direction of the outward pointing normal is strictly positive. The lemma is an important tool in the proof of the maximum principle and in the theory of partial differential equations . The Hopf lemma has been generalized to describe the behavior of the solution to an elliptic problem as it approaches a point on the boundary where its maximum is attained.
In the special case of the Laplacian, the Hopf lemma had been discovered by Stanisław Zaremba in 1910. [ 1 ] In the more general setting for elliptic equations, it was found independently by Hopf and Olga Oleinik in 1952, although Oleinik's work is not as widely known as Hopf's in Western countries. [ 2 ] [ 3 ] There are also extensions which allow domains with corners. [ 4 ]
Let Ω be a bounded domain in R n with smooth boundary. Let f be a real-valued function continuous on the closure of Ω and harmonic on Ω. If x is a boundary point such that f ( x ) > f ( y ) for all y in Ω sufficiently close to x , then the (one-sided) directional derivative of f in the direction of the outward pointing normal to the boundary at x is strictly positive.
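As a quick numerical illustration of the statement (a toy check, not part of the theory): the function f(x, y) = x is harmonic on the unit disk and attains its maximum over the closed disk only at the boundary point (1, 0); its one-sided outward normal derivative there is strictly positive, as the lemma asserts.

```python
# f(x, y) = x is harmonic (its Laplacian is zero) on the unit disk and attains
# its maximum over the closed disk only at the boundary point (1, 0).
def f(x, y):
    return x

# One-sided difference quotient along the outward normal (+x direction) at (1, 0).
h = 1e-6
normal_derivative = (f(1.0, 0.0) - f(1.0 - h, 0.0)) / h
assert normal_derivative > 0  # strictly positive, as the Hopf lemma predicts
print(round(normal_derivative, 6))  # → 1.0
```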
Subtracting a constant, it can be assumed that f ( x ) = 0 and f is strictly negative at interior points near x . Since the boundary of Ω is smooth there is a small ball contained in Ω the closure of which is tangent to the boundary at x and intersects the boundary only at x . It is then sufficient to check the result with Ω replaced by this ball. Scaling and translating, it is enough to check the result for the unit ball in R n , assuming f ( x ) is zero for some unit vector x and f ( y ) < 0 if | y | < 1.
By Harnack's inequality applied to − f
for r < 1. Hence
Hence the directional derivative at x is bounded below by the strictly positive constant on the right hand side.
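The Harnack step can be written out explicitly (a sketch for the unit ball in Rⁿ, applying the Harnack inequality to the non-negative harmonic function −f, with x the boundary point where f(x) = 0):

```latex
-f(rx) \;\ge\; \frac{1-r}{(1+r)^{\,n-1}}\,\bigl(-f(0)\bigr), \qquad 0 \le r < 1,
\qquad\text{hence}\qquad
\frac{f(x)-f(rx)}{1-r} \;=\; \frac{-f(rx)}{1-r}
\;\ge\; \frac{-f(0)}{(1+r)^{\,n-1}} \;\ge\; \frac{-f(0)}{2^{\,n-1}} \;>\; 0.
```

Letting r → 1 bounds the one-sided outward normal derivative at x below by −f(0)/2ⁿ⁻¹ > 0.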
Consider a second order, uniformly elliptic operator of the form
In particular, the smallest eigenvalue of
the real symmetric matrix a i j ( x ) {\displaystyle a_{ij}(x)} is bounded from below by a positive constant that is independent of x {\displaystyle x} .
Here Ω {\displaystyle \Omega } is an open, bounded subset of R n {\displaystyle \mathbb {R} ^{n}} and one assumes that c ≤ 0 {\displaystyle c\leq 0} .
The Weak Maximum Principle states that a solution of the equation L u = 0 {\displaystyle Lu=0} in Ω {\displaystyle \Omega } attains its maximum value on the closure Ω ¯ {\displaystyle {\overline {\Omega }}} at some point on the boundary ∂ Ω {\displaystyle \partial \Omega } . Let x 0 ∈ ∂ Ω {\displaystyle x_{0}\in \partial \Omega } be such a point; then necessarily
where ∂ / ∂ ν {\displaystyle \partial /\partial \nu } denotes the outer normal derivative . This is simply a consequence of the fact that u ( x ) {\displaystyle u(x)} must be nondecreasing as x {\displaystyle x} approaches x 0 {\displaystyle x_{0}} . The Hopf Lemma strengthens this observation by proving that, under mild assumptions on Ω {\displaystyle \Omega } and L {\displaystyle L} , we have
A precise statement of the Lemma is as follows. Suppose that Ω {\displaystyle \Omega } is a bounded region in R 2 {\displaystyle \mathbb {R} ^{2}} and let L {\displaystyle L} be the operator described above. Let u {\displaystyle u} be of class C 2 ( Ω ) ∩ C 1 ( Ω ¯ ) {\displaystyle C^{2}(\Omega )\cap C^{1}({\overline {\Omega }})} and satisfy the differential inequality
Let x 0 ∈ ∂ Ω {\displaystyle x_{0}\in \partial \Omega } be given so that 0 ≤ u ( x 0 ) = max x ∈ Ω ¯ u ( x ) {\displaystyle 0\leq u(x_{0})=\max _{x\in {\overline {\Omega }}}u(x)} .
If (i) Ω {\displaystyle \Omega } is C 2 {\displaystyle C^{2}} at x 0 {\displaystyle x_{0}} , and (ii) c ≤ 0 {\displaystyle c\leq 0} , then either u {\displaystyle u} is a constant, or ∂ u ∂ ν ( x 0 ) > 0 {\displaystyle {\frac {\partial u}{\partial \nu }}(x_{0})>0} , where ν {\displaystyle \nu } is the outward pointing unit normal, as above.
The above result can be generalized in several respects. The regularity assumption on Ω {\displaystyle \Omega } can be replaced with an interior ball condition: the lemma holds provided that there exists an open ball B ⊂ Ω {\displaystyle B\subset \Omega } with x 0 ∈ ∂ B {\displaystyle x_{0}\in \partial B} . It is also possible to consider functions c {\displaystyle c} that take positive values, provided that u ( x 0 ) = 0 {\displaystyle u(x_{0})=0} . For the proof and other discussion, see the references below. | https://en.wikipedia.org/wiki/Hopf_lemma |
The Hopf maximum principle is a maximum principle in the theory of second order elliptic partial differential equations and has been described as the "classic and bedrock result" of that theory. Generalizing the maximum principle for harmonic functions which was already known to Gauss in 1839, Eberhard Hopf proved in 1927 that if a function satisfies a second order partial differential inequality of a certain kind in a domain of R n and attains a maximum in the domain then the function is constant. The simple idea behind Hopf's proof, the comparison technique he introduced for this purpose, has led to an enormous range of important applications and generalizations.
Let u = u ( x ), x = ( x 1 , ..., x n ) be a C 2 function which satisfies the differential inequality
in an open domain (connected open subset of R n ) Ω, where the symmetric matrix a ij = a ji ( x ) is locally uniformly positive definite in Ω and the coefficients a ij , b i are locally bounded . If u takes a maximum value M in Ω then u ≡ M .
The coefficients a ij , b i are just functions. If they are known to be continuous then it is sufficient to demand pointwise positive definiteness of a ij on the domain.
It is usually thought that the Hopf maximum principle applies only to linear differential operators L . In particular, this is the point of view taken by Courant and Hilbert's Methoden der mathematischen Physik . In the later sections of his original paper, however, Hopf considered a more general situation which permits certain nonlinear operators L and, in some cases, leads to uniqueness statements in the Dirichlet problem for the mean curvature operator and the Monge–Ampère equation .
If the domain Ω {\displaystyle \Omega } has the interior sphere property (for example, if Ω {\displaystyle \Omega } has a smooth boundary), slightly more can be said. If in addition to the assumptions above, u ∈ C 1 ( Ω ¯ ) {\displaystyle u\in C^{1}({\overline {\Omega }})} and u takes a maximum value M at a point x 0 in ∂ Ω {\displaystyle \partial \Omega } , then for any outward direction ν at x 0 , there holds ∂ u ∂ ν ( x 0 ) > 0 {\displaystyle {\frac {\partial u}{\partial \nu }}(x_{0})>0} unless u ≡ M {\displaystyle u\equiv M} . [ 1 ] | https://en.wikipedia.org/wiki/Hopf_maximum_principle |
In quantum mechanics , the Hopfield dielectric is a model of a dielectric consisting of quantum harmonic oscillators interacting with the modes of the quantum electromagnetic field . The collective interaction of the charge polarization modes with the vacuum excitations (photons) leads to the perturbation of both the linear dispersion relation of photons and the constant dispersion of charge waves by the avoided crossing between the two dispersion lines of the polaritons . [ 1 ] As with the acoustic and the optical phonons , far from the resonance one branch is photon-like while the other is charge-wave-like. The model was developed by John Hopfield in 1958. [ 1 ]
The Hamiltonian of the quantized Lorentz dielectric consisting of N {\displaystyle N} harmonic oscillators interacting with the quantum electromagnetic field can be written in the dipole approximation as:
where
is the electric field operator acting at the position r A {\displaystyle r_{A}} .
Expressing it in terms of the creation and annihilation operators for the harmonic oscillators we get
Assuming the oscillators to be arranged on some kind of regular solid lattice and applying the polaritonic Fourier transform
and defining projections of the oscillator charge waves onto the electromagnetic field polarization directions
after dropping the longitudinal contributions not interacting with the electromagnetic field one may obtain the Hopfield Hamiltonian
Because the interaction does not mix polarizations, this can be transformed to the normal form with the eigenfrequencies of the two polaritonic branches:
with the eigenvalue equation
where
with
(vacuum photon dispersion) and
is the dimensionless coupling constant proportional to the density N / V {\displaystyle N/V} of the dielectric, with the Lorentz frequency ω {\displaystyle \omega } ( tight-binding charge wave dispersion).
Mathematically, the Hopfield dielectric for one mode of excitation is equivalent to the trojan wave packet in the harmonic approximation. The Hopfield model of the dielectric predicts the existence of eternally trapped frozen photons, similar to the Hawking radiation , inside the matter, with a density proportional to the strength of the matter-field coupling.
One may notice that, unlike in the vacuum of the electromagnetic field without matter, the expectation value of the average photon number ⟨ a λ k + a λ k ⟩ {\displaystyle \langle a_{\lambda k}^{+}a_{\lambda k}\rangle } is non-zero in the ground state of the polaritonic Hamiltonian, defined by C k ± | 0 ⟩ = 0 {\displaystyle C_{k\pm }|\mathbf {0} \rangle =0} , similarly to the Hawking radiation in the neighbourhood of a black hole arising from the Unruh–Davies effect . One may also readily notice that the lower eigenfrequency Ω − {\displaystyle \Omega _{-}} becomes imaginary when the coupling constant exceeds the critical value g > 1 {\displaystyle g>1} , which suggests that the Hopfield dielectric will undergo a superradiant phase transition . | https://en.wikipedia.org/wiki/Hopfield_dielectric |
The Hopkins Ultraviolet Telescope ( HUT ) was a space telescope designed to make spectroscopic observations in the far-ultraviolet region of the electromagnetic spectrum . It was flown into orbit on the Space Shuttle and operated from the Shuttle's payload bay on two occasions: in December 1990, as part of Shuttle mission STS-35 , and in March 1995, as part of mission STS-67 . [ 1 ]
HUT was designed and built by a team based at Johns Hopkins University , led by Arthur Davidsen. [ 2 ] [ 3 ] The telescope consisted of a 90 cm main mirror used to focus ultraviolet light onto a spectrograph situated at the prime focus . This instrument had a spectroscopic range of 82.5 to 185 nm and a spectral resolution of about 0.3 nm. [ 2 ] It weighed 789 kilograms (1736 pounds). [ 2 ]
HUT was used to observe a wide range of astrophysical sources, including supernova remnants , active galactic nuclei , cataclysmic variable stars, as well as various planets in the Solar System . [ 4 ] During the 1990 flight, HUT was used to make 106 observations of 77 astronomical targets. During the 1995 flight, 385 observations were made of 265 targets. [ 5 ]
HUT was co-mounted with WUPPE , the Ultraviolet Imaging Telescope (UIT), and BBXRT on the Astro-1 mission (1990), and with just WUPPE and UIT on Astro-2 (1995). [ 6 ]
As of January 2023, HUT is in storage at the Smithsonian National Air and Space Museum in Washington, D.C., in the United States. [ 7 ] | https://en.wikipedia.org/wiki/Hopkins_Ultraviolet_Telescope |
The Hopkins-Cole reaction , also known as the glyoxylic acid reaction , is a chemical test used for detecting the presence of tryptophan in proteins. [ 1 ] A protein solution is mixed with Hopkins Cole reagent, which consists of glyoxylic acid . Concentrated sulfuric acid is slowly added to form two layers. A purple ring appears between the two layers if the test is positive for tryptophan. [ 2 ] [ 3 ] Nitrites , chlorates , nitrates and excess chlorides prevent the reaction from occurring. [ 4 ]
The reaction was first reported by Frederick Gowland Hopkins and Sydney W. Cole in 1901, [ 5 ] as part of their work on the first isolation of tryptophan itself.
| https://en.wikipedia.org/wiki/Hopkins–Cole_reaction |
The Hopp–Woods hydrophilicity scale of amino acids is a method of ranking the amino acids in a protein according to their water solubility in order to search for surface locations on proteins, and especially those locations that tend to form strong interactions with other macromolecules such as proteins, DNA , and RNA . [ 1 ] [ 2 ]
Given the amino acid sequence of any protein, likely interaction sites can be identified by taking the moving average of six amino acid hydrophilicity values along the polypeptide chain , and looking for local peaks in the data plot .
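The profile described above can be computed with a simple sliding window. A minimal sketch; the per-residue hydrophilicity values below are the commonly tabulated Hopp–Woods assignments (treat them, and the helper's name, as assumptions rather than a quotation of the authors' published table):

```python
# Hopp-Woods hydrophilicity values per residue (commonly tabulated; assumed here).
HOPP_WOODS = {
    'R': 3.0, 'K': 3.0, 'D': 3.0, 'E': 3.0, 'S': 0.3, 'N': 0.2, 'Q': 0.2,
    'G': 0.0, 'P': 0.0, 'T': -0.4, 'A': -0.5, 'H': -0.5, 'C': -1.0,
    'M': -1.3, 'V': -1.5, 'I': -1.8, 'L': -1.8, 'Y': -2.3, 'F': -2.5, 'W': -3.4,
}

def hydrophilicity_profile(sequence, window=6):
    """Moving average of hydrophilicity over `window` consecutive residues.

    Local peaks in the returned profile flag likely surface-exposed
    (and potentially interaction-prone) stretches of the polypeptide chain.
    """
    values = [HOPP_WOODS[aa] for aa in sequence]
    return [sum(values[i:i + window]) / window
            for i in range(len(values) - window + 1)]

profile = hydrophilicity_profile("GIVEQCCTSICSLYQLENYCN")  # insulin A chain
peak_start = max(range(len(profile)), key=profile.__getitem__)  # most hydrophilic window
```

The window length of six residues follows the averaging length quoted above.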
In subsequent papers after their initial publication of the method, Hopp and Woods demonstrated that the data plots, or hydrophilicity profiles, contained much information about protein folding , and that the hydrophobic valleys of the profiles corresponded to internal structures of proteins such as beta-strands and alpha-helices . Furthermore, long hydrophobic valleys were shown to correspond quite closely to the membrane-spanning helices identified by the later-published Kyte and Doolittle hydropathic plotting method.
| https://en.wikipedia.org/wiki/Hopp–Woods_scale |
Hordein is a prolamin glycoprotein , present in barley and some other cereals , together with gliadin and other glycoproteins (such as glutelins ) coming under the general name of gluten . Hordeins are found in the endosperm where one of their functions is to act as a storage unit. [ 1 ]
Hordeins are less soluble than other proteins such as albumins and globulins . [ 1 ]
In terms of amino acid composition, hordeins are rich in proline and glutamine but poor in charged amino acids such as lysine. [ 1 ]
Some people are sensitive to hordein due to disorders such as celiac disease or gluten intolerance.
Along with gliadin (the prolamin gluten found in wheat), hordein is present in many foods and also may be found in beer. Hordein is usually the main problem for coeliacs wishing to drink beer.
Coeliacs are able to find specialist breads that are low in hordein, gliadin and other problematic glycoproteins, just as they can find gluten free beer which either uses ingredients that do not contain gluten, or otherwise has the amounts of gliadin or hordein present controlled to stated limits.
| https://en.wikipedia.org/wiki/Hordein |
The horizon is the apparent curve that separates the surface of a celestial body from its sky when viewed from the perspective of an observer on or near the surface of the relevant body. This curve divides all viewing directions based on whether it intersects the relevant body's surface or not.
The true horizon is a theoretical line, which can only be observed to any degree of accuracy when it lies along a relatively smooth surface such as that of Earth's oceans . At many locations this line is obscured by terrain , and on Earth it can also be obscured by life forms such as trees and by human constructs such as buildings. The resulting intersection of such obstructions with the sky is called the visible horizon . On Earth, when looking at a sea from a shore, the part of the sea closest to the horizon is called the offing . [ 1 ]
The true horizon surrounds the observer and it is typically assumed to be a circle, drawn on the surface of a perfectly spherical model of the relevant celestial body, i.e., a small circle of the local osculating sphere . With respect to Earth, the center of the true horizon is below the observer and below sea level . Its radius or horizontal distance from the observer varies slightly from day to day due to atmospheric refraction , which is greatly affected by weather conditions. Also, the higher the observer's eyes are from sea level, the farther away the horizon is from the observer. For instance, in standard atmospheric conditions , for an observer with eye level above sea level by 1.8 metres (6 ft), the horizon is at a distance of about 4.8 kilometres (3 mi). [ 2 ] When observed from very high standpoints, such as a space station , the horizon is much farther away and it encompasses a much larger area of Earth's surface. In this case, the horizon would no longer be a perfect circle, not even a plane curve such as an ellipse, especially when the observer is above the equator, as the Earth's surface can be better modeled as an oblate ellipsoid than as a sphere.
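The quoted distances follow from the elementary geometry of a sphere: for an eye height h above a sphere of radius R, the straight-line distance to the geometric horizon is d = √(2Rh + h²) ≈ √(2Rh) when h ≪ R. A minimal sketch (spherical Earth, no atmospheric refraction, which in standard conditions stretches the distance by several percent):

```python
import math

EARTH_RADIUS_M = 6_371_000.0  # mean Earth radius; spherical model is an assumption

def horizon_distance_m(eye_height_m: float, radius_m: float = EARTH_RADIUS_M) -> float:
    """Straight-line distance to the geometric horizon for an eye height h
    above a sphere of radius R: d = sqrt(2*R*h + h**2) ~ sqrt(2*R*h) for h << R."""
    return math.sqrt(2.0 * radius_m * eye_height_m + eye_height_m ** 2)

print(round(horizon_distance_m(1.8) / 1000, 1))   # → 4.8  (km, for 1.8 m eye level)
print(round(horizon_distance_m(400_000) / 1000))  # a couple of thousand km from
                                                  # an assumed 400 km orbital altitude
```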
The word horizon derives from the Greek ὁρίζων κύκλος ( horízōn kýklos ) 'separating circle', [ 3 ] where ὁρίζων is from the verb ὁρίζω ( horízō ) 'to divide, to separate', [ 4 ] which in turn derives from ὅρος ( hóros ) 'boundary, landmark'. [ 5 ]
Historically, the distance to the visible horizon has long been vital to survival and successful navigation, especially at sea, because it determined an observer's maximum range of vision and thus of communication , with all the obvious consequences for safety and the transmission of information that this range implied. This importance lessened with the development of the radio and the telegraph , but even today, when flying an aircraft under visual flight rules , a technique called attitude flying is used to control the aircraft, where the pilot uses the visual relationship between the aircraft's nose and the horizon to control the aircraft. Pilots can also retain their spatial orientation by referring to the horizon.
In many contexts, especially perspective drawing , the curvature of the Earth is disregarded and the horizon is considered the theoretical line to which points on any horizontal plane converge (when projected onto the picture plane) as their distance from the observer increases. For observers near sea level, the difference between this geometrical horizon (which assumes a perfectly flat, infinite ground plane) and the true horizon (which assumes a spherical Earth surface) is imperceptible to the unaided eye. However, for someone on a 1,000 m (3,300 ft) hill looking out across the sea, the true horizon will be about a degree below a horizontal line.
In astronomy, the horizon is the horizontal plane through the eyes of the observer. It is the fundamental plane of the horizontal coordinate system , the locus of points that have an altitude of zero degrees. While similar in ways to the geometrical horizon, in this context a horizon may be considered to be a plane in space, rather than a line on a picture plane.
Ignoring the effect of atmospheric refraction, distance to the true horizon from an observer close to the Earth's surface is about [ 2 ]

d ≈ √(2hR),

where h is height above sea level and R is the Earth radius .

The expression can be simplified as:

d ≈ k√h,

where the constant equals k = 3.57 km/m ½ = 1.22 mi/ft ½ .
In this equation, Earth's surface is assumed to be perfectly spherical, with R equal to about 6,371 kilometres (3,959 mi).
On terrestrial planets and other solid celestial bodies with negligible atmospheric effects, the distance to the horizon for a "standard observer" varies as the square root of the planet's radius. Thus, the horizon on Mercury is 62% as far away from the observer as it is on Earth, on Mars the figure is 73%, on the Moon the figure is 52%, on Mimas the figure is 18%, and so on.
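The square-root scaling described above is easy to check numerically. The sketch below is illustrative: the planetary radii are approximate published mean values, and for a fixed observer height the horizon-distance ratio between two bodies reduces to the square root of the radius ratio.

```python
import math

EARTH_RADIUS_KM = 6371.0

def horizon_distance(h_m, radius_km):
    """Geometric distance to the horizon in km for eye height h_m metres,
    ignoring refraction: d = sqrt(2 * R * h)."""
    return math.sqrt(2 * radius_km * (h_m / 1000.0))

# Approximate mean radii in km (assumed values, for illustration only)
radii = {"Mercury": 2439.7, "Mars": 3389.5, "Moon": 1737.4, "Mimas": 198.2}

for body, r in radii.items():
    # For a fixed observer height, d scales as sqrt(R), so the ratio to
    # Earth's horizon distance is sqrt(R / R_earth) -- the height cancels.
    ratio = math.sqrt(r / EARTH_RADIUS_KM)
    print(f"{body}: horizon is {ratio:.0%} as far away as on Earth")

# Standard-observer check: eyes 1.8 m above a smooth, airless Earth
print(f"{horizon_distance(1.8, EARTH_RADIUS_KM):.1f} km")  # about 4.8 km
```

Running this reproduces the percentages quoted above (62% for Mercury, 73% for Mars, 52% for the Moon, 18% for Mimas) and the 4.8 km figure for a standing observer.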
If the Earth is assumed to be a featureless sphere (rather than an oblate spheroid ) with no atmospheric refraction, then the distance to the horizon can easily be calculated. [ 6 ]
The tangent-secant theorem states that, for a tangent of length d and a secant drawn to a circle from the same external point, the square of the tangent equals the product of the near and far distances along the secant: d 2 = OA × OB.

Make the following substitutions: OA = h, the observer's height, and OB = h + D, where D = 2 R is the diameter of the Earth,

with d, D, and h all measured in the same units. The formula now becomes

d 2 = h (2 R + h ),

or

d = √( h (2 R + h )) = √(2 Rh + h 2 ),

where R is the radius of the Earth.
The same equation can also be derived using the Pythagorean theorem .
At the horizon, the line of sight is a tangent to the Earth and is also perpendicular to Earth's radius.
This sets up a right triangle, with the sum of the radius and the height as the hypotenuse.
With d the length of the tangent line of sight, h the observer's height, and R + h the hypotenuse, referring to the second figure at the right leads to the following:

( R + h ) 2 = R 2 + d 2 , so d = √(( R + h ) 2 − R 2 ) = √(2 Rh + h 2 ).
The exact formula above can be expanded as:

d = √(2 Rh + h 2 ),

where R is the radius of the Earth ( R and h must be in the same units). For example,
if a satellite is at a height of 2000 km, the distance to the horizon is 5,430 kilometres (3,370 mi);
neglecting the second term in parentheses would give a distance of 5,048 kilometres (3,137 mi), a 7% error.
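The satellite example can be verified in a few lines. This is a sketch using the same radius value as above:

```python
import math

R = 6371.0  # Earth radius, km
h = 2000.0  # satellite height, km

exact = math.sqrt(h * (2 * R + h))   # d = sqrt(h(2R + h)), keeps the h^2 term
approx = math.sqrt(2 * R * h)        # drops the h^2 term
error = (exact - approx) / exact

print(f"exact  = {exact:.0f} km")    # ~5430 km
print(f"approx = {approx:.0f} km")   # ~5048 km
print(f"error  = {error:.0%}")       # ~7%
```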
If the observer is close to the surface of the Earth, then h is a negligible fraction of R and can be disregarded in the term (2 R + h ) , and the formula becomes

d = √(2 Rh ).
Using kilometres for d and R , and metres for h , and taking the radius of the Earth as 6371 km, the distance to the horizon is

d ≈ 3.57 √ h .
Using imperial units , with d and R in statute miles (as commonly used on land), and h in feet, the distance to the horizon is

d ≈ 1.22 √ h .
If d is in nautical miles , and h in feet, the constant factor is about 1.06, which is close enough to 1 that it is often ignored, giving:

d ≈ √ h .
These formulas may be used when h is much smaller than the radius of the Earth (6371 km or 3959 mi), including all views from any mountaintops, airplanes, or high-altitude balloons. With the constants as given, both the metric and imperial formulas are precise to within 1% (see the next section for how to obtain greater precision).
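The constants in the metric and imperial formulas follow directly from √(2 R ) with the appropriate unit conversions. A minimal sketch:

```python
import math

R_KM = 6371.0       # Earth radius in kilometres
R_MI = 3959.0       # Earth radius in statute miles
FT_PER_MI = 5280.0  # feet per statute mile

# d [km] = sqrt(2 * R_km * h_m / 1000) = k_metric * sqrt(h_m)
k_metric = math.sqrt(2 * R_KM / 1000.0)

# d [mi] = sqrt(2 * R_mi * h_ft / 5280) = k_imperial * sqrt(h_ft)
k_imperial = math.sqrt(2 * R_MI / FT_PER_MI)

print(round(k_metric, 2))    # 3.57
print(round(k_imperial, 2))  # 1.22
```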
If h is significant with respect to R , as with most satellites, then the approximation is no longer valid, and the exact formula is required.
Another relationship involves the great-circle distance s along the arc over the curved surface of the Earth to the horizon; this is more directly comparable to the geographical distance on a map.
It can be formulated in terms of the angle γ in radians subtended at the centre of the Earth between the observer and the horizon point, so that s = Rγ;

then

cos γ = R /( R + h ).

Solving for s gives

s = R cos −1 ( R /( R + h )).
The distance s can also be expressed in terms of the line-of-sight distance d ; from the second figure at the right,

tan γ = d / R ;

substituting for γ and rearranging gives

s = R tan −1 ( d / R ).
The distances d and s are nearly the same when the height of the object is negligible compared to the radius (that is, h ≪ R ).
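The relations cos γ = R /( R + h ), d = R tan γ, and s = R γ can be sketched numerically to show how close d and s are for a standing observer and how far apart they grow for a satellite. The helper name below is illustrative:

```python
import math

R = 6371.0  # Earth radius, km

def horizon_geometry(h_km):
    """Return (d, s): line-of-sight and great-circle distances to the
    horizon, in km, for an observer h_km above a spherical Earth."""
    gamma = math.acos(R / (R + h_km))  # angle at Earth's centre, radians
    d = R * math.tan(gamma)            # straight-line (tangent) distance
    s = R * gamma                      # arc length along the surface
    return d, s

# Negligible height (1.7 m observer): d and s agree to within metres
d, s = horizon_geometry(0.0017)
# Large height (2000 km satellite): d noticeably exceeds s
d2, s2 = horizon_geometry(2000.0)

print(f"standing: d = {d:.2f} km, s = {s:.2f} km")
print(f"satellite: d = {d2:.0f} km, s = {s2:.0f} km")
```

For the standing observer both distances come out near 4.65 km; for the satellite the tangent distance is about 5430 km while the surface arc is only about 4497 km.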
When the observer is elevated, the horizon zenith angle can be greater than 90°. The maximum visible zenith angle occurs when the ray is tangent to Earth's surface; from triangle OCG in the figure at right,

cos γ = R /( R + h ),

where h is the observer's height above the surface and γ is the angular dip of the horizon. It is related to the horizon zenith angle z by:

z = γ + 90°.

For a non-negative height h , the angle z is always ≥ 90°.
To compute the greatest distance D BL at which an observer B can see the top of an object L above the horizon, simply add the distances to the horizon from each of the two points:
For example, for an observer B with a height of h B =1.70 m standing on the ground, the horizon is D B =4.65 km away. For a tower with a height of h L =100 m, the horizon distance is D L =35.7 km. Thus an observer on a beach can see the top of the tower as long as it is not more than D BL =40.35 km away. Conversely, if an observer on a boat ( h B =1.7 m) can just see the tops of trees on a nearby shore ( h L =10 m), the trees are probably about D BL =16 km away.
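The two worked cases above can be reproduced with the approximation D ≈ 3.57 √ h applied to each height. A sketch (the function names are illustrative):

```python
import math

K = 3.57  # km per sqrt(metre), geometric constant without refraction

def horizon_km(h_m):
    """Distance to the horizon in km for eye height h_m in metres."""
    return K * math.sqrt(h_m)

def max_sight_km(h_observer_m, h_target_m):
    """Greatest distance at which the target's top clears the horizon:
    the sum of the two horizon distances."""
    return horizon_km(h_observer_m) + horizon_km(h_target_m)

print(round(horizon_km(1.70), 2))            # 4.65  (standing observer)
print(round(horizon_km(100.0), 1))           # 35.7  (100 m tower)
print(round(max_sight_km(1.70, 100.0), 2))   # 40.35 (observer sees tower top)
print(round(max_sight_km(1.70, 10.0)))       # 16    (boat sees treetops)
```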
Referring to the figure at the right, and using the approximation above , the top of the lighthouse will be visible to a lookout in a crow's nest at the top of a mast of the boat if

D BL < 3.57 (√ h B + √ h L ),

where D BL is in kilometres and h B and h L are in metres.
As another example, suppose an observer, whose eyes are two metres above the level ground, uses binoculars to look at a distant building which he knows to consist of thirty storeys , each 3.5 metres high. He counts the storeys he can see and finds there are only ten. So twenty storeys, or 70 metres, of the building are hidden from him by the curvature of the Earth. From this, he can calculate his distance from the building:

D = 3.57 (√ 2 + √ 70 ),

which comes to about 35 kilometres.
It is similarly possible to calculate how much of a distant object is visible above the horizon. Suppose an observer's eye is 10 metres above sea level, and he is watching a ship that is 20 km away. His horizon is 3.57 √ 10 kilometres from him, which comes to about 11.3 kilometres away. The ship is a further 8.7 km away. The height of a point on the ship that is just visible to the observer is given by:

h = (8.7 / 3.57) 2 ,
which comes to almost exactly six metres. The observer can therefore see that part of the ship that is more than six metres above the level of the water. The part of the ship that is below this height is hidden from him by the curvature of the Earth. In this situation, the ship is said to be hull-down .
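Both hidden-height calculations follow the same pattern: the distance past the observer's own horizon, divided by 3.57 and squared, gives the height hidden by curvature. A sketch with illustrative function names:

```python
import math

K = 3.57  # km per sqrt(metre)

def distance_from_hidden_height(h_eye_m, hidden_m):
    """Distance in km at which 'hidden_m' metres of an object are
    concealed below the horizon for an eye height of h_eye_m metres."""
    return K * (math.sqrt(h_eye_m) + math.sqrt(hidden_m))

def hidden_height(h_eye_m, distance_km):
    """Height in metres of the lowest visible point on an object
    'distance_km' away, for an eye height of h_eye_m metres."""
    beyond = distance_km - K * math.sqrt(h_eye_m)  # km past the horizon
    return (beyond / K) ** 2

# Building example: eyes at 2 m, twenty storeys (70 m) hidden
print(round(distance_from_hidden_height(2.0, 70.0)))  # ~35 km

# Hull-down ship: eyes at 10 m, ship 20 km away
print(round(hidden_height(10.0, 20.0)))  # ~6 m hidden... visible above 6 m
```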
Due to atmospheric refraction, the distance to the visible horizon is farther than the distance based on a simple geometric calculation. If the ground (or water) surface is colder than the air above it, a cold, dense layer of air forms close to the surface, causing light to be refracted downward as it travels, and therefore, to some extent, to go around the curvature of the Earth. The reverse happens if the ground is hotter than the air above it, as often happens in deserts, producing mirages . As an approximate compensation for refraction, surveyors measuring distances longer than 100 metres subtract 14% from the calculated curvature error and ensure lines of sight are at least 1.5 metres from the ground, to reduce random errors created by refraction.
If the Earth were an airless world like the Moon, the above calculations would be accurate. However, Earth has an atmosphere of air , whose density and refractive index vary considerably depending on the temperature and pressure. This makes the air refract light to varying extents, affecting the appearance of the horizon. Usually, the density of the air just above the surface of the Earth is greater than its density at greater altitudes. This makes its refractive index greater near the surface than at higher altitudes, which causes light that is travelling roughly horizontally to be refracted downward. [ 7 ] This makes the actual distance to the horizon greater than the distance calculated with geometrical formulas. With standard atmospheric conditions, the difference is about 8%. This changes the factor of 3.57, in the metric formulas used above, to about 3.86. [ 2 ] For instance, if an observer is standing on the seashore, with eyes 1.70 m above sea level, according to the simple geometrical formulas given above the horizon should be 4.7 km away. Actually, atmospheric refraction allows the observer to see 300 metres farther, moving the true horizon 5 km away from the observer.
This correction can be, and often is, applied as a fairly good approximation when atmospheric conditions are close to standard . When conditions are unusual, this approximation fails. Refraction is strongly affected by temperature gradients, which can vary considerably from day to day, especially over water. In extreme cases, usually in springtime, when warm air overlies cold water, refraction can allow light to follow the Earth's surface for hundreds of kilometres. Opposite conditions occur, for example, in deserts, where the surface is very hot, so hot, low-density air is below cooler air. This causes light to be refracted upward, causing mirage effects that make the concept of the horizon somewhat meaningless. Calculated values for the effects of refraction under unusual conditions are therefore only approximate. [ 2 ] Nevertheless, attempts have been made to calculate them more accurately than the simple approximation described above.
Outside the visual wavelength range, refraction will be different. For radar (e.g. for wavelengths 300 to 3 mm i.e. frequencies between 1 and 100 GHz) the radius of the Earth may be multiplied by 4/3 to obtain an effective radius giving a factor of 4.12 in the metric formula i.e. the radar horizon will be 15% beyond the geometrical horizon or 7% beyond the visual. The 4/3 factor is not exact, as in the visual case the refraction depends on atmospheric conditions.
If the density profile of the atmosphere is known, the distance d to the horizon is given by [ 8 ]

d = R E (ψ + δ),

where R E is the radius of the Earth, ψ is the dip of the horizon and δ is the refraction of the horizon. The dip is determined fairly simply from

cos ψ = R E μ 0 / (( R E + h ) μ),
where h is the observer's height above the Earth, μ is the index of refraction of air at the observer's height, and μ 0 is the index of refraction of air at Earth's surface.
The refraction must be found by integration of

δ = −∫ tan ϕ (dμ / μ),

where ϕ is the angle between the ray and a line through the center of the Earth. The angles ψ and ϕ are related by

ϕ = 90° + ψ.
A much simpler approach, which produces essentially the same results as the first-order approximation described above, uses the geometrical model but uses a radius R′ = 7/6 R E . The distance to the horizon is then [ 2 ]

d = √(2 R′ h ).

Taking the radius of the Earth as 6371 km, with d in km and h in m,

d ≈ 3.86 √ h ;

with d in mi and h in ft,

d ≈ 1.32 √ h .

In the case of radar one typically has R′ = 4/3 R E , resulting (with d in km and h in m) in

d ≈ 4.12 √ h .
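The effective-radius trick simply rescales the constant in d ≈ k √ h : scaling R by 7/6 gives the visual constant under standard refraction, and 4/3 gives the radar constant. A sketch:

```python
import math

R_KM = 6371.0  # Earth radius, km

def k_factor(radius_scale=1.0):
    """Constant k in d[km] = k * sqrt(h[m]) for an effective Earth
    radius of radius_scale * R."""
    return math.sqrt(2 * radius_scale * R_KM / 1000.0)

print(round(k_factor(), 2))        # 3.57  purely geometric
print(round(k_factor(7 / 6), 2))   # 3.86  visual, standard refraction
print(round(k_factor(4 / 3), 2))   # 4.12  radar
```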
Results from Young's method are quite close to those from Sweer's method, and are sufficiently accurate for many purposes.
The horizon is a key feature of the picture plane in the science of graphical perspective . Assuming the picture plane stands vertical to the ground, and P is the perpendicular projection of the eye point O on the picture plane, the horizon is defined as the horizontal line through P . The point P is the vanishing point of lines perpendicular to the picture. If S is another point on the horizon, then it is the vanishing point for all lines parallel to OS . But Brook Taylor (1719) indicated that the horizon plane determined by O and the horizon was like any other plane :
The peculiar geometry of perspective, where parallel lines converge in the distance, stimulated the development of projective geometry , which posits a point at infinity where parallel lines meet. In her book Geometry of an Art (2007), Kirsti Andersen described the evolution of perspective drawing and science up to 1800, noting that vanishing points need not be on the horizon. In a chapter titled "Horizon", John Stillwell recounted how projective geometry has led to incidence geometry , the modern abstract study of line intersection. Stillwell also ventured into foundations of mathematics in a section titled "What are the Laws of Algebra ?" The "algebra of points", originally given by Karl von Staudt to derive the axioms of a field , was deconstructed in the twentieth century, yielding a wide variety of mathematical possibilities. Stillwell states
A horizon of predictability is the point after which a dynamical system becomes unpredictable given initial conditions . This includes
Horizons: Software Starter Pack is a software compilation for the ZX Spectrum , designed by Psion Software Ltd and published by Sinclair Research Ltd in 1982. [ 1 ]
It was not released on its own, but came bundled with new ZX Spectrums. [ 2 ] Side A of the cassette tape contains lessons and tutorials pertaining to the Spectrum and Side B contains eight programmes written in BASIC . It was considered a good companion to the Spectrum manual. [ 3 ]
Side A contains six separately-loading tutorials. The first is an overview of the Spectrum hardware. Programmes 2 to 5 are specific computing lessons. The final programme is a glossary of ZX Spectrum BASIC keywords. [ 4 ]
Side B contains eight programmes written in BASIC.
Horizontal Environmental Genetic Alteration Agents (HEGAAs) are any artificially developed agents that are engineered to edit the genome of eukaryotic species they infect when intentionally dispersed into the environment (outside of contained facilities such as laboratories or hospitals).
The term “ genetic alteration agent ” first appears in a 2016 work plan by the Defense Advanced Research Projects Agency ( DARPA ) describing a tender for contracts to develop genetically modified plant viruses for an approach involving their dispersion into the environment. [ 1 ] The prefixing of “ horizontal environmental ” to the former to generate the acronym HEGAA was first used in a 2018 scientific publication. [ 2 ] The acronym HEGAA or its plural HEGAAs has subsequently been used in scientific [ 3 ] [ 4 ] defence [ 5 ] and general media. [ 6 ] [ 7 ]
Agents such as pathogens , symbionts or synthetic protein assemblages [ 8 ] that can be acquired through horizontal transmission in the environment can potentially be engineered to become HEGAAs. This would be achieved using biotechnology methods to confer to them the capacity to alter nucleotides in the chromosomes of infected individuals through sequence-specific editing systems like CRISPR , ZFNs or TALENs . No known infectious agent naturally has the capacity to gene edit eukaryotes in a manner that can be flexibly targeted to specific sequences (distinct from substantially random natural processes like retroviral integration ).
By definition, HEGAA-induced gene editing events are intended to occur outside of contained facilities such as laboratories or hospitals. While genetically modified viruses with CRISPR editing have been successfully used as research tools in laboratories [ 9 ] [ 10 ] or for gene therapy in clinical settings, in those applications all gene editing events are intended to physically occur within contained facilities. By contrast, the intended mode of action of HEGAAs relies on inducing gene editing events that occur largely or exclusively in the environment.
Where HEGAAs are engineered to target obligate sexually reproducing species, they can usefully be thought of as being of two types:
Where HEGAAs are engineered to target host species that can reproduce asexually, for example vegetatively reproducing plants, the above distinctions are largely no longer meaningful.
Despite an expanding number of techniques which employ engineered infectious agents to alter the genetic material of a second species, often involving genetically modified viruses , only a very small minority rely on gene editing events occurring in the environment. Furthermore, while there are a number of proposed applications which rely on the intentional dispersion of genetically modified infectious agents in the environment, only those where gene editing occurs are considered HEGAAs. Consequently, proposed applications of viral immuno-contraception , [ 11 ] [ 12 ] transmissible vaccines, [ 13 ] [ 14 ] and agricultural field transient expression systems [ 15 ] [ 16 ] are not examples of HEGAA approaches, because none currently involve gene editing. HEGAAs are only those agents that are proposed for applications that require both horizontal acquisition (infection) and gene editing events that are intended to occur in the environment.
No HEGAAs have been intentionally dispersed into the environment, though some are reportedly in development.
Horizontal Environmental Genetic Alteration Agents are an agricultural technology currently being developed by DARPA to ensure long-term food security. The research is conducted under the DARPA programme known as the Insect Allies project. [ 17 ] At a high level, insects serve as vectors that "infect" crops with a virus engineered to alter the crops' genes, making them more resilient against pests, weeds and climate changes.
Fictional plagues of engineered pathogens were a feature of science fiction literature well before the advent of targetable gene editing systems. However, despite informed conjecture in media sources [ 18 ] [ 19 ] [ 20 ] or reports [ 21 ] on HEGAA-like scenarios, there have been few depictions in a fictional context.
An example of a HEGAA-like scenario is a storyline in season 10 of The X-Files , with a virus engineered to contain a CRISPR system targeted to disrupt the sequence of the human adenosine deaminase gene. In the fictional story, gene editing by the virus is triggered in the environment as a means to destroy the human immune system. [ 22 ]
Many East Asian scripts can be written horizontally or vertically. Chinese characters , Korean hangul , and Japanese kana may be oriented along either axis, as they consist mainly of disconnected logographic or syllabic units, each occupying a square block of space, thus allowing for flexibility for which direction texts can be written, be it horizontally from left-to-right, horizontally from right-to-left, vertically from top-to-bottom, and even vertically from bottom-to-top.
Traditionally, written Chinese , Vietnamese , Korean , and Japanese are written vertically in columns going from top to bottom and ordered from right to left, with each new column starting to the left of the preceding one. The stroke order and stroke direction of Chinese characters, Vietnamese chữ Nôm , Korean hangul, and kana all facilitate writing in this manner. In addition, writing in vertical columns from right to left facilitated writing with a brush in the right hand while continually unrolling the sheet of paper or scroll with the left. Since the nineteenth century, it has become increasingly common for these languages to be written horizontally, from left to right, with successive rows going from top to bottom, under the influence of European languages such as English, although vertical writing is still frequently used in Hong Kong, Japan, Korea, Macau, and Taiwan.
Chinese characters, Japanese kana, Vietnamese chữ Nôm and Korean hangul can be written horizontally or vertically. There are some small differences in orthography . In horizontal writing it is more common to use Arabic numerals , whereas Chinese numerals are more common in vertical text.
In these scripts, the positions of punctuation marks, for example the relative position of commas and full stops (periods), differ between horizontal and vertical writing. Punctuation such as the parentheses, quotation marks, book title marks (Chinese), ellipsis mark, dash, wavy dash (Japanese), proper noun mark (Chinese), wavy book title mark (Chinese), emphasis mark, and chōon mark (Japanese) are all rotated 90 degrees when switching between horizontal and vertical text.
Where a text is written in horizontal format, pages are read in the same order as English books, with the binding at the left and pages progressing to the right. Vertical books are printed the other way round, with the binding at the right, and pages progressing to the left.
Ruby characters like furigana in Japanese which provides a phonetic guide for unusual or difficult-to-read characters, follow the direction of the main text. Example in Japanese, with furigana in green:
Bopomofo in Taiwan is usually written vertically regardless of the direction of the main text. Text in the Latin alphabet is usually written horizontally when it appears in vertical text, or else it is turned sideways with the base of the characters on the left.
Historically, vertical writing was the standard system, and horizontal writing was only used where a sign had to fit in a constrained space, such as over the gate of a temple or the signboard of a shop. Before the end of World War II in Japan, those signs were read right to left.
Today, the left-to-right direction is dominant in all three languages for horizontal writing: this is due partly to the influence of English and other Western languages to make it easier to read when the two languages are found together—for example, on signs at an airport or train station—and partly to the increased use of computerized typesetting and word processing software, most of which does not directly support right-to-left layout of East Asian languages. However, right-to-left horizontal writing is still seen in these scripts, in such places as signs, on the right-hand side of vehicles, and on the right-hand side of stands selling food at festivals. It is also used to simulate archaic writing, for example in reconstructions of old Japan for tourists, and it is still found in the captions and titles of some newspapers.
There are only two types of vertical scripts known to have been written from left to right: the Old Uyghur script and its descendants — Traditional Mongolian , Oirat Clear , Manchu , and Buryat — and the 'Phags-pa script . The former developed because the Uyghurs rotated their Sogdian -derived script, originally written right to left, 90 degrees counter-clockwise to emulate Chinese writing, but without changing the relative orientation of the letters. 'Phags-pa in turn was an adaptation of Tibetan script written vertically on the model of Mongolian to supplant those writing systems current in the Mongol Empire . Of these, only traditional Mongolian still remains in use today in Inner Mongolia . [ 1 ] [ 2 ] : 36
The first printed Chinese text in horizontal alignment was Robert Morrison 's Dictionary of the Chinese language , published in 1815–1823 in Macau.
The earliest widely known Chinese publication using horizontal alignment was the magazine Science ( 科學 ). Its first issue in January 1915 explained the (then) unusual format:
本雜誌印法,旁行上左,並用西文句讀點之,以便插寫算術及物理化學諸程式,非故好新奇,讀者諒之。 This magazine is printed sideways from the top left, and marked with Western punctuation. This is to make more convenient the insertion of mathematical, physical and chemical formulae, and not for novelty's sake. We ask for our readers' understanding.
In the subsequent decades, the occurrence of words in a Western script became more frequent, and readers began to appreciate the unwieldiness of rotating the paper at each occurrence for vertically set texts. This accelerated acceptance of horizontal writing.
With the proliferation of horizontal text, both horizontal and vertical came to be used concurrently. Proponents of horizontal text argued that vertical text in right-to-left columns was smudged easily when written, and moreover demanded greater movement from the eyes when read. Vertical text proponents considered horizontal text to be a break from established tradition.
After their victory in the Chinese Civil War , the People's Republic of China decided to use horizontal writing. All newspapers in China changed from vertical to horizontal alignment on 1 January 1956. In publications, text is run horizontally although book titles on spines and some newspaper headlines remain vertical for convenience. Inscriptions of signs on most state organs are still vertical.
In Singapore, vertical writing has also become rare. In Taiwan, Hong Kong, Macau , and among older overseas Chinese communities, horizontal writing has been gradually adopted since the 1990s. By the early 2000s, most newspapers in these areas had switched to left-to-right horizontal writing, either entirely or in a combination of vertical text with horizontal left-to-right headings.
Horizontal text came into Japanese in the Meiji era , when the Japanese began to print Western language dictionaries. Initially they printed the dictionaries in a mixture of horizontal Western and vertical Japanese text, which meant readers had to rotate the book ninety degrees to read the Western text. Because this was unwieldy, the idea of yokogaki came to be accepted. One of the first publications to partially use yokogaki was a German to Japanese dictionary ( 袖珍挿図独和辞書 Shūchinsōzu Dokuwa Jisho 'pocket illustrated German to Japanese dictionary') published in 1885 (Meiji 18).
At the beginning of the change to horizontal alignment in Meiji era Japan, there was a short-lived form called migi yokogaki ( 右横書き , literally "right horizontal writing"), in contrast to hidari yokogaki , ( 左横書き 'left horizontal writing'), the current standard. This resembled the right-to-left horizontal writing style of languages such as Arabic with line breaks on the left. It was probably based on the traditional single-row right-to-left writing. This form was widely used for pre-WWII advertisements and official documents (like banknotes of the Japanese yen ), but has not survived outside of old-fashioned signboards.
Vertical writing ( tategaki 縦書き ) is still commonly used in Japan in novels, newspapers and magazines, including most Japanese comics and graphic novels (also known as manga ), while horizontal writing is used more often in other media, especially those containing English language references. In general, dialogue in manga is written vertically. However, in scenes where a character speaks a foreign language, the dialogue may be written horizontally. In this case, there is a mixture of vertical and horizontal writing on a single page.
Traditionally, Korean writing has been vertical, with columns running right to left.
In 1988, The Hankyoreh became the first Korean newspaper to use horizontal writing. The Chosun Ilbo was the last major newspaper to publish in vertical right-to-left writing; it published its last issue in vertical right-to-left writing on 1 March 1999, four days before its 79th anniversary. [ 3 ] Other major newspapers had already switched to horizontal writing by 1 January 1998; Dong-A Ilbo published its last vertical issue on 31 December 1997, [ 4 ] Kyunghyang Shinmun on 6 April 1997 [ 5 ] (the day before its 50th anniversary), and Maeil Kyungje on 14 September 1996. [ 6 ] Announcements about the impending change in these newspapers in the days preceding often shared a common theme of "appealing to younger audiences". Today, major Korean newspapers hardly ever run text vertically.
Traditionally, Vietnamese writing was vertical, with columns running right to left as the language used a mixture of Chinese characters and independently developed characters derived from Chinese ones to write the native language in a script called chữ Nôm .
After the 1920s, when the Vietnamese alphabet , influenced by the Portuguese alphabet, started to be used nationwide to replace chữ Nôm, vertical writing style fell into disuse.
In East Asian calligraphy, vertical writing remains the dominant direction.
Japanese manga tend to use vertical direction for text. Manga frames tend to flow in right-to-left horizontal direction. Frames in yonkoma manga tend to flow in a vertical direction. Page ordering is the same as books that use vertical direction: from right to left. Frames that are chronologically before or after each other use less spacing in between as a visual cue.
Most text in manga is written vertically, which dictates the vertical shapes of the speech bubbles . Some, however, such as Levius , are aimed at the international market and strive to optimize for translation and localization, therefore make use of horizontal text and speech bubbles.
In some cases, horizontal writing in text bubbles may be used to indicate that a translation convention is in use – for example, Kenshi Hirokane uses Japanese text arranged horizontally to imply that a character is actually speaking in a foreign language, like English.
Some publishers that translate manga into European languages may choose to keep the original page order (a notable example is Shonen Jump magazine), while other publishers may reverse the page flow with use of mirrored pages. When manga was first released in Anglophone countries, it was usually in the reversed format (also known as "flipped" or "flopped"), but the non-reversed format eventually came to predominate. [ 7 ] [ 8 ] [ 9 ]
Both horizontal and vertical writing are used in Japan, Hong Kong, Macau and Taiwan. Traditional characters are also used in mainland China in a few limited contexts, such as some books on ancient literature, or as an aesthetic choice for some signs on shops, temples, etc. In those contexts, both horizontal and vertical writing are used as well.
Vertical writing is commonly used for novels, newspapers, manga, and many other forms of writing. Because it goes downward, vertical writing is invariably used on the spines of books. Some newspapers combine the two forms, using the vertical format for most articles but including some written horizontally, especially for headlines. Musical notation for some Japanese instruments such as the shakuhachi is written vertically.
Horizontal writing is easier for some purposes; academic texts are sometimes written this way since they often include words and phrases in other languages, which are more easily incorporated horizontally. Scientific and mathematical texts are nearly always written horizontally, since in vertical writing equations must be turned sideways, making them more difficult to read.
Similarly, English language textbooks, which contain many English words, are usually printed in horizontal writing. This is not a fixed rule, however, and it is also common to see English words printed sideways in vertical writing texts.
Japanese business cards ( meishi ) are often printed vertically in Japanese on one side, and horizontally in English on the other. Postcards and handwritten letters may be arranged horizontally or vertically, but the more formal the letter the more likely it is to be written vertically. Envelope addresses are usually vertical, with the recipient's address on the right and the recipient's name in the exact centre of the envelope. See also Japanese etiquette .
In mainland China , vertical writing using simplified characters is now comparatively rare, and rarer in print than in handwriting and signage. Most publications are now printed in horizontal alignment, like English. Horizontal writing is written left to right in the vast majority of cases, with a few exceptions such as bilingual dictionaries of Chinese and right-to-left scripts like Arabic, in which case Chinese may follow the right-to-left alignment. Right-to-left writing direction can also often be seen on the right side of tourist buses, as it is customary to have the text run (on both sides of the vehicle) from the front of the bus to its rear.
Vertical alignment is generally used for artistic or aesthetic purposes like in logos and on book covers, in scholarly works on Literary Chinese works, or when space constraints demand it, like on the spines of books and when labelling diagrams. Naturally, vertical text is also used on signs that are longer than they are wide; such signs are the norm at the entrances of schools, government offices and police stations. Calligraphy – in simplified or traditional characters – is invariably written vertically. Additionally, vertical text may still be encountered on some business cards and personal letters in China.
Since 2012, street markings are written vertically, but unusually from bottom to top. This is so that the characters are read from nearest to furthest from the drivers' perspective. [ 10 ]
In modern Korea, vertical writing is uncommon. Modern Korean is usually written horizontally from left to right. Vertical writing is used when the writing space is long vertically and narrow horizontally. For example, titles on the spines of books are usually written vertically. When a foreign language film is subtitled into Korean, the subtitles are sometimes written vertically at the right side of the screen.
In the standard language ( 표준어 ; 標準語 ) of South Korea, punctuation marks are used differently in horizontal and vertical writing. Western punctuation marks are used in horizontal writing and the Japanese punctuation marks are used in vertical writing. However, vertical writing using Western punctuation marks is sometimes found.
Early computer installations were designed only to support left-to-right horizontal writing based on the Latin alphabet . Today, most computer programs do not fully support the vertical writing system; however, most advanced word processing and publication software that target the East Asian region support the vertical writing system either fully or to a limited extent.
Even though vertical text display is generally not well supported, composing vertical text for print has been made possible. For example, in Asian editions of Windows, Asian fonts are also available in a vertical version, with font names prefixed by "@". [ 11 ] Users can compose and edit the document as normal horizontal text. When complete, changing the text font to a vertical font converts the document to vertical orientation for printing purposes. Additionally, OpenType has valt , vert , vhal , vkna , vkrn , vpal , vrt2 , vrtr "feature tags" to define glyphs that can be transformed or adjusted within vertical text; they can be enabled or disabled in CSS3 using the font-feature-settings property. [ 12 ]
Since the late 1990s, the W3C (World Wide Web Consortium) has been drafting Cascading Style Sheets properties to enable display on the Web of the various languages of the world according to their heritage text directions. Their latest efforts in 2011 show some revisions to the previous format for the writing-mode property, which provides for vertical layout and text display. The format "writing-mode:tb-rl" was revised to "writing-mode: vertical-rl" in CSS, but the former syntax was preserved as part of the SVG 1.1 specification.
Among Web browsers, Internet Explorer was the first to support vertical text and layout coded in HTML. Starting with IE 5.5 in 2000, Microsoft enabled the writing-mode property as a "Microsoft extension to Cascading Style Sheets (CSS)". Google Chrome (since 8.0), Safari (since 5.1), and Opera (since 15.0) have supported the -webkit-writing-mode property. [ 13 ] Mozilla Firefox gained support for the unprefixed writing-mode property in version 38.0, enabled by default in version 41.0. [ 13 ] [ 14 ] [ 15 ] Starting with Google Chrome version 48 in 2016, the unprefixed writing-mode property is also supported by Chromium browsers, with the exception of the sideways-lr and sideways-rl values . | https://en.wikipedia.org/wiki/Horizontal_and_vertical_writing_in_East_Asian_scripts
The horizontal coordinate system is a celestial coordinate system that uses the observer's local horizon as the fundamental plane to define two angles of a spherical coordinate system : altitude and azimuth .
Therefore, the horizontal coordinate system is sometimes called the az/el system , [ 1 ] the alt/az system , or the alt-azimuth system , among others. In an altazimuth mount of a telescope , the instrument's two axes follow altitude and azimuth. [ 2 ]
This celestial coordinate system divides the sky into two hemispheres : The upper hemisphere, where objects are above the horizon and are visible, and the lower hemisphere, where objects are below the horizon and cannot be seen, since the Earth obstructs views of them. [ a ] The great circle separating the hemispheres is called the celestial horizon , which is defined as the great circle on the celestial sphere whose plane is normal to the local gravity vector (the vertical direction ). [ 3 ] [ a ] In practice, the horizon can be defined as the plane tangent to a quiet, liquid surface, such as a pool of mercury , or by using a bull's eye level . [ 4 ] The pole of the upper hemisphere is called the zenith and the pole of the lower hemisphere is called the nadir . [ 5 ]
The following are two independent horizontal angular coordinates :
A horizontal coordinate system should not be confused with a topocentric coordinate system . Horizontal coordinates define the observer's orientation, but not the location of the origin, while topocentric coordinates define the location of the origin, on the Earth's surface, in contrast to a geocentric celestial system .
The horizontal coordinate system is fixed to a location on Earth, not the stars. Therefore, the altitude and azimuth of an object in the sky changes with time, as the object appears to drift across the sky with Earth's rotation . In addition, since the horizontal system is defined by the observer's local horizon, [ a ] the same object viewed from different locations on Earth at the same time will have different values of altitude and azimuth.
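How altitude and azimuth depend on the observer's latitude and the object's position can be made concrete with the standard spherical-trigonometry conversion from equatorial coordinates (declination and local hour angle) to horizontal coordinates. The sketch below is illustrative: the function name is an invention here, and azimuth is assumed to be measured from north through east.

```python
import math

def equatorial_to_horizontal(dec_deg, hour_angle_deg, lat_deg):
    """Convert equatorial coordinates (declination, local hour angle)
    to horizontal coordinates (altitude, azimuth in degrees) for an
    observer at the given latitude.  Azimuth is measured from north,
    increasing eastward; positive hour angle means west of the meridian."""
    dec = math.radians(dec_deg)
    ha = math.radians(hour_angle_deg)
    lat = math.radians(lat_deg)

    # Altitude from the spherical law of cosines
    sin_alt = (math.sin(dec) * math.sin(lat)
               + math.cos(dec) * math.cos(lat) * math.cos(ha))
    alt = math.asin(max(-1.0, min(1.0, sin_alt)))

    # Azimuth, clamped against floating-point overshoot
    cos_az = ((math.sin(dec) - math.sin(alt) * math.sin(lat))
              / (math.cos(alt) * math.cos(lat)))
    az = math.acos(max(-1.0, min(1.0, cos_az)))
    if math.sin(ha) > 0:       # object west of the meridian
        az = 2 * math.pi - az
    return math.degrees(alt), math.degrees(az)
```

For example, for an observer at latitude 50° N, an object on the celestial equator crossing the meridian (hour angle 0°) appears due south (azimuth 180°) at altitude 90° − 50° = 40°.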
The cardinal points on the horizon have specific values of azimuth that are helpful references.
Horizontal coordinates are very useful for determining the rise and set times of an object in the sky. When an object's altitude is 0°, it is on the horizon. [ a ] If at that moment its altitude is increasing, it is rising, but if its altitude is decreasing, it is setting. However, all objects on the celestial sphere are subject to diurnal motion , which always appears to be westward.
A northern observer can determine whether altitude is increasing or decreasing by instead considering the azimuth of the celestial object:
There are the following special cases: [ a ] | https://en.wikipedia.org/wiki/Horizontal_coordinate_system |
Horizontal correlation is a methodology for gene sequence analysis. Rather than referring to one specific technique, horizontal correlation encompasses a variety of approaches to sequence analysis that are unified by two specific themes:
The core ideas of the horizontal correlation approach were first presented in a year 2000 paper by Grosse, Herzel, Buldyrev, and Stanley (Grosse, et al. 2000). In this first formulation, Grosse and colleagues sought to characterize a large genetic sequence by dividing the sequence into coding and non-coding regions. Whereas traditional approaches to the coding-vs.-non-coding problem generally relied on sophisticated pattern recognition systems that were first trained on small inputs and then run over the entire sequence (Ohler, et al. 1999), the horizontal correlation approach of Grosse and colleagues worked instead by breaking the sequence into many relatively short sequence fragments, each only 500 base pairs in length. They then sought to characterize each of these fragments as either coding or non-coding. This was accomplished by comparing each size 3 window along the length of a fragment with the first size 3 window in that fragment, then measuring the value of the mutual information function between the two windows. Coding sequences were found to display a stylized pattern of 3-periodicity that non-coding sequences did not. Such a pattern was easy to recognize, and enabled significantly more rapid, more species-independent identification of coding regions (Grosse, et al. 2000).
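The information-theoretic core of the approach, measuring mutual information between positions along a sequence and looking for 3-periodicity, can be sketched as follows. This is a simplified illustration on an artificial sequence, not a reimplementation of the Grosse et al. procedure; the function names and the toy codon bias are assumptions made here.

```python
import math
import random
from collections import Counter

def mutual_information(pairs):
    """Mutual information (in bits) of the empirical joint distribution
    of a list of (x, y) symbol pairs."""
    n = len(pairs)
    joint = Counter(pairs)
    px = Counter(x for x, _ in pairs)
    py = Counter(y for _, y in pairs)
    # p(x,y) / (p(x) p(y)) simplifies to c*n / (count_x * count_y)
    return sum((c / n) * math.log2(c * n / (px[x] * py[y]))
               for (x, y), c in joint.items())

def mi_at_distance(seq, k):
    """Mutual information between nucleotides separated by k positions."""
    return mutual_information([(seq[i], seq[i + k])
                               for i in range(len(seq) - k)])

# Artificial "coding-like" sequence: every codon starts with G, the other
# two positions are uniformly random.  Real coding DNA has subtler codon
# biases, but the same period-3 statistical signal.
random.seed(0)
toy_coding = "".join("G" + random.choice("ACGT") + random.choice("ACGT")
                     for _ in range(3000))
mi1 = mi_at_distance(toy_coding, 1)
mi3 = mi_at_distance(toy_coding, 3)  # markedly larger than mi1
```

Because positions three apart share a codon position, the mutual information at distances 3, 6, 9, ... stands out in coding regions; this is the periodic signature such methods exploit to separate coding from non-coding fragments.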
Since 2000, horizontal correlation methodologies emphasizing the measurement of information theoretic quantities along the length of a gene sequence have been put to widespread use, and have even found application in shotgun sequencing fragment assembly (Otu & Sayood, 2004). | https://en.wikipedia.org/wiki/Horizontal_correlation |
The phrase horizontal evolution is used in evolutionary biology to refer to:
It is sometimes used by creationists as a synonym for | https://en.wikipedia.org/wiki/Horizontal_evolution |
Horizontal gene transfer ( HGT ) refers to the transfer of genes between distant branches on the tree of life . In evolution , it can scramble the information needed to reconstruct the phylogeny of organisms , how they are related to one another.
HGT can also help scientists to reconstruct and date the tree of life, as a gene transfer can be used as a phylogenetic marker, or as the proof of contemporaneity of the donor and recipient organisms, and as a trace of extinct biodiversity.
HGT happens very infrequently – at the individual organism level, it is highly improbable for any such event to take place. However, on the grander scale of evolutionary history, these events occur with some regularity. On one hand, this forces biologists to abandon the use of individual genes as good markers for the history of life. On the other hand, this provides an almost unexploited large source of information about the past.
The three main early branches of the tree of life have been intensively studied by microbiologists because the first organisms were microorganisms. Microbiologists (led by Carl Woese ) have introduced the term domain for the three main branches of this tree, where domain is a phylogenetic term similar in meaning to biological kingdom . To reconstruct this tree of life, the gene sequence encoding the small subunit of ribosomal RNA (SSU rRNA, 16s rRNA ) has proven useful, and the tree (as shown in the picture) relies heavily on information from this single gene.
These three domains of life represent the main evolutionary lineages of early cellular life and currently include Bacteria , Archaea (single-celled organisms superficially similar to bacteria), and Eukarya . Eukarya includes only organisms having a well-defined nucleus, such as fungi , protists , and all organisms in the plant and animal kingdoms (see figure).
The gene most commonly used for constructing phylogenetic relationships in microorganisms is the small subunit ribosomal RNA gene, as its sequences tend to be conserved among members with close phylogenetic distances, yet variable enough that differences can be measured. [ 1 ] The SSU rRNA as a measure of evolutionary distances was pioneered by Carl Woese when formulating the first modern "tree of life", and his results led him to propose the Archaea as a third domain of life . However, recently it has been argued that SSU rRNA genes can also be horizontally transferred. [ 2 ] Although this may be rare, this possibility is forcing scrutiny of the validity of phylogenetic trees based on SSU rRNAs.
Recent discoveries of "rampant" HGT in microorganisms, and the detection of horizontal movement of even genes for the small subunit of ribosomal RNA, have forced biologists to question the accuracy of at least the early branches in the tree, and even question the validity of trees as useful models of how early evolution occurs. [ 3 ] In fact, early evolution is considered to have occurred starting from a community of progenotes , able to exchange large molecules when HGT was the standard. This lateral gene transfer occurred also beyond the Darwinian threshold , after heredity or vertical gene transfer was established. [ 4 ] [ 5 ] "Sequence comparisons suggest recent horizontal transfer of many genes among diverse species including across the boundaries of phylogenetic "domains". Thus determining the phylogenetic history of a species can not be done conclusively by determining evolutionary trees for single genes." [ 6 ] HGT is thus a potential confounding factor in inferring phylogenetic trees from the sequence of one gene . For example, if two distantly related bacteria have exchanged a gene, a phylogenetic tree including those species will show them to be closely related even though most other genes have diverged substantially. For this reason it is important to use other information to infer phylogenies, such as the presence or absence of genes, or, more commonly, to include as wide a range of genes for analysis as possible. [ 6 ]
Earlier HGTs are thought to have happened. The first universal common ancestor (FUCA), earliest ancestor of the last common ancestor to all life (LUCA), is thought to have had other descendants that had their own lineages. [ 7 ] [ 8 ] These now-extinct sister lineages of LUCA descending from FUCA are thought to have horizontally transferred some of their genes into the genome of early descendants of LUCA. [ 8 ]
In his article Uprooting the Tree of Life , W. Ford Doolittle discusses the Last Universal Common Ancestor – the root of the Tree of Life – and the problems with that concept posed by HGT. [ 9 ] He describes the microorganism Archaeoglobus fulgidus as an anomaly with respect to a phylogenetic tree based upon the code for the enzyme HMGCoA reductase – the organism is definitely an archaean, with all the cell lipids and transcription machinery expected of an archaean, but its HMGCoA genes are of bacterial origin. In the article, Doolittle says that while it is now widely accepted that mitochondria in eukaryotes derived from alpha-proteobacterial cells and that chloroplasts came from ingested cyanobacteria , [ 9 ]
".. it is no longer safe to assume that those were the only lateral gene transfers that occurred after the first eukaryotes arose. Only in later, multicellular eukaryotes do we know of definite restrictions on horizontal gene exchange, such as the advent of separated (and protected) germ cells ...
If there had never been any lateral gene transfer, all these individual gene trees would have the same topology (the same branching order), and the ancestral genes at the root of each tree would have all been present in the last universal common ancestor, a single ancient cell. But extensive transfer means that neither is the case: gene trees will differ (although many will have regions of similar topology) and there would never have been a single cell that could be called the last universal common ancestor..." [ 9 ]
Doolittle suggested that the universal common ancestor cannot have been one particular organism, but must have been a loose, diverse conglomeration of primitive cells that evolved together. These early cells, each with relatively few genes, differed in many ways, and swapped their genes freely. Eventually, from these eclectic cells came the three domains of life as we know them today: bacteria , archaea and eukaryotes . These domains are now recognizably distinct because much of the gene transfer that still occurs is within these domains, rather than between them. Biologist Peter Gogarten reinforced these arguments, and suggested that the metaphor of a tree does not fit the data from recent genome research, and that biologists should now instead use "the metaphor of a mosaic to describe the different histories combined in individual genomes and use [the] metaphor of a net to visualize the rich exchange and cooperative effects of HGT among microbes." [ 10 ]
Despite the uncertainties in reconstructing phylogenies back to the beginnings of life, progress is being made in reconstructing the tree of life in the face of uncertainties raised by HGT. The uncertainty of any inferred phylogenetic tree based on a single gene can be resolved by using several common genes or even evidence from whole genomes. [ 12 ] One such approach, sometimes called 'multi-locus typing', has been used to deduce phylogenetic trees for organisms that exchange genes, such as meningitis bacteria. [ 13 ]
Jonathan Eisen and Claire Fraser have pointed out that:
"In building the tree of life, analysis of whole genomes has begun to supplement, and in some cases to improve upon, studies previously done with one or a few genes. For example, recent studies of complete bacterial genomes have suggested that the hyperthermophilic species are not deeply branching; if this is true, it casts doubt on the idea that the first forms of life were thermophiles. Analysis of the genome of the eukaryotic parasite Encephalitozoon cuniculi supports suggestions that the group Microsporidia are not deep branching protists but are in fact members of the fungal kingdom. Genome analysis can even help resolve relationships within species, such as by providing new genetic markers for population genetics studies in the bacteria causing anthrax or tuberculosis. In all these studies, it is the additional data provided by a complete genome sequence that allows one to separate the phylogenetic signal from the noise. This is not to say the tree of life is now resolved – we only have sampled a smattering of genomes, and many groups are not yet touched" [ 14 ]
These approaches are enabling estimates of the relative frequency of HGT; the relatively low values that have been observed suggests that the 'tree' is still a valid metaphor for evolution – but the tree is adorned with 'cobwebs' of horizontally transferred genes. This is the main conclusion of a 2005 study of more than 40 complete microbial genomic sequences by Fan Ge, Li-San Wang, and Junhyong Kim . They estimate the frequency of HGT events at about 2% of core genes per genome. [ 15 ] Similar whole genome approaches to assessing evolution are also enabling progress in identifying very early events in the tree of life, such as a proposal that eukaryotes arose by fusion of two complete but very diverse prokaryote genomes: one from a bacterium and one from an archaeal cell. [ 3 ]
Such a fusion-of-organisms hypothesis for the origin of complex nucleated cells has been put forward by Lynn Margulis , using quite different reasoning about symbiosis between a bacterium and an archaean arising in an ancient consortium of microbes. [ 16 ]
While HGT is often seen as a challenge for the reconstruction of the tree of life, an alternative view is that, on the contrary, it provides additional valuable information for its reconstruction.
First, for the recipient organism, HGT is a DNA mutation like others, and as such, it can be modeled and used in tree reconstruction and rooting. [ 17 ]
Second, the recipient of a gene acquired by HGT must have lived at the same time as, or later than, the donor. [ 18 ] HGT therefore carries information about the timing of diversification. [ 19 ] This is all the more remarkable since the principal usual source for dating in the living world, the fossil record, is absent precisely where HGT is abundant: in the microbial world.
Third, it provides information about the extinct biodiversity, because transfers are likely from extinct species. [ 20 ] | https://en.wikipedia.org/wiki/Horizontal_gene_transfer_in_evolution |
In computer software , horizontal market software is a type of application software that is useful in a wide range of industries. This is the opposite of vertical market software , which has a scope of usefulness limited to a few industries. Horizontal market software is also known as " productivity software ." [ 1 ]
Examples of horizontal market software include word processors , web browsers , spreadsheets, calendars, project management applications, and generic bookkeeping applications. Since horizontal market software is developed to be used by a broad audience, it generally lacks any market-specific customizations. [ 2 ] [ 3 ] | https://en.wikipedia.org/wiki/Horizontal_market_software |
In astronomy , geography , and related sciences and contexts, a direction or plane passing by a given point is said to be vertical if it contains the local gravity direction at that point. [ 1 ]
Conversely, a direction, plane, or surface is said to be horizontal (or leveled ) if it is everywhere perpendicular to the vertical direction.
In general, something that is vertical can be drawn from top to bottom (or bottom to top), such as the y-axis in the Cartesian coordinate system .
The word horizontal is derived from the Latin horizon , which derives from the Greek ὁρῐ́ζων , meaning 'separating' or 'marking a boundary'. [ 2 ] The word vertical is derived from the late Latin verticalis , which is from the same root as vertex , meaning 'highest point' or more literally the 'turning point' such as in a whirlpool. [ 3 ]
Girard Desargues defined the vertical to be perpendicular to the horizon in his 1636 book Perspective .
In physics, engineering and construction, the direction designated as vertical is usually that along which a plumb-bob hangs. Alternatively, a spirit level that exploits the buoyancy of an air bubble and its tendency to go vertically upwards may be used to test for horizontality. A water level device may also be used to establish horizontality.
Modern rotary laser levels that can level themselves automatically are robust sophisticated instruments and work on the same fundamental principle. [ 4 ] [ 5 ]
When the curvature of the Earth is taken into account, the concepts of vertical and horizontal take on yet another meaning. On the surface of a smoothly spherical, homogenous, non-rotating planet, the plumb bob picks out as vertical the radial direction. Strictly speaking, it is now no longer possible for vertical walls to be parallel: all verticals intersect. This fact has real practical applications in construction and civil engineering, e.g., the tops of the towers of a suspension bridge are further apart than at the bottom. [ 6 ]
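The suspension-bridge example can be quantified: on a spherical Earth, plumb-line verticals are radial, so the separation of two vertical towers scales with distance from the Earth's centre. A minimal sketch, where the mean Earth radius and the bridge figures are illustrative round numbers:

```python
EARTH_RADIUS_M = 6_371_000.0  # mean Earth radius, an approximation

def top_separation(base_separation_m, tower_height_m,
                   radius_m=EARTH_RADIUS_M):
    """Separation at the tops of two plumb-vertical towers, given the
    separation at their bases, on a spherical Earth.  Arc length between
    radial lines grows linearly with distance from the centre."""
    return base_separation_m * (radius_m + tower_height_m) / radius_m

# Towers 1,410 m apart at the base and 155 m tall end up roughly
# 3.4 cm further apart at the top.
extra_m = top_separation(1410.0, 155.0) - 1410.0
```

The effect is tiny compared with construction tolerances, which is why it only matters for very long, very tall structures.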
Also, horizontal planes can intersect when they are tangent planes to separated points on the surface of the Earth. In particular, a plane tangent to a point on the equator intersects the plane tangent to the North Pole at a right angle . (See diagram).
Furthermore, the equatorial plane is parallel to the tangent plane at the North Pole and as such has claim to be a horizontal plane. But it is, at the same time, a vertical plane for points on the equator. In this sense, a plane can, arguably, be both horizontal and vertical: horizontal at one place , and vertical at another .
For a spinning earth, the plumb line deviates from the radial direction as a function of latitude. [ 7 ] Only on the equator and at the North and South Poles does the plumb line align with the local radius. The situation is actually even more complicated because Earth is not a homogeneous smooth sphere. It is a non-homogeneous, non-spherical, knobby planet in motion, and the vertical not only need not lie along a radial, it may even be curved and vary with time. On a smaller scale, a mountain to one side may deflect the plumb bob away from the true zenith . [ 8 ]
On a larger scale the gravitational field of the Earth, which is at least approximately radial near the Earth, is not radial when it is affected by the Moon at higher altitudes. [ 9 ] [ 10 ]
Neglecting the curvature of the earth, horizontal and vertical motions of a projectile moving under gravity are independent of each other. [ 11 ] Vertical displacement of a projectile is not affected by the horizontal component of the launch velocity, and, conversely, the horizontal displacement is unaffected by the vertical component. The notion dates at least as far back as Galileo. [ 12 ]
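This independence can be checked numerically: integrating the flat-Earth equations of motion for two projectiles that differ only in horizontal launch speed yields identical vertical motion and flight time. A sketch, where the function name and the simple Euler integrator are choices made here:

```python
def simulate_flight(vx0, vy0, dt=1e-3, g=9.81):
    """Integrate projectile motion from the origin (no air resistance,
    flat Earth) and return (flight_time, horizontal_range) at the moment
    the projectile falls back to launch height."""
    x = y = t = 0.0
    vy = vy0
    while True:
        vy -= g * dt      # gravity acts only on the vertical component
        x += vx0 * dt     # horizontal velocity is unchanged throughout
        y += vy * dt
        t += dt
        if y <= 0.0:
            return t, x

t_slow, range_slow = simulate_flight(5.0, 20.0)
t_fast, range_fast = simulate_flight(50.0, 20.0)
# Same vertical launch speed -> same flight time, whatever vx0 is;
# the range simply scales with the horizontal speed.
```

The flight time in both runs matches the analytic value 2·vy0/g ≈ 4.08 s, while the ranges differ by exactly the ratio of the horizontal speeds.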
When the curvature of the Earth is taken into account, the independence of the two motions does not hold. For example, even a projectile fired in a horizontal direction (i.e., with a zero vertical component) may leave the surface of the spherical Earth and indeed escape altogether. [ 13 ]
In the context of a two-dimensional orthogonal Cartesian coordinate system on a Euclidean plane, to say that a line is horizontal or vertical, an initial designation has to be made. One can start off by designating the vertical direction, usually labelled the Y direction. [ 14 ] The horizontal direction, usually labelled the X direction, [ 15 ] is then automatically determined. Or, one can do it the other way around, i.e., nominate the x -axis, in which case the y -axis is then automatically determined. There is no special reason to choose the horizontal over the vertical as the initial designation: the two directions are on par in this respect.
The following hold in the two-dimensional case:
Not all of these elementary geometric facts are true in the 3-D context.
In the three-dimensional case, the situation is more complicated as now one has horizontal and vertical planes in addition to horizontal and vertical lines. Consider a point P and designate a direction through P as vertical. A plane which contains P and is normal to the designated direction is the horizontal plane at P. Any plane going through P, normal to the horizontal plane is a vertical plane at P. Through any point P, there is one and only one horizontal plane but a multiplicity of vertical planes. This is a new feature that emerges in three dimensions. The symmetry that exists in the two-dimensional case no longer holds.
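The asymmetry described above, exactly one horizontal plane through a point but infinitely many vertical planes, is easy to express by testing a plane's normal vector against the designated vertical direction. A sketch, where the vector representation and tolerance are assumptions made for illustration:

```python
def cross(a, b):
    """Cross product of two 3-vectors."""
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def dot(a, b):
    """Dot product of two 3-vectors."""
    return sum(x * y for x, y in zip(a, b))

VERTICAL = (0.0, 0.0, 1.0)  # designated vertical direction at the point P

def is_horizontal_plane(normal, tol=1e-9):
    """A plane is horizontal iff its normal is parallel to the vertical;
    only one such plane passes through a given point."""
    return all(abs(c) < tol for c in cross(normal, VERTICAL))

def is_vertical_plane(normal, tol=1e-9):
    """A plane is vertical iff its normal is perpendicular to the
    vertical; every horizontal normal direction gives one, hence the
    multiplicity of vertical planes through a point."""
    return abs(dot(normal, VERTICAL)) < tol
```

Any normal lying in the horizontal plane, (1, 0, 0), (0, 1, 0), and every direction in between, defines a distinct vertical plane through the same point, while only normals parallel to (0, 0, 1) give a horizontal one.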
In the two-dimensional case, as mentioned already, the usual designation of the vertical coincides with the y-axis in co-ordinate geometry. This convention can cause confusion in the classroom. For the teacher, writing perhaps on a white board, the y -axis really is vertical in the sense of the plumbline verticality, but for the student the axis may well lie on a horizontal table.
Although the word horizontal is commonly used in daily life and language (see below), it is subject to many misconceptions.
In general or in practice, something that is horizontal can be drawn from left to right (or right to left), such as the x-axis in the Cartesian coordinate system . [ citation needed ]
The concept of a horizontal plane is thus anything but simple, although, in practice, most of these effects and variations are rather small: they are measurable and can be predicted with great accuracy, but they may not greatly affect our daily life.
This dichotomy between the apparent simplicity of a concept and an actual complexity of defining (and measuring) it in scientific terms arises from the fact that the typical linear scales and dimensions of relevance in daily life are 3 orders of magnitude (or more) smaller than the size of the Earth. Hence, the world appears to be flat locally, and horizontal planes in nearby locations appear to be parallel. Such statements are nevertheless approximations; whether they are acceptable in any particular context or application depends on the applicable requirements, in particular in terms of accuracy.
In graphical contexts, such as drawing and drafting and Co-ordinate geometry on rectangular paper, it is very common to associate one of the dimensions of the paper with a horizontal, even though the entire sheet of paper is standing on a flat horizontal (or slanted) table. In this case, the horizontal direction is typically from the left side of the paper to the right side. This is purely conventional (although it is somehow 'natural' when drawing a natural scene as it is seen in reality), and may lead to misunderstandings or misconceptions, especially in an educational context. | https://en.wikipedia.org/wiki/Horizontal_plane |
In genetics , the term horizontal resistance was first used by J. E. Vanderplank [ 1 ] to describe many-gene resistance, which is sometimes also called generalized resistance . [ 2 ] This contrasts with the term vertical resistance which was used to describe single-gene resistance. Raoul A. Robinson [ 3 ] further refined the definition of horizontal resistance. Unlike vertical resistance and parasitic ability, horizontal resistance and horizontal parasitic ability are entirely independent of each other in genetic terms.
In the first round of breeding for horizontal resistance, plants are exposed to pathogens and selected for partial resistance. Those with no resistance die, and plants unaffected by the pathogen have vertical resistance and are removed. The remaining plants have partial resistance and their seed is stored and bred back up to sufficient volume for further testing. The hope is that in these remaining plants are multiple types of partial-resistance genes, and by crossbreeding this pool back on itself, multiple partial resistance genes will come together and provide resistance to a larger variety of pathogens.
Successive rounds of breeding for horizontal resistance proceed in a more traditional fashion, selecting plants for disease resistance as measured by yield. These plants are exposed to native regional pathogens, and given minimal assistance in fighting them. [ 4 ]
| https://en.wikipedia.org/wiki/Horizontal_resistance
Horizontal scan rate , or horizontal frequency, usually expressed in kilohertz , is the number of times per second that a raster-scan video system transmits or displays a complete horizontal line, as opposed to vertical scan rate , the number of times per second that an entire screenful of image data is transmitted or displayed.
Within a cathode-ray tube (CRT), the horizontal scan rate is how many times per second the electron beam moves from the left side of the display to the right and back. The number of horizontal lines scanned per frame can be roughly derived by dividing this rate by the vertical scan rate.
The horizontal scan frequency of a CRT includes lines that occur during the vertical blanking interval, so the horizontal scan rate does not directly correlate to visible display lines unless the quantity of unseen lines is also known.
The horizontal scan rate is one of the primary figures determining the resolution capability of a CRT, since it is determined by how quickly the electromagnetic deflection system can reverse the current flowing in the deflection coil in order to move the electron beam from one side of the display to the other. Reversing the current more quickly requires higher voltages , which require more expensive electrical components.
In analog television systems, the horizontal frequency is between 15.625 kHz and 15.750 kHz. [ 1 ]
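The relationship between vertical rate and line count is a simple product, and the analog figures above fall out directly. A sketch, with names chosen here; the PAL and NTSC line/field figures are the standard ones:

```python
def horizontal_scan_rate_hz(vertical_rate_hz, total_lines_per_frame):
    """Horizontal scan rate = frame rate x total lines per frame,
    counting the blanked lines that are never displayed."""
    return vertical_rate_hz * total_lines_per_frame

# PAL: 625 total lines at 25 frames/s
pal_hz = horizontal_scan_rate_hz(25.0, 625)               # 15625.0
# NTSC (color): 525 total lines at 30/1.001 frames/s
ntsc_hz = horizontal_scan_rate_hz(30000.0 / 1001.0, 525)  # ~15734.27
```

These two values correspond to the 15.625 kHz and roughly 15.734 kHz rates inside the range quoted above for analog television.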
While other display technologies such as liquid-crystal displays do not have the specific electrical characteristics that constrain horizontal scan rates on CRTs, there is still a horizontal scan rate characteristic in the signals that drive these displays.
| https://en.wikipedia.org/wiki/Horizontal_scan_rate |
Hormesis is a two-phased dose-response relationship to an environmental agent whereby low-dose amounts have a beneficial effect and high-dose amounts are either inhibitory to function or toxic. [ 1 ] [ 2 ] [ 3 ] [ 4 ] Within the hormetic zone , the biological response to low-dose amounts of some stressors is generally favorable. An example is the breathing of oxygen , which is required in low amounts (in air) via respiration in living animals, but can be toxic in high amounts, even in a managed clinical setting. [ 5 ]
In toxicology , hormesis is a dose-response phenomenon to xenobiotics or other stressors.
In physiology and nutrition, hormesis has regions extending from low-dose deficiencies to homeostasis, and potential toxicity at high levels. [ 6 ] Physiological concentrations of an agent above or below homeostasis may adversely affect an organism, where the hormetic zone is a region of homeostasis of balanced nutrition. [ 7 ] In pharmacology , the hormetic zone is similar to the therapeutic window .
In the context of toxicology, the hormesis model of dose response is vigorously debated. [ 8 ] The biochemical mechanisms by which hormesis works (particularly in applied cases pertaining to behavior and toxins) remain under early laboratory research and are not well understood. [ 6 ]
The term "hormesis" derives from Greek hórmēsis for "rapid motion, eagerness", itself from ancient Greek hormáein to excite. [ 4 ] The same Greek root provides the word hormone . The term "hormetics" is used for the study of hormesis. [ 6 ] The word hormesis was first reported in English in 1943. [ 4 ]
A form of hormesis famous in antiquity was Mithridatism , the practice whereby Mithridates VI of Pontus supposedly made himself immune to a variety of toxins by regular exposure to small doses. Mithridate and theriac , polypharmaceutical electuaries claiming descent from his formula and initially including flesh from poisonous animals, were consumed for centuries by emperors, kings, and queens as protection against poison and ill health. In the Renaissance , the Swiss doctor Paracelsus said, " All things are poison, and nothing is without poison; the dosage alone makes it so a thing is not a poison. "
German pharmacologist Hugo Schulz first described such a phenomenon in 1888 following his own observations that the growth of yeast could be stimulated by small doses of poisons. This was coupled with the work of German physician Rudolph Arndt , who studied animals given low doses of drugs, eventually giving rise to the Arndt–Schulz rule . [ 8 ] Arndt's advocacy of homeopathy contributed to the rule's diminished credibility in the 1920s and 1930s. [ 8 ] The term "hormesis" was coined and used for the first time in a scientific paper by Chester M. Southam and J. Ehrlich in 1943 in the journal Phytopathology , volume 33, pp. 517–541.
In 2004, Edward Calabrese evaluated the concept of hormesis. [ 9 ] [ 10 ] Over 600 substances show a U-shaped dose–response relationship ; Calabrese and Baldwin wrote: "One percent (195 out of 20,285) of the published articles contained 668 dose-response relationships that met the entry criteria [of a U-shaped response indicative of hormesis]". [ 11 ]
Carbon monoxide is produced in small quantities across phylogenetic kingdoms, where it has essential roles as a neurotransmitter (subcategorized as a gasotransmitter ). The majority of endogenous carbon monoxide is produced by heme oxygenase ; the loss of heme oxygenase and subsequent loss of carbon monoxide signaling has catastrophic implications for an organism. [ 12 ] In addition to physiological roles, small amounts of carbon monoxide can be inhaled or administered in the form of carbon monoxide-releasing molecules as a therapeutic agent. [ 13 ]
Regarding the hormetic curve graph:
Many organisms maintain a hormesis relationship with oxygen, which follows a hormetic curve similar to that of carbon monoxide.
Physical exercise intensity may exhibit a hormetic curve. Individuals with low levels of physical activity are at risk for some diseases; however, individuals engaged in moderate, regular exercise may experience less disease risk. [ 15 ]
The possible effect of small amounts of oxidative stress is under laboratory research. [ 16 ] Mitochondria are sometimes described as "cellular power plants" because they generate most of the cell's supply of adenosine triphosphate (ATP), a source of chemical energy. Reactive oxygen species (ROS) have been regarded as unwanted byproducts of oxidative phosphorylation in mitochondria by proponents of the free-radical theory of aging promoted by Denham Harman . The free-radical theory states that compounds inactivating ROS would reduce oxidative stress and thereby increase lifespan, although support for this theory comes mainly from basic research . [ 17 ] However, in over 19 clinical trials , "nutritional and genetic interventions to boost antioxidants have generally failed to increase life span." [ 18 ]
Whether this concept applies to humans remains to be shown, although a 2007 epidemiological study supports the possibility of mitohormesis, indicating that supplementation with beta-carotene , vitamin A or vitamin E may increase disease prevalence in humans. [ 19 ] More recent studies have reported that rapamycin exhibits hormesis: low doses can enhance cellular longevity by partially inhibiting mTOR, whereas higher doses are toxic due to complete inhibition. This partial inhibition of mTOR by low-dose rapamycin modulates mTOR–mitochondria cross-talk , demonstrating mitohormesis, and consequently reduces oxidative damage , metabolic dysregulation, and mitochondrial dysfunction , thus slowing cellular aging . [ 2 ] [ 3 ]
Alcohol is believed to be hormetic in preventing heart disease and stroke, [ 20 ] although the benefits of light drinking may have been exaggerated. [ 21 ] [ 22 ] The gut microbiome of a typical healthy individual naturally ferments small amounts of ethanol, and in rare cases dysbiosis leads to auto-brewery syndrome ; it therefore remains unclear whether the benefits of alcohol derive from the behavior of consuming alcoholic drinks or from metabolites of commensal microbiota acting as a homeostatic factor in normal physiology. [ 23 ] [ 24 ]
In 2012, researchers at UCLA found that tiny amounts (1 mM, or 0.005%) of ethanol doubled the lifespan of Caenorhabditis elegans , a roundworm frequently used in biological studies, that were starved of other nutrients. Higher doses of 0.4% provided no longevity benefit. [ 25 ] However, worms exposed to 0.005% did not develop normally (their development was arrested). The authors argue that the worms were using ethanol as an alternative energy source in the absence of other nutrition, or had initiated a stress response. They did not test the effect of ethanol on worms fed a normal diet.
In 2010, a paper in the journal Environmental Toxicology & Chemistry showed that low doses of methylmercury , a potent neurotoxic pollutant, improved the hatching rate of mallard eggs. [ 26 ] The author of the study, Gary Heinz, who led the study for the U.S. Geological Survey at the Patuxent Wildlife Research Center in Beltsville , stated that other explanations are possible. For instance, the flock he studied might have harbored some low, subclinical infection and that mercury, well known to be antimicrobial, might have killed the infection that otherwise hurt reproduction in the untreated birds. [ 26 ]
Hormesis has been observed in a number of cases in humans and animals exposed to chronic low doses of ionizing radiation. A-bomb survivors who received high doses exhibited shortened lifespan and increased cancer mortality, but those who received low doses had lower cancer mortality than the Japanese average. [ 27 ] [ 28 ]
In Taiwan, recycled radiocontaminated steel was inadvertently used in the construction of over 100 apartment buildings, causing the long-term exposure of 10,000 people. The average dose rate was 50 mSv/year and a subset of the population (1,000 people) received a total dose over 4,000 mSv over ten years. In the widely used linear no-threshold model used by regulatory bodies, the expected cancer deaths in this population would have been 302 with 70 caused by the extra ionizing radiation, with the remainder caused by natural background radiation. The observed cancer rate, though, was quite low at 7 cancer deaths when 232 would be predicted by the LNT model had they not been exposed to the radiation from the building materials. Ionizing radiation hormesis appears to be at work. [ 29 ]
No experiment can be performed in perfect isolation. Thick lead shielding around a chemical-dose experiment, built to rule out the effects of ionizing radiation, can be rigorously controlled for in the laboratory, but certainly not in the field, and the same applies to ionizing radiation studies. Ionizing radiation is released when an unstable nucleus decays, producing new substances and energy, often in the form of an electromagnetic wave . The resulting materials are then free to interact with any elements in the environment, and the released energy can also catalyze further ionizing interactions. [ 30 ]
The resulting confusion in the low-dose exposure field (radiation and chemical) arises from a lack of consideration of this concept, as described by Mothersill and Seymour. [ 31 ]
Veterans of the Gulf War (1991) who suffered from the persistent symptoms of Gulf War Illness (GWI) were likely exposed to stresses from toxic chemicals and/or radiation. [ 32 ] The DNA damaging ( genotoxic ) effects of such exposures can be, at least partially, overcome by the DNA nucleotide excision repair (NER) pathway. Lymphocytes from GWI veterans exhibited a significantly elevated level of NER repair. [ 32 ] It was suggested that this increased NER capability in exposed veterans was likely a hormetic response, that is, an induced protective response resulting from battlefield exposure. [ 32 ]
One of the areas where the concept of hormesis has been explored extensively with respect to its applicability is aging. [ 33 ] [ 34 ] Since the basic survival capacity of any biological system depends on its homeostatic ability, biogerontologists proposed that exposing cells and organisms to mild stress should result in an adaptive or hormetic response with various biological benefits. Preliminary evidence shows that repetitive mild stress exposure may have anti-aging effects in laboratory models. [ 35 ] [ 36 ] Some mild stresses used in such studies on the application of hormesis in aging research and interventions are heat shock , irradiation, prooxidants , hypergravity , and food restriction. [ 35 ] [ 36 ] [ 37 ] The example of heat shock refers to the proteostasis network: adding a small amount of stress to the cell can activate signaling pathways and unfolded-protein-response pathways that upregulate chaperones, downregulate translation, and trigger other processes that allow the cell to respond to stress. In this way, activation of these pathways prepares the cell for other stressors, since the pathways are already active. However, too much or prolonged stress can damage the cell and occasionally lead to cell death. [ 38 ] Compounds that may modulate stress responses in cells in this way have been termed "hormetins". [ 35 ]
Hormesis implies that low doses of dangerous substances may be beneficial. Concerns exist that the concept has been leveraged by lobbyists to weaken environmental regulation of some well-known toxic substances in the US. [ 39 ]
The hypothesis of hormesis has generated the most controversy when applied to ionizing radiation . This hypothesis is called radiation hormesis. For policy-making purposes, the commonly accepted model of dose response in radiobiology is the linear no-threshold model (LNT), which assumes a strictly linear dependence between the risk of radiation-induced adverse health effects and radiation dose, implying that there is no safe dose of radiation for humans.
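The linear no-threshold assumption can be sketched in a few lines: predicted excess risk is simply proportional to dose, with no dose below which the predicted risk is zero. The roughly 5%-per-sievert coefficient used below is an illustrative, ICRP-style nominal value assumed for the sketch, not a figure taken from this article.

```python
def lnt_excess_cancer_deaths(collective_dose_person_sv: float,
                             risk_per_person_sv: float = 0.05) -> float:
    """Linear no-threshold (LNT) sketch: expected excess cancer deaths
    scale linearly with collective dose, with no safe threshold.
    The default 5%/Sv coefficient is an assumed, illustrative value."""
    return collective_dose_person_sv * risk_per_person_sv

# e.g. 1000 people receiving 0.1 Sv each = 100 person-Sv of collective dose
expected = lnt_excess_cancer_deaths(100)  # 5.0 expected excess deaths
```

A hormetic model, by contrast, would replace the straight line with a J-shaped curve whose low-dose region dips below the zero-dose baseline; the controversy described above is over which shape better fits low-dose data.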
Nonetheless, many countries, including the Czech Republic , Germany , Austria , Poland , and the United States , have radon therapy centers whose primary operating principle is the assumption of radiation hormesis, that is, a beneficial effect of small doses of radiation on human health. At the same time, countries such as Germany and Austria have imposed very strict antinuclear regulations, which has been described as a radiophobic inconsistency.
The United States National Research Council (part of the National Academy of Sciences ), [ 40 ] the National Council on Radiation Protection and Measurements (a body commissioned by the United States Congress ) [ 41 ] and the United Nations Scientific Committee on the Effects of Ionizing Radiation all agree that radiation hormesis is not clearly shown, nor clearly the rule for radiation doses.
The United States–based National Council on Radiation Protection and Measurements stated in 2001 that evidence for radiation hormesis is insufficient and that radiation protection authorities should continue to apply the LNT model for purposes of risk estimation. [ 41 ]
A 2005 report commissioned by the French National Academy concluded that evidence for hormesis occurring at low doses is sufficient and LNT should be reconsidered as the methodology used to estimate risks from low-level sources of radiation, such as deep geological repositories for nuclear waste . [ 42 ]
Hormesis remains largely unknown to the public, and considering the exposure risks of small doses of a possible toxin would require a change in policy. [ 43 ] | https://en.wikipedia.org/wiki/Hormesis |
Hormonal imprinting (HI) is a phenomenon which takes place at the first encounter between a hormone and its developing receptor during critical periods of life (in unicellular organisms, during the whole life) and determines the later signal transduction capacity of the cell. The most important period in mammals is the perinatal one; however, the system can also be imprinted at weaning , at puberty and, in the case of continuously dividing cells, during the whole life. Faulty imprinting is caused by drugs , environmental pollutants and other hormone-like molecules present in excess during the critical periods, with lifelong receptorial, morphological, biochemical and behavioral consequences. HI is transmitted through hundreds of progeny generations in unicellular organisms and, as has been shown, through a few generations in mammals as well. | https://en.wikipedia.org/wiki/Hormonal_imprinting |
Hormonal sentience , first described by Robert A. Freitas Jr. , describes the information-processing rate in plants , which is based mostly on hormones rather than on neurons, as in all major animals except sponges. Plants can to some degree communicate with each other, and there are even examples of one-way communication with animals.
Acacia trees produce tannin to defend themselves when they are grazed upon by animals. The airborne scent of the tannin is picked up by other acacia trees, which then start to produce tannin themselves as a protection from the nearby animals.
When attacked by caterpillars , some plants can release chemical signals to attract parasitic wasps that attack the caterpillars. [ 1 ]
A similar phenomenon can be found not only between plants and animals, but also between fungi and animals. There exists some sort of communication between a fungus garden and workers of the leaf-cutting ant Atta sexdens rubropilosa . If the garden is fed with plants that are poisonous for the fungus, it signals this to the ants, which then will avoid fertilizing the fungus garden with any more of the poisonous plant.
The Venus flytrap , during a 1- to 20-second sensitivity interval, counts two stimuli before snapping shut on its insect prey, a processing peak of 1 bit/s. Its mass is 10–100 grams, so the flytrap's SQ (Sentience Quotient) is about +1. Plants generally take hours to respond to stimuli, though, so vegetative SQs tend to cluster around −2. | https://en.wikipedia.org/wiki/Hormonal_sentience |
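The SQ figures above follow Freitas's definition of the Sentience Quotient as the base-10 logarithm of information-processing rate over mass; the sketch below assumes that formulation, SQ = log10(I/M), with I in bits/s and M in kg.

```python
import math

def sentience_quotient(bits_per_sec: float, mass_kg: float) -> float:
    """Freitas's Sentience Quotient: SQ = log10(I / M), where I is the
    information-processing rate in bits/s and M is the mass in kg."""
    return math.log10(bits_per_sec / mass_kg)

# A flytrap processing ~1 bit/s at 100 g gives SQ = log10(1 / 0.1) = +1
flytrap_sq = sentience_quotient(1.0, 0.1)
```

On this logarithmic scale, a response taking hours instead of a second lowers I by three to four orders of magnitude, which is why typical vegetative SQs land near −2.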
A hormone (from the Greek participle ὁρμῶν , "setting in motion") is a class of signaling molecules in multicellular organisms that are sent to distant organs or tissues by complex biological processes to regulate physiology and behavior . [ 1 ] Hormones are required for the normal development of animals , plants and fungi . Due to the broad definition of a hormone (as a signaling molecule that exerts its effects far from its site of production), numerous kinds of molecules can be classified as hormones. Among the substances that can be considered hormones are eicosanoids (e.g. prostaglandins and thromboxanes ), steroids (e.g. oestrogen and brassinosteroid ), amino acid derivatives (e.g. epinephrine and auxin ), proteins or peptides (e.g. insulin and CLE peptides ), and gases (e.g. ethylene and nitric oxide ).
Hormones are used to communicate between organs and tissues . In vertebrates , hormones are responsible for regulating a wide range of processes including both physiological processes and behavioral activities such as digestion , metabolism , respiration , sensory perception , sleep , excretion , lactation , stress induction, growth and development , movement , reproduction , and mood manipulation. [ 2 ] [ 3 ] [ 4 ] In plants, hormones modulate almost all aspects of development, from germination to senescence . [ 5 ]
Hormones affect distant cells by binding to specific receptor proteins in the target cell, resulting in a change in cell function. When a hormone binds to its receptor, it activates a signal transduction pathway that typically activates gene transcription , resulting in increased expression of target proteins . Hormones can also act through non-genomic pathways that synergize with genomic effects. [ 6 ] Water-soluble hormones (such as peptides and amines) generally act on the surface of target cells via second messengers . Lipid-soluble hormones (such as steroids ) generally pass through the plasma membranes of target cells (both cytoplasmic and nuclear ) to act within their nuclei . Brassinosteroids, a type of polyhydroxysteroid, are a sixth class of plant hormones and may be useful as anticancer drugs for endocrine-responsive tumors, causing apoptosis and limiting growth. Despite being lipid-soluble, they nevertheless attach to their receptor at the cell surface. [ 7 ]
In vertebrates, endocrine glands are specialized organs that secrete hormones into the endocrine signaling system . Hormone secretion occurs in response to specific biochemical signals and is often subject to negative feedback regulation . For instance, high blood sugar (serum glucose concentration) promotes insulin synthesis. Insulin then acts to reduce glucose levels and maintain homeostasis , leading to reduced insulin levels. Upon secretion, water-soluble hormones are readily transported through the circulatory system. Lipid-soluble hormones must bind to carrier plasma glycoproteins (e.g., thyroxine-binding globulin (TBG)) to form ligand -protein complexes. Some hormones, such as insulin and growth hormones, can be released into the bloodstream already fully active. Other hormones, called prohormones , must be activated in certain cells through a series of steps that are usually tightly controlled. [ 8 ] The endocrine system secretes hormones directly into the bloodstream , typically via fenestrated capillaries , whereas the exocrine system secretes its hormones indirectly using ducts . Hormones with paracrine function diffuse through the interstitial spaces to nearby target tissue.
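The insulin example can be illustrated with a toy negative-feedback loop: glucose above a set point drives insulin secretion, insulin drives glucose uptake, and the loop pulls glucose back toward the set point. All numeric parameters below are arbitrary illustrative values, not physiological constants.

```python
def simulate_glucose_feedback(steps: int = 100, glucose: float = 150.0,
                              set_point: float = 90.0) -> float:
    """Toy negative-feedback sketch of glucose/insulin homeostasis.
    All coefficients are illustrative, chosen only to make the loop
    converge; they are not physiological values."""
    for _ in range(steps):
        insulin = 0.1 * max(glucose - set_point, 0.0)  # secretion rises with excess glucose
        glucose -= insulin * 5.0                       # insulin promotes uptake, lowering glucose
        glucose += 0.5                                 # constant basal glucose input
    return glucose

final = simulate_glucose_feedback()  # settles just above the set point
```

The key property is the sign of the loop: the hormone's effect opposes the signal that caused its secretion, so a disturbance (here, glucose starting at 150) decays back toward equilibrium instead of growing.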
Plants lack specialized organs for the secretion of hormones, although there is spatial distribution of hormone production. For example, the hormone auxin is produced mainly at the tips of young leaves and in the shoot apical meristem . The lack of specialised glands means that the main site of hormone production can change throughout the life of a plant, and the site of production is dependent on the plant's age and environment. [ 9 ]
Hormone producing cells are found in the endocrine glands , such as the thyroid gland , ovaries , and testes . [ 10 ] Hormonal signaling involves the following steps: [ 11 ]
Exocytosis and other methods of membrane transport are used to secrete hormones when the endocrine glands are signaled. The hierarchical model is an oversimplification of the hormonal signaling process. Cellular recipients of a particular hormonal signal may be one of several cell types that reside within a number of different tissues, as is the case for insulin , which triggers a diverse range of systemic physiological effects. Different tissue types may also respond differently to the same hormonal signal. [ citation needed ]
Arnold Adolph Berthold was a German physiologist and zoologist who, in 1849, had a question about the function of the testes . He noticed that castrated roosters did not have the same sexual behaviors as roosters with their testes intact, and decided to run an experiment on male roosters to examine this phenomenon. He kept one group of roosters with their testes intact, and saw that they had normal-sized wattles and combs (secondary sexual organs ), a normal crow, and normal sexual and aggressive behaviors. A second group had their testes surgically removed: their secondary sexual organs were decreased in size, they had a weak crow, showed no sexual attraction towards females, and were not aggressive. He realized that this organ was essential for these behaviors, but he did not know how. To test this further, he removed one testis and placed it in the abdominal cavity. These roosters behaved normally and had normal physical anatomy , showing that the location of the testes does not matter. He then wanted to see whether a genetic factor in the testes was responsible for these functions. He transplanted a testis from another rooster into a rooster with one testis removed, and saw that these birds, too, had normal behavior and physical anatomy. Berthold concluded that neither the location nor the genetic origin of the testes matters in relation to sexual organs and behaviors, but that some chemical secreted by the testes causes this phenomenon. This factor was later identified as the hormone testosterone . [ 12 ] [ 13 ]
Although known primarily for his work on the Theory of Evolution , Charles Darwin was also keenly interested in plants. Through the 1870s, he and his son Francis studied the movement of plants towards light. They were able to show that light is perceived at the tip of a young stem (the coleoptile ), whereas the bending occurs lower down the stem. They proposed that a 'transmissible substance' communicated the direction of light from the tip down to the stem. The idea of a 'transmissible substance' was initially dismissed by other plant biologists, but their work later led to the discovery of the first plant hormone. [ 14 ] In the 1920s Dutch scientist Frits Warmolt Went and Russian scientist Nikolai Cholodny (working independently of each other) conclusively showed that asymmetric accumulation of a growth hormone was responsible for this bending. In 1933 this hormone was finally isolated by Kögl, Haagen-Smit and Erxleben and given the name ' auxin '. [ 14 ] [ 15 ] [ 16 ]
British physician George Oliver and physiologist Edward Albert Schäfer , professor at University College London, collaborated on the physiological effects of adrenal extracts. They first published their findings in two reports in 1894; a full publication followed in 1895. [ 17 ] [ 18 ] Although the distinction is frequently, and falsely, given to secretin , found in 1902 by Bayliss and Starling, the adrenaline in Oliver and Schäfer's adrenal extract, the substance causing the physiological changes, was the first hormone to be discovered. The term hormone itself was later coined by Starling. [ 19 ]
William Bayliss and Ernest Starling , a physiologist and biologist , respectively, wanted to see if the nervous system had an impact on the digestive system . They knew that the pancreas was involved in the secretion of digestive fluids after the passage of food from the stomach to the intestines , which they believed to be due to the nervous system. They cut the nerves to the pancreas in an animal model and discovered that it was not nerve impulses that controlled secretion from the pancreas. It was determined that a factor secreted from the intestines into the bloodstream was stimulating the pancreas to secrete digestive fluids. This was named secretin : a hormone.
Hormonal effects depend on where hormones are released, as they can be released in different manners. [ 20 ] Not all hormones are released from a cell into the blood before binding to a receptor on a target; some act locally. The major types of hormone signaling are:
As hormones are defined functionally, not structurally, they may have diverse chemical structures. Hormones occur in multicellular organisms ( plants , animals , fungi , brown algae , and red algae ). These compounds also occur in unicellular organisms and may act as signaling molecules; however, there is no agreement that such molecules can be called hormones. [ 21 ] [ 22 ]
Peptides
Derivatives
Compared with vertebrates, insects and crustaceans possess a number of structurally unusual hormones such as the juvenile hormone , a sesquiterpenoid . [ 24 ]
Examples include abscisic acid , auxin , cytokinin , ethylene , and gibberellin . [ 25 ]
Most hormones initiate a cellular response by initially binding to either cell surface receptors or intracellular receptors . A cell may have several different receptors that recognize the same hormone but activate different signal transduction pathways, or a cell may have several different receptors that recognize different hormones and activate the same biochemical pathway. [ 26 ]
Receptors for most peptide as well as many eicosanoid hormones are embedded in the cell membrane as cell surface receptors, and the majority of these belong to the G protein-coupled receptor (GPCR) class of seven alpha helix transmembrane proteins. The interaction of hormone and receptor typically triggers a cascade of secondary effects within the cytoplasm of the cell, described as signal transduction , often involving phosphorylation or dephosphorylation of various other cytoplasmic proteins, changes in ion channel permeability, or increased concentrations of intracellular molecules that may act as secondary messengers (e.g., cyclic AMP ). Some protein hormones also interact with intracellular receptors located in the cytoplasm or nucleus by an intracrine mechanism. [ 27 ] [ 28 ]
For steroid or thyroid hormones, their receptors are located inside the cell within the cytoplasm of the target cell. These receptors belong to the nuclear receptor family of ligand-activated transcription factors . To bind their receptors, these hormones must first cross the cell membrane. They can do so because they are lipid-soluble. The combined hormone-receptor complex then moves across the nuclear membrane into the nucleus of the cell, where it binds to specific DNA sequences , regulating the expression of certain genes , and thereby increasing the levels of the proteins encoded by these genes. [ 29 ] However, it has been shown that not all steroid receptors are located inside the cell. Some are associated with the plasma membrane . [ 30 ]
Hormones have the following effects on the body: [ 31 ]
A hormone may also regulate the production and release of other hormones. Hormone signals control the internal environment of the body through homeostasis .
The rate of hormone biosynthesis and secretion is often regulated by a homeostatic negative feedback control mechanism. Such a mechanism depends on factors that influence the metabolism and excretion of hormones. Thus, higher hormone concentration alone cannot trigger the negative feedback mechanism. Negative feedback must be triggered by overproduction of an "effect" of the hormone. [ 32 ] [ 33 ]
Hormone secretion can be stimulated and inhibited by:
One special group of hormones is the tropic hormones that stimulate the hormone production of other endocrine glands . For example, thyroid-stimulating hormone (TSH) causes growth and increased activity of another endocrine gland, the thyroid , which increases output of thyroid hormones . [ 34 ]
To release active hormones quickly into the circulation , hormone biosynthetic cells may produce and store biologically inactive hormones in the form of pre- or prohormones . These can then be quickly converted into their active hormone form in response to a particular stimulus. [ 34 ]
Eicosanoids are considered to act as local hormones. They are considered to be "local" because they possess specific effects on target cells close to their site of formation. They also have a rapid degradation cycle, making sure they do not reach distant sites within the body. [ 35 ]
Hormones are also regulated by receptor agonists. Hormones are ligands, which are any kinds of molecules that produce a signal by binding to a receptor site on a protein. Hormone effects can be inhibited, thus regulated, by competing ligands that bind to the same target receptor as the hormone in question. When a competing ligand is bound to the receptor site, the hormone is unable to bind to that site and is unable to elicit a response from the target cell. These competing ligands are called antagonists of the hormone. [ 36 ]
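Competitive antagonism of this kind is commonly modeled with the Gaddum equation, in which a competing ligand effectively raises the hormone's dissociation constant at the receptor. The sketch below assumes that standard pharmacological model, with all concentrations in arbitrary but consistent units.

```python
def fractional_occupancy(hormone: float, kd: float,
                         antagonist: float = 0.0, ki: float = 1.0) -> float:
    """Gaddum equation for competitive antagonism: the fraction of
    receptors occupied by the hormone, given its dissociation constant
    kd and an antagonist with dissociation constant ki competing for
    the same binding site."""
    return hormone / (hormone + kd * (1.0 + antagonist / ki))

# A hormone at its own Kd occupies half the receptors...
half = fractional_occupancy(1.0, 1.0)                        # 0.5
# ...but an antagonist at its own Ki cuts that occupancy to one third
blocked = fractional_occupancy(1.0, 1.0, antagonist=1.0)     # ~0.333
```

Because the antagonist only scales the effective kd, a high enough hormone concentration can still approach full occupancy, which is the defining surmountable character of competitive antagonism.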
Many hormones and their structural and functional analogs are used as medication . The most commonly prescribed hormones are estrogens and progestogens (as methods of hormonal contraception and as HRT ), [ 37 ] thyroxine (as levothyroxine , for hypothyroidism ) and steroids (for autoimmune diseases and several respiratory disorders ). Insulin is used by many diabetics . Local preparations for use in otolaryngology often contain pharmacologic equivalents of adrenaline , while steroid and vitamin D creams are used extensively in dermatological practice. [ 38 ]
A "pharmacologic dose" or "supraphysiological dose" of a hormone is a medical usage referring to an amount of a hormone far greater than naturally occurs in a healthy body. The effects of pharmacologic doses of hormones may be different from responses to naturally occurring amounts and may be therapeutically useful, though not without potentially adverse side effects. An example is the ability of pharmacologic doses of glucocorticoids to suppress inflammation .
At the neurological level, behavior can be inferred from hormone concentration, which in turn is influenced by hormone-release patterns, the number and location of hormone receptors, and the efficiency of those receptors involved in gene transcription. Hormone concentration alone does not incite behavior, as that would discount other external stimuli; rather, it influences the system by increasing the probability that a certain event will occur. [ 39 ]
Not only can hormones influence behavior, but behavior and the environment can also influence hormone concentration. [ 40 ] Thus a feedback loop is formed: behavior can affect hormone concentration, which in turn can affect behavior, and so on. [ 41 ] For example, hormone-behavior feedback loops are essential in providing constancy to episodic hormone secretion, as the behaviors affected by episodically secreted hormones directly prevent the continuous release of said hormones. [ 42 ]
Three broad stages of reasoning may be used to determine if a specific hormone-behavior interaction is present within a system: [ citation needed ]
Though the terms are often used interchangeably in colloquial speech, there are various clear distinctions between hormones and neurotransmitters : [ 43 ] [ 44 ] [ 36 ]
Neurohormones are a type of hormone that share a commonality with neurotransmitters. [ 47 ] They are produced by endocrine cells that receive input from neurons, or neuroendocrine cells. [ 47 ] Both classic hormones and neurohormones are secreted by endocrine tissue; however, neurohormones are the result of a combination between endocrine reflexes and neural reflexes, creating a neuroendocrine pathway. [ 36 ] While endocrine pathways produce chemical signals in the form of hormones, the neuroendocrine pathway involves the electrical signals of neurons. [ 36 ] In this pathway, the result of the electrical signal produced by a neuron is the release of a chemical, which is the neurohormone . [ 36 ] Finally, like a classic hormone, the neurohormone is released into the bloodstream to reach its target. [ 36 ]
Hormone transport and the involvement of binding proteins is an essential aspect when considering the function of hormones. [ 48 ]
The formation of a complex with a binding protein has several benefits: the effective half-life of the bound hormone is increased, and a reservoir of bound hormones is created, which evens out variations in the concentration of unbound hormones (bound hormones will replace the unbound hormones when these are eliminated). [ 49 ] An example of a hormone-binding protein is thyroxine-binding globulin, which carries up to 80% of all thyroxine in the body, a crucial element in regulating the metabolic rate. [ 50 ]
For the use of hormone antagonists in cancer , see hormonal therapy (oncology) .
A hormone antagonist is a molecule, produced either synthetically or endogenously, that binds to a specific hormone receptor to block the effect or synthesis of that hormone. [ 1 ] There are many types of hormone antagonists, such as gonadotropin-releasing hormone (GnRH) antagonists , estrogen antagonists , and androgen antagonists .
Organisms may use hormone antagonists to modify the action of their hormone receptors. [ 1 ] For example, ghrelin is a hormone that stimulates appetite and growth hormone release by activating the growth hormone secretagogue receptor (GHSR). [ 2 ] LEAP2 was found to be a peptide hormone synthesized by the liver and small intestine that blocks the GHSR activation by ghrelin, thereby reducing appetite. [ 2 ]
Synthetically produced hormone antagonists can also be used as anticancer treatments for hormone-sensitive cancers like breast cancer and prostate cancer. [ 3 ]
Hormone antagonists are used widely in anticancer treatments, such as the drug tamoxifen , an anti-estrogen that binds to estrogen receptors to slow the growth of some estrogen receptor-positive breast cancers. [ 4 ] Aromatase inhibitors (AIs) are also prescribed as a breast cancer treatment, especially after mastectomies . [ 5 ] AIs work by blocking the action of the aromatase enzyme, which converts androgens, like testosterone, into estrogen. [ 6 ] Aromatase inhibitors can be used in conjunction with estrogen receptor inhibitors to treat estrogen receptor-positive breast cancers in women who have gone through menopause already. [ 6 ] [ 5 ]
Anti-androgens such as enzalutamide , which bind to the androgen receptor and thereby inhibit the binding of testosterone, can be used as a treatment for prostate cancer. Androgens may promote prostate cancer, with the main androgens secreted by the testicles being testosterone and dihydrotestosterone (DHT). [ 7 ] Some androgens can be made by the adrenal glands, which are located above the kidneys. [ 8 ]
Gonadotropin-releasing hormone (GnRH) antagonists, or luteinizing hormone-releasing hormone (LHRH) antagonists, may also be used in prostate cancer treatment. LHRH antagonists work by blocking the pituitary gland from making follicle-stimulating hormone (FSH) and luteinizing hormone (LH), which in turn lowers testosterone production by the testicles. [ 9 ] The drug degarelix , administered as a monthly shot, is currently the only LHRH antagonist approved for treatment of advanced prostate cancers. [ 10 ] Compared to LHRH agonists, LHRH antagonists do not cause a tumor flare and decrease testosterone immediately. [ 8 ] Relugolix is an oral GnRH antagonist drug also used for advanced prostate cancer treatment.
Hormone replacement therapy ( HRT ), also known as menopausal hormone therapy or postmenopausal hormone therapy , is a form of hormone therapy used to treat symptoms associated with female menopause . [ 1 ] [ 2 ] Effects of menopause can include symptoms such as hot flashes , accelerated skin aging, vaginal dryness , decreased muscle mass , and complications such as osteoporosis (bone loss), sexual dysfunction , and vaginal atrophy . These effects are mostly caused by the low levels of female sex hormones (e.g. estrogens ) that occur during menopause. [ 1 ] [ 2 ]
Estrogens and progestogens are the main hormone drugs used in HRT. Progesterone is the main female sex hormone that occurs naturally and is also manufactured into a drug that is used in menopausal hormone therapy. [ 1 ] Although both classes of hormones can have symptomatic benefit, progestogen is specifically added to estrogen regimens, unless the uterus has been removed, to avoid the increased risk of endometrial cancer. Unopposed estrogen therapy promotes endometrial hyperplasia and increases the risk of cancer , while progestogen reduces this risk. [ 3 ] [ 4 ] Androgens like testosterone are sometimes used as well. [ 5 ] HRT is available through a variety of different routes . [ 1 ] [ 2 ]
The long-term effects of HRT on most organ systems vary by age and time since the last physiological exposure to hormones, and there can be large differences in individual regimens, factors which have made analyzing effects difficult. [ 6 ] The Women's Health Initiative (WHI) is an ongoing study of over 27,000 women that began in 1991, with the most recent analyses suggesting that, when initiated within 10 years of menopause, HRT reduces all-cause mortality and risks of coronary disease, osteoporosis, and dementia; after 10 years the beneficial effects on mortality and coronary heart disease are no longer apparent, though there are decreased risks of hip and vertebral fractures and an increased risk of venous thromboembolism when taken orally. [ 7 ] [ 8 ]
"Bioidentical" hormone replacement is a development in the 21st century and uses manufactured compounds with "exactly the same chemical and molecular structure as hormones that are produced in the human body." [ 9 ] These are mainly manufactured from plant steroids [ 10 ] and can be a component of either registered pharmaceutical or custom-made compounded preparations, with the latter generally not recommended by regulatory bodies due to their lack of standardization and formal oversight. [ 11 ] Bioidentical hormone replacement has inadequate clinical research to determine its safety and efficacy as of 2017. [ 12 ]
The current indications for use from the United States Food and Drug Administration (FDA) include short-term treatment of menopausal symptoms , such as vasomotor hot flashes or vaginal atrophy , and prevention of osteoporosis . [ 13 ]
The American College of Obstetricians and Gynecologists (ACOG) approves of HRT for symptomatic relief of menopausal symptoms, [ 14 ] and advocates its use beyond the age of 65 in appropriate scenarios. [ 15 ] The North American Menopause Society (NAMS) 2016 annual meeting mentioned that HRT may have more benefits than risks in women before the age of 60. [ 16 ]
A consensus expert opinion published by The Endocrine Society stated that when taken during perimenopause or the initial years of menopause, HRT carries fewer risks than previously published, and reduces all cause mortality in most scenarios. [ 2 ] The American Association of Clinical Endocrinologists (AACE) has also released position statements approving of HRT when appropriate. [ 12 ]
Women receiving this treatment are usually post-, peri-, or surgically induced menopausal . Menopause is the permanent cessation of menstruation resulting from loss of ovarian follicular activity, defined as beginning twelve months after the final natural menstrual cycle. This twelve-month time point divides menopause into early and late transition periods known as 'perimenopause' and 'postmenopause'. [ 4 ] Premature menopause can occur if the ovaries are surgically removed , as can be done to treat ovarian or uterine cancer .
Demographically, the vast majority of data available is in postmenopausal American women with concurrent pre-existing conditions and an average age of over 60 years. [ 17 ]
HRT is often given as a short-term relief from menopausal symptoms during perimenopause . [ 18 ] Potential menopausal symptoms include: [ 1 ] [ 2 ]
The most common of these are loss of sexual drive and vaginal dryness . [ 4 ] [ 21 ]
The use of hormone therapy for heart health among menopausal women has declined significantly over the past few decades. [ 22 ] In 1999, nearly 27% of menopausal women in the U.S. used estrogen, but by 2020, that figure had dropped to less than 5%. [ 23 ] [ 24 ] Recent evidence from 2024 supports the cardiovascular benefits of hormone therapy, including improvements in insulin resistance and other heart-related markers. [ 25 ] This adds to a growing body of research highlighting hormone therapy's effectiveness, not only for heart health but also for managing menopausal symptoms like hot flashes, disrupted sleep, vaginal dryness, and painful intercourse. [ 26 ] Despite its proven benefits, many menopausal women avoid hormone therapy, often due to lingering misconceptions about its risks and societal discomfort with openly discussing menopause. [ 22 ]
HRT can help with the lack of sexual desire and sexual dysfunction that can occur with menopause. Epidemiological surveys of women between 40 and 69 years suggest that 75% of women remain sexually active after menopause. [ 4 ] With increasing life spans, women today are living one third or more of their lives in a postmenopausal state, a period during which healthy sexuality can be integral to their quality of life . [ 27 ]
Decreased libido and sexual dysfunction are common issues in postmenopausal women, an entity referred to as hypoactive sexual desire disorder (HSDD); its signs and symptoms can both be improved by HRT. [ 5 ] [ 28 ] Several hormonal changes take place during this period, including a decrease in estrogen and an increase in follicle-stimulating hormone . For most women, the majority of change occurs during the late perimenopausal and postmenopausal stages. [ 4 ] Decreases in sex hormone-binding globulin (SHBG) and inhibin (A and B) also occur. Testosterone is present in women at a lower level than in men, peaking at age 30 and declining gradually with age; there is less variation during the menopausal transition relative to estrogen and progesterone. [ 4 ]
A global consensus position statement has advised that postmenopausal testosterone replacement to premenopausal levels can be effective for HSDD. However, safety information for testosterone treatment is not available beyond two years of continuous therapy, and dosing above physiologic levels is not advised. [ 29 ] Testosterone patches have been found to restore sexual desire in postmenopausal women. [ 30 ] There is insufficient data to evaluate the impact of testosterone replacement on heart disease and breast cancer, as most trials included women taking concomitant estrogen and progesterone, and testosterone therapy itself was relatively short in duration. Within this limited data, testosterone therapy has not been associated with adverse events. [ 29 ]
Not all women are responsive, especially those with preexisting sexual difficulties. [ 21 ] Estrogen replacement can restore vaginal cells, pH levels, and blood flow to the vagina, all of which tend to deteriorate at the onset of menopause. Pain or discomfort with sex appears to be the most responsive component to estrogen. [ 21 ] It also has been shown to have positive effects on the urinary tract. [ 21 ] Estrogen can also reduce vaginal atrophy and increase sexual arousal , frequency and orgasm . [ 21 ]
The effectiveness of hormone replacement can decline in some women after long-term use. [ 21 ] A number of studies have also found that the combined effects of estrogen/androgen replacement therapy can increase libido and arousal over estrogen alone. [ 21 ] Tibolone , a synthetic steroid with estrogenic, androgenic, and progestogenic properties that is available in Europe, has the ability to improve mood, libido, and physical symptomatology. In various placebo-controlled studies, improvements in vasomotor symptoms, emotional response, sleep disturbances, physical symptoms, and sexual desire have been seen, though it also carries a similar risk profile to conventional HRT. [ 5 ]
There is a significant decrease in hip fracture risk during treatment, which persists to a lesser degree after HRT is stopped. [ 31 ] [ 32 ] HRT also helps collagen formation, which in turn improves intervertebral disc and bone strength. [ 33 ]
Hormone replacement therapy in the form of estrogen and androgen can be effective at reversing the effects of aging on muscle. [ 34 ] Lower testosterone is associated with lower bone density and higher free testosterone is associated with lower hip fracture rates in older women. [ 35 ] Testosterone therapy, which can be used for decreased sexual function, can also increase bone mineral density and muscle mass. [ 29 ]
Side effects in HRT occur with varying frequency and include: [ 36 ]
The effect of HRT in menopause appears to be divergent, with lower risk of heart disease when started within five years, but no impact after ten. [ 38 ] [ 39 ] [ 40 ] For women who are in early menopause and have no issues with their cardiovascular health, HRT comes with a low risk of adverse cardiovascular events. [ 41 ] There may be an increase in heart disease if HRT is given twenty years post-menopause. [ 31 ] This variability has led some reviews to suggest an absence of significant effect on morbidity . [ 42 ] Importantly, there is no difference in long-term mortality from HRT, regardless of age. [ 6 ]
A Cochrane review suggested that women starting HRT less than 10 years after menopause had lower mortality and coronary heart disease , without any strong effect on the risk of stroke and pulmonary embolism . [ 38 ] Those starting therapy more than 10 years after menopause showed little effect on mortality and coronary heart disease, but an increased risk of stroke. Both therapies had an association with venous clots and pulmonary embolism. [ 38 ]
HRT with estrogen and progesterone also improves cholesterol levels . With menopause, HDL decreases, while LDL , triglycerides and lipoprotein(a) increase, patterns that reverse with estrogen. Beyond this, HRT improves heart contraction , coronary blood flow, sugar metabolism , and decreases platelet aggregation and plaque formation . HRT may promote reverse cholesterol transport through induction of cholesterol ABC transporters . [ 43 ] Atherosclerosis imaging trials show that HRT decreases the formation of new vascular lesions, but does not reverse the progression of existing lesions. [ 44 ] HRT also results in a large reduction in the pro-thrombotic lipoprotein(a) . [ 45 ]
Studies on cardiovascular disease with testosterone therapy have been mixed, with some suggesting no effect or a mild negative effect, though others have shown an improvement in surrogate markers such as cholesterol, triglycerides and weight. [ 29 ] [ 46 ] Testosterone has a positive effect on vascular endothelial function and tone with observational studies suggesting that women with lower testosterone may be at greater risk for heart disease. Available studies are limited by small sample size and study design. Low sex hormone-binding globulin , which occurs with menopause, is associated with increased body mass index and risk for type 2 diabetes. [ 35 ]
Effects of hormone replacement therapy on venous blood clot formation and potential for pulmonary embolism may vary with different estrogen and progestogen therapies, and with different doses or method of use. [ 17 ] Comparisons between routes of administration suggest that when estrogens are applied to the skin or vagina, there is a lower risk of blood clots, [ 17 ] [ 47 ] whereas when used orally, the risk of blood clots and pulmonary embolism is increased. [ 38 ] Skin and vaginal routes of hormone therapy are not subject to first pass metabolism , and so lack the anabolic effects that oral therapy has on liver synthesis of vitamin K -dependent clotting factors , possibly explaining why oral therapy may increase blood clot formation. [ 48 ]
While a 2018 review found that taking progesterone and estrogen together can decrease this risk, [ 47 ] other reviews reported an increased risk of blood clots and pulmonary embolism when estrogen and progestogen were combined, particularly when treatment was started 10 years or more after menopause and when the women were older than 60 years. [ 17 ] [ 38 ]
The risk of venous thromboembolism may be reduced with bioidentical preparations, though research on this is only preliminary. [ 49 ]
Multiple studies suggest that the risk of HRT-related stroke is absent if therapy is started within five years of menopause, [ 50 ] and that the association is absent or even preventive when given by non-oral routes. [ 8 ] Ischemic stroke risk was increased during the time of intervention in the WHI, with no significant effect after the cessation of therapy [ 31 ] and no difference in mortality at long-term follow-up. [ 6 ] When oral synthetic estrogen or combined estrogen-progestogen treatment is delayed until five years after menopause, cohort studies in Swedish women have suggested an association with hemorrhagic and ischemic stroke. [ 50 ] Another large cohort of Danish women suggested that the specific route of administration was important, finding that although oral estrogen increased risk of stroke, absorption through the skin had no impact, and vaginal estrogen was actually associated with a decreased risk. [ 8 ]
In postmenopausal women, continuous combined estrogen plus progestin decreases endometrial cancer incidence. [ 51 ] The duration of progestogen therapy should be at least 14 days per cycle to prevent endometrial disease. [ 52 ]
Endometrial cancer has been grouped into two forms in the context of hormone replacement. Type 1 is the most common, can be associated with estrogen therapy, and is usually low grade. Type 2 is not related to estrogen stimulation and usually higher grade and poorer in prognosis. [ 53 ] The endometrial hyperplasia that leads to endometrial cancer with estrogen therapy can be prevented by concomitant administration of progestogen . [ 53 ] The extensive use of high-dose estrogens for birth control in the 1970s is thought to have resulted in a significant increase in the incidence of type 1 endometrial cancer. [ 54 ]
Paradoxically, progestogens do promote the growth of uterine fibroids , and a pelvic ultrasound can be performed before beginning HRT to make sure there are no underlying uterine or endometrial lesions. [ 53 ]
Androgens do not stimulate endometrial proliferation in postmenopausal women, and appear to inhibit estrogen-induced proliferation to a certain extent. [ 55 ]
There is insufficient high‐quality evidence to inform women considering hormone replacement therapy after treatment for endometrial cancer. [ 56 ]
In general, hormone replacement therapy to treat menopause is associated with only a small increased risk of breast cancer . [ 57 ] [ 58 ] [ 59 ] The level of risk also depends on the type of HRT, the duration of the treatment and the age of the person. [ 58 ] [ 60 ] Estrogen -only HRT, taken by people who had a hysterectomy , comes with an extremely low level of breast cancer risk. The most commonly taken combined HRT (estrogen and progestogen ) is linked to a small risk of breast cancer. This risk is lower for women in their 50s and higher for older women. The risk increases with the duration of HRT. When HRT is taken for a year or less, there is no increased risk of breast cancer. HRT taken for more than 5 years comes with an increased risk, but the risk reduces after the therapy is stopped. [ 58 ] [ 59 ]
There is a non-statistically significant increased rate of breast cancer for hormone replacement therapy with synthetic progestogens . [ 6 ] The risk may be reduced with bioidentical progesterone, [ 61 ] though the only prospective study that suggested this was underpowered due to the rarity of breast cancer in the control population . There have been no randomized controlled trials as of 2018. [ 62 ] The relative risk of breast cancer also varies depending on the interval between menopause and HRT and route of synthetic progestin administration. [ 63 ] [ 64 ]
The most recent follow up of the Women's Health Initiative participants demonstrated a lower incidence of breast cancer in post-hysterectomy participants taking equine estrogen alone, though the relative risk was increased if estrogen was taken with medroxyprogesterone. [ 24 ] Estrogen is usually only given alone in the setting of a hysterectomy due to the increased risk of vaginal bleeding and uterine cancer with unopposed estrogen. [ 65 ] [ 66 ]
HRT has been more strongly associated with risk of breast cancer in women with lower body mass indices (BMIs). No breast cancer association has been found with BMIs of over 25. [ 67 ] It has been suggested by some that the absence of significant effect in some of these studies could be due to selective prescription to overweight women who have higher baseline estrone , or to the very low progesterone serum levels after oral administration leading to a high tumor inactivation rate. [ 68 ]
Evaluating the response of breast tissue density to HRT using mammography appears to help assessing the degree of breast cancer risk associated with therapy; women with dense or mixed- dense breast tissue have a higher risk of developing breast cancer than those with low density tissue. [ 69 ]
Micronized progesterone does not appear to be associated with breast cancer risk when used for less than five years with limited data suggesting an increased risk when used for longer duration. [ 70 ]
For women who previously have had breast cancer, it is recommended to first consider other options for menopausal effects, such as bisphosphonates or selective estrogen receptor modulators (SERMs) for osteoporosis, cholesterol-lowering agents and aspirin for cardiovascular disease, and vaginal estrogen for local symptoms. Observational studies of systemic HRT after breast cancer are generally reassuring. If HRT is necessary after breast cancer, estrogen-only therapy or estrogen therapy with a progestogen may be safer options than combined systemic therapy. [ 71 ] In women who are BRCA1 or BRCA2 mutation carriers, HRT does not appear to impact breast cancer risk. [ 72 ] The relative number of women using HRT who also obtain regular screening mammograms is higher than that in women who do not use HRT, a factor which has been suggested as contributing to different breast cancer detection rates in the two groups. [ 73 ]
With androgen therapy, pre-clinical studies have suggested an inhibitory effect on breast tissue though the majority of epidemiological studies suggest a positive association. [ 74 ]
HRT is associated with an increased risk of ovarian cancer , with women using HRT having about one additional case of ovarian cancer per 1,000 users. [ 75 ] This risk is decreased when progestogen therapy is given concomitantly, as opposed to estrogen alone, and also decreases with increasing time since stopping HRT. [ 76 ] Regarding the specific subtype , there may be a higher risk of serous cancer , but no association with clear cell , endometrioid , or mucinous ovarian cancer . [ 76 ] Hormonal therapy in ovarian cancer survivors after surgical removal of the ovaries is generally thought to improve survival rates. [ 77 ]
In the WHI, women who took combined estrogen-progesterone therapy had a lower risk of getting colorectal cancer . However, the cancers they did have were more likely to have spread to lymph nodes or distant sites than colorectal cancer in women not taking hormones. [ 78 ] In colorectal cancer survivors, usage of HRT is thought to lead to lower recurrence risk and overall mortality. [ 79 ]
There appears to be a significantly decreased risk of cervical squamous cell cancer in postmenopausal women treated with HRT and a weak increase in adenocarcinoma. No studies have reported an increased risk of recurrence when HRT is used in cervical cancer survivors. [ 80 ]
As of 2024 there has been conflicting evidence from clinical studies regarding the beneficial effects of estrogens at reducing the risk of Alzheimer's disease . [ 81 ] For prevention, the WHI suggested in 2013 that HRT may increase the risk of dementia if initiated after 65 years of age, but have a neutral outcome or be neuroprotective for those between 50 and 55 years. [ 31 ] However, the prospective ELITE trial showed negligible effects on verbal memory and other mental skills regardless of how soon after menopause a woman began HRT. [ 82 ]
A 2012 review of clinical and epidemiological studies of HRT and Alzheimer's disease, Parkinson's disease, frontotemporal dementia, and HIV-related dementia concluded that results were inconclusive at that time. [ 83 ]
The majority of clinical and epidemiological studies show either no association with the risk of developing Parkinson's disease [ 84 ] [ 85 ] or inconclusive results. [ 83 ] [ 86 ] One Danish study suggested an increased risk of Parkinson's with HRT in cyclical dosing schedules. [ 87 ]
Other randomized trials have shown HRT to improve executive and attention processes outside of the context of dementia in postmenopausal women, both in asymptomatic and those with mild cognitive impairment. [ 88 ] [ 89 ] [ 90 ]
As of 2011, estrogen replacement in postmenopausal women with Parkinson's disease appeared to improve motor symptoms and activities of daily living , with significant improvement of UPDRS scores . [ 91 ] Testosterone replacement has also been shown to be associated with small but statistically significant improvements in verbal learning and memory in postmenopausal women, [ 92 ] but DHEA has not been found to improve cognitive performance after menopause. [ 35 ]
Pre-clinical studies have indicated that endogenous estrogen and testosterone are neuroprotective and can prevent brain amyloid deposition. [ 93 ] [ 94 ]
The following are absolute and relative contraindications to HRT: [ 95 ]
The extraction of conjugated equine estrogens (CEEs) from the urine of pregnant mares led to the marketing in 1942 of Premarin , one of the earlier forms of estrogen to be introduced. [ 96 ] [ 97 ] From that time until the mid-1970s, estrogen was administered without a supplemental progestogen. Beginning in 1975, studies began to show that without a progestogen, unopposed estrogen therapy with Premarin resulted in an eight-fold increased risk of endometrial cancer , eventually causing sales of Premarin to plummet. [ 96 ] It was recognized in the early 1980s that the addition of a progestogen to estrogen reduced this risk to the endometrium. [ 96 ] This led to the development of combined estrogen–progestogen therapy, most commonly with a combination of conjugated equine estrogen (Premarin) and medroxyprogesterone (Provera). [ 96 ]
The Women's Health Initiative trials were conducted between 1991 and 2006 and were the first large, double-blind , placebo-controlled clinical trials of HRT in healthy women. [ 96 ] Their results were both positive and negative, suggesting that during the time of hormone therapy itself, there are increases in invasive breast cancer, stroke and lung clots . Other risks include increased endometrial cancer , gallbladder disease, and urinary incontinence , while benefits include decreased hip fractures , decreased incidence of diabetes , and improvement of vasomotor symptoms. There is also an increased risk of dementia with HRT in women over 65, though at younger ages it appears to be neuroprotective. After the cessation of HRT, the WHI continued to observe its participants, and found that most of these risks and benefits dissipated, though some elevation in breast cancer risk did persist. [ 31 ] Other studies have also suggested an increased risk of ovarian cancer . [ 76 ]
The arm of the WHI receiving combined estrogen and progestin therapy was closed prematurely in 2002 by its Data Monitoring Committee (DMC) due to perceived health risks, though this occurred a full year after the data suggesting increased risk became manifest. In 2004, the arm of the WHI in which post-hysterectomy patients were being treated with estrogen alone was also closed by the DMC. Clinical medical practice changed based upon two parallel Women's Health Initiative (WHI) studies of HRT. Prior studies were smaller, and many were of women who electively took hormonal therapy. One portion of the parallel studies followed over 16,000 women for an average of 5.2 years, half of whom took placebo , while the other half took a combination of CEEs and medroxyprogesterone acetate (MPA) (Prempro). This WHI estrogen-plus-progestin trial was stopped prematurely in 2002 because preliminary results suggested risks of combined CEEs and progestins exceeded their benefits. The first report on the halted WHI estrogen-plus-progestin study came out in July 2002. [ 98 ]
Initial data from the WHI in 2002 suggested mortality to be lower when HRT was begun earlier, between ages 50 and 59, but higher when begun after age 60. In older patients, there was an apparent increased incidence of breast cancer, heart attacks, venous thrombosis , and stroke, although a reduced incidence of colorectal cancer and bone fracture . At the time, the WHI recommended that women with non-surgical menopause take the lowest feasible dose of HRT for the shortest possible time to minimize associated risks. [ 98 ] Some of the WHI findings were again found in a larger national study done in the United Kingdom, known as the Million Women Study (MWS). As a result of these findings, the number of women taking HRT dropped precipitously. [ 99 ] In 2012, the United States Preventive Services Task Force (USPSTF) concluded that the harmful effects of combined estrogen and progestin therapy likely exceeded their chronic disease prevention benefits. [ 100 ] [ 101 ]
In 2002 when the first WHI follow up study was published, with HRT in post menopausal women, both older and younger age groups had a slightly higher incidence of breast cancer, and both heart attack and stroke were increased in older patients, although not in younger participants. Breast cancer was increased in women treated with estrogen and a progestin, but not with estrogen and progesterone or estrogen alone. Treatment with unopposed estrogen (i.e., an estrogen alone without a progestogen) is contraindicated if the uterus is still present, due to its proliferative effect on the endometrium . The WHI also found a reduced incidence of colorectal cancer when estrogen and a progestogen were used together, and most importantly, a reduced incidence of bone fractures. Ultimately, the study found disparate results for all cause mortality with HRT, finding it to be lower when HRT was begun during ages 50–59, but higher when begun after age 60. The authors of the study recommended that women with non-surgical menopause take the lowest feasible dose of hormones for the shortest time to minimize risk. [ 98 ]
The data published by the WHI suggested supplemental estrogen increased risk of venous thromboembolism and breast cancer but was protective against osteoporosis and colorectal cancer , while the impact on cardiovascular disease was mixed. [ 102 ] These results were later supported in trials from the United Kingdom, but not in more recent studies from France and China. Genetic polymorphism appears to be associated with inter-individual variability in metabolic response to HRT in postmenopausal women. [ 103 ] [ 104 ]
The WHI reported statistically significant increases in rates of breast cancer, coronary heart disease , strokes and pulmonary emboli . The study also found statistically significant decreases in rates of hip fracture and colorectal cancer . "A year after the study was stopped in 2002, an article was published indicating that estrogen plus progestin also increases the risks of dementia." [ 105 ] The conclusion of the study was that the HRT combination presented risks that outweighed its measured benefits. The results were almost universally reported as risks and problems associated with HRT in general, rather than with Prempro, the specific proprietary combination of CEEs and MPA studied. [ citation needed ]
After the increased clotting found in the first WHI results was reported in 2002, the number of Prempro prescriptions filled fell by almost half. Following the WHI results, a large percentage of HRT users discontinued them, which was quickly followed by a sharp drop in breast cancer rates. The decrease in breast cancer rates has continued in subsequent years. [ 106 ] An unknown number of women started taking alternatives to Prempro, such as compounded bioidentical hormones, though researchers have asserted that compounded hormones are not significantly different from conventional hormone therapy. [ 107 ]
The other portion of the parallel studies featured women who were post-hysterectomy and so received either placebo or CEEs alone. This group did not show the risks demonstrated in the combination hormone study, and the estrogen-only study was not halted in 2002. However, in February 2004 it, too, was halted. While there was a 23% decreased incidence of breast cancer in the estrogen-only study participants, risks of stroke and pulmonary embolism were increased slightly, predominantly in patients who began HRT over the age of 60. [ 108 ]
Several other large studies and meta-analyses have reported reduced mortality for HRT in women younger than age 60 or within 10 years of menopause, and a debatable or absent effect on mortality in women over 60. [ 2 ] [ 109 ] [ 110 ] [ 111 ] [ 112 ] [ 113 ]
Though research thus far has been substantial, further investigation is needed to fully understand differences in effect for different types of HRT and lengths of time since menopause. [ 114 ] [ 115 ] [ 33 ] As of 2023 [update] , for example, no trial has studied women who begin taking HRT around age 50 and continue taking it for longer than 10 years. [ 116 ]
There are five major human steroid hormones: estrogens, progestogens, androgens , mineralocorticoids , and glucocorticoids . Estrogens and progestogens are the two most often used in menopause. They are available in a wide variety of FDA approved and non–FDA-approved formulations. [ 9 ]
In women with intact uteruses , estrogens are almost always given in combination with progestogens, as long-term unopposed estrogen therapy is associated with a markedly increased risk of endometrial hyperplasia and endometrial cancer . [ 1 ] [ 2 ] Conversely, in women who have undergone a hysterectomy or do not have a uterus, a progestogen is not required, and estrogen can be used alone. There are many combined formulations which include both estrogen and progestogen. [ citation needed ]
Specific types of hormone replacement include: [ 1 ] [ 2 ]
Tibolone – a synthetic medication available in Europe but not in the United States – is more effective than placebo but less effective than combination hormone therapy in postmenopausal women. It may carry a decreased risk of breast and colorectal cancer, though conversely it is associated with vaginal bleeding and endometrial cancer, and can increase the risk of stroke in women over age 60. [ 119 ] [ 120 ]
Vaginal estrogen can improve local atrophy and dryness, with fewer systemic effects than estrogens delivered by other routes. [ 121 ] Sometimes an androgen, generally testosterone, can be added to treat diminished libido . [ 122 ] [ 123 ]
Dosage is often varied cyclically to more closely mimic the ovarian hormone cycle, with estrogens taken daily and progestogens taken for about two weeks every month or every other month, a schedule referred to as 'cyclic' or 'sequentially combined'. Alternatively, 'continuous combined' HRT can be given with a constant daily hormonal dosage. [ 124 ] Continuous combined HRT is associated with less complex endometrial hyperplasia than cyclic regimens. [ 125 ] Impact on breast density appears to be similar with both regimen timings. [ 126 ]
The medications used in menopausal HRT are available in numerous different formulations for use by a variety of different routes of administration : [ 1 ] [ 2 ]
More recently developed forms of drug delivery are claimed to offer increased local effect, lower dosing, fewer side effects, and constant rather than cyclical serum hormone levels. [ 1 ] [ 2 ] Transdermal and vaginal estrogen, in particular, avoid first-pass metabolism through the liver. This in turn prevents an increase in clotting factors and accumulation of anti-estrogenic metabolites, resulting in fewer adverse side effects, particularly with regard to cardiovascular disease and stroke. [ 127 ]
Injectable forms of estradiol exist and have been used occasionally in the past. [ 128 ] [ 129 ] However, they are rarely used in menopausal hormone therapy in modern times and are no longer recommended. [ 128 ] [ 130 ] Instead, other non-oral forms of estradiol such as transdermal estradiol are recommended and may be used. [ 128 ] Estradiol injectables are generally well-tolerated and convenient, requiring infrequent administration. [ 128 ] [ 129 ] However, this form of estradiol does not release estradiol at a constant rate and there are very high circulating estradiol levels soon after injection followed by a rapid decline in levels. [ 128 ] Injections may also be painful. [ 128 ] Examples of estradiol injectables that may be used in menopausal hormone therapy include estradiol valerate and estradiol cypionate . [ 128 ] [ 129 ] In terms of injectable progestogens, injectable progesterone is associated with pain and injection site reactions as well as a short duration of action requiring very frequent injections, and is similarly not recommended in menopausal hormone therapy. [ 131 ] [ 129 ]
Bioidentical hormone therapy (BHT) is the usage of hormones that are chemically identical to those produced in the body. Although proponents of BHT claim advantages over non-bioidentical or conventional hormone therapy, the FDA does not recognize the term 'bioidentical hormone', stating there is no scientific evidence that these hormones are identical to their naturally occurring counterparts. [ 11 ] [ 132 ] There are, however, FDA approved products containing hormones classified as 'bioidentical'. [ 12 ] [ 9 ]
Bioidentical hormones can be used in either pharmaceutical or compounded preparations, with the latter generally not recommended by regulatory bodies due to their lack of standardization and regulatory oversight. [ 11 ] Most classifications of bioidentical hormones do not take into account manufacturing, source, or delivery method of the products, and so describe both non-FDA approved compounded products and FDA approved pharmaceuticals as 'bioidentical'. [ 9 ] The British Menopause Society has issued a consensus statement endorsing the distinction between "compounded" forms (cBHRT), described as unregulated, custom-made by specialty pharmacies and subject to heavy marketing, and "regulated" pharmaceutical-grade forms (rBHRT), which undergo formal oversight by entities such as the FDA and form the basis of most clinical trials. [ 133 ] Some practitioners recommending compounded bioidentical HRT also use salivary or serum hormonal testing to monitor response to therapy, a practice not endorsed by current clinical guidelines in the United States and Europe. [ 134 ]
Bioidentical hormones in pharmaceuticals may have very limited clinical data, with no randomized controlled prospective trials to date comparing them to their animal derived counterparts. Some pre-clinical data has suggested a decreased risk of venous thromboembolism , cardiovascular disease , and breast cancer. [ 11 ] As of 2012, guidelines from the North American Menopause Society , the Endocrine Society , the International Menopause Society , and the European Menopause and Andropause Society endorsed the reduced risk of bioidentical pharmaceuticals for those with increased clotting risk. [ 11 ] [ 135 ]
Compounding for HRT is generally discouraged by the FDA and medical industry in the United States due to a lack of regulation and standardized dosing. [ 11 ] [ 132 ] The U.S. Congress did grant the FDA explicit but limited oversight of compounded drugs in a 1997 amendment to the Federal Food, Drug, and Cosmetic Act (FDCA), but it has encountered obstacles in this role since that time. After 64 patient deaths and 750 harmed patients from a 2012 meningitis outbreak due to contaminated steroid injections, Congress passed the 2013 Drug Quality and Security Act , authorizing creation by the FDA of a voluntary registration for facilities that manufactured compounded drugs, and reinforcing FDCA regulations for traditional compounding. [ 136 ] The DQSA and its reinforcement of provision §503A of the FDCA solidify FDA authority to enforce FDCA regulations against compounders of bioidentical hormone therapy. [ 136 ]
In the United Kingdom, on the other hand, compounding is a regulated activity. The Medicines and Healthcare products Regulatory Agency regulates compounding performed under a Manufacturing Specials license and the General Pharmaceutical Council regulates compounding performed within a pharmacy. All testosterone prescribed in the United Kingdom is bioidentical, with its use supported by the National Health Service . There is also marketing authorisation for male testosterone products. National Institute for Health and Care Excellence guideline 1.4.8 states: "consider testosterone supplementation for menopausal women with low sexual desire if HRT alone is not effective". The footnote adds: "at the time of publication (November 2015), testosterone did not have a United Kingdom marketing authorisation for this indication in women. Bioidentical progesterone is used in IVF treatment and for pregnant women who are at risk of premature labour." [ citation needed ]
Wyeth , now a subsidiary of Pfizer , was a pharmaceutical company that marketed the HRT products Premarin (CEEs) and Prempro (CEEs + MPA). [ 137 ] [ 138 ] In 2009, litigation involving Wyeth resulted in the release of 1,500 documents that revealed practices concerning its promotion of these medications. [ 137 ] [ 138 ] [ 139 ] The documents showed that Wyeth commissioned dozens of ghostwritten reviews and commentaries that were published in medical journals to promote unproven benefits of its HRT products, downplay their harms and risks, and cast competing therapies in a negative light. [ 137 ] [ 138 ] [ 139 ] Starting in the mid-1990s and continuing for over a decade, Wyeth pursued an aggressive "publication plan" strategy to promote its HRT products through the use of ghostwritten publications. [ 139 ] It worked mainly with DesignWrite, a medical writing firm. [ 139 ] Between 1998 and 2005, Wyeth had 26 papers promoting its HRT products published in scientific journals. [ 137 ]
These favorable publications emphasized the benefits and downplayed the risks of its HRT products, especially the "misconception" of the association of its products with breast cancer. [ 139 ] The publications defended unsupported cardiovascular "benefits" of its products, downplayed risks such as breast cancer, and promoted off-label and unproven uses like prevention of dementia, Parkinson's disease , vision problems , and wrinkles . [ 138 ] In addition, Wyeth emphasized negative messages against the SERM raloxifene for osteoporosis, instructed writers to stress the fact that "alternative therapies have increased in usage since the WHI even though there is little evidence that they are effective or safe...", called into question the quality and therapeutic equivalence of approved generic CEE products, and made efforts to spread the notion that the unique risks of CEEs and MPA were a class effect of all forms of menopausal HRT: "Overall, these data indicate that the benefit/risk analysis that was reported in the Women's Health Initiative can be generalized to all postmenopausal hormone replacement therapy products." [ 138 ]
Following the publication of the WHI data in 2002, the stock prices for the pharmaceutical industry plummeted, and huge numbers of women stopped using HRT. [ 140 ] The stocks of Wyeth, which supplied the Premarin and Prempro that were used in the WHI trials, decreased by more than 50%, and never fully recovered. [ 140 ] Some of their articles in response promoted themes such as the following: "the WHI was flawed; the WHI was a controversial trial; the population studied in the WHI was inappropriate or was not representative of the general population of menopausal women; results of clinical trials should not guide treatment for individuals; observational studies are as good as or better than randomized clinical trials; animal studies can guide clinical decision-making; the risks associated with hormone therapy have been exaggerated; the benefits of hormone therapy have been or will be proven, and the recent studies are an aberration." [ 96 ] Similar findings were observed in a 2010 analysis of 114 editorials, reviews, guidelines, and letters by five industry-paid authors. [ 96 ] These publications promoted positive themes and challenged and criticized unfavorable trials such as the WHI and MWS. [ 96 ] In 2009, Wyeth was acquired by Pfizer in a deal valued at US$68 billion. [ 141 ] [ 142 ] Pfizer, a company that produces Provera and Depo-Provera (MPA) and has also engaged in medical ghostwriting, continues to market Premarin and Prempro, which remain best-selling medications. [ 96 ] [ 139 ]
According to Fugh-Berman (2010), "Today, despite definitive scientific data to the contrary, many gynecologists still believe that the benefits of [HRT] outweigh the risks in asymptomatic women. This non-evidence–based perception may be the result of decades of carefully orchestrated corporate influence on medical literature." [ 138 ] In a 2011 survey, as many as 50% of physicians expressed skepticism about large trials like the WHI and HERS. [ 143 ] The positive perceptions many physicians hold of HRT, in spite of large trials showing risks that potentially outweigh any benefits, may be due to the efforts of pharmaceutical companies like Wyeth, according to May and May (2012) and Fugh-Berman (2015). [ 139 ] [ 96 ]
The 2000s showed a dramatic decline in prescription rates, though more recently they have begun to rise again. [ 127 ] [ 82 ] Transdermal therapy, in part because it does not increase the risk of venous thromboembolism, is now often the first choice for HRT in the United Kingdom. Conjugated equine estrogen, by contrast, carries a potentially higher thrombosis risk and is no longer commonly used in the UK, having been replaced by estradiol-based compounds with lower thrombosis risk. Oral progestogen combinations have shifted from medroxyprogesterone acetate to dydrogesterone, due to the latter's lack of association with venous clots. [ 144 ] | https://en.wikipedia.org/wiki/Hormone_replacement_therapy
Response elements are short sequences of DNA within a gene promoter or enhancer region that are able to bind specific transcription factors and regulate transcription of genes .
Under conditions of stress, a transcription activator protein binds to the response element and stimulates transcription. If the same response element sequence is located in the control regions of different genes, then these genes will be activated by the same stimuli, thus producing a coordinated response.
A hormone response element (HRE) is a short sequence of DNA within the promoter of a gene that is able to bind a specific hormone receptor complex and thereby regulate transcription . [ 1 ] The sequence is most commonly a pair of inverted repeats separated by three nucleotides, which also indicates that the receptor binds as a dimer . Specifically, HREs respond to steroid hormones, as the activated steroid receptor is the transcription factor that binds the HRE. This regulates the transcription of genes signalled by the steroid hormone.
A gene may have many different response elements, allowing complex control to be exerted over the level and rate of transcription. [ 2 ]
HREs are used in transgenic animal cells as inducers of gene expression.
Examples of HREs include estrogen response elements and androgen response elements.
Examples of response elements include: | https://en.wikipedia.org/wiki/Hormone_response_element |
In mathematics , a horn angle , also called a cornicular angle, is a type of curvilinear angle defined as the angle formed between a circle and a straight line tangent to it, or, more generally, the angle formed between two curves at a point where they are tangent to each other. | https://en.wikipedia.org/wiki/Horn_angle
In mathematical logic and logic programming , a Horn clause is a logical formula of a particular rule-like form that gives it useful properties for use in logic programming, formal specification , universal algebra and model theory . Horn clauses are named for the logician Alfred Horn , who first pointed out their significance in 1951. [ 1 ]
A Horn clause is a disjunctive clause (a disjunction of literals ) with at most one positive, i.e. unnegated , literal.
Conversely, a disjunction of literals with at most one negated literal is called a dual-Horn clause .
A Horn clause with exactly one positive literal is a definite clause or a strict Horn clause ; [ 2 ] a definite clause with no negative literals is a unit clause , [ 3 ] and a unit clause without variables is a fact ; [ 4 ] a Horn clause without a positive literal is a goal clause .
The empty clause, consisting of no literals (which is equivalent to false ), is a goal clause.
These three kinds of Horn clauses are illustrated in the following propositional example:
All variables in a clause are implicitly universally quantified with the scope being the entire clause. Thus, for example:
stands for:
which is logically equivalent to:
Horn clauses play a basic role in constructive logic and computational logic . They are important in automated theorem proving by first-order resolution , because the resolvent of two Horn clauses is itself a Horn clause, and the resolvent of a goal clause and a definite clause is a goal clause. These properties of Horn clauses can lead to greater efficiency of proving a theorem: the goal clause is the negation of this theorem; see Goal clause in the above table. Intuitively, if we wish to prove φ, we assume ¬φ (the goal) and check whether such assumption leads to a contradiction. If so, then φ must hold. This way, a mechanical proving tool needs to maintain only one set of formulas (assumptions), rather than two sets (assumptions and (sub)goals).
Propositional Horn clauses are also of interest in computational complexity . The problem of finding truth-value assignments to make a conjunction of propositional Horn clauses true is known as HORNSAT . This problem is P-complete and solvable in linear time . [ 6 ] In contrast, the unrestricted Boolean satisfiability problem is an NP-complete problem.
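The forward-chaining idea behind HORNSAT can be sketched in a few lines. The encoding and names below are illustrative, and this naive fixed-point loop is quadratic rather than the counter-based linear-time algorithm: each clause is a pair (head, body) for the implication body → head, with head = None for a goal clause.

```python
def horn_sat(clauses):
    """Decide satisfiability of a set of propositional Horn clauses.

    Each clause is (head, body): `head` is the single positive literal
    (None for a goal clause) and `body` is a set of atoms, read as the
    implication body -> head.  Returns the set of atoms forced true
    (the minimal model of the definite clauses), or None if unsatisfiable.
    """
    true_atoms = set()
    changed = True
    while changed:
        changed = False
        for head, body in clauses:
            if body <= true_atoms:          # every premise already derived
                if head is None:            # a goal clause fires: contradiction
                    return None
                if head not in true_atoms:  # fire the definite clause
                    true_atoms.add(head)
                    changed = True
    return true_atoms
```

For example, the set {p, p → q, ¬q ∨ ¬r} is satisfiable with minimal model {p, q}, since r is never derived, while {p, ¬p} is unsatisfiable.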
In universal algebra , definite Horn clauses are generally called quasi-identities ; classes of algebras definable by a set of quasi-identities are called quasivarieties and enjoy some of the good properties of the more restrictive notion of a variety , i.e., an equational class. [ 7 ] From the model-theoretical point of view, Horn sentences are important since they are exactly (up to logical equivalence) those sentences preserved under reduced products ; in particular, they are preserved under direct products . On the other hand, there are sentences that are not Horn but are nevertheless preserved under arbitrary direct products. [ 8 ]
Horn clauses are also the basis of logic programming , where it is common to write definite clauses in the form of an implication :
In fact, the resolution of a goal clause with a definite clause to produce a new goal clause is the basis of the SLD resolution inference rule, used in implementation of the logic programming language Prolog .
In logic programming, a definite clause behaves as a goal-reduction procedure. For example, the Horn clause written above behaves as the procedure:
To emphasize this reverse use of the clause, it is often written in the reverse form:
In Prolog this is written as:
In logic programming, a goal clause, which has the logical form
represents the negation of a problem to be solved. The problem itself is an existentially quantified conjunction of positive literals:
The Prolog notation does not have explicit quantifiers and is written in the form:
This notation is ambiguous in the sense that it can be read either as a statement of the problem or as a statement of the denial of the problem. However, both readings are correct. In both cases, solving the problem amounts to deriving the empty clause. In Prolog notation this is equivalent to deriving:
If the top-level goal clause is read as the denial of the problem, then the empty clause represents false and the proof of the empty clause is a refutation of the denial of the problem. If the top-level goal clause is read as the problem itself, then the empty clause represents true , and the proof of the empty clause is a proof that the problem has a solution.
The solution of the problem is a substitution of terms for the variables X in the top-level goal clause, which can be extracted from the resolution proof. Used in this way, goal clauses are similar to conjunctive queries in relational databases , and Horn clause logic is equivalent in computational power to a universal Turing machine .
Van Emden and Kowalski (1976) investigated the model-theoretic properties of Horn clauses in the context of logic programming, showing that every set of definite clauses D has a unique minimal model M . An atomic formula A is logically implied by D if and only if A is true in M . It follows that a problem P represented by an existentially quantified conjunction of positive literals is logically implied by D if and only if P is true in M . The minimal model semantics of Horn clauses is the basis for the stable model semantics of logic programs. [ 9 ] | https://en.wikipedia.org/wiki/Horn_logic |
In mathematics and computer science , Horner's method (or Horner's scheme ) is an algorithm for polynomial evaluation . Although named after William George Horner , this method is much older, as it has been attributed to Joseph-Louis Lagrange by Horner himself, and can be traced back many hundreds of years to Chinese and Persian mathematicians. [ 1 ] After the introduction of computers, this algorithm became fundamental for computing efficiently with polynomials.
The algorithm is based on Horner's rule , in which a polynomial is written in nested form : a 0 + a 1 x + a 2 x 2 + a 3 x 3 + ⋯ + a n x n = a 0 + x ( a 1 + x ( a 2 + x ( a 3 + ⋯ + x ( a n − 1 + x a n ) ⋯ ) ) ) . {\displaystyle {\begin{aligned}&a_{0}+a_{1}x+a_{2}x^{2}+a_{3}x^{3}+\cdots +a_{n}x^{n}\\={}&a_{0}+x{\bigg (}a_{1}+x{\Big (}a_{2}+x{\big (}a_{3}+\cdots +x(a_{n-1}+x\,a_{n})\cdots {\big )}{\Big )}{\bigg )}.\end{aligned}}}
This allows the evaluation of a polynomial of degree n with only n {\displaystyle n} multiplications and n {\displaystyle n} additions. This is optimal, since there are polynomials of degree n that cannot be evaluated with fewer arithmetic operations. [ 2 ]
Alternatively, Horner's method and Horner–Ruffini method also refers to a method for approximating the roots of polynomials, described by Horner in 1819. It is a variant of the Newton–Raphson method made more efficient for hand calculation by application of Horner's rule. It was widely used until computers came into general use around 1970.
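In modern terms, this root-finding variant is Newton's iteration with p(x) and p′(x) both evaluated by Horner's rule. A minimal sketch, with illustrative names and coefficients listed from the highest power down:

```python
def horner_pair(coeffs, x):
    """Return (p(x), p'(x)) via two interleaved Horner recurrences."""
    p, dp = 0.0, 0.0
    for a in coeffs:
        dp = dp * x + p   # derivative recurrence picks up the running value of p
        p = p * x + a
    return p, dp

def refine_root(coeffs, x, tol=1e-12, max_iter=100):
    """Newton steps x -> x - p(x)/p'(x) from an initial guess x."""
    for _ in range(max_iter):
        p, dp = horner_pair(coeffs, x)
        if abs(p) < tol:
            break
        x -= p / dp
    return x
```

For instance, refine_root([1.0, 0.0, -2.0], 1.5) refines a root of x² − 2 toward √2.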
Given the polynomial p ( x ) = ∑ i = 0 n a i x i = a 0 + a 1 x + a 2 x 2 + a 3 x 3 + ⋯ + a n x n , {\displaystyle p(x)=\sum _{i=0}^{n}a_{i}x^{i}=a_{0}+a_{1}x+a_{2}x^{2}+a_{3}x^{3}+\cdots +a_{n}x^{n},} where a 0 , … , a n {\displaystyle a_{0},\ldots ,a_{n}} are constant coefficients, the problem is to evaluate the polynomial at a specific value x 0 {\displaystyle x_{0}} of x . {\displaystyle x.}
For this, a new sequence of constants is defined recursively as follows:
Then b 0 {\displaystyle b_{0}} is the value of p ( x 0 ) {\displaystyle p(x_{0})} .
To see why this works, the polynomial can be written in the form p ( x ) = a 0 + x ( a 1 + x ( a 2 + x ( a 3 + ⋯ + x ( a n − 1 + x a n ) ⋯ ) ) ) . {\displaystyle p(x)=a_{0}+x{\bigg (}a_{1}+x{\Big (}a_{2}+x{\big (}a_{3}+\cdots +x(a_{n-1}+x\,a_{n})\cdots {\big )}{\Big )}{\bigg )}\ .}
Thus, by iteratively substituting the b i {\displaystyle b_{i}} into the expression, p ( x 0 ) = a 0 + x 0 ( a 1 + x 0 ( a 2 + ⋯ + x 0 ( a n − 1 + b n x 0 ) ⋯ ) ) = a 0 + x 0 ( a 1 + x 0 ( a 2 + ⋯ + x 0 b n − 1 ) ) ⋮ = a 0 + x 0 b 1 = b 0 . {\displaystyle {\begin{aligned}p(x_{0})&=a_{0}+x_{0}{\Big (}a_{1}+x_{0}{\big (}a_{2}+\cdots +x_{0}(a_{n-1}+b_{n}x_{0})\cdots {\big )}{\Big )}\\&=a_{0}+x_{0}{\Big (}a_{1}+x_{0}{\big (}a_{2}+\cdots +x_{0}b_{n-1}{\big )}{\Big )}\\&~~\vdots \\&=a_{0}+x_{0}b_{1}\\&=b_{0}.\end{aligned}}}
Now, it can be proven that:
This expression constitutes Horner's practical application, as it offers a very quick way of determining the outcome of p ( x ) / ( x − x 0 ) {\displaystyle p(x)/(x-x_{0})} , with b 0 {\displaystyle b_{0}} (which is equal to p ( x 0 ) {\displaystyle p(x_{0})} ) being the division's remainder, as is demonstrated by the examples below. If x 0 {\displaystyle x_{0}} is a root of p ( x ) {\displaystyle p(x)} , then b 0 = 0 {\displaystyle b_{0}=0} (meaning the remainder is 0 {\displaystyle 0} ), which means that ( x − x 0 ) {\displaystyle (x-x_{0})} is a factor of p ( x ) {\displaystyle p(x)} .
To find the consecutive b {\displaystyle b} -values, start by determining b n {\displaystyle b_{n}} , which is simply equal to a n {\displaystyle a_{n}} . Then work recursively using the formula b n − 1 = a n − 1 + b n x 0 {\displaystyle b_{n-1}=a_{n-1}+b_{n}x_{0}} until you arrive at b 0 {\displaystyle b_{0}} .
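The recurrence amounts to a single left-to-right fold over the coefficients. A minimal sketch (the function name is illustrative; coefficients are listed from a_n down to a_0):

```python
def horner(coeffs, x0):
    """Evaluate a polynomial at x0 by Horner's rule.

    `coeffs` lists a_n, a_(n-1), ..., a_0, so each loop step performs
    b = b * x0 + a, i.e. the recurrence b_(k-1) = a_(k-1) + b_k * x0.
    """
    b = 0
    for a in coeffs:
        b = b * x0 + a
    return b
```

For the worked example that follows, horner([2, -6, 2, -1], 3) returns 5, using n = 3 multiplications and 3 additions.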
Evaluate f ( x ) = 2 x 3 − 6 x 2 + 2 x − 1 {\displaystyle f(x)=2x^{3}-6x^{2}+2x-1} for x = 3 {\displaystyle x=3} .
We use synthetic division as follows:
The entries in the third row are the sum of those in the first two. Each entry in the second row is the product of the x -value ( 3 in this example) with the third-row entry immediately to the left. The entries in the first row are the coefficients of the polynomial to be evaluated. Then the remainder of f ( x ) {\displaystyle f(x)} on division by x − 3 {\displaystyle x-3} is 5 .
But by the polynomial remainder theorem , we know that the remainder is f ( 3 ) {\displaystyle f(3)} . Thus, f ( 3 ) = 5 {\displaystyle f(3)=5} .
In this example, if a 3 = 2 , a 2 = − 6 , a 1 = 2 , a 0 = − 1 {\displaystyle a_{3}=2,a_{2}=-6,a_{1}=2,a_{0}=-1} we can see that b 3 = 2 , b 2 = 0 , b 1 = 2 , b 0 = 5 {\displaystyle b_{3}=2,b_{2}=0,b_{1}=2,b_{0}=5} , the entries in the third row. So, synthetic division (which was actually invented and published by Ruffini 10 years before Horner's publication) is easier to use; it can be shown to be equivalent to Horner's method.
As a consequence of the polynomial remainder theorem, the entries in the third row are the coefficients of the second-degree polynomial, the quotient of f ( x ) {\displaystyle f(x)} on division by x − 3 {\displaystyle x-3} .
The remainder is 5 . This makes Horner's method useful for polynomial long division .
Divide x 3 − 6 x 2 + 11 x − 6 {\displaystyle x^{3}-6x^{2}+11x-6} by x − 2 {\displaystyle x-2} :
The quotient is x 2 − 4 x + 3 {\displaystyle x^{2}-4x+3} .
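Both worked examples above can be reproduced with a short synthetic-division routine; a sketch with illustrative names, in which the intermediate b-values form the quotient and the final one is the remainder:

```python
def synthetic_division(coeffs, x0):
    """Divide the polynomial with coefficients a_n..a_0 by (x - x0).

    Returns (quotient_coeffs, remainder); by the polynomial remainder
    theorem the remainder equals p(x0).
    """
    row = []
    b = 0
    for a in coeffs:
        b = b * x0 + a       # third-row entry: coefficient plus product
        row.append(b)
    remainder = row.pop()    # the last entry is b_0 = p(x0)
    return row, remainder
```

Here synthetic_division([2, -6, 2, -1], 3) gives ([2, 0, 2], 5), i.e. quotient 2x² + 2 and remainder 5, while synthetic_division([1, -6, 11, -6], 2) gives ([1, -4, 3], 0), i.e. quotient x² − 4x + 3.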
Let f 1 ( x ) = 4 x 4 − 6 x 3 + 3 x − 5 {\displaystyle f_{1}(x)=4x^{4}-6x^{3}+3x-5} and f 2 ( x ) = 2 x − 1 {\displaystyle f_{2}(x)=2x-1} . Divide f 1 ( x ) {\displaystyle f_{1}(x)} by f 2 ( x ) {\displaystyle f_{2}\,(x)} using Horner's method.
The third row is the sum of the first two rows, divided by 2 . Each entry in the second row is the product of 1 with the third-row entry to the left. The answer is f 1 ( x ) f 2 ( x ) = 2 x 3 − 2 x 2 − x + 1 − 4 2 x − 1 . {\displaystyle {\frac {f_{1}(x)}{f_{2}(x)}}=2x^{3}-2x^{2}-x+1-{\frac {4}{2x-1}}.}
Evaluation using the monomial form of a degree n {\displaystyle n} polynomial requires at most n {\displaystyle n} additions and ( n 2 + n ) / 2 {\displaystyle (n^{2}+n)/2} multiplications, if powers are calculated by repeated multiplication and each monomial is evaluated individually. The cost can be reduced to n {\displaystyle n} additions and 2 n − 1 {\displaystyle 2n-1} multiplications by evaluating the powers of x {\displaystyle x} by iteration.
If numerical data are represented in terms of digits (or bits), then the naive algorithm also entails storing approximately 2 n {\displaystyle 2n} times the number of bits of x {\displaystyle x} : the evaluated polynomial has approximate magnitude x n {\displaystyle x^{n}} , and one must also store x n {\displaystyle x^{n}} itself. By contrast, Horner's method requires only n {\displaystyle n} additions and n {\displaystyle n} multiplications, and its storage requirements are only n {\displaystyle n} times the number of bits of x {\displaystyle x} . Alternatively, Horner's method can be computed with n {\displaystyle n} fused multiply–adds . Horner's method can also be extended to evaluate the first k {\displaystyle k} derivatives of the polynomial with k n {\displaystyle kn} additions and multiplications. [ 3 ]
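The k-derivative extension just mentioned can be sketched as a triangular sweep that accumulates the Taylor coefficients of p about x0 (the implementation and names are illustrative):

```python
from math import factorial

def horner_derivatives(coeffs, x0, k):
    """Evaluate p(x0) and its first k derivatives in one sweep.

    `coeffs` are a_n..a_0.  During the sweep, vals[j] accumulates the
    Taylor coefficient p^(j)(x0) / j!, so the derivatives are recovered
    by multiplying by j! at the end.  Cost is about k*n extra additions
    and multiplications beyond plain Horner.
    """
    vals = [0] * (k + 1)                  # vals[0] tracks p itself
    for a in coeffs:
        for j in range(k, 0, -1):         # update higher orders first
            vals[j] = vals[j] * x0 + vals[j - 1]
        vals[0] = vals[0] * x0 + a
    return [v * factorial(j) for j, v in enumerate(vals)]
```

For p(x) = 2x³ − 6x² + 2x − 1 at x0 = 3 this yields [5, 20, 24]: p(3) = 5, p′(3) = 20, p″(3) = 24.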
Horner's method is optimal, in the sense that any algorithm to evaluate an arbitrary polynomial must use at least as many operations. Alexander Ostrowski proved in 1954 that the number of additions required is minimal. [ 4 ] Victor Pan proved in 1966 that the number of multiplications is minimal. [ 5 ] However, when x {\displaystyle x} is a matrix, Horner's method is not optimal .
This assumes that the polynomial is evaluated in monomial form and no preconditioning of the representation is allowed, which makes sense if the polynomial is evaluated only once. However, if preconditioning is allowed and the polynomial is to be evaluated many times, then faster algorithms are possible . They involve a transformation of the representation of the polynomial. In general, a degree- n {\displaystyle n} polynomial can be evaluated using only ⌊ n /2 ⌋ +2 multiplications and n {\displaystyle n} additions. [ 6 ]
A disadvantage of Horner's rule is that all of the operations are sequentially dependent , so it is not possible to take advantage of instruction level parallelism on modern computers. In most applications where the efficiency of polynomial evaluation matters, many low-order polynomials are evaluated simultaneously (for each pixel or polygon in computer graphics, or for each grid square in a numerical simulation), so it is not necessary to find parallelism within a single polynomial evaluation.
If, however, one is evaluating a single polynomial of very high order, it may be useful to break it up as follows: p ( x ) = ∑ i = 0 n a i x i = a 0 + a 1 x + a 2 x 2 + a 3 x 3 + ⋯ + a n x n = ( a 0 + a 2 x 2 + a 4 x 4 + ⋯ ) + ( a 1 x + a 3 x 3 + a 5 x 5 + ⋯ ) = ( a 0 + a 2 x 2 + a 4 x 4 + ⋯ ) + x ( a 1 + a 3 x 2 + a 5 x 4 + ⋯ ) = ∑ i = 0 ⌊ n / 2 ⌋ a 2 i x 2 i + x ∑ i = 0 ⌊ n / 2 ⌋ a 2 i + 1 x 2 i = p 0 ( x 2 ) + x p 1 ( x 2 ) . {\displaystyle {\begin{aligned}p(x)&=\sum _{i=0}^{n}a_{i}x^{i}\\[1ex]&=a_{0}+a_{1}x+a_{2}x^{2}+a_{3}x^{3}+\cdots +a_{n}x^{n}\\[1ex]&=\left(a_{0}+a_{2}x^{2}+a_{4}x^{4}+\cdots \right)+\left(a_{1}x+a_{3}x^{3}+a_{5}x^{5}+\cdots \right)\\[1ex]&=\left(a_{0}+a_{2}x^{2}+a_{4}x^{4}+\cdots \right)+x\left(a_{1}+a_{3}x^{2}+a_{5}x^{4}+\cdots \right)\\[1ex]&=\sum _{i=0}^{\lfloor n/2\rfloor }a_{2i}x^{2i}+x\sum _{i=0}^{\lfloor n/2\rfloor }a_{2i+1}x^{2i}\\[1ex]&=p_{0}(x^{2})+xp_{1}(x^{2}).\end{aligned}}}
More generally, the summation can be broken into k parts: p ( x ) = ∑ i = 0 n a i x i = ∑ j = 0 k − 1 x j ∑ i = 0 ⌊ n / k ⌋ a k i + j x k i = ∑ j = 0 k − 1 x j p j ( x k ) {\displaystyle p(x)=\sum _{i=0}^{n}a_{i}x^{i}=\sum _{j=0}^{k-1}x^{j}\sum _{i=0}^{\lfloor n/k\rfloor }a_{ki+j}x^{ki}=\sum _{j=0}^{k-1}x^{j}p_{j}(x^{k})} where the inner summations may be evaluated using separate parallel instances of Horner's method. This requires slightly more operations than the basic Horner's method, but allows k -way SIMD execution of most of them. Modern compilers generally evaluate polynomials this way when advantageous, although for floating-point calculations this requires enabling (unsafe) reassociative math [ citation needed ] . Another use of breaking a polynomial down this way is to calculate steps of the inner summations in an alternating fashion to take advantage of instruction-level parallelism .
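The k-part decomposition can be sketched in the same style (illustrative only; an actual SIMD implementation would evaluate the inner polynomials in parallel hardware lanes, which this plain Python model does not do):

```python
def horner(coeffs, x):
    # standard Horner evaluation, a_n folded down to a_0
    result = 0
    for a in reversed(coeffs):
        result = result * x + a
    return result

def k_way_eval(coeffs, x, k):
    """Evaluate p(x) = sum_j x**j * p_j(x**k), where p_j collects every
    k-th coefficient starting at index j. Each inner Horner evaluation
    is independent, so the k of them could run in separate SIMD lanes."""
    xk = x ** k
    return sum(x ** j * horner(coeffs[j::k], xk) for j in range(k))
```

With k = 1 this reduces to plain Horner evaluation; any k gives the same value, only the grouping of operations changes.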
Horner's method is a fast, code-efficient method for multiplication and division of binary numbers on a microcontroller with no hardware multiplier . One of the binary numbers to be multiplied is represented as a trivial polynomial, where (using the above notation) a i = 1 {\displaystyle a_{i}=1} , and x = 2 {\displaystyle x=2} . Then, x (or x to some power) is repeatedly factored out. In this binary numeral system (base 2), x = 2 {\displaystyle x=2} , so powers of 2 are repeatedly factored out.
For example, to find the product of the two numbers 0.15625 and m : ( 0.15625 ) m = ( 0.00101 b ) m = ( 2 − 3 + 2 − 5 ) m = ( 2 − 3 ) m + ( 2 − 5 ) m = 2 − 3 ( m + ( 2 − 2 ) m ) = 2 − 3 ( m + 2 − 2 ( m ) ) . {\displaystyle {\begin{aligned}(0.15625)m&=(0.00101_{b})m=\left(2^{-3}+2^{-5}\right)m=\left(2^{-3}\right)m+\left(2^{-5}\right)m\\&=2^{-3}\left(m+\left(2^{-2}\right)m\right)=2^{-3}\left(m+2^{-2}(m)\right).\end{aligned}}}
The same scheme can be used to find the product of two binary numbers, d and m .
In general, for a binary number with bit values ( d 3 d 2 d 1 d 0 {\displaystyle d_{3}d_{2}d_{1}d_{0}} ) the product is ( d 3 2 3 + d 2 2 2 + d 1 2 1 + d 0 2 0 ) m = d 3 2 3 m + d 2 2 2 m + d 1 2 1 m + d 0 2 0 m . {\displaystyle (d_{3}2^{3}+d_{2}2^{2}+d_{1}2^{1}+d_{0}2^{0})m=d_{3}2^{3}m+d_{2}2^{2}m+d_{1}2^{1}m+d_{0}2^{0}m.} At this stage in the algorithm, terms with zero-valued coefficients must be dropped, so that only binary coefficients equal to one are counted; thus the problem of multiplication or division by zero never arises, despite the apparent divisions in the factored equation: = d 0 ( m + 2 d 1 d 0 ( m + 2 d 2 d 1 ( m + 2 d 3 d 2 ( m ) ) ) ) . {\displaystyle =d_{0}\left(m+2{\frac {d_{1}}{d_{0}}}\left(m+2{\frac {d_{2}}{d_{1}}}\left(m+2{\frac {d_{3}}{d_{2}}}(m)\right)\right)\right).}
The denominators all equal one (or the term is absent), so this reduces to = d 0 ( m + 2 d 1 ( m + 2 d 2 ( m + 2 d 3 ( m ) ) ) ) , {\displaystyle =d_{0}(m+2{d_{1}}(m+2{d_{2}}(m+2{d_{3}}(m)))),} or equivalently (consistent with the method described above) = d 3 ( m + 2 − 1 d 2 ( m + 2 − 1 d 1 ( m + d 0 ( m ) ) ) ) . {\displaystyle =d_{3}(m+2^{-1}{d_{2}}(m+2^{-1}{d_{1}}(m+{d_{0}}(m)))).}
In binary (base-2) arithmetic, multiplication by a power of 2 is merely a register shift operation. Thus, multiplying by 2 is computed in base-2 by an arithmetic shift . A factor of 2 −1 is a right arithmetic shift , a factor of 2 0 = 1 (the multiplicative identity element ) results in no operation, and a factor of 2 1 results in a left arithmetic shift.
The multiplication product can now be quickly calculated using only arithmetic shift operations, addition and subtraction.
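The shift-and-add scheme can be sketched as follows — an illustrative Python model of such a routine (the function name is an assumption), processing the multiplier's bits from most significant to least, per the nested form above:

```python
def shift_add_multiply(d, m):
    """Compute d * m for non-negative integers using only shifts and adds:
    the accumulator is doubled (left-shifted) once per bit of d, and m is
    added whenever the current bit is 1 -- Horner's rule in base 2."""
    result = 0
    for i in range(d.bit_length() - 1, -1, -1):
        result <<= 1              # multiply the running total by 2
        if (d >> i) & 1:          # bit d_i is set: add the multiplicand
            result += m
    return result
```

For d = 13 = 1101b the loop computes ((m·2 + m)·2 + 0)·2 + m = 13m, touching only the one-bits.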
The method is particularly fast on processors supporting a single-instruction shift-and-addition-accumulate. Compared to a C floating-point library, Horner's method sacrifices some accuracy; however, it is nominally 13 times faster (16 times faster when the " canonical signed digit " (CSD) form is used) and uses only 20% of the code space. [ 7 ]
Horner's method can be used to convert between different positional numeral systems – in which case x is the base of the number system, and the a i coefficients are the digits of the base- x representation of a given number – and can also be used if x is a matrix , in which case the gain in computational efficiency is even greater. However, for such cases faster methods are known. [ 8 ]
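Base conversion by Horner's rule amounts to folding the digits into an accumulator; a minimal sketch (the function name is an assumption):

```python
def digits_to_int(digits, base):
    """Convert a digit sequence (most significant digit first) in the
    given base to an integer, via value = value * base + digit."""
    value = 0
    for d in digits:
        value = value * base + d
    return value
```

For example, the binary digits 1, 1, 0, 1 yield ((1·2 + 1)·2 + 0)·2 + 1 = 13.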
Using the long division algorithm in combination with Newton's method , it is possible to approximate the real roots of a polynomial. The algorithm works as follows. Given a polynomial p n ( x ) {\displaystyle p_{n}(x)} of degree n {\displaystyle n} with zeros z n < z n − 1 < ⋯ < z 1 , {\displaystyle z_{n}<z_{n-1}<\cdots <z_{1},} make some initial guess x 0 {\displaystyle x_{0}} such that z 1 < x 0 {\displaystyle z_{1}<x_{0}} . Now iterate the following two steps: first, use Newton's method to find the largest zero z 1 {\displaystyle z_{1}} of p n ( x ) {\displaystyle p_{n}(x)} starting from the guess x 0 {\displaystyle x_{0}} ; second, use Horner's method (synthetic division) to divide p n ( x ) {\displaystyle p_{n}(x)} by ( x − z 1 ) {\displaystyle (x-z_{1})} , obtaining the deflated polynomial p n − 1 ( x ) {\displaystyle p_{n-1}(x)} whose largest zero is then sought with z 1 {\displaystyle z_{1}} as the new initial guess.
These two steps are repeated until all real zeros are found for the polynomial. If the approximated zeros are not precise enough, the obtained values can be used as initial guesses for Newton's method but using the full polynomial rather than the reduced polynomials. [ 9 ]
Consider the polynomial p 6 ( x ) = ( x + 8 ) ( x + 5 ) ( x + 3 ) ( x − 2 ) ( x − 3 ) ( x − 7 ) {\displaystyle p_{6}(x)=(x+8)(x+5)(x+3)(x-2)(x-3)(x-7)} which can be expanded to p 6 ( x ) = x 6 + 4 x 5 − 72 x 4 − 214 x 3 + 1127 x 2 + 1602 x − 5040. {\displaystyle p_{6}(x)=x^{6}+4x^{5}-72x^{4}-214x^{3}+1127x^{2}+1602x-5040.}
From the above we know that the largest root of this polynomial is 7 so we are able to make an initial guess of 8. Using Newton's method the first zero of 7 is found as shown in black in the figure to the right. Next p ( x ) {\displaystyle p(x)} is divided by ( x − 7 ) {\displaystyle (x-7)} to obtain p 5 ( x ) = x 5 + 11 x 4 + 5 x 3 − 179 x 2 − 126 x + 720 {\displaystyle p_{5}(x)=x^{5}+11x^{4}+5x^{3}-179x^{2}-126x+720} which is drawn in red in the figure to the right. Newton's method is used to find the largest zero of this polynomial with an initial guess of 7. The largest zero of this polynomial which corresponds to the second largest zero of the original polynomial is found at 3 and is circled in red. The degree 5 polynomial is now divided by ( x − 3 ) {\displaystyle (x-3)} to obtain p 4 ( x ) = x 4 + 14 x 3 + 47 x 2 − 38 x − 240 {\displaystyle p_{4}(x)=x^{4}+14x^{3}+47x^{2}-38x-240} which is shown in yellow. The zero for this polynomial is found at 2 again using Newton's method and is circled in yellow. Horner's method is now used to obtain p 3 ( x ) = x 3 + 16 x 2 + 79 x + 120 {\displaystyle p_{3}(x)=x^{3}+16x^{2}+79x+120} which is shown in green and found to have a zero at −3. This polynomial is further reduced to p 2 ( x ) = x 2 + 13 x + 40 {\displaystyle p_{2}(x)=x^{2}+13x+40} which is shown in blue and yields a zero of −5. The final root of the original polynomial may be found by either using the final zero as an initial guess for Newton's method, or by reducing p 2 ( x ) {\displaystyle p_{2}(x)} and solving the linear equation . As can be seen, the expected roots of −8, −5, −3, 2, 3, and 7 were found.
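The procedure traced above can be sketched end to end. This is an illustrative implementation (names such as `real_roots` are assumptions); it presumes simple, well-separated real roots, so Newton's method never encounters a zero derivative:

```python
def horner_with_derivative(coeffs, x):
    """One Horner pass yielding p(x) and p'(x)."""
    p, dp = 0.0, 0.0
    for a in reversed(coeffs):
        dp = dp * x + p
        p = p * x + a
    return p, dp

def newton(coeffs, x, iters=60):
    """Newton's method from x; starting above all zeros of a polynomial
    with real roots, it converges to the largest zero."""
    for _ in range(iters):
        p, dp = horner_with_derivative(coeffs, x)
        step = p / dp
        x -= step
        if abs(step) <= 1e-12 * max(1.0, abs(x)):
            break
    return x

def deflate(coeffs, r):
    """Synthetic division by (x - r); returns the quotient's coefficients."""
    out, carry = [], 0.0
    for a in reversed(coeffs):   # a_n down to a_0
        carry = carry * r + a
        out.append(carry)
    out.pop()                    # discard the remainder p(r), which is ~0
    return list(reversed(out))

def real_roots(coeffs, x0):
    """Find all zeros by alternating Newton's method with deflation."""
    roots = []
    while len(coeffs) > 2:
        r = newton(coeffs, x0)
        roots.append(r)
        coeffs = deflate(coeffs, r)
        x0 = r                   # the next zero lies below the last one
    roots.append(-coeffs[0] / coeffs[1])  # final linear factor
    return roots
```

Applied to the example coefficients [−5040, 1602, 1127, −214, −72, 4, 1] with initial guess 8, this recovers the zeros near 7, 3, 2, −3, −5 and −8 in that order.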
Horner's method can be modified to compute the divided difference ( p ( y ) − p ( x ) ) / ( y − x ) . {\displaystyle (p(y)-p(x))/(y-x).} Given the polynomial (as before) p ( x ) = ∑ i = 0 n a i x i = a 0 + a 1 x + a 2 x 2 + a 3 x 3 + ⋯ + a n x n , {\displaystyle p(x)=\sum _{i=0}^{n}a_{i}x^{i}=a_{0}+a_{1}x+a_{2}x^{2}+a_{3}x^{3}+\cdots +a_{n}x^{n},} proceed as follows [ 10 ] b n = a n , d n = b n , b n − 1 = a n − 1 + b n x , d n − 1 = b n − 1 + d n y , ⋮ ⋮ b 1 = a 1 + b 2 x , d 1 = b 1 + d 2 y , b 0 = a 0 + b 1 x . {\displaystyle {\begin{aligned}b_{n}&=a_{n},&\quad d_{n}&=b_{n},\\b_{n-1}&=a_{n-1}+b_{n}x,&\quad d_{n-1}&=b_{n-1}+d_{n}y,\\&{}\ \ \vdots &\quad &{}\ \ \vdots \\b_{1}&=a_{1}+b_{2}x,&\quad d_{1}&=b_{1}+d_{2}y,\\b_{0}&=a_{0}+b_{1}x.\end{aligned}}}
At completion, we have p ( x ) = b 0 , p ( y ) − p ( x ) y − x = d 1 , p ( y ) = b 0 + ( y − x ) d 1 . {\displaystyle {\begin{aligned}p(x)&=b_{0},\\{\frac {p(y)-p(x)}{y-x}}&=d_{1},\\p(y)&=b_{0}+(y-x)d_{1}.\end{aligned}}} This computation of the divided difference is subject to less round-off error than evaluating p ( x ) {\displaystyle p(x)} and p ( y ) {\displaystyle p(y)} separately, particularly when x ≈ y {\displaystyle x\approx y} . Substituting y = x {\displaystyle y=x} in this method gives d 1 = p ′ ( x ) {\displaystyle d_{1}=p'(x)} , the derivative of p ( x ) {\displaystyle p(x)} .
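The paired b/d recurrences transcribe directly into code. An illustrative sketch (function name assumed; it requires a polynomial of degree at least 1):

```python
def horner_divided_difference(coeffs, x, y):
    """Evaluate p(x) and (p(y) - p(x)) / (y - x) in a single pass.
    coeffs is [a_0, ..., a_n]; calling with y == x yields p'(x)."""
    b = coeffs[-1]                      # b_n = a_n
    d = b                               # d_n = b_n
    for a in reversed(coeffs[1:-1]):    # a_{n-1} down to a_1
        b = a + b * x                   # b_k = a_k + b_{k+1} * x
        d = b + d * y                   # d_k = b_k + d_{k+1} * y
    p_x = coeffs[0] + b * x             # b_0 = a_0 + b_1 * x
    return p_x, d                       # d holds d_1
```

For p(x) = x³ − 2x + 1, the call with x = 2, y = 3 returns p(2) = 5 and the divided difference (22 − 5)/(3 − 2) = 17; with y = x = 2 the second value is p′(2) = 10.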
Horner's paper, titled "A new method of solving numerical equations of all orders, by continuous approximation", [ 12 ] was read before the Royal Society of London, at its meeting on July 1, 1819, with a sequel in 1823. [ 12 ] Horner's paper in Part II of Philosophical Transactions of the Royal Society of London for 1819 was warmly and expansively welcomed by a reviewer in the issue of The Monthly Review: or, Literary Journal for April, 1820; in comparison, a technical paper by Charles Babbage is dismissed curtly in this review. The sequence of reviews in The Monthly Review for September, 1821, concludes that Holdred was the first person to discover a direct and general practical solution of numerical equations. Fuller [ 13 ] showed that the method in Horner's 1819 paper differs from what afterwards became known as "Horner's method" and that in consequence the priority for this method should go to Holdred (1820).
Unlike his English contemporaries, Horner drew on the Continental literature, notably the work of Arbogast . Horner is also known to have made a close reading of John Bonneycastle's book on algebra, though he neglected the work of Paolo Ruffini .
Although Horner is credited with making the method accessible and practical, it was known long before Horner. In reverse chronological order, Horner's method was already known to:
Qin Jiushao , in his Shu Shu Jiu Zhang ( Mathematical Treatise in Nine Sections ; 1247), presents a portfolio of methods of Horner-type for solving polynomial equations, which was based on earlier works of the 11th century Song dynasty mathematician Jia Xian ; for example, one method is specifically suited to bi-quintics, of which Qin gives an instance, in keeping with the then Chinese custom of case studies. Yoshio Mikami in Development of Mathematics in China and Japan (Leipzig 1913) wrote:
"... who can deny the fact of Horner's illustrious process being used in China at least nearly six long centuries earlier than in Europe ... We of course don't intend in any way to ascribe Horner's invention to a Chinese origin, but the lapse of time sufficiently makes it not altogether impossible that the Europeans could have known of the Chinese method in a direct or indirect way." [ 20 ]
Ulrich Libbrecht concluded: It is obvious that this procedure is a Chinese invention ... the method was not known in India . He suggested that Fibonacci probably learned of it from the Arabs, who perhaps borrowed it from the Chinese. [ 21 ] The extraction of square and cube roots along similar lines is already discussed by Liu Hui in connection with Problems IV.16 and 22 in Jiu Zhang Suan Shu , while Wang Xiaotong in the 7th century supposes his readers can solve cubics by an approximation method described in his book Jigu Suanjing . | https://en.wikipedia.org/wiki/Horner's_method
The Horner–Wadsworth–Emmons (HWE) reaction is a chemical reaction used in organic chemistry , in which stabilized phosphonate carbanions react with aldehydes (or ketones ) to produce predominantly E- alkenes . [ 1 ]
In 1958, Leopold Horner published a modified Wittig reaction using phosphonate-stabilized carbanions. [ 2 ] [ 3 ] William S. Wadsworth and William D. Emmons further defined the reaction. [ 4 ] [ 5 ]
In contrast to the phosphonium ylides used in the Wittig reaction , phosphonate-stabilized carbanions are more nucleophilic but less basic. Moreover, phosphonate-stabilized carbanions can be alkylated. Unlike with phosphonium ylides, the dialkyl phosphate salt byproduct is easily removed by aqueous extraction .
Several reviews have been published. [ 6 ] [ 7 ] [ 8 ] [ 9 ] [ 10 ] [ 11 ]
The Horner–Wadsworth–Emmons reaction begins with the deprotonation of the phosphonate to give the phosphonate carbanion 1 . Nucleophilic addition of the carbanion onto the aldehyde 2 (or ketone), producing 3a or 3b , is the rate-limiting step . [ 12 ] If R 2 = H, then intermediates 3a and 4a and intermediates 3b and 4b can interconvert with each other. [ 13 ] The final elimination of oxaphosphetanes 4a and 4b yields ( E )-alkene 5 and ( Z )-alkene 6 , with the by-product being a dialkyl phosphate .
The ratio of alkene isomers 5 and 6 is dependent upon the stereochemical outcome of the initial carbanion addition and upon the ability of the intermediates to equilibrate .
The electron-withdrawing group (EWG) alpha to the phosphonate is necessary for the final elimination to occur. In the absence of an electron-withdrawing group, the final products are the β-hydroxyphosphonates 3a and 3b . [ 14 ] However, these β-hydroxyphosphonates can be transformed to alkenes by reaction with diisopropylcarbodiimide . [ 15 ]
The Horner–Wadsworth–Emmons reaction favours the formation of ( E )-alkenes. In general, the more equilibration amongst intermediates, the higher the selectivity for ( E )-alkene formation.
Thompson and Heathcock have performed a systematic study of the reaction of methyl 2-(dimethoxyphosphoryl)acetate with various aldehydes. [ 16 ] While each effect was small, they had a cumulative effect making it possible to modify the stereochemical outcome without modifying the structure of the phosphonate. They found greater ( E )-stereoselectivity with the following conditions:
In a separate study, it was found that bulky phosphonate and bulky electron-withdrawing groups enhance E-alkene selectivity.
The steric bulk of the phosphonate and electron-withdrawing groups plays a critical role in the reaction of α-branched phosphonates with aliphatic aldehydes. [ 17 ]
Aromatic aldehydes produce almost exclusively ( E )-alkenes. When ( Z )-alkenes from aromatic aldehydes are needed, the Still–Gennari modification (see below) can be used.
The stereoselectivity of the Horner–Wadsworth–Emmons reaction of ketones is poor to modest.
Since many substrates are not stable to sodium hydride , several procedures have been developed using milder bases. Masamune and Roush have developed mild conditions using lithium chloride and DBU . [ 18 ] Rathke extended this to lithium or magnesium halides with triethylamine . [ 19 ] Several other bases have been found effective. [ 20 ] [ 21 ] [ 22 ]
W. Clark Still and C. Gennari have developed conditions that give Z -alkenes with excellent stereoselectivity. [ 23 ] [ 24 ] Using phosphonates with electron-withdrawing groups (trifluoroethyl [ 25 ] ) together with strongly dissociating conditions ( KHMDS and 18-crown-6 in THF ), nearly exclusive Z-alkene production can be achieved.
Ando has suggested that the use of electron-deficient phosphonates accelerates the elimination of the oxaphosphetane intermediates. [ 26 ] | https://en.wikipedia.org/wiki/Horner–Wadsworth–Emmons_reaction |
William Horrocks , a cotton manufacturer of Stockport , built an early power loom in 1803, based on the principles of Cartwright but including some significant improvements to cloth take-up and, in 1813, to battening.
Edmund Cartwright built and patented a power loom in 1785, and it was this loom that was adopted by the nascent cotton industry in England. The silk loom made by Jacques Vaucanson in 1745 operated on the same principles but was not developed further. The invention of the flying shuttle by John Kay was critical to the development of a commercially successful power loom. [ 1 ] Cartwright's loom was impractical, but the ideas behind it were developed by numerous inventors in the Manchester area of England.
Cartwright's loom was little used; he established a power-loom factory at Doncaster in 1787, which closed after a few years. Grimshaw's factory at Manchester, which opened in 1790 and contained twenty-four Cartwright looms, was burnt down by protesting hand-loom weavers. [ 2 ] It is speculated that Cartwright's failure was due to the loom's wooden frame and crude construction, Cartwright's inexperience in business, and the lack of an adequate method of preparing the warp.
To prepare the loom, the warp threads needed to be strengthened by applying a wet size (a process called dressing) and then wound onto a beam or roller that fitted on the back of the loom (a process called warping or beaming). These processes were time-consuming; if dressing took place on the loom, the loom had to remain idle until the threads dried. Because of this, the economics of weaving still favoured the handloom weaver. [ 2 ] It was William Radcliffe , also of Stockport, who introduced the dressing frame in 1803.
William Horrocks secured several patents to improve the loom. The Horrocks loom , introduced in 1803, featured an improved method of taking up the cloth onto the beam once it was woven. It had a metal frame, and was described as neat and compact, so that hundreds could be at work in a single room. [ 3 ] As the warp was now dressed away from the loom, the Horrocks loom could be run continuously, being stopped only to piece broken threads and to replenish the weft in the shuttle. [ a ] In the vicinity of Stockport, approximately 2,000 looms were in use by 1818, and by 1821 there were 32 factories containing 5,732 looms. [ 5 ] According to an 1830 report to the British House of Commons , by 1820 there were an estimated 14,150 power looms in England and Scotland combined; that number increased to 55,500 by 1829. [ 6 ] However, power looms were still outnumbered 4 to 1 by handlooms. [ 2 ] Official figures (the Factories Inspectors' count) were first compiled in 1835; they showed 108,189 power looms used for cotton, 1,713 for silk, 2,330 for wool and 2,846 for worsted , but not all of these would have been Horrocks looms; the 1830 Roberts Loom (based on 1822 patents) had become more popular. [ 7 ]
| https://en.wikipedia.org/wiki/Horrocks_loom
Intel "Horse Ridge" is a cryogenic control chip that was presented at the International Solid-State Circuits Conference 2020 in San Francisco . [ 1 ] [ 2 ]
Horse Ridge is based on Intel's 22nm FFL ( FinFET Low Power) CMOS technology. [ 3 ] [ 4 ] Intel and QuTech published a study in Nature in which they demonstrate that they were able to operate qubits at temperatures above 1 kelvin (−272.15 degrees Celsius ). [ 5 ]
In December 2020, Intel released Horse Ridge II, adding enhanced capabilities and higher levels of integration for sophisticated control of the quantum system . New features include the ability to manipulate and read qubit states, driving up to 16 spin qubits with a direct digital synthesis (DDS) architecture, and the ability to control the potentials of the multiple gates needed to correlate multiple qubits, using 22 high-speed digital-to-analog converters (DACs).
Horse Ridge II is also implemented using Intel's low-power 22 nm FinFET technology (22FFL), and its operation has been tested at a temperature of 4 kelvin. [ 6 ]
| https://en.wikipedia.org/wiki/Horse_Ridge_(chip)
Horse cloning is the process of obtaining a horse with genes identical to that of another horse, using an artificial fertilization technique. Interest in this technique began in the 1980s. The Haflinger foal Prometea , the first living cloned horse, was obtained in 2003 in an Italian laboratory. Over the years, the technique has improved. It is mainly used on high-performance but castrated or infertile animals, for reproductive cloning. These horses are then used as breeding stock. Horse cloning is only mastered by a handful of laboratories worldwide, notably in France, Argentina, North America and China. The technique is limited by the fact that some differences remain between the original and its clone, due to the influence of mitochondrial DNA .
Reproductive cloning of the horses Pieraz and Quidam de Revel began in 2005. The International Federation for Equestrian Sports ( FEI , from its French name) decided to ban clones from competition in 2007, before authorizing them in 2012. A few clones are used in equestrian sports , winning major titles such as the Argentine polo championship in 2013. Nevertheless, the number of cloned horses grows every year. The practice is highly controversial, particularly for bioethical reasons , since it involves a high failure rate among embryos . It also raises questions about the management of horses' genetic diversity , the future of the horse-breeding profession, and the emergence of new genetic disorders or fraud.
The horse was the seventh species ever to be cloned. [ 1 ]
Horse cloning has undergone rapid qualitative and quantitative evolution. [ 2 ] While Italian professor Cesare Galli believes that horse cloning has aroused less interest than that of other large mammals, [ 3 ] other scientists believe that the high commercial value attained by some horses created immediate interest, unlike in the case of less valuable agricultural animal species. [ 4 ] Equine cloning owes much of its development to the Belgian stud farm of Zangersheide , one of the pioneers of artificial insemination and embryo transfer . According to Éric Palmer, a French biologist specializing in horse reproduction (who introduced ultrasound for mares and produced the first foal by in vitro fertilization ), [ 5 ] the way for the use of cloning was prepared in the 1980s by the veterinary surgeon Dr. Leo de Backer, who was in contact with some of the world's leading sports stables. According to Palmer, those interested included Alwin Schockemöhle , Jan Tops , Thomas Fruhman, John and Michael Whitaker , Willi Melliger , Jean-Claude Van Geensbergen and La Silla (in Mexico), among many others. [ 6 ] The value of cloning horses of high lineage was recognized as early as 1998, with the Westhusin study. [ 7 ] Research to this end was publicly announced in 2001. [ 8 ] That same year, with the support of Genopole , Éric Palmer founded Cryozootech, a company dedicated to preserving the genes of horses with exceptional performance, with a view to future cloning. The horse was not the first large mammal to be cloned: Dolly the sheep and other animals preceded it, making it the seventh mammal to be cloned. [ 9 ]
The birth of three cloned mules in the United States on May 4, 2003, came just before that of the first horse. [ 10 ] The first successful attempt to produce a viable clone was made by the Italian laboratory LTR-CIZ, which produced Prometea on May 28, 2003, a Haflinger foal carried to term by her mother, whose genetic copy she is. [ 9 ] Her birth was announced publicly on August 6, 2003. Prometea was born weighing 36 kilograms after a natural delivery and a full-term pregnancy at the Laboratory of Reproductive Technology in Cremona , Italy . [ 11 ] At two months old, Prometea weighed 100 kg (220 lb). The name "Prometea" is the feminine form of Prometeo (" Prometheus " in Greek ). The scientists worked under the guidance of Professor Cesare Galli. [ 9 ]
Dr. Cesare Galli and others at the lab experimented with 841 reconstructed embryos; of the 14 viable embryos, four were implanted in surrogate mothers, and only Prometea's was carried to term. Prometea was carried by the same mare from which her donor cells originated, so the surrogate mother was also her genetic twin. [ 12 ] Texas A&M University was also undertaking a horse-cloning project when the Italian team first succeeded.
In 2002, LTR-CIZ merged with Cryozootech. In Italy, they produced the world's second cloned horse, Pieraz-Cryozootech-Stallion . This was a purely commercial clone, aimed at obtaining a fertile genetic copy of a successful but castrated horse. According to Bernard Debré , the birth of the Pieraz stallion heralded the commercial direction that equine cloning would later take. [ 13 ] Prometea and Pieraz were obtained using the same method, that of Professor Galli. [ 10 ] [ 14 ]
Shortly afterwards, on 13 March 2005, Dr. Katrin Hinrichs's team produced Paris-Texas, a clone of Quidam de Revel , in a laboratory at Texas A&M University in Texas (USA). This clone foal was also produced commercially for breeding purposes, at the request of Quidam's owner. The technique used was slightly different from that of the Italians. [ 15 ] Since then, the number of clones produced has increased year by year. In 2009, the E.T. FRH clone became the first cloned show jumping horse authorized for breeding by a studbook (the Zangersheide studbook), while endurance champion Pieraz's clone entered its third breeding season. [ 16 ] In Argentina, polo player Adolfo Cambiaso uses Crestview Genetics to clone his polo ponies . At the end of 2010, a clone of his polo mare Cuartetera was sold at auction for a record $ 800,000. [ 17 ] [ 18 ] In May 2013, a non-clone foal was born for the first time from two parents cloned by embryo transfer . [ 19 ] [ 20 ] On 7 December 2013, a cloned polo pony won a major sporting competition for the first time, the Argentine polo championship. [ 21 ] By 2018, equine cloning was widely used in Argentina's polo scene. [ 22 ] The Argentine polo horse has become the most cloned animal in the world. [ 23 ]
Cloning research is often carried out in secret, due to poor public acceptance. [ 19 ] Commercial cloning companies sometimes reveal the births of these horses, but the techniques employed remain mostly secret. [ 24 ] According to the French national stud farm, the cloning technique used, known as "somatic" cloning, involves taking cells by biopsy , usually from the breast of an adult animal. Fibroblasts are extracted and cultured in vitro until a sufficient number is obtained, then stored in liquid nitrogen . Oocytes are harvested from a mare, either living or dead. The DNA is removed by enucleation , then in vitro culture makes it suitable for receiving fibroblast DNA from the animal to be cloned. [ 25 ] [ 26 ] Due to the high demand for mare oocytes, these are usually obtained from slaughterhouses. [ 3 ]
After a week or so of in vitro culture, the resulting embryo is implanted into the uterus of a carrier mare, using the embryo transfer technique. After eleven months' gestation , the mare gives birth to the cloned foal. However, this type of gestation is much riskier than a conventional one. [ 25 ]
According to a Belgian researcher interviewed by Le Vif , the failure rate is the main reason for opposition to cloning, for bioethical reasons due to the mortality of embryos , fetuses and newborn foals . [ 27 ] This rate is high, but is gradually decreasing thanks to better-controlled techniques. Professor Galli obtained 15% viable embryos for his second clone, Pieraz , compared with only 3% for Prometea , [ 28 ] the first horse, which required 328 attempts. [ 29 ] The first mule trials, in 2003, involved 118 embryos, 13 of which produced a gestation, resulting in 3 live mules. [ 24 ] To clone Calvaro V in 2006, Cryozootech used over 2,000 oocytes , which produced 22 embryos, only one of which was carried to term. [ 30 ]
Estimates of this rate vary from source to source. In 2012, according to a Belgian researcher, the average success rate for animal cloning was around 5%. [ 27 ] Argentine researchers estimate that 6 or 7 embryos are needed out of 20 trials (in 2013). [ 31 ] In 2010, according to a French source, around 2,500 mare oocytes had to be used to obtain a single viable foal. [ 26 ] There are many abortions as well. Despite increased susceptibility to neonatal disease, a clone has the same life expectancy and robustness as a conventional horse. [ 32 ] There is nothing to differentiate a clone from a conventionally bred horse. [ 4 ]
The cost of equine cloning varies between €200,000 [ 32 ] and €300,000, [ 33 ] depending on the source. In 2010, clones intended for sporting competitions represented just 22% of operations. [ 34 ] Cloning is therefore mainly carried out in Europe for the purpose of breeding high-performance horses. A gelding can be copied so that its genetics can still be passed on. [ 35 ] The same applies to a stallion that has become too old to reproduce, or to a mare whose number of foals is naturally limited. [ 25 ] The use of cloning relies heavily on the belief that DNA is the most important factor in competition performance. [ 36 ] Anne Ricard's study estimates that, in equestrian disciplines ( show jumping , dressage and endurance ), where geldings represent around 40% of competitors, the use of reproductive clones will enable a genetic improvement of 4% per generation. Once a clone's fertility has been established, its semen is frozen as for any other stallion. [ 34 ] [ 37 ]
Effective cloning remains very marginal [ 34 ] due to its cost. [ 31 ] In the US and Argentina, requests for equine cloning come mainly from polo players (who can thus keep their mares playing every season) and Arabian horse breeders. [ 10 ] Cloning can also be used to preserve rare breeds threatened with extinction, [ 38 ] [ 39 ] but customers' motivations are essentially commercial. [ 34 ] Nevertheless, the discovery of a perfectly preserved prehistoric foal in Siberia (in 2018) augurs well for cloning trials by Russian and Korean researchers, to resurrect extinct equine breeds or species. [ 40 ]
Although the foal is a genetic copy of its donor, the question of the influence of the mitochondria that remain present in the recipient oocyte is still open. [ 41 ] Mitochondria represent only 1 or 2% of the genome, but could influence the clone's sporting performance. [ 25 ] They matter more in the case of a mare than of a stallion, since the mare transmits her mitochondria during reproduction, unlike the stallion. [ 42 ] Similarly, the cloned horse is not necessarily a perfect copy of the donor in terms of phenotype and character. The horse markings may vary, and the character, depending less on genetics than on the influence of the mother and upbringing, may also turn out to be very different. The technique also has its limits when it comes to breeding, as the model sought in horses evolves over time. There is therefore little point in cloning a horse's clone. [ 32 ] Cloning also takes a long time to implement, and the number of specialized laboratories and companies is limited. The Kheiron company in Argentina, for example, estimates its waiting time at eighteen years, with demand far outstripping supply. [ 31 ] The ban on clones in a large number of stud books and in certain competitions also limits interest. [ 26 ]
A few companies are known for their specialization in commercial equine cloning: ViaGen, Replica Farms, Crestview Genetics, Kheiron and Cryozootech. Competition between these laboratories is fierce. [ 31 ] French company Cryozootech is a pioneer in the field, having produced the first commercial clone in 2005. [ 34 ] It has made the production of famous horse clones its specialty. [ 43 ] ViaGen was originally based in Texas in the United States, but the laboratory moved to Canada after the last American slaughterhouses closed in 2007, to source mare oocytes . [ 44 ] Kheiron was set up in Argentina in 2009 with a team of eight people. Equine cloning has developed strongly in this country, thanks in particular to demand from polo players, the profusion of mare oocytes available for research (the country exports a lot of horse meat , and has numerous slaughterhouses supplying oocytes) and the easy breeding conditions in the Pampas . [ 31 ] In 2012, Argentina was estimated to be the country producing the most horse clones in the world. [ 45 ] In Texas , more than 900 clones were born between the creation of the first laboratory and 2014. [ 31 ] In 2019, equine cloning companies are expected to open in China. [ 46 ]
The use of clones for the genetic improvement of horses' sporting performance is recognized, including by veterinarians, although skepticism remains high among some professionals. [ 2 ] According to Éric Palmer, acceptance of horse cloning is growing, and attitudes are changing, in the same way as the gradual acceptance of in vitro fertilization and artificial insemination techniques in horses. [ 32 ] Fears of the birth of malformed or monstrous animals diminished when clone owners realized that their animals were in good health. [ 52 ]
In 2007, the International Federation for Equestrian Sports ruled that cloned horses should be banned from the official competitions it organizes, believing that opening up participation to clones would be unjust and unfair to the competition. It revised its opinion in July 2012. Horse clones are now allowed in all FEI competitions. [ 53 ] This reversal is seen as an important sign of recognition of the usefulness of clones in sport horse breeding. [ 37 ]
In the United States, the National Cutting Horse Association and the National Barrel Horse Association allow clones in cutting and barrel racing competitions. [ 34 ] The American Quarter Horse Association was taken to court by owners and riders of cloned horses in 2012, for refusing to allow these horses to take part in official breed competitions. [ 21 ] [ 54 ] The initial ruling ordered the association to amend its bylaws to allow clones to compete. [ 55 ]
The sporting training of the clone Levisto Alpha Z makes it possible, even probable, that a cloned horse will win an Olympic title in the future. [ 56 ]
Clones are generally not entered in the stud books of their respective breeds. The American Jockey Club refuses to allow cloned horses to race. In France, clones are also banned from trotting and galloping races. [ 21 ] Several European sport horse and warmblood studbooks accept clones: Zangersheide ; Anglo-European (AES); Irish Sport Horse (ISH); Dutch Warmblood (KWPN); Belgian Warmblood (BWP); and Holsteiner . [ 25 ] [ 57 ] [ 58 ] France's national stud farms advise against banning clones from the various studbooks, arguing that this will ultimately drive the best gene pool abroad. [ 34 ]
The existence of clones is not always made public, due to the poor reception they receive. [ 10 ] Although in Belgium, Isabelle Donnay believes that commercial cloning of horses has not been very successful, [ 27 ] on a global scale, their numbers have clearly increased over time. Equidia Life's 2013 survey describes the practice as "booming". [ 19 ] In winter 2010, 56 clones were counted worldwide, produced by laboratories in Europe, the United States and South America. Americans clone more mares than Europeans. [ 34 ] Between 2006 and 2011, at least 20 American Quarter Horses were cloned. [ 59 ] In 2014, there were an estimated 900 clones in the state of Texas . [ 10 ] In Europe, the Belgian Zangersheide stable regularly uses this technique, with four horses cloned between 2006 and 2013. [ 33 ] The stallion Salute, one of Smart Little Lena's clones, was exported to Australia in 2010 for breeding. [ 44 ]
(The original article includes a table of notable clones, listing, among others, Poetin II Z, Gem Twist Alpha Z, Ratina Z Beta and Ratina Z Gamma (Zangersheide), Murka's Gem, Jazz Clone 2, Cruising Encore, As Cold As Ice Beta, Cocaine Beta Z, South America's first cloned horse, and Chilli Morning III (Trey) and IV (Quattro), together with the producing laboratories and dates.)
According to various surveys, including one carried out by Cheval Savoir in 2009, horse cloning is generally poorly accepted by riders and horse professionals. They believe it introduces unfair competition for "normal" horse breeders, while constituting a highly lucrative and ethically unacceptable activity. For French scientist Éric Palmer, the technique is demonized due to misunderstandings. [ 32 ] The American Quarter Horse Association has stated that "[...] clones have no parents, cloning is not breeding. It's just photocopies of the same horse", pointing to its low success rate and the risk of as-yet-unknown genetic disorders developing. [ 88 ] The Jockey Club was also strongly opposed. [ 37 ] Dr. Thomas Reed, who owns the private stud Morningside in Ireland, is also publicly opposed to cloning after the accidental death of his stallion Hickstead in competition at the end of 2011. [ 51 ]
In 2015, the European Union voted to ban the cloning of farm animals (cattle, pigs, sheep, goats, and horses), and the sale of cloned livestock, their offspring, and products derived from them, such as meat and milk. The ban excluded cloning for research, and for the conservation of rare breeds and endangered species. [ 89 ] [ 90 ] However, no law was passed after the vote. As of 2024, horse cloning continues to be legal in the EU, with the Zangersheide registry in Belgium offering three cloned stallions for breeding. [ 91 ]
Horse cloning, like that of other animal species, raises bioethical issues, since it involves a high mortality rate of embryos , fetuses and young foals. The Swiss National Stud's ethics study reports "massive loss during gestation", with less than 1% of oocytes obtained resulting in a live foal. Furthermore, foals born from cloning suffer from frequent health problems. An American study looked at 14 clones born between 2004 and 2008. [ 92 ] Six (43%) were normal, while the other eight suffered from neonatal disorders, umbilical problems and limb deformities. [ 92 ] There are a large number of stillborn foals, deaths in the first few days after birth, immune deficiencies, and muscle and bone deformities. Problems at foaling are common for both carrier mare and foal, with cesarean section being a common option. If foals survive their postnatal period, they do not appear to be more susceptible to disease thereafter. The question of their longevity remains unknown, as the first clones are still too young to draw any statistics. [ 93 ]
In the UK, researcher William (Twink) Allen was refused permission to continue his cloning trials in 2004 for these ethical reasons, as the cloned animals could present malformations, anomalies and diseases, according to the British authorities. [ 94 ] Dr. Natasha Lane, of the Society for the Prevention of Cruelty to Animals (SPCA), said it was not acceptable to clone animals by sacrificing embryos "just to get a gold medal". [ 95 ] Allen spoke out against the decision, saying that the British government had "caved in to the animal protection lobby". [ 28 ]
Although horses are not currently threatened with extinction or any other major problem, cloning may reduce genetic diversity if cloned horses are used for breeding: it extends the reproductive lifetime of a single set of genetics, resulting in less variability in the population. In conservation biology, such a lack of genetic diversity is a concern, because species persist through genetic variation. [ 96 ]
On 8 June 2005, a number of French farmers belonging to the Confédération Paysanne demonstrated in front of the Genopole d' Évry , Cryozootech's headquarters, to denounce the "seizure of living matter" and a future loss of genetic diversity , arguing that the development of cloning will eventually lead to the disappearance of the breeding profession. [ 10 ] A number of specialists warn against the widespread use of cloning, believing that it will seriously damage the equine breeding industry, particularly in equestrian sport, by reducing demand for naturally-born foals. Cloning would also drastically reduce genetic diversity, as the same genes "could be reproduced over and over again". [ 97 ] [ 98 ]
One fear that has arisen with cloning is that of new forms of fraud. In studbooks that refuse to accept clones, particularly the Thoroughbred studbook, it would be possible to pass off a horse cloned from a champion as another animal by falsifying its identification documents. [ 97 ] | https://en.wikipedia.org/wiki/Horse_cloning |
Horse teeth refers to the dentition of equine species, including horses and donkeys . Equines are both heterodontous and diphyodontous , which means that they have teeth in more than one shape (there are up to five shapes of tooth in a horse's mouth), and have two successive sets of teeth, the deciduous ("baby teeth") and permanent sets.
For grazing animals, good dentition is essential to survival. Continued grazing creates specific patterns of wear, which can be used along with patterns of eruption to estimate the age of the horse. [ 1 ]
A fully developed horse of around five years of age will have between 36 and 44 teeth. All equines are heterodontous , which means that they have different shaped teeth for different purposes.
All horses have twelve incisors at the front of the mouth, used primarily for cutting food, most often grass, whilst grazing . [ 2 ] They are also used as part of a horse's attack or defence against predators, or as part of establishing social hierarchy within the herd.
Immediately behind the front incisors is the interdental space, where no teeth grow from the gums. This is where the bit is placed, if used, when horses are ridden.
Behind the interdental space, all horses also have twelve premolars and twelve molars , also known as cheek teeth or jaw teeth. [ 2 ] These teeth chew food bitten off by incisors, prior to swallowing.
In addition to the incisors, premolars and molars, some, but not all, horses may also have canine teeth and wolf teeth . A horse can have between zero and four canine teeth, also known as tusks (tushes), with a clear prevalence towards male horses ( stallions and geldings ) who normally have a full set of four. Fewer than 28% of female horses ( mares ) have any canine teeth. Those that do normally only have one or two, and these may be only partially erupted. [ 3 ]
Between 13 and 32% of horses, split equally between male and female, also have wolf teeth, which are not related to canine teeth, but are vestigial first premolars. Wolf teeth are more common on the upper jaw, and can present a problem for horses in work, as they can interfere with the bit. They may also make it difficult during equine dentistry work to rasp the second premolar, and are frequently removed. [ 2 ]
Horses are diphyodontous , erupting a set of first deciduous teeth (also known as milk, temporary, or baby teeth) soon after birth, with these being replaced by permanent teeth by the age of approximately five years old. The horse will normally have 24 deciduous teeth, emerging in pairs, and eventually pushed out by the permanent teeth, which normally number between 36 and 40. As the deciduous teeth are pushed up, they are termed "caps". Caps will eventually shed on their own, but may cause discomfort when still loose, requiring extraction.
It is possible to estimate the age of a young horse by observing the pattern of teeth in the mouth, based on which teeth have erupted, although differences between breeds and individuals make precise dating impossible.
All teeth are normally erupted by the age of five, at which point the horse is said to have a "full mouth", but the actual age at which this occurs will depend on the individual horse and also on the breed, with certain breeds having different average eruption times. For instance, in Shetland ponies the middle and corner incisors tend to erupt late, and in both draft horses and miniature horses , the permanent middle and corner incisors are usually late appearing.
Horse teeth often wear in specific patterns, based on the way the horse eats its food, and these patterns are often used to estimate the age of the horse after it has developed a full mouth. As with aging through observing tooth eruption, this can be imprecise, and may be affected by diet, natural abnormalities, and vices such as cribbing .
The importance of dentition in assessing the age of horses led to veterinary dentistry techniques being used as a method of fraud, with owners and traders altering the teeth of horses to mimic the tooth shapes and characteristics of horses younger than the actual age of the equine. [ 4 ]
Equine teeth have evolved to wear against the tooth above or below as the horse chews, thus preventing excess growth. The upper jaw is wider than the lower one. In some cases, sharp edges can occur on the outside of the upper molars and the inside of the lower molars, as they are unopposed by an opposite grinding surface. These sharp edges can reduce chewing efficiency of the teeth, interfere with jaw motion, and in extreme cases can cut the tongue or cheek, making eating and riding painful.
In the wild, natural foodstuffs may have allowed teeth to wear more evenly. Because many modern horses often graze on lusher, softer forage than their ancestors, and are also frequently fed grain or other concentrated feed, it is possible some natural wear may be reduced in the domestic horse. On the other hand, this same uneven wear in the wild may have at times contributed to a shorter lifespan. Modern wild horses live an estimated 20 years at most, while a domesticated horse, depending on breed and management, quite often lives 25 to 30 years. Thus, because domesticated animals also live longer, they may simply have more time to develop dental issues that their wild forebears never faced. [ citation needed ]
The following are typical wear patterns in horses.
A horse's incisors, premolars, and molars, once fully developed, continue to erupt as the grinding surface is worn down through chewing. A young adult horse's teeth are typically 4.5–5 inches long, with the majority of the crown remaining below the gumline in the dental socket. The rest of the tooth slowly emerges from the jaw, erupting about 1/8" each year, as the horse ages. When the animal reaches old age, the crowns of the teeth are very short and the teeth are often lost altogether. Very old horses, if lacking molars to chew, may need soft feeds to maintain adequate levels of nutrition .
Older horses may appear to have a lean, shallow lower jaw, as the roots of the teeth have begun to disappear. [ 7 ] Younger horses may seem to have a lumpy jaw, due to the presence of permanent teeth within the jaw.
If a bit is fitted to a horse, along with a bridle , the normally metal bar of the bit lies in the interdental space between the incisors (or canines, where present) and premolars. If the bridle is adjusted so that the bit rests too low, or too high, it may push against the teeth and cause discomfort.
Sometimes, a "bit seat" is filed in the first premolar, where the surface is rounded so that the flesh of the cheek is not pushed into the sharp edge of the tooth, making riding more comfortable for the horse, although the practice is controversial. [ 8 ]
Like all mammals, horses can develop a variety of dental problems, with a variety of dental services available to minimise problems through reactive or prophylactic intervention.
Equine dentistry can be undertaken by a vet or by a trained specialist such as an equine dental technician , or in some cases is performed by lay persons, including owners or trainers.
Problems with dentition for horses in work can result in poor performance or behavioural issues, and must be ruled out as the cause of negative behaviours in horses. Most authorities recommend regular checks by a professional, normally every six or twelve months.
The wear of the teeth can cause problems if it is uneven, with sharp points appearing, especially on the outer edge of the molars, the inner edge of the premolars and the posterior end of the last molars on the bottom jaw.
Other specific conditions relating to wear include: a "step mouth", where one molar or premolar grows longer than the others in that jaw, normally because the corresponding tooth in the opposite jaw is missing or broken and therefore could not wear down its opposite; a "wave mouth", where at least two molars or premolars are higher than the others, so that, when viewed from the side, the grinding surfaces produce a wave-like pattern rather than a straight line, leading to periodontal disease and excessive wear of some of the teeth; and a "shear mouth", where the grinding surfaces of the molars or premolars are severely sloped on each individual tooth (so the inner side of the teeth is much higher or lower than the outer side), severely affecting chewing.
Horses may also experience an overbite/ brachygnathism (parrot mouth), or an underbite/ prognathism (sow mouth, monkey mouth). These may affect how the incisors wear. In severe cases, the horse's ability to graze may be affected. Horses also sometimes suffer from equine malocclusion where there is a misalignment between their upper and lower jaws.
The curvature of the incisors may also vary from the normal, straight bite. The curvature may be dorsal or ventral . These curvatures may be the result of an incisor malocclusion (e.g. ventral=overbite/dorsal=underbite). The curvature may also be diagonal, stemming from a wear pattern, offset incisors, or pain in the cheek teeth (rather than the incisors), which causes the horse to chew in one direction over the other.
Other common problems include abscessed, loose, infected, or cracked teeth, retained deciduous teeth, and plaque buildup. Wolf teeth may also cause problems and are often removed, as are retained caps.
To help prevent dental problems, it is recommended to have a horse's teeth checked by a vet or equine dental technician every 6 months. However, more frequent checks may be needed for some individuals, especially if the horse is very young or very old. Additionally, the horse's teeth should be checked if it is having major performance problems or showing any of the above signs of a dental problem.
Many horses require floating (or rasping) of teeth once every 12 months, although this, too, is variable and dependent on the individual horse. The first four or five years of a horse's life are when the most growth-related changes occur and hence frequent checkups may prevent problems from developing. Equine teeth get harder as the horse gets older and may not have rapid changes during the prime adult years of life, but as horses become aged, particularly from the late teens on, additional changes in incisor angle and other molar growth patterns often necessitate frequent care. Once a horse is in its late 20s or early 30s, molar loss becomes a concern. Floating involves a veterinarian wearing down the surface of the teeth, usually to remove sharp points or to balance out the mouth. However, the veterinarian must be careful not to take off too much of the surface, or there will not be enough roughened area on the tooth to allow it to properly tear apart food. Additionally, too much work on a tooth can cause thermal damage (which could lead to having to extract the tooth), or expose the sensitive interior of the tooth ( pulp ). A person without a veterinary degree who performs this service is called a horse floater or equine dental technician. [ 9 ]
The common folk saying "don't look a gift horse in the mouth" is taken from the era when gifting horses was common. The teeth of a horse are a good indication of the age of the animal, and it was considered rude to inspect the teeth of a gifted animal as you would one that you were purchasing. The saying is used in reference to being an ungrateful gift receiver. [ 10 ] | https://en.wikipedia.org/wiki/Horse_teeth |
Timothy Walker is a British botanist. He was the Horti Praefectus (Director) of the University of Oxford Botanic Garden and Harcourt Arboretum . [ 1 ] [ 2 ]
After attending Abingdon School from 1971 to 1976 Walker studied for a BA degree in Botany at University College , Oxford from 1977 to 1980. [ 1 ] From 1980 to 1982, he was a trainee gardener at the Oxford Botanic Garden . He studied for a National Certificate in Horticulture at Askham Bryan College in North Yorkshire during 1982–83. Then during 1983–84 he was a trainee gardener at the Savill Garden in Windsor Great Park . He was a diploma student at Kew Gardens during 1984–85.
From 1986 to 1988 he was General Foreman at the Oxford Botanic Garden then from 1988 to 2014 he was Horti Praefectus of the Garden. [ 3 ] He also holds a lectureship in Plant Conservation at Somerville College, Oxford and is a lecturer in biology at the Department of Biology, University of Oxford . He has won four gold medals at the Chelsea Flower Show in London .
In June 2011, Walker presented Botany – A Blooming History , a series of three television programmes broadcast on BBC Four , covering the history of botany . [ 4 ] The series was repeated, again on BBC4, in August 2022.
This article about a British botanist is a stub . You can help Wikipedia by expanding it . | https://en.wikipedia.org/wiki/Horti_Praefectus |
A horticultural flora , also known as a garden flora, is a plant identification aid structured in the same way as a native plants flora . It serves the same purpose: to facilitate plant identification; however, it only includes plants that are under cultivation as ornamental plants growing within the prescribed climate zone or region. Traditionally published in book form, often in several volumes, such floras are increasingly likely to be produced as websites or CD ROMs.
Horticultural floras include both cultigens (plants deliberately altered in some way by human activity) and those wild plants brought directly into cultivation that do not have cultigen names. They might also include colour images and useful information specific to the zone or region including:
Written by a professional plant taxonomist or plantsperson , a horticultural flora assists clarification of scientific and common names, the identification of plant characteristics that occur in cultivated plants that are additional to those in wild counterparts, and descriptions of cultivars .
Although horticultural floras may include a range of food plants, their emphasis is generally on ornamental plants and so these floras are sometimes referred to as "garden floras". Increasingly they provide data for sustainable landscaping , such as:
Numerous encyclopaedic listings of cultivated plants have been compiled, but only four substantial horticultural floras have ever been produced, these being for North America, [ 1 ] Europe, [ 2 ] [ 3 ] South-eastern Australia, [ 4 ] and Hawaii and the tropics. [ 5 ]
There are several publications on trees which follow the format of botanical keys and descriptions for the trees of a specific region, notably for North America [ 6 ] and California. [ 7 ] | https://en.wikipedia.org/wiki/Horticultural_flora |
Horticultural oils are refined petroleum fractions ( mineral oils ) widely used as insecticides . [ 1 ] [ 2 ] [ 3 ] They are used against various insects ( aphids , mites , beetle larvae, leaf miners , thrips , leafhopper , whitefly , scale ) on fruit, vegetable and other crops, as well as against powdery mildew . [ 2 ] They are approved for use in organic farming under the U.S. National Organic Program . [ 2 ]
Mineral oils were long believed to act by blocking the spiracles of the insect and thus causing suffocation. However, several additional effects have recently been described. [ 1 ] IRAC categorises mineral oil in group UNM (non-specific mechanical and physical disruptors). Resistance to mineral oil has never been observed. [ 1 ]
Horticultural oils are prepared from crude petroleum fractions by distillation and various chemical processes. [ 1 ] [ 2 ] [ 3 ] This removes or hydrogenates the unsaturated ( alkene and aromatic ) molecules, which cause plant damage ( phytotoxicity ), and delivers the C20-C25 fractions, which are the most effective insecticides. Mineral oils have been used since the 19th century, but the grades used then were cruder, and they could not be used for all applications due to phytotoxicity. [ 3 ] The grades of oil are given by the amount of unsaturated components (unsulfonated residues UR), by the distillation temperature (°C), by the viscosity (SUS), and by the carbon number (nCy). Vegetable oils have been shown to kill insects, but they are more phytotoxic than mineral oils. [ 2 ]
Global market data are not available, but reliable data are provided by the state of California. [ 4 ] Mineral oil is the most used insecticide there, both in acreage and in volume: 34,508,857 pounds (15,652,972 kg) of mineral oil was sprayed on 4,543,066 acres (about 1.8 million hectares).
Solutions of 1 to 4% in water are sprayed, a concentration hundreds of times higher than that of modern synthetic insecticides. Mineral oil is correspondingly cheaper.
There are many synonyms used for horticultural oil. Often they are not fully synonymous but refer to different grades of oil.
The following names can be found: petroleum distillates, refined petroleum distillates, spray oils, petroleum derived spray oils or PDSOs, petroleum spray oils or PSOs, hydrocarbon oils, lubricating oils, narrow-range oils, white mineral oils, aliphatic solvents, paraffin oils, paraffinic oils, mineral oils, horticultural oils, agricultural oils, supreme oil, Volck oils, dormant oils, foliage or foliar oils, summer oils, or superior oils. [ 1 ] [ 2 ]
Dormant oil is used on woody plants during the dormant season. Originally, cruder oils were used, but the term now refers to the time of application. Summer oil or foliar oil refers to its use on plants when foliage is present, for which the cruder grades could not be used.
The US EPA recognises hundreds of grades of mineral oil. [ 1 ] White oils are the most refined and most consistent of the mineral oils, and are approved for pharmaceutical or food use. [ 1 ]
Mineral oil has low acute and sub-acute toxicity in laboratory animals. The US EPA classified aliphatic solvents as Toxicity Category IV (lowest toxicity—regarded as practically non-toxic). [ 2 ]
It is non-toxic to pollinators, fish, and birds. [ 2 ] It is highly toxic to aquatic invertebrates, but not very mobile in soil. [ 1 ] | https://en.wikipedia.org/wiki/Horticultural_oil |
In soil science , Horton overland flow describes the tendency of water to flow horizontally across land surfaces when rainfall has exceeded infiltration capacity and depression storage capacity . It is named after Robert E. Horton , the engineer who made the first detailed studies of the phenomenon.
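The infiltration-excess mechanism can be sketched numerically with Horton's classic infiltration-capacity curve, f(t) = fc + (f0 − fc)e^(−kt); overland flow is the part of the rainfall intensity exceeding the current capacity. A minimal illustrative sketch (function names and parameter values are invented for the example, not taken from the article):

```python
import math

def horton_infiltration(t, f0, fc, k):
    """Horton's infiltration-capacity curve: f(t) = fc + (f0 - fc)*exp(-k*t).

    f0: initial capacity, fc: final (steady) capacity, k: decay rate.
    Units are illustrative, e.g. mm/h for capacities and 1/h for k."""
    return fc + (f0 - fc) * math.exp(-k * t)

def overland_flow_rate(rain, t, f0, fc, k):
    """Infiltration-excess (Horton) overland flow: rainfall intensity in
    excess of the current infiltration capacity; zero if none is exceeded."""
    return max(0.0, rain - horton_infiltration(t, f0, fc, k))

# Early in a storm the soil still absorbs everything; as capacity decays
# toward fc, the same rainfall intensity starts to run off.
print(overland_flow_rate(20.0, 0.0, f0=50.0, fc=10.0, k=0.5))  # 0.0
print(overland_flow_rate(20.0, 4.0, f0=50.0, fc=10.0, k=0.5))  # ~4.6
```

The same comparison against depression storage capacity would be layered on top in a fuller model; this sketch only shows the rainfall-versus-infiltration part.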
Paved surfaces such as asphalt , which are designed to be flat and impermeable , rapidly achieve Horton overland flow. It is shallow, sheetlike, and fast-moving, and hence capable of extensively eroding soil and bedrock .
Horton overland flow is most commonly encountered in urban construction sites and unpaved rural roads, where vegetation has been stripped away, exposing bare dirt. The process also poses a significant problem in areas with steep terrain, where water can build up great speed and where soil is less stable, and in farmlands , where soil is flat and loose. | https://en.wikipedia.org/wiki/Horton_overland_flow |
A Horton sphere (sometimes spelled Hortonsphere ), also referred to as a spherical tank or simply sphere , is a spherical pressure vessel , which is used for industrial-scale storage of liquefied gases . Example of materials that can be stored in Horton spheres are liquefied petroleum gas (LPG), liquefied natural gas (LNG), and anhydrous ammonia . [ 1 ]
The Horton sphere is named after Horace Ebenezer Horton (1843–1912), who founded and financed a bridge design and construction firm in about 1860; it merged in 1889 to form the Chicago Bridge & Iron Company (CB&I), a bridge-building firm that constructed the first bulk liquid storage tanks in the late nineteenth and early twentieth centuries. CB&I built the first field-erected spherical pressure vessels in the world at the Port Arthur, Texas refinery in 1923, [ 2 ] and subsequently claimed 'Hortonsphere' as a registered trademark. [ 3 ] G.T. Horton was issued a patent on September 23, 1947, describing how to make the welded steel support columns resistant to the thermal expansion and wind load of the sphere. [ 4 ]
Because of their distinctive form, some have become subject to conservation campaigns such as that at Poughkeepsie , New York. [ 5 ]
Initially, Horton spheres were constructed by riveting together separate wrought iron or steel plates, but from the 1940s, were of welded construction. The plates are formed in roller plants and cut to patterns. [ citation needed ] Today, spherical tanks are designed to codes such as ASME VIII , PD 5500 , or EN 13445 . [ 6 ]
The spherical geometry minimizes both the mechanical stress imposed on the tank walls by the internal pressure and the heat transfer through the walls. This makes spherical tanks the optimal solution for the storage of large amounts of liquefied gases , where liquefaction is achieved by pressurization, cryogenic refrigeration , or a combination thereof. Minimization of heat transfer is due to the sphere being the solid figure with the minimum surface area per unit volume . This is an advantage because it reduces the production of boil-off gas from both pressurized and refrigerated liquefied gases. [ citation needed ]
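The surface-area advantage is easy to check numerically. The sketch below (function names and the storage volume are illustrative assumptions) compares a sphere with the best-proportioned closed cylinder of the same volume; even that cylinder needs roughly 14% more wall area:

```python
import math

def sphere_area(V):
    """Surface area of a sphere of volume V: r = (3V/(4*pi))**(1/3), A = 4*pi*r^2."""
    r = (3.0 * V / (4.0 * math.pi)) ** (1.0 / 3.0)
    return 4.0 * math.pi * r * r

def cylinder_area(V):
    """Total surface area of the closed cylinder of volume V with the least
    area (height equal to diameter): r = (V/(2*pi))**(1/3), A = 2*pi*r*(r + h)."""
    r = (V / (2.0 * math.pi)) ** (1.0 / 3.0)
    h = 2.0 * r
    return 2.0 * math.pi * r * (r + h)

V = 1000.0  # storage volume in cubic metres (illustrative)
print(round(sphere_area(V), 1), round(cylinder_area(V), 1))  # about 483.6 vs 553.6
```

The ratio of the two areas is (3/2)^(1/3) ≈ 1.145 regardless of volume, which is one way to quantify why the sphere is preferred for large storage vessels.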
Spherical tanks are used extensively for LPG and associated gases, such as propane , propylene , butane , and butadiene . They can be used for cryogenic storage of LNG , methane , ethane , ethylene , hydrogen , oxygen , nitrogen , etc. [ 7 ]
Support is usually provided by the use of legs attached to the sphere at its equator. Legs are typically braced together with diagonal rods to provide lateral support against wind and seismic loads. Legs are fireproofed if the material is flammable. Pressure relief valves are installed at the top, from where level instrumentation is also accessed. Liquid inlet and outlet connections are at the bottom of the sphere. Bunds are usually provided around the tanks or tank clusters to contain potential leakage. [ 6 ] However, if the gas is prone to boiling liquid vapor expanding explosions (BLEVE), spills should be directed away from the leaking tank. [ 8 ]
Other uses have been applied to the Horton sphere including space chambers, hyperbaric chambers , environmental chambers, vacuum vessels, process vessels, test vessels, containment vessels and surge vessels. [ 7 ]
Spherical tanks are a distinctive feature of certain sea-going gas carriers . [ 6 ] | https://en.wikipedia.org/wiki/Horton_sphere |
Hosaka–Cohen transformation (also called H–C transformation) is a mathematical method of converting a particular two-dimensional scalar magnetic field map to a particular two-dimensional vector map. The scalar field map is of the component of magnetic field which is normal to a two-dimensional surface of a volume conductor; this volume conductor contains the currents producing the magnetic field. The resulting vector map, sometimes called "an arrowmap" roughly mimics those currents under the surface which are parallel to the surface, which produced the field. Therefore, the purpose in performing the transformation is to allow a rough visualization of the underlying, parallel currents.
The transformation was proposed by Cohen and Hosaka of the biomagnetism group at MIT , [ 1 ] then was used by Hosaka and Cohen to visualize the current sources of the magnetocardiogram . [ 2 ]
Each arrow is defined as: a → = ∂ B z ∂ y x ^ − ∂ B z ∂ x y ^ {\displaystyle {\vec {a}}={\frac {\partial B_{z}}{\partial y}}{\hat {x}}-{\frac {\partial B_{z}}{\partial x}}{\hat {y}}}
where z {\displaystyle z} of the local x , y , z {\displaystyle x,y,z} coordinate system is normal to the volume conductor surface, x ^ {\displaystyle {\hat {x}}} and y ^ {\displaystyle {\hat {y}}} are unit vectors in the surface, and B z {\displaystyle B_{z}} is the normal component of magnetic field. This is a form of two-dimensional gradient of the scalar quantity B z {\displaystyle B_{z}} and is rotated by 90° from the conventional gradient.
Almost any scalar field, magnetic or otherwise, can be displayed in this way, if desired, as an aid to the eye, to help see the underlying sources of the field.
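As a sketch of the transformation on sampled data, the arrow at each interior grid point can be computed by central differences followed by a 90° rotation of the two-dimensional gradient of the normal field component. The grid, spacing, and sign convention below are illustrative assumptions, not taken from the original papers:

```python
def hc_arrows(Bz, d=1.0):
    """Hosaka-Cohen-style arrow map for Bz sampled on a regular 2-D grid.

    Each interior arrow is (dBz/dy, -dBz/dx): the two-dimensional gradient
    of Bz rotated by 90 degrees. Central differences with grid spacing d;
    border points, where the stencil does not fit, are left as None."""
    ny, nx = len(Bz), len(Bz[0])
    arrows = [[None] * nx for _ in range(ny)]
    for j in range(1, ny - 1):
        for i in range(1, nx - 1):
            dBdx = (Bz[j][i + 1] - Bz[j][i - 1]) / (2.0 * d)
            dBdy = (Bz[j + 1][i] - Bz[j - 1][i]) / (2.0 * d)
            arrows[j][i] = (dBdy, -dBdx)  # 90-degree rotation of (dBdx, dBdy)
    return arrows

# A field increasing uniformly in x gives arrows pointing along -y,
# perpendicular to the gradient:
Bz = [[float(i) for i in range(4)] for _ in range(4)]
print(hc_arrows(Bz)[1][1])  # (0.0, -1.0)
```

The rotation is what makes the arrows mimic the tangential currents rather than the field gradient itself; the overall sign depends on the handedness of the chosen coordinate system.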
This biophysics -related article is a stub . You can help Wikipedia by expanding it . | https://en.wikipedia.org/wiki/Hosaka–Cohen_transformation |
A hose is a flexible hollow tube or pipe designed to carry fluids from one location to another, often from a faucet or hydrant. [ 1 ]
Early hoses were made of leather, although modern hoses are typically made of rubber, canvas, and helically wound wire. Hoses may also be made from plastics such as polyvinyl chloride , polytetrafluoroethylene , and polyethylene terephthalate , or from metals such as stainless steel . [ 2 ]
| https://en.wikipedia.org/wiki/Hose
The Hosford yield criterion is a function that is used to determine whether a material has undergone plastic yielding under the action of stress.
The Hosford yield criterion for isotropic materials [ 1 ] is a generalization of the von Mises yield criterion . It has the form

{\displaystyle {\tfrac {1}{2}}|\sigma _{2}-\sigma _{3}|^{n}+{\tfrac {1}{2}}|\sigma _{3}-\sigma _{1}|^{n}+{\tfrac {1}{2}}|\sigma _{1}-\sigma _{2}|^{n}=\sigma _{y}^{n}}

where σ i {\displaystyle \sigma _{i}} , i = 1, 2, 3 are the principal stresses , n {\displaystyle n} is a material-dependent exponent and σ y {\displaystyle \sigma _{y}} is the yield stress in uniaxial tension/compression.
Alternatively, the yield criterion may be written as

{\displaystyle \sigma _{y}=\left({\tfrac {1}{2}}\left(|\sigma _{2}-\sigma _{3}|^{n}+|\sigma _{3}-\sigma _{1}|^{n}+|\sigma _{1}-\sigma _{2}|^{n}\right)\right)^{1/n}.}

This expression has the form of an L p norm , which is defined as

{\displaystyle \|x\|_{p}=\left(|x_{1}|^{p}+|x_{2}|^{p}+\dots +|x_{n}|^{p}\right)^{1/p}.}
When p = ∞ {\displaystyle p=\infty } , we get the L ∞ norm, ‖ x ‖ ∞ = max ( | x 1 | , … , | x n | ) {\displaystyle \|x\|_{\infty }=\max(|x_{1}|,\dots ,|x_{n}|)} . Comparing with the expression above, this indicates that if n = ∞, we have

{\displaystyle \max \left(|\sigma _{2}-\sigma _{3}|,|\sigma _{3}-\sigma _{1}|,|\sigma _{1}-\sigma _{2}|\right)=\sigma _{y}.}

This is identical to the Tresca yield criterion .
Therefore, when n = 1 or n goes to infinity the Hosford criterion reduces to the Tresca yield criterion . When n = 2 the Hosford criterion reduces to the von Mises yield criterion .
Note that the exponent n does not need to be an integer.
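The limiting cases above can be checked numerically. The sketch below (the function name is an illustrative choice) evaluates the Hosford equivalent stress in the form (½ Σ|σᵢ−σⱼ|ⁿ)^(1/n), which coincides with the von Mises stress at n = 2 and approaches the Tresca maximum-difference form as n grows.

```python
import math

def hosford_equivalent_stress(s1, s2, s3, n):
    """Isotropic Hosford equivalent stress:
    ((|s2-s3|^n + |s3-s1|^n + |s1-s2|^n) / 2) ** (1/n).
    Yielding is predicted when this value reaches the uniaxial yield
    stress sigma_y. The exponent n need not be an integer."""
    return (0.5 * (abs(s2 - s3)**n
                   + abs(s3 - s1)**n
                   + abs(s1 - s2)**n)) ** (1.0 / n)

# Uniaxial tension (sigma, 0, 0) gives back sigma for any n,
# consistent with sigma_y being the uniaxial yield stress.
# At n = 2 this is the von Mises equivalent stress; for large n it
# tends to max|sigma_i - sigma_j| (the Tresca form).
```

Note that very large exponents overflow double-precision arithmetic for stresses much larger than 1, so in practice the stresses would be normalized before raising to the power n.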
For the practically important situation of plane stress, the Hosford yield criterion takes the form

{\displaystyle {\tfrac {1}{2}}\left(|\sigma _{1}|^{n}+|\sigma _{2}|^{n}\right)+{\tfrac {1}{2}}|\sigma _{1}-\sigma _{2}|^{n}=\sigma _{y}^{n}}
A plot of the yield locus in plane stress for various values of the exponent n ≥ 1 {\displaystyle n\geq 1} is shown in the adjacent figure.
The Logan–Hosford yield criterion for anisotropic plasticity [ 2 ] [ 3 ] is similar to Hill's generalized yield criterion and has the form

{\displaystyle F|\sigma _{2}-\sigma _{3}|^{n}+G|\sigma _{3}-\sigma _{1}|^{n}+H|\sigma _{1}-\sigma _{2}|^{n}=1}

where F, G, H are constants, σ i {\displaystyle \sigma _{i}} are the principal stresses , and the exponent n depends on the type of crystal (bcc, fcc, hcp, etc.) and has a value much greater than 2. [ 4 ] Accepted values of n {\displaystyle n} are 6 for bcc materials and 8 for fcc materials.
Though the form is similar to Hill's generalized yield criterion , the exponent n is independent of the R-value unlike the Hill's criterion.
Under plane stress conditions, the Logan–Hosford criterion can be expressed as

{\displaystyle {\frac {1}{1+R}}\left(|\sigma _{1}|^{n}+|\sigma _{2}|^{n}+R|\sigma _{1}-\sigma _{2}|^{n}\right)=\sigma _{y}^{n}}

where R {\displaystyle R} is the R-value and σ y {\displaystyle \sigma _{y}} is the yield stress in uniaxial tension/compression. For a derivation of this relation, see Hill's yield criteria for plane stress . A plot of the yield locus for the anisotropic Hosford criterion is shown in the adjacent figure. For values of n {\displaystyle n} that are less than 2, the yield locus exhibits corners, and such values are not recommended. [ 4 ] | https://en.wikipedia.org/wiki/Hosford_yield_criterion
Hoshi Meguri no Uta ( 星巡りの歌 , lit. ' Song of the Pilgrimage of the Stars ' ) is a piece of music composed in the pentatonic scale by Miyazawa Kenji in 1918. [ 1 ] It is featured in his 1934 novel Night on the Galactic Railroad as well as its 1985 animated adaptation , where it appears in a music box arrangement by Shimizu Osamu and Haruomi Hosono . Its name in Esperanto is La Kanto de la Rondiro de la Steloj .
The red-eyed Scorpion , and the Eagle 's spread wings
The blue-eyed Little Dog , and the coiled Snake of Light
Orion sings in the heavens
From where fall dew and frost

The cloud of Andromeda in the shape of a fish's mouth,
and the Great Bear who reaches out five times to the North
to the brow of the Little Bear ,
where shines the guide of the pilgrimage of the sky
The lyrics are full of fantastic images of the night sky, although in some places they differ from the usual interpretation of the constellations. The "red eye" of the Scorpion mentioned in the song is Antares , heart of the constellation Scorpius , and the "blue eye" of Canis Minor is Procyon . The "guide of the pilgrimage of the sky" on the brow of Ursa Minor is thought to refer to Polaris , which is actually at the end of that constellation's tail.
Hoshi meguri no uta received renewed attention when it was sung by Ōtake Shinobu and the Suginami Children's Chorus during the closing ceremony of the 2020 Summer Olympics in Tokyo . [ 2 ] It has also been used as the ending song in the visual novel and anime Planetarian: The Reverie of a Little Planet . | https://en.wikipedia.org/wiki/Hoshi_Meguri_no_Uta
The Hosoya index , also known as the Z index , of a graph is the total number of matchings in it. The Hosoya index is always at least one, because the empty set of edges is counted as a matching for this purpose. Equivalently, the Hosoya index is the number of non-empty matchings plus one. The index is named after Haruo Hosoya . It is used as a topological index in chemical graph theory .
Complete graphs have the largest Hosoya index for any given number of vertices; their Hosoya indices are the telephone numbers .
This graph invariant was introduced by Haruo Hosoya in 1971. [ 1 ] It is often used in chemoinformatics for investigations of organic compounds . [ 2 ] [ 3 ]
In his article "The Topological Index Z Before and After 1971", on the history of the notion and the associated inside stories, Hosoya writes that he introduced the Z index to report a good correlation between the boiling points of alkane isomers and their Z indices, based on his unpublished 1957 work carried out while he was an undergraduate student at the University of Tokyo . [ 2 ]
A linear alkane , for the purposes of the Hosoya index, may be represented as a path graph without any branching. A path with one vertex and no edges (corresponding to the methane molecule) has one (empty) matching, so its Hosoya index is one; a path with one edge ( ethane ) has two matchings (one with zero edges and one with one edge), so its Hosoya index is two. Propane (a length-two path) has three matchings: either of its edges, or the empty matching. n - butane (a length-three path) has five matchings, distinguishing it from isobutane which has four. More generally, a matching in a path with k {\displaystyle k} edges either forms a matching in the first k − 1 {\displaystyle k-1} edges, or it forms a matching in the first k − 2 {\displaystyle k-2} edges together with the final edge of the path. This case analysis shows that the Hosoya indices of linear alkanes obey the recurrence governing the Fibonacci numbers , and because they also have the same base case they must equal the Fibonacci numbers. The structure of the matchings in these graphs may be visualized using a Fibonacci cube .
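The matching counts described above can be verified with a small brute-force sketch (the function names are illustrative): a subset of edges is a matching exactly when no two of its edges share an endpoint.

```python
from itertools import combinations

def hosoya_index(edges):
    """Count all matchings of a graph, given as a list of edges
    (pairs of vertices). The empty matching is included, so the
    result is always at least one."""
    count = 0
    for k in range(len(edges) + 1):
        for subset in combinations(edges, k):
            endpoints = [v for edge in subset for v in edge]
            # A valid matching uses each vertex at most once.
            if len(endpoints) == len(set(endpoints)):
                count += 1
    return count

def path_edges(k):
    """Edges of a path with k edges (a linear alkane with k C-C bonds)."""
    return [(i, i + 1) for i in range(k)]
```

For paths with 0, 1, 2, 3, … edges this yields 1, 2, 3, 5, …, the Fibonacci numbers, while the star graph modeling isobutane gives 4.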
The largest possible value of the Hosoya index, on a graph with n {\displaystyle n} vertices, is given by the complete graph K n {\displaystyle K_{n}} . The Hosoya indices for the complete graphs are the telephone numbers 1, 2, 4, 10, 26, 76, 232, …
These numbers can be expressed by a summation formula involving factorials , as ∑ k = 0 ⌊ n / 2 ⌋ n ! 2 k ⋅ k ! ⋅ ( n − 2 k ) ! . {\displaystyle \sum _{k=0}^{\lfloor n/2\rfloor }{\frac {n!}{2^{k}\cdot k!\cdot \left(n-2k\right)!}}.} Every graph that is not complete has a smaller Hosoya index than this upper bound . [ 4 ]
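The summation formula can be transcribed directly (the function name is an assumption); each term counts the matchings of K_n that use exactly k edges.

```python
from math import factorial

def telephone_number(n):
    """T(n) = sum over k of n! / (2^k * k! * (n - 2k)!),
    the Hosoya index of the complete graph K_n: the term for k
    counts the ways to pick k disjoint pairs out of n vertices."""
    return sum(factorial(n) // (2**k * factorial(k) * factorial(n - 2 * k))
               for k in range(n // 2 + 1))
```

For n = 1, 2, 3, 4 this gives 1, 2, 4, 10, matching the Hosoya indices of the small complete graphs.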
The Hosoya index is #P-complete to compute, even for planar graphs . [ 5 ] However, it may be calculated by evaluating the matching polynomial m G at the argument 1. [ 6 ] Based on this evaluation, the calculation of the Hosoya index is fixed-parameter tractable for graphs of bounded treewidth [ 7 ] and polynomial (with an exponent that depends linearly on the width) for graphs of bounded clique-width . [ 8 ] The Hosoya index can be efficiently approximated to any desired constant approximation ratio using a fully-polynomial randomized approximation scheme . [ 9 ] | https://en.wikipedia.org/wiki/Hosoya_index |